
Infrastructure is a commodity! Vertical services are the future of the cloud.

In 2014, the infrastructure-as-a-service market is expected to generate an estimated $6 billion in revenue worldwide – a good reason to try one's luck in this cloud segment. Numerous providers have jumped on this train in recent years and are trying to win market share from the top dog, Amazon Web Services (AWS). No easy task, since the innovation curve of most followers is rather flat compared to AWS. One reason is a rigid adherence to the word "infrastructure" in infrastructure-as-a-service. The argument to switch from AWS to a competitor because of significantly higher performance sounds tempting at first. But at the end of the day, what counts is the maturity of the portfolio and the breadth of the entire offering, not a small area where a provider has secured a technological advantage.

Infrastructure-as-a-service really does mean services

Even though Amazon Web Services was the first IaaS provider on the market, the words "web services" play a central role in the company's philosophy. Starting with the basic and central services Amazon EC2 (computing power, virtual machines) and Amazon S3 (storage), new services were rolled out at very short intervals that only in the broadest sense have anything to do with infrastructure. Services like Amazon SQS, Amazon SNS, Amazon SWF or Amazon SES help AWS customers actually use the infrastructure. Starting a single Amazon EC2 instance has, in fact, just as much value as a virtual server at a classic web hoster – neither more nor less – and in the end is even more expensive per month. So whoever hopes to be invited to the party by offering only infrastructure components – virtual computing power, storage, gateways – will probably have to stay outside.
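
To illustrate how little glue such a value-added service needs compared with running the same plumbing on a raw virtual server, here is a minimal sketch of pushing and pulling a message through Amazon SQS with the boto3 SDK. The queue name, message body and region are placeholders chosen for the example.

```python
# Minimal sketch: using a value-added AWS service (SQS) instead of raw infrastructure.
# Assumes boto3 is installed and AWS credentials are configured; names are placeholders.
import boto3

sqs = boto3.client("sqs", region_name="eu-west-1")

# Create (or look up) a queue and send a message to it.
queue_url = sqs.create_queue(QueueName="order-events")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="order:4711:created")

# A worker elsewhere can poll the queue without us provisioning a single server.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
print(messages.get("Messages", []))
```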

It is also advisable to largely stay out of the price war that Amazon, Microsoft and Google are currently fighting. It pleases the customer, but will sooner or later lead to a market shake-out. Moreover, if you look at how Jeff Bezos runs Amazon (e.g. the Kindle strategy), he gets involved in price wars precisely to gain market share. IaaS providers should therefore prefer to use their capital to increase the attractiveness of their portfolios. Customers are willing to pay for innovation and quality. In addition, decision-makers are partly willing to spend the same as for on-premise offerings, or even more, on cloud solutions. The emphasis shifts to the flexibility of enterprise IT, in order to give the company more agility.

To match this, a tweet from my friend and fellow analyst Ben Kepes from New Zealand, who ironically hit the nail on the head.

Vertical services are the future of the cloud

The infrastructure-as-a-service market has by no means reached its zenith. Nevertheless, infrastructure has become a commodity and is no longer innovative in itself. We have reached a point in the cloud where it matters to use cloud infrastructures to build vertical services on top of them. For this, companies and developers depend not only on virtual computing power and storage but also on services from the provider to run their offerings in a performant, scalable and fault-tolerant way. Despite the now extensive service portfolios of vendors such as Amazon, Microsoft and Google, a lot of time, knowledge, effort and thus capital is still needed to reach this state. Furthermore, only proprietary infrastructure-related services are offered for working with the providers' infrastructure. Everything else has to be developed in-house with the help of these services.

For this reason, the market lacks service portfolios from external providers that companies and developers can use to consume ready-made services on demand, instead of having to develop them themselves on top of the infrastructures and platforms. Such value-added services can be integrated horizontally into the vertical services for a specific business scenario and be used when needed.

These BBaaS (business-bricks-as-a-service) offerings integrate provider-independently into existing infrastructure-as-a-service and platform-as-a-service environments and create added value for the user. The individual business components are standardized, implemented as highly scalable and highly available web services, and can be integrated easily and without much effort.
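
As a purely illustrative sketch of how such a business brick could be consumed, the snippet below calls a hypothetical invoicing web service over REST from an application running on any IaaS or PaaS. The endpoint, API key and payload fields are invented for the example and do not refer to any real provider.

```python
# Illustrative only: consuming a hypothetical "business brick" (an invoicing web service)
# instead of building that functionality oneself. Endpoint and fields are invented.
import requests

BRICK_URL = "https://api.example-bbaas.com/v1/invoices"  # hypothetical endpoint
API_KEY = "replace-with-your-key"

def create_invoice(customer_id: str, amount_eur: float) -> dict:
    """Delegate invoice creation to the external business brick."""
    response = requests.post(
        BRICK_URL,
        json={"customer_id": customer_id, "amount": amount_eur, "currency": "EUR"},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(create_invoice("C-1001", 49.90))
```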

More about the BBaaS concept can be found in "Business-Bricks-as-a-Service (BBaaS) – Business Building Blocks in the Cloud".


Cloud Rockstar 2013: Netflix is the undisputed king of cloud computing

I think it is time to honor a company for its work in the cloud. Contrary to what one might expect, it is not one of the providers. No! Even though it is the providers that create the capabilities, it is the customers who ultimately make something out of them and show the general public the power of the cloud. Of course, you have to distinguish ordinary customers from those who show a lot of commitment, have adopted the cloud way of thinking and consequently put a lot of architectural effort into it. To make it short: the king of the cloud is Netflix, and especially Adrian Cockcroft, the father of the Netflix cloud architecture.

Born for the cloud

Before Netflix decided to run its system in the cloud (migrating away from its own infrastructure), the company spent a lot of time trying to understand the cloud and built a test system within the cloud infrastructure. In particular, a lot of care was taken to generate traffic scenarios that were as realistic as possible in order to examine the test system for stability.

Netflix initially developed a simple repeater that copied the real, complete customer requests to the system within the cloud infrastructure. That way, Netflix identified the potential bottlenecks of its system architecture and optimized its scalability along the way.
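
The idea of such a repeater can be sketched in a few lines: duplicate each incoming request and replay it against the cloud-hosted test system while production keeps serving the real response. The sketch below is not Netflix's actual tool; the hostnames are hypothetical and the replay is deliberately fire-and-forget.

```python
# Rough sketch of a request "repeater": answer from production,
# and mirror a copy of each request to a shadow/test deployment in the cloud.
# Hostnames are hypothetical; this is not Netflix's actual tool.
import threading
import requests

PRODUCTION = "https://api.internal.example.com"
SHADOW = "https://api.cloud-test.example.com"

def mirror(path: str, params: dict) -> None:
    """Fire-and-forget copy of the request to the shadow system; errors are ignored."""
    try:
        requests.get(f"{SHADOW}{path}", params=params, timeout=5)
    except requests.RequestException:
        pass  # the shadow system must never affect real traffic

def handle_request(path: str, params: dict) -> str:
    # Replay the request against the shadow system in the background.
    threading.Thread(target=mirror, args=(path, params), daemon=True).start()
    # The real answer still comes from production.
    return requests.get(f"{PRODUCTION}{path}", params=params, timeout=5).text
```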

Netflix itself describes its software architecture as a "Rambo architecture": every system must function properly independently of the other systems. To this end, each system within the distributed architecture was built on the assumption that the other systems it depends on can fail, and that this failure is tolerated.

For example, if the rating system fails, the quality of the answers deteriorates, but the service still answers: instead of personalized recommendations, only well-known titles are shown. If the system responsible for the search function becomes intolerably slow, the streaming of movies must still work properly.
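
A hedged sketch of this kind of graceful degradation: wrap the call to the dependency in a fallback, so the caller still gets a useful, if less personal, answer. The function names and the fallback catalogue are invented for illustration.

```python
# Sketch of graceful degradation: if the recommendation dependency fails,
# fall back to a static list of well-known titles. Names are illustrative only.
from typing import List

FALLBACK_TITLES = ["House of Cards", "Breaking Bad", "The Matrix"]

def fetch_personalized(user_id: str) -> List[str]:
    """Stand-in for a call to the (possibly failing) recommendation service."""
    raise TimeoutError("recommendation service did not answer in time")

def recommendations(user_id: str) -> List[str]:
    try:
        return fetch_personalized(user_id)
    except Exception:
        # The dependency failed -- degrade instead of breaking the whole page.
        return FALLBACK_TITLES

print(recommendations("user-42"))  # -> the fallback titles
```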

Chaos Monkey: The secret star

One of the first systems Netflix developed on and for the cloud is called "Chaos Monkey". Its job is to randomly destroy instances and services within the architecture. In this way, Netflix makes sure that all components keep functioning independently, even if individual components fail.
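
A minimal sketch in the spirit of Chaos Monkey (not the actual Netflix tool): pick a random running EC2 instance that carries an assumed "chaos=optin" tag and terminate it. The tag name and region are assumptions made for the example.

```python
# Chaos-Monkey-style sketch (not the Netflix tool): terminate one random
# opted-in EC2 instance. The "chaos" tag and the region are assumptions.
import random
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "instance-state-name", "Values": ["running"]},
        {"Name": "tag:chaos", "Values": ["optin"]},
    ]
)["Reservations"]

instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instances:
    victim = random.choice(instances)
    print(f"Terminating {victim} to test the architecture's resilience")
    ec2.terminate_instances(InstanceIds=[victim])
else:
    print("No opted-in instances found - nothing to break today.")
```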

In addition to the Chaos Monkey, Netflix has developed many other monitoring and testing tools for operating its system in the cloud, which the company collectively calls the Netflix Simian Army.

Netflix Simian Army: The role model

The Netflix Simian Army is an extreme example of how a cloud architecture has to look. The company has invested a lot of time, effort and capital in developing a system architecture that runs on the cloud infrastructure of Amazon Web Services. But it is worth it, and any company that wants to use the cloud seriously to deliver a highly available offering should definitely take Netflix as a role model.

The effort illustrates the complexity of the cloud

The bottom line: Netflix plans for failure and does not simply rely on the cloud. Because sometimes things go wrong in the cloud, just as in any ordinary data center. You simply have to be prepared for it.

Netflix shows very impressively that this works.

However, when you consider the effort Netflix puts in to be successful in the cloud, you have to admit that cloud computing is not simple, and that applications on a cloud infrastructure, no matter which provider, need to be built with a corresponding architecture.

Conversely, this means that using the cloud must become simpler in order to achieve the promised cost advantages. Because if you use cloud computing properly, it is not necessarily cheaper. In addition to the savings in infrastructure costs that are always calculated up front, the other costs must never be neglected: the staff with the appropriate skills and the cost of developing a scalable and fault-tolerant application in the cloud.

Successful and social at the same time

Most companies keep their success to themselves; a competitive advantage is not willingly given out of hand. Not so Netflix. At regular intervals, parts of the Simian Army are released under an open-source license, giving every cloud user the possibility to develop cloud applications with the architectural DNA of Netflix.

Cloud Rockstar 2013

Due to its success, and in particular its commitment in and to the cloud, it is high time to honor Netflix publicly. Netflix has understood how to use the cloud almost to perfection and does not keep that success to itself. Instead, the company gives other cloud users the opportunity to build similarly highly scalable and highly available cloud applications using the same tools and services.

For that reason, the analysts of New Age Disruption and CloudUser.de name Netflix the "Cloud Rockstar 2013" in the category "best cloud user". This independent award from New Age Disruption is presented this year for the first time and is given to vendors and users for their innovations and extraordinary commitment in cloud computing.

Congratulations Netflix and Adrian Cockcroft!

Rene Buest

Cloud Rockstar 2013 - Netflix

Further information: Cloud Rockstar Award


Salesforce Customer Company Tour 13: Salesforce is using the cloud as a tool for maximum interconnection

Along with a new self-created claim to be a "customer company", Salesforce changed the name of its customer event from "Cloudforce" to "Customer Company Tour 13 (CCT13)". And the CCT13 showed that the name fits, both the claim and the event itself. Instead of letting Salesforce employees do most of the talking, satisfied customers had their say and spread the message.

Salesforce needs and wants more market share in Germany

Alongside Amazon AWS and Google, Salesforce belongs to the early cloud providers. Compared to the international and European markets, its reputation in Germany is rather modest. This is not due to Salesforce's portfolio, but rather to the skeptical German cloud market. Furthermore, Salesforce is still perceived from the outside as a pure SaaS CRM, which the company no longer is. The focus needs to shift to marketing the whole platform, to show that business processes can be mapped onto it, and which ones.

The company started doing exactly that during its CCT13. Salesforce got the idea for the new "customer company" claim from a 2012 IBM study – a reminder that even young, disruptive companies can learn something from the old stagers. To live up to the claim of being a customer company, Salesforce builds on seven pillars: social, mobile, big data, community, apps, cloud and trust. The first six can already be found in the portfolio. Trust is something that should come naturally and stand above everything else.

By the way, Salesforce will build a new data center in Europe (London) in spring 2014. In addition, a global research and development center in Grenoble is planned.

The Internet of Things has a high priority

I was really excited by the GE use case, which unfortunately received only marginal attention. Yet the use case shows the big potential of the Salesforce platform and the direction in which the company is steering. GE uses Salesforce Chatter to let its jet engines send status information to support teams (M2M communication). More background information will follow soon, but this video gives a good insight into the use case.

http://www.youtube.com/watch?v=GvC1reb9Ik0

Salesforce is showing exactly what I have already attested to the cloud: the cloud serves as the technological basis for the maximum interconnection of everything, e.g. the Internet of Things. And here Salesforce, with its platform and strategic direction, is on the right way to become one of the big players in this market.

But it shows something else as well: Salesforce is now more than just sales. Maybe we will witness the next mammoth project, changing the name from Salesforce to – we should be curious.


Missed SpotCloud: Deutsche Börse Cloud Exchange is not the industry-first, vendor-neutral cloud marketplace

I already pointed out in my comment on the cloud marketplace of the Deutsche Börse that this is not the first marketplace of its kind and that Reuven Cohen was much earlier with SpotCloud in 2010. After reading through the press coverage, I have to say that the "Deutsche Börse Cloud Exchange" wants to be something it is not: the industry's first vendor-neutral cloud marketplace for cloud infrastructure resources.

Three years too late!

The marketing of the Deutsche Börse seems to want credit that does not belong to it. As interesting as the idea of the Deutsche Börse Cloud Exchange (DBCE) is, one should stick to the truth. The marketplace is by far not the industry's first, nor the first vendor-neutral, marketplace for cloud infrastructure resources. That crown belongs to Reuven Cohen, who launched SpotCloud in 2010. At its peak, SpotCloud has managed 3,200 suppliers and 100,000 servers worldwide.

In addition, SpotCloud has also supported OpenStack since April 2011 – a point Stefan Ried has also justifiably criticized about the DBCE.

So, dear marketing department of the Deutsche Börse: good idea and solution, but please stick to the truth. Even the NSA is no longer able to hide something from the public.


The Deutsche Börse starts its own cloud marketplace to foster standardization and more trust in the cloud

At the beginning of 2014, the Deutsche Börse will step into the area of cloud marketplaces and offer its own broker for infrastructure-as-a-service under the brand "Deutsche Börse Cloud Exchange AG". As the underlying technology, the company relies on the German cloud management provider Zimory, with which it has also founded a joint venture.

Independent cloud marketplace for infrastructure services

The cloud marketplace will launch in early 2014 and serve as a marketplace for cloud storage and infrastructure resources. To realize the technical side, the Deutsche Börse has established a joint venture with the German cloud management provider Zimory. Zimory will thus be responsible for ensuring that all customers can seamlessly access the cloud resources they purchase.

With the cloud marketplace, both companies target the public sector as well as research institutions that either need more infrastructure resources such as storage and computing power on demand, or have excess capacity they would like to offer on the marketplace.

The Deutsche Börse Cloud Exchange is organized as an international and vendor-neutral cloud marketplace and is responsible for standards covering the product offerings, the admission process, switching between suppliers, and the warranties of the purchased resources. Customers should be able to choose their providers freely and thereby decide in which jurisdiction their data is stored. To this end, the specifications and standards as well as the technical provisioning are aligned in close collaboration with the participants of the marketplace. As potential partners, the Deutsche Börse Cloud Exchange names cloud providers from the traditional IT sector as well as national and international medium-sized enterprises and large corporations. These include, among others, CloudSigma, Devoteam, Equinix, Host Europe, the Leibniz data centre, Profi AG, T-Systems and TÜV Rheinland.

Comment: An independent and standardized marketplace provides more confidence in the cloud

A cloud marketplace like the one the Deutsche Börse is offering is basically nothing new. The first marketplace of its kind was launched as SpotCloud by Reuven Cohen. Even Amazon Web Services provides the ability to trade virtual instances, based on its Spot Instances – however, that is a proprietary marketplace. And that is the crucial advantage of the Deutsche Börse Cloud Exchange: it is vendor-independent, which on the one hand promises a greater range of resources and on the other hand more confidence, because the dependency on a single vendor is resolved. The Deutsche Börse itself provides a further credit of trust: it has long been trading securities, energy and other raw materials as virtual goods, so why shouldn't it do the same with virtual IT resources? The Deutsche Börse is therefore the public face and takes care of organizational safety and awareness, whereas Zimory is responsible for the technical execution in the background.

Another important fact, which may be a key to success, is the topic of standardization. Since the beginning of the cloud there has been a discussion about "standards in the cloud" and about who will provide THE standard. With the Deutsche Börse Cloud Exchange we may have found a serious initiative from the industry that will ensure a uniform cloud standard emerges soon. This of course depends on the appropriate involvement of the cloud providers. Nevertheless, the first partners, led by CloudSigma, Equinix and T-Systems (which also uses OpenStack in some of its services), show their interest in this direction. In this context it remains to be seen how the open cloud community positions itself.

In its first version the Deutsche Börse Cloud Exchange will be a pure marketplace for cloud infrastructure. A next evolutionary step should be to also bring external value-added services into the marketplace, with which the infrastructure resources can be used effectively, in order to motivate developers to use the resources to create new web applications and backends for mobile apps. Furthermore, the marketplace should be set up as a true cloud broker.


PRISM plays into the hands of German and European cloud computing providers

The U.S. government, and above all PRISM, has done U.S. cloud computing providers a bad turn. The first discussions are now flaring up about whether the public cloud market is moribund. Not by a long shot. On the contrary, this scandal plays into the hands of European and German cloud computing providers and will ensure that the European cloud computing market grows more strongly in the future than predicted. Because the U.S. government itself has massively destroyed the trust in the United States and its vendors, and has that on its conscience, companies today have to look for alternatives.

We’ve all known it

There have always been suspicions and concerns among companies about storing their data in the public cloud of a U.S. provider. The Patriot Act was the focus of discussion in the Q&A sessions and panels after the presentations and moderations I have given. With PRISM, the discussion has now reached its peak and, unfortunately, confirms those who have always used eavesdropping by the United States and other countries as an argument.

David Linthicum has already thanked the NSA for the murder of the cloud. I take a step back and argue that the NSA "would be" responsible for the death of U.S. cloud providers. Whether it comes to that remains to be seen – human decisions are not always rational.

Notwithstanding that, the public cloud is not completely dead. Even before the PRISM scandal became public, companies had the task of classifying their data into business-critical and public. This now needs to be reinforced, because completely abandoning the public cloud would be wrong.

Bye bye USA! Welcome Europe and Germany

As I wrote above, I see less the death of the cloud itself and much more the coming death of U.S. providers – including those that have their locations and data centers here in Europe or Germany. The trust is so heavily destroyed that all declarations and appeasements go up in smoke in no time.

The fact is that U.S. providers and their subsidiaries are subject to the Patriot Act and therefore also to the Foreign Intelligence Surveillance Act (FISA), which requires them to hand over requested information. The providers are currently trying to strengthen their position by actively demanding more accountability from the U.S. government, in order to keep at least what is left of the trust. This is commendable, but also necessary. Nevertheless, the discussion about alleged interfaces, "copy rooms" or backdoors at the vendors, through which third parties can freely tap the data, leaves a very bad aftertaste.

This should now encourage European and German cloud providers all the more. After all, not being subject to U.S. influence should be played out as an even greater competitive advantage than ever. This includes, among other things, the location of the data center, the legal framework, the contract, but also technical security (end-to-end encryption).

Depending on how the U.S. government reacts in the near future, it will be exciting to see how U.S. providers behave on the European market. So far there are only 100% subsidiaries of the large U.S. companies, which are present here locally merely as offshoots and are fully subordinated to their parent companies in the United States.

Even though I advocate neither a pure "Euro cloud" nor a "German cloud", under the current circumstances there can only be a European solution. Viviane Reding, EU Commissioner for Justice, is now needed to enforce an unconditional privacy regulation for Europe, one that from now on strengthens European companies against U.S. companies in the competition.

The courage of the providers is required

It appears that there will be no second Amazon, Google, Microsoft or Salesforce from Europe, let alone Germany. The large ones, especially T-Systems and SAP, are strengthening their current cloud business and giving companies a real alternative to U.S. providers. Even a few bright spots from startups can sporadically be seen on the horizon. What is missing are, among other things, real and good infrastructure-as-a-service (IaaS) offerings from young companies that do not only have infrastructure resources in their portfolio, but also rely on services, similar to Amazon. The problem with IaaS is the high capital requirement that is necessary to ensure massive scalability and more.

Other startups that offer, for example, platform-as-a-service (PaaS) in many cases build in the background on the infrastructure of Amazon – a U.S. provider. Here, providers such as T-Systems have the duty not to focus exclusively on enterprises, but also to allow developers to build their ideas and solutions on a cloud infrastructure in Germany and Europe, the "Amazon way". There is still a lack of real(!) German and European alternatives to Amazon Web Services, Google, Microsoft and Salesforce!

How should companies behave now?

Considering all these aspects, companies must be advised to look for a provider located in a country that guarantees the legal conditions the company requires regarding data protection and information security. And that can currently only be a provider from Europe or Germany. Incidentally, that was true even before PRISM. Furthermore, companies themselves have the duty to classify their data and to protect mission-critical information at a much higher level than less important and publicly available information.

How it actually looks at U.S. companies is hard to say. After all, 56 percent of the U.S. population finds the eavesdropping on telephone calls acceptable. Europeans, and especially Germans, will see that from a different angle. In particular, we Germans will not accept a Stasi 2.0 that, instead of relying on spies from within its own ranks (neighbors, friends, parents, children, etc.), relies on machines and services.


If you sell your ASP as SaaS, you are doing something fundamentally wrong!

Despite the continuing spread of the cloud and the software-as-a-service (SaaS) model, you still meet people again and again who are anchored in the firm belief that they have been offering cloud computing for 20 years – because ASP (Application Service Providing) was, after all, nothing else. The situation is similar with traditional outsourcers, whose pre-sales team will gladly agree to an appointment after a call to model the customer's "tailored" cloud computing server infrastructure locally, which the customer may then pay for in advance. This post is only about why ASP has nothing to do with SaaS and the cloud.

ASP: 50 customers and 50 applications

Let's be honest: a company that wants to distribute an application will sooner or later come to the idea that it somehow wants to maximize its profits. In this context, economies of scale play an important role in placing an efficient solution on the market that is designed to remain profitable in spite of its own growth. Unfortunately, an ASP model cannot deliver exactly that, because ASP has one problem: it does not scale dynamically. It is only as fast as the administrator who has to make sure that another server is purchased, installed in the server room, and equipped with the operating system and other basic software plus the actual customer software.

Furthermore, in the typical ASP model a separate instance of the respective software is needed for each customer. In the worst case (depending on performance), even a separate (physical) server is required for each customer. In numbers, this means that for 50 customers who want to use exactly the same application, but separated from each other, 50 installations of the application and 50 servers are required. Databases should not be forgotten either: in many cases up to three times as many databases must be operated as applications are provided to customers.

Just consider the effort (and cost) an ASP provider incurs to integrate and manage the hosted systems, to serve new customers, and on top of that to provide them with patches and upgrades. This is unprofitable!

SaaS: 50 customers and 1 application

Compared to ASP, SaaS builds on a much more efficient and more profitable model. Instead of running one application per customer, only one instance of an application is operated for all customers. This means that for 50 customers only one instance of the application is required, which all of them use together, but isolated from each other. This in particular reduces the expense of operating and managing the application. Where an administrator in the ASP model had to update each of the 50 software installations, with SaaS it is sufficient to update a single instance. If new customers want to access the application, it is set up for them automatically, without an administrator having to install a new server and configure the application for them. This saves both time and capital, which means the application grows profitably with the requirements of new customers.

Multi-tenancy is the key to success

The concept behind SaaS, which accounts for a significant difference between ASP and SaaS, is called multi-tenancy. Several tenants, i.e. customers, are hosted on the same server or software system without being able to see each other's data, settings, etc. This means that each customer can see and edit only its own data. A single tenant within the system forms a unit that is closed off in terms of data and organization.

As noted above, the benefit of a multi-tenant system is that the application is installed, maintained and optimized centrally, and that the memory requirements for data storage are reduced. This is because data and objects are held across tenants and only need to be stored once for an installed system, not once per tenant. In this context it must be stressed again that a software system does not become multi-tenant by giving each customer its own instance of the software. In the multi-tenancy approach, all tenants use one instance of an application that is centrally managed.
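
A minimal sketch of the isolation idea: one shared database, one application instance, and every query scoped by a tenant ID. The table and column names are invented for the example and do not describe any particular product.

```python
# Minimal multi-tenancy sketch: one database, one application instance,
# every query scoped by tenant_id. Schema and names are invented examples.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, number TEXT, amount REAL)")
db.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("tenant_a", "A-001", 100.0), ("tenant_b", "B-001", 250.0)],
)

def invoices_for(tenant_id: str):
    """Every read is filtered by the tenant -- tenant_b never sees tenant_a's rows."""
    rows = db.execute(
        "SELECT number, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    )
    return rows.fetchall()

print(invoices_for("tenant_a"))  # [('A-001', 100.0)]
print(invoices_for("tenant_b"))  # [('B-001', 250.0)]
```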

If you are still asking why you should not sell ASP as SaaS, read "Software-as-a-Service: Why your application belongs in the cloud as well".


PRISM: Even the University of Furtwangen is planning interfaces for cloud monitoring in Germany

PRISM is the latest buzzword. That something similar could happen in Germany too is cause for worry. Although this interview was published in the German Manager Magazin, it unfortunately seems to have been somewhat lost in the rest of the German media landscape. Because in Germany, too, we are on the way to integrating interfaces for monitoring cloud solutions – developed and promoted by the University of Furtwangen!

Third-party organizations should be able to check data streams

In an interview with the Manager Magazin under the seemingly innocuous title "Safety in cloud computing – 'The customer comes off second best'", Professor Christoph Reich of the computer science department at the University of Furtwangen and director of its cloud research center says: "We want to define interfaces so that third-party organizations get the opportunity to review the data streams."

This statement was the answer to the question of how some kind of accountability chain that works across vendors could be realized technically. The background is that ownership can be transferred, so that personal data remains particularly well protected; for this reason, the accountability chain must not break when another provider comes into play.

So far, so good. But now it gets interesting. The Manager Magazin follows up with:

Would the federal government also be a potential customer? German law enforcement agencies are even asking for a single interface to monitor cloud communications data in real time.

Professor Reich answers:

In principle, it is going in this direction. But judicially usable verifiability looks very different. If you want to record evidential data, you need special storage for that, and it is very expensive. We want to give customers the opportunity to see where their data is.

The University of Furtwangen may not have this "special storage", but a government organization certainly does. And as long as the interfaces exist, the data can be captured and stored at a third location.

Cloud computing lives on trust

I already wrote about this three years ago in "Cloud computing is a question of trust, availability and security". With its "surveillance project", the University of Furtwangen also claims to want to create more trust in the cloud.

I just wonder how you are supposed to keep building confidence if you want to integrate interfaces for the potential (government) monitoring of data streams into cloud solutions.


SAP Cloud: Lars Dalgaard will leave a huge hole

Now he is gone, Lars Dalgaard. He was my personal hope for SAP and, in my opinion, the driving force behind the cloud strategy of the company from Walldorf, which has gained more and more momentum in recent months. But Dalgaard did not revamp SAP only for the cloud; culturally too, it would have done the company good to keep him in its own ranks for longer. Or was it perhaps exactly this that became the problem, and why he had to go after such a short time?

Credible positive energy

SAP brought the cloud mindset in from outside with the acquisition of SuccessFactors and its founder and CEO Lars Dalgaard. After the takeover, the Dane was the person in charge of SAP's cloud area and the driving force behind the SAP cloud services. And he did this job very well. In his keynote at Sapphire Now 2012, in which he introduced the new SAP cloud strategy, Dalgaard presented himself as an explosive entrepreneur who had by no means discarded his startup genes. On the contrary, with his motivation he brought a breath of fresh air into the established group based in Walldorf, Germany.

In my retrospective on Sapphire Now 2012 I wrote (only in German): "It is to be hoped, for Dalgaard and especially for SAP, that he can realize his cloud vision and that his startup character meets open doors. Because SAP is far away from anything that could reasonably be described as a startup."

Different corporate cultures

Has precisely this circumstance become fate for Dalgaard, or rather for SAP? Lars Dalgaard resigned from his executive position on the first of June 2013, left SAP and joined Andreessen Horowitz as a general partner. He will continue to be available to SAP's cloud business as an advisor on the cloud governance board.

I can only speculate at this point, as I do not know Dalgaard personally. But the positive energy he exudes during his talks is evidence of someone who knows what he wants and has a clear focus. Admittedly, I am not an SAP expert with regard to the corporate culture. However, many good people have already, questionably, come to grief at SAP – just take Shai Agassi as one example.

Quo vadis, SAP?

To pin an entire company on a single person, and an externally acquired one at that, would be exaggerated. Nevertheless, I take the view that the loss of Lars Dalgaard could have a significant impact on SAP's cloud business. Remember SAP's very rocky start in the cloud with its product line "SAP Business ByDesign", which in its first version did not deserve the label cloud solution and was a real letdown. With Business ByDesign 4.0 SAP – under the direction of Lars Dalgaard – started again from scratch, and the train began to move.

It is to be hoped for SAP that it does not lose this "Dalgaard mentality", in order to remain competitive in the cloud market. One should not forget that a cloud business must be run differently from a traditional software company – just compare Amazon and Google. Microsoft, too, had to understand this and learn to reinvent itself. SAP was – with Lars Dalgaard – at least on the right track so far.


The cloud computing world is hybrid! Is Dell showing us the right direction?

With a clear cut, Dell has said goodbye to the public cloud and is aligning its cloud computing strategy with its own OpenStack-based private cloud solutions, including a cloud service broker for other public cloud providers. At first the move comes as a surprise, but it makes sense when you take a closer look at Dell, its recent past and especially the current market situation.

Dell’s new cloud strategy

Instead of having its own public cloud offering on the market, Dell will in the future sell OpenStack-based private clouds on Dell hardware and software. With the recent acquisition of the cloud management technology Enstratius, customers will also be able to deploy their resources to more than 20 public cloud providers.

With a new "partner ecosystem", which currently consists of three providers and will be expanded further in the future, integration options between the partners' public clouds and the private clouds of Dell customers will be offered. The current partner network includes Joyent, ZeroLag and ScaleMatrix – all three rather small names in the infrastructure-as-a-service (IaaS) market.

Furthermore, Dell is also strengthening its consulting business to show customers which workloads and processes could flow into the public cloud.

Thanks to Enstratius, Dell becomes a cloud broker

A look at the current public cloud IaaS market shows that Dell is right with its change of strategy. All current IaaS providers, of all sizes, are grinding themselves down in the fight for market share against industry leader Amazon Web Services. Since their innovative strength is limited compared to Amazon, and most of them – with the exception of Google and Microsoft – limit themselves to providing pure infrastructure resources (computing power, storage, etc.), the chances of success are rather low. Moreover, very few should get involved in a price war with Amazon; that can quickly backfire.

Rather than climbing into the boxing ring with Amazon and other public IaaS providers, Dell positions itself, on the basis of Boomi, Enstratius and other previous acquisitions, as a supporting force for cloud service providers and IT departments and provides them with hardware, software and further added value.

In particular, the purchase of Enstratius was a good move and turns Dell into a cloud service broker. Enstratius is a toolset for managing cloud infrastructures, including the provisioning, management and automation of applications for – currently 23 – public and private cloud solutions. It should be mentioned that, in addition to managing a single cloud infrastructure, Enstratius also allows the operation of multi-cloud environments. For this purpose, Enstratius can be used as software-as-a-service or installed as software in on-premise environments in the company's own data center.
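
To illustrate what managing several clouds through one abstraction looks like in principle, here is a minimal sketch using Apache Libcloud as a stand-in for the idea (it is not Enstratius and says nothing about Enstratius's own API); the credentials and regions are placeholders.

```python
# Illustration of the multi-cloud idea with Apache Libcloud (a stand-in, not Enstratius):
# list the running nodes of two different providers through one common API.
# Credentials and regions are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

ec2 = get_driver(Provider.EC2)("AWS_ACCESS_KEY", "AWS_SECRET_KEY", region="eu-west-1")
rackspace = get_driver(Provider.RACKSPACE)("RS_USERNAME", "RS_API_KEY", region="lon")

for name, driver in [("Amazon EC2", ec2), ("Rackspace", rackspace)]:
    nodes = driver.list_nodes()  # same call, regardless of the underlying cloud
    print(f"{name}: {len(nodes)} nodes")
    for node in nodes:
        print(" -", node.name, node.state)
```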

The cloud computing world is hybrid

Does this change of strategy put Dell among the cloud innovators? Not by a long shot. Dell is anything but a leader in the cloud market; the trumps in its cloud portfolio are entirely due to acquisitions. But at the end of the day that does not matter. To take part in the highly competitive public cloud market, massive investments would have been necessary that would not necessarily have paid off. Dell has chosen to focus on its tried and true strengths, and these lie primarily in the sale of hardware and software – keyword "converged infrastructures" – and in the consulting business. With the purchase of the cloud integration service Boomi, the recent acquisition of Enstratius and the participation in the OpenStack community, relevant knowledge was brought in from outside to position the company in the global cloud market. In particular, the Enstratius technology will help Dell to diversify in the market and take a leading role as a hybrid cloud broker.

The cloud world is not only public or only private. The truth lies somewhere in the middle and depends strongly on the particular use case of a company, which can be divided into public, private and hybrid cloud use cases. In the future, all three variants will be represented within a company. The private cloud does not necessarily have to be directly connected to a public cloud to span a hybrid cloud. IT departments will play a central role in this, come back into focus more strongly and act as the company's own IT service broker. In this role, they have the task of coordinating the use of internal and external cloud services for the respective departments and maintaining an overall view of all cloud services across the whole enterprise. Cloud broker services such as Dell's will support them in this.