Categories
Analysis

Enterprise Cloud: IBM moves into position

Last week IBM announced it will invest more than 1.2 billion dollars to improve its global cloud offering. For this, cloud services will be delivered from 40 data centers in 15 countries worldwide. The German data center is based in Ehningen. Softlayer, which IBM acquired in 2013, will play a key role; its resource capacities will be doubled.

Investments are a clear sign

The 1.2 billion dollars is not the first investment IBM has poured into its cloud business in recent years. Since 2007, IBM has invested over 7 billion dollars in expanding this business area, among them 2 billion dollars for acquiring Softlayer. In addition, more than 15 other acquisitions brought further cloud computing solutions and know-how into the company. The most recent big announcement was the 1 billion dollars IBM will invest in its cognitive computing system Watson to make it available as a service for the masses.

These investments show that IBM is serious about the cloud and sees it as one pillar of its future company strategy. This is also reflected in its expectations: IBM assumes that the worldwide cloud market will grow to 200 billion dollars by 2020. By 2015, IBM wants to earn seven billion dollars per year worldwide with its cloud solutions.

Softlayer's positioning is of crucial importance

IBM will pay special attention to the Softlayer cloud, whose capacities will therefore be doubled in 2014. According to IBM, the acquisition of the cloud computing provider in 2013 brought in about 2,400 new customers. With Softlayer, the important markets and financial centers are to be strengthened. In addition, the Softlayer cloud should profit from Watson's technologies in the future.

The Softlayer acquisition was an important step for IBM's cloud strategy to acquire a better cloud DNA. The IBM SmartCloud has disappointed to date. This is not only the result of conversations with potential customers, who for the most part comment negatively on the offering and its use.

The expansion of the Softlayer cloud will be interesting to watch. According to the official Softlayer website, 191,500 servers are currently operated at seven locations worldwide:

  • Dallas: 104,500+ Servers
  • Seattle: 10,000+ Servers
  • Washington: 16,000+ Servers
  • Houston: 25,000+ Servers
  • San Jose: 12,000+ Servers
  • Amsterdam: 8,000+ Servers
  • Singapore: 16,000+ Servers

Compared to cloud players like Amazon Web Services (more than 450,000 servers), Microsoft (about 1,000,000) and Google, these are indeed peanuts, even after doubling to 383,000 servers in 2014.

IBM’s worldwide cloud data centers

IBM will invest part of the mentioned 1.2 billion dollars in opening 15 new data centers in 2014. In total, 40 data centers distributed over 15 countries will form the basis of the IBM cloud. The new cloud data centers will be located in China, Hong Kong, Japan, India, London, Canada, Mexico City, Washington D.C. and Dallas. The German data center is located in Ehningen near Stuttgart, the Swiss one in Winterthur.

On the technical side, each cloud data center consists of at least four PODs (Points of Distribution). A single POD consists of 4,000 servers. Each POD works independently of the other PODs to provide the services and is supposed to offer a guaranteed availability of 99.9 percent. Customers can opt for a single POD with 99.9 percent availability or distribute their workloads over several PODs to increase the overall availability.
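The effect of distributing a workload over several PODs can be sketched with a quick calculation. The 99.9 percent figure is IBM's stated per-POD availability; the rest is standard probability under the assumption that PODs fail independently:

```python
# Availability of a workload spread over n independent PODs:
# the workload is unavailable only if every POD is down at the same time.
def combined_availability(single_pod: float, pods: int) -> float:
    downtime_probability = (1.0 - single_pod) ** pods
    return 1.0 - downtime_probability

print(combined_availability(0.999, 1))  # a single POD: 0.999
print(combined_availability(0.999, 2))  # two PODs: roughly 0.999999
```

Under the independence assumption, a second POD already turns "three nines" into roughly "six nines", which is why the option to distribute over several PODs matters.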

The footprint within the enterprise

With its investments, IBM clearly wants to position itself against Amazon Web Services. However, one must make a distinction. Amazon Web Services (AWS) is represented solely in the public cloud and, according to a statement by Andy Jassy at the last re:Invent, this will remain so. AWS believes in the public cloud and has no interest in investing in (hosted) private cloud infrastructures. However, a company that brings the necessary wherewithal can get its own private region, which is managed by AWS (similar to AWS GovCloud).

The competition between IBM and AWS is not primarily about leadership in the public, private or any other shape of cloud. It is about supremacy among corporate customers. AWS has grown up with startups and developers and has been trying to increase its attractiveness to enterprises for some time, as several new services at the past re:Invent have shown. This, however, is a market in which IBM has been present for decades and has a good and broad customer base as well as valuable contacts to the decision makers. That is an advantage which, despite AWS's current technical lead, should not be underestimated. Another advantage for IBM is its large base of partners for technical support and sales, because the pure self-service a public cloud offers does not fit the majority of enterprise customers. However, even with the Softlayer cloud, IBM still has to convince companies technically and show customers a concrete benefit in order to be successful.

What will prove to be an advantage for IBM over other cloud providers such as AWS is its local presence with data centers in several countries. The German market in particular will appreciate this, provided the offering fits its needs. It will be exciting to see whether other U.S. cloud providers follow and open a data center in Germany to address the concerns of German companies.

IBM’s investments indicate that the provider sees the cloud as a future supporting pillar and wants to build a viable business in this area.

Categories
Analysis

Off to the private cloud. OpenStack is no infrastructure solution for the public cloud!

OpenStack is undoubtedly the new rising star among open source cloud infrastructures. Many well-known players, including Rackspace, RedHat, HP, Deutsche Telekom and IBM, rely on the software to build their offerings. But one question remains exciting. Everyone is talking about taking market share from the undisputed market leader Amazon Web Services, above all the named public cloud providers who have decided on OpenStack. No question, in the big cloud computing market everyone will find their place. But is OpenStack the right technology to become a leader in the public cloud, or is it in the end „only“ sufficient for the private cloud (and hybrid cloud)?

Rackspace is expanding its portfolio to include cloud consulting services

I have already written about the diversification problem and in this context called OpenStack the golden cage, because all the providers involved are stuck in the same dilemma and do not differentiate from each other. If you compare just the top two OpenStack providers, Rackspace and HP, it shows that their portfolios are about 99 percent identical.

To appease its shareholders, Rackspace has already pursued new ways, changed its strategy and even stopped the pursuit race in the public cloud. According to Rackspace CTO John Engates, customers are increasingly asking for help in building cloud infrastructures, which Rackspace is to support with its knowledge. Thus, Rackspace seems to focus a little less on hosting cloud services in the future and instead invest more in consulting services around cloud infrastructures. This could be a smart move. After all, the demand for private cloud infrastructures continues, and OpenStack will play a leading role here. Another opportunity could be the hybrid cloud, connecting private cloud infrastructures with Rackspace's public cloud.

Own technologies are a competitive advantage

Another interesting question is whether a technology, which is ultimately all OpenStack is, is critical to success. A look at leading vendors such as Amazon Web Services, Microsoft Windows Azure and now the Google Compute Engine, as well as their perceived crown princes (CloudSigma, ProfitBricks), shows one thing: all of them have built proprietary infrastructure solutions underneath their IaaS offerings. Although most rely on one or the other open source component, in the end everything is self-developed and integrated. This leads to the conclusion that proprietary technologies are a competitive advantage, since only a single provider benefits from them. Or was Amazon AWS „just the first provider on the market“ and therefore has this huge lead?

Figures speak for OpenStack private clouds

The official OpenStack website presents use cases that show how OpenStack is actually used. These are divided into the categories On-Premise Private Cloud, Hosted Private Cloud, Hybrid Cloud and Public Cloud. A look at the current deployments (as of 01/14/2014) leads to the following result.

[Figure: OpenStack deployments, 01/14/2014]

Thus, On-Premise Private Cloud installations (55) are quite clearly at the top, followed with a wide margin by Hosted Private Cloud (19) and Public Cloud (17) deployments. These are followed by Hybrid Clouds (10) and not exactly specified projects (4).
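For orientation, the reported figures translate into the following shares. This is just arithmetic over the numbers above:

```python
# Officially reported OpenStack deployments as of 01/14/2014 (figures above).
deployments = {
    "On-Premise Private Cloud": 55,
    "Hosted Private Cloud": 19,
    "Public Cloud": 17,
    "Hybrid Cloud": 10,
    "Unspecified": 4,
}

total = sum(deployments.values())  # 105 reported deployments in total
for category, count in deployments.items():
    print(f"{category}: {count} ({100 * count / total:.1f}%)")
```

On-premise private clouds alone account for more than half of all reported deployments, and private cloud variants together (on-premise plus hosted) for roughly 70 percent.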

Please note that these figures only include deployments that were officially reported to OpenStack. Nevertheless, they show a clear trend as to where the future of OpenStack lies: in the private cloud. I assume that hosted private cloud deployments and hybrid clouds will increase even more, as will OpenStack installations that serve as a means to an end and merely form the infrastructural basis for web services.

Categories
Comment

Whether public or private, a cloud is indispensable for everyone!

In 2014 we are finally in the year of the cloud. Promise! As in 2012 and 2013, it will make its breakthrough this year. Promise! It is written everywhere. If we put the sarcasm aside a little and face reality, the truth does not look so bleak. It just depends on the shape of the cloud. IDC and Gartner are talking about billions of dollars to be invested in the global public IaaS market in the coming years. Crisp Research has looked at the German IaaS market in 2013 and came to completely opposite numbers. About 210 million euros were invested in public IaaS, while private cloud infrastructures attracted investments of several billion euros. The contrast between the two markets could hardly be bigger. But that's OK. That is our German mindset: careful, slow, successful. However, there is one thing companies should write on their agenda for 2014 and the years ahead, whether they are talking about a public or a private cloud. One thing is certain: no company can do without a cloud anymore! Guaranteed! Why? Read on.

The deployment model of IT has changed

Basically, a cloud is about how IT resources are provided: on demand, via self-service, and using a billing model that determines costs according to actual consumption.
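As an illustration of that billing model, here is a minimal sketch. The instance types and rates are invented, not taken from any provider's price list; rates are in euro cents per hour to keep the arithmetic exact:

```python
# Minimal sketch of consumption-based billing, as described above.
# Instance types and rates are invented for illustration.
RATE_CENTS_PER_HOUR = {"small_vm": 5, "large_vm": 20}

def monthly_bill_cents(usage_hours: dict) -> int:
    # Cost follows actual consumption: zero usage means zero cost.
    return sum(RATE_CENTS_PER_HOUR[vm] * h for vm, h in usage_hours.items())

# A VM used for only 100 hours is billed for 100 hours, not a whole month.
print(monthly_bill_cents({"small_vm": 100, "large_vm": 0}))  # 500 cents = 5 euros
```

The point of the model is exactly this coupling: what is not consumed is not paid for, in contrast to classic license or flat-rate hosting models.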

The above-mentioned IT resources, in the form of applications (SaaS), platforms (development environments; PaaS) and infrastructures (virtual servers, storage; IaaS), are provided as services that a user can order. Conversely, this means that you must not stop at ordinary virtualization. Virtualization is just a means to an end; in the end, the user has to get to the resources somehow. Picking up the phone, calling the IT department and waiting is not a sign that a cloud infrastructure is available. Quite the contrary.

Which kind of cloud the resources are provided from depends on the use case. There is no „Uber Cloud“ that solves all problems at once. For a public cloud, sufficient use cases exist, even for topics or industries that seem far away at first. In the end, in many cases it is a question of data, and exactly this data needs to be classified precisely. The decision may then be that only a private cloud comes into question. In this case, a company becomes its own cloud provider (with all the highs and lows a public cloud provider has to deal with), develops its own cloud infrastructure and supplies its internal customers directly. Alternatively, one can go to one of the managed cloud providers that operate a private cloud exclusively within a dedicated environment for a single customer and also have professional services in their portfolio, which public cloud providers typically offer only via a network of partners.

What is solely crucial is that companies turn to a cloud model, because…

Employees ask for services on demand

Employees want services, and they want them now, not in two weeks or three months. And if they do not get what they need, they find a way to get it. Honestly! For quite some time there have been many attractive alternatives on the IT market that are just a few clicks and a credit card number away from satisfying those needs. That a lot of work still awaits, especially in the IaaS area, is something most non-IT people are unaware of. But they obviously got what they needed, even if it was just the desire for attention. The cloud provider, after all, responded immediately. The miracle of self-service!

IT-as-a-Service is not just a buzzword; it is reality. IT departments are under pressure to operate as a separate business unit and even develop products and services for their company, or at least provide them according to its needs. They must therefore respond proactively. And this does not mean applying handcuffs by closing firewall ports. No, it is about questioning themselves.

That this works was impressively demonstrated by the Deutsche Bahn subsidiary DB Systel, which reduced the deployment process from 5 days to 5 minutes(!) per virtual server with its own private cloud.

Keep the hybrid cloud in mind

During the constant discussions about whether a public or private cloud comes into question, one should always keep the option of a hybrid cloud in mind.

A hybrid cloud allows a specific use case for the use of a public cloud. Here, certain areas of the IT infrastructure (computing power and storage) are moved into a public cloud environment. The rest, including the mission-critical areas, remains within the self-managed on-premise IT infrastructure or private cloud.

In addition, the hybrid cloud model provides a valuable approach to architecture design: sections of the local infrastructure that cause high costs but are difficult to scale are combined with infrastructure that can be provisioned on demand and scales massively. The applications and data are rolled out on the platform that is best for the individual case, and the processing between the two is integrated.

The use of hybrid scenarios confirms that not all IT resources should be moved into public cloud environments, and for some this will never even come into question. Considering issues such as compliance, speed requirements and security restrictions, a local infrastructure is still necessary. However, the experience gained from the hybrid model helps to understand which data should be kept locally and which can be processed within a public cloud environment.
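The classification decision described above can be sketched as a simple placement rule. The labels and record fields are invented for illustration; real policies are of course richer:

```python
# Sketch: place workloads in a hybrid cloud based on data classification.
# The field names and classification labels are invented for illustration.
def placement(workload: dict) -> str:
    # Mission-critical workloads and confidential data stay on the private side.
    if workload["mission_critical"] or workload["data_class"] == "confidential":
        return "private cloud"
    # Everything else may be moved into the public cloud environment.
    return "public cloud"

print(placement({"mission_critical": True, "data_class": "internal"}))  # private cloud
print(placement({"mission_critical": False, "data_class": "public"}))   # public cloud
```

The value of the hybrid model is precisely that this rule does not have to be decided once for the whole IT landscape but can be applied per workload.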

Categories
Analysis

Why I do not (yet) believe in the Deutsche Börse Cloud Exchange (DBCE)

After a global „drumbeat“ it has become quite silent around the Deutsche Börse Cloud Exchange (DBCE). However, I am often asked about the cloud marketplace, which has the pretension to revolutionize the infrastructure-as-a-service (IaaS) market, and about my estimation of its market potential and future. Summing up, I always come to the same statement: given the present situation, I do not yet believe in the DBCE and grant it little market potential. Why? Read on.

Two business models

At first glance, I see two business models within the DBCE. The first will become real in 2014: the marketplace for the supply and demand of virtual infrastructure resources (compute and storage), which enterprises are supposed to use to run their workloads (applications, data storage).

The second is a dream of the future: trading virtual resources as we know it from futures. Let's be honest, what is the true job of a stock exchange? The real task? The core business? To organize and monitor the transfer of critical infrastructure resources and workloads? No. A stock exchange is there to determine the price of virtual goods and trade them. The IaaS marketplace is just the precursor to prepare the market for this trading business and to bring together the necessary supply and demand.

Providers and users

A marketplace basically requires two parties: suppliers and demanders, in this case the users. Regarding the suppliers, the DBCE need not worry. As long as the financial, organizational and technical effort is relatively low, the supply side will quickly offer enough resources. The issues are on the buyer side.

I have talked about this topic with Reuven Cohen, who released SpotCloud, the first worldwide IaaS marketplace, in 2010. At the peak of demand, SpotCloud administered 3,200 providers(!) and 100,000 servers worldwide. The buyer side looked modest.

Even if Reuven was too early in 2010, I still see five important topics that will delay the adoption rate of the DBCE: trust, psychology, use cases, technology (APIs) and management.

Trust and psychology

In theory the idea behind the DBCE sounds great. But why should the DBCE be more trustworthy than other IaaS marketplaces? Does a stock exchange still enjoy the confidence it stands for, or has to stand for, as an institution? In addition, IT decision makers have a different point of view. The majority of enterprises are still overwhelmed by the public cloud and fear outsourcing their IT and data. There is a reason why Crisp Research figures show that spending on public IaaS in Germany in 2013 was just about 210 million euros, while investments in private cloud infrastructures amounted to about 2.3 billion euros.

This is also underlined by a Forrester study, which states:

“From […] over 2,300 IT hardware buyers, […] about 55% plan to build an internal private cloud within the next 12 months.”

An independent marketplace does not change this. On the contrary, even if it offers more transparency, it creates a new level of complexity between user and provider, which the user needs to understand and adopt. This is also reflected in the use cases and workloads that are supposed to run on the DBCE.

Use Cases

Why should someone use the DBCE? A question that is not easy to answer. Why should someone buy resources through a marketplace when he can purchase them directly from a provider who already has a global footprint, many customers and a stable infrastructure? Price and comparability could be essential attributes. If a virtual machine (VM) at provider A is cheaper today than at provider B, then provider A's VM is used. Really? No. This leads to a more complex application architecture that is out of proportion to its benefit. Cloud computing is already complex enough that a smart architect would refrain from doing this. In this context, one should also not forget the technical barriers a cloud user has to deal with when adopting current but constantly evolving cloud infrastructures.

One interesting scenario that could be built with the DBCE is a multi-cloud concept to spread the technical risks (e.g. the outage of a provider).
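A minimal sketch of that multi-cloud idea follows. The provider names and the deploy stub are invented for illustration; the point is only the pattern of falling back to the next provider when one is unavailable:

```python
# Sketch: spread technical risk across providers, as a multi-cloud concept.
# Provider names and the failure behavior are invented for illustration.
def deploy(provider: str, workload: str) -> bool:
    # Stand-in for a real provisioning call; "provider_a" is down in this example.
    return provider != "provider_a"

def deploy_multi_cloud(providers: list, workload: str) -> str:
    for provider in providers:
        if deploy(provider, workload):
            return provider  # first provider that accepts the workload
    raise RuntimeError("all providers failed")

print(deploy_multi_cloud(["provider_a", "provider_b"], "web-app"))  # provider_b
```

The outage of a single provider then no longer stops the deployment, which is exactly the risk-spreading argument for buying through a marketplace.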

Which brings us to the biggest barrier: the APIs.

APIs and management

The discussions about „the“ preferred API standard in the cloud do not stop. Amazon EC2 (compute) and Amazon S3 (storage) are deemed the de facto standard, supported by most providers and projects.

As a middleware, the DBCE tries to settle between providers and users to offer a consistent interface of its own(!) for both sides. For this purpose, the DBCE relies on technology from Zimory, which offers an open but proprietary API. Instead of focusing on a well-known standard or adopting one from the open source community (OpenStack), the DBCE tries to go its own way.
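The middleware role described above essentially boils down to an adapter layer: one consistent interface for the users, one adapter per provider underneath. A hypothetical sketch, where none of the class or method names come from Zimory or the DBCE:

```python
# Hypothetical sketch of a marketplace middleware: one consistent interface
# on top, one adapter per provider underneath. All names are invented.
class ProviderAdapter:
    def start_vm(self, size: str) -> str:
        raise NotImplementedError

class ProviderA(ProviderAdapter):
    def start_vm(self, size: str) -> str:
        return f"provider-a-vm-{size}"  # would call provider A's native API

class ProviderB(ProviderAdapter):
    def start_vm(self, size: str) -> str:
        return f"provider-b-vm-{size}"  # would call provider B's native API

class Marketplace:
    # The single API the user sees; the adapters hide provider differences.
    def __init__(self, adapters: dict):
        self.adapters = adapters

    def start_vm(self, provider: str, size: str) -> str:
        return self.adapters[provider].start_vm(size)

market = Marketplace({"a": ProviderA(), "b": ProviderB()})
print(market.start_vm("a", "small"))  # provider-a-vm-small
```

This illustrates the complexity argument from above: the user gains a uniform interface, but only by adopting yet another API on top of the providers' native ones.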

Question: given that we Germans have not exactly covered ourselves in glory regarding the cloud so far, why should the market commit to a new proprietary standard that comes from Germany?

Further issues are the management solutions for cloud infrastructures. Either potential users have already decided on a solution and thus face the challenge of integrating the new APIs, or they are still in the decision process. In that case, the issue is that common cloud management solutions do not support the DBCE APIs.

System integrators and cloud brokers

There are two target groups with the potential to open the door to the users: system integrators (channel partners) and cloud brokers.

A cloud service broker is a third-party vendor who, on behalf of its customers, enriches services with added value and ensures that a service fulfills the specific requirements of an enterprise. In addition, he helps with the integration and aggregation of services to increase their security or to enhance the original service with further properties.

A system integrator develops (and operates), on behalf of its customers, a system or application on a cloud infrastructure.

Since both act on behalf of the users and operate the infrastructures, systems and applications, they can adopt the proprietary APIs and make sure that the users do not have to deal with them. Moreover, system integrators as well as cloud brokers can use the DBCE to purchase cloud resources cheaply and set up a multi-cloud model. The complexity of the system architecture in the cloud plays a leading role here, but the users need not notice it.

It is a matter of time

In this article I have asked more questions than I have answered. I do not want to rate the DBCE too negatively, because the idea is good. But the topics mentioned above are essential to get a foot in the users' door. This will be a learning process for both sides; I estimate a timeframe of about five years until this model leads to a significant adoption rate on the user side.

Categories
Analysis

Diversification? Microsoft’s Cloud OS Partner Network mutates into an "OpenStack Community"

At first sight, the announcement of the new Microsoft Cloud OS Partner Network sounds indeed interesting. Anyone who does not want to use the Microsoft public cloud directly can now select one of several partners to access Microsoft technologies indirectly. It is also possible to span a hybrid cloud between the Microsoft cloud and the partner clouds or one's own data center. For Microsoft and the technological reach of its cloud, this is indeed a clever move. But for the so-called „Cloud OS Partner Network“ this activity could backfire.

Microsoft Cloud OS Partner Network

The Cloud OS Partner Network consists of more than 25 cloud service providers worldwide which, according to Microsoft, focus especially on hybrid cloud scenarios based on Microsoft's cloud platform. For this purpose, they rely on a combination of Windows Server with Hyper-V, System Center and the Windows Azure Pack. With that, Microsoft tries to underline its vision of establishing its Cloud OS as the basis for customer data centers, service provider clouds and the Microsoft public cloud.

The Cloud OS Partner Network thus serves more than 90 different markets with a total customer base of three million companies worldwide. Overall, 2.4 million servers and more than 425 data centers form the technological base.

Among others, the partner network includes providers like T-Systems, Fujitsu, Dimension Data, CSC and Capgemini.

Advantages for Microsoft and the customers: locality and reach

For Microsoft, the Cloud OS Partner Network is a clever move to gain more worldwide market share, measured by the distribution of Microsoft cloud technologies. In addition, it fits perfectly into Microsoft's proven strategy of serving customers not directly but over a network of partners.

The partner network also accommodates the customers. Companies that have avoided the Microsoft public cloud (Windows Azure) for reasons of data locality or local policies like data privacy can now find a provider in their own country and do not have to dispense with the desired technologies. For Microsoft, another advantage is that it does not necessarily have to build a data center in each country, but can concentrate on the existing or strategically important ones.

With that, Microsoft can lean back a little and once again let a partner network do the uncomfortable sales work. The income comes, as before the age of the cloud, from selling licenses to the partners.

Downside for the partner: Diversification

There are great stories to be found about this partner network. But the fact is that with the Cloud OS Partner Network, Microsoft creates a competitive situation similar to the one in the OpenStack community. Especially in the public cloud, with Rackspace and HP, there are just two „top providers“, who only play a minor part in the worldwide cloud circus. Notably, HP fights more with itself and is therefore not able to concentrate on innovation. However, the main problem of both, and of all the other providers, is that they are not able to differentiate from each other. Instead, most of the providers stand in direct competition with each other and currently do not diverge significantly. This is due to the fact that all rely on the same technological base. An analysis of the current situation of the OpenStack community can be found under „Caught in a gilded cage. OpenStack providers are trapped“.

The situation for the Cloud OS Partner Network is even more uncomfortable. Unlike with OpenStack, Microsoft is the only technology supplier and decides where things are going. The partner network has to swallow what is set before it and can only add further technology stacks, which leads to more overhead and thus further costs for development and operations.

Except for their local markets, all Cloud OS service providers are in direct competition with each other and, based solely on Microsoft technologies, are not able to differentiate otherwise. Good support and professional services are extremely important and an advantage, but no USP in the cloud.

If the Cloud OS Partner Network flops, Microsoft will get away with a black eye. The greater harm will be carried home by the partners.

Technology as a competitive advantage

Looking at the really successful cloud providers on the market, it becomes clear that those who have developed their clouds on their own technology stack differentiate themselves technologically from the rest. These are Amazon, Google and Microsoft, and precisely not Rackspace or HP, who both rely on OpenStack.

Cloud OS partners like Dimension Data, CSC and Capgemini should keep this in mind. In particular, CSC and Dimension Data have big claims to have a say in the cloud.

Categories
Analysis

Quo vadis VMware vCloud Hybrid Service? What to expect from vCHS?

In May of this year, VMware presented the vCloud Hybrid Service (vCHS), its first infrastructure-as-a-service (IaaS) offering, in a private beta. The infrastructure service is fully managed by VMware and based on VMware vSphere. Companies that have already virtualized their own on-premise infrastructure with VMware are to be given the opportunity to seamlessly expand their data center resources, such as applications and processes, into the cloud and thus span a hybrid cloud. As part of VMworld 2013 Europe in Barcelona in October, VMware announced the availability of vCHS in England with a new data center facility in Slough near London. The private beta of the European vCloud Hybrid Service will be available from the fourth quarter of 2013, with general availability planned for the first quarter of 2014. This step shows that VMware ascribes an important role to the public cloud, but was especially made to meet the needs of European customers.

Careful with great expectations?

During the press conference at VMworld in Barcelona, VMware CEO Pat Gelsinger made the glad announcement that instead of the expected 50 registrations, 100 participants had registered for the vCHS private beta. After all, this exceeds expectations by 100 percent. However, 100 test customers cannot count as a success for a vendor like VMware; just take a look at its broad customer base and the effectiveness of its sales.

It is therefore necessary to ask how attractive a VMware IaaS offering can actually be, and how attractive VMware technology, most notably VMware's hypervisor, which is widespread in the enterprise, will be in the future. Discussions with IT leaders increasingly suggest that, for cost reasons and a sense of independence (lock-in), VMware's hypervisor should be superseded by an open source hypervisor like KVM in the short to medium term.

Challenges: Amazon Web Services and the partner network

With vCHS, VMware positions itself exactly like about 95 percent of all vendors in the IaaS market: with compute power and storage. A service portfolio is not available. Nevertheless, VMware sees Amazon Web Services (AWS) as its main target and primary competitor when it comes to attracting customers.

However, AWS customers, including enterprise customers, use more than just the infrastructure. Every time I talk to AWS customers, I ask my standard question: „How many infrastructure-related AWS services do you use?“ Summing up all previous answers, I come to an average of eleven to twelve services per AWS customer. Most say that they use as many services as necessary in order to have as little effort as possible. A key experience was a customer who, without hesitation and with wide eyes, said he was using 99 percent of all the services.

AWS is a popular and particularly attractive target. But with its view fixed on AWS, VMware should be careful not to lose its network of partners, in particular the service providers that have built their infrastructures with VMware technology, including CSC, Dimension Data, Savvis, Tier 3, Verizon and Virtustream.

From the first service providers there are already comments about turning their backs on VMware and supporting other hypervisors such as Microsoft Hyper-V. So far, VMware views these discussions calmly, as vCHS is intended purely for standardized workloads and VMware-powered service providers are supposed to take care of specific customer needs.

Quo vadis vCloud Hybrid Service?

Because of its technology, which is used by a variety of cloud service providers, VMware has been a passive player in the IaaS environment for some time. However, the vCloud Hybrid Service, still in beta, is in direct competition with this partner network, so it will come down to a question of trust between the two sides.

This is particularly due to the fact that vCHS offers more functionality than the typical vCloud Datacenter Service. This includes, among other things, the new vCloud Hybrid Service online marketplace, through which users, according to VMware, have access to more than 3,800 applications and services they can use in conjunction with vCHS.

VMware’s strategy consists primarily of building a seamless technical bridge between local VMware installations and vCHS, including the ability to transfer existing standard workloads between one's own data center and vCHS. How much success VMware will have with this strategy remains to be seen.

One result of our exploration of the European cloud market is that most European companies like the properties of the cloud, such as flexible use, pay-per-use and scalability, but would rather pass the construction and operation of their applications and systems to the provider (as a managed or hosted service).

These are good conditions for all service providers who have built their infrastructure on VMware technology and help customers with professional services on the way to the cloud. However, these are not the best conditions for VMware's vCHS, since vCHS will only support self-service and standard workloads.

As is typical for U.S. companies, VMware launched its European vCHS in England. The company has already announced plans to become active in other European countries. The European commitment mainly serves to appease the concerns of Europeans and to respond to their specific needs. Here, VMware's EMEA strategy covers data locality, sovereignty, trust, security, compliance and governance. I do not want to appear too critical, but the name here is interchangeable: all cloud providers advertise with the same points.

Basically, it can be said that the vCloud Hybrid Service, in combination with the rest of VMware’s portfolio, has a good chance of playing a leading role in the IaaS market. This is mainly due to the broad customer base, a strong ecosystem and many years of experience with enterprise customers.

According to VMware’s figures, more than 500,000 customers – including 100 percent of the Fortune 500, 100 percent of the Fortune Global 100 and 95 percent of the 100 DAX companies – use VMware technology. Furthermore, more than 80 percent of virtualized workloads and a large number of mission-critical applications run on VMware.

This means that the transition to a VMware-based hybrid cloud is not an unusual step and can scale without great expense. Furthermore, VMware, like no other virtualization or cloud provider, has a very large ecosystem of partners, resellers, consultants, trainers and distribution channels. With this ecosystem, VMware has a great advantage over its competitors in the IaaS market and in cloud computing in general.

The situation is similar with the technical attractiveness. Organizationally, VMware does not rely on a typical public cloud environment but provides either a physically isolated dedicated cloud per customer or a virtual private cloud. The dedicated cloud delivers much more power than the virtual private cloud and provides the customer with a physically separated pool of vCPUs and vRAM; storage and network are logically isolated between clients. The virtual private cloud provides customers with the same architecture and resources as the dedicated cloud, but these are only logically and not physically separated.

With vCHS, existing VMware infrastructures can conveniently be extended into a hybrid cloud in order to move resources back and forth as needed. In addition, a company can use it to build test and development or disaster recovery environments. However, this is exactly what a number of VMware service providers already stand for, which could be the biggest challenge for both sides.

Kategorien
Analysis

Cloud Computing Myth: Less know-how is needed

I recently found an interesting statement in an article describing the advantages of cloud computing. One caption read: „There is no need to build additional know-how within the company“. This is totally wrong. On the contrary, the exact opposite is true: more knowledge is needed than any vendor promises.

There is a lack of knowledge

The kind and amount of knowledge needed depend on the service consumed from the cloud. For a supposedly highly standardized software-as-a-service (SaaS) application such as e-mail, less knowledge of the service and its characteristics is needed than for a service that maps a specific process.

For the use of infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) it looks quite different. Although the provider takes care of building, operating and maintaining the physical infrastructure in this case, the responsibility for the virtual infrastructure (IaaS) lies with the customer. The provider itself – for fee-based support – or certified system integrators can help with building and operating it. The same applies to operating an own application on a cloud infrastructure. The cloud provider is not responsible for the application; it merely serves the infrastructure and provides means such as APIs, web interfaces and sometimes value-added services to help customers with development. In this context one needs to understand that, depending on how the cloud scales – scale out vs. scale up – the application has to be developed completely differently than in a non-cloud environment, namely distributed and automatically scalable across multiple systems (scale out). This architectural knowledge is lacking in most companies at every turn, which is also due to the fact that colleges and universities have not taught this kind of programmatic thinking.

Cloud computing is still more complex than it appears at first glance. The prime example of this is Netflix. The U.S. video-on-demand provider operates its platform on a public cloud infrastructure (Amazon AWS). In addition to an extensive production system, which ensures scalable and high-performance operation, it has also developed an extensive test suite – the Netflix Simian Army – whose sole job is to ensure the smooth operation of the production system: among other things, virtual machines are constantly and arbitrarily shot down, yet the production system must continue to function properly.
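The Simian Army principle can be sketched in a few lines (this is an illustration of the idea, not Netflix’s actual tooling): instances are randomly „killed“ while the test asserts that the service keeps answering.

```python
# Illustrative chaos-testing sketch: repeatedly kill random instances and
# verify the service still answers. In a real setup the kill would be an
# API call terminating a VM; here a Cluster object stands in for that.
import random

class Cluster:
    def __init__(self, n):
        self.instances = set(range(n))

    def handle_request(self):
        # the service works as long as at least one instance survives
        return len(self.instances) > 0

    def kill_random_instance(self):
        if len(self.instances) > 1:  # never kill the very last instance here
            self.instances.discard(random.choice(list(self.instances)))

    def heal(self):
        # stand-in for auto-scaling replacing lost instances
        while len(self.instances) < 3:
            self.instances.add(max(self.instances) + 1)

cluster = Cluster(3)
for _ in range(100):                 # the "monkey" strikes repeatedly
    cluster.kill_random_instance()
    assert cluster.handle_request()  # production must keep working
    cluster.heal()
print("service survived 100 random failures")
```

The point is exactly the one made above: this resilience does not come for free; it has to be designed, automated and continuously tested.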

The demand for the managed cloud rises

The deployment model can do little to reduce this complexity, but the responsibility and the necessary know-how can be shifted. Within a public cloud, self-service rules. This means that the customer is initially 100 percent on his own and solely responsible for the development and operation of his application.

Many companies have recognized this and admit to themselves that they do not have the necessary knowledge, staff and other resources to successfully use a public cloud (IaaS). Instead, they prefer or expect help from their cloud provider. In these cases the answer is not public cloud providers but managed cloud/business cloud providers, which help with professional services in addition to infrastructure.

Find more on the topic of the managed cloud in „The importance of the Managed Cloud for the enterprise“.

Kategorien
Analysis

The Amazon Web Services grab for enterprise IT. A reality check.

The AWS re:Invent 2013 is over, and the Amazon Web Services (AWS) continue to reach out to corporate clients with some new services. After establishing itself as the leading infrastructure provider and enabler for startups and new business models in the cloud, the company from Seattle has been trying for quite some time to get one foot directly into the lucrative enterprise market. Current public cloud market figures for 2017 from IDC ($107 billion) and Gartner ($244 billion) give AWS tailwind and encourage the IaaS market leader in its pure public cloud strategy.

The new services

With Amazon WorkSpaces, Amazon AppStream, AWS CloudTrail and Amazon Kinesis, Amazon introduced some interesting new services that particularly address enterprises.

Amazon WorkSpaces

Amazon WorkSpaces is a service that provides virtual desktops based on Microsoft Windows, with which an own virtual desktop infrastructure (VDI) can be built within the Amazon cloud. A Windows Server 2008 R2 serves as the basis and rolls out desktops with a Windows 7 environment. All services and applications are streamed from Amazon data centers to the corresponding devices using Teradici’s PCoIP (PC over IP) protocol. The devices can be desktop PCs, laptops, smartphones or tablets. In addition, Amazon WorkSpaces can be combined with a Microsoft Active Directory, which simplifies user management. By default, the desktops are delivered with familiar applications such as Firefox or Adobe Reader/Flash; administrators can adjust this as desired.

With Amazon WorkSpaces, Amazon enters completely new territory, where Citrix and VMware, two absolute market players, are already waiting. During VMworld in Barcelona, VMware just announced the acquisition of Desktone. VDI is fundamentally a very exciting market segment because it relieves corporate IT of administration tasks and reduces infrastructure costs. However, it is also a very young market segment, and companies are very careful when outsourcing their desktops because, unlike with traditional on-premise terminal services, bandwidth (network, Internet, data connection) is crucial.

Amazon AppStream

Amazon AppStream is a service that serves as a central backend for graphically intensive applications. With it, the actual performance of the device on which the applications are used should no longer play a role, since all inputs and outputs are processed within the Amazon cloud.

Since the power of devices is likely to keep increasing in the future, local computing power can probably be disregarded. However, for building a real mobile cloud, in which all data and information are located in the cloud and the devices serve only as consumers, the service is quite interesting. Furthermore, the combination with Amazon WorkSpaces is worth considering, in order to provide applications on devices that serve only as thin clients and require no further local intelligence or performance.

AWS CloudTrail

AWS CloudTrail helps to monitor and record the AWS API calls for one or more accounts. Calls from the AWS Management Console, the AWS Command Line Interface (CLI), own applications or third-party applications are considered. The collected data are stored in Amazon S3 or Amazon Glacier for evaluation and can be viewed via the AWS Management Console, the AWS Command Line Interface or third-party tools. At the moment, only Amazon EC2, Amazon EBS, Amazon RDS and Amazon IAM can be monitored. AWS CloudTrail itself can be used free of charge; costs are incurred for storing the data in Amazon S3 and Amazon Glacier and for Amazon SNS notifications.

AWS CloudTrail, even if logging is not very exciting, is among the most important services for enterprise customers that Amazon has released lately. The collected logs assist with compliance by making it possible to record all access to AWS services and thus demonstrate compliance with government regulations. The same applies to security audits, which can thus trace vulnerabilities and unauthorized or erroneous data access. Amazon is well advised to expand AWS CloudTrail to all other AWS services as soon as possible and make it available worldwide in all regions. The Europeans in particular will be thankful.
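For illustration, here is a short Python sketch of how such a CloudTrail log document (CloudTrail delivers gzipped JSON files with a top-level „Records“ list to S3) could be evaluated for an audit. The record contents below are shortened and hand-made, not real log data.

```python
# Sketch: evaluating a CloudTrail-style log document for an audit.
# CloudTrail log files contain a JSON object with a "Records" list;
# each record notes who called which API when. Sample data is made up.
import json
from collections import Counter

log_document = json.dumps({
    "Records": [
        {"eventTime": "2013-11-20T10:00:00Z", "eventSource": "ec2.amazonaws.com",
         "eventName": "RunInstances", "awsRegion": "us-east-1",
         "userIdentity": {"userName": "alice"}},
        {"eventTime": "2013-11-20T10:05:00Z", "eventSource": "iam.amazonaws.com",
         "eventName": "DeleteUser", "awsRegion": "us-east-1",
         "userIdentity": {"userName": "bob"}},
    ]
})

records = json.loads(log_document)["Records"]
calls_per_user = Counter(r["userIdentity"]["userName"] for r in records)

# flag security-relevant calls for closer review in the audit
suspicious = [r["eventName"] for r in records if r["eventName"].startswith("Delete")]
print(calls_per_user, suspicious)  # who did what, and which calls to review
```

This is exactly the kind of evaluation that compliance and security audits need: a complete, queryable record of every API access.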

Amazon Kinesis

Amazon Kinesis is a service for the real-time processing of large data streams. To this end, Kinesis is able to process data streams of any size from a variety of sources. Amazon Kinesis is controlled via the AWS Management Console by assigning and saving different data streams to an application. Due to Amazon’s massive scalability there are no capacity limitations; however, the data are automatically distributed across the global Amazon data centers. Use cases for Kinesis are the usual suspects: financial data, social media, and data from the Internet of Things/Everything (sensors, machines, etc.).

The real benefit of Kinesis as a big data solution is the real-time processing of data. Common standard solutions on the market process data in batches, which means the data can never be processed directly in time but at best a few minutes later. Kinesis removes this barrier and opens up new possibilities for the analysis of live data.
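The partitioning behind such a stream service can be sketched conceptually (no AWS calls; shard count and record format are simplified assumptions): records are distributed to shards by hashing a partition key, and a consumer per shard processes records as they arrive instead of waiting for a batch run.

```python
# Conceptual sketch of Kinesis-style stream processing (no AWS calls):
# a record's partition key is hashed to pick a shard, so records with the
# same key land on the same shard and keep their order there.
import hashlib

NUM_SHARDS = 2
shards = {i: [] for i in range(NUM_SHARDS)}

def put_record(partition_key, data):
    """Distribute a record to a shard by hashing its partition key
    (Kinesis itself maps MD5 hashes of keys to shard hash-key ranges)."""
    shard_id = int(hashlib.md5(partition_key.encode()).hexdigest(), 16) % NUM_SHARDS
    shards[shard_id].append(data)

# e.g. sensor readings from the Internet of Things
for sensor, value in [("sensor-1", 21.5), ("sensor-2", 19.0), ("sensor-1", 22.1)]:
    put_record(sensor, {"sensor": sensor, "value": value})

# one consumer per shard would now process records as they arrive
total_records = sum(len(r) for r in shards.values())
print(total_records)  # 3
```

Because both „sensor-1“ readings hash to the same shard, per-sensor ordering is preserved while the stream as a whole scales across shards.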

Challenges: Public Cloud, Complexity, Self-Service, „Lock-in“

Looking at the current AWS references, the quantity and quality are impressive. Looking more closely, however, the top references are still startups, non-critical workloads or completely new developments. This means that most of the existing IT systems we are talking about are still not located in the cloud. Besides concerns about loss of control and compliance issues, this is because the scale-out principle makes it too complicated for businesses to migrate their applications and systems into the AWS cloud. In the end it boils down to the fact that they have to start from scratch, because a system not developed for distribution does not work the way it should on a distributed cloud infrastructure – keywords: scalability, high availability, multi-AZ. These are costs that should not be underestimated. Even the migration of a supposedly simple webshop is a challenge for companies that do not have the time and the necessary cloud knowledge to develop the webshop for a (scale-out) cloud infrastructure.

In addition, the scalability and availability of an application can only be properly realized on the AWS cloud if you stick to the services and APIs that guarantee them. Furthermore, many other infrastructure-related services are available, with more constantly being published, which make life clearly easier for developers. Thus the lock-in is preprogrammed. I am of the opinion that a lock-in need not be bad, as long as the provider meets the desired requirements. However, a company should consider in advance whether these services are actually mandatory. Virtual machines and standard workloads are relatively easy to move. For services that engage very closely with the own application architecture, it looks quite different.

Finally: even if some market researchers predict a golden future for the public cloud, the figures should be taken with a pinch of salt. Cloud market figures have been revised downwards for years, and you also have to consider in each case how these numbers are actually composed. But that is not the issue here. At the end of the day it is about what the customer wants. At re:Invent, Andy Jassy once again made clear that Amazon AWS consistently relies on the public cloud and will not invest in own private cloud solutions. You can interpret this as arrogance and ignorance towards customers, the pure will to disrupt, or just self-affirmation. The fact is, even if Amazon will probably build the private cloud for the CIA, they by far lack the resources and knowledge to act as a software provider on the market. Amazon AWS is a service provider. However, with Eucalyptus they have set up a powerful ally on the private cloud side, which makes it possible to build an AWS-like cloud infrastructure in the own data center.

Note: Nearly all Eucalyptus customers should also be AWS customers (source: Eucalyptus). Conversely, this means that some hybrid cloud infrastructures exist between on-premise Eucalyptus infrastructures and the Amazon public cloud.

Advantages: AWS Marketplace, Ecosystem, Enabler, Innovation Driver

What is mostly ignored in discussions about Amazon AWS and corporate customers is the AWS Marketplace. Admittedly, Amazon does not advertise it much either. In contrast to the bare cloud infrastructure, which customers can use to develop their own solutions, the marketplace offers full-featured software solutions from partners (e.g. SAP) that can be rolled out automatically on the AWS infrastructure. The costs for using the software are charged per use (hour or month), plus the AWS fees for the necessary infrastructure. Herein lies the real added value for companies: they can easily move their existing standard systems to the cloud and part with their on-premise systems.

One must therefore distinguish strictly between using the infrastructure for in-house development and operating ready-made solutions; both are possible in the Amazon cloud. There is also the ecosystem of partners and system integrators that help AWS customers develop their solutions. Even if AWS itself is (currently still) a pure infrastructure provider, it must equally be understood as a platform for other providers and partners who operate their businesses on it. This is the key to its success, an advantage over other providers in the market, and it will increase its long-term attractiveness to corporate customers.

In addition, Amazon is the absolute driving force for innovation in the cloud, one that no other cloud provider is currently able to match technologically. It does not take a re:Invent to see this; Amazon demonstrates it almost every month anew.

Amazon AWS is – partly – suitable for enterprise IT

The requirements Amazon has to meet vary depending on country and use case. European customers are mostly cautious with their data and would rather store it in their own country. I have already met more than one customer who was technically confident but for whom storing the data in Ireland was not an option. In some cases it is also a lack of ease of use: a company does not want to (re)develop its existing application or website for the Amazon infrastructure. The reasons are a lack of time and of the knowledge to implement it, which may result in a longer time to market. Both can be attributed to the complexity of achieving scalability and availability on the Amazon Web Services. After all, it takes more than a few API calls; the entire architecture needs to be oriented towards the AWS cloud. In Amazon’s case it is horizontal scaling (scale-out) that makes this necessary. Companies instead prefer vertical scaling (scale-up), so they can migrate the existing system 1:1 and achieve success in the cloud directly rather than starting from scratch.

However, the AWS references also show that sufficient use cases for companies exist in the public cloud, in which data storage can be considered rather uncritical as long as the data are classified beforehand and then stored in encrypted form.
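A sketch of that classify-then-encrypt workflow follows. The XOR „cipher“ is only a placeholder to keep the example self-contained – in practice a real cipher (e.g. AES via an established library) would be used, and encryption happens locally before anything is uploaded to the provider.

```python
# Sketch of the classify-then-encrypt workflow: classify the data first,
# then encrypt only the sensitive fields client-side before upload.
# The XOR keystream below is NOT secure -- it is a stand-in for real AES.
import hashlib
from itertools import cycle

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Placeholder cipher (XOR keystream) -- illustration only.
    XOR is symmetric, so applying it twice restores the plaintext."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(stream)))

def prepare_for_upload(record: dict, key: bytes) -> dict:
    """Classification step: only fields marked sensitive get encrypted."""
    sensitive = {"name", "email"}
    return {
        field: toy_encrypt(value.encode(), key) if field in sensitive else value
        for field, value in record.items()
    }

key = b"local-secret-never-leaves-the-company"
blob = prepare_for_upload({"name": "Max", "email": "max@example.org",
                           "country": "public"}, key)
# decrypting with the same key restores the original sensitive value
assert toy_encrypt(blob["name"], key) == b"Max"
print(blob["country"])  # non-sensitive data stays readable: "public"
```

The design point: the key never leaves the company, so even a provider-side breach exposes only ciphertext for the classified fields.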

Analyst colleague Larry Carvalho talked with a few AWS enterprise customers at re:Invent. One customer implemented a hosted website on AWS for less than $7,000, for which another system integrator wanted to charge $70,000. Another customer calculated that an on-premise business intelligence solution including maintenance would cost him about $200,000 per year; on Amazon AWS he pays only $10,000 per year. On the one hand these examples show that AWS is an enabler; on the other hand, they show that security concerns in some cases yield to cost savings.

Kategorien
Analysis

Runtastic: A consumer app in the business cloud

The success stories from the public cloud do not stop. Constantly, new web and mobile applications appear that rely on infrastructure-as-a-service (IaaS) offerings. This is just one reason that encourages market researchers like IDC ($107 billion) and Gartner ($244 billion) to certify a supposedly golden future for the public cloud by 2017. The success of the public cloud cannot be argued away; the facts speak for themselves. However, it is currently mostly startups, and often non-critical workloads, that move to the public cloud. Talking to IT decision-makers from companies with mature IT infrastructures, the reality looks quite different. This is also shown by our study on the European cloud market. In particular, public cloud providers underestimate the self-service model and the complexity when they approach corporate customers. But even for young companies the public cloud is not necessarily the best choice. I recently spoke with Florian Gschwandtner (CEO) and Rene Giretzlehner (CTO) of Runtastic, who decided against the public cloud and chose a business cloud – more precisely, T-Systems.

Runtastic: From 0 to over 50 million downloads in three years

Runtastic consists of a series of apps for endurance, strength & toning, health & wellness and fitness, helping users to achieve their health and fitness goals.

The company was founded in 2009 in Linz (Austria) and, due to its huge growth in the international mobile health and fitness industry, now has locations in Linz, Vienna and San Francisco. The growth figures are impressive. After 100,000 downloads in July 2010, there are now over 50 million downloads (as of October 2013). Of these, 10 million downloads come from Germany, Austria and Switzerland alone, which roughly corresponds to more than one download per second. Runtastic has 20 million registered users, up to 1 million daily active users and more than 6 million monthly active users.


Source: Runtastic, Florian Gschwandtner, November 2013

In addition to the mobile apps, Runtastic also offers its own hardware to make using the apps and services more convenient. Another interesting service is „Story Running“, which is meant to bring more variety and action to running.

Runtastic: Reasons for the business cloud

Considering these growth figures and the growing number of apps and services, Runtastic is an ideal candidate for a cloud infrastructure – judging by similar offerings, even for a public cloud. Or so you would think. Runtastic started its infrastructure on a classic web host, without any cloud model, and gathered experience with different providers over the years. Due to the continuing expansion, new underlying conditions, the growing number of customers and the need for high availability, the decision was made to outsource the infrastructure to the cloud.

Here, a public cloud was never an option for Runtastic: the dedicated(!) location of the data was a top priority, and not only because of security concerns or the NSA scandal. The Amazon Web Services (AWS) were also out of the question for purely technical reasons, since many of Runtastic’s requirements cannot be mapped there. Among other things, the core database comprises about 500 GB on a single MySQL node, which AWS cannot accommodate, according to Giretzlehner. In addition, the costs for such large servers in a public cloud are so high that own hardware quickly pays for itself.

Instead, Runtastic decided on the business cloud of T-Systems. The reasons were a good setup, a high quality standard and two data centers at one location (in two separate buildings), as well as a high level of security, professional technical service, the expertise of the staff and partners, and a very good connection to the Internet hubs in Vienna and Frankfurt.

To that end, Runtastic combines the colocation model with a cloud approach. This means that Runtastic covers the base load with its own hardware and absorbs peak loads with infrastructure-as-a-service (IaaS) computing power from T-Systems. The management of the infrastructure is automated with Chef. Although the majority of the infrastructure is virtualized, Runtastic takes a hybrid approach: the core database, the storage and the NoSQL clusters run on bare metal (non-virtualized). Thanks to the global reach (90 T-Systems data centers worldwide), Runtastic can continue to ensure that all apps work seamlessly everywhere.
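The base-load/peak-load split described above can be sketched with a toy capacity plan (all numbers are invented for illustration, not Runtastic’s actual figures): own hardware covers a fixed base capacity, and everything above it is burst to IaaS computing power.

```python
# Sketch of a base-load/peak-load split: own (colocated) hardware handles
# a fixed base capacity; load above that is "burst" to IaaS. Numbers are
# made up for illustration.
BASE_CAPACITY = 100  # requests/s handled by own hardware

def plan_capacity(load_per_hour):
    """For each hourly load, return (own-hardware share, cloud share)."""
    plan = []
    for load in load_per_hour:
        own = min(load, BASE_CAPACITY)
        cloud = max(0, load - BASE_CAPACITY)  # only the peak goes to IaaS
        plan.append((own, cloud))
    return plan

# a sample day with a midday and an evening peak
loads = [40, 60, 150, 90, 220]
plan = plan_capacity(loads)
cloud_total = sum(cloud for _, cloud in plan)
print(plan)
print(cloud_total)  # 170: only the peaks are paid for at cloud rates
```

The business case hinges on exactly this split: steady base load is cheaper on owned hardware, while rare peaks are cheaper as pay-per-use cloud capacity.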

An unconventional decision is the three-year contract Runtastic concluded with T-Systems. It includes the housing of the central infrastructure, a redundant Internet connection and on-demand computing power and storage from the T-Systems cloud. Runtastic chose this model because it sees the partnership with T-Systems as long-term and does not want to move the infrastructure from one day to the next.

Another important reason for a business cloud was the direct line to contact persons and professional services. This is also reflected in a higher price, which Runtastic consciously accepts.

The public cloud is not absolutely necessary

Runtastic is the best example that young companies can be successful without a public cloud and that a business cloud is definitely an option – even though Runtastic does not run completely on a cloud infrastructure. In each case, however, the business case must be calculated to determine whether it is (technically and economically) worthwhile to invest in parts of an own infrastructure and absorb the peaks with cloud infrastructure.

This example also shows that the public cloud is not easy sledding for providers and that consulting and professional services from a business or managed cloud are in demand.

During the keynote at AWS re:Invent 2013, Amazon AWS presented its flagship customer Netflix and the Netflix cloud platform (see the video from minute 11), which was developed specifically for the Amazon cloud infrastructure. Amazon of course sells the Netflix tools as a positive aspect, which they definitely are. However, Amazon conceals the effort Netflix makes in order to use the Amazon Web Services.

Netflix shows very impressively how a public cloud infrastructure is used via self-service. However, considering the effort Netflix makes to be successful in the cloud, one simply has to say that cloud computing is not simple and that, no matter which provider, an application for a cloud infrastructure needs to be built with the corresponding architecture. Conversely, this means that using the cloud does not always lead to the promised cost advantages. In addition to the savings in infrastructure costs, which are always calculated up front, the other costs must never be neglected: the staff with the appropriate skills, and the development of a scalable and fault-tolerant application in the cloud.

In this context, the excessive use of infrastructure-near services should also be reconsidered. Although these simplify developers’ lives, as a company you should consider in advance whether such services are actually absolutely needed in order to keep your options open. Virtual machines and standard workloads are relatively easy to move. For services that engage very closely with the own application architecture, it looks different.

Kategorien
Comment

Verizon Cloud: Compute and Storage. Boring. Next, please!

There are those moments in which I read a press release and think: „Wow, this could be exciting.“ But it is mostly these very press releases that I do not want to continue reading after the second paragraph. Oh, by the way, Verizon has reinvented the enterprise cloud and would thus compete with the Amazon Web Services. Oh well, and Verizon offers compute power and storage. Very promising services for succeeding in the IaaS market and turning it upside down – in a market where 100 percent of all IaaS providers have exactly these services in their portfolio and do not differentiate themselves from one another.

Verizon Cloud

An excerpt from the press release:

“Verizon created the enterprise cloud, now we’re recreating it,” said John Stratton, president of Verizon Enterprise Solutions. “This is the revolution in cloud services that enterprises have been calling for. We took feedback from our enterprise clients across the globe and built a new cloud platform from the bottom up to deliver the attributes they require.”

And then this:

„Verizon Cloud has two main components: Verizon Cloud Compute and Verizon Cloud Storage. Verizon Cloud Compute is the IaaS platform. Verizon Cloud Storage is an object-based storage service.“

Verizon seriously offers compute power and storage – something that 100 percent of all IaaS providers on the market have in their portfolio – and sells it as a REVOLUTION? Not really, right?

ProfitBricks and CloudSigma send their regards

In addition, Verizon praises the granularity of its infrastructure resources:

„Previously, services had pre-set configurations for size (e.g. small, medium, large) and performance, with little flexibility regarding virtual machine and network performance and storage configuration. No other cloud offering provides this level of control.“

Wrong! ProfitBricks and CloudSigma have offered this level of granularity and flexibility from the beginning. Even this property of the Verizon cloud is no revolution.

Likewise not a revolution, but potentially appealing to current VMware users, is the compatibility with the VMware hypervisor. A promising market, because many companies still use the VMware hypervisor. But this is also where VMware wants a say with its own vCloud Hybrid Service.

Fight a war with the wrong weapons

It is amazing how many providers try to fight the Amazon Web Services with new offerings and bring only a blunt knife instead of a nuclear bomb – especially since Verizon developed its solution from scratch.

For me this slowly raises the question of when we will see the first Nirvanix among the IaaS providers that offer only compute power and storage. After all, more and more candidates are positioning themselves.

Next, please! (Viva la Revolution.)