My cloud computing predictions for 2014

The year 2013 is coming to an end, and it is once again time to look into the crystal ball for the next year. To this end, I have picked out ten topics that I believe will be relevant in the cloud sector.

1. The AWS vs. OpenStack API discussion will never end

This prediction will hold for 2015, 2016 and beyond. It sounds amusing at first, but these discussions are tiresome. OpenStack has more important issues to address than the never-ending comparisons with Amazon Web Services (AWS). The fact that the agitation constantly comes from outside does not help either. If the OpenStack API is supposed to be 100% compatible with the AWS API, then OpenStack might as well be renamed to Eucalyptus!

The OpenStack community must find its own way and make it possible for service providers to build up their own offerings away from industry leader AWS. However, this mammoth task again lies solely in the hands of the providers. After all, OpenStack is just a large construction kit for building cloud computing infrastructures. What happens on top of it in the end (the business model) is not the task of the OpenStack community.

Read more: „Caught in a gilded cage. OpenStack providers are trapped.“

2. Security gets more weight

Even apart from the recent NSA scandal, the security and trustworthiness of (public) cloud providers have always been questioned. In fact, (public) cloud providers are better positioned to deliver the desired level of data protection than most businesses around the world. This is partly due to the extensive resources (hardware, software, data centers and staff) they can invest in the development of strong security solutions. In addition, since data protection is part of a (public) cloud provider's core business, its administration by the provider can ensure smoother and safer operations.

Trust is the foundation of any relationship, in both private and business environments. The confidence hard-won in recent years is being put to the test, and providers would do well to open up further. „Security through obscurity“ is an outdated concept. Customers need and want more clarity about what is happening with their data. This will vary from region to region, but at least in Europe, customers will continually put their providers to the test.

Read more: „How to protect companies’ data from surveillance in the cloud?“

3. Private cloud remains attractive

Financially and in terms of resource allocation (scalability, flexibility), the private cloud is not the most attractive form of cloud. Nevertheless, despite the predictions of some market researchers, it is not losing its significance. On the contrary: despite the cost pressure, the sensitivity of decision makers who want to retain control and sovereignty over their data and systems is being underestimated. This is also reflected in recent figures from Crisp Research, according to which only about 210 million euros were spent on public infrastructure-as-a-service (IaaS) in Germany in 2013, whereas investment in private cloud infrastructure exceeded 2.3 billion euros.

A recent study by Forrester also shows:

„From […] over 2,300 IT hardware buyers, […] about 55% plan to build an internal private cloud within the next 12 months.“

This does not mean that Germany has to become the land of private clouds. After all, it always comes down to the specific workload and use case, and whether it runs in a private or public cloud. But there is a clear trend that this form of cloud computing will continue to be used in the future.

4. Welcome hosted private cloud

Now that Salesforce has also jumped on the bandwagon, it is clear that enterprise customers cannot warm to the use of a pure public cloud. In cooperation with HP, Salesforce has announced a dedicated version of its cloud offering – the „Salesforce Superpod“.

This radical change in Salesforce's strategy confirms our findings from the exploration of the European cloud market. Companies are interested in cloud computing and its properties (scalability, flexibility, pay-per-use). However, they admit to themselves that they have neither the knowledge nor the time, and that it is not their core business, to operate IT systems; instead, they let the cloud provider take care of this, expecting a flexible kind of managed service. Public cloud providers are not prepared for this, because their business is to provide highly standardized infrastructure, platforms, applications and services. The remedy: providers bring specially certified system integrators on board.

Here lies an opportunity for providers of business clouds. Cloud providers that do not offer a public cloud model can, on the basis of managed services, help customers find their way to the cloud and take over operations, supplying an „everything from one source“ offering. This typically does not happen on shared infrastructure but within a hosted private cloud or a dedicated cloud, where a customer sits explicitly in an isolated area. Professional services round off the portfolio with integration, interfaces and development. Due to the exclusivity and higher security (usually physical isolation), business clouds are more expensive than public clouds. However, considering the effort a company must expend to actually be successful on its own in one public cloud or another, or the cost of bringing in a certified system integrator, the public cloud's cost advantage is almost eliminated.

5. Hybrid cloud remains the perennial favorite

The hybrid cloud is a constant topic in discussions about the future of the cloud, and it is real. Worldwide, the public cloud is initially used mainly to get easy, convenient short-term access to resources and systems. In the long run, this will evolve into a hybrid cloud, driven by IT departments that bring their own infrastructure up to the level of a public cloud (scalability, self-service, etc.) in order to serve their internal customers. In Germany and Europe, it is exactly the opposite: here, private clouds are preferred, because privacy and data security have top priority. Europeans must, and will, get used to connecting certain components – even some critical systems – to a public cloud.

The hybrid cloud is about the mobility of data. Data is primarily held locally in an own infrastructure environment, shifted to another cloud – e.g. a public cloud – for processing, and then placed back within the local infrastructure. This does not always have to involve a single provider's cloud; depending on cost, service levels and requirements, several clouds can be involved in the process.
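To make this round trip concrete, here is a minimal sketch in Python with boto3 against an S3-style object store: data is pushed out for a processing burst and the result is pulled back on-premise. The bucket and file names are hypothetical, and any object store with a comparable API would serve the same purpose.

```python
# Minimal sketch of hybrid cloud data mobility (Python, boto3).
# Bucket and file names are hypothetical.
import boto3

s3 = boto3.client("s3")
BUCKET = "burst-processing"  # hypothetical public cloud bucket

# 1. Ship the working set from the local infrastructure to the public cloud.
s3.upload_file("local/dataset.csv", BUCKET, "in/dataset.csv")

# 2. Processing happens in the public cloud (e.g. a fleet of temporary
#    instances reading from in/ and writing to out/) -- not shown here.

# 3. Pull the result back on-premise and remove the cloud copy afterwards.
s3.download_file(BUCKET, "out/result.csv", "local/result.csv")
s3.delete_object(Bucket=BUCKET, Key="in/dataset.csv")
```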

6. Multi-cloud is reality

The topic of multi-cloud is currently hotly debated, especially in the IaaS context, with the ultimate goal of spreading risk and taking advantage of the costs and capabilities of different cloud infrastructures. But the subject must necessarily gain importance in the SaaS environment too, in order to avoid data silos and isolated applications in the future, to simplify integration, and to support companies in adopting best-of-breed strategies.

Notwithstanding these expectations, multi-cloud use is already a reality: companies use multiple cloud solutions from many different vendors, even if the services are not yet (fully) integrated.

Read more: „Multi-Cloud is ‘The New Normal’“.

7. Mobile cloud finally arrives

Meanwhile, some providers have discovered the mobile cloud slogan for their marketing – much too late. The fact is that we have been living in a mobile cloud world since the introduction of the first smartphones. The majority of the data and information we access from smartphones and tablets no longer resides on the local device but on a server in the cloud.

Solutions such as Amazon WorkSpaces or Amazon AppStream support this trend. Even if companies are still cautious about outsourcing their desktops, Amazon WorkSpaces will strengthen this trend, from which vendors such as Citrix and VMware will also benefit. Amazon AppStream takes the mobile cloud to its full extent: graphics-intensive applications and games are processed entirely in the cloud and only streamed to the device.

Read more: „The importance of mobile and cloud-based ways of working continues to grow“.

8. Hybrid PaaS is the top trend

The importance of platform-as-a-service (PaaS) is steadily increasing. Observations of, and discussions with, vendors that have so far found no remedy against Amazon AWS in the infrastructure-as-a-service (IaaS) environment show that they will expand their pure compute and storage offerings vertically with a PaaS, and thus try to win the favor of developers.

Other trends show that private PaaS and hosted private PaaS are trying to gain market share. With the Application Lifecycle Engine, cloudControl has packaged its public PaaS as a private PaaS that enterprises can use to operate their own PaaS on self-managed infrastructure; from there, a bridge to a hybrid PaaS can also be built. IaaS provider Pironet NDH has adapted Microsoft's Windows Azure Pack to offer an Azure PaaS in a hosted private environment on this basis. This is interesting, since it closes the gap between a public and a private PaaS. With a private PaaS, companies have complete control over the environment, but they also need to build and manage it themselves; with a hosted version, the provider takes care of this.

9. Partner ecosystems become more important

Public cloud providers should increasingly take care to build a high-quality network of partners and independent software vendors (ISVs) to pave customers' way into the cloud. This means strongly embracing the channel in order to address a broader base of customers who are potential candidates for the cloud. However, the channel will struggle with the problem of finding enough well-trained staff for the age of the cloud.

10. Cloud computing becomes easier

In 2014, more offerings will appear on the market that make the use of cloud computing easier. With these offers, the „side issues“ of availability and scalability need less attention; instead, cloud users can focus on their own core competencies and invest their energy in developing the actual application.

An example: infrastructure-as-a-service market leader Amazon AWS says that IT departments using the AWS public cloud no longer need to take care of procuring and maintaining the necessary physical infrastructure. However, the complexity has merely shifted to a higher level. Even though numerous tutorials and white papers already exist, AWS does not stress the fact that achieving scalability and availability within a scale-out cloud infrastructure can be arbitrarily complex. These are costs that should never be neglected.

Hint: (Hosted) Community Clouds

I see growing potential for the community cloud (more in German), which currently has no wide distribution. In this context, I also see a shift from today's pronounced centralization towards a semi-decentralized model.

Most companies and organizations see a great advantage in the public cloud and want to benefit from it in order to reduce costs, consolidate IT, and at the same time gain more scalability and flexibility. On the other hand, future-proofing, trust, independence and control are important properties that no one would like to give up.

The community cloud is the medium to achieve both. It combines future-proofing, trust, independence and control with the benefits of a public cloud that come from a real cloud infrastructure.

Some solutions that can serve as a basis for building one's own professional cloud already exist. However, one should always keep a close watch on the underlying infrastructure that forms the backbone of the entire cloud. In this context, one should not underestimate the effort it takes to build, properly operate and maintain a professional cloud environment. Furthermore, the cost of the required physical resources needs to be calculated.

For this reason, it makes sense for many small and medium-sized enterprises, as well as shared offices, co-working spaces or regular project partnerships, to consider the community cloud. This type of cloud environment offers the benefits of the public cloud (a shared environment) while meeting the requirements of future-proofing, trust, independence and control, alongside cost advantages that can be achieved, among other things, through dynamic resource allocation. For this purpose, one should think about building a team that takes care exclusively of the installation, operation, administration and maintenance of the community cloud. An own data center or co-location space is not a prerequisite; instead, an IaaS provider can serve as an ideal partner for a hosted community cloud.

Criteria for selecting a cloud storage provider

Anyone searching for a secure and enterprise-ready alternative to Dropbox should take a closer look at the vendors. The choice of a cloud storage vendor depends in most cases on individual requirements, which decision makers need to debate and define beforehand. In particular, this includes classifying the data: defining which data is stored in the cloud and which remains in the company's own on-premise infrastructure. When selecting a cloud storage vendor, companies should consider the following characteristics.

Configuration and integration

The storage service should integrate easily into existing infrastructure and other cloud services. This empowers users to expand their existing storage cost-efficiently in a hybrid scenario. In addition, data can be migrated from local storage into the cloud over a self-defined period, which in the long run opens up the option of doing without an own storage system for specific data. The same applies to the straightforward and seamless export of data from the cloud, which must likewise be ensured.

A further characteristic is the interaction of the cloud service with internal systems such as directory services (Active Directory or LDAP), so that user data is maintained centrally and provided to applications. For easy and holistic administration of user access to storage resources, this characteristic is mandatory. The vendor should provide an open and well-documented API to realize the integration; alternatively, he can deliver native software.
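As an illustration of what such directory integration can look like, here is a minimal sketch in Python using the ldap3 library: a storage gateway resolves a user's Active Directory group memberships before granting access to a cloud share. The server, credentials and group names are hypothetical; a real deployment would follow the vendor's API or native software.

```python
# Minimal sketch: map AD group membership to storage permissions (ldap3).
# Server, credentials, user and group names are hypothetical.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://dc01.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc-storage",
                  password="secret", auto_bind=True)

# Look up the user and read the groups he belongs to.
conn.search(search_base="dc=example,dc=com",
            search_filter="(sAMAccountName=jdoe)",
            attributes=["memberOf"])

groups = conn.entries[0].memberOf.values if conn.entries else []

# Translate a directory group into a permission in the storage service.
if any("CN=Storage-ReadWrite" in g for g in groups):
    print("grant read/write access to the cloud share")
else:
    print("grant read-only access")
```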

Platform independence to access data from everywhere

Mobility is becoming more and more important for employees. For companies, it is of vital importance to take their employees' working habits into account and deliver appropriate solutions.

In the best case, the cloud provider enables platform-independent access to the data by providing applications for all common mobile and desktop operating systems, as well as access via a web interface.

Separation of sensitive and public data

To give employees data access via mobile and web applications, further security mechanisms such as DMZs (demilitarized zones) and rights management at a granular file level are necessary. A cloud storage provider should offer functions to separate data with higher security demands from public data. Companies that want to provide the data from their own infrastructure need to invest in additional security systems or find a vendor that has these types of security built in.

Connection to external cloud services

Cloud storage can be used as a common, consistent data pool for various cloud services, integrating offerings such as software-as-a-service (SaaS) or platform-as-a-service (PaaS). The cloud storage serves as the central store; for this purpose, the vendor needs to provide an open API to realize the connectivity.

Cloud storage – Eco- and partner system

Especially for storage vendors that offer cloud solutions exclusively, a big ecosystem of applications and services is attractive and important for extending the storage service with further value-added functions. This includes, for example, an external word processor for editing documents in the storage together with multiple colleagues.

Size of the vendor – national and international

The track record is the most important evidence of past success, indicating a vendor's standing on the basis of well-known customers and successful projects. This applies to the national as well as the international footprint. Besides available capacity, and therefore technological size, international scope is also of vital importance for a cloud storage vendor. If a company wants to enable its worldwide employees to access a central cloud storage but chooses a vendor that only has data centers in the US or Europe, latency alone can lead to problems. Hence scalability, with regard to both storage size and geographic scope, is a crucial criterion.

In addition, it is worth looking at the vendor's roadmap: what kinds of changes and enhancements are planned for the future? Are these enhancements interesting for the customer compared to another potential vendor that does not offer them?

Financial background

A good track record is not the only criterion when choosing a vendor. Not least, the drama around the collapsed storage vendor Nirvanix has shown that the financial background must be considered as well. Especially during risk assessment, a company should take a look at the vendor's current financial situation.

Location and place of jurisdiction

The location where company data is stored is becoming more and more important. The demand for physical storage of the data in one's own country is rising steadily. This is not a German phenomenon: the French, Spanish and Portuguese likewise expect their data to be stored in a data center in their own country (http://research.gigaom.com/report/the-state-of-europes-homegrown-cloud-market/). The Czechs prefer a data center in Austria over one in Germany; the Dutch are more relaxed on the topic. That said, local storage of the data is basically no guarantee of legal compliance; however, it does make it easier to apply local laws.

Most US vendors cannot guarantee physical locality of the data in every European country. Their data centers are located in Dublin (Ireland) or Amsterdam (Netherlands) and merely comply with European law. Many vendors have joined Safe Harbor, which allows personal data to be transferred legally into the US; however, it is a pure self-certification that, in light of the NSA scandal, is being challenged by the Independent Regional Centre for Data Protection of Schleswig-Holstein (Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein, ULD).

Cloud storage – Security

When it comes to security, it is mostly about trust, and a vendor only earns trust through openness: he needs to show his cards to his customers, technologically speaking. IT vendors are often criticized when asked to talk about their proprietary security protocols, and mostly the criticism is justified. But there are also vendors who willingly talk about it, and these companies need to be found. Besides the subjective topic of trust, the implemented security plays the leading role. Here it is important to look at the current encryption mechanisms a vendor is using, for example: Advanced Encryption Standard (AES-256) for encrypting the data, and Diffie-Hellman and RSA 3072 for the key exchange.

The importance of end-to-end encryption of the whole communication is also rising. This means that the entire process a user runs through with the solution, from start to finish, is completely encrypted. This includes, among other things: user registration, login, data transfer (send/receive), the transfer of key pairs (public/private key), the storage location on the server, the storage location on the local device, and the session while a document is being edited. In this context, separate tools that try to retrofit encryption onto an insecure storage are to be advised against. Security and encryption are not a feature but a core function, and they belong in the storage vendor's field of activity: he has to ensure highly integrated security and good usability at once.

In this context, it is also important that the private key for accessing the data and systems is exclusively in the hands of the user and is stored encrypted on the user's local system. The vendor should have no capability to restore this private key and should never be able to access the stored data. Note: there are cloud storage vendors that can restore the private key and are thus able to access the user's data.
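To illustrate what client-side encryption with a user-held key means in practice, here is a minimal sketch in Python using the cryptography library. The file name is hypothetical, and upload_to_storage() is a stand-in for any vendor API; the point is that only ciphertext leaves the machine.

```python
# Minimal sketch of client-side AES-256 encryption before upload.
# upload_to_storage() is hypothetical; here it just writes to disk.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def upload_to_storage(name: str, blob: bytes) -> None:
    # Stand-in for a vendor upload call.
    with open(name, "wb") as f:
        f.write(blob)

key = AESGCM.generate_key(bit_length=256)   # AES-256 key, kept locally
aesgcm = AESGCM(key)

with open("report.pdf", "rb") as f:         # hypothetical file
    plaintext = f.read()

nonce = os.urandom(12)                      # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only nonce + ciphertext reach the provider; without the local key,
# the vendor can neither read nor restore the data.
upload_to_storage("report.pdf.enc", nonce + ciphertext)
```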

Certification for the cloud

Certifications are a further indicator of the quality of storage vendors. Besides standards like ISO 27001, with which the security of information and IT environments is rated, there are also national and international certificates issued by approved certification centers.

These independent and professional certificates are necessary to get an honest statement on the quality and characteristics of a cloud service, the vendor, and all downstream processes such as security, infrastructure and availability. Depending on how good the process and the auditor are, a certification can also lead to an improvement of the product, as the auditor proactively gives advice on security and further functionality.

Amazon Web Services’ grab at enterprise IT. A reality check.

AWS re:Invent 2013 is over, and the Amazon Web Services (AWS) continue to reach out to corporate clients with several new services. Having established itself as a leading infrastructure provider and enabler for startups and new business models in the cloud, the company from Seattle has been trying for quite some time to get a foot directly into the lucrative enterprise environment. Current public cloud market figures for 2017 from IDC ($107 billion) and Gartner ($244 billion) give AWS a tailwind and encourage the IaaS market leader in its pure public cloud strategy.

The new services

With Amazon WorkSpaces, Amazon AppStream, AWS CloudTrail and Amazon Kinesis, Amazon introduced some interesting new services, which in particular address enterprises.

Amazon WorkSpaces

Amazon WorkSpaces is a service that provides virtual desktops based on Microsoft Windows, allowing an own virtual desktop infrastructure (VDI) to be built within the Amazon cloud. A Windows Server 2008 R2 serves as the basis, rolling out desktops with a Windows 7 environment. All services and applications are streamed from Amazon's data centers to the respective devices using Teradici's PCoIP (PC over IP) protocol; the devices may be desktop PCs, laptops, smartphones or tablets. In addition, Amazon WorkSpaces can be combined with a Microsoft Active Directory, which simplifies user management. By default, the desktops ship with familiar applications such as Firefox or Adobe Reader/Flash, which administrators can adjust as desired.

With Amazon WorkSpaces, Amazon enters completely new territory in which Citrix and VMware, two absolute market leaders, are already waiting. During VMworld in Barcelona, VMware just announced the acquisition of Desktone. VDI is fundamentally a very exciting market segment because it relieves corporate IT of administration tasks and reduces infrastructure costs. However, it is also a very young market segment, and companies are very careful about outsourcing their desktops because, unlike with traditional on-premise terminal services, the bandwidth (network, Internet, data connection) is crucial.

Amazon AppStream

Amazon AppStream is a service that serves as a central backend for graphics-intensive applications. With it, the actual performance of the device on which the applications are used should no longer play a role, since all inputs and outputs are processed within the Amazon cloud.

Since device performance is likely to keep increasing in the future, local processing power can probably be disregarded. For the construction of a real mobile cloud, however, in which all data and information are located in the cloud and the devices are only used as consumers, the service is quite interesting. Furthermore, the combination with Amazon WorkSpaces is worth considering, in order to provide applications on devices that serve only as thin clients and require no further local intelligence or performance.

AWS CloudTrail

AWS CloudTrail helps to monitor and record the AWS API calls for one or more accounts. Calls from the AWS Management Console, the AWS Command Line Interface (CLI), own applications and third-party applications are all captured. The collected data is stored in Amazon S3 or Amazon Glacier for evaluation and can be viewed via the AWS Management Console, the AWS Command Line Interface or third-party tools. At the moment, only Amazon EC2, Amazon EBS, Amazon RDS and Amazon IAM can be monitored. AWS CloudTrail itself can be used free of charge; costs are incurred for storing the data in Amazon S3 and Amazon Glacier and for Amazon SNS notifications.
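For a sense of what evaluating these logs looks like, here is a minimal sketch in Python with boto3 that walks the gzipped JSON log files CloudTrail delivers to S3. The bucket name and prefix are hypothetical.

```python
# Minimal sketch: read CloudTrail log files delivered to S3 (boto3).
# CloudTrail writes gzipped JSON files containing a "Records" list.
import gzip
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-cloudtrail-bucket"                 # hypothetical
PREFIX = "AWSLogs/123456789012/CloudTrail/"     # hypothetical account id

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        records = json.loads(gzip.decompress(body))["Records"]
        for r in records:
            # e.g. print every recorded API call for an audit trail
            print(r["eventTime"], r["eventSource"], r["eventName"])
```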

Even if it is not very exciting (logging), AWS CloudTrail is among the most important services for enterprise customers that Amazon has released lately. The collected logs assist with compliance by recording all accesses to AWS services, helping to demonstrate compliance with government regulations. The same applies to security audits, which can use them to trace vulnerabilities and unauthorized or erroneous data access. Amazon is well advised to expand AWS CloudTrail to all other AWS services as soon as possible and to make it available worldwide in all regions. The Europeans in particular will be thankful.

Amazon Kinesis

Amazon Kinesis is a service for the real-time processing of large data streams. To this end, Kinesis can process data streams of any size from a variety of sources. Amazon Kinesis is controlled via the AWS Management Console by assigning different data streams to an application. Thanks to Amazon's massive scalability, there are no capacity limitations; the data is automatically replicated across Amazon data centers. Use cases for Kinesis are the usual suspects: financial data, social media, and data from the Internet of Things/Everything (sensors, machines, etc.).

The real benefit of Kinesis as a big data solution is the real-time processing of data. Common standard solutions on the market process data in batches, which means the data can never be processed immediately, but at best a few minutes later. Kinesis removes this barrier and opens up new possibilities for the analysis of live data.
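As a small illustration of the producer side, here is a minimal sketch in Python with boto3 that feeds a Kinesis stream with sensor readings, the kind of Internet-of-Things use case named above. The stream name and payload are hypothetical.

```python
# Minimal sketch: push live sensor readings into a Kinesis stream (boto3).
# Stream name and payload are hypothetical.
import json
import time
import boto3

kinesis = boto3.client("kinesis")

for _ in range(10):
    reading = {"sensor_id": "machine-42", "temp_c": 71.3,
               "ts": int(time.time())}
    kinesis.put_record(
        StreamName="sensor-stream",          # hypothetical stream
        Data=json.dumps(reading).encode(),
        PartitionKey=reading["sensor_id"],   # keeps one sensor in order
    )
    time.sleep(1)
```

A consuming application can then read these records within seconds of their arrival, which is exactly the batch-processing barrier the paragraph above describes.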

Challenges: Public Cloud, Complexity, Self-Service, „Lock-in“

Looking at the current AWS references, the quantity and quality are impressive. Looking more closely, however, the top references are still startups, non-critical workloads or completely new developments. This means that most existing IT systems are still not located in the cloud. Besides concerns over loss of control and compliance issues, this is because the scale-out principle makes it too complicated for businesses to migrate their applications and systems into the AWS cloud. In the end, it boils down to having to start from scratch, because a system that was not developed as a distributed system will not run the way it should on a distributed cloud infrastructure – keywords: scalability, high availability, multi-AZ. These are costs that should not be underestimated. Even the migration of a supposedly simple webshop becomes a challenge for companies that have neither the time nor the necessary cloud knowledge to redevelop it for a (scale-out) cloud infrastructure.

In addition, the scalability and availability of an application can only be properly realized on the AWS cloud by sticking to the services and APIs that guarantee them. Many other infrastructure-related services are available and are constantly being published, which make life much easier for developers. Thus the lock-in is pre-programmed. I am of the opinion that a lock-in need not be bad, as long as the provider meets the desired requirements. However, a company should consider in advance whether these services are actually mandatory. Virtual machines and standard workloads are relatively easy to move; for services that are engaged very closely with the own application architecture, it looks quite different.

Finally: even if some market researchers predict a golden future for the public cloud, the figures should be taken with a pinch of salt. Cloud market figures have been revised downwards for years, and one has to consider in each case how these numbers are actually composed. But that is not the issue here. At the end of the day, it is about what the customer wants. At re:Invent, Andy Jassy once again made clear that Amazon AWS consistently relies on the public cloud and will not invest in own private cloud solutions. One can interpret this as arrogance and ignorance towards customers, as a pure will to disrupt, or simply as self-affirmation. The fact is: even if Amazon will probably build the private cloud for the CIA, they do not remotely have the resources and knowledge to act as a software provider on the market. Amazon AWS is a service provider. However, with Eucalyptus they have gained a powerful ally on the private cloud side, which makes it possible to build an AWS-like cloud infrastructure in one's own data center.

Note: nearly all Eucalyptus customers are said to also be AWS customers (source: Eucalyptus). Conversely, this means that hybrid cloud infrastructures already exist between on-premise Eucalyptus installations and the Amazon public cloud.

Advantages: AWS Marketplace, Ecosystem, Enabler, Innovation Driver

What is mostly ignored in the discussions about Amazon AWS and corporate customers is the AWS Marketplace, which Amazon itself also does not advertise much. While the cloud infrastructure lets customers develop their own solutions, the marketplace offers full-featured software solutions from partners (e.g. SAP) that can be rolled out automatically on the AWS infrastructure. The cost of using the software is charged per use (hour or month), plus the AWS fees for the necessary infrastructure. Herein lies real added value for companies: they can easily move their existing standard systems to the cloud and part with the on-premise systems.

One must therefore distinguish strictly between the use of infrastructure for in-house development and the operation of ready-made solutions; both are possible in the Amazon cloud. There is also the ecosystem of partners and system integrators who help AWS customers develop their solutions. Even if AWS itself is (currently still) a pure infrastructure provider, it must equally be understood as a platform for other providers and partners who operate their businesses on it. This is the key success factor and advantage over other providers in the market, and it will increase the long-term attractiveness for corporate customers.

In addition, Amazon is the absolute driving force for innovation in the cloud, at a pace no other cloud provider can currently match technologically. It does not take a re:Invent to see this; Amazon proves it almost every month anew.

Amazon AWS is – partly – suitable for enterprise IT

The requirements Amazon has to meet vary depending on country and use case. European customers are mostly cautious with data management and prefer to store their data in their own country. I have already met more than one customer who was technically convinced, but for whom storing the data in Ireland was not an option. In some cases, it is also a lack of ease of use: a company does not want to (re)develop its existing application or website for the Amazon infrastructure. The reasons are a lack of time and of implementation knowledge, which can result in a longer time to market. Both can be attributed to the complexity of achieving scalability and availability on the Amazon Web Services. After all, it takes more than a few API calls; the entire architecture must be oriented towards the AWS cloud. In Amazon's case, it is the horizontal scaling (scale-out) that makes this necessary. Companies instead prefer vertical scaling (scale-up), so they can migrate an existing system 1:1 rather than starting from scratch, and achieve success in the cloud directly.

However, the AWS references also show that plenty of use cases for companies exist in the public cloud in which data storage can be considered rather uncritical, as long as the data is classified beforehand and then stored in encrypted form.

Analyst colleague Larry Carvalho talked with a few AWS enterprise customers at re:Invent. One customer implemented a hosted website on AWS for less than $7,000, for which another system integrator wanted to charge $70,000. Another customer calculated that an on-premise business intelligence solution, including maintenance, would cost him about $200,000 per year; on Amazon AWS he pays only $10,000 per year. On the one hand, these examples show that AWS is an enabler. On the other hand, they show that security concerns sometimes yield to cost savings.

Runtastic: A consumer app in the business cloud

The success stories from the public cloud do not stop. Constantly, new web and mobile applications appear that rely on infrastructure-as-a-service (IaaS) offerings. This is just one reason that prompts market researchers like IDC ($107 billion) and Gartner ($244 billion) to certify a supposedly golden future for the public cloud by 2017. The success of the public cloud cannot be argued away; the facts speak for themselves. However, it is currently mostly startups, and often non-critical workloads, that move to the public cloud. Talking to IT decision makers from companies with mature IT infrastructures, the reality looks quite different. Our study of the European cloud market shows this as well: in particular, the self-service model and the complexity are underestimated by public cloud providers when they approach corporate customers. But even for young companies, the public cloud is not necessarily the best choice. I recently spoke with Florian Gschwandtner (CEO) and Rene Giretzlehner (CTO) of Runtastic, who decided against the public cloud and chose a business cloud – more precisely, T-Systems.

Runtastic: From 0 to over 50 million downloads in three years

Runtastic consists of a series of apps for endurance, strength & toning, health & wellness, and fitness, helping users achieve their health and fitness goals.

The company was founded in 2009 in Linz (Austria) and, due to its huge growth in the international mobile health and fitness industry, now has locations in Linz, Vienna and San Francisco. The growth figures are impressive: after 100,000 downloads in July 2010, there are now over 50 million downloads (as of October 2013), 10 million of them in Germany, Austria and Switzerland alone – which roughly corresponds to more than one download per second. Runtastic has 20 million registered users, up to 1 million daily active users and more than 6 million monthly active users.


Source: Runtastic, Florian Gschwandtner, November 2013

In addition to the mobile apps, Runtastic also offers its own hardware for using the apps and services more conveniently. Another interesting service is „Story Running“, which is intended to bring more variety and action to running.

Runtastic: Reasons for the business cloud

Considering these growth numbers and the growing number of apps and services, Runtastic is an ideal candidate for a cloud infrastructure – comparable offerings even run in a public cloud. One would think. Runtastic began with its infrastructure on a classic web host, without a cloud model. In recent years, experience was collected with different providers. Due to the continuing expansion, new underlying conditions, the growing number of customers and the need for high availability, the decision was made to move the infrastructure into the cloud.

A public cloud was never an option for Runtastic, because the dedicated(!) location of the data was a top priority – not only because of security concerns or the NSA scandal. The Amazon Web Services (AWS), for instance, were out of the question for purely technical reasons, since many of Runtastic's requirements cannot be mapped there. Among other things, the core database comprises about 500 GB on a single MySQL node, which AWS cannot accommodate, according to Giretzlehner. In addition, the costs for such large servers in a public cloud are so high that own hardware quickly pays for itself.

Instead, Runtastic opted for T-Systems' business cloud. Decisive factors were a good setup, a high quality standard and two data centers at one location (in two separate buildings), as well as a high level of security, professional technical service, the expertise of the staff and partners, and a very good connection to the Internet hubs in Vienna and Frankfurt.

To this end, Runtastic combines the colocation model with a cloud approach: it covers the base load with its own hardware and absorbs peak loads with infrastructure-as-a-service (IaaS) computing power from T-Systems. The management of the infrastructure is automated with Chef. Although the majority of the infrastructure is virtualized, Runtastic follows a hybrid approach: the core database, the storage and the NoSQL clusters run on bare metal (non-virtualized). Thanks to the global reach (90 T-Systems data centers worldwide), Runtastic can continue to ensure that all apps work seamlessly everywhere.

An unconventional decision is the three-year contract Runtastic concluded with T-Systems. It covers the housing of the central infrastructure, a redundant Internet connection, and computing power and storage from the T-Systems cloud on demand. Runtastic chose this model because it regards the partnership with T-Systems as long-term and does not want to move the infrastructure from one day to the next.

Another important reason for a business cloud was the direct line to contact persons and professional services. This is also reflected in a higher price, which Runtastic consciously accepts.

The public cloud is not absolutely necessary

Runtastic is the best example that young companies can be successful without a public cloud and that a business cloud is definitely an option, even though Runtastic does not run completely on a cloud infrastructure. In each case, the business case must be calculated to determine whether it is (technically and financially) worthwhile to invest in parts of an own infrastructure and to intercept peaks with cloud infrastructure.

This example also shows that the public cloud is not easy sledding for providers, and that consulting and professional services from a business or managed cloud are in demand.

During the keynote at AWS re:Invent 2013, Amazon AWS presented its flagship customer Netflix and the Netflix cloud platform (see the video from minute 11), which was developed specifically for the Amazon cloud infrastructure. Amazon naturally sells the Netflix tools as a positive, which they definitely are. However, Amazon conceals the effort Netflix puts in to use the Amazon Web Services in the first place.

Netflix shows very impressively how a public cloud infrastructure can be used via self-service. However, considering the effort Netflix makes to be successful in the cloud, one simply has to say that cloud computing is not simple: a cloud infrastructure, no matter from which provider, needs to be used with the corresponding architecture. Conversely, this means that using the cloud does not always lead to the promised cost advantages. In addition to the savings in infrastructure costs that are always calculated up front, the other costs must never be neglected – the costs of staff with the appropriate skills and of developing scalable and fault-tolerant applications for the cloud.

In this context, the excessive use of infrastructure-near services should also be reconsidered, even though they simplify the developer's life. As a company, one should think in advance about whether these services are actually absolutely needed, in order to keep one's options open. Virtual machines and standard workloads are relatively easy to move; for services that engage very closely with the own application architecture, it looks different.

Airbus separates the traveler from the luggage #tsy13

During the T-Systems Symposium 2013, Airbus introduced Bag2Go, an intelligent suitcase that can travel completely independently and be transported to the destination without its owner. Airbus wants to give travelers more flexibility and mobility: they arrive at the destination free of luggage, whether by plane, ship, train, car, bike or on foot. The Bag2Go service relies on T-Systems' business cloud infrastructure.

Background of Bag2Go

A Bag2Go is self-weighing, self-labeling, self-traveling and self-tracing, with continuous tracking. In addition, according to Airbus, it complies with all of today's common standards, so that the current infrastructure at airports does not have to be changed. Using a smartphone app, its status and whereabouts can be monitored at any time.

Airbus plans to cooperate with various transport services (e.g. DHL). The traveler can, for example, fly to the destination while his baggage travels by land, by ship or on another aircraft. Parcel services or door-to-door luggage services handle the transportation of the luggage to any destination at home or abroad, such as a hotel or a cruise ship.

Tracking is generally convenient, but it only makes sense if one can also intervene. For this purpose, Airbus relies on GPS tracking in conjunction with a transport service provider, such as a logistics company or the airline.

Motivation of Airbus

Airbus has obviously not developed the Bag2Go concept without self-interest. The point is to take the first steps towards the aircraft of the future. Airbus is building sustainable, ultra-light aircraft that require as little weight as possible, which in turn significantly decreases fuel consumption. Removing the heavy luggage from the plane is a key component of this. The separation of passenger and baggage streams enables Airbus and the entire industry to control the carriage of baggage proactively and to open up completely new business opportunities.

According to Airbus, the containerization of luggage is continuously gaining in importance, as more and more people use door-to-door transport, check in their baggage up to three days prior to departure, or use a baggage kiosk. For this reason, Airbus will try to establish fully networked transport capsules as a standard.

Interesting approach – but not for everyone

With its concept, Airbus faces fewer technical than legal problems: a traveler and his luggage currently have to fly together. A suitcase may fly after the passenger, but not ahead of him. In addition, business executives challenged the three-days-in-advance luggage check-in during the presentation, because they pack their suitcases themselves, often up to one hour before the flight. A few more conversations after the presentation confirmed this as the general opinion and attitude of business travelers on the subject.

Nevertheless, Bag2Go is an interesting concept that we will soon see in reality. At the same time, it is a great use case for the Internet of Things. Bag2Go is analogous to routing on the Internet, where a single IP packet, normally associated with a complete data stream, can also take a different path to the destination.

The fully interconnected world becomes reality #tsy13

Companies are exposed to ever shorter cycles of change in their everyday business. The consumerization of IT is a great driver of this and will retain an important role in the future. For several years, mobility, the cloud, new applications, the Internet of Things and big data (analytics) have been showing their effects. Besides these technological influences, there is also the business impact – new business models, constant growth, globalization – and, at the same time, issues of security and compliance that lead to new challenges.

The evolution of the Internet leaves its marks

Looking at the evolution of the Internet, it is clear that its impact on companies and our society grows with the growth of smart connections. It started with simple connections via e-mail and the web, and thus the digitization of information access. Then came the networked economy, in which e-commerce and collaboration digitized business processes. This was followed by what Cisco calls „realistic experiences“: the digitization of business and social interactions such as social media, mobile and cloud. The next stage we will reach is the „Internet of Everything“, where people, processes, data and things are connected – the digitization of the world.

Each building block has its own importance in the Internet of Everything. Connections between people become more relevant and valuable. The right person or machine receives the right information at the right time through intelligent processes. Data is processed into valuable information for decision making. Physical devices (things) are connected to each other via the Internet to enable intelligent decisions.

At the T-Systems Symposium 2013 in Dusseldorf, Cisco said that it expects about 50 billion smart objects to be connected to each other in 2020, compared to an expected world population of just 7.6 billion people. Furthermore, Cisco is of the opinion that currently 99 percent of the world is not interconnected and that we will see trillions of smart sensors in the future. These will be installed, for example, in intelligent buildings, cities and houses, helping to save energy and live more efficiently. But they will also increase productivity and improve healthcare.

Technological foundations for the fully interconnected world

Due to its massive scalability, its distribution, new applications and the possibility of access from anywhere, the cloud is one of the main foundations of the fully interconnected world. Cisco's cloud strategy is therefore to enable cloud providers and enterprises to deploy and broker different kinds of cloud services. At the same time, Cisco application platforms are to be provided for different cloud categories.

However, the cloud needs to be enriched with approaches from the Internet of Things. These include intelligent business processes, people-to-machine and machine-to-machine communication, and other things such as sensors and actuators distributed everywhere. In the end, a worldwide, highly scalable infrastructure is necessary that can handle temporal variations and shifting requirements across the different workloads.

Another key component is therefore something Cisco calls fog computing. The fog has the task of delivering data and workloads closer to the user, who is located at the edge of a data connection; in this context, one also speaks of „edge computing“. The fog sits organizationally below the cloud and serves as an optimized transfer medium for services and data within the cloud. The term „fog computing“ was coined by Cisco as a new paradigm intended to support distributed devices during wireless data transfer within the Internet of Things. Conceptually, fog computing builds on existing and common technologies such as content delivery networks (CDNs), but, based on cloud technologies, it is meant to enable the delivery of more complex services.

As more and more data must be delivered to an ever-growing number of users, concepts are needed that enhance the idea of the cloud and empower companies and vendors to provide their content to the end user over a widely distributed platform. Fog computing should help to move distributed data closer to the end user, thereby decreasing latency and the number of required hops, and thus better supporting mobile computing and streaming services. Besides the Internet of Things, the rising demand of users to access data at any time, from any place, and with any device is another reason why the idea of fog computing will become increasingly important.

The essential characteristic of fog computing, then, is that workloads can operate autonomously from the cloud in order to ensure rapid, timely and stable access to a service.

The interconnected world needs intelligent technologies

To use the cloud as the fundamental basis for the Internet of Things and the Internet of Everything, some hurdles still need to be cleared out of the way.

The current cloud deployment model assumes that data is normally delivered directly, without an intermediate layer, to the device or the end user. (Remark: CDNs and edge locations already exist, which provide caching and acceleration.) It is thus assumed that the bandwidth for transmitting the data is at its maximum and that the delay is theoretically zero. However, this holds only in theory. In practice, high latencies and the resulting delays, poor elasticity, and data volumes that grow significantly faster than bandwidth lead to problems.

For this reason, approaches such as fog computing are required, which instead assume that only limited bandwidth, varying delays and intermittent connections are available. Ideas such as the fog establish themselves as an intelligent intermediate layer, a cache, or a kind of amplifier, coordinating the requirements of different workloads, data and information so that they are delivered independently and efficiently to the objects in the Internet of Everything.
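As a toy illustration of this intermediate-layer idea, here is a minimal sketch in Python of an edge (fog) node that answers requests from a local cache when possible and falls back to the distant cloud only on a miss. The origin URL is hypothetical; real fog platforms add replication, prefetching and coordination on top of this basic pattern.

```python
# Toy sketch of a fog/edge cache: serve locally, fetch from the cloud
# only on a miss. The origin URL is hypothetical.
import time
import urllib.request

CLOUD_BASE = "https://cloud.example.com/objects/"  # hypothetical origin
cache = {}            # object_id -> (payload, fetched_at)
TTL = 60              # seconds an object stays fresh at the edge

def get_object(object_id: str) -> bytes:
    entry = cache.get(object_id)
    if entry and time.time() - entry[1] < TTL:
        return entry[0]                 # served locally: low latency
    # Cache miss: fetch once from the distant cloud, then serve locally.
    with urllib.request.urlopen(CLOUD_BASE + object_id) as resp:
        payload = resp.read()
    cache[object_id] = (payload, time.time())
    return payload
```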

How to solve shadow IT and covered clouds #tsy13

Shadow IT – or, as VMware calls it, „covered clouds“ (covertly used cloud services) – is a major problem for businesses. According to a VMware study, 37 percent of Europe's leading IT decision makers suspect undisclosed spending on cloud services in their companies, and 58 percent of European knowledge workers would use unauthorized cloud services.

Pros and cons of shadow IT (Covered Clouds)

This unauthorized use also has financial consequences. In the affected companies, IT executives believe that an average of 1.6 million euros is spent without the company's permission – on average, 15 percent of the annual IT budget of these companies.

However, many consider this development to be positive. 72 percent of IT managers see it as a benefit for their company, and 31 percent of these are of the opinion that shadow IT and covered clouds accelerate growth and innovation. More than half report that their company can thus respond more quickly to customer requirements.

However, there are major security concerns. Of those who do not advocate shadow IT, more than half fear increased security risks. VMware shared these insights during the T-Systems Symposium 2013 in Dusseldorf.

A method: IT-as-a-Service

IT-as-a-Service is a business model in which the IT department is run as a separate business unit and itself develops products and services for its own company. The IT department thereby competes against external providers – after all, business departments nowadays have a limitless selection of other vendors on the market.

VMware has made it its mission to provide IT departments with the technical means for this, and sees IT-as-a-Service as a chance to balance the ratio of maintenance to innovation at 50:50, instead of investing the majority of expenses in maintenance. Today, the ratio is about 80 percent (maintenance) to 20 percent (innovation).

VMware is right in trying to establish itself as a leading provider of IT-as-a-Service solutions. As one of the few infrastructure providers on the market with the corresponding capabilities, it is able to deliver the technical means for this turnaround in the enterprise. However, one must note that IT-as-a-Service is not a technical approach: it must be anchored firmly in the minds of IT departments in order to be implemented successfully. VMware can therefore only supply the technical means with which to initiate the change.

Service portal as middleware for employees

IT-as-a-Service is not the ultimate solution to shadow IT, but it may help to counteract this phenomenon, which has grown over the years. The relevant concepts and technologies are available and need to be implemented.

Once an IT-as-a-Service philosophy has taken hold within the IT department, it should start establishing its own service portal for employees, controlling access to internal and external cloud services. This can be used for infrastructure (virtual servers and storage) as well as for software and platforms. The IT department increasingly becomes a service broker and, through the use of external resources (hybrid model), can ensure that employees receive a high-quality service. A server, for example, can then be provided to a developer within five minutes instead of several weeks. It should also be kept in mind that developers are not only interested in computing power and memory; they also need services that let them develop their own applications and services more simply. This can be accomplished, among other things, with the characteristics of a platform-as-a-service (PaaS).
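To make the broker idea tangible, here is a minimal sketch in Python with Flask of a service portal endpoint through which a developer orders a server. The endpoint, catalog sizes and provision_vm() are hypothetical; a real portal would enforce authentication, quotas and cost allocation, and would call an actual internal or external IaaS API.

```python
# Minimal sketch of a "service portal as broker" endpoint (Flask).
# provision_vm() is a stand-in for whatever IaaS backend IT brokers to.
from flask import Flask, jsonify, request

app = Flask(__name__)

def provision_vm(size: str, requester: str) -> dict:
    # Stand-in: here the broker would call an internal pool or an external
    # IaaS provider (hybrid model), apply policy, and tag the costs.
    return {"id": "vm-0001", "size": size, "owner": requester,
            "status": "provisioning", "eta_minutes": 5}

@app.route("/portal/servers", methods=["POST"])
def order_server():
    req = request.get_json()
    # Policy check: only sizes the IT department has approved.
    if req.get("size") not in ("small", "medium", "large"):
        return jsonify(error="size not in service catalog"), 400
    return jsonify(provision_vm(req["size"], req.get("user", "unknown"))), 202

if __name__ == "__main__":
    app.run(port=8080)
```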

The importance of the Managed Cloud for the enterprise

Cloud computing continues to grow, and the greatest growth is predicted for the public cloud: depending on whom you listen to, by 2017 it will lie between $107 billion (IDC) and $244 billion (Gartner). At this point, the reader may feel free to ask where this not insignificant difference comes from. However, talking to IT decision makers, the public cloud – designed primarily for standardized workloads and self-service – does not correspond to the needs of most businesses. Cloud computing is definitely attractive, but the form of delivery is crucial.

The complexity should not be underestimated

The greatest success stories in the public cloud come from startups or companies that have implemented new ideas and business models on it. Admittedly, migrations of SaaS applications with up to 90,000 users are repeatedly reported, and such reports seem to suggest that the requirements are being met well. But the truth in most companies is rather different. Even the migration of an e-mail system, one of the supposedly most standard applications, becomes an adventure. The reason is that many companies do not use a standard e-mail system but have attached proprietary add-ons or extensions that do not exist in the cloud.

There are also other applications in the enterprise IT architecture that serve very individual needs. I recently spoke with the strategy manager of a German DAX company with a global footprint that is currently evaluating a cloud strategy. The company has about 10,000(!) applications worldwide (desktop, server, production systems, etc.). This example is extreme and does not reflect the average challenge of a company, but it shows the dimensions involved – and the same pattern can be scaled down to much smaller application landscapes.

Managed cloud services are the future in business environments

One may argue that the IT architectures of most companies are too complex and that standardization would do them good. Unfortunately, reality is not that simple. Although some standard business applications are already certified for public clouds, the most successful workloads in the public cloud are web applications for the mass market rather than specific business applications. The majority of enterprises are not prepared for the cloud; they count on, and equally expect, help on the way to the cloud as well as during the operation of applications and systems.

Our investigation of the European cloud market showed this as well. Companies are interested in cloud computing and its properties (scalability, flexibility, pay-per-use). However, they admit to themselves that they have neither the knowledge nor the time, and that it is simply not their core business, to operate IT systems; instead, they let the cloud provider take care of it. So it is about a flexible kind of managed service. Public cloud providers are not prepared for this, because their business is to provide highly standardized infrastructure, platforms, applications and services. The remedy: providers bring specially certified system integrators on board.

This is an opportunity for providers of business clouds: cloud providers that do not offer a public cloud model, but instead use managed services to help customers find their way into the cloud and take over the operation, thus offering an „everything from one source“ service. This normally does not run on a shared infrastructure but within a hosted private cloud or a dedicated cloud, where a customer is explicitly placed in an isolated area. Professional services round off the portfolio and help with integration, interfaces and development. Due to this service, exclusivity and higher safety (usually physical isolation), business clouds are more expensive than public clouds. However, considering the effort a company has to invest to actually be successful in one or the other public cloud, or the cost of getting help from a certified system integrator, the cost advantage is mostly eliminated. In addition, business cloud (managed cloud) providers can be credited with deeper knowledge of their own cloud and its properties than system integrators, because they know the infrastructure and can make more specific adjustments for the customer than would be possible in a mostly highly standardized public cloud infrastructure.

Public Cloud is not a no-go for the enterprise

The public cloud is not per se a taboo subject for businesses. On the contrary: on the one hand, a public cloud offers the easiest and most straightforward way into the cloud world, without ever coming into contact with sales people. On the other hand, it depends on the workloads and use cases, which first need to be identified.

The biggest advantage of a public cloud for businesses is that they can gain their own experience in a real cloud environment and try „things“ out cheaply and quickly. It also offers the opportunity to make mistakes fast and at low cost and to learn from them early, which formerly would have been very painful. Based on a top-down cloud strategy, new business models realized as web applications can also be built in the public cloud. Here, for example, a hybrid approach can separate the application from its data, to take advantage of the scalability of the public cloud for the application itself while storing the data in a self-defined safe area (see the sketch below).
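
To illustrate such an application/data separation, here is a minimal sketch, assuming a hypothetical REST endpoint exposed from the company's own safe area; the URL, fields and helper are placeholders for illustration, not a specific product's API:

    # Minimal sketch of a hybrid split: the web application runs in a
    # public cloud, but persists all records to a data service inside the
    # company's own, self-defined safe area (hypothetical endpoint,
    # reachable e.g. via VPN/TLS).
    import requests

    PRIVATE_DATA_API = "https://data.internal.example.com/api/records"  # placeholder

    def save_record(record: dict) -> None:
        # Persistent data never stays in the public cloud; it is stored
        # behind the company's own perimeter.
        response = requests.post(PRIVATE_DATA_API, json=record, timeout=5)
        response.raise_for_status()

    save_record({"customer": "ACME", "order": 42})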

Whether critical workloads should be operated in a public cloud is, among other things, a decision for risk management and depends on the effort one wants to, and can, invest to secure them. This must be weighed when evaluating a public versus a managed (business) cloud.

Kategorien
Analysis

HP Cloud Portfolio: Overview & Analysis

HP’s cloud portfolio consists of a range of cloud-based solutions and services. The HP Public Cloud is a true infrastructure-as-a-service (IaaS) offering and the current core product, which is marketed extensively. The HP Enterprise Services – Virtual Private Cloud provides a Private Cloud hosted by HP. The Public Cloud services are delivered exclusively from the US, with data centers on the west and east coasts. Even though HP’s sales force is distributed around the world, English is the only supported language.

Portfolio

The IaaS core offering includes compute power (HP Cloud Compute), storage (HP Cloud Storage) and network capacity. Furthermore, the value-added services HP Cloud Load Balancer, HP Cloud Relational DB, HP Cloud DNS, HP Cloud Messaging, HP Cloud CDN, HP Cloud Object Storage, HP Cloud Block Storage and HP Cloud Monitoring are available, with which a virtual infrastructure for one’s own applications and services can be built.

The HP Cloud infrastructure is based on OpenStack and is multi-tenant. The virtual machines are virtualized with KVM and can be booked in fixed sizes (Extra Small to Double Extra Large) per hour. The local storage of a virtual machine is not persistent; long-term data has to be stored on an independent block storage volume attached to the machine (see the sketch below). Custom virtual machine images cannot be uploaded to the cloud. The load balancer is currently still in a private beta. The infrastructure spans multiple fault domains, which is reflected in the service level agreement. Multi-factor authentication is currently not offered.
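
Since the HP Public Cloud is based on OpenStack, working with block storage should look roughly like the following sketch using the classic python-novaclient volume calls; this assumes stock OpenStack behavior, and the endpoint, credentials and server ID are placeholders:

    # Minimal sketch: create a persistent block storage volume and attach
    # it to a running instance via the standard OpenStack Python client.
    # The local disk of the VM is ephemeral, so long-term data belongs
    # on such a volume. All credentials/IDs are placeholders.
    from novaclient import client as nova_client

    nova = nova_client.Client("2", "username", "password",
                              "project", "https://auth.example.com/v2.0")

    # Create a 10 GB volume for long-term data.
    volume = nova.volumes.create(size=10, display_name="data01")

    # Attach the volume to an existing server as /dev/vdb.
    nova.volumes.create_server_volume("server-uuid", volume.id, "/dev/vdb")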

The HP Enterprise Cloud Services offer a variety of solutions geared to business, including recovery-as-a-service, a Dedicated Private Cloud and a Hosted Private Cloud. The application services include solutions for collaboration, messaging, mobility and unified communications. Specific business applications include HP Enterprise Cloud Services for Microsoft Dynamics CRM, HP Enterprise Cloud Services for Oracle and HP Enterprise Cloud Services for SAP. Professional services complement the Enterprise Cloud Services portfolio. The Enterprise Cloud Services – Virtual Private Cloud is a multi-tenant Private Cloud hosted by HP and is aimed at SAP, Oracle and other enterprise applications.

Analysis

HP has a lot of experience in building and operating IT infrastructures and pursues the same objectives for its Public and Private Cloud infrastructures. For this, HP relies on its own hardware components and an extensive partner network. HP has a global sales force and an equally large marketing budget and is therefore able to reach a wide variety of customers, even though the data centers for the Public Cloud are located exclusively in the US.

In recent years, HP has invested much effort in development. Nevertheless, the HP Public Cloud compute service has only been available to the general public since December 2012, so HP has no significant track record for its Public Cloud. There is only limited interoperability between the HP Public Cloud, based on OpenStack, and the Private Cloud (HP CloudSystem, HP Converged Cloud), based on HP Cloud OS. Since the HP Public Cloud does not offer the ability to upload custom virtual machine images in a self-service manner, customers currently cannot transfer workloads from the Private Cloud to the Public Cloud, even though the Private Cloud is also based on OpenStack.

INSIGHTS Report

The INSIGHTS Report „HP Cloud Portfolio – Overview & Analysis“ can be downloaded for free as a PDF.

Kategorien
Analysis

Multi-Cloud is "The New Normal"

Not least the Nirvanix disaster has shown that one should not rely on a single cloud provider. But regardless of spreading the risk, a multi-cloud strategy is a recommended approach that is already being practiced, consciously or unconsciously.

Hybrid or multi-cloud?

What does multi-cloud actually mean? And what is the difference to a hybrid cloud? By definition, a hybrid cloud connects a private cloud or an on-premise IT infrastructure with a public cloud to supply the local infrastructure with further resources on demand. These resources can be compute power and storage, but also services or software. If a local e-mail system is integrated with a SaaS CRM system, one can already speak of a hybrid cloud. That means a hybrid cloud does not just stand for IaaS or PaaS scenarios.

The multi-cloud approach extends the hybrid cloud idea by the number of connected clouds. Strictly speaking, these can be n clouds integrated in some form. Cloud infrastructures are connected, for example, so that applications can use different infrastructures or services in parallel, or depending on the workload or the current price. Even the parallel or distributed storage of data across multiple clouds is conceivable, to ensure the availability and redundancy of the data (see the sketch below). At the moment, multi-cloud is intensively discussed in the IaaS area, so taking a look at Ben Kepes‘ and Paul Miller’s Mapping Session on multi-cloud as well as Paul’s Sector RoadMap: Multicloud management in 2013 is recommended.
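
As a minimal sketch of such redundant multi-cloud storage, the following writes the same object to two independent, S3-compatible object stores; the endpoints, bucket and credentials are placeholders, not specific providers:

    # Minimal sketch: store the same object redundantly in two independent,
    # S3-compatible object stores so that a single provider outage or
    # shutdown does not cause data loss. All endpoints/buckets are
    # placeholders.
    import boto3

    providers = [
        boto3.client("s3", endpoint_url="https://s3.provider-a.example.com"),
        boto3.client("s3", endpoint_url="https://s3.provider-b.example.com"),
    ]

    def put_everywhere(bucket: str, key: str, data: bytes) -> None:
        # Write to every configured cloud; a production version would
        # handle partial failures (retry, queue, reconcile) instead of
        # failing hard on the first error.
        for s3 in providers:
            s3.put_object(Bucket=bucket, Key=key, Body=data)

    put_everywhere("backups", "report.pdf", b"...")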

What is often neglected: multi-cloud has a special importance in the SaaS area. The number of new SaaS applications grows from day to day, and with it the demand to integrate these varying solutions and let them exchange data. Today, the cloud market is moving uncontrolled toward isolated applications; each solution offers added value on its own, but the result is many small data silos. Enterprises already fought this kind of development, in vain, in pre-cloud times.

Spread the risk

Even if cloud marketing always promises the availability and security of data, systems and applications, the responsibility to ensure this lies in one’s own hands (this refers to an IaaS public cloud). Although cloud vendors mostly provide the ways and means, with IaaS the customer has to do things on their own. Outages such as those known from the Amazon Web Services, or unpredictable closures like Nirvanix, should lead to more sensitivity when using cloud services. The risk needs to be spread consciously: not all eggs should be put into one basket, but strategically distributed over several.

Best-of-Breed

The best-of-breed strategy is the most widespread approach within enterprise IT architectures. Here, multiple integrated industry solutions for various departments within a company are used. The idea behind best-of-breed is to use the best solution in each case, something an all-in-one solution usually cannot offer. This means one assembles the best services and applications for one’s purpose. Following this approach in the cloud, one is already in a multi-cloud scenario, which means that approximately 90 percent of all companies using cloud services are multi-cloud users. Whether the cloud services in use are integrated remains doubtful.

What is recommended on the one hand is, on the other hand, unavoidable in many cases. Although there are already a few good all-in-one solutions on the market, most pearls are implemented independently and must be combined with other solutions, for example e-mail and office/collaboration with CRM and ERP. With regard to risk, this has its advantage, especially in the cloud: if a provider fails, only a partial service is unavailable and not the overall productivity environment.

Avoid data silos: APIs and integration

Cloud marketplaces attempt to support such best-of-breed approaches by grouping individual cloud solutions into different categories, offering companies a broad portfolio of different solutions with which a cloud productivity suite can be put together.

Nevertheless, even highly controlled and supposedly integrated marketplaces reveal a crucial weakness: massive integration problems. They contain many individual SaaS applications that do not interact. This means there is no common database, and the e-mail service, for example, cannot access the data in the CRM service and vice versa. This creates, as described above, individual data silos and isolated applications within the marketplace.

This example also illustrates the biggest problem of a multi-cloud approach: the integration of the many different, usually independently operated cloud services and their interfaces with each other. A small sketch of this glue code follows below.
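
To show what this integration burden looks like in practice, here is a minimal sketch that copies contacts from one SaaS service to another; both REST APIs, all URLs, fields and tokens are hypothetical placeholders, not real product interfaces:

    # Minimal sketch of point-to-point SaaS integration: pull contacts
    # from a (hypothetical) CRM REST API and push them into a
    # (hypothetical) e-mail service, since the two share no common
    # database. Every new silo multiplies this kind of glue code.
    import requests

    CRM_API = "https://crm.example.com/api/contacts"    # placeholder
    MAIL_API = "https://mail.example.com/api/recipients"  # placeholder

    def sync_contacts(token_crm: str, token_mail: str) -> None:
        contacts = requests.get(
            CRM_API, headers={"Authorization": "Bearer " + token_crm}, timeout=10
        ).json()
        for contact in contacts:
            # Translate between the two providers' data models by hand.
            requests.post(
                MAIL_API,
                headers={"Authorization": "Bearer " + token_mail},
                json={"email": contact["email"], "name": contact["name"]},
                timeout=10,
            )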

Multi-cloud is „The New Normal“ in the cloud

The topic of multi-cloud is currently highly debated, especially in the IaaS environment, as a way to spread the risk and to take advantage of the costs and benefits of different cloud infrastructures. But even in the SaaS environment the subject must gain greater importance in order to avoid data silos and isolated applications in the future, to simplify integration and to support companies in adopting their best-of-breed strategy.

Notwithstanding these expectations, multi-cloud use is already a reality: companies use multiple cloud solutions from many different vendors, even if the services are not yet (fully) integrated.