Why I do not (yet) believe in the Deutsche Börse Cloud Exchange (DBCE)

After a global fanfare, things have gone rather quiet around the Deutsche Börse Cloud Exchange (DBCE). Nevertheless, I am frequently asked about the cloud marketplace, which has the ambition to revolutionize the infrastructure-as-a-service (IaaS) market, and about my assessment of its market potential and future. In short, I keep arriving at the same conclusion: given the current situation, I do not yet believe in the DBCE and grant it little market potential. Why? Read on.

Two business models

To begin with, I see two business models behind the DBCE. The first will become reality in 2014: a marketplace matching supply and demand for virtual infrastructure resources (compute and storage), which enterprises are supposed to use for their actual workloads (running applications, storing data).

The second is still a vision for the future: trading virtual resources the way futures are traded today. Let's be honest, what is a stock exchange really good at? What is its actual task, its core business? Organizing and monitoring the transfer of critical infrastructure resources and workloads? No. A stock exchange can determine the price of virtual goods and have them traded. The IaaS marketplace is merely the precursor that prepares the market for this trading business and brings together the necessary supply and demand.

Providers and Users

A marketplace fundamentally requires two parties: suppliers and buyers, in this case the users. The DBCE need not worry about the supply side. As long as the financial, organizational and technical effort remains relatively low, providers will quickly offer enough resources. The problems lie on the buyer side.

I have discussed this topic with Reuven Cohen, who launched SpotCloud, the first worldwide IaaS marketplace, in 2010. At its peak, SpotCloud managed 3,200 providers(!) and 100,000 servers worldwide. The buyer side, by contrast, remained modest.

Even though Reuven was far too early in 2010, I still see five key issues that will slow the adoption of the DBCE: trust, psychology, use cases, technology (APIs) and management.

Trust and Psychology

In theory the idea behind the DBCE sounds great. But why should the DBCE be more trustworthy than any other IaaS marketplace? Does a stock exchange still enjoy the confidence it stands for, or is supposed to stand for, as an institution? On top of that, IT decision makers think quite differently. The majority of enterprises are still overwhelmed by the public cloud and afraid of handing over their IT and data. There is a reason why Crisp Research figures show that spending on public IaaS in Germany in 2013 amounts to just about 210 million euros, while investments in private cloud infrastructure amount to roughly 2.3 billion euros.

This is also underlined by a Forrester study, which states:

“From […] over 2,300 IT hardware buyers, […] about 55% plan to build an internal private cloud within the next 12 months.”

An independent marketplace will not change this. On the contrary: even if it is meant to bring more transparency, it introduces an additional layer of complexity between users and providers, one the users first have to understand and adopt. This is also reflected in the use cases and workloads that are supposed to run on the DBCE.

Use Cases

Why should anyone use the DBCE? That question is not easy to answer. Why buy resources through a marketplace when they can be purchased directly from a provider that already has a global footprint, many customers and a proven infrastructure? Price and comparability could be decisive attributes. If a virtual machine (VM) at provider A is a little cheaper today than at provider B, then provider A's VM gets used. Really? No. That would complicate the application architecture to a degree that is out of all proportion to the benefit. Cloud computing is already complex enough that a smart cloud architect would refrain from such a setup. In this context one should also not forget the technical hurdles cloud users already face with today's far more mature cloud infrastructures.

One interesting scenario that could be built on top of the DBCE is a multi-cloud setup that spreads technical risk (e.g. the outage of a single provider).
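To make the idea tangible, here is a minimal sketch of such a multi-cloud failover using Apache Libcloud as an abstraction layer. The providers, credentials, regions and fallback order are purely illustrative assumptions, not part of the DBCE offering.

```python
# Minimal multi-cloud failover sketch (pip install apache-libcloud).
# Credentials, regions and the provider order are placeholders; the point is only
# that one abstraction layer talks to several providers and falls back on failure.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

PROVIDERS = [
    (Provider.EC2, {"key": "AWS_KEY", "secret": "AWS_SECRET", "region": "eu-central-1"}),
    (Provider.RACKSPACE, {"key": "RS_USER", "secret": "RS_API_KEY", "region": "lon"}),
]

def start_node(name):
    """Start a node on the first provider that responds; try the next one on failure."""
    for provider, creds in PROVIDERS:
        try:
            driver = get_driver(provider)(**creds)
            image = driver.list_images()[0]   # any available image, for the sketch
            size = driver.list_sizes()[0]     # smallest offered size
            return driver.create_node(name=name, image=image, size=size)
        except Exception as exc:              # provider outage or API error
            print(f"{provider} failed: {exc}")
    raise RuntimeError("no provider available")
```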

Which brings us to the next and probably biggest hurdle – the APIs.

API and Management

The discussions about "the" preferred API standard in the cloud never stop. Amazon EC2 (compute) and Amazon S3 (storage) have become the de facto standards, supported by most other providers and projects.

The DBCE effectively wants to position itself as middleware between providers and users and offer both sides a single interface of its own(!). For this it relies on technology from Zimory, whose interfaces are open but nonetheless proprietary. Instead of focusing on a well-known standard or adopting one from the open source community (OpenStack), the DBCE is trying to go its own way.
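How much weight the de facto standard carries can be illustrated with a small sketch: any provider that speaks the S3 API can be addressed with standard S3 tooling simply by changing the endpoint. The endpoint, bucket and credentials below are hypothetical.

```python
# Sketch: S3-compatible storage is reachable with standard S3 tooling (boto3 here)
# by pointing the client at a different endpoint. All names below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.some-compatible-provider.example",  # hypothetical provider
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
s3.upload_file("report.pdf", "my-bucket", "backups/report.pdf")
```

A marketplace API that no existing tool speaks enjoys none of this out-of-the-box support.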

One question: given that we Germans have hardly covered ourselves in glory when it comes to the cloud so far, why should the market embrace a new standard that comes from Germany and is proprietary on top of it?

Another problem lies in the management solutions for cloud infrastructures. Either potential users have already decided on a solution and now face the challenge of integrating the new APIs in some form, or they are still in the decision process. In the latter case the problem is that, so far, no common cloud management solution supports the DBCE APIs.

System Integrators and Cloud Brokers

There are two target groups with the potential to open the door to the users: system integrators (channel partners) and cloud brokers.

A cloud service broker is a third-party vendor that, on behalf of its customers, enriches cloud services with added value and ensures that a service meets the specific requirements of an enterprise. In addition, it helps with the integration and aggregation of services, for example to increase their security or to extend the original service with additional properties.

A system integrator develops (and operates) a system or application on a cloud infrastructure on behalf of its customers.

Since both act on behalf of the users and operate the infrastructures, systems and applications, they can adopt the proprietary APIs themselves and ensure that the users never have to deal with them. Moreover, system integrators as well as cloud brokers can use the DBCE to purchase cloud resources cheaply and set up a multi-cloud model. Here the complexity of the system architecture in the cloud again plays a major role, but one the end user must not notice.

It is a matter of time

In this article I have raised more questions than I have answered. I do not want to judge the DBCE too harshly, because the idea is good. But the points mentioned above are essential for getting a foot in the users' door at all. This will be a learning process for both sides, and I estimate it will take about five years before this model reaches a significant adoption rate on the user side.

Diversification? Microsoft's Cloud OS Partner Network mutates into an "OpenStack Community"

At first glance, the announcement of the new Microsoft Cloud OS Partner Network does sound interesting. Anyone who does not want to use the Microsoft public cloud directly can now choose one of numerous partners and access the same Microsoft technologies indirectly. It is also possible to span a hybrid cloud between the Microsoft cloud, the partner clouds and the company's own data center. For Microsoft and the spread of its technology in the cloud it is indeed a clever move. For the so-called "Cloud OS Partner Network", however, the move could backfire.

Microsoft Cloud OS Partner Network

The Cloud OS Partner Network consists of more than 25 cloud service providers worldwide which, according to Microsoft, focus especially on hybrid cloud scenarios based on Microsoft's cloud platform. To this end they rely on a combination of Windows Server with Hyper-V, System Center and the Windows Azure Pack. Microsoft thereby wants to underline its vision of establishing its Cloud OS as the basis for customer data centers, service provider clouds and the Microsoft public cloud.

The Cloud OS Partner Network serves more than 90 different markets with a total customer base of three million companies worldwide. In all, 2.4 million servers and more than 425 data centers form the technological base.

Among others, the partner network includes providers such as T-Systems, Fujitsu, Dimension Data, CSC and Capgemini.

Advantages for Microsoft and the customers: Locality and reach

For Microsoft, the Cloud OS Partner Network is a clever move to gain worldwide market share, measured by the spread of Microsoft cloud technologies. It also fits perfectly into Microsoft's proven strategy of serving customers mostly not directly but through a network of partners.

In principle, the partner network also accommodates customers. Companies that have so far avoided the Microsoft public cloud (Windows Azure) for reasons of data locality or local regulations such as data privacy can now look for a provider in their own country without having to forgo the desired technology. For Microsoft this opens up a further advantage: it does not necessarily have to build a data center in every country and can concentrate on the existing or strategically important ones.

Microsoft can thus lean back a little and once again leave the uncomfortable sales work to a partner network. As before the cloud era, the revenue comes from selling licenses to the partners.

Downside for the partners: Diversification

One can talk this partner network up as much as one likes. The fact is that with the Cloud OS Partner Network, Microsoft creates a competitive situation similar to the one that has formed within the OpenStack community. In the public cloud in particular, there are only two "top providers" there, Rackspace and HP, and both play only a minor role in the worldwide cloud circus. HP in particular is currently struggling with itself and is therefore unable to concentrate on innovation. The main problem of both, and of all the other OpenStack providers, is that they cannot differentiate themselves from one another. Instead, most of them are in direct competition with each other and currently do not stand apart in any meaningful way, because they all build on exactly the same technological base. An analysis of the current situation of the OpenStack community can be found under "Caught in a gilded cage. OpenStack providers are trapped".

For the Cloud OS Partner Network, the situation is even more uncomfortable. Unlike with OpenStack, Microsoft is the sole technology supplier and decides where the journey goes. The partners have to swallow what they are served, or can only help themselves by adopting additional third-party technology stacks, which creates extra overhead and thus further costs for development and operations.

Apart from their local markets, all Cloud OS service providers are in direct competition with one another and, building solely on Microsoft technologies, have no other way to differentiate themselves. Good support and professional services are extremely important and an advantage, but they are no USP in the cloud.

If the Cloud OS Partner Network flops, Microsoft will get away with a black eye. It is the partners who will carry home the real damage.

Technology as a competitive advantage

Looking at the genuinely successful cloud providers on the market, it becomes clear that they are the ones that have built their clouds on a technology stack of their own and thereby differentiate themselves technologically from the rest. Those are Amazon, Google and Microsoft, and precisely not Rackspace or HP, who both build on OpenStack.

Cloud OS partners like Dimension Data, CSC or Capgemini should keep this in mind. CSC and Dimension Data in particular have big ambitions to have a say in the cloud.

Hosted Private Platform-as-a-Service: Chances and opportunities for ISVs and enterprises

Although platform-as-a-service (PaaS) has repeatedly been predicted a rosy future, the service model has never really gained momentum. In conversations you learn that PaaS is gladly used for prototyping, but when it comes to production environments, users switch to infrastructure-as-a-service (IaaS) offerings. The main reason is the level of control, which is also the main selection criterion for or against a PaaS. IaaS simply offers more ways to influence the virtual infrastructure, software and services. With a PaaS, by contrast, you develop against a standardized API that leaves little freedom. Yet a PaaS offers ISVs (independent software vendors) and enterprises in particular a convenient way to supply their developers with resources. Those who do not trust a public PaaS can meanwhile turn to so-called hosted private PaaS.

Platform-as-a-Service

Platform-as-a-service (PaaS) is the middle layer of the cloud computing service model and goes one step further than infrastructure-as-a-service (IaaS). A PaaS is responsible for providing a transparent development environment: the provider offers a platform on which (web) applications can be developed, tested and hosted. The applications then run on the provider's infrastructure and use its resources, and the complete lifecycle of an application can be managed through it. The services on the provider's platform are accessed via its APIs. The benefit, especially for small companies, is that the development infrastructure can be reduced to a minimum. All they need is a computer, a web browser, perhaps a local IDE, a data connection and their know-how to develop applications. Everything else is up to the provider, who is responsible for the infrastructure (operating system, web server, runtime environments, etc.).

[Figure: Platform as a Service. Source: Cloud Computing mit der Microsoft Plattform, Microsoft Press PreView 1-2009]
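To make the division of labor concrete, here is a minimal sketch of what an application for a typical PaaS looks like; a Heroku-style convention is assumed, and details vary from provider to provider.

```python
# Minimal PaaS-style web app: the developer ships only the code, the platform provides
# the runtime and injects configuration (here the port) through the environment.
# Assumes a Heroku-like convention; the exact mechanism differs per provider.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the platform"

if __name__ == "__main__":
    # The platform decides where the app listens; locally it falls back to port 5000.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 5000)))
```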

Hosted Private Platform-as-a-Service

Hosted private platform-as-a-service (hosted PaaS) turns the public PaaS idea into a dedicated variant that is managed by a provider. It is especially attractive for enterprises that want to avoid the public approach (shared infrastructure, multi-tenancy) but lack the resources and know-how to give their developers a true PaaS inside their own IT infrastructure. In this case they can turn to providers that offer them an exclusive PaaS in an environment reserved just for them. The advantage is that the customer can use the hosted private PaaS exactly like a public PaaS, but on a non-shared infrastructure located in the provider's data center.

Another advantage of a hosted PaaS lies in the professional services the provider includes directly, which help customers migrate their applications onto the PaaS or develop them there from scratch. The concept is comparable to the managed clouds or business clouds in the IaaS area (http://clouduser.de/analysen/die-bedeutung-der-managed-cloud-fuer-unternehmen-22699). This is interesting for enterprises as well as for ISVs that so far have little or no experience in developing cloud applications or do not trust public offerings such as Amazon Elastic Beanstalk, Microsoft Windows Azure, Heroku or Force.com.

One possible sign that public PaaS is not really taking off in Germany is the fact that Germany's first PaaS provider, cloudControl, has encapsulated its public PaaS in the Application Lifecycle Engine and offers it as a private PaaS, allowing enterprises and web hosters (white label) to run their own PaaS on a self-managed infrastructure. In addition, a bridge to a hybrid PaaS can be spanned.

The managed IaaS provider Pironet NDH is the first German provider to jump on the hosted private PaaS train. The Cologne-based company wants to offer ISVs and SaaS providers a platform, including professional services, that lets them deliver their web applications from a German data center. Besides .NET, PHP, Java, Perl, Python, Ruby, node.js and Go, Pironet NDH also offers full Windows Azure PaaS support as known from Microsoft's Windows Azure public cloud. Applications developed for Azure can thus also be operated within the German Pironet NDH data center. Both PaaS offerings are built separately: the polyglot (multi-language) PaaS is based on a Red Hat OpenShift implementation, the Azure PaaS on the Windows Azure Pack. Although Pironet NDH focuses primarily on one-to-one business relationships, public PaaS variants will follow in Q1/Q2 2014, but these will only be marketed as secondary offerings.

With traditional ISVs in particular, Pironet NDH's offering is pushing at open doors. Their customers will increasingly ask for web applications in the future, which will pose a big challenge for many an ISV. Among other things, these ISVs will profit from the professional services to bring existing and new applications to market faster.

Public cloud providers need to react

The "hosted private" trend comes from the IaaS area, where hosted private clouds in the form of managed or business clouds are currently also in high demand, and for good reason. The data privacy issue in particular is driving ISVs and enterprises into the arms of providers with dedicated solutions. Moreover, customers are willing to invest more for greater security, data sovereignty and consulting.

Public cloud providers will have to react in order to meet customer requirements. With Salesforce, the first big public cloud player has already shown its hand. First, a quote from Joachim Schreiner, Managing Director of Salesforce Germany: "Private clouds are no clouds. This is nothing but an application service provider contract, which could already have been signed in the year 2000 if the necessary bandwidth had been available." Salesforce CEO Marc Benioff apparently sees this somewhat differently; after all, the cash register only keeps ringing when customers are satisfied. To that end, Salesforce introduced its "Salesforce Superpod" at Dreamforce 2013: a dedicated cloud based on HP's Converged Infrastructure, and thus essentially nothing other than a hosted private cloud.

Quo vadis VMware vCloud Hybrid Service? What to expect from vCHS?

In May of this year VMware presented the vCloud Hybrid Service (vCHS), its first infrastructure-as-a-service (IaaS) offering, in a private beta. The infrastructure service is fully managed by VMware and based on VMware vSphere. Companies that have already virtualized their on-premise infrastructure with VMware are to be given the opportunity to seamlessly extend their data center resources, such as applications and processes, into the cloud and thus span a hybrid cloud. At VMworld 2013 Europe in Barcelona in October, VMware announced the availability of vCHS in England, with a new data center facility in Slough near London. The private beta of the European vCloud Hybrid Service will be available from the fourth quarter of 2013, with general availability planned for the first quarter of 2014. This step shows that VMware ascribes an important role to the public cloud, but it was made above all to meet the needs of European customers.

Careful with the gleeful expectations?

During the press conference at VMworld in Barcelona, VMware CEO Pat Gelsinger happily announced that instead of the expected 50 registrations, 100 participants had signed up for the vCHS private beta. That is, after all, 100 percent above expectations. Still, 100 test customers can hardly count as a success for a vendor like VMware, given its broad customer base and the effectiveness of its sales organization.

It is therefore fair to ask how attractive a VMware IaaS offering can actually be, and how attractive VMware technology, most notably its hypervisor, which is widespread in the enterprise, will remain in the future. In discussions with IT leaders it comes up more and more frequently that, for cost reasons and for the sake of independence (lock-in), VMware's hypervisor is to be replaced in the short to medium term by an open source hypervisor such as KVM.

Challenges: Amazon Web Services and the partner network

With vCHS, VMware is positioning itself exactly like roughly 95 percent of all vendors in the IaaS market: with compute power and storage. A portfolio of higher-level services is missing. Nevertheless, VMware sees Amazon Web Services (AWS) as its main target and primary competitor when it comes to attracting customers.

AWS customers, however, including enterprise customers, use more than just the raw infrastructure. Whenever I talk to AWS customers I ask my standard question: "How many infrastructure-related AWS services do you use?" Adding up all the answers so far, I arrive at an average of eleven to twelve services per AWS customer. Most say they use as many services as necessary in order to have as little effort as possible. A key moment was a customer who, without hesitation and with wide eyes, said he uses 99 percent of all the services.

AWS is a popular and particularly attractive target. But by focusing only on AWS, VMware must be careful not to neglect its network of partners, and in particular not to lose sight of the service providers that have built their infrastructures on VMware technology, including CSC, Dimension Data, Savvis, Tier 3, Verizon and Virtustream.

The first service providers are already commenting that they might turn their backs on VMware and support other hypervisors such as Microsoft Hyper-V. So far VMware views these discussions calmly, since vCHS is intended purely for standardized workloads while the VMware-powered service providers are supposed to take care of specific customer needs.

Quo vadis vCloud Hybrid Service?

Through its technology, which is used by a variety of cloud service providers, VMware has for some time been a passive player in the IaaS environment. The vCloud Hybrid Service, still in beta, competes directly with this partner network, however, so the question of trust between the two sides will inevitably arise.

This is particularly because vCHS offers more functionality than the typical vCloud Datacenter Services. Among other things, it includes the new vCloud Hybrid Service online marketplace, through which users, according to VMware, have access to more than 3,800 applications and services they can use in conjunction with vCHS.

VMware's strategy consists primarily of building a seamless technical bridge between local VMware installations and vCHS, including the ability to move existing standard workloads between a company's own data center and vCHS. How successful VMware will be with this strategy remains to be seen.

One finding from our exploration of the European cloud market is that most European companies like the properties of the cloud – flexible use, pay-per-use and scalability – but prefer to hand the construction and operation of their applications and systems over to the provider (as a managed or hosted service).

These are good conditions for all service providers that have built their infrastructure on VMware technology and help customers on their way to the cloud with professional services. They are not the best conditions for VMware's vCHS, however, since vCHS will only support self-service and standard workloads.

As is typical for U.S. companies, VMware launched its European vCHS in England and has already announced plans to become active in further European countries. The European commitment is mainly intended to appease the concerns of Europeans and respond to their specific needs; VMware's EMEA strategy lists data locality, sovereignty, trust, security, compliance and governance. I do not want to appear too critical, but the name here is interchangeable: every cloud provider advertises with exactly the same points.

Basically, the vCloud Hybrid Service, in combination with the rest of VMware's portfolio, has a good chance of playing a leading role in the IaaS market. This is mainly thanks to the broad customer base, a strong ecosystem and many years of experience with enterprise customers.

According to VMware's own figures, more than 500,000 customers, including 100 percent of the Fortune 500, 100 percent of the Fortune Global 100 and 95 percent of the DAX 100 companies, use VMware technology. Furthermore, more than 80 percent of virtualized workloads and a large number of mission-critical applications run on VMware.

For these customers, the transition to a VMware-based hybrid cloud is therefore not an unusual step and allows them to scale without great expense. Furthermore, VMware has, like no other virtualization or cloud provider, a very large ecosystem of partners, resellers, consultants, trainers and distribution channels. With this ecosystem VMware has a great advantage over its competitors in IaaS and in the cloud computing environment in general.

The situation is similar for the technical attractiveness. Organizationally, VMware does not rely on a typical public cloud environment but offers either a physically isolated dedicated cloud per customer or a virtual private cloud. The dedicated cloud delivers considerably more performance than the virtual private cloud and gives customers a physically separated pool of vCPUs and vRAM; storage and network are logically isolated between tenants. The virtual private cloud gives customers the same architectural design and resources as the dedicated cloud, but these are only logically, not physically, separated.

With vCHS, existing VMware infrastructures can comfortably be extended into a hybrid cloud so that resources can be moved back and forth as needed; a company can also use it to build test and development or disaster recovery environments. However, this is exactly what a number of VMware service providers already offer, which could become the biggest challenge for both sides.

Cloud Computing Myth: Less know-how is needed

I recently came across an interesting statement in an article describing the advantages of cloud computing. One caption read: "No additional know-how needs to be built up within the company." This is completely wrong; the opposite is the case. Far more knowledge is needed than any vendor promises.

There is a lack of knowledge

The kind and amount of knowledge needed depends on the type of service consumed from the cloud. A supposedly highly standardized software-as-a-service (SaaS) application such as e-mail requires far less knowledge about the service and its characteristics than a service that maps a specific business process.

With infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS) things look quite different. The provider does take care of building, operating and maintaining the physical infrastructure, but responsibility for the virtual infrastructure (IaaS) lies with the customer. The provider itself, via fee-based support, or certified system integrators can help with setup and operation. The same applies to running one's own application on a cloud infrastructure: the cloud provider is not responsible for it and merely supplies the infrastructure along with APIs, web interfaces and sometimes value-added services to ease development for the customer. In this context one has to understand that, depending on how the cloud scales – scale-out vs. scale-up – an application must be built in a completely different way than in a non-cloud environment, namely distributed across multiple systems and able to scale automatically (scale-out). This architectural knowledge is missing in most companies at every turn, which is also due to the fact that colleges and universities have not yet taught this kind of programmatic thinking.
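A minimal sketch of what this scale-out thinking means in practice: the service itself keeps no state, so any number of identical instances can run behind a load balancer, while shared state lives in an external store. The Redis host name below is a placeholder.

```python
# Scale-out sketch: a stateless web process with shared state in an external store
# (Redis here). Any number of identical copies can be started or stopped at will.
from flask import Flask
import redis

app = Flask(__name__)
store = redis.Redis(host="shared-redis.internal", port=6379)  # hypothetical shared store

@app.route("/visit")
def visit():
    # The counter lives outside the process, so the answer is correct no matter
    # which instance the load balancer picked for this request.
    count = store.incr("visits")
    return f"visit number {count}"
```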

Cloud computing is still more complex than it appears at first glance. The prime example is Netflix. The U.S. video-on-demand provider operates its platform on a public cloud infrastructure (Amazon AWS) and, in addition to an extensive production system that ensures scalable, high-performance operation, has developed an extensive test suite – the Netflix Simian Army – whose sole job is to ensure the smooth operation of the production system: among other things, virtual machines are constantly and deliberately shot down at random, and the production system must nevertheless keep working properly.
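The principle behind such a test suite can be sketched in a few lines; this is a toy Chaos-Monkey-style script, not Netflix's actual tooling, and the tag name, tag value and region are assumptions.

```python
# Toy Chaos-Monkey-style script: terminate one random running EC2 instance that has
# opted in via a tag. A production system must survive this at any time.
import random
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def candidates():
    """IDs of running instances tagged as fair game for chaos testing."""
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:chaos", "Values": ["opt-in"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    return [i["InstanceId"] for r in resp["Reservations"] for i in r["Instances"]]

if __name__ == "__main__":
    victims = candidates()
    if victims:
        victim = random.choice(victims)
        print(f"terminating {victim}")
        ec2.terminate_instances(InstanceIds=[victim])
```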

Demand for the managed cloud is rising

The deployment model cannot do much to reduce the complexity, but it can shift the responsibility and the necessary know-how. In a public cloud, self-service rules. This means the customer is initially 100 percent on his own and bears sole responsibility for developing and operating his application.

Many companies have recognized this and admit to themselves that they lack the necessary knowledge, staff and other resources to use a public cloud (IaaS) successfully. Instead, they prefer, or expect, help from their cloud providers. In these cases the providers in question are not public cloud providers but managed cloud or business cloud providers, who offer professional services in addition to infrastructure.

More on the topic of managed cloud can be found in "The importance of the Managed Cloud for the enterprise".

Dropbox alternatives for enterprise IT

The popularity of easy-to-use cloud storage services like Dropbox gives IT decision makers quite a headache. Yet the market already offers enterprise-ready solutions. This article introduces cloud storage services for professional use.

Dropbox drives shadow IT

Dropbox has driven cloud storage services into the enterprise. The fan base of the US provider extends from ordinary employees right up to the executive floor. The fast access, the ease of use on every device and the low cost in particular have made Dropbox an attractive product. But what at first sounds like a true success story is in reality a serious problem for CIOs and IT managers: Dropbox has led to a new form of shadow IT. What is meant is the largely uncontrolled growth of IT solutions that employees and departments use, and purchase by credit card, without involving the IT department. Behind this usually stands the criticism that internal IT departments cannot deliver suitable solutions quickly and in the desired quality. The result is that company data ends up in private Dropbox accounts, where it does not belong.

The Dropbox boom, and the easy access to public cloud services in general, has led to a discussion about whether traditional IT departments still have a right to exist. Some analysts predict they could die out sooner or later, leaving the IT reins in the hands of line-of-business (LOB) managers. Reality looks different, though: the often anxious LOB managers usually have neither the time nor the knowledge to make such IT decisions. They certainly know what matters in their own area, but do they know which other systems have to work together? For many years companies have been struggling with poorly integrated isolated applications and data silos. Public cloud solutions compound this problem, and Dropbox is just the tip of the iceberg.

To get the Dropbox phenomenon under control, several vendors of enterprise cloud storage have established themselves in recent years. The widely used Dropbox service falls far short of what typical enterprise policies and IT governance models demand.

Dropbox for Business

"Dropbox for Business", a corporate offering with advanced features for more security, team management and reporting, has existed since 2011. However, the solution does not match the breadth and variety of functions of similar offerings on the market. Dropbox is therefore better suited to small, close-knit teams that do not need as much control as larger companies. For $795 per year for five users, unlimited storage is available; each additional user costs $125 per year.

Through a dashboard, administrators get access to information about their users' activities, including the devices used, browser sessions and applications. From there it is also possible to close browser sessions, unlink devices and disable third-party apps.

For improved security, various authentication mechanisms can be activated, including two-factor authentication, and there is single sign-on (SSO) integration with Active Directory and other SSO providers. For its technical infrastructure Dropbox uses Amazon S3, which means the data is stored in one of Amazon's global data centers. These data centers do meet high security standards such as SSAE 16, ISAE 3402 and ISO 27001, but Dropbox does not guarantee a specific location for the data within the Amazon cloud, such as a data center in the EU. Dropbox states that the data is encrypted with 256-bit AES before it is stored on Amazon S3; however, Dropbox itself has plain-text access to user files. Separate encryption is only possible with external tools.
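What such "external tools" boil down to in the simplest case is shown below: the file is encrypted on the client before it ever reaches the synced folder, so the storage provider only sees ciphertext. This is a deliberately simplistic sketch using the Python cryptography package; real tools handle key management properly.

```python
# Client-side encryption sketch: encrypt before syncing, so the provider stores only
# ciphertext. Key handling is deliberately naive here; file paths are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: keep this key outside the sync folder
cipher = Fernet(key)

with open("contract.pdf", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

# Only the encrypted copy goes into the synced Dropbox folder.
with open("Dropbox/contract.pdf.enc", "wb") as f:
    f.write(ciphertext)
```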

Another deficit is the lack of audit mechanisms at file level and for user activities. It is not possible to centrally look into a single user account or to retrieve an earlier version of a file; this only works by logging in as the user in question. In addition, the reports provide no information about user activities such as uploading and sharing of files – a big gap in the audit process.

Strengths

  • Ease of use.
  • Supports the major operating systems.
  • Big market share and acceptance in consumer space.
  • Unlimited storage space at an attractive price.

Weaknesses

  • Dropbox has full plain text access to user files.
  • No end-to-end encryption.
  • Separate data encryption only possible with external tools.
  • Weak reporting.
  • Insufficient administration and audit options.
  • Location of the data cannot be set.

Box

Box is one of the well-known providers of public cloud enterprise storage and targets small and medium-sized as well as large companies. The business plan costs $15 per user per month for 3 to 500 users and includes 1,000 GB of storage space. Box for Enterprise IT offers an unlimited number of users and unlimited disk space; prices are available on request.

Clients for the common desktop and mobile operating systems allow data to be synchronized and uploaded from almost any device. Files can be locked and automatically released again after a set time. In addition, depending on the plan, a version history of between 25 and 100 versions per file is kept. Other functions cover external authentication mechanisms, user management and auditing capabilities. The enterprise plan offers further management functions and access to APIs.
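
How this API access looks in practice can be sketched with the official Box Python SDK (boxsdk); the OAuth credentials and file names below are placeholder assumptions.

    # Minimal sketch: uploading to and listing the Box root folder via the
    # boxsdk Python package. Credentials and file names are placeholders.
    from boxsdk import OAuth2, Client

    oauth = OAuth2(
        client_id="YOUR_CLIENT_ID",
        client_secret="YOUR_CLIENT_SECRET",
        access_token="YOUR_DEVELOPER_TOKEN",
    )
    client = Client(oauth)

    root = client.folder(folder_id="0")      # '0' is the root folder
    root.upload("quarterly_report.pdf")      # upload a local file
    for item in root.get_items(limit=100):   # list folder contents
        print(item.type, item.name)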

The higher the plan, the more functions are unlocked. This is particularly visible at the permissions level: with higher plans, more user types and access rights can be assigned to an object. Business and enterprise customers also get detailed reporting capabilities, including information on who has viewed and modified which files. Further security features include authentication mechanisms for Active Directory, Salesforce, NetSuite, Jive and DocuSign as well as single sign-on (SSO) integration. For data center capacity Box cooperates with Equinix; among others, there is a data center in Amsterdam for the European market. Where Equinix has no sites, Box relies on Amazon Web Services.

Box’s biggest weakness is the limitation of 40,000 objects for files and folders. Customers already pointed out this restriction in mid-2012; so far, nothing has changed. There is only the information that the limit will be raised to 100,000 objects in „Box Sync 4“.

Strengths

  • Ease of use.
  • Variety of extensions.
  • Supports the major operating systems.
  • Many relevant features for business (management, audit, etc).

Weaknesses

  • Files and folders are limited to 40,000 objects.
  • Encryption keys are owned by Box.

TeamDrive

TeamDrive from Hamburg is a file sharing and synchronization solution. It is intended for companies that do not want to store their sensitive data with external cloud services but still want to allow their teams to synchronize data and documents. For this, TeamDrive monitors arbitrary folders on a PC, laptop or smartphone, which can then be used and edited together with invited users. Data is thus also available offline at all times. Automatic synchronization, backup and versioning of documents protect users against data loss. Since the TeamDrive registration and hosting servers can be operated in a company’s own data center, the software can be integrated into existing IT infrastructures; all necessary APIs are available for this. Enterprise customers pay 5.99 euros per user per month, or 59.99 euros per year, for TeamDrive Professional.

Using the global TeamDrive DNS service several independently operated TeamDrive systems can be linked together. If necessary, this allows customers to build a controlled community cloud in a hybrid scenario.

TeamDrive offers many business-related functions for managing and controlling a storage service. These include rights management at Space level for different user groups, as well as a version control system for accessing older versions of documents and the changes made by group members. For data synchronization, clients for all major desktop and mobile operating systems are available, including Windows, Mac, Linux, iOS and Android. With TeamDrive SecureOffice, the vendor has also brought an extension of its mobile clients to market with which documents can be edited within the end-to-end encryption. Integrated mobile device management (MDM) helps to manage all devices used with TeamDrive; these can be added, blocked or wiped. TeamDrive can be connected to existing directory services such as Active Directory and LDAP to synchronize user administration.

In addition to these management functions, TeamDrive features fully integrated end-to-end encryption in which the encryption keys are owned exclusively by the user. TeamDrive is thus not able to access the data at any time. For encryption, TeamDrive relies on AES 256 and RSA 3072.
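
To illustrate the principle behind such a scheme (this is not TeamDrive’s actual code): the document is encrypted with a fresh AES key, and that key is wrapped with the recipient’s RSA public key, so the operator never handles usable key material. A minimal sketch with the Python cryptography library, with all names being assumptions:

    # Illustrative AES + RSA hybrid encryption, roughly matching the
    # AES-256/RSA-3072 combination described above (not TeamDrive's code).
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # The RSA private key stays on the recipient's device at all times.
    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    document = b"confidential contract draft"
    aes_key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, document, None)

    # Only the wrapped AES key travels alongside the ciphertext.
    wrapped_key = recipient_key.public_key().encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )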

It should also be mentioned that TeamDrive is the only enterprise storage solution that carries the privacy seal of the Independent Centre for Privacy Protection Schleswig-Holstein (ULD). The seal confirms that TeamDrive is suitable for use in businesses and governments for the confidential exchange of data.

Strengths

  • End-to-end encryption.
  • Different encryption mechanisms.
  • SecureOffice for mobile secure processing of documents.
  • Certification by the ULD.
  • Integrated mobile device management.
  • Many relevant functions for businesses.

Weaknesses

  • No locking of files.
  • No browser access.

Microsoft SkyDrive Pro

SkyDrive Pro is Microsoft’s enterprise cloud storage, provided in conjunction with SharePoint Online and Office 365. The service is designed exclusively for business purposes and must therefore be distinguished from SkyDrive, which is aimed at home users who predominantly store and share documents and photos in the Microsoft cloud. The management of SkyDrive Pro is the responsibility of the company; employees store and share business documents and collaborate on them with colleagues within a private domain.

SkyDrive Pro is fully synchronized with SharePoint 2013 and Office 365. An administrator decides how the libraries within SkyDrive Pro can be used by each user; for this purpose, different access rights can be assigned to users and user groups. Using a client, documents can be synchronized with the local computer. Mobile clients are available for iOS and Windows Phone; Android and BlackBerry are currently not supported.

Documents or entire folders can be shared with individual colleagues or distribution lists, with read or write access rights. A recipient then receives an e-mail with a comment and a link to the document and can follow the document to be informed of later changes. Sharing with partners and customers outside the domain is possible if the company allows external sharing.

According to Microsoft, all data in SkyDrive Pro is protected with several layers of encryption. The only way to access the information is if an administrator has granted access rights to it. Furthermore, Microsoft guarantees that private corporate data is protected from search engines, so no metadata is collected in any form. In addition, SkyDrive Pro is compliant with HIPAA, FISMA and other data protection standards.

Strengths

  • Integration with Office 365 and SharePoint.
  • Clients for mobile operating systems.

Weaknesses

  • Proprietary Microsoft system.
  • European data centers only (Dublin, Amsterdam).
  • No Android client.

Amazon S3

As a web service, Amazon S3 (Amazon Simple Storage Service) provides access to a practically unlimited amount of storage in the Amazon cloud. Unlike competing cloud storage services, the storage can only be accessed via REST and SOAP interfaces (APIs); Amazon does not provide its own local client for synchronization. This is because Amazon S3 primarily serves as a central storage location that many other Amazon services use to store or retrieve data. An ecosystem of partners fills the gap with paid clients that add synchronization capabilities for desktop and mobile operating systems. Using Amazon’s own AWS Management Console, folders and files can also be accessed via a web interface.

With the API, data can be stored, read and deleted as objects in the Amazon cloud. Objects are organized in buckets (folders); a single upload operation is limited to 5 GB per object. Authentication mechanisms ensure that the data is protected from unauthorized third parties: objects can be marked for private or public access, and different user access rights can be assigned to them.
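
A minimal sketch of this object model with today’s boto3 Python SDK (the bucket name, keys and local file are assumptions; valid AWS credentials must already be configured):

    # Minimal sketch of the S3 object model with boto3.
    # Bucket, keys and the local file are assumptions.
    import boto3

    s3 = boto3.client("s3", region_name="eu-west-1")

    # Store an object in a bucket and read it back.
    with open("q3.pdf", "rb") as f:
        s3.put_object(Bucket="example-backups", Key="reports/q3.pdf", Body=f)
    data = s3.get_object(Bucket="example-backups", Key="reports/q3.pdf")["Body"].read()

    # Access control: mark the object as publicly readable, then remove it.
    s3.put_object_acl(Bucket="example-backups", Key="reports/q3.pdf", ACL="public-read")
    s3.delete_object(Bucket="example-backups", Key="reports/q3.pdf")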

Amazon S3 pricing varies by the region in which the data is stored. One GB of storage costs $0.095 per month for the first TB in the EU region. In addition, outgoing data transfer is charged: up to 10 TB per month, the traffic costs $0.12 per GB.
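
A rough back-of-the-envelope estimate at these list prices, with assumed monthly volumes:

    # Back-of-the-envelope estimate at the list prices above; the volumes
    # (500 GB stored, 100 GB outgoing per month) are assumptions.
    storage_gb, egress_gb = 500, 100
    monthly_cost = storage_gb * 0.095 + egress_gb * 0.12
    print(f"approx. ${monthly_cost:.2f} per month")   # approx. $59.50 per month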

Many other cloud storage services use Amazon S3 to store their user data, including Dropbox, Bitcasa and Ubuntu One.

Strengths

  • The API is the de facto standard in the market.
  • Very high scalability.
  • Very good track record.

Weaknesses

  • No clients of its own.
  • The pay-per-use model requires strict cost control.

ownCloud

Like TeamDrive, ownCloud is a file sharing and synchronization solution. It is aimed at companies and organizations that want to keep their data under their own control and not rely on external cloud storage services. The core of the application is the ownCloud server, which allows the software, along with the ownCloud clients, to be integrated seamlessly into the existing IT infrastructure and enables the use of existing IT management tools. ownCloud acts as a local directory that mounts different storage back ends, so the files are available to all employees on all devices. In addition to local storage, directories can be connected via NFS and CIFS.

The ownCloud functions form a set of add-ons that are integrated directly into the system. These include a file manager, a contact manager, extensions for OpenID and WebDAV, and a browser plugin for viewing documents such as ODF and PDF. Further applications for enterprise collaboration are available on ownCloud’s own marketplace. Files can be uploaded using a browser or synchronized with clients for desktop and mobile operating systems.
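
Because the server speaks WebDAV, uploads can also be scripted directly. A minimal sketch with Python’s requests library follows; the server URL, credentials and WebDAV path are assumptions and may differ between ownCloud versions and setups.

    # Minimal sketch: file upload and folder listing over ownCloud's WebDAV
    # interface. Server URL, credentials and path are assumptions.
    import requests

    base = "https://cloud.example.com/owncloud/remote.php/webdav"
    auth = ("alice", "secret")

    with open("minutes.odt", "rb") as f:
        resp = requests.put(f"{base}/Documents/minutes.odt", data=f, auth=auth)
    resp.raise_for_status()                    # expect 201 Created

    # Folder listings use the WebDAV PROPFIND method.
    listing = requests.request("PROPFIND", f"{base}/Documents/",
                               auth=auth, headers={"Depth": "1"})
    print(listing.status_code)                 # 207 Multi-Status with XML body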

Security is provided via a plugin for server-side encryption, which is not enabled by default. If the plugin is enabled, the files are encrypted when they are stored on the server. However, only the contents of the files are encrypted; the file names themselves are not. In addition, ownCloud relies exclusively on security „at rest“.

The biggest advantage of ownCloud is also its disadvantage. The control over its data that a company regains by using ownCloud comes at the cost of setup and operation. Administrators need sufficient knowledge of web servers such as Apache, as well as of PHP and MySQL, to run ownCloud successfully. In addition, meticulous configuration is needed, without which the expected performance of an ownCloud installation cannot be reached.

Strengths

  • Open source.
  • Variety of applications.
  • Clients support the major operating systems.

Weaknesses

  • Weak security and encryption.
  • High costs for operating one’s own ownCloud infrastructure.