Categories
Comment

Transformation: The role of IT and the CIO is changing! #tsy13

On November 6, this year's T-Systems Symposium will be held in Dusseldorf. Last year, under the slogan „Zero Distance – Working in Harmony", the focus was on a new kind of closeness to the customer, based on innovative business models and modern ICT. This year the consequences of Zero Distance will be discussed. I will be on site at the symposium to comment on and analyze the topics cloud, mobile and collaboration. To this end, corresponding articles will be published on CloudUser during this week.

It’s all about the customer

Never has the relationship with customers and end users been as important as it is today. Equally, technological access to those groups has never been as easy as it is today. The business side has recognized this as well and steadily increases its demands on IT. High availability and maximum speed under the best possible security requirements are the standards against which CIOs are measured today. At the same time, current solutions must be simple and intuitive to use in order to be prepared for the ever-increasing competition for customers.

Disruptive IT: The role of IT will have to change

Cloud computing, big data, mobility and collaboration are currently the disruptive technologies that will trigger massive change in many areas and confront IT departments and CIOs with great challenges. However, they can also be used for one's own purposes and thus create new opportunities.

I recently spoke with an analyst colleague about the role of the CIO. His amusing but serious conclusion was CIO = Career Is Over. I do not share that opinion. Nevertheless, the role of IT, and of the CIO, is exposed to change. The CIO needs to be understood as an innovation driver instead of a maintenance engineer and must be much more involved in discussions with the departments (marketing, sales, production, finance, human resources), or rather actively seek that dialog. He needs to understand what requirements are expected of him, deliver the necessary applications and infrastructures quickly, scalably and easy to use, and provide apps and tools as known from the consumer sector. This means that internal IT has to undergo a transformation process without compromising security and costs. This change will decide the future of every CIO: whether he is still regarded as Dr. No, or as a business enabler who, as a business driver, is a strategically important partner for the CEO and the departments.

Categories
Analysis

HP Cloud Portfolio: Overview & Analysis

HP's cloud portfolio consists of a range of cloud-based solutions and services. The HP Public Cloud is a true infrastructure as a service (IaaS) offering and the current core product, which is marketed generously. The HP Enterprise Services – Virtual Private Cloud provides a private cloud hosted by HP. The Public Cloud services are delivered exclusively from the US, with data centers on the west and east coasts. Even though HP's sales force is distributed around the world, English is the only supported language.

Portfolio

The IaaS core offering includes compute power (HP Cloud Compute), storage (HP Cloud Storage) and network capacity. Furthermore, the value-added services HP Cloud Load Balancer, HP Cloud Relational DB, HP Cloud DNS, HP Cloud Messaging, HP Cloud CDN, HP Cloud Object Storage, HP Cloud Block Storage and HP Cloud Monitoring are available, with which a virtual infrastructure for one's own applications and services can be built.

The HP Cloud infrastructure is based on OpenStack and is multi-tenant. The virtual machines are virtualized with KVM and can be booked in fixed sizes (Extra Small to Double Extra Large) per hour. The local storage of a virtual machine is not persistent. Long-term data can be stored on and attached via an independent block storage. Own virtual machine images cannot be uploaded to the cloud. The load balancer is currently still in a private beta. The infrastructure spans multiple fault domains, which is reflected in the service level agreement. Multi-factor authentication is currently not offered.
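
Since the HP Public Cloud is based on OpenStack, its resources can be addressed with the standard OpenStack client libraries. The following is only a minimal sketch, not HP-specific documentation: it boots a virtual machine with python-novaclient and attaches a persistent block storage volume; the endpoint, credentials, image and flavor names are hypothetical placeholders.

# Minimal sketch: boot a VM and attach persistent block storage via
# the OpenStack APIs (python-novaclient). Endpoint, credentials and
# image/flavor names are hypothetical placeholders.
from novaclient import client

nova = client.Client("2", "username", "password", "project-id",
                     "https://identity.example.com/v2.0/")

flavor = nova.flavors.find(name="standard.xsmall")  # e.g. an "Extra Small" size
image = nova.images.find(name="Ubuntu 12.04 LTS")   # a provider-supplied image
server = nova.servers.create("app-server", image, flavor)

# The local disk is ephemeral; long-term data belongs on block storage.
volume = nova.volumes.create(size=100, display_name="app-data")
nova.volumes.create_server_volume(server.id, volume.id, "/dev/vdb")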

The HP Enterprise Cloud Services offer a variety of solutions geared to businesses, including recovery as a service, a dedicated private cloud and a hosted private cloud. The application services include solutions for collaboration, messaging, mobility and unified communications. Specific business applications include HP Enterprise Cloud Services for Microsoft Dynamics CRM, HP Enterprise Cloud Services for Oracle and HP Enterprise Cloud Services for SAP. Professional services complement the Enterprise Cloud Services portfolio. The Enterprise Cloud Services – Virtual Private Cloud is a multi-tenant private cloud hosted by HP and is aimed at SAP, Oracle and other enterprise applications.

Analysis

HP has a lot of experience in building and operating IT infrastructures and pursues the same objectives in the areas of public and private cloud infrastructures. For this, HP relies on its own hardware components and an extensive partner network. HP has a global sales force and an equally large marketing budget and is therefore able to reach a wide variety of customers, even if the data centers for the Public Cloud are located exclusively in the US.

In recent years HP has invested a lot of effort and development. Nevertheless, the HP Public Cloud compute service has only been available to the general public since December 2012. Due to this, HP has no significant track record for its Public Cloud. There is only limited interoperability between the HP Public Cloud, based on OpenStack, and the private cloud offerings (HP CloudSystem, HP Converged Cloud), based on HP Cloud OS. Since the HP Public Cloud does not offer the ability to upload one's own virtual machine images via self-service, customers currently cannot transfer workloads from the private cloud to the Public Cloud, even though the private cloud is also based on OpenStack.

INSIGHTS Report

The INSIGHTS Report „HP Cloud Portfolio – Overview & Analysis“ can be downloaded for free as a PDF.

Categories
Comment

Cloud PR Disaster: Google's light-heartedness destroys trust.

It is common in companies that only certain spokespersons are chosen who may speak in public about the company. And it is tragic when those favored few make statements that lead to question marks and uncertainty. Google has now committed such a faux pas for the second time within a short period. After Cloud Platform manager Greg DeMichillie made peculiar comments on the long-term availability of the Google Compute Engine, Google CIO Ben Fried has now commented on Google's own use of the cloud.

We’re the good guys – the others are evil

In an interview with AllThingsD, Google CIO Ben Fried talked about how Google deals with bring your own device and the use of external cloud services. As any IT manager may have noticed, Google has been promoting its Google Apps for Business solution by hook or by crook for quite some time. All the more surprising is Fried's statement regarding the use of Dropbox, which Google strictly prohibits for internal purposes.

"The important thing to understand about Dropbox," […] "is that when your users use it in a corporate context, your corporate data is being held in someone else’s data center."

Right, if I do not save my data on my own servers but with Dropbox, then it is stored in someone else's data center. To be more precise, in the case of Dropbox, on Amazon S3. The same applies if I store my data on Google Drive, Google Apps or the Google Cloud Platform. Where is the data located then? Right, at Google. That is simply what the cloud model entails.

Fried, of course, like DeMichillie before him, didn't mean it like that and corrected himself by e-mail via AllThingsD.

Fried says he meant that the real concern about Dropbox and other apps is more around security than storage. “Any third-party cloud providers that our employees use must pass our thorough security review and agree under contract to maintain certain security levels,”

So Fried was actually talking about the security of Dropbox and other cloud services, not about where the data is stored.

Google is a big kid

I'm not sure what to make of Google. But one thing is clear: professional corporate communication looks different. The same applies to building trust among corporate customers. Google is undoubtedly an innovative company, if not the world's most innovative company. But the childlike light-heartedness that Google and its employees need to continually develop new and interesting ideas and technologies is also its greatest weakness. It is this degree of naivety in external communications that will make it difficult for Google in the future if nothing fundamentally changes, at least when it comes to having a say in the sensitive market for corporate customers. The major players, most notably Microsoft, VMware, IBM, HP and Oracle, know what businesses need to hear in order to appear attractive. And that does not include statements like those of a Greg DeMichillie or a Ben Fried.

Another interesting comment can be found on Ben Kepes' Forbes article „Google Shoots Itself In The Foot. Again".

„[…]Do you really think that Google management really cares about cloud app business or its customer base? Somebody at Google said that they have the capacity they built for themselves and they have the engineering talent so why not sell it. So Brin and Page shoke their heads and they was the last they ever wanted to hear about it. There is nothing exciting about this business, they do not want the responsibilites that come with this client base and they really don’t care. I bet they shut it down."

Categories
Analysis

Multi-Cloud is "The New Normal"

Not least, the Nirvanix disaster has shown that one should not rely on a single cloud provider. But risk-spreading aside, a multi-cloud strategy is a recommended approach that is already being practiced, consciously or unconsciously.

Hybrid or multi-cloud?

What does multi-cloud actually mean? And what is the difference from a hybrid cloud? Well, by definition a hybrid cloud connects a private cloud or an on-premise IT infrastructure with a public cloud to supply the local infrastructure with additional resources on demand. These resources can be compute power and storage, but also services or software. If a local email system is integrated with a SaaS CRM system, one can already speak of a hybrid cloud. That means a hybrid cloud does not just stand for IaaS or PaaS scenarios.

The multi-cloud approach extends the hybrid cloud idea by the number of connected clouds. Strictly speaking, these can be n clouds, integrated in some form. For example, cloud infrastructures are connected so that applications can use different infrastructures or services in parallel, or depending on the workload or the current price. Even the parallel or distributed storage of data across multiple clouds is conceivable, to ensure the availability and redundancy of the data. At the moment multi-cloud is intensively discussed in the IaaS area. A look at Ben Kepes' and Paul Miller's Mapping Session on multi-cloud, as well as Paul's Sector RoadMap: Multicloud management in 2013, is therefore recommended.
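
How such an n-cloud integration can look in practice is illustrated by a library like Apache Libcloud: one API, several providers behind it. A minimal sketch with hypothetical credentials; the chosen providers are interchangeable:

# Minimal sketch: one code path, several clouds behind it
# (Apache Libcloud). Credentials are hypothetical placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

ec2 = get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY")
rackspace = get_driver(Provider.RACKSPACE)("username", "api_key")

# The same operations work against both infrastructures, so workloads
# can be placed in parallel or depending on load and price.
for driver in (ec2, rackspace):
    print(driver.list_nodes())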

What is often neglected is that multi-cloud has a special importance in the SaaS area. The number of new SaaS applications is growing from day to day, and with it the demand to integrate these varying solutions and let them exchange data. Today, the cloud market is moving uncontrolled in the direction of isolated applications, each of which offers added value but in the end leads to many small data silos. Enterprises already fought this kind of development in vain in pre-cloud times.

Spread the risk

Even if cloud marketing always promises the availability and security of data, systems and applications, the responsibility for ensuring this lies in one's own hands (this refers to an IaaS public cloud). Although cloud vendors mostly provide the ways and means in IaaS, the customer has to act on its own. Outages like those known from the Amazon Web Services, or unpredictable closures like Nirvanix, should lead to more sensitivity when using cloud services. The risk needs to be spread consciously: not all eggs should be put into one basket, but strategically distributed over several.
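
Transferred to cloud storage, consciously spreading the eggs can be as simple as writing every object to two independent providers. A minimal sketch using Apache Libcloud's storage API; credentials, container and file names are invented:

# Minimal sketch: store the same object redundantly with two providers
# (Apache Libcloud storage API). All names are hypothetical.
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

s3 = get_driver(Provider.S3)("ACCESS_KEY", "SECRET_KEY")
azure = get_driver(Provider.AZURE_BLOBS)("account", "storage_key")

# If one provider fails or shuts down, the data still exists elsewhere.
for driver in (s3, azure):
    container = driver.get_container("backup")
    driver.upload_object("report.pdf", container, "report.pdf")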

Best-of-Breed

The best-of-breed strategy is the most widespread approach within enterprise IT architectures. Here, multiple integrated industry solutions are used for various areas within a company. The idea behind best-of-breed is to use the best solution in each case, something an all-in-one solution usually cannot offer. This means one assembles the best services and applications for one's purpose. Following this approach in the cloud, one is already in a multi-cloud scenario, which means that approximately 90 percent of all companies using cloud services are multi-cloud users. Whether the cloud services in use are integrated remains doubtful.

What is recommended on the one hand is, on the other hand, unavoidable in many cases. There are already a few good all-in-one solutions on the market. Nevertheless, most pearls are implemented independently and must be combined with other solutions, for example email and office/collaboration with CRM and ERP. With respect to risk this has its advantages, especially in the cloud: if a provider fails, only a partial service is unavailable and not the entire productivity environment.

Avoid data silos: APIs and integration

Cloud marketplaces attempt to enable such best-of-breed approaches by grouping individual cloud solutions into different categories, offering companies a broad portfolio of different solutions with which a cloud productivity suite can be put together.

Nevertheless, even highly curated and supposedly integrated marketplaces reveal a crucial weakness: massive integration problems. There are many individual SaaS applications that do not interact. This means that there is no common database, and, for example, the email service cannot access the data in the CRM service and vice versa. This creates, as described above, individual data silos and isolated applications within the marketplace.

This example also illustrates the biggest problem with a multi-cloud approach: the integration of the many different and usually independently operating cloud services and their interfaces with each other.
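
In the simplest case, such an integration means reading records from one service's API and writing them into the other, because the marketplace provides no common database. A purely illustrative sketch; both endpoints, the credentials and the fields are invented:

# Purely illustrative sketch of bridging two SaaS data silos over
# their REST APIs. Endpoints, credentials and fields are invented.
import requests

CRM_API = "https://crm.example.com/api/contacts"
MAIL_API = "https://mail.example.com/api/recipients"

# Read the contacts from the CRM service ...
contacts = requests.get(CRM_API, auth=("user", "secret")).json()

# ... and replicate them into the email service, since the two
# applications share no common database.
for contact in contacts:
    requests.post(MAIL_API, auth=("user", "secret"),
                  json={"name": contact["name"], "email": contact["email"]})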

Multi-cloud is „The New Normal“ in the cloud

The topic of multi-cloud is currently hotly debated, especially in the IaaS environment, as a means to spread risk and take advantage of the costs and benefits of different cloud infrastructures. But in the SaaS environment, too, the subject must be given greater importance in order to avoid data silos and isolated applications in the future, to simplify integration and to support companies in the adoption of their best-of-breed strategy.

Notwithstanding these expectations, multi-cloud use is already a reality: companies use multiple cloud solutions from many different vendors, even if the services are not yet (fully) integrated.

Categories
Analysis

Fog Computing: Data, information, applications and services need to be delivered more efficiently to the end user

You read that correctly: this is not about CLOUD computing but FOG computing. Now that the cloud is well on its way to broad adoption, new concepts follow that enhance the utilization of scalable and flexible infrastructures, platforms, applications and further services, to ensure faster delivery of data and information to the end user. This is exactly the core function of fog computing. The fog ensures that cloud services, compute, storage, workloads, applications and big data are provided at any edge of a network (the Internet) in a truly distributed way.

What is fog computing?

The fog has the task of delivering data and workloads closer to the user, who is located at the edge of a data connection. In this context one also speaks of „edge computing". The fog is organizationally located below the cloud and serves as an optimized transfer medium for services and data within the cloud. The term „fog computing" was coined by Cisco as a new paradigm that is meant to support distributed devices during wireless data transfer within the Internet of Things. Conceptually, fog computing builds upon existing and common technologies like content delivery networks (CDN), but based on cloud technologies it is meant to ensure the delivery of more complex services.

As more and more data must be delivered to an ever-growing number of users, concepts are necessary that extend the idea of the cloud and empower companies and vendors to provide their content to the end user over a widely distributed platform. Fog computing should help to move the distributed data closer to the end user, thus decreasing latency and the number of required hops, and thereby better support mobile computing and streaming services. Besides the Internet of Things, the rising demand of users to access data at any time, from any place and with any device is another reason why the idea of fog computing will become increasingly important.
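
The basic mechanics behind this can be sketched in a few lines: requests are not sent to one central data center but to the nearest of many edge nodes, which shortens the path and with it the latency. A purely illustrative sketch; the nodes and coordinates are invented:

# Illustrative sketch of the core fog/edge idea: route each request to
# the geographically nearest edge node instead of one central data
# center. Node locations are invented for the example.
import math

EDGE_NODES = {
    "frankfurt": (50.11, 8.68),
    "new-york": (40.71, -74.01),
    "singapore": (1.35, 103.82),
}

def nearest_edge(user_lat, user_lon):
    # Naive distance metric; a real system would also consider load,
    # link quality and the cache state of each node.
    return min(EDGE_NODES,
               key=lambda n: math.hypot(EDGE_NODES[n][0] - user_lat,
                                        EDGE_NODES[n][1] - user_lon))

print(nearest_edge(48.14, 11.58))  # a user in Munich -> "frankfurt"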

What are use cases of fog computing?

One should not be too confused by this new term. Fog computing may be new terminology, but looking behind the curtain it quickly becomes apparent that this technology is already used in modern data centers and the cloud. A look at a few use cases illustrates this.

Seamless integration with the cloud and other services

The fog is not meant to replace the cloud. Based on fog services, the cloud is enhanced by isolating the user data that is located exclusively at the edge of a network. From there, administrators should be able to connect analytical applications, security functions and further services directly to the cloud. The infrastructure is still based entirely on the cloud concept but extends to the edge with fog computing.

Services that build vertically on top of the cloud

Many companies and various services already use the ideas of fog computing by delivering extensive content to their customers in a targeted manner. These include, among others, webshops and providers of media content. A good example is Netflix, which is thereby able to reach its numerous globally distributed customers. With data management in only one or two central data centers, the delivery of its video-on-demand service would not be efficient enough. Fog computing thus makes it possible to provide very large amounts of streamed data by delivering the data performantly into the direct vicinity of the customer.

Enhanced support for mobile devices

With the steady growth of mobile devices and data, administrators gain more control over where users are located at any time, from where they log in and how they access information. Besides higher speed for the end user, this leads to a higher level of security and data privacy, since data can be controlled at various edges. Moreover, fog computing allows better integration with several cloud services and thus ensures an optimized distribution across multiple data centers.

Setting up a tight geographical distribution

Fog computing extends existing cloud services by spanning an edge network that consists of many distributed endpoints. This tightly geographically distributed infrastructure offers advantages for a variety of use cases. These include faster collection and analysis of big data, better support for location-based services since entire WAN links can be better bridged, as well as the capability to evaluate data at massive scale in real time.

Data is closer to the user

The amount of data generated by cloud services requires caching of the data, or other services that take care of this task. These services are located close to the end user to improve latency and optimize data access. Instead of storing data and information centrally in a data center far away from the user, the fog ensures the direct proximity of the data to the customer.
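
Reduced to its essence, this is a cache at the edge: content is served from a node close to the user and fetched from the central data center only when it is missing there. A minimal, purely conceptual sketch:

# Minimal conceptual sketch of an edge cache: serve data close to the
# user, fetch from the faraway central data center only on a miss.
cache = {}

def fetch_from_origin(key):
    # stand-in for the expensive call to the central data center
    return "content for %s" % key

def get(key):
    if key not in cache:       # miss: one slow trip to the origin
        cache[key] = fetch_from_origin(key)
    return cache[key]          # hit: answered directly at the edge

print(get("video-manifest"))   # first call goes to the origin
print(get("video-manifest"))   # second call is served at the edge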

Fog computing makes sense

You can think about buzzwords whatever you want. It only becomes interesting once you take a look behind the curtain. The more services, data and applications are delivered to the end user, the more vendors face the task of finding ways to optimize the delivery processes. This means that information needs to be delivered closer to the user while latency is reduced, in order to be prepared for the Internet of Things. There is no doubt that the consumerization of IT and BYOD will increase the use, and therefore the consumption, of bandwidth.

More and more users rely on mobile solutions to run their business and to balance it with their personal lives. Increasingly rich content and data are delivered over cloud computing platforms to the edges of the Internet, where at the same time the needs of the users keep growing. With the increasing use of data and cloud services, fog computing will play a central role and help to reduce latency and improve quality for the user. In the future, besides ever larger amounts of data, we will also see more services that rely on data and that must be provided more efficiently to the user. With fog computing, administrators and providers gain the capabilities to provide their customers with rich content faster, more efficiently and, above all, more economically. This leads to faster access to data, better analysis opportunities for companies and equally to a better experience for the end user.

Primarily, Cisco will want to coin the term fog computing in order to use it for a large-scale marketing campaign. However, at the latest when the fog generates a buzz similar to the cloud, we will find more and more CDNs and other vendors offering something in this direction as fog providers.

Categories
Comment

AWS Activate. Startups. Market share. Any questions?

Startups are the foundation of the success of the Amazon Web Services. With them, the IaaS market leader has grown. With an official initiative, AWS is now building out this target customer market and will leave the rest of the IaaS providers even further out in the rain, because with the exception of Microsoft and Rackspace, nothing is happening in this direction.

AWS Activate

It's no secret that AWS has specifically „bought" the favor of startups. Apart from the attractive service portfolio, founders who are supported by an accelerator obtain AWS credits worth thousands of dollars to launch their ideas on the cloud infrastructure.

With AWS Activate, Amazon has now launched an official startup program that serves as a business enabler for developers and startups. It consists of the „Self-Starter Package" and the „Portfolio Package".

The Self-Starter Package is aimed at startups trying their luck on their own and includes the well-known free AWS offering that any new customer can take advantage of. In addition, one month of AWS Developer Support, „AWS Technical Professional" training and discounted access to solutions from SOASTA or Opscode are included. The Portfolio Package is for startups that are part of an accelerator program. These get AWS credits worth between $1,000 and $15,000 and one month to one year of free AWS Business Support. „AWS Technical Professional" and „AWS Essentials" training are also included.

Any questions about the high market share?

With this initiative, the Amazon Web Services will keep setting the pace in the future. With the exception of Microsoft, Google and Rackspace, AWS has positioned itself as the only platform for users who want to implement their own ideas. All other vendors rather embrace corporate customers, who are more hesitant to jump on the cloud train, and do not come close to offering the possibilities of the AWS cloud infrastructure. Instead of orienting their portfolios toward cloud services, they try to catch customers with compute power and storage. But infrastructure from the cloud means more than just infrastructure.

Categories
Analysis

Nirvanix. A living hell. Why multi-cloud matters.

Some will certainly have heard of it: Nirvanix has transported itself to the afterlife. The enterprise cloud storage service, which had a wide-ranging cooperation with IBM, suddenly announced its closure on September 16, 2013, and initially granted its existing customers a period of two weeks to migrate their data. The period has since been extended to October 15, 2013, as customers need more time for migration. One Nirvanix customer reported having stored 20 petabytes of data there.

The end of Nirvanix

Out of nowhere, enterprise cloud storage provider Nirvanix announced its end on September 16, 2013. To date it has not been declared how it came to this. Rumor has it that a further financing round failed. Other reasons seemingly lie in faulty management; the company has had five CEOs since 2008. One should also not forget the strong competition in the cloud storage environment. On the one hand, many vendors have tried their luck in recent years. On the other hand, the two top dogs, Amazon Web Services with Amazon S3 and Microsoft with Azure Storage, reduce the prices of their services, which are also enterprise-ready, in regular cycles. Even being named one of the top cloud storage service providers by Gartner couldn't help Nirvanix.

Particularly explosive is the fact that in 2011 Nirvanix signed a five-year contract with IBM to expand IBM's SmartCloud Enterprise storage services with cloud-based storage. As IBM has announced, data stored on Nirvanix will be migrated to the IBM SoftLayer object storage. As an IBM customer, I would still carefully inquire about my stored data.

Multi-Cloud: Spread your eggs over multiple nests

First, a salute to the venture capital community. If it's true that Nirvanix had to stop its service due to a failed financing round, then we see what responsibility lies in their hands. Say no more.

How should a cloud user deal with a horror scenario like Nirvanix? Well, as you can see, a good customer base and partnerships with global players are no guarantee that a service survives long term. Even Google currently plays its cloud strategy on the backs of its customers and makes no binding commitment to the long-term existence of services on the Google Cloud Platform, such as the Google Compute Engine (GCE). On the contrary, it is suspected that the GCE will not survive as long as other well-known Google services.

Backup and Multi-Cloud

Even if the cloud storage provider has to ensure the availability of the data, as a customer you have a duty of care: you must stay informed about the state of your data and, even in the cloud, take care of redundancy and backups. Meanwhile, the most popular cloud storage services have integrated functions to make seamless backups of the data and create multiple copies.

Although we are in the era of the cloud, one rule still applies: backup! You should therefore ensure that a constantly checked(!) and reliable backup and recovery plan exists. Furthermore, sufficient bandwidth must be available to move the data as quickly as possible. This should also be checked at regular intervals with a migration audit, in order to be able to act quickly in the worst case.
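
Part of such a constantly checked backup plan is verifying that the copy in the cloud actually matches the local data. A minimal sketch using boto and Amazon S3; bucket and file names are hypothetical placeholders:

# Minimal sketch of a verified backup to Amazon S3 (boto). Bucket and
# file names are hypothetical placeholders.
import hashlib
import boto

conn = boto.connect_s3()  # reads AWS credentials from the environment
bucket = conn.get_bucket("my-backup-bucket")

key = bucket.new_key("backups/database.dump")
key.set_contents_from_filename("database.dump")

# Verify the copy: for simple (non-multipart) uploads the ETag that S3
# reports is the MD5 checksum of the stored object.
local_md5 = hashlib.md5(open("database.dump", "rb").read()).hexdigest()
remote_md5 = bucket.get_key("backups/database.dump").etag.strip('"')
assert local_md5 == remote_md5, "backup copy does not match local data"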

Simply moving 20 petabytes of data is no easy task. You therefore have to think about other approaches. Multi-cloud is a concept that will gain more and more importance in the future. Here, data and applications are distributed (in parallel) across multiple cloud platforms and providers. My analyst colleagues and friends Paul Miller and Ben Kepes already discussed this during their mapping session at GigaOM Structure in San Francisco. Paul subsequently wrote an interesting sector roadmap report on multi-cloud management.

Even though with Scalr, CliQr, RightScale and Enstratius some management platforms for multi-cloud already exist, we still find ourselves at a very early stage in terms of usage. schnee von morgen webTV by Nikolai Longolius, for example, runs primarily on the Amazon Web Services and has developed a native web application 1:1 for the Google App Engine as a fallback scenario. This is not a full multi-cloud approach, but it shows the importance of achieving cross-provider high availability and scalability with less effort. As Paul's Sector Roadmap shows, it is in particular the compatibility of the APIs that must be given great importance. In the future, companies will no longer be able to rely on a single provider, but will distribute their data and applications across multiple providers to pursue a best-of-breed strategy and to deliberately spread the risk.
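
The fallback pattern behind such a setup is simple: check the health of the primary deployment and redirect to the standby on the second platform if it is unreachable. A purely illustrative sketch; both endpoints are invented:

# Illustrative sketch of a cross-provider fallback: prefer the primary
# deployment, switch to the standby if it is unreachable. Both
# endpoints are invented examples.
import requests

PRIMARY = "https://app.example-on-aws.com/health"    # e.g. AWS deployment
FALLBACK = "https://app-example.appspot.com/health"  # e.g. App Engine copy

def active_endpoint():
    for url in (PRIMARY, FALLBACK):
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return url
        except requests.RequestException:
            continue  # endpoint unreachable, try the next platform
    raise RuntimeError("no deployment reachable on any platform")

print(active_endpoint())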

This should also be taken into consideration when „only" storing data in the cloud. The golden nest is the sum of many distributed ones.

Categories
Comment

Verizon Cloud: Compute and Storage. Boring. Next, please!

There are those moments in which I read a press release and think: „Wow, this could be exciting." But it is mostly these very press releases I do not want to continue reading after the second paragraph. Oh, by the way, Verizon has reinvented the enterprise cloud and thus wants to compete with the Amazon Web Services. Oh well, and Verizon offers compute power and storage. These are truly promising services for succeeding in the IaaS market and turning it upside down. Except that 100% of all IaaS providers have exactly these services in their portfolio and do not differentiate themselves from one another with them.

Verizon Cloud

An excerpt from the press release:

“Verizon created the enterprise cloud, now we’re recreating it,” said John Stratton, president of Verizon Enterprise Solutions. “This is the revolution in cloud services that enterprises have been calling for. We took feedback from our enterprise clients across the globe and built a new cloud platform from the bottom up to deliver the attributes they require.”

And then this:

„Verizon Cloud has two main components: Verizon Cloud Compute and Verizon Cloud Storage. Verizon Cloud Compute is the IaaS platform. Verizon Cloud Storage is an object-based storage service.“

Verizon seriously offers compute power and storage, something that 100% of all IaaS providers on the market have in their portfolio, and sells it as a REVOLUTION? Not really, right?

A salute to ProfitBricks and CloudSigma

In addition, Verizon praises the granularity of its infrastructure resources:

„Previously, services had pre-set configurations for size (e.g. small, medium, large) and performance, with little flexibility regarding virtual machine and network performance and storage configuration. No other cloud offering provides this level of control."

Wrong! ProfitBricks and CloudSigma have offered this level of granularity and flexibility from the beginning. This property of the Verizon cloud is not a revolution either.

What is likewise not a revolution, but could appeal to current VMware users, is the compatibility with the VMware hypervisor. A promising market, because many companies continue to use the VMware hypervisor. But there, VMware also wants to have a say with its own vCloud Hybrid Service.

Fighting a war with the wrong weapons

It's amazing how many providers try to fight the Amazon Web Services with new offerings and bring only a blunt knife rather than a nuclear bomb, especially since Verizon developed the solution from scratch.

For me, this slowly raises the question of when we will see the first Nirvanix among the IaaS providers that only offer compute power and storage. After all, more and more candidates are positioning themselves.

Next, please! (Viva la Revolution.)

Categories
Comment

Feedback from Google's PR agency on the long-term availability of the Google Compute Engine

After I called the long-term availability of Google's Compute Engine (GCE) into question, Google's PR agency contacted me to understand the motivation for the article. In the article I had reacted to a GigaOM interview with Google's Cloud Platform manager Greg DeMichillie, who wouldn't guarantee the long-term availability of GCE.

Background

In a GigaOM interview, Google Cloud Platform manager Greg DeMichillie responded to a question on the long-term availability of Google's cloud services and answered unexpectedly, and not in the interest of the customer.

„DeMichillie wouldn’t guarantee services like Compute Engine will be around for the long haul, but he did try to reassure developers by explaining that Google’s cloud services are really just externalized versions of what it uses internally. ”There’s no scenario in which Google suddenly decides, ‘Gee, I don’t think we need to think about storage anymore or computing anymore.“

DeMichillie did qualify in the end that Google wouldn't shut down its cloud services in a kamikaze operation. However, it's an odd statement about a service that has been on the market for a relatively short time.

Feedback from Google's PR agency

First, I want to clarify that this was not a call intended to influence me, but to understand how the article came about. Google's PR agency said that DeMichillie was apparently misunderstood and that my article only highlights this negative statement and neglects the positive themes. Google is investing heavily in infrastructure resources today, and there are no signs or reasons that the Google Compute Engine will be closed.

My statement

It was and is never my intention to cast a shadow on Google (or any other vendor). This is what I told the PR agency. But at the end of the day the user needs to be advised and equally protected. Moreover, I am an analyst and advisor and counsel companies that rely on my judgment. For this reason I need to react to such statements and include them in my decision matrix, especially when they come directly from an employee of a vendor. What should I do if I recommend the use of the GCE because the technical aspects and requirements fit, but Google subsequently announces the closure of the service? For this reason I react extremely sensitively to such topics. In addition, Google has not covered itself in glory in recent months and years when it comes to maintaining its service portfolio for the long term. The end of Google Reader caused more negative reactions among users than the current NSA scandal, and that, nota bene, was a free service for consumers. With the Google Compute Engine we are talking about a service that mainly addresses companies. For companies, a lot of money is at stake when they bring their workloads to the GCE. If the service is suddenly closed, this generates, depending on the respective company, an incalculable economic damage from migrating the data and applications. Google should consider this when making decisions, even if this was just one statement made as part of an interview. It does not create trust in the cloud portfolio, and it fits well with the experience of the recent past, when Google cleaned out its room.

Categories
Comment

"Amazon is just a webshop!" – "Europe needs bigger balls!"

This year I had the honor of hosting the CEO Couch at the Open-Xchange Summit 2013. This is a format where top CEOs are confronted with provocative questions on a specific topic and have to answer to the point. Among the guests on the couch were Herrmann-Josef Lamberti (EADS), Dr. Marten Schoenherr (Deutsche Telekom Laboratories), Herbert Bockers (Dimension Data) and Rafael Laguna (Open-Xchange). This year's main topic was cloud computing and how German and European providers can assert themselves against the allegedly overwhelming competition from the US. I have picked out two statements made during the CEO Couch that I would like to discuss critically.

Amazon is just a webshop!

One statement gave me worry lines. On the one hand, VMware has already demonstrated in a similar way that it seemingly underestimates its supposedly biggest competitor; on the other hand, the statement is absolutely wrong. To still call Amazon a webshop today, one must close one's eyes very tightly and hide about 90% of the company. Amazon is more than just a webshop today. Amazon is a technology company and provider. Rafael illustrated this very well during his keynote. There are currently three vendors who have managed to set up their own closed ecosystem, from web services through content to devices: Google, Apple and Amazon.

Besides the webshop Amazon.com, further technologies and services belong to the company, including the Amazon Web Services, content distribution for digital books, music and movies (LoveFilm, Amazon Instant Video), the ebook readers Kindle and Kindle Fire (with its own Android version), the search engines A9.com and Alexa Internet, and the movie database IMDb.

On top of that, if you take a look at how Jeff Bezos leads Amazon (e.g. the Kindle strategy: sell at cost price, generate revenue through content), he focuses on long-term growth and market share rather than on achieving quick profits.

To anyone who wants to get a good impression of Jeff Bezos' mindset, I recommend the Fireside Chat with Werner Vogels at the 2012 AWS re:Invent. The 40 minutes are worth it.

Europe needs bigger balls!

The couch completely agreed. Although Europe has the potential and the companies to play its role in the cloud technically and innovatively, apart from privacy issues we eat humble pie compared to the United States, or better expressed: „Europe needs bigger balls!". This is due, on the one hand, to the capital that investors in the U.S. are willing to invest, and on the other hand to the mentality of taking risks, being allowed to fail, and thinking big and long term. Here, European entrepreneurs and investors in particular can learn from Jeff Bezos. It's about long-term success, not about short-term money.

This is, in my opinion, one of the reasons why we will never see a European company that is able to hold a candle to, for example, the Amazon Web Services (AWS). The potential candidates like T-Systems, Orange and other major ICT providers that have the data centers, infrastructure, personnel and necessary knowledge rather focus on the target customers they have always served: corporate customers. The public cloud and AWS-like services for startups and developers, however, are completely neglected. On one side this is fine, since enterprise-level cloud offerings and virtual private or hosted private cloud solutions are required to meet the needs of enterprise customers. On the other hand, nobody should be surprised that AWS currently has the largest market share and is seen as an innovation machine. The existing ICT providers are not willing to change their current business, or to expand it with new models, in order to address other attractive audiences.

However, as my friend Ben Kepes has also well described, Amazon is currently quite popular and by far the market leader in the cloud. But there is still enough room in the market for other providers who can serve use cases and workloads that Amazon cannot, or whose customers simply decide against the use of AWS because it's too complicated, too costly or too expensive for them, or simply incompatible with legal requirements.

So Europe, grow bigger balls! Sufficient potential exists. After all, providers such as T-Systems, Greenqloud, UpCloud, Dimension Data, CloudSigma or ProfitBricks have competitive offerings. Marten Schoenherr told me that he and his startup are in the process of developing a Chromebook without Chrome. However, I have a feeling that Rafael and Open-Xchange (OX App Suite) have a finger in the pie.