Kategorien
Analysis

How to solve shadow IT and covered clouds #tsy13

Shadow IT, or as VMware calls it, Covered Clouds (cloud services used without IT's knowledge), is a major problem for businesses. According to a VMware study, 37 percent of Europe's leading IT decision makers suspect unrecorded spending on cloud services within their companies, and 58 percent of European knowledge workers would use unauthorized cloud services.

Pros and cons of shadow IT (Covered Clouds)

This unauthorized use also has financial consequences. Executives in the IT departments of the affected companies believe that an average of 1.6 million euros is spent without the company's approval, which represents on average 15 percent of the annual IT budget of these companies.

However, many consider this development positive: 72 percent of IT managers see it as a benefit for their company, and 31 percent of these believe that shadow IT and Covered Clouds accelerate growth and innovation. More than half reported that their company can thus respond more quickly to customer requirements.

However, there are major security concerns. Of those who do not advocate shadow IT, more than half fear increased security risks. VMware presented these insights during the T-Systems Symposium 2013 in Dusseldorf.

A method: IT-as-a-Service

IT-as-a-Service is a business model in which the IT department is managed as a separate business unit and develops products and services for its own company. Here, the IT department competes against external providers, since departments nowadays have a nearly limitless selection of other vendors on the market.

VMware has made it its mission to provide IT departments with the technical means for this and sees IT-as-a-Service as a chance to balance the ratio of maintenance to innovation at 50:50, instead of investing the majority of the budget in maintenance. Today the ratio is about 80 percent (maintenance) to 20 percent (innovation).

VMware is rightly trying to establish itself as a leading provider of IT-as-a-Service solutions. As one of the few infrastructure providers on the market with the necessary capabilities, it is able to provide the technical means for this turnaround in the enterprise. However, one must note that IT-as-a-Service is not a technical approach but must be anchored firmly in the minds of IT departments in order to be implemented successfully. Therefore, VMware can only supply the technical means that are available to initiate the change.

Service portal as middleware for employees

IT-as-a-Service is not the ultimate solution to shadow IT, but it may help to counteract this phenomenon that has grown over the years. The relevant concepts and technologies are available and need to be implemented.

Once an IT-as-a-Service philosophy has emerged within the IT department, it should start to establish its own service portal for employees, which controls access to internal and external cloud services. This can be used for infrastructure (virtual servers and storage) as well as for software and platforms. The IT department increasingly becomes a service broker and, through the use of external resources (hybrid model), is able to ensure that employees can expect a high-quality service. Thus, for example, a server can be provided to a developer within five minutes instead of several weeks. It should also be considered that developers are not only interested in computing power and storage but also need services to develop their own applications and services more easily. This can be accomplished, among other things, with the characteristics of a Platform-as-a-Service (PaaS).
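The service-broker idea can be sketched in a few lines of Python. This is a minimal, purely illustrative model; the class and provider names are invented and do not correspond to any real VMware or T-Systems API:

```python
# Hypothetical sketch of a service broker: the portal routes employee
# requests to internal IT first and falls back to external clouds
# (the hybrid model). All names here are illustrative.

class Provider:
    def __init__(self, name, offers):
        self.name = name
        self.offers = offers  # set of service types this provider delivers

    def provision(self, service_type, size):
        # In a real portal this would call the provider's API.
        return f"{service_type}/{size} provisioned on {self.name}"

class ServiceBroker:
    """Routes requests to internal IT first, external clouds second."""
    def __init__(self, internal, external):
        self.internal = internal
        self.external = external

    def request(self, service_type, size="small"):
        for provider in [self.internal] + self.external:
            if service_type in provider.offers:
                return provider.provision(service_type, size)
        raise ValueError(f"no provider offers {service_type}")

internal_it = Provider("internal-datacenter", {"vm", "storage"})
public_iaas = Provider("public-cloud", {"vm", "storage", "paas"})

broker = ServiceBroker(internal_it, [public_iaas])
print(broker.request("vm"))    # served internally
print(broker.request("paas"))  # falls through to the external cloud
```

The point of the sketch is the routing decision: the employee always talks to the portal, never directly to an outside vendor, which is exactly what takes the "shadow" out of shadow IT.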

Kategorien
Comment

CIO: Quo vadis? Cost Center or Business Enabler? #tsy13

On Tuesday I discussed the future role of IT and the CIO. Today we are in the midst of the biggest change (Disruptive IT) the IT industry has experienced since its inception. This shift has a direct impact on the CIO and thus also on the IT departments. Dr. No and the cost center were yesterday. What is required is the CIO as a business enabler who, together with his IT staff, builds new business models as a strategic partner to the CEO and the departments and establishes himself as a business driver.

Disruptive IT: Cloud, Big Data and Co. turn everything upside down

Cloud Computing, Big Data, Mobility and Collaboration are the four disruptive technologies that are currently causing a big quake and challenging CIOs. The expectations and demands of management and the departments continue to grow; employees become independent and purchase IT services past the IT department (shadow IT) because IT cannot deliver in sufficient time or quality. Nobody ever said the job of a CIO was simple, but currently it is a road to hell that everyone would like to avoid.

But that's the situation. And whoever continues to cling to the status quo will sooner or later be left behind. The business side cannot afford to tread water, and it will find ways and means to get what it needs, if it is not already on the way.

Panel: The Future Role of CIOs: Managing Costs or Enabling Business?

At the T-Systems Symposium, Stefanie Kemp (IT Governance, RWE), Prof. Dr. Michael Müller-Wünsch (CIO, Lekkerland Gruppe), Dr. Hans-Joachim Popp (CIO, Deutsches Luft- und Raumfahrtzentrum), Philipp Erler (CIO, Zalando) and Thomas Spreitzer (Chief Marketing Officer, T-Systems) discussed the future of the CIO. Is he a cost manager or an innovation driver?

It was basically an exciting panel discussion. However, it was, as expected, one-sided, since only CIOs were represented on the panel and the counterpart of the CEO was missing. This was probably the reason why little or no self-criticism was voiced by the CIOs.

Nevertheless, one could see that the CIOs were aware of what to expect. Hans-Joachim Popp of DLR made clear that existing business models are influenced by new technologies and that the CIO's job will therefore become more demanding in the future. He also criticized the fact that not everyone who can build a new business model is able to understand the critical processes behind it. Zalando CIO Philipp Erler agreed and made clear that operating an Excel sheet does not necessarily prove the skills to control a process. That is a fact I can confirm: just because an employee can use an iPhone or a SaaS application, he is not able to decide on major IT services for the enterprise. Furthermore, Erler explained the concept of the prioritization round at Zalando: if a request is not approved, a department should ask itself whether it is actually worth going its own way past the IT department. This is a possible remedy against shadow IT. The question remains whether this is of interest to the employees in the departments. After all, in 2012 about 25 percent of the IT budget was being managed outside the IT department, according to Gartner.

Lekkerland CIO Michael Müller-Wünsch sees it as a crucial factor that CIOs also get the time to act as a business enabler. Demonstrating one's own right to exist is important, but also difficult. However, the business side and IT at Lekkerland are actively working together. T-Systems CMO Thomas Spreitzer admitted that marketing readily looks past the challenges of the CIO; the main thing is to be as fast as possible. However, he also criticized the nitpicking: IT departments should focus more on rapid prototyping instead of requirements specifications.

RWE IT governance lead Stefanie Kemp asked whether IT actually has to chase every trend or whether it should rather focus on specific areas that actually help the company. To this end she sees the need for both a commodity IT and a differentiating IT within the enterprise: the part of IT that keeps things running and the part that ensures innovation. Kemp also made clear that departments are free to go their own way past her. But in the end they must also take responsibility for the pile of fragments when the integration into existing systems does not work. Furthermore, Kemp still sees a lot of homework to do on the business side before IT can become a business enabler.

CIO vs. Business: Communication is the silver bullet

Summarizing the problems the CIOs addressed during the panel, one can certainly ask how companies operate nowadays. By the end of the day one could get the impression that both sides sit protected in their ivory towers and do not really talk to each other. The reality is of course different. But for both sides, life would be easier if they exchanged information with each other transparently and at eye level. Here, too, the role model will become increasingly important to clearly define responsibilities in the future.

Conclusion: The role of the CIO will not get easier in the future. Quite the contrary. But if he and the business side work as partners, actively communicate with each other and agree to go the same way, it becomes easier for both sides.

Kategorien
Comment

Business in the Internet of Everything, Hybrid Cloud, A travelling suitcase #tsy13

During the breakout sessions at the T-Systems Symposium I attended three topics: one each on cloud, on mobile and collaboration, and on the Internet of Things. In the coming days I am going to write a comment or detailed analysis on each of these topics. However, I would like to give a sneak preview already, since these are really interesting use cases.

Business Transformation in the age of collaboration and The Internet of Things

Cisco sees its future as an enabler for the Internet of Everything (IoE). The difference to the frequently discussed Internet of Things (IoT) lies in the number of connected objects. Where IoT connects people with machines and machines with machines, IoE is about people, processes, data and things. This means there is a need for even more connectivity. Cisco expects about 50 billion smart objects worldwide to be connected with each other by 2020. Here, fog computing, which I have already introduced and analyzed, should help in the future.

Ready for Hybrid Cloud with T-Systems DSI vCloud = VMware vCloud Datacenter Services

A VMware study shows that 37 percent of leading European IT decision makers suspect unrecorded costs for cloud services used within their enterprise. In addition, 58 percent of European knowledge workers would use unapproved cloud services. VMware sees a solution in IT-as-a-Service, where IT departments position themselves as a competitor to external service providers. VMware also notes that historically grown IT silos like storage, network, server, Windows, Unix and Linux are the biggest challenges for IT-as-a-Service. Here the software-defined datacenter, which is based on the components virtual server, software-defined network and software-defined storage, should help. Together, these build the foundation to migrate workloads on demand to a certified vCloud datacenter via a hybrid cloud.

Bag2Go: The modern suitcase on a lonely journey

With Bag2Go, an intelligent suitcase, Airbus wants to separate people from their luggage. This means that a bag can take a different route to the destination than the traveller himself. For this the bag offers various characteristics: self-weighing, self-labeling, self-travelling. Even permanent tracking of the bag and its status is possible. Airbus promises that no changes need to be made to the existing infrastructure of an airport. Airbus' future goal is to establish fully interlinked transport capsules as a standard: an Internet of Things use case. Bag2Go uses the infrastructure of the T-Systems business cloud.

Kategorien
Comment

Transformation: The role of IT and the CIO is changing! #tsy13

On November 6 this year's T-Systems Symposium is held in Dusseldorf. Last year, under the slogan „Zero Distance – Working in Harmony“, a new kind of closeness to the customer based on innovative business models and modern ICT was the central theme. This year the consequences of Zero Distance will be discussed. I will be on site at the symposium to comment on and analyze the topics cloud, mobile and collaboration. Corresponding articles will be published on CloudUser during this week.

It’s all about the customer

Never has the relationship with customers and end users been as important as today. Equally, technological access to those groups has never been as easy as today. The business side has recognized this as well and steadily increases its demands on IT. High availability and maximum speed under the best possible security requirements are the claims against which CIOs are measured today. At the same time, current solutions must be simple and intuitive to use in order to be prepared for the ever-increasing competition for customers.

Disruptive IT: The role of IT will have to change

Cloud Computing, Big Data, Mobility and Collaboration are currently the disruptive technologies that will trigger a massive change in many areas and confront IT departments and CIOs with great challenges. However, these can also be used for one's own purposes and thus create new opportunities.

I recently spoke with an analyst colleague about the role of the CIO. His amusing but serious conclusion was CIO = Career Is Over. I am not of that opinion. Nevertheless, the role of IT and also of the CIO is exposed to change. The CIO needs to be understood as an innovation driver instead of a maintenance engineer and must be much more involved in discussions with the departments (marketing, sales, production, finance, personnel), or rather actively seek the dialog. He needs to understand what requirements are expected in order to deliver the necessary applications and infrastructures quickly, scalably and easy to use, and to provide apps and tools like those from the consumer sector. This means that internal IT needs to undergo a transformation process without compromising security and cost. This change will decide the future of every CIO and whether he is still regarded as Dr. No or as a business enabler who, as a business driver, is a strategically important partner for the CEO and the departments.

Kategorien
Analysis

HP Cloud Portfolio: Overview & Analysis

HP's cloud portfolio consists of a range of cloud-based solutions and services. The HP Public Cloud is a true infrastructure-as-a-service (IaaS) offering and the current core product, which is marketed generously. The HP Enterprise Services – Virtual Private Cloud provides a private cloud hosted by HP. The Public Cloud services are delivered exclusively from the US, with data centers on the west and east coasts. Even though HP's sales force is distributed around the world, English is the only supported language.

Portfolio

The IaaS core offering includes compute power (HP Cloud Compute), storage (HP Cloud Storage) and network capacity. Furthermore, the value-added services HP Cloud Load Balancer, HP Cloud Relational DB, HP Cloud DNS, HP Cloud Messaging, HP Cloud CDN, HP Cloud Object Storage, HP Cloud Block Storage and HP Cloud Monitoring are available, with which a virtual infrastructure for one's own applications and services can be built.

The HP Cloud infrastructure is based on OpenStack and is multi-tenant. The virtual machines are virtualized with KVM and can be booked in fixed sizes (Extra Small to Double Extra Large) per hour. The local storage of a virtual machine is not persistent; long-term data can be stored on and attached via an independent block storage. Own virtual machine images cannot be uploaded to the cloud. The load balancer is currently still in a private beta. The infrastructure spans multiple fault domains, which is reflected in the service level agreement. Multi-factor authentication is currently not offered.
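The fixed-size, per-hour billing model makes cost estimation straightforward. The following back-of-the-envelope sketch illustrates this; the hourly rates are hypothetical placeholders, not HP's actual prices:

```python
# Illustrative per-hour pricing for fixed VM sizes, as in the HP Public
# Cloud model. All rates below are invented for the example.

HOURLY_RATES = {           # USD per hour, placeholder values
    "extra_small": 0.04,
    "small": 0.08,
    "medium": 0.16,
    "large": 0.32,
    "extra_large": 0.64,
    "double_extra_large": 1.28,
}

def monthly_cost(size, hours=730):
    """Estimated cost of running one instance for a month (~730 hours)."""
    return round(HOURLY_RATES[size] * hours, 2)

print(monthly_cost("small"))         # one small instance, full month
print(monthly_cost("large", 100))    # a large instance for 100 hours
```

The appeal of hourly billing is visible in the second call: a burst workload that runs for 100 hours costs a fraction of a monthly commitment.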

The HP Enterprise Cloud Services offer a variety of solutions geared to businesses, including Recovery-as-a-Service, Dedicated Private Cloud and Hosted Private Cloud. The application services include solutions for collaboration, messaging, mobility and unified communications. Specific business applications include HP Enterprise Cloud Services for Microsoft Dynamics CRM, HP Enterprise Cloud Services for Oracle and HP Enterprise Cloud Services for SAP. Professional services complement the Enterprise Cloud Services portfolio. The Enterprise Cloud Services – Virtual Private Cloud is a multi-tenant private cloud hosted by HP and is aimed at SAP, Oracle and other enterprise applications.

Analysis

HP has a lot of experience in building and operating IT infrastructures and pursues the same goals in the areas of public and private cloud infrastructures. For this, HP relies on its own hardware components and an extensive partner network. HP has a global sales force and an equally large marketing budget and is therefore able to reach a variety of customers, even though the data centers for the Public Cloud are located exclusively in the US.

In recent years HP has invested much effort in development. Nevertheless, the HP Public Cloud compute service has only been available to the general public since December 2012. As a result, HP has no significant track record for its Public Cloud. There is only limited interoperability between the HP Public Cloud, which is based on OpenStack, and the private cloud offerings (HP CloudSystem, HP Converged Cloud), which are based on HP Cloud OS. Since the HP Public Cloud does not offer the ability to upload own virtual machine images via self-service, customers currently cannot transfer workloads from the private cloud to the Public Cloud, even though the private cloud is also based on OpenStack.

INSIGHTS Report

The INSIGHTS Report „HP Cloud Portfolio – Overview & Analysis“ can be downloaded for free as a PDF.

Kategorien
Comment

Cloud PR Disaster: Google's light-heartedness destroys trust.

It is common in companies that only certain spokespersons are chosen who may speak in public about the company. And it is tragic when those chosen few make statements that lead to question marks and uncertainty. Google has stepped into such a faux pas for the second time within a short period. After Cloud Platform Manager Greg DeMichillie had commented peculiarly on the long-term availability of the Google Compute Engine, Google CIO Ben Fried commented on Google's own use of the cloud.

We’re the good guys – the others are evil

In an interview with AllThingsD, Google CIO Ben Fried talked about how Google deals with bring your own device and the use of external cloud services. As any IT manager may have noticed, Google has been promoting its Google Apps for Business solution by hook or by crook for quite some time. All the more surprising is Fried's statement regarding the use of Dropbox, which Google strictly prohibits for internal purposes.

“The important thing to understand about Dropbox,” […] “is that when your users use it in a corporate context, your corporate data is being held in someone else’s data center.”

Right: if I do not save my data on my own servers but with Dropbox, then it is probably located in someone else's data center. In the case of Dropbox, to be more accurate, on Amazon S3. The same applies if I store my data on Google Drive, Google Apps or the Google Cloud Platform. Where is the data located then? Right, at Google. That is what the cloud model entails.

Fried, of course, like DeMichillie before him, didn't mean it like that and corrected himself by e-mail via AllThingsD.

Fried says he meant that the real concern about Dropbox and other apps is more around security than storage. “Any third-party cloud providers that our employees use must pass our thorough security review and agree under contract to maintain certain security levels,”

So, Fried was actually talking about the security of Dropbox and other cloud services, and not the location.

Google is a big kid

I'm not sure what to make of Google. But one thing is clear: professional corporate communication looks different. The same applies to building trust among corporate customers. Google is undoubtedly an innovative company, if not the world's most innovative company. But this childlike light-heartedness, which Google and its employees need in order to continually develop new and interesting ideas and technologies, is also its greatest weakness. It is this degree of naivety in external communications that will make it difficult for Google in the future if nothing fundamentally changes. At least when it comes to having a say in the sensitive market for corporate customers. The major players, most notably Microsoft, VMware, IBM, HP and Oracle, know what businesses need to hear in order to appear attractive. And that does not include statements like those of Greg DeMichillie or Ben Fried.

Another interesting comment comes from Ben Kepes' Forbes article „Google Shoots Itself In The Foot. Again“.

„[…] Do you really think that Google management really cares about cloud app business or its customer base? Somebody at Google said that they have the capacity they built for themselves and they have the engineering talent so why not sell it. So Brin and Page shoke their heads and they was the last they ever wanted to hear about it. There is nothing exciting about this business, they do not want the responsibilites that come with this client base and they really don’t care. I bet they shut it down.“

Kategorien
Analysis

Multi-Cloud is "The New Normal"

Not least the Nirvanix disaster has shown that one should not rely on a single cloud provider. But regardless of spreading the risk, a multi-cloud strategy is a recommended approach that is already being practiced, consciously or unconsciously.

Hybrid or multi-cloud?

What does multi-cloud actually mean? And what is the difference to a hybrid cloud? Well, by definition a hybrid cloud connects a private cloud or an on-premise IT infrastructure with a public cloud to supply the local infrastructure with further resources on demand. These resources can be compute power and storage, but also services or software. If a local email system is integrated with a SaaS CRM system, one can speak of a hybrid cloud. That means a hybrid cloud does not just stand for IaaS or PaaS scenarios.

The multi-cloud approach extends the hybrid cloud idea by the number of connected clouds. Strictly speaking, this can be n clouds which are integrated in some form. Here, for example, cloud infrastructures are connected so that applications can use different infrastructures or services in parallel, or depending on the workload or the current price. Even the parallel or distributed storage of data across multiple clouds is conceivable, to ensure the availability and redundancy of the data. At the moment multi-cloud is being intensively discussed in the IaaS area. Ben Kepes' and Paul Miller's Mapping Session on multi-cloud as well as Paul's Sector RoadMap: Multicloud management in 2013 are therefore recommended reading.

What is often neglected: multi-cloud has a special importance in the SaaS area. The number of new SaaS applications grows from day to day, and with it the demand to integrate these various solutions and let them exchange data. Today, the cloud market is moving uncontrolled in the direction of isolated applications; each solution offers an added value, but the result is many small data silos. Enterprises already fought this kind of development in vain in pre-cloud times.

Spread the risk

Even if cloud marketing always promises the availability and security of data, systems and applications, the responsibility to ensure this lies in one's own hands (this refers to an IaaS public cloud). Although cloud vendors mostly provide the ways and means for using IaaS, the customer has to act on his own. Outages known from the Amazon Web Services or unpredictable closures like Nirvanix should lead to more sensitivity while using cloud services. The risk needs to be spread consciously: not all eggs should be put into one basket but strategically distributed over several.

Best-of-Breed

The best-of-breed strategy is the most widespread approach within enterprise IT architectures. Here multiple integrated industry solutions for various sections within a company are used. The idea behind best-of-breed is to use the best solution in each case, which an all-in-one solution usually cannot offer. This means one assembles the best services and applications for one's purpose. Following this approach in the cloud, one is already in a multi-cloud scenario, which means that approximately 90 percent of all companies using cloud services are multi-cloud users. Whether the used cloud services are integrated remains doubtful.

What is recommended on the one hand is unavoidable in many cases on the other. Although there are already a few good all-in-one solutions on the market, most pearls are implemented independently and must be combined with other solutions, for example email and office/collaboration with CRM and ERP. With respect to risk this has its advantage, especially in the cloud: if a provider fails, only a partial service is unavailable and not the overall productivity environment.

Avoid data silos: APIs and integration

Cloud marketplaces attempt to create such best-of-breed approaches by grouping individual cloud solutions into different categories, offering companies a broad portfolio of different solutions from which a cloud productivity suite can be put together.

Nevertheless, even highly controlled and supposedly integrated marketplaces reveal a crucial weakness: massive integration problems. There are many individual SaaS applications that do not interact. This means that there is no common database and, for example, the email service cannot access the data in the CRM service and vice versa. This creates, as described above, individual data silos and isolated applications within the marketplace.

This example also illustrates the biggest problem with a multi-cloud approach: the integration of the many different and usually independently operating cloud services and their interfaces with each other.
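One common way to attack this integration problem is a canonical data model: each SaaS service keeps its own field names, and a thin mapping layer translates all of them into one shared record format. The sketch below is purely illustrative; the two "services" and their field names are invented:

```python
# Illustrative canonical-schema mapping between two SaaS data silos.
# Each service exposes contacts under its own field names; the mapping
# layer translates both into one comparable record format.

# Per-service field mappings: service field name -> canonical field name
MAPPINGS = {
    "email_service": {"display_name": "name", "address": "email"},
    "crm_service": {"full_name": "name", "mail": "email"},
}

def to_canonical(service, record):
    """Translate one service-specific record into the canonical schema."""
    mapping = MAPPINGS[service]
    return {mapping[field]: value for field, value in record.items()
            if field in mapping}

contact_from_email = {"display_name": "Jane Doe", "address": "jane@example.com"}
contact_from_crm = {"full_name": "Jane Doe", "mail": "jane@example.com"}

a = to_canonical("email_service", contact_from_email)
b = to_canonical("crm_service", contact_from_crm)
print(a == b)  # True: both silos now yield one comparable record
```

Once both services speak the canonical schema, the email service can look up a contact that originated in the CRM, which is exactly the cross-silo access the marketplace example above is missing.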

Multi-cloud is „The New Normal“ in the cloud

The topic of multi-cloud is currently highly debated, especially in the IaaS environment, to spread the risk and to take advantage of the costs and benefits of different cloud infrastructures. But in the SaaS environment, too, the subject must necessarily gain greater importance in order to avoid data silos and isolated applications in the future, to simplify integration and to support companies in the adoption of their best-of-breed strategy.

Notwithstanding these expectations, multi-cloud use is already a reality: companies use multiple cloud solutions from many different vendors, even if the services are not yet (fully) integrated.

Kategorien
Analysis

Fog Computing: Data, Information, Applications and Services Need to Be Delivered More Efficiently to the End User

You read it correctly: this is not about CLOUD computing but FOG computing. Now that the cloud is well on its way to broad adoption, new concepts follow to enhance the utilization of scalable and flexible infrastructures, platforms, applications and further services, and to ensure the faster delivery of data and information to the end user. This is exactly the core function of fog computing. The fog ensures that cloud services, compute, storage, workloads, applications and big data are provided at any edge of a network (the Internet) in a truly distributed way.

What is fog computing?

The fog has the task of delivering data and workloads closer to the user, who is located at the edge of a data connection. In this context one also speaks of „edge computing“. The fog is organizationally located below the cloud and serves as an optimized transfer medium for services and data within the cloud. The term „fog computing“ was coined by Cisco as a new paradigm that should support distributed devices during wireless data transfer within the Internet of Things. Conceptually, fog computing builds upon existing and common technologies like Content Delivery Networks (CDNs), but based on cloud technologies it should ensure the delivery of more complex services.

As more and more data must be delivered to an ever-growing number of users, concepts are necessary which enhance the idea of the cloud and enable companies and vendors to provide their content to the end user over a widely distributed platform. Fog computing should help to transport the distributed data closer to the end user, thus decreasing latency and the number of required hops, and thereby better support mobile computing and streaming services. Besides the Internet of Things, the rising demand of users to access data at any time, from any place and with any device is another reason why the idea of fog computing will become increasingly important.
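The core decision behind this idea can be sketched in a few lines: instead of always serving from one central data center, the user is routed to the edge node with the lowest measured latency. Node names and latency figures below are invented for illustration:

```python
# Illustrative edge selection: route each user to the node with the
# lowest round-trip time instead of the far-away central data center.

def nearest_edge(latencies_ms):
    """Pick the edge node with the lowest measured RTT to the user."""
    return min(latencies_ms, key=latencies_ms.get)

# Measured RTTs from one user to candidate nodes (made-up values)
measured = {
    "central-dc": 120,     # distant central data center
    "edge-frankfurt": 18,  # nearby fog/edge node
    "edge-amsterdam": 25,
}

print(nearest_edge(measured))  # edge-frankfurt: fewer hops, lower latency
```

Real CDNs and fog platforms make this decision with anycast routing or DNS-based steering rather than explicit per-user measurements, but the optimization target is the same: minimize the distance between data and user.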

What are use cases of fog computing?

One should not be too confused by this new term. Fog computing is a new terminology, but a look behind the curtain quickly reveals that this technology is already used in modern data centers and the cloud. A look at a few use cases illustrates this.

Seamless integration with the cloud and other services

The fog should not replace the cloud. Based on fog services, the cloud should be enhanced by isolating the user data that is located exclusively at the edge of a network. From there it should allow administrators to connect analytical applications, security functions and more services directly to the cloud. The infrastructure is still based entirely on the cloud concept but extends to the edge with fog computing.

Services set up vertically on top of the cloud

Many companies and various services already use the ideas of fog computing by delivering extensive content to their customers in a targeted way. This includes, among others, web shops or providers of media content. A good example is Netflix, which is able to reach its numerous globally distributed customers. With data management in just one or two central data centers, the delivery of the video-on-demand service would not be efficient enough. Fog computing thus allows very large amounts of streamed data to be provided by delivering the data directly and performantly into the vicinity of the customer.

Enhanced support for mobile devices

With the steady growth of mobile devices and data, administrators gain more control over where the users are located at any time, from where they log in and how they access the information. Besides higher speed for the end user, this leads to a higher level of security and data privacy, since data can be controlled at various edges. Moreover, fog computing allows a better integration with several cloud services and thus ensures an optimized distribution across multiple data centers.

Setting up a tight geographical distribution

Fog computing extends existing cloud services by spanning an edge network that consists of many distributed endpoints. This tightly geographically distributed infrastructure offers advantages for a variety of use cases, including faster collection and analysis of big data, better support for location-based services (since entire WAN links can be better bridged) as well as the capability to evaluate data at massive scale in real time.

Data is closer to the user

The amount of data generated by cloud services requires caching of the data, or other services which take care of this task. These services are located close to the end user to improve latency and optimize data access. Instead of storing data and information centrally in a data center far away from the user, the fog ensures the direct proximity of the data to the customer.

Fog computing makes sense

You can think what you want about buzzwords; it only gets interesting once you take a look behind the curtain. The more services, data and applications are deployed to the end user, the more the vendors have the task of finding ways to optimize the delivery processes. This means that information needs to be delivered closer to the user while latency must be reduced in order to be prepared for the Internet of Things. There is no doubt that the consumerization of IT and BYOD will increase the use and therefore the consumption of bandwidth.

More and more users rely on mobile solutions to run their business and to bring it into balance with their personal life. Increasingly rich content and data are delivered over cloud computing platforms to the edges of the Internet, where, at the same time, the needs of the users keep growing. With the increasing use of data and cloud services, fog computing will play a central role and help to reduce latency and improve quality for the user. In the future, besides ever larger amounts of data, we will also see more services that rely on data and that must be provided to the user more efficiently. With fog computing, administrators and providers get the capabilities to deliver rich content to their customers faster, more efficiently and, above all, more economically. This leads to faster access to data, better analysis opportunities for companies and equally to a better experience for the end user.

Above all, Cisco will want to shape the term fog computing in order to use it for a large-scale marketing campaign. However, at the latest when the fog generates a similar buzz as the cloud, we will find more and more CDN and other vendors offering something in this direction as fog providers.

Kategorien
Comment

AWS Activate. Startups. Market share. Any questions?

Startups are the foundation of the success of the Amazon Web Services. With them, the IaaS market leader has grown. With an official initiative, AWS is now expanding this target market and will leave the rest of the IaaS providers even further out in the rain. Because, with the exception of Microsoft and Rackspace, nothing is happening in this direction.

AWS Activate

It’s no secret that AWS has specifically „bought“ the favor of startups. Apart from the attractive service portfolio, founders who are backed by an accelerator obtain AWS credits in the double-digit thousands to launch their ideas on the cloud infrastructure.

With AWS Activate, Amazon has now launched an official startup program to serve as a business enabler for developers and startups. It consists of the „Self-Starter Package“ and the „Portfolio Package“.

The Self-Starter Package is aimed at startups that are trying their luck on their own and includes the well-known free AWS offering, which any new customer can take advantage of. In addition, one month of AWS Developer Support, „AWS Technical Professional“ training and discounted access to solutions from SOASTA or Opscode are included. The Portfolio Package is for startups that are in an accelerator program. These get AWS credits worth between $1,000 and $15,000 and one month or one year of free AWS Business Support. „AWS Technical Professional“ and „AWS Essentials“ training are also included.

Any questions about the high market shares?

With this initiative, the Amazon Web Services will keep setting the pace in the future. With the exception of Microsoft, Google and Rackspace, AWS has positioned itself as the only platform for users who want to implement their own ideas. All other vendors rather embrace corporate customers, who are more hesitant to jump on the cloud train, and do not come close to offering the possibilities of the AWS cloud infrastructure. Instead of designing their portfolio around cloud services, they try to catch customers with compute power and storage. But infrastructure from the cloud means more than just infrastructure.

Kategorien
Analysis

Nirvanix. A living hell. Why multi-cloud matters.

One or two will certainly have heard of it. Nirvanix has transported itself to the afterlife. The enterprise cloud storage service, which had a wide-ranging cooperation with IBM, suddenly announced its closure on September 16, 2013 and initially granted its existing customers a period of two weeks to migrate their data. The deadline has since been extended to October 15, 2013, as customers need more time for the migration. One Nirvanix customer reported having stored 20 petabytes of data there.

The end of Nirvanix

Out of nowhere, enterprise cloud storage provider Nirvanix announced its end on September 16, 2013. To date it has not been disclosed how this came about. Rumor has it that a further financing round failed. Other explanations point to faulty management: the company has had five CEOs since 2008. One should also not forget the strong competition in the cloud storage market. On the one hand, many vendors have tried their luck in recent years. On the other hand, the two top dogs, Amazon Web Services with Amazon S3 and Microsoft with Azure Storage, reduce the prices of their services, which are also enterprise-ready, in regular cycles. Even being named one of the top cloud storage providers by Gartner couldn’t help Nirvanix.

Particularly controversial is the fact that in 2011, Nirvanix signed a five-year contract with IBM to expand IBM’s SmartCloud Enterprise storage services with cloud-based storage. As IBM has announced, data stored on Nirvanix will be migrated to the IBM SoftLayer object storage. As an IBM customer, I would nevertheless inquire carefully about my stored data.

Multi-Cloud: Spread your eggs over multiple nests

First, a salute to the venture capital community. If it’s true that Nirvanix had to stop the service due to a failed financing round, then we see what responsibility lies in their hands. Say no more.

How should a cloud user deal with a horror scenario like Nirvanix? Well, as you can see, a good customer base and partnerships with global players are no guarantee that a service will survive long term. Even Google currently plays out its cloud strategy on the backs of its customers and makes no binding commitment to the long-term existence of the services on the Google Cloud Platform, such as the Google Compute Engine (GCE). On the contrary, it is assumed that the GCE will not last as long as other well-known Google services.

Backup and Multi-Cloud

Even if the cloud storage provider has to ensure the availability of the data, as a customer you have a duty of care: you must stay informed about the state of your data and, even in the cloud, take care of redundancy and backups. Meanwhile, most popular cloud storage services have integrated functions to make seamless backups of the data and to create multiple copies.

Although we are in the era of the cloud, one rule still applies: backup! You should therefore ensure that a constantly checked(!) and reliable backup and recovery plan exists. Furthermore, sufficient bandwidth must be available to move the data as quickly as possible. This should also be verified at regular intervals with a migration audit, so that you can act quickly in an emergency.
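The „constantly checked“ part can be automated. As a minimal sketch, assuming the backup copies can be restored to a local directory, the following compares checksums of the primary files against the restored copies; the directory layout is a hypothetical example.

```python
# Minimal sketch of a recurring backup check: compare SHA-256 checksums
# of the primary files against the restored backup copies.
import hashlib
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 so large files never sit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(primary_dir, backup_dir):
    """Return relative paths that are missing or corrupt in the backup."""
    bad = []
    for src in Path(primary_dir).rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(primary_dir)
        dst = Path(backup_dir) / rel
        if not dst.is_file() or sha256_of(src) != sha256_of(dst):
            bad.append(str(rel))
    return bad
```

Run from a scheduler; an empty result means every primary file has an intact copy, anything else is a reason to trigger the recovery plan before you need it.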

Simply moving 20 petabytes of data is no easy task, so you have to think about other approaches. Multi-cloud is a concept that will gain more and more importance in the future. Here, data and applications are distributed (in parallel) across multiple cloud platforms and providers. My analyst colleagues and friends Paul Miller and Ben Kepes already discussed this during their mapping session at GigaOM Structure in San Francisco. Paul subsequently wrote an interesting sector roadmap report on multi-cloud management.
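A back-of-the-envelope calculation shows why an emergency migration of 20 petabytes is no easy task, even under the generous assumption of a fully saturated dedicated line with no protocol overhead:

```python
# Back-of-the-envelope: how long does it take to move 20 petabytes?
# Assumes a fully saturated link and ignores protocol overhead.
def migration_days(petabytes, gigabits_per_second):
    bits = petabytes * 10 ** 15 * 8                 # decimal petabytes → bits
    seconds = bits / (gigabits_per_second * 10 ** 9)
    return seconds / 86400                          # 86,400 seconds per day

# Even on a saturated 10 Gbit/s line, 20 PB takes roughly half a year.
print(round(migration_days(20, 10)))  # → 185 (days)
```

Nirvanix granted two weeks, later extended to a month. The arithmetic makes clear that at this scale, moving data after the shutdown notice arrives is already too late.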

Even though with Scalr, CliQr, RightScale and Enstratius some management platforms for multi-cloud already exist, we still find ourselves at a very early stage in terms of adoption. schnee von morgen webTV by Nikolai Longolius, for example, runs primarily on the Amazon Web Services and has rebuilt its native web application 1:1 for the Google App Engine as a fallback scenario. This is not yet a full multi-cloud approach, but it shows how important it is to reduce the effort required for cross-provider high availability and scalability. As Paul’s sector roadmap shows, it is in particular the compatibility of the APIs that must be given great importance. In the future, companies can no longer rely on a single provider; instead they will distribute their data and applications across multiple providers to drive a best-of-breed strategy and to deliberately spread the risk.
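The core of the multi-cloud idea can be sketched as a thin abstraction over several providers: every write is replicated everywhere, and reads fall through to the next provider when one disappears, Nirvanix-style. The provider classes below are in-memory stand-ins, not real SDK clients.

```python
# Minimal sketch of multi-cloud storage: replicate every object to several
# independent providers so one shutdown loses nothing.
class MemoryStore:
    """Stand-in for one cloud storage provider (e.g. S3, Azure Storage)."""
    def __init__(self, name):
        self.name, self.objects = name, {}
    def put(self, key, data):
        self.objects[key] = data
    def get(self, key):
        return self.objects[key]

class MultiCloudStore:
    """Write to all providers; read from the first one that still answers."""
    def __init__(self, providers):
        self.providers = providers
    def put(self, key, data):
        for p in self.providers:
            p.put(key, data)
    def get(self, key):
        for p in self.providers:
            try:
                return p.get(key)
            except KeyError:
                continue  # this provider lost the object -- try the next
        raise KeyError(key)

store = MultiCloudStore([MemoryStore("provider-a"), MemoryStore("provider-b")])
store.put("eggs", b"golden")
store.providers[0].objects.clear()   # simulate one provider shutting down
print(store.get("eggs"))  # → b'golden'
```

This is exactly where the API compatibility mentioned above matters: the thinner the differences between provider interfaces, the cheaper such an abstraction layer becomes.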

This should also be taken into consideration if you „only“ store data in the cloud. The golden nest is the sum of many distributed ones.