Categories
Analysis

Enterprise Cloud: IBM moves into position

Last week IBM announced that it will invest more than 1.2 billion dollars to improve its global cloud offering. Cloud services will be delivered from 40 data centers across 15 countries worldwide; the German data center is located in Ehningen. Softlayer, which IBM acquired in 2013, will play a key role, and its resource capacity will therefore be doubled.

Investments are a clear sign

The 1.2 billion dollars are not the first investment IBM has poured into its cloud business in the last couple of years. Since 2007, IBM has invested over 7 billion dollars in expanding this business area, among others 2 billion dollars for the acquisition of Softlayer, plus more than 15 other acquisitions that brought additional cloud computing solutions and know-how into the company. The last big announcement was the 1 billion dollars IBM will invest in its cognitive computing system Watson to make it available as a service for the masses.

These investments show that IBM is serious about the cloud and sees it as a pillar of its future company strategy. This is also reflected in the expectations: IBM assumes that the worldwide cloud market will grow to 200 billion dollars by 2020. By 2015, IBM wants to earn seven billion dollars per year worldwide with its cloud solutions.

Softlayer's positioning is of crucial importance

IBM will pay special attention to the Softlayer cloud, whose capacity will therefore be doubled in 2014. According to IBM, the acquisition of the cloud computing provider in 2013 brought in about 2,400 new customers. With Softlayer, the important markets and financial centers are to be strengthened. In addition, the Softlayer cloud is expected to profit from Watson's technologies in the future.

The Softlayer acquisition was an important step for IBM's cloud strategy to acquire a better cloud DNA. The IBM SmartCloud has been a disappointment to this day; this is not only the result of conversations with potential customers who, for the most part, comment negatively on the offering and its usability.

The expansion of the Softlayer cloud will be interesting to watch. According to the official Softlayer website, at the moment 191,500 servers are operated at seven locations worldwide:

  • Dallas: 104,500+ Servers
  • Seattle: 10,000+ Servers
  • Washington: 16,000+ Servers
  • Houston: 25,000+ Servers
  • San Jose: 12,000+ Servers
  • Amsterdam: 8,000+ Servers
  • Singapore: 16,000+ Servers

Compared to cloud players like Amazon Web Services (more than 450,000 servers), Microsoft (about 1,000,000) and Google, these numbers are indeed peanuts, even after the doubling to 383,000 servers in 2014.

IBM’s worldwide cloud data centers

Part of the aforementioned 1.2 billion dollars will be invested in 2014 to open 15 new data centers. In total, 40 data centers distributed over 15 countries will form the basis of the IBM cloud. The new cloud data centers will be located in China, Hong Kong, Japan, India, London, Canada, Mexico City, Washington D.C. and Dallas. The German data center is located in Ehningen near Stuttgart, the Swiss one in Winterthur.

On the technical side, each cloud data center consists of at least four PODs (Points of Distribution). A single unit consists of 4,000 servers. Each POD operates independently of the other PODs to provide the services and is supposed to offer a guaranteed availability of 99.9 percent. Customers can opt for a single POD with 99.9 percent availability or for a distribution over several PODs to increase the availability ratio.
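
As a rough sketch of the math behind this (assuming the PODs fail independently, which is an idealization), the combined availability of n redundant PODs is 1 - (1 - 0.999)^n:

# Rough estimate of the combined availability of redundant PODs, assuming each
# POD offers 99.9 percent availability and failures are independent (an
# idealization; real-world failures can be correlated).
def combined_availability(pod_availability: float, pods: int) -> float:
    return 1 - (1 - pod_availability) ** pods

print(f"{combined_availability(0.999, 1):.4%}")  # 99.9000% with a single POD
print(f"{combined_availability(0.999, 2):.4%}")  # 99.9999% across two PODs

In other words, spreading a workload over a second POD already moves the theoretical downtime from hours to minutes per year, which is the whole point of the distribution option.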

The footprint within the enterprise

With these investments IBM clearly wants to position itself against the Amazon Web Services. However, one distinction must be made: Amazon AWS (AWS) is present solely in the public cloud, and according to a statement by Andy Jassy at the last re:Invent this is supposed to stay that way. AWS believes in the public cloud and has no interest in investing in (hosted) private cloud infrastructures. However, a company with the necessary wherewithal can get its own private region that is managed by AWS (similar to AWS GovCloud).

The competition between IBM and AWS is not primarily about a lead in public, private or any other form of cloud. It is about supremacy among corporate customers. AWS has grown up with startups and developers and has been trying for some time to increase its attractiveness to enterprises, as several new services at the past re:Invent have shown. This, however, is a market in which IBM has been present for decades, with a good and broad customer base and valuable contacts to the decision makers. This is an advantage that, despite AWS's current technical lead, should not be underestimated. Another advantage for IBM is its large base of partners for technical support and sales, because the pure self-service that a public cloud offers does not fit the majority of enterprise customers. However, IBM still has to convince companies of the Softlayer cloud technically and show customers a concrete benefit in order to be successful.

What will prove to be an advantage for IBM over other cloud providers such as AWS is its local presence with data centers in several countries. The German market in particular will appreciate this, provided the offering fits its needs. It will be exciting to see whether other U.S. cloud providers follow and open a data center in Germany to address the concerns of German companies.

IBM's investments indicate that the provider sees the cloud as a future supporting pillar and wants to build a viable business in this area.

Categories
Analysis

The Amazon Web Services reach for enterprise IT. A reality check.

AWS re:Invent 2013 is over, and the Amazon Web Services (AWS) continue to reach out to corporate clients with several new services. Having established itself as a leading infrastructure provider and enabler for startups and new business models in the cloud, the company from Seattle has been trying for quite some time to get a foot directly into the lucrative enterprise market. Current public cloud market forecasts for 2017 from IDC ($107 billion) and Gartner ($244 billion) give AWS tailwind and encourage the IaaS market leader in its pure public cloud strategy.

The new services

With Amazon WorkSpaces, Amazon AppStream, AWS CloudTrail and Amazon Kinesis, Amazon introduced some interesting new services that particularly address enterprises.

Amazon WorkSpaces

Amazon WorkSpaces is a service that provides virtual desktops based on Microsoft Windows and can be used to build a virtual desktop infrastructure (VDI) within the Amazon cloud. It is based on Windows Server 2008 R2, which rolls out desktops with a Windows 7 environment. All services and applications are streamed from Amazon data centers to the corresponding devices using Teradici's PCoIP (PC over IP) protocol; the devices may be desktop PCs, laptops, smartphones or tablets. In addition, Amazon WorkSpaces can be combined with a Microsoft Active Directory, which simplifies user management. By default, the desktops are delivered with familiar applications such as Firefox or Adobe Reader/Flash, and administrators can adjust this as desired.

With Amazon WorkSpaces, Amazon enters completely new territory in which Citrix and VMware, two absolute market players, are already waiting. VMware only just announced the acquisition of Desktone during VMworld in Barcelona. VDI is basically a very exciting market segment because it relieves corporate IT of administration tasks and reduces infrastructure costs. However, it is still a very young segment, and companies are also very careful about outsourcing their desktops because, unlike traditional on-premise terminal services, bandwidth (network, Internet, data connection) is crucial.

Amazon AppStream

Amazon AppStream is a service that serves as a central backend for graphically intensive applications. With it, the actual performance of the device on which an application is used should no longer play a role, since all inputs and outputs are processed within the Amazon cloud.

Since the power of devices is likely to keep increasing in the future, local performance can probably be disregarded. For building a real mobile cloud, however, in which all data and information are located in the cloud and the devices are only used as consumers, the service is quite interesting. Furthermore, the combination with Amazon WorkSpaces is worth considering in order to provide applications on devices that serve only as thin clients and require no further local intelligence or performance.

AWS CloudTrail

AWS CloudTrail helps to monitor and record the AWS API calls for one or more accounts. Calls from the AWS Management Console, the AWS Command Line Interface (CLI), a customer's own applications or third-party applications are all captured. The collected data are stored in Amazon S3 or Amazon Glacier for evaluation and can be viewed via the AWS Management Console, the AWS Command Line Interface or third-party tools. At the moment, only Amazon EC2, Amazon ECS, Amazon RDS and Amazon IAM can be monitored. AWS CloudTrail itself can be used free of charge; costs are incurred for storing the data in Amazon S3 and Amazon Glacier and for Amazon SNS notifications.

AWS CloudTrail, even if logging is not very exciting, is among the most important services for enterprise customers that Amazon has released lately. The collected logs assist with compliance by recording all access to AWS services and thus help demonstrate conformity with government regulations. The same applies to security audits, which can use them to trace vulnerabilities and unauthorized or erroneous data access. Amazon is well advised to expand AWS CloudTrail to all other AWS services as soon as possible and make it available worldwide in all regions. The Europeans in particular will be thankful.
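
To give an idea of what working with these logs could look like, here is a minimal sketch that reads the CloudTrail files delivered to S3 and prints who called which API. The bucket name and prefix are hypothetical, and I am assuming the current boto3 SDK rather than the tooling that was available at the time:

# Minimal sketch: scan CloudTrail log files delivered to S3 and list API calls.
# Bucket name and prefix are hypothetical; assumes boto3 and valid AWS credentials.
import gzip
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "my-cloudtrail-bucket"                  # hypothetical
PREFIX = "AWSLogs/123456789012/CloudTrail/"      # hypothetical account id

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        records = json.loads(gzip.decompress(body))["Records"]
        for r in records:
            # Each record documents a single API call: when, which service,
            # which action, and which identity triggered it.
            print(r["eventTime"], r["eventSource"], r["eventName"],
                  r["userIdentity"].get("arn", "unknown"))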

Amazon Kinesis

Amazon Kinesis is a service for the real-time processing of large data streams. To this end, Kinesis can process data streams of any size from a variety of sources. Amazon Kinesis is controlled via the AWS Management Console by assigning and saving different data streams to an application. Thanks to Amazon's massive scalability there are no capacity limitations; however, the data are automatically distributed across Amazon's global data centers. Use cases for Kinesis are the usual suspects: financial data, social media and data from the Internet of Things/Everything (sensors, machines, etc.).

The real benefit of Kinesis as a big data solution is the real-time processing of data. Common standard solutions on the market process data in batches, which means the data can never be processed directly in time but at best a few minutes later. Kinesis removes this barrier and opens up new possibilities for the analysis of live data.
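
For illustration, the producer side of such a live data pipeline could look roughly like this; the stream name and the sensor payload are hypothetical, and the sketch assumes the current boto3 SDK and an already created stream:

# Minimal producer sketch: push sensor readings into a Kinesis stream.
# Stream name is hypothetical; assumes boto3 and an existing stream with at least one shard.
import json
import random
import time

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

while True:
    reading = {"sensor_id": "machine-42",
               "temperature": random.gauss(70, 5),
               "timestamp": time.time()}
    # The partition key decides which shard receives the record;
    # consumers then read and analyze the records shard by shard.
    kinesis.put_record(StreamName="sensor-stream",
                       Data=json.dumps(reading).encode("utf-8"),
                       PartitionKey=reading["sensor_id"])
    time.sleep(1)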

Challenges: Public Cloud, Complexity, Self-Service, "Lock-in"

Looking at the current AWS references, the quantity and quality are impressive. Looking more closely, however, the top references are still startups, non-critical workloads or completely new developments. This means that most of the existing IT systems we are talking about are still not located in the cloud. Besides concerns about loss of control and compliance issues, this is because the scale-out principle makes it too complicated for businesses to migrate their applications and systems into the AWS cloud. In the end it boils down to the fact that they have to start from scratch, because a system that was not developed as a distributed system does not run the way it should on a distributed cloud infrastructure; keywords: scalability, high availability, multi-AZ. These are costs that should not be underestimated. Even the migration of a supposedly simple webshop is a challenge for companies that do not have the time and the necessary cloud knowledge to redevelop it for the (scale-out) cloud infrastructure.

In addition, the scalability and availability of an application can only be properly realized on the AWS cloud by sticking to the services and APIs that guarantee them. Furthermore, many other infrastructure-related services are available and new ones are constantly being published, which clearly make life easier for developers. Thus the lock-in is preprogrammed. I am of the opinion that a lock-in does not have to be bad as long as the provider meets the desired requirements. Still, a company should consider in advance whether these services are actually mandatory for its needs. Virtual machines and standard workloads are relatively easy to move; for services that are woven deeply into the application architecture, it looks quite different.
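
A deliberately simplified sketch of where this lock-in creeps in, and how it can at least be contained: as soon as application code talks to a provider-specific service directly (here a queue via SQS), it only runs against that provider, unless the call is hidden behind a small abstraction of one's own. All class names and the in-memory stand-in are hypothetical:

# Sketch: hiding a provider-specific queue behind a small interface of our own.
# The AWS-coupled variant calls SQS via boto3; the portable variant could be
# backed by any other broker. Queue URL and class names are hypothetical.
from abc import ABC, abstractmethod

import boto3


class TaskQueue(ABC):
    @abstractmethod
    def push(self, message: str) -> None: ...


class SqsTaskQueue(TaskQueue):
    """AWS-specific implementation: convenient, but only runs against SQS."""
    def __init__(self, queue_url: str):
        self._sqs = boto3.client("sqs")
        self._queue_url = queue_url

    def push(self, message: str) -> None:
        self._sqs.send_message(QueueUrl=self._queue_url, MessageBody=message)


class InMemoryTaskQueue(TaskQueue):
    """Portable stand-in (e.g. for tests or a later migration target)."""
    def __init__(self):
        self.items: list[str] = []

    def push(self, message: str) -> None:
        self.items.append(message)


def place_order(queue: TaskQueue, order_id: str) -> None:
    # Application code depends on the abstraction, not on boto3 directly.
    queue.push(f"process-order:{order_id}")

Whether such an abstraction layer is worth the extra effort is exactly the trade-off described above: convenience and speed now versus portability later.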

Finally: even if some market researchers predict a golden future for the public cloud, the figures should be taken with a pinch of salt. Cloud market figures have been revised downwards for years, and you also have to consider in each case how these numbers are actually composed. But that is not the issue here. At the end of the day it is about what the customer wants. At re:Invent, Andy Jassy once again made clear that Amazon AWS consistently relies on the public cloud and will not invest in its own private cloud solutions. You can interpret this as arrogance and ignorance towards customers, a pure will to disruption or just self-affirmation. The fact is, even if Amazon will probably build the private cloud for the CIA, it does not remotely have the resources and knowledge to act as a software provider on the market. Amazon AWS is a service provider. However, with Eucalyptus it has gained a powerful ally on the private cloud side that makes it possible to build an AWS-like cloud infrastructure in one's own data center.

Note: Nearly all Eucalyptus customers are said to also be AWS customers (source: Eucalyptus). Conversely, this means that quite a few hybrid cloud infrastructures exist between on-premise Eucalyptus installations and the Amazon public cloud.

Advantages: AWS Marketplace, Ecosystem, Enabler, Innovation Driver

What is mostly ignored in discussions about Amazon AWS and corporate customers is the AWS Marketplace, which Amazon itself does not advertise much either. In contrast to the bare cloud infrastructure, which customers can use to develop their own solutions, the marketplace offers full-featured software solutions from partners (e.g. SAP) that can be rolled out automatically on the AWS infrastructure. The cost of using the software is charged per use (hour or month), plus the AWS fees for the necessary infrastructure. Herein lies the real added value for companies: easily moving their existing standard systems to the cloud and parting with the on-premise systems.

One must therefore distinguish strictly between using the infrastructure for in-house development and operating ready-made solutions; both are possible in the Amazon cloud. There is also the ecosystem of partners and system integrators that help AWS customers develop their solutions. Even if AWS itself is (currently still) a pure infrastructure provider, it must equally be understood as a platform for other providers and partners who operate their businesses on it. This is the key to its success, an advantage over other providers in the market, and something that will increase its long-term attractiveness to corporate customers.

In addition, Amazon is the absolute driving force for innovation in the cloud, one that no other cloud provider can technologically match at the moment. It does not need re:Invent to prove this; it shows it almost every month anew.

Amazon AWS is (partly) suitable for enterprise IT

The requirements Amazon has to meet vary depending on the country and the use case. European customers are mostly cautious about data management and prefer to store their data in their own country; I have already met more than one customer who was technically convinced but for whom storing the data in Ireland was not an option. In some cases it is also the lack of ease of use: a company does not want to (re)develop its existing application or website for the Amazon infrastructure. The reasons are the lack of time and of the knowledge to implement it, which can result in a longer time to market. Both can be attributed to the complexity of achieving scalability and availability on the Amazon Web Services. After all, it is not just a matter of a few API calls; the entire architecture needs to be geared to the AWS cloud. In Amazon's case it is the horizontal scaling (scale-out) that makes this necessary, whereas companies prefer vertical scaling (scale-up) in order to migrate the existing system 1:1, not start from scratch, and achieve success in the cloud directly.

However, the AWS references also show that there are plenty of enterprise use cases in the public cloud for which data storage can be considered rather uncritical, as long as the data are classified beforehand and then stored in encrypted form.

Analyst colleague Larry Carvalho talked with a few AWS enterprise customers at re:Invent. One customer implemented a hosted website on AWS for less than $7,000, for which another system integrator wanted to charge $70,000. Another customer calculated that an on-premise business intelligence solution including maintenance would cost him about $200,000 per year, whereas on Amazon AWS he only pays $10,000 per year. On the one hand these examples show that AWS is an enabler; on the other hand, they show that security concerns in some cases give way to cost savings.

Categories
Comment

AWS Activate. Startups. Market share. Any questions?

Startups are the foundation of the success of the Amazon Web Services; with them, the IaaS market leader has grown up. With an official initiative, AWS is now expanding this target market and will leave the rest of the IaaS providers even further out in the rain, because with the exception of Microsoft and Rackspace nothing is happening in this direction.

AWS Activate

It is no secret that AWS has specifically "bought" the favor of startups. Apart from the attractive service portfolio, founders who are supported by an accelerator obtain AWS credits in the double digits to start their ideas on the cloud infrastructure.

With AWS Activate, Amazon has now launched an official startup program that serves as a business enabler for developers and startups. It consists of the "Self-Starter Package" and the "Portfolio Package".

The Self-Starter Package is aimed at startups trying their luck on their own and includes the well-known free AWS offering that any new customer can take advantage of. In addition, one month of AWS Developer Support, "AWS Technical Professional" training and discounted access to solutions from SOASTA or Opscode are included. The Portfolio Package is for startups that are part of an accelerator program. These receive AWS credits worth between $1,000 and $15,000 and one month or one year of free AWS Business Support. "AWS Technical Professional" and "AWS Essentials" training are also included.

Any questions about the high market share?

With this initiative, the Amazon Web Services will keep setting the pace in the future. With the exception of Microsoft, Google and Rackspace, AWS has positioned itself as the only platform on which users can implement their own ideas. All other vendors rather embrace corporate customers, who are more hesitant to jump on the cloud train, and do not come close to offering the possibilities of the AWS cloud infrastructure. Instead of designing their portfolios around cloud services, they try to catch customers with plain compute power and storage. But infrastructure from the cloud means more than just infrastructure.

Categories
Comment

"Amazon is just a webshop!" – "Europe needs bigger balls!"

This year I had the honor of hosting the CEO Couch at the Open-Xchange Summit 2013. This is a format in which top CEOs are confronted with provocative questions on a specific topic and have to answer to the point. Among the guests on the couch were Herrmann-Josef Lamberti (EADS), Dr. Marten Schoenherr (Deutsche Telekom Laboratories), Herbert Bockers (Dimension Data) and Rafael Laguna (Open-Xchange). This year's main topic was cloud computing and how German and European providers can assert themselves against the allegedly overwhelming competition from the US. I have picked out two statements from the CEO Couch that I would like to discuss critically.

Amazon is just a webshop!

One statement gave me worry lines. On the one hand, VMware had already shown that it seemingly underestimates its supposedly biggest competitor; on the other hand, the statement is simply wrong. To still call Amazon a webshop today, one has to close one's eyes very tightly and hide about 90% of the company. Amazon is more than just a webshop today; Amazon is a technology company and provider. Rafael illustrated this very well during his keynote: there are currently three vendors who have managed to set up their own closed ecosystem spanning web services, content and devices. These are Google, Apple and Amazon.

Besides the webshop Amazon.com, further technologies and services belong to the company, including the Amazon Web Services, content distribution for digital books, music and movies (LoveFilm, Amazon Instant Video), the ebook readers Kindle and Kindle Fire (with their own Android version), the search engines A9.com and Alexa Internet, and the movie database IMDb.

On top of that, if you look at how Jeff Bezos leads Amazon (e.g. the Kindle strategy: sell at cost price, earn through content), he focuses on long-term growth and market share rather than quick profits.

To anyone who wants to get a good impression of Jeff Bezos' mindset, I recommend the fireside chat with Werner Vogels at the 2012 AWS re:Invent. The 40 minutes are worth it.

Europe needs bigger balls!

The couch completely agreed. Although Europe has the potential and the companies to play its role in the cloud technically and innovatively (privacy issues aside), compared to the United States we eat humble pie, or better put: "Europe needs bigger balls!" On the one hand this comes down to the capital that investors in the U.S. are willing to invest, on the other hand to the mentality of taking risks, being allowed to fail, and thinking big and long term. Here, European entrepreneurs and investors in particular can learn from Jeff Bezos: it is about long-term success, not about short-term money.

This is, in my opinion, one of the reasons why we will never see a European company that can hold a candle to the Amazon Web Services (AWS). The potential candidates such as T-Systems, Orange and other major ICT providers that have the data centers, infrastructure, personnel and necessary knowledge rather focus on the target customers they have always served: the corporate customers. The public cloud and AWS-like services for startups and developers, however, are completely neglected. On the one hand this is fine, since enterprise-level cloud offerings and virtual private or hosted private cloud solutions are required to meet the needs of enterprise customers. On the other hand, nobody should be surprised that AWS currently has the largest market share and is seen as an innovation machine. The established ICT providers are not willing to change their current business or to expand it with new models that address another attractive audience.

However, as my friend Ben Kepes has also described well, Amazon is currently very popular and by far the market leader in the cloud. But there is still enough room in the market for other providers that can serve use cases and workloads Amazon cannot, or whose customers simply decide against AWS because it is too complicated, too costly or too expensive for them, or simply incompatible with legal requirements.

So Europe, grow some bigger balls! The potential is there. After all, providers such as T-Systems, Greenqloud, UpCloud, Dimension Data, CloudSigma or ProfitBricks have competitive offerings. Marten Schoenherr told me that he and his startup are in the process of developing a Chromebook without Chrome. However, I have a feeling that Rafael and Open-Xchange (OX App Suite) have a finger in the pie.

Categories
Analysis

Reasons for a decision against the Amazon Web Services

No question, in every cloud vendor selection the Amazon Web Services are always a strong option. However, I frequently experience situations in which companies decide against using them and prefer another provider instead. Below are the two most common reasons.

Legal security and data privacy

The storage location of the data, especially customer data, is the obvious and most common reason for a decision against the Amazon Web Services. The data are still supposed to be stored in the company's own country and, after the move to the cloud, not end up in a data center in another EU country. In this case the AWS data center in Ireland is not an option.

Lack of simplicity, knowledge and time to market

Very often it is also the lack of ease of use: a company does not want to (re)develop its existing application or website for the Amazon infrastructure. The reasons are the lack of time and of the knowledge to implement it, which can result in a longer time to market. Both can be attributed to the complexity of achieving scalability and availability on the Amazon Web Services. After all, it is not just a matter of a few API calls; the entire architecture needs to be geared to the AWS cloud. In Amazon's case it is the horizontal scaling (scale-out) that makes this necessary, whereas companies prefer vertical scaling (scale-up) in order to migrate the existing system 1:1, not start from scratch, and achieve success in the cloud directly.
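
To make the difference concrete: architecting for scale-out on AWS typically means running several small, stateless instances behind a load balancer and letting an Auto Scaling group add or remove them, instead of making one server ever bigger. A minimal sketch, with all names and the AMI ID being hypothetical and assuming the current boto3 SDK:

# Minimal scale-out sketch: an Auto Scaling group spreads identical, stateless
# instances across two Availability Zones instead of growing a single server.
# All names and the AMI ID are hypothetical; assumes boto3 and valid credentials.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="webshop-lc",
    ImageId="ami-12345678",        # hypothetical, pre-baked application image
    InstanceType="m1.medium",
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="webshop-asg",
    LaunchConfigurationName="webshop-lc",
    MinSize=2,                     # at least one instance per AZ
    MaxSize=10,                    # scale out under load instead of scaling up
    AvailabilityZones=["eu-west-1a", "eu-west-1b"],
)

The few API calls are not the problem; the problem is that the application itself has to tolerate instances appearing and disappearing, which is exactly the redevelopment effort described above.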

Categories
Analysis

Amazon EBS: When will Amazon AWS eliminate its single point of failure?

Once again the Amazon Web Services stood still in the US-EAST-1 region in North Virginia: first on 19 August 2013, which apparently also dragged Amazon.com down, and most recently on 25 August 2013. Storms are said to have been the cause. Of course it again caught the usual suspects, especially Instagram and Reddit. Instagram in particular seems to rely so heavily on its cloud architecture that it blows up in its face regularly whenever lightning strikes in North Virginia. However, Amazon AWS should also start to change its infrastructure, because it cannot go on like this.

US-EAST-1: Old, cheap and fragile

The Amazon region US-EAST-1 is the oldest and most widely used region in Amazon's cloud computing infrastructure. On the one hand this is related to the cost, which is considerably lower compared to other regions (although the Oregon region has since been adjusted to the same price level). On the other hand, because of its age, many long-standing customers are located here, probably with system architectures that are not optimized for the cloud.

All eggs in one basket

The mistake most users make is relying on only one region, in this case US-EAST-1. Should it fail, their own service is of course no longer accessible either. This applies to all of Amazon's regions worldwide. To overcome this, a multi-region approach should be chosen in which the application is distributed over several regions in a scalable and highly available manner.
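
What that means in practice, roughly sketched: the same application image is provisioned in two (or more) regions so that traffic can be redirected, for example via DNS failover, when one region goes dark. The region names and AMI IDs below are hypothetical and the sketch assumes the current boto3 SDK:

# Minimal multi-region sketch: launch the same application in two regions so one
# regional outage does not take the service down. AMI IDs are region-specific and
# hypothetical here; DNS failover (e.g. via Route 53 health checks) is not shown.
import boto3

# One AMI per region (images have to be copied into each region beforehand).
DEPLOYMENTS = {
    "us-east-1": "ami-11111111",   # hypothetical
    "us-west-2": "ami-22222222",   # hypothetical
}

for region, ami_id in DEPLOYMENTS.items():
    ec2 = boto3.client("ec2", region_name=region)
    ec2.run_instances(
        ImageId=ami_id,
        InstanceType="m1.medium",
        MinCount=1,
        MaxCount=1,
    )
    print(f"launched application instance in {region}")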

Amazon EBS: Single Point of Failure

I already suggested last year that the Amazon Web Services have a single point of failure. Below are the status messages of 25 August 2013 which underline my thesis. I first became aware of it through a blog post by the Amazon AWS customer awe.sm about their experiences in the Amazon cloud. The bold points are crucial.

EC2 (N. Virginia)
[RESOLVED] Degraded performance for some EBS Volumes
1:22 PM PDT We are investigating degraded performance for some volumes in a single AZ in the US-EAST-1 Region
1:29 PM PDT We are investigating degraded performance for some EBS volumes and elevated EBS-related API and EBS-backed instance launch errors in a single AZ in the US-EAST-1 Region.
2:21 PM PDT We have identified and fixed the root cause of the performance issue. EBS backed instance launches are now operating normally. Most previously impacted volumes are now operating normally and we will continue to work on instances and volumes that are still experiencing degraded performance.
3:23 PM PDT From approximately 12:51 PM PDT to 1:42 PM PDT network packet loss caused elevated EBS-related API error rates in a single AZ, a small number of EBS volumes in that AZ to experience degraded performance, and a small number of EC2 instances to become unreachable due to packet loss in a single AZ in the US-EAST-1 Region. The root cause was a "grey" partial failure with a networking device that caused a portion of the AZ to experience packet loss. The network issue was resolved and most volumes, instances, and API calls returned to normal. The networking device was removed from service and we are performing a forensic investigation to understand how it failed. We are continuing to work on a small number of instances and volumes that require additional maintenance before they return to normal performance.
5:58 PM PDT Normal performance has been restored for the last stragglers of EC2 instances and EBS volumes that required additional maintenance.

ELB (N. Virginia)
[RESOLVED] Connectivity Issues
1:40 PM PDT We are investigating connectivity issues for load balancers in a single availability zone in the US-EAST-1 Region.
2:45 PM PDT We have identified and fixed the root cause of the connectivity issue affecting load balancers in a single availability zone. The connectivity impact has been mitigated for load balancers with back-end instances in multiple availability zones. We continue to work on load balancers that are still seeing connectivity issues.
6:08 PM PDT At 12:51 PM PDT, a small number of load balancers in a single availability zone in the US-EAST-1 Region experienced connectivity issues. The root cause of the issue was resolved and is described in the EC2 post. All affected load balancers have now been recovered and the service is operating normally.

RDS (N. Virginia)
[RESOLVED] RDS connectivity issues in a single availability zone
1:39 PM PDT We are currently investigating connectivity issues to a small number of RDS database instances in a single availability zone in the US-EAST-1 Region
2:07 PM PDT We continue to work to restore connectivity and reduce latencies to a small number of RDS database instances that are impacted in a single availability zone in the US-EAST-1 Region
2:43 PM PDT We have identified and resolved the root cause of the connectivity and latency issues in the single availability zone in the US-EAST-1 region. We are continuing to recover the small number of instances still impacted.
3:31 PM PDT The majority of RDS instances in the single availability zone in the US-EAST-1 Region that were impacted by the prior connectivity and latency issues have recovered. We are continuing to recover a small number of remaining instances experiencing connectivity issues.
6:01 PM PDT At 12:51 PM PDT, a small number of RDS instances in a single availability zone within the US-EAST-1 Region experienced connectivity and latency issues. The root cause of the issue was resolved and is described in the EC2 post. By 2:19 PM PDT, most RDS instances had recovered. All instances are now recovered and the service is operating normally.

Amazon EBS is the basis of many other services

The bold points are of crucial importance because all of these services depend on one and the same service: Amazon EBS. awe.sm came to the same conclusion in their analysis. As awe.sm determined, EBS is nine times out of ten the main problem in major outages at Amazon, as it was in the outage above.

The services that depend on Amazon EBS include the Elastic Load Balancer (ELB), the Relational Database Service (RDS) and Elastic Beanstalk.

Q: What is the hardware configuration for Amazon RDS Standard storage?
Amazon RDS uses EBS volumes for database and log storage. Depending on the size of storage requested, Amazon RDS automatically stripes across multiple EBS volumes to enhance IOPS performance.

Source: http://aws.amazon.com/rds/faqs/

EBS Backed
  • EC2: If you select an EBS backed AMI
  • ELB: You must select an EBS backed AMI for EC2 host
  • RDS
  • Elastic Beanstalk
  • Elastic MapReduce

Source: Which AWS features are EBS backed?

Load balancing across availability zones is excellent advice in principle, but still succumbs to the problem above in the instance of EBS unavailability: ELB instances are also backed by Amazon’s EBS infrastructure.

Source: Comment – How to work around Amazon EC2 outages

As you can see, quite a few services depend on Amazon EBS. Conversely, this means that if EBS fails, these services are no longer available either. The case of the Amazon Elastic Load Balancer (ELB) is particularly tragic, since it is responsible for redirecting traffic in case of failure or heavy load. Thus, if Amazon EBS fails and the data traffic is supposed to be transferred to another region, this does not work because the load balancer itself also depends on EBS.

I may be wrong. However, if you look at the past failures, the evidence indicates that Amazon EBS is the main source of errors in the Amazon cloud.

One may therefore be allowed to ask whether a market leader can constantly have the same component of its infrastructure, which in principle has to be regarded as a single point of failure, blow up in its face. And whether an infrastructure-as-a-service (IaaS) provider that has experienced the most outages of all providers in the market can, from this point of view, be described as a leader at all. Even if I frequently preach that users of a public IaaS must take responsibility for scalability and high availability themselves, the provider has the duty to ensure that the infrastructure is reliable.

Categories
Comment

ProfitBricks opens a price war with the Amazon Web Services over infrastructure-as-a-service

ProfitBricks takes the gloves off. The Berlin-based infrastructure-as-a-service (IaaS) startup is taking a hard line against the Amazon Web Services and has reduced its prices in both the U.S. and Europe by 50 percent. Furthermore, the IaaS provider has presented a comparison which claims that its own virtual servers deliver at least twice the performance of those of the Amazon Web Services and Rackspace. Thus ProfitBricks is trying to differentiate itself from the U.S. top providers on price and performance.

The prices for infrastructure-as-a-service are still too high

Along with the announcement, CMO Andreas Gauger strikes a correspondingly aggressive tone. "We have the impression that the dominant cloud companies from the U.S. are abusing their market power to command high prices. They expect companies to accept deliberately opaque pricing models and regularly announce selective price cuts to create the impression of price reductions," says Gauger (translated from German).

ProfitBricks therefore aims to attack the IaaS market from behind on price and to let its customers participate directly and noticeably in the cost savings resulting from technical innovations.

Up to 45 percent cheaper than Amazon AWS

ProfitBricks positions itself very clearly against Amazon AWS and presents a price comparison. For example, an Amazon M1 Medium instance with 1 core, 3.75 GB of RAM and 250 GB of block storage costs $0.155 per hour or $111.40 per month. A comparable instance at ProfitBricks costs $0.0856 per hour or $61.65 per month, a saving of roughly 45 percent.
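
A quick back-of-the-envelope check of these figures, assuming roughly 720 hours per month, which is how such monthly prices are usually derived:

# Back-of-the-envelope check of the quoted prices, assuming ~720 hours per month.
HOURS_PER_MONTH = 720  # assumption: 30 days x 24 hours

aws_hourly, profitbricks_hourly = 0.155, 0.0856

aws_monthly = aws_hourly * HOURS_PER_MONTH                    # ~ $111.60
profitbricks_monthly = profitbricks_hourly * HOURS_PER_MONTH  # ~ $61.63

saving = 1 - profitbricks_monthly / aws_monthly
print(f"AWS: ${aws_monthly:.2f}, ProfitBricks: ${profitbricks_monthly:.2f}, "
      f"saving: {saving:.0%}")  # saving: 45%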

Differentiation on price alone is difficult

Differentiating as an IaaS provider on price alone is difficult. Remember: infrastructure is commodity! Vertical services that give the customer real added value are the future of the cloud.

Defying the IaaS top dog in this way is brave, almost foolhardy. However, one thing should not be forgotten: as hosting experts of the first hour, Andreas Gauger and Achim Weiss will have validated the numbers behind their infrastructure and are certainly not seeking brief glory with this move. It remains to be seen how Amazon AWS and the other IaaS providers will react to this strike. Because with this price reduction, ProfitBricks shows that customers can actually get infrastructure resources much more cheaply than is currently the case.

As an IaaS user, there is something you should certainly not lose sight of in this price discussion. In addition to the price of computing power and storage, which is held up again and again, there are further factors that determine the price and that usually only come to mind at the end of the month. These include the cost of transferring data into and out of the cloud, as well as the costs for other services around the infrastructure that are charged per API call. In some respects there is a lack of transparency here. Furthermore, a comparison of the various IaaS providers is difficult because many operate with different units, configurations and/or packages.

Categories
Services

Exclusive: openQRM 5.1 will be extended with hybrid cloud functionality, integrating Amazon AWS and Eucalyptus as plugins

It is happening soon: this summer openQRM 5.1 will be released. Project manager and openQRM-Enterprise CEO Matt Rechenburg has already told me about some very interesting new features. In addition to a completely redesigned backend, the open source cloud infrastructure software from Cologne, Germany will be expanded with hybrid cloud functionality by integrating Amazon EC2, Amazon S3 and their clone Eucalyptus as plugins.

New hybrid cloud features in openQRM 5.1

A short overview of the new hybrid cloud features in the upcoming openQRM 5.1:

  • openQRM Cloud works transparently with Amazon EC2 and Eucalyptus.
  • Via self-service, end-users within a private openQRM Cloud can order instance types or AMIs pre-selected by an administrator, which openQRM Cloud then uses to provision automatically to Amazon EC2 (or Amazon-compatible clouds).
  • User-friendly password login for the end-user of the cloud via WebSSHTerm directly in the openQRM Cloud portal.
  • Automatic application deployment using Puppet.
  • Automatic cost accounting via the openQRM cloud billing system.
  • Automatic service monitoring via Nagios for Amazon EC2 instances.
  • openQRM high availability at the infrastructure level for Amazon EC2 (or compatible private clouds). This means: if an EC2 instance fails or an error occurs in an Amazon Availability Zone (AZ), an exact copy of the instance is restarted. If an entire AZ fails, the instance starts up again in another AZ of the same Amazon region (see the sketch after this list).
  • Integration of Amazon S3. Data can be stored directly on Amazon S3 via openQRM. When creating an EC2 instance, a script stored on S3 can be specified that, for example, executes further commands during the start of the instance.
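
To illustrate the idea behind the cross-AZ failover feature mentioned above (this is not openQRM's actual implementation, just a rough sketch of the underlying logic against the plain EC2 API; all IDs and zone names are hypothetical and the current boto3 SDK is assumed):

# Rough illustration of cross-AZ failover (not openQRM's implementation):
# if a monitored instance is no longer running, start a copy of its AMI in a
# fallback Availability Zone of the same region. IDs and zones are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

MONITORED_INSTANCE = "i-0123456789abcdef0"   # hypothetical
FALLBACK_AZ = "us-east-1b"                   # different AZ, same region
IMAGE_ID = "ami-12345678"                    # AMI the instance was launched from

state = ec2.describe_instances(InstanceIds=[MONITORED_INSTANCE])
status = state["Reservations"][0]["Instances"][0]["State"]["Name"]

if status != "running":
    # Start a copy of the instance in the fallback Availability Zone.
    ec2.run_instances(
        ImageId=IMAGE_ID,
        InstanceType="m1.medium",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": FALLBACK_AZ},
    )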

Comment: openQRM recognizes the trend at the right time

With this extension, openQRM-Enterprise also shows that the hybrid cloud is becoming a serious factor in the development of cloud infrastructures, and it arrives with the new features at just the right time. Not surprisingly, the Cologne-based company orients itself towards the current public cloud leader, the Amazon Web Services. In combination with Eucalyptus or other Amazon-compatible cloud infrastructures, openQRM can thus also be used to build massively scalable hybrid cloud infrastructures. For this purpose openQRM relies on its proven plugin concept and integrates Amazon EC2, S3 and Eucalyptus in exactly this way. Besides the resources of a private openQRM Cloud, Amazon and Eucalyptus are used as additional resource providers to obtain more computing power quickly and easily.

In my opinion, the absolute killer features are the automatic application deployment using Puppet, with which end-users can conveniently and automatically equip EC2 instances with a complete software stack themselves, and the consideration of Amazon's AZ-wide high-availability functionality, which many cloud users neglect again and again out of ignorance.

The team has also paid much attention to the look and the interface of the openQRM backend. The completely redesigned user interface looks tidier, is easier to handle, and will positively surprise existing customers.