Categories
Conferences

CloudCamp Frankfurt 2013 – Here we go!

On 29 October, CloudCamp Frankfurt 2013 takes place as part of SNW Europe. This year we focus on the leading theme „Cloud First! – Is cloud the new normal?“ and, as in 2012, expect a wide range of participants. Besides ProfitBricks CMO Andreas Gauger, CloudCamp co-founder and Citrix's new Chief Cloud Advocate Reuven Cohen will also give a talk.

Cloud First! – Is cloud the new normal?

Talking with providers, and judging by the numbers from market researchers, this question is quite easy to answer – YES! However, this must be proven with significant use cases and facts. Furthermore, not only will the status quo be discussed, but also controversial issues that deliberately call this current status into question and show where we need to go in the future to map cloud computing and its impact optimally onto enterprise IT.

To that end, we can already announce some attractive speakers.

Confirmed speakers

  • Reuven Cohen, Chief Cloud Advocate Citrix, Co-Founder CloudCamp (@ruv)
  • Chris Boos, CEO arago AG (@boosc)
  • René Buest, Analyst New Age Disruption, Analyst GigaOM Research (@renebuest)
  • Mark Masterson, Troublemaker CSC Financial Services EMEA (@mastermark)
  • Hendrik Andreas Reese, TÜV Rheinland i-sec GmbH
  • Steffen Krause, Cloud & Database Evangelist Amazon Web Services (@sk_bln)
  • Andreas Gauger, ProfitBricks (@gaugi)
  • Thomas King, audriga (@thking)
  • Moderation: Roland Judas, Technical Evangelist arago AG (@rolandjudas)

Registration and further information

Registration for CloudCamp Frankfurt 2013 is, as always, free of charge. In addition, each CloudCamp participant can also attend „SNW Europe – Powering the Cloud“ for free.

Registration for CloudCamp Frankfurt 2013 can be found at http://cloudcamp-frankfurt-2013.eventbrite.com. We will initiate the registration for SNW Europe afterwards.

Facts

  • CloudCamp Frankfurt 2013
    Within „SNW Europe – Powering the Cloud“
  • When: 29 October 2013, Frankfurt
  • Time: 16:00 to 20:00
  • Where: Congress Frankfurt, Ludwig-Erhard-Anlage 1, 60327 Frankfurt am Main
Categories
Analysis

Enterprise Cloud Computing: T-Systems to launch "Enterprise Marketplace"

According to Deutsche Telekom, it has already reached about 10,000 mid-size businesses with its Business Marketplace. Now a similar success is to be achieved in the corporate customer environment. To this end, the Enterprise Marketplace is available for purchasing software, hardware or packaged solutions on demand, and even for deploying firm-specific in-house developments for a company's own employees.

SaaS, IaaS and ready-made packages

The Enterprise Marketplace is aimed at the specific requirements of large businesses and offers, besides Software-as-a-Service (SaaS) solutions, pre-configured images (appliances) and pre-configured overall packages that can be integrated into existing system landscapes. All services, including integrated in-house developments, are located in a hosted private cloud and are delivered through Telekom's backbone.

Standard solutions and in-house development

The Enterprise Marketplace offers mostly standardized services. The usage of these services includes new releases, updates and patches. Billing is monthly and based on the use of the respective service. T-Systems also provides ready-made appliances, such as a pre-configured Apache application server, which can be managed and extended with a customer's own software. In the future, customers will be able to choose which T-Systems data center they would like to use.

Within the Enterprise Marketplace, different offerings can be combined, e.g. a web server with a database and a content management system. Furthermore, a customer's own solutions, such as monitoring tools, can be integrated. These solutions can also be made available to the other marketplace participants; the customer decides on its own who, and how many participants, get access.

In addition, the performance of each appliance in the Enterprise Marketplace can be individually customized: the user can adjust the number of CPUs as well as the RAM size, storage and connection bandwidth. In a next expansion stage, managed services will take care of patches, updates and new releases.

Marketplace with solutions from external vendors

A custom dashboard provides an overview of all deployed or approved applications within the Enterprise Marketplace. Current solutions on the marketplace include web servers (e.g. Apache and Microsoft IIS), application servers (e.g. Tomcat and JBoss), SQL databases, Microsoft Enterprise Search, as well as open source packages (e.g. the LAMP stack or Joomla). In addition, T-Systems partners with a number of SaaS solution providers whose offerings have been specifically tested for use on the T-Systems infrastructure. These include, among others, the tax software TAXOR, the enterprise social media solution tibbr, the business process management suite T-Systems Process Cloud, and the sustainability management solution WeSustain.

Cloud computing at enterprise level

With its Enterprise Marketplace, T-Systems has already arrived where other providers like Amazon, Microsoft, Google, HP and Rackspace are just starting their journey – at the lucrative corporate customers. While the Business Marketplace presents itself primarily as a marketplace for SaaS solutions, the Enterprise Marketplace goes further and also provides infrastructure resources and complete solutions for companies, including the integration of in-house developments as well as managed services.

Compared to the current cloud players, T-Systems does not operate a public cloud but instead focuses on providing a (hosted/virtual) private cloud. This can pay off in the medium term. Current market figures from IDC forecast growth for the public cloud from 47.3 billion dollars in 2013 to 107 billion dollars in 2017. According to IDC, the popularity of the private cloud will decline in favor of the virtual private cloud (hosted private cloud) once public cloud providers start to offer appropriate solutions to address customers' concerns.

We made similar observations during our GigaOM study „The state of Europe’s homegrown cloud market“. Cloud customers are increasingly demanding managed services. They want to benefit from the properties of the public cloud – flexible use of resources, pay per use – but within a very secure environment, including support from the provider for integrating applications and systems. At present, the public cloud players cannot offer all of this in this form, since they mainly focus on their self-service offerings. This is fine, because they serve certain use cases effectively and have achieved significant market share in a specific target customer segment. However, if they want to land the big fish in the enterprise segment, they will have to align their strategy with the T-Systems portfolio in order to meet the individual needs of these customers.

A note on the selection process for Enterprise Marketplace partners: discussions with partners that offer their solutions on the Business Marketplace confirm a tough selection process, which can be a real „agony“ for the partner. Required adjustments, specifically to the security infrastructure of T-Systems, are not isolated cases and attest to high quality management by T-Systems.

Categories
Comment

Google Compute Engine doesn't seem to be a solution for the long haul

In an interview with GigaOM, Google‘s Cloud Platform manager Greg DeMichillie made an odd statement on the future of Google Compute Engine, which once again must lead to a discussion about the future-proofness of Google’s cloud service portfolio and whether it makes sense to depend on the non-core business areas of the search engine provider.

Google is too agile for its customers

After Google announced the shutdown of Google Reader, I already asked how future-proof the Google cloud portfolio is – particularly against the background that Google is starting to monetize more and more services, which are then measured against a new revenue KPI and are thus threatened with closure. Google Cloud Platform manager Greg DeMichillie addressed exactly this question in a GigaOM interview, and his answer was unexpected and not in the customer's interest.

„DeMichillie wouldn’t guarantee services like Compute Engine will be around for the long haul, but he did try to reassure developers by explaining that Google’s cloud services are really just externalized versions of what it uses internally. 'There’s no scenario in which Google suddenly decides: Gee, I don’t think we need to think about storage anymore or computing anymore.'“

DeMichillie did qualify in the end that Google wouldn't shut down its cloud services in a kamikaze operation. However, it's an odd statement about a service that has been on the market for a relatively short time.

These are things customers would rather not hear

The crucial question is why a potential customer should commit to Google Compute Engine for the long haul. Given this statement, one has to advise against the use of Google Compute Engine, and to opt instead for a cloud computing provider whose actual core business is infrastructure-as-a-service – one that is not merely selling off its overcapacity but operating a serious cloud computing business.

Speak of the devil and he shall appear! News like the sudden death of Nirvanix – an enterprise cloud storage service – makes massive waves and unsettles users. Google, too, should take this to heart if it wants to become a serious provider of cloud computing resources.

Categories
Analysis

A view of Europe's own cloud computing market

Europe’s cloud market is dominated by Amazon Web Services, Microsoft Windows Azure, and Rackspace – the same providers that serve the rest of the world. Each of these global providers has a local presence in Europe, and all have made efforts to satisfy European-specific concerns with respect to privacy and data protection. Nevertheless, a growing number of local providers are developing cloud computing offerings in the European market. For GigaOM Research, Paul Miller and I explore in more detail the importance of these local entrants and ask whether Europe’s growing concern with U.S. dominance in the cloud is a real driver for change. The goal is to discover whether there is a single European cloud market these companies can address or whether there are several different markets.

Key highlights from the report

  • European concern about the dominance of U.S. cloud providers
  • Rationale for developing cloud solutions within Europe
  • The value of transnational cloud infrastructure
  • The value of local or regional cloud infrastructure
  • A representative sample of Europe’s native cloud providers

European cloud providers covered in the report

  • ProfitBricks
  • T-Systems
  • DomainFactory
  • City Cloud
  • Colt
  • CloudSigma
  • GreenQloud

Get the report

To read the full report go to „The state of Europe’s homegrown cloud market“.

Categories
Analysis

Reasons for a decision against Amazon Web Services

No question, in every cloud vendor selection Amazon Web Services is always a big option. However, I frequently experience situations where companies decide against its use and prefer another provider instead. Below are the two most common reasons.

Legal security and data privacy

The storage location of the data, especially customer data, is the obvious and most common reason for a decision against Amazon Web Services. The data should remain stored in the company's own country and, after switching to the cloud, should not be located in a data center in another EU country. In this case, the AWS data center in Ireland is not an option.

Lack of simplicity, knowledge and time to market

Very often it is also the lack of ease of use. This means a company does not want to (re)develop its existing application or website for the Amazon infrastructure. The reasons are a lack of time and of implementation knowledge, which may result in a longer time to market. Both can be attributed to the complexity of achieving scalability and availability on Amazon Web Services. After all, it takes more than a few API calls: the entire architecture needs to be designed around the AWS cloud. In Amazon's case it is horizontal scaling (scale-out) that makes this necessary. Companies instead prefer vertical scaling (scale-up), which lets them migrate the existing system 1:1 and achieve success in the cloud directly rather than starting from scratch, as the sketch below illustrates.
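To make the difference concrete, here is a minimal boto3 sketch contrasting the two approaches. The instance ID, AMI ID and instance types are hypothetical placeholders, and a real scale-out setup would additionally need a load balancer and a stateless application.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Scale-up (vertical): stop the instance and resize it. Simple and 1:1,
# but bounded by the largest instance type and requires downtime.
ec2.stop_instances(InstanceIds=["i-0abc123"])
ec2.get_waiter("instance_stopped").wait(InstanceIds=["i-0abc123"])
ec2.modify_instance_attribute(InstanceId="i-0abc123",
                              InstanceType={"Value": "m1.xlarge"})
ec2.start_instances(InstanceIds=["i-0abc123"])

# Scale-out (horizontal): the AWS-native way. Requires an application that
# is stateless behind a load balancer, i.e. an architecture built for the cloud.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-0abc123",          # pre-baked, cloud-ready image
    InstanceType="m1.medium",
)
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2, MaxSize=10,
    AvailabilityZones=["eu-west-1a", "eu-west-1b"],
)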

Categories
Conferences

CloudCamp Frankfurt 2013: Call for Papers opened

On 29 October, CloudCamp Frankfurt 2013 takes place as part of SNW Europe. With the leading theme „Cloud First! – Is cloud the new normal?“ we expect to see many participants again. Having already gained a few very attractive speakers, including CloudCamp co-founder Reuven Cohen, we now open this year's call for papers.

CfP: Until 10 October 2013

Anyone who would like to submit a talk (lightning talk, 6 minutes) on the topic „Cloud First! – Is cloud the new normal?“ has the opportunity to do so until 10 October 2013. Please use the form below.

Requirements

  • Lightning Talk, 6 minutes
  • No advertising
  • No sales presentation
  • Experiences and visions
  • Language: English

Confirmed speakers

  • Reuven Cohen, Chief Cloud Advocate Citrix, Co-Founder CloudCamp (@ruv)
  • Chris Boos, CEO arago AG (@boosc)
  • René Buest, Analyst New Age Disruption, Analyst GigaOM Research (@renebuest)
  • Mark Masterson, Troublemaker CSC Financial Services EMEA (@mastermark)
  • Hendrik Andreas Reese, TÜV Rheinland i-sec GmbH
  • Moderation: Roland Judas, Technical Evangelist arago AG (@rolandjudas)

Additional information on CloudCamp Frankfurt 2013 can be found at http://cloudcamp.org/Frankfurt and http://cloudcamp-frankfurt.de.

Anyone who doesn't want to speak but would like to participate as an attendee can register for free at http://cloudcamp-frankfurt-2013.eventbrite.com. Hint: CloudCamp participants can also attend SNW Europe for free.

„Cloud First! – Is cloud the new normal?“

Categories
Analysis

AWS OpsWorks + VPC: AWS accelerates the pace in the hybrid cloud

After Amazon Web Services (AWS) acquired the German company Peritor and its solution Scalarium last year and renamed it AWS OpsWorks, further integration into the AWS cloud infrastructure follows. The next step is integration with the Virtual Private Cloud, which is, given the current market development, a further punch in the direction of the pursuers.

AWS OpsWorks + AWS Virtual Private Cloud

AWS OpsWorks is a DevOps solution with which applications can be deployed, customized and managed. In short, it covers the entire application lifecycle. This includes features such as user-based SSH management, CloudWatch metrics for current load and memory consumption, automatic RAID volume configuration, and other options for application deployment. In addition, OpsWorks can be extended with Chef by using individual Chef recipes.

The new Virtual Private Cloud (VPC) support now enables these OpsWorks functions within an isolated private network. For example, applications can be transferred from a company's own IT infrastructure into a VPC in the Amazon cloud by establishing a secure connection between the company's data center and the Amazon cloud. Amazon EC2 instances within a VPC thus behave as if they were running in the existing corporate network.
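As a rough illustration of what this looks like from the API side, the following boto3 sketch creates an OpsWorks stack bound to a VPC. All IDs, ARNs and the cookbook URL are hypothetical placeholders.

import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

stack = opsworks.create_stack(
    Name="vpc-demo-stack",
    Region="us-east-1",
    VpcId="vpc-0abc123",                # the isolated private network
    DefaultSubnetId="subnet-0abc123",   # instances launch into this subnet
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:"
                              "instance-profile/aws-opsworks-ec2-role",
    UseCustomCookbooks=True,            # enable individual Chef recipes
    CustomCookbooksSource={
        "Type": "git",
        "Url": "https://github.com/example/cookbooks.git",  # hypothetical repo
    },
)
print("Created stack:", stack["StackId"])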

OpsWorks and VPC are just the beginning

If the results of the Rackspace 2013 Hybrid Cloud survey are to be believed, 60 percent of IT decision-makers see the hybrid cloud as their main goal. Of these, 60 percent have withdrawn, or will withdraw, their applications and workloads from the public cloud: 41 percent have left the public cloud partially, and 19 percent even want to leave it completely. The reasons given for using a hybrid cloud rather than a public cloud are higher security (52 percent), more control (42 percent), and better performance and higher reliability (37 percent). The top benefits hybrid cloud users report include more control (59 percent), higher security (54 percent), higher reliability (48 percent), cost savings (46 percent) and better performance (44 percent).

AWS recognized this trend early, in March 2012, and took the first strategic steps. The integration of OpsWorks with VPC is basically just a sideshow. AWS holds its real trump card in the cooperation with Eucalyptus, which was signed last year. The aim is to improve compatibility with the AWS APIs, with AWS supplying Eucalyptus with further information. Furthermore, developers from both companies focus on creating solutions that help enterprise customers migrate existing data between their data centers and the AWS cloud. Customers also get the opportunity to use the same management tools and apply their knowledge to both platforms. With version 3.3, Eucalyptus was already able to present the first results of this cooperation.

It will be exciting to see what the future of Eucalyptus looks like. Eventually, the open source company will be bought or will sell itself; the question is who makes the first move. My advice is still that AWS must seek an acquisition of Eucalyptus. From a strategic and technological point of view, there is actually no way around it.

Given this, Rackspace and all the others who jumped on the hybrid cloud bandwagon should not have high hopes. Amazon has put things on the right track and is also prepared for this scenario.

Categories
Services

openQRM 5.1 is available: OpenStack and Eucalyptus have a serious competitor

openQRM project manager Matt Rechenburg already told me about some of the new features of openQRM 5.1 at the end of July. Now the final release is out, with additional functions, and the overall picture looks really promising. The heading might at first suggest that openQRM is something completely new that is trying to outstrip OpenStack or Eucalyptus – but not by a long shot. openQRM has existed since 2004 and has built up a wide range of functions over the years. The open source project from Cologne, Germany unfortunately got a little lost in the crowd, because the marketing machines behind OpenStack and Eucalyptus run at full speed and their loudspeakers are many times louder. Yet openQRM need not hide from them. On the contrary: with version 5.1, Rechenburg's team has again vigorously added functions and in particular enhanced the user experience.

The new features in openQRM 5.1

openQRM-Enterprise, too, sees great significance in the hybrid cloud for the future of cloud computing. For this reason, openQRM 5.1 was extended with a plugin to span a hybrid cloud across the cloud infrastructures of Amazon Web Services (AWS), Eucalyptus Cloud and Ubuntu Enterprise Cloud (UEC). openQRM thus uses AWS as well as Eucalyptus and UEC as resource providers for additional computing power and memory, and is able to manage public cloud infrastructures as well as local virtual machines. This allows administrators, based on openQRM, to transparently provide their end-users with Amazon AMIs via a cloud portal and to monitor the state of the virtual machines via Nagios.

Another new feature is the tight integration of Puppet, with which end-users can order virtual machines as well as public cloud infrastructures with personalized, managed software stacks. The best technical innovation is the consideration of the Amazon Web Services high-availability concept: if an Amazon EC2 instance fails, a new one starts automatically. openQRM 5.1 is even able to compensate for the outage of an entire Amazon Availability Zone (AZ). If an AZ is no longer available, the corresponding EC2 instance is started in another Availability Zone of the same region, as the sketch below illustrates.
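The following is a conceptual sketch of this failover idea, not openQRM's actual implementation; the AMI ID, instance type and zone list are hypothetical.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

AVAILABILITY_ZONES = ["us-east-1a", "us-east-1b", "us-east-1c"]

def relaunch_in_healthy_az(ami_id, instance_type, failed_az):
    """Start a replacement instance in another AZ of the same region."""
    for az in AVAILABILITY_ZONES:
        if az == failed_az:
            continue  # skip the zone that just went down
        try:
            result = ec2.run_instances(
                ImageId=ami_id,
                InstanceType=instance_type,
                MinCount=1, MaxCount=1,
                Placement={"AvailabilityZone": az},
            )
            return result["Instances"][0]["InstanceId"]
        except Exception:
            continue  # this AZ may be impaired as well; try the next one
    raise RuntimeError("no healthy Availability Zone found")

# e.g.: relaunch_in_healthy_az("ami-0abc123", "m1.medium", failed_az="us-east-1a")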

According to CEO Matt Rechenburg, openQRM-Enterprise is consistently expanding the open source solution openQRM with cloud-spanning management capabilities in order to empower its customers to grow flexibly beyond the borders of their own IT capacities.

In addition to the technical enhancements, much emphasis was placed on ease of use. In openQRM 5.1 the administrator user interface was completely revised, ensuring a better overview and user guidance. The dashboard, as the central entry point, has benefited greatly as well: all important information, such as the status of the data center, is displayed at a glance.

New features have been introduced in the openQRM-Enterprise Edition, too. These include a new plugin for role-based access rights management, which allows fine-grained permissions to be set for administrators across the entire openQRM system. With it, complete enterprise topologies can be mapped to openQRM roles, for example to restrict administrators who are responsible only for virtualization to the management and provisioning of virtual machines.

Further improvements in openQRM 5.1 include better support for the virtualization technology KVM, which can now also be used with GlusterFS volumes. In addition, VMware technologies are better supported: existing VMware ESX systems can now be managed, and VMware ESX machines that boot locally or over the network can be installed and managed as well.

openQRM takes a big step forward

Although openQRM has been on the market significantly longer than OpenStack or Eucalyptus, the level of awareness of those two projects is larger – mainly due to the substantial marketing efforts of the OpenStack community and of Eucalyptus. But technologically and functionally, openQRM need not hide. On the contrary, openQRM's feature set is many times more powerful than that of OpenStack or Eucalyptus. Where those two focus exclusively on the cloud, openQRM also includes a complete data center management solution.

Due to its long history, openQRM has gained many new and important features in recent years; as a result, however, it is easy to lose track of everything the solution can do. But anyone who has understood that openQRM is composed of integrated building blocks such as „Data Center Management“, „Cloud Backend“ and „Cloud Portal“ will recognize that the open source solution provides an advantage over OpenStack and Eucalyptus, especially in the construction of private clouds. The area of data center management in particular must not be neglected when building a cloud, in order to keep track of and control over the infrastructure.

Version 5.0 already began to sharpen these structures and to consolidate the individual functions into workflows, and the 5.1 release develops this further. The look and layout of the openQRM backend have been completely overhauled; it looks tidier, is easier to use, and will positively surprise all customers.

The extension with hybrid cloud functionality is an important step for the future of openQRM. As the Rackspace 2013 Hybrid Cloud survey cited above showed, 60 percent of IT decision-makers see the hybrid cloud as their main goal, and the reasons they give – higher security, more control, better performance and higher reliability – match the benefits that hybrid cloud users actually report.

Unsurprisingly, openQRM orients itself toward the current public cloud leader, Amazon Web Services. In combination with Eucalyptus or other Amazon-compatible cloud infrastructures, openQRM can thus also be used to build massively scalable hybrid cloud infrastructures. For this purpose, openQRM relies on its proven plugin concept and integrates Amazon EC2, S3 and Eucalyptus in exactly this way. Besides its own resources from a private openQRM cloud, Amazon and Eucalyptus are used as additional resource providers to obtain more computing power quickly and easily.

The absolute killer features include automatic application deployment using Puppet, with which end-users can conveniently and automatically provision EC2 instances with a complete software stack themselves, as well as the support for high availability across Amazon Availability Zones, something many cloud users neglect again and again out of ignorance. The improved integration of technologies such as VMware ESX should not be overlooked either: VMware is, after all, still the leading virtualization technology on the market, and openQRM thus also increases its attractiveness as an open source solution for the management and control of VMware environments.

Technologically and functionally, openQRM is on a very good path into the future. However, bigger investments in public relations and marketing are imperative.

Categories
Analysis

Amazon EBS: When will Amazon AWS eliminate its single point of failure?

Just recently, Amazon Web Services again stood still in the US-EAST-1 region in North Virginia: first on 19 August 2013, which apparently also dragged Amazon.com down, and again on 25 August 2013. Storms are said to have been the cause. Of course, it again caught the usual suspects, especially Instagram and Reddit. Instagram seems to rely heavily on a cloud architecture that blows up in its face regularly whenever lightning strikes in North Virginia. However, Amazon AWS should also start to change its infrastructure, because it can't go on like this.

US-EAST-1: Old, cheap and fragile

The Amazon region US-EAST-1 is the oldest and most widely used region in Amazon's cloud computing infrastructure. On the one hand this is related to cost, which is considerably cheaper in comparison to other regions (although the Oregon region has since been adjusted in price). On the other hand, because of its age, it also hosts many long-standing customers whose system architectures are probably not optimized for the cloud.

All eggs in one basket

The mistake most users make is relying on a single region, in this case US-EAST-1. Should it fail, their own service is of course no longer accessible either. This applies to all of Amazon's regions worldwide. To overcome this, a multi-region approach should be chosen, in which the application is distributed across several regions in a scalable and highly available manner, as the sketch below shows.
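As a rough sketch of such a setup, the following boto3 snippet launches the same application in two regions and publishes a health-checked Route 53 failover record. All IDs, IP addresses and domain names are hypothetical placeholders, and the analogous SECONDARY record for the second region is omitted for brevity.

import boto3

REGIONS = {"us-east-1": "ami-0abc123", "eu-west-1": "ami-0def456"}

# 1) Run the application in every region.
for region, ami in REGIONS.items():
    ec2 = boto3.client("ec2", region_name=region)
    ec2.run_instances(ImageId=ami, InstanceType="m1.medium",
                      MinCount=1, MaxCount=1)

# 2) DNS failover: route traffic to the primary region only while its
#    health check passes; otherwise Route 53 serves the secondary record.
route53 = boto3.client("route53")
health = route53.create_health_check(
    CallerReference="primary-check-1",
    HealthCheckConfig={"Type": "HTTP", "IPAddress": "198.51.100.10",
                       "Port": 80, "ResourcePath": "/health"},
)
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A", "TTL": 60,
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": health["HealthCheck"]["Id"],
            "ResourceRecords": [{"Value": "198.51.100.10"}],
        },
    }]},
)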

Amazon EBS: Single Point of Failure

I already suggested last year that Amazon Web Services has a single point of failure. Below are the status messages of 25 August 2013 that underline my thesis. I first became aware of it through a blog post by the Amazon AWS customer awe.sm about their experiences in the Amazon cloud. The bold points are crucial.

EC2 (N. Virginia)
[RESOLVED] Degraded performance for some EBS Volumes
1:22 PM PDT We are investigating degraded performance for some volumes in a single AZ in the US-EAST-1 Region
1:29 PM PDT We are investigating degraded performance for some EBS volumes and elevated EBS-related API and EBS-backed instance launch errors in a single AZ in the US-EAST-1 Region.
2:21 PM PDT We have identified and fixed the root cause of the performance issue. EBS backed instance launches are now operating normally. Most previously impacted volumes are now operating normally and we will continue to work on instances and volumes that are still experiencing degraded performance.
3:23 PM PDT From approximately 12:51 PM PDT to 1:42 PM PDT network packet loss caused elevated EBS-related API error rates in a single AZ, a small number of EBS volumes in that AZ to experience degraded performance, and a small number of EC2 instances to become unreachable due to packet loss in a single AZ in the US-EAST-1 Region. The root cause was a „grey“ partial failure with a networking device that caused a portion of the AZ to experience packet loss. The network issue was resolved and most volumes, instances, and API calls returned to normal. The networking device was removed from service and we are performing a forensic investigation to understand how it failed. We are continuing to work on a small number of instances and volumes that require additional maintenance before they return to normal performance.
5:58 PM PDT Normal performance has been restored for the last stragglers of EC2 instances and EBS volumes that required additional maintenance.

ELB (N. Virginia)
[RESOLVED] Connectivity Issues
1:40 PM PDT We are investigating connectivity issues for load balancers in a single availability zone in the US-EAST-1 Region.
2:45 PM PDT We have identified and fixed the root cause of the connectivity issue affecting load balancers in a single availability zone. The connectivity impact has been mitigated for load balancers with back-end instances in multiple availability zones. We continue to work on load balancers that are still seeing connectivity issues.
6:08 PM PDT At 12:51 PM PDT, a small number of load balancers in a single availability zone in the US-EAST-1 Region experienced connectivity issues. The root cause of the issue was resolved and is described in the EC2 post. All affected load balancers have now been recovered and the service is operating normally.

RDS (N. Virginia)
[RESOLVED] RDS connectivity issues in a single availability zone
1:39 PM PDT We are currently investigating connectivity issues to a small number of RDS database instances in a single availability zone in the US-EAST-1 Region
2:07 PM PDT We continue to work to restore connectivity and reduce latencies to a small number of RDS database instances that are impacted in a single availability zone in the US-EAST-1 Region
2:43 PM PDT We have identified and resolved the root cause of the connectivity and latency issues in the single availability zone in the US-EAST-1 region. We are continuing to recover the small number of instances still impacted.
3:31 PM PDT The majority of RDS instances in the single availability zone in the US-EAST-1 Region that were impacted by the prior connectivity and latency issues have recovered. We are continuing to recover a small number of remaining instances experiencing connectivity issues.
6:01 PM PDT At 12:51 PM PDT, a small number of RDS instances in a single availability zone within the US-EAST-1 Region experienced connectivity and latency issues. The root cause of the issue was resolved and is described in the EC2 post. By 2:19 PM PDT, most RDS instances had recovered. All instances are now recovered and the service is operating normally.

Amazon EBS is the basis of many other services

The bold points are of crucial importance because all of these services depend on one and the same service: Amazon EBS. awe.sm came to the same conclusion in their analysis. As awe.sm determined, EBS is nine times out of ten the main problem in major outages on Amazon – as in the outage above.

The services that depend on Amazon EBS include the Elastic Load Balancer (ELB), the Relational Database Service (RDS) and Elastic Beanstalk.

Q: What is the hardware configuration for Amazon RDS Standard storage?
Amazon RDS uses EBS volumes for database and log storage. Depending on the size of storage requested, Amazon RDS automatically stripes across multiple EBS volumes to enhance IOPS performance.

Source: http://aws.amazon.com/rds/faqs/

EBS backed:
  • EC2: if you select an EBS backed AMI
  • ELB: you must select an EBS backed AMI for the EC2 host
  • RDS
  • Elastic Beanstalk
  • Elastic MapReduce

Source: Which AWS features are EBS backed?

Load balancing across availability zones is excellent advice in principle, but still succumbs to the problem above in the instance of EBS unavailability: ELB instances are also backed by Amazon’s EBS infrastructure.

Source: Comment – How to work around Amazon EC2 outages

As you can see, more than a few services depend on Amazon EBS. Conversely, this means that if EBS fails, these services are no longer available either. Particularly tragic is the case of the Amazon Elastic Load Balancer (ELB), which is responsible for directing traffic in case of failure or heavy load. If Amazon EBS fails and the traffic should be redirected to another region, this does not work, because the load balancer itself also depends on EBS. (DNS-level failover across regions, as sketched above, is one way to route around this.)

I may be wrong. However, looking at past failures, the evidence indicates that Amazon EBS is the main source of error in the Amazon cloud.

One may therefore be allowed to ask whether a market leader can afford to have the same component of its infrastructure – one that must, in principle, be regarded as a single point of failure – constantly blow up in its face. And whether an infrastructure-as-a-service (IaaS) provider that has experienced the most outages of all providers in the market can, from this point of view, be described as a leader at all. Even if I frequently preach that users of a public IaaS must take responsibility for scalability and high availability on their own, the provider has the duty to ensure that the infrastructure itself is reliable.

Categories
Analysis

Cloud storage provider Box could become a threat to Dropbox and Microsoft SkyDrive

To become more attractive to private users and small businesses, the cloud storage provider Box has expanded its pricing model. Effective immediately, in addition to the existing plans for private, business and enterprise customers, a Starter plan can be selected as well, which is interesting for small businesses and freelancers as well as private customers.

Private customers get more free storage, small businesses a new plan

The free offer for private users has been increased from formerly 5GB to 10GB. In addition, the portfolio was extended with a new Starter plan targeted at smaller companies. It offers 100GB of storage for 1 to a maximum of 10 users per company account, for $5 per user per month.

Box, which primarily addressed large companies in the past, thus hopes that smaller enterprise customers and consumers will increasingly store their data in the cloud rather than saving it to error-prone local media. According to CEO Aaron Levie, Box is particularly driven by the issues of information and collaboration: whether it is a global corporation, a small business or a freelancer, in the end what matters is being able to share content and access it securely and reliably from anywhere.

The new Starter plan is just a hook

To be honest, the new Starter plan is very interesting because it meets the needs of a specific target group. However, this group is not small companies, but definitely private users and freelancers. The features offered around the storage are certainly at enterprise level: in addition to various security features (though no end-to-end encryption) at different levels, integration options via apps are available, based on an ecosystem of third-party developers. However, 100GB is far too little for small businesses, especially since the account is designed for 1 to 10 users – 10GB per user becomes very scarce very quickly. In addition, many other interesting and important features for businesses are offered only with the next plan, „Business“, for $15 per user per month, which requires at least three users. It includes 1000GB of storage plus further security functions at folder and file level per user, integration with Active Directory, Google Apps and Salesforce, advanced user management, etc. So, at the end of the day, the Starter plan just serves as a hook to drum up business customers.

On the other hand, this plan is a very interesting deal for private users and freelancers who need more features at a cheaper price and a performance similar to Dropbox. For although the free plan was extended to 10GB, the 50GB option has been dropped. Anyone who now needs more than 10GB must buy 100GB for $10 per month. It therefore makes a lot more sense for private users to opt for the Starter plan and pay only $5 per month, or $60 per year.

The Starter plan may well ensure that Dropbox and Microsoft SkyDrive lose market share once this change gets around. SkyDrive in particular should dress up warmly. Although Microsoft's cloud storage is well integrated with the Windows operating systems and continues to be the cheapest on the market, SkyDrive is very slow and the user experience is below average. Just to highlight one tiny but crucial detail that Box simply does better: transparency about what is happening in the background. By way of comparison, Box has a small Windows app that displays the status: the progress in percent, the approximate time until the upload is completed, the file currently being processed, how many files are still to be processed, and how many files are being processed in total. Microsoft SkyDrive shows none of this; the user is completely left in the dark.

Dropbox is known as the performance king, and its ease of use is good as well. Nevertheless, the Box Starter plan, with its extended functionality at a cheaper price and similar performance, certainly has the potential to compete with Dropbox.

Note: given the current security situation, it should be pointed out that Box is a U.S.-based provider and the data is stored in the United States. The data is stored server-side encrypted, but Box does not offer end-to-end encryption (only SSL during transmission). The keys for the data encrypted on Box‘s infrastructure are owned by Box, not by the user. Box therefore has the ability to decrypt the data on its own and to allow third parties access to it at any time. Users who want to rule this out can encrypt their files client-side before uploading, as the sketch below shows.
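A minimal sketch of such client-side encryption, assuming the Python cryptography package is installed; the upload helper is a hypothetical placeholder, since the actual call depends on the provider's API.

from cryptography.fernet import Fernet

# Generate once and keep PRIVATE; the provider never sees this key.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("report.pdf", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("report.pdf.enc", "wb") as f:
    f.write(ciphertext)

# upload_to_box("report.pdf.enc")  # hypothetical upload helper
# Even if the provider hands the stored file to a third party, it remains
# unreadable without the locally held key:
plaintext = fernet.decrypt(ciphertext)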