Categories
Analysis

AWS OpsWorks + VPC: AWS accelerates the pace in the hybrid cloud

After Amazon Web Services (AWS) acquired the German company Peritor and its solution Scalarium last year, renaming it AWS OpsWorks, the further integration into the AWS cloud infrastructure now follows. The next step is the integration with the Virtual Private Cloud, which, given the current market development, is another punch in the direction of the pursuers.

AWS OpsWorks + AWS Virtual Private Cloud

AWS OpsWorks is a DevOps solution with which applications can be deployed, customized and managed. In short, it takes care of the entire application lifecycle. This includes features such as user-based SSH management, CloudWatch metrics for the current load and memory consumption, automatic RAID volume configuration and further options for application deployment. In addition, OpsWorks can be extended with Chef by using custom Chef recipes.

The new Virtual Private Cloud (VPC) support now makes these OpsWorks functions available within an isolated private network. For example, applications can be moved from a company's own IT infrastructure into a VPC in the Amazon cloud by establishing a secure connection between the company's own data center and the Amazon cloud. Amazon EC2 instances within a VPC then behave as if they were running in the existing corporate network.
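To make the VPC support tangible, here is a minimal sketch using today's boto3 SDK that creates an OpsWorks stack whose instances launch inside a private VPC subnet. All IDs and ARNs are placeholders; at the time of writing, the same setup was done via the AWS console or command line.

import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Create an OpsWorks stack whose instances launch inside a VPC subnet
# instead of the public EC2 environment. All IDs/ARNs are placeholders.
stack = opsworks.create_stack(
    Name="app-stack-in-vpc",
    Region="us-east-1",
    VpcId="vpc-11111111",                      # placeholder VPC
    DefaultSubnetId="subnet-22222222",         # placeholder private subnet
    ServiceRoleArn="arn:aws:iam::111122223333:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn="arn:aws:iam::111122223333:instance-profile/aws-opsworks-ec2-role",
)
print("Created stack:", stack["StackId"])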

OpsWorks and VPC are just the beginning

Believing the results of the Rackspace 2013 Hybrid Cloud survey, 60 percent of IT decision-makers have the hybrid cloud as their main goal in mind. Of these, 60 percent have withdrawn, or will withdraw, their applications and workloads from the public cloud; 41 percent left the public cloud partially, and 19 percent even want to leave the public cloud completely. The reasons for using a hybrid cloud rather than a public cloud are higher security (52 percent), more control (42 percent) and better performance and higher reliability (37 percent). The top benefits hybrid cloud users report include more control (59 percent), higher security (54 percent), higher reliability (48 percent), cost savings (46 percent) and better performance (44 percent).

AWS recognized this trend early on and took its first strategic steps back in March 2012. The integration of OpsWorks with VPC is basically just a sideshow. The real trump card AWS holds is the cooperation with Eucalyptus, signed last year. The aim is to improve compatibility with the AWS APIs by having AWS supply Eucalyptus with further information. Furthermore, developers from both companies are focusing on solutions that help enterprise customers migrate existing data between their data centers and the AWS cloud. Customers will also get the opportunity to use the same management tools and apply their knowledge to both platforms. With version 3.3, Eucalyptus was already able to present the first results of this cooperation.

It will be exciting to see what the future holds for Eucalyptus. Eventually, the open source company will be bought or will sell itself. The question is who makes the first move. My advice remains that AWS should seek an acquisition of Eucalyptus. From a strategic and technological point of view, there is actually no way around it.

Given this, Rackspace and all the others who jumped on the hybrid cloud bandwagon should not get their hopes up too high. Amazon has put things on the right track and is prepared for this scenario as well.

Categories
Services

TeamDrive SecureOffice: Seamless and secure document processing for smartphones and tablets

With “TeamDrive SecureOffice”, the German vendor TeamDrive, known for its secure enterprise cloud storage solutions, and Picsel, the maker of Smart Office, have joined forces to launch the first secure office solution for iOS and Android smartphones and tablets. TeamDrive SecureOffice presents itself as the first Dropbox-like synchronization solution with built-in office document handling and end-to-end encryption.

End-to-end encryption for enterprises and government agencies

With the integration of TeamDrive's security-tested and certified sync & share technology alongside Picsel's Smart Office solution for mobile devices, the two companies strive to set a milestone for secure productivity solutions on mobile devices. According to both companies, TeamDrive SecureOffice is the first technology of its kind that covers the needs of companies and government agencies with complete end-to-end encryption while allowing data to be shared by different users on mobile devices.

A sandbox provides seamless security on the device

Thanks to sandbox technology, shared documents never leave the secure environment provided by the application. Complete end-to-end encryption is applied when employees send and receive files via mobile devices. Data stored in the application remains within its secure environment even when, for example, a Word document is opened: the document is forwarded directly to the internally embedded Smart Office application and opened without ever leaving TeamDrive SecureOffice's secure environment. With the integration of Smart Office into TeamDrive, an encrypted synchronization solution for iPhones, iPads and Android smartphones is realized.
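The underlying principle is easy to illustrate. The sketch below is illustrative only and uses the Python cryptography package; TeamDrive's actual implementation and key management are not publicly documented at this level of detail.

from cryptography.fernet import Fernet

# In an end-to-end system, the key is generated and kept on the devices;
# the sync server never sees it.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt before anything leaves the device (assumes a local report.docx).
plaintext = open("report.docx", "rb").read()
ciphertext = cipher.encrypt(plaintext)

# Only the ciphertext is uploaded; the server stores data it cannot read.
open("report.docx.enc", "wb").write(ciphertext)

# The receiving device, which shares the key, decrypts the file again.
assert cipher.decrypt(ciphertext) == plaintext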

Please note: on Android, encryption for the device itself and the SD card must be activated separately in order to ensure the highest level of security. On the iPhone and iPad, it is active automatically.

Features of TeamDrive SecureOffice

TeamDrive SecureOffice offers numerous functions. These functions include:

  • Support for multiple operating systems.
  • Powerful viewing and editing functions.
  • Comprehensive support for file formats such as Microsoft Word, PowerPoint, Excel and Adobe PDF.
  • Automatic document adjustment by zooming and panning.

Furthermore, users can expect a reliable printing solution for mobile devices, the ability to export files from an office application to Adobe PDF, and support for annotating PDF documents with comments.

Secure mobile collaboration becomes more and more vital

The safe use of corporate data is a top priority. As the ongoing transition to a mobile society continues, the way we work and collaborate is bound to progress and change. The introduction and toleration of Bring Your Own Device (BYOD) in enterprises has led to a new era of protection needs, needs which present a particular challenge for any organization wanting to protect critical data and intellectual property to the fullest. Therefore, new technological advances are more than necessary to help protect against the loss of sensitive data on mobile devices without losing sight of simplicity and ease of use.

TeamDrive has proven itself to be the most secure German vendor of sync, share and cloud storage solutions. The company leverages its expertise and proven security technology, combined with Picsel's Smart Office, to increase the productivity of employees on mobile devices. At the same time, both companies break fresh ground with their sandbox technology and give the critical issues of data security and data loss the attention they deserve.

For large companies, government agencies and any other organization working with particularly sensitive data, it is crucial to adapt to the mobile habits of employees and to respond to those habits with appropriate solutions. A first look at the Android version of the app shows how the seamless workflow of Picsel's Smart Office for mobile devices, in conjunction with TeamDrive's cloud-based sync and encryption technology for business, can help to achieve this goal.

Categories
Services

openQRM 5.1 is available: OpenStack and Eucalyptus have a serious competitor

openQRM project manager Matt Rechenburg already told me about some of the new features of openQRM 5.1 at the end of July. Now the final release is out, with additional functions, and the overall picture looks really promising. The heading might at first suggest that openQRM is something completely new that is trying to outstrip OpenStack or Eucalyptus. Not by a long shot: openQRM has existed since 2004 and has built up a wide range of functions over the years. The open source project from Cologne, Germany has unfortunately been lost a little in the crowd, because the marketing machines behind OpenStack and Eucalyptus run at full speed and their loudspeakers are many times louder. Certainly, openQRM need not hide from them. On the contrary: with version 5.1, Rechenburg's team has once again vigorously added functions and, in particular, enhanced the user experience.

The new features in openQRM 5.1

openQRM-Enterprise, too, sees great significance in the hybrid cloud for the future of cloud computing. For this reason, openQRM 5.1 was extended with a plugin to span a hybrid cloud across the cloud infrastructures of the Amazon Web Services (AWS), Eucalyptus Cloud and Ubuntu Enterprise Cloud (UEC). openQRM thus uses AWS, Eucalyptus and UEC as resource providers for additional computing power and memory, and is able to manage public cloud infrastructures as well as local virtual machines. This allows administrators, based on openQRM, to transparently provide their end users with Amazon AMIs via a cloud portal and to monitor the state of the virtual machines via Nagios.

Another new feature is the tight integration with Puppet, with which end users can order virtual machines as well as public cloud infrastructures with personalized, managed software stacks. The best technical renovation is the consideration of the Amazon Web Services high-availability concept: if an Amazon EC2 instance fails, a new one starts automatically. openQRM 5.1 is even able to compensate for the outage of an entire Amazon Availability Zone (AZ). If an AZ becomes unavailable, the corresponding EC2 instance is started in another Availability Zone of the same region.
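The failover idea can be sketched as follows. This is a simplified illustration with boto3, not openQRM's actual code; the AMI ID, instance type and zone name are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def relaunch_in_other_az(ami_id, failed_az):
    # Pick another Availability Zone of the same region that is available.
    zones = ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )["AvailabilityZones"]
    target = next(z["ZoneName"] for z in zones if z["ZoneName"] != failed_az)

    # Start a replacement instance in the healthy zone.
    run = ec2.run_instances(
        ImageId=ami_id,
        InstanceType="t2.micro",               # placeholder type
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": target},
    )
    return run["Instances"][0]["InstanceId"]

# Example: relaunch after us-east-1a is reported as unavailable.
# relaunch_in_other_az("ami-12345678", "us-east-1a")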

According to CEO Matt Rechenburg, openQRM-Enterprise is consistently expanding the open source solution openQRM with cloud-spanning management capabilities in order to empower its customers to grow flexibly beyond the limits of their own IT capacities.

In addition to the technical enhancements, much emphasis was placed on ease of use. In openQRM 5.1 the administrator user interface was completely revised, which ensures a better overview and user guidance. The dashboard, as the central entry point, has also benefited greatly: all important information, such as the status of the data center, is now visible at a glance.

New features have been introduced for the openQRM-Enterprise Edition, too. These include a new plugin for role-based access rights management, which allows fine-grained permissions to be set for administrators within the entire openQRM system. With it, complete enterprise topologies can be mapped to openQRM roles, for example to restrict administrators who are responsible only for virtualization to the management and provisioning of virtual machines.
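The concept behind this can be illustrated with a few lines of Python; the role and permission names below are invented for this example and are not openQRM-Enterprise's API.

from dataclasses import dataclass

ROLES = {
    # A virtualization admin may only manage and provision VMs ...
    "virtualization-admin": {"vm.create", "vm.start", "vm.stop"},
    # ... while a datacenter admin may also touch storage and network.
    "datacenter-admin": {"vm.create", "vm.start", "vm.stop",
                         "storage.manage", "network.manage"},
}

@dataclass
class Administrator:
    name: str
    role: str

    def may(self, permission):
        return permission in ROLES.get(self.role, set())

admin = Administrator("alice", "virtualization-admin")
print(admin.may("vm.start"))        # True
print(admin.may("network.manage"))  # False: outside her role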

Further renovations in openQRM 5.1 include improved support for the virtualization technology KVM, which can now also use GlusterFS volumes. In addition, VMware technologies are better supported: existing VMware ESX systems can now be managed, and VMware ESX machines that boot locally or over the network can be installed and managed as well.

openQRM takes a big step forward

openQRM has actually been on the market significantly longer than OpenStack or Eucalyptus. Nevertheless, the level of awareness of those two projects is greater, mainly due to the substantial marketing efforts of both the OpenStack community and Eucalyptus. Technologically and functionally, however, openQRM need not hide. On the contrary, openQRM's feature set is many times more powerful than that of OpenStack or Eucalyptus. Where the two focus exclusively on the cloud, openQRM also includes a complete data center management solution.

Due to its long history, openQRM has gained many new and important features in recent years. As a result, however, it is easy to lose track of what the solution is actually capable of. But whoever has understood that openQRM is composed of integrated building blocks such as "Data Center Management", "Cloud Backend" and "Cloud Portal" will recognize that the open source solution provides an advantage over OpenStack and Eucalyptus, especially for building private clouds. In particular, the area of data center management must not be neglected when building a cloud, in order to keep track of and control over the infrastructure.

Version 5.0 already began to sharpen the structures and to consolidate the individual functions into workflows. The 5.1 release takes this another step further. The look and layout of the openQRM backend have been completely overhauled; the result is tidier, easier to use, and will positively surprise customers.

The extension with hybrid cloud functionality is an important step for the future of openQRM. The Rackspace 2013 Hybrid Cloud survey showed that 60 percent of IT decision-makers have the hybrid cloud as their main goal in mind. Of these, 60 percent have withdrawn, or will withdraw, their applications and workloads from the public cloud; 41 percent left the public cloud partially, and 19 percent even want to leave the public cloud completely. The reasons for using a hybrid cloud rather than a public cloud are higher security (52 percent), more control (42 percent) and better performance and higher reliability (37 percent). The top benefits hybrid cloud users report include more control (59 percent), higher security (54 percent), higher reliability (48 percent), cost savings (46 percent) and better performance (44 percent).

Not surprisingly, openQRM orients itself towards the current public cloud leader, Amazon Web Services. Thus, in combination with Eucalyptus or other Amazon-compatible cloud infrastructures, openQRM can also be used to build massively scalable hybrid cloud infrastructures. For this purpose, openQRM relies on its proven plugin concept and integrates Amazon EC2, S3 and Eucalyptus in exactly this way. Besides its own resources from a private openQRM cloud, Amazon and Eucalyptus serve as further resource providers to obtain more computing power quickly and easily.
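This is also where Eucalyptus' AWS API compatibility pays off in practice: the same client code can address both clouds simply by swapping the endpoint. A minimal sketch with boto3, in which the Eucalyptus URL is a placeholder and credentials are assumed to be configured:

import boto3

def list_instances(endpoint_url=None):
    # endpoint_url=None talks to AWS itself; a URL points the same EC2
    # client at an EC2-compatible private cloud such as Eucalyptus.
    ec2 = boto3.client("ec2", region_name="us-east-1",
                       endpoint_url=endpoint_url)
    reservations = ec2.describe_instances()["Reservations"]
    return [i["InstanceId"] for r in reservations for i in r["Instances"]]

aws_instances = list_instances()
euca_instances = list_instances("https://euca.example.com:8773/services/compute")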

The absolute killer features include the automatic application deployment using Puppet, with which end users can conveniently and automatically provision EC2 instances with a complete software stack themselves, as well as the consideration of Amazon's Availability Zones for high availability, something many cloud users neglect again and again out of ignorance. The improved integration of VMware technologies such as ESX systems should not be ignored either. After all, VMware is still the leading virtualization technology on the market, so openQRM also increases its attractiveness as an open source solution for managing and controlling VMware environments.

Technologically and functionally, openQRM is on a very good path into the future. However, bigger investments in public relations and marketing are imperative.

Categories
Analysis

Amazon EBS: When does Amazon AWS eliminate its single point of failure?

Recently, the Amazon Web Services stood still once again in the US-EAST-1 region in North Virginia: first on August 19, 2013, which apparently also dragged down Amazon.com itself, and again on August 25, 2013. Storms are said to have been the cause. Of course, it once again caught the usual suspects, especially Instagram and Reddit. Instagram in particular seems to rely so heavily on its cloud architecture that it blows up in its face regularly whenever lightning strikes in North Virginia. However, Amazon AWS should also start to change its infrastructure, because it cannot go on like this.

US-EAST-1: Old, cheap and fragile

The Amazon region US-EAST-1 is the oldest and most widely used region in Amazon's cloud computing infrastructure. This is related, on the one hand, to the costs, which are considerably cheaper in comparison to other regions, although the Oregon region has since been adjusted in price. On the other hand, because of its age, many long-standing customers are located here, presumably with system architectures that are not optimized for the cloud.

All eggs in one basket

The mistake most users make is to rely on a single region only, in this case US-EAST-1. If this region fails, their own service is of course no longer accessible either. This applies to all of Amazon's worldwide regions. To overcome this situation, a multi-region approach should be chosen, in which the application is distributed across several regions in a scalable and highly available manner.
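One common way to implement such a multi-region approach is DNS-based failover. The following is a minimal sketch using today's boto3 SDK and Amazon Route 53; the hosted zone ID, health check ID, domain and IP addresses are placeholders.

import boto3

route53 = boto3.client("route53")

def failover_record(set_id, role, ip, health_check_id=None):
    record = {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,                  # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={"Changes": [
        # US-EAST-1 serves the traffic while its health check passes ...
        failover_record("us-east-1", "PRIMARY", "203.0.113.10",
                        health_check_id="11111111-2222-3333-4444-555555555555"),
        # ... otherwise DNS automatically fails over to the EU region.
        failover_record("eu-west-1", "SECONDARY", "203.0.113.20"),
    ]},
)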

Amazon EBS: Single Point of Failure

I already suggested last year that the Amazon Web Services have a single point of failure. Below are the status messages of August 25, 2013, which underline my thesis. I first became aware of it through a blog post by the Amazon AWS customer awe.sm, who told of their experiences in the Amazon cloud. The EBS-related passages are crucial.

EC2 (N. Virginia)
[RESOLVED] Degraded performance for some EBS Volumes
1:22 PM PDT We are investigating degraded performance for some volumes in a single AZ in the US-EAST-1 Region
1:29 PM PDT We are investigating degraded performance for some EBS volumes and elevated EBS-related API and EBS-backed instance launch errors in a single AZ in the US-EAST-1 Region.
2:21 PM PDT We have identified and fixed the root cause of the performance issue. EBS backed instance launches are now operating normally. Most previously impacted volumes are now operating normally and we will continue to work on instances and volumes that are still experiencing degraded performance.
3:23 PM PDT From approximately 12:51 PM PDT to 1:42 PM PDT network packet loss caused elevated EBS-related API error rates in a single AZ, a small number of EBS volumes in that AZ to experience degraded performance, and a small number of EC2 instances to become unreachable due to packet loss in a single AZ in the US-EAST-1 Region. The root cause was a "grey" partial failure with a networking device that caused a portion of the AZ to experience packet loss. The network issue was resolved and most volumes, instances, and API calls returned to normal. The networking device was removed from service and we are performing a forensic investigation to understand how it failed. We are continuing to work on a small number of instances and volumes that require additional maintenance before they return to normal performance.
5:58 PM PDT Normal performance has been restored for the last stragglers of EC2 instances and EBS volumes that required additional maintenance.

ELB (N. Virginia)
[RESOLVED] Connectivity Issues
1:40 PM PDT We are investigating connectivity issues for load balancers in a single availability zone in the US-EAST-1 Region.
2:45 PM PDT We have identified and fixed the root cause of the connectivity issue affecting load balancers in a single availability zone. The connectivity impact has been mitigated for load balancers with back-end instances in multiple availability zones. We continue to work on load balancers that are still seeing connectivity issues.
6:08 PM PDT At 12:51 PM PDT, a small number of load balancers in a single availability zone in the US-EAST-1 Region experienced connectivity issues. The root cause of the issue was resolved and is described in the EC2 post. All affected load balancers have now been recovered and the service is operating normally.

RDS (N. Virginia)
[RESOLVED] RDS connectivity issues in a single availability zone
1:39 PM PDT We are currently investigating connectivity issues to a small number of RDS database instances in a single availability zone in the US-EAST-1 Region
2:07 PM PDT We continue to work to restore connectivity and reduce latencies to a small number of RDS database instances that are impacted in a single availability zone in the US-EAST-1 Region
2:43 PM PDT We have identified and resolved the root cause of the connectivity and latency issues in the single availability zone in the US-EAST-1 region. We are continuing to recover the small number of instances still impacted.
3:31 PM PDT The majority of RDS instances in the single availability zone in the US-EAST-1 Region that were impacted by the prior connectivity and latency issues have recovered. We are continuing to recover a small number of remaining instances experiencing connectivity issues.
6:01 PM PDT At 12:51 PM PDT, a small number of RDS instances in a single availability zone within the US-EAST-1 Region experienced connectivity and latency issues. The root cause of the issue was resolved and is described in the EC2 post. By 2:19 PM PDT, most RDS instances had recovered. All instances are now recovered and the service is operating normally.

Amazon EBS is the basis of many other services

These passages are of crucial importance because all of these services depend on one and the same underlying service: Amazon EBS. This is the result awe.sm arrived at in their analysis. As awe.sm determined, EBS is nine times out of ten the main problem in major outages at Amazon, as in the outage above.

The services that depend on Amazon EBS include the Elastic Load Balancer (ELB), the Relational Database Service (RDS) and Elastic Beanstalk.

Q: What is the hardware configuration for Amazon RDS Standard storage?
Amazon RDS uses EBS volumes for database and log storage. Depending on the size of storage requested, Amazon RDS automatically stripes across multiple EBS volumes to enhance IOPS performance.

Source: http://aws.amazon.com/rds/faqs/

EBS Backed
  • EC2: If you select an EBS backed AMI
  • ELB: You must select an EBS backed AMI for EC2 host
  • RDS
  • Elastic Beanstalk
  • Elastic MapReduce

Source: Which AWS features are EBS backed?

Load balancing across availability zones is excellent advice in principle, but still succumbs to the problem above in the instance of EBS unavailability: ELB instances are also backed by Amazon’s EBS infrastructure.

Source: Comment – How to work around Amazon EC2 outages

As you can see, more than a few services depend on Amazon EBS. Conversely, this means that if EBS fails, these services are no longer available either. Particularly tragic is the case of the Amazon Elastic Load Balancer (ELB), which is responsible for directing traffic in the event of failure or heavy load. If Amazon EBS fails and the data traffic is supposed to be redirected to another region, this does not work, because the load balancer itself also depends on EBS.
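Anyone who wants to know how exposed their own setup is can check which of their resources are EBS-backed. A small sketch with boto3; the root-device-type filter is part of the EC2 API:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# All of your own AMIs whose root device lives on EBS.
ebs_amis = ec2.describe_images(
    Owners=["self"],
    Filters=[{"Name": "root-device-type", "Values": ["ebs"]}],
)["Images"]
print("EBS-backed AMIs:", [img["ImageId"] for img in ebs_amis])

# All instances whose root device lives on EBS.
reservations = ec2.describe_instances(
    Filters=[{"Name": "root-device-type", "Values": ["ebs"]}]
)["Reservations"]
print("EBS-backed instances:",
      [i["InstanceId"] for r in reservations for i in r["Instances"]])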

I may be wrong. However, looking at the past failures, the evidence indicates that Amazon EBS is the main source of error in the Amazon cloud.

One may therefore be allowed to ask how a market leader can let the same component of its infrastructure, one which must in principle be regarded as a single point of failure, blow up in its face again and again; and whether an infrastructure-as-a-service (IaaS) provider that has experienced the most outages of all providers in the market can, from this point of view, still be described as a leader. Even if I frequently preach that users of a public IaaS must take responsibility for scalability and high availability on their own, the provider has the duty to ensure that the infrastructure is reliable.

Categories
Analysis

Cloud storage Box could become a threat to Dropbox and Microsoft SkyDrive

To become more attractive for private users and small businesses, the cloud storage provider Box has expanded its pricing model. Effective immediately, in addition to the existing plans for private, business and enterprise customers, a Starter plan is available as well, which is interesting for small businesses and freelancers as well as private customers.

Private customers get more free storage, small businesses a new plan

The offer for private users has been increased from the former free 5GB to 10GB. In addition, the portfolio was extended with a new Starter plan targeted at smaller companies. It offers 100GB of storage for 1 to a maximum of 10 users per company account, at $5 per user per month.

Box, which primarily addressed large companies so far, thus hopes that smaller enterprise customers and consumers will increasingly store their data in the cloud rather than save it on error-prone local media. According to CEO Aaron Levie, Box is particularly driven by the issues of information and collaboration. Whether it is a global corporation, a small business or a freelancer, in the end what matters is being able to share content and access it securely and reliably from anywhere, says Levie.

The new Starter plan is just a hook

To be honest, the new Starter plan is very interesting, as it meets the needs of a specific target group; however, that group is not small companies, but rather private users and freelancers. The features offered around the storage are definitely at enterprise level. In addition to various security features (but no end-to-end encryption) at different levels, integration options via apps based on an ecosystem of third-party developers are available. However, 100GB is far too little for small businesses, especially since this account is designed for 1 to 10 users; 10GB per user becomes very scarce very quickly. Moreover, many other interesting and important features for businesses are offered only with the next plan, "Business", at $15 per user per month, which requires at least three users. It includes 1000GB of storage, further security functions at folder and file level per user, integration with Active Directory, Google Apps and Salesforce, advanced user management, and more. So, at the end of the day, the Starter plan just serves as a hook to drum up business customers.

On the other hand, this plan is a very interesting deal for private users and freelancers who need more features at a cheaper price and with performance similar to Dropbox. For although the free plan was extended to 10GB, the 50GB option has been dropped. Anyone who now needs more than 10GB must buy 100GB for $10 per month. It therefore makes much more sense for private users to opt for a Starter plan and pay only $5 per month, or $60 per year.

The Starter plan may well ensure that Dropbox and Microsoft SkyDrive lose market share once this change gets around. SkyDrive in particular should dress warmly. Although Microsoft's cloud storage is well integrated with the Windows operating systems and continues to be the cheapest on the market, SkyDrive is very slow and the user experience is below average. Just to highlight a tiny but crucial detail that Box simply does better: transparency about what is happening in the background. By way of comparison, Box has a small app for Windows in which the status is displayed. There you can see the progress in percent, the approximate time until the upload is completed, the file currently being processed, how many files still need to be processed, and how many files are processed in total. Microsoft SkyDrive shows none of this. The user is completely left in the dark.

Dropbox is known as the performance king, and its ease of use is good as well. Nevertheless, the Box Starter plan, with its extended functionality at a cheaper price and similar performance, certainly has the potential to compete with Dropbox.

Note: due to the current security situation, it should be pointed out that Box is a U.S. based provider and the data is stored in the United States. Although the data is stored encrypted on the server side, Box does not offer end-to-end encryption (only SSL during transmission). The keys for the data encrypted on Box's infrastructure are owned by Box and not by the user. For this reason, Box is able to decrypt the data on its own and to allow third parties to access it at any time.

Categories
Analysis

Google Apps vs. Microsoft Office 365: What do the assumed market shares really tell us?

The blog of UserActivion compares at regular intervals the spread of the office and collaboration solutions Google Apps and Microsoft Office 365. I became aware of the market shares per industry by chance, thanks to Michael Herkens. The result is from July and shows the market shares of both cloud solutions for June, along with a comparison to the previous month, May. What stands out is that Google Apps is by far in the lead in every industry. Given Microsoft's size and general dissemination, that is actually unusual. So it makes sense to take a look behind the scenes.

Google Apps with a clear lead over Office 365

Trusting the numbers from UserActivion, Google Apps has a significant lead in the sectors of trade (84.3 percent), technology (88.2 percent), education (77.4 percent), health care (80.3 percent) and government (72.3 percent).

In a global comparison it looks similar. However, compared to May, Office 365 has caught up in North America, Europe and Scandinavia. Nevertheless, the current ratio, according to UserActivion, is 86:14 for Google Apps. In countries such as Norway and Denmark there is a balanced ratio of 50:50.


[Chart: Google Apps vs. Office 365 market shares per industry. Source: useractivation.com]

[Chart: Google Apps vs. Office 365 market shares by region. Source: useractivation.com]

Do the Google Apps market shares mean anything?

This question is relatively easy to answer. They do say something about the dissemination of Google Apps. However, they say nothing about the revenue Google generates with these market shares. Measured by revenue, the ratio between Google Apps and Office 365 is likely to look rather different. Why?

Well, Google Apps gained such a large market share because the Google Apps Standard version (which no longer exists) could be used free of charge for a given number of users, and Google Apps for Education for a very, very long time. The Education version is still free. This naturally meant that many users with their own domain chose a free Google Apps Standard account, which may continue to be used even after that version was discontinued. The only limitation is the maximum number of users per domain.

So it happens, for example, that I alone have registered nine (9) Google Apps Standard accounts over the years. Most of them I still have. I currently use two of them actively (Standard, Business) and pay for one (Business).

The Google Apps market shares must therefore be broken down as follows:

  • Registered Google Apps users.
  • Not active Google Apps users.
  • Active Google Apps users who do not pay.
  • Active Google Apps users who pay.

If this ratio is weighted and applied to the market shares in terms of revenue, Microsoft would fare better. Why?

At the end of the day, it is about cash, and Microsoft has had nothing to give away from the beginning. Admittedly, Office 365 can be tested free of charge for a short time; afterwards, the decision is made for or against a paid plan. This means it can be assumed that every Office 365 customer who is no longer in the testing phase is an active paying user. Surely Google gains insights into the behavior of the non-paying active users and can place a bit of advertising. But the question remains how much that is really worth.

This does not mean that Google Apps is not widely used. But it shows that Google's strategy works out in at least one direction: giving away accounts for free to gain market share pays off (naturally). It also shows that market share does not automatically mean profitability. Most people like to take what they can get for free – the Internet principle. The fact that Google slowly has to start monetizing its customer base can be seen in the discontinuation of the free Standard version. It appears that the critical mass has been reached.

Compared to Google Apps, Microsoft was late to market with Office 365 and has a lot of catching up to do. On top of that, Microsoft (apparently) has nothing to give away. Assuming that the UserActivion numbers are valid, another reason may be that the website and the offer of Office 365 – compared to Google Apps – are far too opaque and complicated. (Hint: just visit the websites.)

Google copied the Microsoft principle

In conclusion, based on the UserActivion numbers, it can be said that Google is well on the way to applying the Microsoft principle to itself. Microsoft's way into the enterprise led through the home users of its Windows, Office and Outlook products, and the strategy worked: whoever works in the evening with the familiar programs from Redmond has a much easier time with them at work during the day. Recommendations for a Microsoft product by employees were inevitable as well. The same applied to Windows Server: if a Windows operating system is so easy to use and configure, then a server cannot ultimately be much more complicated. With that, the decision-makers could be brought on board. A similar principle can be seen at Google. Google Apps is nothing more than a best-of of the Google services for end users – Gmail, Google Docs, etc. – packed into a suite. Meanwhile, the dissemination of these services is relatively high, so it can be assumed that most IT decision-makers are familiar with them, too. It will therefore be interesting to see how the real(!) market shares of Google Apps and Office 365 develop.

Categories
Comment

Dangerous: VMware underestimates OpenStack

Slowly we should seriously be worried about who dictates to VMware's executives what they have to say. In early March this year, COO Carl Eschenbach said that he finds it really hard to believe that VMware and its partners cannot collectively beat a company that sells books (note: Amazon). Well, the current reality looks different. Then, in late March, a VMware employee from Germany came to the conclusion that VMware is the technology enabler of the cloud and that he currently sees no other. That this is far from the truth, we all know as well. Lastly, CEO Pat Gelsinger puts another one on top and claims that OpenStack is not for the enterprise. Time for some enlightenment.

Grit your teeth and get to it!

In an interview with Network World, VMware CEO Pat Gelsinger spoke about OpenStack and said he does not expect the open source project to achieve significant reach in enterprise environments. Instead, he considers it a framework that service providers can use to build public clouds. VMware, in contrast, is very widely deployed, with extremely large VMware environments, so switching costs and other issues make a move unattractive. However, for cloud and service providers, in areas where VMware has not done business successfully in the past, Gelsinger sees a lot of potential for OpenStack.

Furthermore, Gelsinger considers OpenStack a strategic initiative for VMware, one the company is happy to support. VMware will ensure that its products and services work within cloud environments based on the open source solution. In this context, OpenStack also opens up new opportunities for VMware to enter the market for service providers, an area VMware has neglected in the past. VMware and Gelsinger therefore see OpenStack as a way to position themselves more broadly.

Pride comes before a fall

Pat Gelsinger is right when he says that OpenStack is designed for service providers, and VMware does remain the leading virtualization vendor in the enterprise environment. However, this kind of stereotypical thinking is dangerous for VMware, because the tide can turn very quickly. Gelsinger likes to cite the high switching costs as a reason why companies should continue to rely on VMware. But one thing should be considered: VMware's strength lies only(!) in virtualization – in the hypervisor. When it comes to cloud computing, where OpenStack has its center of gravity, things do not look nearly as rosy. To be honest, VMware missed the trend of offering early solutions that enable the virtualized infrastructure for the cloud and that serve the IT department with higher-quality services and self-service capabilities on its way to becoming a service broker. Solutions are available by now, but the competition, and not only from the open source environment, grows steadily.

IT buyers and decision-makers see this as well. I have spoken with more than one IT manager who plans to exchange his VMware infrastructure for something open – in most cases KVM was named – and more cost-effective. And that is just the virtualization layer that can break away. Furthermore, there are already use cases of large companies (see on-premise private cloud) that use OpenStack to build a private cloud infrastructure. It must also not be forgotten that more and more companies are moving towards hosted private cloud or public cloud providers, and that their own data centers will become less important over time. In this context, the hybrid cloud plays an increasingly important role in making the transfer and migration of data and systems more convenient. Here OpenStack, due to its wide distribution in hosted private cloud and public cloud environments, has a great advantage over VMware.

With such statements, VMware is of course trying to keep OpenStack away from its own territory – on-premise enterprise customers – in order to place its own solutions there. Nevertheless, VMware should not make the mistake of underestimating OpenStack.

Categories
Analysis

Google expands the Compute Engine with load balancer functionality. But where is the innovation machine?

In a blog post, Google has announced further renovations to its cloud platform. In addition to the expansion and improvement of the Google Cloud Datastore and the announcement of Google App Engine 1.8.3, the infrastructure-as-a-service (IaaS) offering Google Compute Engine has received a load balancing service.

News from the Google Cloud Datastore

To further increase developer productivity, Google has expanded its Cloud Datastore with several functionalities. These include the Google Query Language (GQL). GQL is a SQL-like language aimed specifically at data-intensive applications for querying entities and keys from the Cloud Datastore. Furthermore, more statistics can be obtained from the underlying data by querying metadata. Google sees this as an advantage, for example, when developing an internal administrative console, doing custom analyses or debugging an application. In addition, improvements were made to the local SDK, including enhancements to the command line tool and support for developers using Windows. After the first version of the Cloud Datastore supported Java, Python and Node.js, Ruby has now been added.
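For a sense of what GQL looks like, here is a short sketch using the App Engine Python ndb library; the Task model and its properties are invented for illustration, and the code only runs inside the App Engine environment.

from google.appengine.ext import ndb

class Task(ndb.Model):
    description = ndb.StringProperty()
    done = ndb.BooleanProperty()
    priority = ndb.IntegerProperty()

# A SQL-like GQL query instead of the native query builder API.
query = ndb.gql(
    "SELECT * FROM Task WHERE done = FALSE AND priority >= 4 "
    "ORDER BY priority DESC"
)
for task in query:
    print(task.description, task.priority)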

Google App Engine 1.8.3

The renewal of Google App Engine version 1.8.3 is mainly directed at the PHP runtime environment. Furthermore, the integration with Google Cloud Storage, which according to Google is gaining popularity, was reinforced. From now on, functions such as opendir() or writedir() can be used to access buckets in Cloud Storage directly. Furthermore, stat()-ing information can be queried via is_readable() and is_file(). Metadata can now also be written to Cloud Storage files. Moreover, performance improvements were made through memcache-backed optimistic read caching, which improves the performance of applications that need to read frequently from the same Cloud Storage file.

Load Balancing with the Compute Engine

The Google Compute Engine has been expanded with Layer 3 load balancing functionality, which allows scalable and fault-tolerant web applications to be developed on Google's IaaS. With the load balancing function, incoming TCP/UDP traffic can be distributed across different Compute Engine machines (virtual machines) within the same region. Moreover, it ensures that no defective virtual machines are used to respond to HTTP requests and that load peaks are absorbed. The load balancer is configured either via the command line or via the REST API. The load balancing function can be used free of charge until the end of 2013; after that, the service will be charged.
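Conceptually, the load balancing consists of two resources: a target pool that groups the virtual machines, and a forwarding rule that maps incoming traffic to that pool. Below is a minimal sketch against the REST API using today's google-api-python-client; project, region and resource names are placeholders, and instances would be attached to the pool afterwards via targetPools().addInstance().

import google.auth
from googleapiclient.discovery import build

credentials, project = google.auth.default()
compute = build("compute", "v1", credentials=credentials)
region = "us-central1"

# 1) A target pool groups the VMs that should receive the traffic.
compute.targetPools().insert(
    project=project, region=region,
    body={"name": "www-pool"},
).execute()

# 2) A forwarding rule maps incoming TCP port 80 traffic to the pool.
compute.forwardingRules().insert(
    project=project, region=region,
    body={
        "name": "www-rule",
        "IPProtocol": "TCP",
        "portRange": "80",
        "target": "regions/" + region + "/targetPools/www-pool",
    },
).execute()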

Google’s progress is too slow

Two important innovations stand out in the news about the Google Cloud Platform: on the one hand, the new load balancing function of the Compute Engine; on the other, the strengthened integration of the App Engine with Google Cloud Storage.

The load balancing extension admittedly comes much too late. After all, this is one of the essential functions of a cloud infrastructure offering, necessary for building scalable and highly available systems, and one that all other established providers on the market already have in their portfolio.

The extended integration of Cloud Storage with the Google App Engine is important in order to give developers more S3-like features out of the PaaS and to create more possibilities for accessing and processing data in a central and scalable storage location.

The bottom line is that Google continues to expand its IaaS portfolio steadily. What stands out, though, is the very slow speed. Of the innovation engine Google, which we know from many other areas of the company, there is nothing to be seen. Google gives the impression that it has to offer IaaS in order not to lose too many developers to the Amazon Web Services and other providers, but that the Compute Engine does not have a high priority. As a result, the offering receives very little attention. The very late addition of load balancing is a good example. Otherwise, one might expect more ambition and speed in the expansion of the offering from a company like Google. After all, except for Amazon, no other company has more experience in the field of highly scalable infrastructures. This should be turned into a public offering relatively quickly, especially as Google advertises that customers can use the same infrastructure on which all Google services run. Apart from the App Engine, which belongs to the PaaS market, the Google Compute Engine must continue to take a back seat. And this will remain the case for a long time if the expansion speed is not accelerated and other essential services are not rolled out.

Categories
Analysis

Cloud computing requires professional, independent and trustworthy certifications

About a year ago I wrote on the sense and nonsense of cloud seals, certificates, associations and initiatives. At that time I already came to the conclusion that we need trustworthy certifications. However, the German market looked rather weak and was represented, alongside EuroCloud, by other advocacy groups such as the "Initiative Cloud Services Made in Germany" or "Deutsche Wolke". But there is a promising independent newcomer.

Results from last year

"Cloud Services Made in Germany" and "Deutsche Wolke"

What such initiatives generally have in common is that they try to steer as many providers of cloud computing services as possible into their own ranks with various promises. "Cloud Services Made in Germany" in particular jumps on the supposed quality feature "Made in Germany" and promises "more legal certainty in the selection of cloud-based services ...".

And this is exactly how "Cloud Services Made in Germany" and "Deutsche Wolke" position themselves. With weak criteria, both are very attractive for vendors from Germany, who in turn can advertise with the "stickers" on their websites. But the criteria say nothing at all about the real quality of a service. Is the service really a cloud service? How is the pricing model designed? Does the provider ensure cloud-conformant scalability and high availability? And many more questions that are essential for evaluating the quality of a cloud service!

Both initiatives certainly have a right to exist in their current form. However, they should not be used as a quality criterion for a good cloud service. Instead, they belong in the category "patriotism: hello world, look, we Germans can do cloud too".

EuroCloud and Cloud-EcoSystem

In addition to these two initiatives, there are the associations EuroCloud and Cloud-EcoSystem, both of which advertise with a seal and a certificate. EuroCloud has its SaaS Star Audit. The SaaS Star Audit is aimed, as the name implies, exclusively at software-as-a-service providers. Depending on the budget, a provider may be awarded one to five stars by the EuroCloud federation, but must also be a member of EuroCloud. The number of stars says something about the scope of the audit. While one star checks only "contract and compliance" and a bit of "operating infrastructure", five stars also examine processes and security intensively.

The Cloud-EcoSystem, by contrast, has its "Cloud Expert" quality certificate for SaaS & cloud computing consultants and its "Trust in Cloud" certificate for cloud computing providers. A "Cloud Expert", by the definition of the Cloud-EcoSystem, should offer providers and users decision guidance. In addition to writing professional articles and creating checklists, an expert should also carry out quality checks. Furthermore, a customer should be able to trust that the advisor fulfills certain criteria for "cloud experts": every "cloud expert" should have a deep understanding and basic skills, have references available, and provide self-created documents on request. Basically, according to the Cloud-EcoSystem, it is about shaping and representing the Cloud-EcoSystem itself.

The "Trust in Cloud" certificate is intended to serve as orientation for companies and users and to establish itself as a quality certificate for SaaS and cloud solutions. On the basis of the certificate, users get the opportunity to compare cloud solutions objectively and to come to a well-founded decision. The certification is based on a set of 30 questions, divided into 6 categories of 5 questions each. The questions must be answered by the examinee with yes or no, and the answers must be proven. If the cloud provider answers a question with yes, he receives a "cloud". The checklist includes the categories references, data security, quality of provision, decision confidence, contract terms, service orientation and cloud architecture.

Both EuroCloud and the Cloud-EcoSystem take the right approach and try to evaluate providers on the basis of self-imposed criteria. However, two points should be questioned here. First, these are associations, which means a provider has to be a member. One may legitimately ask which paying member is allowed to fail an examination – independence? Second, both create their own requirements catalogs, which are not comparable with each other. Just because a provider carries the "seals of approval" of two different associations, which evaluate according to different criteria, does not at all mean that the cloud service actually delivers real quality – trustworthiness?

The pros get into the ring: TÜV Rheinland

Independently of all the organizations that have come together specifically for the cloud, TÜV Rheinland has launched a cloud certification. TÜV is probably best known for the testing and acceptance of cranes, amusement rides and the general inspection of cars, but it also has more than 15 years of experience in the IT areas of consulting and certification with regard to compliance, risk management and information security.

The cloud certification process is extensive and has its price. A first look at the audit process and the list of requirements shows that TÜV Rheinland has developed a very powerful tool for testing cloud services and infrastructures.

Starting with a "Cloud Readiness Check", security, interoperability, compliance and data privacy are first checked for their cloud suitability, and a plan of action is created. This is followed by the review of the "cloud design", in which the concept and the solution are examined carefully. Among other things, topics such as the architecture, but also network security and access controls, are inspected. Afterwards, the actual implementation of the cloud solution is considered and quality checks are carried out. Then comes the preparation for the certification, followed by the actual certification itself.

The cloud requirements catalog of TÜV Rheinland comprises five main areas, which are in turn subdivided into a number of sub-elements. These include organizational processes, organizational structure, data security, compliance / data privacy, and processes. All in all, a very profound requirements catalog.

In a cited reference project, TÜV Rheinland required eight weeks for the certification of an international infrastructure-as-a-service provider.

Independent and trustworthy cloud certifications are mandatory

The quality and usefulness of certificates and seals stand and fall with the companies responsible for the auditing and with the criteria they define. Weak requirements catalogs neither produce an honest statement nor help to show the buyer the real differences in quality between cloud solutions. On the contrary: when in doubt, IT decision-makers rely on these supposedly tested services, whose quality may be quite another matter. In addition, cloud computing is not about installing a piece of software or a service. In the end, the service is simply consumed, and the provider is responsible for all the other processes that the customer would otherwise have taken care of himself.

For this reason, independent, trustworthy, and above all professional certifications are necessary to ensure an honest statement about the quality and properties of a cloud service, its provider and all downstream processes such as security, infrastructure, availability, etc. As a provider, one should be honest with oneself and ultimately decide on a certification that relies on professional lists of criteria, does not just scratch the surface but dives deep into the solution, and thus makes a credible statement about one's own offering.

Categories
Analysis

GigaOM Analyst Webinar – The Future of Cloud in Europe [Recording]

On July 9, Jo Maitland, Jon Collins, George Anadiotis and I talked about the opportunities and challenges of the cloud in Europe and in countries such as Germany or the UK, and gave an insight into the European cloud computing market. The recording of the international GigaOM analyst webinar "The Future of Cloud in Europe" is online now.

Background of the webinar

The European Commission unveiled its “pro cloud” strategy a year ago, hoping to reignite the stagnant economy through innovation. The Commissioner boldly proclaimed that the cloud must “happen not to Europe, but with Europe”. And rightly so. A year later, three GigaOM Research analysts from Europe – Jon Collins (Inter Orbis), George Anadiotis (Linked Data Orchestration) and Rene Buest (New Age Disruption) – moderated by Jo Maitland (GigaOM Research), looked at who the emerging cloud players in the region are and what their edge over U.S. providers is. They dug into the issues facing cloud buyers in Europe and the untapped opportunities for providers. Can Europe build a vibrant cloud computing ecosystem? That is a tough question today, as U.S. cloud providers still dominate the industry.

Questions which were answered

  • What’s driving cloud opportunities and adoption in Europe?
  • What are the inhibitors to adoption of cloud in Europe?
  • Are there trends and opportunities within specific countries (UK, Germany, peripheral EU countries)?
  • Which European providers show promise and why?
  • What are the untapped opportunities for cloud in Europe?
  • Predictions for the future of cloud in Europe.

The recording of the analyst webinar