
Fog Computing: Data, information, applications and services need to be delivered to the end user more efficiently

You read that correctly: this is not about CLOUD computing but FOG computing. Now that the cloud is well on its way to broad adoption, new concepts follow to enhance the utilization of scalable and flexible infrastructures, platforms, applications and further services, and to ensure faster delivery of data and information to the end user. This is exactly the core function of fog computing. The fog ensures that cloud services, compute, storage, workloads, applications and big data can be provided at any edge of a network (the Internet) in a truly distributed way.

What is fog computing?

The fog has the task of delivering data and workloads closer to the user, who is located at the edge of a data connection. In this context, people also speak of "edge computing". The fog is organizationally located below the cloud and serves as an optimized transfer medium for services and data within the cloud. The term "fog computing" was coined by Cisco as a new paradigm that is meant to support distributed devices during wireless data transfer within the Internet of Things. Conceptually, fog computing builds on existing and common technologies like Content Delivery Networks (CDN), but based on cloud technologies it is meant to enable the delivery of more complex services.

As more and more data must be delivered to an ever-growing number of users, concepts are needed that enhance the idea of the cloud and enable companies and vendors to provide their content to the end user over a widely distributed platform. Fog computing should help to transport the distributed data closer to the end user, thereby decreasing latency and the number of required hops, and thus better support mobile computing and streaming services. Besides the Internet of Things, the rising demand of users to access data at any time, from any place and with any device is another reason why the idea of fog computing will become increasingly important.

What are use cases of fog computing?

One should not be too confused by the new term. Fog computing may be new terminology, but a look behind the curtain quickly reveals that this technology is already used in modern data centers and in the cloud. A look at a few use cases illustrates this.

Seamless integration with the cloud and other services

The fog is not meant to replace the cloud. Fog services are meant to enhance the cloud by isolating the user data that is located exclusively at the edge of a network. From there, administrators should be able to connect analytical applications, security functions and further services directly to the cloud. The infrastructure is still based entirely on the cloud concept but extends to the edge with fog computing.

Services that sit vertically on top of the cloud

Many companies and various services already use the ideas behind fog computing by delivering extensive content to their customers in a targeted way. This includes, among others, web shops and providers of media content. A good example is Netflix, which is able to reach its numerous globally distributed customers. With data management in one or two central data centers, the delivery of a video-on-demand service would not be efficient enough. Fog computing thus allows very large amounts of streamed data to be provided by delivering the data directly and performantly into the vicinity of the customer.

Enhanced support for mobile devices

With the steady growth of mobile devices and data, administrators gain more control over where users are located at any time, from where they log in and how they access information. Besides faster access for the end user, this leads to a higher level of security and data privacy, because data can be controlled at various edges. Moreover, fog computing allows a better integration with several cloud services and thus ensures an optimized distribution across multiple data centers.

Setting up a tight geographical distribution

Fog computing extends existing cloud services by spanning an edge network that consists of many distributed endpoints. This tightly geographically distributed infrastructure offers advantages for a variety of use cases. These include faster collection and analysis of big data, better support for location-based services, since entire WAN links can be better bridged, as well as the ability to evaluate data in real time at massive scale.

Data is closer to the user

The amount of data caused by cloud services requires caching of the data, or other services that take care of this task. These services are located close to the end user to improve latency and optimize data access. Instead of storing data and information centrally in a data center far away from the user, the fog ensures the direct proximity of the data to the customer.
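
To make this idea concrete, the following minimal Python sketch shows the caching pattern an edge node typically follows: serve a request from a local cache where possible and fall back to the distant cloud origin only on a miss. The origin URL, the cache policy and the TTL are hypothetical and only illustrate the pattern; they are not tied to any specific fog product.

# Minimal sketch of an edge cache: serve locally if possible,
# otherwise fetch from the distant cloud origin and keep a copy.
# The origin URL is a placeholder, not a real service.
import time
import urllib.request

ORIGIN = "https://origin.example.com"   # central cloud data center (hypothetical)
CACHE = {}                              # in-memory cache at the edge node
TTL = 300                               # seconds an object stays "fresh"

def get(path):
    entry = CACHE.get(path)
    if entry and time.time() - entry["fetched"] < TTL:
        return entry["body"]            # cache hit: answered at the edge, low latency
    with urllib.request.urlopen(ORIGIN + path) as resp:
        body = resp.read()              # cache miss: one trip to the origin
    CACHE[path] = {"body": body, "fetched": time.time()}
    return body

# Example: the first call goes to the origin, later calls are served locally.
# get("/videos/trailer.mp4")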

Fog computing makes sense

You can think about buzzwords whatever you want. It only gets interesting once you take a look behind the curtain. The more services, data and applications are delivered to the end user, the more vendors face the task of finding ways to optimize the delivery processes. This means that information needs to be delivered closer to the user while latency is reduced, in order to be prepared for the Internet of Things. There is no doubt that the consumerization of IT and BYOD will increase the use, and therefore the consumption, of bandwidth.

More and more users rely on mobile solutions to run their business and to bring it into balance with their personal life. Increasingly rich content and data are delivered over cloud computing platforms to the edges of the Internet, while at the same time the needs of the users keep growing. With the increasing use of data and cloud services, fog computing will play a central role and help to reduce latency and improve quality for the user. In the future, besides ever larger amounts of data, we will also see more services that rely on data and must be provided to the user more efficiently. With fog computing, administrators and providers gain the ability to provide their customers with rich content faster, more efficiently and, above all, more economically. This leads to faster access to data, better analysis opportunities for companies and, equally, to a better experience for the end user.

Above all, Cisco will want to claim the term fog computing in order to use it for a large-scale marketing campaign. However, at the latest when the fog generates a similar buzz as the cloud, we will find more and more CDN and other vendors positioning themselves as fog providers with offerings in this direction.


Nirvanix. A living hell. Why multi-cloud matters.

Some of you will certainly have heard of it: Nirvanix has dispatched itself to the afterlife. The enterprise cloud storage service, which had a wide-ranging cooperation with IBM, suddenly announced its closure on September 16, 2013 and initially granted its existing customers a period of two weeks to migrate their data. The period has since been extended to October 15, 2013, as customers need more time for migration. One Nirvanix customer reported that it has 20 petabytes of data stored there.

The end of Nirvanix

Out of nowhere, enterprise cloud storage provider Nirvanix announced its end on September 16, 2013. To date it has not been disclosed how this came about. Rumor has it that a further financing round failed. Other reasons apparently lie in poor management: the company has had five CEOs since 2008. One should also not forget the strong competition in the cloud storage environment. On the one hand, many vendors have tried their luck in recent years. On the other hand, the two top dogs, Amazon Web Services with Amazon S3 and Microsoft with Azure Storage, reduce the prices of their services, which are also enterprise-ready, in regular cycles. Even being named one of the top cloud storage providers by Gartner could not help Nirvanix.

Particularly delicate is the fact that in 2011 Nirvanix signed a five-year contract with IBM to expand IBM's SmartCloud Enterprise storage services with cloud-based storage. As IBM has announced, data stored on Nirvanix will be migrated to the IBM SoftLayer object storage. As an IBM customer, I would still ask carefully about my stored data.

Multi-Cloud: Spread your eggs over multiple nests

First, a salute to the venture capital community. If it is true that Nirvanix had to stop the service because of a failed financing round, then we see what responsibility rests in their hands. Say no more.

How should you, as a cloud user, deal with a horror scenario like Nirvanix? Well, as you can see, a good customer base and partnerships with global players are apparently no guarantee that a service survives in the long term. Even Google currently plays its cloud strategy on the back of its customers and makes no binding commitment about the long-term existence of its services on the Google Cloud Platform, such as the Google Compute Engine (GCE). On the contrary, it is assumed that the GCE will not survive as long as other well-known Google services.

Backup and Multi-Cloud

Even if the cloud storage provider has to ensure the availability of the data, as a customer you have a duty of care: you must stay informed about the state of your data and, even in the cloud, take care of redundancy and backups. Meanwhile, the most popular cloud storage services have integrated functions to make seamless backups of the data and create multiple copies.

Although we are in the era of the cloud, the old rule still applies: backup! You should therefore ensure that a constantly tested(!) and reliable backup and recovery plan exists. Furthermore, sufficient bandwidth must be available to move the data as quickly as possible. This should also be checked at regular intervals with a migration audit, so that you can act quickly if worst comes to worst.

Moving 20 petabytes of data is no easy task, so you have to think about other approaches. Multi-cloud is a concept that will gain more and more importance in the future. With it, data and applications are distributed (in parallel) across multiple cloud platforms and providers. My analyst colleagues and friends Paul Miller and Ben Kepes already discussed this during their mapping session at GigaOM Structure in San Francisco. Paul subsequently wrote an interesting sector roadmap report on multi-cloud management.

Even though with Scalr, CliQr, RightScale and Enstratius some management platforms for multi-cloud already exist, we still find ourselves at a very early stage in terms of adoption. schnee von morgen webTV by Nikolai Longolius, for example, runs primarily on the Amazon Web Services and has developed a native web application 1:1 for the Google App Engine as a fallback scenario. This is not yet a full multi-cloud approach, but it shows how important the concept is for achieving cross-provider high availability and scalability with less effort. As Paul's sector roadmap shows, it is in particular the compatibility of the APIs to which great importance must be attributed. In the future, companies can no longer rely on a single provider, but will distribute their data and applications across multiple providers to drive a best-of-breed strategy and to deliberately spread the risk.
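
To spread the eggs over multiple nests in practice, a backup can be written to two independent storage providers in parallel. The following Python sketch uses boto3 against two S3-compatible endpoints purely as an illustration of the approach; the endpoints, buckets and credential profiles are placeholders, and any other provider SDK could take their place.

# Sketch: replicate one backup object to two independent storage providers.
# Endpoints, buckets and credential profiles are placeholders.
import boto3

providers = [
    {"profile": "provider-a", "endpoint": "https://s3.provider-a.example", "bucket": "backup-a"},
    {"profile": "provider-b", "endpoint": "https://s3.provider-b.example", "bucket": "backup-b"},
]

def replicate(local_file, key):
    for p in providers:
        session = boto3.Session(profile_name=p["profile"])
        s3 = session.client("s3", endpoint_url=p["endpoint"])
        s3.upload_file(local_file, p["bucket"], key)   # same object, two clouds

# replicate("database-dump.tar.gz", "2013-10-15/database-dump.tar.gz")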

This should also be taken into consideration when you "only" store data in the cloud. The golden nest is the sum of many distributed ones.


Enterprise Cloud Computing: T-Systems to launch "Enterprise Marketplace"

According to Deutsche Telekom, it has already been able to reach about 10,000 mid-size businesses with its Business Marketplace. Now a similar success is to be achieved in the corporate customer environment. For this purpose, the Enterprise Marketplace is available to purchase software, hardware or packaged solutions on demand and even to deploy company-specific in-house developments for a company's own employees.

SaaS, IaaS and ready packages

The Enterprise Marketplace is aimed at the specific requirements of large businesses and offers, besides Software-as-a-Service (SaaS) solutions, also pre-configured images (appliances) and pre-configured overall packages that can be integrated into existing system landscapes. All services, including integrated in-house developments, are located in a hosted private cloud and are delivered through Telekom's backbone.

Standard solutions and in-house development

The Enterprise Marketplace offers mostly standardized services. The usage of these services includes new releases, updates and patches. Billing is monthly and based on the use of the respective service. T-Systems also provides ready-made appliances, such as a pre-configured Apache application server, which can be managed and extended with a customer's own software. In the future, customers will be able to choose which T-Systems data center they would like to use.

Within the Enterprise Marketplace, different offerings from the marketplace can be combined, e.g. a web server with a database and a content management system. Furthermore, own solutions such as monitoring tools can be integrated. These own solutions can also be provided to the remaining marketplace participants. The customer decides on its own who and how many participants get access to the solution.

In addition, the performance of each appliance in the Enterprise Marketplace can be customized individually. The user can adjust the number of CPUs as well as the RAM size, the storage and the connection bandwidth. In a next expansion stage, managed services will take care of patches, updates and new releases.

Marketplace with solutions from external vendors

A custom dashboard provides an overview of all deployed or approved applications within the Enterprise Marketplace. Current solutions on the marketplace include web servers (e.g. Apache and Microsoft IIS), application servers (e.g. Tomcat and JBoss), SQL databases, Microsoft Enterprise Search as well as open source packages (e.g. a LAMP stack or Joomla). In addition, T-Systems partners with a number of SaaS solution providers who have been specifically tested for use on the T-Systems infrastructure. These include, among others, the tax software TAXOR, the enterprise social media solution tibbr, the business process management suite T-Systems Process Cloud as well as the sustainability management solution WeSustain.

Cloud computing at enterprise level

With its Enterprise Marketplace, T-Systems has already arrived where other providers like Amazon, Microsoft, Google, HP and Rackspace are just starting their journey – at the lucrative corporate customers. While the Business Marketplace primarily presents itself as a marketplace for SaaS solutions, T-Systems and the Enterprise Marketplace go further and also provide infrastructure resources and complete solutions for companies, including the integration of in-house developments as well as managed services.

Compared to the current cloud players, T-Systems does not operate a public cloud but instead focuses on providing a (hosted/virtual) private cloud. This can pay off in the medium term. Current market figures from IDC predict growth for the public cloud from 47.3 billion dollars in 2013 to 107 billion dollars in 2017. According to IDC, the popularity of the private cloud will decline in favor of the virtual private cloud (hosted private cloud) once the public cloud providers start to offer appropriate solutions to address the concerns of the customers.

We made similar observations during our GigaOM study "The state of Europe's homegrown cloud market". Cloud customers are increasingly demanding managed services. They want to benefit from the properties of the public cloud – flexible use of resources, pay per use – but at the same time within a very secure environment, including support by the provider for the integration of applications and systems. At present, the public cloud players cannot offer all of this in this form, since they mainly focus on their self-service offerings. This is fine, because they serve certain use cases effectively and have achieved significant market share in a specific target customer segment. However, if they want to land the big fish in the enterprise segment, they will have to align their strategy with the T-Systems portfolio in order to meet the individual needs of these customers.

A note on the selection process for partners of the Enterprise Marketplace: discussions with partners that offer their solutions on the Business Marketplace confirm a tough selection process, which can be a real "agony" for the partner. Specific adjustments, particularly to the security infrastructure of T-Systems, are not isolated cases and point to high quality standards on the part of T-Systems.


A view of Europe's own cloud computing market

Europe's cloud market is dominated by Amazon Web Services, Microsoft Windows Azure and Rackspace – the same providers that serve the rest of the world. Each of these global providers has a local presence in Europe, and all have made efforts to satisfy European-specific concerns with respect to privacy and data protection. Nevertheless, a growing number of local providers are developing cloud computing offerings in the European market. For GigaOM Research, Paul Miller and I explore in more detail the importance of these local entrants and ask whether Europe's growing concern about U.S. dominance in the cloud is a real driver for change. The goal is to discover whether there is a single European cloud market these companies can address or whether there are several different markets.

Key highlights from the report

  • European concern about the dominance of U.S. cloud providers
  • Rationale for developing cloud solutions within Europe
  • The value of transnational cloud infrastructure
  • The value of local or regional cloud infrastructure
  • A representative sample of Europe’s native cloud providers

European cloud providers covered in the report

  • ProfitBricks
  • T-Systems
  • DomainFactory
  • City Cloud
  • Colt
  • CloudSigma
  • GreenQloud

Get the report

To read the full report, go to "The state of Europe's homegrown cloud market".


Reasons for a decision against the Amazon Web Services

No question, in every cloud vendor selection the Amazon Web Services are always a serious option. However, I frequently encounter situations where companies decide against using them and prefer another provider instead. Below are the two most common reasons.

Legal security and data privacy

The storage location of the data, especially customer data, is the obvious and most common reason for a decision against the Amazon Web Services. The data should still be stored in the company's own country and, after switching to the cloud, not be located in a data center in another EU country. In this case the AWS data center in Ireland is not an option.

Lack of simplicity, knowledge and time to market

Very often it is also the lack of ease of use. This means that a company does not want to (re-)develop its existing application or website for the Amazon infrastructure. The reasons are the lack of time and of the knowledge to implement it, which may result in a longer time to market. Both can be attributed to the complexity of achieving scalability and availability on the Amazon Web Services. After all, it takes more than just a few API calls; instead, the entire architecture needs to be oriented toward the AWS cloud. In Amazon's case it is the horizontal scaling (scale-out) that makes this necessary. Companies instead prefer vertical scaling (scale-up) so they can migrate their existing system 1:1 and achieve success in the cloud directly, without starting from scratch.
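
The architectural difference can be illustrated with a small sketch: with scale-up, state may live inside a single, ever-larger machine, while scale-out requires the application to externalize that state so that any number of identical instances behind a load balancer can answer the next request. The example below is a simplified illustration of this change, not AWS-specific code; the Redis session store is just one possible external state service.

# Scale-up style: state lives in the local process, so only THIS server
# can handle the user's next request. Migrating 1:1 keeps this assumption.
local_sessions = {}

def handle_request_scale_up(user_id, data):
    local_sessions[user_id] = data      # lost if this machine disappears

# Scale-out style: state lives in an external store, so any of N identical,
# disposable instances behind a load balancer can handle the next request.
import redis  # hypothetical external session store

store = redis.Redis(host="sessions.internal.example", port=6379)

def handle_request_scale_out(user_id, data):
    store.set(user_id, data)            # the instance itself stays stateless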


AWS OpsWorks + VPC: AWS accelerates the pace in the hybrid cloud

After Amazon Web Services (AWS) acquired the German company Peritor and its solution Scalarium last year and renamed it AWS OpsWorks, the further integration into the AWS cloud infrastructure follows. The next step is the integration with the Virtual Private Cloud, which is, given the current market development, another punch in the direction of the pursuers.

AWS OpsWorks + AWS Virtual Private Cloud

AWS OpsWorks is a DevOps solution with which applications can be deployed, customized and managed. In short, it covers the entire application lifecycle. This includes features such as user-based SSH management, CloudWatch metrics for the current load and memory consumption, automatic RAID volume configurations and further options for application deployment. In addition, OpsWorks can be extended with Chef by using individual Chef recipes.

The new Virtual Private Cloud (VPC) support now enables these OpsWorks functions within an isolated private network. For example, applications can be transferred from a company's own IT infrastructure into a VPC in the Amazon cloud by establishing a secure connection between the company's own data center and the Amazon cloud. Amazon EC2 instances within a VPC then behave as if they were running in the existing corporate network.
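
For illustration, creating such an isolated network programmatically looks roughly like the following boto3 sketch; the region and CIDR blocks are arbitrary examples, and the secure link back to the company's own data center (VPN gateway or Direct Connect) is configured in a separate step.

# Sketch: create an isolated VPC and a subnet in which OpsWorks-managed
# instances could run. CIDR blocks and region are arbitrary examples.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# The secure connection to the company's own data center (VPN gateway or
# Direct Connect) would be attached to this VPC in a separate step.
print(vpc_id, subnet["Subnet"]["SubnetId"])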

OpsWorks and VPC are just the beginning

If the results of the Rackspace 2013 Hybrid Cloud survey are to be believed, 60 percent of IT decision-makers have the hybrid cloud as their main goal in mind. Of these, 60 percent have moved, or will move, their applications and workloads out of the public cloud; 41 percent have left the public cloud partially, and 19 percent even want to leave the public cloud completely. The reasons for using a hybrid cloud rather than a public cloud are higher security (52 percent), more control (42 percent) and better performance and higher reliability (37 percent). The top benefits that hybrid cloud users report include more control (59 percent), higher security (54 percent), higher reliability (48 percent), cost savings (46 percent) and better performance (44 percent).

AWS recognized this trend early, back in March 2012, and took the first strategic steps. The integration of OpsWorks with VPC is basically just a sideshow. AWS holds the real trump card with the cooperation with Eucalyptus that was signed last year. The aim is to improve compatibility with the AWS APIs by having AWS supply Eucalyptus with further information. Furthermore, developers from both companies focus on creating solutions that help enterprise customers migrate existing data between their data centers and the AWS cloud. Customers will also get the opportunity to use the same management tools and their existing knowledge on both platforms. With version 3.3, Eucalyptus was already able to present the first results of this cooperation.

It will be exciting to see what the future of Eucalyptus looks like. At some point the open source company will be bought or will sell itself. The question is who makes the first move. My advice is still that AWS should seek an acquisition of Eucalyptus. From a strategic and technological point of view there is actually no way around it.

Given this, Rackspace and all the others who jumped on the hybrid cloud bandwagon should not get their hopes up too high. Amazon has put things on the right track and is prepared for this scenario as well.


Amazon EBS: When does Amazon AWS eliminate its single point of failure?

Recently, the Amazon Web Services once again stood still in the US-EAST-1 region in North Virginia. First on 19 August 2013, which apparently also dragged Amazon.com down, and most recently on 25 August 2013. Storms are said to have been the cause. Of course, the usual suspects were hit again, especially Instagram and Reddit. It seems that Instagram relies so heavily on its cloud architecture that it regularly blows up in its face when lightning strikes in North Virginia. However, Amazon AWS should also start to change its infrastructure, because it cannot go on like this.

US-EAST-1: Old, cheap and fragile

The Amazon region US-EAST-1 is the oldest and most widely used region in Amazon's cloud computing infrastructure. On the one hand, this is related to the costs, which are considerably lower in comparison to other regions, although the Oregon region has since been adjusted in price. On the other hand, because of its age, many long-standing customers, with system architectures that are probably not optimized for the cloud, are located here.

All eggs in one nest

The mistake most users make is that they rely on only one region, in this case US-EAST-1. Should it fail, their own service is of course no longer accessible either. This applies to all of Amazon's regions worldwide. To overcome this situation, a multi-region approach should be chosen, in which the application is distributed across several regions in a scalable and highly available manner.
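
What a multi-region setup means in practice can be sketched very simply: the same application runs in at least two regions and traffic is steered to a healthy one, for example via DNS failover or, as in the simplified illustration below, by the caller itself. The endpoint URLs are placeholders for one application deployed twice.

# Simplified illustration of a multi-region failover: try the primary
# region first, fall back to a replica in another region if it is down.
# Endpoint URLs are placeholders for the same application deployed twice.
import urllib.request

REGION_ENDPOINTS = [
    "https://app.us-east-1.example.com",   # primary region
    "https://app.eu-west-1.example.com",   # standby region
]

def fetch(path):
    last_error = None
    for endpoint in REGION_ENDPOINTS:
        try:
            with urllib.request.urlopen(endpoint + path, timeout=3) as resp:
                return resp.read()
        except OSError as err:             # region unreachable, try the next one
            last_error = err
    raise RuntimeError("all regions unavailable") from last_error

# fetch("/status")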

Amazon EBS: Single Point of Failure

I already suggested last year that the Amazon Web Services have a single point of failure. Below are the status messages of 25 August 2013, which underline my thesis. I first became aware of this through a blog post by the Amazon AWS customer awe.sm, which describes its experiences in the Amazon cloud. The bold points are crucial.

EC2 (N. Virginia)
[RESOLVED] Degraded performance for some EBS Volumes
1:22 PM PDT We are investigating degraded performance for some volumes in a single AZ in the US-EAST-1 Region
1:29 PM PDT We are investigating degraded performance for some EBS volumes and elevated EBS-related API and EBS-backed instance launch errors in a single AZ in the US-EAST-1 Region.
2:21 PM PDT We have identified and fixed the root cause of the performance issue. EBS backed instance launches are now operating normally. Most previously impacted volumes are now operating normally and we will continue to work on instances and volumes that are still experiencing degraded performance.
3:23 PM PDT From approximately 12:51 PM PDT to 1:42 PM PDT network packet loss caused elevated EBS-related API error rates in a single AZ, a small number of EBS volumes in that AZ to experience degraded performance, and a small number of EC2 instances to become unreachable due to packet loss in a single AZ in the US-EAST-1 Region. The root cause was a "grey" partial failure with a networking device that caused a portion of the AZ to experience packet loss. The network issue was resolved and most volumes, instances, and API calls returned to normal. The networking device was removed from service and we are performing a forensic investigation to understand how it failed. We are continuing to work on a small number of instances and volumes that require additional maintenance before they return to normal performance.
5:58 PM PDT Normal performance has been restored for the last stragglers of EC2 instances and EBS volumes that required additional maintenance.

ELB (N. Virginia)
[RESOLVED] Connectivity Issues
1:40 PM PDT We are investigating connectivity issues for load balancers in a single availability zone in the US-EAST-1 Region.
2:45 PM PDT We have identified and fixed the root cause of the connectivity issue affecting load balancers in a single availability zone. The connectivity impact has been mitigated for load balancers with back-end instances in multiple availability zones. We continue to work on load balancers that are still seeing connectivity issues.
6:08 PM PDT At 12:51 PM PDT, a small number of load balancers in a single availability zone in the US-EAST-1 Region experienced connectivity issues. The root cause of the issue was resolved and is described in the EC2 post. All affected load balancers have now been recovered and the service is operating normally.

RDS (N. Virginia)
[RESOLVED] RDS connectivity issues in a single availability zone
1:39 PM PDT We are currently investigating connectivity issues to a small number of RDS database instances in a single availability zone in the US-EAST-1 Region
2:07 PM PDT We continue to work to restore connectivity and reduce latencies to a small number of RDS database instances that are impacted in a single availability zone in the US-EAST-1 Region
2:43 PM PDT We have identified and resolved the root cause of the connectivity and latency issues in the single availability zone in the US-EAST-1 region. We are continuing to recover the small number of instances still impacted.
3:31 PM PDT The majority of RDS instances in the single availability zone in the US-EAST-1 Region that were impacted by the prior connectivity and latency issues have recovered. We are continuing to recover a small number of remaining instances experiencing connectivity issues.
6:01 PM PDT At 12:51 PM PDT, a small number of RDS instances in a single availability zone within the US-EAST-1 Region experienced connectivity and latency issues. The root cause of the issue was resolved and is described in the EC2 post. By 2:19 PM PDT, most RDS instances had recovered. All instances are now recovered and the service is operating normally.

Amazon EBS is the basis of many other services

The bold points are of crucial importance because all of these services depend on one and the same service: Amazon EBS. awe.sm came to the same conclusion in its analysis. As awe.sm determined, EBS is nine times out of ten the main culprit in major outages at Amazon, as in the outage above.

The services that depend on Amazon EBS include the Elastic Load Balancer (ELB), the Relational Database Service (RDS) and Elastic Beanstalk.

Q: What is the hardware configuration for Amazon RDS Standard storage?
Amazon RDS uses EBS volumes for database and log storage. Depending on the size of storage requested, Amazon RDS automatically stripes across multiple EBS volumes to enhance IOPS performance.

Source: http://aws.amazon.com/rds/faqs/

EBS Backed
EC2: If you select an EBS backed AMI
ELB: You must select an EBS backed AMI for EC2 host
RDS
Elastic Beanstalk
Elastic MapReduce

Source: Which AWS features are EBS backed?

Load balancing across availability zones is excellent advice in principle, but still succumbs to the problem above in the instance of EBS unavailability: ELB instances are also backed by Amazon’s EBS infrastructure.

Source: Comment – How to work around Amazon EC2 outages

As you can see, more than a few services depend on Amazon EBS. Conversely, this means that if EBS fails, these services are also no longer available. Particularly tragic is the case of the Amazon Elastic Load Balancer (ELB), which is responsible for directing traffic in case of failure or heavy load. Thus, if Amazon EBS fails and the traffic is supposed to be redirected to another region, this does not work, because the load balancer also depends on EBS.
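
To get a feeling for one's own exposure, it is at least possible to check which running instances are EBS-backed. A small boto3 sketch, assuming default credentials and using us-east-1 purely as an example region:

# Sketch: list EC2 instances and flag those whose root device is EBS,
# i.e. those that inherit EBS as a dependency. Region is an example.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances()["Reservations"]
for r in reservations:
    for inst in r["Instances"]:
        root = inst.get("RootDeviceType")          # "ebs" or "instance-store"
        print(inst["InstanceId"], root, "<- depends on EBS" if root == "ebs" else "")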

I may be wrong. However, if you look at the past failures, the evidence indicates that Amazon EBS is the main source of error in the Amazon cloud.

It may therefore be permissible to ask whether a provider can be regarded as a leader when the same component of its infrastructure, which must in principle be considered a single point of failure, constantly blows up in its face, and whether an infrastructure-as-a-service (IaaS) provider that has experienced the most outages of all providers in the market can, from this point of view, still be described as a leader. Even if I frequently preach that users of a public IaaS must take responsibility for scalability and high availability themselves, the provider has the duty to ensure that the infrastructure is reliable.


Cloud storage provider Box could become a threat to Dropbox and Microsoft SkyDrive

To become more attractive to private users and small businesses, the cloud storage provider Box has expanded its pricing model. Effective immediately, in addition to the existing plans for private, business and enterprise customers, a Starter plan can be selected as well, which is interesting for small businesses and freelancers as well as private customers.

Private customers get more free storage, small businesses a new plan

The offer for private users has been increased from the former 5GB of free storage to 10GB. In addition, the portfolio was extended with a new Starter plan, which is targeted at smaller companies. It offers 100GB of storage for 1 to a maximum of 10 users per company account, for $5 per user per month.

Box, which so far has primarily addressed large companies, thus hopes that smaller business customers and consumers will increasingly store their data in the cloud rather than save it on error-prone local media. According to CEO Aaron Levie, Box is particularly driven by the topics of information and collaboration. Whether it is a global corporation, a small business or a freelancer, in the end what matters is being able to share content and access it securely and reliably from anywhere, says Levie.

The new Starter plan is just a hook

To be honest, the new Starter plan is very interesting because it meets the needs of a specific target group. However, that group is not small companies, but rather private users and freelancers. The features offered around the storage are definitely at enterprise level. In addition to various security features (though no end-to-end encryption) at different levels, integration options via apps from an ecosystem of third-party developers are available. However, 100GB is far too little for small businesses, especially since this account is designed for 1 to 10 users: 10GB per user becomes very scarce very quickly. In addition, many other interesting and important features for businesses are only offered with the next plan, "Business", for $15 per user per month, which requires at least three users. This includes 1,000GB of storage and further security functions at folder and file level per user, integration with Active Directory, Google Apps and Salesforce, advanced user management, etc. So, at the end of the day, the Starter plan just serves as a hook to drum up business customers.

On the other hand, this plan is a very interesting deal for private users and freelancers who need more features at a cheaper price and a performance similar to Dropbox. For although the free plan was extended to 10GB, the free 50GB limit has been dropped. Whoever needs more than 10GB now has to buy 100GB for $10 per month. It therefore makes much more sense for private users to opt for a Starter plan and pay only $5 per month or $60 per year.

The Starter plan may well ensure that Dropbox and Microsoft SkyDrive lose market share once this change gets around. SkyDrive in particular should dress warmly. Microsoft's cloud storage is well integrated with the Windows operating systems and continues to be the cheapest on the market. However, SkyDrive is very slow and the user experience is below average. Just to highlight one tiny but crucial detail that Box simply does better: transparency about what is happening in the background. By way of comparison, Box has a small app for Windows in which the status is displayed. There you can see the progress in percent, the approximate time until the upload is completed, the file that is currently being processed, how many files still need to be processed and how many files are processed in total. Microsoft SkyDrive shows none of this. The user is completely left in the dark.

Dropbox is known as the performance king, and its ease of use is good as well. Nevertheless, the Box Starter plan, with its extended functionality at a cheaper price and similar performance, certainly has the potential to compete with Dropbox.

Note: Due to the current security situation, it should be pointed out that Box is a U.S.-based provider and the data is stored in the United States. The data is stored encrypted on the server side, but Box does not offer end-to-end encryption (only SSL during transmission). The keys for data encrypted on Box's infrastructure are owned by Box and not by the user. For this reason, Box is able to decrypt the data on its own and to allow third parties to access it at any time.
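
Anyone who still wants to use such a service without handing over the keys can encrypt on the client side before uploading, so that the provider only ever sees ciphertext. A minimal sketch of this general approach using the Python cryptography package (this is not a Box feature, just a client-side precaution):

# Sketch: encrypt a file locally before it is uploaded to any cloud storage.
# The provider then only stores ciphertext; the key never leaves the client.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # must be kept safe by the user, not the provider
cipher = Fernet(key)

with open("contract.pdf", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("contract.pdf.enc", "wb") as f:
    f.write(ciphertext)            # upload this file instead of the plaintext

# Later: cipher.decrypt(ciphertext) restores the original document.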


Google Apps vs. Microsoft Office 365: What do the assumed market shares really tell us?

The UserActivion blog compares the adoption of the office and collaboration solutions Google Apps and Microsoft Office 365 at regular intervals. I became aware of the market shares per industry by chance – thanks, Michael Herkens. The result is from July and shows the market shares of both cloud solutions for June. The post also shows a comparison to the previous month, May. What stands out is that Google Apps is in the lead by far in every industry. Given the size and general reach of Microsoft, that is actually unusual. It therefore makes sense to take a look behind the scenes.

Google Apps with a clear lead over Office 365

Trusting the numbers from UserActivion, Google Apps has a significant lead in the sectors of trade (84.3 percent), technology (88.2 percent), education (77.4 percent), health care (80.3 percent) and government (72.3 percent).

In a global comparison, it looks similar. However, Office 365 has caught up since May in North America, Europe and Scandinavia. Nevertheless, according to UserActivion, the current ratio is 86:14 in favor of Google Apps. In countries such as Norway and Denmark there is a balanced ratio of 50:50.


[Chart: Google Apps vs. Office 365 market share by industry. Source: useractivation.com]

[Chart: Google Apps vs. Office 365 market share by region. Source: useractivation.com]

Are the Google Apps market shares meaningful?

This question is relatively easy to answer. They say something about the dissemination of Google Apps. However, they say nothing about the revenue Google generates with these market shares. Measured by that, the expected ratio between Google Apps and Office 365 looks a little different. Why?

Well, Google Apps has gained such a large market share because the Google Apps Standard version (which no longer exists) could be used free of charge for a given number of users, and Google Apps for Education could be used free of charge for a very, very long time. The Education version is still free. This naturally means that many users who have their own domain have chosen a free Google Apps Standard account, which may continue to be used even after that version was discontinued. The only limitation is the maximum number of users per domain.

Thus it happens, for example, that I alone have registered nine (9) Google Apps Standard accounts over the years. Most of them I still have. I currently use two of them actively (Standard, Business) and pay for one (Business).

The Google Apps market shares must therefore be broken down as follows:

  • Registered Google Apps users.
  • Inactive Google Apps users.
  • Active Google Apps users who do not pay.
  • Active Google Apps users who pay.

If this ratio is weighted and applied to the market share in terms of revenue, Microsoft would fare better. Why?

At the end of the day, it is about cash, and from the beginning Microsoft has had nothing to give away. Admittedly, Office 365 can be tested for free for a short time; after that, the decision for or against a paid plan is made. This means it can be assumed that every Office 365 customer who is no longer in the testing phase is an active, paying user. Surely Google gains insights into the behavior of the non-paying active users and can place a bit of advertising, but the question remains how much that is really worth.
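
A purely illustrative calculation shows the effect of such a weighting; the shares and paying fractions below are invented for the example and do not come from UserActivion.

# Purely illustrative: invented numbers, only to show the weighting idea.
google_raw_share = 0.86        # share of deployments (UserActivion-style metric)
office_raw_share = 0.14

google_paying_fraction = 0.25  # assumption: only a quarter of active Google Apps users pay
office_paying_fraction = 1.0   # assumption: every active Office 365 user is on a paid plan

google_paid = google_raw_share * google_paying_fraction   # 0.215
office_paid = office_raw_share * office_paying_fraction   # 0.14

total = google_paid + office_paid
print("revenue-relevant split: %.0f : %.0f" % (100 * google_paid / total,
                                               100 * office_paid / total))
# -> roughly 61 : 39 instead of 86 : 14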

This does not mean that Google Apps is not widely used. But it shows that Google's strategy works out in at least one direction: giving away accounts for free in order to gain market share pays off (naturally). It also shows that market share does not automatically mean profitability. Most people like to take what they can get for free – the Internet principle. The fact that Google has to slowly start monetizing its customer base can be seen in the discontinuation of the free Standard version. It appears that the critical mass has been reached.

Compared to Google Apps, Microsoft was late to market with Office 365 and has a lot of catching up to do. On top of that, Microsoft (apparently) has nothing to give away. Assuming that the UserActivion numbers are valid, another reason may be that the website and the offering of Office 365 – compared to Google Apps – are far too opaque and complicated. (Hint: just visit the websites.)

Google copied the Microsoft principle

In conclusion, based on the UserActivion numbers, it can be said that Google is well on its way to applying the Microsoft principle to itself. Microsoft's way into the enterprise went through the home users of its Windows, Office and Outlook products. The strategy worked: whoever works in the evening with the familiar programs from Redmond has a much easier time with them at work during the day. Recommendations for a Microsoft product by employees were almost inevitable. The same applied to Windows Server: if a Windows operating system is so easy to use and configure, then a server cannot ultimately be much more complicated. With that, the decision makers could be brought on board. A similar principle can be seen at Google. Google Apps is nothing more than the best of Google's services for end users – Gmail, Google Docs, etc. – packed into a suite. Meanwhile, the dissemination of these services is also relatively high, so it can be assumed that most IT decision makers are familiar with them, too. It will therefore be interesting to see how the real(!) market shares of Google Apps and Office 365 develop.


Google expands the Compute Engine with load balancer functionality. But where is the innovation machine?

In a blog post, Google has announced further enhancements to its cloud platform. In addition to the expansion and improvement of the Google Cloud Datastore and the announcement of Google App Engine 1.8.3, the infrastructure-as-a-service (IaaS) offering Google Compute Engine has received a load balancing service.

News from the Google Cloud Datastore

To further increase developer productivity, Google has expanded its Cloud Datastore with several functionalities. These include the Google Query Language (GQL). GQL is a SQL-like language and is aimed specifically at data-intensive applications that query entities and keys from the Cloud Datastore. Furthermore, more statistics about the underlying data can be obtained by querying metadata. Google sees this as an advantage, for example, when developing an internal administrative console, doing custom analyses or debugging an application. In addition, improvements were made to the local SDK. These include enhancements to the command line tool and support for developers working on Windows. After the first version of the Cloud Datastore supported Java, Python and Node.js, Ruby has now been added.
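
To illustrate what a GQL query looks like, here is a small sketch using the GqlQuery class of the App Engine Python SDK, where GQL originated; the Task kind and its properties are hypothetical.

# Sketch: a SQL-like GQL query against the datastore, shown here with the
# App Engine Python SDK. The Task kind and its properties are hypothetical.
from google.appengine.ext import db

class Task(db.Model):
    owner = db.StringProperty()
    done = db.BooleanProperty()

open_tasks = db.GqlQuery(
    "SELECT * FROM Task WHERE owner = :1 AND done = FALSE", "alice")

for task in open_tasks.run(limit=20):
    print(task.key())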

Google App Engine 1.8.3

The changes in Google App Engine version 1.8.3 are mainly directed at the PHP runtime environment. Furthermore, the integration with Google Cloud Storage, which according to Google is gaining popularity, was strengthened. From now on, functions such as opendir() or writedir() can be used to access buckets in Cloud Storage directly. Further stat()-like information can be queried via is_readable() and is_file(). Metadata can now also be written to Cloud Storage files. Moreover, performance improvements were made through memcache-backed optimistic read caching, to improve the performance of applications that need to read frequently from the same Cloud Storage file.

Load Balancing with the Compute Engine

The Google Compute Engine has been expanded with layer 3 load balancing functionality, which makes it possible to develop scalable and fault-tolerant web applications on the Google IaaS. With the load balancing function, incoming TCP/UDP traffic can be distributed across different Compute Engine machines (virtual machines) within the same region. Moreover, it ensures that no defective virtual machines are used to respond to HTTP requests and that load peaks are absorbed. The load balancer is configured either via the command line or via the REST API. The load balancing function can be used free of charge until the end of 2013; after that, the service will be charged for.
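
The idea behind the layer 3 service can be illustrated independently of the actual Compute Engine configuration (which is done via the command line tool or the REST API): incoming connections for one forwarding rule are spread across a target pool of healthy instances in a region, roughly as in the simplified sketch below. The instance addresses and health states are invented examples, and the code is conceptual, not the Compute Engine API.

# Simplified illustration of the target-pool idea: spread incoming TCP
# connections round-robin over the healthy instances of one region.
# This is conceptual code, not the Compute Engine API.
import itertools

target_pool = ["10.240.0.2", "10.240.0.3", "10.240.0.4"]   # instance IPs (examples)
healthy = {"10.240.0.2": True, "10.240.0.3": False, "10.240.0.4": True}

_cycle = itertools.cycle(target_pool)

def pick_backend():
    for _ in range(len(target_pool)):
        candidate = next(_cycle)
        if healthy.get(candidate):          # skip instances failing health checks
            return candidate
    raise RuntimeError("no healthy backend in the target pool")

# Each new connection is forwarded to pick_backend()
print(pick_backend(), pick_backend(), pick_backend())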

Google’s progress is too slow

Two important innovations stand out in the news about the Google Cloud Platform: on the one hand, the new load balancing function of the Compute Engine, on the other hand, the strengthened integration of the App Engine with Google Cloud Storage.

The load balancing extension admittedly comes much too late. After all, this is one of the essential functions of a cloud infrastructure offering for building scalable and highly available systems, and one that all other established providers on the market already have in their portfolios.

The extended integration of Cloud Storage with the Google App Engine is important in order to give developers more S3-like capabilities out of the PaaS and to create more options for accessing and processing data in a central and scalable storage location.

The bottom line is that Google continues to expand its IaaS portfolio steadily. What stands out, however, is the very slow pace. Nothing can be seen of the innovation machine Google that we know from many other areas of the company. Google gives the impression that it offers IaaS mainly so as not to lose too many developers to the Amazon Web Services and other providers, but that the Compute Engine is not a high priority. As a result, the offering has received very little attention. The very late addition of load balancing is a good example. Otherwise, one would expect more ambition and speed in the expansion of the offering from a company like Google. After all, except for Amazon, no other company has more experience in the field of highly scalable infrastructures. This should be turned into a public offering relatively quickly, especially as Google advertises that customers can use the same infrastructure on which all Google services run. Apart from the App Engine, which belongs to the PaaS market, the Google Compute Engine will have to take a back seat. This will remain the case for a long time if the pace of expansion is not accelerated and other essential services are not rolled out.