Categories
Analysis

Google Apps vs. Microsoft Office 365: What do the supposed market shares really tell us?

The blog of UserActivion regularly compares the adoption of office and collaboration solutions from Google Apps and Microsoft Office 365. I became aware of the market shares per industry by chance, thanks to Michael Herkens. The result is from July and shows the market shares of both cloud solutions for June. The post also includes a comparison to the previous month, May. What stands out is that Google Apps is by far in the lead in every industry. Given Microsoft's size and general market presence, that is actually unusual. It therefore makes sense to take a look behind the scenes.

Google Apps with a clear lead over Office 365

If the numbers from UserActivion are to be trusted, Google Apps has a significant lead in the sectors of trade (84.3 percent), technology (88.2 percent), education (77.4 percent), health care (80.3 percent) and government (72.3 percent).

In a global comparison it looks similar. However, compared to May, Office 365 has caught up in North America, Europe and Scandinavia. Nevertheless, according to UserActivion, the current ratio stands at 86:14 in favor of Google Apps. In countries such as Norway and Denmark the ratio is a balanced 50:50.


Source: useractivation.com


Source: useractivation.com

Do the Google Apps market shares have any significance?

This question is relatively easy to answer. The numbers do say something about the dissemination of Google Apps. However, they say nothing about the revenue Google generates with these market shares. Measured by revenue, the expected ratio between Google Apps and Office 365 looks a little different. Why?

Well, Google Apps has gained such a large market share because the Google Apps Standard version (which no longer exists) could be used free of charge for a given number of users, and Google Apps for Education could be used free of charge for a very, very long time. The Education version is still free. Naturally, this has led many users who have their own domain to choose a free Google Apps Standard account, which may continue to be used even after the discontinuation of that version. The only limitation is the maximum number of users per domain.

This is how it comes about that I alone, as a single person, have registered nine (9) Google Apps Standard accounts over the last few years. Most of them I still have. I currently use two of them actively (Standard, Business) and pay for one (Business).

The Google Apps market shares must therefore be broken down as follows:

  • Registered Google Apps users.
  • Inactive Google Apps users.
  • Active Google Apps users who do not pay.
  • Active Google Apps users who pay.

If these groups are weighted and the market share is recalculated in terms of revenue, Microsoft would fare considerably better. Why?

At the end of the day it is about cash, and Microsoft has never had anything to give away. Admittedly, Office 365 can be tested free of charge for a short time. After that, the customer either decides on a paid plan or not. This means it can be assumed that every Office 365 customer who is no longer in the trial phase is also an active, paying user. Google certainly gains insights into the behavior of its non-paying active users and can place a bit of advertising. But the question remains how much that is really worth.

This does not mean that Google Apps is not widespread. But it shows that Google's strategy works out in at least one direction: giving away accounts for free to gain market share pays off (naturally). It also shows that market share does not automatically mean profitability. Most people like to take what they can get for free, the Internet principle. The fact that Google slowly has to start monetizing its customer base can be seen from the discontinuation of the free Standard version. It appears that the critical mass has been reached.

Compared to Google Apps, Microsoft was late to market with Office 365 and now has a lot of catching up to do. On top of that, Microsoft (apparently) has nothing to give away. Assuming that the numbers from UserActivion are valid, another reason may be that the website and the offering of Office 365, compared to Google Apps, are far too opaque and complicated. (Hint: just visit both websites.)

Google copied the Microsoft principle

In conclusion, based on the UserActivion numbers, it can be said that Google is well on its way to applying the Microsoft principle to itself. Microsoft's way into the enterprise led through the home users of its Windows, Office and Outlook products. The strategy worked. Whoever works in the evening with the familiar programs from Redmond has a much easier time with them at work during the day. Recommendations for a Microsoft product by employees were thus inevitable. The same applied to Windows Server: if a Windows operating system is so easy to use and configure, then a server cannot ultimately be much more complicated. With that, the decision makers could be brought on board. A similar principle can be seen at Google. Google Apps is nothing more than a best-of of the Google services for end users, Gmail, Google Docs, etc., packed into a suite. Meanwhile, the dissemination of these services is also relatively high, so it can be assumed that most IT decision makers are familiar with them, too. It will therefore be interesting to see how the real(!) market share between Google Apps and Office 365 develops.

Categories
Comment

Dangerous: VMware underestimates OpenStack

Slowly we should seriously wonder who dictates to VMware's executives what they have to say. In early March of this year, COO Carl Eschenbach said that he finds it really hard to believe that VMware and its partners cannot collectively beat a company that sells books (note: Amazon). Well, the current reality looks different. Then, in late March, a VMware employee from Germany came to the conclusion that VMware is the technology enabler of the cloud and that he currently sees no other. We all know that this is far from the truth as well. Lastly, CEO Pat Gelsinger puts another one on top and claims that OpenStack is not for the enterprise. Time for some enlightenment.

Grit your teeth and get to it!

In an interview with Network World, VMware CEO Pat Gelsinger spoke about OpenStack and said he does not expect the open source project to gain significant reach in enterprise environments. Instead, he regards it as a framework that service providers can use to build public clouds. VMware, in contrast, is very widely deployed with extremely large VMware environments; switching costs and other factors therefore work against a change. However, for cloud and service providers, in areas where VMware has not done business successfully in the past, Gelsinger sees a lot of potential for OpenStack.

Furthermore, Gelsinger considers OpenStack a strategic initiative for VMware, one the company is happy to support. VMware will ensure that its products and services work within cloud environments based on the open source solution. In this context, OpenStack also opens up new opportunities for VMware to enter the service provider market, an area VMware has neglected in the past. VMware and Gelsinger therefore see OpenStack as a way to position themselves more broadly.

Pride comes before a fall

Pat Gelsinger is right when he says that OpenStack is designed for service providers. VMware also remains one of the leading virtualization vendors in the enterprise. However, this kind of stereotypical thinking is dangerous for VMware, because the tide can turn very quickly. Gelsinger likes to cite the high switching costs as a reason why companies should continue to rely on VMware. But there is one thing to consider: VMware's strength lies only(!) in virtualization, in the hypervisor. When it comes to cloud computing, where OpenStack has its center of gravity, things do not look as rosy. To be honest, VMware missed the trend of offering, early on, solutions that open up the virtualized infrastructure for the cloud and serve it with higher-value services and self-service capabilities, allowing the IT department to become a service broker. Solutions are available by now, but the competition, and not only from the open source camp, grows steadily.

IT buyers and decision makers see this, too. I have spoken with more than one IT manager who plans to replace his VMware infrastructure with something open, in most cases KVM was named, and more cost-effective. And that is just the virtualization layer that can break away. Furthermore, there are already several use cases of large companies (see on-premise private cloud) that use OpenStack to build a private cloud infrastructure. It also must not be forgotten that more and more companies are moving toward hosted private cloud or public cloud providers and that the own data center will become less important over time. In this context, the hybrid cloud plays an increasingly important role in making the transfer and migration of data and systems more convenient. Here OpenStack, thanks to its wide distribution in hosted private cloud and public cloud environments, has a great advantage over VMware.

With such statements VMware is of course trying to push OpenStack out of its own territory, the on-premise enterprise customers, in order to place its own solutions there. Nevertheless, VMware should not make the mistake of underestimating OpenStack.

Categories
Analysis

Google expands the Compute Engine with load balancer functionality. But where is the innovation machine?

In a blog post, Google has announced further enhancements to its cloud platform. In addition to the expansion and improvement of the Google Cloud Datastore and the announcement of Google App Engine 1.8.3, the infrastructure-as-a-service (IaaS) offering Google Compute Engine has received a load balancing service.

News from the Google Cloud Datastore

To further increase developer productivity, Google has expanded its Cloud Datastore with several functionalities. These include the Google Query Language (GQL). GQL is a SQL-like language aimed specifically at data-intensive applications for querying entities and keys from the Cloud Datastore. Furthermore, more statistics can be obtained from the underlying data by querying metadata. Google sees this as an advantage, for example, for building an own internal administrative console, running own analyses or debugging an application. In addition, improvements were made to the local SDK, including enhancements to the command line tool and support for developers using Windows. After the first version of the Cloud Datastore supported Java, Python and Node.js, Ruby has now been added.
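
To give an idea of what such a query looks like: the following is a hypothetical GQL statement in the SQL-like style described above; the kind „Task“ and its properties are invented for illustration.

```
SELECT * FROM Task WHERE done = FALSE ORDER BY created DESC LIMIT 10
```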

Google App Engine 1.8.3

The update to Google App Engine 1.8.3 is mainly directed at the PHP runtime environment. Furthermore, the integration with Google Cloud Storage, which according to Google is gaining popularity, was strengthened. From now on, functions such as opendir() or writedir() can be used to access buckets in Cloud Storage directly. Further stat()-ing information can be queried via is_readable() and is_file(). Metadata can now also be written to Cloud Storage files. Moreover, performance improvements were made through memcache-backed optimistic read caching, to improve the performance of applications that frequently need to read from the same Cloud Storage file.

Load Balancing with the Compute Engine

The Google Compute Engine has been expanded with a Layer 3 load balancing functionality that makes it possible to build scalable and fault-tolerant web applications on Google's IaaS. With the load balancing function, incoming TCP/UDP traffic can be distributed across different Compute Engine machines (virtual machines) within the same region. It also ensures that no defective virtual machines are used to respond to HTTP requests and that load peaks are absorbed. The load balancer is configured either via the command line or via the REST API. The load balancing function can be used free of charge until the end of 2013; after that the service will be charged for.
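
As a rough illustration of the REST-based configuration mentioned above, here is a minimal sketch in Python using the google-api-python-client: a target pool groups the virtual machines, and a forwarding rule maps incoming traffic to that pool. Project, region, zone and resource names are assumptions for illustration, not an official Google example.

```python
# Minimal sketch: configure Compute Engine network load balancing via the REST API.
# Assumes application default credentials and hypothetical project/instance names.
from googleapiclient import discovery  # pip install google-api-python-client

compute = discovery.build('compute', 'v1')
project, region, zone = 'my-project', 'us-central1', 'us-central1-a'

# 1) Create a target pool containing the VM(s) that should receive traffic.
compute.targetPools().insert(
    project=project, region=region,
    body={'name': 'www-pool',
          'instances': [f'projects/{project}/zones/{zone}/instances/www-1']}
).execute()

# 2) Create a forwarding rule that sends incoming TCP traffic on port 80 to the pool.
compute.forwardingRules().insert(
    project=project, region=region,
    body={'name': 'www-rule',
          'IPProtocol': 'TCP',
          'portRange': '80',
          'target': f'projects/{project}/regions/{region}/targetPools/www-pool'}
).execute()
```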

Google’s progress is too slow

Two important innovations stand out in this news about the Google Cloud Platform: on the one hand the new load balancing function of the Compute Engine, on the other hand the strengthened integration of the App Engine with Google Cloud Storage.

The load balancing extension admittedly comes much too late. After all, this is one of the essential functions of a cloud infrastructure offering, needed to build scalable and highly available systems, and one that all other established providers on the market already have in their portfolios.

The extended integration of Cloud Storage with the Google App Engine is important in order to give developers more S3-like capabilities out of the PaaS and to create more options for accessing and processing data in a central, scalable storage location.

The bottom line is that Google continues to expand its IaaS portfolio steadily. What stands out is the very slow pace. Of the innovation engine Google that we know from many other areas of the company, nothing can be seen here. Google gives the impression of offering IaaS only so as not to lose too many developers to the Amazon Web Services and other providers, while the Compute Engine itself is not a high priority. As a result, the offering receives very little attention. The very late addition of load balancing is a good example. Otherwise one would expect more ambition and speed in expanding the offering from a company like Google. After all, except for Amazon, no other company has more experience in the field of highly scalable infrastructures. This should be turned into a public offering relatively quickly, especially since Google advertises that customers can use the same infrastructure on which all Google services run. Apart from the App Engine, which belongs to the PaaS market, the Google Compute Engine therefore continues to lag behind, and this will remain the case for a long time if the pace of expansion is not accelerated and other essential services are not rolled out.

Categories
Analysis

Cloud computing requires professional, independent and trustworthy certifications

About a year ago I wrote on the sense and nonsense of cloud seals, certificates, associations and initiatives. Even then I came to the conclusion that we need trustworthy certifications. However, the German market looked rather weak and was represented, alongside EuroCloud, by other advocacy groups such as the „Initiative Cloud Services Made in Germany“ or „Deutsche Wolke“. But now there is a promising independent newcomer.

Results from last year

„Cloud Services Made in Germany“ and „Deutsche Wolke“

What such initiatives generally have in common is that they try to draw as many providers of cloud computing services as possible into their own ranks with various promises. „Cloud Services Made in Germany“ in particular jumps on the supposed quality mark Made in Germany and promises „more legal certainty in the selection of cloud-based services …“.

And this is exactly how „Cloud Services Made in Germany“ and „Deutsche Wolke“ position themselves. With weak criteria, both are very attractive for vendors from Germany, which in turn can advertise with the „stickers“ on their websites. But the criteria say nothing at all about the real quality of a service. Is the service really a cloud service? What does the pricing model look like? Does the provider ensure cloud-conformant scalability and high availability? And many more questions that are essential for evaluating the quality of a cloud service!

Both initiatives certainly have a right to exist in their current form. However, they should not be used as a quality criterion for a good cloud service. Instead, they belong in the category „Patriotism: Hello World, look, we Germans can do cloud, too.“

EuroCloud and Cloud-EcoSystem

In addition to these two initiatives, there are the associations EuroCloud and Cloud-EcoSystem, both of which advertise with a seal or certificate. EuroCloud has its SaaS Star Audit. The SaaS Star Audit is aimed, as the name implies, exclusively at software-as-a-service providers. Depending on the budget, a provider can be awarded one to five stars by the EuroCloud federation, but it must also be a member of EuroCloud. The number of stars says something about the scope of the audit: while one star only checks „contract and compliance“ and a bit of „operating infrastructure“, five stars also examine processes and security intensively.

The Cloud-EcoSystem, by contrast, has its „Cloud Expert“ quality certificate for SaaS and cloud computing consultants and its „Trust in Cloud“ certificate for cloud computing providers. According to the Cloud-EcoSystem's definition, a „Cloud Expert“ should offer providers and users decision guidance. In addition to writing professional articles and creating checklists, an expert should also carry out quality checks. Furthermore, a customer should be able to trust that the advisor fulfills certain criteria for „cloud experts“: every „cloud expert“ should have a deep understanding and basic skills, have references available and provide self-created documents on request. Basically, according to the Cloud-EcoSystem, it is about shaping and presenting the Cloud-EcoSystem itself.

The „Trust in Cloud“ certificate is intended to serve as orientation for companies and users and to establish itself as a quality certificate for SaaS and cloud solutions. On the basis of the certificate, users should be able to compare cloud solutions objectively and reach a sound decision. The certification is based on a set of 30 questions, divided into 6 categories of 5 questions each. The questions must be answered by the examinee with yes or no and must also be substantiated. If the cloud provider answers a question with yes, it receives one „cloud“. The checklist covers the categories references, data security, quality of provision, decision confidence, contract terms, service orientation and cloud architecture.

Both EuroCloud and the Cloud-EcoSystem are going in the right direction by trying to evaluate providers on the basis of self-imposed criteria. However, two points should be questioned. First, these are associations, which means that as a provider one has to be a member. One may legitimately ask which member of an association is allowed to fail an audit (independence?). Furthermore, both create their own requirement catalogs, which are not comparable with each other. Just because a provider carries the „seals of approval“ of two different associations, which evaluate according to different criteria, does not at all mean that its cloud service actually delivers real quality (credibility?).

The pros get into the ring: TÜV Rheinland

Independent of all the organizations that have come together specifically for the cloud, TÜV Rheinland has launched a cloud certification. Most people probably know TÜV from the testing and acceptance of cranes, amusement rides and the general inspection of cars. But it also has more than 15 years of experience in IT consulting and certification with regard to compliance, risk management and information security.

The cloud certification process is extensive and has its price. A first look at the audit process and the requirements catalog shows that TÜV Rheinland has developed a very powerful tool for testing cloud services and infrastructures.

It starts with a „Cloud-Readiness Check“, in which security, interoperability, compliance and data privacy are first checked for cloud suitability and a plan of action is created. This is followed by a review of the „cloud design“, in which the concept and the solution are examined carefully; among other things, topics such as architecture, network security and access controls are assessed. Afterwards, the actual implementation of the cloud solution is considered and quality checks are carried out. The preparation for certification follows, and later the actual certification.

TÜV Rheinland's cloud requirements catalog comprises five main areas, which are in turn subdivided into a number of sub-elements. These include process organization, organizational structure, data security, compliance / data privacy and processes. All in all, a very thorough requirements catalog.

In a cited reference project, TÜV Rheinland required eight weeks for the certification of an international infrastructure-as-a-service provider.

Independent and trustworthy cloud certifications are mandatory

The quality and usefulness of certificates and seals stand and fall with the companies that are responsible for the audits and with the criteria they define. Weak requirement catalogs neither deliver an honest statement nor do they help the buyer to see the real differences in quality between cloud solutions. On the contrary, IT decision makers may, in case of doubt, rely on these supposedly tested services whose actual quality is another matter. In addition, cloud computing is not about installing a piece of software or a service. In the end it is only consumed, and the provider is responsible for all the other processes that the customer would otherwise have handled himself.

For this reason, independent, trustworthy and above all professional certifications are necessary to provide an honest statement about the quality and properties of a cloud service, its provider and all downstream processes such as security, infrastructure, availability, etc. As a provider, one should be honest with oneself and in the end opt for a certification that relies on a professional catalog of criteria, does not just scratch the surface but dives deep into the solution, and thus makes a credible statement about one's own offering.

Categories
Analysis

GigaOM Analyst Webinar – The Future of Cloud in Europe [Recording]

On July 9, Jo Maitland, Jon Collins, George Anadiotis and I talked about the opportunities and challenges of the cloud in Europe and in countries such as Germany or the UK, and gave an insight into the cloud computing market in Europe. The recording of the international GigaOM analyst webinar „The Future of Cloud in Europe“ is now online.

Background of the webinar

The European Commission unveiled its “pro cloud” strategy a year ago, hoping to reignite the stagnant economy through innovation. The Commissioner proclaimed boldly that the cloud must “happen not to Europe, but with Europe”. And rightly so. A year later, three GigaOM Research analysts from Europe, Jon Collins (Inter Orbis), George Anadiotis (Linked Data Orchestration) and Rene Buest (New Age Disruption), moderated by Jo Maitland (GigaOM Research), looked at who the emerging cloud players in the region are and what their edge over U.S. providers is. They dug into the issues facing cloud buyers in Europe and the untapped opportunities for providers. Can Europe build a vibrant cloud computing ecosystem? That is a tough question today, as U.S. cloud providers still dominate the industry.

Questions which were answered

  • What’s driving cloud opportunities and adoption in Europe?
  • What are the inhibitors to adoption of cloud in Europe?
  • Are there trends and opportunities within specific countries (UK, Germany, peripheral EU countries)?
  • Which European providers show promise and why?
  • What are the untapped opportunities for cloud in Europe?
  • Predictions for the future of cloud in Europe.

The recording of the analyst webinar

Categories
Comment

ProfitBricks opens price war with Amazon Web Services for infrastructure-as-a-service

ProfitBricks takes the gloves off. The Berlin-based infrastructure-as-a-service (IaaS) startup is taking a hard line against the Amazon Web Services and has reduced its prices in both the U.S. and Europe by 50 percent. Furthermore, the IaaS provider has presented a comparison which is meant to show that its own virtual servers deliver at least twice the performance of those of the Amazon Web Services and Rackspace. ProfitBricks is thus trying to set itself apart from the top U.S. providers on both price and performance.

The prices for infrastructure-as-a-service are still too high

Along with the announcement, CMO Andreas Gauger strikes a correspondingly aggressive tone. „We have the impression that the dominant cloud companies from the U.S. are abusing their market power to command high prices. They deliberately expect companies to accept opaque pricing models and regularly announce selective price cuts to create the impression of falling prices“, says Gauger (translated from German).

ProfitBricks therefore aims to attack the IaaS market from behind on price and to let its customers participate directly and noticeably in the cost savings that result from technical innovations.

Up to 45 percent cheaper than Amazon AWS

ProfitBricks positions itself very clearly against Amazon AWS and presents a price comparison. For example, an Amazon M1 Medium instance with 1 core, 3.75GB of RAM and 250GB of block storage costs $0.155 per hour or $111.40 per month. A comparable instance at ProfitBricks costs $0.0856 per hour or $61.65 per month, a saving of roughly 45 percent.
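
The arithmetic behind these figures can be checked quickly; the following small Python sketch does so, assuming roughly 720 billable hours per month (the post does not state how the monthly prices were derived).

```python
# Sanity check of the cited price comparison; 720 hours/month is an assumption.
HOURS_PER_MONTH = 24 * 30  # = 720

aws_hourly, profitbricks_hourly = 0.155, 0.0856
print(aws_hourly * HOURS_PER_MONTH)           # ~111.60 USD (post cites $111.40)
print(profitbricks_hourly * HOURS_PER_MONTH)  # ~61.63 USD  (post cites $61.65)

saving = 1 - 61.65 / 111.40                   # based on the cited monthly prices
print(f"{saving:.1%}")                        # 44.7%, i.e. roughly 45 percent
```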

Diversification just on the price is difficult

Differentiating as an IaaS provider on price alone is difficult. Remember: infrastructure is commodity! Vertical services, which give the customer real added value, are the future of the cloud.

Challenging the IaaS top dog in this way is brave, even foolhardy. However, one thing should not be forgotten: as hosting experts of the first hour, Andreas Gauger and Achim Weiss will have validated the numbers behind their infrastructure and are certainly not seeking short-lived glory with this move. It remains to be seen how Amazon AWS and the other IaaS providers will react to this blow. Because with this price reduction ProfitBricks shows that customers can actually get infrastructure resources much more cheaply than is currently the case.

As an IaaS user, there is something you should definitely not lose sight of in this price discussion. In addition to the prices for computing power and storage, which are held up again and again, there are other factors that determine the price and that often only come to mind at the end of the month. These include the cost of transferring data into and out of the cloud as well as charges for other services offered around the infrastructure that are billed per API call. In some respects there is a lack of transparency here. Furthermore, comparing the various IaaS providers is difficult, as many operate with different units, configurations and/or packages.

Categories
Services @en

Exclusive: openQRM 5.1 to be extended with hybrid cloud functionality, integrating Amazon AWS and Eucalyptus as a plugin

It will happen soon: this summer openQRM 5.1 will be released. Project manager and openQRM-Enterprise CEO Matt Rechenburg has already told me about some very interesting new features. In addition to a completely redesigned backend, the open source cloud infrastructure software from Cologne, Germany will be expanded with hybrid cloud functionality by integrating Amazon EC2, Amazon S3 and their clone Eucalyptus as a plugin.

New hybrid cloud features in openQRM 5.1

A short overview of the new hybrid cloud features in openQRM 5.1:

  • openQRM Cloud works transparently with Amazon EC2 and Eucalyptus.
  • Via self-service, end users within a private openQRM Cloud can order instance types or AMIs pre-selected by an administrator, which openQRM Cloud then uses to provision automatically to Amazon EC2 (or Amazon-compatible clouds).
  • User-friendly password login for the end-user of the cloud via WebSSHTerm directly in the openQRM Cloud portal.
  • Automatic applications deployment using Puppet.
  • Automatic cost accounting via the openQRM cloud billing system.
  • Automatic service monitoring via Nagios for Amazon EC2 instances.
  • openQRM high availability at the infrastructure level for Amazon EC2 (or compatible private clouds). This means: if an EC2 instance fails or an error occurs in an Amazon Availability Zone (AZ), an exact copy of this instance is restarted. In case of a failure of an entire AZ, the instance is started again in another AZ of the same Amazon region.
  • Integration of Amazon S3. Data can be stored directly on Amazon S3 via openQRM. When creating an EC2 instance, a script stored on S3 can be specified which, for example, executes further commands during the start of the instance (see the sketch after this list).
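
To illustrate that last point in general terms, here is a minimal sketch, independent of openQRM, of the underlying pattern using the Python boto library: a startup script is fetched from S3 and handed to a new EC2 instance as user data. Bucket, key, AMI and region names are hypothetical.

```python
# Illustrative sketch only, not openQRM code. Assumes AWS credentials are
# configured and that the bucket, key and AMI below actually exist.
import boto
import boto.ec2

# Fetch the startup script that was stored on S3.
s3 = boto.connect_s3()
script = s3.get_bucket('my-bucket').get_key('startup.sh').get_contents_as_string()

# Launch an EC2 instance and pass the script as user data so it runs at boot.
ec2 = boto.ec2.connect_to_region('eu-west-1')
reservation = ec2.run_instances(image_id='ami-12345678',
                                instance_type='m1.small',
                                user_data=script)
print(reservation.instances[0].id)
```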

Comment: openQRM recognizes the trend at the right time

With this extension openQRM-Enterprise, too, shows that the hybrid cloud is becoming a serious factor in the development of cloud infrastructures, and the new features arrive just at the right time. Not surprisingly, the Cologne-based company orients itself toward the current public cloud leader, the Amazon Web Services. In combination with Eucalyptus or other Amazon-compatible cloud infrastructures, openQRM can thus also be used to build massively scalable hybrid cloud infrastructures. For this purpose openQRM relies on its proven plugin concept and integrates Amazon EC2, S3 and Eucalyptus in exactly this way. In addition to its own resources from a private openQRM Cloud, Amazon and Eucalyptus are used as further resource providers to obtain more computing power quickly and easily.

In my opinion, the absolute killer features include the automatic application deployment using Puppet, with which end users can conveniently and automatically provision EC2 instances with a complete software stack themselves, as well as the consideration of the AZ-spanning high-availability functionality, which many cloud users neglect again and again out of ignorance.

The team has also paid a lot of attention to the look and the interface of the openQRM backend. The completely redesigned user interface appears tidier, is easier to handle and will positively surprise existing customers.

Categories
Services @en

Application Lifecycle Engine: cloudControl launches Private Platform-as-a-Service

Platform-as-a-service (PaaS) provider cloudControl has published its Application Lifecycle Engine, a private PaaS based on the technologies of its public PaaS. With it, the Berlin-based company wants to give companies the ability to operate a self-managed PaaS in their own data center, in order to gain more control over where their data is located and over who has access to it.

cloudControl Application Lifecycle Engine

The cloudControl Application Lifecycle Engine is derived from the technology of the company's own public PaaS offering, which has served as the foundation of that cloud service since 2009.

The Application Lifecycle Engine is to be understood as middleware between the underlying infrastructure and the actual applications (the PaaS concept) and provides a complete runtime environment for applications. It allows developers to focus on developing their application instead of having to deal with processes and deployment.

The published Application Lifecycle Engine is designed to enable large companies to operate their own private PaaS on self-managed infrastructure, in order to bring the benefits of the cloud into their own four walls. The platform relies on open standards and promises to avoid vendor lock-in. cloudControl also builds a hybrid bridge: applications already developed on cloudControl's public PaaS can run on the private PaaS, and vice versa.

Furthermore, cloudControl addresses service providers that want to build a PaaS offering themselves in order to combine it with their existing services and infrastructure offerings.

Trend towards private and hybrid solutions intensifies

cloudControl is not the first company to recognize the trend towards the private PaaS. Nevertheless, the current security situation shows that companies are becoming much more deliberate in their dealings with the cloud and are considering where they can store and process their data. For this reason cloudControl comes at exactly the right time, making it possible for companies to regain more control over their data and to raise awareness of where their data is actually located. At the same time, cloudControl allows companies to enhance their own self-managed IT infrastructure with true cloud DNA: IT departments can give developers easy access to resources so that they can deliver results faster and more productively. In addition, IT departments can use the Application Lifecycle Engine as an opportunity to position themselves as a service broker and partner for the business units within the company.

Not to be disregarded is the possibility of using cloudControl in hybrid operation. It can happen relatively quickly that not enough local resources (computing power, storage) are available for the private PaaS. In this case, the homogeneous architecture of the Application Lifecycle Engine can be used to fall back on cloudControl's public PaaS so as not to slow down ongoing projects and development activities.

Categories
Analysis

Cloud computing is a good remedy against shadow IT

Cloud computing is always portrayed as a great promoter of shadow IT. There is still some truth to this, and I have always strongly emphasized the topic as well. But with the right IT strategy, shadow IT can be preemptively prevented, controlled and even eliminated through cloud computing. This is not an easy path for everyone involved, but it is a worthwhile one.

Reasons for a shadow IT

Shadow IT is not a cloud computing phenomenon. In any larger company, one developer or another has his own server under the desk, or IT projects somewhere run self-installed MySQL databases, in the worst case outside the company at a hoster. Users, as long as they have the appropriate installation rights, use their own software solutions with which they can be more productive than with the tools prescribed for them. What are the reasons for shadow IT?

  • Employee dissatisfaction with used technologies.
  • IT technologies do not meet the desired purpose.
  • IT departments are too slow.
  • IT departments do not deliver according to the desired requirements.
  • Resources are cut due to cost pressure.

How does shadow IT manifest itself?

  • Own server under the table.
  • Workstation becomes a server.
  • Use of cloud infrastructures.
  • Personal credit cards are used and then reclaimed via expense reports.
  • Undeclared or unapproved self-installed software.
  • Use of cloud services and software.

How does cloud computing help?

One only has to look at the employees' reasons to understand why they resort to shadow IT. Leaving the rebels among us aside, it is primarily about the dissatisfaction and helplessness of people who want to do their work more productively. At the end of the day, and regardless of the technology, it is about communication and mutual understanding. That an IT department cannot match the speed of a public cloud provider is quite normal and easy to understand; otherwise IT would be the provider instead of the consumer. But there are ways and means not to fall too far behind the market.

  • Do not prohibit everything and be open to requests.
  • Communicate with employees and ask them for ideas.
  • Establish think tanks and innovation teams that constantly drive new trends into the business.
  • Offer your own self-services.
  • Allow quick access to resources, similar to public cloud providers.
  • Provide middleware in the form of a service portal for employees, through which access to internal and external cloud services is granted.

Cloud computing is not the be-all and end-all solution for shadow IT and is definitely a driver of the problem. But at the same time, cloud computing can help to counteract this phenomenon that has grown over the years. The relevant concepts and technologies are available and need to be implemented.

A promising approach is to create your own service portal for employees, through which they get controlled access to internal and external cloud services. This can be used for infrastructure (virtual servers and storage) as well as for software and platforms. The IT department increasingly becomes a service broker and, by drawing on external resources (hybrid model), can ensure that employees receive the high-quality service they expect. Thus, for example, a server can be provided to a developer within five minutes instead of several weeks. A minimal sketch of such a broker endpoint follows below.
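
To make the service-broker idea more concrete, here is a minimal, hypothetical sketch in Python (Flask): one portal endpoint accepts server requests and decides whether to provision them internally or to burst out to a public cloud. The capacity figure and the two provisioning functions are placeholders, not a reference implementation.

```python
# Hypothetical self-service broker: employees order servers through one portal;
# the broker provisions internally while capacity lasts, otherwise in a public
# cloud (hybrid model). The provisioning functions are placeholders for real APIs.
from flask import Flask, request, jsonify

app = Flask(__name__)
INTERNAL_CAPACITY = 10   # assumed number of free internal VM slots
used = 0

def provision_internally(spec):
    return {"location": "internal", "spec": spec}

def provision_externally(spec):
    return {"location": "public-cloud", "spec": spec}

@app.route("/servers", methods=["POST"])
def order_server():
    global used
    spec = request.get_json()                # e.g. {"cpu": 2, "ram_gb": 4}
    if used < INTERNAL_CAPACITY:
        used += 1
        result = provision_internally(spec)
    else:
        result = provision_externally(spec)  # burst out to the public cloud
    return jsonify(result), 201

if __name__ == "__main__":
    app.run()
```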

Categories
Comment

Top Cloud Computing Washers – These companies don't tell the truth about their products

Since the beginning of cloud computing, the old hardware manufacturers have been trying to save their business from slumping sales by positioning their storage solutions, such as NAS (Network Attached Storage) systems, or other products as „private cloud“ offerings against truly flexible, scalable and highly available solutions from the cloud. The Americans call this type of marketing „cloud-washing“. Of course, these providers also use the current political situation (PRISM, Tempora, etc.) to promote their products even more strongly. Tragically, young companies are jumping on this train as well, because what the old guard can do, they can surely do too. Wrong! None of this interests these providers in the slightest. They recklessly ride roughshod over the real providers of cloud computing solutions. This is wrong and a distortion of competition, as they advertise with empty marketing phrases that demonstrably cannot be fulfilled. At the end of the day, not only are the competitors of these providers and the cloud itself cast in a bad light, but customers also buy a product under false assumptions. One decision maker or another will surely soon experience a rude awakening.

Background: Cloud Computing vs. Cloud-Washing

In recent years many articles on the topic of cloud-washing have appeared here on CloudUser. A small selection, mostly in German:

What does Wikipedia say?

Private Cloud according to Wikipedia

„Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third-party and hosted internally or externally. Undertaking a private cloud project requires a significant level and degree of engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. When done right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities.

They have attracted criticism because users „still have to buy, build, and manage them“ and thus do not benefit from less hands-on management, essentially „[lacking] the economic model that makes cloud computing such an intriguing concept“.“

Cloud characteristics according to Wikipedia

Cloud computing exhibits the following key characteristics:

  • Agility improves with users‘ ability to re-provision technological infrastructure resources.
  • Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs.
  • Cost is claimed to be reduced, and in a public cloud delivery model capital expenditure is converted to operational expenditure. This is purported to lower barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house). The e-FISCAL project’s state of the art repository contains several articles looking into cost aspects in more detail, most of them concluding that costs savings depend on the type of activities supported and the type of infrastructure available in-house.
  • Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
  • Virtualization technology allows servers and storage devices to be shared and utilization be increased. Applications can be easily migrated from one physical server to another.
  • Multitenancy enables sharing of resources and costs across a large pool of users thus allowing for:
    • Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
    • Peak-load capacity increases (users need not engineer for highest possible load-levels)
    • Utilisation and efficiency improvements for systems that are often only 10–20% utilised.
  • Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
  • Scalability and elasticity via dynamic („on-demand“) provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads.
  • Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
  • Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. However, the complexity of security is greatly increased when data is distributed over a wider area or greater number of devices and in multi-tenant systems that are being shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users‘ desire to retain control over the infrastructure and avoid losing control of information security.
  • Maintenance of cloud computing applications is easier, because they do not need to be installed on each user’s computer and can be accessed from different places.

The National Institute of Standards and Technology’s definition of cloud computing identifies „five essential characteristics“:

  • On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
  • Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
  • Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. …
  • Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.
  • Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Source: http://en.wikipedia.org/wiki/Cloud_computing

Top Cloud Computing Washers

Protonet

The latest stroke of genius comes from Hamburg's startup scene in Germany. A NAS with a graphical user interface for social collaboration, in a really(!) pretty orange box, is marketed as a private cloud. And of course the PRISM horse is ridden as well.

ownCloud

Apart from the name, there is basically not much cloud in ownCloud. ownCloud is a piece of software with which, and not easily at that, a (real) cloud storage can be built. For this, of course, an operating system, hardware and much more are needed.

Synology

Synology writes of itself that it is „… a dedicated Network Attached Storage (NAS) provider.“ Fine, but why jump on the private cloud train? Sure, it currently sells well. If the cloud is soon called the qloud, Synology will definitely be selling private qlouds.

D-Link

D-Link is not bad either. In a press release from last year it generously stated:

D-Link, the networking expert for the digital home, enhances its cloud family by adding a new router: with the portable DIR-506L, the personal data cloud can simply be put in your pocket.

and

D-Link continuously invests in the development of cloud products and services. … Already available are the Cloud router … multiple network cameras and network video recorders …

D-Link now even cloudifies network cameras and network video recorders to boost sales by riding the cloud train. Mind you, on the backs and at the expense of its customers.

Oracle

Oracle loves hardware and licenses. Initially this was plain to see: preconfigured application servers, rented for a monthly fee and then installed in the customer's data center, were marketed as infrastructure-as-a-service. Slowly, however, the vendor is catching up. It remains to be seen what impact the cooperation with Salesforce will have on Larry Ellison. He is certainly still annoyed that he simply paid no further attention to Sun's cloud technology after the acquisition.

Further tips are welcome

These are only a few of the providers that use the cloud computing train to secure their existing or even a new business model. Anyone who has more tips like these can send them with the subject „cloud washer“ to clouduser[at]newagedisruption[.]com.