
Cloud computing is not simple!

Cloud computing promises to be simple: start a virtual server here and there and your own virtual cloud infrastructure is ready. If you think I agree with that statement, you are completely wrong. Virtual servers are only a small component of a virtual infrastructure at a cloud provider; a few virtual machines are not a cloud. The complexity lies in the architecture of the application, and thus in the intelligence that the architect and the software developer have built into it. That this intelligence is sometimes missing has been demonstrated to us more than once, and quite impressively, by one cloud user or another. Regularly the same suspects fail whenever their cloud provider is struggling with itself. Cloud computing is not simple! I do not mean simple software-as-a-service (SaaS). I am talking about infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS), and more specifically about the premier class: scalability and high availability. That is what you need to understand and fully take into account, away from the marketing of the cloud providers, if you want to use cloud computing successfully.

Software-defined Scalability and Software-defined High-Availability

New terms are currently circulating through the IT stratosphere: Software-Defined Networking (SDN) and the Software-Defined Data Center (SDDC). An SDN introduces another level of abstraction above the network components. Typically, each router and switch has its own local software through which it is supplied with intelligence by programming. The network administrator tells the router, for example, under what conditions a packet should be routed or not. Within an SDN, the task of programming local intelligence into each component is eliminated. The intelligence moves one level up into a management layer in which the entire network is designed and every rule is defined centrally for each component. Once the design is completed, it is rolled out to the network components and the network is configured. With an SDN it should be possible to change a complete network design at the push of a button, without having to touch each individual component.
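To make the idea more concrete, here is a minimal, purely illustrative Python sketch of rules being defined once in a central management layer and then rolled out to every component. No real SDN controller or protocol such as OpenFlow is used here; all class and device names are made up.

```python
# Minimal illustration of the SDN idea: rules are defined once in a central
# controller and pushed to every network component, instead of configuring
# each router and switch by hand. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class ForwardingRule:
    match_subnet: str   # e.g. "10.0.1.0/24"
    action: str         # "forward" or "drop"
    next_hop: str       # target interface or device

class NetworkController:
    def __init__(self):
        self.rules = []          # the central network design
        self.components = []     # routers/switches under management

    def register(self, component_name):
        self.components.append(component_name)

    def define_rule(self, rule: ForwardingRule):
        self.rules.append(rule)

    def roll_out(self):
        # "Push the button": every component receives the same rule set.
        for component in self.components:
            print(f"Configuring {component} with {len(self.rules)} rules")
            # in a real SDN this would be an API call per device

controller = NetworkController()
controller.register("edge-router-1")
controller.register("core-switch-1")
controller.define_rule(ForwardingRule("10.0.1.0/24", "forward", "eth1"))
controller.define_rule(ForwardingRule("192.168.0.0/16", "drop", "-"))
controller.roll_out()
```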

The idea behind the SDN concept must also be taken into account when using a cloud infrastructure. Using a PaaS, and even more so an IaaS, means a lot of self-responsibility, more than you might think at first glance. Starting one or two virtual machines does not mean that you are using a virtual cloud infrastructure; they are and remain just two virtual servers. An IaaS provider only supplies the components, such as the aforementioned virtual machines, storage and other services, plus APIs with which the infrastructure can be used. Put simply, an IaaS provider only gives its customers the resources and tools to build their own virtual infrastructure, their own virtual data center, on top of the provider's cloud infrastructure.

It is therefore up to software (your own application) to ensure that the cloud infrastructure scales when necessary (Software-Defined Scalability, SDS) and that, if a cloud infrastructure component fails, a replacement component (e.g. a virtual machine) is started to take its place (Software-Defined High-Availability, SDHA). The software itself provides the scalability and high availability of the virtual cloud infrastructure, so that the web application scales, stays fault-tolerant and exploits the characteristics of the respective cloud provider.
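To illustrate what such software-defined scalability and high availability can look like in practice, here is a small sketch against the AWS API using boto3. The AMI ID, instance type and the desired number of healthy servers are placeholder assumptions; a real control loop would of course add health checks, error handling and scaling rules.

```python
# Sketch of "software-defined" scaling and high availability on an IaaS API.
# Uses boto3 (AWS SDK for Python); the AMI ID, instance type and thresholds
# are placeholders, not real values.

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

AMI_ID = "ami-12345678"        # hypothetical image of the application server
INSTANCE_TYPE = "m3.medium"

def launch_replacement(count=1):
    """Start fresh virtual machines to replace failed ones or absorb load."""
    ec2.run_instances(ImageId=AMI_ID, InstanceType=INSTANCE_TYPE,
                      MinCount=count, MaxCount=count)

def reconcile(desired_healthy=3):
    """Keep the number of healthy application servers at the desired level."""
    statuses = ec2.describe_instance_status(IncludeAllInstances=True)
    healthy = [s for s in statuses["InstanceStatuses"]
               if s["InstanceState"]["Name"] == "running"
               and s["InstanceStatus"]["Status"] == "ok"]
    missing = desired_healthy - len(healthy)
    if missing > 0:
        launch_replacement(missing)   # the software, not the provider, restores capacity

if __name__ == "__main__":
    reconcile()
```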

Netflix impressively demonstrates how a cloud computing infrastructure can be used almost to perfection.

Cloud computing is not simple!

Source: Adrian Cockcroft

Netflix, the supreme example

Netflix is by far the world's largest cloud service. During peak periods the video streaming service is responsible for a third of all Internet traffic. These user requests must of course be answered performantly and at any time. Netflix has relied on cloud technologies since its launch in 2009 and, by November 2012, had shifted its complete technology stack, including the resulting infrastructure, to the Amazon Web Services cloud. Around 1,000 virtual Linux-based Tomcat Java servers and NGINX web servers run there, along with other services such as Amazon Simple Storage Service (S3) and the NoSQL database Cassandra in conjunction with memcached, a distributed memory object caching system.

However, this is only one side of the coin. More important is the use of multiple Availability Zones in the Amazon cloud. Netflix uses a total of three Availability Zones to increase the availability and speed of its service. If a problem occurs in one Availability Zone, the architecture of the Netflix application is designed so that the service can continue to run in the other two. Netflix has not relied on Amazon's pure marketing promises here, but has developed its own software, the Chaos Gorilla, which tests the stability of the virtual servers in Amazon Elastic Compute Cloud (EC2). In short, the failure of a complete EC2 Region or Availability Zone is simulated to ensure that the Netflix service continues to function in an emergency. One of the biggest challenges lies in the fact that, in the event of an error in an Amazon Region, the Domain Name System (DNS) has to be reconfigured automatically so that Netflix customers do not notice the failure. The differing APIs of the DNS providers do not make this task easier. In addition, most of them were designed with the assumption that settings are changed manually, which makes automation harder still.
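The following sketch only illustrates the principle behind Chaos Monkey and Chaos Gorilla, it is not Netflix's actual implementation (the Simian Army is open source and written in Java): terminate a random instance in one Availability Zone and check whether the service still answers. The health-check URL and the region are placeholder assumptions.

```python
# Sketch of the idea behind Netflix's Chaos Monkey / Chaos Gorilla:
# deliberately kill instances in a test and verify the service keeps
# answering. The boto3 calls are real, the service URL and zone are placeholders.

import random
import urllib.request

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
SERVICE_URL = "https://status.example.com/health"   # hypothetical health endpoint

def instances_in_zone(zone):
    reservations = ec2.describe_instances(
        Filters=[{"Name": "availability-zone", "Values": [zone]},
                 {"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    return [i["InstanceId"] for r in reservations for i in r["Instances"]]

def kill_random_instance(zone):
    candidates = instances_in_zone(zone)
    if candidates:
        victim = random.choice(candidates)
        ec2.terminate_instances(InstanceIds=[victim])
        return victim

def service_still_healthy():
    try:
        return urllib.request.urlopen(SERVICE_URL, timeout=5).status == 200
    except OSError:
        return False

if __name__ == "__main__":
    killed = kill_random_instance("us-east-1a")
    print(f"Terminated {killed}, service healthy: {service_still_healthy()}")
```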

The bottom line is that Netflix plans ahead for the error case and does not simply rely on the cloud. Because sometimes something goes wrong in the cloud, just as in any ordinary data center; you only need to be prepared for it. Anyone who is interested in what Netflix does to reach this state should read „Netflix: The Chaos Monkey and the Simian Army – The model for a good cloud system architecture“ (only available in German).

Simplicity counts

Maybe I am asking too much. After all, cloud computing is a relatively new concept. Nevertheless, Netflix shows impressively that it works. However, when you consider what huge efforts Netflix makes to be successful in the cloud, you simply have to conclude that cloud computing is not simple and that a cloud infrastructure, no matter at which provider, needs to be built with the corresponding architecture. Conversely, this means that using the cloud must become simpler in order to achieve the promised cost advantages, because using cloud computing the right way is not necessarily cheaper. Beyond the savings in infrastructure costs that are always cited, the other costs should never be neglected: staff with the appropriate skills and the development of scalable and fault-tolerant applications in the cloud.

The positive sign is that I see the first startups on the horizon that address this problem and have made the simple consumption of ready-made cloud resources their mission, so that the user no longer has to worry about scalability and high availability.


Dead PC market – Microsoft is the scapegoat!

To start off with, I do not agree with every decision Microsoft takes, and there are certainly products I view critically too. But what happened here just shows how narrow-minded some arguments are. The background of the excitement are the conclusions of the market researchers IDC and Gartner on the current situation in the PC market. Both make Microsoft, or rather Windows 8, responsible for the death of the PC. Finally, even the investment bankers at Goldman Sachs and other financial analysts have joined in and arrived at absurd statements. Microsoft is wrongly being used as a scapegoat here!

Background: Current data from the PC market

According to IDC, only 76.3 million devices were shipped in the first quarter of 2013, a decrease of 14 percent. Gartner's numbers, published a day later, show a decline of 11.2 percent, which corresponds to 79.2 million shipped personal computers.

A culprit was quickly identified: Microsoft, or rather Windows 8. IDC had actually assumed that Windows 8 would revive the PC market. Instead, according to IDC, Microsoft has worsened the situation because customers cannot get used to the new tile interface, among other things.

Meanwhile, the financial sector has responded. One financial analyst is quoted as follows: „The PC used in business is a mature market with declining replacement investments, and in the consumer market you do not need Office, so no Windows and thus no Microsoft.“ Investment bankers have therefore advised their customers to sell their Microsoft shares.

For me, this statement shows that investment bankers and financial analysts have no foresight and, even worse, no clue about the market they are asked to assess. Instead of first critically scrutinizing the conclusions of IDC and Gartner, unqualified statements are made that may well have a financial impact on Microsoft and an entire industry.

Hybrid Tablets are the trend

The PC market is not dead; it has changed in recent years. And if one were to argue from the perspective of Gartner and IDC, it would be Apple that is responsible for the death of the PC market, with the launch of the iPad.

Of course, the situation is quite different. The PC is not dead; it has simply begun to take on a different shape. In my five cloud computing, mobile and big data predictions for 2013, I pointed out that hybrid tablets in particular will move into the enterprise. Hybrid tablets are tablets with a clip-on or rotating hardware keyboard. The background is that tablets help us use most of our IT, the Internet and other things more conveniently. For full use in the enterprise, but also in the private sphere, however, they are not suitable. At conferences and while traveling I regularly see users who have bought an external keyboard for their tablet. Hybrid tablets combine the best of both worlds: the tablet with its touch screen and easy-to-use interface, plus the traditional laptop with its keyboard. Writing long texts on a tablet's virtual keyboard is no fun and is exhausting, which is also what business users report. In addition, hybrid tablets will replace today's conventional laptops. And what runs on these hybrids, among other things? Right, Windows 8 – and even MS Office.

That hybrid tablets are the trend has now also been recognized by Apple. There are initial reports that Apple has filed a patent for a „Hybrid Tablet Notebook“.

User behavior is changing

Just as the technology market develops, our lives and the way we consume technologies are affected. If Microsoft had dispensed with the keyboard on its tablet in 2002, bet on touch instead of a pen and chosen a thinner form factor, it would have turned the PC market upside down 11 years ago. However, it was Apple that went one better at the right time, riding the hype of the iPod and iPhone, with the iPad.

However, we also see that Microsoft's concept of a hybrid tablet from 2002 was already the right idea. One may therefore say that Microsoft backed the right horse even then. It was, as so often, just not the right time.

If we look across to the automobile industry, we see similar changes. „Why are thousands of new cars secretly parked in Bavaria?“ Because the automakers' orders are declining and sales of new vehicles are collapsing. „Thousands of cars have to be moved out of the way to make room for newer models and keep the market alive.“

And why is that? Because user behavior and expectations change. A senior SAP manager told me at CeBIT that in interviews with young talent it has long since stopped being about whether and which company car they will get. The question is whether they will get an iPhone and an iPad.

A few days ago I heard a lecture by a Roland Berger consultant. The message was very clear: the car market as we know it today is dead. Logically, new mobility concepts like flinc, Flinkster, book-n-drive or car2go are far more cost-efficient. In many cases it is simply no longer worth making large investments in a car of one's own. In addition to the monthly cost of taxes and insurance, gasoline prices are one reason. Here, too, you cannot hold a single car manufacturer or model responsible for this situation. It is about what the customers want.

We are currently experiencing the greatest technological revolution to date, and it has a direct impact on our society. Starting from IT concepts (cloud computing, mobile, social media, etc.), user behavior in other industries is changing as well. And not a single company is responsible for that!


Amazon Web Services (AWS) may use Eucalyptus to build the CIA's private cloud

More and more rumors are appearing that Amazon is building a private cloud for the CIA's intelligence activities. The first question that arises is: why for an intelligence agency? Second, why Amazon AWS, a public cloud provider, given that it is a service provider and not a software company? But if we think a little further, Eucalyptus immediately comes into play.

The Amazon deal with the CIA

According to Federal Computer Week, the CIA and Amazon Web Services (AWS) have signed a $600 million contract with a term of 10 years. The background is that Amazon AWS is to build a private cloud for the CIA on the government's infrastructure. More information is not yet available.

However, the question is why the CIA asked Amazon. Is there more behind it than the $$$? Amazon is one of the few companies that have the knowledge and the staff to operate a cloud infrastructure successfully. Despite some avoidable outages, the infrastructure is designed to be extremely robust and smart. However, Amazon has a downside: it is a service provider, not a software vendor. That means it has little experience in shipping software, providing customers with updates and so on. Moreover, it will likely, or hopefully, not use the same source code for the CIA's cloud.

This is where a cooperation that Amazon entered into some time ago with Eucalyptus might come into play. It could solve the problem of the missing software for on-premise clouds and the lacking experience with maintenance, software fixes etc. for customers.

The cooperation between Amazon AWS and Eucalyptus

In short, Eucalyptus is a stand-alone, API-compatible re-implementation of the Amazon cloud. With it you are able to build your own cloud with the basic features and functions of the Amazon cloud.

In March 2012, Amazon Web Services and the private cloud infrastructure software provider Eucalyptus announced that they would work more closely together in the future to better support the migration of data between the Amazon cloud and private clouds. Initially, developers from both companies will focus on solutions that help enterprise customers migrate data between existing data centers and the AWS cloud. Furthermore, and even more importantly, customers should be able to use the same management tools and their existing knowledge for both platforms. In addition, Amazon Web Services will provide Eucalyptus with further information to improve compatibility with the AWS APIs.
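Because Eucalyptus speaks the AWS APIs, the same client tooling can in principle be pointed at either cloud simply by switching the endpoint. The following sketch uses boto3; the Eucalyptus endpoint URL and the credentials are placeholder assumptions.

```python
# Because Eucalyptus exposes AWS-compatible APIs, the same client tooling can
# talk to either cloud by switching the endpoint. The Eucalyptus endpoint URL
# and credentials below are placeholders.

import boto3

# Public AWS cloud
aws_ec2 = boto3.client("ec2", region_name="us-east-1")

# Private Eucalyptus cloud: same API surface, different endpoint (hypothetical URL)
euca_ec2 = boto3.client(
    "ec2",
    region_name="eucalyptus",
    endpoint_url="https://compute.euca.internal:8773/services/compute",
    aws_access_key_id="EUCA_ACCESS_KEY",
    aws_secret_access_key="EUCA_SECRET_KEY",
)

# The identical call works against both clouds
for client, name in [(aws_ec2, "AWS"), (euca_ec2, "Eucalyptus")]:
    images = client.describe_images(Owners=["self"])
    print(name, "images:", len(images.get("Images", [])))
```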

Amazon Eucalyptus private cloud for the CIA

Such a cooperation could now bear fruit for both Amazon and Eucalyptus. Because Eucalyptus is so close to the Amazon cloud, there is already a finished, directly usable cloud software that Amazon knows and can contribute its knowledge to. Services that Eucalyptus does not yet support but that the CIA needs could be reproduced step by step. This in turn could help Eucalyptus in its own development, as knowledge from that work, or even parts of services, flow back into the general product.

PS:

I have my problems with this seemingly official cooperation between Amazon and the CIA. One keyword here is trust, which Amazon does not exactly strengthen by cooperating with an intelligence agency.


Actually, a private cloud of your own makes no financial sense?!

I was asked this question by a visitor at CeBIT after my lecture/panel „Public vs. Private Cloud“ during Webciety 2013. We then talked a little about the topic. His question was why some companies decide for a private cloud despite the massive investments, and how they measure and justify these capital expenditures.

Private cloud is a huge financial effort

Whoever decides to build a real private cloud should also be aware that this means more than just setting up a terminal server. Cloud computing, even in the private cloud, means among other things self-service, scalability, resource pools, automation, granular billing and on-demand provisioning of resources. One foundation of the cloud is virtualization, and virtualization requires hardware from which the virtual resources are created. To ensure meaningful scalability and availability, an investment in hardware is necessary. Other cost items are the software that provides the cloud functionality and is responsible for the pooling of resources, automatic on-demand provisioning, self-service, billing, etc. In addition, you need staff with the appropriate knowledge to build, maintain and operate the cloud infrastructure. In other words, you become your own cloud provider.

So if we consider what financial expenditures have to be shouldered in order to realize the characteristics of a private cloud, we quickly come to the conclusion that, from a purely economic point of view, a private cloud does not make sense. Unless, that is, you pay less attention to the cost and want to make your business more agile in a strictly private setting, increase the satisfaction of your employees, and avoid becoming dependent on a provider and on its level of data protection.

The KPIs of the cloud ROI need to be expanded

The successful use of cloud computing cannot always be expressed in hard numbers. I wrote more about this in the article „Cloud Computing ROI: Costs not always say a lot“.

When evaluating cloud computing I deliberately advise not to look only at the actual costs, but to include other factors that a typical return on investment (ROI) does not know. Its KPIs (key performance indicators) only measure the financial input and the resulting output. In the private cloud the output will always look negative, because no real added value is directly visible, unless you define one for yourself. This has nothing to do with cheating; it is a logical conclusion that results from the flexibility and agility of cloud computing.

One of these KPIs, and for me one of the most crucial, is „employee satisfaction“. It tells us how happy your employees are with the use and provisioning of the current IT infrastructure. I claim that this KPI is dark red in many companies; otherwise there would not be such a strong desire for shadow IT.

By building a private cloud the provisioning of resources can be accelerated dramatically. This allows you to deploy servers within 5 minutes instead of days or weeks. Developers are asking for on-demand provisioning of resources; just ask them. Unfortunately the value of a developer or employee working with satisfaction never flows into the ROI. But it should, because agility, flexibility and satisfaction should play as large a role as the actual costs.
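As a rough illustration of how such soft factors could be folded into an extended ROI, here is a back-of-the-envelope calculation. Every number in it is an assumption chosen for the example, not real data.

```python
# Back-of-the-envelope sketch: folding "soft" factors such as provisioning
# time and blocked developer time into an extended ROI view of a private
# cloud. Every number below is an illustrative assumption, not real data.

private_cloud_invest = 250_000      # hardware, cloud software, staff per year (assumed)

requests_per_year = 400             # server/resource requests (assumed)
old_wait_days = 10                  # provisioning used to take days or weeks
new_wait_days = 5 / (60 * 8)        # now roughly 5 minutes of an 8-hour day
blocked_fraction = 0.25             # share of waiting time that really blocks work (assumed)
developer_day_cost = 600            # fully loaded daily rate in EUR (assumed)

days_recovered = requests_per_year * (old_wait_days - new_wait_days) * blocked_fraction
agility_value = days_recovered * developer_day_cost

# A classical ROI only compares infrastructure cost with infrastructure cost;
# here the recovered working time is counted as output as well.
extended_roi = (agility_value - private_cloud_invest) / private_cloud_invest

print(f"Working days recovered per year: {days_recovered:,.0f}")
print(f"Value of recovered time:         {agility_value:,.0f} EUR")
print(f"Extended ROI:                    {extended_roi:.0%}")
```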


VMware is NOT the technology enabler of the cloud

In a recently published German interview, a VMware employee got carried away into making the, for me, very vague and questionable statement „VMware is the technology enabler of the cloud, and I currently see no other.“ Without wanting to offend VMware, this sentence sounds to me like a severe overestimation and a loss of touch with reality. There is no doubt that VMware is the king of virtualization, but to assert that nothing in the cloud would work without them is very presumptuous.

Reality distortion field

The „reality distortion field“ is attributed to Steve Jobs. It is the ability to convince someone of whatever one declares to be true and to twist the facts until, in the end, it looks as if it had been that person's own idea. Fortunately, VMware does not claim to have invented cloud computing, which would not be a loss of touch with reality but delusions of grandeur.

Virtualization is not cloud computing

It is very presumptuous to claim to be the technology god of the cloud. Just think about what VMware actually provides as the basis for the cloud: virtualization, i.e. the hypervisor. Of course, virtualization is one of the main foundations of the cloud. But much more is needed to provision the virtual resources and make use of them.

Sticking with the hypervisor, nobody can deny that VMware is the market leader; here they do a good job. Nevertheless one should not forget that Amazon Web Services and Rackspace use the open source hypervisor Xen, and that the HP Cloud, just like the Google Compute Engine, relies on KVM.

So the real cloud computing market leaders do not build on VMware. Therefore, there can be no talk of VMware being the technology enabler of the cloud.

Furthermore, one should not forget the many other technology providers like openQRM, OpenStack, CloudStack, Eucalyptus, Microsoft, etc., who help companies build infrastructures for the public as well as the private cloud. VMware is certainly a provider that is now trying to get its piece of the cloud cake. But in the end they are only one of many and, like everyone else, have to hold their own every day.

I know that I will not make any friends at VMware with this article. But I cannot leave such an intolerable assertion standing on the Internet, sorry VMware!


Stop talking about cloud vendor lock-in, because actually we all love it

A popular cloud computing discussion is vendor lock-in: a dependence on one vendor that is so large that switching away from the current situation is uneconomical because of high switching costs. I recently had such a discussion during a panel and would like to describe my general attitude towards vendor lock-in, because I think a lock-in is not necessarily a bad thing. Actually, we love it; we mostly just do not realize it. Moreover, we have lived with it for decades, each of us, every day. A lock-in is unavoidable; it is always there!

The daily lock-in

A good example is IKEA. Our apartment is furnished with furniture from IKEA, right up to the kitchen. Why? Because it looks good, fits together perfectly AND combining it with furniture from other vendors is tough. And? We think it is great because it does its job and, above all, fulfills our requirements. Translated to IT, this means that companies consciously enter a supposed lock-in with SAP, Microsoft, Oracle and other vendors because they need it and the vendors meet the requirements as far as possible. Just consider how many companies have adapted themselves to SAP and not the other way around, because initially there were no alternatives and because SAP provided the functions required for the daily business.

The paragon of a lock-in is the iPhone/iPad. And? Right, everybody loves it! Because the iPhone „raises“ one's own coolness factor and also brings the functions so many need, even if they did not know it beforehand. Apart from the coolness factor, the same goes for Android. One may argue that Android is open source and thus the freedom is ultimately greater. On the one hand this is true, and for the individual it is possible to keep the cost of switching in check. From the perspective of a company it looks very different: here the lock-in is comparable to the iPhone/iPad, just in the Google universe.

„No vendor lock-in“ is a pure marketing promise

It is similar in the cloud, and here specifically with OpenStack. Rackspace is the loudhailer of the OpenStack community. The provider argues that it is easier to move back to your own private cloud infrastructure if, as an enterprise, you depend on a public cloud provider that relies on OpenStack. And of course Rackspace delivers its own OpenStack private cloud distribution, and in the end Rackspace earns money with consulting and other services. Rackspace's argument is correct, but it should rather be understood at the marketing level. Even a vendor like Microsoft, with its Windows Server 2012, offers the ability to leave Azure and go back to your own infrastructure. And you can also free yourself from Amazon Web Services (AWS) if you run your own Eucalyptus cloud. Admittedly, Eucalyptus is still far from providing all of AWS's services, but that will come with time. I also continue to believe that Amazon will buy Eucalyptus sooner or later.

So even with OpenStack you enter a lock-in, although they talk about open standards and APIs. In the end you will be caught in the OpenStack ecosystem. Yes, you can switch more easily to another provider that supports OpenStack, but the topic of openness is used here more as a marketing tool. After all, OpenStack and other commercial open source alliances are not charity events; this is tough business. In the end OpenStack only makes it easier to change between the various providers that also rely on OpenStack. But since all these vendors build on virtually the same technology and distinguish themselves only through value-added services, you end up with the OpenStack technology lock-in. (Yes, of course it is possible to build your own OpenStack cloud independent of any provider. But the (financial) effort is out of proportion to the later benefit.)

A lock-in offers more advantages than you think

A lock-in offers several advantages. It creates innovations from which one, as an alleged „slave“, profits: the vendors constantly roll out improvements we benefit from. And the lock-in is fine as long as it simply does the job that is expected. If the solution meets the requirements, everything is great! As a company, once you have decided for a provider, you accept the lock-in deliberately. On-premise, companies have accepted a lock-in for years. Certainly you do not quite agree with one function or another, but on the whole you are happy, because the problems are solved and the needs are met. And, as a company in the cloud, you put a not insignificant sum of money on the table; these investments have to pay off first before you think about switching back. One more thing should not be forgotten: if another provider wants to break a company free from an existing provider, it will do everything possible to enable that path. So before you worry too much about a lock-in, it is advisable to look first at how future-proof the services and the cloud provider itself are, and to include that in a possible exit strategy.

Conclusion: We live with lock-in constantly. It can never be completely avoided, only kept reasonably small. One lock-in is somewhat more flexible than another, but the decision for one remains. Undeniably, we love it, the lock-in!


How future-proof is the Google cloud portfolio?

Google is known for „just“ throwing a new service onto the market. That is their corporate culture; it promotes creativity within the company and drives innovation. But at the end of the day Google, too, must clean up the playground. This has resulted in numerous closures of famous and lesser-known services in the recent past. The latest victim is the popular Google Reader, which will be shut down on the 1st of July 2013. This naturally raises the question of how vulnerable Google's cloud services are to a portfolio adjustment. At the very least, the finger seems to rest very loosely on the close button.

Longevity vs. popularity

Google is scrapping Google Reader because, it says, its popularity has fallen sharply. That Google is apparently not quite right here is shown by a recent petition against the closure: more than 20,000 people have signed against it.

Google sometimes gives me the impression of a big kid. They find many things exciting at once (the 20 percent rule), play with them, invest time and then lose interest and pleasure when the playmates apparently no longer want to play along. The concentration phase is really high for only a few products. (From an entrepreneurial point of view, that is correct.)

Companies do not like that

Google is currently making great efforts to gain broad traction in the business environment. Against incumbent providers such as Microsoft and IBM, that is no easy task. Google's trump card is that they were born in the cloud and know its rules inside out; after all, they largely wrote them themselves.

Nevertheless, the question is legitimate: why should a business use Google cloud services such as Google Apps or the Google Cloud Platform if services that are apparently not used well enough can suddenly be closed? Even though Google has now monetized the cloud solutions named above, the question remains valid, because through monetization each individual service suddenly gets a new KPI: revenue!

Google can assume that enterprises will not be thrilled if they suddenly receive an e-mail telling them that the service they are using will be closed in three months due to declining attractiveness and revenue figures.

For enterprises this type of product management is not attractive, and Google must learn that enterprises need to be treated differently from private users, even as consumerization progresses.

Cleaning up the portfolio is good, but…

No question, it makes sense to clean up the portfolio steadily; many other providers would be well advised to do the same. However, those providers seem to act in the interests of their customers and offer their products and services along a long-term roadmap. At Google, in contrast, the finger seems to rest rather loosely on the „service close button“.

I do not think companies will band together for a petition against the closure of Google services. True, you always hear about „too big to fail“, but Google is not that big either.


One third of German companies use the cloud. Really? I don't think so.

According to a Bitkom survey of 436 German companies, a third of all respondents used cloud computing in 2012. At first this sounds good and suggests that cloud adoption is rising in Germany. However, I assume that the number is sugarcoated. No, not by Bitkom itself, but because it is still unclear what cloud computing really means, and many of the surveyed companies simply said yes even though they are not using the cloud. Support for my assumption comes from Forrester Research.

Survey results of Bitkom

That one in three companies in Germany relies on the cloud corresponds to a growth of 9 percent compared to 2011. Additionally, 29 percent plan to deploy cloud solutions, while another third does not see cloud computing on its agenda at all. The survey reveals that 65 percent of large firms with 2,000 or more employees currently have cloud solutions in use. For the mid-market, between 100 and 1,999 employees, the figure is 45 percent; smaller companies with 20 to 99 employees account for a quarter.

Private cloud is preferred

Moreover, 34 percent of the surveyed companies rely on their own private clouds, a growth of 7 percent compared to 2011, and 29 percent plan to use this form of cloud.

Now let us come to my assertion that the claim that one third of German companies use the cloud is sugarcoated. What I hear and see again and again has now also been stated publicly by Forrester Research, more precisely by James Staten, who even calls it cloud-washing: 70 percent of „private clouds“ are no clouds at all.

70 percent of „private clouds“ are no clouds

The problem lies mainly in the fact that many IT administrators still lack an understanding of what cloud computing, whether public or private cloud, really means. As James Staten writes, 70 percent of the interviewed IT administrators are not aware of what a private cloud really is. Most already call a fully virtualized environment a cloud, even though it generally lacks the core features of a cloud.

Virtualization is not cloud computing

It has to be made clear again at this point that merely virtualizing an infrastructure does not make a private cloud. Virtualization is a subset of cloud computing and a key component. But self-service, scalability, resource pools, automation, granular billing, on-demand delivery of resources and so on are not provided by an ordinary virtualization solution; only a cloud infrastructure offers them.

Frighteningly, some vendors are so brazen as to now sell their former on-premise virtualization solutions as a cloud. I received this „confession“ from an employee of a very large U.S. vendor that now offers cloud solutions. The context, in a personal conversation, was roughly: „We have adjusted our VMware solutions by simply writing cloud on them, in order to quickly have something ‚cloud-ready‘ on the market.“

German companies believe they have a „private cloud“

I see it similarly with German companies. I would not blame Bitkom; after all, they have to rely on their questions being answered correctly. And what can they do if the respondents, out of ignorance, answer incorrectly by claiming to use a private cloud, even though it is no more than a virtualized infrastructure without cloud properties?

With this in mind, you should view the results of this Bitkom survey critically, put them into perspective and acknowledge that it is not one third of German companies that use cloud computing.

Update: 12.03.13

I do not want to give the impression that I pluck my statements out of thin air. Yesterday somebody told me that their „terminal server“ is a private cloud. Reason: there are so many definitions of cloud, you can just pick one.

Update: 13.03.13

Exchange servers with OWA are also often called a private mail cloud.


Big data and cloud computing not only help Obama

In my guest post on the automation experts blog two weeks ago, I discussed the topic of big data and how companies should learn from Barack Obama's 2012 U.S. election campaign to convert real-time information into a lead. Alongside cloud computing, mobile and social media, big data is one of the current top issues in the enterprise IT environment. It is by no means only a trend but a reality, with a far-reaching influence on businesses, their strategic direction and their IT. Established technologies and methods for analyzing data have reached their limits, and only the company that manages to extract an information advantage from its data silos will be a step ahead of the competition in the future.

Big Data: No new wine in old wineskins

Basically the idea behind big data is nothing new. From the early to mid 1990s the term „business intelligence“ already existed, describing procedures for systematically analyzing data. The results are used to gain new insights that help a company achieve its objectives and make strategic decisions. However, the data base to be analyzed was much smaller than today, and analyses could only be made on data from the past, leading to uncertain forecasts for the future. Today, everyday objects collect massive amounts of data every second: smartphones, tablets, cars, electricity meters and cameras. There are also areas not located in the immediate vicinity of a person, such as fully automated manufacturing lines, distribution warehouses, measuring instruments, aircraft and other means of transport. And of course it is we humans who feed big data with our habits. Tweets on Twitter, comments on Facebook, Google search queries, browsing on Amazon and even the vital signs during a jogging session provide modern companies with vast amounts of data that can be turned into valuable information.

Structured and unstructured data

Large data sets are not a new phenomenon. For decades, retail chains, oil companies, insurance companies and banks have collected masses of information on inventories, drilling data and transactions. This also includes projects for the parallel processing of large data sets, data mining grids, distributed file systems and distributed databases, in the areas typically associated with what is now known as big data: the biotech sector, interdisciplinary scientific research projects, weather forecasting and the medical industry. All of the above areas and industries are struggling with the management and processing of large data volumes.

But now the problem has reached the „normal“ industries as well. Today's challenge is that data arises from many different sources, sometimes quickly, unpredictably and, above all, in unstructured form. Big data is meant to help where many different data sources can be combined, for instance tweets on Twitter, browsing behavior or information about clearance sales, and to use this understanding to develop new products and services. New regulations in the financial sector lead to higher data volumes and require better analysis. In addition, web portals like Google, Yahoo and Facebook collect enormous amounts of data every day, which they also associate with users to understand how a user moves around the site and behaves. Big data is becoming a general problem. According to Gartner, enterprise data could grow by up to 650 percent over the next five years, 80 percent of it unstructured data, i.e. big data, which has already proven difficult to manage. In addition, IDC estimates that the average company will have to manage 50 times more information by 2020, while the number of IT staff will increase by only 1.5 percent. A challenge companies must respond to efficiently if they want to remain competitive.

Why companies choose big data

But where do these huge amounts of data come from, and what motivates a business to deal with the issue? The market researchers of the Experton Group tried to answer these questions in their „Big Data 2012 – 2015“ client study in October 2012. According to the study, the main driver for the use of big data technologies and concepts is the rapid growth of data, including the corresponding quality management and the automation of analysis and reporting. About a third of the companies take the topics of customer loyalty and marketing as an opportunity to renew the analysis of their databases. New database technologies are named by 27 percent of respondents as a motivation for new methods of data analysis. Furthermore, almost all characteristics of big data count among the reasons for expanding strategic data management. This shows that big data is already a reality, even though in many cases it is not known by that term. The big data drivers are the same across all industries and company sizes; the difference lies only in their significance and intensity. One big difference is company size and the distribution of data and information to the right people in the company: this is where larger companies see their biggest challenges, whereas smaller companies classify the issue as uncritical.

Big data: a use case for the cloud

The oil and gas industry has solved the processing of large amounts of data through the use of traditional storage solutions (SAN and NAS). Research-oriented organizations or companies like Google, which deal with the analysis of mass data, tend to follow the grid approach instead and invest the resources saved in software development.

Big data processing belongs to the cloud

Cloud infrastructures help reduce the costs of the IT infrastructure. They allow a company to focus more effectively on its core business and gain greater flexibility and agility for implementing new solutions. This lays a foundation for adapting to the ever-changing amounts of data and for providing the necessary scalability. Thanks to the investments in their infrastructure, cloud computing providers are able to build and maintain an environment that is usable and friendly for big data, whereas a single company can neither provide adequate resources for this kind of scalability nor muster the necessary expertise.

Cloud resources grow with the amount of data

Cloud computing infrastructures are designed to grow or shrink with demand. The high requirements expected from big data, such as high processing power, large amounts of memory, high I/O and high-performance databases, can easily be met by using a cloud computing infrastructure, without companies having to invest heavily in their own resources.

Cloud concepts such as infrastructure-as-a-service (IaaS) combine both worlds and thus occupy a unique position. Those who understand the SAN/NAS approach can also use the resources to design massively parallel systems. For companies who find it difficult to deal with or understand the above technologies, IaaS providers offer appropriate solutions, so that they can avoid the complexity of storage technologies and concentrate on the challenges facing their business.

One viable solution comes from cloud computing pioneer Amazon Web Services. With the AWS Data Pipeline (still in beta), Amazon offers a service that moves and processes data automatically between different systems, which can be located either directly in the Amazon cloud or on another system outside of it. Amazon thereby makes it easier to handle the growing amounts of data spread over distributed systems in different formats. To this end, a number of pipelines can be created in which the different data sources, conditions, targets, instructions and schedules are defined. In short, it is about which data is loaded from which system under which conditions, how it is processed, and where the results should be stored afterwards. The pipeline is started as needed, hourly, daily or weekly, and the processing can take place either directly in the Amazon cloud or on systems in the company's own data center.
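Conceptually, such a pipeline definition simply ties together a source, a precondition, a processing step, a destination and a schedule. The following sketch only illustrates this idea; it is not the actual AWS Data Pipeline definition format, and all names and paths are made up.

```python
# Conceptual sketch of what a pipeline definition expresses (source, condition,
# processing step, destination, schedule). This is NOT the actual AWS Data
# Pipeline format, only an illustration of the idea described above.

pipeline = {
    "name": "nightly-clickstream-import",
    "schedule": {"period": "1 day", "start": "2013-04-01T02:00:00"},
    "source": {"type": "s3", "path": "s3://example-bucket/raw-logs/"},     # placeholder
    "precondition": "source contains new objects since the last run",
    "activity": {"type": "transform", "script": "aggregate_sessions.py"},  # placeholder
    "destination": {"type": "database", "table": "sessions_daily"},
}

def run(pipeline_definition):
    """Very small driver loop: report what would be read, checked and written."""
    print(f"Running pipeline '{pipeline_definition['name']}'")
    print(f"  read from  {pipeline_definition['source']['path']}")
    print(f"  condition: {pipeline_definition['precondition']}")
    print(f"  write to   {pipeline_definition['destination']['table']}")

run(pipeline)
```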

Big Data = Big Opportunities?

Not only the Obama example shows how profitable the use of structured and unstructured data from mobile devices, social media channels, the cloud and many other sources can be for a company. However, one has to be clear about one point regarding big data: in the end it is not the mass of data that is collected that matters, but its quality and the purpose for which the data is ultimately used.

It is therefore crucial whether and how a company manages the masses of data generated by human and machine interactions, and whether it can extract the highest-quality information from them and thus secure a leading position in the market. Qualified data is the new oil, and it provides the lucrative fuel for companies that recognize their own advantage in it.


The Windows Azure Storage outage shows: Winner crowns have no meaning

The global outage of Windows Azure Storage caused by an expired SSL certificate has once again made big waves on Twitter. The ironic aftertaste comes in particular from the fact that Windows Azure had recently been chosen by Nasuni as the „Leader in Cloud Storage“. Azure was able to stand out especially in terms of performance, availability and error rates.

Back to business

Of course, the test result was rightly celebrated on the Windows Azure team's blog (DE).

„In a study by the Nasuni Corporation, various cloud storage providers such as Microsoft, Amazon, HP, Rackspace and Google were compared. The result: Windows Azure took the winner's crown in almost all categories (including performance, availability, errors). Windows Azure is clearly shown there as a ‚Leader in Cloud Storage‘. Here is an infographic with the main findings; the entire report is available here.“

However, the Azure team was brought back down to earth with a bump by this really heavy outage, which could be felt around the world. They should rather return to the daily business and make sure that an EXPIRED SSL CERTIFICATE can never again lead to such a failure. This outage has put the test result into perspective and shows that winner's crowns have no meaning at all. It is surprising that it is always avoidable errors that lead to such catastrophic outages; another example is the problem with the leap year 2012.
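An expired certificate is exactly the kind of avoidable error that simple monitoring can catch. As an illustration, here is a small Python sketch that warns well before a certificate expires; the monitored host and the warning threshold are placeholder assumptions.

```python
# Simple monitoring sketch: warn well before a TLS/SSL certificate expires, so
# an expired certificate can never take a service down unnoticed. The host
# list and the warning threshold are placeholders.

import socket
import ssl
from datetime import datetime, timezone

HOSTS = ["blob.core.windows.net"]   # endpoints to watch (example)
WARN_DAYS = 30

def days_until_expiry(host, port=443):
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                     tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

for host in HOSTS:
    remaining = days_until_expiry(host)
    status = "OK" if remaining > WARN_DAYS else "RENEW NOW"
    print(f"{host}: certificate valid for {remaining} more days [{status}]")
```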