Management @en

Design for failure in the AWS Cloud!

It is very important to understand how your Cloud provider’s Cloud works. So take your time to study the whitepapers and manuals on how you can protect your applications against infrastructure failures.

To help with this, Amazon has released a website where you can find whitepapers on designing fault-tolerant applications and learn about cloud architectures and web application hosting.

Interestingly, these whitepapers date from 2010 and January 2011, respectively.

AWS Cloud Architecture Best Practices Whitepaper

The Cloud reinforces some old concepts of building highly scalable Internet architectures and introduces some new concepts that entirely change the way applications are built and deployed. To leverage the full benefit of the Cloud, including its elasticity and scalability, it is important to understand AWS services, features, and best practices. This whitepaper provides a technical overview of all AWS services and highlights various architectural best practices to help you design efficient, scalable architectures.

Building Fault-Tolerant Applications on AWS Whitepaper

AWS provides you with the necessary tools, features and geographic regions that enable you to build reliable, affordable fault-tolerant systems that operate with a minimal amount of human interaction. This whitepaper discusses all the fault-tolerant features that you can use to build highly reliable and highly available applications in the AWS Cloud.

Web Hosting Best Practices Whitepaper

Hosting highly available and scalable web applications can be a complex and expensive proposition. Traditional scalable web architectures have not only needed to implement complex solutions to ensure high levels of reliability, but have also required an accurate forecast of traffic to provide a high level of customer service. AWS provides the reliable, scalable, secure, and highly performing infrastructure required for the most demanding web applications – while enabling an elastic, scale-out and scale-down infrastructure model to match IT costs with real-time customer traffic patterns. This whitepaper reviews a Web application hosting solution in detail, including how each of the services can be used to create a highly available, scalable Web application.

Leveraging Different Storage Options in the AWS Cloud Whitepaper

The AWS Cloud platform includes a variety of Cloud-based data storage options. While these alternatives allow architects and developers to make design choices that best meet their application’s needs, the number of choices can sometimes cause confusion. In this whitepaper, we provide an overview of each storage option, describe ideal usage scenarios, and examine other important storage-specific characteristics (such as elasticity and cost) so that you can decide which storage option to use when.

AWS Security Best Practices Whitepaper

Security should be implemented in every layer of your Cloud application architecture. In this whitepaper, you will learn about specific tools, features and guidelines on how to secure your Cloud application in the AWS environment. We will suggest strategies for how security can be built into the application from the ground up.

Source: http://aws.amazon.com/architecture

Management @en

Cloud Computing does not solve all your problems automatically…

…on the contrary, it initially creates new ones! Especially if you want to act as a provider of Cloud Computing services yourself.

It is a misconception to think that building a private cloud implies you no longer have to take care of the high availability of your infrastructure (physical machines, virtual machines, master-slave replication, hot standby, etc.).

Because: “The Cloud will take care of it by itself!”

Be careful! The Cloud initially causes a (private) cloud operator more problems than you might think.

A cloud does not work by itself. It must be developed and equipped with intelligence. This applies to building a private cloud as well as to using a public cloud offering (in the case of IaaS). Scripts must be written, and new software might have to be developed that is distributed across the cloud. It is also important to read the whitepapers of the respective providers. Build up know-how(!) and understand how the cloud works in order to use it for your own needs. Another possibility is, of course, to (additionally) obtain advice from professionals.

It is no different when using, for example, a public cloud offering such as Amazon Web Services. If an instance A has a problem, it can suddenly become unreachable, just like any ordinary physical server. Now you might think: “I’ll just take an instance B as a backup for instance A!” And then? One might think that instance B automatically takes over the tasks of instance A. It’s not that simple! Scripts etc. must ensure that instance B takes over the duties of instance A if A suddenly is no longer available. Of course, instance B must also be prepared for this!
Here, for example, the important content of instance A, including all configurations etc., can be stored in an EBS (Elastic Block Store) volume rather than in the local instance store. A script must then ensure that instance B is started automatically with the configurations and all the data from the EBS volume if instance A is no longer available.
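As a rough illustration, the orchestration logic such a script implements might look like the sketch below. The volume and instance names and the four callables are hypothetical stand-ins for real EC2 API calls; only the failover decision itself is shown.

```python
# Minimal failover sketch: if the primary instance stops responding,
# attach its EBS volume to the standby and start the standby.
# All names here are invented for illustration.

def failover(primary_healthy, detach_volume, attach_volume, start_standby):
    """Promote the standby when the primary is unreachable.

    The four arguments are callables supplied by the operator's
    tooling (e.g. wrappers around the EC2 API).
    """
    if primary_healthy():
        return "primary-ok"              # nothing to do
    detach_volume("vol-app-data")        # EBS volume holding configs + data
    attach_volume("vol-app-data", "instance-b")
    start_standby("instance-b")          # boot B with A's data
    return "standby-promoted"

# Dry run with stub callables that just record what would happen:
events = []
print(failover(
    primary_healthy=lambda: False,
    detach_volume=lambda vol: events.append(("detach", vol)),
    attach_volume=lambda vol, inst: events.append(("attach", vol, inst)),
    start_standby=lambda inst: events.append(("start", inst)),
))  # standby-promoted
```

In a real deployment the health check would be a periodic probe and the stub callables would be replaced by actual EC2 volume and instance operations.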

In the area of Infrastructure as a Service, the Cloud simply gives us the capability to draw an (almost infinite) number of resources from a quasi-infinite resource pool at the moment we need them. We thus receive a highly scalable virtual private data center from the provider. Conversely, this means for the operator of a Cloud (private or public) that they must hold enough physical resources so that the requested virtual resources can be provided at any time and sufficient resources are always available to the users.

Management @en

How can you identify true Cloud Computing?

People often ask me how they can identify a true Cloud Computing offering. Often they say: “Hey, our data is processed by a service provider in his data center. So we are using the Cloud, right?”

Hm, careful! In recent months, the marketing departments of many web hosting providers have abused the term Cloud Computing, diluting it in the process. What you previously knew as a “Managed Server” is now a “Cloud Server”. In the end, they just changed the label.

To identify a true Cloud Computing offering, you should look out for the following characteristics:

  • On Demand:
    I obtain the resources at the moment I actually need them. Afterwards I simply “give them back”.
  • Pay as you Go:
    I only pay for the resources I am actually using, while I am using them. Billing is per user, per gigabyte, or per minute/hour.
  • No basic fee:
    With a Cloud Computing offering I do not have to pay a monthly/annual basic fee!
  • High Availability:
    When I need the resources, I can use them at exactly that time.
  • High Scalability:
    The resources adapt to my needs: they grow with my needs if I require more performance and shrink if my requirements decrease.
  • High Reliability:
    The resources I am using during a specific period of time are actually available when I need them.
  • Blackbox:
    I don’t have to care about what is inside the Cloud offering. I simply use the service over an open, well-documented API.
  • Automation:
    After the initial setup for my needs, I do not have to intervene manually while using the offering. That means I do not have to change the performance of the server or the size of the storage manually. The Cloud provider supplies capabilities (tools etc.) for automation.
  • Access via the Internet:
    This is debatable! However, the cost advantage gained by Cloud Computing evaporates if, for example, an expensive dedicated leased line is required to use a provider’s resources.
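The “Pay as you Go” principle can be made concrete with a tiny billing sketch: the bill is metered usage times a unit price, with no basic fee. The unit prices below are invented for illustration, not real provider rates.

```python
# Hypothetical unit prices, purely for illustration:
RATES = {
    "instance_hour": 0.10,     # $ per server hour
    "storage_gb_month": 0.05,  # $ per GB-month
}

def monthly_bill(instance_hours, storage_gb):
    """Compute a simple metered bill: usage x unit price, no fixed fee."""
    return (instance_hours * RATES["instance_hour"]
            + storage_gb * RATES["storage_gb_month"])

# A server running 200 hours plus 50 GB of storage in one month:
print(round(monthly_bill(200, 50), 2))  # 22.5
```

If you use nothing in a month, you pay nothing: `monthly_bill(0, 0)` is 0, which is exactly the “no basic fee” characteristic above.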
Management @en

SOA: Important facts to create a stable architecture

A SOA is often declared dead. But in most cases, the mistakes were already made in the planning phase, because no process optimization took place. Often it simply goes: “Hey, let’s build a SOA.” A further problem: the responsibility for a SOA is often assigned entirely to the IT department, on the assumption that they will know what is needed. This is fundamentally wrong, because organization and process optimization belong to general management or to a delegated department/team. The IT department should actively advise the business and show how information technology can help reach the goal.

Afterwards, the IT functionality must be established as a service to support business-critical processes and to design a stable SOA. The following aspects must therefore be considered during the implementation.

  • loose coupling: a low degree of dependency between the various hardware and software components
  • distributable: the system should not be confined to a single location
  • a clear definition: an explicit requirements definition is indispensable
  • divisibility: the entire system can be divided into subcomponents
  • commutability: the individual subcomponents can be replaced

And last but not least: standards, standards, standards.
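Loose coupling and commutability can be sketched in a few lines: consumers depend only on an interface (the “standard”), so one service implementation can be swapped for another without touching the consumer. The service names below are invented for illustration.

```python
from abc import ABC, abstractmethod

class PaymentService(ABC):
    """The interface (the 'standard') that consumers depend on."""
    @abstractmethod
    def charge(self, amount: float) -> str: ...

class InHousePayment(PaymentService):
    def charge(self, amount: float) -> str:
        return f"in-house charged {amount:.2f}"

class ProviderPayment(PaymentService):
    def charge(self, amount: float) -> str:
        return f"provider charged {amount:.2f}"

def checkout(service: PaymentService, amount: float) -> str:
    # The consumer only knows the interface -> loose coupling;
    # any implementation can be substituted -> commutability.
    return service.charge(amount)

print(checkout(InHousePayment(), 9.99))
print(checkout(ProviderPayment(), 9.99))
```

Swapping `InHousePayment` for `ProviderPayment` changes nothing in `checkout` – which is exactly the divisibility and commutability the list above demands.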

Management @en

Is there any possibility to leave or switch Clouds? The need for a transparent Cloud!

Think about the following scenario. You have successfully migrated parts of your IT infrastructure into the Cloud of a provider. You think: “Well, everything is fine. We are saving costs, our infrastructure is now scalable and elastic, and our software is always state of the art.” But… what if certain things happen? Maybe you want to leave the Cloud and go back to your own datacenter, or you would like to change the provider?

Or how could you map your business processes onto the Cloud, distributed over several providers? Maybe one provider works on process A and another provider works on process B, while a third provider works on process C using processes A and B. Or you use several independent services from different providers and integrate them into one combined service. A simpler example: the data is stored at provider A and provider B processes the data.

Is this possible? How does it work?

One critical point of Cloud Computing is the lack of standards. Each provider uses different technologies and cooks his own soup inside his infrastructure. For this reason, each relationship between a provider and a client is different.

The need for a transparent Cloud is indispensable!

One answer could be libcloud (http://libcloud.org). Libcloud is a standard library for Cloud providers like Amazon, Rackspace, Slicehost and many more, offering a uniform API. Originally developed by Cloudkick (https://www.cloudkick.com), libcloud has since become an independent project. It is written in Python, free of charge (Apache License 2.0), and used to interact with different Cloud providers. Libcloud was developed to lower the barriers between Cloud providers and “… to make it easy for developers to build products that work between any of the services that it supports.”[1]

[1] http://libcloud.org

Management @en

IT strategy: 10 facts on how agility supports your strategy

Agility can be a competitive advantage for any business. Therefore, here are some facts on how agility can support your IT strategy.

1. Infrastructure
– Your infrastructure should be flexible and scalable.
– Standardizing your technology is the goal.

2. Data management
– Centralize your data management (single source).
– Use standardized interfaces to access the data.

3. Information logistics
– Use company-wide uniformly defined key figures (KPIs) and computational procedures.
– Separate your data from your applications.

4. Management information systems
– Use information systems on each management level.
– Be flexible regarding business requirements.

5. Company-wide integration
– Use technology kits (cf. LEGO) to connect your partners and distributors.
– Use standardized interchange formats.

6. E-business ability
– Have scalable web servers and a scalable CMS.
– Have a high security standard.

7. Communication systems
– Use integrated e-mail and groupware solutions.
– Integrate your mobile systems and devices.
– Use VoIP.

8. IT governance
– Have a fast decision process oriented on your business strategy.
– Save resources for short-term projects.

9. Enterprise Resource Planning
– Use a coherent business logic.
– Optimize your processes.
– Avoid redundancies.

10. Loose coupling
– Use autonomous functional components.
– Standardize your interfaces.
– Separate functionality and process logic.


IT agility is not free. Flexibility stands in contrast to cost optimization and performance optimization, and it is not necessary for every business or every business area. Where cost and performance optimization is the competitive advantage, agility matters little. This means that IT agility should be adopted in business areas where flexibility and reactivity are the key factors. Look at your business strategy to decide where to adopt IT agility.

Management @en

Cloud Computing: a question of trust, availability and security

In spite of many advantages like scalable processing power or storage space, the acceptance of Cloud Computing will stand or fall with the faith in the Cloud! I am pointing out three basic issues Cloud providers are faced with when offering their Cloud Computing services.


Using Amazon services or Google Apps does not give you the same functioning guarantee as running your own applications. Amazon guarantees 99.9% availability for S3 and 99.95% for the Elastic Compute Cloud (EC2). Google likewise promises an availability of 99.9% for its “Google Apps Premier Edition”, including Mail, Calendar, Docs, Sites and Talk. In February 2008 Amazon S3 went down after a failure, and Google was also affected by a downtime in May 2009. Just looking back at the undersea Internet cable that broke last year and cut off the Middle East from the information highway: Google, Amazon etc. cannot truly promise these SLAs, because they have no influence on such problems.

Companies must first carefully separate their critical processes from the non-critical ones. After this classification, they should host the critical ones within their own datacenter and perhaps source out the non-critical ones to a cloud provider. This might be a lot of work, but it could pay off.

Cloud providers must ensure the anytime availability of their services. 99.9% availability is a standard by now and advertised by every service provider – but for a Cloud Computing service it is insufficient. 100% should be the goal! The electric utility model might be a good pattern in this case. It is not as simple as that! But then again, a company won’t use Cloud services or applications for critical business processes if the availability is not clear.
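To put those percentages in perspective, a quick calculation shows how much downtime per year each availability level still permits:

```python
# Permitted downtime per year for a given availability percentage.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours_per_year(availability_percent):
    """Hours per year a service may be down at the given availability."""
    return (1 - availability_percent / 100) * HOURS_PER_YEAR

for sla in (99.9, 99.95, 99.99):
    print(f"{sla}% -> {downtime_hours_per_year(sla):.2f} h/year")
```

At 99.9% a service may be down almost nine hours a year – which is why, for critical business processes, that figure is far less reassuring than it sounds.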


Keeping crucial data secure has always been a high priority in information technology. With Cloud Computing, companies have to take their information outside their own sphere and essentially transfer it through a public data network.

SLAs (Service Level Agreements) that describe in detail how Cloud Computing providers plan and organize the protection of the data are essential. Otherwise this may cause a lot of litigation someday, if a company did not take proper care of the information.

A hybrid Cloud might be a good solution to avoid these kinds of problems. The company operates a Private Cloud for crucial information stored within its own datacenter and uses the Public Cloud of a provider to add more features to the Private Cloud. Secure network connections are indispensable in this case and are today’s standard. This approach, however, does not solve the problem of knowing what else happens to the information I am sending into the “blackbox”.


Carrying on from the last sentence above, there are doubts about what might happen to the information in the Cloud as well. Regarding data management and local data privacy, many companies such as insurers or financial institutions see a lot of problems with using Cloud Computing. Using a Private Cloud is no issue, but a Public Cloud doesn’t even enter the equation. This is due to the fact that insurance companies handle social data, and no letter may be written or stored on an external system. Insurance companies are subject to supervision under many national laws. For example, the data of a German insurance company may not be hosted on an American host.

Faith and local laws are big hurdles for Cloud Computing. If word of data abuse in the Cloud gets out to the public, irreparable damage will be the direct consequence – maybe for the whole Cloud!