Key Cloud Questions, Part 2: How many servers do I get?

access360 and Infrastructure as a Service

We regularly receive customer inquiries asking us to quote a cloud solution built from virtual servers: Active Directory, Microsoft Exchange, File Services with SharePoint, and a simple 2-tier application (i.e. a client application with a back-end database). A conversation then follows between us, our reseller, and the customer, in which we explain why this 'Infrastructure-as-a-Service' style of request is not the best way to get the full benefit of moving into the cloud. With access360 the customer simply needn't worry about the number, type, or specification of the servers they need. This article, the second in our series on key cloud questions, explains why.

Information Technology (IT) has seen a massive shift toward "cloud computing" over the past five or so years (a shift amaze360 always strives to stay at the forefront of), and it is as significant a change as the internet was back in 1994; some would argue that cloud computing is the logical result of internet maturity. Before this shift, businesses had few choices for how to run IT. Most ran their own servers in their own server room or closet, while a few co-located theirs with a data centre provider and paid high network communications charges to connect their premises to their servers. On top of that, organisations had to decide whether to manage IT themselves or out-source management to a specialist company.

Businesses also had to accept either lost productivity from technology failures or a large outlay to mitigate those risks. Every few years they needed a 'technology refresh' program to update their systems for the increasing demands of applications and keep abreast of customer expectations. Business continuity and disaster recovery planning was something each business had to work out for itself, even though the complicated process of restoring IT had nothing to do with its core business. As a result, this era typically meant high ongoing capital and operational costs for the hardware, software, and communications required to run the business, and for the skilled professionals required to manage it all. In short, businesses faced seemingly endless, spiralling IT costs.

Move ahead to the era of cloud computing, and providers saw an opportunity to use server virtualisation and automation to streamline the provisioning of customer IT environments, giving businesses the same server workload capability on hardware they didn't need to purchase. This virtualisation of the workload provides an abstraction layer, so customers who subscribe to the service no longer need to worry about the hardware-refresh aspect of managing their own IT systems. The IaaS provider worries about that for them, seamlessly performing necessary maintenance and replacement without disrupting the customer. Sounds good? It is. But consider the original problems described above:

1) Management of the IT service. Does this change under IaaS? No. Your chosen IaaS provider will happily let you grow your server footprint as and when needed, provide automated build services, and offer a myriad of other benefits, but when it comes to managing what the servers actually do once they are up and running, that is your responsibility. It's up to you to manage them yourself, or to pay someone else to do it at additional cost, whether that is the IaaS provider's own managed service or a third party if the provider doesn't offer one. So while you save on technology refresh costs, you don't necessarily save on anything else.

2) Business continuity and disaster recovery planning is still complicated. The IaaS provider simplifies DR in one sense (you no longer have on-premise servers, so the geographical concern is removed), but you are now subject to the provider's own DR strategy: the triggers they use to decide whether a failure is significant enough to count as a disaster event (remember, they are evaluating their entire customer base, not just your situation), and, once that assessment is complete, their time frames for execution. This assumes, of course, that disaster recovery is something they will perform at all. There are well-known IaaS providers that simply take no responsibility for any recovery (examine your service agreement closely!).

3) Technology failure will still occur. Whatever IT solution you adopt, technology within it will fail at some point. This is a cold, hard fact. By the very nature of IaaS, recovery from component failure is an all-or-nothing proposition, since you are buying discrete virtual servers that run on discrete infrastructure. If your business runs a Microsoft Exchange Server through an IaaS provider and the provider's underlying infrastructure fails, your entire email service goes offline, not just part of it. Recovery might take only a few minutes, or it might take several hours, but in the meantime your business is adversely affected and you've lost productivity. This is the same scenario you face when hosting the same service on-premise.
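To put a rough number on the single-server risk, here's a back-of-the-envelope sketch. The uptime figures are purely illustrative assumptions (not any provider's actual SLA), and they assume the two replicas fail independently; the point is simply how much expected downtime one discrete server carries compared with a redundant pair:

```python
# Hypothetical availability arithmetic for a single virtual server
# versus two redundant instances. The uptime figures below are
# illustrative assumptions, not any provider's actual SLA.

HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability: float) -> float:
    """Expected hours of downtime per year at a given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR

single = 0.995                      # one VM: ~99.5% uptime (assumed)
redundant = 1 - (1 - single) ** 2   # two independent replicas: service is
                                    # down only when BOTH are down

print(f"Single server:  {downtime_hours(single):.1f} h/yr down")
print(f"Redundant pair: {downtime_hours(redundant):.2f} h/yr down")
```

Under these assumed figures, one server implies roughly 44 hours of downtime a year, while the redundant pair implies well under an hour. That redundancy is exactly what a discrete-server IaaS purchase doesn't give you unless you buy, configure, and manage it yourself.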

All things considered (and there are many other issues besides, such as complicated cost structures and deeper technical reasons), in the grander scheme of cloud computing Infrastructure-as-a-Service is only a small step beyond the pre-cloud era. It provides an entry-level opportunity to get into the cloud, but it is fraught with many of the same problems the pre-cloud era had. It is better than the on-premise approach, but only just. At the very least it lets you say you're in the cloud, but you'll still be conducting business as usual and still be subject to almost all of the pre-cloud pitfalls.

At this point it’s hopefully clear that you cannot solve the problems associated with running your business IT systems on your own servers by simply moving those systems onto someone else’s hardware.  While you may gain some benefits, you still face many of the same problems as before and you need to take the same approach as you always did (and more often than not, face similar costs).

These are a few of the key reasons why the Infrastructure-as-a-Service cloud model is not something amaze360 provides. In our view it is too risky, inefficient, costly, and confusing for customers, and it remains prone to many of the pre-cloud pitfalls that the cloud era is supposed to solve. Instead, the access360 service is built around the fundamental idea that businesses run on applications, and that those applications must be available on demand, function correctly, and perform well. That's it! Of course there will be servers and infrastructure behind the application, but what matters is that the customer need only be concerned with whether the application is available, functions properly, and performs well. The cloud provider's role is to make that happen using whatever underlying infrastructure is required, with appropriate service monitoring to ensure performance expectations are met. No infrastructure worries for customers means increased focus on their business, increased productivity, and less time spent on non-revenue-generating activities.
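The application-first idea above can be sketched in a few lines of code. This is a minimal illustration, not access360's actual monitoring system: the probe fields, thresholds, and status names are all hypothetical, chosen only to show that a service can be judged on exactly the three things the customer cares about.

```python
# Minimal sketch of application-level service monitoring: judge a
# service by the three things that matter to the customer:
# availability, correct function, and performance. The thresholds
# and status names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ProbeResult:
    reachable: bool     # did the application answer at all?
    status_ok: bool     # did it return the expected response?
    latency_ms: float   # how long did it take?

def evaluate(probe: ProbeResult, max_latency_ms: float = 500.0) -> str:
    """Classify one probe of the application."""
    if not probe.reachable:
        return "UNAVAILABLE"      # not available on demand
    if not probe.status_ok:
        return "MALFUNCTIONING"   # up, but not functioning correctly
    if probe.latency_ms > max_latency_ms:
        return "SLOW"             # correct, but performing poorly
    return "HEALTHY"

# Example: simulated probes of a hypothetical application endpoint.
print(evaluate(ProbeResult(True, True, 120.0)))   # HEALTHY
print(evaluate(ProbeResult(True, False, 120.0)))  # MALFUNCTIONING
```

Notice that nothing in this sketch mentions a server: whether the provider meets these checks with one machine or twenty is invisible to the customer, which is precisely the point.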

Here at amaze360 we firmly believe that this is the right way for businesses to use the cloud.
