A perfect storm is brewing for your cloud computing operation

Written by David Linthicum exclusively for Nelson Hilliard

It’s been years since you began migrating applications and data to a public cloud.  Where you once ran a dozen applications in public clouds, last year you added two dozen more, and now you’re pushing past 500.

While cost savings drove the migration to the cloud, something happens when you pass 500 applications.  What was once a simple place to host and operate applications suddenly becomes complex, and operational overhead rises in ways you just did not see coming.  Why?

The short answer is that most enterprises will soon surpass 500 workloads, counting SaaS, PaaS, and IaaS across both public and private clouds.  451 Research’s latest Voice of the Enterprise: Cloud Transformation survey of IT buyers indicates that 41% of all enterprise workloads currently run in some type of public or private cloud.  By mid-2018, that number is expected to rise to 60%, meaning a majority of enterprise workloads will run in the cloud in the near term.

The tipping point?

The truth is that the tipping point is lower for some enterprises, say, 150-250 workloads, and higher for others, say, 500-700.  Thus, 500 is an arbitrary number.  The only consistency is that there is a tipping point, where the number of workloads outpaces the enterprise’s ability to effectively manage those workloads.

Why is there a tipping point for the cloud?  Operating cloud-based applications comes with baggage that requires automated operations to keep systems up and running.

Operations must adjust to management practices and tools that can span the globe.  When enterprises reach the tipping point, it typically means that their applications are also geographically distributed, which makes operations even more complex.

Clouds abstract the underlying infrastructure from the platforms and applications.  While developers benefit because the infrastructure is hidden from them, operations teams must put processes and technology in place to ensure that the infrastructure is available and reliable.

You don’t really care where the physical servers exist, but you must manage them consistently.  As the number of workloads grows, so does the number of servers to be managed.  Excel worked well enough to track IP addresses, server names, and resources for the first 150 workloads, but the number of things to track soon grows out of control.
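
As a concrete illustration, here is a minimal sketch of replacing that spreadsheet with a live, queryable inventory.  It assumes AWS with the boto3 SDK; the region and the "Name" tag convention are illustrative assumptions, not anything prescribed here:

```python
# Minimal sketch: pull a live server inventory from the cloud API instead of
# maintaining a spreadsheet by hand. Assumes AWS credentials are configured;
# the region and "Name" tag are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
inventory = []

for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            inventory.append({
                "name": tags.get("Name", "unnamed"),
                "instance_id": instance["InstanceId"],
                "private_ip": instance.get("PrivateIpAddress"),
                "type": instance["InstanceType"],
                "state": instance["State"]["Name"],
            })

for server in inventory:
    print(server)
```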

Clouds run applications that share common services.  For example, a year ago we had 10 workloads accessing the same cloud-based database; now it’s grown to more than 100.  These workloads are typically loosely coupled from the database, but they all still depend on it: if the database goes down, so do the workloads.  We must operate the database accordingly, because its importance changes a great deal when the workloads that need it grow from 10 to 100.
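
As a minimal sketch of what that means for operations, every workload that depends on the shared database can expose the same health signal, so a database outage is immediately distinguishable from a single-workload failure.  The PostgreSQL driver (psycopg2) and the connection details below are illustrative assumptions:

```python
# Minimal sketch: a workload health check that surfaces its dependency on a
# shared database. The driver (psycopg2) and DSN are illustrative assumptions.
import psycopg2

def database_is_healthy(dsn: str, timeout_seconds: int = 2) -> bool:
    """Return True if the shared database accepts connections and queries."""
    try:
        conn = psycopg2.connect(dsn, connect_timeout=timeout_seconds)
        with conn.cursor() as cur:
            cur.execute("SELECT 1;")
            cur.fetchone()
        conn.close()
        return True
    except psycopg2.Error:
        return False

if __name__ == "__main__":
    # Hypothetical connection string; every dependent workload reports the
    # same signal, so operations sees one outage, not 100 unrelated failures.
    dsn = "host=shared-db.internal dbname=orders user=app"
    print("healthy" if database_is_healthy(dsn) else "unhealthy")
```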

Most cloud operations leverage a great deal of automation.  From the jump, we knew that our cloud-based workloads needed automated tools to allow operations to scale.  Faced with the number of workloads, resources, and other things that make our cloud-based solutions complex, we don’t merely want automation; we must have it to survive.

With usage-based accounting systems in place, those who leverage cloud resources have their usage tracked, and costs can then be allocated accordingly through showbacks and chargebacks.  For the most part, these systems are afterthoughts, often brought in only after the need is obvious, and that’s typically too late.  They let you track costs, keep users within budget so that additional cloud resources can be obtained later, and keep the cloud providers honest about your usage and their charges.
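
For illustration, here is a minimal showback sketch.  It assumes AWS Cost Explorer via boto3 and a hypothetical "team" cost-allocation tag; the dates are placeholders:

```python
# Minimal sketch: group one month's cloud spend by a cost-allocation tag for
# showback/chargeback. The "team" tag and the dates are illustrative.
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        team = group["Keys"][0]  # e.g. "team$payments"
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{team}: ${cost:,.2f}")
```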

Preparing for the storm.

What’s great about this problem is that you know it’s coming.  For most enterprises, there’s still time.  Get your operational best practices and technology in place as soon as you can, and move through a process that ensures you pick the right processes, people, and technology.  I recommend the following strategy:

Define requirements.  Specifically, workload requirements, both existing and future.  Keep in mind that you’re building operational best practices and tool sets that will keep your cloud-based workloads up and running, so you need to understand just what those workloads are doing and the technology they employ.  This is perhaps the most important thing you can do: with perfect information, you can come close to making perfect decisions.

Define people and processes.  Most enterprises have traditional systems operations processes and people in place.  When cloud computing comes along, they try to fit those processes and people onto their new cloud-based systems.  This is a bad idea for many reasons, but the biggest is that public and private clouds are managed differently than traditional systems.  They need new skill sets and operations technology to provide the right processes and automation.  Thus, you’ll need to adjust your processes, and your people will need to be retrained or replaced.  Most organizations fall down here, because people are the hardest part of the system to change.

Find the right tools, and understand that they won’t be perfect.  Those who deploy clouds and see a tipping point ahead think that technology will save them.  That’s almost never the case.  Those who focus too much on operations and management tools, especially without having a good understanding of their requirements, are likely to pick square peg tools for round hole problems. 

The tools should provide general-purpose capabilities that span all workloads and cloud computing systems.  If you’re using one tool for a few workloads and then bolting on a dozen more to cover the rest of your operations management, you’re only making your job more complex.

The idea is to provide a layer of automation and abstraction between you and the workloads, the infrastructure, the network, and so on, letting you control many things through a single interface.  Moreover, everything within those tools should be automatable, allowing for auto-recovery, auto-scaling, and other operational processes that kick off when certain conditions are met.
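
As a minimal sketch of that kind of condition-triggered automation, consider a control loop that watches a health condition and kicks off recovery once a failure threshold is met.  The check_health and restart_workload functions are hypothetical placeholders for whatever your monitoring and orchestration tooling actually provides:

```python
# Minimal sketch of condition-triggered automation: watch a health condition
# and kick off auto-recovery when it fails repeatedly. check_health() and
# restart_workload() are hypothetical placeholders, not a real tool's API.
import time

FAILURE_THRESHOLD = 3        # consecutive failed checks before recovery
CHECK_INTERVAL_SECONDS = 30  # how often the condition is evaluated

def check_health(workload: str) -> bool:
    """Placeholder: query your monitoring system for this workload."""
    raise NotImplementedError

def restart_workload(workload: str) -> None:
    """Placeholder: call your orchestration or cloud API to recover."""
    raise NotImplementedError

def watch(workload: str) -> None:
    failures = 0
    while True:
        if check_health(workload):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                restart_workload(workload)  # recovery kicks off here
                failures = 0
        time.sleep(CHECK_INTERVAL_SECONDS)
```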

Sound complicated?  We seem to go through these sorts of problems whenever a new technology scales.  We saw it with the rise of the PC, then the rise of the Web, and now the rise of the cloud.  New technology always creates both the need to manage it and the complexity of doing so.

It’s understandable that most of the Global 2000 enterprises that now leverage cloud-based applications and databases did not see the tipping point coming.  Most cloud providers don’t talk about it, and, for the most part, enterprises have not yet experienced it, having on average only moved past 50-100 workloads on public and private clouds.

Now the bigger complexity problem begins.  That said, with a bit of planning and the right technology for the job, complexity is a solvable problem.

Remember to subscribe to our YouTube channel for the latest cloud computing tech jobs, news, and cloud shows.

David S. Linthicum is a managing director and chief cloud strategy officer at Deloitte Consulting, and an internationally recognized industry expert and thought leader.
Connect with David on LinkedIn and Twitter.

At Nelson Hilliard we specialise in cloud technologies, sourcing the top 20% of cloud professionals inspired to work for you through our specialised marketing and profiling. If you are interested in having a quick talk with me regarding your employment needs, please feel free to reach out.

You can also check my availability and book your 15-minute discovery call here.

Brad Nelson