Monoliths, microservices and the Middle Ground – what can you containerise?

I love designing and building systems to solve problems. At this year’s Cloud Expo, I spoke to many people who shared the same passion and need for the best tools and techniques to modernise their systems.

So, here’s a whistle-stop tour of containerisation strategies to help businesses wrangle their estate and achieve more with less. It reflects my Cloud Expo session which looked at how businesses are updating their IT systems today while making sure they’re well-positioned for whatever comes tomorrow.

Breaking down your estate

Every business has a spectrum of workloads, ranging from the very old to the new. You could separate them into three categories. First, there’s the ‘hardcore legacy’ – say, the old-school mainframe and anything else monolithic or ‘old and special’. These systems are more than ten years old, demand careful handling, and rely on out-of-date dependencies and institutional knowledge. The problem, of course, is that the creators of the hardcore legacy have long since left the building.

Next, we hit the Middle Ground. It’s older, but not ancient. It doesn’t play nicely with either the very old or the new, or even within itself. And it could represent the biggest chunk of your estate.

Then there are the lovely new, ‘green’ workloads. They’re probably cloud native and benefit from a lot of love and attention from people still in post. But here’s a terrifying thought: these green fields could become tomorrow’s untended fallow patches, viewed as just as unwieldy and challenging as today’s hardcore legacy.

So, how are businesses making all parts of their estate ‘play nicely’ together?

The rise of containers

VMs are a poor fit for running a complex set of smaller services – they’re too big, too slow to provision and the overheads are too high. Enter containers.

We’re seeing increasing adoption of containers for new workloads. If you’re using a Platform as a Service (PaaS), you probably have containers of some kind under the hood, even if you don’t know it.

More customers are using containers as their target platform, incrementally replacing old monoliths with sets of smaller services. Yet organisations still have loads of middle-ground items they’re barely touching.

This stuff is maybe eight to ten years old. It could be high value and low traffic, or low traffic but business critical. Either way, organisations often don’t want to spend development time updating it. Why? Because time is money. So, the only love the ‘not so old’ receives goes into fixing bugs and keeping things up and running without falling over.

Addressing the middle ground

The Middle Ground is consistently inconsistent. It might not be cloud native, and it spans different platforms, runtimes and sets of dependencies. In short, you have a packing problem. Rebuilding this existing functionality as services would offer value, but it could take years to complete, and refreshing everything at once also introduces a lot of risk.

But what if you could put your developers onto containerising these workloads without rewriting them, and work towards a standardised pipeline? Can you containerise the middle ground?

There are various ways to solve this packing problem and escort the middle ground into an efficient cluster where you can treat them the same and standardise the pipeline.
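To make this concrete, here’s a minimal sketch of what ‘containerising without rewriting’ can look like for a typical middle-ground Java web app. The application name, base image tag and port below are illustrative assumptions rather than a recipe – the point is that the existing build artefact goes into the image unchanged.

    # Sketch only: the image tag, file names and port are placeholders for your own workload
    # Pick a base image that matches the existing runtime of the app
    FROM tomcat:9.0
    # Drop in the existing build artefact unchanged, with no code rewrite
    COPY legacy-orders.war /usr/local/tomcat/webapps/
    # The port the app already listens on
    EXPOSE 8080
    # No start-up command needed: the default Tomcat entrypoint from the base image applies

Wrap a handful of middle-ground apps like this and, however different their insides, they can all flow through the same build and deployment pipeline.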

Docker, for example, has a programme that helps larger businesses modernise their traditional apps by treating them as new while keeping them up and running. I’d recommend watching my Cloud Expo session for more detail on this and other processes that work to containerise the middle ground.

The overall message, however, is that this approach works, and it works well.

Consistency, consistency, consistency

The great thing about containers is that no matter what’s inside them, you can treat them as discrete units to deploy and push into any kind of environment. There’s also a huge amount of choice over where you can run them. So, the question isn’t ‘where do you run containers?’ but ‘how do you standardise, and how do you build and manage the artefacts within containers with the right dependencies?’

This consistency is made possible by using containers, and it’s probably the most highly prized quality when modernising a varied estate.

Huge efficiencies are possible too. You want to get to a place of ‘build once, deploy many times’. If the pipelines are standardised and everything looks the same, there’s much less rework and time spent discovering how everything works together.

In fact, recent research by Docker on the programme I mentioned earlier shows a 20% resource saving once an organisation can efficiently pack workloads into containers.

Containers are also the sweet spot for portability – you can run them in any of the major public clouds. This is one of the best ways of insuring against the future. You don’t know what the next killer tech is going to be, but if you’re containerising, you can have some confidence that you’ll have a roadmap to move to it.

Containers are becoming the default artefact format. If you store them with metadata and traceability, anyone can understand an artefact’s provenance and how to deploy it many times elsewhere. No more relying on institutional knowledge, starting from scratch, or building things like staging URLs when you don’t need to.
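As a rough sketch of what that can look like in practice, provenance can be baked into the image at build time. The label keys below follow the OCI image annotation convention; the source URL, revision and version values are placeholders that a real pipeline would inject.

    # Sketch only: the values are placeholders a CI pipeline would normally supply
    FROM tomcat:9.0
    COPY legacy-orders.war /usr/local/tomcat/webapps/
    # Provenance metadata travels with the artefact wherever it is deployed
    LABEL org.opencontainers.image.source="https://git.example.com/legacy-orders" \
          org.opencontainers.image.revision="abc1234" \
          org.opencontainers.image.version="1.4.2"

Anyone who pulls the image can then read those labels back (with docker inspect, for example) rather than chasing down whoever originally built it.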

The flavour I got from Cloud Expo is that lots of businesses want to consider containerisation strategies, but they need some help from experts, at the very least to start them on their journey. If this is you, click here to watch our Solution Architect webinar on-demand, and let us help you unpick your Middle Ground, as well as peel back those hardcore legacies.
