For those visiting our website for the first time, I should start by saying we're definitely NOT anti-cloud (more than half our customers use our platform-agnostic load balancer in the cloud to support their applications). But we most definitely ARE anti-herd mentality. In other words, if you're going to do something, fully research your options first so you're not the unlucky member of the herd that gets picked off by the lions, or the crocodiles, or any other big predator out there ;).
While there are a lot of scenarios where the public cloud does make sense (hence the mass migration), there are still a significant number where it definitely isn't the right place for your applications. We felt it was important to offer an alternative view, based on our own experience, and highlight some of those reasons.
What do we mean by 'the cloud'?
To many people, when we talk about the 'cloud', the big public cloud hyperscalers immediately spring to mind (AWS, Azure, GCP etc.). To others, the cloud is synonymous with the 'high cloud', i.e. everything that's connected to the internet. Both of these are totally different scenarios to the private cloud, where you own your own data center (either on or offsite) and retain full control and manageability of it. So, really, the term 'cloud' is about as useful as a chocolate teapot. There are some really good, detailed descriptions elsewhere on the internet covering the various cloud models.
For the purposes of this blog, we're here to talk about the public cloud, and highlight some specific instances where it definitely doesn't make sense, based on a recent conversation I had with two of our Technical Engineers:
For those on the go, feel free to listen on Spotify or Apple Podcasts, or read the summary below.
The public cloud is touted as more cost efficient — but is it?
Not if there's a skills gap...
To get the publicised benefits of the public cloud (e.g. scalability on demand), the reality is you need a DevOps team who can program that functionality and help you take full advantage of all the features. So it's not quite as simple as some of the hyperscalers might lead you to believe, and there's still a notable skills gap in many organizations.
Not if you're talking Infrastructure as a Service (IaaS)...
And migrating Infrastructure as a Service (IaaS) is certainly not a cheap exercise. For example, lifting an old server estate 'as is' to a hyperscaler and spinning it up, while still managing and patching the operating system and paying for the hardware resources underneath it, can get pretty pricey.
Not if you don't have cost transparency...
On the one hand, migrating to the public cloud mitigates the cost of running the infrastructure yourself, but the big challenge is calculating that cost over time to avoid cost creep. All the big providers give you cost calculators, but for a lot of businesses the move is from CapEx (Capital Expenditure) to OpEx (Operating Expenses), which is a totally different business model, so it's hard to have visibility of the costs you need to plug into the calculator in the first place.
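As a rough illustration, the CapEx-vs-OpEx comparison comes down to simple cumulative arithmetic over the lifetime of the workload. All the figures below are hypothetical assumptions for the sake of the sketch, not real pricing from any provider:

```python
# Illustrative only: every figure here is an assumption, not real pricing.
capex_upfront = 20_000        # hypothetical on-prem server purchase
capex_yearly_running = 1_500  # hypothetical power, space, maintenance per year
opex_monthly = 600            # hypothetical cloud instance cost per month

def cumulative_cost(years, upfront, yearly, monthly):
    """Total spend after a given number of years under each model."""
    on_prem = upfront + yearly * years
    cloud = monthly * 12 * years
    return on_prem, cloud

for years in (1, 3, 5):
    on_prem, cloud = cumulative_cost(years, capex_upfront,
                                     capex_yearly_running, opex_monthly)
    print(f"Year {years}: on-prem ~£{on_prem:,}, cloud ~£{cloud:,}")
```

Under these made-up numbers the cloud is cheaper in year one but overtakes the on-prem total somewhere before year five, which is exactly the kind of crossover that gets missed when nobody owns the calculation.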
Not if you need to run servers and operating systems over time...
What are you actually lifting and shifting over? I personally think the public cloud works best when you look at your server estate and break it down into its respective workloads. For example, if I move my file server with my Windows operating system and run it in the public cloud, it will become more expensive over time than running it locally. But it might still be preferable because I can OpEx it, paying for it bit by bit rather than investing in it all up front. The approach is to look at what you're delivering, i.e. what is the service you're looking to migrate (rather than the server)?
How do you avoid becoming beholden to a single public cloud provider?
If you're developing modern applications, there are some very real challenges that come with moving applications developed in one public cloud to another vendor, so these risks need to be taken into account. The bottom line: adopt a multi-cloud approach to spread your risk and avoid putting all your eggs in one basket.
Check your compliance and data residency requirements...
If you need to run data in the US, you need to find a hyperscaler that can store it locally in the US. And the main hyperscalers are definitely not all equal: they hold different levels of certifications and badges, which means they're more or less likely to be able to meet certain compliance requirements and security guarantees.
Decide whether security or price is your top priority...
People of course have their favourites, but fundamentally decisions about the right public cloud provider need to be made based on pricing, what services the vendor delivers and, crucially, how secure it is and whether or not it meets the requirements of the service you're trying to run.
What should you NOT put in the public cloud?
Here are a few things we wouldn't recommend putting in the public cloud.
A split client-server application...
A backend database in a hyperscaler with a front-end client application in the office, so that the backend traffic has to travel across the internet! We would, of course, put it all up there instead...
If my monolithic, client-server applications were running locally and my SQL database was in the cloud, my only choice would be to connect local applications to a database across an internet connection, which would be really slow. So I want to put my front-end application up there as well to avoid this. But then I'd be in a situation where my users may struggle to access it. So then I'd need Remote Desktop Services (RDS) as well so they can use it remotely in the cloud. I started out with an idea of stability and scalability, but actually it's costing me a lot more than I had anticipated.
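The "really slow" part is easy to see with some back-of-the-envelope latency maths. Monolithic client-server apps are often chatty, issuing many small queries per screen, and each one pays a full network round trip. The round-trip times and query count below are assumptions for illustration:

```python
# Back-of-the-envelope latency maths; the RTT figures are assumptions.
lan_rtt_ms = 0.5        # assumed same-office round trip to a local server
internet_rtt_ms = 30.0  # assumed office-to-cloud round trip over the internet

def screen_load_time(queries_per_screen, rtt_ms):
    """Seconds spent purely on network round trips to render one screen."""
    return queries_per_screen * rtt_ms / 1000

chatty_queries = 200  # chatty client-server apps can easily issue this many
print(f"LAN:      {screen_load_time(chatty_queries, lan_rtt_ms):.2f}s")
print(f"Internet: {screen_load_time(chatty_queries, internet_rtt_ms):.2f}s")
```

Even before any bandwidth or server-side processing costs, the same screen goes from a fraction of a second on the LAN to several seconds over the internet, which is why the front end ends up getting dragged into the cloud too.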
Needless to say, this scenario doesn't apply to modern web-based applications, where the user interface is driven through HTML web pages, which is an easy use case for the cloud.
Large data transactions...
There's always an argument to keep large, active data files local because you need fast access and you don't want to drag vast amounts of information across the internet.
In a cold-storage scenario, on the other hand, if you won't need that data for 10 years and just need to archive it, push it out to the cloud.
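The hot-vs-cold distinction falls out of simple transfer-time arithmetic. The dataset size and link speed below are assumptions picked for illustration:

```python
# Rough transfer-time arithmetic; the link speed and size are assumptions.
def transfer_hours(size_gb, link_mbps):
    """Hours to move size_gb over a link_mbps link, ignoring protocol overhead."""
    megabits = size_gb * 8 * 1000  # GB -> megabits (decimal units)
    return megabits / link_mbps / 3600

# A hypothetical 5 TB active dataset over an assumed 100 Mbps office uplink:
hours = transfer_hours(5000, 100)
print(f"~{hours:.0f} hours")  # versus near-instant access on local storage
```

A one-off archive push can tolerate a multi-day transfer; a team that needs that data every working day cannot, which is the whole argument for keeping large active files local.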
Legacy applications...
If an application's drivers are no longer supported, it needs to be kept local; it doesn't play well in the cloud because it was never designed for that. Whereas, if the application is cloud-ready, then it's definitely worth investigating. For example, you wouldn't put Sage in the cloud, but Xero would make sense there.
Fundamentally, I'd avoid putting anything in the cloud I have to significantly replatform or rearchitect because it's not likely to work well in that environment. In other words, if I have to shoe-horn it in, then it wasn't meant to be there.
When is public cloud the answer?
For those just getting started, you can get up and running much cheaper and quicker in the public cloud. It's also great for DevOps and modern applications.
However, for large, established organizations the same cannot necessarily be said. The bulk of applications aren't modern: most of the apps out there are still legacy and monolithic, still require servers, run on mainframes, etc. And even organizations that could move their applications (e.g. hospitals and banks) often consciously choose not to, because for those with mission-critical applications it's just not worth the risk.
Load balancing in the cloud: cloud-native vs platform-agnostic
For those migrating to the public cloud, ensuring the availability of your applications is then your next key consideration i.e. do I opt for the cloud-native load balancing solution provided by my public cloud vendor? Or are there times when a platform-agnostic load balancer might make more sense?
For information on why and how to load balance in the cloud, this blog might give you some food for thought: "Rethinking cloud native load balancing".