Kubernetes is a hot topic these days, as is service mesh (as I wrote about recently) and basically all things cloud native.
But while container-based software deployment is de rigueur among those at the aggressive leading edge of software development, the actual infrastructure management remains – for the most part – something they’d prefer not to deal with personally.
Kubernetes remains a complex and challenging system to configure and run, though great strides have been made since the platform was first unveiled to the wider world in 2014.
The most common solution is to use Kubernetes that’s maintained by someone else. This makes quite a bit of sense, and follows the same logic as utility services generally: why run your own power station when you can rent a fraction of its capacity as a fungible resource on demand? Leave the complexities of maintaining transmission lines and generators and the grid generally to specialists and concentrate on using the electricity generated in whatever way works best for you.
Thus we’ve seen a raft of Kubernetes services spring up, run by the same people who brought you all the other Infrastructure-as-a-Service offerings. Google (whence Kubernetes originally emerged) has Google Kubernetes Engine (GKE), Amazon has Elastic Container Service for Kubernetes (EKS), Microsoft has Azure Kubernetes Service (AKS), VMware has VMware Kubernetes Engine (VKE); you get the idea. Pivotal has Pivotal Container Service (PKS), which can run on AWS, Google Cloud Platform, VMware, and (as of its recent PKS 1.3 launch) also Azure.
“No one disagrees that cloud native is the future,” says James Watters, Senior Vice President, Strategy at Pivotal.
Part of the attraction of Kubernetes is the idea that you can use containers and all the infrastructure-as-code abstractions for storage, networks, service mesh, etc. to get away from the hardware and the vendor lock-in it implies. Choose open-source Kubernetes, the theory goes, and you can easily move workloads around between clouds, on-site systems, co-location venues, even the burgeoning edge.
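The portability theory rests on the same declarative manifest being accepted, unchanged, by any conformant cluster. As a minimal sketch (the name, image, and replica count here are purely illustrative), a deliberately vanilla Deployment like this should behave identically on GKE, EKS, AKS, or a PKS cluster:

```yaml
# Nothing in this manifest is provider-specific, so any conformant
# Kubernetes cluster should accept it as-is.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # illustrative image
        ports:
        - containerPort: 80
```

The practical caveats arrive as soon as the workload needs persistent storage, load balancers, or ingress, which is exactly where provider-specific annotations and storage classes start creeping into that tidy manifest.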
Those of us old enough to remember Java’s vision of write once, run anywhere will recognise the siren song of portability. Hopefully some of you will also recall the myriad practical realities that got in the way, compatibility between different Java compilers and frameworks among them, and note that while Java is extremely popular, there are plenty of other languages in active use today. We already see similar differences in what the various Kubernetes services support. There’s an inherent competitive reason for Kubernetes service providers to create reasons to choose their offering over others, and to make it hard to leave. Some of those reasons might even provide value to customers.
We’re also right at the start of the Kubernetes idea, and just as AWS finally conceded that not every workload will end up in the cloud tomorrow (if ever), we need to also acknowledge that not every workload needs to run in Kubernetes. In fact, right now, very few are built in a way that is amenable to a Kubernetes way of running.
“Autoscaling is a huge culture shift all by itself,” says Watters. That is before we even get to Functions-as-a-Service, where the very idea of containers melts away into the background, a shift Watters believes will require ten times as much change as autoscaling did.
This was the big lesson of cloud computing that many organisations are still struggling to learn: using the technology well requires changing the way your organisation runs. You don’t simply build for cloud; you operate as cloud. That’s where successful software spends most of its time: being operated and maintained. This has far more profound implications for how your organisation works than you might at first think.
“Take capacity management away from projects,” says Watters. “We need to fund products, not projects.” Running a product, rather than delivering a project, will require a fundamental rethink of how organisations operate, and that is the big challenge before us.
Compared to that, the technological complexity of running Kubernetes is a walk in the park.
Correction: I originally misnamed Google Kubernetes Engine as Google Container Services. It was rebranded from Google Container Engine at the end of 2017, but I still managed to mix it up somehow. Sorry about that!