Forbes

Jason Bloomberg

Think You’re Cloud Native? Only If You’re Doing This

“‘Cloud native’ is more than ‘cloud only,’” I wrote in my December 2017 article Seven Remarkable Takeaways From Massive Kubernetes Conference. “It means bringing cloud-centric best practices to software and IT generally, whether that be in the cloud or on premises.”

Why is this maturing definition of cloud native so important? “The big win for enterprises: Kubernetes provides cloud-native approaches to modernizing legacy assets.”

Fair enough, but the devil, of course, is in the details – and details there are here, in spades. One cannot simply install Kubernetes and expect to modernize legacy IT assets automatically – or even in a straightforward fashion.

The reality: there are many moving parts to cloud native architecture. And just as service-oriented architecture built upon n-tier architecture, and virtualization-based cloud architecture built upon both in turn, so too will cloud native architecture leverage the approaches that came before, while breaking new ground.

There’s no arguing with the fact, however, that such rearchitecting is hard. And you have to get the details right.

Containers and State: How Do We Keep Track of Things?

The first thing you have to understand about containers is that they are inherently ephemeral. Like old postcards from people you can’t remember and ticket stubs from games long since played, ephemera are objects that no one ever intended to keep around for very long.

Just so with containers.

As a result, we typically think of containers as stateless: send them some information, let them do their thing and send the results on, without keeping track of anything. After all, a container may pop up or disappear at any time, so it probably wouldn’t make sense to store anything important in one of them.

Applications must nevertheless keep track of state somewhere – user sessions, in-flight transactions, and the like. Deciding where to maintain this state information, therefore, is a critical part of being cloud native. “In the case of containers, you can maintain state within the container,” explains David Linthicum, industry thought leader and chief cloud strategy officer at Deloitte Consulting (see my profile of Linthicum from January 2016). “However, the best approach is to maintain state outside of the container.”

The goal here is the separation of architectural tiers – a core feature of n-tier architectures. “This cleanly separates behavior from the data, which is desirable since all container instances can participate in the operations of a single application, as long as they are able to write and read to a common state,” Linthicum adds.
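Linthicum’s point can be sketched in a few lines of Python. The `SharedStore` class below is a hypothetical stand-in for an external store such as Redis; the point is that the request handler itself holds no state, so any container instance can serve any request:

```python
# Sketch: keeping state outside the container. SharedStore is a toy,
# in-memory stand-in for an external store (e.g., Redis) shared by all
# container instances -- a hypothetical illustration, not a real API.

class SharedStore:
    """Toy external key-value store, common to every container instance."""
    def __init__(self):
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def set(self, key, value):
        self._data[key] = value

def handle_request(store, session_id, item):
    """Stateless handler: the session's cart lives in the shared store,
    not in this process, so any container instance can run this."""
    cart = store.get(session_id, [])
    cart.append(item)
    store.set(session_id, cart)
    return cart

store = SharedStore()
handle_request(store, "sess-1", "book")
# A "different container" (a fresh handler invocation) sees the same cart:
print(handle_request(store, "sess-1", "lamp"))  # ['book', 'lamp']
```

Because the handler reads and writes only the common store, a container can disappear mid-session without losing the customer’s data – exactly the separation of behavior from data that Linthicum describes.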

Netflix ran into these challenges as it ramped up its Kubernetes deployment. “The benefits of running a data store inside Kubernetes are limited,” says Paul Bakker, senior software engineer at Netflix. “Out of the box, containers lose their data when they restart. This is fine for stateless components, but not for a persistent data store.”

The Rise of Kubernetes-Savvy Databases

One approach to addressing the problem of state in containers is to purpose-build a database that can run inside containers in such a way that data aren’t lost if the container disappears.

CockroachDB from Cockroach Labs is one such database. “Because it’s a distributed database, CockroachDB distributes its data across multiple nodes,” explains Leanne Miller, software engineer, and Carrie Utter, software engineer intern at Civis Analytics. “We were able to create a CockroachDB cluster by bringing up multiple Kubernetes pods running the CockroachDB Docker image.”

Kubernetes groups containers into pods, which are themselves ephemeral. Miller and Utter used CockroachDB’s built-in clustering capability to place replicas of the database across multiple pods. That way, if one pod went away, CockroachDB would preserve its data and, behind the scenes, replicate it to a new pod as needed.

This distributed database approach provides the horizontal scale we expect from cloud native databases and architectures in general, but suffers from two potential problems.

First, copying an instance of a database into a new pod might take too much time to meet the needs of the running application.

Second, we need some way to make sure pods are running on separate hardware – or a hardware failure might take down multiple pods at once.
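Kubernetes itself offers one mechanism for the second problem: pod anti-affinity rules tell the scheduler not to co-locate replicas on the same node. A minimal sketch (names such as `my-db` and the image are illustrative placeholders):

```yaml
# Sketch: spreading database pods across hosts via pod anti-affinity.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-db
spec:
  replicas: 3
  serviceName: my-db
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: my-db
              topologyKey: kubernetes.io/hostname  # at most one replica per node
      containers:
        - name: db
          image: cockroachdb/cockroach:latest
```

Anti-affinity addresses scheduling, but not the speed or placement of the underlying storage – which is where the next piece of the story comes in.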

It’s time to add hyperconverged infrastructure to this story.

Connecting Hyperconverged Infrastructure to Kubernetes

Hyperconverged infrastructure (HCI) arose from the data center world as a way of abstracting servers, storage, and network resources to support the dynamic needs of virtualized environments.

The HCI story is unquestionably hardware-centric. “HCI solutions are built as clusters of commodity servers (x86) that provide an abstracted pool of capacity, memory, and CPU cores that are used as the foundation for server-centric workloads (the hypervisor, virtual machines, and applications), as well as storage-centric workloads (e.g., data persistence, data access and data management),” explains Eric Sheppard, research director of worldwide storage software at IDC, in a blog for HCI vendor NetApp.

How, then, does the HCI story connect to Kubernetes and the question of state? Sheppard gives us a hint. “A truly optimized hyperconverged-based private cloud should be considered a component of a broader, hybrid public/private cloud ecosystem that provides a ‘lingua franca’ (or data fabric) that supports common data services for efficient placement, movement, and use of data across a true hybrid multi-cloud environment,” he continues.

Such an HCI-based ‘data fabric,’ therefore, can potentially address the challenge of making sure Kubernetes pods run on separate hardware – and furthermore, ensure that such hardware is well-suited for the needs of the containers in each pod.

However, not all HCI solutions are the same. Nutanix, for example, provides a single point of control for multi-cloud architectures. NetApp focuses on providing a seamless management experience across on-premises and cloud environments. Pivot3 targets high performance across such environments by leveraging NVMe flash storage technology.

Few vendors, however, leverage HCI to address the cloud native challenges that Kubernetes brings to the fore. One such vendor is ROBIN. “Containers might be ephemeral, but ROBIN storage is not,” explains Partha Seetala, CTO of ROBIN. “ROBIN automatically and transparently (to the workload/application) switches volume mounts from one host to the other without requiring data copies.”

In other words, because ROBIN offers HCI that abstracts the storage layer, it addresses both of the concerns with Kubernetes-savvy databases: it can support the failover of the database from one pod to another without the need for copying data, and it also ensures that pods run on different hardware.

Furthermore, HCI provides a seamless abstraction that hides hardware issues from the developers working in Kubernetes. “Developers don’t have to understand any of the underlying technology,” Seetala adds. “No storage knowledge required, no network knowledge required, no Kubernetes knowledge required.”

What About Modernizing Legacy Assets?

Cloud native architectures can clearly depend upon Kubernetes deployments that maintain state properly by leveraging HCI technology, but how do such architectures help with legacy modernization?

This article is too short to provide an in-depth answer to this question, but I can discuss one example: placing an Oracle database cluster into Kubernetes. An Oracle shop might want to offer Oracle as a ‘database-as-a-service’ that delivers the rapid provisioning and horizontal scalability of the cloud for its Oracle instances.

Oracle has long offered clustering capabilities for high availability via its Real Application Cluster (RAC) product. One might think, however, that putting an Oracle RAC cluster into a Kubernetes pod would be sheer folly, as such pods are ephemeral.

Given the goal of such clustering is to guarantee high availability, would we really want such a cluster to be subject to the vicissitudes of ephemerality?

ROBIN has an answer. “ROBIN deploys each instance of the Oracle RAC cluster in its own Kubernetes pod, but leverages Oracle’s cluster management to run within those pods,” Seetala explains. “However, one still needs to provide clustered storage outside of Kubernetes and Oracle. ROBIN storage provides this layer.”

What about failure scenarios? “When a server or pod fails, one needs to still relocate the pod from one server to another. Oracle can’t do this. Kubernetes can only relocate pods, but not storage,” Seetala adds. “ROBIN enables both by dynamically switching storage and network bindings from the failed server to the healthy one.”

Beyond Database Persistence

While this article’s discussion of state focused on the database, there is actually more to this issue: application state.

Even in the n-tier days, we struggled with maintaining state on the presentation and application tiers, either through caches on such layers or via the heavily maligned cookie.

Kubernetes’ essentially ephemeral nature raises the stakes, as a pod might contain state not only in the form of data, but also as metadata associated with the application state – for example, which items are in a shopping cart before the app has had time to store that information in the database.

ROBIN is able to keep track of such application state information, essentially preserving a seamless customer experience regardless of the appearance or disappearance of any pods or containers behind the scenes.

And without such a customer experience, cloud native architectures won’t be able to rise to the challenges of the digital era.

Image credit: Dave O.


This article was written by Jason Bloomberg from Forbes and was legally licensed through the NewsCred publisher network. Please direct all licensing questions to legal@newscred.com.