By Andy Smith
About the author: These days, I spend most of my time enabling, rather than doing. What this means is that I design and build systems that perform Analytics. In my world, Kubernetes is particularly relevant when it comes to Analytics Platforms such as SAS Viya and Posit (Workbench).
I think it’s fair to say that I’ve been around a bit. I’ve been in this game for over 30 years. I grew up fully-embracing the microcomputer revolution, and computing just continues to evolve. Rapidly. I haven’t seen it all, by any means, but I’ve seen many a thing that would make your toes curl. In Analytics, we’ve gone from big ol’ PCs to Client Server, Virtual Machines, Massively Parallel Processing, fancy Databases, t’Internet and the Cloud, super-fast Storage, APIs, and more. And then one day when I woke up, it was all about things like Microservices, and Containers.
Rather than being one big lump of software (the Monolithic Application), an application can be built from Microservices, which brings many advantages. The Monolith is tightly-coupled, often difficult to scale, and usually challenging to maintain. When an application is designed with loosely-coupled Microservices, those components tend to scale better, and are simpler to maintain.
Containers provide a form of virtualization. A Container can run a Microservice, an entire application, or pretty much any software process. One of the things that make Containers really whizzy is that they are far more lightweight than a Virtual Machine, which has to contain (and run) an entire Operating System before you can even think about the rest of the software involved.
In general, Containers make software more efficient to deploy, more consistent, and easier to develop than alternative approaches. Popular Container platforms include Docker, containerd, and Podman.
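To make that a bit more concrete, here's what a Container image recipe can look like. This is a minimal sketch of a Dockerfile for a hypothetical Python microservice; the file names (`app.py`, `requirements.txt`) are assumptions for illustration, not from any particular project.

```dockerfile
# Start from a small official Python base image
FROM python:3.12-slim

# Everything below happens inside /app in the image
WORKDIR /app

# Install dependencies first, so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the (hypothetical) microservice code
COPY app.py .

# The process the Container runs when it starts
CMD ["python", "app.py"]
```

Build it with `docker build -t my-service .` and run it with `docker run my-service`, and you have your process running in its own lightweight, isolated world, with no full Operating System of its own to boot.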
So, let’s say we have an application that runs across a whole bunch of containers; dozens and dozens of them. How do we make sure they’re all doing what they should be doing, and behaving nicely? This is where Kubernetes comes in.
There are many “flavours” of Kubernetes, from many vendors, though recently I have been playing mostly with the Open Source version, which is great for deployments both in the Cloud, and on-premises.
Kubernetes, which apparently translates from Greek as ‘Helmsman’ (or, according to Google Translate, as ‘Blanket’), deploys, maintains, and scales applications across a Cluster.
The Cluster consists of one or more machines, known as Nodes. These Nodes can be in the Cloud, or on-premises, as Bare Metal or Virtual Machines.
Within a Kubernetes cluster, there are Namespaces, which act as a collection of resources.
Our Containers run inside Pods, and, and, and…
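To tie those buzzwords together, here's a minimal sketch of a Pod manifest, the YAML you hand to Kubernetes to say “run this Container for me”. The names here (`hello-pod`, the `analytics` Namespace) are made up for illustration; the `nginx` image is just a handy public image to demonstrate with.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # a hypothetical name for this Pod
  namespace: analytics   # an example Namespace to group resources in
spec:
  containers:
    - name: hello
      image: nginx:1.27  # any Container image will do here
      ports:
        - containerPort: 80
```

Saved as `hello-pod.yaml`, this would be applied with `kubectl apply -f hello-pod.yaml`, after which Kubernetes schedules the Pod onto one of the Cluster's Nodes and keeps an eye on it for you.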
That’s probably enough buzzwords for now. There’s a lot more to it than this, so if you’re interested in learning more of the technical gubbins, check out the resources at the end of this post.
I’ll be the first to admit, I was somewhat hesitant to put my hand up to learn about Kubernetes – it looked new and weird, but I’m glad I did.
My first Kubernetes cluster was particularly humble. A single-Node Cluster, a Namespace, and some Pods.
Before long, I had built a pretty sizeable Cluster, across multiple Nodes, with Pods aplenty, and lots more besides, and it was a thing of wonder. I should probably take up some new hobbies, but hey.
While it’s not Rocket Science, like any technology, it can be baffling at times. We learned Kubernetes so that you don’t have to. You’re welcome.
Resources:
Phippy & Friends | Cloud Native Computing Foundation (cncf.io)
Kubernetes – Wikipedia
Kubernetes
Deploy code faster: with CI/CD and Kubernetes | Google Kubernetes Engine (GKE) | Google Cloud