If you’ve been listening to any of the conversations around containers, you’ve heard of Kubernetes. The name, which my kids apparently love saying, comes from a Greek word meaning “helmsman” or “pilot.” That’s not a bad name for the product: Kubernetes automates and orchestrates (or “pilots”) the deployment and scaling of containers. We’ll get to what that means shortly.
I plan to make this the first in a series of articles introducing Kubernetes. Developers who work with containers every day probably understand the basics already and don’t need this kind of article. This summary is written for you if you have cloud platform experience but don’t write software. If you want an understanding of how Kubernetes works and where it fits, then I hope this can help.
A Quick History
Kubernetes was developed internally at Google and released a little over three years ago, in 2015. The project was donated to the Cloud Native Computing Foundation (CNCF) and is currently the only project listed as “graduated” (presumably meaning it’s production-ready). If you check the “Incubating Projects” list on the CNCF site, you’ll see several interesting things coming.
Kubernetes is the gorilla in the park. That doesn’t mean the alternatives should be neglected, but it’s the big player, not just because Google developed it, but because of its depth of features and scope of support. Plenty of third-party companies are getting into the Kubernetes management services business for another reason: it’s complex. The alternatives are worth their own discussion; hang in there and we’ll get to them.
What is Kubernetes For?
So let’s get back to what it means to “automate and orchestrate containers” and why it matters.
I recall doing web design and development in the early Web 2.0 days, when the whole nature of website functionality changed. A website was no longer just for disseminating information or buying a product. Everyone was launching sites that could make your life easier by delivering some kind of service through the website. “Write-boards” come to mind.
Facebook and then Twitter were born in those days, and social media came into its own. All these applications were written on technologies designed for fast development and fast server-side processing. PHP and Ruby on Rails were getting a lot of buzz, and I was part of those tribes for about five years.
The idea was simple: create small, lean applications built with frameworks that let you scaffold an application rapidly. Through good APIs, developers could connect multiple small applications to deliver the parts of a larger package. One tiny application might handle the user signup process, another the message delivery, while another runs and secures the database.
But what happens when you need to scale the whole thing up? Or when your whole code base gets an upgrade and you need to coordinate upgrading the entire stack? The problem shifted quickly from “How do I manage my consumed hardware resources?” to “How do I manage my microservices regardless of my hardware resources?”
Enter Kubernetes. And Nomad, and Docker Swarm, and others. In fact, all three just named saw initial release in 2015 because developers everywhere saw the need. We’ll talk about the alternatives to Kubernetes and why they may well be worth your time, but the thing to realize is this:
Container automation/orchestration provides an important function between the container/application itself and the resources that the software consumes.
So whether it’s Kubernetes or another good option, developers of cloud native applications need that layer of administration to handle all the tasks involved in keeping the applications up and running on their resources. The resources (CPU, memory, disk, network) also need to be interchangeable. Kubernetes does all this and a lot more.
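To make that layer of administration concrete, here’s a minimal sketch of how you tell Kubernetes what to keep running: a Deployment manifest. The names, labels, and image here are placeholders of my own, not anything from a real project. The point is that you declare a desired state (three replicas of a container), and Kubernetes schedules, restarts, and relocates pods across whatever machines are available to maintain that state:

```yaml
# Hypothetical Deployment manifest: declare the desired state, and
# Kubernetes works to keep three copies of the container running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # placeholder name
spec:
  replicas: 3                # K8s reschedules pods to hold this count
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # any container image would do here
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, this hands the “keep it running” problem to the orchestration layer: if a container crashes or a node disappears, Kubernetes brings the count back to three without you touching the hardware.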
Like any good solution, it has contenders. Kubernetes is probably the most complicated, but it’s arguably the most feature-rich solution for the need. Others actually carve out their niche by being much simpler and quicker to deploy.
Nomad: Nomad is a leaner, simpler-to-deploy solution to many of the same problems. In fact, it’s so much simpler that it’s worth some attention. Another factor is that Nomad fits into a framework of three other products that are equally worthwhile, helping with provisioning, securing, and connecting applications in its own way.
I love the ideas behind Nomad and I’ve had long-term respect for Mitchell Hashimoto who wrote and maintained projects like Vagrant back when I used it as a Ruby Gem. That product and others like it have now been rolled into HashiCorp.
Docker Swarm: This is a mode that gets set up via additional configuration right from your existing Docker environment. If you don’t know what Docker is, you should probably do some more reading before the next post in this series.
Docker may be the largest container manager out there; it lets you build applications into tiny, isolated environments that can be relocated or reproduced easily. In a way, Docker does for applications what VMware did for Windows with the SDDC.
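To give a feel for how Swarm mode grows out of a plain Docker setup, here’s a hypothetical Compose file (the service name and image are my own placeholders). The `deploy` section is what Swarm mode reads on top of an ordinary Docker configuration:

```yaml
# docker-compose.yml (sketch): Swarm mode reads the "deploy" keys,
# which a plain single-host docker-compose run ignores.
version: "3.8"
services:
  web:
    image: nginx:1.25          # placeholder image
    ports:
      - "8080:80"
    deploy:
      replicas: 3              # Swarm keeps three tasks running
      restart_policy:
        condition: on-failure
```

After a `docker swarm init`, something like `docker stack deploy -c docker-compose.yml demo` would ask the cluster to keep three replicas of the service running, which is the same declarative “desired state” idea Kubernetes is built around.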
To help compare, here’s a chart:
If you think I’ve missed a contender and want to see something added to the chart, reach out on Twitter. I originally included Apache Mesos: Marathon, but decided to drop it.
Container Orchestration/Automation Solutions

|                   | Kubernetes | Nomad | Docker Swarm |
|-------------------|------------|-------|--------------|
| Differentiator(s) | The leader: deep feature set; advanced self-healing | Lightweight and simple deployment | Grow a Docker deployment to include Swarm mode |
| Downsides         | Learning curve | Less broad adoption; not baked into the major clouds | Docker only, obviously |
| Other Features    | Configurable priority of containers | Self-healing; multi-datacenter/region aware | |
| Hybrid Cloud      | Y | Y | Not native |
| Released          | July 2015 | September 2015 | August 2015 |
The more I look at Nomad, the more I want to give it some attention. Maybe I’ll get around to an article series on that one day.
I’m writing about Kubernetes not because it’s the best solution, but because it’s mainstream.
Where Are We Going Here?
In the next article we’ll get our hands dirty. I hope to get that out very shortly, so stay tuned. First, I’d like to talk a bit about how you can deploy and use K8s (as Kubernetes is often shortened).
- In a Cloud (Easiest)
Currently available as managed services from the big three: AWS, Google Cloud Platform, and Azure
- In a Local/Private Environment (Hardest)
This is if you want to run K8s in a test/dev environment or home lab
- On your laptop (Not very hard, but a little bit of setup)
Don’t expect to run a lot of services or redundancy, but Minikube will let you get your hands into K8s while you’re on a flight without WiFi.
I’d like to walk through all three of these. But there’s a problem with what to deploy. The whole idea of K8s is to take complex microservices and simplify administering their parts. The idea of an introduction is to keep it simple.
No, we’re not going to revert to “hello world” applications. I have a few things in mind, but I think I’m going to go with something that might actually be useful if you’re hands-on enough to give K8s a spin: the Ubiquiti UniFi controller. We’ll see where that goes. An alternative might be a logging application, like Splunk.
Finally, before you build anything with K8s, you need a solid idea of what it is you’re building. I’ll lay that architecture out in the next post; my goal is to push through a GCP-deployed cloud application that uses some of the K8s functionality. Then we’ll run the same thing on a laptop with Minikube, and then again in a local/private environment.
So if you want to get your hands dirty, stay tuned. In the meantime, if you don’t have a Google Cloud Platform account, go create one. I hear they’re giving away $300 in credit to use during the one-year trial period when you open a new account, so it could be worth your while. You’ll be consuming real resources, so that’s a good deal.
We could use AWS or Azure, but Google created Kubernetes, so it only seems appropriate. Want to see the same kind of workflow on AWS and Azure as followup posts? Let me know on Twitter.