Containership recently had the opportunity to attend KubeCon EU in Copenhagen. As always, KubeCon was an exciting few days jam-packed with interesting talks. Equally important, it was great to sit down for in-person roundtable discussions with people from around the globe whom we normally only see over video chat during Special Interest Group (SIG) meetings.
As an engineer working on automating the provisioning of highly available (HA) Kubernetes clusters on any cloud at Containership, I closely follow SIG Cluster Lifecycle and its projects. SIG Cluster Lifecycle is a group that focuses on deployment and upgrades of clusters. In this post I’ll share some of the more important updates and roadmap items for kubeadm, a SIG Cluster Lifecycle project that we leverage for our Containership Kubernetes Engine (CKE).
What is Kubeadm?
Kubeadm is a tool for setting up a best practice, TLS-secured Kubernetes cluster. Especially compared to doing things the hard way and from scratch, it makes bootstrapping and upgrading clusters relatively simple tasks.
That’s not the entire story, though, because kubeadm does not yet natively support some more advanced configurations such as highly available clusters. It also leaves choosing, configuring, and installing a networking solution up to you. This means that even a cluster with an entirely default configuration will not work out of the box and won’t meet some of the requirements for what the industry considers a minimum viable, production-ready cluster.
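To make this concrete, a typical non-HA bootstrap with kubeadm looks roughly like the following. The pod CIDR and the Flannel manifest URL are illustrative assumptions - substitute your chosen networking solution:

```shell
# On the master: initialize the control plane. The pod network CIDR shown
# here matches Flannel's default; adjust for your networking solution.
kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for your (non-root) user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network - the cluster is not functional until one is applied
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each worker node, using the token and hash printed by `kubeadm init`:
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

Everything after `kubeadm init` - the kubeconfig copy, the CNI manifest, the join command - is exactly the kind of glue that higher-level tools automate away.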
The good news is that because kubeadm is simple yet highly configurable, it’s an ideal building block for higher-level tools. For example, kops is a well-known tool (also under the SIG Cluster Lifecycle umbrella) that leverages kubeadm in order to easily stand up Kubernetes clusters on a few major cloud providers. Along those lines, there’s also kubicorn, kubespray, kube-aws, …you get the point - all with various levels of support for a few cloud providers. At Containership, we’re using kubeadm in CKE to provide a dead simple UI for provisioning clusters on multiple cloud providers.
What’s New with Kubeadm
There are a few new kubeadm features (in addition to many bug fixes) available today in v1.10.x that we’re particularly excited about. You can also view the full list in the release notes.
TLS Certificates for etcd
Generating and managing TLS certificates can be a pain, but it’s a necessary aspect of building production-ready clusters. In addition to securing Kubernetes itself with TLS (which kubeadm has long supported), etcd should also use TLS in a production environment. That’s why we’re excited about the new kubeadm feature to generate all necessary TLS assets for etcd and to configure etcd (when run in a static pod configuration) to use those assets during cluster initialization or upgrade.
Even if you’re not running etcd using static pods, you can still use a kubeadm `phase` command to generate all of the certificates and keys for you for an external etcd cluster:
kubeadm alpha phase certs etcd-ca etcd-healthcheck-client etcd-peer etcd-server
It should be noted that in general, the kubeadm phase commands are a great way to get more flexibility out of kubeadm. There are also ongoing talks about how to improve the UX of phases - something else to look out for!
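For instance, individual phases can be invoked directly rather than running all of `kubeadm init` (subcommand names as of kubeadm v1.10’s alpha phase interface):

```shell
# Regenerate only the API server serving certificate
kubeadm alpha phase certs apiserver

# Generate kubeconfig files for the control plane components and admin
kubeadm alpha phase kubeconfig all
```

This lets higher-level tooling pick and choose steps - for example, pre-generating certificates on one machine and distributing them to others.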
Fine-Grained Control Plane Configuration
`kubeadm init` now supports passing flags specific to each control plane component. This fine-grained control plane configuration can be necessary for more advanced use cases. For example, we pass an extra argument to the API server to configure it to use the lease endpoint reconciler for high availability.
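A minimal sketch of what that might look like using the `apiServerExtraArgs` field of the v1.10 (v1alpha1) kubeadm configuration - treat this as illustrative rather than a complete config:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
apiServerExtraArgs:
  # Passed through to kube-apiserver as --endpoint-reconciler-type=lease
  endpoint-reconciler-type: lease
```

You would then bootstrap with `kubeadm init --config <file>`. Analogous fields exist for the controller manager and scheduler.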
Upcoming Kubeadm Changes
Native Support for High Availability
Kubeadm can be used today to set up a highly available cluster, but it requires workarounds and additional tooling. One of the primary near-term goals for kubeadm is to introduce native support for high availability by providing a user-friendly master join workflow. While the basic idea is relatively simple, there are several blockers in the way of making this a reality. For example, the existing kubeadm configuration needs to be reworked in order to properly support multi-master clusters. For a complete checklist of what’s required for native HA support, see this issue on GitHub.
Self-Hosting as the Default
Self-hosting is the concept of running (components of) Kubernetes on Kubernetes itself. There are huge benefits to doing this, mainly because it allows us to use Kubernetes itself to manage and upgrade control plane components. Self-hosting has historically proved tricky to get correct, however. While self-hosting is available behind a feature gate in kubeadm today, it’s not yet suitable for production (for example, masters cannot recover from a reboot). In the future, it’s expected that self-hosting the control plane will become the default for clusters bootstrapped using kubeadm - but first, there’s work to be done.
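If you want to experiment with self-hosting today, it can be enabled at init time via its feature gate (alpha - not recommended for production, for the reasons above):

```shell
# Bootstrap a cluster whose control plane components run as self-hosted
# workloads on the cluster itself, rather than as static pods
kubeadm init --feature-gates=SelfHosting=true
```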
Improved End-to-End Testing
There is currently a large push to get more end-to-end tests up and running for kubeadm, especially for critical operations such as upgrades. Unfortunately, several uncaught bugs broke `kubeadm upgrade` from v1.9.x to v1.10.0. The community was able to quickly triage and fix those bugs, and the ongoing push for end-to-end testing means that end users should expect more stability moving forward.
Other SIG Cluster Lifecycle Happenings
SIG Cluster Lifecycle encompasses much more than just kubeadm and kops. One of the most exciting things to watch is the recently formed Cluster API Working Group. This group is working towards defining and implementing a portable API that represents a Kubernetes cluster.
One other thing to watch is the wider adoption and upstreaming of kubeadm-dind-cluster. This is a great tool that we recently started playing with for spinning up local, multi-node development clusters using kubeadm and Docker in Docker.
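Getting started with it is straightforward, assuming the project’s versioned “fixed” scripts (the script name and URL here are assumptions based on the project’s layout; check the repository for the current equivalents):

```shell
# Fetch the pre-built script pinned to a Kubernetes release
wget https://raw.githubusercontent.com/Mirantis/kubeadm-dind-cluster/master/fixed/dind-cluster-v1.10.sh
chmod +x dind-cluster-v1.10.sh

# Bring up a local multi-node cluster (each "node" is a Docker container)
./dind-cluster-v1.10.sh up

# Tear it down when you're done
./dind-cluster-v1.10.sh down
```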
I’ve found SIG Cluster Lifecycle (as well as the Kubernetes community as a whole) to be incredibly welcoming to newcomers. If you’re interested, a great way to start getting involved is to simply join the weekly meetings and Slack channel. From there, maybe start poking through the freshly groomed v1.11 and v1.12 backlogs or even open your first Kubernetes Enhancement Proposal (KEP)!
Tying it All Together
Navigating the ever-evolving landscape of Kubernetes provisioning tools, networking solutions, metrics plugins, and so forth is an incredibly daunting task. We built Containership Kubernetes Engine (CKE) so you don’t have to waste precious engineering time researching and building tooling to spin up clusters on any cloud provider. Try provisioning a multi-master cluster today (for free!) with a few clicks on containership.io and let us know what you think - we’re always looking for feedback!