Managed Kubernetes: CKE vs EKS

Unless you have been living under a rock for the last decade, you have probably heard of the cloud computing giant Amazon Web Services. According to Synergy Research Group, the Amazon subsidiary currently holds around 33 percent of the cloud infrastructure market, and for good reason: it offers an ever-increasing number of cloud services, ranging from VMs to databases and, more recently, container management services.

Near the end of 2014, Amazon announced its EC2 Container Service, allowing you to take advantage of the growing container trend and run your containerized workloads in a scalable, efficient way. That same year, Google open-sourced a new project called Kubernetes, based on its internal cluster management system, Borg. Fast forward to 2018, and Kubernetes has become the dominant player in the container management market. One of the fastest growing projects to date, it touts unbelievable statistics such as more than 230 years of combined development time among its contributors in that short four-year span.

If you have been following the Kubernetes landscape recently, you will have heard that AWS released its Kubernetes offering, Amazon Elastic Container Service for Kubernetes (EKS), to General Availability (GA). After having the chance to test out the product firsthand, I want to share some of the many pain points I encountered along the way. As we’ve seen with Azure’s container management solution, Azure Kubernetes Service (AKS), GA does not always mean production ready…

The Breakdown

tl;dr: EKS is seriously lacking in regional availability, user experience and setup, Kubernetes release velocity, pricing, and overall cluster launch time.

I want to go into detail on each of those areas, specifically from the standpoint of how EKS compares with our own Containership Kubernetes Engine (CKE), which we’ve recently added support for running on AWS VMs. Did I also mention our platform runs agnostically across four of the most popular cloud providers, with new providers continuously being added?

Region Availability

Only us-east-1 and us-west-1 are supported for EKS cluster launches at this time. This can quickly become a concern for your business, especially if your systems are latency sensitive and need to live geographically close to your end users. One of the features EKS highlights is support for running the control plane across multiple availability zones. While true at some level, I ran into extremely frustrating issues with the feature.

Leaving the default zone options selected, I repeatedly received error messages when attempting to create my control plane in the us-east-1 region. All of the errors pointed to insufficient capacity in various availability zones. An interface should never default to values that cannot actually be fulfilled.
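One way to sidestep the bad defaults is to request the control plane yourself and pin its subnets to zones you know have capacity. Here is a minimal boto3 sketch of that idea; the role ARN, subnet IDs, and security group ID are hypothetical placeholders, not values from a real account.

```python
# Minimal sketch: create an EKS control plane with explicitly chosen
# subnets, rather than accepting the console's default zone selection.
# All identifiers below are placeholders, not values from a real account.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.create_cluster(
    name="my-eks-cluster",
    version="1.10",
    roleArn="arn:aws:iam::123456789012:role/eks-service-role",
    resourcesVpcConfig={
        # Pick subnets in zones that actually have EKS capacity,
        # instead of the defaults that kept failing for me.
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```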

Containership is able to launch clusters in any region, availability zone, and machine type that EC2 supports. This gives users an enormous amount of flexibility and scalability compared with the current EKS offering: you can take advantage of any of the 17 regions and 42 availability zones AWS has to offer.
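For a sense of the menu you are choosing from, a short boto3 snippet (assuming your AWS credentials are already configured) will enumerate every EC2 region and its availability zones:

```python
# Enumerate all EC2 regions and the availability zones visible to your
# account in each one.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in sorted(regions):
    zones = boto3.client("ec2", region_name=region).describe_availability_zones()
    names = [z["ZoneName"] for z in zones["AvailabilityZones"]]
    print(f"{region}: {', '.join(names)}")
```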

User Experience

Next on the list is user experience, or the lack thereof. It is very clear that AWS was not happy about having to support an open-source project built for running cloud-agnostic workloads. Antithetical to traditional AWS offerings, supporting Kubernetes reduces their ability to capitalize on vendor lock-in, making it increasingly difficult to secure market control.

Your first trip to the EKS dashboard will probably leave you underwhelmed. It quickly forces you out of context in order to create an IAM role that powers the creation of resources on your behalf. Once you finally get to launch a cluster, it gets even worse. Without digging through the getting started guides and other tutorials AWS provides, it is virtually impossible to figure out how to connect to your cluster and, more importantly, how to attach worker nodes, which are the core reason to run a cluster in the first place.

Worker node pools are not a first-class citizen in the EKS console and, in my opinion, were virtually forgotten. In order to connect worker node pools to your cluster, you have to wrestle with the AWS CloudFormation template service, copying and pasting values back and forth between the EKS console and the CloudFormation template configuration. Make sure you don’t have a typo, because if you do, your worker nodes will never connect.
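If you would rather script that copy/paste dance than do it by hand, something like the following boto3 sketch works. The stack name and template URL are placeholders, and the parameter keys follow AWS’s sample worker node template; treat them as assumptions and verify them against the template you actually use.

```python
# Sketch: feed the values the console makes you copy by hand (cluster name,
# control plane security group, and so on) into the worker node
# CloudFormation stack as parameters.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="my-eks-worker-nodes",  # placeholder stack name
    TemplateURL="https://example.com/amazon-eks-nodegroup.yaml",  # placeholder URL
    Capabilities=["CAPABILITY_IAM"],  # the worker template creates IAM resources
    Parameters=[
        {"ParameterKey": "ClusterName", "ParameterValue": "my-eks-cluster"},
        {"ParameterKey": "ClusterControlPlaneSecurityGroup",
         "ParameterValue": "sg-0123456789abcdef0"},
        {"ParameterKey": "NodeGroupName", "ParameterValue": "workers"},
    ],
)
```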

Once you finally attach worker node pools to your management plane, connecting with standard community tools is an entirely separate process. AWS requires a special authenticator binary to authenticate against its clusters. That in itself is not a bad thing, but the recommended way to configure your local client is to copy a template and, once again, paste values from your EKS dashboard into your local YAML config.
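To make the pain concrete, here is a sketch (my own illustration, not official tooling from AWS or Containership) that does the copying for you: it pulls the cluster endpoint and certificate authority data with boto3 and renders a kubeconfig that execs out to the authenticator binary.

```python
# Fetch the values EKS makes you copy from the dashboard and render a
# kubeconfig that authenticates via the aws-iam-authenticator binary.
import boto3

CLUSTER = "my-eks-cluster"  # hypothetical cluster name

eks = boto3.client("eks", region_name="us-east-1")
info = eks.describe_cluster(name=CLUSTER)["cluster"]

kubeconfig = f"""\
apiVersion: v1
kind: Config
clusters:
- name: {CLUSTER}
  cluster:
    server: {info['endpoint']}
    certificate-authority-data: {info['certificateAuthority']['data']}
contexts:
- name: {CLUSTER}
  context:
    cluster: {CLUSTER}
    user: aws
current-context: {CLUSTER}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator  # the special binary EKS requires
      args: ["token", "-i", "{CLUSTER}"]
"""

with open("kubeconfig-eks.yaml", "w") as f:
    f.write(kubeconfig)
```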

At Containership, we take user experience very seriously, and we fundamentally believe it is key to the successful adoption of Kubernetes. Our platform provides an agnostic way to launch and manage clusters, not only on AWS but on any of our supported cloud providers. We support in-dashboard cluster launches of both master and worker node pools, with no copying and pasting at any step. One of our core values is to keep things simple, reduce vendor lock-in, and make managing your Kubernetes cluster enjoyable. To set up your local client, we provide a one-click command from any of your clusters in the dashboard, and your experience is the same no matter which provider you launch on.

Kubernetes Velocity

As of this writing, the only supported Kubernetes version on EKS is 1.10.3. Do you remember that 230-plus years of development work statistic I mentioned earlier? Kubernetes is a rapidly evolving project, yet many vendors are slow to support and certify new versions. Kubernetes 1.11.x became generally available a little over a month ago, and there have already been over 165 commits and more than 23 thousand lines of code added since then!

We pride ourselves at Containership on consistently being one of the earliest companies to become certified for each new Kubernetes version. We are constantly preparing for future releases to ensure you can keep your clusters up to date, even at the project’s rapid pace. We can do this because we build on a purely vanilla Kubernetes offering and invest heavily in working with the project’s various special interest groups, staying current not only on what is available today but on what is coming tomorrow.

Pricing

EKS immediately introduces a “pay to play” philosophy, charging $144 per month (a $0.20-per-hour control plane fee) just to run the management plane for a single cluster. That does not include the ability to run any workloads at all! Any additional resources you attach to the cluster are billed at standard EC2 pricing on top of that.
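The back-of-the-envelope math, assuming the $0.20-per-hour rate EKS launched with and a hypothetical three-node worker pool at roughly the us-east-1 on-demand m4.large rate:

```python
# Back-of-the-envelope EKS cost math. The $0.20/hour control plane fee is
# the GA launch price; the m4.large rate is an approximate on-demand
# figure used purely for illustration.
HOURS_PER_MONTH = 24 * 30  # 720

control_plane = 0.20 * HOURS_PER_MONTH  # $144.00/month before any workloads
workers = 3 * 0.10 * HOURS_PER_MONTH    # ~$216.00/month for 3 m4.large nodes

print(f"EKS management plane: ${control_plane:.2f}/month")
print(f"Example worker pool:  ${workers:.2f}/month")
print(f"Total:                ${control_plane + workers:.2f}/month")
```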

At Containership, you pay for the VMs on your account, with no additional fees on top. Many clusters do not need an extremely powerful management plane, and even if yours does, you often want the flexibility to launch custom VMs or scale as needed. With Containership, you choose the size and number of both your master and worker node pools, and you can easily launch single-node master pools for development or QA environments. Did I mention we also allow you to toggle the schedulability of your master nodes?

Launch Time

Finally, I want to speak to the length of time it takes to launch a cluster. Provisioning the control plane alone took roughly 10 minutes on average on EKS, and that does not account for any additional worker node pools you may need to provision. Remember, the control plane cannot run any workloads on its own.
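You do not have to take my word for the timing. A quick sketch like this one, using boto3’s cluster_active waiter, will measure it for you (it assumes create_cluster was issued just beforehand, and the cluster name is a placeholder):

```python
# Measure how long the EKS control plane takes to become ACTIVE after a
# create_cluster call, polling every 30 seconds for up to 30 minutes.
import time
import boto3

eks = boto3.client("eks", region_name="us-east-1")

start = time.monotonic()
eks.get_waiter("cluster_active").wait(
    name="my-eks-cluster",  # hypothetical cluster name
    WaiterConfig={"Delay": 30, "MaxAttempts": 60},
)
elapsed = (time.monotonic() - start) / 60
print(f"Control plane became ACTIVE after {elapsed:.1f} minutes")
```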

At Containership, all your initial cluster pools are launched in parallel so you can be sure your clusters are available in the shortest time possible. Many of our cluster launches take 6 to 7 minutes on average across a number of cloud providers. You could even say we are the fastest multicloud Kubernetes engine to date.

Closing

AWS has grown to be a dominant force in the cloud market, but even companies of that size are struggling to keep up with the shift happening in the container management world. Kubernetes has brought us a new era of cloud computing, with a focus on interoperability, zero vendor lock-in, and an abstraction of the core components required to run and manage workloads across your cluster. The ecosystem is growing rapidly, with small niche communities joining in to build some really special projects on top of the great foundation Kubernetes has provided.

Here at Containership, we are embracing this new shift, and our goal is to bring Kubernetes to the masses. Whether you are an individual developer just getting your feet wet, or a time-tested enterprise company beginning the transition, our expertise can help you move forward. If you have any more questions feel free to contact us using the link below. You can also sign up for Containership Community edition and take a tour of the platform yourself for free!
