Over the past few years, Kubernetes has grown to become the most popular container orchestration tool. Kubernetes automates the process of creating, deploying, and monitoring containers across a cluster. And while it's built on the same principles as containers, Kubernetes introduces its own philosophy for building and deploying microservices. We'll look at the core concepts of Kubernetes and how to incorporate them into your microservice projects.
Kubernetes introduces three key concepts: Pods, Services, and Deployments.
Pods: Pods are the smallest deployable units in Kubernetes. A Pod consists of one or more containers and shared resources, such as data volumes and network addresses. A Pod's containers are tightly coupled and deployed as a single unit, much like a container itself.
Deployments: Deployments dictate how Pod instances are created, deployed, and maintained. You define your desired Deployment using a configuration file, and Kubernetes handles the process of implementing the desired state. Deployments also manage replicating, scaling, and restarting Pods.
Services: Services organize Pods into logical units. Using Services, you can declare configurations that affect an entire group of Pods regardless of how many Pods are in that group or their location in the cluster. You can use Services to expose ports, discover services, configure load balancing, and more.
Consider a typical LAMP application. In a plain Docker environment, we might use two containers: one for the PHP application and web server, and a separate one for the MySQL server. The PHP container receives requests on port 80, and the MySQL server receives requests on port 3306. The MySQL server also has an attached data volume to store the database, but otherwise the containers are ephemeral.
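To make the starting point concrete, this plain Docker setup could be sketched as a Docker Compose file along these lines (image tags, the `db-data` volume name, and the placeholder credential are illustrative assumptions, not part of the original example):

```yaml
# Hypothetical docker-compose.yml for the two-container LAMP setup.
version: "3"
services:
  php:
    image: php:7.2-apache     # PHP application and web server
    ports:
      - "80:80"               # receives requests on port 80
  mysql:
    image: mysql:5.7          # MySQL server
    ports:
      - "3306:3306"           # receives requests on port 3306
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
    volumes:
      - db-data:/var/lib/mysql       # persistent data volume for the database
volumes:
  db-data:
```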
This setup works well on a single host, but what happens when you want to introduce redundancy or load balancing? How do you route incoming requests to multiple PHP containers? What if you want to move your MySQL container and data volume to another server? Kubernetes was designed to handle issues like these without affecting how your application behaves.
In Docker, the smallest unit that you can deploy is a container. In Kubernetes, the smallest unit is a Pod. A Pod is a group of one or more containers bundled with supporting resources, such as data volumes. Containers in the same Pod can network with each other, write to shared data volumes, and even access shared memory. Pod resources are always deployed together, so if a Pod is terminated all of its containers and resources are also terminated.
With our LAMP application, we could split our containers into two separate Pods or run both in the same Pod. The benefit of a single Pod is that the containers can network with no additional setup. Pod containers share an IP address and port space, so the PHP server can communicate with the MySQL server by simply pointing to port 3306 on localhost. However, scaling the application to meet increased demand would mean duplicating both the PHP container and the MySQL container.
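As a rough sketch, the single-Pod approach might look like this (the Pod name is hypothetical; the PHP container reaches MySQL at localhost:3306 because the two containers share a network namespace):

```yaml
# Hypothetical Pod running both containers together.
apiVersion: v1
kind: Pod
metadata:
  name: lamp-pod
spec:
  containers:
  - name: php
    image: php:7.2-apache
    ports:
    - containerPort: 80
  - name: mysql
    image: mysql:5.7
    ports:
    - containerPort: 3306
```

Scaling this Pod replicates both containers at once, which is exactly the drawback described above.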
In general, separating each service into its own Pod allows for more flexibility when scaling and updating services. While the PHP Pod might only contain the PHP container, the MySQL Pod can contain both the MySQL container and a data volume that travels with the container. Duplicating all three resources when we only need one would be wasteful and inefficient.
A Deployment describes the desired state of a cluster. You define parameters such as the Pods to create, the number of replicas for each Pod, and whether Pods should scale during high demand. The Deployment Controller not only deploys the Pods to match your configuration, but also monitors and maintains the cluster so that it always matches your configuration. Deployments also support rolling updates and rollbacks in a way that ensures at least one Pod is online at any given time.
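The rolling-update behavior mentioned above is configurable in the Deployment spec. A minimal sketch, assuming the default RollingUpdate strategy:

```yaml
# Hypothetical rolling-update settings for a Deployment spec:
# replace Pods gradually, taking at most one Pod offline at a time
# while allowing at most one extra Pod during the rollout.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
```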
Our LAMP application has two Deployments: one for the PHP Pod and one for the MySQL Pod. Each Deployment configuration includes the name of the Deployment, the Pod specification (its containers and resources), the number of replicas to create, and any labels and selectors you want to attach to the Pod or Deployment. If we expect periods of high traffic, we can also enable autoscaling to automatically increase the number of Pods when CPU usage spikes.
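The autoscaling mentioned here is handled by a HorizontalPodAutoscaler. A sketch, assuming the Deployment is named php-deployment and the replica bounds and CPU target are illustrative:

```yaml
# Hypothetical autoscaler for the PHP Deployment: scale between
# 3 and 10 replicas, targeting 80% average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-deployment
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```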
The following Deployment creates three PHP Pods and applies the label "php" to each one:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-deployment
  labels:
    app: php
spec:
  replicas: 3
  selector:
    matchLabels:
      app: php
  template:
    metadata:
      labels:
        app: php
    spec:
      containers:
      - name: php
        image: php:7.2-apache
        ports:
        - containerPort: 80
```
Services group sets of Pods together into a single logical unit. No matter how complicated a Deployment is, a Service lets you configure multiple Pods as if they were a single item. Services provide an abstraction layer over Pod networking (such as tracking IP addresses), service discovery, and load balancing, and they also allow Pods to communicate with each other.
Services identify Pods through the use of selectors. Selectors let you search for Pods by label, which you define in the Pod's Deployment configuration. Since every instance of a Pod shares the same label, a selector will find and include them in the Service. Any change made to the Service applies to each Pod instance regardless of its state or location in the cluster.
Going back to the LAMP example, we can create Services to open ports on each Pod. We'll define separate Services for each Deployment. For MySQL we'll open TCP port 3306, and for PHP we'll open TCP port 80. The following is an example Service for the PHP Pod:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: php-service
spec:
  selector:
    app: php
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```
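The MySQL Service follows the same pattern on port 3306. A sketch, assuming the hypothetical name mysql-service and that the MySQL Deployment labels its Pods with app: mysql:

```yaml
# Hypothetical Service for the MySQL Pod.
kind: Service
apiVersion: v1
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql       # assumes the MySQL Deployment applies this label
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
```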
Kubernetes offers two methods of discovering services: environment variables and DNS. Environment variables can embed the Service's IP address into your Pod configuration, or they can be used as Docker links. Alternatively, you can use a DNS server plugin to automatically map Service names to IP addresses. In either case, Kubernetes automatically maintains connections to Services even as Pods shut down or change IP addresses. This way, your PHP app only needs to know the name of the MySQL Service.
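With DNS-based discovery, passing the Service name to the application can be as simple as an environment variable in the PHP Deployment's container spec. A sketch (the MYSQL_HOST variable name and mysql-service Service name are assumptions for illustration):

```yaml
# Hypothetical snippet from the PHP Deployment's Pod template:
# the MySQL host is injected as an environment variable, and the
# Service name resolves via cluster DNS to the Service's IP.
containers:
- name: php
  image: php:7.2-apache
  env:
  - name: MYSQL_HOST
    value: mysql-service   # hypothetical MySQL Service name
```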
Getting Started with Kubernetes
We looked at the fundamentals of Kubernetes and how you might use it to deploy a simple web app. The easiest way to start building microservices in Kubernetes is with Minikube. Minikube creates a single-node Kubernetes cluster in a virtual machine, allowing you to test commands as if you were running them on a full cluster. If you currently use Docker Compose, you can even convert your Docker Compose files into Kubernetes resources using Kompose.
You can learn more about Kubernetes' core concepts on the Kubernetes documentation site.
Looking to get started with Kubernetes? The team at Containership can help.