Running Containers in Production

If the past year has done nothing else, it has solidified containers as the accepted standard for deploying applications in production. I could spend time rehashing why Linux containers are important, what benefits they provide, or why they have caused such a stir; however, I am confident countless others have already done that for me. Instead, I am writing this series of posts to convince readers that running containers in production, at scale, is currently a difficult task, and that if we as developers, operations engineers, managers, and others wish to reap the benefits of containerization, significant improvements are needed.

Today, there is an array of platforms to choose from: vendor-specific solutions such as Google Container Engine and EC2 Container Service, and cloud-agnostic platforms such as Tutum or Marathon. Despite the increasing number of players in the space, complexity, vendor lock-in, and stability remain the major issues plaguing container deployment platforms.

Regardless of the platform, it should be simple, extensible, and reliable. Most of all, users should be confident that their choice is stable and will scale.

In the posts that follow, I will dive into the benefits and shortcomings of specific implementations and, in the end, hope to offer a solution to this elusive problem. Check back for additional parts in this series.
