Really Easy Role-Based Access Control (RERBAC)

In a recent report from Accenture that surveyed 200 senior IT executives about their cloud journey, 65% reported that “Security and Compliance” are among the largest barriers they have faced when adopting the cloud. A major part of addressing security and compliance is regulating how users and services interact with your Kubernetes clusters. Luckily, Kubernetes operators can leverage Role-Based Access Control (RBAC) to control access to resources dynamically via the rbac.authorization.k8s.io API group. However, proper setup and long-term management can be challenging and add complexity that we at Containership are trying to eliminate.

The Traditional Approach

The starting point is to determine what types of restrictions you need to put in place for your specific use case. Kubernetes provides two low-level concepts to enable role-based access control on a cluster: Roles define a group of permissions across a set of resources, and RoleBindings tie those roles to various subjects (Kubernetes does not have a first-class user entity, leaving the authentication of who is making a request to one of the various authentication plugins). Roles and RoleBindings operate at the namespace level, while their cluster-scoped counterparts, ClusterRoles and ClusterRoleBindings, apply across an entire cluster. With these primitives, operators can build out the configuration their security requirements call for, but it is a tedious process and one that is difficult to maintain.
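To make that concrete, here is a minimal sketch of what these primitives look like in YAML. The namespace, role name, and user below are hypothetical placeholders; a real setup will use your own names and permissions.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging              # Roles are namespace-scoped
  name: pod-reader
rules:
- apiGroups: [""]                 # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: read-pods
subjects:
- kind: User                      # matched by whichever authenticator you use
  name: jane                      # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role                      # a RoleBinding may also reference a ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Multiply this by every team, namespace, and cluster you run, and the maintenance burden quickly becomes apparent.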

Hopefully, while you were hacking the YAML together, you settled on a consistent naming convention and can remember the specific permissions each role grants. Perhaps you created a document to track it all? Eventually, this information needs to be readily available for multiple people or teams within your organization to access and easily digest. It also needs to be managed going forward as things change across all of your clusters. Making those changes and keeping everything synchronized is difficult. This adds overhead and remains a major barrier for organizations trying to create production-grade setups.

The Containership Way

After manually configuring RBAC for our own internal use and for a number of our customers, it became clear that there had to be a better way to manage it. At the same time, there is no real need to reinvent the wheel; we just needed a way to make it roll more efficiently. Containership now gives you the ability to set up and manage RBAC in a unified fashion: one place to define roles and one place to change them. As updates are made from within Containership, they roll down to all of your clusters in real time. What you are left with is happy operators, regulated access, and vanilla RBAC that will stand the test of time.

Containership RBAC gives users the ability to manage both Cloud Platform and Kubernetes rules from one central location. The process is familiar enough that Kubernetes operators will understand it, and it is designed to remove the challenges of managing RBAC across multiple clusters. By default, Containership provides roles with various levels of access. Standardized roles can be applied across multiple clusters on multiple providers. Roles can be applied to specific users, or teams can be created so that the roles apply to every member of the team.

For a more detailed breakdown of the roles and the RBAC process within Containership, head over to our documentation page.

Other Product Updates

The Containership Kubernetes Engine (CKE) has passed conformance testing for the latest minor release of Kubernetes. Users can now launch 1.14 clusters from within Containership and easily upgrade clusters already running on the platform from previous versions.

This latest Kubernetes version brings a number of new enhancements such as pod priority and preemption, various kubectl updates, and RBAC hardening. Check out the full release details on the official release blog.
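As a quick illustration of pod priority, a PriorityClass assigns a scheduling priority that individual pods can opt into; the names and values below are hypothetical placeholders, not part of any Containership default.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority             # hypothetical class name
value: 1000000                    # higher values schedule first and may preempt lower-priority pods
globalDefault: false
description: "Example class for latency-sensitive workloads."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app             # hypothetical pod
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx:1.15             # placeholder image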

This release also brings version upgrades to several of the plugins supported by CKE. We have made several updates to the open-source Cerebral plugin. Additionally, updates have been made to our supported providers' Cloud Controller Managers (CCM), Container Storage Interface (CSI) drivers, and Container Network Interface (CNI) plugins, ensuring end users can take advantage of the resources each provider has to offer. Lastly, we have updated our NVIDIA plugins for GPU instance types. Check out the changelog here for a full list of the upgrades.
