Today we're unveiling an awesome and much-needed feature: a cloud-agnostic firewall service.
The firewall feature allows you to lock down your clusters against both internal inter-host traffic and the internet, selectively opening only the ports your public-facing services need across the cluster while keeping everything else secure.
This feature is especially important for a handful of our supported cloud providers that have no concept of firewalls or security groups. Users of those providers are left to fend for themselves when configuring firewall rules, which is a painful and tedious task with a real risk that human error locks them out of their machines.
Two providers that lack a network security product, but are not lacking in customers, are Rackspace Cloud and Linode. Another, Digital Ocean, only recently released a firewall product after years of leaving users on their own. Our cloud-agnostic firewall lets users of those providers easily configure security rules without getting knee-deep in iptables. For users of cloud providers that do have security groups, such as AWS, Google Compute, and Azure, this product offers an easier way to manage and secure systems in a cloud-agnostic way.
How it works under the hood
We developed Tesserarius as a library for easily manipulating basic iptables rules, and we have integrated it into the core of the Containership platform. The service makes it easy to automate the creation, deletion, and updating of rules via a RESTful API or via the Containership Cloud web UI. By hooking into the underlying core systems of Containership, we are able to apply rules across hosts automatically, without having to configure each server individually.
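Tesserarius's internals aren't shown in this post, but a minimal sketch can illustrate the idea of rendering a declarative rule description into iptables arguments. Everything below (the function name, its parameters, and the chain it targets) is a hypothetical illustration, not Containership's actual code:

```python
def render_iptables_rule(action, protocol, port, source="0.0.0.0/0"):
    """Render a declarative firewall rule as an iptables argument list.

    Illustrative sketch only; names and defaults are assumptions,
    not Tesserarius's real API.
    """
    return [
        "iptables",
        "-A", "INPUT",          # append to the INPUT chain
        "-p", protocol,         # protocol: "tcp" or "udp"
        "--dport", str(port),   # destination port to match
        "-s", source,           # source CIDR the rule applies to
        "-j", action.upper(),   # jump target: ACCEPT or DROP
    ]

# Example: allow the whole internet to reach port 80 over TCP.
cmd = render_iptables_rule("accept", "tcp", 80)
print(" ".join(cmd))
# iptables -A INPUT -p tcp --dport 80 -s 0.0.0.0/0 -j ACCEPT
```

A library built this way can regenerate and reapply the full rule set on every host whenever the cluster changes, which is what makes automatic, cluster-wide application possible.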
Types Of Rules
Application - Apply rules directly to a containerized service or application, ensuring that as you scale or add additional containers for the service, the rules are automatically applied on new hosts.
Host - Apply rules to specific hosts in a cluster, or to specific groups of hosts. For example, a firewall rule can be added to open a port to all follower hosts, all leader hosts, or both. This is useful when you'd like to open up SSH access to every server in a cluster.
Loadbalancer - When using our integrated cloud-agnostic load balancer product, rules can be added that apply specifically to the load balancer, allowing for traffic to be opened up to the internet for public facing services, or to specific IP ranges.
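To make the three scopes concrete, here is one way the rule types above could be modeled as data. The field names and host-group values are hypothetical, not Containership's actual schema:

```python
from dataclasses import dataclass

@dataclass
class FirewallRule:
    """A sketch of a scoped firewall rule; fields are illustrative."""
    scope: str          # "application", "host", or "loadbalancer"
    target: str         # service name, host group (e.g. "followers"), or LB name
    port: int
    protocol: str = "tcp"
    source: str = "0.0.0.0/0"   # CIDR allowed to connect

rules = [
    # Host rule: open SSH to every server, but only from one office range.
    FirewallRule(scope="host", target="all", port=22, source="203.0.113.0/24"),
    # Loadbalancer rule: expose a public web service to the whole internet.
    FirewallRule(scope="loadbalancer", target="ghost-lb", port=80),
]

for rule in rules:
    print(f"{rule.scope}:{rule.target} allow {rule.source} -> {rule.port}/{rule.protocol}")
```

The useful property of scoping rules this way is that a rule follows its target: an application rule travels with the containers as they reschedule, rather than being pinned to whichever host they happened to start on.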
Interacting with Containership Firewalls
Firewall rules can be created interactively via the UI, or via the Containership Cloud API. Documentation for how to use the API to create and manage rules can be found here.
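As a rough sketch of API-driven rule creation, the snippet below builds a JSON request body for a rule like the ones described above. The field names and the endpoint mentioned in the comment are assumptions for illustration; the linked documentation is the authority on the real request format:

```python
import json

# Hypothetical request body for creating a load balancer rule;
# field names are illustrative, not the documented API schema.
payload = {
    "type": "loadbalancer",
    "protocol": "tcp",
    "port": 80,
    "source": "0.0.0.0/0",
}

# This body would be POSTed to the firewall endpoint of the
# Containership Cloud API (exact path/auth per the docs).
body = json.dumps(payload, sort_keys=True)
print(body)
```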
Utilizing the UI is the easiest way to manage your firewalls. Let's take a quick visual tour of how to create a firewall rule to open up access to a Ghost blog launched from the Containership Cloud Marketplace.
We can see that we have our core services running, as well as the Ghost service.
By default, the firewall service denies all access to services, so a rule will need to be added to open them up.
When deployed from the marketplace, the Ghost blog service automatically has a load balancer set up to listen on port 80. We can add a rule that allows all traffic from 0.0.0.0/0 (the whole internet) to the load balancer. If we only want to allow connections from our current local IP address, we can click the "Use My IP" button, which autofills our public IP address.
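The difference between those two source choices is just the size of the CIDR range: 0.0.0.0/0 matches every IPv4 address, while a /32 for your own public IP matches only that one host. Python's standard library ipaddress module makes this easy to verify (the IP addresses below are examples):

```python
import ipaddress

everyone = ipaddress.ip_network("0.0.0.0/0")        # the whole internet
just_me = ipaddress.ip_network("198.51.100.7/32")   # example "Use My IP" result

print(ipaddress.ip_address("8.8.8.8") in everyone)        # any address matches /0
print(ipaddress.ip_address("8.8.8.8") in just_me)         # other hosts are blocked
print(ipaddress.ip_address("198.51.100.7") in just_me)    # only your own IP matches
```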
Once we click save, we can see our new rule listed.
And with that, we should be able to go to the load balancer section of our Ghost service and click the auto-generated DNS entry to see it working.
Voila! In just a few minutes I deployed my new blog on Linode and set up firewall rules to allow access from the internet.
What makes this so cool?
Aside from the fact that at Containership we care about security and instilling best practices from the get-go, this new feature is really neat because it works exactly the same way no matter where you decide to launch your cluster. Running on-prem? Firewalls. Running on the other 14 providers we currently support? You have firewalls. Want to use our snapshots to migrate between providers? Do nothing! Your firewall rules and load balancers will move right along with the rest of your services. Traditionally, implementing firewalls or security groups in a highly dynamic hosting platform is just plain hard; this new feature makes it ridiculously easy. Sign up now for our free tier and deploy your first services in minutes.