Note: This blog post is related to the legacy Containership Platform and the content is no longer relevant.
We've released the initial version of Tide, our cluster scheduler for running containerized batch jobs. It runs directly within ContainerShip and doesn't require an additional process on every server.
There are a few different kinds of workloads that commonly run on a server or group of servers.
- Long running - These are your web servers, databases, queues, APIs, etc. They usually do not stop running once they have started.
- Scheduled / Batch jobs - These are not always running; instead, they start up on a schedule, handle some work, and shut down when that work is finished.
Traditionally, scheduled jobs were often run via Cron on Unix-like systems. Cron takes a schedule and executes the job at the specified time.
For example, maybe there is a job that needs to execute every day at midnight, which runs a script that updates fields in a database.
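As a sketch, that midnight job would look like the following crontab entry (the script and log paths are hypothetical):

```
# Format: minute hour day-of-month month day-of-week command
# 0 0 * * *  ->  at 00:00 (midnight) every day
0 0 * * * /usr/local/bin/update-db-fields.sh >> /var/log/update-db-fields.log 2>&1
```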
Drawbacks of Cron
Cron is great, but there are some major drawbacks to using it which should be considered.
- Single Point of Failure - Cron is not distributed, so if the server you have added the jobs to goes down or is lost, your jobs go with it.
- Tedious Updates - Editing a cron job typically requires logging in to a server, copying over any updates to the script being run, and manually editing the job definition using the crontab command.
- Security - Since SSH access to the server with the job installed is required, it is difficult to let different team members add their own jobs without potentially giving them full access to the machine and every other job on it.
Why Tide is Better
- Tide runs jobs as containers - Running jobs via containers means that you can bundle your scripts and jobs into an image which can be shared and distributed just like the rest of your infrastructure components, and can easily be kept in version control.
- No single point of failure - Since the job definition and the script to be executed are not tied to an individual server, but instead live within the cluster scheduler, there is no fear of a server failure taking the job definitions and scripts down with it.
- Utilize Spare Resources - Tide jobs run on the ContainerShip cluster, right alongside your long-running tasks. This allows for higher utilization of resources you are already paying for and eliminates the need for a dedicated "cron" server.
- Convenient - New jobs can be created and updates can be made via CLI or API.
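Bundling a job script into an image might look like the following Dockerfile sketch (the base image, script name, and path are all illustrative, not a required layout):

```
# Hypothetical image for a Tide job; names are illustrative.
FROM alpine:3.4
RUN apk add --no-cache bash
COPY myscript.sh /path/to/myscript.sh
RUN chmod +x /path/to/myscript.sh
# Tide supplies the command at job creation time (--command /path/to/myscript.sh),
# so no CMD is strictly required here.
```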
Trying it out
Every ContainerShip cluster that you launch with ContainerShip Cloud has Tide installed by default.
Go ahead and launch a cluster first; it only takes about 10 minutes to get going. Check out the video below if you are more of a visual learner.
Once your cluster is launched, you can play around with Tide via the CLI.
First, install ContainerShip locally on your laptop to get access to the CLI:
npm install n -g && n 0.10.43
npm install containership -g
Install the Cloud and Tide plugins:
cs plugin add cloud tide
Authenticate with ContainerShip Cloud:
cs cloud login -u firstname.lastname@example.org
List the available clusters in MyAwesomeOrg:
cs cloud list-clusters
Switch to the cluster you want to use:
cs cloud use-cluster 95fe0ccfxffc470aav5e11e9a2f5b0cb
Add a new Tide job:
cs tide create-job example --image myorg/mycron:1.2.2 --command /path/to/myscript.sh --cpus 0.5 --memory 512 --env-var KEY=value --instances 1 --schedule "0 0 * * *"
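A quick breakdown of the flags used above (the resource units are an assumption based on common scheduler conventions, not confirmed by the Tide docs):

```
# cs tide create-job <name> ...
#   --image     myorg/mycron:1.2.2    container image to run for each execution
#   --command   /path/to/myscript.sh  command executed inside the container
#   --cpus      0.5                   CPU share reserved per container
#   --memory    512                   memory reserved per container (likely MB)
#   --env-var   KEY=value             environment variable passed to the container
#   --instances 1                     number of containers launched per run
#   --schedule  "0 0 * * *"           standard cron expression: midnight every day
```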
Once you have created the Tide job, you can view the list of jobs to confirm it was created:
cs tide list-jobs
And you can view the details of the job:
cs tide show-job <jobname>
After creation, you will see the job listed in your application list within the ContainerShip Cloud user interface. The job will be scaled down to 0 containers, but as soon as the scheduled time comes, it will scale up to the specified number of containers and execute.
Wondering what else you can do? Check out the Tide plugin GitHub repo here.