I travel to many DevOps and software engineering conferences, both large and small. When the topic of what DevOps really means arises, someone always brings up the argument:
DevOps is NOT a set of tools that you can string together and magically say "we do DevOps". DevOps is a methodology and set of processes that development and web operations teams adopt to work more closely together. The end goal is the elimination of manual, time-consuming labor in the software release process, more frequent releases, and fewer deployment failures.
And they are absolutely right.
However, with that being said, good luck ever doing DevOps or getting to Continuous Delivery without automation tools.
Why Automation Tools Are A Prerequisite
Let's set the scene: you're a development shop, a startup, or maybe even an enterprise. You have a good number of developers, and a handful of people who are known as "the ops team", or "the devops wizards", or maybe "the cloud ops gang", or in the enterprise they might be "infrastructure operations specialists". Maybe you have some really funny or awesome name for them (if you do, let me know in the comments), but either way they are the folks who take the code that the dev team writes and enable it to run reliably on servers in a production-grade environment.
- Someone writes code and adds new features
- A handoff happens
- Now it's someone else's problem (maybe yours?)
- Server wizardry happens
Sound familiar? What's the major problem there? The separation of concerns between ops and dev happens way too early in the release process. It starts with the handoff, and is a problem from there on. Having server wizards on staff is great, because the developers don't need to worry about dealing with it, but it is a major bottleneck that holds you back from DevOps bliss.
Self-Service: Developers Need It, Ops Sanity Requires It
DevOps is all about teams working together to shorten feedback loops.
Make some changes, get them committed, merged, and stood up in a testing environment to make sure everything is working as expected. Looks good? Get that slick new functionality up on production and in the hands of users. Run into a problem? Roll back, fix it in a hurry, and get the update released with no downtime.
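That loop is usually encoded in a CI pipeline. Here's a minimal sketch in GitLab CI syntax; the stage names, `make` targets, and branch name are placeholders, not a recommendation of any particular tool:

```yaml
# Hypothetical pipeline: every change runs the test suite,
# and the developer who wrote the code triggers the release.
stages:
  - test
  - deploy

run-tests:
  stage: test
  script:
    - make test                    # placeholder for the project's test suite

deploy-production:
  stage: deploy
  script:
    - make deploy ENV=production   # placeholder deploy step
  when: manual                     # released on the developer's schedule
  only:
    - master
```

The key property is that the whole path from commit to production is visible and self-service, so nothing waits on a handoff.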
The handoff between the developer who wrote the code, and the person who understands and maintains the hosting platform slows the feedback loop, often to a crawl.
The developer who wrote the code is the best person to release it (on their schedule and at their convenience), test it, monitor performance, and resolve any issues that happen to crop up.
So why isn't a workflow like this the default for every business (barring the case of security requirements surrounding separation of concerns)?
In so many cases, the server-wizard ops-gang cloud-ninja team are the only ones with access to, and an understanding of the hosting platform that everything runs on. There is no self-service for the devs or insight into what the heck is going on behind the scenes.
That might sound like a powerful position for the ops team to be in, but the reality is that it is an extremely painful and stressful situation. You're handed some custom software that you only somewhat understand, it's your responsibility to get it released, and then you have to maintain it and act as the first line of defense when things go wrong, often 24 hours a day.
OK, so why is this such a common scenario? (Hint: it's often related to tools.)
The Tools You Pick May Or May Not Matter
At the end of the day, all that matters is that everyone who cares about DevOps—ops people, developers, security folks, project managers, etc.—is satisfied with the process by which new software is released, and that it happens quickly enough to meet business needs. You can achieve that goal in many different ways. If you've reached something resembling self-service, congratulations. Don't bother over-optimizing early when it isn't necessary, because you can quickly get into the weeds.
In my experience at a previous job, we adopted a configuration management tool, plugged in Jenkins, wired together a ton of open source systems, and happily automated a large, multi-region, PCI-compliant server fleet running in AWS. We made it possible for non-technical people to add and deploy new projects, and ran a full suite of tests for every change. Things were working well, aside from one large issue: we had built something resembling self-service, but it was a black box to 95% of the company. Developers handed their code off to us, and then it was our problem. They could review logs, look at graphs, and see how things performed, but when it came to brass tacks, it was our responsibility now.
So what kept more people from getting involved in how the hosting platform we had built worked? The tools and custom configurations we used to connect everything made the learning curve too high.
Why Containers Have Helped
You knew I had to talk about containers, right? If you've been to a DevOps Days event in the last few years (if you haven't, I highly suggest you go), containers are practically a default topic that someone always wants to talk about during open spaces.
Containers change the separation of concerns and improve the handoff process between dev and ops. Developers are able to get much closer to ensuring a successful deployment of their updates before they hand things off to someone else by building the image that will run on the hosting platform locally. Developers are also able to become much more familiar with the tools being used, because there is a lot less custom hackery being done to stand up a scalable hosting platform. Standards are being developed that make much more sense to people who are focused on applications, not servers.
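To make that concrete, the developer can build and run the very same image locally that will eventually run on the hosting platform. A minimal sketch of such an image definition follows; the base image, port, and entry point are assumptions for a hypothetical small web service:

```dockerfile
# Hypothetical Dockerfile; base image, port, and app.py are placeholders.
FROM python:3-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code and declare how it runs
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

A developer can verify it locally with `docker build -t myapp .` and `docker run -p 8000:8000 myapp` before anything is handed off, so the artifact that ops receives is already known to start and serve traffic.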
Containers have spawned many competing solutions that offer a platform for getting to self-service right away. Because of that, the focus of the ops team and the development team can adjust to take advantage and produce much more efficient processes and results. So what does that look like in practice?
Getting To Infrastructure & Deployment Zen
Every SaaS or software startup sets out to develop a game-changing product. If they have any success along the way, they are inevitably going to run into scalability issues, both from an infrastructure perspective and from a people management and software release perspective. As teams grow and server footprints grow, the startup not only needs to stay focused on executing a product vision and roadmap, it also needs to become expert in cloud computing automation to succeed. These days, the company fastest to execute on a new idea has a distinct advantage. If you're still manually logging into servers or attempting to recreate the hosting-platform wheel, you're doing yourself and your company a disservice.
So how can choosing a hosting platform, rather than building one from scratch (something that has only recently become possible without running on someone else's hardware), help a startup that is focused on its product?
- Developers can manage their own server hardware needs through easy to use interfaces
- Developers can decrease feedback loop times by an order of magnitude, which means more code can be released faster
- Ops can stop trying to mash together a million tools to develop an in-house PaaS, something that is hard to do, hard to manage, hard to patch, and hard to keep secure
- Ops can focus on issues further up the stack, like application monitoring, performance, and giving developers valuable feedback that improves the product more quickly
- The business has the flexibility to scale into new global regions and expand as necessary to meet business goals, without time-consuming and expensive updates to a custom platform
- The business can invest more of its cash into hiring people to focus on making the product a winner