Configuration Management And Containers: Which Is Better?

Configuration management is best understood through a simple example. Suppose you, a new sysadmin, are asked to configure a web server. Drawing on your Linux knowledge, you install the required packages (perhaps even building a few of your own), edit some configuration files, load kernel modules, restart services, apply a few troubleshooting principles and document as you go. Even so, a few things don't work.

So you troubleshoot again, but this time you don't update the documentation. Done this way, one server takes X amount of time and two servers take twice that: the effort scales linearly with the number of servers, which is bad.

Next, you would save the configuration files in their finished state and write a script that runs all of those commands automatically. You would perhaps include some additional conditional logic so that the script can flag you if something isn't working properly. This way, you can configure more servers in parallel, because you've streamlined the configuration process.
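
To make that concrete, here is a minimal sketch of such a script in Python. It assumes a Debian-style host with apt and systemd; the nginx package, file paths and service name are illustrative placeholders, not prescriptions:

```python
#!/usr/bin/env python3
"""Configure one web server: install, drop in the saved config, restart.

A minimal sketch of the script described above. Assumes a Debian-style
host; the package, paths and service name are placeholders.
"""
import shutil
import subprocess
import sys

def run(cmd):
    """Run a command and flag it loudly if it fails (the 'additional
    logic' mentioned above), instead of silently moving on."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"FAILED: {' '.join(cmd)}\n{result.stderr}")

run(["apt-get", "install", "-y", "nginx"])                  # install packages
shutil.copy("configs/nginx.conf", "/etc/nginx/nginx.conf")  # finished-state config
run(["systemctl", "restart", "nginx"])                      # pick up the new config
print("server configured")
```

Run once per server (or pushed out over SSH), this is essentially what dedicated configuration management tools formalize: a repeatable description of the desired state, plus checks that report when reality drifts from it.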

There are a number of tools that can be used for configuration management, such as CFEngine, Puppet, Chef, Ansible and SaltStack. Over the past decade, configuration management has become increasingly necessary, and many dev agencies have in-house experts who intuitively know what type of configuration management matters on their projects, as well as when and how it needs to be implemented.

Cons of configuration management

Even in this scripted scenario, however, you still have no way of managing the servers once they are deployed. Other drawbacks include the time you need to spend replicating the process and the fact that configuration management skills often lie with only a few team members, making it difficult to deploy a project if those people are already at capacity.

SEE ALSO: Configuration Management: What It Is And What It's Not

Configuration management also makes it hard to determine the true cost of the services being delivered, and it is usually very difficult to determine which modifications to the infrastructure were authorized. This can lead to security weaknesses and risks that go undetected for long periods of time.

Specific cons of configuration management that CTOs and developers will relate to include:

  • Process integration with change/incident/problem management is imperative for checks and balances on the data.
  • An established, reliable asset and inventory management process must already be in place to ensure quality data when CI data is entered and updated.
  • No single tool can deliver full functionality from the logical layer (business model) all the way down to the physical layer (data center infrastructure) for every CI type across an entire enterprise. Multiple tools will be necessary, and their proper integration is vital.
  • Mapping desktop connectivity and relationships to production applications is a considerably larger-scoped effort that requires broader audience participation.
  • The integration effort is not a simple 'install the tool and run it' exercise. The initial design will need to be revisited regularly as scope and business needs expand and contract.

Creating containers: a better solution

After going through the trial and error process of configuration management, many dev agencies and IT teams are moving on to containers because they eliminate many of the complexities involved in working in different IT environments. Containers have a few distinct advantages over configuration management systems when it comes to deployments.

Developers find it easier to wrap their heads around containers because an application's main dependencies are bundled inside the container image. No additional configuration or preparation is needed on each server during deployment, which introduces a new level of standardization.
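
As a hedged illustration, here is what that looks like with the Docker SDK for Python (the `docker` package); the image name and port mapping are arbitrary examples, and the only host requirement is a running Docker daemon:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connect to the local Docker daemon

# Everything the application needs ships inside the image, so this
# same call works unchanged on any host that runs Docker -- no
# per-server package installs or config file edits required.
container = client.containers.run(
    "nginx:1.25",            # illustrative image
    detach=True,             # run in the background
    ports={"80/tcp": 8080},  # map host port 8080 -> container port 80
)
print(container.short_id, container.status)
```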

SEE ALSO: Definitions From A Developer: What Is A Container

Here are some more reasons DevOps teams are turning to containers:

  • Containers make "frictionless" software deployments possible through their seamless scalability. Deployment models that rely on configuration management methodologies simply can't respond to business requirements for rapid, large-scale deployments of business technology.
  • Thanks to microservice architectures, developers can create easily customizable and manageable applications that can be improved, reused and scaled, improving the user experience while reducing overheads and improving IT efficiency.
  • Independence from the underlying infrastructure gives both Dev and Ops teams the agility they need to provision solutions and services at peak demand. In the event of an infrastructure incident, system administrators can fire up new containers on virtually any infrastructure to ensure continuity (as sketched after this list), without spending hours, if not days, troubleshooting servers or monolithic applications with innumerable dependencies and complexities.
  • Containers also make scaling infrastructure across old and new platforms far easier than before. Companies looking to build a hybrid cloud solution that caters to both legacy and newer applications can do so while keeping the complexities of interweaving old and new systems to a minimum.
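
As a rough sketch of the incident-recovery point above (again using the Docker SDK for Python; the container and image names are hypothetical), replacing a failed container becomes a single "run a fresh one" operation rather than a troubleshooting session:

```python
import docker

client = docker.from_env()

def replace_if_down(name: str, image: str) -> None:
    """If the named container isn't running, discard it and start a
    fresh one. The container is disposable, so recovery means
    re-running the image, not debugging the old instance."""
    try:
        container = client.containers.get(name)
        if container.status == "running":
            return  # healthy, nothing to do
        container.remove(force=True)  # clear out the dead container
    except docker.errors.NotFound:
        pass  # never started, or already cleaned up
    client.containers.run(image, name=name, detach=True)

replace_if_down("web", "nginx:1.25")
```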

Containers lighten the load of entire DevOps teams

The level of overhead reduction that containers introduce is hard to ignore for DevOps teams looking to evolve their software delivery pipelines. From their light weight and portability to their predictability across different platforms, containers are making DevOps teams more agile in their efforts to deploy technology according to the rapidly changing demands of the businesses they serve.
