The world of web hosting is moving quickly. It’s easy to get overwhelmed by the options and start to question whether the way you’re currently doing things is the best fit for your business. In this post, I’m going to break down the difference between a few competing options:
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
- Containers as a Service (CaaS)
By the end of the post you should have a solid understanding of:
- What each of these terms actually means
- Why it should matter to you
- Which solution makes the most sense for your use case
Starting At The Beginning — Dedicated Servers
At the root of all web hosting infrastructure you have data centers filled with servers, switches, routers, storage arrays, and other network gear. Every other option we’re going to talk about, whether it is IaaS, PaaS, or CaaS, is still the same thing under the hood: lots of servers in a big room. They just add layers of abstraction on top to make management easier and to automate tasks that used to be slow or manual.
Dedicated servers, also known as bare metal, have their pros and cons:
- Performance — You’re using the computer directly without any added overhead of abstraction layers like virtualization.
- Reliability — With no layers of abstraction and virtualization there are fewer things that can go wrong.
- Resource Utilization — When using dedicated servers, your processes are not fighting with other virtual machines or processes for resources like CPU, Memory, and Bandwidth.
- Management Hassle — You can’t easily clone a dedicated server to create more of them; there is no concept of an AMI or image for a dedicated server. Want to expand some part of your infrastructure to multiple servers? I hope you’re using configuration management, or you’d better dust off your trusty old friend rsync.
- Cost — In most cases when using dedicated servers you’re paying for the hardware up front and paying to house it in a colocation facility, or you’re leasing them from a hosting provider. Either way, you cannot easily terminate the server when you’re not using it to save costs, so you need to be much more careful about your financial planning.
- Shared OS — All of your processes and applications on a dedicated server run on the same operating system. For scalability purposes you generally want each server to handle a single task, like being a web server or a database server. Running everything together on the same server makes it harder to optimize the OS for each use case.
Making Things Easier — Virtualization
Despite the pros of dedicated server hosting, the cons greatly outweigh them for most use cases. With the speed of deployments increasing and companies fighting to be first to market or outperform their competition, virtualization was a natural next step in the evolution of data centers.
What is virtualization?
To put it simply: Virtualization lets you split up your dedicated server into smaller virtual servers that only have access to part of the total resources of the physical server.
You can take a physical server with 2 quad-core CPUs (8 cores total) and 16GB RAM and turn it into 8 virtual machines with 1 CPU core and 2GB RAM each.
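The split described above is simple arithmetic: the host can run as many VMs as its most constrained resource allows. A minimal sketch, with a hypothetical function name chosen for illustration:

```python
# Hypothetical sketch of how a hypervisor's capacity planning works:
# carve one physical server into equally sized virtual machines.

def partition_server(total_cores, total_ram_gb, vm_cores, vm_ram_gb):
    """Return how many VMs of the given size fit on the physical host."""
    # The binding constraint is whichever resource runs out first.
    return min(total_cores // vm_cores, total_ram_gb // vm_ram_gb)

# 2 quad-core CPUs = 8 cores and 16GB RAM -> 8 VMs of 1 core / 2GB each
print(partition_server(total_cores=8, total_ram_gb=16, vm_cores=1, vm_ram_gb=2))  # 8
```

Real hypervisors also reserve some resources for themselves and often oversubscribe CPU, so the real math is messier, but the core idea is the same.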
Some examples of Virtualization technologies you may have heard of before are Xen, KVM, VMware, and Hyper-V, but there are many others.
- You can clone virtual machines when you need more of them or if you want to share them
- You can backup virtual machines images for safe keeping and disaster recovery
- Using virtualization means added overhead and potentially degraded performance
- Generally speaking, virtual machine images are not portable across hosting providers and virtualization technologies
- Dealing with virtual machines is still a manual effort and requires management time and expertise.
Evolution — Virtualization becomes IaaS
Did you know that, according to one survey, 51% of people think “The Cloud” is affected by the weather? Did you know that when people are talking about “Cloud” they are really talking about Infrastructure as a Service?
So what is Infrastructure as a Service?
- Virtualization of someone else’s hardware managed via an API
- Programmatic access to compute, storage, and network resources and configuration.
- Request a new virtual machine when you need it, terminate it when you’re done with it, and only pay for what you use.
- Treat data center resources like a utility.
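The “utility” point above is the economic heart of IaaS: you bill by the hour (or second) instead of by the month. A minimal sketch of that model, with a made-up hourly rate rather than any provider’s real pricing:

```python
# Hypothetical sketch of the pay-for-what-you-use model described above.
# The rate is illustrative, not any provider's real pricing.

HOURLY_RATE = 0.05  # dollars per instance-hour (made-up number)

def monthly_cost(instance_hours):
    """Bill only for the hours an instance actually ran."""
    return round(instance_hours * HOURLY_RATE, 2)

# A dedicated server bills for the full month whether you use it or not;
# an IaaS instance run 10 hours a day for 30 days bills only those hours.
print(monthly_cost(24 * 30))  # always-on: 36.0
print(monthly_cost(10 * 30))  # 10 h/day:  15.0
```

Terminate the instance when you’re done and the meter simply stops, which is exactly what makes the financial planning so much easier than with dedicated hardware.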
Amazon pioneered this space with the launch of Amazon Web Services (AWS) and its EC2 product in 2006.
Why was this evolution so important?
It used to be that when you wanted to launch an online business, you had to do quite a bit of planning to ensure that you had data center and rack space, enough servers and storage to handle your growth, and enough bandwidth to support all of your users’ traffic. Planning for that wasn’t easy, especially for very early businesses with uncertain futures and growth trajectories.
1. It gave developers superpowers:
- Conceive an idea and launch it immediately
- If it was a success, easily grow your server footprint
- If it was a failure, shut it down and incur nominal costs
2. It gave rise to more powerful data center automation:
- Fully automated infrastructure became an achievable reality, and gave rise to the concept of Infrastructure as Code.
- Autoscaling was not possible in the past, but now web infrastructure could take on a lifelike quality, growing and shrinking itself as demand varied.
- The automation of different areas of the data center followed shortly, with storage, networking, and tons of other systems that used to require specialty skill sets getting the API treatment, and opening up larger possibilities to the global developer community.
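The autoscaling idea above boils down to a feedback loop: measure load, compare it to thresholds, and adjust the instance count. A minimal sketch of such a policy; the function name and thresholds are illustrative assumptions, not any provider’s actual algorithm:

```python
# Hypothetical sketch of a simple autoscaling policy: grow the fleet
# when average CPU is high, shrink it when low, within fixed bounds.

def desired_capacity(current, avg_cpu, scale_up_at=70, scale_down_at=30,
                     min_size=1, max_size=10):
    """Return the new instance count for one scaling evaluation."""
    if avg_cpu > scale_up_at:
        current += 1   # demand is high: add an instance
    elif avg_cpu < scale_down_at:
        current -= 1   # demand is low: remove an instance
    # Never scale outside the configured fleet bounds.
    return max(min_size, min(max_size, current))

print(desired_capacity(4, avg_cpu=85))  # 5
print(desired_capacity(4, avg_cpu=20))  # 3
print(desired_capacity(1, avg_cpu=10))  # 1 (never below min_size)
```

Real services such as AWS Auto Scaling run a loop like this against metrics from a monitoring system, with cooldown periods to avoid thrashing, but the decision logic is conceptually this simple.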
Check out Part 2!
In part 2 of this series we discuss PaaS, the guts that make it up, and how it relates to Docker and containers. We then move on to discussing container hosting platforms and Containers as a Service.