Containers vs. Virtual Machines
By Kevin Vogl
Containers have taken the Open Source DevOps world by storm, especially in cloud environments like Microsoft Azure and Amazon AWS, enabling developers and IT admins to build and ship applications at a previously unheard-of rate. As with most progress, however, containers are finding it difficult to break into production environments. So we’ve put together this quick and easy guide to getting to know containers. Right now, containers are largely an Open Source, Linux-based solution, but with the release of Windows Server 2016, they will go mainstream in the very near future.
What is a container?
A container is a packaging platform for software that relies on a specific kernel (the core of the operating system, which manages hardware and executes system calls). Each container uses the Linux host’s operating system kernel, but bundles its own copy of the files that make up its configuration and environmental dependencies. Anything you could install on a server can also be placed in a container. This allows developers to push software to production environments while preserving the exact same run-time environment used during the development cycle. Containers have become the standard for DevOps teams who need to produce software that is scalable, ships quickly, and can run in any environment.
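To make this concrete, here is a minimal sketch of how such a self-contained package is commonly described, using a Dockerfile. The base image, file names, and dependency here are illustrative assumptions, not something from this article:

```dockerfile
# Hypothetical example: package an application with its dependencies.
# The base image supplies a filesystem and runtime, but NOT a kernel --
# at run time the container uses the host's Linux kernel.
FROM ubuntu:16.04

# Bake the environmental dependencies into the image
RUN apt-get update && apt-get install -y python

# Self-contain the application's own files and configuration
COPY app.py /opt/app/app.py

# Command the container runs when it starts
CMD ["python", "/opt/app/app.py"]
```

Because everything the application needs ships inside the image, the same container behaves identically on a developer’s laptop and in production.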
What is the difference between a container and a more traditional VM?
Containers are more lightweight and make much more efficient use of always-limited memory. Containers run as ordinary processes, managed by a long-running daemon (a program that runs continuously to handle service requests), and all of them share the host’s operating system kernel. They therefore start up much faster than VMs: the operating system is already running, whereas each VM must boot its own guest OS before it can do any work. You can see in the graphic below how much “lighter” containers are than traditional VMs:
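One way to see the shared-kernel point for yourself, assuming Docker is installed on a Linux host (the `alpine` image is just an illustrative choice), is to compare kernel versions:

```shell
# Kernel version reported on the host
uname -r

# Kernel version reported inside a freshly started container --
# it matches the host, because no guest OS was booted
docker run --rm alpine uname -r
```

A VM on the same host would instead report whatever kernel its guest OS booted, which is exactly the extra layer that costs it memory and start-up time.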