Containers are massively popular and widely used in today's world. This can be seen in the fact that almost all modern applications have a containerized version available.

Container-based technologies, such as Docker, OpenVZ, and LXC (Linux Containers), have become an alternative to traditional virtual machines because of their agility.

The primary motivations for the increasing adoption of containers are their convenience in encapsulating, deploying, and isolating applications; their lightweight operation; and their efficiency and flexibility in resource sharing. Instead of installing the operating system and all the necessary software in a virtual machine, a Docker image can be built from a Dockerfile, which specifies the steps to assemble the image and the command to run when a container starts. Containers also save storage by allowing different running containers to share the same image: a new image can be created on top of an existing one by adding another layer.
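As an illustration, a minimal Dockerfile might look like the following sketch (the base image, file name, and command are purely illustrative, not taken from any particular project):

```dockerfile
# Start from an existing image; this base layer is shared with every
# other image built on the same base. (python:3.12-slim is just an example.)
FROM python:3.12-slim

# Each instruction adds a new layer containing only its changes.
COPY app.py /app/app.py

# The command to run when a container is started from this image.
CMD ["python", "/app/app.py"]
```

Building this with `docker build` reuses the cached base layers, so only the small application layer is stored anew.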

Comparison with traditional virtual machines

Compared to traditional virtual machines, containers provide more flexibility and versatility to improve resource utilization: hardware resources, such as CPU and memory, are returned to the operating system as soon as a container releases them. Because there is no virtualization layer in a container, it also incurs less performance overhead on the application. As a result, many new applications are packaged as containers.

There are three major differences between these two technologies.

First, a container is more lightweight than a VM. A container includes only the executables and their dependencies, and different containers on the same machine share the host OS (operating system) kernel, whereas a VM contains a full OS and different VMs do not share one. A VM can run an OS that differs from its host machine, while a container must use the same OS as the host (with an exception on Windows 10, where Linux containers can run with the help of WSL).

Second, a hypervisor, such as VMware ESXi, Microsoft Hyper-V, or KVM, is necessary in a VM environment, while none is required for containers. A VM needs to behave like an independent machine that controls all of its resources; in fact, however, it runs in a non-privileged mode and cannot execute many privileged instructions. A hypervisor is therefore needed to translate such VM instructions into instructions that can be executed by the host. In contrast, a container does not need to execute any privileged instruction; it communicates with the OS through system calls, so no extra layer is required in between.

Third, each VM has its own image file, while different containers may share some of their image layers. More specifically, container images are created in a layered manner, in which a new image can be built on an existing image by adding a layer that contains only the difference between the two. The image files of different VMs, by contrast, are isolated from each other.
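The layered lookup described above can be sketched in Python with `collections.ChainMap`. This is a conceptual model of layered images, not how Docker actually stores them, and the file paths and contents below are purely hypothetical:

```python
from collections import ChainMap

# Each "layer" maps file paths to contents (hypothetical data).
base_image = {"/bin/sh": "shell binary", "/etc/os-release": "Alpine 3.19"}

# A new image adds only the difference: one extra layer on top of the base.
app_layer = {"/app/server.py": "application code"}
app_image = ChainMap(app_layer, base_image)

# The new image sees both its own layer and everything in the base...
print(app_image["/app/server.py"])  # found in the top layer
print(app_image["/bin/sh"])         # resolved from the shared base layer

# ...while the base image itself is untouched and can still be shared.
print("/app/server.py" in base_image)
```

Lookups fall through from the top layer to the base, which is exactly why two images built on the same base can share its storage.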


As a lightweight solution, a container image is usually tens of MB in size, while a VM image can take several GB.
Also, to run the same application, a container usually takes less hardware resources, since it does not need to maintain a full OS. Furthermore, since there is no hypervisor, containers are able to provide better application performance, especially when the application interacts heavily with I/O devices.