Certified Technology Portal

Frequently Asked Questions

Containers and Virtualization

Does a container run on top of a hypervisor?

It can, but it does not have to. Container hosts can run inside a virtual machine; in fact, the OpenShift PaaS solution is deployed in this manner.

What are advantages of using hypervisors instead of containers?

Hypervisors will remain critical in the virtualization footprint and to many forms of cloud. For the most part, hypervisor virtualization is operating-system agnostic with respect to guest OSes. This makes it very effective for server consolidation when you are consolidating existing workloads running on multiple OSes into a virtualized environment. In addition, hypervisors offer full control over the guest operating system and its parameters, as well as dedicated resources (CPU, RAM, and disk) for each virtual machine.

What are the advantages of using containers instead of hypervisors?

Containers provide an attractive means of application packaging and delivery because of their low overhead and greater portability. Instead of virtualizing the hardware and carrying forward multiple full stacks of software from the application down to the operating system (resulting in considerable replication of core system services), Linux containers rest on top of a single Linux operating system instance. Each container has fencing between itself and other containers but shares the same core OS kernel underneath. These lightweight containers, portable among certified container hosts, enable applications and their dependencies to be packaged together, making the containers self-sufficient to run across any certified host environment.
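To make the "self-sufficient" idea concrete, the sketch below (illustrative only, not Red Hat code; the rootfs directory and program name are hypothetical) shows the simplest form of a bundled userspace: given a directory that contains an application together with every library it needs, the process is confined to that directory tree and runs only against the bundled files. Image-based container runtimes build on this idea, adding namespaces, cgroups, and SELinux on top.

```go
// rootfs_run.go - minimal sketch of running an application against a bundled
// userspace (a "root filesystem" directory holding the app and its
// dependencies). Linux-only; requires root privileges.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	if len(os.Args) < 3 {
		fmt.Fprintln(os.Stderr, "usage: rootfs_run <rootfs-dir> <command> [args...]")
		os.Exit(1)
	}
	rootfs, command := os.Args[1], os.Args[2]

	// Confine the process to the bundled filesystem tree.
	if err := syscall.Chroot(rootfs); err != nil {
		panic(err)
	}
	if err := os.Chdir("/"); err != nil {
		panic(err)
	}
	// Replace this process with the packaged application; from here on, only
	// the binaries and libraries shipped inside rootfs are visible to it.
	if err := syscall.Exec(command, os.Args[2:], os.Environ()); err != nil {
		panic(err)
	}
}
```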

How do Linux containers and virtualization compare in terms of security? What is the level of isolation and control versus a hypervisor such as KVM?

Virtualization provides a higher level of security by virtue of running the workload inside a guest operating system that is completely isolated from the host operating system, and the virtual machines can be confined using security technology such as the Red Hat Enterprise Virtualization hypervisor. Containers, by virtue of their architecture, run on a shared host operating system; the host provides isolation for each container from the others and controls interactions between the host environment and the containers. The Linux kernel subsystems that work in concert to isolate containers include cgroups for resource and capacity management, process and network namespaces for logical grouping, and SELinux to enforce security policies. However, an application running in a container can be enabled to have access to services on the host system.
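As a rough illustration of the namespace piece of that answer, the following Go sketch (hypothetical example code, Linux-only, run as root) re-executes itself inside new UTS and PID namespaces: the child changes its hostname without affecting the host and sees itself as PID 1, yet it still runs on the host's kernel. Production container runtimes combine this with mount, network, IPC, and user namespaces, plus cgroups and SELinux labels.

```go
// namespaces.go - minimal sketch of kernel namespace isolation on a shared kernel.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "child" {
		// Inside the new namespaces: a private hostname and a private PID space.
		syscall.Sethostname([]byte("demo-container"))
		host, _ := os.Hostname()
		fmt.Printf("in namespaces: hostname=%s pid=%d\n", host, os.Getpid())
		return
	}

	// Re-exec this binary as the "child" in new UTS and PID namespaces.
	cmd := exec.Command("/proc/self/exe", "child")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
	}
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
}
```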

When is it appropriate to use containers vs hypervisors?

Containers and hypervisors both have pros and cons, and choosing which to use depends heavily on the applications and workloads that the customer would deploy. Virtualization provides flexibility by abstracting from hardware, while containers provide speed and agility with lightweight application isolation. When considering which to use, consider the type of workloads that customers are planning to run. For example, an organization running the services that make up modern web and Linux applications, such as MongoDB and the Apache HTTP Server, is better suited to Linux containers, because it might choose to run multiple copies of the same application at times. Running a Windows workload would require virtualization and booting a Windows guest OS.

Some factors to consider are:

  • start-up speed
  • light-weight deployments
  • application-centric packaging and DevOps
  • workload deployments
  • Linux vs. Windows applications
  • security required for the workload

It is likely that many will conclude that both containers and hypervisors deserve first-class status, as they solve different problems.

Are containers the new wave of virtualization?

No. Linux containers and virtualization are mostly complementary technologies that can coexist and work in concert to improve efficiency across heterogeneous environments, whether virtualized, bare metal, on private clouds, or on public clouds. Image-based containers provide the capability to package the application along with its userspace runtime environment, bringing portability of applications among a set of certified container hosts. Running containers inside a virtual machine is a use case that Red Hat customers are interested in deploying. For example, the OpenShift Platform-as-a-Service (PaaS) solution runs containers in Red Hat Enterprise Linux virtual machines on top of Red Hat Enterprise Linux OpenStack Platform at the virtualization layer. While container hosts can run on bare metal, most often it is not an either/or choice for customers. The recommended practice is for organizations to evaluate their workloads carefully to see how they can benefit from both technologies.

How are containers and virtualization similar? How do they differ?

While similar in concept, virtualization and containerization differ significantly in how they enable multi-tenancy and server consolidation. Virtualization provides a virtualized hardware environment in which a guest OS runs, whereas a container host merely provides a logically isolated runtime environment within the same OS instance. Containers therefore do not require the overhead of booting, managing, and maintaining a guest OS environment, though the portability of containerized workloads is limited to compatible container hosts.

Virtualization has matured to include many resiliency capabilities, such as live migration, high availability, SDN, and storage integration, which are not yet as mature in containerization. That said, Red Hat sees virtualization and containerization as complementary technologies, with virtualization abstracting the physical resources for compute, network, and storage, and containerization providing superior application delivery capabilities. Using them together, for example by running Red Hat Enterprise Linux Atomic Host instances in virtual machines provided by a Red Hat Enterprise Linux OpenStack Platform deployment, combines the best of both worlds.

Both virtualization and Linux containers provide application isolation, allowing multiple applications to run on the same physical system with distinct, isolated runtime environments. Consequently, both technologies can be used to aggregate multiple workloads on a single physical system.
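On the container side, the resource isolation that makes this kind of aggregation safe comes from cgroups. The sketch below is illustrative only (the group name is hypothetical) and assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup, with the cpu controller enabled and root privileges; it caps an existing process at roughly 20% of one CPU by writing the cpu.max control.

```go
// cgroup_limit.go - illustrative sketch: cap a process's CPU share with cgroup v2.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: cgroup_limit <pid>")
		os.Exit(1)
	}
	pid := os.Args[1]
	if _, err := strconv.Atoi(pid); err != nil {
		fmt.Fprintln(os.Stderr, "pid must be a number")
		os.Exit(1)
	}

	// Creating a directory under the cgroup v2 mount creates a new cgroup.
	group := "/sys/fs/cgroup/faq-demo" // hypothetical group name
	if err := os.MkdirAll(group, 0o755); err != nil {
		panic(err)
	}
	// Allow 20ms of CPU time per 100ms period (about 20% of one CPU).
	if err := os.WriteFile(filepath.Join(group, "cpu.max"), []byte("20000 100000"), 0o644); err != nil {
		panic(err)
	}
	// Move the target process into the group so the limit applies to it.
	if err := os.WriteFile(filepath.Join(group, "cgroup.procs"), []byte(pid), 0o644); err != nil {
		panic(err)
	}
	fmt.Printf("pid %s limited to ~20%% of one CPU via %s\n", pid, group)
}
```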

Linux containers and virtualization differ in how they work and in the use cases for which they are best suited. Virtualization provides abstraction from the hardware and host operating system (OS) through a virtual hardware environment in which a guest OS runs; the guest can be different from the underlying hypervisor OS (for example, Windows on top of a Linux KVM hypervisor). Virtualizing the hardware environment creates a high degree of flexibility, but at the expense of requiring a separate OS instance for each virtual machine, which introduces overhead. Linux containers provide isolated runtime environments that all run on a shared Linux host operating system. This allows applications to live alongside all their runtime dependencies in separate, confined environments, with minimal overhead, because they share a single OS with its hardware support, resource management, and security capabilities.
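One quick way to see the shared-kernel difference for yourself: the tiny program below (an illustrative sketch, not part of any product) prints the running kernel release. Run it on a container host and then inside any container on that host and it reports the same kernel; run it inside a virtual machine and it reports whatever kernel the guest OS booted.

```go
// kernel_check.go - print the release of the kernel this process is running on.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// /proc/sys/kernel/osrelease exposes the running kernel's release string.
	data, err := os.ReadFile("/proc/sys/kernel/osrelease")
	if err != nil {
		panic(err)
	}
	fmt.Println("kernel release:", strings.TrimSpace(string(data)))
}
```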
