Frequently Asked Questions

Container Certification

How does Container Certification work?

You must package your application as a container, using a base image layer provided by Red Hat, and validate its functionality using your standard test methodology. Then you submit it for review using the Red Hat certification tool. Red Hat will run a series of tests to make sure it complies with the Container Certification Policy, and generate a PASS/FAIL report. Upon successful completion, Red Hat will publish the image in the Red Hat Container Registry and list it in the certification catalog.
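
For illustration, the packaging and pre-submission testing steps can be scripted. The sketch below uses the docker SDK for Python (3.x or later); the directory, image tag, and test command are hypothetical placeholders and are not part of the Red Hat certification tooling.

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Build the application image. The Dockerfile under ./myapp is expected to
# start with a FROM line referencing a Red Hat-provided base image; the path
# and tag here are illustrative placeholders.
image, build_logs = client.images.build(path="./myapp", tag="example/myapp:1.0")

# Exercise your standard test methodology against the freshly built image
# before submitting it through the Red Hat certification tool.
# "run-tests" is a hypothetical entrypoint baked into the image.
output = client.containers.run(image.id, "run-tests", remove=True)
print(output.decode())
```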

Where can my container be deployed?

Certified containers are compatible with any container host supported by Red Hat, such as Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host.

Can I use the Red Hat certification logo to promote my product?

Absolutely. The logo was created for that reason. Once your container has been successfully certified, you can use this logo in your own product web pages or collateral material. You must adhere to the logo usage guidelines.

What is the Red Hat Container Registry?

This is a service from Red Hat, available to all partners with certified containers. Red Hat provides the registry infrastructure, so your customers can download your application containers using standard docker commands. 
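
The registry is consumed with standard docker commands (for example, docker pull). Purely as an illustration, the same pull can be done programmatically with the docker SDK for Python; the repository path below is a placeholder, not an actual catalog entry.

```python
import docker

client = docker.from_env()

# Pull a certified application image from the partner registry.
# "registry.example.com/partner/app" stands in for the repository path
# published in the certification catalog.
image = client.images.pull("registry.example.com/partner/app", tag="latest")
print(image.id, image.tags)
```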

How do I certify my container?

Once you have joined the Container Zone as a partner, follow the Align-Build-Certify link. We’ll walk you through the process to build and submit your image for certification.

Can I certify containers built on a non-Red Hat base layer?

No. The goal of the Red Hat Container Certification is to provide assurance that the application containers are compatible and can be supported. Red Hat does not have plans to support non–Red Hat–based containers running on Red Hat hosts.

Can I run certified containers on other Linux distributions?

While this may be technically feasible, it is not a configuration supported by Red Hat. There are many interdependencies between a container base layer and the underlying host, and Red Hat cannot guarantee compatibility in such scenarios. 

Why should I certify my container?

To make sure that your application incorporates layers and packages from Red Hat in a way that can be fully supported. This guarantees that your application container can be deployed across all supported container hosts from Red Hat and across a variety of deployment models: physical hosts, virtual hosts, private clouds, and public clouds.

General Zone Information

How do I collaborate with Red Hat on customer support?

Red Hat offers partners participation in a collaborative support group. Through this group, partners can engage with Red Hat’s Global Support Services to assist in troubleshooting issues that impact mutual customers.

Why should ISVs choose to work with Red Hat on deploying applications via containers?

Red Hat has a proven track record of service and support for large enterprise customers, giving ISVs and their customers peace of mind that Red Hat can maintain the runtime environment. ISVs and their customers can trust that they are running a stable, reliable and secure runtime environment based on Red Hat Enterprise Linux. ISVs can be assured that Red Hat will develop and foster a broad ecosystem of partners in this area, which will create a network effect and overall benefit for all ISVs who participate.

Red Hat is delivering all of the resources needed for the ISV to quickly containerize their application, including: Red Hat Enterprise Linux base container image, tooling for building application containers, and the infrastructure necessary to patch and maintain application containers. Red Hat is providing multiple container hosts for ISV application containers: Red Hat Enterprise Linux 7, Red Hat Enterprise Linux Atomic Host, and Red Hat’s OpenShift Platform-as-a-Service offering.

Red Hat is providing a certification program that will give ISVs and customers the confidence that application containers built on Red Hat Enterprise Linux will operate correctly on all Red Hat container hosts. Red Hat is hosting an application container directory where customers can browse and discover new ISV solutions.

Why will software companies (ISVs, etc.) want to distribute applications in containers?

Companies that deploy their applications in containers benefit from:

  • Faster and easier customer deployments – application containers reduce the installation and configuration required by end customers, meaning shorter sales cycles and improved customer satisfaction.
  • Simplified support and maintenance – applications are isolated in their own containers with a known runtime environment, making troubleshooting issues and providing maintenance updates easier for the ISV.
  • Instant portability – the application container can be deployed, unmodified, on any certified container host, allowing the ISV to deliver an application that runs on physical hardware, in a virtual environment, or in IaaS or PaaS clouds.
  • Lower development costs – the application container includes everything the application needs to run, regardless of the environment on the underlying container host, so the ISV only needs to develop, test and certify against a single runtime.
  • Narrower scope of support when compared to software or virtual appliances – with a software or virtual appliance, the ISV must take responsibility for maintaining both the application and the runtime environment. Application containers allow the ISV to just maintain their application, while the provider of the runtime environment maintains the runtime environment.

What’s the elevator pitch?

Red Hat Enterprise Linux 7 redefines the enterprise operating system and how applications interact with it to speed application delivery across physical, virtual, and cloud environments using lightweight, portable containers. With Red Hat Enterprise Linux 7, you gain the flexibility to quickly adapt to demands for business agility and IT efficiency on a stable foundation known for its mission-critical reliability and military-grade security.

This is why more than 90% of Fortune 500 companies trust Red Hat Enterprise Linux for their critical business infrastructure. Red Hat has grown beyond being the leading Linux provider and is now a leading cloud infrastructure provider with a full portfolio of solutions including virtualization, middleware, storage, PaaS, management, and OpenStack.

What's Red Hat's relationship with Docker, the company?

Docker, the company, provides services such as Docker Hub, which focuses on the distribution of free and open source containers, as well as developer consulting services. We see these as complementary to Red Hat's investment in Docker, the technology, and to our Certified Container Strategy.

Red Hat and Google Kubernetes

When will Kubernetes capabilities be available?

Kubernetes capabilities are available as part of Red Hat Enterprise Linux Atomic Host.

What can Red Hat Enterprise Linux customers use today?

Many Red Hat Enterprise Linux customers can leverage container technology today, with support for Docker containers now included in Red Hat Enterprise Linux 7. Customers who are also entitled to Red Hat Enterprise Linux Atomic Host gain access to Kubernetes orchestration and scheduling tools, which provide new options for managing a Docker containerized environment.

Where will these container orchestration technologies be available?

Container orchestration technologies will be developed upstream in their respective projects, such as Kubernetes, the Docker project, and Apache Mesos. These technologies are critical for Red Hat customers, as they increasingly adopt Linux containers for their cloud application environments. These upstream technologies will most likely be incorporated in Red Hat commercial products down the road, as is the case with much of our upstream work.

Where does Apache Mesos fit?

Apache Mesos manages resources in a cluster to simplify the complexity of running applications on a shared pool of servers. Mesos is being closely evaluated for the scheduling benefits it brings, and there is discussion about adding Mesos scheduling as a plugin to Kubernetes.

Were other container orchestration and management tools considered?

Red Hat development teams have investigated and continue to evaluate new tools coming from Docker and the ISV community. Related projects and tools such as Docker libswarm, Apache Mesos, Yarn, Fleet, and more are driving rapid innovation in this space. Our collaboration with Google does not preclude Red Hat from also collaborating with these other upstream communities and potentially integrating additional tools and capabilities into our commercial products and services. We felt that, at this time, initiating a focused upstream collaboration around Kubernetes and Project Atomic made sense, and that Google’s container capabilities and focus made this even more compelling.

Why was Kubernetes selected?

Google launched the Kubernetes project to enable users to easily manage, monitor and control containerized application deployments across a large cluster of container hosts. The project benefits from Google’s vast knowledge of running containers at scale in their own datacenters. Given the strong alignment between the goals of the Kubernetes project and the goals of Project Atomic and Red Hat in general, we felt that collaborating with Google made sense. The early traction we’ve seen in the Kubernetes community combined with Google’s commitment to the project were also key factors.

Why is our work with Google and Kubernetes important?

Red Hat announced support for the Docker container format in Red Hat Enterprise Linux 7 and launched Project Atomic to redefine the operating system for the shift to application-centric IT and provide a new lightweight container host. In most use cases, the requirement to orchestrate and manage containers at scale, across multiple hosts is critical. The Kubernetes project, launched by Google, combined with work we are doing in Project Atomic, Red Hat Enterprise Linux and related products and projects will address this requirement.

What did Red Hat and Google announce on July 10, 2014?

Google recently released a container orchestration open-source project, Kubernetes, which has received significant interest from the open source community and early adopter users. Red Hat announced our involvement in the Kubernetes project and, in addition, that we have enabled early access for select customers to Red Hat Enterprise Linux 7 Atomic Host on the Google Cloud. There were three key announcements from Google and Red Hat:

  1. Google announced via their public blog that there are now several vendors involved in the Kubernetes community, including Red Hat, and that those vendors are working to extend the capabilities of orchestration for containers.
  2. Relatedly, Red Hat announced that we are collaborating with Google and the Docker community to drive a new open standard for orchestrating Docker containers at scale, for the management of cloud application deployments (note: this is similar to announcement 1, but focuses on the Red Hat- and Google-specific elements).
  3. Red Hat also announced on the Google Cloud Blog that a technology preview of Red Hat Enterprise Linux Atomic Host is available hosted on Google Compute Engine for customers in the Red Hat Enterprise Linux 7 special interest groups. This provides early access for select customers and partners to Red Hat Enterprise Linux Atomic Host hosted within a public cloud environment.

Why is orchestration needed for containerized applications?

While application-centric deployment and application density are two of the hallmarks of Linux containers, enterprises need better ways of managing distributed components across varied infrastructure to maximize their investment in large-scale container deployments. In most cases, applications composed of different services should still be managed as a single application instance.

What is Google Kubernetes?

Google Kubernetes is an orchestration framework for managing clusters of containers, particularly useful for horizontal scaling of application components and interconnecting multiple layers of application stacks. 

What orchestration capabilities are included in Red Hat Enterprise Linux Atomic Host?

Through collaboration with Google in the Kubernetes project, Red Hat Enterprise Linux Atomic Host integrates the Kubernetes cluster orchestration framework into its container stack, providing a layer over the infrastructure that allows for this type of management. This allows enterprises to build composite applications by orchestrating multiple containers as microservices on a single host instance.
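
As a rough sketch of what that orchestration layer looks like from a client's perspective, the snippet below uses the Kubernetes client library for Python to list the pods the cluster is managing. It assumes a reachable cluster and a local kubeconfig, and it shows generic Kubernetes usage rather than anything specific to Red Hat Enterprise Linux Atomic Host.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes the "kubernetes"
# Python package and an already-configured cluster).
config.load_kube_config()

v1 = client.CoreV1Api()

# List every pod the orchestrator is currently scheduling, across all hosts
# and namespaces in the cluster.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.host_ip)
```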

What is Red Hat working with Google on related to containers?

Red Hat and Google are collaborating to tackle the challenge of how to manage application containers at scale, across hundreds or thousands of hosts. In June, Google unveiled the Kubernetes open source project for container management. Red Hat joined the Kubernetes community and has become a core committer to the project to help Red Hat customers to orchestrate containers across multiple hosts, running on-premise, in the Google Cloud or in other public or private clouds.

What is Red Hat’s relationship with Google?

Google is a Red Hat Certified Cloud Provider. The Google Compute Engine (GCE) public cloud offers Red Hat Enterprise Linux on demand and can also host Red Hat Enterprise Linux subscriptions via Cloud Access on GCE.

Red Hat and Docker

Does Docker improve Linux portability by allowing containers to be moved across different Linux distributions?

While this is possible from a technology standpoint, providing support for containers across distributions is difficult. Red Hat has no plans to support Red Hat–based container images running on non–Red Hat hosts or non–Red Hat–based containers running on Red Hat hosts.

At a tactical level, what does the Red Hat and Docker partnership look like? How many Red Hatters are contributing code? Who are the committers?

Red Hat allocates a significant number of engineers to contribute to and shape the Docker project. According to Chris Dawson of The New Stack, in his post “Who contributes to Docker development,” Red Hat is the second most active contributor, just below Docker (the company), with Red Hat employees represented by Project Atomic and Fedora cloud participants. In addition, Dan Walsh is a member of the Docker Governance Advisory Board (membership is not yet public, as far as we know: https://www.docker.com/community/governance/). Committers include:

  • Dan Walsh - docker
  • Vincent Batts - docker
  • Alex Larson - docker
  • Mrunal Patel - libcontainer

What is Red Hat's relationship with Docker?

Red Hat is collaborating with Docker on the development of the upstream Docker technology, as well as ensuring mutual support and certification for customers using the Docker technology and containers. We also work together to ensure that industry standards exist for container technologies, including indexes and registries for Docker-formatted containers.

What/Who is Docker?

Docker can be used to refer to multiple things. Docker is:

  • a tool that can package an application and its runtime dependencies into a container.
  • a specific image format for containers.
  • a command that can manage (start, stop, update, configure) a container in the Docker format.
  • a company (https://www.docker.io) that provides an API and a central registry, or index, of containers.
  • an open source project providing the tools to create and run applications as containers.

Docker offers technology that makes Linux containers easier to build, deploy, and update. Red Hat is collaborating with Docker (the company) on the development of the upstream Docker technology as well as complementary support and interoperability. 
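
To make the “tool” and “command” senses above concrete, here is a minimal container lifecycle sketch using the docker SDK for Python; the image name and command are hypothetical placeholders.

```python
import docker

client = docker.from_env()

# Start a container from an image (placeholder name), detached so it can be
# managed afterwards. The container simply sleeps to stand in for a real
# application process.
container = client.containers.run(
    "registry.example.com/partner/app",
    "sleep 300",
    detach=True,
)

# Inspect, then stop and remove the running container.
print(container.short_id, container.image.tags)
container.stop()
container.remove()
```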

Containers and Virtualization

Does a container run on top of a hypervisor?

It can. Container hosts can run inside a virtual machine (in fact, the OpenShift PaaS solution is deployed in this manner), but they can also run on bare metal.

What are the limits to container density and how reliable is a container host versus a hypervisor host?

Both container and virtualization densities depend on the workload being deployed. In Red Hat testing, it is possible to run more than 1,000 containers on a single host system. This allows cloud deployments to launch containers and keep them on standby, ready for deploying a workload. However, customers have typically indicated that they would run no more than 15-30 containers for on-premise solutions.

What is “operating system virtualization” or “OS virtualization”?

Operating system virtualization (OS virtualization) is a server virtualization technology that involves tailoring a standard operating system so that it can run different applications handled by multiple users on a single computer at a time. The operating systems do not interfere with each other even though they are on the same computer.

In OS virtualization, the operating system is altered so that it operates like several different, individual systems. The virtualized environment accepts commands from different users running different applications on the same machine. The users and their requests are handled separately by the virtualized operating system.

What are advantages of using hypervisors instead of containers?

Hypervisors will remain critical in the virtualization footprint and to many forms of cloud. For the most part, hypervisor virtualization is operating system agnostic when it comes to guest OSes. This is very effective for server consolidation when you're looking at consolidating existing workloads running on multiple OSes into a virtualized environment. In addition, hypervisors offer full control over the operating system and its parameters, as well as dedicated resources (CPU, RAM, and disk) for the virtual machine.

What are the advantages of using containers instead of hypervisors?

Containers provide an attractive means of application packaging and delivery because of their low overhead and greater portability. Instead of virtualizing the hardware and carrying forward multiple full stacks of software from the application to the operating system (resulting in considerable replication of core system services), Linux containers rest on top of a single Linux operating system instance. Each container has fencing between itself and other containers but shares the same core OS kernel underneath. These lightweight containers, portable amongst certified container hosts, enable applications and their dependencies to be packaged together, making the containers self-sufficient enough to run across any certified host environment.

How do Linux containers and virtualization compare in terms of security? What is the level of isolation and control versus a hypervisor such as KVM?

Virtualization provides a higher level of security because the workload runs inside a guest operating system that is completely isolated from the host operating system, and the virtual machines can be confined using security technologies such as the Red Hat Enterprise Virtualization hypervisor. Containers, by virtue of their architecture, run on a shared host operating system; this provides isolation for each container from the others and controls interactions between the host environment and the containers. The Linux kernel subsystems that work in concert to isolate containers include cgroups for resource and capacity management, process and network namespaces for logical grouping, and SELinux to enforce security policies. However, an application running in a container can be enabled to access services on the host system.
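
As a small, read-only illustration of those kernel primitives, the Python snippet below prints a process’s namespace identifiers and cgroup membership from /proc on any modern Linux host. It only inspects existing isolation; it does not configure cgroups, namespaces, or SELinux.

```python
import os

PID = "self"  # inspect the current process; substitute a container's PID to compare

# Each entry under /proc/<pid>/ns is a symbolic link naming the namespace
# instance the process belongs to; processes in the same namespace share
# the same value.
for ns in sorted(os.listdir(f"/proc/{PID}/ns")):
    print(ns, "->", os.readlink(f"/proc/{PID}/ns/{ns}"))

# /proc/<pid>/cgroup lists the control-group hierarchies used for resource
# and capacity management.
with open(f"/proc/{PID}/cgroup") as cgroup_file:
    print(cgroup_file.read())
```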

When is it appropriate to use containers vs hypervisors?

Containers and hypervisors both have pros and cons, and choosing which to use depends heavily on the applications and workloads the customer would deploy. Virtualization provides flexibility by abstracting from the hardware, while containers provide speed and agility with lightweight application isolation. When deciding, consider the type of workloads the customer is planning to run. For example, the services that make up modern web and Linux applications, such as MongoDB and the Apache HTTP Server, are better suited to Linux containers, because an organization may choose to run multiple copies of the same application at times. Running a Windows workload would require virtualization and booting a Windows guest OS.

Some factors to consider are:

  • start-up speed
  • light-weight deployments
  • application-centric packaging and DevOps
  • workload deployments
  • Linux vs. Windows applications
  • security required for the workload

It is likely that many will conclude that both containers and hypervisors deserve first-class status, as they solve different problems.

Are containers the new wave of virtualization?

No. Linux containers and virtualization are mostly complementary technologies that can coexist and work in concert to improve efficiency across heterogeneous environments, whether virtualized, bare metal, on private clouds, or on public clouds. Image-based containers provide the capability to package the application along with its userspace runtime environment and bring about portability of applications amongst a set of certified container hosts. Running containers inside a virtual machine is a use case that Red Hat customers are interested in deploying. For example, the OpenShift Platform-as-a-Service (PaaS) solution runs containers in Red Hat Enterprise Linux virtual machines on top of Red Hat Enterprise Linux OpenStack Platform at the virtualization layer. While container hosts can run on bare metal, most often it is not an either/or choice for customers. The recommended practice is for organizations to evaluate their workloads carefully to see how they can benefit from both technologies.

How are containers and virtualization similar? How do they differ?

While similar in concept, virtualization and containerization differ significantly in how they enable multi-tenancy and server consolidation. Virtualization provides a virtualized hardware environment in which a guest OS runs. A container host merely provides a logically isolated runtime environment within the same OS instance. Hence, containers avoid the overhead of booting, managing, and maintaining a guest OS environment, but they offer more limited portability for the containerized workloads.

Virtualization has matured to include many resiliency capabilities, such as live migration, high availability, SDN, and storage integration, which are not yet as mature in containerization. That said, Red Hat sees virtualization and containerization as complementary technologies, with virtualization abstracting physical resources for compute, network, and storage, and containerization providing superior application delivery capabilities. Using them together, for example by running Red Hat Enterprise Linux Atomic Host instances in virtual machines provided by a Red Hat Enterprise Linux OpenStack Platform deployment, combines the best of both worlds.

Both virtualization and Linux containers provide application isolation, allowing multiple applications to run on the same physical system with distinct, isolated runtime environments. Consequently, both technologies can be used to aggregate multiple workloads on a single physical system.

Linux containers and virtualization differ in how they work and the use cases for which they are best suited. Virtualization provides abstraction from the hardware and host operating system (OS) with a virtual hardware environment in which a guest OS runs, which can be different from the underlying hypervisor OS (for example, Windows on top of a Linux KVM hypervisor). Virtualizing the hardware environment creates a high degree of flexibility, but at the expense of requiring a separate OS instance for each virtual machine, introducing overhead. Linux containers provide isolated runtime environments all running on a shared Linux host operating system.  This allows applications to live alongside all their runtime dependencies in separate, confined environments, with minimal overhead, because they share a single OS with its hardware support, resource management, and security capabilities.

Still need help?

Contact Us