Performance comparison between RHEL 7.6 and RHEL 8.0

Apart from all the cool new features in the freshly released Red Hat Enterprise Linux 8, one thing that is just as important is the improvement in performance. The team over at Red Hat has run a bunch of benchmark tests on both RHEL 7.6 and RHEL 8.0, and the results show some really nice improvements.

Overall, the performance looks good. The chart below shows around a 5% improvement in CPU performance, 20% less memory usage, 15% more disk I/O, and a 20-30% improvement in network performance.

[Figure: candlestick chart combining the results of multiple benchmark tests. Photo: Red Hat]

Looking at more specific metrics, we see a 40% increase in disk throughput on the XFS file system, as shown in the chart below.

[Figure: RHEL 7.6 vs RHEL 8 AIM7 shared throughput on XFS. Photo: Red Hat]

If you are running OpenStack, the network control plane will also see a large improvement when moving to RHEL 8. Read the full article at redhat.com for more details.

First impressions of moving from Docker to Podman

It’s been on the horizon for a while, but when I decided to port some stuff over to RHEL 8, I was more or less forced to drop my dependency on Docker and use something else instead.

When it comes to the beef between Red Hat and Docker, I’ve been on Red Hat’s side, for both technical and philosophical reasons. Docker is a big fat daemon that I really don’t need just to pull a container image from a URL, or to build a container image and save it to disk. Add to that the fact that Docker is very closed-minded about accepting code changes, and that they once considered merely verifying the existence of a checksum to be proper image validation during pull.

But even though I knew I wanted to move away from Docker at some point, I also knew it would come with a bunch of work, time that I’d much rather spend adding features and fixing bugs.

Anyway, now I am porting stuff to RHEL 8, which means I need to add support for Podman. So here I will lay out some of my experiences moving from Docker to Podman.

Background

So, just to give you a little context on what I do: I develop and package IT systems. The systems are installed and configured using Ansible, and most services are packaged as containers. While we try to use container images from vendors, we sometimes have to resort to creating our own. So the main focus here is on adapting our Ansible roles so they start and configure the containers using Podman instead of Docker.

Here are the services that I decided to port:

  • AWX (upstream for Ansible Tower)
  • Foreman (upstream for Red Hat Satellite)
  • Sonatype Nexus
  • HAProxy
  • NodePKI

Installation

This was the easy part. Podman has to be installed on the target host, and to do this I just added the following Ansible task:

- package: name=podman state=present

Ansible modules

One of the biggest issues is that there are no Podman equivalents to the Ansible modules docker_network and docker_container. There is a podman_image module, and podman_container was just merged into Ansible core, but I cannot wait for Ansible 2.9; I need a solution today. We use these modules extensively to manage our containers from Ansible, and having to resort to the command or shell modules really feels like a step back.
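
To illustrate, this is roughly the kind of task our roles rely on today (a simplified sketch; the image name, ports, and volumes are made up):

- name: Create and start the application container
  docker_container:
    name: my_service
    image: registry.example.com/my_service:latest
    state: started
    restart_policy: always
    published_ports:
      - "8080:8080"
    volumes:
      - /srv/my_service/data:/data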

Luckily I actually found a way to make the transition much easier, using systemd services.

Cheating with Systemd services

So, before I started the port to Podman, I decided to adjust all my roles to set up the Docker containers so that they are managed by systemd. This is quite simple:

Create a sysconfig file:

# {{ ansible_managed }}
C_VOLUMES="{% for x in container_volumes %}--volume {{ x }} {% endfor %}"
C_ENV="{% for k,v in container_env.items() %}--env {{ k }}='{{ v }}' {% endfor %}"
C_PORTS="{% for x in container_ports %}--publish {{ x }} {% endfor %}"
C_IMAGE="{{ container_image }}"
C_COMMAND="{{ container_cmd }}"
C_ARGS="{{ container_args }}"

Create a service unit file:

[Unit]
Description=My container
Wants=syslog.service

[Service]
Restart=always
EnvironmentFile=-/etc/sysconfig/my_service
ExecStartPre=-{{ container_mgr }} stop {{ container_name }}
ExecStartPre=-{{ container_mgr }} rm {{ container_name }}
ExecStart={{ container_mgr }} run --rm --name "{{ container_name }}" \
  $C_VOLUMES $C_ENV $C_PORTS $C_ARGS $C_IMAGE $C_COMMAND
ExecStop={{ container_mgr }} stop -t 10 {{ container_name }}

[Install]
WantedBy=multi-user.target
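
Both files are templated out by the role; the tasks look roughly like this (a sketch; template and service names are assumptions from my setup):

- name: Install the sysconfig file
  template:
    src: sysconfig.j2
    dest: /etc/sysconfig/my_service

- name: Install the systemd unit file
  template:
    src: my_service.service.j2
    dest: /etc/systemd/system/my_service.service

- name: Make systemd pick up the new unit file
  systemd:
    daemon_reload: yes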

Start the service:

- service: name=my_service state=started

Thanks to the fact that Podman is CLI-compatible with the Docker client, moving to Podman is now as easy as setting container_mgr to /usr/bin/podman instead of /usr/bin/docker.
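
In practice that is just a variable default being flipped, something like this (the variable feeds the unit file template above):

# defaults/main.yml: Docker remains the default
container_mgr: /usr/bin/docker

# group_vars for the RHEL 8 hosts: switch to Podman
container_mgr: /usr/bin/podman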

Creating networks

Unfortunately, Podman has no podman network create command to create a private network where I can put a set of containers. This is really a shame. Docker networks make it easy to create a private namespace for containers to communicate: they let me expose ports only to other containers (keeping them unexposed on the host) and they provide name resolution so containers can find each other easily.
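
To show what is being lost, this is roughly the kind of setup the Docker modules give me (simplified; the network, container, and image names are made up):

- name: Create a private network for the application
  docker_network:
    name: my_app_net

- name: Start the database, visible only on the network
  docker_container:
    name: db
    image: registry.example.com/db:latest
    networks:
      - name: my_app_net
    # no published_ports: other containers on my_app_net reach
    # the database by the name "db", but the host cannot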

One alternative that was suggested to me on the Podman mailing list was to use a pod. But containers in a pod share the network namespace, including localhost, which means I run the risk of port collisions if two containers use the same port. It also adds complexity, since I need to create and start/stop a new entity (the pod), which I never got working under systemd (systemd just killed the pod directly after starting it).
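
For reference, the pod approach I tried looks something like this on the command line (a sketch; names and ports are made up):

# ports are published on the pod, not on the individual containers
podman pod create --name my_app --publish 8080:8080

# containers join the pod and share its network namespace
podman run -d --pod my_app --name web registry.example.com/web:latest
podman run -d --pod my_app --name worker registry.example.com/worker:latest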

I also cannot use the built-in CNI network, or create additional ones, since they don’t provide name resolution and I have no way of knowing the IP address of a given container.

My only solution here was to skip networks altogether and use host networking, as shown in the sketch after this list. It comes with some issues:

  • I still have the risk of port collision between containers
  • All ports are published and accessible from outside the host (unless blocked by a firewall)
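
With the systemd setup above, switching a service to host networking is just a matter of role variables, roughly like this (the variables feed the sysconfig template shown earlier):

# no --publish flags; the container binds directly on the host
container_ports: []
container_args: "--network host"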

Working on Mac

Another big thing missing from Podman is a client for macOS. While I use RHEL on all the servers (and Fedora at home), my workstation is a MacBook, which means I cannot use Podman to build containers locally or to troubleshoot podman commands locally. Luckily, I have a really streamlined development environment that makes it a breeze to quickly bring up a virtual machine running CentOS where I can play around. I do miss the ability to build containers on my Mac using Podman, but since Docker and Podman both support the OCI image format, I can build container images using Docker on my laptop and then manage and run them on RHEL using Podman without problems.
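
The round trip looks roughly like this (a sketch; the registry and image names are made up):

# on the Mac: build with Docker and push to a shared registry
docker build --tag registry.example.com/my_service:latest .
docker push registry.example.com/my_service:latest

# on the RHEL 8 machine: pull and run with Podman
podman pull registry.example.com/my_service:latest
podman run --rm --name my_service registry.example.com/my_service:latest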

InSpec

My InSpec tests use some docker resources, but I decided to use the service resource instead to verify that the systemd services are running properly. And of course I have a bunch of tests that access the actual software running inside the containers.
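
The replacement tests end up looking something like this (a sketch; the service name and port are made up):

# the container is healthy if its systemd service is up...
describe service('my_service') do
  it { should be_enabled }
  it { should be_running }
end

# ...and the software inside it answers on its port
describe port(8080) do
  it { should be_listening }
end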

Summary

So, after moving everything to systemd services, porting from Docker to Podman was really easy. My wishlist for Podman would be the following:

  • Podman modules for Ansible to replace the Docker modules
  • Ability to manage CNI networks using podman network ...
  • Name resolution inside Podman networks
  • Support for macOS

Luckily, none of these were showstoppers for me, and after figuring it all out it took about a day to convert five Ansible roles from Docker to Podman without any loss of end-user functionality.

Red Hat OpenShift 4 is here

Wow! This is a biggie!

So Red Hat just released OpenShift 4 with a ton of new features. I haven’t had time to try it all out yet, but here are some of my favorites.

RHEL CoreOS

Well, this might actually deserve a post of its own. This is the first new release of CoreOS after the Red Hat acquisition, and it serves as the successor to both CoreOS and RHEL Atomic Host. It’s basically RHEL built for OpenShift. Kinda like how RHV uses an OSTree-based RHEL as well.

I love Atomic Host. The OSTree model is really neat, allowing you to really lock down the operating system and do atomic upgrades. Either it works, or you roll back; there is nothing in between. And being able to lock down the OS completely (by disabling the rpm-ostree commands) means the attack surface is greatly reduced.
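
Day to day, the upgrade model is as simple as it sounds; on Atomic Host it boils down to this:

# stage the new OS version and reboot into it
rpm-ostree upgrade
systemctl reboot

# something broke? boot back into the previous deployment
rpm-ostree rollback
systemctl reboot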

What CoreOS brings to Atomic Host in this new, merged version is better host management and a more streamlined delivery of updates, as well as tighter integration with OpenShift.

Cluster management

So, that tighter integration with OpenShift is really what’s key here. It means that you can manage the lifecycle of the hosts running Kubernetes directly from Kubernetes. OpenShift 4 also comes with a new installer that uses a bootstrap node to spin up all the necessary virtual machines for the cluster. Running OpenShift on premises will give you the same sweet experience as you would get running Google Kubernetes Engine or Amazon ECS: no need to manually manage virtual machines to apply updates or to scale out or in.
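
From what I have seen so far, spinning up a cluster boils down to something like this (a sketch; I have not run the full flow myself yet):

# answer a few questions about platform, domain and credentials
openshift-install create install-config --dir=my-cluster

# then let the installer bring up the bootstrap node and the cluster
openshift-install create cluster --dir=my-cluster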

Service Mesh

Next up is Service Mesh. This is Red Hat’s supported implementation of Istio and Jaeger, two relatively new open source projects that bring some cool new features to Kubernetes for managing the growing network complexity you get as you move more and more stuff into the microservice model.

Getting full visibility and control over the network is a great security win, and you know how we at Basalt love security. I’ll be sure to check out OpenShift 4 and bring it into the Basalt Container Platform to deliver those awesome new features to our customers.

Operators

Lastly there is the Operator Framework. This is really a natural evolution of packaging, deploying, and managing container-based services. Just as CoreOS means improved management of the hosts running under OpenShift, Operators mean improved management of the services running on top of it. My bet is that we will package more and more of our turn-key services, such as Basalt Log Service and Basalt Monitor Service, as Operators that run on top of OpenShift.
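
The idea is that the whole service is managed through a custom resource that the Operator reconciles; something like this hypothetical resource for a log service (entirely made up, just to show the shape):

# a made-up custom resource; the operator watches for these and
# keeps the running service in sync with the spec
apiVersion: basalt.example.com/v1
kind: LogService
metadata:
  name: central-logs
spec:
  replicas: 3
  retentionDays: 30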

So that’s a wrap for the biggest news in OpenShift 4. I will do a deep dive later when I get the chance, and perhaps write a more detailed article once I’ve really gotten my hands dirty with it.