First impressions of moving from Docker to Podman

It’s been on the horizon for a while, but when I decided to port some stuff over to RHEL 8 I was more or less forced to remove my dependency on Docker and use something else instead.

When it comes to the beef between Red Hat and Docker I’ve been on the side of Red Hat, both for technical and philosophical reasons. Docker is a big fat daemon, which I really don’t need just to pull a container image from a URL or to build a container image and save it to disk. Add to that the fact that Docker has been very closed-minded about accepting code changes, and that they once considered merely verifying the existence of a checksum to be proper image validation during pull.

But even though I knew I wanted to move away from Docker at some point, I also knew it would come with a bunch of work, time that I’d much rather spend adding features and fixing bugs.

Anyway, now I am porting stuff to RHEL 8 and this means I need to add support for Podman. So here I will lay out some of my experiences moving from Docker to Podman.

Background

So just to give you a little context on what I do: I develop and package IT systems. The systems are installed and configured using Ansible, and most services are packaged as containers. While we try to use container images from vendors, we sometimes have to resort to creating our own. So the main focus here is on adapting our Ansible roles so that they start and configure the containers using Podman instead of Docker.

Here are the services that I decided to port:

  • AWX (upstream for Ansible Tower)
  • Foreman (upstream for Red Hat Satellite)
  • Sonatype Nexus
  • HAProxy
  • NodePKI

Installation

This was the easy part. Podman has to be installed on the target host, and to do this I just added the following task:

- package: name=podman state=present

Ansible modules

One of the biggest issues is that there are no Podman equivalents to the Ansible modules docker_network and docker_container. There is a podman_image module, and podman_container was just merged into Ansible core, but I cannot wait for Ansible 2.9 and need a solution today. We use these modules extensively to manage our containers with Ansible, and having to resort to the command or shell modules really feels like a step back.
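Just to illustrate the step back: without a proper module, starting a container from a role ends up looking roughly like this (a sketch of mine; container_name and container_image are placeholder variables, and it is nowhere near as idempotent as docker_container):

# No state handling: this blindly starts the container on every run
- name: Run container with the command module
  command: >
    podman run -d --name {{ container_name }}
    --publish 8080:80 {{ container_image }}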

Luckily I actually found a way to make the transition much easier, using systemd services.

Cheating with Systemd services

So before I started the port to Podman I decided to adjust all my roles to set up the Docker containers so they are managed by systemd. This is quite simple:

Create a sysconfig file:

# {{ ansible_managed }}
C_VOLUMES="{% for x in container_volumes %}--volume {{ x }} {% endfor %}"
C_ENV="{% for k,v in container_env.items() %}--env {{ k }}='{{ v }}' {% endfor %}"
C_PORTS="{% for x in container_ports %}--publish {{ x }} {% endfor %}"
C_IMAGE="{{ container_image }}"
C_COMMAND="{{ container_cmd }}"
C_ARGS="{{ container_args }}"
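For reference, the variables this template consumes might look something like this in the role defaults (the values are purely illustrative):

# Illustrative role defaults consumed by the sysconfig template
container_name: my_service
container_image: docker.io/library/nginx:latest
container_volumes:
  - /srv/my_service/data:/usr/share/nginx/html:ro
container_env:
  TZ: Europe/Stockholm
container_ports:
  - 8080:80
container_cmd: ""
container_args: ""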

Create a service unit file:

[Unit]
Description=My container
Wants=syslog.service

[Service]
Restart=always
EnvironmentFile=-/etc/sysconfig/my_service
ExecStartPre=-{{ container_mgr }} stop {{ container_name }}
ExecStartPre=-{{ container_mgr }} rm {{ container_name }}
ExecStart={{ container_mgr }} run --rm --name "{{ container_name }}" \
  $C_VOLUMES $C_ENV $C_PORTS $C_ARGS $C_IMAGE $C_COMMAND
ExecStop={{ container_mgr }} stop -t 10 {{ container_name }}

[Install]
WantedBy=multi-user.target

Start the service:

- service: name=my_service state=started
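For completeness, the two files above are rendered by ordinary template tasks in the role, followed by a daemon reload, roughly like this (the template file names are my own):

- template:
    src: sysconfig.j2
    dest: /etc/sysconfig/{{ container_name }}

- template:
    src: container.service.j2
    dest: /etc/systemd/system/{{ container_name }}.service

- systemd:
    daemon_reload: yes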

Thanks to the fact that Podman is CLI-compatible with the Docker client, moving to Podman is now as easy as setting container_mgr to /usr/bin/podman instead of /usr/bin/docker.
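In practice that is just a variable based on the distribution (assuming a RHEL family host, where major version 8 and later means Podman):

container_mgr: "{{ '/usr/bin/podman' if ansible_distribution_major_version | int >= 8 else '/usr/bin/docker' }}"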

Creating networks

Unfortunately Podman has no podman network create to set up a private network where I can put a set of containers. This is really a shame. Docker networks make it easy to create a private namespace for containers to communicate: they let me expose ports only to other containers (keeping them unexposed to the host) and provide name resolution so containers can find each other easily.
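For comparison, this is roughly the workflow I am losing; with Docker a private network with name resolution is as simple as this (the image names are just examples):

$ docker network create mynet
$ docker run -d --network mynet --name db postgres
$ docker run -d --network mynet --name app myapp
# "app" can now reach "db" by name, without publishing any ports on the host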

One alternative that was suggested to me on the Podman mailing list was to use a pod. But containers in a pod share localhost, which means I run the risk of port collisions if two containers use the same port. It also adds more complexity, since I need to create and start/stop a new entity (the pod), which I never got working under systemd (systemd just killed the pod directly after starting it).
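For the record, the pod approach looks roughly like this; every container in the pod shares one network namespace and the ports published on the pod (myhelper is just a placeholder image):

$ podman pod create --name mypod --publish 8080:80
$ podman run -d --pod mypod --name web nginx
$ podman run -d --pod mypod --name helper myhelper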

I also cannot use the built-in CNI network, or create additional ones, since they don’t provide name resolution and I have no way of knowing the IP of a given container.

My only solution here was to skip networks altogether and use host networking (see the sketch after this list). It comes with some issues:

  • I still have the risk of port collision between containers
  • All ports are published and accessible from outside the host (unless blocked by a firewall)
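With the systemd setup above, switching a role to host networking is just a matter of passing the flag through the extra arguments and dropping the published ports, roughly:

container_args: "--network host"
container_ports: []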

Working on Mac

Another big thing missing from Podman is a client for macOS. While I use RHEL on all the servers (and Fedora at home), my workstation is a MacBook, which means I cannot use Podman to build containers locally or to troubleshoot podman commands. Luckily, I have a really streamlined development environment that makes it a breeze to quickly bring up a virtual machine running CentOS where I can play around. I do miss the ability to build containers on my Mac using Podman, but since Docker and Podman both work with OCI-compatible images I can build container images using Docker on my laptop and then manage and run them on RHEL using Podman without problems.
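The workflow is the usual build, push and pull dance; registry.example.com is of course a placeholder for our internal registry:

$ docker build -t registry.example.com/mycontainer:latest .
$ docker push registry.example.com/mycontainer:latest

# ...and on the RHEL host:
$ podman pull registry.example.com/mycontainer:latest
$ podman run -d --name mycontainer registry.example.com/mycontainer:latest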

InSpec

My InSpec tests use some docker resources, but I decided to use the service resource instead to verify that the systemd services are running properly, and of course I have a bunch of tests that access the actual software running inside the containers.
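The switch is small; instead of the docker resources I now just check the systemd unit, something like this (my_service is a placeholder name):

describe service('my_service') do
  it { should be_enabled }
  it { should be_running }
end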

Summary

So after moving to systemd services it was really easy to port from Docker to Podman. My wishlist for Podman would be the following:

  • Podman modules for Ansible to replace the Docker modules
  • Ability to manage CNI networks using podman network ...
  • Name resolution inside Podman networks
  • Support for macOS

Luckily none of these were showstoppers for me and after figuring it all out it took about a day to convert five Ansible roles from Docker to Podman without loss of end user functionality.

Ansible 2.8 has a bunch of cool new stuff

So Ansible 2.8.0 was just released and it comes with a few really nice new features. I haven’t had time to use it much, since I just upgraded like 10 minutes ago, but reading through the Release Notes I found some really cool new things that I know I’ll enjoy in 2.8.

Automatic detection of Python path

This is a really nice feature. It used to be that Ansible always looked for /usr/bin/python on the target system, and if you wanted to use anything else you needed to set ansible_python_interpreter. No more! Now Ansible does a much smarter lookup: it not only looks for Python in several locations before giving up, it also adapts to the system it is executing on. For example, on Ubuntu we always had to explicitly tell Ansible to use /usr/bin/python3 since there is no /usr/bin/python by default. Now Ansible knows this out of the box.
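Previously my inventories were littered with interpreter overrides; with 2.8 the override can go away, or the behaviour can be made explicit in ansible.cfg (as far as I can tell the relevant setting is interpreter_python):

# Before (inventory)
ubuntu-host ansible_python_interpreter=/usr/bin/python3

# With Ansible 2.8 (ansible.cfg) -- optional, discovery is on by default
[defaults]
interpreter_python = auto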

Better SSH on macOS

Ansible moved away from the Paramiko library in favor of native SSH a long time ago, except when executed on macOS. With 2.8, those of us working from a MacBook finally get some of those sweet performance improvements that SSH has over Paramiko, which means a lot since the biggest downside to Ansible is its slow execution.

Accessing undefined variables is fine

So when you had a large structure with nested objects and wanted to access one of them, giving it a default if it, or any parent, was undefined, you needed to do this:

{{ ((foo | default({})).bar | default({})).baz | default('DEFAULT') }}

or

{{ foo.bar.baz if (foo is defined and foo.bar is defined and foo.bar.baz is defined) else 'DEFAULT' }}

Ansible 2.8 no longer throws an error if you try to access an attribute of an undefined variable; instead it just gives you undefined back. So now you can simply do this:

{{ foo.bar.baz | default('DEFAULT') }}

A lot more elegant!

Tons of new modules

Of course, as with any new release of Ansible, there is also a long list of new modules. The ones I am currently most interested in are the Foreman modules. Ansible ships with just a single module for Foreman / Satellite, but I have been using foreman-ansible-modules for a while now, and 2.8 deprecates the old foreman module in favor of this collection. Hopefully they will soon be incorporated into Ansible core so I don’t have to fetch them from GitHub and put them inside my role.

There are also a ton of fact-gathering modules for Docker, such as docker_volume_info, docker_network_info, docker_container_info and docker_host_info, that will be great for inspecting Docker objects. Although, with RHEL 8 we will hopefully be moving away from Docker, so these may come a little too late to the party, to be honest.
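As a quick taste of what these look like, a sketch along these lines should do (the container name is made up):

- docker_container_info:
    name: my_service
  register: info

- debug:
    msg: "Container status: {{ info.container.State.Status }}"
  when: info.exists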

There’s a bunch of new KubeVirt modules which may be really cool once we move over to OpenShift 4 and run some virtual machines in it.

Other noteworthy modules are:

  • OpenSSL fact gathering for certificates, keys and CSRs
  • A whole bunch of VMware modules
  • A few Ansible Tower modules
  • A bunch of Windows modules

Automatic testing of Docker containers

So we are all building and packaging our stuff inside containers. That’s great! Containers let us focus more on configuration and less on installation. We can scale stuff on Kubernetes. We can run stuff on everything from macOS to CentOS. In short, containers open up a ton of opportunities for deployment and operations. Great.

If you are using containers you are probably also aware of continuous integration and how important it is to automatically test all your new code. But do you test your containers, or just the code inside them?

At Basalt we don’t make applications. We make systems. We put together a bunch of third-party applications and integrate them. Most of them run inside containers. So for us it is important to test these containers, to make sure they behave correctly, before we push them to the registry.

Use encapsulation

So the approach that we chose was to encapsulate the container we wanted to test inside a second test container. This is easily done by using the FROM directive when building the test container.

The test container installs additional test tools and copies the test code into the container. We chose to use InSpec as the test framework for this, but any other framework, or just plain Bash, works just as well.

Testing Nginx with InSpec

So let’s make an example test of a container. In this example I will build a very simple web service using the Nginx container. Then I will use InSpec to verify that the container works properly.

Let’s start by creating all files and folders:

$ mkdir -p mycontainer mycontainer_test/specs
$ touch mycontainer/Dockerfile \
    mycontainer_test/{Dockerfile,specs/web.rb}
$ tree .
.
├── mycontainer
│   └── Dockerfile
└── mycontainer_test
    ├── Dockerfile
    └── specs
        └── web.rb

3 directories, 3 files

Then add the following content to mycontainer/Dockerfile:

FROM nginx
RUN echo "Hello, friend" > \
  /usr/share/nginx/html/index.html

Now we can build the app container:

$ docker build -t mycontainer mycontainer/.
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM nginx
 ---> 2bcb04bdb83f
Step 2/2 : RUN echo "Hello, friend" > /usr/share/nginx/html/index.html
 ---> Running in 7af1cec318f9
Removing intermediate container 7af1cec318f9
 ---> fe25cbbf80f9
Successfully built fe25cbbf80f9
Successfully tagged mycontainer:latest

$ docker run -d --name hello -p 8080:80 mycontainer
cfd3c3ea70c3512c5a7cc9ac2c9d74244aec4dd1a4bb68645e671cbe551af4ab

$ curl localhost:8080
Hello, friend

$ docker rm -f hello
hello

Great! So the container builds and the manual test shows that it works. The next step is to build the test container. Add the following content to mycontainer_test/Dockerfile:

FROM mycontainer
ARG INSPEC_VERSION=3.7.11

# install inspec
RUN apt-get update \
 && apt-get -y install curl procps \
 && curl -o /tmp/inspec.deb \
    https://packages.chef.io/files/stable/inspec/${INSPEC_VERSION}/ubuntu/18.04/inspec_${INSPEC_VERSION}-1_amd64.deb \
 && dpkg -i /tmp/inspec.deb \
 && rm /tmp/inspec.deb

# copy specs
COPY specs /specs

Next we write our tests. Add the following to mycontainer_test/specs/web.rb:

name = 'nginx: master process nginx -g daemon off;'
describe processes(name) do
  it { should exist }
  its('users') { should eq ['root'] }
end

describe processes('nginx: worker process') do
  it { should exist }
  its('users') { should eq ['nginx'] }
end

describe http('http://localhost') do
  its('status') { should eq 200 }
  its('body') { should eq "Hello, friend\n" }
end

Now we can build and run our test container:

$ docker build -t mycontainer:test mycontainer_test/.
Sending build context to Docker daemon  4.608kB

  ... snip ...

Successfully built 6b270e36447a
Successfully tagged mycontainer:test

$ docker run -d --name hello_test mycontainer:test
e3f0e4a06efa416167d5d30458785bf66975d7837d9fc2b04634bb8291bc5679

$ docker exec hello_test inspec exec /specs

Profile: tests from /specs (tests from .specs)
Version: (not specified)
Target:  local://

  Processes nginx: master process nginx -g daemon off;
     ✔  should exist
     ✔  users should eq ["root"]
  Processes nginx: worker process
     ✔  should exist
     ✔  users should eq ["nginx"]
  HTTP GET on http://localhost
     ✔  status should eq 200
     ✔  body should eq "Hello, friend\n"

Test Summary: 6 successful, 0 failures, 0 skipped

$ docker rm -f hello_test
hello_test

And that’s it! We have successfully built an app container and tested it using InSpec. If you want to use another version of InSpec you can specify it as an argument when building the test container:

$ docker build -t mycontainer:test \
    --build-arg INSPEC_VERSION=1.2.3 mycontainer_test/.
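The same steps drop straight into a CI pipeline: build the app image, build and run the test image, and only push the app image if the tests pass. A rough sketch (the registry name is a placeholder):

$ docker build -t mycontainer mycontainer/.
$ docker build -t mycontainer:test mycontainer_test/.
$ docker run -d --name hello_test mycontainer:test
$ docker exec hello_test inspec exec /specs \
    && docker tag mycontainer registry.example.com/mycontainer:latest \
    && docker push registry.example.com/mycontainer:latest
$ docker rm -f hello_test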

Happy testing!