Quick overview of the new features in Kubernetes 1.15

Kubernetes 1.15 has been released and it comes with a lot of new stuff that will improve the way you deploy and manage services on the platform. The biggest highlights are quotas for custom resources and improved monitoring.

Quota for custom resources

We have had quotas for native resources for a while now, but this new release allows us to create quotas for custom resources as well. This means that we can control Operators running on Kubernetes using quotas. For example, you could create a quota saying each developer gets to deploy 2 Elasticsearch clusters and 10 PostgreSQL clusters.
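
Under the hood this builds on the generic object count quota syntax (count/<resource>.<group>), so the example above could look roughly like the sketch below. Note that the CRD names are assumptions made for illustration; they depend on which Operators you actually run.

# Hypothetical quota for a developer namespace; the CRD names
# depend on the Operators installed in your cluster.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: developer-quota
  namespace: dev-alice
spec:
  hard:
    count/elasticsearches.elasticsearch.k8s.elastic.co: "2"
    count/postgresqls.acid.zalan.do: "10"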

Improved monitoring

Whether you run a production cluster or a lab where you test stuff out, it is important to have proper monitoring so you can detect issues before they become problems. Kubernetes 1.15 comes with support for third-party vendors to supply device metrics without having to modify Kubernetes code. This means that your cluster can use hardware-specific metrics, such as GPU metrics, without needing explicit support in Kubernetes for that specific device.

The storage metrics have also improved, with support for monitoring volumes from custom storage providers.

Lastly, monitoring performance has improved since the kubelet now collects only the core metrics.

How to get it

Most users consume Kubernetes as part of a distribution such as OpenShift. They will have to wait until that distribution upgrades to Kubernetes 1.15. The latest version of OpenShift, version 4.1, comes with Kubernetes 1.13, and I would expect Kubernetes 1.15 to be available in OpenShift 4.3, which should arrive in early 2020.

Principles of container-based application design

“Principles of software design:

  • Keep it simple, stupid (KISS)
  • Don’t repeat yourself (DRY)
  • You aren’t gonna need it (YAGNI)
  • Separation of concerns (SoC)

Red Hat approach to cloud-native containers:

  • Single concern principle (SCP)
  • High observability principle (HOP)
  • Life-cycle conformance principle (LCP)
  • Image immutability principle (IIP)
  • Process disposability principle (PDP)
  • Self-containment principle (S-CP)
  • Runtime confinement principle (RCP)”

After the move to Infrastructure as Code and containerization, it is only natural that we start applying some of the lessons we learned during software development to building our infrastructure.

Read more at redhat.com.

First impressions of moving from Docker to Podman

It’s been on the horizon for a while but when I decided to port some stuff over to RHEL 8 I was more or less forced to remove my dependency on Docker and use something else instead.

When it comes to the beef between Red Hat and Docker, I’ve been on Red Hat’s side, for both technical and philosophical reasons. Docker is a big fat daemon which I really don’t need just to pull a container image from a URL, or to build a container image and save it to disk. Add to that the fact that Docker is very closed-minded about accepting code changes, and that they once considered merely verifying the existence of a checksum to be proper image validation during pull.

But even though I knew I wanted to move away from Docker at some point, I also knew it would come with a bunch of work that I’d much rather spend on adding features and fixing bugs.

Anyway, now I am porting stuff to RHEL 8 and this means I need to add support for Podman. So here I will lay out some of my experiences moving from Docker to Podman.

Background

So, just to give you a little context on what I do: I develop and package IT systems. The systems are installed and configured using Ansible, and most services are packaged as containers. While we try to use container images from vendors, we sometimes have to resort to creating our own. So the main focus here is on adapting our Ansible roles so they start and configure the containers using podman instead of docker.

Here are the services that I decided to port:

  • AWX (upstream for Ansible Tower)
  • Foreman (upstream for Red Hat Satellite)
  • Sonatype Nexus
  • HAProxy
  • NodePKI

Installation

This was the easy part. Podman has to be installed on the target and to do this I just added the following:

- package: name=podman state=present

Ansible modules

One of the biggest issues is that there are no Podman equivalents to the Ansible modules docker_network and docker_container. There is a podman_image module, though, and podman_container was just merged into Ansible core. However, I cannot wait for Ansible 2.9 and need a solution today. We use these modules extensively to manage our containers with Ansible, and having to resort to the command or shell modules really feels like a step back.
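
For pulling images, at least, podman_image already does the job. A minimal sketch (double-check the exact parameters against your Ansible version):

- name: Pull the nginx image using podman
  podman_image:
    name: docker.io/library/nginx
    tag: latest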

Luckily I actually found a way to make the transition much easier, using systemd services.

Cheating with Systemd services

So before I started the port to Podman I decided to adjust all my roles to set up the Docker containers so they are managed by systemd. This is quite simple:

Create a sysconfig file:

# {{ ansible_managed }}
C_VOLUMES="{% for x in container_volumes %}--volume {{ x }} {% endfor %}"
C_ENV="{% for k,v in container_env.items() %}--env {{ k }}='{{ v }}' {% endfor %}"
C_PORTS="{% for x in container_ports %}--publish {{ x }} {% endfor %}"
C_IMAGE="{{ container_image }}"
C_COMMAND="{{ container_cmd }}"
C_ARGS="{{ container_args }}"

Create a service unit file:

[Unit]
Description=My container
Wants=syslog.service

[Service]
Restart=always
EnvironmentFile=-/etc/sysconfig/my_service
ExecStartPre=-{{ container_mgr }} stop {{ container_name }}
ExecStartPre=-{{ container_mgr }} rm {{ container_name }}
ExecStart={{ container_mgr }} run --rm --name "{{ container_name }}" \
  $C_VOLUMES $C_ENV $C_PORTS $C_ARGS $C_IMAGE $C_COMMAND
ExecStop={{ container_mgr }} stop -t 10 {{ container_name }}

[Install]
WantedBy=multi-user.target

Start the service:

- service: name=my_service state=started

Thanks to the fact that podman is CLI-compatible with the Docker client, moving to podman is now as easy as setting container_mgr to /usr/bin/podman instead of /usr/bin/docker.
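
In practice that is just a single variable in the role defaults (the same container_mgr variable used in the templates above):

# defaults/main.yml
container_mgr: /usr/bin/podman   # was: /usr/bin/docker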

Creating networks

Unfortunately Podman has no podman network create to create a private network where I can put a set of containers. This is really a shame. Docker networks make it easy to create a private namespace for containers to communicate. They allow me to expose ports only to other containers (keeping them unexposed to the host) and provide name resolution so containers can find each other easily.

One alternative that was suggested to me on the Podman mailing list was to use a pod. But containers in pods share localhost which means that I run the risk of port collision if two containers use the same port. This also adds more complexity as I need to create and start/stop a new entity (the pod) which I never got working using systemd (systemd just killed the pod directly after starting it).

I also cannot use the built-in CNI network, or create additional ones, since they don’t provide name resolution and I have no way of knowing the IP of a given container.

My only solution here was to skip networks altogether and use host networking (see the sketch after this list for how I wire that in). It comes with some issues:

  • I still have the risk of port collision between containers
  • All ports are published and accessible from outside the host (unless blocked by a firewall)
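
With the systemd template above, host networking simply means passing the flag through the container arguments, roughly like this:

# role defaults or group_vars (sketch)
container_args: "--network host"
container_ports: []   # --publish has no effect with host networking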

Working on Mac

Another big thing missing from Podman is a client for macOS. While I use RHEL on all the servers (and Fedora at home), my workstation is a MacBook, which means I cannot use Podman when I build containers locally, or troubleshoot podman commands locally. Luckily, I have a really streamlined development environment that makes it a breeze to quickly bring up a virtual machine running CentOS where I can play around. I do miss the ability to build containers on my Mac using Podman, but since Docker and Podman both work with OCI-compliant images I can build container images using Docker on my laptop and then manage and run them on RHEL using Podman without problems.

InSpec

My InSpec tests use some of the docker resources, but I decided to use the service resource instead to verify that the systemd services are running properly, and of course I have a bunch of tests that access the actual software that runs inside the containers.

Summary

So after moving to systemd services it was really easy to port from Docker to Podman. My wishlist for Podman would be the following:

  • Podman modules for Ansible to replace the Docker modules
  • Ability to manage CNI networks using podman network ...
  • Name resolution inside Podman networks
  • Support for macOS

Luckily none of these were showstoppers for me and after figuring it all out it took about a day to convert five Ansible roles from Docker to Podman without loss of end user functionality.

Ansible 2.8 has a bunch of cool new stuff

So Ansible 2.8.0 was just released and it comes with a few really nice new features. I haven’t had time to use it much, since I just upgraded like 10 minutes ago, but reading through the Release Notes I found some really cool new things that I know I’ll enjoy in 2.8.

Automatic detection of Python path

This is a really nice feature. It used to be that Ansible always looked for /usr/bin/python on the target system. If you wanted to use anything else you needed to adjust ansible_python_interpreter. No more! Now Ansible will do a much smarter lookup where it will not only look for Python in several locations before giving up, it will adapt to the system it is executing on. So for example on Ubuntu we always had to explicitly tell Ansible to use /usr/bin/python3 since there is no /usr/bin/python by default. Now Ansible will know this out of the box.
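
Just to illustrate what we can now drop: this is the kind of override we used to carry around for our Ubuntu hosts.

# inventory/group_vars/ubuntu_hosts.yml (no longer needed with 2.8)
ansible_python_interpreter: /usr/bin/python3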

Better SSH on macOS

Ansible moved away from the Paramiko library in favor of SSH a long time ago, except when executed on macOS. With 2.8, those of us using a MacBook will finally get some of those sweet performance improvements that SSH has over Paramiko, which means a lot since the biggest downside to Ansible is its slow execution.

Accessing undefined variables is fine

So when you had a large structure with nested objects and wanted to access one and give it a default if it, or any parent, was undefined you needed to do this:

{{ ((foo | default({})).bar | default({})).baz | default('DEFAULT') }}

or

{{ foo.bar.baz if (foo is defined and foo.bar is defined and foo.bar.baz is defined) else 'DEFAULT' }}

Ansible 2.8 will no longer throw an error if you try to access an attribute of an undefined variable, but will instead just give you undefined back. So now you can just do this:

{{ foo.bar.baz | default('DEFAULT') }}

A lot more elegant!

Tons of new modules

Of course, as with any new release of Ansible, there is also a long list of new modules. The ones that I am currently most interested in are the Foreman modules. Ansible comes with just a single module for Foreman / Satellite, but I have been using foreman-ansible-modules for a while now, and 2.8 deprecates the old foreman module in favor of this collection. Hopefully they will soon be incorporated into Ansible core so I don’t have to fetch them from GitHub and put them inside my role.

There is also a bunch of fact-gathering modules for Docker, such as docker_volume_info, docker_network_info, docker_container_info and docker_host_info, that will be great for inspecting Docker objects. Although, with RHEL 8 we will hopefully be moving away from Docker, so these may come a little too late to the party, to be honest.
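
Just to sketch how one of them could be used, for example to check whether a container exists before acting on it (the container name here is made up):

- name: Gather facts about the hello container
  docker_container_info:
    name: hello
  register: hello_info

- name: Report whether the container exists
  debug:
    msg: "hello exists"
  when: hello_info.exists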

There’s a bunch of new KubeVirt modules which may be really cool once we move over to OpenShift 4 and run some virtual machines in it.

Other noteworthy modules are:

  • OpenSSL fact gathering for certificates, keys and CSRs
  • A whole bunch of VMware modules
  • A few Ansible Tower modules
  • A bunch of Windows modules

Red Hat OpenShift 4 is here

Wow! This is a biggie!

So Red Hat just released OpenShift 4 with a ton of new features. I haven’t had time to try it all out yet but here are some of my favorites.

RHEL CoreOS

Well, this might actually be a post on its own. This is the first new release of CoreOS after the Red Hat acquisition, and it serves as the successor of both CoreOS and RHEL Atomic Host. It’s basically RHEL built for OpenShift. Kinda like how RHV uses an OSTree-based RHEL as well.

I love Atomic Host. The OSTree model is really neat, allowing you to really lock down the operating system and do atomic upgrades. Either it works, or you roll back. There is nothing in between. And being able to lock down the OS completely (by disabling the rpm-ostree commands) means the attack surface is greatly reduced.

What CoreOS brings to the Atomic Host in this new, merged version is greater management and a more streamlined delivery of updates, as well as tighter integration with OpenShift.

Cluster management

So, that tighter integration with OpenShift is really what’s key here. It means that you can manage the lifecycle of the hosts running Kubernetes directly from Kubernetes. OpenShift 4 also comes with a new installer that uses a bootstrap node to spin up all the necessary virtual machines for the cluster. Running OpenShift on premises will give you the same sweet experience as you would get running Google Kubernetes Engine or Amazon ECS. No need to manually manage virtual machines for applying updates or scaling out or in.

Service Mesh

Next up is Service Mesh. This is Red Hat’s supported implementation of Istio and Jaeger, two relatively new open source projects which bring some cool new features to Kubernetes for managing the growing network complexity that you get when you move more and more stuff into the microservice model.

Getting full visibility and control over the network is a great security win, and you know how we at Basalt love security. I’ll be sure to check out OpenShift 4 and bring it into the Basalt Container Platform to get these awesome new features to our customers.

Operators

Lastly, there is the Operator Framework. This is really a natural evolution of packaging, deploying and managing container-based services. Just as CoreOS means improved management of the hosts running under OpenShift, Operators mean improved management of the services running on top of it. My bet is that we will package more and more of our turn-key services, such as Basalt Log Service and Basalt Monitor Service, as Operators that run on top of OpenShift.

So that’s a wrap for the biggest news in OpenShift 4. I will do a deep dive later on when I get the chance, and perhaps write a more detailed article when I’ve really gotten my hands dirty with it.

Top new features in Red Hat Enterprise Linux 8

So Red Hat just released a new version of Red Hat Enterprise Linux (RHEL). Two questions: what’s new, and is RHEL still rhelevant with cloud, containers, serverless and all that?

Let’s start with the last question. The simple answer is: absolutely!

No matter how your workload is executed or where your data is stored, there will always be physical hardware running an operating system (OS). With containers and virtual machines, the ratio of operating system instances to physical servers is actually going up! You now have even more instances of whatever OS you use. Picking a supported, stable and secure OS is still important, but features such as adaptability are growing even more important. We all want stable and secure, but with change happening ever faster we also crave the latest and greatest at the same time.

This is where the biggest new features in RHEL 8 become very relevant.

Application Streams

Having a stable OS usually means you need to sacrifice modernity (it’s a word!) and hold off on the latest versions of your platforms and tools. Here’s where “Application Streams” come in. In RHEL 8 you can leave the core OS stable and predictable, and still run the latest version of Node.js or Python. This will not affect the core components such as dnf which will continue to use the version of Python shipped as standard.
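
Application Streams are handled through the module support in yum/dnf, so working with them looks roughly like this (the available stream versions will of course change over time):

# List the available Node.js streams, then install a specific one
$ dnf module list nodejs
$ dnf module install nodejs:10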

Finally we can have both stable and modern!

Web Console with Session Recording

Based on Cockpit, which has been around for a while, the Red Hat Enterprise Linux Web Console allows non-geeks who don’t have much experience with Linux in general, or RHEL in particular, to administer and operate RHEL with ease. All the usual Cockpit tools are there, like resource utilization, user management, a shell console, and system logs. One really cool new feature is called Session Recording. It allows you to record SSH sessions on the server and play them back, giving you total visibility into who did what. This is a great feature for the security conscious among us.

Universal Base Image

The last feature I would like to highlight is the new container image that is released along with RHEL 8: the Universal Base Image (UBI). It’s not really new, as it has been available for a while, but the big news is that it no longer requires an active RHEL subscription. This is big because it means we can build containers based on RHEL using our Apple laptops or temporary CentOS virtual machines. When the container goes into production it can be tied to an active subscription from the host. This gives you the freedom to build and test containers anywhere, without sacrificing enterprise support in production. Finally!
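
Building on UBI is just a matter of pointing your Dockerfile at the public registry. A minimal sketch (the httpd package is just an example):

# No subscription needed to pull or build on this image
FROM registry.access.redhat.com/ubi8/ubi
RUN dnf -y install httpd && dnf clean all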

Red Hat Summit 2019

Once every year the world’s biggest open source company invites customers, partners and nerds to come together and share their knowledge, stories and visions for three days.

I was blessed by Basalt with a trip to this year’s incarnation of Red Hat Summit in Boston. I got three days packed full of breakout sessions, hands-on labs, partner demos and, most importantly, meeting cool people and making new connections.

So what is going on in the open source world right now? One of the biggest trends is container technology with Kubernetes at the forefront. The Red Hat product here is OpenShift and it is being pushed aggressively. But I think there is good reason for Red Hat to push for it. It’s a bet on the future and the future is containers (or at least that’s what a lot of us strongly believe). Pushing OpenShift is Red Hat trying to capitalize on that future as the provider of one of the key infrastructure components powering that future.

Another really big trend is automation. Well, to be fair automation has been around for thousands of years so calling it a trend might not be fair, but we see a strong push for Red Hat Ansible as the way to automate not only deployments and configurations, but what we call “day 2 operations”. Things such as managing users, granting and removing access, creating workspaces, moving stuff around, tweaking parameters. All the work that the IT admins do every day. Will Ansible steal the job of our beloved IT admins and create massive unemployment problems around the globe? Not likely. Ansible will be (and is!) helping IT admins focus on the fun part of their job such as developing the environment with new features, improved configurations, awesome optimizations and completely new deployments. Because let’s face it: IT admins don’t particularly enjoy feeding in the same data over and over when creating users. Or managing approval workflows just to close a service ticket from a developer asking for a port opening. Ansible coupled with a self-service portal will make life easier for the burdened IT admin, giving them an hour extra every morning to have breakfast with their kids. Cause that’s the ultimate goal of automation: removing the boring parts of life so we can spend our limited time doing stuff that makes us happy.

The last trend, a bit of an outsider relative to the others, is Artificial Intelligence. There were lots of sessions and talks about the emerging use of AI for various use cases. But the thing that makes AI stand out is that Red Hat really has no product for this market right now. Mostly they position OpenShift as the platform on which you should run the AI engine, but they don’t offer their own AI engine today. I strongly believe this will change soon. AI is becoming more and more necessary. It’s moving from “something cool that makes for a sweet demo” to “something we require to continue to grow”. As systems become more dynamic and the number of events generated in the system grows, there is more and more stuff to analyze. If a web request returns a 503 and it is related to a hundred different services running on many virtual machines across multiple clouds, it’s hard to do root cause analysis as a human. Using an AI engine you can quickly find out that the 503 is caused by a CPU overload that is in turn caused by a configuration issue causing an infinite loop in a completely separate process. And that’s just one use case where AI will become more or less required in the systems of the future. As data grows, AI is required to manage and make sense of that data.

So to summarize, the trends that were prominent during Red Hat Summit 2019 are:

  • Containers
  • Automation
  • Artificial Intelligence

If you are not yet exploring these trends let me know and we can help you ensure you stay modern in a world where the only constant is change.

Automatic testing of Docker containers

So we are all building and packaging our stuff inside containers. That’s great! Containers let us focus more on configuration and less on installation. We can scale stuff on Kubernetes. We can run stuff on everything from macOS to CentOS. In short, containers open up a ton of opportunities for deployment and operations. Great.

If you are using containers you are probably also aware of continuous integration and how important it is to use automatic tests of all your new code. But do you test your containers, or just the code inside it?

At Basalt we don’t make applications. We make systems. We put together a bunch of third-party applications and integrate them with each other. Most of it runs inside containers. So for us it is important to test these containers, to make sure they behave correctly, before we push them to the registry.

Use encapsulation

So the approach that we chose was to encapsulate the container we wanted to test inside a second test container. This is easily done by using the FROM directive when building the test container.

The test container installs additional test tools and copies the test code into the container. We chose to use InSpec as the test framework for this, but any other framework, or just plain Bash, works just as well.

Testing Nginx with InSpec

So let’s make an example test of a container. In this example I will build a very simple web service using the Nginx container. Then I will use InSpec to verify that the container works properly.

Let’s start by creating all files and folders:

$ mkdir -p mycontainer mycontainer_test/specs
$ touch mycontainer/Dockerfile \
    mycontainer_test/{Dockerfile,specs/web.rb}
$ tree .
.
├── mycontainer
│   └── Dockerfile
└── mycontainer_test
    ├── Dockerfile
    └── specs
        └── web.rb

3 directories, 3 files

Then add the following content to mycontainer/Dockerfile:

FROM nginx
RUN echo "Hello, friend" > \
  /usr/share/nginx/html/index.html

Now we can build the app container:

$ docker build -t mycontainer mycontainer/.
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM nginx
 ---> 2bcb04bdb83f
Step 2/2 : RUN echo "Hello, friend" >   /usr/share/nginx/html/index.html
 ---> Running in 7af1cec318f9
Removing intermediate container 7af1cec318f9
 ---> fe25cbbf80f9
Successfully built fe25cbbf80f9
Successfully tagged mycontainer:latest

$ docker run -d --name hello -p 8080:80 mycontainer
cfd3c3ea70c3512c5a7cc9ac2c9d74244aec4dd1a4bb68645e671cbe551af4ab

$ curl localhost:8080
Hello, friend

$ docker rm -f hello
hello

Great! So the container builds, and manual testing shows that it works. The next step is to build the test container. Add the following content to mycontainer_test/Dockerfile:

FROM mycontainer
ARG INSPEC_VERSION=3.7.11

# install inspec
RUN apt-get update \
 && apt-get -y install curl procps \
 && curl -o /tmp/inspec.deb \
    https://packages.chef.io/files/stable/inspec/${INSPEC_VERSION}/ubuntu/18.04/inspec_${INSPEC_VERSION}-1_amd64.deb \
 && dpkg -i /tmp/inspec.deb \
 && rm /tmp/inspec.deb

# copy specs
COPY specs /specs

Next we write our tests. Add the following to mycontainer_test/specs/web.rb:

name = 'nginx: master process nginx -g daemon off;'
describe processes(name) do
  it { should exist }
  its('users') { should eq ['root'] }
end

describe processes('nginx: worker process') do
  it { should exist }
  its('users') { should eq ['nginx'] }
end

describe http('http://localhost') do
  its('status') { should eq 200 }
  its('body') { should eq "Hello, friend\n" }
end

Now we can build and run our test container:

$ docker build -t mycontainer:test mycontainer_test/.
Sending build context to Docker daemon  4.608kB

  ... snip ...

Successfully built 6b270e36447a
Successfully tagged mycontainer:test

$ docker run -d --name hello_test mycontainer:test
e3f0e4a06efa416167d5d30458785bf66975d7837d9fc2b04634bb8291bc5679

$ docker exec hello_test inspec exec /specs

Profile: tests from /specs (tests from .specs)
Version: (not specified)
Target:  local://

  Processes nginx: master process nginx -g daemon off;
     ✔  should exist
     ✔  users should eq ["root"]
  Processes nginx: worker process
     ✔  should exist
     ✔  users should eq ["nginx"]
  HTTP GET on http://localhost
     ✔  status should eq 200
     ✔  body should eq "Hello, friend\n"

Test Summary: 6 successful, 0 failures, 0 skipped

$ docker rm -f hello_test
hello_test

And that’s it! We have successfully built an app container and tested it using InSpec. If you want to use another version of InSpec you can specify it as an argument when building the test container:

$ docker build -t mycontainer:test \
    --build-arg INSPEC_VERSION=1.2.3 mycontainer_test/.

Happy testing!