Red Hat OpenShift 4 is here

Wow! This is a biggie!

So Red Hat just released OpenShift 4 with a ton of new features. I haven’t had time to try it all out yet but here are some of my favorites.


Red Hat CoreOS

Well, this might actually be a post of its own. This is the first new release of CoreOS after the Red Hat acquisition, and it serves as the successor to both CoreOS and RHEL Atomic Host. It's basically RHEL built for OpenShift. Kinda like how RHV uses an OSTree-based RHEL as well.

I love Atomic Host. The OSTree model is really neat, allowing you to really lock down the operating system and do atomic upgrades. Either an upgrade works, or you roll back. There is nothing in between. And being able to lock down the OS completely (by disabling the rpm-ostree commands) means the attack surface is greatly reduced.

What CoreOS brings to Atomic Host in this new, merged version is better management, a more streamlined delivery of updates, and tighter integration with OpenShift.

Cluster management

So, that tighter integration with OpenShift is really what's key here. It means you can manage the lifecycle of the hosts running Kubernetes directly from Kubernetes. OpenShift 4 also comes with a new installer that uses a bootstrap node to spin up all the necessary virtual machines for the cluster. Running OpenShift on premises will give you the exact same sweet experience as running Google Kubernetes Engine or Amazon ECS. No need to manually manage virtual machines for applying updates or scaling out or in.

Service Mesh

Next up is Service Mesh. This is Red Hat's supported implementation of Istio and Jaeger, two relatively new open source projects that bring some cool new features to Kubernetes for managing the growing network complexity you get when you move more and more stuff into the microservice model.

Getting full visibility and control over the network is a great security win, and you know how we at Basalt love security. I'll be sure to check out OpenShift 4 and bring it into Basalt Container Platform to get these awesome new features to our customers.


Operators

Lastly, there is the Operator Framework. This is really a natural evolution of packaging, deploying and managing container-based services. Just as CoreOS means improved management of the hosts running under OpenShift, Operators mean improved management of the services running on top of it. My bet is that we will package more and more of our turn-key services, such as Basalt Log Service and Basalt Monitor Service, as Operators that run on top of OpenShift.
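To give a feel for the model, here is a sketch of what a custom resource for a hypothetical Basalt Log Service operator could look like. The API group, kind and fields below are made up for illustration:

```yaml
# Hypothetical custom resource: the operator watches for objects of
# this kind and creates/updates everything needed to run the service.
apiVersion: basalt.example.com/v1
kind: LogService
metadata:
  name: production-logs
spec:
  replicas: 3          # the operator scales the underlying pods
  retentionDays: 30    # and reconfigures retention on the fly
  storageSize: 100Gi   # and provisions persistent storage
```

You declare the state you want, and the operator does the installing, upgrading and reconfiguring for you.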

So that's a wrap for the biggest news in OpenShift 4. I will do a deep dive later when I get the chance, and perhaps write a more detailed article once I've really gotten my hands dirty with it.

Top new features in Red Hat Enterprise Linux 8

So Red Hat just released a new version of Red Hat Enterprise Linux (RHEL). Two questions: what’s new, and is RHEL still rhelevant with cloud, containers, serverless and all that?

Let’s start with the last question. The simple answer is: absolutely!

No matter how your workload is executed or where your data is stored, there will always be physical hardware running an operating system (OS). With containers and virtual machines, the ratio of operating systems to physical servers is actually going up! You now have even more instances of whatever OS you use. Picking a supported, stable and secure OS is still important, but qualities such as adaptability are growing even more important. We all want stable and secure, but with change happening ever faster we also crave the latest and greatest at the same time.

This is where the biggest new features in RHEL 8 become very relevant.

Application Streams

Having a stable OS usually means you need to sacrifice modernity (it's a word!) and hold off on the latest versions of your platforms and tools. Here's where Application Streams come in. In RHEL 8 you can keep the core OS stable and predictable, and still run the latest version of Node.js or Python. This does not affect core components such as dnf, which will continue to use the version of Python shipped as standard.

Finally we can have both stable and modern!
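Application Streams are delivered as dnf modules, so switching to a newer stream is just a couple of commands. A quick sketch of the workflow (the stream versions below are examples; run dnf module list to see what your release actually ships):

```shell
# See which streams are available for Node.js
$ dnf module list nodejs

# Install a specific stream instead of the default
$ dnf module install nodejs:10

# Changed your mind? Reset the module back to the default
$ dnf module reset nodejs
```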

Web Console with Session Recording

Based on Cockpit, which has been around for a while, the Red Hat Enterprise Linux Web Console allows non-geeks who don't have much experience with Linux in general, or RHEL in particular, to administer and operate RHEL with ease. All the usual Cockpit tools are there: resource utilization, user management, a shell console, and system logs. One really cool new feature is called Session Recording. It lets you record SSH sessions on the server and play them back, giving you total visibility into who did what. This is a great feature for the security conscious among us.
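Getting the web console with session recording up and running is quick. A sketch, assuming the RHEL 8 package names:

```shell
# Install Cockpit plus the session recording module
$ yum -y install cockpit cockpit-session-recording

# Enable the web console and browse to https://<host>:9090
$ systemctl enable --now cockpit.socket
```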

Universal Base Image

The last feature I would like to highlight is the new container image released along with RHEL 8: Universal Base Image (UBI). It's not really new, as it has been available for a while, but the big news is that it no longer requires an active RHEL subscription. This is big because it means we can build containers based on RHEL using our Apple laptops or temporary CentOS virtual machines. When the container goes into production it can be tied to an active subscription from the host. This gives you the freedom to build and test containers anywhere, without sacrificing enterprise support in production. Finally!
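In practice this means pulling UBI and building on top of it works from any machine with podman or docker installed, no subscription attached:

```shell
# Pull the UBI 8 base image from Red Hat's public registry
$ podman pull registry.access.redhat.com/ubi8/ubi

# And base your own images on it, on any machine
$ cat Dockerfile
FROM registry.access.redhat.com/ubi8/ubi
RUN yum -y install python3
$ podman build -t myapp .
```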

Red Hat Summit 2019

Once every year the world’s biggest open source company invites customers, partners and nerds to come together and share their knowledge, stories and visions for three days.

I was blessed by Basalt with a trip to this year's incarnation of Red Hat Summit in Boston. I got three days packed full of breakout sessions, hands-on labs, partner demos, and, most importantly, meeting cool people and making new connections.

So what is going on in the open source world right now? One of the biggest trends is container technology, with Kubernetes at the forefront. The Red Hat product here is OpenShift, and it is being pushed aggressively. But I think there is good reason for Red Hat to push it. It's a bet on the future, and the future is containers (or at least that's what a lot of us strongly believe). Pushing OpenShift is Red Hat trying to capitalize on that future as the provider of one of its key infrastructure components.

Another really big trend is automation. Well, to be fair, automation has been around for thousands of years, so calling it a trend might not be fair, but we see a strong push for Red Hat Ansible as the way to automate not only deployments and configurations, but what we call "day 2 operations": managing users, granting and removing access, creating workspaces, moving stuff around, tweaking parameters. All the work that IT admins do every day.

Will Ansible steal the job of our beloved IT admins and create massive unemployment problems around the globe? Not likely. Ansible will be (and is!) helping IT admins focus on the fun parts of their job, such as developing the environment with new features, improved configurations, awesome optimizations and completely new deployments. Because let's face it: IT admins don't particularly enjoy feeding in the same data over and over when creating users, or managing approval workflows just to close a service ticket from a developer asking for a port opening. Ansible coupled with a self-service portal will make life easier for the burdened IT admin, giving them an extra hour every morning to have breakfast with their kids. Because that's the ultimate goal of automation: removing the boring parts of life so we can spend our limited time doing stuff that makes us happy.

The last trend, which is a bit of an outsider relative to the others, is Artificial Intelligence. There are lots of sessions and talks about the emerging use of AI for various use cases. But the thing that makes AI stand out is that Red Hat really has no product for this market right now. Mostly they position OpenShift as the platform on which you should run your AI engine, but they don't offer their own AI engine today. I strongly believe this will change soon. AI is becoming more and more necessary. It's moving from "something cool that makes for a sweet demo" to "something we require to continue to grow". As systems become more dynamic and the number of events generated grows, there is more and more stuff to analyze. If a web request returns a 503, and it is related to a hundred different services running on many virtual machines across multiple clouds, it's hard to do root cause analysis as a human. Using an AI engine you can quickly find out that the 503 is caused by a CPU overload, which in turn is caused by a configuration issue causing an infinite loop in a completely separate process. And that's just one use case where AI will become more or less required in the systems of the future. As data grows, AI is required to manage and make sense of that data.

So to summarize, the trends that were prominent during Red Hat Summit 2019 are:

  • Containers
  • Automation
  • Artificial Intelligence

If you are not yet exploring these trends let me know and we can help you ensure you stay modern in a world where the only constant is change.

Automatic testing of Docker containers

So we are all building and packaging our stuff inside containers. That's great! Containers let us focus more on configuration and less on installation. We can scale stuff on Kubernetes. We can run stuff on everything from macOS to CentOS. In short, containers open up a ton of opportunities for deployment and operations. Great.

If you are using containers you are probably also aware of continuous integration and how important it is to automatically test all your new code. But do you test your containers, or just the code inside them?

At Basalt we don't make applications. We make systems. We put together a bunch of third party applications and integrate them. Most of it runs inside containers. So for us it is important to test these containers, to make sure they behave correctly, before we push them to the registry.

Use encapsulation

So the approach that we chose was to encapsulate the container we wanted to test inside a second test container. This is easily done by using the FROM directive when building the test container.


The test container installs additional test tools and copies the test code into the container. We chose to use InSpec as the test framework for this, but any other framework, or just plain Bash, works just as well.
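If you'd rather skip InSpec, the same idea works with a plain Bash script baked into the test container. A minimal sketch of what such a script could look like:

```shell
#!/bin/bash
# Minimal plain-Bash test script: exit non-zero on the first failed check
set -e

# nginx master and worker processes should be running
pgrep -f "nginx: master process" > /dev/null
pgrep -f "nginx: worker process" > /dev/null

# the web server should answer with our page
curl -fs http://localhost | grep -q "Hello"

echo "all checks passed"
```

For the rest of this post I'll stick with InSpec, which gives nicer reporting for free.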

Testing Nginx with InSpec

So let’s make an example test of a container. In this example I will build a very simple web service using the Nginx container. Then I will use InSpec to verify that the container works properly.

Let’s start by creating all files and folders:

$ mkdir -p mycontainer mycontainer_test/specs
$ touch mycontainer/Dockerfile \
        mycontainer_test/Dockerfile \
        mycontainer_test/specs/web.rb
$ tree .
.
├── mycontainer
│   └── Dockerfile
├── mycontainer_test
│   ├── Dockerfile
│   └── specs
│       └── web.rb

3 directories, 3 files

Then add the following content to mycontainer/Dockerfile:

FROM nginx
RUN echo "Hello, friend" > /usr/share/nginx/html/index.html

Now we can build the app container:

$ docker build -t mycontainer mycontainer/.
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM nginx
 ---> 2bcb04bdb83f
Step 2/2 : RUN echo "Hello, friend" > /usr/share/nginx/html/index.html
 ---> Running in 7af1cec318f9
Removing intermediate container 7af1cec318f9
 ---> fe25cbbf80f9
Successfully built fe25cbbf80f9
Successfully tagged mycontainer:latest

$ docker run -d --name hello -p 8080:80 mycontainer

$ curl localhost:8080
Hello, friend

$ docker rm -f hello

Great! So the container builds, and manual testing shows that it works. The next step is to build the test container. Add the following content to mycontainer_test/Dockerfile:

FROM mycontainer

# version of InSpec to install; can be overridden with --build-arg
ARG INSPEC_VERSION=3.9.3

# install inspec
RUN apt-get update \
 && apt-get -y install curl procps \
 && curl -o /tmp/inspec.deb \
    https://packages.chef.io/files/stable/inspec/${INSPEC_VERSION}/ubuntu/18.04/inspec_${INSPEC_VERSION}-1_amd64.deb \
 && dpkg -i /tmp/inspec.deb \
 && rm /tmp/inspec.deb

# copy specs
COPY specs /specs

Next we write our tests. Add the following to mycontainer_test/specs/web.rb:

name = 'nginx: master process nginx -g daemon off;'
describe processes(name) do
  it { should exist }
  its('users') { should eq ['root'] }
end

describe processes('nginx: worker process') do
  it { should exist }
  its('users') { should eq ['nginx'] }
end

describe http('http://localhost') do
  its('status') { should eq 200 }
  its('body') { should eq "Hello, friend\n" }
end

Now we can build and run our test container:

$ docker build -t mycontainer:test mycontainer_test/.
Sending build context to Docker daemon  4.608kB

  ... snip ...

Successfully built 6b270e36447a
Successfully tagged mycontainer:test

$ docker run -d --name hello_test mycontainer:test

$ docker exec hello_test inspec exec /specs

Profile: tests from /specs (tests from .specs)
Version: (not specified)
Target:  local://

  Processes nginx: master process nginx -g daemon off;
     ✔  should exist
     ✔  users should eq ["root"]
  Processes nginx: worker process
     ✔  should exist
     ✔  users should eq ["nginx"]
  HTTP GET on http://localhost
     ✔  status should eq 200
     ✔  body should eq "Hello, friend\n"

Test Summary: 6 successful, 0 failures, 0 skipped

$ docker rm -f hello_test

And that’s it! We have successfully built an app container and tested it using InSpec. If you want to use another version of InSpec you can specify it as an argument when building the test container:

$ docker build -t mycontainer:test \
    --build-arg INSPEC_VERSION=1.2.3 mycontainer_test/.

Happy testing!