How to master environment variables

If you have been using Linux for a while you might have encountered the term “environment variables” a few times. You might even have run the command export FOO=bar occasionally. But what are environment variables really and how can you master them?

In this post I will go through how you can manipulate environment variables, both permanently and temporarily. Lastly I will wrap up with some tips on how to properly use environment variables in Ansible.

Check your environment

So what is your environment? You can inspect it by running env on the command line and searching it with a simple grep:

$ env
COLORTERM=truecolor
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
DESKTOP_SESSION=gnome
DISPLAY=:1
GDMSESSION=gnome
GDM_LANG=en_US.UTF-8
GJS_DEBUG_OUTPUT=stderr
GJS_DEBUG_TOPICS=JS ERROR;JS LOG

... snip ...

$ env | grep -i path
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
OMF_PATH=/home/ephracis/.local/share/omf
PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin
WINDOWPATH=2

So where are all these variables coming from, and how can we change them or add more, both permanently and temporarily?

Know your session

Before we can talk about how the environment is created and populated, we need to understand how sessions work. Different kinds of sessions read different files to populate their environment.

Login shells

Login shells are created when you SSH to the server or log in at the physical terminal. These are easy to spot since you need to actually log in (hence the name) to the server in order to create the session. You can also identify these sessions by noting the small dash in front of the shell name when you run ps -f:

$ ps -f
UID        PID  PPID  C STIME TTY          TIME CMD
ephracis 23382 23375  0 10:59 pts/0    00:00:00 -fish
ephracis 23957 23382  0 11:06 pts/0    00:00:00 ps -f
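If the dash is hard to spot, bash can also tell you directly whether the current session is a login shell. A quick sketch, bash only:

# prints "login" in a login shell, "non-login" otherwise
$ shopt -q login_shell && echo login || echo non-login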

Interactive shells

Interactive shells are the ones that read your input. This means most sessions that you, the human, are working with. For example, every tab in your graphical terminal app is an interactive shell. Note that this means the session created when you log in to your server, whether over SSH or from the physical terminal, is both an interactive and a login shell.

Non-interactive shells

Apart from interactive shells (and login shells, which are usually interactive as well) we have non-interactive shells. These are the ones created by various scripts and tools that do not attach anything to stdin and thus cannot provide interactive input to the session.

Know your environment files

Now that we know about the different types of sessions that can be created, we can start to talk about how the environment of these sessions is populated with variables. On most systems we use Bash, since that’s the default shell on virtually all distributions. But you might have changed this to some other shell like Zsh or Fish, especially on your workstation where you spend most of your time. Which shell you use determines which files are used to populate the environment.

Bash

Bash will look for the following files:

  • /etc/profile
    Run for login shells.
  • ~/.bash_profile
    Run for login shells.
  • /etc/bashrc
    Run for interactive, non-login shells.
  • ~/.bashrc
    Run for interactive, non-login shells.

That part about non-login is important, and it is the reason why many users and distributions configure bash_profile to read bashrc so that it is applied in all sessions, like so:

[[ -r ~/.bashrc ]] && . ~/.bashrc

Zsh

Zsh will look for a few more files than Bash does:

  • /etc/zshenv
    Run for every zsh shell.
  • ~/.zshenv
    Run for every zsh shell.
  • /etc/zprofile
    Run for login shells.
  • ~/.zprofile
    Run for login shells.
  • /etc/zshrc
    Run for interactive shells.
  • ~/.zshrc
    Run for interactive shells.
  • /etc/zlogin
    Run for login shells.
  • ~/.zlogin
    Run for login shells.

Fish

Fish will read the following files on startup:

  • /etc/fish/config.fish
    Run for every fish shell.
  • /etc/fish/conf.d/*.fish
    Run for every fish shell.
  • ~/.config/fish/config.fish
    Run for every fish shell.
  • ~/.config/fish/conf.d/*.fish
    Run for every fish shell.

As you can see, Fish does not distinguish between login shells and interactive shells when it reads its startup files. If you need to run something only in login or interactive shells you can use if status --is-login or if status --is-interactive inside your scripts.
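For example, a minimal sketch of a ~/.config/fish/config.fish that guards parts of the setup (the PATH entry and the alias are just illustrations):

# ~/.config/fish/config.fish
if status --is-login
    # extend PATH once, at login
    set -gx PATH $PATH ~/bin
end

if status --is-interactive
    # aliases only matter when a human is typing
    alias ll 'ls -l'
end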

Manipulate the environment

So that’s a bit complicated, but hopefully things are clearer now. The next step is to start manipulating the environment. First of all, you can obviously edit those files and wait until the next session is created, or load the newly edited file into your current session using either source /path/to/file or the shorthand . /path/to/file. That is the way to make permanent changes to your environment. But sometimes you only want to change things temporarily.
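Before moving on to the temporary options, here is a quick bash sketch of the permanent approach (the EDITOR variable is just an illustration):

# append the setting to a startup file, then load it into the current session
$ echo 'export EDITOR=vim' >> ~/.bashrc
$ source ~/.bashrc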

To apply variables to a single command you just prepend them to the command, like so:

# for bash or zsh
$ FOO=one BAR=two my_cool_command ...

# for fish
$ env FOO=one BAR=two my_cool_command ...

This will make the variable available to the command, and then go away as soon as the command finishes.

If you want to keep the variable and have it available to all future commands in your session, you run the assignment as a standalone command like so:

# for bash or zsh
$ FOO=one

# for fish
$ set FOO one

# then use it later in your session
$ echo $FOO
one

As you can see, the variable is available to the echo command run later in the session. The variable will not be available to other sessions, and will disappear when the current session ends.

Finally, you can export the variable to make it available to subprocesses spawned from the session:

# for bash or zsh
[parent] $ export FOO=one

# for fish
[parent] $ set --export FOO one

# then spawn a subsession and access the variable
[parent] $ bash
[child] $ echo $FOO
one

What about Ansible

If you are using Ansible to orchestrate your servers you might ask yourself what kind of session that creates, and which files you should change to manipulate the environment Ansible uses on the target servers. While you could go down that road, a much simpler approach is to use the environment keyword in Ansible:

- name: Manipulating environment in Ansible
  hosts: my_hosts

  # play level environment
  environment:
    FOO: one
    BAR: two

  tasks:

    # task level environment
    - name: My task
      environment:
        FOO: uno
      some_module: ...

The environment keyword can also be set from a variable, environment: "{{ my_environment }}", allowing you to use group vars or host vars to adapt the environment for different servers and scenarios.
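A minimal sketch of what that could look like (the my_environment name and the proxy value are just illustrations):

# group_vars/production.yml
my_environment:
  FOO: one
  http_proxy: http://proxy.example.com:3128

# playbook.yml
- name: Use a per-group environment
  hosts: my_hosts
  environment: "{{ my_environment }}"

  tasks:
    - name: The variable is visible to the task
      shell: echo $FOO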

Conclusion

The environment in Linux is a complex beast, but armed with the knowledge above you should be able to tame it and harvest its powers for your own benefit. The environment is populated by different files depending on the kind of session and shell used. You can temporarily set a variable for a one-shot command, or for the remaining duration of the session. To make subshells inherit a variable, use the export keyword/flag.

Lastly, if you are using Ansible you should really look into the environment keyword before you start to experiment with the different profile and rc-files on the target system.

If you’re using netstat you’re doing it wrong – an ss tutorial for oldies

Become a modern master with some serious ss skills

If you are still using netstat you are doing it wrong. Netstat was replaced by ss many moons ago and it’s long overdue to throw out the old and learn how to get the same results in a whole new way. Because we all love to learn stuff just for the fun of it, right?

But seriously, ss is way better than netstat because it talks to the kernel directly via Netlink and can thus give you much more info than the old netstat ever could. So, to help old folks like me transition from netstat to ss, I’ll give you a translation table to port you over. But first, in case there are some newcomers who aren’t encumbered with old baggage, I’ll quickly describe a few common tasks you can do with ss.

Check open ports that someone is listening to

One of my most common use cases is to see if my process is up and running and listening for connections, or, if something is listening on a port, to find out what it is. To do this, use --listening to get sockets in the LISTEN state, --processes to get the process that is listening, --tcp to limit the output to TCP, and, to clean things up, --numeric, since I never remember that sunrpc means port 111:

$ ss --listening --tcp --numeric --processes
State     Recv-Q  Send-Q  Local Address:Port    Peer Address:Port                                                                                    
LISTEN    0       128     0.0.0.0:111           0.0.0.0:*                                                                                       
LISTEN    0       128     127.0.0.1:27060       0.0.0.0:*        users:(("steam",pid=29811,fd=45))                                              
LISTEN    0       10      0.0.0.0:57621         0.0.0.0:*        users:(("spotify",pid=11223,fd=106))                                           
LISTEN    0       32      192.168.122.1:53      0.0.0.0:*                                                                                       
LISTEN    0       128     0.0.0.0:22            0.0.0.0:*                                                                                       
LISTEN    0       5       127.0.0.1:631         0.0.0.0:*                                                                                       
LISTEN    0       128     0.0.0.0:17500         0.0.0.0:*        users:(("dropbox",pid=13706,fd=98))                                            
LISTEN    0       128     0.0.0.0:27036         0.0.0.0:*        users:(("steam",pid=29811,fd=82))                                              
LISTEN    0       128     127.0.0.1:57343       0.0.0.0:*        users:(("steam",pid=29811,fd=39))

Check active connections

Checking just active sessions is easy. Just type ss. If you want to filter and show only TCP connections, use the --tcp flag like so:

$ ss --tcp
State        Recv-Q   Send-Q   Local Address:Port     Peer Address:Port     
ESTAB        0        0        192.168.1.102:57044    162.125.18.133:https    
ESTAB        0        0        192.168.1.102:34008    104.16.3.35:https    
CLOSE-WAIT   32       0        192.168.1.102:52008    162.125.70.7:https

The same goes for UDP and the --udp flag.
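ss also accepts filter expressions, something netstat never had. A small sketch (filtering on the https port is just an illustration):

# show only established TCP sessions involving port 443
$ ss --tcp state established '( dport = :https or sport = :https )'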

Get a summary

Instead of listing individual sessions you can also get a nice summary of all sessions by using the --summary flag:

$ ss --summary
Total: 1625
TCP:   77 (estab 40, closed 12, orphaned 0, timewait 6)

Transport Total     IP        IPv6
RAW       0         0         0        
UDP       33        29        4        
TCP       65        59        6        
INET      98        88        10       
FRAG      0         0         0

Translation table going from netstat to ss

Lastly, as promised, here is a nice table to help you transition. Believe me, it’s quite easy to remember.

old                       new
netstat -a                ss
netstat -au               ss -u
netstat -ap | grep ssh    ss -p | grep ssh
netstat -l                ss -l
netstat -lpn              ss -lpn
netstat -r                ip route
netstat -g                ip maddr

Performance comparison of RHEL 7.6 and RHEL 8.0

Apart from all the cool new features in the freshly released Red Hat Enterprise Linux 8, one thing that is just as important is the improvement in performance. The team over at Red Hat has performed a bunch of benchmark tests on both RHEL 7.6 and RHEL 8.0, and the results show some really nice improvements.

Overall the performance looks good. The chart below shows around 5% improvement in CPU, 20% less memory usage, 15% increased disk I/O, and around 20-30% improved network performance.

[Chart: a candlestick chart combining the results of multiple tests. Photo: Red Hat]

Looking at more specific metrics we see a 40% increase in disk throughput on the XFS file system as shown in the chart below.

[Chart: RHEL 7.6 vs RHEL 8, AIM7 shared throughput on XFS. Photo: Red Hat]

If you are running OpenStack the network control plane will also see a large improvement when moving to RHEL 8. Read the full article at redhat.com for more details.

Principles of container-based application design

“Principles of software design:

  • Keep it simple, stupid (KISS)
  • Don’t repeat yourself (DRY)
  • You aren’t gonna need it (YAGNI)
  • Separation of concerns (SoC)

Red Hat approach to cloud-native containers:

  • Single concern principle (SCP)
  • High observability principle (HOP)
  • Life-cycle conformance principle (LCP)
  • Image immutability principle (IIP)
  • Process disposability principle (PDP)
  • Self-containment principle (S-CP)
  • Runtime confinement principle (RCP)”

After the move to Infrastructure-as-Code and containerization, it is only natural that we start to apply some of the lessons we learned in software development to building our infrastructure.

Read more at redhat.com.

What’s new in Linux kernel 5.2

A new version of the Linux kernel has just been released. Here’s a short summary of the new stuff that might be interesting for end users.

  • Logitech
    • Support for MX5500
    • Support for S510 
    • Support for Unifying receiver
    • Viewing battery status
  • Realtek
    • Support for RTL8822BE
    • Support for RTL8822CE
  • SoC
    • Support for Nvidia Jetson Nano
    • Support for Orange Pi RK3399
    • Support for Orange Pi 3
  • Nvidia
    • Nouveau supports GeForce GTX 1650
  • Intel
    • Support for Intel Comet Lake
    • Support for Intel Ice Lake graphics
    • Support for Intel Elkhart Lake graphics
    • Hibernation on Cherry Trail and Bay Trail
    • Support for Thunderbolt on older Apple hardware
  • AMD
    • Improved support for Ryzen
    • Improved support for Radeon X1000
    • Support for upcoming EPYC
  • ARM
    • Spectre mitigation
    • Support for ARM Mali
  • Other
    • Improved support for DisplayPort over USB-C

Ansible 2.8 has a bunch of cool new stuff

So Ansible 2.8.0 was just released and it comes with a few really nice new features. I haven’t had time to use it much, since I just upgraded like 10 minutes ago, but reading through the Release Notes I found some really cool new things that I know I’ll enjoy in 2.8.

Automatic detection of Python path

This is a really nice feature. It used to be that Ansible always looked for /usr/bin/python on the target system. If you wanted to use anything else you needed to adjust ansible_python_interpreter. No more! Now Ansible does a much smarter lookup: it will not only look for Python in several locations before giving up, it will also adapt to the system it is executing on. For example, on Ubuntu we always had to explicitly tell Ansible to use /usr/bin/python3 since there is no /usr/bin/python by default. Now Ansible knows this out of the box.
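For reference, this is roughly what the old workaround looked like in an INI inventory (the group and host names are just illustrations):

# inventory: the pre-2.8 workaround, no longer needed
[ubuntu_hosts]
web1.example.com ansible_python_interpreter=/usr/bin/python3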

Better SSH on macOS

Ansible moved away from the Paramiko library in favor of OpenSSH a long time ago, except when executed on macOS. With 2.8, those of us using a MacBook will finally get some of those sweet performance improvements that OpenSSH has over Paramiko, which will mean a lot since the biggest downside to Ansible is its slow execution.

Accessing undefined variables is fine

Previously, when you had a large structure with nested objects and wanted to access one and give it a default if it, or any parent, was undefined, you needed to do this:

{{ ((foo | default({})).bar | default({})).baz | default('DEFAULT') }}

or

{{ foo.bar.baz if (foo is defined and foo.bar is defined and foo.bar.baz is defined) else 'DEFAULT' }}

Ansible 2.8 will no longer throw an error if you try to access an attribute of an undefined variable, but will instead just give you undefined back. So now you can just do this:

{{ foo.bar.baz | default('DEFAULT') }}

A lot more elegant!

Tons of new modules

Of course, as with any new release of Ansible, there is also a long list of new modules. For example, the ones that I am currently most interested in are the Foreman modules. Ansible comes with just a single module for Foreman / Satellite, but I have been using the foreman-ansible-modules for a while now, and 2.8 deprecates the old foreman plugin in favor of this collection. Hopefully they will soon be incorporated into Ansible Core so I don’t have to fetch them from GitHub and put them inside my role.

There are also a ton of fact-gathering modules for Docker, such as docker_volume_info, docker_network_info, docker_container_info and docker_host_info, which will be great for checking and manipulating Docker objects. Although, with RHEL 8 we will hopefully be moving away from Docker, so these may come a little too late to the party, to be honest.
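A small sketch of what using one of them could look like in a playbook (the container name is just an illustration):

# gather facts about a running container
- name: Inspect a container
  docker_container_info:
    name: my_app
  register: result

# the inspect output is returned under the "container" key
- name: Show its state
  debug:
    var: result.container.State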

There’s a bunch of new KubeVirt modules which may be really cool once we move over to OpenShift 4 and run some virtual machines in it.

Other noteworthy modules are:

  • OpenSSL fact gathering for certificates, keys and CSRs
  • A whole bunch of VMware modules
  • A few Ansible Tower modules
  • A bunch of Windows modules

Red Hat OpenShift 4 is here

Wow! This is a biggie!

So Red Hat just released OpenShift 4 with a ton of new features. I haven’t had time to try it all out yet but here are some of my favorites.

RHEL CoreOS

Well, this might actually be a post on its own. This is the first new release of CoreOS after the Red Hat acquisition, and it serves as the successor to both CoreOS and RHEL Atomic Host. It’s basically RHEL built for OpenShift. Kinda like how RHV uses an OSTree-based RHEL as well.

I love Atomic Host. The OSTree model is really neat, allowing you to really lock down the operating system and do atomic upgrades. Either it works, or you roll back. There is nothing in between. And being able to lock down the OS completely (by disabling the rpm-ostree commands) means the attack surface is greatly reduced.

What CoreOS brings to the Atomic Host in this new, merged version is greater management and a more streamlined delivery of updates, as well as tighter integration with OpenShift.

Cluster management

So, that tighter integration with OpenShift is really what’s key here. It means that you can manage the lifecycle of the hosts running Kubernetes directly from Kubernetes. OpenShift 4 also comes with a new installer that uses a bootstrap node to spin up all the necessary virtual machines for the cluster. Running OpenShift on premise will give you the exact same sweet experience as you would get running Google Kubernetes Engine or Amazon ECS. No need to manually manage virtual machines for applying updates or scaling out or in.

Service Mesh

Next up is Service Mesh. This is Red Hat’s supported implementation of Istio and Jaeger, two relatively new open source projects which bring some cool new features to Kubernetes for managing the growing network complexity you get when you move more and more stuff into the microservice model.

Getting full visibility and control over the network is a great security win, and you know how we at Basalt love security. I’ll be sure to check out OpenShift 4 and bring it into Basalt Container Platform to get those awesome new features to our customers.

Operators

Lastly, there is the Operators framework. This is really a natural evolution of packaging, deploying and managing container-based services. Just as CoreOS means improved management of the hosts running under OpenShift, Operators mean improved management of the services running on top of it. My bet is that we will package more and more of our turn-key services, such as Basalt Log Service and Basalt Monitor Service, as Operators that run on top of OpenShift.

So that’s a wrap for the biggest news in OpenShift 4. I will do a deep dive later on when I get the chance, and perhaps write a more detailed article when I’ve really gotten my hands dirty with it.

Top new features in Red Hat Enterprise Linux 8

So Red Hat just released a new version of Red Hat Enterprise Linux (RHEL). Two questions: what’s new, and is RHEL still rhelevant with cloud, containers, serverless and all that?

Let’s start with the last question. The simple answer is: absolutely!

No matter how your workload is executed or where your data is stored, there will always be physical hardware running an operating system (OS). With containers and virtual machines the ratio between operating systems and physical servers is actually going up! You now have even more instances of whatever OS you use. Picking a supported, stable and secure OS is still important, but features such as adaptability are growing even more important. We all want stable and secure, but with change moving ever faster we also crave the latest and greatest at the same time.

Here is where the biggest new features in RHEL 8 are very relevant.

Application Streams

Having a stable OS usually means you need to sacrifice modernity (it’s a word!) and hold off on the latest versions of your platforms and tools. Here’s where “Application Streams” come in. In RHEL 8 you can keep the core OS stable and predictable, and still run the latest version of Node.js or Python. This does not affect core components such as dnf, which will continue to use the version of Python shipped as standard.
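A rough sketch of what this looks like in practice (the available stream versions vary between releases, so nodejs:10 is just an illustration):

# list the streams available for a component
$ dnf module list nodejs

# enable and install a specific stream
$ dnf module enable nodejs:10
$ dnf module install nodejs:10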

Finally we can have both stable and modern!

Web Console with Session Recording

Based on Cockpit, which has been around for a while, the Red Hat Enterprise Linux Web Console allows non-geeks who don’t have much experience with Linux in general, or RHEL in particular, to administer and operate RHEL with ease. All the usual Cockpit tools are there, like resource utilization, user management, a shell console, and system logs. One really cool new feature is called Session Recording. This allows you to record SSH sessions on the server and play them back, giving you total visibility into who did what. This is a great feature for the security conscious among us.

Universal Base Image

The last feature I would like to highlight is the new container image that is released along with RHEL 8: the Universal Base Image (UBI). It’s not really new, it has been available for a while, but the big news is that it no longer requires an active RHEL subscription. This is big because it means we can build containers based on RHEL using our Apple laptops or temporary CentOS virtual machines. When the container goes into production it can be tied to an active subscription from the host. This gives you the freedom to build and test containers anywhere, without sacrificing enterprise support in production. Finally!
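A minimal sketch of building on top of UBI from any machine (the httpd package is just an illustration):

# Dockerfile
FROM registry.access.redhat.com/ubi8/ubi
RUN dnf install -y httpd && dnf clean all

# build it with podman (or docker), no subscription required
$ podman build -t my-ubi-app .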