How to master environment variables

If you have been using Linux for a while you might have encountered the term “environment variables” a few times. You might even have run the command export FOO=bar occasionally. But what are environment variables really and how can you master them?

In this post I will go through how you can manipulate environment variables, both permanently and temporarily. Lastly, I will wrap up with some tips on how to properly use environment variables in Ansible.

Check your environment

So what is your environment? You can inspect it by running env on the command line and searching it with a simple grep:

$ env
COLORTERM=truecolor
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
DESKTOP_SESSION=gnome
DISPLAY=:1
GDMSESSION=gnome
GDM_LANG=en_US.UTF-8
GJS_DEBUG_OUTPUT=stderr
GJS_DEBUG_TOPICS=JS ERROR;JS LOG

... snip ...

$ env | grep -i path
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
OMF_PATH=/home/ephracis/.local/share/omf
PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin
WINDOWPATH=2

So where are all these variables coming from, and how can we change them or add more, both permanently and temporarily?

Know your session

Before we can talk about how the environment is created and populated, we need to understand how sessions work. Different kinds of sessions read different files to populate their environment.

Login shells

Login shells are created when you SSH to the server or log in at the physical terminal. These are easy to spot since you need to actually log in (hence the name) to the server in order to create the session. You can also identify these sessions by noting the small dash in front of the shell name when you run ps -f:

$ ps -f
UID        PID  PPID  C STIME TTY          TIME CMD
ephracis 23382 23375  0 10:59 pts/0    00:00:00 -fish
ephracis 23957 23382  0 11:06 pts/0    00:00:00 ps -f

Interactive shells

Interactive shells are the ones that read your input. This means most sessions that you, the human, are working with. For example, every tab in your graphical terminal app is an interactive shell. Note that this means the session created when you log in to your server, either over SSH or from the physical terminal, is both an interactive and a login shell.

Non-interactive shells

Apart from interactive shells (which login shells usually are) we have non-interactive shells. These are the ones created by various scripts and tools that do not attach anything to stdin and thus cannot provide interactive input to the session.
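If you want to check what kind of shell you are currently in, here is a minimal sketch, assuming Bash ($- lists the shell's option flags, and the login_shell shopt is Bash-specific):

# prints "interactive" if this shell reads commands from a terminal
case $- in
  *i*) echo "interactive" ;;
  *)   echo "non-interactive" ;;
esac

# prints "login" if Bash was started as a login shell
shopt -q login_shell && echo "login" || echo "non-login"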

Know your environment files

Now that we know about the different types of sessions that can be created, we can talk about how the environment of these sessions is populated with variables. On most systems we use Bash, since that is the default shell on virtually all distributions. But you might have changed this to some other shell like Zsh or Fish, especially on your workstation where you spend most of your time. Which shell you use determines which files are read to populate the environment.
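If you are unsure which shell a session is actually running, a quick sanity check could look like this (Bash assumed; note that $SHELL only shows your configured login shell, while ps shows what is really running):

$ echo $SHELL
/bin/bash

$ ps -p $$ -o comm=
bash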

Bash

Bash will look for the following files:

  • /etc/profile
    Run for login shells.
  • ~/.bash_profile
    Run for login shells.
  • /etc/bashrc
    Run for interactive, non-login shells.
  • ~/.bashrc
    Run for interactive, non-login shells.

That part about non-login is important, and it is the reason why many users and distributions configure bash_profile to read bashrc so that it is applied in all interactive sessions, like so:

[[ -r ~/.bashrc ]] && . ~/.bashrc

Zsh

Zsh looks for a few more files than Bash does (see the sketch after this list):

  • /etc/zshenv
    Run for every zsh shell.
  • ~/.zshenv
    Run for every zsh shell.
  • /etc/zprofile
    Run for login shells.
  • ~/.zprofile
    Run for login shells.
  • /etc/zshrc
    Run for interactive shells.
  • ~/.zshrc
    Run for interactive shells.
  • /etc/zlogin
    Run for login shells.
  • ~/.zlogin
    Run for login shells.
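If you want to see for yourself in which order Zsh reads these files, a simple (if noisy) sketch is to add an echo line at the top of each of your own files:

# at the top of ~/.zshenv
echo "reading ~/.zshenv"

# at the top of ~/.zprofile
echo "reading ~/.zprofile"

# at the top of ~/.zshrc
echo "reading ~/.zshrc"

# at the top of ~/.zlogin
echo "reading ~/.zlogin"

Opening a new terminal tab (an interactive, non-login shell) should then print only the zshenv and zshrc lines, while logging in over SSH prints all four.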

Fish

Fish will read the following files on start up:

  • /etc/fish/config.fish
    Run for every fish shell.
  • /etc/fish/conf.d/*.fish
    Run for every fish shell.
  • ~/.config/fish/config.fish
    Run for every fish shell.
  • ~/.config/fish/conf.d/*.fish
    Run for every fish shell.

As you can see, Fish does not distinguish between login shells and interactive shells when it reads its startup files. If you need to run something only in login or interactive shells, you can use if status --is-login or if status --is-interactive inside your startup files, as shown below.
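A minimal sketch of what that could look like in ~/.config/fish/config.fish (the variable, greeting and alias are just made-up examples):

# run for every fish shell
set -gx EDITOR vim

if status --is-login
    # only for login shells
    echo "Welcome back"
end

if status --is-interactive
    # only for interactive shells, e.g. aliases
    alias ll 'ls -l'
end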

Manipulate the environment

So that’s a bit complicated, but hopefully things are clearer now. The next step is to start manipulating the environment. First of all, you can obviously edit those files and wait until the next session is created, or load the newly edited file into your current session using either source /path/to/file or the shorthand . /path/to/file. That is the way to make permanent changes to your environment.
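For example, a permanent change in Bash could look like this minimal sketch (FOO is just a placeholder variable):

# append an export to ~/.bashrc
$ echo 'export FOO=bar' >> ~/.bashrc

# load it into the current session instead of waiting for a new one
$ source ~/.bashrc
$ echo $FOO
bar

But sometimes you only want to change this temporarily.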

To apply variables to a single command, you simply prepend the assignments to the command like so:

# for bash or zsh
$ FOO=one BAR=two my_cool_command ...

# for fish
$ env FOO=one BAR=two my_cool_command ...

This will make the variables available to the command, and they go away as soon as the command finishes.

If you want to keep the variable and have it available to all future commands in your session, you run the assignment as a standalone command like so:

# for bash or zsh
$ FOO=one

# for fish
$ set FOO one

# then use it later in your session
$ echo $FOO
one

As you can see, the variable is available to the echo command run later in the session. The variable will not be available to other sessions, and will disappear when the current session ends.
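To see the difference, here is a minimal sketch (Bash assumed) showing that a variable set this way is not inherited by a subshell:

[parent] $ FOO=one
[parent] $ bash
[child] $ echo $FOO

[child] $ exit

The echo in the child prints an empty line, since FOO was never exported.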

Finally, you can export the variable to make it available to subprocesses that are spawned from the session:

# for bash or zsh
[parent] $ export FOO=one

# for fish
[parent] $ set --export FOO one

# then spawn a subsession and access the variable
[parent] $ bash
[child] $ echo $FOO
one

What about Ansible

If you are using Ansible to orchestrate your servers, you might ask yourself what kind of session that is and which files you should change to manipulate the environment Ansible uses on the target servers. While you could go down that road, a much simpler approach is to use the environment keyword in Ansible:

- name: Manipulating environment in Ansible
  hosts: my_hosts

  # play level environment
  environment:
    FOO: one
    BAR: two

  tasks:

    # task level environment
    - name: My task
      environment:
        FOO: uno
      some_module: ...

This can be combined with variables, for example environment: "{{ my_environment }}", allowing you to use group vars or host vars to adapt the environment for different servers and scenarios, as sketched below.
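A minimal sketch of that pattern, assuming a hypothetical group_vars file for the my_hosts group:

# group_vars/my_hosts.yml
my_environment:
  FOO: one
  BAR: two

# playbook.yml
- name: Use a per-group environment
  hosts: my_hosts
  environment: "{{ my_environment }}"

  tasks:

    - name: My task
      some_module: ...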

Conclusion

The environment in Linux is a complex beast, but armed with the knowledge above you should be able to tame it and harvest its powers for your own benefit. The environment is populated by different files depending on the kind of session and shell used. You can temporarily set a variable for a one-shot command, or for the remaining duration of the session. To make subshells inherit a variable, use the export keyword/flag.

Lastly, if you are using Ansible you should really look into the environment keyword before you start to experiment with the different profile and rc-files on the target system.

IBM joins Linux Foundation AI – But is it enough?

AI is advancing rapidly within the enterprise — by Gartner’s count, more than half of organizations already have at least one AI deployment in operation, and they’re planning to substantially accelerate their AI adoption within the next few years. At the same time, the organizations building and deploying these tools have yet to really grapple with the flaws and shortcomings of AI – whether the models deployed are fair, ethical, secure or even explainable.

Before the world is overrun with flawed AI systems, IBM is aiming to rev up the development of open source trusted AI workflows. As part of that effort, the company is joining the Linux Foundation AI (LF AI) as a General Member. 

Read more over at ZDNet.

I think this is a good step by IBM and it further proves that they are committed to the open source model. This should put to rest the concerns many have over the future of an open Red Hat under the ownership of IBM.

With AI use growing rapidly in the industry, it is a good sign that the development of open source models and frameworks continues to grow. However, there are still a lot of shortcomings when it comes to open source and AI. The models are only one part of a complete AI system. The other, perhaps more vital, part is the data used to train the models. No data, no AI.

For this purpose there are some open AI datasets, such as Skymind.ai, but it is unlikely we will see companies and organisations sharing their datasets, as it would give away a lot of their competitive edge. That’s not to say that open sourcing the AI models is a bad thing. But until we get proper datasets open to the public, we will all be in the hands of the companies that own the trained models, lacking the kind of transparency and freedom that usually comes with the open source model.

Featured image via www.vpnsrus.com

Twitter joins the rest of the world – moves to Kubernetes

Twitter Inc / The Linux Foundation

Zhang Lei, Senior Technical Expert at Alibaba, reports that David McLaughlin, Product and Technical Head of Twitter Computing Platform, has announced that Twitter is switching from Apache Mesos to Kubernetes.

You would be forgiven for thinking that Twitter was already using Kubernetes to manage all its services, given that it’s used by Netflix, Google and Facebook among many others. But Twitter has actually been using Apache Mesos, a competitor to Kubernetes.

The biggest difference between Mesos and Kubernetes is that Mesos is much more ambitious and complex. This means that it is harder to get started with Mesos than with Kubernetes. But this isn’t likely to affect Twitter, as they already have a ton of experience with Mesos (they have been heavily involved in developing it). The bigger problem, which also affects Twitter, is the size of the open source community around each project. Mesos’s community is much smaller than Kubernetes’s, meaning there are fewer developers working on it, fewer companies using it and sharing their experiences, fewer experts ready to answer questions, and so on. This is a big deal and probably the main reason why Twitter is making the switch.

Hopefully this means that Twitter will bring some of their expertise and skills to the Kubernetes community and help develop the project even further.

If you’re using netstat you’re doing it wrong – an ss tutorial for oldies

Become a modern master with some serious ss skills

If you are still using netstat you are doing it wrong. Netstat was replaced by ss many moons ago, and it’s long overdue to throw out the old and learn how to get the same results in a whole new way. Because we all love to learn stuff just for the fun of it, right?

But seriously, ss is way better than netstat because it talks to the kernel directly via Netlink and can thus give you much more info than the old netstat ever could. So to help old folks like me transition from netstat to ss, I’ll give you a translation table to port you over. But first, in case there are some newcomers who aren’t encumbered with old baggage, I’ll quickly describe a few common tasks you can do with ss.

Check open ports that someone is listening to

One of my most common use cases is to see if my process is up and running and listening for connections, or, if something is listening on a port, to find out who it is. To do this, use --listening to get sockets in the LISTEN state, --processes to show the process that is listening, --tcp to limit the output to TCP sockets, and --numeric to clean things up, since I never remember that sunrpc means port 111:

$ ss --listening --tcp --numeric --processes
State     Recv-Q  Send-Q  Local Address:Port    Peer Address:Port                                                                                    
LISTEN    0       128     0.0.0.0:111           0.0.0.0:*                                                                                       
LISTEN    0       128     127.0.0.1:27060       0.0.0.0:*        users:(("steam",pid=29811,fd=45))                                              
LISTEN    0       10      0.0.0.0:57621         0.0.0.0:*        users:(("spotify",pid=11223,fd=106))                                           
LISTEN    0       32      192.168.122.1:53      0.0.0.0:*                                                                                       
LISTEN    0       128     0.0.0.0:22            0.0.0.0:*                                                                                       
LISTEN    0       5       127.0.0.1:631         0.0.0.0:*                                                                                       
LISTEN    0       128     0.0.0.0:17500         0.0.0.0:*        users:(("dropbox",pid=13706,fd=98))                                            
LISTEN    0       128     0.0.0.0:27036         0.0.0.0:*        users:(("steam",pid=29811,fd=82))                                              
LISTEN    0       128     127.0.0.1:57343       0.0.0.0:*        users:(("steam",pid=29811,fd=39))

Check active connections

Checking just active sessions is easy. Just type ss. If you want to filter and show only TCP connections, use the --tcp flag like so:

$ ss --tcp
State        Recv-Q   Send-Q   Local Address:Port     Peer Address:Port     
ESTAB        0        0        192.168.1.102:57044    162.125.18.133:https    
ESTAB        0        0        192.168.1.102:34008    104.16.3.35:https    
CLOSE-WAIT   32       0        192.168.1.102:52008    162.125.70.7:https

The same goes for UDP and the --udp flag.

Get a summary

Instead of listing individual sessions you can also get a nice summary of all sessions by using the --summary flag:

$ ss --summary
Total: 1625
TCP:   77 (estab 40, closed 12, orphaned 0, timewait 6)

Transport Total     IP        IPv6
RAW       0         0         0        
UDP       33        29        4        
TCP       65        59        6        
INET      98        88        10       
FRAG      0         0         0

Translation table going from netstat to ss

Lastly, as promised, here is a nice table to help you transition. Believe me, it’s quite easy to remember.

netstat                    ss
-------                    --
netstat -a                 ss
netstat -au                ss -u
netstat -ap | grep ssh     ss -p | grep ssh
netstat -l                 ss -l
netstat -lpn               ss -lpn
netstat -r                 ip route
netstat -g                 ip maddr

Performance analysis between RHEL 7.6 and RHEL 8.0

Apart from all the cool new features in the freshly released Red Hat Enterprise Linux 8, one thing that is just as important is the improvement in performance. The team over at Red Hat has run a bunch of benchmarks on both RHEL 7.6 and RHEL 8.0, and the results show some really nice improvements.

Overall the performance looks good. The chart below shows around 5% improvement in CPU, 20% less memory usage, 15% increased disk I/O, and around 20-30% improved network performance.

a candlestick chart which combines multiple tests
Photo: Red Hat

Looking at more specific metrics we see a 40% increase in disk throughput on the XFS file system as shown in the chart below.

RHEL 7.6 vs RHEL 8 AIM7 shared throughput - XFS
Photo: Red Hat

If you are running OpenStack the network control plane will also see a large improvement when moving to RHEL 8. Read the full article at redhat.com for more details.

Quick overview of the new features in Kubernetes 1.15

Kubernetes 1.15 has been released and it comes with a lot of new stuff that will improve the way you deploy and manage services on the platform. The biggest highlights are quota for custom resources and improved monitoring.

Quota for custom resources

We have had quota for native resources for a while now, but this new release allows us to create quotas for custom resources as well. This means that we can control Operators running on Kubernetes using quotas. For example, you could create a quota saying that each developer gets to deploy 2 Elasticsearch clusters and 10 PostgreSQL clusters, as sketched below.
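A sketch of what such a quota could look like, using the count/<resource>.<group> syntax for object count quotas; the namespace and the custom resource names (elasticsearches.example.com, postgresqls.example.com) are made up for illustration:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: operator-quota
  namespace: dev-alice
spec:
  hard:
    # at most 2 Elasticsearch clusters in this namespace
    count/elasticsearches.example.com: "2"
    # at most 10 PostgreSQL clusters in this namespace
    count/postgresqls.example.com: "10"

Each developer namespace can then get its own ResourceQuota, and attempts to create more clusters than the quota allows are rejected by the API server.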

Improved monitoring

Whether you run a production cluster, or a lab where you test stuff out, it is important to have proper monitoring so you can detect issues before they become problems. Kubernetes 1.15 comes with support for third-party vendors to supply device metrics without having to modify the code of Kubernetes. This means that your cluster can use hardware-specific metrics, such as GPU metrics, without needing explicit support in Kubernetes for that specific device.

The metrics for storage have also improved, with support for monitoring volumes from custom storage providers.

Lastly, monitoring performance has improved since only the core metrics are collected by the kubelet.

How to get it

Most users consume Kubernetes as part of a distribution such as OpenShift, and they will have to wait until that distribution upgrades to Kubernetes 1.15. The latest version of OpenShift, version 4.1, comes with Kubernetes 1.13, and I would expect Kubernetes 1.15 to be available in OpenShift 4.3, which should arrive in the beginning of 2020.

Principles of container-based application design

“Principles of software design:

  • Keep it simple, stupid (KISS)
  • Don’t repeat yourself (DRY)
  • You aren’t gonna need it (YAGNI)
  • Separation of concerns (SoC)

Red Hat approach to cloud-native containers:

  • Single concern principle (SCP)
  • High observability principle (HOP)
  • Life-cycle conformance principle (LCP)
  • Image immutability principle (IIP)
  • Process disposability principle (PDP)
  • Self-containment principle (S-CP)
  • Runtime confinement principle (RCP)”

After the move to Infrastructure-as-Code and containerization, it is only natural that we start to apply some of the lessons we learned during software development to building our infrastructure.

Read more at redhat.com.