IBM joins Linux Foundation AI – But is it enough?

AI is advancing rapidly within the enterprise — by Gartner’s count, more than half of organizations already have at least one AI deployment in operation, and they’re planning to substantially accelerate their AI adoption within the next few years. At the same time, the organizations building and deploying these tools have yet to really grapple with the flaws and shortcomings of AI: whether the models deployed are fair, ethical, secure or even explainable.

Before the world is overrun with flawed AI systems, IBM is aiming to rev up the development of open source trusted AI workflows. As part of that effort, the company is joining the Linux Foundation AI (LF AI) as a General Member. 

Read more over at ZDNet.

I think this is a good step by IBM, and it further proves that they are committed to the open source model. This should put to rest the concerns many have had over the future of an open Red Hat under IBM’s ownership.

With AI playing an ever-growing role in the industry, it is a good sign that the development of open source models and frameworks continues to grow. However, there are still a lot of shortcomings when it comes to open source and AI. The models are only part of a complete AI system. The other, perhaps more vital, part is the data used to train the models. No data, no AI.

For this purpose there are some collections of open datasets, such as the one curated by Skymind.ai, but it is unlikely we will see companies and organisations sharing their own datasets, as that would give away a lot of their competitive edge. That’s not to say that open sourcing AI models is a bad thing. But until proper datasets are open to the public, we will all be in the hands of the companies owning the trained models, lacking the kind of transparency and freedom that usually comes with the open source model.

Featured image via www.vpnsrus.com

Twitter joins the rest of the world – moves to Kubernetes

Image: Twitter Inc / The Linux Foundation

Zhang Lei, Senior Technical Expert at Alibaba, reports that David McLaughlin, Product and Technical Head of Twitter Computing Platform, has announced that Twitter is switching from Apache Mesos to Kubernetes.

You would be forgiven for thinking that Twitter was already using Kubernetes to manage all its services, given how widely the rest of the industry has adopted it. But Twitter has actually been using Apache Mesos, a competitor to Kubernetes.

The biggest difference between Mesos and Kubernetes is that Mesos is much more ambitious and complex, which makes it harder to get started with Mesos than with Kubernetes. This isn’t likely to bother Twitter, though, as they already have a ton of experience with Mesos (they have been heavily involved in developing it). The biggest drawback, and one that does affect Twitter, is the size of the open source community around each project. The Mesos community is much smaller than the Kubernetes one, meaning there are fewer developers working on it, fewer companies using it and sharing their experiences, fewer experts ready to answer questions, and so on. This is a big deal and probably the main reason why Twitter is making the switch.

Hopefully this means that Twitter will bring some of their expertise and skills to the Kubernetes community and help develop the project even further.

Performance comparison between RHEL 7.6 and RHEL 8.0

Apart from all the cool new features in the freshly released Red Hat Enterprise Linux 8, one thing that is just as important is the improvement in performance. The team over at Red Hat has run a bunch of benchmarks on both RHEL 7.6 and RHEL 8.0, and the results show some really nice improvements.

Overall the performance looks good. The chart below shows around a 5% improvement in CPU performance, 20% lower memory usage, 15% higher disk I/O, and a 20-30% improvement in network performance.

A candlestick chart combining the results of multiple tests
Photo: Red Hat

Looking at more specific metrics we see a 40% increase in disk throughput on the XFS file system as shown in the chart below.

RHEL 7.6 vs RHEL 8 AIM7 shared throughput - XFS
Photo: Red Hat

If you are running OpenStack, the network control plane will also see a large improvement when moving to RHEL 8. Read the full article at redhat.com for more details.

Quick overview of the new features in Kubernetes 1.15

Kubernetes 1.15 has been released, and it comes with a lot of new stuff that will improve the way you deploy and manage services on the platform. The biggest highlights are quotas for custom resources and improved monitoring.

Quota for custom resources

We have had quotas for native resources for a while now, but this new release allows us to create quotas for custom resources as well. This means that we can put limits on the resources managed by Operators running on Kubernetes. For example, you could create a quota saying each developer gets to deploy 2 Elasticsearch clusters and 10 PostgreSQL clusters, as in the sketch below.
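
Here is a minimal sketch of what that could look like. Kubernetes counts custom objects with the count/<plural>.<group> syntax; the namespace and the CRD group (example.com) below are placeholders for whatever your Operators actually install:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: operator-quota
  namespace: dev-alice                # one namespace per developer, illustrative
spec:
  hard:
    # count/<plural>.<group> limits how many objects of a custom resource may exist
    count/elasticsearches.example.com: "2"
    count/postgresqls.example.com: "10"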

Improved monitoring

Whether you run a production cluster or a lab where you test stuff out, it is important to have proper monitoring so you can detect issues before they become problems. Kubernetes 1.15 comes with support for third-party vendors to supply device metrics without having to modify the Kubernetes code itself. This means that your cluster can expose hardware-specific metrics, such as GPU metrics, without needing explicit support in Kubernetes for that specific device.

The metrics for storage have also improved, with support for monitoring volumes from custom storage providers.

Lastly, monitoring performance has improved, since the kubelet now only collects the core metrics.

More info is available in the official Kubernetes 1.15 release announcement.

How to get it

Most users consume Kubernetes as part of a distribution such as OpenShift, and they will have to wait until that distribution upgrades to Kubernetes 1.15. The latest version of OpenShift, version 4.1, comes with Kubernetes 1.13, and I would expect Kubernetes 1.15 to be available in OpenShift 4.3, which should arrive in early 2020.

What’s new in Linux kernel 5.2

A new version of the Linux kernel has just been released. Here’s a short summary of the new stuff that might be interesting for end users.

  • Logitech
    • Support for MX5500
    • Support for S510 
    • Support for Unifying receiver
    • Viewing battery status
  • Realtek
    • Support for RTL8822BE
    • Support for RTL8822CE
  • SoC
    • Support for Nvidia Jetson Nano
    • Support for Orange Pi RK3399
    • Support for Orange Pi 3
  • Nvidia
    • Nouveau supports GeForce GTX 1650
  • Intel
    • Support for Intel Comet Lake
    • Support for Intel Ice Lake graphics
    • Support for Intel Elkhart Lake graphics
    • Hibernation on Cherry Trail and Bay Trail
    • Support for Thunderbolt on older Apple hardware
  • AMD
    • Improved support for Ryzen
    • Improved support for Radeon X1000
    • Support for upcoming EPYC
  • ARM
    • Spectre mitigation
    • Support for ARM Mali
  • Other
    • Improved support for DisplayPort over USB-C

Ansible 2.8 has a bunch of cool new stuff

So Ansible 2.8.0 was just released and it comes with a few really nice new features. I haven’t had time to use it much, since I just upgraded like 10 minutes ago, but reading through the Release Notes I found some really cool new things that I know I’ll enjoy in 2.8.

Automatic detection of Python path

This is a really nice feature. It used to be that Ansible always looked for /usr/bin/python on the target system, and if you wanted to use anything else you needed to adjust ansible_python_interpreter. No more! Now Ansible does a much smarter lookup: not only will it look for Python in several locations before giving up, it will also adapt to the system it is executing on. For example, on Ubuntu we always had to explicitly tell Ansible to use /usr/bin/python3, since there is no /usr/bin/python by default. Now Ansible knows this out of the box.
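
For reference, this is the kind of per-host override we used to carry around in our inventories, and which interpreter discovery now makes unnecessary in most cases (the hostname is made up):

# inventory.yml — the pre-2.8 workaround for Ubuntu's missing /usr/bin/python
all:
  hosts:
    ubuntu-box.example.com:
      ansible_python_interpreter: /usr/bin/python3   # no longer needed in 2.8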

Better SSH on macOS

Ansible moved away from the Paramiko library in favor of native OpenSSH a long time ago, except when executed on macOS. With 2.8, those of us using a MacBook will finally get some of those sweet performance improvements that OpenSSH has over Paramiko. That means a lot, since the biggest downside to Ansible is its slow execution.

Accessing undefined variables is fine

Previously, when you had a large structure with nested objects and wanted to access a leaf value, giving it a default if the leaf, or any of its parents, was undefined, you needed to do this:

{{ ((foo | default({})).bar | default({})).baz | default('DEFAULT') }}

or

{{ foo.bar.baz if (foo is defined and foo.bar is defined and foo.bar.baz is defined) else 'DEFAULT' }}

Ansible 2.8 no longer throws an error if you try to access an attribute of an undefined variable; it just gives you undefined back. So now you can simply do this:

{{ foo.bar.baz | default('DEFAULT') }}

A lot more elegant!
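
Here is a minimal playbook to see the new behavior for yourself. Note that foo is never defined anywhere, yet the whole chain falls through to the default instead of blowing up:

# demo.yml — run with: ansible-playbook demo.yml
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Chained access on an undefined variable falls back to the default
      debug:
        msg: "{{ foo.bar.baz | default('DEFAULT') }}"   # prints 'DEFAULT' on 2.8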

Tons of new modules

Of course, as with any new release of Ansible, there is also a long list of new modules. The ones that I am currently most interested in are the Foreman modules. Ansible itself comes with just a single module for Foreman / Satellite, but I have been using the foreman-ansible-modules collection for a while now, and 2.8 deprecates the old foreman module in favor of this collection. Hopefully they will soon be incorporated into Ansible Core so I don’t have to fetch them from GitHub and put them inside my role.

There is also a ton of new fact-gathering modules for Docker, such as docker_volume_info, docker_network_info, docker_container_info and docker_host_info, which will be great for inspecting Docker objects; see the sketch below. Although, with RHEL 8 we will hopefully be moving away from Docker, so these may come a little too late to the party, to be honest.
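
The pattern is the usual register-and-inspect dance; the container name here is made up:

- name: Gather facts about a single container
  docker_container_info:
    name: web-frontend                # illustrative container name
  register: web_info

- name: Report whether the container exists
  debug:
    msg: "web-frontend exists: {{ web_info.exists }}"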

There’s a bunch of new KubeVirt modules which may be really cool once we move over to OpenShift 4 and run some virtual machines in it.

Other noteworthy modules are:

  • OpenSSL fact gathering for certificates, keys and CSRs
  • A whole bunch of VMware modules
  • A few Ansible Tower modules
  • A bunch of Windows modules

Red Hat OpenShift 4 is here

Wow! This is a biggie!

So Red Hat just released OpenShift 4 with a ton of new features. I haven’t had time to try it all out yet but here are some of my favorites.

RHEL CoreOS

Well, this might actually deserve a post of its own. This is the first new release of CoreOS after the Red Hat acquisition, and it serves as the successor to both CoreOS Container Linux and RHEL Atomic Host. It’s basically RHEL built for OpenShift, kinda like how RHV uses an OSTree-based RHEL as well.

I love Atomic Host. The OSTree model is really neat, allowing you to really lock down the operating system and do atomic upgrades. Either an upgrade works, or you roll back; there is nothing in between. And being able to lock down the OS completely (by disabling the rpm-ostree commands) means the attack surface is greatly reduced.

What CoreOS brings to Atomic Host in this new, merged version is better management, a more streamlined delivery of updates, and tighter integration with OpenShift.

Cluster management

So, that tighter integration with OpenShift is really what’s key here. It means that you can manage the lifecycle of the hosts running Kubernetes directly from Kubernetes. OpenShift 4 also comes with a new installer that uses a bootstrap node to spin up all the necessary virtual machines for the cluster. Running OpenShift on premises gives you the same sweet experience as running Google Kubernetes Engine or Amazon ECS: no need to manually manage virtual machines for applying updates or scaling out or in. A sketch of what that looks like in practice follows below.
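
As a hedged sketch of what “managing hosts from Kubernetes” means in practice: OpenShift 4 represents groups of machines as MachineSet objects in the machine.openshift.io API, so scaling worker nodes boils down to patching a replica count. The MachineSet name below is hypothetical:

- name: Scale a worker MachineSet to five nodes
  k8s:
    api_version: machine.openshift.io/v1beta1
    kind: MachineSet
    namespace: openshift-machine-api
    name: mycluster-worker-a          # hypothetical MachineSet name
    definition:
      spec:
        replicas: 5                   # the platform adds or removes VMs to match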

Service Mesh

Next up is Service Mesh. This is Red Hat’s supported implementation of Istio and Jaeger, two relatively new open source projects which bring some cool new features to Kubernetes for managing the growing network complexity you get when you move more and more stuff into the microservice model.

Getting full visibility and control over the network is a great security win, and you know how we at Basalt love security. I’ll definitely check out OpenShift 4 and bring it into Basalt Container Platform to get these awesome new features to our customers.

Operators

Lastly, there is the Operator framework. This is really a natural evolution of packaging, deploying and managing container-based services. Just as CoreOS means improved management of the hosts running under OpenShift, Operators mean improved management of the services running on top of it. My bet is that we will package more and more of our turn-key services, such as Basalt Log Service and Basalt Monitor Service, as Operators that run on top of OpenShift.

So that’s a wrap for the biggest news in OpenShift 4. I will do a deep dive later when I get the chance, and perhaps write a more detailed article once I’ve really gotten my hands dirty with it.

Top new features in Red Hat Enterprise Linux 8

So Red Hat just released a new version of Red Hat Enterprise Linux (RHEL). Two questions: what’s new, and is RHEL still rhelevant with cloud, containers, serverless and all that?

Let’s start with the last question. The simple answer is: absolutely!

No matter how your workload is executed or where your data is stored, there will always be physical hardware running an operating system (OS). With containers and virtual machines, the ratio of operating system instances to physical servers is actually going up: you now have even more instances of whatever OS you use. Picking a supported, stable and secure OS is still important, but qualities such as adaptability are growing even more important. We all want stable and secure, but with change coming ever faster we also crave the latest and greatest at the same time.

This is where the biggest new features in RHEL 8 are very relevant.

Application Streams

Having a stable OS usually means you need to sacrifice modernity (it’s a word!) and hold off on the latest versions of your platforms and tools. Here’s where “Application Streams” come in. In RHEL 8 you can leave the core OS stable and predictable, and still run the latest version of Node.js or Python. This does not affect core components such as dnf, which will continue to use the version of Python shipped as standard. See the sketch below for what enabling a stream looks like.
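
Since we are already in Ansible land, here is a sketch of installing from a stream in a playbook. "@nodejs:10" uses DNF's module notation (@module:stream), and the stream version is illustrative; on the command line the equivalent is dnf module install nodejs:10.

- name: Install Node.js from an application stream, leaving the core OS alone
  dnf:
    name: "@nodejs:10"                # '@module:stream', version illustrative
    state: present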

Finally we can have both stable and modern!

Web Console with Session Recording

Based on Cockpit, which has been around for a while, the Red Hat Enterprise Linux web console lets non-geeks who don’t have much experience with Linux in general, or RHEL in particular, administer and operate RHEL with ease. All the usual Cockpit tools are there: resource utilization, user management, a shell console, and system logs. One really cool new feature is called Session Recording. It allows you to record SSH sessions on the server and play them back, giving you total visibility into who did what. This is a great feature for the security conscious among us, and setting it up is a quick job, as sketched below.
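
A hedged sketch of the setup in Ansible; the cockpit-session-recording package name is my assumption, so check your RHEL 8 repositories:

- name: Install the web console and the session recording add-on
  dnf:
    name:
      - cockpit
      - cockpit-session-recording     # assumed package name for Session Recording
    state: present

- name: Start and enable the web console socket
  systemd:
    name: cockpit.socket
    state: started
    enabled: true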

Universal Base Image

The last feature I would like to highlight is the container image that is released along with RHEL 8: the Universal Base Image (UBI). It’s not really new, it has been available for a while, but the big news is that it no longer requires an active RHEL subscription. This is big because it means we can build containers based on RHEL using our Apple laptops or temporary CentOS virtual machines. When the container goes into production it can be tied to an active subscription from the host. This gives you the freedom to build and test containers anywhere, without sacrificing enterprise support in production. Finally!
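
Since no subscription is needed just to pull the image, fetching it works from any machine. Here is a sketch using the docker_image module (whose source parameter is itself new in Ansible 2.8):

- name: Pull the freely redistributable UBI 8 base image
  docker_image:
    name: registry.access.redhat.com/ubi8/ubi
    source: pull                      # 'source' is new in Ansible 2.8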

Red Hat Summit 2019

Once every year the world’s biggest open source company invites customers, partners and nerds to come together and share their knowledge, stories and visions for three days.

I was blessed by Basalt with a trip to this year’s incarnation of Red Hat Summit in Boston. I got three days packed full of breakout sessions, hands-on labs, partner demos and, most importantly, meeting cool people and making new connections.

So what is going on in the open source world right now? One of the biggest trends is container technology, with Kubernetes at the forefront. The Red Hat product here is OpenShift, and it is being pushed aggressively. But I think there is good reason for Red Hat to push it. It’s a bet on the future, and the future is containers (or at least that’s what a lot of us strongly believe). By pushing OpenShift, Red Hat is trying to capitalize on that future as the provider of one of its key infrastructure components.

Another really big trend is automation. Well, to be fair, automation has been around for thousands of years, so calling it a trend might not be fair, but we see a strong push for Red Hat Ansible as the way to automate not only deployments and configurations, but also what we call “day 2 operations”: managing users, granting and removing access, creating workspaces, moving stuff around, tweaking parameters. All the work that IT admins do every day.

Will Ansible steal the job of our beloved IT admins and create massive unemployment problems around the globe? Not likely. Ansible will be (and is!) helping IT admins focus on the fun parts of their job, such as developing the environment with new features, improved configurations, awesome optimizations and completely new deployments. Because let’s face it: IT admins don’t particularly enjoy feeding in the same data over and over when creating users, or managing approval workflows just to close a service ticket from a developer asking for a port opening. Ansible coupled with a self-service portal will make life easier for the burdened IT admin, giving them an hour extra every morning to have breakfast with their kids. Because that’s the ultimate goal of automation: removing the boring parts of life so we can spend our limited time doing stuff that makes us happy.

The last trend, a bit of an outsider relative to the others, is Artificial Intelligence. There were lots of sessions and talks about the emerging use of AI for various use cases. But the thing that makes AI stand out is that Red Hat really has no product for this market right now. Mostly they position OpenShift as the platform on which you should run your AI engine, but they don’t offer an AI engine of their own today. I strongly believe this will change soon. AI is becoming more and more necessary; it’s moving from “something cool that makes for a sweet demo” to “something we require to continue to grow”. As systems become more dynamic, the number of events they generate grows, and so does the amount of stuff you have to analyze. If a web request returns a 503 and it is related to a hundred different services running on many virtual machines across multiple clouds, it’s hard for a human to do root cause analysis. Using an AI engine, you can quickly find out that the 503 is caused by a CPU overload, which is in turn caused by a configuration issue causing an infinite loop in a completely separate process. And that’s just one use case where AI will become more or less required in the systems of the future. As data grows, AI is needed to manage and make sense of that data.

So to summarize, the trends that were prominent during Red Hat Summit 2019 are:

  • Containers
  • Automation
  • Artificial Intelligence

If you are not yet exploring these trends, let me know and we can help you stay modern in a world where the only constant is change.