
Memory Management in Linux
Operating Systems
Instructor: Vagif Salimov
Student: Muhaddisa Narimanova
Group: BBA-041
What is Linux?

Just like Windows, iOS, and macOS, Linux is an operating system. In fact, one of the most popular platforms on the planet, Android, is powered by the Linux operating system. An operating system is software that manages all of the hardware resources associated with your desktop or laptop. To put it simply, the operating system manages the communication between your software and your hardware. Without the operating system (OS), the software wouldn't function.
Linux Memory Management

The Linux memory-management subsystem is responsible for managing the memory in the system. It contains the implementation of demand paging and virtual memory.

Linux memory management is a complicated system that includes many features to support a variety of systems, from MMU-less microcontrollers to supercomputers.
For systems without an MMU, memory management is known as nommu; it deserves a dedicated document, which will hopefully be written eventually.

Here, we will assume that an MMU exists and that the CPU can translate any virtual address into a physical address.
Virtual Memory Primer

In a computer system, physical memory is a limited resource. Physical memory is not necessarily contiguous; it may be accessible as a set of distinct address ranges. Besides, different CPU architectures, and even different implementations of the same architecture, have different views of how these ranges are defined. This makes dealing with physical memory directly quite difficult, and to avoid this complexity the mechanism of virtual memory was devised. Virtual memory hides the details of physical memory from the application software, allows keeping only the required information in physical memory, and provides a mechanism for protection and controlled data sharing between processes.

Huge Pages

Address translation requires several memory accesses, and these accesses are very slow compared to the speed of the CPU. To avoid spending precious processor cycles on address translation, CPUs maintain a cache of such translations known as the Translation Lookaside Buffer (TLB).
Zones

Linux groups memory pages into zones according to their possible usage. For example, ZONE_HIGHMEM contains memory that is not permanently mapped into the kernel's address space, ZONE_DMA contains memory that can be used by devices for DMA, and ZONE_NORMAL contains normally addressed pages.

Page Cache

Since physical memory is volatile, the common case for getting data into memory is to read it from files. Whenever a file is read, the data is put into the page cache to avoid expensive disk access on subsequent reads. Similarly, whenever a file is written, the data is placed in the page cache and eventually gets into the backing storage device.
Nodes

Many multi-processor machines are NUMA (Non-Uniform Memory Access) systems. In such systems the memory is organized into banks that have different access latency depending on their "distance" from the processor. Each bank is referred to as a node, and for each node Linux constructs an independent memory-management subsystem. A node has its own set of zones, lists of free and used pages, and various statistics counters.

Anonymous Memory

Anonymous memory, or an anonymous mapping, represents memory that is not backed by any filesystem. Such mappings are created implicitly for the program's heap and stack, or explicitly by calls to the mmap(2) system call. Usually, anonymous mappings only define the areas of virtual memory that the program is allowed to access.
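As a brief illustration (not part of the original slides), the minimal sketch below creates an anonymous mapping with mmap(2) using MAP_ANONYMOUS | MAP_PRIVATE; the mapping size is an arbitrary example.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4 * 4096;  /* four pages, chosen arbitrarily */

        /* MAP_ANONYMOUS | MAP_PRIVATE: memory not backed by any file */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        memset(p, 0x2a, len);   /* touching the pages makes them resident */
        printf("anonymous mapping at %p\n", p);

        munmap(p, len);
        return 0;
    }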
OOM killer

It is possible that the kernel cannot reclaim enough memory and the machine is left without enough free memory to continue operating. In such cases, the out-of-memory (OOM) killer is invoked; it selects a task to kill so that the system can keep running.

Compaction

As the system runs, tasks allocate and free memory, and the memory becomes fragmented. Although virtual memory makes it possible to present scattered physical pages as contiguous, memory compaction addresses these fragmentation problems.

Reclaim

Linux memory management treats pages differently according to their usage. Pages that can be freed, either because they cache data that also exists elsewhere on the hard disk or because they can be swapped out to the hard disk again, are known as reclaimable.
CMA Debugfs Interface

It is useful for retrieving basic information about the different CMA areas and for testing allocation and release in each of them. Each CMA area gets its own directory under <debugfs>/cma/, indexed by the kernel's CMA index; hence, the first CMA area will be cma-0.
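As a rough sketch (not from the slides), a test allocation can be requested by writing a page count to an area's alloc file. The path and file name below are assumptions based on the kernel's CMA debugfs documentation; they require a kernel built with CONFIG_CMA_DEBUGFS, debugfs mounted, and root privileges.

    #include <stdio.h>

    int main(void)
    {
        /* Assumed path of the "alloc" test file for the first CMA area. */
        const char *path = "/sys/kernel/debug/cma/cma-0/alloc";
        FILE *f = fopen(path, "w");

        if (!f) {
            perror("fopen");   /* debugfs must be mounted and caller must be root */
            return 1;
        }
        fprintf(f, "1\n");     /* ask the kernel to allocate 1 page from this area */
        fclose(f);
        return 0;
    }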
HugeTLB Pages

The goal of this section is to give a short overview of hugetlbpage support in the Linux kernel. This support is built on top of the multiple page size support provided by most modern architectures.

A TLB is a cache of virtual-to-physical address translations, and it is typically a very scarce resource on a processor. Operating systems try to make the best use of the limited number of TLB entries.

Users can make use of huge page support in the Linux kernel either through the classical SYSV shared memory system calls (shmget and shmat) or through the mmap system call. First, the Linux kernel needs to be built with the CONFIG_HUGETLBFS and CONFIG_HUGETLB_PAGE configuration options. The /proc/meminfo file gives details of the total count of persistent hugetlb pages in the kernel's huge page pool.

The /proc/sys/vm/nr_hugepages file shows the current count of "persistent" huge pages in the kernel's huge page pool. "Persistent" huge pages are returned to the pool when freed by a task. A user with root privileges can dynamically allocate or free persistent huge pages by increasing or decreasing the nr_hugepages value. The pages that are used as huge pages are reserved inside the kernel and cannot be used for other purposes. Huge pages cannot be swapped out under memory pressure.
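As an illustrative sketch (not part of the original slides), the program below maps memory backed by huge pages using mmap with the MAP_HUGETLB flag. It assumes the default huge page size is 2 MiB (as on x86-64) and that at least one persistent huge page has been reserved beforehand, for example by writing a page count to /proc/sys/vm/nr_hugepages as root.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define LENGTH (2UL * 1024 * 1024)   /* one 2 MiB huge page (assumed size) */

    int main(void)
    {
        /* MAP_HUGETLB requests huge pages from the kernel's hugetlb pool. */
        void *p = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");  /* fails if no huge pages are reserved */
            return 1;
        }

        memset(p, 0, LENGTH);             /* touch the page so it is really allocated */
        printf("huge-page mapping at %p\n", p);

        munmap(p, LENGTH);
        return 0;
    }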
Idle Page Tracking

Motivation

This feature allows tracking which memory pages are being accessed by a workload. This information can be useful for estimating the workload's working set size, which in turn can be taken into account when configuring the workload's parameters, deciding where to place the workload, or setting memory cgroup limits in a compute cluster.

User API

The idle page tracking API is located at /sys/kernel/mm/page_idle. It currently consists of a single read-write file, /sys/kernel/mm/page_idle/bitmap.
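As a minimal sketch (not from the slides), the bitmap is accessed as an array of 64-bit words, one bit per page frame: writing a set bit marks the frame idle, and a bit that is still set on a later read means the frame was not accessed in between. The starting page frame number below is an arbitrary example; in practice the PFNs of a workload's pages would be obtained from /proc/<pid>/pagemap, and root privileges are required.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const uint64_t start_pfn = 0x10000;      /* arbitrary example PFN */
        off_t offset = (start_pfn / 64) * 8;     /* bitmap is read/written in 64-bit words */
        uint64_t word = ~0ULL;                   /* set all 64 bits: mark these frames idle */

        int fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        if (pwrite(fd, &word, sizeof(word), offset) != sizeof(word))
            perror("pwrite");

        /* ... let the workload run for a while, then re-read the same word:
         * bits that are now clear correspond to frames that were accessed. */
        if (pread(fd, &word, sizeof(word), offset) == sizeof(word))
            printf("idle bits: %016llx\n", (unsigned long long)word);

        close(fd);
        return 0;
    }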
Kernel Samepage Merging

Kernel samepage merging (KSM) is a memory-saving de-duplication feature, enabled by CONFIG_KSM=y and added to the Linux kernel in 2.6.32. KSM was originally developed for use with KVM (where it was known as Kernel Shared Memory), to fit more virtual machines into physical memory by sharing the data common among them. However, it can also be useful for any application that generates many instances of the same data. The KSM daemon, ksmd, periodically scans the user memory areas registered with it, looking for pages of identical content that can be replaced by a single write-protected page (which is automatically copied if a process later wants to update its content). The number of pages that ksmd scans in a single pass, as well as the time between passes, can be configured through the sysfs interface. KSM only merges private (anonymous) pages, never file (pagecache) pages. Originally, KSM's merged pages were locked into kernel memory, but they can now be swapped out like any other user page.
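As an illustrative sketch (not from the slides), an application registers an anonymous region with KSM using madvise(2) with MADV_MERGEABLE; the region size here is arbitrary, and merging only happens if KSM is built in and ksmd is running.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 64 * 4096;  /* 64 pages, chosen arbitrarily */

        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        memset(p, 0, len);       /* identical (zero-filled) pages are good merge candidates */

        /* Register the area with KSM; ksmd may merge identical pages within it. */
        if (madvise(p, len, MADV_MERGEABLE) != 0)
            perror("madvise(MADV_MERGEABLE)");  /* fails if CONFIG_KSM is not enabled */

        /* ... run the real workload; MADV_UNMERGEABLE would unregister the area. */
        munmap(p, len);
        return 0;
    }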
Controlling KSM using madvise

An application registers an area of its address space with KSM by calling madvise(addr, length, MADV_MERGEABLE); the area can later be unregistered with MADV_UNMERGEABLE.

KSM daemon sysfs interface

The KSM daemon is controlled by sysfs files under /sys/kernel/mm/ksm/, readable by everyone but writable only by root:

pages_to_scan
Determines how many pages to scan before the ksmd daemon goes to sleep.

sleep_millisecs
Determines how many milliseconds the ksmd daemon must sleep before the next scan.

run
Set to 0 to stop the ksmd daemon from running but keep the merged pages; set to 1 to run the ksmd daemon (e.g., echo 1 > /sys/kernel/mm/ksm/run).
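As a minimal sketch (not from the slides), the same sysfs files can be written from a program running as root; the parameter values below are arbitrary examples.

    #include <stdio.h>

    /* Write a value to one of the KSM sysfs files; returns 0 on success. */
    static int write_ksm(const char *name, const char *value)
    {
        char path[128];
        snprintf(path, sizeof(path), "/sys/kernel/mm/ksm/%s", name);

        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        fputs(value, f);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        write_ksm("pages_to_scan", "100");   /* pages scanned per pass (example value) */
        write_ksm("sleep_millisecs", "20");  /* sleep between passes (example value) */
        write_ksm("run", "1");               /* start ksmd */
        return 0;
    }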
No-MMU memory support

The kernel has limited support for memory mapping under no-MMU conditions. From the userspace point of view, memory mapping is used in conjunction with the mmap() system call, the shmat() call, and the execve() system call. From the kernel's point of view, the execve() mapping is actually performed by the binfmt drivers, which call back into the mmap() routines to do the actual work. Memory mapping behaviour also involves the way ptrace(), clone(), vfork(), and fork() work. Under uClinux there is no fork(), and clone() must be supplied the CLONE_VM flag. The behaviour is similar between the MMU and no-MMU cases, but it is not identical; it is also much more limited in the latter case.
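As a minimal sketch (not from the slides), programs on no-MMU systems are typically spawned with vfork() followed by execve()-family calls, since fork() is unavailable there; the program path below is an arbitrary example.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Prepare arguments before vfork(): the child shares the parent's memory. */
        char *const argv[] = { "/bin/echo", "hello from child", NULL };  /* example program */

        pid_t pid = vfork();   /* on no-MMU, vfork()/clone(CLONE_VM) replace fork() */

        if (pid == 0) {
            execv(argv[0], argv);   /* replaces the child's image on success */
            _exit(127);             /* only reached if execv() fails */
        } else if (pid > 0) {
            int status;
            waitpid(pid, &status, 0);
        } else {
            perror("vfork");
            return 1;
        }
        return 0;
    }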
THANKS
References
Linux Memory Management – javatpoint
What is Linux? - Linux.com
Concepts overview — The Linux Kernel documentation
CIS 3210 (tripod.com)
