
Memory Storage and Management

Memory management is the act of managing computer memory. Its essential requirement is to provide ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when they are no longer needed. This is critical to the computer system.
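
From a program's point of view, this request-and-release cycle is what the standard C allocation functions expose. A minimal sketch, purely illustrative and not tied to any particular operating system:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Request a block of memory at run time. */
        size_t count = 1000;
        int *values = malloc(count * sizeof *values);
        if (values == NULL) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }

        for (size_t i = 0; i < count; i++)
            values[i] = (int)i;        /* use the memory */

        free(values);                  /* release it so it can be reused */
        return 0;
    }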

Several methods have been devised that increase the effectiveness of memory
management. Virtual memory systems separate the memory addresses used by a
process from actual physical addresses, allowing separation of processes and
increasing the effectively available amount of RAM using paging or swapping to
secondary storage. The quality of the virtual memory manager can have a big impact on
overall system performance.

Memory management also refers to a variety of methods used to store data and
programs in memory, keep track of them and reclaim the memory space when they are
no longer needed. It also includes virtual memory and memory protection techniques.

In the days of the first PCs, memory management used to be a major consideration. The PC had more confusing memory types than any computer in history, as its architecture was pushed, patched and expanded to meet the increasing demand for more capabilities. DOS, the operating system of the 1980s, was designed to address only one megabyte (1MB) of memory. Today, we take 512MB and 1GB for granted, and Windows uses up every available drop.

In the first decade of the PC, technicians had to deal with conventional memory, upper memory, high memory, extended memory and expanded memory in order to support growing applications. Countless books were written on PC memory management. There were even three-day courses on the subject. Eventually, subsequent versions of DOS, and especially Windows, added the necessary memory management functions to eliminate the manual, time-consuming tweaking and configuring of how much memory should be reserved for this and how much for that. See memory allocation, virtual memory, garbage collection, memory protection, EMS, EMM and DOS memory manager.

When an operating system manages the computer's memory, there are two broad tasks to be accomplished:

1. Each process must have enough memory in which to execute, and it can neither run into the memory space of another process nor be run into by another process.

2. The different types of memory in the system must be used properly so that each process can run most effectively.

The first task requires the operating system to set up memory boundaries for types of
software and for individual applications.

As an example, let's look at an imaginary small system with 1 megabyte (1,000 kilobytes) of RAM. During the boot process, the operating system of our imaginary computer is designed to go to the top of available memory and then "back up" far enough to meet the needs of the operating system itself. Let's say that the operating system needs 300 kilobytes to run. Now, the operating system goes to the bottom of the pool of RAM and starts building up with the various driver software required to control the hardware subsystems of the computer. In our imaginary computer, the drivers take up 200 kilobytes. So after getting the operating system completely loaded, there are 500 kilobytes remaining for application processes.
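
Under the stated figures, the split can be sketched in a few lines of C; the variable names and the bottom-up / top-down layout arithmetic below are illustrative assumptions, not a real boot sequence:

    #include <stdio.h>

    int main(void) {
        /* Figures from the imaginary system above, in kilobytes. */
        const int total_ram_kb = 1000;   /* 1 MB of RAM                        */
        const int os_kb        = 300;    /* operating system, from the top     */
        const int drivers_kb   = 200;    /* drivers, built up from the bottom  */

        /* Addresses grow from 0 at the bottom to total_ram_kb at the top. */
        int drivers_top = drivers_kb;               /* 0 .. 200 KB: drivers           */
        int os_bottom   = total_ram_kb - os_kb;     /* 700 .. 1000 KB: OS             */
        int app_space   = os_bottom - drivers_top;  /* 200 .. 700 KB: 500 KB for apps */

        printf("Application space: %d KB\n", app_space);
        return 0;
    }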

When applications begin to be loaded into memory, they are loaded in block sizes
determined by the operating system. If the block size is 2 kilobytes, then every process
that's loaded will be given a chunk of memory that's a multiple of 2 kilobytes in size.
Applications will be loaded in these fixed block sizes, with the blocks starting and ending
on boundaries established by words of 4 or 8 bytes. These blocks and boundaries help
to ensure that applications won't be loaded on top of one another's space by a poorly
calculated bit or two. With that ensured, the larger question is what to do when the 500-
kilobyte application space is filled.
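
The rounding of each request up to a whole number of blocks can be sketched in a few lines of C; the 2-kilobyte block size below simply mirrors the example above:

    #include <stdio.h>

    #define BLOCK_SIZE 2048u   /* the imaginary OS's 2 KB block size */

    /* Round a requested size up to the next multiple of the block size. */
    static size_t round_to_block(size_t request) {
        return (request + BLOCK_SIZE - 1) / BLOCK_SIZE * BLOCK_SIZE;
    }

    int main(void) {
        printf("%zu\n", round_to_block(1));      /* 2048 */
        printf("%zu\n", round_to_block(2048));   /* 2048 */
        printf("%zu\n", round_to_block(5000));   /* 6144 */
        return 0;
    }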

In most computers, it's possible to add memory beyond the original capacity. For
example, you might expand RAM from 1 to 2 gigabytes. This works fine, but can be
relatively expensive. It also ignores a fundamental fact of computing -- most of the
information that an application stores in memory is not being used at any given moment.
A processor can only access memory one location at a time, so the vast majority of
RAM is unused at any moment. Since disk space is cheap compared to RAM, moving information in RAM to the hard disk can greatly expand RAM space at no cost. This
technique is called virtual memory management.

Disk storage is only one of the memory types that must be managed by the operating
system, and it's also the slowest. Generally speaking, ranked in order of speed, the
types of memory in a computer system are:

1. High-speed cache -- This is a small amount of fast memory made available to the CPU through the fastest connections. Cache controllers predict which pieces of data the CPU will need next and pull them from main memory into the high-speed cache to speed up system performance.
2. Main memory -- This is the RAM that you see measured in megabytes when you
buy a computer.

3. Secondary memory -- This is most often some sort of rotating magnetic storage
that keeps applications and data available to be used, and serves as virtual RAM
under the control of the operating system.

The operating system must balance the needs of the various processes with the
availability of the different types of memory, moving data in blocks (called pages)
between available memory as the schedule of processes dictates.
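
As a rough sketch of how such a paged system finds data, a virtual address can be split into a page number and an offset, and the page looked up in a table. The page size, table size and table contents below are illustrative assumptions, not any particular operating system's layout:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE  4096u   /* hypothetical 4 KB pages            */
    #define PAGE_SHIFT 12      /* log2(PAGE_SIZE)                    */
    #define NUM_PAGES  16      /* tiny address space for the sketch  */

    /* Toy page table: virtual page number -> physical frame number. */
    static uint32_t page_table[NUM_PAGES] = { 3, 7, 1, 0 };

    static uint32_t translate(uint32_t virtual_addr) {
        uint32_t page   = (virtual_addr >> PAGE_SHIFT) % NUM_PAGES;  /* which page         */
        uint32_t offset = virtual_addr & (PAGE_SIZE - 1);            /* position inside it */
        uint32_t frame  = page_table[page];                          /* look up the frame  */
        return (frame << PAGE_SHIFT) | offset;                       /* physical address   */
    }

    int main(void) {
        printf("virtual 0x%x -> physical 0x%x\n", 0x1234u, translate(0x1234u));
        return 0;
    }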

Good Memory Management Will Help Speed Up Your Computer

When you want to speed up computer performance, taking care of your computer’s
memory will help you get the job done. Memory plays an essential role in computer
performance; the more memory you have available, the better your computer will
perform. (Up to a point, that is.) For the most part, you’ll run out of memory before you
reach the CPU’s processing capacity. As long as you have room to add more memory,
you should consider this approach for resolving long-term speed or performance issues.

Small Things Can Add Up

That having been said, there are a lot of things you can do to conserve the memory you
have. In some cases, simply conserving memory can make a big difference in computer
performance. In my last post, I talked about basic maintenance. If you’ve done the basic
maintenance on your computer (getting rid of viruses, throwing away old files,
defragmenting your hard disk) and you’re still not getting the performance you expect,
it’s time to look under the hood.

Looking under the hood means taking a look at what’s running. Applications, toolbars,
utility programs, screensavers, and desktop themes can all contribute to slow computer
performance. Paring down the system, getting rid of applications that are not needed,
shutting down the auto-starting applications and returning to desktop themes that
conserve, rather than waste, memory can all make a difference in terms of computer
performance.

To find out what’s running on your computer at the moment, use the Task Manager,
which you can start by pressing Ctrl+Alt+Del. Once you bring up the Task Manager, you
can look at what processes and applications are running. You can also look at the
System Configuration to find out what programs are configured to run each time you
boot the computer. If you find programs you don’t use regularly among the startup
programs, reconfigure the computer to bypass these programs on startup. You can still
run the program when you actually need to, but if you don’t need these programs at
your fingertips all the time, don’t configure them to load automatically at startup.

While you’re in the decision-making mood, look at the icons that take up residence in
the task bar at the bottom of the window. This is another good way to spot the programs
that load automatically. If you don’t need these programs in the task bar, move them
aside. Also consider uninstalling toolbars that you may have loaded, or that may have
loaded automatically when you downloaded a new application. That, by the way, is a
good indicator of spyware or adware. Getting rid of these toolbars may restore more
than you think! Use the Add/Remove Programs tool to get rid of the programs, toolbars
and applications you don’t want, and become a bit more selective about what you load
onto your computer in the future.

Whatever memory chips or other devices are installed in a computer, the operating
system and application programs must have a way to allocate, use, and eventually
release portions of memory. The goal of memory management is to use available
memory most efficiently. This can be difficult in modern operating environments where
dozens of programs may be competing for memory resources.

Early computers were generally able to run only one program at a time. These
machines didn’t have a true operating system, just a small loader program that loaded
the application program, which essentially took over control of the machine and
accessed and manipulated the memory.

Later systems offered the ability to break main memory into several fixed partitions.
While this allowed more than one program to run at the same time, it wasn’t very
flexible.

Virtual memory

From the very start, computer designers knew that main memory (RAM) is fast but
relatively expensive, while secondary forms of storage (such as hard disks) are slower
but relatively cheap. Virtual memory is a way to treat such auxiliary devices (usually
hard drives) as though they were part of main memory. The operating system allocates
some storage space (often called a swap file) on the disk. When programs allocate
more memory than is available in RAM, some of the space on the disk is used instead.

Because RAM and disk are treated as part of the same address space, the application
requesting memory doesn’t “know” that it is not getting “real” memory. Accessing the
disk is much slower than accessing main memory, so programs using this secondary
memory will run more slowly.
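
For a concrete, if simplified, illustration of treating disk storage as ordinary memory, a POSIX program can map a file into its own address space with mmap. This is only the application-level cousin of what the operating system does transparently with its swap file, and the file name below is hypothetical:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t size = 4096;
        int fd = open("backing.dat", O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return 1;
        if (ftruncate(fd, (off_t)size) != 0)   /* make the file 4 KB long */
            return 1;

        /* Map the file so it can be read and written like ordinary memory. */
        char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED)
            return 1;

        strcpy(mem, "written through ordinary memory access");   /* ends up on disk */

        munmap(mem, size);
        close(fd);
        return 0;
    }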

Virtual memory has been a practical solution since the 1960s, and it has been used
extensively on PCs running operating systems such as Microsoft Windows. However,
with prices of RAM falling drastically in the new century, there is likely to be enough
main memory on the latest systems available to run most popular applications.

Memory Allocation

Most programs request memory as needed rather than a fixed amount being allocated
as part of program compilation.

The operating system is therefore faced with the task of matching the available memory
with the amounts being requested as programs run. One simple algorithm for memory
allocation is called first fit. When a program requests memory, the operating system
looks down its list of available memory blocks and allocates memory from the first one
that’s large enough to fulfil the request. (If there is memory left over in the block after
allocation, it becomes a new block that is added to the list of free memory blocks.)
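
A minimal sketch of first fit over a free list, with hypothetical structure and function names rather than any real allocator's internals:

    #include <stddef.h>

    /* One entry in a hypothetical free list: a run of 'size' free bytes
       starting at address 'start'. */
    struct free_block {
        size_t start;
        size_t size;
        struct free_block *next;
    };

    /* First fit: take the first block that is large enough. If the block is
       bigger than the request, the leftover simply stays on the list as a
       smaller free block (a real allocator would also unlink blocks that
       shrink to zero). Returns the start address, or (size_t)-1 on failure. */
    size_t first_fit(struct free_block *list, size_t request) {
        for (struct free_block *b = list; b != NULL; b = b->next) {
            if (b->size >= request) {
                size_t addr = b->start;
                b->start += request;    /* hand out the front of the block */
                b->size  -= request;
                return addr;
            }
        }
        return (size_t)-1;              /* no block was large enough */
    }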

As a result of repeated allocations using this method, the memory space tends to
become fragmented into many leftover small blocks of memory. As with fragmentation
of files on a disk, memory fragmentation slows down access, since the hardware must
issue repeated instructions to “jump” to different parts of the memory space.

Using alternative memory allocation algorithms can reduce fragmentation. For example,
the operating system can look through the entire list and find the smallest block that is
still large enough to fulfil the allocation request.

This best fit algorithm can be efficient. While it still creates fragments from the small
leftover pieces, the fragments usually don’t amount to a significant portion of the overall
memory.
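
Best fit can be sketched in the same style, reusing the hypothetical free_block structure from the first-fit sketch above:

    /* Best fit: scan the whole free list, remember the smallest block that is
       still large enough, then allocate from it just as first fit does. */
    size_t best_fit(struct free_block *list, size_t request) {
        struct free_block *best = NULL;
        for (struct free_block *b = list; b != NULL; b = b->next) {
            if (b->size >= request && (best == NULL || b->size < best->size))
                best = b;
        }
        if (best == NULL)
            return (size_t)-1;          /* no block was large enough */
        size_t addr = best->start;
        best->start += request;
        best->size  -= request;
        return addr;
    }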

The operating system can also enforce standard block sizes, keeping a “stockpile” of
free blocks of each permitted size. When a request comes in, it is rounded to the
nearest amount that can be made from a combination of the standard sizes (much like
making change). This approach, sometimes called the buddy system, means that
programs may receive somewhat more or less memory than they want, but this is
usually not a problem.
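
The classic buddy system, in particular, rounds each request up to a power-of-two size class before splitting or merging "buddy" blocks. The rounding step alone, as a sketch:

    #include <stddef.h>

    /* Round a request up to the next power of two, as a buddy-style allocator
       would before choosing a block from the free list for that size class. */
    size_t round_to_power_of_two(size_t request) {
        size_t size = 1;
        while (size < request)
            size <<= 1;       /* 1, 2, 4, 8, ... until it covers the request */
        return size;
    }

A 5,000-byte request, for example, would be served from an 8,192-byte block, so the program receives somewhat more memory than it asked for.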

Recycling memory

In a multitasking operating system, programs should release memory when it is no
longer needed. In some programming environments memory is released automatically
when a data object is no longer valid, while in other cases memory may need to be
explicitly freed by calling the appropriate function.

Recycling is the process of recovering these freed-up memory blocks so they are
available for reallocation. To reduce fragmentation, some operating systems analyze
the free memory list and combine adjacent blocks into a single, larger block (this is
called coalescence). Operating systems that use fixed memory block sizes can do this
more quickly because they can use constants to calculate where blocks begin and end.
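
Assuming a free list kept sorted by address (and the same hypothetical free_block structure as in the allocation sketches above), coalescence can be sketched as:

    /* Coalescence: merge any block that ends exactly where the next one
       begins into a single larger block. */
    void coalesce(struct free_block *list) {
        struct free_block *b = list;
        while (b != NULL && b->next != NULL) {
            if (b->start + b->size == b->next->start) {
                struct free_block *neighbour = b->next;
                b->size += neighbour->size;    /* absorb the neighbour             */
                b->next  = neighbour->next;    /* unlink it (node cleanup omitted) */
            } else {
                b = b->next;
            }
        }
    }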
Many more sophisticated algorithms can be used to improve the speed or efficiency of
memory management. For example, the operating system may be able to receive
information that helps it determine whether the requested memory needs to be
accessed extremely quickly. In turn, the memory management system may be designed
to take advantage of a particular processor architecture.

Combining these sources of knowledge, the memory manager might decide that a particular requested memory block should be allocated from special high-speed memory. While RAM is now cheap and available in relatively large quantities even on desktop PCs, the never-ending race between hardware resources and the demands of ever-larger database and other applications guarantees that memory management will remain a
concern of operating system designers. In particular, distributed database systems
where data objects can reside on many different machines in the network require
sophisticated algorithms that take not only memory speed but also network load
and speed into consideration.

Summary

What is the Memory Management Unit in an operating system?

Memory management is the process of managing the computer's memory, which consists of primary memory and secondary memory. It allocates portions of memory to programs and software packages and frees that space again for reuse. Memory management is of critical importance to the operating system because multitasking can take place in the system, which switches the memory space from one process to another. Moreover, the concept of virtual memory is used, according to which programs are moved from main memory to secondary memory when main memory is not large enough to hold them.

Disk swapping is used, in which virtual memory separates the memory addresses seen by a program from the physical addresses. Virtual memory management is carried out in the computer system and enhances its performance; garbage collection takes care of the allocation and deallocation of resources, and is implemented in programming environments, some of which use region-based management for objects.

The concept of virtual memory is closely tied to the memory management unit, because it provides the loading mechanism that brings data from secondary memory into main memory when required. The memory management system provides many benefits, such as multitasking of programs in memory. Some of the features of the memory management unit that describe its role in the operating system are given below.

A program requires some space in computer memory for its execution, and this space is provided by the memory management unit using virtual memory, which provides external storage addresses for programs that do not have enough space in main memory. Such programs are saved in secondary memory and loaded back into main memory when required. This deallocation and reallocation of programs in main memory supports concurrency. In short, the memory management unit provides program loading together with memory addressing.

The data we use on a computer is kept on a secondary storage medium, which stores it permanently because it is non-volatile in nature. Protection of the stored data is also provided, for example by automatically detecting and repairing errors in bad tracks and sectors. As far as data security is concerned, certain programs are protected with a master password so that the data cannot be accessed without administrator confirmation, and other programs are provided with a shell that protects them from malicious content.

The data is organized in a well-defined manner that provides easier access for the user. Sharing is used, in which various processes share memory with each other through inter-process communication. Data stored in memory uses both logical and physical organization: it is divided into modules, which can lead to internal and external fragmentation of main memory. Dividing main memory into such modules for program allocation is known as segmentation.
