
Memory Management: Early Systems

Memory Manager
• The Memory Manager is in charge of main memory, also known as:
  - Random Access Memory (RAM)
  - Core Memory
  - Primary Storage

Early Memory Allocation Schemes
• Single-User Systems
• Fixed Partitions
• Dynamic Partitions
• Relocatable Dynamic Partitions

Single-User Contiguous Scheme
• Each program to be processed was loaded in its entirety into memory and allocated as much contiguous space in memory as it needed
• It doesn’t support multiprogramming or networking

Fixed Partitions
• Static partitions
• One partition for each job
• Protection of the job’s memory space
• Internal fragmentation

Dynamic Partitions
• Contiguous blocks
• Jobs are given only as much memory as they request when they are loaded

Best-Fit Versus First-Fit Allocation
• First-fit memory allocation
  - the first partition fitting the requirements
• Best-fit memory allocation
  - least wasted space; the smallest partition fitting the requirements
• Space versus speed trade-off (see the sketch below)
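To make the space-versus-speed trade-off concrete, here is a minimal Python sketch of the two policies searching a free list. The free list contents and function names are illustrative assumptions, not part of the original notes.

```python
# A minimal sketch of first-fit vs. best-fit over a free list.
# The free list of (start_address, size) blocks is hypothetical example data.

def first_fit(free_list, job_size):
    """Return the first free block large enough for the job."""
    for start, size in free_list:          # scanned in memory order
        if size >= job_size:
            return start                   # fast: stops at the first match
    return None                            # no partition fits

def best_fit(free_list, job_size):
    """Return the free block that wastes the least space."""
    candidates = [(size - job_size, start)
                  for start, size in free_list if size >= job_size]
    if not candidates:
        return None
    return min(candidates)[1]              # slower: examines every block

free_list = [(0, 300), (500, 150), (900, 600)]   # example (address, size) pairs
print(first_fit(free_list, 120))   # 0   -> first block that fits
print(best_fit(free_list, 120))    # 500 -> smallest block that fits
```

First-fit favors speed by stopping at the first usable partition; best-fit favors space at the cost of scanning the entire free list.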

Deallocation
• Fixed partition system
  - Resets the status of the memory block to “free”
• Dynamic partition system
  - Case 1: Joining Two Free Blocks
  - Case 2: Joining Three Free Blocks
  - Case 3: Deallocating an Isolated Block

Relocatable Dynamic Partitions
• The Memory Manager relocates programs to gather together all of the empty blocks and compact them to make one block of memory large enough to accommodate some or all of the jobs waiting to get in.
• The compaction of memory, sometimes referred to as garbage collection or defragmentation, is performed by the operating system to reclaim fragmented sections of the memory space.

Memory Management: Virtual Memory

Virtual Memory Methods
• Paged Memory Allocation
• Demand Paging
• Segmented Memory Allocation
• Segmented/Demand Paged Memory Allocation

Paged Memory Allocation
• Paged memory allocation is based on the concept of dividing each incoming job into pages of equal size.
• The sections of a disk are called sectors (or sometimes blocks), and the sections of main memory are called page frames.
• The pages do not have to be loaded in adjacent memory blocks. In fact, each page can be stored in any available page frame anywhere in main memory.
• The Job Table (JT) contains two values for each active job: the size of the job and the memory location where its Page Map Table is stored.
• The Page Map Table (PMT) contains the vital information for each page: the page number and its corresponding page frame memory address.
• The Memory Map Table (MMT) has one entry for each page frame, listing its location and free/busy status.
• The displacement, or offset, of a byte (that is, how far away a byte is from the beginning of its page) is the factor used to locate that byte within its page frame. A sketch of this address resolution follows.
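The following minimal Python sketch shows how a job-relative byte address splits into a page number and displacement, then resolves to a physical location through the PMT. The page size and the example PMT entries are assumed values for illustration only.

```python
# Sketch of paged address resolution. PAGE_SIZE and the example PMT
# are assumed values, not figures from the notes.

PAGE_SIZE = 100  # bytes per page (and per page frame)

# Page Map Table for one job: page number -> page frame number
pmt = {0: 5, 1: 2, 2: 7}

def resolve(byte_address):
    """Translate a job-relative byte address to a physical address."""
    page = byte_address // PAGE_SIZE          # which page the byte is on
    displacement = byte_address % PAGE_SIZE   # offset within that page
    frame = pmt[page]                         # look up the page frame
    return frame * PAGE_SIZE + displacement   # same offset within the frame

# Byte 214 of the job: page 2, displacement 14, stored in frame 7.
print(resolve(214))  # 714
```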
Demand Paging
• Demand paging introduced the concept of loading only a part of the program into memory for processing.

Page Replacement Policies and Concepts
• The first-in first-out (FIFO) page replacement policy will remove the pages that have been in memory the longest.
• The least recently used (LRU) page replacement policy swaps out the pages that show the least amount of recent activity, figuring that these pages are the least likely to be used again in the immediate future. Both policies are sketched below.
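The following Python sketch counts page faults under both policies for a fixed number of page frames. The reference string and frame count are example values chosen for illustration.

```python
# Sketch of FIFO and LRU page replacement for a fixed number of frames.
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults under first-in first-out replacement."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:          # evict the oldest resident page
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults under least-recently-used replacement."""
    memory = OrderedDict()                     # insertion order = recency order
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)           # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:          # evict the least recently used
                memory.popitem(last=False)
            memory[page] = True
    return faults

refs = [1, 2, 3, 1, 4, 2, 5, 1]   # example page reference string
print(fifo_faults(refs, 3), lru_faults(refs, 3))
```

Neither policy always wins; which one faults less depends on the reference pattern.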
Introducing Operating Systems

Operating System
• Operating system software is the chief piece of software
• The portion of the computing system that manages all of the hardware and all of the other software
• Controls who can use the system and how
• It’s the boss!

Essential Managers of OS
• Memory Manager
• Processor Manager
• Device Manager
• File Manager
• Network Manager

Manager Tasks
• Monitor its resources continuously
• Enforce the policies that determine who gets what, when, and how much
• Allocate the resource when appropriate
• Deallocate the resource when appropriate

Main Memory Management
• The Memory Manager is in charge of main memory
• It checks the validity of each request for memory space
• It allocates a portion of memory that isn’t already in use
• It deallocates memory
• It protects the space in main memory occupied by the OS itself

Processor Management
• The Processor Manager decides how to allocate the central processing unit (CPU)
• It keeps track of the status of each process
• It monitors whether the CPU is executing a process or waiting for a READ or WRITE command to finish execution
• It can be compared to a traffic controller

Processor Management: Levels of Responsibility
• To handle jobs as they enter the system: the Job Scheduler, the high-level portion of the Processor Manager, accepts or rejects the incoming jobs.
• To manage each process within those jobs: the Process Scheduler, the low-level portion of the Processor Manager, is responsible for deciding which process gets the CPU and for how long. A sketch of this two-level design follows.
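This minimal Python sketch illustrates the division of labor between the two scheduling levels. The class names, the memory-based admission rule, and the round-robin time slicing are illustrative assumptions, not details from the notes.

```python
# Sketch of the two scheduling levels: a high-level Job Scheduler that
# admits jobs and a low-level Process Scheduler that hands out CPU time.
from collections import deque

class JobScheduler:
    """High-level: accepts or rejects incoming jobs."""
    def __init__(self, free_memory):
        self.free_memory = free_memory

    def admit(self, job, ready_queue):
        if job["memory"] <= self.free_memory:   # example admission rule
            self.free_memory -= job["memory"]
            ready_queue.append(job)
            return True
        return False                            # rejected for now

class ProcessScheduler:
    """Low-level: decides which process gets the CPU and for how long."""
    def run(self, ready_queue, quantum):
        while ready_queue:
            job = ready_queue.popleft()
            job["work"] -= quantum              # job runs for one time slice
            if job["work"] > 0:
                ready_queue.append(job)         # not finished: back of the line
            else:
                print(job["name"], "done")

ready = deque()
js = JobScheduler(free_memory=100)
js.admit({"name": "A", "memory": 40, "work": 2}, ready)
js.admit({"name": "B", "memory": 50, "work": 1}, ready)
ProcessScheduler().run(ready, quantum=1)
```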
Device Management
• The Device Manager monitors every device, channel, and control unit.
• Its job is to choose the most efficient way to allocate all of the system’s devices.

File Management
• The File Manager keeps track of every file in the system, including data files, program files, compilers, and applications.
• By using predetermined access policies, it enforces restrictions on who has access to which files.
• For example, a user might have read-only access, read-and-write access, or the authority to create and delete files (illustrated below).
• Managing access control is a key part of file management.
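Here is a minimal Python sketch of a predetermined access policy check. The policy table, file names, and helper name are hypothetical examples, not from the original notes.

```python
# Sketch of a predetermined access policy: each file maps users to the
# set of operations they are permitted to perform.

POLICY = {
    "payroll.dat": {"alice": {"read"}, "bob": {"read", "write"}},
    "report.txt":  {"alice": {"read", "write", "create", "delete"}},
}

def check_access(user, filename, operation):
    """Allow the operation only if the policy grants it to this user."""
    allowed = POLICY.get(filename, {}).get(user, set())
    return operation in allowed

print(check_access("alice", "payroll.dat", "read"))   # True  (read-only)
print(check_access("alice", "payroll.dat", "write"))  # False (denied)
print(check_access("bob", "payroll.dat", "write"))    # True  (read-and-write)
```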
Network Management
• Operating systems with Internet or networking capability have a fifth essential manager called the Network Manager.
• It provides a convenient way for users to share resources while controlling users’ access to them.
• These resources include hardware and software.

User Interface
• The user interface is the portion of the operating system that users interact with directly.
• In the old days, the user interface consisted of commands typed on a keyboard and displayed on a monitor.
• Now most systems allow users to choose a menu option from a list.

History of Machine Hardware
• Mainframe
• Minicomputer
• Supercomputer
• Microcomputer
• Servers

Mainframe
• A large machine, both in physical size and in internal memory capacity
• The IBM 360 (1964) is a classic example of an early mainframe.

IBM 360 Model 30
• Required an air-conditioned room about 18 feet square
• The CPU was 5 feet high and 6 feet wide
• Internal memory of 64K
• Cost $200,000 in 1964 dollars

Minicomputer
• Developed to meet the needs of smaller institutions
• Digital Equipment Corporation (early 1970s)
• PDP-8 (less than $18,000)

Minicomputer vs. Mainframe
• Smaller in size and memory capacity
• Cheaper
• Midrange computers
Supercomputer
• Developed primarily for government applications
• Business and industry became interested in the technology when the massive computers became faster and less expensive.

Cray Supercomputer
• Six to thousands of processors
• Performs up to 2.4 trillion floating-point operations per second (2.4 teraflops)

Supercomputer Usage
• Ranges from scientific research to customer support and product development
• Often used to perform the intricate calculations required to create animated motion pictures
• Helps oil companies in their search for oil by analyzing massive amounts of data

Microcomputer
• Developed to offer inexpensive computation capability to individual users in the late 1970s
• Amount of memory: 64K
• Physically smaller than the minicomputers
• Eventually grew to accommodate software with larger capacity and greater speed

Workstation
• Powerful microcomputers developed for use by commercial, educational, and government enterprises
• Networked together
• Used to support engineering and technical users
• Perform massive mathematical computations or computer-aided design (CAD)

Servers
• Servers are powerful computers that provide specialized services to other computers on client/server networks.
• Examples include print servers, Internet servers, e-mail servers, etc.
• Each performs critical network tasks.

Types of Operating System
• Operating systems for computers large and small fall into five categories distinguished by response time and how data is entered into the system:
  - Batch systems
  - Interactive systems
  - Real-time systems
  - Hybrid systems
  - Embedded systems

Batch Systems
• Batch systems date from the earliest computers, when they relied on stacks of punched cards or reels of magnetic tape for input.
• Jobs were entered by assembling the cards into a deck and running the entire deck of cards through a card reader as a group—a batch.
• The efficiency of a batch system is measured in throughput—the number of jobs completed in a given amount of time (for example, 550 jobs per hour). A worked example follows.
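A short worked example of the throughput measure, reusing the 550-jobs-per-hour figure from the notes in Python:

```python
# Throughput = jobs completed / elapsed time.
jobs_completed = 550
hours = 1.0

throughput_per_hour = jobs_completed / hours
print(throughput_per_hour)        # 550.0 jobs per hour
print(throughput_per_hour / 60)   # ~9.17 jobs per minute, same batch
```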
Interactive Systems
• Give a faster turnaround than batch systems but are slower than real-time systems
• Introduced to satisfy the demands of users who needed fast turnaround when debugging their programs
• Required the development of time-sharing software
• Provide immediate feedback to the user; response time can be measured in fractions of a second

Real-time Systems
• Used in time-critical environments where reliability is key and data must be processed within a strict time limit.
• The time limit need not be ultra-fast (though it often is), but system response time must meet the deadline or risk significant consequences.
• These systems also need to provide contingencies to fail gracefully—that is, preserve as much of the system’s capabilities and data as possible to facilitate recovery.

Two Types of Real-time Systems
• Hard real-time systems risk total system failure if the predicted time deadline is missed.
• Soft real-time systems suffer performance degradation, but not total system failure, as a consequence of a missed deadline. The sketch below contrasts the two.
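This minimal Python sketch contrasts the two reactions to a missed deadline. The timing values and the choice of exception are illustrative assumptions.

```python
# Sketch contrasting hard and soft real-time handling of a missed deadline.

def finish_task(elapsed, deadline, hard=True):
    """Report the outcome of a task that took `elapsed` seconds."""
    if elapsed <= deadline:
        return "met deadline"
    if hard:
        # Hard real-time: a missed deadline is treated as total failure.
        raise SystemError("deadline missed: system failure")
    # Soft real-time: the result is late but still usable (degraded).
    return "missed deadline: degraded performance"

print(finish_task(0.8, 1.0))               # met deadline
print(finish_task(1.2, 1.0, hard=False))   # degraded, but still running
finish_task(1.2, 1.0, hard=True)           # raises SystemError
```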

Embedded Systems
• Computers placed inside other products to add features and capabilities.
• For example, you find embedded computers in household appliances, automobiles, digital music players, elevators, and pacemakers.
• Each one is designed to perform a set of specific programs, which are not interchangeable among systems.
• This permits the designers to make the operating system more efficient and take advantage of the computer’s limited resources, such as memory, to their maximum.

Hybrid Systems
• Combination of batch and interactive
• Users can access the system and get fast responses
• Accept and run batch programs in the background when the interactive load is light
• Take advantage of the free time between high-demand and low-demand usage of the system
• Many large computer systems are hybrids.

Operating System Development

1940
• The time of vacuum tube technology
• Computers were the size of classrooms
• Little need for standard operating system software; operators were familiar with the idiosyncrasies of their hardware
• Compilers and assemblers

1950
• Systems were developed to meet the needs of new markets—government and business researchers
• Cost effectiveness of the system mattered; computers were still very expensive
• Two improvements were widely adopted:
  - Computer operators were hired to facilitate each machine’s operation
  - Job scheduling was instituted
• Job scheduling introduced the need for control cards, which defined the exact nature of each program and its requirements.
• This was one of the first uses of a job control language
• The speed of I/O devices increased:
  - Blocking
  - Buffering
  - Spooling

1960
• Passive multiprogramming
  - The operating system didn’t control the interrupts but waited for each job to end an execution sequence.
• Active multiprogramming
  - Allowed each program to use only a preset slice of CPU time.

1970
• Programmers soon became more removed from the intricacies of the computer
• Application programs started using English-like words, modular structures, and standard operations.

1980
• Improved cost/performance ratio of computer components
• Hardware was more flexible, with logical functions built on easily replaceable circuit boards
• Firmware, a word used to indicate that a program is permanently held in read-only memory (ROM)
• Multiprocessing (having more than one processor); more complex languages were designed to coordinate the activities of the multiple processors servicing a single job
• With network operating systems, users generally became aware of the existence of many networked resources, could log in to remote locations, and could manipulate files on networked computers distributed over a wide geographical area

1990
• The overwhelming demand for Internet capability sparked the proliferation of networking capability
• The World Wide Web, by Tim Berners-Lee, made the Internet accessible to computer users worldwide
• Introduced a proliferation of multimedia applications demanding additional power, flexibility, and device compatibility for most operating systems

2000
• Virtual machines: multiple operating systems running at the same time and sharing resources
• Virtualization is the creation of partitions on a single server, with each partition supporting a different operating system.
• A thread (or lightweight process) can be defined as a unit smaller than a process, which can be scheduled and executed (see the sketch below).
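A minimal Python illustration of threads as schedulable units smaller than a process: one process here runs two threads concurrently. The worker function and thread names are made up for the example.

```python
# One process, two threads: each thread is a unit smaller than the
# process that the OS can schedule and execute independently.
import threading

def worker(name):
    print(f"thread {name} scheduled and executed")

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()    # each thread is scheduled independently by the OS
for t in threads:
    t.join()     # the enclosing process waits for both to finish
```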
Object-oriented design
• An important area of research that resulted in substantial efficiencies was that of the system architecture of operating systems: the way their components are programmed and organized, specifically the use of object-oriented design and the reorganization of the operating system’s nucleus, the kernel.
• The kernel is the part of the operating system that resides in memory at all times, performs the most essential operating system tasks, and is protected by hardware from user tampering.
