
SUMMARY (OPERATING SYSTEM)

CHAPTER 1: Introduction to Operating System


1- An operating system is a program that manages a computer's hardware.

2- A computer system can be divided roughly into four components: the hardware, the
operating system, the application programs, and the users.

3- The user viewpoint focuses on how the user interacts with the operating system through
various application programs. In contrast, the system viewpoint focuses on how the hardware
interacts with the operating system to complete various tasks.

4- The memory hierarchy is: registers, cache memory, main memory, electronic disk, magnetic
disk, optical disk, and magnetic tape.

5- Multiprocessor systems (also called parallel systems or multicore systems) have three main
advantages:

- Increased throughput - Economy of scale - Increased reliability

6- The multiple-processor systems in use today are of two types:

a- Asymmetric multiprocessing, in which each processor is assigned a specific task.

b- SMP (symmetric multiprocessing), in which multiple processors share a common operating
system (OS) and memory.

7- Evolution of operating systems:

a- Serial processing b- Simple batch systems c- Multiprogrammed batch systems

CHAPTER 2: Operating System Structures


1- OS Structures:

a- Simple Structure b- Layered Approach c- Microkernel System Structure

2- Simple Structure: these systems do not have a well-defined structure; such operating systems
begin as small, simple, and limited systems and then grow beyond their original scope.

e.g. MS-DOS

3- Layered Approach: the OS is broken into a number of layers (levels), each built on top of
lower layers.

- The main advantage of the layered approach is modularity

- e.g. UNIX OS
Layer: Function
5: User programs
4: I/O management
3: Operator-process communication
2: Memory management
1: CPU scheduling
0: Hardware

4- Microkernel System Structure: This method structures the operating system by removing all
nonessential components from the kernel and implementing them as system and user-level
programs.

- The main function of the microkernel is to provide a communication facility between the client
program and the various services that are also running in user space.

CHAPTER 3: Processes
1- A process is an instance of a program in execution.

2- A process in memory consists of multiple parts:

- Program code / text section - Current activity, including the program counter and registers

- Stack containing temporary data - Data section containing global variables

- Heap containing memory dynamically allocated during run time

3- Process States:

New: The process is being created.

Ready: The process is waiting to be assigned to a processor.

Running: Instructions are being executed.

Waiting: The process is waiting for some event to occur.

Terminated: The process has finished execution.
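
For illustration only, these five states could be written as a small C enumeration (the names
below are hypothetical, not taken from any real kernel):

    /* Illustrative sketch: the five textbook process states. */
    enum proc_state {
        STATE_NEW,        /* the process is being created */
        STATE_READY,      /* waiting to be assigned to a processor */
        STATE_RUNNING,    /* instructions are being executed */
        STATE_WAITING,    /* waiting for some event to occur */
        STATE_TERMINATED  /* the process has finished execution */
    };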

4- Scheduling is a fundamental function of the OS. When a computer is multiprogrammed, it has
multiple processes competing for the CPU at the same time.

5- Scheduler: a process migrates among the various scheduling queues throughout its lifetime;
the OS selects processes from these queues for scheduling purposes.

6- Types of schedulers:

a- Long-term b- Short-term c- Medium-term


7- The long-term scheduler selects processes from the disk and loads them into memory for
execution.

8- The short-term scheduler selects among the processes that are ready to execute and allocates
the CPU to one of them.

9- Some operating systems introduce an additional, intermediate level of scheduling known as
the medium-term scheduler.

10- When CPU switches to another process, the system must save the state of the old process
and load the saved state for the new process via a context switch.

11- A process control block contains many pieces of information associated with a specific
process. It includes:

- Process state - Program counter - CPU registers

- CPU scheduling information - Memory management information

- Accounting information - I/O Status Information
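
A rough, hypothetical C sketch of such a PCB (the field names are illustrative only; real kernels
keep far more state, e.g. Linux's task_struct):

    /* Hypothetical process control block (PCB) layout, for illustration only. */
    struct pcb {
        int            state;            /* new, ready, running, waiting, terminated */
        unsigned long  program_counter;  /* address of the next instruction to execute */
        unsigned long  registers[16];    /* saved CPU register contents */
        int            priority;         /* CPU-scheduling information */
        void          *page_table;       /* memory-management information */
        unsigned long  cpu_time_used;    /* accounting information */
        int            open_files[16];   /* I/O status information */
    };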

12- Operations on processes:

a- Process creation b- Process Termination

13- process creation: A process may create several new processes, via a create-process system
call, during the course of execution. The creating process is called a parent process, and the
new processes are called the children of that process.

In more detail:

When a process creates a new process, two possibilities exist in terms of execution:

The parent continues to execute concurrently with its children.

The parent waits until some or all of its children have terminated.

There are also two possibilities in terms of the address space of the new process:

The child process is a duplicate of the parent process (it has the same program and data as
the parent), or the child process has a new program loaded into it (both cases appear in the
sketch after point 14).

14- Process Termination: A process terminates when it finishes executing its final statement
and asks the operating system to delete it by using the exit() system call.
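
Assuming a POSIX system, a minimal sketch of points 13 and 14 using fork(), execlp(),
waitpid(), and exit() (the program run by the child, "ls", is an arbitrary example):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();                          /* create a child process */
        if (pid < 0) {
            perror("fork");                          /* creation failed */
            return 1;
        } else if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);  /* child: load a new program */
            perror("execlp");                        /* reached only if exec fails */
            exit(1);
        } else {
            int status;
            waitpid(pid, &status, 0);                /* parent waits for the child to terminate */
            printf("child %d terminated\n", (int)pid);
        }
        return 0;                                    /* parent terminates normally */
    }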

15- Interprocess Communication (IPC):

• Processes executing concurrently in the operating system may be either independent
processes or cooperating processes.

• There are several reasons for providing an environment that allows process cooperation:
A- Information sharing. B- Computation speedup. C- Modularity. D- Convenience.

• Cooperating processes require an interprocess communication (IPC) mechanism that will
allow them to exchange data and information.

• There are two fundamental models of interprocess communication:
1) shared memory 2) message passing.

16- Shared Memory: Interprocess communication using shared memory requires the
communicating processes to establish a region of shared memory.
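
As a sketch, assuming POSIX shared memory (shm_open/mmap), a writer process could establish
and fill such a region as follows (the name "/demo_shm" and the size are arbitrary choices for this
example; compile with -lrt on some systems):

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        const char *name = "/demo_shm";
        const size_t size = 4096;

        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);     /* create the shared object */
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }   /* set its size */

        char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);                 /* map it into this process */
        if (ptr == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(ptr, "hello from the writer");                /* write into the shared region */

        /* A reader process would shm_open() the same name, mmap() it the same way, and
           read the string; shm_unlink(name) removes the object when no longer needed. */
        return 0;
    }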

17- Two types of buffers can be used:

• Unbounded buffer: places no practical limit on the size of the buffer.

• Bounded buffer: assumes a fixed buffer size.
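
For illustration, a bounded buffer that producer and consumer processes could place in shared
memory is often laid out like this (BUFFER_SIZE is an arbitrary choice):

    #define BUFFER_SIZE 10

    /* Circular bounded buffer shared by a producer and a consumer process. */
    struct bounded_buffer {
        int items[BUFFER_SIZE];
        int in;    /* index of the next free slot (advanced by the producer) */
        int out;   /* index of the next full slot (advanced by the consumer) */
    };
    /* The buffer is empty when in == out and full when (in + 1) % BUFFER_SIZE == out,
       so it holds at most BUFFER_SIZE - 1 items at a time. */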

18- Message Passing:

• Another way to achieve the same effect is for the operating system to provide the means
for cooperating processes to communicate with each other via a message-passing facility
(a small pipe-based sketch follows this point).

• A message-passing facility provides at least two operations:
a- send(message) b- receive(message)

• There are several methods for logically implementing a link and the send()/receive()
operations:
o Direct or indirect communication
o Synchronous or asynchronous communication
o Automatic or explicit buffering
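
As a sketch, an ordinary POSIX pipe can play the role of such a facility between a parent and a
child process: write() acts as send(message) and read() as receive(message), with the kernel
buffering the data in between.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) < 0) { perror("pipe"); return 1; }   /* fd[0] = read end, fd[1] = write end */

        pid_t pid = fork();
        if (pid == 0) {                                   /* child: receive the message */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) { buf[n] = '\0'; printf("received: %s\n", buf); }
            return 0;
        }
        close(fd[0]);                                     /* parent: send the message */
        const char *msg = "hello via message passing";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        waitpid(pid, NULL, 0);
        return 0;
    }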

19- IPC – Message Passing

• Processes communicate with each other without resorting to shared variables

• IPC facility provides two operations:

a- send (message) b- receive(message)

• The message size is either fixed or variable

20- Implementation of the communication link:

• Physical:
a- Shared memory b- Hardware bus c- Network

• Logical:
a- Direct or indirect b- Synchronous or asynchronous c- Automatic or explicit buffering


21- Direct Communication:

• Processes must name each other explicitly:

a- send(P, message) – send a message to process P
b- receive(Q, message) – receive a message from process Q

22- Indirect Communication:

• Messages are sent to and received from mailboxes (also referred to as ports)
a- Each mailbox has a unique id
b- Processes can communicate only if they share a mailbox
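
As a sketch of a mailbox, assuming POSIX message queues, any process that opens the same
queue name can deposit or remove messages (the name "/demo_mailbox" is an arbitrary example;
on Linux, link with -lrt):

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <mqueue.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

        /* Create (or open) the mailbox; other processes use the same name. */
        mqd_t mq = mq_open("/demo_mailbox", O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        /* send(A, message): deposit a message in mailbox A */
        mq_send(mq, "hello mailbox", 14, 0);

        /* receive(A, message): remove a message from mailbox A */
        char buf[64];
        ssize_t n = mq_receive(mq, buf, sizeof(buf), NULL);
        if (n >= 0) printf("received: %.*s\n", (int)n, buf);

        mq_close(mq);
        mq_unlink("/demo_mailbox");   /* remove the mailbox */
        return 0;
    }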

23- Synchronization:

• Message passing may be either blocking or non-blocking

• Blocking is considered synchronous
a- Blocking send -- the sender is blocked until the message is received
b- Blocking receive -- the receiver is blocked until a message is available

• Non-blocking is considered asynchronous
a- Non-blocking send -- the sender sends the message and continues
b- Non-blocking receive -- the receiver receives either a valid message or a null message

• Different combinations are possible

• If both send and receive are blocking, we have a rendezvous

24- Buffering:

• Queue of messages attached to the link.

• Implemented in one of three ways:

1. Zero capacity – no messages are queued on a link
- Sender must wait for receiver (rendezvous)

2. Bounded capacity – finite length of n messages
- Sender must wait if the link is full

3. Unbounded capacity – infinite length
- Sender never waits
