
COURSE NAME: COMPUTER ARCHITECTURE & ORGANIZATION


PROGRAM: BACHELOR OF INFORMATION TECHNOLOGY

SECTION A MEMORY SYSTEMS

QUESTION ONE

a) Memory hierarchy is a structure that uses multiple levels of memory with different
speeds and sizes. It takes advantage of locality and of the cost/performance trade-offs of
memory technologies. The memory hierarchy consists of all the storage devices available in a
computer system, ranging from the slow but high-capacity auxiliary memory to the relatively
faster main memory.

The following diagram illustrates the components in a typical memory hierarchy.

The main memory in the diagram is often referred to as Random Access Memory. This
memory unit communicates with the CPU directly, and with the auxiliary memory through
the input/output processor, as shown above. The auxiliary memory in a computer system is
the lowest-cost, slowest-access and highest-capacity storage. It provides storage space for
data and programs that are not in immediate use; examples include magnetic tapes and
magnetic disks. The I/O processor manages the transfer of data between the auxiliary
memory and the main memory. Cache memory stores the data that the processor (CPU) uses
most frequently. It is the fastest level in the memory hierarchy and approaches the speed of
the CPU components. Magnetic disks are disks coated with magnetized material on their
surfaces. Magnetic tapes are coated with a magnetic plastic strip and used as a recording
medium.

b) i. Temporal locality: Also known as locality in time, this states that if a data location is
referenced, it is likely to be referenced again soon.

ii. Spatial locality: Also known as locality in space, this states that if a data location is
referenced, data locations with nearby addresses are likely to be referenced soon.

iii. Hit rate: The fraction of memory accesses found in the upper level of the memory
hierarchy. It is often used as a measure of the performance of the hierarchy.

iv. Miss rate: The fraction of memory accesses not found in the upper level of the memory
hierarchy, i.e. 1 minus the hit rate.
v. Miss penalty: The time required to replace a block in the upper level with the
corresponding block from the lower level, plus the time to deliver the block to the processor.
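
These quantities combine into the average memory access time: AMAT = hit time + miss rate x miss penalty. A minimal sketch of the calculation, with all three figures assumed purely for illustration (they are not given in the question):

# Sketch: average memory access time (AMAT) from the terms defined above.
# All three figures below are assumptions chosen for illustration.
hit_time = 1        # cycles to access the upper level (e.g. the cache)
miss_rate = 0.05    # fraction of accesses missing in the upper level
miss_penalty = 100  # cycles to fetch the block from the lower level

hit_rate = 1 - miss_rate                    # hit rate + miss rate = 1
amat = hit_time + miss_rate * miss_penalty  # 1 + 0.05 * 100
print(f"AMAT = {amat} cycles")              # AMAT = 6.0 cycles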

QUESTION TWO
a) Static memories: memories whose contents remain fixed until they are written to or until
the power is turned off.
b) Memory cell diagrams to describe Static RAM and Dynamic RAM.
i. SRAM Memory Cell Diagram

Static RAM is random access memory that retains its data bits as long as power is being
supplied. It does not have to be periodically refreshed, provides faster access to data, and is
more expensive than DRAM.

ii. DRAM Memory Cell Diagram

Dynamic RAM stores each bit as a charge on a capacitor, so it must be periodically refreshed
to retain its data. It is slower than SRAM but denser and cheaper per bit.

c) Virtual memory is a concept that allows the main memory to act as a cache for the
secondary memory. It allows efficient and safe sharing of memory among multiple
programs. The purpose of virtual memory is to use the hard disk as an extension of RAM,
thus increasing the available address space a process can use. This area on the hard drive is
called the page file. The common way to implement virtual memory is paging, a method in
which main memory is divided into fixed-size blocks and virtual memory is divided into
blocks of the same size. Some frequently used terms for virtual memory are:

Virtual address – the logical or program address that the process uses. Whenever the CPU
generates an address, it is always in terms of the virtual address space.
Physical address – the real address in physical memory.
Mapping – the mechanism by which virtual addresses are translated into physical ones (very
similar to cache mapping).
Page frames – the equal-size chunks or blocks into which main memory is divided.
Pages – the chunks or blocks into which virtual memory is divided, each equal in size to a
page frame. Virtual pages are stored on disk until needed.
Paging – the process of copying a virtual page from disk to a page frame in main memory.
Fragmentation – memory that becomes unusable.
Page fault – an event that occurs when a requested page is not in main memory and must be
copied into memory from disk.
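
A minimal sketch of the mapping described above, assuming a 4 KiB page size and an invented page table purely for illustration:

# Sketch: translating a virtual address to a physical address via a page table.
# The page size and the page-table contents are assumptions for illustration.
PAGE_SIZE = 4096  # bytes per page, equal to the page-frame size

# virtual page number -> physical frame number; None models a page on disk
page_table = {0: 5, 1: 2, 2: None, 3: 7}

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        raise LookupError("page fault: copy the page in from disk first")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1, offset 0x234 -> frame 2 -> 0x2234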

QUESTION THREE
a) Tracks: concentric circles that divide the disk surface; they make up the surface of the
magnetic disk.
b) Sectors: the divisions that make up a track on the magnetic disk; each sector holds the
smallest amount of information that can be read from or written to the disk.
c) Seek: the process of positioning the read/write head over the proper track on the disk.
d) Rotational latency: the time required for the desired sector of a disk to rotate under the
read/write head.

Rotational latency (for a disk rotating at 7400 revolutions per minute):

Time per rotation = 60/7400 = 0.0081 seconds
Average rotational latency = 0.5 × 0.0081 = 0.0041 seconds (about 4.1 ms, half a rotation
on average)
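
The same arithmetic as a short sketch:

# Sketch: rotational latency for a disk spinning at 7400 RPM.
rpm = 7400
time_per_rotation = 60 / rpm           # 0.0081 s for one full rotation
avg_latency = 0.5 * time_per_rotation  # on average, half a rotation: 0.0041 s
print(f"{time_per_rotation:.4f} s per rotation, {avg_latency:.4f} s average")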

SECTION B BUS STRUCTURE AND I/O ORGANIZATION

QUESTION ONE
a) The Input/Output Subsystem provides an interface for communication between devices in a
computer system. It also transfers data between primary memory and various I/O peripherals.
b) i. Address Bus: Identifies the source and destination of data, determines the maximum
memory capacity of the system, and transports the memory addresses which the processor
wants to access in order to read or write data (see the sketch after this list).
ii. Control Bus: The control bus is the pathway for all timing and control signals sent by the
control unit to other parts of the system. It controls the use of the data and address lines, and
is said to be a bidirectional bus, as it also transmits response signals from the hardware.
iii. Data Bus: The data bus transfers data from one device to another. Data is passed in either
a parallel or a serial manner: a parallel bus normally passes a multiple of 8 bits at a time,
while a serial bus passes one bit at a time.
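
A short sketch of how the width of the address bus fixes the maximum memory capacity; the 32-line width below is an assumed example:

# Sketch: an address bus with n lines can select 2**n distinct locations.
# The 32-line width is an assumption chosen for illustration.
address_lines = 32
capacity = 2 ** address_lines  # byte-addressable locations
print(f"{address_lines} address lines -> {capacity / 2**30:.0f} GiB")  # 4 GiB
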
c) i. Memory write and memory read
Memory write stores data into the memory: the CPU loads into the memory data register the
word that has to be written into memory. Memory read retrieves data from the memory: the
address of the data to be read is stored in the MAR (Memory Address Register), and the CPU
issues a read signal.
ii. I/O write and I/O read
I/O write causes the I/O module to take an item of data (byte or word) from the data bus and
subsequently transmit that data item to the peripheral. I/O read causes the I/O module to
obtain an item of data from the peripheral and place it in an internal buffer; the processor can
then obtain the data item by requesting that the I/O module place it on the data bus.

QUESTION TWO
a) Bus arbitration is the process by which the next device to become the bus master is
selected and bus mastership is transferred to it. Bus arbitration is of two types: centralized
and distributed. In centralized arbitration a single bus arbiter performs the required
arbitration; it can be either the processor or a separate DMA controller. In distributed
arbitration, all devices participate in the selection of the next bus master.
b) i. Programmed I/O
Programmed I/O is the result of I/O instructions written in the program: each data transfer is
initiated by an instruction in the program (between CPU registers and peripherals). In this
method the CPU stays in a program loop until the I/O device indicates that it is ready for the
data transfer (a busy-wait loop like the sketch after this list). With programmed I/O, data is
exchanged between the processor and the I/O module. The processor executes a program that
gives it direct control of the I/O operation, including sensing device status, sending a read or
write command, and transferring the data.
ii. Interrupt-initiated I/O lets the processor issue an I/O command and continue with other
work; the I/O module interrupts the processor when it is ready to exchange data, so the
processor does not have to poll the device while waiting.
iii. DMA channels are used to communicate data between a peripheral device and the system
memory. All four system resources rely on certain lines on a bus: some lines are used for
IRQs, some for addresses (the I/O addresses and the memory addresses) and some for DMA
channels. A DMA channel enables a device to transfer data without imposing a work
overload on the CPU. Without DMA channels, the CPU copies every piece of data from the
I/O device using a peripheral bus; this occupies the CPU during the read/write process and
does not allow other work to be performed until the operation is completed.
iv. An Input/Output Processor (IOP) is a processor with direct memory access capability; the
computer system is divided into a memory unit and a number of processors. Each IOP
controls and manages the input/output tasks. The IOP can fetch and execute its own
instructions, which are designed to manage I/O transfers only.
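
A minimal sketch of the busy-wait loop that programmed I/O implies; the device object here is hypothetical, standing in for a status/data register pair:

# Sketch: programmed I/O keeps the CPU in a polling loop.
# `device` is a hypothetical object modelling a status/data register pair.
def programmed_read(device):
    while not device.ready:  # the CPU does no useful work while it polls
        pass
    return device.data       # transfer one item once the device is ready

# With interrupt-initiated I/O the loop disappears: the CPU continues with
# other work and the device raises an interrupt when it is ready; a handler
# then performs the same transfer. With DMA, even the per-item transfers
# are off-loaded from the CPU.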

SECTION C CPU ORGANIZATION

QUESTION ONE
a) Data register
The data register holds the data being transferred to or from the memory location that the
CPU is currently reading or writing.

b) Accumulator
The accumulator holds the initial data to be operated upon, the intermediate results, and the
final result of an operation.
c) Temporary register
Used to hold intermediate results during processing.
d) Memory Address Register
Stores the address of the memory location that the CPU reads from or writes to.
e) Program Counter
Used to store the address of the next instruction to be fetched for execution.
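
A minimal sketch of these registers cooperating in one instruction fetch; the memory contents are invented for illustration:

# Sketch: one fetch step using the registers described above.
# The memory contents are assumptions chosen for illustration.
memory = {0: "LOAD 4", 1: "ADD 5", 2: "STORE 6"}

pc = 0             # Program Counter: address of the next instruction
mar = pc           # Memory Address Register receives that address
mdr = memory[mar]  # the data register receives the word read from memory
pc += 1            # the PC now points at the following instruction
print(mdr)         # "LOAD 4" is ready to be decoded and executed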

SECTION D OPERATING SYSTEMS

QUESTION ONE
a) The three-state process model is designed to overcome the two-state process model by
introducing a third state called BLOCKED, which describes any process waiting for an I/O
event to take place. The three-state process model comprises three states or stages of
execution: RUNNING, READY and BLOCKED. The RUNNING state is where the process
is currently being executed; the READY state is where processes have queued up and are
prepared to execute when given the opportunity; and the BLOCKED state is where a process
is kept and cannot execute until some event occurs, such as the completion of an input/output
operation. Therefore, processes entering the system must first go into the READY state, then
into the RUNNING state, and a process only leaves the system from the RUNNING state.
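
The legal transitions of the model can be captured in a small sketch (the RUNNING-to-READY timeout transition is the standard preemption case, assumed here rather than stated above):

# Sketch: the transitions allowed by the three-state process model.
transitions = {
    ("READY", "RUNNING"):   "dispatch: the scheduler hands the process the CPU",
    ("RUNNING", "READY"):   "timeout: the process is preempted",
    ("RUNNING", "BLOCKED"): "event wait: e.g. the process starts an I/O request",
    ("BLOCKED", "READY"):   "event occurs: e.g. the I/O request completes",
}

def move(state, new_state):
    if (state, new_state) not in transitions:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = move("READY", "RUNNING")  # a new process enters via READY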

b) A process is the unit of work in the system; among the attributes of a process are its parent
and children. A thread, also known as a light-weight process, is a subdivision of work within
a process. It is an independent sequence of execution within the context of the parent process.
A process may spawn threads which execute independently of the main process but are
managed by the parent process and share the same memory space. This mechanism can
perform tasks similar to processes but is much cheaper in terms of system overhead.
c) A race condition is a situation that arises when the result of multiple threads executing in
the critical section differs according to the order in which the threads execute. The critical
section is a code segment in which shared variables can be accessed.

In the above diagram, the entry section handles entry into the critical section: it acquires the
resources needed for execution by the process. The exit section handles the exit from the
critical section: it releases the resources and also informs the other processes that the critical
section is free.
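
A minimal sketch of a race condition on a shared variable, with a lock playing the role of the entry and exit sections:

# Sketch: a race condition on a shared counter, prevented by a lock.
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:        # entry section: acquire the shared resource
            counter += 1  # critical section: access the shared variable
        # exit section: leaving the "with" block releases the lock

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # always 400000; without the lock, updates can be lost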

QUESTION TWO

a) Deadlock is a situation in which two or more processes each need a resource held by
another process in order to complete their execution. A process is said to be in deadlock if it
is waiting for an event which will never occur. Starvation is a situation which occurs when a
process requires a resource for execution that is never allotted to it.

DEADLOCK                                          STARVATION
No process proceeds; all are blocked.             High-priority processes are given the resource.
Occurs when a requested resource is busy          Low-priority processes stay blocked while
with another process.                             high-priority processes proceed.
Also known as circular wait.                      Also known as livelock.

b) Mutual exclusion: only one process can use a resource at a time. If a resource can be
shared simultaneously by all processes, it cannot cause a deadlock.
Resource holding: processes hold some resources while requesting more. If a process
requests all the resources it needs at one time, it cannot be involved in a deadlock. Deadlock
is characterized by a process holding one resource while requesting, and having to wait for,
another; if all resources are obtained simultaneously, there is no wait and hence no deadlock.
No preemption: resources cannot be forcibly removed from a process. If one process could
remove resources from another, it would not need to wait for the resource and hence a
deadlock could not occur.
Circular wait: a closed circle of processes exists, where each process requires a resource
held by the next.
The general strategies for dealing with deadlocks are listed below and briefly described.
1. Deadlock prevention: prevention implies that we arrange matters so that one or more of
the deadlock conditions does not hold within the allocation policy of the system. Since there
are four conditions, one might expect this to be a realistic prospect; the conditions have been
explained above.
2. Deadlock avoidance: this adopts a different philosophy. It tries to predict the possibility
of deadlock dynamically as each resource request is made: if process A requests a resource
held by process B, then make sure that process B is not waiting for a resource held by
process A.
3. Deadlock detection: this strategy accepts the risk of a deadlock occurring and
periodically executes a procedure to detect any deadlock in place. This is done by detecting a
circular-wait situation within the resource allocation and request information maintained by
the system.
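
A minimal sketch of prevention in practice: imposing a single global ordering on lock acquisition breaks the circular-wait condition:

# Sketch: preventing deadlock by breaking circular wait with a fixed lock order.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def use_both(first, second):
    # Acquire locks in one global order (here, by object id), so two threads
    # can never each hold one lock while waiting for the other.
    ordered = sorted((first, second), key=id)
    for lock in ordered:
        lock.acquire()
    try:
        pass  # ... work with both resources ...
    finally:
        for lock in reversed(ordered):
            lock.release()

t1 = threading.Thread(target=use_both, args=(lock_a, lock_b))
t2 = threading.Thread(target=use_both, args=(lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()  # completes: no circular wait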

QUESTION THREE
a)

INTER-PROCESS COMMUNICATION                       SHARED MEMORY
A general platform for communication              A specific technique of inter-process
between processes.                                communication.
Consists of many different techniques.            Consists of system calls.
Combines many techniques to achieve               Combines calls to a shared memory region
communication.                                    to achieve communication.

b) Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a
particular strategy.

QUESTION FOUR

CPU scheduling is a process which allows one process to use the CPU while the execution of
another process is on hold (in the waiting state) due to the unavailability of some resource
such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the
system efficient, fast and fair.

Whenever the CPU becomes idle, the operating system must select one of the processes in
the ready queue to be executed. The selection process is carried out by the short-term
scheduler (or CPU scheduler). The scheduler selects from among the processes in memory
that are ready to execute, and allocates the CPU to one of them.

Long-Term Scheduler: The job scheduler or long-term scheduler selects processes from the
storage pool in the secondary memory and loads them into the ready queue in the main
memory for execution. The long-term scheduler controls the degree of multiprogramming. It
must select a careful mixture of I/O-bound and CPU-bound processes to yield optimum
system throughput: if it selects too many CPU-bound processes then the I/O devices are idle,
and if it selects too many I/O-bound processes then the processor has nothing to do. The job
of the long-term scheduler is very important and directly affects the system for a long time.

Short-Term Scheduler: The short-term scheduler selects one of the processes from the ready
queue and schedules it for execution. A scheduling algorithm is used to decide which process
will be scheduled for execution next. The short-term scheduler executes much more
frequently than the long-term scheduler, as a process may execute only for a few
milliseconds. The choices of the short-term scheduler are very important: if it selects a
process with a long burst time, then all the processes after that will have to wait for a long
time in the ready queue. This is known as starvation, and it may happen if a wrong decision
is made by the short-term scheduler.

Medium-Term Scheduler: The medium-term scheduler swaps a process out of main memory
and can later swap it back in, resuming from the point at which it stopped executing. This
can also be called suspending and resuming the process, and is helpful in reducing the degree
of multiprogramming.

QUESTION FIVE

a)
FIRST COME FIRST SERVED ALGORITHM
This is a non-preemptive scheduling algorithm (once the CPU has been allocated to a
process, that process keeps the CPU until it releases it, either by terminating or by requesting
I/O) which follows the first-in, first-out (FIFO) policy. The algorithm uses the concept of a
queue where processes are served in the order of their arrival. As each process becomes
ready, it joins the ready queue, and its PCB is linked onto the tail of the queue. When the
current running process finishes execution, the oldest process in the ready queue is selected
to run next: the CPU is allocated to the process at the head of the queue, and the running
process is removed from the queue. The average waiting time under the FCFS policy,
however, is often quite long.

SHORTEST JOB FIRST

This scheduling algorithm favors processes with the shortest expected execution time. As each process
becomes ready, it joins the ready queue. When the current running process finishes execution, the
process in the ready queue with the shortest expected execution time is selected to run next. The
shortest-job-first (SJF) algorithm associates with each process the length of its next CPU burst. When
the CPU is available, it is assigned to the process that has the smallest next CPU burst; if two processes
have next CPU bursts of the same length, FCFS scheduling is used to break the tie. The SJF scheduling
algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of
processes: moving a short process before a long one decreases the waiting time of the short process
more than it increases the waiting time of the long process, so the average waiting time decreases.
ROUND ROBIN

The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is
similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of
time, called a time quantum (or time slice), is defined; a time quantum is generally from 10 to 100
milliseconds. The ready queue is treated as a circular FIFO queue of processes: new processes are
added to the tail of the ready queue, and the CPU scheduler goes around the queue, allocating the CPU
to each process for an interval of up to 1 time quantum. The scheduler picks the first process from the
ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process. One of two
things will then happen. The process may have a CPU burst of less than 1 time quantum, in which case
the process itself releases the CPU voluntarily and the scheduler proceeds to the next process in the
ready queue. If the CPU burst of the currently running process is longer than 1 time quantum, the timer
will go off and cause an interrupt to the operating system; a context switch is executed, the process is
put at the tail of the ready queue, and the CPU scheduler selects the next process in the queue. The
average waiting time under the RR policy is often long.

FIXED PRIORITY PRE-EMPTIVE SCHEDULING

A priority is associated with each process, and the CPU is allocated to the process with the highest
priority. Fixed priority pre-emptive scheduling uses the concept of pre-emption: a process can be
interrupted at any time when another process with a higher priority arrives, and the scheduler does not
wait for the current process to complete execution if it is of lower priority.

b) GANTT CHARTS
i. FOR FCFS

|  P1  |  P2  |  P3  |
0      9      15     20

FOR SJF

|  P3  |  P2  |  P1  |
0      5      11     20

FOR ROUND ROBIN (time quantum = 2)

| P1 | P2 | P3 | P1 | P2 | P3 | P1 | P2 | P3 | P1 | P1 |
0    2    4    6    8    10   12   14   16   17   19   20

FOR FIXED PRIORITY PRE-EMPTIVE

|  P2  |  P1  |  P3  |
0      6      15     20

ii) Waiting time of each process

FOR FCFS
Waiting time = turnaround time - burst time
P1 = (9 - 9) = 0
P2 = (15 - 6) = 9
P3 = (20 - 5) = 15
Hence average = (0 + 9 + 15)/3 = 8
FOR SJF
Waiting time = turnaround time - burst time
P3 = (5 - 5) = 0
P2 = (11 - 6) = 5
P1 = (20 - 9) = 11
Hence average = (0 + 5 + 11)/3 = 5.33
FOR ROUND ROBIN
Waiting time = turnaround time - burst time
P1 = (20 - 9) = 11
P2 = (16 - 6) = 10
P3 = (17 - 5) = 12
Hence average = (11 + 10 + 12)/3 = 11

FOR FIXED PRIORITY PRE-EMPTIVE SCHEDULING

Waiting time = turnaround time - burst time
P2 = (6 - 6) = 0
P1 = (15 - 9) = 6
P3 = (20 - 5) = 15
Hence average = (0 + 6 + 15)/3 = 7

iii) TURNAROUND TIME

FOR FCFS SCHEDULING
Turnaround time = exit time - arrival time
P1 = (9 - 0) = 9
P2 = (15 - 0) = 15
P3 = (20 - 0) = 20
Hence average = (9 + 15 + 20)/3 = 14.67
FOR SJF SCHEDULING
Turnaround time = exit time - arrival time
P3 = (5 - 0) = 5
P2 = (11 - 0) = 11
P1 = (20 - 0) = 20
Hence average = (5 + 11 + 20)/3 = 12


FOR ROUND ROBIN SCHEDULING
Turnaround time = exit time - arrival time
P1 = (20 - 0) = 20
P2 = (16 - 0) = 16
P3 = (17 - 0) = 17
Hence average = (20 + 16 + 17)/3 = 17.67

FOR FIXED PRIORITY PRE-EMPTIVE

Turnaround time = exit time - arrival time
P2 = (6 - 0) = 6
P1 = (15 - 0) = 15
P3 = (20 - 0) = 20
Hence average = (6 + 15 + 20)/3 = 13.67
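
The figures above can be cross-checked with a short sketch (bursts P1 = 9, P2 = 6, P3 = 5, all arriving at time 0, quantum 2 for round robin, priority order P2 > P1 > P3, as in the charts; since every process arrives together, the preemptive priority schedule reduces to running in priority order):

# Sketch: verifying the waiting and turnaround times computed above.
bursts = {"P1": 9, "P2": 6, "P3": 5}  # all processes arrive at t = 0

def run_in_order(order):
    """Non-preemptive run: FCFS/SJF/priority when everything arrives at 0."""
    t, waiting, turnaround = 0, {}, {}
    for p in order:
        waiting[p] = t     # time spent queued before its first (only) run
        t += bursts[p]
        turnaround[p] = t  # completion time minus arrival time (0)
    return waiting, turnaround

def round_robin(order, quantum=2):
    remaining, t, finish = dict(bursts), 0, {}
    queue = list(order)
    while queue:
        p = queue.pop(0)
        slice_ = min(quantum, remaining[p])
        t += slice_
        remaining[p] -= slice_
        if remaining[p]:
            queue.append(p)  # unfinished: back to the tail of the queue
        else:
            finish[p] = t
    return {p: finish[p] - bursts[p] for p in bursts}, finish

print(run_in_order(["P1", "P2", "P3"]))              # FCFS
print(run_in_order(sorted(bursts, key=bursts.get)))  # SJF
print(run_in_order(["P2", "P1", "P3"]))              # fixed priority
print(round_robin(["P1", "P2", "P3"]))               # round robin, q = 2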

