
WOLDIA UNIVERSITY

INSTITUTE OF TECHNOLOGY, SCHOOL OF COMPUTING

SOFTWARE ENGINEERING
COURSE TITLE: COMPUTER ORGANIZATION AND
ARCHITECTURE
COURSE CODE: SEng2032
GROUP ASSIGNMENT
STUDENT NAME ID
1. BETELIHEM ASMIRO-------------------------------------------1306222

2. DAGIM WOLDEKIDAN------------------------------------------1304693

3. ELIAS FERHAN-----------------------------------------------------1301001

4. HAILEMARIAM SEMERE---------------------------------------1306338

5. KALEAB BAYEH---------------------------------------------------1301691

6. KASAHUN HASEN--------------------------------------------------1301717

7. YARED KASSA-------------------------------------------------------1303033

8. YONAS AKLILU -----------------------------------------------------1303115

SUBMITTED TO: Daniel C.

SUBMITTED DATE: TUH 25, 2023


INTRODUCTION

The article discusses various topics related to computer organization and


architecture. It explains asynchronous data transfer, RAID technology, memory
hierarchy, mapping procedures for cache memory, micro-programmed control
organization, and subroutine call and return. It also discusses the differences
between primary and secondary memory and provides an explanation of the status
register and digital circuits. The article provides a comprehensive overview of these
topics and their importance in modern computer systems. It is a useful resource for
students and professionals interested in computer organization and architecture.
1) Asynchronous data transfer in computer organization.
A. Write about asynchronous data transfer in computer organization and
architecture.
Asynchronous data is data that is not synchronized when it is sent or received. In
this type of transmission, signals are sent between the computer and external
systems, or vice versa, in an asynchronous manner.
Asynchronous data transfer between two independent units requires that control
signals be transmitted between the communicating units to indicate when data is
being sent. Two methods can achieve this asynchronous way of data transfer.

Asynchronous data transfer in computer organization and architecture is a method


of transferring data between two or more devices without the use of a clock signal.
In this type of data transfer, the data is transmitted asynchronously, which means it
is sent without any fixed timing or synchronization between the sender and the
receiver.
One of the key advantages of asynchronous data transfer is its flexibility. This
method allows data to be transferred between devices of varying speeds and
capabilities, without relying on a fixed timing mechanism. Asynchronous data
transfer also allows for more efficient use of system resources, as devices can
initiate data transfer as and when they are ready, without waiting for a clock signal.
However, asynchronous data transfer also comes with some disadvantages.
One issue with this method is the possibility of data corruption or loss due to
timing errors or collisions. Asynchronous data transfer also requires more
sophisticated protocols for error detection and correction, which can increase the
complexity and cost of the system.

Despite these drawbacks, asynchronous data transfer is still widely used in


computer organization and architecture, particularly in serial communication and
networking. It offers a flexible and efficient way to transfer data between devices
with different speeds and capabilities, and has become an essential component of
modern computer systems.
B. What are the two methods that can achieve this asynchronous way of data
transfer?
There are two different methods of asynchronous data transfer: the Strobe Control
Method and the Handshaking Method.
1. Strobe Control Method
The strobe control technique of asynchronous data transfer uses a single control
line to time each transfer. The strobe can be activated by either the source or the
destination unit. In a source-initiated transfer, the data bus carries the binary data
from the source unit to the destination unit, and the strobe pulse tells the
destination when the data on the bus is valid.
2. Handshaking Method
Handshaking is an automated process of negotiation that dynamically sets
parameters of a communications channel established between two entities before
normal communication over the channel begins. It follows the physical
establishment of the channel and precedes normal information transfer.
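As an illustration, the request/acknowledge exchange behind handshaking can be sketched in a few lines of Python. This is a toy model only; the function name, line names, and data values are invented for illustration, and real handshaking happens in hardware signal lines, not software.

```python
# A minimal sketch of a source-initiated two-line handshake: the source
# asserts a request line, the destination latches the data and asserts
# acknowledge, and both lines drop before the next transfer begins.

def handshake_transfer(data_items):
    """Transfer each item with a simulated request/acknowledge exchange."""
    received = []
    log = []
    for item in data_items:
        bus = item                # 1. source places the data on the bus
        request = True            # 2. source asserts the request line
        log.append("REQ high")
        if request:               # 3. destination sees the request...
            received.append(bus)  # ...latches the data from the bus
            log.append("ACK high")  # ...and asserts acknowledge
        request = False           # 4. source sees ACK and drops request
        log.append("cycle done")  # 5. destination drops ACK; next transfer may start
    return received, log

items, log = handshake_transfer([0b1010, 0b0110])
```

Because each transfer waits for its own acknowledge, the two units need no shared clock, which is the essence of the handshaking method.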
2) RAID Technology
What is RAID? RAID (redundant array of independent disks) is a way of storing
the same data in different places on multiple hard disks or solid-state drives (SSDs)
to protect data in the case of a drive failure. There are different RAID levels,
however, and not all have the goal of providing redundancy.

A. Explain RAID 0, RAID 1, RAID 5


RAID 0

RAID 0 (also known as a stripe set or striped volume) splits ("stripes") data evenly
across two or more disks, without parity information, redundancy, or fault
tolerance. Since RAID 0 provides no fault tolerance or redundancy, the failure of
one drive will cause the entire array to fail; as a result of having data striped across
all disks, the failure will result in total data loss. This configuration is typically
implemented having speed as the intended goal. RAID 0 is normally used to
increase performance, although it can also be used as a way to create a large
logical volume out of two or more physical disks.
RAID 1
RAID 1 consists of an exact copy (or mirror) of a set of data on two or more disks;
a classic RAID 1 mirrored pair contains two disks. This configuration offers no
parity, striping, or spanning of disk space across multiple disks, since the data is
mirrored on all disks belonging to the array, and the array can only be as big as the
smallest member disk. This layout is useful when read performance or reliability is
more important than write performance or the resulting data storage capacity.

RAID 5
RAID 5 consists of block-level striping with distributed parity. Unlike in RAID 4,
parity information is distributed among the drives. It requires that all drives but one
be present to operate. Upon failure of a single drive, subsequent reads can be
calculated from the distributed parity such that no data is lost. RAID 5 requires at
least three disks.
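The parity idea behind RAID 5 can be sketched with a few lines of Python. The block values and function names here are invented for illustration; a real controller works on whole disk blocks, but the XOR principle is the same.

```python
# Sketch of RAID 5's parity idea: the parity block is the XOR of the data
# blocks, so any single missing block can be rebuilt by XOR-ing together
# all the surviving blocks and the parity.

def make_parity(blocks):
    p = 0
    for b in blocks:
        p ^= b              # parity = XOR of all data blocks
    return p

def rebuild(surviving_blocks, parity):
    # XOR of the surviving blocks with the parity recovers the missing block
    missing = parity
    for b in surviving_blocks:
        missing ^= b
    return missing

data = [0b1100, 0b1010, 0b0110]   # three data blocks on three drives
parity = make_parity(data)        # stored on a further drive (distributed in practice)
# suppose the drive holding data[1] fails:
recovered = rebuild([data[0], data[2]], parity)
```

This is why RAID 5 survives exactly one drive failure: with two blocks missing, the single XOR equation can no longer be solved.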
B.Compare and contrast RAID 0, RAID 1 and RAID 5 with respect to
performance issues (read & write performances).

RAID 0
In a RAID 0 system, data are split up into blocks that get written across all the
drives in the array. By using multiple disks (at least 2) at the same time, this offers
fast read and write speeds. All storage capacity can be fully used with no overhead.
The downside to RAID 0 is that it is NOT redundant; the loss of any individual
disk will cause complete data loss. Thus, it is not recommended unless the data has
no value to you.

RAID 1
RAID 1 is a setup of at least two drives that contain the exact same data. If a drive
fails, the others will still work. It is recommended for those who need high
reliability. An additional benefit of RAID 1 is the high read performance, as data
can be read off any of the drives in the array. However, since the data needs to be
written to all the drives in the array, the write speed is slower than a RAID 0 array.
Also, only the capacity of a single drive is available to you.
RAID 5
RAID 5 requires the use of at least 3 drives, striping the data across multiple drives
like RAID 0, but also has a “parity” distributed across the drives. In the event of a
single drive failure, data is pieced together using the parity information stored on
the other drives. There is zero downtime. Read speed is very fast but write speed is
somewhat slower due to the parity that has to be calculated. It is ideal for file and
application servers that have a limited number of data drives.

RAID 5 loses 33 percent of storage space (using three drives) for that parity, but it
is still a more cost-effective setup than RAID 1. The most popular RAID 5
configurations use four drives, which lowers the lost storage space to 25 percent. It
can work with up to 16 drives.
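The capacity figures above (33 percent lost with three drives, 25 percent with four) follow from simple arithmetic, which can be sketched as follows. The function name is invented for illustration, and identical drives of one size are assumed.

```python
# Usable capacity for the three RAID levels, assuming n identical drives
# of drive_gb gigabytes each.

def usable_capacity(level, n, drive_gb):
    if level == "RAID0":
        return n * drive_gb        # all capacity usable, no redundancy
    if level == "RAID1":
        return drive_gb            # only one drive's worth is usable
    if level == "RAID5":
        return (n - 1) * drive_gb  # one drive's worth is lost to parity
    raise ValueError(level)

# With three 1000 GB drives:
#   RAID 0 gives 3000 GB, RAID 1 gives 1000 GB, RAID 5 gives 2000 GB,
# so RAID 5 loses 1/3 of the raw space with three drives and 1/4 with four.
```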

C. What is the difference between RAID 0 and RAID 1?

RAID 0 offers the best performance and capacity but no fault tolerance.
Conversely, RAID 1 offers fault tolerance but does not offer any capacity or
performance benefits. While performance is an important factor, backup admins
may prioritize fault tolerance to better protect data.
RAID 01 is a mirrored stripe set. In other words, there are two groups of disks,
each acting as a stripe set. Any write operations that are sent to the first group are
also sent to the second group, thereby creating two synchronized, identical stripe
sets. This approach delivers the performance of RAID 0 along with the fault
tolerance of RAID 1. Like RAID 1, however, 50% of the total storage capacity is
lost to provide redundancy.

D. What is the difference between RAID 1 and RAID 5?

RAID 1 is a simple mirror configuration where two (or more) physical disks store
the same data, thereby providing redundancy and fault tolerance.
RAID 5 also offers fault tolerance but distributes data by striping it across
multiple disks. The key feature of RAID 1 is mirroring the data, while RAID 5
distributes data and parity equally across multiple disks. Parity information about
the data is stored in RAID 5 so that even if a hard disk fails, the data can be
rebuilt; this feature is not present in RAID 1, which relies instead on the mirror
copy.
A RAID 1 mirror with n copies can tolerate up to n - 1 disk failures, while RAID 5
tolerates the failure of only one disk.
RAID 1 generally has faster write speeds than RAID 5, because RAID 5 must
calculate and write parity on every write, whereas RAID 1 simply writes the same
data to each mirror. In both levels, data remains accessible after a single drive
failure: RAID 1 reads from the surviving mirror, and RAID 5 reconstructs the
missing blocks from the parity on the other drives. In RAID 1, usable capacity is
reduced to half (with two disks) because two copies of the data are stored; RAID 5
loses only one disk's worth of capacity to parity, so to store 300 GB of data,
RAID 1 needs 600 GB of raw storage, whereas RAID 5 with three drives needs
only about 450 GB.
Both levels can rebuild a failed drive while the array keeps working, but recovery
is faster in RAID 1, where the mirror is simply copied, and slower in RAID 5, due
to the parity calculations needed to rebuild the data. RAID 1 does no parity-based
error correction, whereas RAID 5 can regenerate a missing data block from parity.
RAID 1 is therefore often the choice for high-end applications that need maximum
reliability and fast recovery, whereas RAID 5 is considered for medium-level
applications where capacity efficiency matters.
E. What is a RAID array?
A Redundant Array of Independent Disks (RAID) array is a technology used to
improve data storage reliability and performance. It consists of multiple physical
hard drives that work together as a single storage unit, providing redundancy and
fault tolerance.

RAID arrays are commonly used in servers, data centers, and other environments
where data reliability and performance are critical. A RAID array is a good option
in cases such as the following:

 When a large amount of data needs to be restored. If a drive fails and data is
lost, that data can be restored quickly, because this data is also stored in
other drives.
 When uptime and availability are important business factors. If data
needs to be restored, it can be done quickly without downtime.
 When working with large files. RAID provides speed and reliability when
working with large files.
 When an organization needs to reduce strain on physical hardware and
increase overall performance. As an example, a hardware RAID card can
include additional memory to be used as a cache.
 When having I/O disk issues. RAID will provide additional throughput by
reading and writing data from multiple drives, instead of needing to wait for
one drive to perform tasks.
 When cost is a factor. The cost of a RAID array is lower than it was in the
past, and lower-priced disks are used in large numbers, making it cheaper.
3) Mapping function
A) Explain the different mapping procedures in the organization of cache
memory.
The mapping procedure used in the organization of cache memory is usually one of
three types: direct mapping, fully associative mapping, and set associative
mapping.

1. Direct mapping: In this method, each block of main memory is mapped to only
one block in the cache memory. This means that a given location in main memory
can be cached in only one location in the cache memory. Direct mapping is the
simplest and most efficient mapping technique but is prone to cache conflicts.
2. Fully associative mapping: In this method, any block of main memory can be
mapped to any block of the cache memory. This means that multiple locations in
main memory can be cached in any location in the cache memory. The
disadvantage of this method is that it is more complex than direct mapping, and
thus it requires more hardware to implement.
3. Set associative mapping: This method is a compromise between direct and fully
associative mappings. The cache memory is divided into sets, with each set
containing several cache blocks. Each main memory block can be mapped to any
one of the cache blocks in a particular set. This method reduces the likelihood of
cache conflicts compared to direct mapping while still being less complex
compared to fully associative mapping.
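The address split used by direct mapping can be made concrete with a short Python sketch. The cache geometry (8 lines, 16-byte blocks) and the function name are assumptions chosen for illustration.

```python
# How a direct-mapped cache interprets a memory address: the low bits give
# the byte offset within a block, the middle bits pick the single cache
# line the block may occupy, and the remaining high bits form the tag.

NUM_LINES = 8     # hypothetical cache with 8 lines
BLOCK_SIZE = 16   # of 16 bytes each

def split_address(addr):
    offset = addr % BLOCK_SIZE          # byte within the block
    block_number = addr // BLOCK_SIZE
    line = block_number % NUM_LINES     # the ONE line this block may use
    tag = block_number // NUM_LINES     # identifies which block occupies the line
    return tag, line, offset

# Addresses 0x00 and 0x80 map to the same line with different tags,
# which is exactly the kind of conflict direct mapping is prone to.
```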

B) Discuss the hierarchy of memory in a computer system with regard to
speed, cost, and size.
The computer system memory hierarchy can be organized in a hierarchy of speed,
cost, and size as follows:
1. Registers: This is the fastest type of memory available in a computer system.
Registers are built into the CPU and are used for storing data that the CPU needs to
access frequently.
2. Cache memory: This is a relatively small amount of high-speed memory that is
used to store frequently accessed data. It is more expensive than main memory but
much faster.
3. Main memory: This is the primary memory of a computer system, which usually
consists of DRAM or similar types of memory. It is slower than cache memory and
more expensive than secondary memory.
4. Secondary memory: This is the storage space that is used for long-term data
storage, such as hard disk drives, solid-state drives, and optical drives. It is slower
than main memory and much less expensive.

C) What is the difference between primary and secondary memory?
Discuss MDR and MAR.

The primary memory of a computer system is also known as the main memory or
the internal memory. Primary memory is directly accessible by the CPU and is
where the CPU stores instructions and data that are currently being used. The
primary memory is volatile and loses its contents when power is switched off.
Secondary memory, on the other hand, is the long-term storage of a computer
system. Secondary memory is non-volatile, and the contents are retained even
when power is switched off. Data and instructions are transferred between the
primary memory and secondary memory as needed.
MDR and MAR
MDR stands for Memory Data Register, and it is a register that holds the data that
is being read from or written to the memory. The MDR is used to hold the data that
is being transferred between the memory and the CPU.
MAR stands for Memory Address Register, and it is a register that holds the
memory address of the data that is being read from or written to the memory. The
MAR is used to specify the address of the memory location that the CPU wants to
access.
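A toy model of how MAR and MDR mediate a memory access can be sketched as follows. The 8-word memory, its contents, and the class name are invented for illustration.

```python
# Toy sketch of memory read/write cycles using MAR and MDR: the CPU puts
# an address in MAR; the data travels through MDR in both directions.

memory = [5, 9, 12, 7, 0, 3, 8, 1]   # hypothetical main memory contents

class CPU:
    def __init__(self):
        self.MAR = 0   # Memory Address Register
        self.MDR = 0   # Memory Data Register

    def read(self, address):
        self.MAR = address            # CPU places the address in MAR
        self.MDR = memory[self.MAR]   # memory delivers the word into MDR
        return self.MDR

    def write(self, address, value):
        self.MAR = address            # address of the target location
        self.MDR = value              # data to be written goes through MDR
        memory[self.MAR] = self.MDR

cpu = CPU()
```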
D) Explain micro-programmed control organization with a neat diagram.
Micro programmed control is a type of control unit design in which the control
signals are generated by a micro program rather than by hardwired logic circuits.
Microprogramming involves breaking down complex instructions into a sequence
of microinstructions, each of which corresponds to a specific control signal. These
microinstructions are stored in a control memory, which is accessed by the control
unit during instruction execution.
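The idea of expanding each machine instruction into a stored sequence of microinstructions can be sketched as follows. The opcodes, the control-signal names, and the shape of the control memory are all invented for illustration; a real control memory holds binary control words, not strings.

```python
# Sketch of micro-programmed control: the control memory maps each opcode
# to its microprogram, a sequence of microinstructions, and the control
# unit steps through that sequence to drive the datapath.

CONTROL_MEMORY = {
    "LOAD": ["MAR<-PC", "MDR<-M[MAR]", "ACC<-MDR"],
    "ADD":  ["MAR<-PC", "MDR<-M[MAR]", "ACC<-ACC+MDR"],
}

def execute(opcode):
    signals = []
    for micro in CONTROL_MEMORY[opcode]:  # fetch microinstructions in order
        signals.append(micro)             # each one asserts control signals
    return signals
```

Changing the instruction set then means rewriting entries in the control memory rather than redesigning hardwired logic, which is the main appeal of this organization.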
E) Write about the status register and explain the status bit conditions and
digital circuits.
Status register and Status bit conditions
The status register is a register that holds various status bits that indicate the
outcome of an instruction execution. The status register is used to report errors or
other conditions that may influence the continuation of the program. The status bits
are used to indicate whether certain conditions are true or false, and whether
certain events have occurred.

Examples of status bits and their conditions are:


1. Zero flag: This flag is set when the result of an operation is zero.
2. Carry flag: This flag is set when the operation results in a value that exceeds the
maximum representable value.
3. Overflow flag: This flag is set when the signed result of an operation exceeds
the maximum representable signed value.
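The three flag conditions above can be checked with a small Python sketch of an 8-bit addition. The function name is an assumption for illustration; real CPUs set these bits in hardware as a side effect of the ALU operation.

```python
# Computing the zero, carry, and overflow flags for an 8-bit addition.

def add8_flags(a, b):
    raw = a + b
    result = raw & 0xFF          # keep only 8 bits
    zero = (result == 0)         # zero flag: the result is zero
    carry = (raw > 0xFF)         # carry flag: unsigned result exceeded 255
    # overflow flag: the operands share a sign bit but the result's differs
    sa, sb, sr = a & 0x80, b & 0x80, result & 0x80
    overflow = (sa == sb) and (sr != sa)
    return result, zero, carry, overflow

# Example: 0x80 + 0x80 wraps to 0x00, setting zero, carry, and overflow.
```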

Digital circuit
A digital circuit is an electronic circuit that operates with digital signals. Digital
circuits are composed of logic gates that perform operations on binary logic inputs
(0 or 1) to produce binary logic outputs. Digital circuits are used in computers,
calculators, and other electronic devices, where signals are represented using
binary digits.
A digital circuit can be designed using various logic gates, such as AND, NOT, OR,
XOR, etc. Digital circuits can also be combined to form more complex systems,
such as registers, counters, and memory units. One of the main advantages of
digital circuits is their tolerance to noise and interference, which makes them more
reliable compared to analog circuits.
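To show how the basic gates combine into something more complex, here is a small Python sketch that builds XOR out of AND, OR, and NOT. Modeling gates as one-bit functions is an illustrative simplification of real logic circuits.

```python
# Basic gates as functions on the bits 0 and 1.
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

def XOR(a, b):
    # a XOR b = (a AND NOT b) OR (NOT a AND b)
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# Truth table: XOR is 1 exactly when the inputs differ.
```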
4) Instructions: Write about subroutine call and return.
A subroutine call is an instruction in a program that transfers control to a specific
sequence of code that performs a specific task. The purpose of a subroutine call is
to break down complex tasks into smaller, more manageable pieces that can be
reused throughout the program.
When a program executes a subroutine call, it saves the current execution state,
including the values of any variables or registers that are in use, and transfers
control to the subroutine. The subroutine then executes its code, which may
include input/output operations, calculations, or other tasks.
Once the subroutine completes its task, it executes a return instruction, which
transfers control back to the calling program. The return instruction restores the
saved execution state, including the values of any variables or registers that were in
use before the subroutine call, and resumes execution at the point immediately
following the subroutine call.
Subroutine calls and return instructions are essential for structured programming
and can help to make programs more modular, maintainable, and efficient. By
breaking down complex tasks into smaller subroutines, programmers can create
code that is easier to understand and debug. Additionally, by reusing subroutines
throughout the program, programmers can reduce code duplication and improve
performance.
In addition to being used for breaking down complex tasks, subroutines can also be
used for implementing recursion, which is a powerful technique for solving
problems that involve repetitive or iterative processes. Recursion involves calling a
subroutine from within itself, which allows the program to solve a problem by
breaking it down into smaller and smaller subproblems until a base case is
reached. Overall, subroutine calls and return instructions are an essential part of
modern programming languages and are used extensively in software development
to create efficient, modular, and maintainable code.
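The save-and-restore mechanism described above can be sketched with an explicit return-address stack. The instruction names and the tiny program are invented for illustration; this models only the return address, while real machines also save registers and other state.

```python
# Toy sketch of CALL and RET: CALL pushes the address of the following
# instruction and jumps to the subroutine; RET pops it and resumes there.

def run(program):
    pc = 0            # program counter
    stack = []        # return-address stack
    trace = []        # order in which instructions execute
    while True:
        op, arg = program[pc]
        trace.append(pc)
        if op == "CALL":
            stack.append(pc + 1)   # save the return address
            pc = arg               # jump to the subroutine entry
        elif op == "RET":
            pc = stack.pop()       # resume right after the call
        elif op == "END":
            return trace
        else:                      # ordinary instruction
            pc += 1

prog = [("CALL", 3), ("NOP", None), ("END", None),   # main program at 0..2
        ("NOP", None), ("RET", None)]                # subroutine at 3..4
```

Running the program visits the main program, detours through the subroutine, and resumes at the instruction immediately following the call, exactly as described above.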
5) SUMMARY
The article discusses various aspects of computer organization and architecture,
including asynchronous data transfer, RAID technology, mapping procedures for
cache memory, the hierarchy of memory, micro-programmed control organization,
and subroutine calls and return instructions. Asynchronous data transfer is a method of
transferring data between devices without a clock signal, while RAID technology
is a way of storing data redundantly on multiple disks. Mapping procedures for
cache memory include direct mapping, fully associative mapping, and set
associative mapping. The hierarchy of memory is organized based on speed, cost,
and size, starting from registers, cache memory, main memory, and secondary
memory. Micro-programmed control organization is a type of control unit design
that uses microcode to implement the CPU's instruction set. Subroutine calls and
return instructions are essential for breaking down complex tasks into smaller,
more manageable pieces.
