
 DEPARTMENT: COMPUTER SCIENCE

 COURSE: COMPUTER ORGANIZATION AND ARCHITECTURE

 Prepared by: Tarekegn Osa


ID: 70800/14
Section: II (two)    Submitted to: Mr. Elias B.
Submission date: 15/03/2016 E.C.
Wolaita Sodo, Ethiopia
2016 E.C.
Q1: Explain briefly the techniques and methods of memory mapping functions in computer organization.

Answers:

4. Memory-mapped files: This method maps a file into a process's address space as a block of memory. The operating system uses a memory-mapped file table to map the file's blocks to memory addresses.
5. Shared memory mapping: This method maps a shared memory block into the address spaces of multiple processes. The operating system uses a shared memory table to map the shared memory block to memory addresses in each process.

These techniques and methods of memory mapping allow programs to efficiently access and manipulate files and memory blocks, and enable operating systems to manage memory and resources more effectively.
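As an illustration of memory-mapped file access, the sketch below uses the POSIX mmap call to map a file into a process's address space and read it like an ordinary array; the file name data.bin is only a placeholder, and error handling is kept to a minimum.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void) {
    /* Placeholder input file; any existing, non-empty file works. */
    int fd = open("data.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    /* Map the whole file into this process's address space.
       After this call the file contents can be read like an array. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* The OS pages file blocks in on demand; no explicit read() is needed. */
    printf("first byte: 0x%02x\n", (unsigned char)p[0]);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}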
Q2: Explain briefly cache memory principles and techniques
Answers:
There are several techniques used in cache memory design, including:

1. Cache partitioning: This involves dividing the cache into smaller, independent partitions (for example, per core or per application). This allows for more efficient use of cache resources and can improve performance.
2. Cache replacement policies: These policies determine which cache entry is replaced when the cache is full. Common replacement policies include Least Recently Used (LRU), Least Frequently Used (LFU), and Random Replacement (a small LRU sketch follows this list).
3. Cache coherence: This refers to the mechanism that ensures that multiple processors or cores in a multi-core system have a consistent view of shared data. There are several cache coherence protocols, including MSI (Modified, Shared, Invalid), MESI (Modified, Exclusive, Shared, Invalid), and MOESI (Modified, Owned, Exclusive, Shared, Invalid).
4. Cache memory hierarchy: This refers to the organization of cache memory into multiple levels, with smaller, faster caches at the top of the hierarchy and larger, slower caches at the bottom. This allows for efficient use of cache resources and can improve performance.
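The following is a minimal sketch of the LRU replacement policy for a tiny, fully associative cache; the 4-line cache size and the block-address trace are arbitrary assumptions made for illustration.

#include <stdio.h>

#define LINES 4  /* assumed cache size: 4 fully associative lines */

int main(void) {
    int tag[LINES], last_used[LINES];
    int valid[LINES] = {0};
    int now = 0, hits = 0, misses = 0;

    /* Arbitrary block-address trace used only for illustration. */
    int trace[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3};
    int n = sizeof trace / sizeof trace[0];

    for (int i = 0; i < n; i++, now++) {
        int addr = trace[i], hit = -1, victim = 0;
        for (int j = 0; j < LINES; j++)
            if (valid[j] && tag[j] == addr) hit = j;
        if (hit >= 0) {                    /* hit: refresh recency */
            hits++;
            last_used[hit] = now;
        } else {                           /* miss: evict the least recently used line */
            misses++;
            for (int j = 0; j < LINES; j++) {
                if (!valid[j]) { victim = j; break; }
                if (last_used[j] < last_used[victim]) victim = j;
            }
            valid[victim] = 1;
            tag[victim] = addr;
            last_used[victim] = now;
        }
    }
    printf("hits=%d misses=%d\n", hits, misses);
    return 0;
}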
Q3: Explain briefly the memory hierarchy and its types in the computer system.

Answers
Memory hierarchy is the arrangement of different kinds of memory and storage devices in a computer system based on their speed, capacity, and cost. It ranges from the fastest and smallest CPU registers to the slowest and largest backup storage. The memory hierarchy is designed to reduce the performance gap between the processor and memory: data moves between levels according to the processor's current requirements. Memory hierarchy can be classified into two types: internal memory and external memory. The hierarchy can also be extended by virtual memory, a system that provides programs with large address spaces that may exceed the actual RAM.

Internal memory is directly accessible by the processor and is divided into three levels: registers, cache memory, and main memory.

A. Registers are small, high-speed memory units located in the CPU. They are used to store the most frequently used data and instructions. Registers have the fastest access time and the smallest storage capacity, typically ranging from 16 to 64 bits.
B. Cache memory is a small, fast memory unit located close to the CPU. It stores frequently used data and instructions that have recently been accessed from the main memory. Cache memory is designed to minimize the time it takes to access data by providing the CPU with quick access to frequently used data.
C. Main memory, also known as RAM (Random Access Memory), is the primary memory of a computer system. It has a larger storage capacity than cache memory, but it is slower. Main memory is used to store data and instructions that are currently in use by the CPU, and can be further classified into two types: Static RAM and Dynamic RAM.

External memory is not directly accessible by the processor and is used for storing data and instructions that are not currently in use by the CPU. External memory can be further classified into two types: magnetic disks and optical disks.
Magnetic disks are used for storing large amounts of data and are relatively cheap. Optical disks are used for storing and distributing large amounts of data; they are slower than magnetic disks but are more durable and have a longer lifespan.
There are different types of memory in a
computer system:
1. Random Access Memory (RAM): This is a type
of main memory that allows the CPU to access any
location in the memory directly. RAM is volatile,
meaning that its contents are lost when the
computer is turned off.
2. Read-Only Memory (ROM): This is a type of main
memory that stores data that cannot be changed or
written to. ROM is non-volatile, meaning that its
contents are retained even when the computer is
turned off.
3. Virtual Memory: This is a technique that allows the computer to use a combination of RAM and secondary storage to give the CPU a larger address space than the physical RAM alone. Virtual memory is used when the RAM becomes full, so that less frequently used data can be moved to secondary storage.
4. Cache Memory: This is a small, fast memory that
stores frequently accessed data. It acts as a buffer
between the main memory and the CPU, reducing
the time it takes for the CPU to access data.
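The effect of the memory hierarchy is visible from software through locality of reference. In the sketch below (the matrix size and the use of clock() for timing are arbitrary choices), a row-by-row traversal walks memory sequentially and is cache friendly, while a column-by-column traversal of the same matrix usually runs noticeably slower because most of its accesses miss in the cache.

#include <stdio.h>
#include <time.h>

#define N 2048  /* arbitrary matrix dimension for illustration */

static int m[N][N];

int main(void) {
    long sum = 0;

    /* Touch every element first so the matrix really occupies memory. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            m[i][j] = 1;

    clock_t t0 = clock();

    /* Row-major traversal: consecutive accesses stay within cache lines. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += m[i][j];

    clock_t t1 = clock();

    /* Column-major traversal: each access jumps N*sizeof(int) bytes,
       so most accesses miss in the cache.                              */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += m[i][j];

    clock_t t2 = clock();

    printf("row-major: %.3fs  column-major: %.3fs  (sum=%ld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
    return 0;
}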
Q4: Explain briefly the CPU and its components, addressing modes, organization, data transfer modes, and instruction types.
Answers:

Addressing Modes:
Addressing modes determine how an instruction specifies the location of its operands. Some common addressing modes include:

* Direct Addressing: The instruction contains the address of the operand, which the CPU accesses directly.
* Indirect Addressing: The instruction refers to a memory location that holds the address of the operand.
* Register-Indirect Addressing: The CPU uses a register to hold a pointer to the memory location of the operand.
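These modes can be loosely mimicked in C: an ordinary variable access corresponds to direct addressing, a pointer dereference to register-indirect addressing, and a double dereference to indirect addressing. The variable names below are purely illustrative.

#include <stdio.h>

int main(void) {
    int value = 42;        /* operand stored at a known memory location            */
    int *ptr = &value;     /* a pointer, in practice often kept in a register      */

    /* Direct addressing: the access names the operand's location directly.        */
    int direct = value;

    /* Register-indirect addressing: a register holds the address,
       and the CPU dereferences it to reach the operand.                           */
    int indirect = *ptr;

    /* Indirect addressing: the instruction points at a location that itself
       holds the operand's address (a pointer to a pointer).                       */
    int **pptr = &ptr;
    int doubly_indirect = **pptr;

    printf("%d %d %d\n", direct, indirect, doubly_indirect);
    return 0;
}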

Organization:
CPUs can be organized in different ways, including:

* Von Neumann Architecture: This architecture stores instructions and data in a single shared memory and uses a single shared bus to connect the CPU, memory, and I/O devices.
* Harvard Architecture: This architecture uses separate memories and buses for instructions and data, so both can be accessed at the same time.
Data Transfer Modes:
Data transfer modes determine how data is moved between the CPU and memory. Some common data transfer modes include:

* Load: The CPU loads data from memory into a register.
* Store: The CPU stores data from a register into memory.
* Load/Store: The CPU loads data from memory into a register and stores data from a register into memory.

Instruction Types:
Instructions are the commands that the CPU executes. Some common instruction types include:

* Arithmetic Instructions: These instructions perform mathematical operations, such as addition and subtraction.
* Logical Instructions: These instructions perform logical operations, such as AND and OR.
* Control Flow Instructions: These instructions control the flow of program execution, such as jump and branch instructions.
* Memory Access Instructions: These instructions access memory, such as load and store instructions.
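A toy interpreter sketch can make these categories concrete; the three-field instruction format and the opcode names below are invented for illustration and are not taken from any real instruction set.

#include <stdio.h>

/* Invented opcodes, one per instruction category discussed above. */
enum { ADD, AND, LOAD, STORE, JUMP, HALT };

typedef struct { int op, a, b; } Instr;

int main(void) {
    int reg[4] = {0}, mem[8] = {5, 7, 0, 0, 0, 0, 0, 0};
    Instr prog[] = {
        {LOAD,  0, 0},   /* memory access: reg[0] = mem[0]        */
        {LOAD,  1, 1},   /* memory access: reg[1] = mem[1]        */
        {ADD,   0, 1},   /* arithmetic:    reg[0] += reg[1]       */
        {AND,   0, 1},   /* logical:       reg[0] &= reg[1]       */
        {STORE, 0, 2},   /* memory access: mem[2] = reg[0]        */
        {JUMP,  6, 0},   /* control flow:  jump to instruction 6  */
        {HALT,  0, 0},
    };
    for (int pc = 0; prog[pc].op != HALT; ) {
        Instr i = prog[pc++];
        switch (i.op) {
        case ADD:   reg[i.a] += reg[i.b]; break;
        case AND:   reg[i.a] &= reg[i.b]; break;
        case LOAD:  reg[i.a] = mem[i.b];  break;
        case STORE: mem[i.b] = reg[i.a];  break;
        case JUMP:  pc = i.a;             break;
        }
    }
    printf("mem[2] = %d\n", mem[2]);
    return 0;
}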
Q5: Explain briefly Pipeline and Vector processing.
a. Pipelining
b. Arithmetic pipeline
c. Instruction pipeline
d. RISC pipeline
e. Vector processing
Answers:

A. Pipelining:
Pipelining is a technique used in computer processors to improve performance by processing multiple instructions simultaneously in a series of stages. Each stage completes a specific function, such as fetching, decoding, executing, and storing results. Pipelining allows the next instruction to be processed while the previous instruction is still being executed, resulting in increased instruction throughput.
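The benefit can be estimated with the usual idealized model: with k stages and n instructions, a pipeline needs about k + (n - 1) cycles instead of n * k, so the speedup approaches k for large n. A small sketch of that calculation (the stage and instruction counts are arbitrary):

#include <stdio.h>

int main(void) {
    int k = 5;           /* assumed number of pipeline stages       */
    long n = 1000;       /* assumed number of instructions executed */

    long unpipelined = n * k;          /* every instruction takes k cycles          */
    long pipelined   = k + (n - 1);    /* fill the pipe once, then one per cycle    */

    printf("speedup = %.2f (ideal limit = %d)\n",
           (double)unpipelined / pipelined, k);
    return 0;
}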

B. Arithmetic pipeline:
An arithmetic pipeline is a type of pipeline that is specifically designed to handle arithmetic operations, such as addition and multiplication, by breaking them into stages. It can overlap several operations at once, resulting in improved performance.

C. Instruction pipeline:
An instruction pipeline is a type of pipeline that handles the execution of instructions in a program. It is responsible for fetching, decoding, and executing instructions in sequence, allowing the next instruction to begin while the previous instruction is still being executed.

D. RISC pipeline:
A RISC (Reduced Instruction Set Computing) pipeline is a pipeline built around simple, fixed-length instructions such as load, store, and register-to-register arithmetic. Because each instruction is simple enough to move through one pipeline stage per cycle, the pipeline stays full, resulting in improved performance and reduced power consumption.

E. Vector processing:
Vector processing is a technique used in computer processors to perform the same operation on multiple data elements simultaneously. It is particularly useful for handling large datasets, such as in machine learning, scientific simulations, and data analytics. Vector processing can be achieved through specialized hardware, such as vector processing units (VPUs) or graphics processing units (GPUs), or through software libraries that utilize the CPU's vector instructions.
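A data-parallel loop such as the one below is the typical target of vector processing: a vectorizing compiler (for example with gcc -O3) or an explicit SIMD library can apply one instruction to several elements at a time. The array length is an arbitrary choice.

#include <stdio.h>

#define N 1024  /* arbitrary vector length */

int main(void) {
    float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* The same operation is applied to every element: with auto-vectorization
       the compiler can emit SIMD instructions that add several floats per
       instruction instead of one at a time.                                   */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[10] = %.1f\n", c[10]);
    return 0;
}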
Q6: Explain Flynn's classification of computers
Answers:

Flynn's classification of computer architectures is a way of categorizing computer systems based on the number of instruction streams and data streams they can process concurrently. There are four main classifications:

1. Single-Instruction, Single-Data (SISD) - This is the simplest architecture, where a single instruction is executed at a time on a single data stream. Examples include simple calculators and small embedded systems.
2. Single-Instruction, Multiple-Data (SIMD) - In this architecture, a single instruction is executed on multiple data elements simultaneously. This is commonly used in digital signal processing, image processing, and scientific simulations.
3. Multiple-Instruction, Single-Data (MISD) - In this architecture, multiple instructions operate simultaneously on the same data stream. MISD machines are rare in practice; the class is usually associated with fault-tolerant systems in which several processors process the same data and compare or vote on the results.

4. Multiple-Instruction, Multiple-Data (MIMD) - This is the most general architecture, where multiple instructions are executed simultaneously and each instruction can operate on different data. This is commonly used in high-performance computing, such as in multi-core processors, supercomputers, and mainframes.

Each classification has its own advantages and disadvantages, and they are used in different situations depending on the requirements of the application. For example, SIMD architectures are well suited to data-parallel tasks, while MIMD architectures are more flexible and can handle a wide range of tasks.
Q7: Explain registers in a computer system and their functions
Answers:

A register is a small amount of memory that is built into the central processing unit (CPU) or other processing devices. Registers are used to store data temporarily while it is being processed or manipulated. They are typically faster and more efficient than main memory, because they are closer to the CPU and do not require memory access cycles.

There are several types of registers in a computer system, each with its own specific function:

1. General-purpose registers: These are the most commonly used registers in a computer system. They are used to store data that is being processed or manipulated by the CPU. Examples of general-purpose registers include EAX, EBX, ECX, EDX, and ESP in the x86 architecture.
2. Special-purpose registers: These registers are used for specific purposes, such as holding the program counter (PC), the stack pointer (SP), and the instruction pointer (IP). Examples of special-purpose registers include the EIP, ESP, and EBP registers in the x86 architecture.

3. Floating-point registers: These registers are used to store floating-point numbers and are used in mathematical operations that involve them. Examples include the x87 ST(0)-ST(7) registers and the XMM0-XMM7 registers in the x86 architecture.
4. Vector registers: These registers are used to store data that is processed in parallel, such as in vector (SIMD) operations. Examples of vector registers include the XMM0-XMM7 registers in the x86 architecture.
5. Constant registers: These registers hold fixed values that instructions can read but not change. Examples include the hard-wired zero register found in some RISC architectures ($zero in MIPS, x0 in RISC-V).
6. Status registers: These registers store condition flags describing the result of the most recent operation, such as the zero, carry, sign, and overflow flags. An example is the EFLAGS register in the x86 architecture.
7. Control registers: These registers control the operation of the processor, such as enabling protected mode, paging, and other processor features. Examples of control registers include the CR0-CR4 registers in the x86 architecture.

In summary, registers are small amounts of memory that are built into the CPU or other processing devices. They are used to store data temporarily while it is being processed or manipulated, and they are typically faster and more efficient than main memory. There are several types of registers in a computer system, each with its own specific function, including general-purpose registers, special-purpose registers, floating-point registers, vector registers, constant registers, status registers, and control registers.
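Register usage is normally hidden by the compiler, but GCC/Clang extended inline assembly can make it visible. The sketch below is compiler- and architecture-specific (x86/x86-64 only) and is shown purely as an illustration: it asks the compiler to keep two values in general-purpose registers and add them with a single ADD instruction.

#include <stdio.h>

int main(void) {
    int a = 5, b = 7;

    /* "+r"(a): keep a in a general-purpose register, read and written.
       "r"(b) : keep b in a general-purpose register, read only.
       The compiler chooses which registers (e.g. EAX, EBX) to use.     */
    __asm__("addl %1, %0" : "+r"(a) : "r"(b));

    printf("a + b = %d\n", a);
    return 0;
}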
Q8: Explain 7-segment Architecture and its design concepts:
Answers:

7-segment architecture is a method of designing computer systems that uses a combination of seven different segments or regions to organize memory and provide protection and security. These segments are:

1. Text segment: This segment contains the program's instructions and is read-only.
2. Data segment: This segment contains the program's data and is read-write.
3. Stack segment: This segment is used for temporary storage of data and is read-write.
4. Heap segment: This segment is used for dynamic memory allocation and is read-write.
5. Program segment: This segment contains the program's code and is read-only.
6. System segment: This segment contains the operating system's kernel and is read-only.
7. Firmware segment: This segment contains the computer's firmware and is read-only.

The design concepts behind 7-segment architecture include:

1. Separation of concerns: Each segment serves a specific purpose, which helps to improve security and reliability by reducing the risk of data corruption or tampering.
2. Memory protection: Each segment has its own access permissions, which helps to prevent unauthorized access to sensitive data and code.
3. Code integrity: The program segment is read-only, which ensures that the program's code cannot be modified or tampered with.
4. Data isolation: The data segment is read-write, which allows programs to modify data without affecting other segments.
5. Resource management: The heap segment is used for dynamic memory allocation, which allows programs to allocate and deallocate memory as needed.
6. Security: The system segment contains the operating system's kernel, which provides a layer of protection against malicious software.
7. Flexibility: 7-segment architecture allows for the addition of new segments as needed, which provides flexibility and scalability.

Fig: Seven-segment display (BCD to 7-segment decoder)
Fig: Design concept of the seven-segment decoder
Overall, 7-segment architecture provides a structured approach to memory management and protection, which helps to improve the security, reliability, and performance of computer systems.
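The figures refer to a BCD-to-7-segment decoder, which maps a 4-bit BCD digit onto the seven segment lines a-g. The sketch below shows that mapping in software, assuming active-high segments with segment a on bit 0 through segment g on bit 6 (a common-cathode convention); a hardware decoder implements the same truth table with logic gates.

#include <stdio.h>

/* Segment pattern for digits 0-9: bit 0 = a, bit 1 = b, ..., bit 6 = g.
   A set bit means the segment is lit (active-high / common cathode).   */
static const unsigned char seg[10] = {
    0x3F, /* 0 */ 0x06, /* 1 */ 0x5B, /* 2 */ 0x4F, /* 3 */ 0x66, /* 4 */
    0x6D, /* 5 */ 0x7D, /* 6 */ 0x07, /* 7 */ 0x7F, /* 8 */ 0x6F  /* 9 */
};

int main(void) {
    for (int bcd = 0; bcd < 10; bcd++)
        printf("BCD %d -> segments a-g = 0x%02X\n", bcd, seg[bcd]);
    return 0;
}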
Q9: Explain combinatory digital devices/circuits and their design concepts:
Answers

Combinatory digital devices, also known as combinational logic circuits, are digital circuits that perform logical operations on one or more input signals to produce an output signal. These circuits are called "combinatory" because they combine the input signals in various ways to produce the output. Combinational circuits include the half adder, full adder, encoder, decoder, multiplexer, demultiplexer, etc.
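As a concrete example of such a circuit, a full adder combines two operand bits and a carry-in into a sum and a carry-out using only XOR, AND, and OR gates: sum = a XOR b XOR cin, and cout = (a AND b) OR (cin AND (a XOR b)). The sketch below models that gate-level logic and prints its truth table.

#include <stdio.h>

/* Gate-level full adder: sum = a XOR b XOR cin,
   cout = (a AND b) OR (cin AND (a XOR b)). */
static void full_adder(int a, int b, int cin, int *sum, int *cout) {
    int axb = a ^ b;
    *sum  = axb ^ cin;
    *cout = (a & b) | (cin & axb);
}

int main(void) {
    printf(" a b cin | sum cout\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int cin = 0; cin <= 1; cin++) {
                int sum, cout;
                full_adder(a, b, cin, &sum, &cout);
                printf(" %d %d  %d  |  %d   %d\n", a, b, cin, sum, cout);
            }
    return 0;
}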
The design concepts for combinatory digital devices/circuits include:

1. Functional decomposition: Breaking down a complex logical operation into simpler operations that can be implemented using basic logic gates.
2. Logic gate implementation: Using basic logic gates such as AND, OR, NOT, NAND, NOR, XOR, and XNOR to implement the desired logical operation.
3. Truth table analysis: Using truth tables to analyze the functionality of the circuit and ensure that it produces the correct output for all possible input combinations.
4. Minimization techniques: Applying techniques such as Boolean algebra, Karnaugh maps, and truth table reduction to minimize the number of logic gates required to implement the circuit.
5. Circuit optimization: Optimizing the circuit design to reduce the number of logic gates, improve performance, and minimize power consumption.

6. Fault tolerance: Implementing fault-tolerant design techniques such as redundancy, error detection and correction, and fail-safe design to ensure that the circuit continues to function correctly even in the presence of faults or errors.
7. Testing and verification: Testing and verifying the circuit using various methods such as simulation, formal verification, and hardware emulation to ensure that it meets the desired specifications and functions correctly.

These design concepts are essential for creating efficient and reliable combinatory digital devices/circuits that can perform complex logical operations with high accuracy and speed.
Q10: Explain sequential digital devices/circuits and their design concepts:
Answers

These design concepts are used in a wide range of applications, from simple digital circuits to complex computer systems. Understanding sequential digital devices/circuits and their design concepts is essential for designing and building modern digital systems.
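The defining property of a sequential circuit is that its output depends on stored state that is updated on clock edges, not only on the current inputs. As an illustrative sketch (the 4-bit width and the counter structure are assumptions chosen for the example), the loop below models a clocked 4-bit counter: on each simulated clock edge the combinational next-state value is latched into the state register, just as a bank of D flip-flops would do.

#include <stdio.h>

int main(void) {
    unsigned state = 0;                 /* state held by four D flip-flops       */

    for (int edge = 1; edge <= 20; edge++) {
        /* Combinational next-state logic: increment modulo 16.                  */
        unsigned next = (state + 1) & 0xF;

        /* Clock edge: the flip-flops latch the next state.                      */
        state = next;

        printf("after edge %2d: count = %2u\n", edge, state);
    }
    return 0;
}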
