
A three-state (tri-state) bus buffer is an integrated circuit that connects multiple data sources to a single bus. Each buffer's output can be a logical high, a logical low, or a high-impedance state; high impedance effectively disconnects the buffer from the bus and allows another buffer to drive it.

Construct a single-line common bus system using tri-state buffers.
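A single bus line shared by several sources can be sketched in code. This is a minimal simulation, not from the notes: each tri-state buffer either drives its data onto the line (enable = 1) or presents high impedance, written here as 'Z'. The function names and the 'Z' convention are illustrative assumptions.

```python
# Sketch (not from the notes): a single-line common bus driven by
# tri-state buffers. A buffer passes its input to the bus only when
# its enable line is 1; otherwise it is high impedance ('Z').

def tristate(data, enable):
    """Model one tri-state buffer: output data if enabled, else 'Z'."""
    return data if enable else 'Z'

def common_bus(inputs, enables):
    """Resolve the bus value from all buffer outputs.

    Exactly one enable should be active. If none is, the bus floats
    ('Z'); if more than one is, that is bus contention.
    """
    driven = [tristate(d, e) for d, e in zip(inputs, enables)]
    drivers = [v for v in driven if v != 'Z']
    if not drivers:
        return 'Z'            # no buffer enabled: bus floats
    if len(drivers) > 1:
        raise ValueError("bus contention: more than one buffer enabled")
    return drivers[0]

# Four registers share one bus line; select register 2 to drive it.
bus = common_bus([0, 1, 1, 0], [0, 0, 1, 0])
```

Only one enable line may be active at a time; the select logic (e.g. a decoder on the enable lines) guarantees this in a real bus system.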

What are guard bits?


The guard bit is the first of the two bits past bit 0 of the mantissa that will be cut off. The round bit is the second bit after bit 0 of the mantissa. In the example here, the guard bit is 1 and the round bit is 0, since no other bit is present.
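The truncation can be sketched concretely. This is an assumed example, not from the notes: a 6-bit mantissa truncated to 4 bits, with the bit pattern chosen so that the guard bit comes out 1 and the round bit 0, matching the case described above.

```python
# Sketch (assumed example, not from the notes): truncating a binary
# mantissa and reading off the guard and round bits.

mantissa = "110110"          # bits to the right of the binary point
keep = 4                     # number of bits retained after truncation

retained = mantissa[:keep]   # the bits that survive: "1101"
guard = mantissa[keep]       # first bit cut off -> the guard bit
# second bit cut off -> the round bit (0 if no more bits are present)
round_bit = mantissa[keep + 1] if len(mantissa) > keep + 1 else '0'
```

With guard = 1 and round = 0, the rounding logic would typically round to nearest, possibly consulting a sticky bit in a full implementation.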

Stack-based CPU Organization


Computers that use stack-based CPU organization are built around a data structure called a stack. The stack is a list of data words accessed in Last In, First Out (LIFO) order, the most common access method in such CPUs. A register called the stack pointer (SP) stores the address of the topmost element of the stack. In this organization, ALU operations are performed on stack data: both operands are always taken from the stack, and after the operation the result is placed back on the stack.
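The behaviour above can be sketched as a toy machine. This is a minimal sketch, not from the notes; the class name, operation mnemonics, and the use of a Python list for the stack are illustrative assumptions.

```python
# Sketch (not from the notes): a stack-organized CPU performs ALU
# operations on the top two stack elements; SP tracks the top of stack.

class StackCPU:
    def __init__(self):
        self.stack = []

    @property
    def sp(self):
        """Stack pointer: index of the topmost element (-1 when empty)."""
        return len(self.stack) - 1

    def push(self, value):
        self.stack.append(value)

    def pop(self):
        return self.stack.pop()

    def alu(self, op):
        # Both operands come from the stack; the result goes back on it.
        b, a = self.pop(), self.pop()
        ops = {"ADD": a + b, "SUB": a - b, "MUL": a * b}
        self.push(ops[op])

# Evaluate 3 + 4 with zero-address (stack) instructions.
cpu = StackCPU()
cpu.push(3)
cpu.push(4)
cpu.alu("ADD")
```

Because both operands are implicit (always the stack top), ALU instructions in such a machine need no address fields, which is why stack machines are called zero-address organizations.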

Difference between Direct-mapping, Associative Mapping & Set-Associative Mapping:

Number of comparisons:
 Direct mapping: needs only one comparison, because a fixed formula gives the effective cache address.
 Associative mapping: needs comparison with all tag bits, i.e. the cache control logic must examine every block's tag for a match at the same time in order to determine whether a block is in the cache or not.
 Set-associative mapping: needs a number of comparisons equal to the number of blocks per set, since a set can contain more than one block.

Division of the main memory address:
 Direct mapping: 3 fields: TAG, BLOCK & WORD. BLOCK & WORD together make the index. The least significant WORD bits identify a unique word within a block of main memory, the BLOCK bits specify one of the cache blocks, and the TAG bits are the most significant bits.
 Associative mapping: 2 fields: TAG & WORD.
 Set-associative mapping: 3 fields: TAG, SET & WORD.

Block placement:
 Direct mapping: there is exactly one possible location in the cache for each block from main memory, given by the fixed formula.
 Associative mapping: a main memory block can be mapped to any cache block.
 Set-associative mapping: a main memory block maps to one particular set and can occupy any block within that set.

Effect on cache hit ratio:
 Direct mapping: if the processor needs to access the same memory location from two different main memory pages frequently, the cache hit ratio decreases.
 Associative mapping: such access patterns have no effect on the cache hit ratio.
 Set-associative mapping: the effect of frequently accessing two different pages of main memory on the hit ratio is reduced.

Search time:
 Direct mapping: search time is less, because there is only one possible cache location for each block from main memory.
 Associative mapping: search time is more, as the cache control logic examines every block's tag for a match.
 Set-associative mapping: search time increases with the number of blocks per set.

Index:
 Direct mapping: the index is given by the number of blocks in the cache.
 Associative mapping: the index is zero.
 Set-associative mapping: the index is given by the number of sets in the cache.

Tag bits:
 Direct mapping: has the least number of tag bits.
 Associative mapping: has the greatest number of tag bits.
 Set-associative mapping: has more tag bits than direct mapping and fewer than associative mapping.

Advantages:
 Direct mapping: simplest type of mapping; fast, as only tag-field matching is required while searching for a word; comparatively less expensive than associative mapping.
 Associative mapping: fast; easy to implement.
 Set-associative mapping: gives better performance than the direct and associative mapping techniques.

Disadvantages:
 Direct mapping: gives low performance because of repeated replacement of the data-tag value.
 Associative mapping: expensive, because it needs to store the address (tag) along with the data.
 Set-associative mapping: most expensive, as cost increases with set size.
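The address-field splits described above can be computed directly. This is a minimal Python sketch with illustrative assumed parameters (16-bit addresses, 16 words per block, 128 cache blocks, 4-way sets); none of these values come from the notes.

```python
# Sketch (assumed parameters, not from the notes): splitting a main
# memory address into its fields for each mapping scheme.

ADDR_BITS  = 16
WORD_BITS  = 4        # 16 words per block
BLOCK_BITS = 7        # log2(128 cache blocks): direct-mapped index
SET_BITS   = 5        # log2(32 sets) for a 4-way set-associative cache

def direct_fields(addr):
    """TAG | BLOCK | WORD."""
    word  = addr & (2**WORD_BITS - 1)
    block = (addr >> WORD_BITS) & (2**BLOCK_BITS - 1)
    tag   = addr >> (WORD_BITS + BLOCK_BITS)
    return tag, block, word

def associative_fields(addr):
    """TAG | WORD: the block may go anywhere, so no index field."""
    word = addr & (2**WORD_BITS - 1)
    tag  = addr >> WORD_BITS
    return tag, word

def set_associative_fields(addr):
    """TAG | SET | WORD."""
    word = addr & (2**WORD_BITS - 1)
    s    = (addr >> WORD_BITS) & (2**SET_BITS - 1)
    tag  = addr >> (WORD_BITS + SET_BITS)
    return tag, s, word
```

Note how the tag shrinks as the index grows: associative mapping has the widest tag (no index bits), direct mapping the narrowest, and set-associative sits in between, exactly as the comparison above states.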

Write Through:
In write-through, data is updated in the cache and in main memory simultaneously. This process is simpler and more reliable. It is used when there are no frequent writes to the cache (the number of write operations is low).
It helps in data recovery (in case of a power outage or system failure). A data write experiences latency (delay), since we have to write to two locations (both memory and cache). It solves the inconsistency problem, but it undermines the advantage of having a cache for write operations (the whole point of using a cache is to avoid repeated accesses to main memory).
Write Back:
The data is updated only in the cache and written to memory at a later time. Data is updated in memory only when the cache line is about to be replaced (cache line replacement is done using policies such as Belady's optimal algorithm, Least Recently Used (LRU), FIFO, or LIFO, depending on the application).
Write back is also known as write deferred.
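The two policies can be contrasted in a few lines. This is a minimal sketch, not from the notes: plain dictionaries stand in for the cache and main memory, and a set of dirty addresses models the dirty bit.

```python
# Sketch (not from the notes): write-through vs write-back, with
# dictionaries standing in for the cache and main memory.

def write_through(cache, memory, addr, value):
    # Update cache and main memory together: always consistent, but
    # every write pays the main-memory latency.
    cache[addr] = value
    memory[addr] = value

def write_back(cache, memory, dirty, addr, value):
    # Update only the cache and mark the line dirty; main memory is
    # updated later, on eviction.
    cache[addr] = value
    dirty.add(addr)

def evict(cache, memory, dirty, addr):
    # On cache line replacement, flush a dirty line back to memory.
    if addr in dirty:
        memory[addr] = cache[addr]
        dirty.discard(addr)
    del cache[addr]

cache, memory, dirty = {}, {0x10: 0}, set()
write_back(cache, memory, dirty, 0x10, 42)   # memory still holds the old 0
evict(cache, memory, dirty, 0x10)            # now memory holds 42
```

The window between `write_back` and `evict`, where cache and memory disagree, is exactly the inconsistency that write-through avoids at the cost of extra memory traffic.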
Difference between Programmed and Interrupt-Initiated I/O: (VVIMP)

Programmed I/O:
 Data transfer is initiated by means of instructions stored in the computer program; whenever there is a request for an I/O transfer, these instructions are executed.
 The CPU stays in a loop to check whether the device is ready for transfer and has to continuously monitor the peripheral device.
 This leads to wastage of CPU cycles, as the CPU remains needlessly busy, so the efficiency of the system is reduced.
 The CPU cannot do any other work until the transfer is complete, since it has to stay in the loop to continuously monitor the peripheral device.
 Its module is treated as a slow module.
 It is quite easy to program and understand.
 The performance of the system is severely degraded.

Interrupt-Initiated I/O:
 The I/O transfer is initiated by an interrupt command issued to the CPU.
 There is no need for the CPU to stay in a loop, as the interrupt command interrupts the CPU when the device is ready for data transfer.
 CPU cycles are not wasted, as the CPU continues with other work during this time; hence this method is more efficient.
 The CPU can do other work until it is interrupted by the command indicating the readiness of the device for data transfer.
 Its module is faster than the programmed I/O module.
 It can be tricky and complicated to understand if one uses a low-level language.
 The performance of the system is enhanced to some extent.
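The contrast can be sketched with a toy device model. This is a minimal sketch, not from the notes: the `Device` class, its `ready()` status flag, and the callback standing in for an interrupt service routine are all illustrative assumptions.

```python
# Sketch (not from the notes): programmed I/O busy-waits on a status
# flag; interrupt-initiated I/O lets a handler run when the device is
# ready, while the CPU does other work.

class Device:
    def __init__(self, ready_after):
        self._countdown = ready_after   # status checks until ready
        self.data = 0x5A

    def ready(self):
        if self._countdown > 0:
            self._countdown -= 1
            return False
        return True

def programmed_io(device):
    """CPU loops on the status flag, wasting cycles until ready."""
    polls = 0
    while not device.ready():          # busy-wait: wasted CPU cycles
        polls += 1
    return device.data, polls

def interrupt_io(device, handler):
    """CPU continues other work; the device-ready interrupt invokes
    the handler (standing in for an interrupt service routine)."""
    # ... CPU executes unrelated instructions here ...
    handler(device.data)

received = []
data, wasted = programmed_io(Device(ready_after=3))
interrupt_io(Device(ready_after=3), received.append)
```

The `polls` counter makes the cost of programmed I/O visible: every failed status check is a CPU cycle the interrupt-driven version spends on useful work instead.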

Concept of Associative Memory Unit:-


Associative memory is also known as content-addressable memory (CAM), associative storage, or an associative array. It is a special type of memory optimized for performing searches through data, as opposed to providing simple direct access to the data based on an address.
It can store a set of patterns as memories; when the associative memory is presented with a key pattern, it responds by producing whichever stored pattern most closely resembles or relates to the key pattern.
This can be viewed as data correlation: the input data is correlated with the data stored in the CAM.
It comes in two types:
1. Auto-associative memory network: An auto-associative memory network, also known as a recurrent neural network, is a type of associative memory used to recall a pattern from partial or degraded inputs. In an auto-associative network, the output of the network is fed back into the input, allowing the network to learn and remember the patterns it has been trained on. This type of memory network is commonly used in applications such as speech and image recognition, where the input data may be incomplete or noisy.
2. Hetero-associative memory network: A hetero-associative memory network is a type of associative memory used to associate one set of patterns with another. In a hetero-associative network, the input pattern is associated with a different output pattern, allowing the network to learn and remember the associations between the two sets of patterns. This type of memory network is commonly used in applications such as data compression and data retrieval.
In hardware terms, associative memory consists of conventional semiconductor memory (usually RAM) with added comparison circuitry that enables a search operation to complete in a single clock cycle. It is a hardware search engine: a special type of computer memory used in certain very-high-speed searching applications.
How Does Associative Memory Work?
In conventional memory, data is stored in specific locations, called addresses, and retrieved
by referencing those addresses. In associative memory, data is stored together with
additional tags or metadata that describe its content. When a search is performed, the
associative memory compares the search query with the tags of all stored data, and retrieves
the data that matches the query.
Advantages of Associative memory :-
1. It is used where search time needs to be very short.
2. It is suitable for parallel searches.
3. It is often used to speed up databases.
4. It is used in page tables used by the virtual memory and used in neural networks.
Disadvantages of Associative memory :-
1. It is more expensive than RAM.
2. Each cell must have storage capability and logic circuits for matching its content with an external argument.
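The search-by-content idea can be sketched in software. This is a minimal sketch, not from the notes: real CAM hardware compares every word against the key simultaneously, while this version does one pass; the optional mask models the masked (ternary) search that CAMs support. Word widths and values are illustrative.

```python
# Sketch (not from the notes): a content-addressable search. Every
# stored word is compared against the key, optionally under a mask
# that selects which bits participate in the match.

def cam_search(words, key, mask=0xFFFF):
    """Return indices of all stored words whose masked bits match the key."""
    return [i for i, w in enumerate(words) if (w & mask) == (key & mask)]

words   = [0x1A2B, 0x3C4D, 0x1A5E, 0x3C4D]
matches = cam_search(words, 0x3C4D)               # full-word match
prefix  = cam_search(words, 0x1A00, mask=0xFF00)  # match on high byte only
```

Note that the result is a set of locations found by content, not a single value found by address; that inversion is the essence of associative memory.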

*****Division Using Restoring Method*****
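The restoring method can be sketched as follows. This is a minimal sketch, not from the notes, for unsigned n-bit operands: A is the partial remainder, Q the quotient being built; each step shifts (A, Q) left, trial-subtracts the divisor M, and restores A if the result went negative. The register names and the 8-bit default width are conventional assumptions.

```python
# Sketch (not from the notes): restoring division on unsigned n-bit
# operands. Returns (quotient, remainder).

def restoring_divide(dividend, divisor, n=8):
    assert divisor != 0
    A, Q, M = 0, dividend, divisor
    for _ in range(n):
        # Shift A,Q left as one combined register: A takes Q's MSB.
        A = (A << 1) | ((Q >> (n - 1)) & 1)
        Q = (Q << 1) & (2**n - 1)
        A -= M                    # trial subtraction
        if A < 0:
            A += M                # restore: subtraction failed, Q0 stays 0
        else:
            Q |= 1                # subtraction succeeded: set Q0
    return Q, A

q, r = restoring_divide(29, 5)    # 29 = 5*5 + 4
```

The name comes from the restore step: whenever the trial subtraction drives A negative, the divisor is added back before the next shift.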


*****Booth’s Algorithm and its Application*****
Best Case and Worst Case Occurrence (VVIMP)
The best case of Booth’s algorithm occurs when the inspected bit pair (Q0, Q−1) is 00 or 11 in every iteration, i.e. the multiplier in register “Q” consists of long runs of identical bits. The algorithm can then skip the addition or subtraction step and proceed directly to the shifting step, which reduces the overall number of steps required to complete the algorithm.

The worst case occurs when the bit pair alternates between 01 and 10 in every iteration, i.e. the multiplier is an alternating pattern such as 0101…01. The algorithm must then perform an addition or subtraction in every iteration, which increases the overall number of steps required to complete the algorithm.
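The behaviour described above can be sketched as code. This is a minimal sketch, not from the notes, for n-bit two's-complement operands: each step inspects the pair (Q0, Q−1), adds or subtracts M accordingly, then performs an arithmetic right shift of the combined register, which is why runs of equal multiplier bits give the best case. The 8-bit default width is an illustrative assumption.

```python
# Sketch (not from the notes): Booth's multiplication of two signed
# n-bit numbers. Pair 10 -> A = A - M; pair 01 -> A = A + M; pairs
# 00/11 skip straight to the arithmetic right shift.

def booth_multiply(m, q, n=8):
    mask = (1 << n) - 1
    A, Q, Q_1, M = 0, q & mask, 0, m & mask
    for _ in range(n):
        pair = (Q & 1, Q_1)
        if pair == (1, 0):
            A = (A - M) & mask
        elif pair == (0, 1):
            A = (A + M) & mask
        # Arithmetic right shift of the combined A,Q,Q-1 register.
        Q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        A = ((A >> 1) | (A & (1 << (n - 1)))) & mask  # sign-extend A
    product = (A << n) | Q
    if product & (1 << (2 * n - 1)):      # interpret as signed 2n-bit
        product -= 1 << (2 * n)
    return product
```

A multiplier like 11110000 costs only two add/subtract operations (one per run boundary), while 01010101 forces one on every iteration, matching the best and worst cases above.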
What is the von Neumann bottleneck?

The von Neumann bottleneck is a limitation on throughput caused by the standard personal computer architecture. The term is named for John von Neumann, who developed the theory behind the architecture of modern computers. Earlier computers were fed programs and data for processing while they were running; in von Neumann's stored-program design, instructions and data share a single memory and travel to the CPU over the same pathway, so the processor is ultimately limited by how fast that shared channel can deliver them.

Overcoming the von Neumann bottleneck (any 5 point)

The von Neumann bottleneck has often been considered a problem that can be overcome only
through significant changes to computer or processor architectures. Even so, there have been
numerous attempts to address the limitations of the existing structure:

 Caching. A common method for addressing the bottleneck has been to add caches to the
CPU. In a typical cache configuration, the L1, L2 and L3 cache levels sit between the
processor core and the main memory to help speed up operations. The L1 cache is the
smallest, fastest and most expensive. The L3 cache, which is shared among multiple
processor cores, is the largest, slowest and least expensive. The L2 cache falls somewhere
between the two.
[Diagram: cache memory is a memory block separate from main memory and is accessed before main memory. Caching is a common method used to overcome the von Neumann bottleneck.]

 Prefetching. Instructions and data that are expected to be used first are fetched into the
cache in advance so they're immediately available when needed.

 Speculative execution. The processor performs specific tasks before it is prompted to perform them so the information is ready when needed. Speculative execution uses branch prediction to estimate which instructions will likely be needed first.

 Multithreading. The processor manages multiple requests simultaneously, switching execution between threads. The multithreading process usually happens so quickly that the threads appear to be running simultaneously.

 New types of RAM. Current developments in RAM technologies promise to help address at
least part of the bottleneck issues by getting the data into the bus more quickly. Emerging
areas of development include resistive RAM, magnetic RAM, ferroelectric RAM and spin-
transfer torque RAM.

 Near-data processing. With NDP, memory and storage are augmented with processing
capabilities that help improve performance, while reducing dependency on the system bus.
One type of NDP is processing in memory, which integrates a processor and memory in a
single microchip.
 Hardware acceleration. Processing is shifted to other hardware devices to reduce the load on
the CPU and dependency on the system bus. Common types of hardware acceleration
include GPUs, application-specific integrated circuits and field-programmable gate arrays.

 System-on-a-chip. A single chip contains processing, memory and other system resources,
eliminating much of the data transfer on the system bus. Mobile devices and embedded
systems use SoC technology extensively. However, the technology is now making its way into
the computer industry, with Apple silicon leading the way.

Hardwired Control Unit:
 Generates the control signals needed for the processor using logic circuits.
 Faster than a microprogrammed control unit, as the required control signals are generated directly by hardware.
 Difficult to modify, as the control signals to be generated are hardwired.
 More costly, as everything has to be realized in terms of logic gates.
 Cannot handle complex instructions, as the circuit design becomes too complex.
 Only a limited number of instructions can be used, due to the hardware implementation.
 Used in computers based on Reduced Instruction Set Computer (RISC) architecture.

Microprogrammed Control Unit:
 Generates the control signals with the help of micro-instructions stored in control memory.
 Slower, as micro-instructions must be fetched to generate the signals.
 Easy to modify, as modifications need to be made only at the micro-instruction level.
 Less costly, as only micro-instructions are used for generating control signals.
 Can handle complex instructions.
 Control signals for many instructions can be generated.
 Used in computers based on Complex Instruction Set Computer (CISC) architecture.
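The difference can be sketched as follows. This is a minimal sketch, not from the notes: the opcodes, signal names, and table contents are illustrative assumptions. Hardwired decode is fixed logic, while the microprogrammed version simply looks signals up in a control memory, which is why it is easier to modify.

```python
# Sketch (not from the notes): two ways of producing control signals.

def hardwired_decode(opcode):
    # Fixed combinational logic: changing behaviour means redesigning
    # the circuit (here, rewriting the function).
    if opcode == 0b00:
        return ("MemRead", "IRLoad")
    if opcode == 0b01:
        return ("ALUAdd", "RegWrite")
    raise ValueError("unsupported opcode")

CONTROL_MEMORY = {
    # Micro-instructions: one entry of control signals per opcode.
    0b00: ("MemRead", "IRLoad"),
    0b01: ("ALUAdd", "RegWrite"),
    0b10: ("ALUSub", "RegWrite"),   # added by editing the table only
}

def microprogrammed_decode(opcode):
    # The control unit just reads the signals from control memory.
    return CONTROL_MEMORY[opcode]
```

Opcode 0b10 exists only in the control memory: supporting it in the hardwired version would require changing the decode logic itself, mirroring the modifiability trade-off in the comparison above.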
