

Module: 2
PARALLEL PRIORITY INTERRUPTS
A parallel priority interrupt is a mechanism used in computer systems to manage multiple interrupt requests
from various devices. Here's how it works:

1. Register for Interrupt Signals: The system uses a register where each bit corresponds to a specific device's
interrupt signal. When a device needs to interrupt the CPU, it sets its corresponding bit in this register.
2. Priority Determination: The priority of the interrupt is established based on the position of the bits in the
register. Typically, the higher the position of the bit (e.g., MSB), the higher the priority of the interrupt. This
allows for prioritizing certain devices over others.
3. Mask Register: Alongside the interrupt register, there's a mask register. The purpose of the mask register is
to control the status of each interrupt request. It can be programmed to enable or disable interrupts from
specific devices.
4. Priority Handling: The mask register can be programmed to disable lower-priority interrupts while a
higher-priority device is being serviced. This ensures that critical tasks are handled without interruption
from lower-priority devices.
5. Interrupt Nesting: Additionally, the mask register can provide a facility for interrupt nesting. This means
that a high-priority device can interrupt the CPU even if a lower-priority device is currently being serviced.
This ensures that urgent tasks can preempt lower-priority ones when necessary.

Overall, a parallel priority interrupt mechanism provides a structured way to handle interrupt requests from
multiple devices, ensuring that higher-priority tasks are addressed promptly while still allowing for the
handling of lower-priority tasks in a controlled manner.
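
The mask-register and priority behaviour described above can be modelled in a few lines of C. The sketch below is illustrative only (the register width and function names are invented for the example): each device owns one bit of an 8-bit interrupt register, the mask register enables or disables that bit, and priority is resolved by scanning from the most significant bit downward.

#include <stdint.h>
#include <stdio.h>

/* Illustrative software model of a parallel priority interrupt unit.
 * Each bit of intr_reg is one device's request line; mask_reg
 * enables (1) or disables (0) the corresponding request.
 * Higher bit positions are treated as higher priority. */
static uint8_t intr_reg = 0;   /* interrupt request register */
static uint8_t mask_reg = 0;   /* mask register              */

void raise_request(int line) { intr_reg |= (uint8_t)(1u << line); }  /* device raises its bit   */
void enable_line(int line)   { mask_reg |= (uint8_t)(1u << line); }  /* program enables a line  */
void disable_line(int line)  { mask_reg &= (uint8_t)~(1u << line); } /* program disables a line */

/* Return the highest-priority unmasked pending request, or -1 if none. */
int highest_pending(void)
{
    uint8_t pending = intr_reg & mask_reg;   /* masked requests are ignored */
    for (int line = 7; line >= 0; line--)    /* MSB has highest priority    */
        if (pending & (1u << line))
            return line;
    return -1;
}

int main(void)
{
    enable_line(3);
    enable_line(6);
    raise_request(3);
    raise_request(6);
    printf("service line %d first\n", highest_pending());  /* line 6 wins */
    return 0;
}

A masked request is not lost: it stays pending in intr_reg and is seen again as soon as its mask bit is set, which is the basis of the priority handling and nesting described in points 4 and 5.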

Mask Register: This register is used to control which interrupts are enabled (bit set to 1) or disabled (bit set
to 0). Program instructions can manipulate this register to set or reset individual bits, thereby enabling or
disabling specific interrupts.
Interrupt Handling Logic: Each interrupt has a corresponding bit in the mask register. When an interrupt
occurs, the hardware checks whether the corresponding mask bit is set (equals 1). If the mask bit is set, the
interrupt is recognized and processed; otherwise, it is ignored.
Priority Encoding: The interrupt handling system employs a priority encoder that takes inputs from the
enabled interrupts and generates a priority-encoded vector address. This address determines which
interrupt is serviced first when multiple interrupts occur simultaneously.
Interrupt Status Flip-Flop (IST): When an unmasked interrupt occurs, the IST flip-flop is set, indicating
that an interrupt request is pending.
Interrupt Enable Flip-Flop (IEN): This flip-flop is controlled by the program and determines whether the
entire interrupt system is enabled or disabled. When enabled, interrupts can trigger the CPU to respond.
Common Interrupt Signal: The output of the IST is ANDed with the IEN to produce a common interrupt
signal. This signal is sent to the CPU to indicate that an interrupt has occurred and needs to be serviced.
Interrupt Acknowledge (INTACK) Signal: When the CPU acknowledges an interrupt, it enables the bus
buffers, allowing the vector address (VAD) of the interrupting device to be placed onto the bus. This vector
address is used by the CPU to determine the location of the interrupt service routine (ISR) in memory.
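
Putting these pieces together, the sketch below models the datapath just described: unmasked requests set IST, IST ANDed with IEN drives the common interrupt line, and a software stand-in for the priority encoder returns the vector address on INTACK. The vector addresses in vad_table are made-up values for illustration, not those of any real machine.

#include <stdint.h>
#include <stdio.h>

/* Illustrative vector addresses, one per interrupt line (made up). */
static const uint16_t vad_table[8] = {
    0x0040, 0x0048, 0x0050, 0x0058, 0x0060, 0x0068, 0x0070, 0x0078
};

typedef struct {
    uint8_t intr_reg;   /* one request bit per device          */
    uint8_t mask_reg;   /* 1 = interrupt enabled for that line */
    int     ist;        /* interrupt status flip-flop          */
    int     ien;        /* interrupt enable flip-flop          */
} intr_unit;

/* Combinational part: recompute IST and the common interrupt signal. */
int common_interrupt(intr_unit *u)
{
    uint8_t unmasked = u->intr_reg & u->mask_reg;
    u->ist = (unmasked != 0);          /* any unmasked request sets IST */
    return u->ist && u->ien;           /* IST AND IEN -> signal to CPU  */
}

/* On INTACK, the priority encoder's output selects the VAD for the bus. */
uint16_t intack(const intr_unit *u)
{
    uint8_t unmasked = u->intr_reg & u->mask_reg;
    for (int line = 7; line >= 0; line--)       /* highest line wins */
        if (unmasked & (1u << line))
            return vad_table[line];
    return 0;                                   /* no pending request */
}

int main(void)
{
    intr_unit u = { .intr_reg = 0x12, .mask_reg = 0x10, .ien = 1 };
    if (common_interrupt(&u))
        printf("CPU reads VAD 0x%04X\n", intack(&u));  /* line 4 -> 0x0060 */
    return 0;
}

When the CPU asserts INTACK, intack() plays the role of the bus buffers placing the VAD on the data bus so the CPU can jump to the corresponding ISR.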
DIRECT MEMORY ACCESS (DMA)
Definition: A direct memory access (DMA) is an operation in which data is copied (transported) from one
resource to another resource in a computer system without the involvement of the CPU.
Direct Memory Access is a capability provided by some computer bus architectures that allows data to be
sent directly from an attached device (such as a disk drive) to the memory on the computer's
motherboard. The microprocessor is freed from involvement with the data transfer, thus speeding up
overall computer operation.
An alternative to DMA is the Programmed Input/output (PIO) interface in which all data transmitted
between devices goes through the processor.

Fig. 1: Computer system with DMA

Note: BG = bus grant signal, BR = bus request signal

Advantages of DMA
-Fast memory transfer of data
-The CPU and the DMA controller can run concurrently; the CPU can keep executing from its cache while the DMA controller uses the memory bus
-DMA can trigger an interrupt, which frees the CPU from polling the channel
Use of this mechanism can greatly increase throughput to and from a device.
Efficiency: DMA allows peripherals (such as disk drives, network interfaces, and graphics cards) to access
the system memory directly without involving the CPU for every data transfer. This reduces the burden on
the CPU and improves overall system performance by allowing it to focus on executing other tasks.
Data Transfer Speed: DMA can transfer data at much higher speeds compared to CPU-managed transfers.
Since DMA operates independently of the CPU, it can move data between peripherals and memory at
speeds limited only by the capabilities of the peripherals and the memory bus.
Multitasking: DMA enables multitasking by allowing multiple devices to access memory simultaneously.
This is crucial for modern operating systems that need to handle multiple processes and tasks
concurrently.
Real-time Processing: In systems that require real-time processing, such as audio and video streaming or
data acquisition, DMA ensures timely data transfers without interruptions from CPU-related tasks.
Reduced Power Consumption: By offloading data transfer tasks from the CPU, DMA can help conserve
power since the CPU can enter lower power states when it's not actively involved in managing data
transfers.
DMA CONTROLLER
A Direct Memory Access (DMA) controller is a specialized hardware component commonly found in computer
systems. Here's a breakdown of its operation and key components:

Function of DMA Controller: The DMA controller is responsible for executing data transfer operations
between various resources in the system, such as between I/O devices and memory. It operates
independently of the CPU and is dedicated to handling data transfer tasks.

Transfer Modes: The DMA controller supports different transfer modes, including:
I/O-device to memory
Memory to I/O-device
Memory to memory
I/O-device to I/O-device (less common)

Replacement of CPU for Data Transfer: The DMA controller replaces the CPU for data transfer tasks that
would otherwise be executed using programmed input/output (PIO) mode. PIO mode involves the CPU
executing a sequence of instructions to copy data, whereas the DMA controller performs these transfers
more efficiently.

Master/Slave Resource: The DMA controller acts as a master/slave resource on the system bus. It requests
access to the bus when data is available for transport, typically signaled by a REQ signal from the device
initiating the transfer.

Integration with Other Units: The DMA controller can be integrated into various functional units within a
computer system, such as the memory controller, south bridge, or directly into an I/O device.

Registers in DMA Controller:


DMA Address Register: Contains the memory address for data transfer.
DMA Count Register (Word Count Register): Holds the number of bytes of data to be transferred.
DMA Control Register: Accepts commands from the CPU to control the operation of the DMA
controller.
(Optional) Status Register: Provides information about the status of ongoing DMA operations,
accessible by the CPU as an input port.

Overall, the DMA controller enhances system performance by offloading data transfer tasks from the CPU,
allowing for faster and more efficient data movement within the computer system.
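As a concrete (and purely illustrative) picture of these registers, the C sketch below programs a hypothetical memory-mapped DMA controller for one device-to-memory transfer. The register addresses and control-bit layout are invented for the example; a real controller's data sheet defines the actual map.

#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers (addresses and
 * bit positions are invented for illustration). */
#define DMA_ADDR_REG   (*(volatile uint32_t *)0x40001000u)  /* memory address     */
#define DMA_COUNT_REG  (*(volatile uint32_t *)0x40001004u)  /* bytes to transfer  */
#define DMA_CTRL_REG   (*(volatile uint32_t *)0x40001008u)  /* command/control    */
#define DMA_STAT_REG   (*(volatile uint32_t *)0x4000100Cu)  /* status (read-only) */

#define DMA_CTRL_START     (1u << 0)   /* begin the transfer   */
#define DMA_CTRL_DIR_READ  (1u << 1)   /* 1 = device-to-memory */
#define DMA_STAT_DONE      (1u << 0)   /* transfer complete    */

/* Program the controller for one device-to-memory transfer, as the CPU
 * would do before handing the work off. */
void dma_start_read(uint32_t dest_addr, uint32_t nbytes)
{
    DMA_ADDR_REG  = dest_addr;                           /* where the data goes  */
    DMA_COUNT_REG = nbytes;                              /* how much to move     */
    DMA_CTRL_REG  = DMA_CTRL_DIR_READ | DMA_CTRL_START;  /* direction + start    */
}

/* The CPU can poll the optional status register to see when the
 * controller has finished (or it can wait for an interrupt instead). */
int dma_done(void)
{
    return (DMA_STAT_REG & DMA_STAT_DONE) != 0;
}

A real driver would also handle alignment, cache maintenance, and error reporting, which are omitted here to keep the register roles visible.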
DMA CONTROLLER WORKING
1. DMA Controller Initialization:
The CPU initializes the DMA controller by configuring its registers, including the DMA Address Register,
DMA Count Register, and DMA Control Register.
Initially, the DMA controller is idle, waiting for a request to perform a data transfer.
2. Data Transfer Request:
An I/O device or another peripheral initiates a data transfer request, indicating the need for data to be
moved between itself and memory.
Upon receiving the request, the DMA controller asserts the BR (Bus Request) signal to request control
of the system bus for the data transfer operation.
3. Bus Arbitration:
The DMA controller's BR signal competes with other devices' requests for control of the system bus. An
arbiter or bus controller decides which device gets access to the bus.
If the DMA controller wins the bus arbitration, it receives control of the bus, indicated by the assertion
of the BG (Bus Grant) signal.
4. Initiating Data Transfer:
With control of the bus granted, the DMA controller asserts the RS (Request Strobe) signal to indicate
that it's ready to initiate the data transfer operation.
Depending on the direction of data transfer (read or write), the DMA controller asserts either the RD
(Read) or WR (Write) signal, along with the DS (Data Strobe) signal to signify valid data on the bus.
5. Memory Access:
If it's a read operation, the DMA controller reads data from memory by asserting the RD signal.
Conversely, for a write operation, it writes data to memory by asserting the WR signal.
The DMA controller uses the address stored in its DMA Address Register to specify the memory
location involved in the data transfer.
6. Data Transfer Completion:
Once the data transfer is complete, the DMA controller updates its status and optionally notifies the
CPU of the completion.
It may deassert the RS signal to indicate the end of the transfer and release control of the system bus
by deasserting the BR signal.
7. CPU Interaction:
Throughout the data transfer process, the CPU may monitor the DMA controller's status by accessing
its Status Register, which provides information about ongoing DMA operations.
8. Interrupt Handling:
Upon completion of the data transfer or in the event of an error, the DMA controller may generate an
interrupt signal to notify the CPU, allowing it to handle the completion or error condition appropriately.
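
The sequence above can be summarised as a small simulation. The C sketch below is a software stand-in for the hardware handshake, not a real driver: it asserts BR, assumes the arbiter grants the bus, moves words while the count register runs down to zero, then releases the bus and raises a completion interrupt.

#include <stdint.h>
#include <stdio.h>

/* Simplified simulation of the transfer sequence described above. */
typedef struct {
    uint32_t addr;     /* DMA address register            */
    uint32_t count;    /* DMA word-count register         */
    int      br;       /* bus request asserted            */
    int      bg;       /* bus grant received from arbiter */
    int      irq;      /* completion interrupt to the CPU */
} dma_sim;

void dma_run(dma_sim *d, const uint32_t *device_data, uint32_t *memory)
{
    d->br = 1;                               /* step 2: assert BR                */
    d->bg = 1;                               /* step 3: arbiter grants the bus   */

    while (d->count > 0) {                   /* steps 4-5: move the data         */
        memory[d->addr] = *device_data++;    /* write one word to memory         */
        d->addr++;                           /* advance the memory address       */
        d->count--;                          /* word count runs down to zero     */
    }

    d->br = 0;                               /* step 6: release the bus          */
    d->bg = 0;
    d->irq = 1;                              /* step 8: signal completion        */
}

int main(void)
{
    uint32_t mem[8] = {0};
    uint32_t dev[4] = {10, 20, 30, 40};
    dma_sim d = { .addr = 2, .count = 4 };
    dma_run(&d, dev, mem);
    printf("mem[2..5] = %u %u %u %u, irq=%d\n",
           mem[2], mem[3], mem[4], mem[5], d.irq);
    return 0;
}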

Advantages
DMA speeds up memory operations by bypassing the involvement of the CPU.
The workload on the CPU decreases.
Each transfer requires only a few clock cycles.
Disadvantages
A cache coherence problem can arise when DMA is used for data transfer.
It increases the cost of the system.
INPUT-OUTPUT PROCESSOR WITH VARIOUS MODES OF DATA TRANSFER
An input-output (I/O) processor is a specialized component responsible for managing communication between
the CPU and external devices, such as storage devices, network interfaces, and peripherals. It offloads the
CPU from handling I/O operations directly, thereby improving system performance and efficiency. Here are
the various modes of data transfer supported by an I/O processor:
Programmed I/O (PIO):
In programmed I/O mode, the CPU directly controls the data transfer between the I/O device and memory.
The CPU issues commands to the I/O processor to initiate data transfers, waits for the operation to complete,
and then proceeds with other tasks.
This mode is simple and straightforward but can lead to CPU inefficiency as it must wait for I/O operations to
complete.
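A minimal sketch of programmed I/O is shown below, assuming a hypothetical device with a status register and a data register at invented addresses. The busy-wait loop is exactly where the CPU inefficiency mentioned above comes from: the processor spins until each byte is ready and then copies it itself.

#include <stdint.h>

/* Hypothetical device registers (addresses and READY bit are invented). */
#define DEV_STATUS  (*(volatile uint8_t *)0x40002000u)
#define DEV_DATA    (*(volatile uint8_t *)0x40002001u)
#define DEV_READY   (1u << 0)

/* Programmed I/O: the CPU polls the device and copies every byte itself. */
void pio_read(uint8_t *buf, int n)
{
    for (int i = 0; i < n; i++) {
        while ((DEV_STATUS & DEV_READY) == 0)
            ;                      /* CPU busy-waits for each byte */
        buf[i] = DEV_DATA;         /* CPU copies the byte itself   */
    }
}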
Interrupt-Driven I/O:
In interrupt-driven I/O mode, the CPU initiates an I/O operation and then continues executing other tasks.
When the I/O operation completes, the I/O processor interrupts the CPU to signal that data transfer is finished
and that the CPU can process the received data.
This mode allows the CPU to perform other tasks while waiting for I/O operations to complete, improving
overall system efficiency.
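For contrast, the sketch below shows the interrupt-driven pattern with the same kind of hypothetical device registers: the CPU starts the operation and returns to other work, and an interrupt service routine (here an ordinary C function standing in for the real ISR) collects the data when the device signals completion.

#include <stdint.h>

/* Hypothetical device registers and control bits (invented for the example). */
#define DEV_DATA      (*(volatile uint8_t *)0x40002001u)
#define DEV_CONTROL   (*(volatile uint8_t *)0x40002002u)
#define DEV_START     (1u << 0)
#define DEV_IRQ_EN    (1u << 1)

static volatile int transfer_done = 0;
static volatile uint8_t received;

/* Start the operation, enable the device interrupt, and return immediately. */
void start_read(void)
{
    DEV_CONTROL = DEV_START | DEV_IRQ_EN;
    /* the CPU goes on executing other tasks here */
}

/* Invoked (by the interrupt hardware) when the device finishes. */
void device_isr(void)
{
    received = DEV_DATA;     /* fetch the byte the device produced */
    transfer_done = 1;       /* let the main program know          */
}

The main program can test transfer_done between other tasks instead of busy-waiting, which is where the efficiency gain of this mode comes from.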
Direct Memory Access (DMA):
DMA mode enables the I/O processor to transfer data directly between an external device and memory
without CPU intervention.
The CPU sets up the DMA controller with the starting memory address, the number of bytes to transfer, and
the direction of data transfer (read or write).
Once configured, the DMA controller manages the data transfer independently, transferring data between the
device and memory while the CPU is free to perform other tasks.
DMA mode is highly efficient as it offloads data transfer tasks from the CPU, reducing CPU overhead and
improving system performance.
Burst Mode:
Burst mode is a variation of DMA mode that allows the DMA controller to transfer multiple blocks of data in
rapid succession without intervention from the CPU.
This mode optimizes data transfer by minimizing setup and teardown overhead, maximizing the utilization of
available bandwidth, and further reducing CPU involvement in I/O operations.
In summary, an input-output processor supports various modes of data transfer, including programmed I/O,
interrupt-driven I/O, DMA, and burst mode, each offering different levels of efficiency and CPU involvement in
handling I/O operations.
