
Introduction and Systems Considerations

Engr. A. N. Aniedu
Electronic and Computer Engineering
Nnamdi Azikiwe University, Awka
FACTORS TO CONSIDER WHEN CHOOSING A
PROCESSOR FOR REAL-TIME APPLICATION

• To permit the efficient functioning of a system in real time, certain
features are desirable, and sometimes necessary, in the CPU.
• The importance of these features increases with the volume and critical
nature of the real-time transactions that the processor is required to
handle.
• A system that processes mainly batch work, with only a small
proportion of jobs having real-time requirements, has less critical
requirements than a system that is completely dedicated to real-time
work.
1. Fast I/O Capabilities
• I/O activity in a real-time system tends to be higher than in
other systems and usually involves a greater number of peripheral
devices. The processor must be capable of coping with the high
throughput of data which this entails.

2. Interrupts:
• The use of interrupts (instead of software polling) for slow
peripherals helps to ensure that time constraints are met. The
main criteria by which the interrupt facilities may be assessed are:
a) The types of interrupt handled
b) The degree of automatic interrupt handling
c) The number of priorities (i.e. interrupt levels)
• The more interrupt resources available, the better.
3. The Power of the Instruction Set
• The more powerful the instruction set of a real-time processor,
the less the programming effort required to implement
the real-time system, and the smaller the proportion of main
memory required for application program storage.
4. Speed:
• The higher the maximum clock frequency a real-time processor can
cope with, the faster the program execution and the higher the
system throughput, i.e. the volume of I/O transactions per unit time.
5. Expansibility:
• If there is a significant increase in the work load of a real-time
system, the capabilities of the central processor will have to be
enhanced in one of these directions:
a) Main memory
b) CPU power
c) Peripheral handling capability
• An increase in the size of the main memory enables a greater number of
frequently used processing routines to be held in main memory; as a result,
fewer backing-store accesses are needed during operation.
• The extra demand made on a real-time system may be such that an increase in the size
of the main memory does not suffice. In this case the power of the CPU would
need to be increased by replacing the initial CPU with a more powerful one.
• Choosing the initial CPU from an upward-compatible family of processors is therefore
a good idea whenever possible, so that the system can cope with an increased
volume of transactions as more terminals and communication links become necessary.
• A processor should therefore be chosen (initially) on which flexible peripheral
interfacing facilities are sufficiently available or can easily be added.
Facilities for Systems Development
• Simulators:
• Simulation involves modelling the underlying state of the target. A simulator is a program, run
on a medium or large computer, which interprets the microprocessor program's instructions
and simulates their execution.
• The end result of a good simulation is that the simulation model emulates the target which it
is simulating.
• Its advantage lies in its ability to display every detail of the program execution and to allow
testing and debugging before the program is run on a microprocessor at all. Many useful functions
can be built into a simulator, such as program traps, breakpoints, trace facilities, and memory
protection.
• One of the main difficulties in using a simulator is in handling input/output operations.
• A simulator does not normally work as fast as the real microprocessor and therefore cannot be
used in real-time applications.
• Attaching peripherals to a simulator is in any case difficult, so simulators are normally
used only for testing the computational part of programs.
• They are usually used when the hardware being developed is not yet available.
• Emulators:
• Emulation is the process of mimicking the outwardly observable behaviour of an
existing target.
• The internal state of the emulation mechanism does not have to reflect accurately the
internal state of the target which it is emulating.
• An emulator performs a similar function to a simulator but is more hardware-based,
being constructed from an array of conventional (TTL) logic elements.
• Emulators do not suffer the slow response time seen with simulators
and hence are suitable for modelling real-time applications.
DIFFERENT I/O TECHNIQUES
• In a microprocessor based system peripheral devices require periodic service from
the CPU.
• The term service generally means sending data to or taking data from the device or
performing some updating process.
• From the software point of view there are three principal techniques used to initiate
and control the transfer of data through a computer I/O port.
1. Dedicated and periodic polling (Polled I/O)
2. Interrupt driven
3. Direct Memory Access (DMA)
• In general, peripheral devices are very slow compared with the CPU; hence, in
between the times that the CPU is required to service a peripheral, it can do a lot of
processing.
• In most systems, the CPU's processing time must be maximized by using an efficient method
of servicing the peripherals.
1. POLLED I/O
• In this method, the CPU must test each peripheral device in sequence at certain intervals
to see if it needs, or is ready for, servicing.
• The CPU sequentially selects each peripheral device via the multiplexer to see if it needs
service by checking the state of its ready line.
• Certain peripherals may need service at irregular and unpredictable intervals.
• The CPU must therefore poll the devices at a high enough rate to forestall a device needing service
and not being responded to early enough.
• A major disadvantage of this technique is that each time the CPU polls a device it must
stop the program currently being processed, go through the polling sequence,
provide service if needed, and then return to where it left off in its current task.
• Another problem is that if two or more devices need service at the same time, the first
device polled will be serviced and the other devices will have to wait, even though one of
them may need servicing much more urgently than the first device polled.
• Hence polling is suitable only for devices that can be serviced at regular and predictable
intervals, and only in situations in which there are no priority considerations.
2. INTERRUPT DRIVEN I/O
• An interrupt is an asynchronous signal indicating the need for attention or a
synchronous event in software indicating the need for a change in execution.
• A hardware interrupt causes the processor to save its state of execution and
begin execution of an interrupt handler.
• Software interrupts are usually implemented as instructions in the instruction
set, which cause a context switch to an interrupt handler similar to a hardware
interrupt.
• Interrupts are a commonly used technique for computer multitasking,
especially in real-time computing. Such a system is said to be interrupt-
driven.
• An act of interrupting is referred to as an interrupt request (IRQ).
• This approach overcomes the disadvantage of the polling method
• In this method, the CPU responds to a need for service only when service is requested by
a peripheral device.
• Thus the CPU can concentrate on running the current program without having to break
away unnecessarily to see if a device needs service.
• When the CPU receives an I/O interrupt signal, it temporarily stops its current program,
acknowledges the interrupt, and fetches from memory a special program (the service
routine) for the particular device that has issued the interrupt.
• When the service routine is complete, the CPU returns to where it left off in the program
execution.
• A device called a programmable interrupt controller (PIC) handles the interrupts on a
priority basis.
• I.e., when two devices request service at the same time, the one assigned the highest
priority is serviced first, then the one with the next highest priority, and so on.
In the normal execution of a program there are three types of interrupts that can cause
a break:
1. External Interrupts: These types of interrupts generally come from external input/
output devices which are connected externally to the processor. They are generally
independent of, and oblivious to, any program that is currently running on the
processor.
2. Internal Interrupts: These are also known as traps, and may be caused by
some illegal operation or the erroneous use of data. Instead of being triggered by
an external event, they are triggered by an exception raised by the program itself,
such as an attempted division by zero or an invalid opcode.
3. Software Interrupts: These types of interrupts can occur only during the execution
of an instruction. They can be used by a programmer to cause interrupts if need be.
The primary purpose of such interrupts is to switch from user mode to supervisor
mode.
Interrupts can be categorized into:
• Maskable interrupt (IRQ) is a hardware interrupt that may be ignored by setting a
bit in an interrupt mask register's (IMR) bit-mask.
• Non-maskable interrupt (NMI) is a hardware interrupt that lacks an associated bit-
mask, so that it can never be ignored. NMIs are often used for timers,
especially watchdog timers.
• Inter-processor interrupt (IPI) is a special case of interrupt that is generated by
one processor to interrupt another processor in a multiprocessor system.
• Software interrupt is an interrupt generated within a processor by executing an
instruction. Software interrupts are often used to implement system calls because
they implement a subroutine call with a CPU ring level change.
• Spurious interrupt is a hardware interrupt that is unwanted. They are typically
generated by system conditions such as electrical interference on an interrupt line or
through incorrectly designed hardware.
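The difference between maskable and non-maskable interrupts comes down to a bit-test against the IMR, which can be shown directly. The bit layout and names below are illustrative assumptions, not any particular processor's register map.

```python
# Sketch of maskable vs non-maskable interrupt delivery using an interrupt
# mask register (IMR) bit-mask. Bit layout and names are illustrative.

IMR = 0b00000100                    # bit 2 set -> IRQ line 2 is masked (ignored)
NMI_LINE = 7                        # hypothetical NMI line: has no IMR bit

def delivered(line, imr):
    """An interrupt is delivered unless its IMR bit is set; an NMI is always delivered."""
    if line == NMI_LINE:
        return True                 # non-maskable: the mask is never consulted
    return not ((imr >> line) & 1)  # maskable: ignored if its IMR bit is set

print(delivered(2, IMR))            # False: line 2 is masked out
print(delivered(0, IMR))            # True: line 0 is unmasked
print(delivered(NMI_LINE, 0xFF))    # True: NMI ignores the mask entirely
```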
3. DIRECT MEMORY ACCESS (DMA)
• Direct memory access (DMA) is a feature of modern computers that allows certain
hardware subsystems within the computer to access system memory independently
of the central processing unit (CPU).
• Without DMA, when the CPU is using programmed input/output, it is typically fully
occupied for the entire duration of the read or write operation, and is thus
unavailable to perform other work.
• With DMA, the CPU initiates the transfer, does other operations while the transfer is
in progress, and receives an interrupt from the DMA controller when the operation is
done.
• This feature is useful any time the CPU cannot keep up with the rate of data transfer,
or where the CPU needs to perform useful work while waiting for a relatively slow I/O
data transfer.
• DMA is also used for intra-chip data transfer in multi-core processors. Computers that
have DMA channels can transfer data to and from devices with much less CPU
overhead than computers without a DMA channel.
• Similarly, a processing element inside a multi-core processor can transfer data to and
from its local memory without occupying its processor time, allowing computation
and data transfer to proceed in parallel.
• DMA can also be used for "memory to memory" copying or moving of data within
memory. DMA can offload expensive memory operations, such as large copies
or scatter-gather operations, from the CPU to a dedicated DMA engine.
• An implementation example is the I/O Acceleration Technology.
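The contrast with programmed I/O can be made concrete with a toy model: with programmed I/O the CPU's work grows with the block size, while with DMA the CPU only initiates the transfer and handles one completion interrupt. This is a simulation of the idea, not real hardware or driver code.

```python
# Toy model contrasting programmed I/O with a DMA-style transfer.
# "CPU steps" counts the cycles the CPU spends on the transfer itself.

def programmed_io_copy(src, dst):
    """CPU moves every byte itself: occupied for the whole transfer."""
    cpu_steps = 0
    for i, byte in enumerate(src):
        dst[i] = byte
        cpu_steps += 1
    return cpu_steps

def dma_copy(src, dst):
    """CPU initiates the transfer, then only services the completion interrupt."""
    cpu_steps = 1                   # one step to program the DMA controller
    dst[:] = src                    # the DMA engine moves the whole block
    cpu_steps += 1                  # one step to handle the completion interrupt
    return cpu_steps

src = list(b"realtime")
dst1, dst2 = [0] * len(src), [0] * len(src)
print(programmed_io_copy(src, dst1))   # CPU steps grow with the block size
print(dma_copy(src, dst2))             # CPU steps stay constant
```

Both calls leave the destination identical to the source; only the CPU cost differs, which is the whole point of offloading the transfer to a DMA engine.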
MODES OF OPERATION

a) Burst Mode:
– Here an entire block of data is transferred in one contiguous sequence.
– Once the DMA controller is granted access to the system bus by the CPU, it transfers all
bytes of data in the data block before releasing control of the system buses back to the
CPU.
– This mode is useful for loading program or data files into memory, but renders the CPU
inactive for relatively long periods of time. The mode is also called Block Transfer Mode.
b) Cycle Stealing Mode:
– The cycle stealing mode is used in systems in which the CPU should not be disabled for the
length of time needed for burst transfer modes.
– Here, the DMA controller obtains access to the system bus the same way as in burst mode,
using BR (Bus Request) and BG (Bus Grant) signals, which are the two signals controlling the
interface between the CPU and the DMA controller.
– However, after one byte of data has been transferred, control of the system bus is
released back to the CPU via BG. The bus is then continually requested again via BR,
transferring one byte of data per request, until the entire block of data has been transferred.
– By continually obtaining and releasing the control of the system bus, the DMA controller
essentially interleaves instruction and data transfers. The CPU processes an instruction,
then the DMA controller transfers one data value, and so on.
– On the one hand, the data block is not transferred as quickly in cycle stealing mode as in
burst mode, but on the other hand the CPU is not idled for as long as in burst mode.
– Cycle stealing mode is useful for controllers that monitor data in real time.
c) Transparent mode:
– This mode takes the most time to transfer a block of data, yet it is
also the most efficient mode in terms of overall system
performance.
– Here, the DMA controller only transfers data when the CPU is
performing operations that do not use the system buses.
– The primary advantage of the transparent mode is that the CPU
never stops executing its programs, so the DMA transfer is free in
terms of time.
– The disadvantage of the transparent mode is that the hardware
needs to determine when the CPU is not using the system buses,
which can be complex and relatively expensive.
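The trade-off between burst mode and cycle-stealing mode can be shown on a toy bus timeline, where one "tick" is one bus cycle and the CPU runs only when it owns the bus. This is an illustrative model, not a cycle-accurate one.

```python
# Toy timeline comparing burst mode and cycle-stealing mode.
# Each list entry records which unit owns the system bus for that cycle.

def burst(block_size, cpu_work):
    """Burst mode: the DMA controller holds the bus for the whole block."""
    timeline = ["DMA"] * block_size        # CPU is idle throughout the block
    timeline += ["CPU"] * cpu_work         # CPU resumes only afterwards
    return timeline

def cycle_stealing(block_size, cpu_work):
    """Cycle stealing: one byte, one instruction, alternating via BR/BG."""
    timeline = []
    while block_size or cpu_work:
        if block_size:
            timeline.append("DMA")         # controller steals one bus cycle
            block_size -= 1
        if cpu_work:
            timeline.append("CPU")         # CPU executes one instruction
            cpu_work -= 1
    return timeline

b = burst(3, 3)
c = cycle_stealing(3, 3)
print(b)    # ['DMA', 'DMA', 'DMA', 'CPU', 'CPU', 'CPU']
print(c)    # ['DMA', 'CPU', 'DMA', 'CPU', 'DMA', 'CPU']
```

Both modes take the same total number of bus cycles here, but in burst mode the CPU is stalled for the whole block, while in cycle stealing its longest stall is a single cycle, which is why cycle stealing suits controllers that monitor data in real time.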
