8255A
[Fig 1a, Fig 1b: 8255A block diagram and control logic.]
Control Logic
It has six lines:
1. RD (active low): Enables the read operation. When low, the MPU reads data from the selected I/O port of the 8255A.
2. WR (active low): Enables the write operation to the selected port or control register.
3. RESET: Active-high signal that clears the control register and sets all ports to input mode.
4. CS (active low), A0, A1: These are device select signals. CS is connected to a decoded address, and A0 and A1 are generally connected to the MPU address lines A0 and A1.
The CS signal is the master chip select, and A0, A1 specify one of the I/O ports or the control register as given below:
CS  A1  A0  Selected
0   0   0   Port A
0   0   1   Port B
0   1   0   Port C
0   1   1   Control register
1   X   X   8255A not selected
As an example, the port addresses in Fig 2a are determined by CS, A0 and A1. The CS line goes low when A7 = 1 and A6 through A2 are at logic 0. When these signals are combined with A0 and A1, the port addresses range from 80H to 83H, as shown in Fig 2b.
[Fig 2: 8255A chip select logic (a) and I/O port addresses (b). IOR and IOW drive RD and WR; the decoded addresses are Port A = 80H, Port B = 81H, Port C = 82H, control register = 83H.]
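As a sanity check, the decoding in Fig 2 can be modeled in C (the helper names are hypothetical; only the decode logic itself comes from the figure):

```c
#include <assert.h>

/* Hypothetical helper mirroring the decoder in Fig 2a: CS goes low
   (chip selected) only when A7 = 1 and A6 through A2 are all 0, so
   the 8255A occupies addresses 80H-83H, with A1,A0 picking the port. */
int chip_selected(unsigned addr)
{
    return ((addr & 0x80) != 0) && ((addr & 0x7C) == 0);
}

/* 0 = Port A, 1 = Port B, 2 = Port C, 3 = control register */
int selected_port(unsigned addr)
{
    return (int)(addr & 0x03);
}
```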
CONTROL WORD.
Fig 1b shows a register called Control Register.
The contents of this register called the control word specify an I/O function for
each port.
This register can be accessed to write a control word when A0 and A1 are both at logic 1. The register is NOT accessible for a read operation.
Bit D7 of the control register specifies either the BSR mode or the I/O mode. If D7 = 1, then D6 through D0 determine I/O functions in the various modes.
Ports A and B are used as two simple 8-bit ports and Port C as two 4-bit ports. Each port (or half of Port C) can be programmed to function as a simple input or output port.
The I/O features in Mode 0 are selected by the control word:
D7: 1 = I/O mode, 0 = BSR mode
Group A:
  D6 D5: Port A mode selection (00 = Mode 0, 01 = Mode 1, 1X = Mode 2)
  D4: Port A (1 = input, 0 = output)
  D3: Port C upper, PC7-PC4 (1 = input, 0 = output)
Group B:
  D2: Port B mode selection (0 = Mode 0, 1 = Mode 1)
  D1: Port B (1 = input, 0 = output)
  D0: Port C lower, PC3-PC0 (1 = input, 0 = output)
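The Mode 0 control-word layout can be captured in a small C helper (a sketch; the function name is ours, the bit positions are the 8255A's):

```c
#include <assert.h>

/* Build a Mode 0 control word (D7 = 1 selects the I/O mode).
   Each direction argument is 1 for input, 0 for output; the mode
   bits D6 D5 and D2 stay 0 for Mode 0 on both groups. */
unsigned char mode0_control_word(int a_in, int cu_in, int b_in, int cl_in)
{
    return (unsigned char)(0x80          /* D7 = 1: I/O mode           */
                         | (a_in  << 4)  /* D4: Port A direction       */
                         | (cu_in << 3)  /* D3: Port C upper direction */
                         | (b_in  << 1)  /* D1: Port B direction       */
                         |  cl_in);      /* D0: Port C lower direction */
}
```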
Program:
1. Determine the port addresses for the circuit shown below.
2. Identify the Mode 0 control word to configure Port A and Port C upper (CU) as output ports, and Port B and Port C lower (CL) as input ports.
3. Write a program to read the DIP switches and display the reading from Port B at Port A, and from CL at CU.
[Fig: 8085 interfaced to the 8255A using memory-mapped I/O. A15 drives CS; MEMR and MEMW drive RD and WR; the 8085 RESET OUT drives RESET. LEDs (through buffers, pulled to +5 V) are connected to PA7-PA0 and PC7-PC4; DIP switches are connected to PB7-PB0 and PC3-PC0.]
Ans: 1. Port addresses. This is memory-mapped I/O; when address line A15 is high, Chip Select is enabled.
Assuming all don't-care address lines are at logic 0, the port addresses are as follows:
Port A = 8000H (A1 = 0, A0 = 0)
Port B = 8001H (A1 = 0, A0 = 1)
Port C = 8002H (A1 = 1, A0 = 0)
Control register = 8003H (A1 = 1, A0 = 1)
2. Control Word
D7 D6 D5 D4 D3 D2 D1 D0
 1  0  0  0  0  0  1  1  = 83H
D7 = 1: I/O function
D6 D5 = 00: Port A in Mode 0
D4 = 0: Port A output
D3 = 0: Port CU output
D2 = 0: Port B in Mode 0
D1 = 1: Port B input
D0 = 1: Port CL input
3. Program
Program Description: The circuit is designed for memory-mapped I/O, so the instructions are written as if all 8255A ports were memory locations.
The ports are initialized by loading the control word 83H into the control register. STA and LDA are equivalent to the 8-bit port I/O instructions OUT and IN, respectively.
The low 4 bits of port C are configured as input and the high 4 bits as output. Read and write memory operations are differentiated by the control signals MEMR and MEMW.
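A minimal C sketch of the data manipulation the program performs (the helper names are hypothetical; on the 8085, the CL-to-CU shift would be done with four RLC instructions after an LDA from 8002H):

```c
#include <assert.h>

/* The program body reads Port B (8001H) with LDA and writes the byte
   to Port A (8000H) with STA: DIP switches straight to LEDs. */
unsigned char led_byte_for_port_a(unsigned char dip_port_b)
{
    return dip_port_b;
}

/* The low nibble read from Port C (8002H) is shifted into the high
   nibble and written back, so the CL switches appear on the CU LEDs. */
unsigned char led_nibble_for_port_cu(unsigned char port_c_reading)
{
    return (unsigned char)((port_c_reading & 0x0F) << 4);
}
```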
BSR (Bit Set/Reset) Mode
The BSR mode is concerned only with the 8 bits of port C, which can be set or reset by writing an appropriate control word in the control register.
A control word with bit D7 = 0 is recognized as a BSR control word, and it DOES NOT ALTER any previously transmitted control word with bit D7 = 1. Thus, the I/O operations of ports A and B are NOT AFFECTED by a BSR control word. In this mode, individual bits of port C can be used as ON/OFF switches.
BSR control word format:
D7 D6 D5 D4 D3 D2 D1 D0
 0  X  X  X  [bit select]  S/R
D7 = 0: BSR mode
D6-D4: not used, generally set to 0
D3 D2 D1: bit select (000 = bit 0, 001 = bit 1, 010 = bit 2, 011 = bit 3, 100 = bit 4, 101 = bit 5, 110 = bit 6, 111 = bit 7)
D0: Set = 1, Reset = 0
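The format maps directly to a one-line helper; the assertions below reproduce the bytes used in the subroutine that follows (the function name is ours):

```c
#include <assert.h>

/* Build a BSR control word: D7 = 0, D3-D1 select the port C bit,
   D0 = 1 to set the bit or 0 to reset it. */
unsigned char bsr_word(int bit, int set)
{
    return (unsigned char)(((bit & 7) << 1) | (set & 1));
}
```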
SUBROUTINE
BSR:  MVI A, 0FH   ; Load BSR control word to set PC7
      OUT 83H      ; Set PC7 = 1
      MVI A, 07H   ; Load BSR control word to set PC3
      OUT 83H      ; Set PC3 = 1
      CALL DELAY   ; 10 ms delay
      MVI A, 06H   ; Load BSR control word to reset PC3
      OUT 83H      ; Reset PC3 = 0
      MVI A, 0EH   ; Load BSR control word to reset PC7
      OUT 83H      ; Reset PC7 = 0
      RET
8255A Mode 1: Input configuration
Control word: D7 = 1 (I/O mode), D6 D5 = 01 (Port A Mode 1), D4 = 1 (Port A input), D3 = PC6, PC7 direction (1 = input, 0 = output), D2 = 1 (Port B Mode 1), D1 = 1 (Port B input), D0 = X.
[Fig: Mode 1 input handshake lines. Port A: STBA (PC4), IBFA (PC5), INTRA (PC3). Port B: STBB (PC2), IBFB (PC1), INTRB (PC0). PC6 and PC7 remain available as simple I/O.]
STB (Strobe, active low): Input signal generated by the peripheral to indicate that it has transmitted a data byte.
IBF (Input Buffer Full): An acknowledgement by the 8255A to indicate that the input latch has received the data byte. It is reset when the MPU reads the data byte.
INTR (Interrupt request): An output signal that may be used to interrupt the MPU.
This signal is generated if STB, IBF and INTE (the internal interrupt-enable flip-flop) are all at logic 1. It is reset by the falling edge of the RD signal.
[Fig: 8255A Mode 1 timing waveforms for strobed input with handshake, showing STB, IBF, INTR, RD and the input from the peripheral.]
Programming the 8255A in Mode 1
The 8255A may be programmed to function with either status-check or interrupt I/O. Both flowcharts are shown in the figure.
In (a), the MPU continues to check the data status through the IBF line until it goes high. This is a simplified flowchart; it does NOT show how to handle data transfer if two ports are being used.
The disadvantage of status-check I/O with handshake is that the MPU is held up in the loop.
Fig (b) shows the steps for interrupt I/O, assuming vectored interrupts are available.
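Status-check I/O from flowchart (a) amounts to polling IBF. A hedged C sketch (the bit position is IBFA's, D5 of the Mode 1 input status word; the repeated reads of port C are simulated here with an array of successive status values):

```c
#include <assert.h>

#define IBF_A 0x20   /* D5 of the Mode 1 input status word */

/* Poll the status word until IBF goes high; return the index of the
   read on which data became available, or -1 if it never did. */
int wait_for_ibf(const unsigned char *status_reads, int n)
{
    for (int i = 0; i < n; i++)
        if (status_reads[i] & IBF_A)
            return i;   /* the data port may now be read */
    return -1;
}
```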
8255A Mode 1: Output configuration
Control word: D7 = 1 (I/O mode), D6 D5 = 01 (Port A Mode 1), D4 = 0 (Port A output), D3 = PC4, PC5 direction (1 = input, 0 = output), D2 = 1 (Port B Mode 1), D1 = 0 (Port B output), D0 = X.
[Fig: Mode 1 output handshake lines. Port A: OBFA (PC7), ACKA (PC6), INTRA (PC3). Port B: OBFB (PC1), ACKB (PC2), INTRB (PC0). PC4 and PC5 remain available as simple I/O.]
Status word (D7-D0): OBFA INTEA I/O I/O INTRA INTEB OBFB INTRB
OBF (Output Buffer Full): An output signal that goes low when the MPU writes data to the output latch of the 8255A. It indicates to the peripheral that new data is available for reading. It goes high again after the 8255A receives an ACK from the peripheral.
ACK (Acknowledge, active low): An input acknowledge signal from the peripheral, driven low when the peripheral has received the data from the 8255A ports.
INTR (Interrupt request): An output signal that may be used to interrupt the MPU. This signal is generated if OBF, ACK and INTE (the internal interrupt-enable flip-flop) are all at logic 1. It is reset by the falling edge of the WR signal.
[Fig: 8255A Mode 1 timing waveforms for strobed output with handshake, showing WR, OBF, INTR, ACK and the output to the peripheral.]
Mode 2: Bidirectional data transfer. This mode is used for transferring data between two computers or in a floppy disk controller interface. Port A can be configured as a bidirectional port, and Port B in either Mode 0 or Mode 1. Port A uses five signals from Port C as handshake signals; the remaining three Port C signals may be used either as simple I/O or as handshake signals for Port B.
Mode 2 control word: D7 = 1, D6 = 1 (Port A in Mode 2), D5-D3 = X, D2 = Port B mode (0 = Mode 0, 1 = Mode 1), D1 = Port B direction (1 = input, 0 = output), D0 = PC2-PC0 direction (1 = input, 0 = output; meaningful only when Port B is in Mode 0).
[Fig: Mode 2 configuration. Port A (PA7-PA0) is bidirectional with OBFA (PC7), ACKA (PC6), STBA (PC4), IBFA (PC5) and INTRA (PC3).
(a) Control word 1 1 X X X 0 1 1/0: Port B (PB7-PB0) in Mode 0, PC2-PC0 used as simple I/O.
(b) Control word 1 1 X X X 1 0 X: Port B in Mode 1, with OBFB (PC1), ACKB (PC2) and INTRB (PC0) as handshake signals.]
2. Programmable Communication Interface - 8251 USART
The 8251 converts parallel data from the CPU into serial form for transmission; it also receives serial data from the outside world and transmits parallel data to the CPU after conversion.
[Block diagram of the 8251 USART (Universal Synchronous Asynchronous Receiver Transmitter)]
Operation between the 8251 and a CPU is executed under program control. Table 1 shows the operation between a CPU and the device.
3. Programmable Keyboard/Display Interface - 8279
A programmable keyboard and display interfacing chip.
Scans and encodes up to a 64-key keyboard.
Controls up to a 16-digit numerical display.
The keyboard section has a built-in 8-character FIFO buffer.
The display is controlled from an internal 16x8 RAM that stores the coded display information.
Pinout Definition 8279
A0: Selects data (0) or control/status (1) for reads and writes between the microprocessor and the 8279.
BD: Output that blanks the displays.
CLK: Used internally for timing; the maximum is 3 MHz.
CN/ST: Control/strobe, connected to the control key on the keyboard.
4. Programmable Interrupt Controller - 8259
Features:
• 8 levels of interrupts.
• Can be cascaded in master-slave configuration to handle 64 levels of
interrupts.
• Internal priority resolver.
• Fixed priority mode and rotating priority mode.
• Individually maskable interrupts.
• Modes and masks can be changed dynamically.
• Accepts IRQ, determines priority, checks whether incoming priority >
current level being serviced, issues interrupt signal.
• In 8085 mode, provides a 3-byte CALL instruction. In 8086 mode, provides an 8-bit vector number.
• Polled and vectored mode.
• Starting address of ISR or vector number is programmable.
• No clock required.
[Figures: 8259 pinout and block diagram]
ICW1 format:
A0 D7 D6 D5 D4 D3   D2  D1   D0
 0 A7 A6 A5  1 LTIM ADI SNGL IC4
In 8085 mode the low byte of the CALL address is A7-A0, of which A7, A6, A5 are provided by D7-D5 of ICW1 (if ADI = 1), or A7, A6 are provided (if ADI = 0). A4-A0 (or A5-A0) are set by the 8259 itself:
ADI = 1 (interval of 4):         ADI = 0 (interval of 8):
IRQ A7 A6 A5 A4 A3 A2 A1 A0      IRQ A7 A6 A5 A4 A3 A2 A1 A0
IR0 A7 A6 A5  0  0  0  0  0      IR0 A7 A6  0  0  0  0  0  0
IR1 A7 A6 A5  0  0  1  0  0      IR1 A7 A6  0  0  1  0  0  0
IR2 A7 A6 A5  0  1  0  0  0      IR2 A7 A6  0  1  0  0  0  0
IR3 A7 A6 A5  0  1  1  0  0      IR3 A7 A6  0  1  1  0  0  0
IR4 A7 A6 A5  1  0  0  0  0      IR4 A7 A6  1  0  0  0  0  0
IR5 A7 A6 A5  1  0  1  0  0      IR5 A7 A6  1  0  1  0  0  0
IR6 A7 A6 A5  1  1  0  0  0      IR6 A7 A6  1  1  0  0  0  0
IR7 A7 A6 A5  1  1  1  0  0      IR7 A7 A6  1  1  1  0  0  0
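The table can be reproduced by a short calculation (a sketch; in 8085 mode the 8259 places base | (IR × interval) on the low address byte of the CALL, with the interval set by ADI):

```c
#include <assert.h>

/* Low byte of the CALL address for interrupt input 'ir'.
   base_bits holds A7-A5 (ADI = 1) or A7-A6 (ADI = 0) from ICW1,
   with the lower bits zero. */
unsigned char vector_low_byte(unsigned char base_bits, int ir, int adi)
{
    int interval = adi ? 4 : 8;
    return (unsigned char)(base_bits | (ir * interval));
}
```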
ICW2 (A0 = 1): A15-A8 of the CALL address in 8085 mode, or the vector number in 8086 mode.
ICW3 (A0 = 1):
Master: S7 S6 S5 S4 S3 S2 S1 S0 (1 = a slave is attached to that IR input)
Slave:  0  0  0  0  0  ID2 ID1 ID0 (slave identification number)
OCW1 (A0 = 1): M7 M6 M5 M4 M3 M2 M1 M0 (interrupt mask; 1 masks the corresponding IR input)
OCW2 (A0 = 0): R SL EOI 0 0 L2 L1 L0
R SL EOI  Action
0  0  1   Non-specific EOI command
0  1  1   Specific EOI command (level in L2-L0)
0  1  0   No operation
OCW3 (A0 = 0): 0 ESMM SMM 0 1 P RR RIS
ESMM SMM  Effect
 0    X   No effect
 1    0   Reset special mask
 1    1   Set special mask
5. Programmable Interval Timer - 8253/8254
It provides three independent 16-bit counters, and each counter may operate in a different mode. All modes are software programmable.
The 8254 solves one of the most common problems in any microcomputer system: the generation of accurate time delays under software control.
Control word format:
Bits 7-6: Counter select
  00 select counter 0
  01 select counter 1
  10 select counter 2
  11 read-back command (8254 only, illegal on 8253; see below)
Bits 5-4: Read/Write/Latch format
  00 latch present counter value
  01 read/write of MSB only
  10 read/write of LSB only
  11 read/write of LSB, followed by MSB
Bit 0: 0 = binary count, 1 = BCD count
Bits 3-1: Counter mode
  000 mode 0, interrupt on terminal count; countdown, interrupt, then wait for a new mode or count; loading a new count in the middle of a count stops the countdown
  001 mode 1, programmable one-shot; countdown with optional restart; reloading the counter will not affect the countdown until after the following trigger
  010 mode 2, rate generator; generates one pulse after 'count' CLK cycles; the output remains high until after the new countdown has begun; reloading the count mid-period does not take effect until after the period
  011 mode 3, square wave rate generator; generates one pulse after 'count' CLK cycles; the output remains high until 1/2 of the next countdown; it does this by decrementing by 2 until zero, at which time it lowers the output signal, reloads the counter and counts down again until interrupting at 0; reloading the count mid-period does not take effect until after the period
  100 mode 4, software triggered strobe; countdown with the output high until the counter reaches zero; at zero the output goes low for one CLK period; the countdown is triggered by loading the counter; reloading the counter takes effect on the next CLK pulse
  101 mode 5, hardware triggered strobe; countdown after triggering, with the output high until the counter reaches zero; at zero the output goes low for one CLK period
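The three fields above assemble into the control word like this (a sketch; the field packing follows the bit numbers listed):

```c
#include <assert.h>

/* Assemble an 8253/8254 control word: bits 7-6 counter select,
   bits 5-4 read/write format, bits 3-1 mode, bit 0 BCD flag. */
unsigned char pit_control_word(int counter, int rw, int mode, int bcd)
{
    return (unsigned char)((counter << 6) | (rw << 4) | (mode << 1) | bcd);
}
```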
Read Back Command Format (8254 only)
Read Back Command Status (8254 only, read from counter register)
- the 8253 is used on the PC and XT, while the 8254 is used on the AT and later
- all counters are decrementing and fully independent
- the PIT is fed by 3 clock lines, all running at 1.19318 MHz
- the value of 1.19318 MHz is derived from (4.77 MHz / 4) and has its roots in NTSC frequencies
- counters are 16-bit quantities which are decremented and then tested against zero; the valid range is 0-65535; to get 65536 clocks you must specify a count of 0, since 65536 is a 17-bit value
- reading by latching the count doesn't disturb the countdown, but reading the port directly does, except when using the 8254 read-back command
- counter 0 is the time-of-day interrupt and fires approximately 18.2 times per second; the value 18.2 is derived from 1.19318 MHz / 65536 (the normal default count)
- counter 1 is normally set to 18 (decimal) and signals the 8237 to do a RAM refresh approximately every 15 µs
- counter 2 is normally used to generate tones from the speaker, but can be used as a regular counter in conjunction with the 8255
- newly loaded counters don't take effect until after an output pulse or input CLK cycle, depending on the mode
- the 8253 has a maximum input clock rate of 2.6 MHz; the 8254 has a maximum input clock rate of 10 MHz
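The 18.2 Hz figure can be checked numerically (a sketch working in 0.1 Hz units to stay in integer arithmetic; the function name is ours):

```c
#include <assert.h>

/* Output frequency of a PIT channel, in tenths of a hertz:
   1.19318 MHz divided by the programmed count (0 means 65536). */
long pit_output_freq_x10(long count)
{
    if (count == 0)
        count = 65536L;      /* count of 0 = maximum divide ratio */
    return 11931800L / count;
}
```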
Programming considerations:
Example code:

    cli
    mov  al,00110110b   ; bits 7,6 = (00) timer counter 0
                        ; bits 5,4 = (11) write LSB then MSB
                        ; bits 3-1 = (011) generate square wave
                        ; bit 0   = (0) binary counter
    out  43h,al         ; program PIT counter 0, square wave, init count
    jmp  $+2
    mov  cx,countdown   ; default is 0x0000 (65536) -> 18.2 per sec
                        ; interrupts when counter decrements to 0
    mov  al,cl          ; send LSB of timer count
    out  40h,al
    jmp  $+2
    mov  al,ch          ; send MSB of timer count
    out  40h,al
    jmp  $+2
    sti
7.Analog-to-digital converter
Resolution
The resolution of the converter indicates the number of discrete values it can
produce over the range of analog values.
The values are usually stored electronically in binary form, so the resolution is
usually expressed in bits. In consequence, the number of discrete values available,
or "levels", is usually a power of two.
For example, an ADC with a resolution of 8 bits can encode an analog input to one of 256 different levels, since 2^8 = 256. The values can represent the ranges from 0 to 255 (i.e. unsigned integer) or from -128 to 127 (i.e. signed integer), depending on the application.
Resolution can also be defined electrically, and expressed in volts. The voltage resolution of an ADC is equal to its overall voltage measurement range divided by the number of discrete intervals, as in the formula:

Q = EFSR / N

Where:
Q is the resolution in volts per step (volts per output code),
EFSR is the full-scale voltage range = VRefHi − VRefLow,
M is the ADC's resolution in bits,
N is the number of intervals, given by the number of available levels (output codes): N = 2^M.
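A worked sketch of the formula (integer microvolts are used to avoid floating point; the function name is ours):

```c
#include <assert.h>

/* Q = EFSR / 2^M, computed in microvolts per code. */
long adc_resolution_uv(long full_scale_uv, int bits)
{
    return full_scale_uv / (1L << bits);
}
```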
Some examples may help:
• Example 1
  o Full scale measurement range = 0 to 10 volts
  o ADC resolution is 12 bits: 2^12 = 4096 quantization levels (codes)
  o ADC voltage resolution is: (10 V - 0 V) / 4096 codes = 10 V / 4096 codes ≈ 0.00244 volts/code ≈ 2.44 mV/code
• Example 2
  o Full scale measurement range = -10 to +10 volts
  o ADC resolution is 14 bits: 2^14 = 16384 quantization levels (codes)
  o ADC voltage resolution is: (10 V - (-10 V)) / 16384 codes = 20 V / 16384 codes ≈ 0.00122 volts/code ≈ 1.22 mV/code
• Example 3
  o Full scale measurement range = 0 to 8 volts
  o ADC resolution is 3 bits: 2^3 = 8 quantization levels (codes)
  o ADC voltage resolution is: (8 V − 0 V) / 8 codes = 1 volt/code = 1000 mV/code
In practice, the smallest output code ("0" in an unsigned system) represents a
voltage range which is 0.5X of the ADC voltage resolution (Q)(meaning half-wide
of the ADC voltage Q ) while the largest output code represents a voltage range
which is 1.5X of the ADC voltage resolution (meaning 50% wider than the ADC
voltage resolution). The other N − 2 codes are all equal in width and represent the
ADC voltage resolution (Q) calculated above. Doing this centers the code on an
input voltage that represents the M th division of the input voltage range. For
example, in Example 3, with the 3-bit ADC spanning an 8 V range, each of the N
divisions would represent 1 V, except the 1st ("0" code) which is 0.5 V wide, and
the last ("7" code) which is 1.5 V wide. Doing this the "1" code spans a voltage
range from 0.5 to 1.5 V, the "2" code spans a voltage range from 1.5 to 2.5 V, etc.
Thus, if the input signal is at 3/8ths of the full-scale voltage, then the ADC outputs the "3" code, and will do so as long as the voltage stays within the range of 2.5/8ths and 3.5/8ths. This practice is called "Mid-Tread" operation. This type of ADC can be modeled mathematically as:

code = floor(Vin / Q + 1/2), clamped to the range 0 to 2^M − 1.
The exception to this convention seems to be the Microchip PIC processor, where
all M steps are equal width. This practice is called "Mid-Rise with Offset"
operation.
In practice, the useful resolution of a converter is limited by the best signal-to-
noise ratio that can be achieved for a digitized signal. An ADC can resolve a signal
to only a certain number of bits of resolution, called the "effective number of bits"
(ENOB). One effective bit of resolution changes the signal-to-noise ratio of the
digitized signal by 6 dB, if the resolution is limited by the ADC. If a preamplifier has
been used prior to A/D conversion, the noise introduced by the amplifier can be an
important contributing factor towards the overall SNR.
Response type
Linear ADCs
Most ADCs are of a type known as linear.[1] The term linear as used here means that
the range of the input values that map to each output value has a linear relationship
with the output value, i.e., that the output value k is used for the range of input
values from
m(k + b)
to
m(k + 1 + b),
where m and b are constants. Here b is typically 0 or −0.5. When b = 0, the ADC is
referred to as mid-rise, and when b = −0.5 it is referred to as mid-tread.
Non-linear ADCs
If the probability density function of a signal being digitized is uniform, then the
signal-to-noise ratio relative to the quantization noise is the best possible. Because
this is often not the case, it is usual to pass the signal through its cumulative
distribution function (CDF) before the quantization. This is good because the
regions that are more important get quantized with a better resolution. In the
dequantization process, the inverse CDF is needed.
This is the same principle behind the companders used in some tape-recorders and
other communication systems, and is related to entropy maximization.
For example, a voice signal has a Laplacian distribution. This means that the region
around the lowest levels, near 0, carries more information than the regions with
higher amplitudes. Because of this, logarithmic ADCs are very common in voice
communication systems to increase the dynamic range of the representable values
while retaining fine-granular fidelity in the low-amplitude region.
An eight-bit A-law or the µ-law logarithmic ADC covers the wide dynamic range
and has a high resolution in the critical low-amplitude region, that would
otherwise require a 12-bit linear ADC.
Accuracy
An ADC has several sources of errors. Quantization error and (assuming the ADC
is intended to be linear) non-linearity is intrinsic to any analog-to-digital
conversion. There is also a so-called aperture error which is due to a clock jitter and
is revealed when digitizing a time-variant signal (not a constant value).
These errors are measured in a unit called the LSB, which is an abbreviation for
least significant bit. In the above example of an eight-bit ADC, an error of one LSB
is 1/256 of the full signal range, or about 0.4%.
Quantization error
Main article: Quantization noise
Quantization error is due to the finite resolution of the ADC, and is an unavoidable
imperfection in all types of ADC. The magnitude of the quantization error at the
sampling instant is between zero and half of one LSB.
In the general case, the original signal is much larger than one LSB. When this happens, the quantization error is not correlated with the signal, and has a uniform distribution. Its RMS value is the standard deviation of this distribution, given by Q/√12 ≈ 0.289 LSB.
In the eight-bit ADC example, this represents 0.113% of the full signal range.
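The 0.113% figure follows from Q/√12 divided by the full range Q·2^M; a quick numeric check (the function name is ours):

```c
#include <assert.h>
#include <math.h>

/* RMS quantization error Q/sqrt(12), expressed as a percentage of the
   full range Q * 2^M; the Q factors cancel, leaving only the bit count. */
double rms_error_percent(int bits)
{
    return 100.0 / (sqrt(12.0) * (1 << bits));
}
```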
At lower levels the quantizing error becomes dependent on the input signal,
resulting in distortion. This distortion is created after the anti-aliasing filter, and if
these distortions are above 1/2 the sample rate they will alias back into the audio
band. In order to make the quantizing error independent of the input signal, noise
with an amplitude of 1 quantization step is added to the signal. This slightly reduces
signal to noise ratio, but completely eliminates the distortion. It is known as dither.
Non-linearity
All ADCs suffer from non-linearity errors caused by their physical imperfections,
resulting in their output to deviate from a linear function (or some other function,
in the case of a deliberately non-linear ADC) of their input. These errors can
sometimes be mitigated by calibration, or prevented by testing.
Important parameters for linearity are integral non-linearity (INL) and differential non-linearity (DNL). These non-linearities reduce the dynamic range of the signals that can be digitized by the ADC, also reducing its effective resolution.
Digital-to-analog converter
In electronics, a digital-to-analog converter (DAC or D-to-A) is a device for
converting a digital (usually binary) code to an analog signal (current, voltage or
electric charge).
An analog-to-digital converter (ADC) performs the reverse operation.
A DAC converts an abstract finite-precision number (usually a fixed-point binary
number) into a concrete physical quantity (e.g., a voltage or a pressure). In
particular, DACs are often used to convert finite-precision time series data to a
continually-varying physical signal.
A typical DAC converts the abstract numbers into a concrete sequence of impulses
that are then processed by a reconstruction filter using some form of interpolation
to fill in data between the impulses. Other DAC methods (e.g., methods based on
Delta-sigma modulation) produce a pulse-density modulated signal that can then
be filtered in a similar way to produce a smoothly-varying signal.
By the Nyquist–Shannon sampling theorem, sampled data can be reconstructed
perfectly provided that its bandwidth meets certain requirements (e.g., a baseband
signal with bandwidth less than the Nyquist frequency).
However, even with an ideal reconstruction filter, digital sampling introduces
quantization error that makes perfect reconstruction practically impossible.
Increasing the digital resolution (i.e., increasing the number of bits used in each
sample) or introducing sampling dither can reduce this error.
Instead of impulses, the sequence of numbers usually updates the analogue voltage at uniform sampling intervals.
These numbers are written to the DAC, typically with a clock signal that causes
each number to be latched in sequence, at which time the DAC output voltage
changes rapidly from the previous value to the value represented by the currently
latched number.
The effect of this is that the output voltage is held in time at the current value until
the next input number is latched resulting in a piecewise constant or 'staircase'
shaped output. This is equivalent to a zero-order hold operation and has an effect
on the frequency response of the reconstructed signal.
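The zero-order hold described above can be sketched directly: at any time t (measured in sample periods), the output equals the last latched sample (an illustration; the names are ours):

```c
#include <assert.h>

/* Zero-order hold: between clock edges the DAC output stays at the
   most recently latched sample, producing the staircase shape. */
int zoh_output(const int *samples, int n, double t)
{
    int i = (int)t;          /* index of the last latched sample */
    if (i < 0)  i = 0;
    if (i >= n) i = n - 1;   /* hold the final value afterwards  */
    return samples[i];
}
```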
QUESTIONS
1. Name the two modes of operation of the DMA controller.
2. List the operating modes of the 8253 timer.
3. Give the control word format of the timer.
4. What is the use of USART?
5. Compare serial and parallel communication.
6. What is the use of Keyboard and display controller?
7. What are the functions performed by 8279?
8. What is PPI?
9. Give the control word format for I/O mode of 8255?
10. Give the BSR mode format of 8255.
11. What is the need for interrupt controller?
12. What are the registers present in 8259?
13. What are the applications of 8253?
14. Define interrupts.
15. Define DMA process.
16. Give the status word format of 8257.
17. Draw the Block diagram and explain the operations of 8251 serial
communication
interface.
18. Draw the Block diagram of 8279 and explain the functions of each block.
19. Draw the block diagram of programmable interrupt controller and explain its
operations.
20. Discuss in detail about the operation of timer along with its various modes.
21. Draw the Block diagram of DMA controller and explain its operations.
22. What does the CPU do when it receives an interrupt? How do you enable and disable interrupts in 8086?
23. Explain the command words/control words of the 8259 in detail.
24. With the help of a block diagram, explain the various modes of operation of the 8259 in detail.
25. Explain the type 0, 1 and 2 interrupts found in the interrupt vector table of the 8086/8088 microprocessor.
26. Describe the use of the CAS0, CAS1 and CAS2 lines in a system with cascaded 8259s.
27. What are the different modes of operation of the 8253 programmable timer? How does the 8254 differ from the 8253?
28. Which mode will you use to generate a square wave? Give a flowchart to generate it on the 8253.
29. Explain, with a neat waveform, mode 0 of the 8253 timer/counter.
30. Explain, with the help of a block diagram, the functioning of the 8253 in its various programmable modes.
31. A 32-bit binary counter is to be implemented using the timer/counter.
i) Design and explain the control word to meet the above requirement.
ii) Draw the timing diagram of the mode(s) used.