
Microprocessor-Based Systems

Lecture 3
Microcontroller Architecture
The Harvard Architecture

9/20/06 Lecture 21 - PIC Architecture


MC Manufacturers
4-bit through 32-bit

• 4-bit -- very inexpensive
• 8-bit -- still very cheap, often ~$1.00 per chip
• 16- and 32-bit -- priced at $6.00 to $12.00 each
• An evaluation of requirements, chip capability, and cost enters into the design decision
SYSTEM PERIPHERALS AND INTERRUPTS

• A microprocessor-based system must obviously contain a microprocessor; otherwise it is simply another electronic circuit.

• The microprocessor must be programmed. This means that it must be provided with a series of instructions to be executed.

• The microprocessor usually connects to a set of peripherals used for data acquisition and storage.
• An embedded system will in many cases contain a microcontroller. If it instead contains a microprocessor, then the input/output and other peripherals (which would have been INSIDE a microcontroller, such as ADCs and UART interfaces) will be on the same circuit board, connected to the μP.

• What determines the microprocessor's speed? The CLOCK.

The operation of a μP is controlled by the clock signal. This signal is a voltage that oscillates within a given range and is provided by an oscillator.

An electronic oscillator is a circuit that produces a periodic electronic signal, usually in the form of a square wave.

Many types of oscillators exist: RC oscillators, LC oscillators and crystal (XT) oscillators.
Oscillator differences
• An RC oscillator contains a capacitor, a resistor and another electronic device such as an op-amp. The frequency of RC oscillators often drifts with several factors, such as temperature and voltage.

• LC oscillators contain inductors and are used in larger analog receivers. They are not often used in embedded systems. (Why?)

• Crystal oscillators find the largest application in embedded systems because they vibrate at very STABLE frequencies (their resonant frequency). They have better temperature stability than other oscillators, and thus better frequency stability.
Pin types

• GPIO – these are general-purpose digital I/O pins used to set or receive voltage levels.
• Analog pins – these are used to receive voltages when interfacing with ADCs. In most cases, these can also serve as digital I/O.
• Power pins (V+, V-, GND) – the purpose of these is obvious.
• Protocol pins – these are reserved for special interface protocols such as RS232, SPI, USB, I2C.
• "Useless" pins – these do nothing. Why are they there then?!
Peripherals

• Timers – these internal circuits of the MCU provide precision binary timing
• Watchdog timer – resets the MCU if the program hangs or stops servicing it
• Counters – precision counting
• ADC and DAC – analog-to-digital and digital-to-analog converters
• UART – this is used for RS232 serial communication
• SPI controller – this is used for serial communication using the SPI protocol
• USB controller – what does this do?
• PWM generator – generates pulse-width-modulated (PWM) signals
• Flash memory – non-volatile program and data storage
• Low-power management units – e.g. Sleep/Wake-up
• Others – e.g. Digital Audio Interfaces (built on top)
• In many processors, especially microcontrollers, a single pin will be used for many purposes depending on the state of its control register.
Sample PIN layout on a PIC18F2455
Interrupts and Polling
• Consider a scenario in a lecture room.
• During this lecture, I can accept questions from Peter as I speak (and thus allow myself to be interrupted), or I can stop talking and ask Peter if he has a question (whether or not his hand is up).

• The first scenario is similar to interrupts in embedded systems and the second is analogous to polling.
• An interrupt is a signal that indicates the need for attention
from a µP or a µC. It causes the device to suspend its current
state of execution and execute another function. The code for
this function is written in what is known as an Interrupt Service
Routine or an Interrupt Handler.

• Hardware interrupts are generated by external circuitry. Special hardware in the µC monitors the interrupt flag. When an interrupt occurs, the hardware causes a jump to the interrupt routine without any software help.
• Software interrupts are more common in operating systems
and are generated from within by other programs.
Polling vs Interrupts
• Polling is a continual request for the state of a given
circuit. Usually, a polling function will have a while loop
that executes ad infinitum.

• Polling is slow and interrupts are fast.
• Using polling consumes a lot of power, while interrupts consume very little. [No sleep capability during polling]
• Interrupts can be prioritized while polling gives the same
amount of time to every monitoring circuit
Practical Applications of Interrupts
• Applications are REALLY many. A few are:

• Self-adjusting Air conditioning Systems

• Process monitoring in factories

• Automatic transmission vehicles

Vectored and non-vectored interrupts
• A vectored interrupt is an I/O interrupt that indicates at the hardware level that a
request for attention from an I/O device has been received and also identifies the
device that sent the interrupt request.

• The device identifier can be as simple as an index into an array called the interrupt vector table, which contains the memory addresses of the ISRs to be executed when an interrupt request is received. These memory addresses are called interrupt vectors.
void main(void)
{
    // main code here
}

void ISR1(void)   // this code is stored at an address
{
}

void ISR2(void)   // this code is stored at an address
{
}
• Non-vectored interrupts are also known as polled interrupts, which require that the interrupt handler poll (send a signal to) each device in turn in order to find out which one sent the interrupt request.

• Please note, this is different from polling, though the word is used in the same manner.

• In a polled interrupt, the MCU receives the interrupt and suspends all action, but has to go through all the devices to find which one is responsible for it.
• Non-vectored interrupts are generally raised by slow (I/O) devices. In this case there is always a specific handler that needs to be executed, hence no need to pass a vector with the address of the handler.
• Many times, an embedded system will have multiple interrupts being generated at the same time. Some will of course be more important than others.

• The MCU in the system usually stores interrupt priority levels in an array. These levels will have been programmed by the user in the ISR.

• The interrupts will be processed according to their priority levels and whether they are masked or not.

Interrupt Priority Levels: Maskable & Unmaskable Interrupts
• A maskable interrupt is one that can be delayed or
rejected by the MCU.
• A non-maskable interrupt (NMI) cannot be ignored and must be processed immediately.

• Consider a chemical processing plant with a temperature and pressure monitor wired to an MCU. Interrupt signals indicating power failure, pressure overload and temperature overload will almost always be non-maskable and will be given high priority levels, because ignoring them could cause fatal injuries and damage to property.
DMA interrupts
• DMA = Direct Memory Access.
• When reading from/writing to memory is part of a computer’s
program, time must be set aside to do this and the MCU will be
occupied during this time. If the data to be written is large, time
wastage will occur and is undesirable especially when the MCU
has more important or urgent tasks at hand.

• DMA is a technique in which the CPU of the MCU only triggers certain hardware to start the R/W operation while the MCU performs other tasks. When the operation is complete, a DMA interrupt is sent to indicate that the operation is done. The interrupt generation and the whole R/W operation are handled by a DMA controller. DMA controllers are mainly found in graphics cards and HDDs.
Microprocessor-Based Systems

Assembly Language Programming


High-Level Language
Most programming nowadays is done using so-called
"high-level" languages (such as FORTRAN, BASIC,
COBOL, PASCAL, C, C++, JAVA, SCHEME, Lisp,
ADA, etc.)
These languages deliberately “hide” from a
programmer many details concerning HOW his
problem actually will be solved by the underlying
computing machinery
The BASIC language
Some languages allow programmers to forget about
the computer completely!
The language can express a computing problem with a
few words of English, plus formulas familiar from
high-school algebra
The example in BASIC
1 LET X = 4
2 LET Y = 5
3 LET Z = X + Y
4 PRINT X, "+", Y, "=", Z
5 END

Output: 4+5=9
The C language
Other high-level languages do require a small amount
of awareness by the program-author of how a
computation is going to be processed
For example, that:
- the main program will get “linked” with a
“library” of other special-purpose subroutines
- instructions and data will get placed into
separate sections of the machine’s memory
- variables and constants get treated differently
- data items have specific space requirements
Same example: rewritten in C

#include <stdio.h>      // needed for printf()

int x = 4, y = 5;       // initialized variables
int z;                  // uninitialized variable

int main()
{
    z = x + y;
    printf( "%d + %d = %d \n", x, y, z );
    return 0;
}
“ends” versus “means”
Key point: high-level languages let programmers
focus attention on the problem to be solved, and not
spend effort thinking about details of "how" a
particular piece of electrical machinery is going to carry
out the pieces of a desired computation
Key benefit: their problem gets solved sooner
(because their program can be written faster)
Programmers don’t have to know very much about
how a digital computer actually works
A machine’s own language
For understanding how computers work, we need
familiarity with the computer’s own language (called
“machine language”)
It’s LOW-LEVEL language (very detailed)
It is specific to a machine’s “architecture”
It is a language “spoken” using voltages
Humans represent it with zeros and ones
Hence assembly language
There are two key ideas:
-- mnemonic opcodes: we use abbreviations of
English language words to denote operations
-- symbolic addresses: we invent “meaningful”
names for memory storage locations we need
These make machine-language understandable to
humans – if they know their machine’s design
Let’s see our example-program, rewritten using actual
“assembly language” for Intel’s Pentium
Example of machine-language

Here’s what a program-fragment looks like:

10100001 10111100 10010011 00000100
00001000 00000011 00000101 11000000
10010011 00000100 00001000 10100011
11000000 10010100 00000100 00001000

It means: z = x + y;
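Decoding those bytes under the usual IA-32 encoding (A1 = load EAX from a 32-bit address, 03 05 = add a memory operand to EAX, A3 = store EAX to an address) gives, in Intel syntax, the Pentium assembly the previous slide promised. The addresses are simply wherever the linker happened to place x, y and z:

```asm
mov  eax, [0x080493BC]   ; EAX = x
add  eax, [0x080493C0]   ; EAX = x + y
mov  [0x080494C0], eax   ; z = EAX
```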
Simplified Block Diagram

  Central Processing          Main
         Unit                Memory
          |                    |
  --------------------------------------  system bus
     |         |         |         |
    I/O       I/O       I/O       I/O
   device    device    device    device
Assembly Language
Advantages
Gives the best results with the least expensive
microcontrollers.
Lets you specify the exact instructions that the
CPU will follow.
Gives control on time and memory for each
step of the program.
Simpler than BASIC or C if programming
experience is limited.
What are the disadvantages?
Assembly Language
Components of an Assembly Program
Assembler
 • Translates assembly language to object code
 • Translates symbolic values into numerical values
 • Keeps track of the numerical values of all symbols
Assembly language instructions
 • Translated to machine language
 • To be downloaded to the 'target' processor
Comments
 • Meaningful comments within assembly code
General Program Development
Development Process
Elements of an AL Statement
Label
Mnemonic (Opcode)
Operand
Comment (optional)
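The four fields can be seen in a single statement. The line below is an illustrative PIC-style instruction (the label name and literal are invented for the example):

```asm
LOOP1:   MOVLW   0x0A        ; load the literal 10 into W
;^label  ^mnemonic ^operand    ^comment
```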
Label Field
Mnemonic Field
Operand Field
Operand Field (cont’d)
Comment Field
