8/13/2012
Embedded Systems
Microprocessor
Embedded Systems
Low unit cost, in part because the manufacturer spreads NRE (non-recurring engineering) cost over large numbers of units.
Carefully designed, since higher NRE is acceptable.
Can yield good performance, size, and power.
Basic Architecture
[Figure: basic processor architecture. The processor contains a control unit (controller, PC, IR, control/status lines) and a datapath (ALU, registers), connected to memory and I/O.]
Evolution
Intel Processors
-contd
1950s: IBM instituted a research program.
1964: Release of System/360.
Mid-1970s: Improved measurement tools demonstrated on CISC.
1971: Intel released its first processor, the Intel 4004, for use in calculators.
1975: MC6800 released, the first processor with index registers.
1975: 801 project initiated at IBM's Watson Research Center.
1979: 32-bit RISC microprocessor (801) developed, led by Joel Birnbaum.
1979: MC68000, a 32-bit processor with 16-bit buses and a protected mode of operation.
1981: MIPS-I developed at Stanford, RISC-I at Berkeley.
1988: RISC processors had taken over the high end of the workstation market.
Early 1990s: IBM's POWER (Performance Optimization With Enhanced RISC) architecture introduced with the RISC System/6000; the AIM (Apple, IBM, Motorola) alliance formed, resulting in PowerPC.
Architectural Variants
Von Neumann vs Harvard Architecture:
[Figure: Harvard architecture — processor with separate program memory and data memory. Von Neumann architecture — processor with a single memory holding both program and data.]
The Harvard architecture allows two simultaneous memory fetches. Most DSPs and embedded controllers use the Harvard architecture for streaming data:
greater memory bandwidth; more predictable bandwidth.
Most general-purpose computers use the von Neumann architecture. In certain embedded applications, where the program is more-or-less hard-wired, the Harvard architecture is advantageous.
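The bandwidth advantage can be illustrated with a deliberately simplified cycle-count model. This is a sketch under assumed conditions (one memory access per cycle, function names ours, not from any datasheet):

```python
# Toy cycle-count comparison: a von Neumann machine shares one memory
# port, so an instruction fetch and a data access cannot overlap;
# a Harvard machine has separate program and data ports, so they can.
# (Illustrative model only -- real memory systems are far more complex.)

def cycles_von_neumann(n_instructions, data_accesses):
    # Every fetch and every data access takes a turn on the single bus.
    return n_instructions + data_accesses

def cycles_harvard(n_instructions, data_accesses):
    # Fetches and data accesses proceed in parallel on separate buses.
    return max(n_instructions, data_accesses)

# A streaming-DSP-style loop: one data access per instruction.
print(cycles_von_neumann(1000, 1000))  # 2000 cycles
print(cycles_harvard(1000, 1000))      # 1000 cycles
```

The `max` in the Harvard case is what "two simultaneous memory fetches" buys: the slower of the two streams sets the pace, rather than their sum.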
-contd
RISC vs CISC. Complex instruction set computer (CISC):
many addressing modes and many operations. Simpler programming and less program space, but a complex processor with a control-store (microcoded) control unit.
[Figure: laundry analogy for pipelining. Eight loads of laundry pass through the wash/dry stages. Non-pipelined: each load completes entirely before the next begins. Pipelined: loads overlap, with a new load starting in each time slot.]
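The laundry analogy reduces to a simple timing formula: with k stages of one cycle each, n items take n·k cycles unpipelined, but only k + (n − 1) cycles pipelined, since after the first item fills the pipeline one item finishes per cycle. A minimal sketch:

```python
# Illustrative timing model for the laundry/pipeline analogy:
# k pipeline stages, n items, one cycle per stage, no stalls.

def time_nonpipelined(n, k):
    # Each item occupies the whole machine for k cycles before the next starts.
    return n * k

def time_pipelined(n, k):
    # First item takes k cycles to drain; each later item finishes one cycle after.
    return k + (n - 1)

n, k = 8, 4  # e.g. 8 loads of laundry, 4 stages (wash, dry, fold, put away)
print(time_nonpipelined(n, k))  # 32
print(time_pipelined(n, k))     # 11
```

For large n the speedup approaches k, the number of stages, which is why deeper pipelines (like the C3's 12 stages, below) raise throughput.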
[Figure: processor pipelining — fetch instruction, decode, and execute stages; eight instructions overlapped in time, shown for one pipeline and for two pipelines.]
Typical Processors-VIA C3
Architecture-VIA C3
The VIA C3 is a processor by VIA Technologies based on the x86 ISA. Compared to the Pentium, it is more power efficient and hence better suited to the embedded market. Low power consumption and effective heat dissipation make it suitable for personal electronics and mobile phones. Good performance for Internet and digital-media applications, video conferencing, and web browsing. Twelve pipeline stages. More than one level of cache memory. Available in an EBGA package.
Architectural Details
Instruction Fetch Unit
Fetches instructions from the I-cache or the external bus. Three pipeline stages in the Instruction Fetch Unit deliver aligned instructions into the instruction decode buffers. Each instruction is predecoded as it comes out of the cache; predecode is overlapped with other required operations and thus effectively takes no time. The fetched instruction data is placed sequentially into multiple buffers. The TLB (Translation Look-aside Buffer) holds the addresses of recently accessed pages in memory. The TLB enables faster computing because it allows address processing to take place independently of the normal address-translation pipeline.
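The TLB's role can be sketched as a small cache in front of the page table. This toy model (4-KB pages, dictionary-based tables, all names ours, not VIA's implementation) shows the hit/miss fast path:

```python
# Hypothetical sketch of a TLB: a small cache mapping virtual page
# numbers (VPNs) to physical frame numbers, consulted before the
# (much slower) page-table walk.

PAGE_SHIFT = 12  # assume 4-KB pages for illustration

def translate(vaddr, tlb, page_table):
    vpn = vaddr >> PAGE_SHIFT
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    if vpn in tlb:                      # TLB hit: fast path
        frame = tlb[vpn]
    else:                               # TLB miss: walk the page table
        frame = page_table[vpn]
        tlb[vpn] = frame                # cache the translation for next time
    return (frame << PAGE_SHIFT) | offset

page_table = {0x1: 0x7, 0x2: 0x9}
tlb = {}
print(hex(translate(0x1234, tlb, page_table)))  # 0x7234 (miss, then cached)
print(0x1 in tlb)                               # True
```

Because only the page number is translated, the low offset bits pass through unchanged, which is what lets translation overlap with other address processing.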
-contd
Instruction Decode Unit
Converts instruction bytes into an internal execution format across two pipeline stages. Branch operations are identified here, and the processor starts fetching instructions from the new location. The F stage decodes and formats an instruction into an intermediate format. The internal-format instructions are placed into a five-deep FIFO queue, the FIQ. The X stage translates an intermediate-form instruction from the FIQ into the internal microinstruction format. Instruction fetch, decode, and translation are thus made asynchronous from execution via a five-entry FIFO queue.
-Contd Branch Prediction (BP)- Branch History Table (BHT) & Branch Target Buffer (BTB)
The IFU pre-fetches instructions into the instruction-fetch buffers at different stages and sends them for decoding. On a branch, all prefetched instructions are abandoned and a new set must be loaded. Predicting the branch earlier in the pipeline saves the time spent flushing the current instructions and fetching new ones. BP is a technique that attempts to infer the proper next instruction address, knowing only the current one. Typically it uses a BTB, a small associative memory that watches the instruction-cache index and tries to predict which index should be accessed next, based on branch history stored in another set of buffers known as the BHT. This is carried out in the F stage.
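The BTB/BHT pairing described above can be sketched with a per-branch 2-bit saturating counter. The class, default counter value, and thresholds below are illustrative textbook assumptions, not VIA's exact design:

```python
# Hedged sketch of branch prediction: the BTB maps a branch's address
# to its last-seen target, and a 2-bit saturating counter per branch
# (a tiny BHT) predicts taken vs. not-taken.

class BranchPredictor:
    def __init__(self):
        self.btb = {}   # branch PC -> predicted target address
        self.bht = {}   # branch PC -> 2-bit counter (0..3)

    def predict(self, pc):
        counter = self.bht.get(pc, 1)        # default: weakly not-taken
        if counter >= 2 and pc in self.btb:
            return self.btb[pc]              # predict taken: fetch the target
        return pc + 4                        # predict not-taken: fall through

    def update(self, pc, taken, target):
        # Called when the branch actually resolves, later in the pipeline.
        counter = self.bht.get(pc, 1)
        self.bht[pc] = min(counter + 1, 3) if taken else max(counter - 1, 0)
        if taken:
            self.btb[pc] = target

bp = BranchPredictor()
bp.update(0x100, True, 0x40)
bp.update(0x100, True, 0x40)
print(hex(bp.predict(0x100)))  # 0x40 after two taken outcomes
```

The saturating counter is what makes the predictor tolerate a single anomalous outcome inside a long taken (or not-taken) run, instead of flipping its prediction immediately.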
-Contd
Integer Unit
Decode stage (R): Microinstructions are decoded, integer register files are accessed, and resource dependencies are evaluated.
Addressing stage (A): Memory addresses are calculated and sent to the D-cache (data cache).
Cache access stages (D, G): The D-cache and D-TLB (data translation look-aside buffer) are accessed; aligned load data is returned at the end of the G stage.
Execute stage (E): Integer ALU operations are performed. All basic ALU functions take one clock except multiply and divide.
Store stage (S): Integer store data is captured in this stage and placed in a store buffer.
Write-back stage (W): The results of operations are committed to the register file.
A separate 80-bit floating-point execution unit can execute floating-point instructions (FPI) in parallel with integer instructions. FPIs are passed from the integer pipeline to the FPU through a separate FIFO queue. This queue, which runs at the processor clock speed, decouples the slower-running FP unit from the integer pipeline so that the integer pipeline can continue to process instructions overlapped with FP instructions. Basic arithmetic floating-point instructions (add, multiply, divide, square root, compare, etc.) are represented by a single internal floating-point instruction. Certain little-used and complex floating-point instructions (sin, tan, etc.) are implemented in microcode and are represented by a long stream of instructions coming from the ROM. These instructions tie up the integer instruction pipeline such that integer execution cannot proceed until they complete.
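The decoupling FIFO can be sketched as a producer/consumer queue: the integer pipeline hands FP instructions off and keeps going, while the slower FP unit drains the queue at its own rate. Queue depth, instruction names, and return values here are illustrative assumptions, not VIA's actual design:

```python
from collections import deque

# Sketch of a pipeline-decoupling FIFO between the integer pipeline
# (producer) and the FP unit (consumer). The integer side stalls only
# when the queue is full.

FP_QUEUE_DEPTH = 4           # assumed depth, for illustration only
fp_queue = deque()

def integer_issue(instr):
    """Issue one instruction, seen from the integer pipeline's side."""
    if instr.startswith("f"):             # FP instruction: hand it off
        if len(fp_queue) < FP_QUEUE_DEPTH:
            fp_queue.append(instr)
            return "queued-for-fpu"       # integer pipeline keeps going
        return "stall"                    # queue full: must wait for the FPU
    return "executed"                     # integer op completes in-line

print(integer_issue("fadd"))  # queued-for-fpu
print(integer_issue("add"))   # executed
```

The key point the sketch captures: an FP instruction costs the integer pipeline only an enqueue, not the FP unit's full latency, unless the queue backs up.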
-Contd
MMX & 3D Unit. A separate execution unit handles the MMX-compatible instructions. One MMX instruction can issue into the MMX unit every clock. The MMX multiplier is fully pipelined and can start one non-dependent MMX multiply[-add] instruction (which consists of up to four separate multiplies) every clock. Other MMX instructions execute in one clock. A multiply followed by a dependent MMX instruction requires two clocks. A separate execution unit handles some specific 3D instructions. These instructions provide assistance for graphics transformations via SIMD (Single Instruction, Multiple Data) single-precision floating-point capabilities. One 3D instruction can issue into the 3D unit every clock. The 3D unit has two single-precision floating-point multipliers and two single-precision floating-point adders; other functions such as conversions, reciprocal, and reciprocal square root are also provided. The multiplier and adder are fully pipelined and can start a nondependent 3D instruction every clock.
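The SIMD idea behind the MMX and 3D units, one instruction operating on several packed lanes at once, can be illustrated with a toy four-lane multiply (a pure-Python sketch of the concept, not actual MMX semantics):

```python
# SIMD in miniature: one "instruction" applies the same operation to
# every lane of a packed vector. Real MMX packs lanes into one 64-bit
# register; a list of four floats stands in for that here.

def simd_mul(a, b):
    # One issue slot performs all four lane multiplies "in parallel".
    return [x * y for x, y in zip(a, b)]

print(simd_mul([1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]))
# [5.0, 12.0, 21.0, 32.0]
```

This is why a single 3D instruction per clock can still deliver several floating-point results per clock: the parallelism is inside the instruction, not between instructions.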
The VIA C3 uses the same x86 instruction set as Intel processors and has a pipelined architecture. Because of the uncertainties associated with branching, the overall instruction execution time is not fixed (so it is unsuitable for some real-time applications that need precisely predictable execution speed). It handles a very complex instruction set, and the overall power consumption, because of the processor's complexity, is higher.
High-performance superscalar MP
As many as three instructions in execution per clock.
Single-clock-cycle execution for most instructions.
Pipelined FPU for all single-precision and most double-precision operations.
Three independent execution units and two register files.
BPU featuring static branch prediction.
A 32-bit IU.
Fully IEEE 754-compliant FPU for both single- and double-precision operations.
32 GPRs for integer operands.
32 FPRs for single- or double-precision operands.
High instruction and data throughput
Zero-cycle branch capability.
Instruction unit capable of fetching eight instructions per clock from the cache.
An eight-entry instruction queue that provides look-ahead capability.
Interlocked pipelines with feed-forwarding that control data dependencies in hardware.
Unified 32-Kbyte cache: eight-way set-associative, physically addressed, LRU replacement.
Memory unit with a two-element read queue and a three-element write queue.
Run-time reordering of loads and stores.
BPU that performs condition register (CR) look-ahead operations.
Address translation facilities for both data and instructions, through the UTLB/BTB and ITLB respectively.
52-bit virtual address; 32-bit physical address.
Summary