
Basic Computer Architecture

Lesson 1 Positional Number System


Objectives
At the end of this unit students will be able to:
- Determine the value of a digit in a positional notation using a given base
- Determine the place value of the least significant digit and most significant digit in a given base
- Determine the range of values that can be represented using a given number of digits
- Convert to/from decimal and other number bases

Summary of Content

Positional Notation
Positional notation or place-value notation is a method of representing or encoding numbers in which the position of a digit affects its value. The base is the number of unique symbols, called digits, that a positional numeral system uses to represent numbers. For example, the base of the decimal system is 10, because it uses the ten digits 0 through 9. The base is an integer greater than 1, and the highest digit of a positional number system has a value one less than the base. For example, in base 10 the highest digit has the value 10 - 1 = 9.

The number represented by a digit is the digit value multiplied by the place value. For example, the value of the digit 9 in the string 94 is 9 × 10 = 90. The value represented by a string of digits is the sum of the numbers represented by each digit in the string; the string 94 thus represents the number 90 + 4. Place values are the base raised to the ith power, where i is the position of the digit counting from right to left, starting with 0; i.e. the rightmost position is position 0 and has the place value 1 (b^0 = 1).

As an example of usage, the number 465 in its respective base b (which must be at least base 7, because the highest digit in it is 6) is equal to:

(4 × b^2) + (6 × b^1) + (5 × b^0)

If the number 465 is in base 10, it equals:

4 × 10^2 + 6 × 10^1 + 5 × 10^0 = 4 × 100 + 6 × 10 + 5 × 1 = 465

If, however, the number is in base 7, it equals:

4 × 7^2 + 6 × 7^1 + 5 × 7^0 = 4 × 49 + 6 × 7 + 5 × 1 = 243

Hence, 465_7 = 243_10.

The least significant digit is the rightmost digit and has the place value b^0 = 1 (any value raised to the power 0 is 1).
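The place-value sum above can be checked with a short Python sketch (the function name `digits_to_value` is ours; Python's built-in `int()` with a base argument performs the same conversion):

```python
def digits_to_value(digits, base):
    """Sum of digit * base**position, positions counted from the right."""
    return sum(d * base**i for i, d in enumerate(reversed(digits)))

print(digits_to_value([4, 6, 5], 10))  # 465
print(digits_to_value([4, 6, 5], 7))   # 4*49 + 6*7 + 5 = 243
print(int("465", 7))                   # built-in equivalent: 243
```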

The most significant digit is the leftmost digit and has the place value b^(n-1), where n is the number of digits in the sequence. For example, in base 10 using 3 digits, the most significant digit has a place value of 10^(3-1) = 10^2 = 100. The range of values, lowest to highest, that can be represented in a given base b using n digits is 0 to b^n - 1. For example, the range of values that can be represented in base 2 using 5 digits is 0 to 2^5 - 1, i.e. 0 to 31. The total number of unique values that can be represented in a given base b using n digits is b^n. For example, the number of values that can be represented in base 2 using 5 digits is 2^5 = 32 (the values 0 to 31 inclusive).
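The count and range formulas can be confirmed directly (variable names are ours):

```python
base, n = 2, 5
count = base ** n         # number of unique values with n digits in this base
highest = base ** n - 1   # largest representable value
print(count, highest)     # 32 31
```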

Base Conversion
Decimal to Base-b

To convert from a base-10 integer numeral to its base-b equivalent:
1. Divide the number by b; the remainder is the least significant digit.
2. Divide the (integer) result again by b; its remainder is the next most significant digit.
3. Repeat Step 2 until the result of further division becomes zero.

Example: Convert 123_10 to base 2; i.e. b = 2.

123 / 2 = 61 with a remainder of 1
 61 / 2 = 30 with a remainder of 1
 30 / 2 = 15 with a remainder of 0
 15 / 2 =  7 with a remainder of 1
  7 / 2 =  3 with a remainder of 1
  3 / 2 =  1 with a remainder of 1
  1 / 2 =  0 with a remainder of 1

Reading the remainders from last to first: 123_10 = 1111011_2
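The repeated-division algorithm can be sketched in Python (the function name `to_base` is ours):

```python
def to_base(n, b):
    """Repeatedly divide by b; the remainders are the digits, least significant first."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, b)     # quotient carries on, remainder is the next digit
        digits.append(str(r))
    return "".join(reversed(digits))  # remainders come out least-significant first

print(to_base(123, 2))  # 1111011
```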
Base-b to Decimal

To convert from base b to base 10, apply the preceding algorithm in reverse. The digits of the number are used one by one, starting with the most significant (leftmost) digit.
1. Begin with the value 0.
2. Multiply the prior value by b and add the next digit, going from left to right, to produce the next value.
3. Repeat Step 2 until there are no more digits.


Example: Convert 1111011_2 to decimal; i.e. b = 2.

 0 × 2 + 1 =   1
 1 × 2 + 1 =   3
 3 × 2 + 1 =   7
 7 × 2 + 1 =  15
15 × 2 + 0 =  30
30 × 2 + 1 =  61
61 × 2 + 1 = 123

Hence, 1111011_2 = 123_10
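The multiply-and-add steps above (Horner's method) translate directly to Python (the function name `from_base` is ours):

```python
def from_base(digits, b):
    """Start at 0; multiply by b and add the next digit, left to right."""
    value = 0
    for d in digits:
        value = value * b + int(d)
    return value

print(from_base("1111011", 2))  # 123
```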


Basic Computer Architecture


Lesson 2 Introduction to Computer Architecture
Objective
At the end of this lesson the student should be able to:
- Define computer architecture
- List the basic parts of a computing device
- Distinguish between the two basic categories of architectures
- Draw a diagram to illustrate the system bus model
- Briefly describe the functions of the components in the system bus model
- Give the advantage and disadvantage of the system bus model

Summary of Content
Computer Architecture is the art of assembling logical elements into a computing device; the specification of the relationship between parts of a computer system.

Basic Types of Architecture


Fixed Program
A special-purpose computer having a program permanently wired in. Changing the program of a fixed-program machine requires re-wiring, re-structuring, or re-designing the machine.

Stored Program
A general-purpose computer that includes an instruction set, and can store in memory a set of instructions (a program) that details how to perform the computation.

Parts of General Purpose Computer


At the most basic level, a stored program computer is a device consisting of three parts:
1. A processor to interpret and execute programs
2. Memory to store both data and programs
3. A mechanism for transferring data to and from the outside world



System Bus Model


[Diagram: CPU, Memory, Input and Output devices, all connected to a common System Bus]

The CPU carries out program instructions sequentially. Memory stores programs and data. Input/Output transfers data to/from the outside world. The System Bus moves data (and instructions) between the CPU and the other devices. The advantage of this model is its simplicity. The disadvantage is that it uses a single bus to transfer both data and instructions, which leads to a bottleneck (the von Neumann bottleneck).


Basic Computer Architecture


Lesson 3 Data Representation and Storage
Objective
At the end of this lesson students will be able to:
- Determine the number of bits required to encode data using a specified scheme
- Encode/decode an alpha-numeric string of characters using the ASCII coding scheme
- Distinguish between a bit, a byte and a word
- Convert between various units of storage
- Determine the number of memory units given the memory size and the architecture's word size
- Calculate the architecture's address space given the relevant specifications

Summary of Content
A bit (binary digit) is the smallest unit of storage in a computer. It can be either a 1 or a 0, and is represented by two different voltage levels in an electrical circuit. In order to represent and manipulate numeric values in a computer system it is necessary to use some coding scheme whereby each number can be represented or "encoded" by a unique sequence of bits. A "binary encoding system" is a one-to-one function for encoding a set of related data objects (for example, all integer values between 0 and 100 inclusive, or all the letters of the alphabet) into unique binary patterns.

Number of Bits Required for Encoding


A single bit has only two possible patterns: 0 and 1. With two bits, four patterns become possible: 00, 01, 10, and 11. Each time another bit is added, the number of patterns doubles; half of the new patterns have the additional bit set to 1 and the other half have it set to 0. The number of unique patterns is given by the formula 2^n, where n is the number of bits. For example, 8 bits give 2^8 = 256 bit patterns.

In order to encode a collection of data objects, the encoding scheme must have at least as many patterns available as there are objects to be encoded. For example, to encode the integer values from 0 to 9, a minimum of 4 bits (providing 2^4 = 16 patterns) is required. The minimum number of bits required is thus the smallest n for which 2^n is at least the number of values in the set. A quick estimate is to convert the number of values to binary and count the bits in the binary string: to represent 25 values you need at least 5 bits, since 25 = 11001_2 has five bits (2^5 = 32 >= 25). (When the number of values is an exact power of two, one fewer bit suffices: 16 = 10000_2 has five bits, but 4 bits already give 16 patterns.)
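The minimum-bits rule can be sketched in Python using the integer `bit_length` method (the function name `bits_needed` is ours):

```python
def bits_needed(count):
    """Smallest n with 2**n >= count, i.e. enough patterns for `count` values."""
    return max(1, (count - 1).bit_length())

print(bits_needed(10))  # 4 bits for the digits 0-9 (2**4 = 16 >= 10)
print(bits_needed(25))  # 5 bits (2**5 = 32 >= 25)
print(bits_needed(16))  # 4 bits suffice for an exact power of two
```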

Storage Units
Data in a computer is represented by a series of bits grouped together to form distinct patterns. A byte is a group of eight bits. A byte can be used to represent up to 256 different values (2^8 = 256).

A word is the number of bits that a computer accesses and manipulates together as a unit. The number of bits in a word depends on the design of the system and is typically a multiple of 8 bits.

Prefix
An SI prefix is a name that precedes a basic unit of measure to indicate a multiple of the unit. Each prefix has a unique symbol that is prepended to the unit symbol. The binary prefixes kibi, mebi, gibi and greater are often used in combination with the storage size units bit (b) and byte (B).

Text   Symbol   Factor                   Decimal (SI) counterpart
Kibi   Ki       2^10 = 1024              kilo (k) = 10^3 = 1000
Mebi   Mi       2^20 = 1048576           mega (M) = 10^6 = 1000000
Gibi   Gi       2^30 = 1073741824        giga (G) = 10^9 = 1000000000
Tebi   Ti       2^40 = 1099511627776     tera (T) = 10^12 = 1000000000000

Examples:

1 Kib = 1 kibibit = 2^10 bits = 1024 bits
1 KiB = 1 kibibyte = 2^10 bytes = 1024 bytes = 1024 × 8 bits = 8192 bits
1 64-bit word contains 64 bits or 8 bytes
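These unit conversions are easy to verify (variable names are ours):

```python
KiB = 2 ** 10        # bytes in one kibibyte
print(KiB)           # 1024
print(KiB * 8)       # 8192 bits in 1 KiB
print(64 // 8)       # 8 bytes in one 64-bit word
```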

Memory
Memory consists of a sequence of re-writeable bits numbered from 0 to s-1, where s is the size of memory measured in bits. For example, 1 MiB = 2^20 × 8 bits = 8388608 bits. The bits are divided into units, each the size of a word, and each memory unit has an address number that is used to indicate its location. To determine the number of memory units, divide the size of memory in bits by the architecture's word size. For example, given 1 MiB (2^20 × 8 bits) of memory and a 16-bit word, there are 524288 (2^20 × 8 / 16) memory units, numbered from 0 to 524287; i.e. memory address 0 contains the first 16 bits, and so on.

Each memory unit is addressed using a unique combination of bits. The number of bits used to indicate an address depends on the architecture. The architecture's address space is the range of memory addresses that the CPU can specify, given the number of address bits defined by the architecture. Using n address bits, the size of the address space is 2^n and the range of addresses is 0 up to 2^n - 1. For example, given a 5-bit address, the size of the address space is 2^5 = 32 memory units and the range of addresses is 0 to 31 (31 = 2^5 - 1).
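Both calculations above can be reproduced in a few lines (variable names are ours):

```python
WORD_BITS = 16
mem_bits = 2**20 * 8              # 1 MiB expressed in bits
units = mem_bits // WORD_BITS     # number of 16-bit memory units
print(units)                      # 524288, addressed 0 .. 524287

address_bits = 5
space = 2 ** address_bits         # size of the address space
print(space, space - 1)           # 32 31
```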


Basic Computer Architecture


Lesson 4 CPU Architecture and Instruction Set
Objectives
At the end of this unit students will be able to:
- List and describe the major parts of the CPU
- State the purpose of the five main CPU registers
- Outline the steps in the fetch-decode-execute cycle
- Explain what an instruction set architecture is

Summary of Content

Central Processing Unit (CPU)


Control Unit (CU): controls program execution.
Arithmetic Logic Unit (ALU): performs mathematical and logical operations.
Registers: temporary storage areas inside the CPU that help the computer maintain state during the performance of a task. Five of the most important registers are:
- Program Counter (PC): stores the memory address of the next instruction to be executed.
- Instruction Register (IR): stores a copy of the instruction that is currently executing.
- Accumulator (AC): stores the results of an operation.
- Memory Address Register (MAR): stores the address used for reading data from and writing data to memory.
- Memory Buffer Register (MBR): stores a copy of data read from, or to be written to, memory.

Fetch-Decode-Execute Cycle
The system bus model runs programs in what is known as the von Neumann fetch-decode-execute cycle, which describes how the machine works. The cycle starts as soon as power is applied to the system, using an initial PC value that is predefined for the system architecture. One iteration of the cycle is as follows:
1. The control unit fetches the next program instruction from memory and stores it in the instruction register (IR). The memory address of the instruction to be fetched is read from the program counter (PC). After the instruction is fetched, the value in the program counter is incremented so that the next instruction can be fetched during the next cycle.
2. The instruction is decoded into a command the CPU can perform. Any data (operand) required to execute the instruction is copied into CPU registers: the address of the data to be read is stored in the memory address register (MAR), and after the data is read from memory it is stored in the memory buffer register (MBR).

3. The CPU executes the instruction and places the results in the accumulator (AC). After executing the instruction, the cycle continues by fetching the next instruction pointed to by the program counter.

Instruction Set Architecture


An instruction set architecture (ISA) is the part of the computer architecture related to programming. An ISA includes a specification of the set of opcodes (machine language), i.e. the native commands implemented by a particular processor.

Instruction Types
Some operations available in most instruction sets include:

Data handling and memory operations
- set a register to a fixed constant value
- move data from a memory location to a register, or vice versa
- read and write data from hardware devices

Arithmetic and logic
- add, subtract, multiply, or divide the values of two registers, placing the result in a register
- compare two values in registers

Control flow
- branch to another location in the program and execute instructions there
- conditionally branch to another location if a certain condition holds

Design Decisions for Instruction Sets


A machine instruction has an opcode and zero or more operands and can be encoded in a variety of ways. Architectures are differentiated from one another by the number of bits allowed per instruction, the number of operands allowed per instruction, and the types of instructions and data each can process. When a computer architecture is in the design phase, the instruction set format must be determined before many other decisions can be made:
- the number of bits in the instruction impacts the architecture's word size.
- the number and types of operands impact the architecture's address space.
- the number of bits in the opcode impacts the number of operations the processor can perform.

Instructions on current architectures can be formatted in two ways:
- Fixed length: instructions include the opcode and a fixed number of operands; perform simple operations; use more space but are fast and result in better performance. Reduced Instruction Set Computer (RISC).
- Variable length: the number of operands is dependent on the operation; a single instruction can perform a complex task; more complex to decode but saves storage space. Complex Instruction Set Computer (CISC).
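These trade-offs can be illustrated with a hypothetical fixed-length format (the 16-bit width and 4-bit opcode here are our assumptions, chosen only for illustration):

```python
INSTR_BITS = 16                       # hypothetical fixed-length instruction
OPCODE_BITS = 4
OPERAND_BITS = INSTR_BITS - OPCODE_BITS

print(2 ** OPCODE_BITS)               # 16 possible operations
print(2 ** OPERAND_BITS)              # 4096 addressable locations

# decode one 16-bit instruction into its opcode and operand fields
instr = 0b0011_0000_0000_0101         # opcode 3, operand 5
opcode = instr >> OPERAND_BITS
operand = instr & ((1 << OPERAND_BITS) - 1)
print(opcode, operand)                # 3 5
```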


A sample instruction set:

001 (decimal 1)  LOAD X
  Copy the value at memory location X into the AC.
  X >> MAR; [MAR] >> MBR; MBR >> AC

010 (decimal 2)  STORE X
  Copy the value in the AC to memory location X.
  X >> MAR; AC >> MBR; MBR >> [MAR]

011 (decimal 3)  ADD X
  Add the value at memory location X to the value in the AC.
  X >> MAR; [MAR] >> MBR; AC + MBR >> AC

100 (decimal 4)  SUB X
  Subtract the value at memory location X from the value in the AC.
  X >> MAR; [MAR] >> MBR; AC - MBR >> AC

101 (decimal 5)  JUMP X
  Copy X into the PC (so that the instruction at memory location X is executed next).
  X >> PC

110 (decimal 6)  JUMPIF X
  Perform JUMP X if the value in the AC = 0.
  IF AC = 0: X >> PC
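The sample instruction set above can be exercised with a small, hedged Python sketch of the fetch-decode-execute cycle (the encoding of instructions as tuples and the memory layout are our simplifications, not part of any real machine):

```python
# Toy simulator for the six sample instructions.
# Memory words hold (opcode, operand) tuples for instructions, or plain integers for data.
LOAD, STORE, ADD, SUB, JUMP, JUMPIF = 1, 2, 3, 4, 5, 6

def run(memory, pc=0):
    ac = 0
    while pc < len(memory):
        ir = memory[pc]            # fetch: copy the instruction into the IR
        pc += 1                    # increment the PC
        if not isinstance(ir, tuple):
            break                  # reached data, not an instruction; stop
        op, x = ir                 # decode into opcode and operand
        if op == LOAD:
            ac = memory[x]         # [X] -> AC (via MAR/MBR in the real model)
        elif op == STORE:
            memory[x] = ac         # AC -> [X]
        elif op == ADD:
            ac += memory[x]
        elif op == SUB:
            ac -= memory[x]
        elif op == JUMP:
            pc = x
        elif op == JUMPIF and ac == 0:
            pc = x
    return memory

# Program: mem[7] = mem[5] + mem[6]
prog = [(LOAD, 5), (ADD, 6), (STORE, 7), (JUMP, 8),
        0,          # padding so the data starts at address 5
        20, 22, 0]
print(run(prog)[7])  # 42
```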

Input Output Subsystem


A computer interacts with the outside world using its input and output (I/O) devices. These I/O devices usually differ in the amount of data transferred, the speed of transfer, and how often the device requires data transfer. What is required is a system that masks these differences and provides a standard interface to the system bus, allowing efficient transfer of data between system components. We will look at three different I/O methods: programmed I/O, interrupt-driven I/O and direct memory access (DMA).

Programmed I/O, also called polled I/O, is the simplest method. Each I/O device has a CPU register which is used to signal the need for data transfer to or from the device. The CPU polls the I/O devices by running a program that checks the status of these registers (flags) to determine whether there are any I/O requests. If a request is detected, the CPU performs the data transfer itself. The major problem with programmed I/O is the amount of time the CPU has to spend polling the I/O devices; time that could be spent performing other useful tasks.
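The polling idea can be sketched as follows (the device records and flag names are invented for illustration; real polling reads device status registers, not Python dictionaries):

```python
# Toy model of programmed (polled) I/O: the CPU loops over device
# status flags and services any device whose flag is set.
devices = [
    {"name": "keyboard", "ready": False, "data": None},
    {"name": "disk",     "ready": True,  "data": "block-0"},
]

def poll_once(devices):
    """One polling pass: transfer data for every device whose flag is set."""
    transferred = []
    for dev in devices:
        if dev["ready"]:                        # check the status flag
            transferred.append(dev["data"])     # the CPU performs the transfer itself
            dev["ready"] = False                # clear the flag after servicing
    return transferred

print(poll_once(devices))   # ['block-0']
```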

Interrupt-driven I/O solves this problem by including a device, the interrupt controller, which polls the I/O devices. If an I/O request is detected, the interrupt controller sends a signal to the CPU. The CPU responds to this signal by interrupting the program that it is currently running and switching to a device-specific program to perform the data transfer. The CPU returns to the interrupted program after the data transfer is complete.

DMA - The programs that perform data transfers are usually very simple, and can be executed by a special-purpose processor that can read from and write to memory directly (direct memory access). The DMA controller shares control of the system bus with the CPU. In a system which uses DMA, the CPU offloads the tedious I/O operations to this special-purpose processor. When a data transfer is requested, the CPU passes the details to the DMA controller, which intermittently takes control of the bus (cycle stealing) in order to complete the transfer before the I/O device times out.

Programmed, interrupt-driven and DMA are three types of I/O methods used in modern computer systems; each has its strengths and weaknesses. Programmed I/O is usually preferred on embedded systems programmed to perform a specific task, e.g. alarm systems. However, most modern general-purpose computers are interrupt-driven, as this allows for better user response. In a multitasking system where some programs perform mostly calculations and others mostly data transfers, DMA can prove to be an advantage.