UNIT-I
Computer types, Functional units, Basic operational concepts, Bus structures, Data types, Software: Languages and Translators, Loaders, Linkers, Operating systems.
UNIT-II
Register Transfer Language, Register Transfer, Bus and Memory Transfers, Arithmetic Micro-operations, Logic Micro-operations, Shift Micro-operations, Arithmetic Logic Shift Unit.
UNIT-III
Control Memory, Address Sequencing, Microprogram Example, Design of Control Unit.
UNIT-IV
Addition and Subtraction, Multiplication Algorithms, Division Algorithms, Floating Point Arithmetic Operations, Decimal Arithmetic Unit, Decimal Arithmetic Operations.
UNIT-V
Part-A
TOPIC: Computer types
PC (Personal Computer)
A PC can be defined as a small, relatively inexpensive computer designed for an
individual user. PCs are based on the microprocessor technology that enables
manufacturers to put an entire CPU on one chip. Businesses use personal computers
for word processing, accounting, desktop publishing, and for running spreadsheet and
database management applications. At home, the most popular use for personal
computers is playing games and surfing the Internet.
Although personal computers are designed as single-user systems, they are normally
linked together to form a network. In terms of power, nowadays high-end models of the
Macintosh and PC offer the same computing power and graphics capability as low-end
workstations from Sun Microsystems, Hewlett-Packard, and Dell.
Workstation
Like PCs, workstations are single-user computers, but they are typically linked together
to form a local-area network, although they can also be used as stand-alone systems.
Common operating systems for workstations are UNIX and Windows NT.
Minicomputer
It is a midsize multi-processing system capable of supporting up to 250 users
simultaneously.
Mainframe
A mainframe is a very large and expensive computer capable of supporting hundreds or
even thousands of users simultaneously, executing many programs concurrently.
Supercomputer
Supercomputers are among the fastest computers currently available. They are very
expensive and are employed for specialized applications that require immense
amounts of mathematical calculation (number crunching).
Functional units
A computer consists of five functionally independent main parts: input, memory, arithmetic logic
unit (ALU), output, and control unit.
1. Primary memory: This is the memory exclusively associated with the processor; it operates at
electronic speeds, and programs must be stored in it while they are being executed. The
memory contains a large number of semiconductor storage cells, each capable of storing one
bit of information. These cells are processed in groups of fixed size called words. To provide
easy access to a word in memory, a distinct address is associated with each word location.
Addresses are numbers that identify memory locations, and the number of bits in each word is
called the word length of the computer. Programs must reside in the memory during execution.
Instructions and data can be written into the memory or read out under the control of the
processor. Memory in which any location can be reached in a short, fixed amount of time after
specifying its address is called random-access memory (RAM). The time required to access
one word is called the memory access time. Memory that is only readable and whose contents
cannot be altered is called read-only memory (ROM); it typically stores startup code such as
the bootstrap loader for the operating system. Caches are small, fast RAM units coupled with
the processor, often contained on the same IC chip, to achieve high performance. Although
primary storage is essential, it tends to be expensive.
2. Secondary memory: This is used where large amounts of data and programs have to be
stored, particularly information that is accessed infrequently.
Examples: magnetic disks and tapes, optical disks (i.e., CD-ROMs), floppies, etc.
3. Arithmetic logic unit (ALU): Most computer operations, such as addition, subtraction,
multiplication, and division, are executed in the ALU of the processor. The operands are
brought into the ALU from memory and stored in high-speed storage elements called registers;
the operations are then performed in the required sequence according to the instructions. The
control unit and the ALU are many times faster than other devices connected to a computer
system. This enables a single processor to control a number of external devices such as
keyboards, displays, magnetic and optical disks, sensors, and other mechanical controllers.
4. Output unit: This is the counterpart of the input unit. Its basic function is to send the
processed results to the outside world.
Examples: printer, speakers, monitor, etc.
5. Control unit: This is effectively the nerve center that sends signals to the other units and
senses their states. The actual timing signals that govern the transfer of data between the
input unit, processor, memory, and output unit are generated by the control unit.
Basic operational concepts:
1. The program is loaded into memory via the input unit.
2. Execution of the program starts when the program counter (PC) points to the first instruction.
3. The contents of the PC are sent to the memory address register (MAR) and a Read control signal is sent to memory.
4. After the memory access time elapses, the first instruction is read out of memory and loaded into the memory data register (MDR).
5. The contents of the MDR are transferred to the instruction register (IR).
6. The instruction is now ready to be decoded and executed.
7. If the instruction requires an operation that involves the ALU, the operand is fetched from memory by sending the operand's address to the MAR and starting a Read cycle.
8. The operand is then transferred from memory to the MDR.
9. It is then transferred from the MDR to the ALU.
10. After all the operands are fetched, the ALU performs its operation.
11. The result is sent to the MDR, to be stored in memory.
12. The address of where the result will be stored in memory is sent to the MAR and a Write cycle is started.
13. The PC is incremented to point to the next instruction.
NOTE: Normal operation can be preempted by I/O interrupts. In this case, the internal state of the processor (PC, general-purpose registers, and control information) is saved in memory. After the interrupt-service routine is completed, the state of the processor is restored.
Bus structure
Single bus structure: In computer architecture, a bus is a subsystem that transfers data
between components inside a computer, or between computers. Early computer buses
were literally parallel electrical wires with multiple connections, but modern computer
buses can use both parallel and bit-serial connections.
To achieve a reasonable speed of operation, a computer must be organized so that all
its units can handle one full word of data at a given time. When a word of data is
transferred between units, all its bits are transferred in parallel, that is, the bits are
transferred simultaneously over many wires, or lines, one bit per line. A group of lines
that serves as a connecting path for several devices is called a bus. In addition to the
lines that carry the data, the bus must have lines for address and control purposes. The
simplest way to interconnect functional units is to use a single bus, as shown in Figure
1.3.1. All units are connected to this bus. Because the bus can be used for only one
transfer at a time, only two units can actively use the bus at any given time. Bus control
lines are used to arbitrate multiple requests for use of the bus. The main virtue of the
single-bus structure is its low cost and flexibility for attaching peripheral devices.
Systems that contain multiple buses achieve more concurrency in operations by
allowing two or more transfers to be carried out at the same time. This leads to better
performance but at an increased cost.
Parts of a system bus: The processor, memory, and input and output devices are connected by
the system bus, which consists of separate buses as shown in figure 1.3.2. They are:
(i)Address bus: Address bus is used to carry the address. It is a unidirectional bus. The
address is sent from CPU to memory and I/O port and hence unidirectional. It consists
of 16, 20, 24 or more parallel signal lines.
(ii)Data bus: Data bus is used to carry or transfer data to and from memory and I/O
ports. They are bidirectional. The processor can read on data lines from memory and
I/O port and as well as it can write data to memory. It consists of 8, 16, 32 or more
parallel signal lines.
(iii)Control bus: The control bus is used to carry control signals in order to regulate the
control activities. It is bidirectional. The CPU sends control signals on the control
bus to enable the outputs of addressed memory devices or port devices. Some of the
control signals are: MEMR (memory read), MEMW (memory write), IOR (I/O read), IOW
(I/O write), BR (bus request), BG (bus grant), INTR (interrupt request), INTA (interrupt
acknowledge), RST (reset), RDY (ready), HLD (hold), and HLDA (hold acknowledge).
The devices connected to a bus vary widely in their speed of operation. Some
electromechanical devices, such as keyboards and printers, are relatively slow. Other
devices, like magnetic or optical disks, are considerably faster. Memory and processor
units operate at electronic speeds, making them the fastest parts of a computer.
Because all these devices must communicate with each other over a bus, an efficient
transfer mechanism that is not constrained by the slow devices and that can be used to
smooth out the differences in timing among processors, memories, and external devices
is necessary.
A common approach is to include buffer registers with the devices to hold the
information during transfers. To illustrate this technique, consider the transfer of an
encoded character from a processor to a character printer. The processor sends the
character over the bus to the printer buffer. Since the buffer is an electronic register, this
transfer requires relatively little time. Once the buffer is loaded, the printer can start
printing without further intervention by the processor. The bus and the processor are no
longer needed and can be released for other activity. The printer continues printing the
character in its buffer and is not available for further transfers until this process is
completed. Thus, buffer registers smooth out timing differences among processors,
memories, and I/O devices. They prevent a high-speed processor from being locked to
a slow I/O device during a sequence of data transfers. This allows the processor to
switch rapidly from one device to another, interweaving its processing activity with data
transfers involving several I/O devices.
The Figure 1.3.3 shows traditional bus configurations and the Figure 1.3.4 shows high
speed bus configurations. The traditional bus connection uses three buses: local bus,
system bus and expanded bus. The high speed bus configuration uses high-speed bus
along with the three buses used in the traditional bus connection. Here, the cache
controller is connected to a high speed bus. This bus supports connection to high-speed
LANs, such as Fiber Distributed Data Interface (FDDI), video and graphics workstation
controllers, as well as interface controllers to local peripheral including SCSI.
Data types
Each variable in C has an associated data type. Each data type requires different
amounts of memory and has some specific operations which can be performed over it.
Following are the examples of some very common data types used in C:
● char: The most basic data type in C. It stores a single character and requires a
single byte of memory.
Different data types also have different ranges up to which they can store numbers.
These ranges may vary from compiler to compiler. Below is a list of ranges along with
the memory requirement and format specifiers on the 32-bit gcc compiler.
Software languages:
● Python.
● Java.
● Ruby/Ruby on Rails.
● HTML (HyperText Markup Language)
● JavaScript.
● C Language.
● C++
Translator
Purpose of Translator
It translates a high-level language program into a machine language program
that the central processing unit (CPU) can understand. It also detects errors in
the program.
Compiler
Interpreter
Assembler
Examples of Translators
Here are some examples of translators per type:
Interpreter: OCaml, Python
● It is not easy to debug as errors are shown at the end of the
execution.
● Hardware specific, it works on specific machine language and
architecture.
● You discover errors before you complete the program, so you learn
from your mistakes.
● Program can be run before it is completed so you get partial results
immediately.
● You can work on small parts of the program and link them later into a
whole program.
Loaders
The program to be executed must reside in the main memory of the computer. It is the
responsibility of the loader, a program in the operating system, to load the
executable file/module of a program, generated by the linker, into main memory
for execution, and to allocate memory space to the executable module. Loading schemes include:
● Absolute loading
● Relocatable loading
● Dynamic run-time loading
Linkers
Linker is a program in a system which helps to link an object module of a program into a
single object file. It performs the process of linking. Linkers are also called link editors.
Linking is the process of collecting and maintaining pieces of code and data into a
single file. Linker also links a particular module into the system library. It takes object
modules from assembler as input and forms an executable file as output for the loader.
Linking is performed both at compile time, when the source code is translated into
machine code, and at load time, when the program is loaded into memory by the loader.
Source code -> compiler -> Assembler -> Object code -> Linker -> Executable file -> Loader
1. Static Linking – Static linking is performed at compile time, before execution. It takes a
collection of relocatable object files and command-line arguments and generates a fully
linked object file that can be loaded and run.
The linker copies all library routines used in the program into the executable image. As a
result, it requires more memory space. Since it does not require the presence of the library
on the system when the program is run, it is faster and more portable, with no chance of
failure due to a missing library at run time.
2. Dynamic Linking – Dynamic linking is performed at run time. This linking is
accomplished by placing the name of a shareable library in the executable image. There
are more chances of errors and failure. It requires less memory space because code
sharing is possible: when the same object is used a number of times in the program,
instead of linking the same object again and again into the executable, each module
shares the information of an object with other modules having the same object. The
shared library needed in the linking is kept in virtual memory to save RAM. In this
linking we can also relocate the code for smooth running.
Operating system
● An operating system is a program that controls the execution of application
programs and acts as an interface between the user of a computer and the
computer hardware.
● A more common definition is that the operating system is the one program
running at all times on the computer (usually called the kernel), with all else
being application programs.
● An operating system is concerned with the allocation of resources and services
in an efficient manner.
The layers of a computer system are:
1. User
2. System and application programs
3. Operating system
4. Hardware
The hardware consists of memory, CPU, ALU, I/O devices, peripheral devices, and storage
devices. System programs consist of compilers, loaders, editors, the OS, etc. Application
programs consist of business programs, database systems, etc.
Every computer must have an operating system to run other programs. The operating
system coordinates the use of the hardware among the various system programs and
application programs for various users. It simply provides an environment within which
other programs can do useful work.
The operating system is a set of special programs that run on a computer system and
allow it to work properly. It performs basic tasks such as recognizing input from the
keyboard, keeping track of files and directories on the disk, and sending output to the
display screen.
The operating system must support the following tasks:
1. Creation and modification of program and data files using an editor.
2. Access to the compiler for translating the user program from a high-level
language to machine language.
The module that keeps track of the status of devices is called the I/O traffic controller.
Each I/O device has a device handler that resides in a separate process associated with
that device.
The I/O subsystem consists of buffering, device handling, and spooling.
Assembler –
An assembler translates an assembly-language program into an object
program, plus information that enables the loader to prepare the object program for
execution. At one time, the computer programmer had at his disposal a basic machine
that interpreted, through hardware, certain fundamental instructions. He would program
this computer by writing a series of ones and zeros (machine language) and place
the program directly into memory.
Part-B
● Non-Volatile Memory: This is permanent storage; it does not lose any data
when the system is turned off.
Memory Hierarchy
The memory hierarchy consists of all the storage devices in a computer
system, from the slow auxiliary memory to fast main memory and to smaller cache
memory.
Auxiliary memory access time is generally 1000 times that of the main memory;
hence it is at the bottom of the hierarchy.
The main memory occupies the central position because it is equipped to communicate
directly with the CPU and with auxiliary memory devices through Input/output processor
(I/O).
When programs not residing in main memory are needed by the CPU, they are brought
in from auxiliary memory. Programs not currently needed in main memory are
transferred into auxiliary memory to provide space in main memory for other programs
that are currently in use.
The cache memory is used to store program data which is currently being executed in
the CPU. The approximate access-time ratio between cache memory and main memory is
about 1 to 7–10.
1. Random Access: Each memory location has a unique address. Using this unique
address, any memory location can be reached in the same amount of time, in any order.
2. Sequential Access: This methods allows memory access in a sequence or in
order.
3. Direct Access: In this mode, information is stored in tracks, with each track
having a separate read/write head.
Main Memory
The memory unit that communicates directly with the CPU, auxiliary memory, and
cache memory is called main memory. It is the central storage unit of the computer
system: a large and fast memory used to store data during computer operations.
Main memory is made up of RAM and ROM, with RAM integrated circuit chips holding
the major share.
● RAM: Random Access Memory
○ SRAM: Static RAM, has a six-transistor circuit in each cell and retains
its data as long as power is supplied.
○ NVRAM: Non-Volatile RAM, retains its data, even when turned off.
● ROM: Read Only Memory, is non-volatile and is more like a permanent storage
for information. It also stores the bootstrap loader program, used to load and start the
operating system when the computer is powered on.
Auxiliary Memory
Devices that provide backup storage are called auxiliary memory. For example:
Magnetic disks and tapes are commonly used auxiliary devices. Other devices used as
auxiliary memory are magnetic drums, magnetic bubble memory and optical disks.
It is not directly accessible to the CPU and is accessed using the Input/Output channels.
Addresses
Encoding of Information
Memory encoding allows information to be converted into a construct that is stored in
the brain indefinitely. Once it is encoded, it can be recalled from either short- or long-
term memory. At a very basic level, memory encoding is like hitting “Save” on a
computer file. Once a file is saved, it can be retrieved as long as the hard drive is
undamaged. “Recall” refers to retrieving previously encoded information.
Encoding is achieved using chemicals and electric impulses within the brain. Neural
pathways, or connections between neurons (brain cells), are actually formed or
strengthened through a process called long-term potentiation, which alters the flow of
information within the brain. In other words, as a person experiences novel events or
sensations, the brain “rewires” itself in order to store those new experiences in memory.
Types of Encoding
The four primary types of encoding are visual, acoustic, elaborative, and semantic.
Visual
Visual encoding is the process of encoding images and visual sensory information. The
creation of mental pictures is one way people use visual encoding. This type of
information is temporarily stored in iconic memory, and then is moved to long-term
memory for storage. The amygdala plays a large role in the visual encoding of
memories.
Acoustic
Acoustic encoding is the use of auditory stimuli or hearing to implant memories. This is
aided by what is known as the phonological loop. The phonological loop is a process by
which sounds are sub-vocally rehearsed (or “said in your mind over and over”) in order
to be remembered.
Elaborative
Elaborative encoding uses information that is already known and relates it to the new
information being experienced. The nature of a new memory becomes dependent as
much on previous information as it does on the new information. Studies have shown
that the long-term retention of information is greatly improved through the use of
elaborative encoding.
Semantic
Semantic encoding involves the use of sensory input that has a specific meaning or can
be applied to a context. Chunking and mnemonics (discussed below) aid in semantic
encoding; sometimes, deep processing and optimal retrieval occurs. For example, you
might remember a particular phone number based on a person’s name or a particular
food by its color.
Not all information is encoded equally well. Think again about hitting “Save” on a
computer file. Did you save it into the right folder? Was the file complete when you
saved it? Will you be able to find it later? At a basic level, the process of encoding faces
similar challenges: if information is improperly coded, recall will later be more
challenging. The process of encoding memories in the brain can be optimized in a
variety of ways, including mnemonics, chunking, and state-dependent learning.
Mnemonics
Mnemonic devices, sometimes simply called mnemonics, are one way to help encode
simple material into memory. A mnemonic is any organization technique that can be
used to help remember something. One example is a peg-word system, in which the
person “pegs” or associates the items to be remembered with other easy-to-remember
items. An example of this is “King Phillip Came Over For Good Soup,” a peg-word
sentence for remembering the order of taxonomic categories in biology that uses the
same initial letters as the words to be remembered: kingdom, phylum, class, order,
family, genus, species. Another type of mnemonic is an acronym, in which a person
shortens a list of words to their initial letters to reduce their memory load.
Chunking
Chunking is the process of organizing parts of objects into meaningful wholes. The
whole is then remembered as a unit instead of individual parts. Examples of chunking
include remembering phone numbers (a series of individual numbers separated by
dashes) or words (a series of individual letters).
State-Dependent Learning
If you hear a particular song while learning certain concepts, playing that song later is likely to cue up the concepts learned. Smells,
sounds, or place of learning can also be part of state-dependent learning.
Memory Consolidation
Memory consolidation is a category of processes that stabilize a memory trace after its
initial acquisition. Like encoding, consolidation influences whether the memory of an
event is accessible after the fact. However, encoding is more influenced by attention
and conscious effort to remember things, while the processes involved in consolidation
tend to be unconscious and happen at the cellular or neurological level. Generally,
encoding takes focus, while consolidation is more of a biological process. Consolidation
even happens while we sleep.
Research indicates that sleep is of paramount importance for the brain to consolidate
information into accessible memories. While we sleep, the brain analyzes, categorizes,
and discards recent memories. One useful memory-enhancement technique is to use
an audio recording of the information you want to remember and play it while you are
trying to go to sleep. Once you are actually in the first stage of sleep, there is no
learning occurring because it is hard to consolidate memories during sleep (which is
one reason why we tend to forget most of our dreams). However, the things you hear on
the recording just before you fall asleep are more likely to be retained because of your
relaxed and focused state of mind.
In order to encode information into memory, we must first pay attention, a process
known as attentional capture.
Main Memory Operations
Instruction Formats
Computers perform tasks on the basis of the instructions provided. An instruction in a computer
comprises groups called fields. These fields contain different pieces of information; since for
computers everything is in 0s and 1s, each field has a different significance, on the basis
of which the CPU decides what to perform. The most common fields are:
● Operation code field, which specifies the operation to be performed.
● Address field, which contains the location of the operand, i.e., a register or memory
location.
● Mode field, which specifies how the operand is to be located.
Generally, CPU organizations are of three types on the basis of the number of address fields:
1. Single accumulator organization
2. General register organization
3. Stack organization
In the first organization, operations are done using a special register called the accumulator. In
the second, multiple registers are used for computation. The third organization works on a
stack basis; because of this, its instructions do not contain an address field.
INSTRUCTION SEQUENCING
Four types of operations are performed by instructions: data transfers, arithmetic and logic
operations, program sequencing and control, and I/O transfers.
● Three-address instructions – Add A,B,C
A, B – source operands
C – destination operand
C ← [A] + [B] (e.g., in register form, R3 ← [R1] + [R2])
● Two-address instructions – Add A,B
B ← [A] + [B]
● The processor control circuits use the information in the PC to fetch and execute instructions
one at a time, in order of increasing address.
● This is called straight line sequencing.
● Executing an instruction is a two-phase procedure.
● 1st phase – "instruction fetch": the instruction is fetched from the memory location whose
address is in the PC.
● This instruction is placed in the instruction register (IR) in the processor.
● 2nd phase – "instruction execute": the instruction in the IR is examined to determine which
operation is to be performed.
5) Branching
6) Condition codes
● These flags are grouped together in a special processor register called the "condition code
register" or "status register".
● Individual condition code flags are set to 1 or cleared to 0.
● Four commonly used flags are N (negative), Z (zero), V (overflow), and C (carry).
Addressing modes
The term addressing modes refers to the way in which the operand of an instruction is specified.
The addressing mode specifies a rule for interpreting or modifying the address field of the
instruction before the operand is actually referenced.
1. Implied / Implicit Addressing Mode
2. Stack Addressing Mode
3. Immediate Addressing Mode
4. Direct Addressing Mode
5. Indirect Addressing Mode
6. Register Direct Addressing Mode
7. Register Indirect Addressing Mode
8. Relative Addressing Mode
9. Indexed Addressing Mode
10. Base Register Addressing Mode
11. Auto-Increment Addressing Mode
12. Auto-Decrement Addressing Mode
Examples-
(since operands are always implied to be present on the top of the stack)
Example-
ADD
● This instruction simply pops out the two operands contained at the top of the stack.
● The addition of these two operands is performed.
● The result so obtained after addition is pushed again at the top of the stack.
Examples-
Example-
● ADD X will increment the value stored in the accumulator by the value stored at
memory location X.
AC ← AC + [X]
Example-
● ADD X will increment the value stored in the accumulator by the value stored at
memory location specified by X.
AC ← AC + [[X]]
Example-
● ADD R will increment the value stored in the accumulator by the content of register
R.
AC ← AC + [R]
NOTE-
It is interesting to note-
● This addressing mode is similar to direct addressing mode.
● The only difference is that the address field of the instruction refers to a CPU register
instead of main memory.
● Only one reference to memory is required to fetch the operand.
Example-
● ADD R will increment the value stored in the accumulator by the content of
memory location specified in register R.
AC ← AC + [[R]]
NOTE-
It is interesting to note-
● This addressing mode is similar to indirect addressing mode.
● The only difference is that the address field of the instruction refers to a CPU register.
● The effective address of the operand is obtained by adding the content of the program
counter to the address part of the instruction.
Effective Address = Content of PC + Address part of instruction
NOTE-
● Program counter (PC) always contains the address of the next instruction to be
executed.
● After fetching the address of the instruction, the value of program counter
immediately increases.
● The value increases irrespective of whether the fetched instruction has completely
executed or not.
9. Indexed Addressing Mode-
● The effective address of the operand is obtained by adding the content of the index
register to the address part of the instruction.
Effective Address = Content of index register + Address part of instruction
● The effective address of the operand is obtained by adding the content of the base register
to the address part of the instruction.
Effective Address = Content of base register + Address part of instruction
In this addressing mode,
● After accessing the operand, the content of the register is automatically
incremented by step size ‘d’.
● Step size ‘d’ depends on the size of operand accessed.
● Only one reference to memory is required to fetch the operand.
Example-
NOTE-
Example-
Assume operand size = 2 bytes.
Here,
● First, the register RAUTO will be decremented by 2.
● Then, updated value of RAUTO will be 3302 – 2 = 3300.
● At memory address 3300, the operand will be found.
NOTE-
Pushdown Stacks
Of the data types that support insert and remove for collections of objects,
the most important is called the pushdown stack.
In computer science, a stack is a last in, first out (LIFO) abstract data type
and data structure. A stack can have any abstract data type as an element,
but is characterized by only two fundamental operations: push and pop. The
push operation adds to the top of the list, hiding any items already on the
stack, or initializing the stack if it is empty. The pop operation removes an item
from the top of the list, and returns this value to the caller. A pop either reveals
previously concealed items, or results in an empty list.
We'll start by implementing a stack for the integer data type using a structure with an
array member; it can then be converted into a generic implementation of the stack.
Subroutine
A set of instructions that is used repeatedly in a program can be referred to as a
subroutine. Only one copy of these instructions is stored in memory. When a
subroutine is required, it can be called many times during the execution of a particular
program. A call-subroutine instruction calls the subroutine. Care should be taken while
returning from a subroutine, as a subroutine can be called from different places in the
program.
The content of the PC must be saved by the call-subroutine instruction to make a
correct return to the calling program.
Figure – Process of subroutine in a program
The subroutine linkage method is the way in which a computer calls and returns from a
subroutine. The simplest way of subroutine linkage is saving the return address in a
specific location, such as a register, which can be called a link register.
2. Subroutine Nesting – Subroutine nesting occurs when one subroutine calls
another subroutine.
Figure – Subroutine calling another subroutine
From the above figure, assume that when Subroutine 1 calls Subroutine 2, the return
address of Subroutine 2 must also be saved. As the last subroutine called is the first
one to be returned from (last-in, first-out order), the stack data structure is the most
efficient way to store the return addresses.
Figure – Return address of subroutine is stored in stack memory