
Complete Note

UNIT-I
Computer types, Functional units, basic operational concepts, Bus structures, Data
types, Software: Languages and Translators, Loaders, Linkers, Operating systems.

Memory locations - addresses and encoding of information - main memory operations


Instruction formats and instruction sequences, Addressing modes and instructions,
Simple input programming, pushdown stacks, subroutines.

UNIT-II

Register transfer Language, Register transfer, Bus and Memory Transfers, Arithmetic
Micro operations, Logic Micro operations, shift Micro operations, Arithmetic Logic Shift
Unit

Stack organization, instruction formats, Addressing modes, Data transfer and
manipulation, Execution of a complete instruction, Sequencing of control signals,
Program Control

UNIT-III
Control Memory, address Sequencing, Micro Program Example, Design of Control Unit
Addition and Subtraction, Multiplication Algorithms, Division Algorithms, Floating Point
Arithmetic Operations, Decimal Arithmetic Unit, Decimal Arithmetic Operations.

UNIT-IV

Peripheral Devices, Input-Output Interface, Asynchronous Data Transfer, Modes of
Transfer, Priority Interrupt, Direct Memory Access (DMA), Input-Output Processor
(IOP), Serial Communication

Memory hierarchy, main memory, auxiliary memory, Associative memory, Cache
memory, Virtual memory, Memory management hardware

UNIT-V

Parallel Processing, Pipelining, Arithmetic Pipeline, Instruction Pipeline, RISC Pipeline,
Vector Processing, Array Processors.

Characteristics of Multiprocessors, Interconnection Structures, Interprocessor
Arbitration, Interprocessor Communication and Synchronization, Cache Coherence.

Part-A
TOPIC: Computer types

Computers can be broadly classified by their speed and computing power.

1. PC (Personal Computer): A single-user computer system having a moderately powerful microprocessor.

2. Workstation: Also a single-user computer system, similar to a personal computer but with a more powerful microprocessor.

3. Minicomputer: A multi-user computer system, capable of supporting hundreds of users simultaneously.

4. Mainframe: A multi-user computer system, capable of supporting hundreds of users simultaneously; its software technology is different from that of a minicomputer.

5. Supercomputer: An extremely fast computer, which can execute hundreds of millions of instructions per second.

PC (Personal Computer)

A PC can be defined as a small, relatively inexpensive computer designed for an
individual user. PCs are based on the microprocessor technology that enables
manufacturers to put an entire CPU on one chip. Businesses use personal computers
for word processing, accounting, desktop publishing, and for running spreadsheet and
database management applications. At home, the most popular use for personal
computers is playing games and surfing the Internet.
Although personal computers are designed as single-user systems, these systems are
normally linked together to form a network. In terms of power, nowadays high-end
models of the Macintosh and PC offer the same computing power and graphics
capability as low-end workstations by Sun Microsystems, Hewlett-Packard, and Dell.

Workstation

A workstation is a computer used for engineering applications (CAD/CAM), desktop publishing, software development, and other such types of applications which require a
moderate amount of computing power and relatively high quality graphics capabilities.
Workstations generally come with a large, high-resolution graphics screen, large
amount of RAM, inbuilt network support, and a graphical user interface. Most
workstations also have mass storage devices such as a disk drive, but a special type of
workstation, called a diskless workstation, comes without a disk drive.

Common operating systems for workstations are UNIX and Windows NT. Like PCs, workstations are single-user computers, but they are typically linked together to form a local-area network, although they can also be used as stand-alone systems.

Minicomputer
A minicomputer is a midsize multi-processing system capable of supporting up to 250 users simultaneously.

Mainframe
A mainframe is very large in size and is an expensive computer capable of supporting hundreds or even thousands of users simultaneously. It executes many programs concurrently and supports the simultaneous execution of many programs.

Supercomputer
Supercomputers are among the fastest computers currently available. They are very expensive and are employed for specialized applications that require immense amounts of mathematical calculation (number crunching).

For example: weather forecasting, scientific simulations, (animated) graphics, fluid dynamics calculations, nuclear energy research, electronic design, and analysis of geological data (e.g. in petrochemical prospecting).

Functional units

A computer consists of five functionally independent main parts: input, memory, arithmetic logic unit (ALU), output, and control units.

Fig A: Functional units of computer


The input device accepts coded information, such as a source program written in a high-level language. This is either stored in the memory or immediately used by the processor to perform the desired operations. The program stored in the memory determines the processing steps. Basically, the computer converts a source program into an object program, i.e., into machine language. Finally, the results are sent to the outside world through an output device. All of these actions are coordinated by the control unit.
Input unit: The source program / high-level language program / coded information / simple data is fed to the computer through input devices; the keyboard is the most common type. Whenever a key is pressed, the corresponding character or number is translated into its equivalent binary code and sent over a cable either to the memory or to the processor. Joysticks, trackballs, mice, and scanners are other input devices.
Memory unit: Its function is to store programs and data. It is basically of two types:
1. Primary memory
2. Secondary memory

1. Primary memory: This is the memory exclusively associated with the processor; it operates at electronic speeds, and programs must be stored in it while they are being executed. The memory contains a large number of semiconductor storage cells, each capable of storing one bit of information. These cells are processed in groups of fixed size called words. To provide easy access to a word in memory, a distinct address is associated with each word location. Addresses are numbers that identify memory locations. The number of bits in each word is called the word length of the computer. Programs must reside in the memory during execution. Instructions and data can be written into the memory or read out under the control of the processor. Memory in which any location can be reached in a short and fixed amount of time after specifying its address is called random-access memory (RAM). The time required to access one word is called the memory access time. Memory which is only readable by the user and whose contents cannot be altered is called read-only memory (ROM); it contains the operating system. Caches are small, fast RAM units which are coupled with the processor and are often contained on the same IC chip to achieve high performance. Although primary storage is essential, it tends to be expensive.

2. Secondary memory: This is used where large amounts of data and programs have to be stored, particularly information that is accessed infrequently. Examples: magnetic disks and tapes, optical disks (i.e., CD-ROMs), floppies, etc.

Arithmetic logic unit (ALU): Most computer operations, such as addition, subtraction, division, and multiplication, are executed in the ALU of the processor. The operands are brought into the ALU from memory and stored in high-speed storage elements called registers. Then, according to the instructions, the operations are performed in the required sequence. The control unit and the ALU are many times faster than other devices connected to a computer system. This enables a single processor to control a number of external devices such as keyboards, displays, magnetic and optical disks, sensors, and other mechanical controllers.

Output unit: This is the counterpart of the input unit. Its basic function is to send the processed results to the outside world.

Examples: printer, speakers, monitor, etc.

Control unit: It is effectively the nerve center that sends signals to the other units and senses their states. The actual timing signals that govern the transfer of data between the input unit, processor, memory, and output unit are generated by the control unit.

Basic operational concepts

1. The program is loaded into memory via the input unit.
2. Execution of the program starts when the program counter (PC) points to the first instruction.
3. The contents of the PC are sent to the memory address register (MAR), and a Read control signal is sent to memory.
4. After the memory access time has elapsed, the first instruction is read out of memory and loaded into the memory data register (MDR).
5. The contents of the MDR are transferred to the instruction register (IR).
6. Now the instruction is ready to be decoded and executed.
7. If the instruction requires an operation by the ALU, the operand is fetched from memory by sending the operand's address to the MAR and starting a Read cycle.
8. The operand is then transferred from memory to the MDR.
9. It is then transferred from the MDR to the ALU.
10. After all the operands are fetched, the ALU performs its operation.
11. To store the result in memory, the result is first sent to the MDR.
12. The address of the location where the result is to be stored is sent to the MAR, and a Write cycle is started.
13. The PC is incremented to point to the next instruction.

NOTE: Normal operation can be preempted by I/O interrupts. In this case, the internal state of the processor (the PC, general-purpose registers, and control information) is stored in memory. After the interrupt-service routine is completed, the state of the processor is restored.

[Figure: processor datapath with the PC, IR, MAR, MDR, ALU, and general-purpose registers]

Bus structure

Single bus structure: In computer architecture, a bus is a subsystem that transfers data between components inside a computer, or between computers. Early computer buses were literally parallel electrical wires with multiple connections, but modern computer buses can use both parallel and bit-serial connections.

Figure 1.3.1 Single bus structure

To achieve a reasonable speed of operation, a computer must be organized so that all
its units can handle one full word of data at a given time. When a word of data is
transferred between units, all its bits are transferred in parallel, that is, the bits are
transferred simultaneously over many wires, or lines, one bit per line. A group of lines
that serves as a connecting path for several devices is called a bus. In addition to the
lines that carry the data, the bus must have lines for address and control purposes. The
simplest way to interconnect functional units is to use a single bus, as shown in Figure
1.3.1. All units are connected to this bus. Because the bus can be used for only one
transfer at a time, only two units can actively use the bus at any given time. Bus control
lines are used to arbitrate multiple requests for use of the bus. The main virtue of the
single-bus structure is its low cost and flexibility for attaching peripheral devices.
Systems that contain multiple buses achieve more concurrency in operations by
allowing two or more transfers to be carried out at the same time. This leads to better
performance but at an increased cost.

Parts of a System bus: Processor, memory, Input and output devices are connected by
system bus, which consists of separate busses as shown in figure 1.3.2. They are:

(i) Address bus: The address bus is used to carry the address. It is a unidirectional bus: the address is sent from the CPU to memory and I/O ports, and hence it is unidirectional. It consists of 16, 20, 24 or more parallel signal lines.

(ii) Data bus: The data bus is used to carry or transfer data to and from memory and I/O ports. It is bidirectional: the processor can read data on the data lines from memory and I/O ports, as well as write data to them. It consists of 8, 16, 32 or more parallel signal lines.

(iii) Control bus: The control bus is used to carry control signals in order to regulate control activities. It is bidirectional. The CPU sends control signals on the control bus to enable the outputs of addressed memory devices or port devices. Some of the control signals are: MEMR (memory read), MEMW (memory write), IOR (I/O read), IOW (I/O write), BR (bus request), BG (bus grant), INTR (interrupt request), INTA (interrupt acknowledge), RST (reset), RDY (ready), HLD (hold), and HLDA (hold acknowledge).

Figure 1.3.2 Bus interconnection scheme

The devices connected to a bus vary widely in their speed of operation. Some electromechanical devices, such as keyboards and printers, are relatively slow. Other devices, like magnetic or optical disks, are considerably faster. Memory and processor
units operate at electronic speeds, making them the fastest parts of a computer.
Because all these devices must communicate with each other over a bus, an efficient
transfer mechanism that is not constrained by the slow devices and that can be used to
smooth out the differences in timing among processors, memories, and external devices
is necessary.

A common approach is to include buffer registers with the devices to hold the
information during transfers. To illustrate this technique, consider the transfer of an
encoded character from a processor to a character printer. The processor sends the
character over the bus to the printer buffer. Since the buffer is an electronic register, this
transfer requires relatively little time. Once the buffer is loaded, the printer can start
printing without further intervention by the processor. The bus and the processor are no
longer needed and can be released for other activity. The printer continues printing the
character in its buffer and is not available for further transfers until this process is
completed. Thus, buffer registers smooth out timing differences among processors,
memories, and I/O devices. They prevent a high-speed processor from being locked to
a slow I/O device during a sequence of data transfers. This allows the processor to
switch rapidly from one device to another, interweaving its processing activity with data
transfers involving several I/O devices.
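To make the idea concrete, here is a minimal sketch in C of a one-character printer buffer (a hypothetical software model, not real hardware; the type and function names are made up for the example). The processor's transfer into the buffer is fast, after which the processor is free while the printer drains the buffer at its own pace:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical model of a one-character printer buffer register. */
typedef struct {
    char data;
    bool full;    /* set while the buffer holds an unprinted character */
} BufferRegister;

/* Processor side: a fast transfer into the buffer, after which the bus
   and processor are released for other activity. */
void processor_send(BufferRegister *buf, char c) {
    buf->data = c;
    buf->full = true;
}

/* Printer side: slowly consumes the buffered character. */
void printer_service(BufferRegister *buf) {
    if (buf->full) {
        printf("printing: %c\n", buf->data);  /* the slow mechanical step */
        buf->full = false;                    /* buffer free for the next transfer */
    }
}

int main(void) {
    BufferRegister buf = { 0, false };
    processor_send(&buf, 'A');   /* processor is done almost immediately */
    printer_service(&buf);       /* printer catches up later */
    return 0;
}
```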

Figure 1.3.3 shows the traditional bus configuration and Figure 1.3.4 shows the high-speed bus configuration. The traditional bus connection uses three buses: local bus, system bus, and expansion bus. The high-speed bus configuration uses a high-speed bus along with the three buses used in the traditional bus connection. Here, the cache controller is connected to the high-speed bus. This bus supports connections to high-speed LANs, such as Fiber Distributed Data Interface (FDDI), video and graphics workstation controllers, as well as interface controllers to local peripherals, including SCSI.

Data types

Each variable in C has an associated data type. Each data type requires different

amounts of memory and has some specific operations which can be performed over it.

Let us briefly describe them one by one:

Following are the examples of some very common data types used in C:

● char: The most basic data type in C. It stores a single character and requires

a single byte of memory in almost all compilers.

● int: As the name suggests, an int variable is used to store an integer.

● float: It is used to store decimal numbers (numbers with floating point

value) with single precision.

● double: It is used to store decimal numbers (numbers with floating point

value) with double precision.

Different data types also have different ranges up to which they can store numbers. These ranges, along with the memory requirements and format specifiers, may vary from compiler to compiler; the values usually quoted assume the 32-bit gcc compiler.
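Since the exact sizes are compiler-dependent, one way to check them on your own system is the sizeof operator (a minimal sketch; the printed values depend on the compiler and platform):

```c
#include <stdio.h>

int main(void) {
    /* Sizes are implementation-defined; sizeof reports what the
       current compiler/platform actually uses. */
    printf("char:   %zu byte(s)\n", sizeof(char));
    printf("int:    %zu byte(s)\n", sizeof(int));
    printf("float:  %zu byte(s)\n", sizeof(float));
    printf("double: %zu byte(s)\n", sizeof(double));
    return 0;
}
```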

Software

Languages:
● Python.
● Java.
● Ruby/Ruby on Rails.
● HTML (HyperText Markup Language)
● JavaScript.
● C Language.
● C++

Translator

A translator is a programming language processor that converts a computer


program from one language to another. It takes a program written in source code and converts it into machine code. It discovers and identifies errors during translation.

Purpose of Translator
It translates a high-level language program into a machine language program
that the central processing unit (CPU) can understand. It also detects errors in
the program.

Different Types of Translators


There are 3 different types of translators as follows:

Compiler

A compiler is a translator used to convert high-level programming language to


low-level programming language. It converts the whole program in one session and reports errors detected after the conversion. A compiler takes time to do its work, as it translates high-level code to lower-level code all at once and then saves the result to memory.

A compiler is processor-dependent and platform-dependent. This limitation is addressed by special compilers: cross-compilers and source-to-source compilers. Before choosing a compiler, the user first has to identify the Instruction Set Architecture (ISA), the operating system (OS), and the programming language that will be used, to ensure compatibility.

Interpreter

Just like a compiler, an interpreter is a translator used to convert a high-level programming language to a low-level programming language. It converts the program one statement at a time and reports errors as they are detected, while doing the conversion. With this, it is easier to detect errors than with a compiler. An interpreter starts running code faster than a compiler, as it executes the code immediately upon reading it.

It is often used as a debugging tool for software development, as it can execute a single line of code at a time. An interpreter is also more portable than a compiler: as it is not processor-dependent, you can work across hardware architectures.

Assembler

An assembler is a translator used to translate assembly language to machine


language. It is like a compiler for the assembly language but interactive like an
interpreter. Assembly language is difficult to understand as it is a low-level
programming language. An assembler translates a low-level language, an
assembly language to an even lower-level language, which is the machine code.
The machine code can be directly understood by the CPU.

Examples of Translators
Here are some examples of translators per type:

● Compiler: Microsoft Visual Studio, GNU Compiler Collection (GCC), Common Business Oriented Language (COBOL)

● Interpreter: OCaml, List Processing (LISP), Python

● Assembler: Fortran Assembly Program (FAP), Macro Assembly Program (MAP), Symbolic Optimal Assembly Program (SOAP)

Advantages and Disadvantages of Translators


Here are some advantages of the Compiler:

● The whole program is validated so there are no system errors.


● The executable file is enhanced by the compiler, so it runs faster.
● Users do not have to run the program on the same machine it was created on.

Here are some disadvantages of the Compiler:

● It is slow to execute, as the whole program has to be compiled before it can run.

● It is not easy to debug as errors are shown at the end of the
execution.
● It is hardware-specific: it works for a specific machine language and architecture.

Here are some advantages of the Interpreter:

● You discover errors before you complete the program, so you learn
from your mistakes.
● Program can be run before it is completed so you get partial results
immediately.
● You can work on small parts of the program and link them later into a
whole program.

Here are some disadvantages of the Interpreter:

● There’s a possibility of syntax errors on unverified scripts.


● The program is not enhanced and may encounter data errors.
● It may be slow because of the interpretation on every execution.

Here are some advantages of the Assembler:

● Symbolic programming is easier to understand, thus saving time for the programmer.
● It is easier to fix errors and alter program instructions.
● Efficient in execution, just like machine-level language.

Here are some disadvantages of the Assembler:

● It is machine-dependent and cannot be used on other architectures.


● A small change in design can invalidate the whole program.
● It is difficult to maintain.

Loaders

The program that is currently being executed must reside in the main memory of the computer. It is the responsibility of the loader, a program in an operating system, to load the executable file/module of a program, generated by the linker, into the main memory for execution. It allocates the memory space to the executable module in main memory.

There are three kinds of loading approaches:

● Absolute loading
● Relocatable loading
● Dynamic run-time loading

Linkers

A linker is a program in a system which helps to link the object modules of a program into a single object file. It performs the process of linking. Linkers are also called link editors.

Linking is the process of collecting and combining pieces of code and data into a single file. The linker also links a particular module into the system library. It takes object modules from the assembler as input and forms an executable file as output for the loader.

Linking is performed both at compile time, when the source code is translated into machine code, and at load time, when the program is loaded into memory by the loader. Linking is performed as the last step in compiling a program.

Source code -> compiler -> Assembler -> Object code -> Linker -> Executable file -> Loader

Linking is of two types:

1. Static Linking:

It is performed during the compilation of the source program. Linking is performed

before execution in static linking. It takes a collection of relocatable object files and

command-line arguments and generates fully linked object files that can be loaded and

run.

A static linker performs two major tasks:

● Symbol resolution: it associates each symbol reference with exactly one symbol definition, so that every symbol has a single, well-defined target (illustrated below).

● Relocation: it relocates code and data sections and modifies symbol references to point to the relocated memory locations.
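As a small illustration of symbol resolution (the file and function names are made up for the example), consider two separately compiled C files. The reference to add in main.o is resolved to its definition in add.o by the linker, e.g. with gcc -c main.c add.c followed by gcc main.o add.o -o prog:

```c
/* add.c: compiled separately into the object module add.o */
int add(int a, int b) {
    return a + b;
}

/* main.c: compiled into main.o with an unresolved reference to add */
#include <stdio.h>

int add(int a, int b);   /* declaration only; the definition lives in add.o */

int main(void) {
    /* the linker's symbol resolution binds this call to add.o's definition */
    printf("%d\n", add(2, 3));
    return 0;
}
```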

The linker copies all library routines used in the program into the executable image. As a result, static linking requires more memory space. However, since it does not require the presence of the library on the system at run time, the program is faster and more portable, with less chance of error or failure.

2. Dynamic linking: Dynamic linking is performed during run time. This linking is accomplished by placing the name of a shareable library in the executable image. There are more chances of error and failure. It requires less memory space, as multiple programs can share a single copy of the library.

Here we can perform code sharing: when the same object is used a number of times in a program, instead of linking the same object again and again, each module shares the information of the object with the other modules that use it. The shared library needed for the linking is stored in virtual memory to save RAM. In this linking we can also relocate the code for the smooth running of the program, but not all of the code is relocatable; the addresses are fixed at run time.

Operating system

● An operating system is a program that controls the execution of application

programs and acts as an interface between the user of a computer and the

computer hardware.

● A more common definition is that the operating system is the one program

running at all times on the computer (usually called the kernel), with all else

being application programs.

● An operating system is concerned with the allocation of resources and

services, such as memory, processors, devices, and information. The

operating system correspondingly includes programs to manage these

resources, such as a traffic controller, a scheduler, memory management

module, I/O programs, and a file system.

Functions of Operating System: an operating system performs three functions:

1. Convenience: An OS makes a computer more convenient to use.

2. Efficiency: An OS allows the computer system resources to be used in an

efficient manner.

3. Ability to Evolve: An OS should be constructed in such a way as to permit the effective development, testing, and introduction of new system functions without interfering with existing services.

Operating system as user interface:

1. User

2. System and application programs

3. Operating system

4. Hardware

Every general-purpose computer consists of the hardware, operating system, system programs, and application programs. The hardware consists of the memory, CPU, ALU, I/O devices, peripheral devices, and storage devices. The system programs consist of compilers, loaders, editors, the OS, etc. The application programs consist of business programs and database programs.

Fig1: Conceptual view of a computer system

Every computer must have an operating system to run other programs. The operating

system coordinates the use of the hardware among the various system programs and

application programs for various users. It simply provides an environment within which

other programs can do useful work.

The operating system is a set of special programs that run on a computer system that

allows it to work properly. It performs basic tasks such as recognizing input from the
keyboard, keeping track of files and directories on the disk, sending output to the

display screen and controlling peripheral devices.

OS is designed to serve two basic purposes:

1. It controls the allocation and use of the computing system's resources among the various users and tasks.

2. It provides an interface between the computer hardware and the programmer that simplifies, and makes feasible, the coding, creation, and debugging of application programs.

The operating system must support the following tasks:

1. Provide the facilities to create and modify programs and data files using an editor.

2. Provide access to the compiler for translating the user program from a high-level language to machine language.

3. Provide a loader program to move the compiled program code to the

computer’s memory for execution.

4. Provide routines that handle the details of I/O programming.

I/O System Management –

The module that keeps track of the status of devices is called the I/O traffic controller.

Each I/O device has a device handler that resides in a separate process associated with

that device.

The I/O subsystem consists of

● A memory management component that includes buffering, caching, and spooling.

● A general device-driver interface.

● Drivers for specific hardware devices.

Assembler –

The input to an assembler is an assembly language program. The output is an object

program plus information that enables the loader to prepare the object program for

execution. At one time, the computer programmer had at his disposal a basic machine

that interpreted, through hardware, certain fundamental instructions. He would program

this computer by writing a series of ones and zeros (machine language) and placing them into the memory of the machine.

Part-B

Memory Organization in Computer Architecture
A memory unit is the collection of storage units or devices together. The memory unit stores binary information in the form of bits. Generally, memory/storage is classified into two categories:
● Volatile Memory: This loses its data, when power is switched off.

● Non-Volatile Memory: This is a permanent storage and does not lose any data

when power is switched off.

Memory Hierarchy

The total memory capacity of a computer can be visualized as a hierarchy of components.


The memory hierarchy system consists of all storage devices contained in a computer system, from the slow auxiliary memory to the fast main memory and the smaller cache memory.
Auxiliary memory access time is generally 1000 times that of the main memory; hence it is at the bottom of the hierarchy.
The main memory occupies the central position because it is equipped to communicate directly with the CPU and with auxiliary memory devices through the input/output processor (I/O).
When programs not residing in main memory are needed by the CPU, they are brought in from auxiliary memory. Programs not currently needed in main memory are transferred into auxiliary memory to provide space for other programs that are currently in use.
The cache memory is used to store the program and data currently being executed in the CPU. The approximate access-time ratio between cache memory and main memory is about 1 to 7-10.

Memory Access Methods


Each memory type is a collection of numerous memory locations. To access data from any memory, first it must be located, and then the data is read from the memory location. Following are the methods to access information from memory locations:
1. Random Access: Main memories are random access memories, in which each

memory location has a unique address. Using this unique address any memory

location can be reached in the same amount of time in any order.

2. Sequential Access: This method allows memory access in a sequence, or in order.

3. Direct Access: In this mode, information is stored in tracks, with each track

having a separate read/write head.
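A rough software analogy (a sketch, not how the hardware is built; the function names are illustrative): an array models random access, where any index is reached in one step, while a linked list models sequential access, where the earlier items must be stepped over:

```c
#include <stdio.h>

typedef struct Node { int value; struct Node *next; } Node;

/* Random access: any location reached in the same time via its address. */
int read_random(const int mem[], int addr) {
    return mem[addr];                /* one step, regardless of addr */
}

/* Sequential access: a location reached only by stepping through in order. */
int read_sequential(const Node *head, int n) {
    for (int i = 0; i < n; i++)      /* must pass over the n earlier items */
        head = head->next;
    return head->value;
}

int main(void) {
    int mem[4] = { 10, 20, 30, 40 };
    Node c = { 30, NULL }, b = { 20, &c }, a = { 10, &b };
    printf("%d %d\n", read_random(mem, 2), read_sequential(&a, 2));
    return 0;
}
```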

Main Memory
The memory unit that communicates directly with the CPU, auxiliary memory, and cache memory is called main memory. It is the central storage unit of the computer system. It is a large and fast memory used to store data during computer operations. Main memory is made up of RAM and ROM, with RAM integrated circuit chips holding the major share.
● RAM: Random Access Memory

○ DRAM: Dynamic RAM, is made of capacitors and transistors, and must be refreshed every 10-100 ms. It is slower and cheaper than SRAM.

○ SRAM: Static RAM, has a six-transistor circuit in each cell and retains data until power is turned off.

○ NVRAM: Non-Volatile RAM, retains its data even when turned off. Example: flash memory.

● ROM: Read Only Memory, is non-volatile and is more like a permanent storage

for information. It also stores the bootstrap loader program, used to load and start the operating system when the computer is turned on. PROM (Programmable ROM), EPROM (Erasable PROM), and EEPROM (Electrically Erasable PROM) are some commonly used ROMs.

Auxiliary Memory
Devices that provide backup storage are called auxiliary memory. For example:
Magnetic disks and tapes are commonly used auxiliary devices. Other devices used as
auxiliary memory are magnetic drums, magnetic bubble memory and optical disks.
It is not directly accessible to the CPU, and is accessed using the input/output channels.

Addresses

In computing, a memory address is a reference to a specific memory location used at


various levels by software and hardware. Memory addresses are fixed-length
sequences of digits conventionally displayed and manipulated as unsigned integers.
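C makes this notion directly visible: the address-of operator yields the memory address at which a variable is stored (a minimal sketch; the actual address printed varies from run to run):

```c
#include <stdio.h>

int main(void) {
    int x = 42;
    /* &x is the memory address of x; %p prints it, conventionally in hex */
    printf("value: %d, stored at address: %p\n", x, (void *)&x);
    return 0;
}
```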

Encoding of information
Memory encoding allows information to be converted into a construct that is stored in
the brain indefinitely. Once it is encoded, it can be recalled from either short- or long-
term memory. At a very basic level, memory encoding is like hitting “Save” on a
computer file. Once a file is saved, it can be retrieved as long as the hard drive is
undamaged. “Recall” refers to retrieving previously encoded information.

The process of encoding begins with perception, which is the identification,


organization, and interpretation of any sensory information in order to understand it
within the context of a particular environment. Stimuli are perceived by the senses, and
related signals travel to the thalamus of the human brain, where they are synthesized
into one experience. The hippocampus then analyzes this experience and decides if it is
worth committing to long-term memory.

Encoding is achieved using chemicals and electric impulses within the brain. Neural
pathways, or connections between neurons (brain cells), are actually formed or
strengthened through a process called long-term potentiation, which alters the flow of
information within the brain. In other words, as a person experiences novel events or
sensations, the brain “rewires” itself in order to store those new experiences in memory.

Types of Encoding

The four primary types of encoding are visual, acoustic, elaborative, and semantic.

Visual

Visual encoding is the process of encoding images and visual sensory information. The
creation of mental pictures is one way people use visual encoding. This type of
information is temporarily stored in iconic memory, and then is moved to long-term
memory for storage. The amygdala plays a large role in the visual encoding of
memories.

Acoustic

Acoustic encoding is the use of auditory stimuli or hearing to implant memories. This is
aided by what is known as the phonological loop. The phonological loop is a process by
which sounds are sub-vocally rehearsed (or “said in your mind over and over”) in order
to be remembered.

Elaborative

Elaborative encoding uses information that is already known and relates it to the new
information being experienced. The nature of a new memory becomes dependent as
much on previous information as it does on the new information. Studies have shown
that the long-term retention of information is greatly improved through the use of
elaborative encoding.

Semantic

Semantic encoding involves the use of sensory input that has a specific meaning or can
be applied to a context. Chunking and mnemonics (discussed below) aid in semantic
encoding; sometimes, deep processing and optimal retrieval occurs. For example, you
might remember a particular phone number based on a person’s name or a particular
food by its color.

Optimizing Encoding through Organization

Not all information is encoded equally well. Think again about hitting “Save” on a
computer file. Did you save it into the right folder? Was the file complete when you
saved it? Will you be able to find it later? At a basic level, the process of encoding faces
similar challenges: if information is improperly coded, recall will later be more
challenging. The process of encoding memories in the brain can be optimized in a
variety of ways, including mnemonics, chunking, and state-dependent learning.

Mnemonics

Mnemonic devices, sometimes simply called mnemonics, are one way to help encode
simple material into memory. A mnemonic is any organization technique that can be
used to help remember something. One example is a peg-word system, in which the
person “pegs” or associates the items to be remembered with other easy-to-remember
items. An example of this is “King Phillip Came Over For Good Soup,” a peg-word
sentence for remembering the order of taxonomic categories in biology that uses the
same initial letters as the words to be remembered: kingdom, phylum, class, order,
family, genus, species. Another type of mnemonic is an acronym, in which a person
shortens a list of words to their initial letters to reduce their memory load.

Chunking

Chunking is the process of organizing parts of objects into meaningful wholes. The
whole is then remembered as a unit instead of individual parts. Examples of chunking
include remembering phone numbers (a series of individual numbers separated by
dashes) or words (a series of individual letters).

State-Dependent Learning

State-dependent learning is when a person remembers information based on the state


of mind (or mood) they are in when they learn it. Retrieval cues are a large part of state-
dependent learning. For example, if a person listened to a particular song while learning
certain concepts, playing that song is likely to cue up the concepts learned. Smells,
sounds, or place of learning can also be part of state-dependent learning.

Memory Consolidation

Memory consolidation is a category of processes that stabilize a memory trace after its
initial acquisition. Like encoding, consolidation influences whether the memory of an
event is accessible after the fact. However, encoding is more influenced by attention
and conscious effort to remember things, while the processes involved in consolidation
tend to be unconscious and happen at the cellular or neurological level. Generally,
encoding takes focus, while consolidation is more of a biological process. Consolidation
even happens while we sleep.

Sleep and Memory

Research indicates that sleep is of paramount importance for the brain to consolidate
information into accessible memories. While we sleep, the brain analyzes, categorizes,
and discards recent memories. One useful memory-enhancement technique is to use
an audio recording of the information you want to remember and play it while you are
trying to go to sleep. Once you are actually in the first stage of sleep, there is no
learning occurring because it is hard to consolidate memories during sleep (which is
one reason why we tend to forget most of our dreams). However, the things you hear on
the recording just before you fall asleep are more likely to be retained because of your
relaxed and focused state of mind.

The Role of Attention in Memory

In order to encode information into memory, we must first pay attention, a process
known as attentional capture.

Main memory operations

We can imagine main memory to be organised as a matrix of bits. Each row represents a memory location; typically this is equal to the word size of the architecture, although it can be a multiple of the word size (e.g. 2 x wordsize) or a partial word (e.g. half the wordsize). For simplicity we will assume that data within main memory can only be read or written a single row (memory location) at a time.

For a 96-bit memory we could organise the memory as 12 × 8 bits, or 8 ×


12 bits or 6 × 16 bits, or even as 96 × 1 bits or 1 × 96 bits. Each row also
has a natural number Address which is used for selecting the row:
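As an illustration (a simplified software model, not a hardware design; the helper names mem_read and mem_write are made up for the sketch), a 96-bit memory organised as 12 x 8 bits can be pictured as an array of 8-bit words indexed by a row address, with one row read or written at a time:

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_WORDS 12                 /* 96 bits organised as 12 rows of 8 bits */

static uint8_t memory[NUM_WORDS];    /* each row (location) is one 8-bit word */

/* Read the whole word stored at the given row address. */
uint8_t mem_read(unsigned addr) {
    return memory[addr];
}

/* Write a whole word to the given row address. */
void mem_write(unsigned addr, uint8_t value) {
    memory[addr] = value;
}

int main(void) {
    mem_write(3, 0x5A);                      /* store 0x5A at address 3 */
    printf("M[3] = 0x%02X\n", mem_read(3));
    return 0;
}
```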

Instruction Formats
Computers perform tasks on the basis of the instructions provided. An instruction in a computer comprises groups called fields. These fields contain different information; since for computers everything is in 0s and 1s, each field has a different significance, on the basis of which the CPU decides what to perform. The most common fields are:

● Operation field which specifies the operation to be performed like addition.

● Address field which contains the location of the operand, i.e., a register or memory location.

● Mode field which specifies how the operand is to be found.

An instruction can be of various lengths depending upon the number of addresses it contains. Generally, CPU organizations are of three types on the basis of the number of address fields:

1. Single Accumulator organization

2. General register organization

3. Stack organization

In the first organization, operations are done involving a special register called the accumulator. In the second, multiple registers are used for the computation. In the third organization, the work is done on a stack basis, due to which the instructions do not contain any address field. It is not necessary that only a single organization is applied; a blend of various organizations is what we mostly see in practice.

On the basis of the number of addresses, instructions are classified as three-address, two-address, one-address, and zero-address instructions (detailed under instruction sequencing below). Note that we will use the expression X = (A+B)*(C+D) to showcase the procedure.

INSTRUCTION SEQUENCING
Four types of operations

1. Data transfer between memory and processor registers.


2. Arithmetic & logic operations on data
3. Program sequencing & control
4. I/O transfers.

1) Register transfer notation (RTN)

R3 ← [R1] + [R2]

● Right-hand side of the RTN: denotes a value.

● Left-hand side of the RTN: the name of a location.

2) Assembly language notation (ALN)

Add R1, R2, R3

● Adds the contents of R1 and R2 and places the sum in R3.

3) Basic instruction types-4 types

● Three-address instructions: Add A, B, C

A, B: source operands

C: destination operand

● Two-address instructions: Add A, B

B ← [A] + [B]

● One-address instructions: Add A

Adds the contents of A to the accumulator and stores the sum back in the accumulator.

● Zero-address instructions

These instructions store operands in a structure called a pushdown stack. (A worked example using all four types follows below.)
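Using the expression X = (A+B)*(C+D) noted earlier, one conventional set of program sequences for the four instruction types is shown below, following the usual textbook treatment (the register names and the temporary location T are illustrative):

Three-address:
ADD R1, A, B     R1 ← [A] + [B]
ADD R2, C, D     R2 ← [C] + [D]
MUL X, R1, R2    X ← [R1] * [R2]

Two-address:
MOV R1, A        R1 ← [A]
ADD R1, B        R1 ← [R1] + [B]
MOV R2, C        R2 ← [C]
ADD R2, D        R2 ← [R2] + [D]
MUL R1, R2       R1 ← [R1] * [R2]
MOV X, R1        X ← [R1]

One-address (accumulator):
LOAD A           AC ← [A]
ADD B            AC ← [AC] + [B]
STORE T          T ← [AC]
LOAD C           AC ← [C]
ADD D            AC ← [AC] + [D]
MUL T            AC ← [AC] * [T]
STORE X          X ← [AC]

Zero-address (stack):
PUSH A, PUSH B, ADD, PUSH C, PUSH D, ADD, MUL, POP X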

4) Instruction execution & straight line sequencing

● The processor control circuits use information in PC to fetch & execute instructions one
at a time in order of increasing address.
● This is called straight line sequencing.
● Executing an instruction is a two-phase procedure.
● 1st phase, "instruction fetch": the instruction is fetched from the memory location whose address is in the PC.
● This instruction is placed in the instruction register (IR) in the processor.
● 2nd phase, "instruction execute": the instruction in the IR is examined to determine which operation is to be performed.

5) Branching

● A branch is a type of instruction that loads a new value into the program counter.

● The processor then fetches and executes the instruction at this new address, called the "branch target".
● A conditional branch causes a branch only if a specified condition is satisfied.
● E.g. Branch>0 LOOP is a conditional branch instruction; it branches only if the condition is satisfied.

6) Condition codes

● The required information is recorded in individual bits called "condition code flags".
● These flags are grouped together in a special processor register called the "condition code register" or "status register".
● Individual condition code flags are set to 1 or 0.
● Four commonly used flags are N (negative), Z (zero), V (overflow), and C (carry).

Addressing modes
The term addressing modes refers to the way in which the operand of an instruction is specified. The addressing mode specifies a rule for interpreting or modifying the address field of the instruction before the operand is actually referenced.

Types of Addressing Modes-

In computer architecture, there are the following types of addressing modes:

1. Implied / Implicit Addressing Mode
2. Stack Addressing Mode
3. Immediate Addressing Mode
4. Direct Addressing Mode
5. Indirect Addressing Mode
6. Register Direct Addressing Mode
7. Register Indirect Addressing Mode
8. Relative Addressing Mode
9. Indexed Addressing Mode

10. Base Register Addressing Mode
11. Auto-Increment Addressing Mode
12. Auto-Decrement Addressing Mode

Below, we discuss each of these addressing modes in detail.

1. Implied Addressing Mode-

In this addressing mode,


● The definition of the instruction itself specifies the operands implicitly.
● It is also called as implicit addressing mode.

Examples-

● The instruction “Complement Accumulator” is an implied mode instruction.


● In a stack organized computer, Zero Address Instructions are implied mode
instructions.

(since operands are always implied to be present on the top of the stack)

2. Stack Addressing Mode-

In this addressing mode,


● The operand is contained at the top of the stack.

Example-

ADD
● This instruction simply pops out the two operands contained at the top of the stack.
● The addition of those two operands is performed.

● The result so obtained after addition is pushed again at the top of the stack.

3. Immediate Addressing Mode-

In this addressing mode,


● The operand is specified in the instruction explicitly.
● Instead of address field, an operand field is present that contains the operand.

Examples-

● ADD 10 will increment the value stored in the accumulator by 10.


● MOV R, #20 initializes register R to the constant value 20.

4. Direct Addressing Mode-

In this addressing mode,


● The address field of the instruction contains the effective address of the operand.
● Only one reference to memory is required to fetch the operand.
● It is also called as absolute addressing mode.

Example-

● ADD X will increment the value stored in the accumulator by the value stored at
memory location X.

AC ← AC + [X]

5. Indirect Addressing Mode-

In this addressing mode,


● The address field of the instruction specifies the address of memory location that
contains the effective address of the operand.
● Two references to memory are required to fetch the operand.

Example-

● ADD X will increment the value stored in the accumulator by the value stored at
memory location specified by X.

AC ← AC + [[X]]

6. Register Direct Addressing Mode-

In this addressing mode,


● The operand is contained in a CPU register.
● The address field of the instruction refers to a CPU register that contains the
operand.
● No reference to memory is required to fetch the operand.

Example-

● ADD R will increment the value stored in the accumulator by the content of register
R.

AC ← AC + [R]

NOTE-

It is interesting to note-
● This addressing mode is similar to direct addressing mode.
● The only difference is address field of the instruction refers to a CPU register
instead of main memory.

7. Register Indirect Addressing Mode-

In this addressing mode,


● The address field of the instruction refers to a CPU register that contains the
effective address of the operand.

● Only one reference to memory is required to fetch the operand.

Example-

● ADD R will increment the value stored in the accumulator by the content of
memory location specified in register R.

AC ← AC + [[R]]

NOTE-

It is interesting to note-
● This addressing mode is similar to indirect addressing mode.
● The only difference is address field of the instruction refers to a CPU register.

8. Relative Addressing Mode-

In this addressing mode,

● Effective address of the operand is obtained by adding the content of program
counter with the address part of the instruction.

Effective Address

= Content of Program Counter + Address part of the instruction

NOTE-

● Program counter (PC) always contains the address of the next instruction to be
executed.
● While an instruction is being fetched, the value of the program counter is immediately incremented.
● The value increases irrespective of whether the fetched instruction has completely executed or not.
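A short worked example (the numbers here are hypothetical): suppose an instruction is fetched from address 200 and, after the fetch, the PC holds 201, the address of the next instruction. If the address part of the instruction is 50, the effective address of the operand is 201 + 50 = 251.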

9. Indexed Addressing Mode-

In this addressing mode,


● Effective address of the operand is obtained by adding the content of index
register with the address part of the instruction.

Effective Address

= Content of Index Register + Address part of the instruction

10. Base Register Addressing Mode-

In this addressing mode,

● Effective address of the operand is obtained by adding the content of base register
with the address part of the instruction.

Effective Address

= Content of Base Register + Address part of the instruction

11. Auto-Increment Addressing Mode-

● This addressing mode is a special case of Register Indirect Addressing Mode


where-

Effective Address of the Operand

= Content of Register

In this addressing mode,
● After accessing the operand, the content of the register is automatically
incremented by step size ‘d’.
● Step size ‘d’ depends on the size of operand accessed.
● Only one reference to memory is required to fetch the operand.

Example-

Assume operand size = 2 bytes.


Here,
● After fetching the operand 6B, the register RAUTO will be automatically incremented by 2.
● Then, the updated value of RAUTO will be 3300 + 2 = 3302.
● At memory address 3302, the next operand will be found.

NOTE-

In auto-increment addressing mode,


● First, the operand value is fetched.
● Then, the register RAUTO value is incremented by step size ‘d’.

12. Auto-Decrement Addressing Mode-

● This addressing mode is again a special case of Register Indirect Addressing


Mode where-

Effective Address of the Operand

= Content of Register – Step Size

In this addressing mode,


● First, the content of the register is decremented by step size ‘d’.
● Step size ‘d’ depends on the size of operand accessed.
● After decrementing, the operand is read.
● Only one reference to memory is required to fetch the operand.

Example-

Assume operand size = 2 bytes.
Here,
● First, the register RAUTO will be decremented by 2.
● Then, the updated value of RAUTO will be 3302 - 2 = 3300.
● At memory address 3300, the operand will be found.

NOTE-

In auto-decrement addressing mode,


● First, the register RAUTO value is decremented by step size ‘d’.
● Then, the operand value is fetched.
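As a loose software analogy (C pointers are not hardware addressing modes, but the effective-address arithmetic is similar; the variable names are made up for the sketch):

```c
#include <stdio.h>

int main(void) {
    int x = 10;
    int *p = &x;        /* p plays the role of a register holding an address */

    int imm = 5;        /* immediate: the operand is written in the "instruction" */
    int direct = x;     /* direct: fetch the operand from a named location */
    int reg_ind = *p;   /* register indirect: the address comes from "register" p */

    int arr[3] = { 1, 2, 3 };
    int *q = arr;
    int a = *q++;       /* auto-increment: use the address, then advance it */
    int b = *--q;       /* auto-decrement: step the address back, then use it */

    printf("%d %d %d %d %d\n", imm, direct, reg_ind, a, b);
    return 0;
}
```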

Pushdown Stacks
Of the data types that support insert and remove for collections of objects,
the most important is called the pushdown stack.

A pushdown stack is an abstract data type (ADT) that comprises two basic operations: insert (push) a new item, and remove (pop) the item that was most recently inserted. Items of a pushdown stack are removed according to a last-in, first-out (LIFO) discipline.

In computer science, a stack is a last in, first out (LIFO) abstract data type
and data structure. A stack can have any abstract data type as an element,
but is characterized by only two fundamental operations: push and pop. The
push operation adds to the top of the list, hiding any items already on the
stack, or initializing the stack if it is empty. The pop operation removes an item
from the top of the list, and returns this value to the caller. A pop either reveals
previously concealed items, or results in an empty list.

A stack is a restricted data structure, because only a small number of


operations are performed on it. The nature of the pop and push operations
also means that stack elements have a natural order. Elements are removed
from the stack in the reverse order to the order of their addition: therefore, the
lower elements are typically those that have been in the list the longest.

Below, a stack for the integer data type is implemented using a structure with an array member; it could later be converted into a generic implementation.
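Here is that implementation as a minimal sketch (a fixed-size integer stack; the names Stack, stack_push, and stack_pop are chosen for the example):

```c
#include <stdbool.h>
#include <stdio.h>

#define STACK_MAX 100

typedef struct {
    int data[STACK_MAX];   /* array member holding the items */
    int top;               /* index of the next free slot */
} Stack;

void stack_init(Stack *s) { s->top = 0; }

bool stack_is_empty(const Stack *s) { return s->top == 0; }

/* push: insert a new item on top of the stack */
bool stack_push(Stack *s, int value) {
    if (s->top == STACK_MAX) return false;   /* overflow */
    s->data[s->top++] = value;
    return true;
}

/* pop: remove and return the most recently inserted item (LIFO) */
bool stack_pop(Stack *s, int *out) {
    if (stack_is_empty(s)) return false;     /* underflow */
    *out = s->data[--s->top];
    return true;
}

int main(void) {
    Stack s;
    int v;
    stack_init(&s);
    stack_push(&s, 1);
    stack_push(&s, 2);
    while (stack_pop(&s, &v))
        printf("%d\n", v);    /* prints 2 then 1: last in, first out */
    return 0;
}
```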

Subroutine

A set of instructions which is used repeatedly in a program can be referred to as a subroutine. Only one copy of these instructions is stored in memory. When a subroutine is required, it can be called many times during the execution of a particular program. A call-subroutine instruction calls the subroutine. Care should be taken while returning from a subroutine, as a subroutine can be called from different places in the program.

The content of the PC must be saved by the call-subroutine instruction to make a correct return to the calling program.

Figure – Process of subroutine in a program
The subroutine linkage method is the way in which the computer calls and returns from a subroutine. The simplest way of subroutine linkage is saving the return address in a specific location, such as a register, which can be called a link register.

Subroutine Nesting:

Subroutine nesting is a common programming practice in which one subroutine calls another subroutine.

Figure – Subroutine calling another subroutine
From the above figure, assume that when subroutine 1 calls subroutine 2, the return address of subroutine 2 should be saved somewhere. So if a link register stores the return address of subroutine 1, this will be overwritten by the return address of subroutine 2. As the last subroutine called is the first one to be returned from (last-in, first-out order), a stack data structure is the most efficient way to store the return addresses of subroutines.

Figure – Return address of subroutine is stored in stack memory
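The same last-in, first-out behaviour can be seen from a high-level language, where the compiler and hardware manage the return addresses on the stack automatically. A small illustrative C sketch (the function names are made up):

```c
#include <stdio.h>

void subroutine2(void) {
    printf("in subroutine 2\n");
}   /* returning pops subroutine2's return address: last in, first out */

void subroutine1(void) {
    printf("in subroutine 1\n");
    subroutine2();                 /* nested call pushes a second return address */
    printf("back in subroutine 1\n");
}

int main(void) {
    subroutine1();                 /* the call pushes the first return address */
    printf("back in main\n");
    return 0;
}
```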

----------Complete First Unit---------------
