
Fundamentals of Computers:

Introduction to Computers - Computer Definition, Characteristics of Computers, Evolution and History of Computers, Types of Computers, Basic Organisation of a Digital Computer; Number Systems – different types, conversion from one number system to another; Computer Codes – BCD, Gray Code, ASCII, and Unicode; Boolean Algebra – Boolean Operators with Truth Tables; Types of Software – System Software and Utility Software; Computer Languages - Machine Level, Assembly Level & High-Level Languages, Translator Programs – Assembler, Interpreter and Compiler; Planning a Computer Program - Algorithm, Flowchart and Pseudocode with Examples. Classification of Digital Computer Systems: Microcomputers, Minicomputers, Mainframes, Supercomputers.

Computer:
A computer is an electronic device that is capable of receiving information or data and performing a series of operations in accordance with a set of instructions, producing results in the form of data or information. A computer is a machine capable of solving problems and manipulating data: it accepts data, processes it by performing mathematical and logical operations, and gives us the desired output.
Therefore, we may define a computer as an electronic device that transforms data into
information.

CHARACTERISTICS OF COMPUTER
Let us now identify the major characteristics of a computer. These are:
1. Speed
As you know, computers work very fast. Calculations that take hours to complete manually take a computer only a fraction of a second, and it needs just a few minutes to process huge amounts of data and give the result.

2. Accuracy
The degree of accuracy of the computer is very high, and every calculation is performed with the same accuracy. The accuracy level is determined by the design of the computer. Errors in computers are mainly due to human mistakes and inaccurate data.

3. Diligence
A computer is free from tiredness, lack of concentration, fatigue, etc. It can work for hours
without any errors.

4. Versatility
The computer is highly versatile. You can use it for a number of tasks simultaneously such as
inventory management, preparation of electrical bills, preparation of pay cheques, etc. Similarly,
in libraries computers can be used for various library housekeeping operations like acquisition,
circulation, serial control, etc., and also by students for searching library books on the computer
terminal.

5. Power of Remembering
The computer has the power of storing a large amount of information or data. Any information
can be stored and recalled whenever required for any number of years. It depends entirely upon
you how much data you want to store in a computer and when to retrieve or delete stored data.

6. Dumb Machine with no IQ
A computer is a dumb machine and it cannot do any work without instructions from the user. It
performs the instructions at a tremendous speed and with great accuracy as it has the power of
logic. It is for you to decide what you want to do and in which sequence. So, a computer cannot take decisions on its own as human beings can.

7. Storage
The computer has an in-built memory where it can store huge amounts of data. You can also
store data in secondary storage devices such as CDs, DVDs, and pen drives which can be kept
outside the computer and can be carried to other computers.

GENERATION OF COMPUTERS
The history of computer development is often discussed in terms of different generations of computing devices. The first generation of computers appeared in the mid-1940s, and the computer has undergone rapid changes over the last seven decades. This period, during which the evolution of the computer took place, can be divided into five distinct phases known as Generations of Computers, presented in the table below:

Generation   Period           Technology
First        1946-59          Based on vacuum tube technology
Second       1957-64          Transistor-based technology replaced vacuum tubes
Third        1965-70          Integrated circuit (IC) technology developed
Fourth       1970-90          Microprocessors developed
Fifth        1990-till date   Use of Bio-Chip technology

TYPES OF COMPUTERS
Present-day computers can be categorized as below:
a) Super Computer
Supercomputers are the fastest computers and are very expensive. These are employed for
specialized applications that require immense amounts of mathematical calculations. For
example, weather forecasting requires a supercomputer. Other uses of supercomputers include
animated graphics, fluid dynamic calculations, nuclear energy research, and petroleum
exploration.

b) Mainframe Computer
It is a very large and expensive computer, capable of supporting hundreds or even thousands of users simultaneously. Mainframes are just below supercomputers in the hierarchy that starts with a simple microprocessor (in watches, for example) at the bottom and moves to supercomputers at the top. In some ways, mainframes are more powerful than supercomputers
because they support simultaneous programs. But, supercomputers can execute a single
program faster than a mainframe.
The chief difference between a supercomputer and a mainframe is that a supercomputer
channels all its power into executing a few programs as fast as possible. In contrast, a
mainframe uses its power to execute many programs concurrently.

c) Mini Computer
A minicomputer is mid-sized in both size and power; it lies between workstations and mainframes. In
the past decade, the distinction between large minicomputers and small mainframes has
blurred. In general, a minicomputer is a multiprocessing system capable of supporting from 4 to
about 200 users simultaneously.

d) Micro Computer
i. Desktop Computer: a personal or microcomputer small enough to fit on a desk.
ii. Laptop Computer: a portable computer complete with an integrated screen and keyboard. It is generally smaller than a desktop computer and larger than a notebook computer.
iii. Palmtop Computer/Digital Diary/Notebook/PDA (Personal Digital Assistant): a hand-sized computer. A palmtop has no keyboard; its screen serves as both an input and an output device.

e) Workstations
It is a terminal or desktop computer in a network. In this context, the workstation is just a generic
term for a user’s machine (client machine) in contrast to a “server” or “mainframe.”

HARDWARE AND SOFTWARE


Hardware
Hardware refers to the physical equipment used for the input, processing, output, and storage
activities of a computer system. It consists of mechanical and electronic devices, which we are
able to see and touch easily. Some of them are the central processing unit (CPU), primary
storage devices, secondary storage devices, input, and output units, and communication
devices. These are explained below:-
● Central processing unit (CPU): It manipulates the data and controls the tasks performed by
the other components.
● Primary storage: It temporarily stores data and program instructions during processing.
● Primary memory (main memory): This comprises RAM (Random Access Memory/Read-Write Memory) and ROM (Read-Only Memory).
● Secondary storage: These devices store data and programs for future use. Examples are hard disks (local and external), optical discs (CD-R, CD-RW, DVD-R, DVD-RW), pen drives, memory cards, etc.
● Communication devices: These are used for the communication or flow of data from one computer to another. Examples are modems, switches, routers, TV tuner cards, etc.

Software
A computer cannot do anything on its own. It has to be guided by the user. We have to give a
sequence of instructions to the computer in order to do any specific job. Software is simply a
computer program or a set of instructions. The software guides the computer at every step
indicating where to start and stop during a particular job. The process of software development
is called programming.
Types of software
There are two types of software, namely, system software and application software.
System software:
It is a general-purpose program designed to perform tasks such as controlling all operations required to move data into and out of the computer. It communicates with the keyboard, printer, card reader, disks, tapes, etc., and monitors the use of hardware resources such as memory and the CPU. System software acts as an interface between hardware and application software. Remember that it is not possible to run application software without system software. Examples of system software are Disk Operating System (DOS), Windows, Unix/Linux, Mac OS X, etc.
Application software:
It is a set of programs written to perform specific tasks for the users of the computer. These programs are developed in high-level languages to help the user get the computer to perform various tasks. Examples of application software are MS Office, Macromedia (Dreamweaver, Flash, Freehand), Adobe (PageMaker, PhotoShop), LIBSYS, SOUL, WINISIS, KOHA, etc.

BASIC COMPUTER OPERATIONS


A computer basically performs five major operations or functions:
● Accepts data or instructions by way of input.
● Stores data,
● Processes data as required by the user,
● Gives results in the form of output, and
● Controls all operations inside a computer.

Input
This is the process of entering data and programs into the computer system. The input unit conveys data from us to the computer in an organized manner for processing.
Storage
The data and instructions are saved/stored in a storage unit. The storage unit
performs the following major functions:
● All data and instructions, before and after processing, are stored here, and
● Intermediate results of processing are also stored here.
Processing
The task of performing operations like arithmetic and logical operations is called processing.
The Central Processing Unit (CPU) takes data and instructions from the storage unit and makes all sorts of calculations based on the instructions given and the type of data provided. After this, the data is sent back to the storage unit.

Output
This is the process of producing results from the data for getting useful information. The output
produced by the computer after processing is stored inside the computer before it is given to
you in human-readable form. The output is also stored inside the computer for further
processing.

Control
The controlling of all operations like input, processing, and output is performed by the control unit.
It takes care of the step-by-step processing of all operations inside the computer.

COMPUTER SYSTEM
In order to carry out its operations, a computer system is divided into separate functional units. They are
1) the Arithmetic logic unit (ALU),
2) the Control unit (CU), and
3) the Central processing unit (CPU), which the ALU and CU together make up.
All these units are known as functional units.

Arithmetic Logic Unit (ALU)
The processing of data and instructions is performed by the Arithmetic Logic Unit. The major operations performed by the ALU are addition, subtraction, multiplication, division, logic, and comparison. For processing, data is transferred from the storage unit to the ALU. After processing, the output is returned to the storage unit for further processing or for storage.
Control Unit (CU)
The next component of a computer is the Control Unit, which acts like a supervisor, seeing that things are done in the proper way. The control unit determines the sequence in which computer programs and instructions are to be executed. Activities like the processing of programs stored in the main memory, interpretation of the instructions, and issuing of signals for the other units of the computer to execute them are carried out by the CU. It coordinates the activities of the computer's peripheral equipment, which includes the input and output devices. Therefore, it is the manager of all the operations mentioned in the previous section.

Central Processing Unit (CPU)
The ALU and the CU of a computer system are jointly known as the central processing unit (CPU). You may call the CPU the brain of any computer system. Just like a brain, it takes all major decisions, makes all sorts of calculations, and directs the different parts of the computer by activating and controlling their operations. The CPU is the device that interprets and executes instructions.

Number System
All digital computers store numbers, letters, and other characters in coded form. The code used
to represent characters is the Binary Code – i.e. a code made up of bits called Binary Digits.
Every character is represented by a string of “0s” and “1s” – the only digits found in the binary
numbering system.
“0” or “1” = 1 bit (Binary Digit)
8 bits = 1 Byte (1 character)
1024 Bytes = 1 KB (Kilobyte)
1024 KB = 1 MB (Megabyte)
1024 MB = 1 GB (Gigabyte)
1024 GB = 1 TB (Terabyte)

When data is typed into a computer, the keyboard converts each keystroke into a binary character code. This code is then transmitted to the computer. When the computer transmits the data to any device, each individual character is communicated in binary code. It is then converted back to the specific character while displaying or printing the data.

Early number systems consisted of symbols like I for 1, II for 2, III for 3, etc. Each symbol represented the same value irrespective of its position in the number. This approach is called an additive approach. As time passed, positional numbering systems were developed. In such a system the number of symbols is few, and they represent different values depending on the position they occupy.
Decimal Number System (Base 10)
In the decimal system the successive positions to the left of the decimal point represent units,
tens, hundreds, thousands etc.
For example, if we consider the number 7762, the digit 2 represents the number of units, 6 the number of tens, 7 the number of hundreds, and 7 the number of thousands:
(7 x 1000) + (7 x 100) + (6 x 10) + (2 x 1) = 7762
Thus, as we move one position to the left, the value of the digit increases ten times. We can see that the position of a digit affects its value; such number systems are therefore called positional number systems. The number of symbols used to represent numbers in a system is called the base of that system. In short, the value of each digit in a number is determined by:
● The digit itself,
● The position of the digit in the number, and
● The base of the system.
The Roman numbering system uses symbols like I, II, III, IV, V, etc. to represent the decimal numbers 1, 2, 3, 4, and 5. As we can see, this follows an additive approach and hence is not conducive to arithmetic.

Binary Number System (Base 2)


We now come to a different number system – the binary number system. The binary number system has a base of two, and the digits used are “0” and “1”. In this number system, as we move to the left, the value of each place is two times that of its predecessor. Thus the place values are: 64 32 16 8 4 2 1

Converting Decimal to Binary
In conversion from decimal to any other number system, the steps to be followed are:
● Divide the decimal number by the base (here, 2).
● Note the remainder in one column and divide the quotient again by the base.
● Repeat this process until the quotient is reduced to zero.
The result is then read from the remainders, from bottom to top.

Example: Convert 65(10) to base 2

Base   Number   Remainder
2      65       1
2      32       0
2      16       0
2       8       0
2       4       0
2       2       0
        1

Reading the remainders from bottom to top, the binary equivalent of 65 is 1000001.
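
The repeated-division procedure can also be written as a short program. Below is a minimal Python sketch; the function name to_binary is our own, chosen for illustration.

def to_binary(n):
    # Convert a non-negative integer to a binary string by repeated
    # division by 2, collecting the remainders.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # note the remainder
        n //= 2                    # divide the quotient again
    return "".join(reversed(digits))  # remainders read bottom-to-top

print(to_binary(65))  # prints 1000001, matching the table above
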
Converting Binary to Decimal
The decimal equivalent of 1000001 is
= (1*2^6) + (0*2^5) + (0*2^4) + (0*2^3) + (0*2^2) + (0*2^1) + (1*2^0)
= (1*64) + (0*32) + (0*16) + (0*8) + (0*4) + (0*2) + (1*1)
= 64 + 0 + 0 + 0 + 0 + 0 + 1 = 65
Thus the decimal equivalent of 1000001 is 65.
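
The positional expansion can likewise be sketched in Python (the helper name to_decimal is ours); note that Python's built-in int(s, 2) performs the same conversion.

def to_decimal(bits):
    # Each step shifts the running value one place left (times 2)
    # and adds the next bit - the positional expansion shown above.
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(to_decimal("1000001"))  # prints 65
print(int("1000001", 2))      # the built-in gives the same result
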
Octal Number System (Base 8)
A commonly used positional system is the octal system, which has a base of 8. The place values increase from right to left as 1, 8, 64, 512, 4096, ....

Converting Decimal to Octal
The steps are the same as before: divide the decimal number by the base (here, 8), noting the remainders, until the quotient is reduced to zero.
Example: The decimal number is 224

Base   Number   Remainder
8      224      0
8       28      4
         3

Reading the remainders from bottom to top, the octal equivalent of 224 is 340.

Converting Octal to Decimal
The octal number is 340
= (3*8^2) + (4*8^1) + (0*8^0)
= (3*64) + (4*8) + (0*1)
= 192 + 32 + 0 = 224
Thus the decimal equivalent of octal 340 is 224.
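
The same repeated-division idea works for any base; only the divisor changes. A minimal Python sketch, with our own illustrative helper name to_base (valid for bases 2 to 10, whose digits are 0-9):

def to_base(n, base):
    # Repeated division by `base`; the remainders, read in reverse,
    # give the digits of the result.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

print(to_base(224, 8))  # prints 340
print(int("340", 8))    # and back to decimal: prints 224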

Binary-Octal Equivalents

Binary   Octal
000      0
001      1
010      2
011      3
100      4
101      5
110      6
111      7

Converting from Binary to Octal
The binary number must be divided into groups of three from the binary point – to the right in the case of the fractional portion and to the left in the case of the integer portion. Each group can then be replaced with its octal equivalent.
Example: Binary 101010101010100
101 010 101 010 100
  5   2   5   2   4
Thus binary 101010101010100 is 52524 in octal.
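
This grouping rule can be sketched in Python as follows; binary_to_octal is our own illustrative name, and the sketch handles only the integer portion (no fractional part):

def binary_to_octal(bits):
    # Left-pad to a multiple of 3, group bits in threes from the right,
    # and map each group to one octal digit.
    bits = bits.zfill(-(-len(bits) // 3) * 3)
    groups = [bits[i:i+3] for i in range(0, len(bits), 3)]
    return "".join(str(int(group, 2)) for group in groups)

print(binary_to_octal("101010101010100"))  # prints 52524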

Hexadecimal Number System (Base 16)


There is another commonly used positional system, the hexadecimal system. The hexadecimal system has a base of 16, so the place values increase from right to left as 1, 16, 256, 65536, .... We need to keep a simple table of the hexadecimal digits in mind before we attempt any conversion to or from hexadecimal.

Converting Decimal to Hexadecimal
The steps are the same as before: divide the decimal number by the base (here, 16), noting the remainders, until the quotient is reduced to zero.
Example: The decimal number is 370

Base   Number   Remainder
16     370      2
16      23      7
          1

Reading the remainders from bottom to top, the hexadecimal equivalent of 370 is 172.

Converting Hexadecimal to Decimal
The hexadecimal number 172
= (1*16^2) + (7*16^1) + (2*16^0)
= (1*256) + (7*16) + (2*1)
= 256 + 112 + 2 = 370
Thus the decimal equivalent of hexadecimal 172 is 370.
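
As a sketch, the division method for base 16 differs only in that remainders above 9 become the letters A-F; the helper name to_hex is our own:

def to_hex(n):
    # Repeated division by 16; remainders above 9 become letters A-F.
    symbols = "0123456789ABCDEF"
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(symbols[n % 16])
        n //= 16
    return "".join(reversed(digits))

print(to_hex(370))     # prints 172
print(int("172", 16))  # and back to decimal: prints 370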

Converting Binary to Hexadecimal
Each hexadecimal digit is represented by 4 binary digits.

Binary   Hexadecimal
0000     0
0001     1
0010     2
0011     3
0100     4
0101     5
0110     6
0111     7
1000     8
1001     9
1010     A
1011     B
1100     C
1101     D
1110     E
1111     F

To convert a binary number to its hexadecimal equivalent, we split the quantity into groups of four from the right, as before. Each group of four is directly converted into its hexadecimal equivalent. We may add zeros to the left of the number if necessary.

Example: Binary 10101011000010
Padded to a multiple of four bits: 0010 1010 1100 0010
   2    A    C    2
Thus binary 10101011000010 is 2AC2 in hexadecimal.
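
A minimal Python sketch of this grouping rule (integer part only; binary_to_hex is our own illustrative name):

def binary_to_hex(bits):
    # Left-pad to a multiple of 4, group bits in fours from the right,
    # and map each group to one hexadecimal digit.
    bits = bits.zfill(-(-len(bits) // 4) * 4)
    groups = [bits[i:i+4] for i in range(0, len(bits), 4)]
    return "".join("0123456789ABCDEF"[int(group, 2)] for group in groups)

print(binary_to_hex("10101011000010"))  # prints 2AC2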

Computer Languages
Machine Language (low-level language)
Low-level language is the only language that the computer can understand. Low-level language is also known as machine language. Machine language contains only two symbols, 1 and 0, and all machine language instructions are written in the binary digits 1 and 0. A computer can directly understand machine language.

Assembly Language (middle-level language)


Middle-level language is a computer language in which the instructions are created using
symbols such as letters, digits, and special characters. Assembly language is an example of
middle-level language. In assembly language, we use predefined words called mnemonics.
Binary code instructions in low-level language are replaced with mnemonics and operands in
middle-level language. But the computer cannot understand mnemonics, so we use a translator
called Assembler to translate mnemonics into machine language.

An assembler is a translator which takes assembly code as input and produces machine code
as output. That means, the computer cannot understand middle-level language, so it needs to
be translated into a low-level language to make it understandable by the computer. Assembler is
used to translate middle-level language into low-level language.

High-Level Language
High-level language is a computer language that users can understand. A high-level language is very similar to human languages and has a set of grammar rules that make instructions easier to write. Every high-level language has a set of predefined words known as keywords and a set of rules known as syntax for creating instructions. A high-level language is easier for users to understand, but the computer cannot understand it directly; it needs to be converted into low-level language first. We use a compiler or an interpreter to convert high-level language to low-level language.

Languages like FORTRAN,C, C++, JAVA, Python, etc., are examples of high-level languages.
All these programming languages use human-understandable language like English to write
program instructions. These instructions are converted to low-level language by the compiler or
interpreter so that they can be understood by the computer.
A compiler processes the whole program at once. It analyzes all the language statements to check whether they are correct; if it comes across something incorrect, it gives an error message. If no errors are spotted, the compiler converts the source code into machine code and links the different code files into a program that can be run (such as a .exe file). Finally, the program runs.
An interpreter, in contrast, neither links the files nor generates machine code. The source statements are translated and executed line by line as the program runs.

Computer Codes
Binary Coded Decimal, or BCD
It is another way of representing decimal numbers in binary form. It is a form of binary encoding where each digit of a decimal number is represented by its own group of bits. This encoding can be done with 4 bits or 8 bits per digit (usually 4-bit is preferred). It is a fast and simple encoding compared with full conversion into the binary system. BCD is generally used in digital displays, where the manipulation of data is quite a task; it helps because each digit can be manipulated as a separate single sub-circuit.
The BCD equivalent of a decimal number is written by replacing each decimal digit in the integer and fractional parts with its four-bit binary equivalent. The BCD code is more precisely known as the 8421 BCD code, with 8, 4, 2, and 1 representing the weights of the different bits in the four-bit group, starting from the MSB and proceeding towards the LSB. This feature makes it a weighted code, which means that each bit in the four-bit group representing a given decimal digit has an assigned weight.
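
As an illustration, encoding a decimal number digit by digit into 8421 BCD takes only a couple of lines of Python; to_bcd is our own helper name:

def to_bcd(decimal_string):
    # Replace each decimal digit with its 4-bit binary equivalent.
    return " ".join(format(int(digit), "04b") for digit in decimal_string)

print(to_bcd("35"))  # prints 0011 0101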

BCD-to-Decimal Conversion
The conversion from binary-coded decimal to decimal is the exact opposite of the above. Simply divide the BCD number into groups of four bits, starting with the least significant bit, and then write the decimal digit represented by each 4-bit group. Add zeros at the left (most significant) end if required to produce a complete 4-bit grouping. So, for example, 110101 in BCD would become 0011 0101, or 35 in decimal.
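
The reverse direction can be sketched the same way (bcd_to_decimal is our own illustrative name):

def bcd_to_decimal(bits):
    # Left-pad to a multiple of 4 bits, then read each 4-bit group
    # as one decimal digit.
    bits = bits.zfill(-(-len(bits) // 4) * 4)
    groups = [bits[i:i+4] for i in range(0, len(bits), 4)]
    return "".join(str(int(group, 2)) for group in groups)

print(bcd_to_decimal("110101"))  # pads to 0011 0101 and prints 35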

The conversion of BCD to decimal or decimal to BCD is a relatively straightforward task, but we need to remember that BCD numbers are decimal numbers, not binary numbers, even though they are represented using bits. The BCD representation of a decimal number is important to understand because the input and output of most microprocessor-based systems is in decimal.

However, while BCD is easy to code and decode, it is not an efficient way to store numbers. In
the standard 8421 BCD encoding of decimal numbers, the number of individual data bits
needed to represent a given decimal number will always be greater than the number of bits
required for an equivalent binary encoding.

Also, performing arithmetic tasks using binary-coded decimal numbers can be a bit awkward, since each digit cannot exceed 9. The addition of two decimal digits in BCD can create a carry bit of 1 which needs to be added to the next group of 4 bits. If the binary sum with the added carry bit is equal to or less than 9 (1001), the corresponding BCD digit is correct. But when the binary sum is greater than 9, the result is an invalid BCD digit. Therefore it is better to convert BCD numbers into pure binary, perform the required addition, and then convert the result back to BCD before displaying it.
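
A minimal sketch of this convert-add-convert approach (the helper names are our own, repeated here so the example is self-contained):

def to_bcd(decimal_string):
    # Each decimal digit becomes its 4-bit 8421 equivalent.
    return " ".join(format(int(d), "04b") for d in decimal_string)

def bcd_to_decimal(bits):
    # Each left-padded 4-bit group becomes one decimal digit.
    bits = bits.zfill(-(-len(bits) // 4) * 4)
    return "".join(str(int(bits[i:i+4], 2)) for i in range(0, len(bits), 4))

def bcd_add(a_bits, b_bits):
    # Decode both BCD operands to integers, add them in pure binary,
    # and re-encode the sum as BCD.
    total = int(bcd_to_decimal(a_bits)) + int(bcd_to_decimal(b_bits))
    return to_bcd(str(total))

print(bcd_add("00110101", "00100111"))  # 35 + 27 = 62 -> prints 0110 0010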
