
INTRODUCTION TO COMPUTERS:

Everyone must have used, seen, or read about computers. This is because computers are an integral part of our everyday existence, in places such as schools, banks, shops, railway stations, hospitals and hotels. Computers are present everywhere, making our work easier and faster for us. As they are such integral parts of our lives, everyone must know what they are and how they function.
The literal meaning of computer is to compute (calculate), so it is a device that can calculate. However, modern computers can do a lot more than calculate. Computers can accomplish our tasks easily and repeatedly, without getting bored and without committing errors.
The term computer is derived from the Latin word ‘computare’, which means to calculate, and today it refers to a programmable machine. A computer cannot do anything without a program. It represents decimal numbers internally as strings of binary digits. The word ‘computer’ usually refers to the central processing unit (CPU) plus internal memory.
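For illustration only, here is a tiny sketch (written in Python, which is not part of the text above but is convenient for showing bit patterns) of how a decimal number corresponds to a string of binary digits:

    # Sketch: the decimal value 13 and the binary digit string a computer would store
    n = 13
    print(format(n, "08b"))    # prints 00001101 - the bit pattern
    print(int("00001101", 2))  # prints 13 - converting the bit string back to decimal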
A computer is an advanced electronic device that takes raw data as input from the user, processes this data under the control of a set of instructions (called a program), gives the result (output), and saves the output for future use. It can perform both numerical and non-numerical (arithmetic and logical) operations.

Computer is an electronic device that receives input, stores or processes the input as per user
instructions and provides output in desired format.

A computer is a machine that can be programmed to accept data (input), process it into useful
information (output), and store it away (in a secondary storage device) for safekeeping or later
reuse. The processing of input to output is directed by the software but performed by the
hardware.

Charles Babbage is called the “grandfather” of the computer. The first mechanical computer designed by Charles Babbage was called the Analytical Engine. It used punched cards as a form of read-only memory.

Basic characteristics of a computer:


1. Speed: A computer can work very fast. It takes only a few seconds for calculations that would take us hours to complete. You will be surprised to know that a computer can perform millions (1,000,000) of instructions, and even more, per second.
Therefore, we measure the speed of a computer in terms of microseconds (10⁻⁶ of a second) or nanoseconds (10⁻⁹ of a second). From this you can imagine how fast your computer performs work.
2. Accuracy: The degree of accuracy of a computer is very high and every calculation is performed with the same accuracy. Errors, if any, are usually due to inaccurate input data or faulty instructions rather than to the machine itself.
3. Diligence: A computer is free from tiredness, lack of concentration, fatigue, etc. It can work for hours without making any error. If millions of calculations are to be performed, a computer will perform every calculation with the same accuracy. Due to this capability it surpasses human beings in routine types of work.
4. Versatility: It means the capacity to perform completely different types of work. You may use your computer to prepare payroll slips; the next moment you may use it for inventory management or to prepare electricity bills.
5. Power of Remembering: A computer has the power of storing any amount of information or data. Any information can be stored and recalled for as long as you require it, for any number of years. It depends entirely upon you how much data you want to store in a computer and when to erase or retrieve it.
6. No IQ: A computer is a dumb machine and cannot do any work without instructions from the user. It performs the instructions at tremendous speed and with accuracy. It is up to you to decide what you want to do and in what sequence; a computer cannot take its own decisions as you can.
7. No Feeling: It does not have feelings, emotions, taste, knowledge or experience. Thus it does not get tired even after long hours of work, and it does not distinguish between users.
8. Storage: The Computer has an in-built memory where it can store a large amount of data. You
can also store data in secondary storage devices such as floppies, which can be kept outside your
computer and can be carried to other computers.

Components of a computer system

Central Processing Unit (CPU)

The central processing unit (CPU) is the unit which performs most of the processing inside a computer. To control instructions and data flow to and from other parts of the computer, the CPU relies heavily on a chipset, which is a group of microchips located on the motherboard.
The CPU has two components:
• Control Unit: extracts instructions from memory and decodes and executes them
• Arithmetic Logic Unit (ALU): handles arithmetic and logical operations
To function properly, the CPU relies on the system clock, memory, secondary storage, and data
and address buses.
This term is also known as a central processor, microprocessor or chip.
The CPU is the heart and brain of a computer. It receives data input, executes instructions, and
processes information. It communicates with input/output (I/O) devices, which send and receive
data to and from the CPU. Additionally, the CPU has an internal bus for communication with the
internal cache memory, called the backside bus. The main bus for data transfer to and from the
CPU, memory, chipset, and AGP socket is called the front-side bus.
The CPU contains internal memory units, which are called registers. These registers contain data,
instructions, counters and addresses used in the ALU’s information processing.
Some computers utilize two or more processors. These consist of separate physical CPUs located
side by side on the same board or on separate boards. Each CPU has an independent interface,
separate cache, and individual paths to the system front-side bus. Multiple processors are ideal
for intensive parallel tasks requiring multitasking. Multicore CPUs are also common, in which a
single chip contains multiple CPUs.
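As a rough illustration of how software can spread independent work across several cores or processors, the short Python sketch below uses the standard multiprocessing module; the square task and the input values are invented for the example.

    # Sketch: dividing independent tasks among several CPU cores
    from multiprocessing import Pool

    def square(x):
        # stands in for any CPU-intensive, independent piece of work
        return x * x

    if __name__ == "__main__":
        with Pool(processes=4) as pool:            # four worker processes
            results = pool.map(square, range(8))   # the work is split across cores
        print(results)                             # [0, 1, 4, 9, 16, 25, 36, 49]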

Arithmetic Logic Unit (ALU)


An arithmetic logic unit (ALU) is a major component of the central processing unit of a
computer system. It does all processes related to arithmetic and logic operations that need to be
done on instruction words. In some microprocessor architectures, the ALU is divided into the
arithmetic unit (AU) and the logic unit (LU).
An ALU can be designed by engineers to calculate any operation. As the operations become
more complex, the ALU also becomes more expensive, takes up more space in the CPU and
dissipates more heat. That is why engineers make the ALU powerful enough to ensure that the
CPU is also powerful and fast, but not so complex as to become prohibitive in terms of cost and
other disadvantages.
The arithmetic logic unit is that part of the CPU that handles all the calculations the CPU may
need. Most of these operations are logical in nature. Depending on how the ALU is designed, it
can make the CPU more powerful, but it also consumes more energy and creates more heat.
Therefore, there must be a balance between how powerful and complex the ALU is and how
expensive the whole unit becomes. This is why faster CPUs are more expensive, consume more
power and dissipate more heat.
The main functions of the ALU are to do arithmetic and logic operations, including bit shifting
operations. These are essential processes that need to be done on almost any data that is being
processed by the CPU.

ALUs routinely perform the following operations:


(i) Logical Operations: These include AND, OR, NOT, XOR, NOR, NAND, etc.

(ii) Bit-Shifting Operations: This pertains to shifting the positions of the bits by a certain number of places to the left or right; shifting left multiplies the value by a power of two, while shifting right divides it by a power of two (see the sketch after this list).

(iii) Arithmetic Operations: This refers to bit addition and subtraction. Although multiplication and division are sometimes implemented, these operations are more expensive to build in hardware; repeated addition can be used to substitute for multiplication and repeated subtraction for division.
An arithmetic logic unit is also known as an integer unit (IU).
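The short Python sketch below is only an analogy for what the ALU computes; the specific values (12, 10, 5) are arbitrary examples.

    # Logical operations on the bit patterns of 12 (1100) and 10 (1010)
    print(12 & 10)   # AND -> 8  (1000)
    print(12 | 10)   # OR  -> 14 (1110)
    print(12 ^ 10)   # XOR -> 6  (0110)

    # Bit-shifting: shifting left by n multiplies by 2**n, shifting right divides by 2**n
    print(5 << 3)    # 40 (5 * 8)
    print(40 >> 3)   # 5  (40 // 8)

    # Arithmetic operations
    print(12 + 10)   # 22
    print(12 - 10)   # 2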

Control Unit (CU)


A control unit (CU) handles all processor control signals. It directs all input and output flow, fetches code for instructions from microprograms and directs other units and modules by providing control and timing signals. The CU is considered the processor's brain because it issues orders to just about everything and ensures correct instruction execution.
A CU takes its input from the instruction and status registers. Its rules of operation, or
microprogram, are encoded in a programmable logic array (PLA), random logic or read-only
memory (ROM).
CU functions are as follows:
• Controls sequential instruction execution
• Interprets instructions
• Guides data flow through different computer areas
• Regulates and controls processor timing
• Sends and receives control signals from other computer devices
• Handles multiple tasks, such as fetching, decoding, execution handling and storing results
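To make the fetch, decode and execute idea concrete, here is a deliberately tiny Python sketch of a control loop for an invented machine; the instruction names (LOAD, ADD, HALT), the accumulator and the program are all hypothetical and chosen only for illustration.

    # Hypothetical fetch-decode-execute cycle (not a real CPU)
    program = [("LOAD", 7), ("ADD", 5), ("ADD", 3), ("HALT", 0)]  # made-up instructions
    accumulator = 0
    pc = 0  # program counter

    while True:
        opcode, operand = program[pc]   # fetch the next instruction
        pc += 1
        if opcode == "LOAD":            # decode and execute it
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand
        elif opcode == "HALT":
            break

    print(accumulator)                  # prints 15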

CUs are designed in two ways:

(i) Hardwired control: Design is based on a fixed architecture. The CU is made up of flip-flops,
logic gates, digital circuits and encoder and decoder circuits that are wired in a specific and fixed
way. When instruction set changes are required, wiring and circuit changes must be made. This
is preferred in a reduced instruction set computing (RISC) architecture, which only has a small
number of instructions.

(ii) Microprogram control: Microprograms are stored in a special control memory and are
based on flowcharts. They are replaceable and ideal because of their simplicity.
Input / output devices.
INPUT DEVICE
An input device is any hardware device that sends data to a computer, allowing you to interact with and control it. A trackball mouse, for example, is an input device.
The most commonly used or primary input devices on a computer are the keyboard and mouse.
However, there are dozens of other devices that can also be used to input data into the computer.
Input Devices:
• Graphics Tablets
• Video Capture Hardware
• Trackballs
• Barcode reader
• Digital camera
• MIDI keyboard
• Gamepad
• Joystick
• Keyboard
• Cameras
• Microphone
• Mouse (pointing device)
• Scanner
• Webcam
• Touchpads
• Electronic Whiteboard
• OMR (Optical Mark Reader)
• OCR (Optical Character Reader)
• Pen Input
• Punch card reader
• MICR (Magnetic Ink character reader)
• Magnetic Tape Drive
Some of the popular input devices are:
1. Keyboard
The keyboard is a basic input device that is used to enter data into a computer or any other
electronic device by pressing keys. It has different sets of keys for letters, numbers, characters,
and functions. Keyboards are connected to a computer through USB or a Bluetooth device for
wireless communication.
2. Mouse
The mouse is a hand-held input device which is used to move the cursor or pointer across the screen. It is designed to be used on a flat surface and generally has left and right buttons and a scroll wheel between them. Laptop computers come with a touchpad that works as a mouse; it lets you control the movement of the cursor or pointer by moving your finger over the touchpad. Some mice come with integrated features such as extra buttons that can be assigned different functions.
The mouse was invented by Douglas C. Engelbart in 1963. Early mice had a roller ball integrated as a movement sensor underneath the device. Modern mouse devices use optical technology that tracks cursor movement with a visible or invisible light beam. A mouse is connected to a computer through different ports depending on the type of computer and the type of mouse.
3. Scanner
A scanner takes pictures and pages of text as input. It scans the picture or document, which is then converted into a digital format or file and displayed on the screen as output. Optical character recognition (OCR) techniques can also be used to convert scanned images of text into editable digital text.
4. Joystick
A joystick is also a pointing input device like a mouse. It is made up of a stick with a spherical
base. The base is fitted in a socket that allows free movement of the stick. The movement of stick
controls the cursor or pointer on the screen.
The first joystick was invented by C. B. Mirick at the U.S. Naval Research Laboratory. Joysticks come in different types, such as displacement joysticks, finger-operated joysticks, hand-operated joysticks, isometric joysticks, and more. With a joystick, the cursor keeps moving in the direction the stick is pushed until it is returned upright, whereas with a mouse the cursor moves only when the mouse moves.
5. Light Pen
A light pen is a computer input device that looks like a pen. The tip of the light pen contains a
light-sensitive detector that enables the user to point to or select objects on the display screen. Its
light-sensitive tip detects the object's location and sends the corresponding signals to the CPU. It is not compatible with LCD screens, so it is not in use today. It also helps you draw on the screen if
needed. The first light pen was invented around 1955 as a part of the Whirlwind project at the
Massachusetts Institute of Technology (MIT).
6. Digitizer
Digitizer is a computer input device that has a flat surface and usually comes with a stylus. It
enables the user to draw images and graphics using the stylus as we draw on paper with a pencil.
The images or graphics drawn on the digitizer appear on the computer monitor or display screen.
The software converts the touch inputs into lines and can also convert handwritten text to
typewritten words.
It can be used to capture handwritten signatures and to trace data or images from paper documents. Furthermore, it is also used to take input in the form of drawings and send the output to CAD (computer-aided design) software such as AutoCAD. Thus, it allows you to convert hand-drawn images into a format suitable for computer processing.
7. Microphone
The microphone is a computer input device that is used to input sound. It receives sound vibrations and converts them into audio signals, which can be sent to a recording medium or converted into digital data and stored in the computer. The microphone also enables the user to telecommunicate with others, and it is used to add sound to presentations and with webcams for video conferencing.
8. Magnetic Ink Character Recognition (MICR)
The MICR input device is designed to read text printed with magnetic ink. MICR is a character recognition technology that makes use of special magnetized ink which is sensitive to magnetic fields. It is widely used by banks to process cheques and by other organizations where security is a major concern. It can process around three hundred cheques a minute with nearly hundred-percent accuracy. The details at the bottom of a cheque (the MICR number) are written in magnetic ink, and a laser printer with MICR toner can be used to print them.
The device reads the details and sends them to a computer for processing. A document printed in magnetic ink is passed through a machine which magnetizes the ink, and the magnetic information is then translated into characters.
9. Optical Character Reader (OCR)
The OCR input device is designed to convert scanned images of handwritten, typed or printed text into digital text. It is widely used in offices and libraries to convert documents and books into electronic files.
It processes and copies the physical form of a document using a scanner. After copying the document, the OCR software converts it into a two-color (black and white) version called a bitmap. The bitmap is then analyzed for light and dark areas: the dark areas are recognized as characters and the light areas as background. OCR is widely used to convert hard-copy legal or historical documents into PDFs, and the converted documents can be edited if required, just like documents created in MS Word.
OUTPUT DEVICE
An output device is any device used to send data from a computer to another device or user.
Most computer data output that is meant for humans is in the form of audio or video. Thus, most
output devices used by humans are in these categories. Examples include monitors, projectors,
speakers, headphones and printers.
OUTPUT DEVICES:
• LCD Projection Panels
• Monitor (LED, LCD, CRT etc)
• Printers (all types)
• Plotters
• Microfiche
• Projector
• Headphones
• Computer Output Microfilm (COM)
• Speaker(s)
• Visual Display Unit
• Film Recorder
Following are some of the important output devices used in a computer.
1. Monitors
Monitors, commonly called Visual Display Units (VDU), are the main output device of a computer. A monitor forms images from tiny dots, called pixels, that are arranged in a rectangular form. The sharpness of the image depends upon the number of pixels.
There are two kinds of viewing screen used for monitors.
• Cathode-Ray Tube (CRT)
• Flat-Panel Display
2. Printer
A printer produces hard copies of the processed data. It enables the user to print images, text or any other information onto paper.
Based on the printing mechanism, the printers are of two types: Impact Printers and Non-impact
Printers.
(a) Impact Printers: They are of two types:
(i) Character Printers
• Dot Matrix printers
• Daisy Wheel printers
(ii) Line printers
• Drum printers
• Chain printers
(b) Non-impact printers: They are of two types:
• Laser printers
• Inkjet printers
3. Projector
A projector is an output device that enables the user to project the output onto a large surface
such as a big screen or wall. It can be connected to a computer and similar devices to project
their output onto a screen. It uses light and lenses to produce magnified texts, images, and
videos. So, it is an ideal output device to give presentations or to teach a large number of people.
Modern projectors (digital projectors) come with multiple input sources, such as HDMI ports for newer equipment and VGA ports that support older devices. Some projectors are designed to support Wi-Fi and Bluetooth as well. They can be fixed to the ceiling, placed on a stand, and so on, and are frequently used for classroom teaching, presentations, home cinemas, etc.
A digital projector can be of two types:
(i) Liquid Crystal Display (LCD) digital projector: This type of digital projector is very popular as it is lightweight and provides crisp output. An LCD projector uses transmissive technology to produce output: it allows the light source, which is a standard lamp, to pass through three coloured liquid-crystal panels. Some colours pass through the panels and some are blocked, and thus images are formed on the screen.
(ii) Digital Light Processing (DLP) digital projector: It has a set of tiny mirrors, a separate mirror for each pixel of the image, and thus provides high-quality images. These projectors are mostly used in theatres as they fulfil the requirement of high-quality video output.

4. Speakers – speakers are attached to computers to facilitate the output of sound; a sound card is required in the computer for speakers to function. The different kinds of speakers range from simple two-speaker output devices all the way up to surround-sound multi-channel units.

5. Headset – this is a combination of speakers and a microphone. It is mostly used by gamers, and is also a great tool for communicating with family and friends over the internet using a VoIP program.

6. Plotter – this generates a hard copy of a digitally depicted design. The design is sent to the
plotter through a graphics card, and the design is formed by using a pen. It is generally used
with engineering applications, and essentially draws a given image using a series of straight
lines.

Both Input–Output Devices:


• Touch Screen
• Modems
• Network cards
• Audio Cards / Sound Card
• Headsets (a headset consists of speakers, which act as an output device, and a microphone, which acts as an input device)
• Facsimile (FAX) machine (it has a scanner to scan the document and also a printer to print the document)

Online and Off line Devices


An offline device is a device which is not connected to the CPU.

When a computer or other device is not turned on or connected to other devices, it is said to be "offline." This is the opposite of being "online," when a device can readily communicate with other devices. Offline can also mean not being connected to the Internet.

Storage devices

Primary Memory (RAM and ROM)


Memory is the storage place where data and instructions are stored; they can be retrieved from memory whenever required. Every computer comes with a certain amount of physical memory, usually referred to as main memory or RAM. You can think of main memory as an array of cells, each cell holding a single byte of information. This means a computer with 1 MB of memory can hold about 1 million bytes of information.
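As a loose analogy only, the Python sketch below uses a bytearray to stand in for main memory: each numbered cell holds one byte and is read or written by its address. The size (1 KB) and the values are arbitrary.

    # Analogy: a bytearray as an array of memory cells, one byte per cell
    memory = bytearray(1024)   # 1 KB of zeroed cells, addresses 0..1023
    memory[42] = 200           # write the value 200 into the cell at address 42
    print(memory[42])          # read it back -> 200
    print(len(memory))         # 1024 cells in total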
Memory is the most essential element of a computing system because without it a computer can't perform simple tasks. Computer memory is of two basic types – primary memory (volatile memory) and secondary memory (non-volatile memory). Random Access Memory (RAM) is volatile memory and Read Only Memory (ROM) is non-volatile memory.

1. Random Access Memory (RAM)


It is a read/write (R/W) memory which is volatile. This means that when the power is turned off, all the contents are destroyed. This memory can be accessed randomly, that is, any byte of memory can be accessed without touching the preceding bytes. RAM is synonymous with main memory, the memory available to programs. RAM is the most common type of memory found in computers and other devices such as printers. There are two basic types of RAM: Dynamic RAM (DRAM) and Static RAM (SRAM).
(i) DRAM (Dynamic RAM)
Dynamic RAM is the more common type. Dynamic RAM needs to be refreshed thousands of times per second.
per second. DRAM stores a bit of data using a transistor and capacitor pair, which together
comprise a memory cell. The capacitor holds a high or low charge (1 or 0, respectively), and the
transistor acts as a switch that lets the control circuitry on the chip read the capacitor’s state of
charge or change it. As this form of memory is less expensive to produce than static RAM, it is
the predominant form of computer memory used in modern computers.
(ii) SRAM (Static RAM)
Static RAM does not need to be refreshed, which makes it faster, but it is more expensive than dynamic RAM. In static RAM, a bit of data is stored using the state of a flip-flop. This form of RAM is more expensive to produce, but is generally faster and requires less power than DRAM and, in modern computers, is often used as cache memory for the CPU.
2. Read Only Memory (ROM)
ROM is non-volatile, which means it retains the stored information even if the power is turned off. This memory is used to store the programs that boot the computer and perform diagnostics. Therefore, ROM is sometimes loosely called read-only RAM.
ROM is of four types:
(i) Masked ROM: In this ROM a bit pattern is permanently recorded by a masking and metallization process, which is an expensive and specialized one. Memory manufacturers are generally equipped to undertake this process.
(ii) PROM (Programmable ROM): A PROM is a memory chip on which data can be written only once. Once a program is written onto a PROM chip, it remains there forever. Unlike RAM, PROM retains its contents when the computer is turned off. The difference between a PROM and a ROM is that a PROM is manufactured as blank memory and programmed later with a special device called a PROM programmer or PROM burner, whereas the ROM is programmed during the manufacturing process. The process of programming a PROM is sometimes called burning a PROM.
(iii) EPROM (Erasable Programmable ROM): An EPROM is a special type of PROM that
can be erased by exposing it to ultraviolet light. Once erased, it can be reprogrammed. An
EPROM is similar to a PROM except that it requires ultraviolet radiation to be erased.
(iv) EEPROM (Electrically Erasable Programmable ROM): EEPROM is a special type of
PROM that can be erased by exposing it to an electrical charge. Like other types of PROM,
EEPROM retains its contents even when the power is turned off. Also, like other types of ROM,
EEPROM is not as fast as RAM. EEPROM is similar to Flash Memory (sometimes called flash
EEPROM). The principal difference is that EEPROM requires data to be written or erased one
byte at a time whereas flash memory allows data to be written or erased in blocks.
Cache Memory:
The speed of the CPU is extremely high compared to the access time of main memory. The slowness of main memory inhibits the performance of the CPU. To decrease the mismatch in operating speed, a small memory chip is attached between the CPU and the main memory whose access time is close to the processing speed of the CPU. It is called cache memory. Cache memory is accessed more quickly than conventional RAM. It is used to store programs or data currently being executed or temporary data frequently used by the CPU.
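The following Python fragment is only an analogy for this idea: a small dictionary plays the role of the cache and a deliberately slow function plays the role of main memory, so repeated requests for the same value are served quickly. The function, delay and values are invented for the example.

    # Analogy for caching: keep recently used results in a small, fast store
    import time

    cache = {}

    def read_value(address):
        if address in cache:      # fast path: the value is already in the "cache"
            return cache[address]
        time.sleep(0.01)          # pretend this is a slow access to "main memory"
        value = address * 2       # made-up stored value
        cache[address] = value    # keep a copy close at hand for next time
        return value

    read_value(7)   # slow the first time
    read_value(7)   # served from the cache, much faster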
To summarize, RAM:
• is also called read/write memory, the main memory or the primary memory;
• stores the programs and data that the CPU requires during execution of a program;
• is a volatile memory, as the data is lost when the power is turned off;
• is further classified into two types – SRAM (Static Random Access Memory) and DRAM (Dynamic Random Access Memory).

Secondary Memory (Hard Disk, Optical Disk)


You know that processor memory, also known as primary memory, is expensive as well as limited. The faster primary memory is also volatile. If we need to store a large amount of data or programs permanently, we need cheaper and permanent memory. Such memory is called secondary memory. Here we will discuss secondary memory devices that can be used to store large amounts of data, audio, video and multimedia files.
Characteristics of Secondary Memory
These are some characteristics of secondary memory, which distinguish it from primary
memory:
• It is non-volatile, i.e. it retains data when power is switched off
• It has large capacity, to the tune of terabytes
• It is cheaper as compared to primary memory
Depending on whether the secondary memory device is built into the computer or is removable, there are two types of secondary memory – fixed and removable.
Hard Disk Drive
A hard disk drive is made up of a series of circular disks called platters arranged one over the other, a small distance apart, around a spindle. The platters are made of a non-magnetic material such as aluminium alloy and coated with a 10–20 nm layer of magnetic material.
Common platter diameters are 3.5 inches for desktop drives and 2.5 inches for laptop drives, and the platters rotate at speeds varying from 4200 rpm (rotations per minute) for personal computers to 15000 rpm for servers. Data is stored by magnetizing or demagnetizing the magnetic coating, and a read/write head mounted on an arm is used to read data from and write data to the disks. A typical modern HDD has a capacity in terabytes (TB).
CD Drive
CD stands for Compact Disk. CDs are circular disks that use optical rays, usually lasers, to read
and write data. They are very cheap, as you can get 700 MB of storage space for less than a dollar. CDs are inserted into CD drives built into the CPU cabinet. They are portable: you can eject the drive tray, remove the CD and carry it with you. There are three types of CDs −
• CD-ROM (Compact Disk – Read Only Memory) − The data on these CDs is recorded by the manufacturer. Proprietary software, audio or video is released on CD-ROMs.
• CD-R (Compact Disk – Recordable) − Data can be written by the user once on a CD-R. It cannot be deleted or modified later.
• CD-RW (Compact Disk – Rewritable) − Data can be written and deleted on these optical disks again and again.
DVD Drive
DVD stands for Digital Versatile Disc (also Digital Video Disc). DVDs are optical disks that can store many times the data held by CDs; a single-layer DVD holds about 4.7 GB compared with 700 MB on a CD. They are usually used to store rich multimedia files that need high storage capacity. DVDs also come in three varieties – read only, recordable and rewritable.
Pen Drive/ Memory Card
Pen drive is a portable memory device that uses solid state memory rather than magnetic fields or
lasers to record data. It uses a technology similar to RAM, except that it is nonvolatile. It is also
called USB drive, key drive or flash memory.
Blu Ray Disk
Blu-ray Disk (BD) is an optical storage medium used to store high definition (HD) video and other multimedia files. BD uses a shorter-wavelength laser than CDs/DVDs, which enables it to focus more tightly on the disk and hence pack in more data. BDs can store up to 128 GB of data.
Optical Disk
An optical disk is any computer disk that uses optical storage techniques and technology to read
and write data. It is a computer storage disk that stores data digitally and uses laser beams
(transmitted from a laser head mounted on an optical disk drive) to read and write data.
An optical disk is primarily used as a portable and secondary storage device. It can store more
data than the previous generation of magnetic storage media, and has a relatively longer lifespan.
Compact disks (CD), digital versatile/video disks (DVD) and Blu-ray disks are currently the
most commonly used forms of optical disks. These disks are generally used to:
• Distribute software to customers
• Store large amounts of data such as music, images and videos
• Transfer data to different computers or devices
• Back up data from a local machine

Generation of Computer Technology
A computer is an electronic device that manipulates information or data. It has the ability to
store, retrieve, and process data.
Nowadays, a computer can be used to type documents, send email, play games, and browse the
Web. It can also be used to edit or create spreadsheets, presentations, and even videos. But the
evolution of this complex system started around 1946 with the first Generation of Computer and
evolving ever since.
This is especially true for the generations that have grown up amid the global desktop and laptop revolution since the 1980s.
The history of the computer goes back several decades, however, and there are five definable generations of computers.
Each generation is defined by a significant technological development that changes
fundamentally how computers operate – leading to more compact, less expensive, but more
powerful, efficient and robust machines.
There are five generations of computers:
1940 – 1956: First Generation – Vacuum Tubes
These early computers used vacuum tubes as circuitry and magnetic drums for memory. As a result they were enormous, literally taking up entire rooms and costing a fortune to run. Vacuum tubes were inefficient components that consumed huge amounts of electricity and generated a lot of heat, which caused ongoing breakdowns.
These first generation computers relied on ‘machine language’ (the most basic programming language, which can be understood directly by computers). They were limited to solving one problem at a time. Input was based on punched cards and paper tape, and output came out on print-outs. The two notable machines of this era were the UNIVAC and ENIAC machines – the UNIVAC was the first ever commercial computer, purchased in 1951 by the US Census Bureau.
1956 – 1963: Second Generation – Transistors
The replacement of vacuum tubes by transistors saw the advent of the second generation of
computing. Although first invented in 1947, transistors weren’t used significantly in computers
until the end of the 1950s. They were a big improvement over the vacuum tube, despite still subjecting computers to damaging levels of heat; they made computers smaller, faster, cheaper and less heavy on electricity use. They still relied on punched cards for input and printouts for output.
The language evolved from cryptic binary language to symbolic (‘assembly’) languages. This
meant programmers could create instructions in words. About the same time high level
programming languages were being developed (early versions of COBOL and FORTRAN).
Transistor-driven machines were the first computers to store instructions into their memories –
moving from magnetic drum to magnetic core ‘technology’. The early versions of these
machines were developed for the atomic energy industry.
1964 – 1971: Third Generation – Integrated Circuits
By this phase, transistors were now being miniaturised and put on silicon chips (called
semiconductors). This led to a massive increase in speed and efficiency of these
machines. These were the first computers where users interacted using keyboards and monitors
which interfaced with an operating system, a significant leap up from the punch cards and
printouts. This enabled these machines to run several applications at once using a central
program which functioned to monitor memory.
As a result of these advances which again made machines cheaper and smaller, a new mass
market of users emerged during the ‘60s.
1972 – 2010: Fourth Generation – Microprocessors
This revolution can be summed in one word: Intel. The chip-maker developed the Intel 4004 chip
in 1971, which positioned all computer components (CPU, memory, input/output controls) onto
a single chip. What filled a room in the 1940s now fitted in the palm of the hand. The Intel chip housed thousands of integrated circuits. The year 1981 saw the first ever computer specifically designed for home use (the IBM PC), and 1984 saw the Macintosh introduced by Apple.
Microprocessors even moved beyond the realm of computers and into an increasing number of
everyday products.
The increased power of these small computers meant they could be linked, creating networks, which ultimately led to the development, birth and rapid evolution of the Internet. Other major advances during this period have been the graphical user interface (GUI), the mouse and, more recently, the astounding advances in laptop capability and hand-held devices.
2010 – present: Fifth Generation – Artificial Intelligence
Computer devices with artificial intelligence are still in development, but some of these
technologies are beginning to emerge and be used such as voice recognition.
AI is a reality made possible by using parallel processing and superconductors. Looking to the future, computers will be radically transformed again by quantum computation, molecular technology and nanotechnology.
The essence of the fifth generation will be using these technologies to ultimately create machines which can process and respond to natural language, and which have the capability to learn and organise themselves.

Programming Languages
Types of Computer languages
Just as humans use language to communicate, and different regions have different languages,
computers also have their own languages that are specific to them.
Different kinds of languages have been developed to perform different types of work on the
computer. Basically, languages can be divided into two categories according to how the
computer understands them.
Two Basic Types of Computer Language

1. Low-Level Languages
Low-level computer languages are either machine codes or very close to them. A computer cannot understand instructions given to it in high-level languages or in English. It can only understand and execute instructions given in the form of machine language, i.e. binary. There are two types of low-level languages:
• Machine Language: a language that is directly executed by the hardware
• Assembly Language: a slightly more user-friendly language that corresponds directly to machine language

(i) Machine Language


Machine language is the lowest and most elementary level of programming language and was the first type of programming language to be developed. Machine language is basically the only language that a computer can understand directly; for readability, people usually write it down in hexadecimal form.
In fact, a manufacturer designs a computer to obey just one language, its machine code, which is represented inside the computer by a string of binary digits (bits) 0 and 1. The symbol 0 stands for the absence of an electric pulse and 1 stands for the presence of an electric pulse. Since a computer is capable of recognizing electric signals, it understands machine language.
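A short Python sketch of this point (the value chosen is arbitrary and is not a real instruction): the same bit pattern can be written in binary, which is what the machine stores, or in hexadecimal, which is simply a more compact notation for people.

    # The same made-up 16-bit value written as binary digits and as hexadecimal
    word = 0b1010001100001111
    print(format(word, "016b"))   # 1010001100001111 - the bits the machine holds
    print(format(word, "#06x"))   # 0xa30f - the compact hex form people write
    print(word)                   # 41743 - the same value in decimal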

(ii) Assembly Language


Assembly language was developed to overcome some of the many inconveniences of machine
language. This is another low-level but very important language in which operation codes and
operands are given in the form of alphanumeric symbols instead of 0s and 1s.
These alphanumeric symbols are known as mnemonic codes and can combine into combinations of up to about five letters, e.g. ADD for addition, SUB for subtraction, START, LABEL, etc. Because of this feature, assembly language is also known as a ‘Symbolic Programming Language.’
This language is still difficult and needs a lot of practice to master, because it offers only limited English-like support. Assembly language is mostly used for low-level programming that works closely with the hardware. The instructions of an assembly language are converted to machine code by a language translator called an assembler and are then executed by the computer.
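As a toy illustration of what an assembler does, the Python sketch below maps a few hypothetical mnemonics to made-up numeric operation codes; real assemblers and real opcode tables are far more involved.

    # Toy "assembler": translate mnemonics into made-up numeric opcodes
    OPCODES = {"LOAD": 0x01, "ADD": 0x02, "SUB": 0x03, "HALT": 0xFF}  # hypothetical

    def assemble(lines):
        machine_code = []
        for line in lines:
            parts = line.split()
            operand = int(parts[1]) if len(parts) > 1 else 0
            machine_code.append((OPCODES[parts[0]], operand))
        return machine_code

    print(assemble(["LOAD 7", "ADD 5", "SUB 2", "HALT"]))
    # [(1, 7), (2, 5), (3, 2), (255, 0)]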

2. High-Level Languages
High-level computer languages use formats that are similar to English. The purpose of
developing high-level languages was to enable people to write programs easily, in their own
native language environment (English).
High-level languages are basically symbolic languages that use English words and/or
mathematical symbols rather than mnemonic codes. Each instruction in the high-level language
is translated into many machine language instructions that the computer can understand.
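Python's standard dis module can make this one-to-many translation visible: a single high-level statement expands into several lower-level bytecode instructions (bytecode for a virtual machine rather than true machine language, but the idea is the same). The exact instruction names printed vary between Python versions.

    # One high-level line expands into several lower-level instructions
    import dis

    dis.dis("total = price * quantity + tax")
    # prints a sequence of instructions such as LOAD_NAME, BINARY_OP, STORE_NAME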

Types of High-Level Languages


Many languages have been developed for achieving a variety of different tasks. Some are fairly
specialized, and others are quite general.
These languages, categorized according to their use, are:

(i) Algebraic Formula-Type Processing


These languages are oriented towards the computational procedures for solving mathematical
and statistical problems.
Examples include:
• BASIC (Beginners All Purpose Symbolic Instruction Code)
• FORTRAN (Formula Translation)
• PL/I (Programming Language One)
• ALGOL (Algorithmic Language)
• APL (A Programming Language)

(ii) Business Data Processing


These languages are best able to maintain data processing procedures and problems involved in
handling files. Some examples include:
• COBOL (Common Business Oriented Language)
• RPG (Report Program Generator)
(iii) String and List Processing
These are used for string manipulation, including search patterns and inserting and deleting
characters. Examples are:
• LISP (List Processing)
• Prolog (Programming in Logic)
(iv) Object-Oriented Programming Languages
In OOP, the computer program is divided into objects that bundle data together with the operations on that data (see the short sketch after this list). Examples are:
• C++
• Java
(v) Visual Programming Language
These programming languages are designed for building Windows-based applications. Examples
are:
• Visual Basic
• Visual Java
• Visual C
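To show what dividing a program into objects means in practice, here is a minimal object-oriented sketch written in Python (the BankAccount class and its details are invented for the example; the same idea is expressed with similar syntax in C++ or Java):

    # Minimal object-oriented sketch: data and behaviour bundled into one object
    class BankAccount:
        def __init__(self, owner, balance=0):
            self.owner = owner        # data (attributes) held inside the object
            self.balance = balance

        def deposit(self, amount):    # behaviour (methods) that acts on that data
            self.balance += amount
            return self.balance

    account = BankAccount("Asha", 100)
    print(account.deposit(50))        # prints 150; each object keeps its own state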
