
UNIVERSITY OF ABUJA

DEPARTMENT OF COMPUTER SCIENCE


FACULTY OF SCIENCE

COURSE CODE:
CSC101

COURSE TITLE:
INTRODUCTION TO COMPUTER

This lecture material is prepared to introduce and acquaint students with the basic knowledge of
computer alongside its components. In addition, the fundamentals of programming language and
specifically, BASIC programming are also highlighted. Importantly, the resource materials are
gathered from Books, Journals and Web resources.

COURSE INFORMATION

Course Code: CSC101

Course Title: Introduction to Computer

Credit Unit: 3

Course Status: Core

Semester: First

Course Duration:

Required Hours of Study:

Year 2023.

GRADING SCHEME

Class Assessment: 30%

Final Examination: 70%. This will cover every aspect of the course.

Total: 100%

COURSE DESCRIPTION

CSC101: Introduction to Computer is a three (3) credit unit course. It deals with the
introduction to computer concepts. The basic meaning, functions, characteristics, components
and applications of computers have not changed, except that different application areas may
ascribe different meanings to the term.

This lecture material will serve as an introduction to computer. It explains all concepts related
to computer in detail, from background on computers and information to their roles in society.
The idea of computer literacy is also discussed, which includes the definition and functions,
history, generations, characteristics, users/applications and social impact. Students will learn
about the components of a computer, the concept of hardware and software, the concept of data
processing and representation, and information and knowledge. The Von Neumann model of
computation and the fetch/decode/execute cycle are introduced. An overview of programming
languages and compilation, a comparison of interpreters and compilers, and the language
translation phases are also discussed. Further, an introduction to BASIC programming and the
fundamentals of programming constructs are presented.

So, it is important not only to know how to use a computer, but also to understand its
components and what they do. Students will be required to complete lab assignments using a
PC's operating system and several commonly used applications, in addition to Internet and
online resources, browsers and search engines.

LECTURE 1

COMPUTER AND INFORMATION PROCESSING: GENERAL


OVERVIEW

Objectives

After going through this topic, students would be able to:

• Understand what a computer is and what information processing is
• Describe the impact of information processing
• Understand the simple computer model
• Comprehend computer components
• Know the types and classification of computers

Background on Computer and Information Processing


Information processing, according to Britannica, refers to the acquisition, recording,
organisation, retrieval, display, and dissemination of information. In today’s settings, it usually
refers to computer-based operations. Therefore, it consists of locating and capturing
information, using software to manipulate it into a desired form, and outputting the data.
Information processing systems include business software, operating systems, computers,
networks and mainframes. Whenever data needs to be transferred or operated upon in some
way, this is referred to as information processing. For example, in printing a text file, an
information processor works to translate and format the digital information for printed form.

What is a computer?
The word “computer” was used long before the modern definition to mean “a person who
computes.” This sense of the word persisted until the 20th century, when computer came to mean
“a programmable electronic device that can store, retrieve, and process data,” as defined in
Webster’s Dictionary. Therefore, a computer has come to be synonymous
with a device that “computes.” With the ability to perform a multitude of tasks, a computer is
regarded as a general (or multi) purpose machine. Computing here includes mathematical
computing as well as logical-based tasks. Generally, a computer is an electronic device,
operating under the control of instructions stored in its own memory that can accept data
(input), process the data according to specified rules, produce information (output), and store
the information for future use. As the modern definition suggests, a computer must be capable
of retrieving, processing, and storing data or information.

To perform a task, we must identify the data/information associated with the task and provide
clear guidelines, i.e., a proper set of instructions for the task to be solved on the computer. This
set of steps or instructions, written in simple understandable language, is called an algorithm.
More specifically, an algorithm is a sequence of instructions needed to perform a task.

Information Processing
Information processing is the manipulation of data to produce useful information; it involves
the capture of data in a format that is retrievable and analysable. Processing information
involves taking raw data and making it more useful by putting it into context. In general,
information processing means processing new data, which includes a number of steps:
acquiring, inputting, validating, manipulating, storing, outputting, communicating, retrieving,
and disposing. The future accessing and updating of files involves one or more of these steps.
Information processing provides individuals with basic skills to use the computer to process
many types of information effectively and efficiently. The term has often been associated
specifically with computer-based operations.
A computer is an information processing machine. Computers process data to produce
information. A computer accepts data and instructions, and executes the instructions on the
data to produce results or perform actions as output. The set of data and instructions provided
by the user is called input, and the result obtained after the computer has processed the input is
output. Typically, the process can be repeated using the same instructions but different data.
This is only possible if the instructions are converted into a machine-readable format called a
program and stored in the computer (this is also called the stored-program concept). As
indicated in the figure below, the computer must remember the data in order to execute the
program. The computer accepts input from the user and, based on the instructions, processes
the data and produces output.

Figure 1.1: computer information processing


A computer information processor processes information to produce understandable results.
The processing may include the acquisition, recording, assembly, retrieval or dissemination of
information.

Impact of information processing in Brief


Information processing has had an enormous impact on modern society. The marketplace has
become increasingly complex with the escalating availability of data and information.
Individuals need a sound understanding of how to create, access, use, and manage information,
which is essential in the work environment. People need to understand the interrelationship
among individuals, the business world nationally and internationally, and government to
constructively participate as both consumers and producers. These general competencies must
be coupled with those that lead to employment in business as well as advanced business studies.

Simple Computer Model

A simple model of a typical computer is shown in figure 1.2. The input unit provides a
mechanism for a computer to accept data and instructions from the users. Typical input devices
are the mouse and keyboard. The input data and instructions are stored in the memory of the
computer before they are processed in the processing unit (also called the processor). Results
are presented to the user via a mechanism called the output unit. Typical output devices are the
monitor and printer. The input unit, processor and output unit constitute the basic components
of a computer.

The input unit provides a means of entering data and instructions into the computer. Other input
devices, in addition to the keyboard and mouse mentioned earlier, are joysticks, touchscreens,
pen devices, character recognition, voice recognition, barcode readers, universal serial bus
(USB) drives, hard disks (HDs) and compact discs (CDs). These devices take data and
instructions in a variety of forms and send them to the computer memory. For example, the
keyboard accepts alphanumeric characters and passes them to a stored program that processes
them into machine-understandable codes (such as ASCII codes; ASCII is an acronym for
American Standard Code for Information Interchange). A mouse controls the motion of a
pointer in two dimensions in a graphical environment called a graphical user interface (GUI).
The mouse converts hand movements (backward and forward, left and right) into electronic
signals that are used to move the pointer. With these movements, graphical objects are selected
and sent to a stored program that processes them into machine-understandable formats.
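As a small illustration of this character-to-code translation, the following Python sketch (illustrative only, not part of the original notes) maps a few keyboard characters to their ASCII code values and back:

# Minimal sketch: mapping characters to ASCII codes and back.
# ord() returns the numeric code of a character; chr() does the reverse.
for ch in "Az9?":
    code = ord(ch)                      # e.g. 'A' -> 65
    print(f"'{ch}' -> ASCII {code} -> binary {code:08b}")
print(chr(97))                          # 97 -> 'a'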

Figure 1.2: Simple computer model

The computer memory unit stores data and instructions obtained from the input unit as well as
processed results for future use. Computer memory retains data and instructions for a short
duration or for a long time. A computer memory that is capable of retaining information for only
a short duration, while work is still in progress and power supply is ensured, is called volatile
memory. This forms the primary storage of a computer; hence it is called primary memory or
simply memory. It is also called main memory or temporary memory. The content of the main
memory changes depending on the instructions being processed by the computer, and the latest
content remains in the memory until the power supply is switched off. When the computer is
switched off or reset, the content in the memory is lost. To preserve content for a long time,
secondary or auxiliary storage is used. Unlike the primary memory, the secondary storage
memory is non-volatile, slower, less expensive, and has the capacity to store large amounts of
data. Examples of secondary storage devices are hard disks, USB flash drives, compact discs
(CDs), and digital versatile/video discs (DVDs). Aside from the primary memory, a faster
memory module, called cache memory, is used as a bridge between the primary memory and the
processing unit.

The processing unit, also called the central processing unit (CPU), is regarded as the brain of the
computer. The CPU accepts data and instructions from the primary memory, executes
instructions and produces results which are either preserved in the primary memory or stored
in the secondary storage memory for future use. The unit that executes instructions involving
arithmetic and logical calculations is called the Arithmetic and Logic Unit (ALU), and all other
instructions involving the control operations of computer components are executed by the
Control Unit (CU). The ALU and CU together form the CPU.

The output unit is a mechanism for displaying results of the data processed by the processing
unit. The most commonly used output device is the monitor. A monitor is a display screen used
for visual presentation of results. Other output devices are speakers and headphones for
sound-oriented results and printers for hard-copy presentation of results. Results can also be
stored on a secondary storage device for future use.

Functions of Computers

The following functions are performed by a computer:

• Receiving input: Data is fed into the computer through various input devices like the
keyboard, mouse, digital pens, etc. Input can also be fed through devices like CD-ROMs,
pen drives, scanners, etc.
• Processing the information: Operations on the input data are carried out based on the
instructions provided in the programs.
• Storing the information: After processing, the information gets stored in the primary
or secondary storage area.
• Producing output: The processed information and other details are communicated to
the outside world through output devices like the monitor, printer, etc.

Characteristics of a Computer

The following are the characteristics of a computer:

Figure 1.3: characteristics of a computer

Speed: Speed is one of the main characteristics of a computer. Computers provide very high
speed accompanied by an equally high level of reliability; thus, computers never make
mistakes of their own accord. A computer can perform billions of calculations in a second. The
time taken by computers for their operations is measured in microseconds and nanoseconds.

Accuracy: Computers can perform operations and process data quickly and with accuracy.
Results can be wrong only if incorrect data is fed to the computer or a bug is the cause of the
error (Garbage In, Garbage Out – GIGO).

Diligence: A computer can perform millions of tasks or calculations with the same consistency
and accuracy. It does not feel any fatigue or lack of concentration. Its memory also makes it
superior to human beings in this respect.

Versatility: Versatility refers to the capability of a computer to perform different kinds of
work with the same accuracy and efficiency. Computers are used in various fields and for a
variety of activities and tasks.

Reliability: A computer is reliable as it gives consistent results for the same set of data, i.e., if
we give the same input any number of times, we will get the same result.

Automation: A computer performs all tasks automatically, i.e., it performs tasks without
manual intervention.

Memory/Storage: A computer has built-in memory called primary memory where it stores
data. Secondary storage devices are removable devices such as CDs, pen drives, etc., which are
also used to store data.

Multitasking: Computers can perform several tasks at the same time.

Communications: Computers have the ability to communicate using some sort of connection
(either Wired or Wireless connection). Two computers can be connected to share data and
information and collaboratively complete assigned tasks.

Based on the characteristics described above, the following are the advantages of a computer.

1. Computers make it possible to receive, supply and process large volumes of data at
very high speed.
2. Computers reduce the cost of all data-related operations including input, output,
storage, processing, and transmission.
3. Computers ensure consistent and error-free processing of data.
4. Digitization of all kinds of information, including sounds and images, combined with
the massive information processing capabilities of the computer, has resulted in the
development of applications that produce physical products of very high quality at great
speed and very economically.
5. Computers have enabled the development of many real-time applications requiring
speedy, continuous monitoring.

Disadvantages of computers include:

1. A computer is highly dependent on the quality of the input data fed to it.
2. The task of programming a computer for an application is very costly and time
consuming.
3. Computer systems are rather rigid. Once a computer system is designed and
programmed, making even minor corrections or improvements can be quite costly and
time consuming.
4. Computers require the use of sophisticated equipment and support facilities.

Components of a Computer

So far, we have described a simple computer model consisting of input, processing and output
units, and primary and secondary memory. We indicated that data and instructions are entered
via the input unit into the memory for processing by the CPU, and the results are displayed or
sent to the output unit. This description is simply an abstraction, a conceptual framework for all
computers. To bring this abstract model to reality, a computer needs software (the logical
aspect) and hardware (the physical aspect). Software and hardware form the two components
of a computer.

Software: This is the intangible part of a computer. It consists of programs and operating
information (data and documentation) used to direct the operation of a computer, without which
the computer would have no use. Software is often divided into two categories. Systems
software includes the operating system and all the utilities that enable the computer to function.
Applications software includes programs that do real work for users.

Hardware: This is the tangible or physical part of a computer. The devices that make up the
various units in the simple computer model form the hardware. The hardware of a computer is
a comprehensive term used to describe the physical parts of a computer, including the processor,
motherboard or system board, keyboard, mouse, monitor and other peripheral devices.

Firmware: This is software that is integrated into the hardware. Firmware is a type of
software that provides control, monitoring and data manipulation of engineered products and
systems. Typical examples of devices containing firmware are embedded systems (such as
traffic lights, consumer appliances, and digital watches), computers, computer peripherals,
mobile phones, and digital cameras. The firmware contained in these devices provides the
low-level control program for the device.

Types of Computers

There are three different types of computers according to the principles of operation, namely,
analog, digital and hybrid computers.

Analog Computers: An analog computer is a computing device that works on a continuous
range of values. The results given by analog computers will only be approximate, since they
deal with quantities that vary continuously. They generally deal with physical variables such as
voltage, pressure, temperature, speed, etc.

Digital Computers: A digital computer operates on digital data such as numbers. It uses the
binary number system, in which there are only two digits, 0 and 1. Each one is called a bit. The
digital computer is designed using digital circuits in which there are two levels for an input or
output signal. These two levels are known as logic 0 and logic 1. Digital computers can give
more accurate and faster results. A digital computer is well suited for solving complex problems
in engineering and technology. Hence digital computers have an increasing use in the fields of
design, research and data processing.

Digital computers can be further classified as,

• General Purpose Computers


• Special Purpose Computers

Special purpose computers: A special purpose computer is one that is built for a specific application.

General purpose computers: These are used for any type of application. They can store different
programs and do jobs as per the instructions specified in those programs. Most of the
computers that we see today are general purpose computers.

Hybrid Computers

A hybrid computer combines the desirable features of analog and digital computers. It is mostly
used for the automatic operation of complicated physical processes and machines. Nowadays,
analog-to-digital and digital-to-analog converters are used for transforming the data into a form
suitable for either type of computation.

For example, in a hospital’s ICU, analog devices might measure the patient’s temperature, blood
pressure and other vital signs. These analog measurements might then be converted into
numbers and supplied to digital components in the system. These components are used to
monitor the patient’s vital signs and send signals if any abnormal readings are detected. Hybrid
computers are mainly used for specialized tasks.

Classification of Digital Computers Based on Size and Performance

There are four different types of computers when we classify them based on their performance
and capacity, namely, supercomputers, mainframe computers, minicomputers and
microcomputers, see figure below.

Figure 1.4: classification of digital computers based on size

Supercomputers

Supercomputers are known for high processing capacity and are generally the most expensive.
These computers can process billions of instructions per second. Normally, they will be used
for applications which require intensive numerical computations such as stock analysis,
weather forecasting etc. Other uses of supercomputers are scientific simulations, (animated)
graphics, fluid dynamic calculations, nuclear energy research, electronic design, and analysis
of geological data (e.g. in petrochemical prospecting).

Mainframe Computers

These are computers used primarily by large organizations for critical applications, bulk data
processing such as census, industry and consumer statistics, enterprise resource planning and
transaction processing. Mainframe computers can also process data at very high speeds, i.e.,
hundreds of millions of instructions per second, and they are also quite expensive. Normally,
they are used in banking, airlines and railways, and in many other scientific applications.

Minicomputers

Minicomputers are inferior to mainframe computers in terms of speed and storage capacity.
They are also less expensive than mainframe computers. Some of the features of mainframes
are not available in minicomputers; hence, their performance is also lower than that of
mainframes.

Micro Computers

The invention of the microprocessor (a single-chip CPU) gave birth to the much cheaper
microcomputers. They are further classified into desktop computers, laptop computers and
handheld computers (PDAs).

Desktop Computers

Today, desktop computers are the most popular computer systems. These desktop
computers are also known as personal computers or simply PCs. They are usually easier to use
and more affordable. They are normally intended for individual users for their word processing
and other small application requirements.

Laptop Computers

Laptop computers are portable computers. They are lightweight computers with a thin screen.
They are also called notebook computers because of their small size. They can operate on
batteries and hence are very popular with travellers. The screen folds down onto the keyboard
when not in use.

Handheld Computers

Handheld computers or Personal Digital Assistants (PDAs) are pen-based and also battery-
powered. They are small and can be carried anywhere. They use a pen-like stylus and accept
handwritten input directly on the screen. They are not as powerful as desktops or laptops, but
they are used for scheduling appointments, storing addresses and playing games. They have
touch screens which are used with a finger or a stylus.

Practice Questions

1. What is the modern definition of a computer?


2. Define the following
a. an algorithm
b. an input
c. an output
d. a process
3. Use a diagram to describe the relationship among input, output and process.
4. Draw a simple computer model and describe the various components of a computer.
5. What are the differences between main memory and secondary storage memory?
6. List some examples of input and output devices
7. List some examples of secondary storage memory devices
8. Describe the CPU, ALU and CU.
9. Explain the various types of computers.
10. State and describe 4 characteristics of a computer
11. State and describe all four classes of digital computers
12. Describe 3 advantages of a computer
13. What are the differences between desktop and laptop computers?
14. Give 4 applications of a computer and describe each of them.

References
Tutorialspoint Computer Concepts - Introduction to Computer.
https://www.tutorialspoint.com/computer_concepts/computer_concepts_introduction_to_comput
er.htm

Firmware: https://en.wikipedia.org/wiki/Firmware


Types of Computers: http://www.computerbasicsguide.com/basics/types.html
Advantages of Computers: http://www.enotes.com/homework-help/what-advantage-computer-
126225
Classification of computers: http://www.computerbasicsguide.com/basics/types.html
Figures and Simple Computer Model: http://www.gujarat-education.gov.in/

LECTURE 2
HISTORY OF COMPUTER
Description

Milestones in computer history are numerous, especially since the advent of digital computers
around the mid-twentieth century. The history of computing can fill a semester-long university
course, and therefore its coverage in this lecture will be far from exhaustive. This lecture will
introduce students to the basic history and evolution of computers, covering the pre-mechanical,
mechanical and digital eras.

Objectives

After completing this lecture, students will learn

• the different eras of the history of computing
• the generations of computing hardware and software
• prominent inventors of computing hardware and software
• computing hardware invented in the pre-mechanical, mechanical and digital eras

History of Computers

The history of the computer dates back many years. The literature is full of many
different written versions of the history of computing, starting from the ancient era. However,
a deeper examination reveals some commonalities and, in this lecture, the focus will be on
those common viewpoints. The history will cover the pre-mechanical era (ancient era), the
mechanical era, the electro-mechanical era and the modern era.

Ancient Era

Devices have been used to aid computation for thousands of years, mostly using one-to-one
correspondence with fingers. The earliest counting device was probably a form of tally stick.
Another was the use of counting rods.

The earliest known device used for arithmetic tasks was the abacus, which has beads strung on
wires attached to a frame. What we now call the Roman abacus was used in Babylonia as early
as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented.

Mechanical Era

In 1642 Blaise Pascal (a famous French mathematician) invented an adding machine based on
mechanical gears in which numbers were represented by the cogs on the wheels.

In 1690, Leibnitz developed a machine that could perform additions, subtractions, divisions
and square roots. These instructions were hardcoded and could not be altered once written.

In 1822, an Englishman, Charles Babbage invented a machine called “Difference Engine,” that
would perform calculations without human intervention. In 1833, he developed the “Analytical
Engine” along with his associate Augusta Ada Byron (later Countess of Lovelace) who assisted
with the programming. His design contained the basic units of a modern computer: input,
output and processing units, memory and storage devices. Hence, he is regarded as the “father
of the modern-day computers”.

Electro-Mechanical Era

An American, Herman Hollerith, developed (around 1890) the first electrically driven device.
It utilized punched cards and metal rods which passed through the holes to close an electrical
circuit and thus cause a counter to advance. This machine was able to complete the calculation
of the 1890 U.S. census in 6 weeks compared with 7 1/2 years for the 1880 census which was
manually counted.

In 1936 Howard Aiken of Harvard University convinced Thomas Watson of IBM to invest $1
million in the development of an electromechanical version of Babbage's analytical engine.
The Harvard Mark 1 was completed in 1944 and was 8 feet high and 55 feet long.

At about the same time (the late 1930's) John Atanasoff of Iowa State University and his
assistant Clifford Berry built the first digital computer that worked electronically, the ABC
(Atanasoff-Berry Computer). This machine was basically a small calculator.

In 1943, as part of the British war effort, a series of vacuum tube-based computers (named
Colossus) were developed to crack German secret codes. The Colossus Mark 2 series consisted
of 2400 vacuum tubes.

John Mauchly and J. Presper Eckert of the University of Pennsylvania developed these ideas
further by proposing a huge machine consisting of 18,000 vacuum tubes. ENIAC (Electronic
Numerical Integrator and Computer) was born in 1946. It was a huge machine with a huge
power requirement and two major disadvantages. Maintenance was extremely difficult as the
tubes broke down regularly and had to be replaced, and there was also a big problem with
overheating. The most important limitation, however, was that every time a new task needed
to be performed, the machine had to be rewired.

In the late 1940's John von Neumann (at the time a special consultant to the ENIAC team)
developed the EDVAC (Electronic Discrete Variable Automatic Computer) which pioneered
the "stored program concept". This allowed programs to be read into the computer and so gave
birth to the age of general-purpose computers.

Modern Era

In the modern era, computers are classified into a number of generations. There are five
prominent generations of computers. Each generation has witnessed several technological
advances which change the functionality of the computers. This results in more compact,
powerful, robust systems which are less expensive. The classification can be based on the
hardware technology used in building the computer or based on applications/software used.

Note that the literature does not present consistent generational periods, and therefore the
periods indicated here may differ from those found elsewhere in the literature.

First-Generation Computers (1943-1958)

The first generation of computers started with the ENIAC described above. This was followed
by the UNIVAC (Universal Automatic Computer), designed and built by Mauchly and Eckert in
1951.

The first-generation computers used vacuum tubes, which were very large, required a lot of
energy and were slow in input and output processing. They also suffered from heat and
maintenance problems. The vacuum tubes had to be replaced often because of their short life
span. See the figures below for the ENIAC machine and vacuum tubes.

Figure 2.1: ENIAC and vacuum tubes

Characteristics of 1st Generation of Computers

▪ Vacuum tube circuits.
▪ Continuous maintenance required.
▪ Punched card and paper tape utilized as secondary storage.
▪ Drum primary storage.
▪ Machine and symbolic language programming.
▪ Generated considerable heat.
▪ Poor reliability.
▪ Limited internal storage capacity.
▪ First commercial computers to be used.
▪ Slow input/output operations.
▪ Computers programmed in machine language.
Second Generation Computers (1959-1964)

In the mid-1950's Bell Labs developed the transistor. Transistors were capable of performing
many of the same tasks as vacuum tubes but were only a fraction of the size. The first transistor-
based computer was produced in 1959. Transistors were not only smaller, enabling computer
size to be reduced, but they were faster, more reliable and consumed less electricity. See figure
below for sample transistors.

Figure 2.2: Transistors

The computers were able to perform operations comparatively faster. The storage capacity was
also improved. The IBM 650, 700, 305 RAMAC, 1401, and 1620 computers were
manufactured and distributed during this period. It was also during this period that assembly
and symbolic programming languages and high-level computer programming languages, such
as FORTRAN (Formula Translation), COBOL (Common Business Oriented Language) and
BASIC (Beginner’s All-purpose Symbolic Instruction Code), were launched. Computer
programs written in these English-like high-level programming languages are translated to
machine code using interpreters, compilers or translators.

Characteristics of 2nd Generation of Computers

▪ Transistor circuits were used as the core element.
▪ Magnetic core primary storage.
▪ Secondary storage on tapes.
▪ Greater reliability and speed.
▪ Reduced heat generation.
▪ Smaller and more reliable.
▪ Faster than 1st generation computers.
▪ Required less power to operate.
▪ High-level procedural languages FORTRAN and COBOL were utilized.
▪ Computers programmed in high-level languages.

Third-Generation Computers (1965-1970)

This period marked the development of computers based on Integrated Circuits (ICs)
instead of transistors. An integrated circuit or monolithic integrated circuit (also referred to
as an IC, a chip, or a microchip) is a set of electronic circuits on one small plate ("chip") of
semiconductor material, normally silicon. A silicon chip occupies less than one-eighth of
a square inch, on which many electronic components like diodes, transistors, and
capacitors can be fixed. See the figure below for an illustration of an integrated circuit on a
chip. Third generation computers are smaller, faster and more flexible in terms of input and
output than second generation computers. Third generation computers satisfied the needs of
small businesses and became popular as minicomputers. The IBM 360, PDP 8 and PDP 11
computers are examples of third generation computers. Another feature of this period is
that computer software became much more powerful and flexible, and for the first time
more than one program could share the computer's resources at the same time (multi-
tasking). The majority of programming languages used today are often referred to as 3GLs
(3rd generation languages) even though some of them originated during the 2nd generation.

Figure 2.3: Integrated circuit

Characteristics of 3rd Generation of Computers

▪ Transistors were replaced by integrated circuits.
▪ Increased speed and reliability.
▪ Development of minicomputers.
▪ On-line, real-time processing.
▪ Multiprogramming operating systems were introduced.
▪ Faster than the previous generation.
▪ Improved input and output devices.

Fourth Generation Computers (1971-1989)

The third-generation computers have integrated circuits consisting of anywhere from 1 to 500
transistors and were considered small-scale integration (1 to 10 transistors) to medium-scale
integration (10 to 500 transistors). In 1970 large-scale integration was achieved where the
equivalent of thousands of integrated circuits was crammed onto a single silicon chip. This
development again increased computer performance (especially reliability and speed) whilst
reducing computer size and cost. Around this time the first complete general-purpose
microprocessor became available on a single chip (giving birth to microcomputers, also called
personal computers). In 1975 Very Large-Scale Integration (VLSI) took the process one step
further. The development started with hundreds of thousands of transistors in the early 1980s,
and continues beyond several billion transistors as of 2009. Examples of fourth generation
computers are IBM PC and Apple II. The fourth-generation computers also include
supercomputers such as the CRAY series of computers. Supercomputers are the best in terms of
processing capacity and cost. These computers can process billions of instructions per second.
They are used for applications which require intensive numerical computations such as stock
analysis, weather forecasting and other similar complex applications. The spread of computer
network was also observed during this period.

During this period Fourth Generation Languages (4GL's) have come into existence. Such
languages are a step further removed from the computer hardware in that they use language
much like natural language. Many database languages can be described as 4GL's. They are
generally much easier to learn than 3GLs. Microsoft developed the MS-DOS operating system
for IBM PCs. In 1980, Alan Shugart presented the Winchester hard drive, revolutionizing
storage for PCs. In 1982, Hayes introduced the 300 bits per second smart modem. In 1989, Tim
Berners-Lee invented an Internet-based hypermedia enterprise for information sharing, giving
birth to the World Wide Web (WWW). In the same year, the Intel 486 became the world’s first
1,000,000-transistor microprocessor. It crammed 1.2 million transistors onto a .4 in by .6 in
sliver of silicon and executed 15 million instructions per second, four times as fast as its
predecessor, the 80386 chip, which has 275,000 transistors.

Figure 2.4: Intel 80386, Intel 8086 Microprocessor Chip, Intel 80486

Characteristics of 4th Generation of Computers

▪ The 4th generation computers used LSI and VLSI technology.
▪ Dramatic fall in hardware costs.
▪ Semiconductor primary storage was used.
▪ Development of the microcomputer or personal computer.
▪ Increased costs of software.
▪ Advancement of electronic spreadsheets and database management systems.
▪ Compact in size but with faster processing speeds.
▪ Microprocessors were used.

Fifth Generation Computers (1990-Present)

Fifth generation computers are further made smarter in terms of processing speed, user
friendliness and connectivity to network. These computers are portable and sophisticated.
Powerful desktops, notebooks, variety of storage mechanism such as optical disks and
advanced software technology such as distributed operating system and artificial intelligence
are characteristic of this period. IBM notebooks, Pentium PCs and PARAM 10000 are
examples of fifth generation computers.

In 1992, Microsoft released Windows 3.1 and within two months sold over 3 million copies.
The Pentium processor, a successor of the Intel 486, was produced in 1993. The Pentium
processor contains 3.1 million transistors and could perform 112 million instructions per
second. Microsoft released Microsoft Office that same year. Other inventions (not exhaustive)
are listed below.

1994 – Jim Clark and Marc Andreessen founded Netscape and launched Netscape Navigator
1.0, a browser for the World Wide Web.

1996 – U.S. Robotics introduced the PalmPilot, a low-cost, user-friendly personal digital
assistant (PDA).

1997 – The Pentium II processor with 7.5 million transistors was introduced by Intel. This
processor incorporates MMX technology, processes video, audio and graphics data more
efficiently and supports applications such as movie editing, gaming and more. In the same year,
Microsoft released Internet Explorer 4.0.

1998 – Apple Computer released the iMac, the next version of the Macintosh computer. The
iMac did not feature a floppy disk drive. Windows 98 was introduced this year, which was an
extension of Windows 95 with improved Internet access, system performance and support for
a new generation of hardware and software. Google, a search engine, was founded.

2001 – Intel unveiled the Pentium 4 chip with clock speeds starting at 1.4 GHz and with 42
million transistors. Windows XP for desktops and servers was introduced.

2002 – Intel revamped the Pentium 4 chip with a 0.13 micron process and Hyper-Threading
(HT) Technology, operating at a speed of 3.06 GHz. DVD writers were introduced to replace
CD writers (CD-RW).

2004 – Flat panel LCD monitors were introduced, replacing the bulky CRT monitors as the
popular choice. The USB flash drive was also made popular this year as a cost-effective way to
transport data and information. Apple Computer introduced the sleek iMac G5. The smart phone
overtook the PDA as the personal mobile device of choice. A smart phone offers the user a
cell phone, full personal information management, a Web browser, e-mail functionality, instant
messaging and the ability to listen to music, watch and record video, play games and take
pictures. In 2005, Microsoft released the Xbox 360, a game console with the capability to play
music, display photos, and network with computers and with other Xbox 360 consoles.

2006 – Intel introduced the Core 2 Duo processor family with 291 million transistors, using 40
percent less power than the Pentium processor. IBM produced the fastest supercomputer, called
Blue Gene/L, which can perform 28 trillion calculations in the time it takes to blink an eye, or
about one-tenth of a second. Sony launched the PlayStation 3, which includes a Blu-ray disc
player, high definition capabilities and always-on online connectivity.

2007 – Intel introduced the Core 2 Quad, a four-core processor made for dual-processor servers
and desktop computers. Apple launched the iPhone.

An effective 5th generation computer would be a highly complex and intelligent electronic
device, conceived with the idea of intelligence, without going through the various stages of
technical development. This idea of intelligence is called artificial intelligence or AI. The
emphasis is now shifting from developing reliable, faster, and smaller but dumb machines to
more intelligent machines.

Practice Problems

1. Write a short note on the history of computers. Explain why Charles Babbage is known
as the father of the modern-day computers
2. What are the characteristics of first-generation computers? Discuss their major
drawbacks.
3. What are the characteristics of second-generation computers? Discuss their major
drawbacks.
4. What distinguishes fourth generation computers from third generation computers?
5. What are the full meanings of these acronyms: ENIAC, UNIVAC, EDVAC, PDA,
FORTRAN, COBOL and BASIC?
6. Who were the major inventors in the electro-mechanical era? Write short notes about
their inventions.
7. What distinguishes Pentium 4 from Pentium II chip?
8. What distinguishes smart phones from PDAs? List all the capabilities of a smart phone.
9. Describe the invention of Herman Hollerith.
10. Describe the invention of Atanasoff and Berry.
11. Describe the invention of Mauchly and Eckert.
12. What is the speed capability of Blue Gene/L developed by IBM in 2006?

References

1. Figures, charts and computer generations – http://www.gujarat-education.gov.in/


2. Computer Generations and Inter Processors - https://en.wikipedia.org/wiki/Intel_80486
3. A short history of computers and computing by Robert Mannell -
http://clas.mq.edu.au/speech/synthesis/history_computers/
4. G. Shelly and M. Vermaat, 2008, Discovering Computers 2009, Course Technology , Cengage
Learning.

LECTURE 3

DATA AND INFORMATION

Description

Data, information and knowledge are very integral to computers. An understanding of data and
information is imperative to understanding computers and therefore, in this lecture, data,
information and knowledge will be presented.

Objectives

At the end of this lecture, students will learn

• the definitions of data and information
• how to distinguish between data, information and knowledge
• types of data: text, audio and video
• data representation: decimal, binary and hexadecimal
• conversion between the various data representations

Introduction

Today, everyone talks about living in the information age. Just as the development of industry
created the industrial age, the development of information technology systems, and particularly
the Internet, has created the information age. The belief is that knowledge is power, that
knowledge stems from an understanding of information, and that information, in turn, is
meaning given to data. Let us begin by defining these three related concepts.

Figure 3.1: Related concept

Definition of Data, Information and Knowledge

Data and information have different meanings to different people and disciplines. An attempt
will be made to distinguish data from information in the context of computing. What is the
meaning of data as it relates to the computer? Is it possible to differentiate data from information
in the computing community? Data drives computers, and information is the result one obtains
from the computers after the data has been processed. Now, it is tempting to say that data and
information are synonymous, and that would almost be true. Perhaps the definitions ascribed to
data, information and knowledge in the existing literature will elucidate their differences.
However, it must be stressed that these differences are subtle and clearly subjective.

• Data

The concept of data as used in the course outline is commonly referred to as ‘raw’ data – a
collection of text, numbers and symbols with no meaning. Thus, data is unprocessed facts and
figures, which at a glance do not have any meaningful interpretation or analysis. For example,
the grade Adamu received in CSC 101 is B. Another example would be the price (N500 million)
of a particular real estate property in the Maitama neighbourhood in Abuja, or the age (68 years)
of Yusuf Maitama. Data therefore has to be processed, or provided with a context, before it can
have meaning, as seen in figure 3.2.

More examples

• 3, 6, 9, 12
• cat, dog, gerbil, rabbit
• 161.2, 175.3, 166.4, 164.7, 169.3
These examples are meaningless sets of data. They could be the first four answers in the 3 times
table, a list of household pets and the heights of 15-year-old students, but without a context this
is not known.

Data Processing & Data Processing Stages

• Data processing

Data processing is the process of converting raw facts or data into meaningful information. In
other words, it is the re-structuring or re-ordering of data by human or machine to increase their
usefulness and add value for a particular purpose. Data processing consists of the following
basic steps – input, processing, and output, as seen in figure 3.2. These three steps constitute the
data processing cycle.

Figure 3.2: Data processing

Input: In this step, the input data is prepared in some convenient form for processing. The
form will depend on the processing machine. For example, when electronic computers are used,
the input data can be recorded on any one of several types of input media, such as magnetic
disks, tapes, and so on.
Processing: In this step, the input data is changed to produce data in a more useful form. For
example, pay-checks can be calculated from the time cards, or a summary of sales for the month
can be calculated from the sales orders.

Output: At this stage, the result of the preceding processing step is collected. The particular
form of the output data depends on the use of the data. For example, output data may be pay-
checks for employees.
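To make the input-processing-output cycle concrete, here is a minimal Python sketch of the pay-check example mentioned above; the field names and the hourly rate are illustrative assumptions, not figures from the text.

# Hypothetical illustration of the data processing cycle:
# input (time cards) -> processing (compute pay) -> output (pay-checks).
time_cards = [                      # input: raw data
    {"employee": "Adamu", "hours": 40},
    {"employee": "Yusuf", "hours": 35},
]
HOURLY_RATE = 1500                  # assumed rate, for illustration only

pay_checks = [                      # processing: transform data into a more useful form
    {"employee": card["employee"], "pay": card["hours"] * HOURLY_RATE}
    for card in time_cards
]

for check in pay_checks:            # output: present the results
    print(f"{check['employee']}: N{check['pay']}")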

Stages of Data Processing

Data processing consists of the following six stages.

Figure 3.3: Stages of data processing1

Collection

Collection of data refers to gathering of data. The data gathered should be defined and accurate.

Preparation

Preparation is the process of constructing a dataset from data from different sources for future
use in the processing step of the cycle.

Input

Input refers to the supply of data for processing. It can be fed into the computer through any of
the input devices, like a keyboard, scanner, mouse, etc.

1. Tutorials point: https://www.tutorialspoint.com/computer_concepts/computer_concepts_introduction_to_computer.htm

Processing

Processing refers to the actual execution of instructions. In this stage, raw facts or data are
converted into meaningful information.

Output and Interpretation

In this stage, the output is displayed to the user in the form of text, audio, video, etc.
Interpretation of the output provides meaningful information to the user.

Storage

In this stage, data, instructions and information are stored in permanent memory for future
reference.

Information

It is important that students learn what ‘information’ means as used in information
technology. Information is the result of processing data. It represents data that have been
processed and interpreted so that they have meaning for the consumer. For example, the grade
that Adamu received in CSC 101 represents “very good.” Similarly, one might say that the
real estate property in the Maitama neighbourhood is “average” and the age of Yusuf Maitama
makes him a “senior citizen”.

In mathematical form;

Data + Meaning = Information.

More examples:

Looking at the examples given for data:

• 3, 6, 9, 12
• cat, dog, gerbil, rabbit
• 161.2, 175.3, 166.4, 164.7, 169.3
Only when we assign a context or meaning does the data become information. It all becomes
meaningful when we are told:

• 3, 6, 9 and 12 are the first four answers in the 3x table


• cat, dog, gerbil, rabbit is a list of household pets
• 161.2, 175.3, 166.4, 164.7, 169.3 are the heights of 15-year-old students.
These results are facts, which enable the processed data to be used in context and have meaning.
Information is data that has meaning.

Knowledge

When someone memorises information, this is often referred to as ‘rote learning’ or ‘learning
by heart’. It can then be said that knowledge has been acquired. Another form of knowledge is
produced as a result of understanding information that has been given to us, and using that
information to gain knowledge of how to solve problems.

Knowledge can therefore be:

• acquiring and remembering a set of facts, or


• the use of information to solve problems.
The first type is often called explicit knowledge. This is knowledge that can be easily passed
on to others. Most forms of explicit knowledge can be stored in certain media. The information
contained in encyclopaedias and textbooks is a good example of explicit knowledge.

The second type is called tacit knowledge. It is the kind of knowledge that is difficult to pass
on to another person just by writing it down. For example, saying that Abuja is the capital of
Nigeria is explicit knowledge that can be written down, passed on, and understood by someone
else. However, the ability to speak a foreign language (some students can speak Japanese,
Arabic ‘GST courses’), bake bread, program a computer or use complicated machinery requires
additional pieces of knowledge (such as that gained through experience) that are not always
known explicitly and are difficult to pass on to other users.

Knowledge combines information, experience and insight that may be beneficial to the
individual or organization. Using the examples above, the fact that Adamu scored a “B” in CSC
101 means that he is likely to successfully complete CSC 201 with a grade of “C” or higher;
the price of the real estate property makes it likely to be resalable and the age of Yusuf Maitama
makes him a candidate for senior citizen benefits available in Nigeria (give examples of such
benefits). Thus, knowledge means the familiarity and awareness of a person, place, events,
ideas, issues, ways of doing things or anything else, which is gathered through learning,
perceiving or discovering. It is the state of knowing something with cognizance through the
understanding of concepts, study and experience.

How Data, Information, and Knowledge are related

Consider putting knowledge in equation form, thus

Information + application or use = Knowledge

Example: Looking again at the examples given for data:

• 3, 6, 9, 12
• cat, dog, gerbil, rabbit,
• 161.2, 175.3, 166.4, 164.7, 169.3
Only when we assign a context or meaning does the data become information. It all becomes
meaningful when we are told:

• 3, 6, 9 and 12 are the first four answers in the 3 x table


• cat, dog, gerbil, rabbit, is a list of household pets
• 161.2, 175.3, 166.4, 164.7, 169.3 are the heights of the five tallest 15-year-old students
in a class.
If we now apply this information to gain further Knowledge, we could say that:
• 4, 8, 12 and 16 are the first four answers in the 4 x table (because the 3 x table starts at
three and goes up in threes the 4 x table must start at four and go up in fours)
• The tallest student is 175.3cm.
• A lion is not a household pet as it is not in the list and it lives in the wild

Figure 3.4 Data, Information and Knowledge connection. (Source:2)

Figure 3.5: The flow from data to Information to Knowledge (Source:3)

2. https://www.analytixlabs.co.in/blog/difference-between-data-and-information/
3. https://internetofwater.org/valuing-data/what-are-data-information-and-knowledge/

The flow from data to information and knowledge is not uni-directional. The knowledge gained
may reveal redundancies or gaps in the data collected. As a result, an actionable insight may
be to change the data collected, or how those data are converted into information, to better meet
user needs. For knowledge to result in action, an individual must have the authority and
capacity to make and implement a decision. Knowledge (and authority) are needed to
produce actionable information that can lead to impact.

Data becomes information when it is applied to some purpose in order to add value for the
consumer. Consider a real estate agent assigned to sell a real estate property in the Maitama
neighbourhood with no clear information about the price. She would have to consider the
prices of homes in the neighbourhood to determine the property’s sale price, perhaps using the
average home price in the Maitama area. She may also consider recently sold homes in the area.

How does information become knowledge? The distinction between information and
knowledge is not always clear and is sometimes dicey. It may help to think of knowledge as
being:

1. formal, explicit or generally available, in order to develop policies and operative
guidelines
2. instinctive, subconscious, tacit or hidden within the organization by certain individuals
in the organization
As indicated earlier, computers are driven by data, meaning that data play a very significant
role in computing.

Data Representation Forms

Figure 3.6: Data representation. (Source:4)


Data come in different forms. The most common forms are numbers, texts, pictures, audio and
video. Numbers are usually in decimal, texts are in characters, pictures, audio, and videos are
in bits (jpeg, jpg, mv, flv, wav, mpeg, etc). While these forms are only for human consumption,
the computer understands only numbers, in particular discrete numbers, such as binary (base 2)
or hexadecimal (base 16). Octal (base 8) numbers are also used. More importantly, data in
numbers (in particular decimals), text, audio and video are represented in computers in
discrete (finite) form.

4. https://ecomputernotes.com/fundamental/information-technology/what-do-you-mean-by-data-and-information

Number System

The concept of numbers has been introduced from a very early age. To a computer, everything
is a number, i.e., alphabets, pictures, sounds, etc., are numbers. Number system is categorized
into four types;

• The binary number system consists of only two digit values, 0 and 1.
• The octal number system represents values using 8 digits.
• The decimal number system represents values using 10 digits.
• The hexadecimal number system represents values using 16 digits.

Table 3.1: Number system


Number System

System Base Digits

Binary 2 01

Octal 8 01234567

Decimal 10 0123456789

Hexadecimal 16 0123456789ABCDEF
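As a quick check of Table 3.1, the following Python sketch (illustrative, not part of the original notes) shows one value written in all four number systems and converts the strings back to decimal:

# One value shown in the four number systems from Table 3.1.
n = 255
print(bin(n))        # 0b11111111  (binary, base 2)
print(oct(n))        # 0o377       (octal, base 8)
print(n)             # 255         (decimal, base 10)
print(hex(n))        # 0xff        (hexadecimal, base 16)
# int() converts a string in a given base back to decimal.
print(int("11111111", 2), int("377", 8), int("FF", 16))   # 255 255 255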

Bits and Bytes

Bits – A bit is the smallest possible unit of data that a computer can recognize or use. Computers
usually use bits in groups.

Nibble – a group of four bits.

Bytes – a group of eight bits is called a byte.

Figure 3.7: one bit and group of 8 bits


The following table shows the relationship between bits, bytes and larger units of storage.

Table 3.2: Bits/Byte value


Value Equivalent

1 Byte 8 Bits

1024 Bytes 1 Kilobyte

1024 Kilobytes 1 Megabyte

1024 Megabytes 1 Gigabyte

1024 Gigabytes 1 Terabyte

1024 Terabytes 1 Petabyte

1024 Petabytes 1 Exabyte

1024 Exabytes 1 Zettabyte

1024 Zettabytes 1 Yottabyte

1024 Yottabytes 1 Brontobyte

1024 Brontobytes 1 Geopbyte
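A short Python sketch (assuming the 1024-based units of Table 3.2) that derives the first few larger units from bytes:

# Each unit in Table 3.2 is 1024 times the previous one.
units = ["Byte", "Kilobyte", "Megabyte", "Gigabyte", "Terabyte", "Petabyte"]
size_in_bytes = 1
for unit in units:
    print(f"1 {unit} = {size_in_bytes} bytes = {size_in_bytes * 8} bits")
    size_in_bytes *= 1024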

Text Code

Text code is a format commonly used to represent alphabets, punctuation marks and other
symbols. The four most popular text code systems are EBCDIC, ASCII, Extended ASCII and
Unicode.

EBCDIC: Extended Binary Coded Decimal Interchange Code is an 8-bit code that defines 256
symbols.

ASCII: American Standard Code for Information Interchange is a 7-bit code that specifies
character values from 0 to 127.

Extended ASCII: Extended American Standard Code for Information Interchange is an 8-bit
code that specifies character values from 128 to 255.

Unicode: The Unicode Worldwide Character Standard uses 4 to 32 bits to represent letters,
numbers and symbols.
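The difference between ASCII and Unicode can be seen directly in Python; the sketch below is illustrative only, and the naira sign is simply a convenient example of a character outside the ASCII range.

# Encoding the same text under different text codes.
text = "Naira"
print(text.encode("ascii"))      # b'Naira' - one byte per character, values 0-127
print(text.encode("utf-8"))      # same bytes here; UTF-8 uses 1-4 bytes per character
print("₦".encode("utf-8"))       # b'\xe2\x82\xa6' - a symbol outside ASCII needs 3 bytes
print(ord("₦"))                  # 8358 - its Unicode code point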

• Number Conversions

Discrete data is data that can be categorized into classifications, meaning that discrete data is
countable, i.e., finite. Data that is not discrete is regarded as continuous, meaning that it
can be displayed on a number line with all points on the line having different values.

A decimal number can either be an integer or a floating-point number. Another term for
floating point number is a real number.

An integer number is a whole number without fractional part (i.e. decimal or radix point).

Examples: 104, 3568, -348, 234, -667

A floating-point number is a number with a floating point (or decimal point). A floating-point
number is an integer with decimal point (radix point) or a real number with fractional part, i.e.,
digits to the right of the decimal point. Floating point can be viewed as non-integer.

What makes up a non-integer? A fractional part.

What do we call these non-integer numbers? Floating-point numbers.

Thus, what if we:

• identify the sign


• remove the decimal point to convert the number into an integer
• capture the decimal point position

Examples: 89.1, 67.001, 251.0, 5.456e-3, 0.0005, 4.75675e7, -4875.125 E-3.

Figure 3.8: Floating-point walkthrough. (Source:5)
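A rough Python sketch of the sign / digits / point-position decomposition listed above (illustrative only; the helper name decompose is an assumption, not something defined in the notes):

# Decompose a decimal string into its sign, its digits as an integer,
# and the position of the decimal point.
def decompose(number_text):
    sign = -1 if number_text.startswith("-") else 1
    digits = number_text.lstrip("+-")
    point = digits.find(".")
    if point == -1:
        point = len(digits)            # no decimal point: it sits after the last digit
    return sign, int(digits.replace(".", "")), point

print(decompose("89.1"))        # (1, 891, 2)
print(decompose("-4875.125"))   # (-1, 4875125, 4)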

The examples 5.456e-3, 4.75675e7 are in scientific notation. The general form is

5
. https://knowthecode.io/labs/basics-of-digitizing-data/episode-11

30 | P a g e
mantissa * base^exponent, where the letter e stands for base 10 and the symbol ^ stands for exponentiation. The mantissa has the form d1.d2d3d4d5…, where d1 is greater than 0 and less than the base, and d2, d3, d4, etc., are greater than or equal to 0 and less than the base.

For the examples, the letter e represents base 10. However, it is possible to have other bases,
such as binary, octal and hexadecimal.
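For instance (a small illustrative Python sketch), scientific-notation literals and formatting follow exactly this mantissa/exponent form:

    # Scientific-notation literals: mantissa, the letter e, then the base-10 exponent.
    x = 5.456e-3            # 5.456 * 10^-3
    y = 4.75675e7           # 4.75675 * 10^7
    print(x, y)             # 0.005456 47567500.0

    # Any decimal can be re-displayed in scientific notation with a format code.
    print(f"{89.1:e}")      # 8.910000e+01
    print(f"{0.0005:.1e}")  # 5.0e-04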

• Binary, Octal, and Hexadecimal

1.0101*2^101 is a binary number in scientific notation; note that the exponent is also written in binary. A binary number is one whose digits are either 0 or 1. An octal number has digits 0, 1, 2, 3, 4, 5, 6, and 7. A hexadecimal number has digit values from 0 to 15, which poses a dilemma: how should the digits 10, 11, 12, 13, 14 and 15 be written? These digits are represented by the letters A, B, C, D, E, and F. In computing, the prefix 0x is used to mark a hexadecimal number.

• 0x10A is 1*16^2 + 0*16^1 + 10*16^0 = 266 (base 10)


• 0xFF is 15*16^1 + 15*16^0 = 255 (base 10)
• 10010 (base 2) = 1*2^4 + 0*2^3 + 0*2^2 + 1*2^1 + 0*2^0 = 18 (base 10)
• 237 (base 8) = 2*8^2 + 3*8^1 + 7*8^0 = 159 (base 10)

From these 4 examples, the conversion from any base n to decimal is straightforward: the most significant digit sits at the leftmost position of the number and the least significant digit at the rightmost position. These digits carry the largest and smallest exponents, respectively.

Examples: The most significant digit in 0.000678 is 0 and the least significant digit is 8
(exponents at -1 and -6, respectively). The most significant digit in 9748.05 is 9 and the least
is 5 (exponents are 3 and -2 respectively). One can write them in scientific notation, thus,

• 0.000678 = 6.78e-4
• 9748.05 = 9.74805e3
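The four bulleted conversions above can be checked with a one-line call per number, since Python's int() accepts the base as a second argument (illustrative sketch only):

    print(int("10A", 16))    # 266
    print(int("FF", 16))     # 255
    print(int("10010", 2))   # 18
    print(int("237", 8))     # 159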

• Conversion from any other base n (n = 2,8,16) to base10.

To convert a binary, octal or hexadecimal number to decimal, synthetic multiplication can be used. Consider the binary, octal or hexadecimal number in the general form

• dk…d3d2d1d0 (base b)
where it is assumed that the number has k+1 digits: d0, d1, d2, … dk. As discussed above, the equivalent of this in base 10 is

• d0*b^0 + d1*b^1 + d2*b^2 + d3*b^3 + … + dk*b^k = N


This can be computed using the sequence formula:

ck = dk
ck-1 = dk-1+ ck*b
ck-2 = dk-2 + ck-1*b

31 | P a g e



c2 = d2 + c3*b
c1 = d1 + c2*b
c0 = d0 + c1*b = N
Examples

1. Convert 100101 (base 2) to decimal.


d5 d4 d3 d2 d1 d0
1 0 0 1 0 1
c5 = d5 = 1
c4 = d4 +c5*2 = 0 + 1*2 = 2
c3 = d3 + c4*2 = 0+ 2*2 = 4
c2 = d2 + c3*2 = 1 + 4*2 = 9
c1 = d1 + c2*2 = 0+9*2 = 18
c0 = d0 + c1*2 = 1+ 18*2 = 37 (base 10)

Alternatively

1+ 0+ 0+ 1+ 0+ 1+
2 2 2 2 2
1 2 4 9 18 37 (base 10)

Where the arrow signifies multiplication. Multiply the two numbers and add the number above
to obtain the number below it. Without loss of generality, we will omit the arrows in subsequent
examples.

2. Convert 111100010 (base 2) to decimal.


1 1 1 1 0 0 0 1 0
2 2 2 2 2 2 2 2
1 3 7 15 30 60 120 241 482 (base 10)

3. Convert 127 (base 8) to decimal.


1 2 7
8 8
1 10 87 (base 10)
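The synthetic-multiplication recurrence above translates directly into a short loop. The Python sketch below (the function name to_decimal is just an illustrative choice) reproduces the three worked examples:

    # Synthetic multiplication (Horner's rule): whole number in base b -> decimal.
    def to_decimal(digits, base):
        """digits is a list such as [1, 0, 0, 1, 0, 1] for 100101."""
        c = 0
        for d in digits:            # start from the most significant digit
            c = d + c * base        # c_(k-1) = d_(k-1) + c_k * b
        return c

    print(to_decimal([1, 0, 0, 1, 0, 1], 2))             # 37
    print(to_decimal([1, 1, 1, 1, 0, 0, 0, 1, 0], 2))    # 482
    print(to_decimal([1, 2, 7], 8))                      # 87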
One can convert rational numbers (numbers involving fractions) the same way except that we
use 1/b for multiplication instead of b and carry the operations from right to left instead of from
left to right for digits to the right of the radix point (dot). Note that digits to the right and left
of the radix point (dot) are done separately.

Examples

32 | P a g e
1. Convert 0.10101 (base 2) to decimal.
0 . 1 0 1 0 1
1/2 ½ ½ ½ 1/2
0 . 0.65625 0.3125 0.625 0.25 0.5
In this case the arrows denote addition, e.g., 0.25 is added to 1 before multiplying by 1/2 to
obtain 0.625.

Answer: 0.65625 (base 10)

2. Convert 110.011 (base 2) to decimal.


1 1 0 . 0 1 1
2 2 ½ 1/2 1/2
1 3 6 . 0.375 0.75 0.5
Answer = 6.375 (base 10)

3. Convert 10101.001 (base 2) to decimal.


1 0 1 0 1 . 0 0 1
2 2 2 2 1/2 ½ 1/2
1 2 5 10 21 0.125 0.25 0.5
Answer = 21.125 (base 10)

4. Convert 151.4 (base 8) to decimal.


1 5 1 . 4
8 8 1/8
1 13 105 . 0.5
Answer = 105.5 (base 10)
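Both the integer part and the fractional part can be handled together; the illustrative Python sketch below (base_to_decimal is a made-up helper name) follows the procedure just described and reproduces the answers above:

    # Convert a string written in base b (possibly with a radix point) to decimal.
    def base_to_decimal(text, base):
        whole, _, frac = text.partition(".")
        value = 0
        for d in whole:                        # integer part: multiply by b, left to right
            value = value * base + int(d, 16)  # int(d, 16) also accepts hex digits A-F
        scale = 1.0
        for d in frac:                         # fractional part: each digit is worth d / b^i
            scale /= base
            value += int(d, 16) * scale
        return value

    print(base_to_decimal("0.10101", 2))   # 0.65625
    print(base_to_decimal("110.011", 2))   # 6.375
    print(base_to_decimal("151.4", 8))     # 105.5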

• Converting from decimal to base b (binary, octal or hexadecimal) involves synthetic division.

To convert N(base 10) to (dk…d3d2d1d0) in base b, where d0, d1, d2… dk are digits of the number in
base b.

Note that

N/b (N divided by b) = whole part (the quotient) + remainder/b, and so we write:


N/b = c0 + d0/b
Next, we repeat the division using c0, thus,
c0/b = c1 + d1/b
We continue this process repeatedly until
ck-1/b = ck + dk/b and ck=0.
Example

Convert 37 (in decimal) to binary (base 2).

37/2 = 18 + 1/2, d0 = 1, c0= 18 (remainder =1)


18/2 = 9 + 0/2, d1 = 0, c1 = 9 (remainder =0)
9/2 = 4 + 1/2, d2 = 1, c2 = 4
4/2 = 2 + 0/2, d3 = 0, c3 = 2
2/2 = 1 + 0/2, d4 = 0, c4 = 1
33 | P a g e
½ = 0 + 1/2, d5 = 1, c5 = 0, stop.
Answer: 37(base 10) = 100101 (base 2)
In table format:
Base Divisor Remainder
2 37 1
18 0
9 1
4 0
2 0
1 1
0
The arrow indicates that you write the digits in the order from bottom to top, hence, 37(base 10) =
100101 (base 2).
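The repeated-division procedure can also be written as a short loop. The Python sketch below (from_decimal is an illustrative helper name) collects the remainders and reads them from bottom to top, reproducing 37 → 100101 as well as the conversions in the examples that follow:

    # Synthetic division: convert a decimal integer to its digit string in base b.
    def from_decimal(n, base):
        digits = "0123456789ABCDEF"
        if n == 0:
            return "0"
        out = ""
        while n > 0:
            n, remainder = divmod(n, base)    # N/b = quotient + remainder/b
            out = digits[remainder] + out     # remainders are read bottom to top
        return out

    print(from_decimal(37, 2))     # 100101
    print(from_decimal(65, 16))    # 41
    print(from_decimal(255, 8))    # 377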

More Examples

1. Convert 65 (in decimal) to binary and hexadecimal.


Binary:
Base   Divisor   Remainder
2      65        1
       32        0
       16        0
       8         0
       4         0
       2         0
       1         1
       0

65 (base 10) = 1000001 (base 2)

Hexadecimal:
Base   Divisor   Remainder
16     65        1
       4         4
       0

65 (base 10) = 41 (base 16)

2. Convert 255 (in decimal) to octal and hexadecimal.


Octal:
Base   Divisor   Remainder
8      255       7
       31        7
       3         3
       0

255 (decimal) = 377 (octal)

Hexadecimal:
Base   Divisor   Remainder
16     255       15 (F)
       15        15 (F)
       0

255 (decimal) = FF (hexadecimal)

3. The colour of pixel in an image is represented on the computer using the Red, Green and Blue
(RGB)combination (called tuple) in bits. A bit is the short form of binary digit (so it is either 0
or 1) and it is the smallest unit of data in computing and digital communications. An 8-bit
colour will have values ranging from 0-255 (equivalent of 0-11111111 in binary) for each of R,
G and B. For example, (255, 255, 255) will give a white pixel and (0, 0, 0) combination will give
black pixel. What would be the binary and hexadecimal representations of the following
colours represented in decimal as (92, 125, 233) and (255, 0, 127)?
Answer:
Decimal: (92, 125, 233)
Binary: (01011100, 01111101, 11101001)
Hexadecimal: (5C, 7D, E9)
Decimal: (255, 0, 127)
34 | P a g e
Binary: (11111111, 00000000, 01111111)
Hexadecimal: (FF, 00, 7F)
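The same channel-by-channel conversion can be automated; the Python sketch below (show_rgb is an illustrative helper) formats each channel as 8 binary digits and 2 hexadecimal digits:

    # Show a decimal RGB tuple in binary (8 bits per channel) and in hexadecimal.
    def show_rgb(rgb):
        binary = tuple(format(c, "08b") for c in rgb)
        hexa = tuple(format(c, "02X") for c in rgb)
        print(rgb, "->", binary, "->", hexa)

    show_rgb((92, 125, 233))   # ('01011100', '01111101', '11101001') and ('5C', '7D', 'E9')
    show_rgb((255, 0, 127))    # ('11111111', '00000000', '01111111') and ('FF', '00', '7F')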

Practice Problems

1. Define data and information. What distinguishes data from information and
knowledge?
2. Give an example to explain the difference between data and information.
3. In a gubernatorial election, it was found that candidate A received the majority of the votes from the 20 local electoral districts in the State (specifically, A received more than 50% of the votes in 16 of the electoral districts). In order to win, a candidate must have a majority of the total votes (at least 50% of the total votes) and a three-quarters majority of the electoral districts. Using this description, what would constitute data, information and knowledge?
4. What is the difference between discrete and continuous data?
5. True or false, 23.45e-4 is a number in scientific notation?
6. What are the least and most significant digits in the following numbers: 9.34e4, 1.0001,
0.02361, and 023.45
7. True or false, the number 1181 is binary?
8. The hexadecimal representation can be obtained from the binary representation by grouping the digits in fours starting from the right and padding with zeros at the beginning so that the number of digits is evenly divisible by 4. For example, 11 can be written as 0011, and 101001 can be written as [0010] [1001]. Converting each group of four bits to the corresponding hexadecimal digit gives the hexadecimal representation. Therefore,
a. 11 (base 2) = 0011 (base 2) = 3 (base 16)
b. 101001 (base 2) = [0010] [1001] (base 2) = 29 (base 16)
c. 101110111101 (base 2) = [1011] [1011] [1101] (base 2) = BBD (base 16)
d. 11111111 (base 2) = [1111] [1111] (base 2) = FF (base 16)
e. (Question). Convert the following binary numbers to hexadecimal numbers:
111010001, 10001110111, 11101111, 110111
9. What are the corresponding hexadecimal values for the following colour pixel values?
a. RGB: (214, 192, 255)
b. RGB: (128, 240, 127)
c. RGB: (198, 28, 211)
10. Convert the following decimals to binary, octal and hexadecimal.
a. 1024
b. 105
c. 256
d. 198
11. Convert the following to decimal.
a. 1100011 (base 2)
b. 1217 (base 8)
c. 1F3D (base 16)
d. FFFF (base 16)
e. 111101101 (base 2)
f. 10111 (base 2)

35 | P a g e
LECTURE 4

COMPONENTS OF A COMPUTER SYSTEM

Description

Any kind of computer consists of basically two components: hardware and software. A computer needs both software (the logical aspect) and hardware (the physical aspect) to operate. Figure 4.1 highlights the layers of a computer system architecture. This simple architecture shows how hardware components are selected and interconnected to create a computer system that meets functional, performance and cost goals. As shown, the user interacts with the system through software (application/system software) running on the hardware. In this lecture, the physical aspect of a computer will be discussed.

Figure 4.1: Simple computer system architecture layers6

Objectives

Students will learn:

• what constitutes a computer system


• about input and output devices
• about the Central Processing Unit
• different types of memory
• and about different CPU models and manufacturers

Introduction

Hardware: This is the tangible or physical part of a computer. The devices that make up the
various units in the simple computer model form the hardware. It refers to the collection of

6 . https://www.learncomputerscienceonline.com/introduction-to-computer-system/

36 | P a g e
physical parts of a computer system that one can touch or feel. This includes the computer case,
monitor, keyboard, and mouse. It also includes all the parts inside the computer case, such as
the hard disk drive, processor, motherboard, video card, and other peripheral devices.

The hardware components of a computer can be grouped into four primary categories: System Unit, Display Devices, Input Devices and External Devices.

System Unit

A System Unit is the main component of a personal computer; it houses the other devices necessary for the computer to function. It comprises a chassis and the internal components of a personal computer such as the system board (motherboard), the microprocessor, memory modules, disk drives, adapter cards, the power supply, a fan or other cooling device, and ports for connecting external components such as monitors, keyboards, mice and other devices, as shown in figure 4.2.

Figure 4.2: System unit components

• Output Devices

Output devices enable the user to view information in human-readable form (text and graphical data) associated with a computer program. Display devices commonly connect to the system unit via a cable, and they have controls to adjust the settings for the device. They vary in size and shape, as well as in the technology used. Such devices include speakers, monitors, plotters, printers, etc.

Monitors: Monitors, commonly called Visual Display Units (VDUs), are the main output device of a computer. A monitor forms images from tiny dots, called pixels, that are arranged in a rectangular form. The sharpness of the image depends upon the number of pixels.

There are two kinds of viewing screen used for monitors.

37 | P a g e
• Cathode-Ray Tube (CRT)
• Flat- Panel Display

Figure 4.3: Display device

• Input Devices
An input device is a computer component that enables users to enter data or instructions into a computer. The most common input devices are keyboards and computer mice. Input devices can connect to the system via a cable or a wireless connection; laptop systems typically use a touchpad instead of a mouse. Other input devices include webcams, microphones, joysticks, scanners, light pens, trackballs, Magnetic Ink Character Readers (MICR), Optical Character Readers (OCR), Bar Code Readers, Optical Mark Readers (OMR), etc.

Figure 4.4: Keyboard/mouse


• External Devices
Any peripheral device that is not housed inside the system unit is inherently an external device. A personal computer's functionality can be enhanced by connecting different types of external devices, often called peripherals, to the system unit. These devices typically provide alternative input or output methods or additional data storage. External devices are connected to the system unit via a cable or a wireless connection. Some of them have their own power source and some draw power from the system.

38 | P a g e
Figure 4.5: several categories of external devices

Figure 4.6: connected computer devices

39 | P a g e
More examples of hardware components are as follows:

• Printers

A printer is an output device used to print information on paper. There are two types of printers: Impact Printers and Non-Impact Printers.

Impact Printers

Impact printers print characters by striking an inked ribbon, which is then pressed against the paper.

Characteristics of Impact Printers are the following:

• Very low consumable costs


• Very noisy
• Useful for bulk printing due to low cost
• There is physical contact with the paper to produce an image

These printers are of two types: Character printers and Line printers

Character Printers: Character printers are the printers which print one character at a time.

These are further divided into two types: Dot Matrix Printer (DMP) and Daisy Wheel

Line Printers: Line printers are the printers which print one line at a time. These are of further
two types: Drum Printer and Chain Printer

Non-impact Printers

Non-impact printers print characters without using a ribbon. These printers print a complete page at a time, so they are also called Page Printers.

These printers are of two types: Laser Printers and Inkjet Printers

Characteristics of Non-impact Printers

• Faster than impact printers.


• They are not noisy.
• High quality.
• Support many fonts and different character size.

System Unit Components

Below are some units within the system unit.

• The System Board


• Central Processing Unit
40 | P a g e
• Memory
• Power Supplies
• Cooling Systems etc

The System Board

The system board is a computer component that acts as the backbone for the entire computer
system as it serves as a single platform to connect all of the parts of a computer together. It
connects the CPU, memory, hard drives, optical drives, video card, sound card, and other ports
and expansion cards directly or via cables. System Board is also known as motherboard. It
consists of a large, flat circuit board with chips and other electrical components on it.

Some popular manufacturers of motherboards are: Intel, Asus, Gigabyte, Biostar and MSI.

Figure 4.7: system board or motherboard

Central Processing Unit (CPU)

The Central Processing Unit (CPU), sometimes called the microprocessor or simply the processor, is the real brain of the computer and is where most of the calculations take place. It also stores data, intermediate results and instructions (programs), and controls the operation of all parts of the computer.

The Central Processing Unit (CPU) performs the following functions:

• CPU is considered as the brain of the computer.


• CPU performs all types of data processing operations.
• It stores data, intermediate results, and instructions (program).
41 | P a g e
• It controls the operation of all parts of the computer.
• Supervises flow of data within CPU
• Transfers data to Arithmetic and Logic Unit
• Transfers results to memory
• Fetches results from memory to output devices

Figure 4.8: Central processing unit

CPU has the following components:

• Control Unit
• Arithmetic Logic Unit (ALU)
• Memory unit

Figure 4.9: CPU components

Arithmetic and Logic Unit (ALU)

This unit consists of two subsections namely; arithmetic and logic section.

42 | P a g e
Arithmetic Section: Function of arithmetic section is to perform arithmetic operations like
addition, subtraction, multiplication and division. All complex operations are done by making
repetitive use of above operations.

Logic Section: Function of logic section is to perform logic operations such as comparing,
selecting, matching and merging of data (>, <, =, etc).

Whenever calculations are required, the control unit transfers the data from the storage unit to the ALU. Once the computations are done, the control unit transfers the results to the memory unit, from where they are sent to the output unit for display.

Control Unit (CU)

This unit controls the operations of all parts of the computer but does not carry out any actual
data processing operations.

Functions of this unit are:

• It is responsible for controlling the transfer of data and instructions among other units
of a computer.
• It manages and coordinates all the units of the computer.
• It obtains the instructions from the memory, interprets them, and directs the operation
of the computer.
• It communicates with Input/Output devices for transfer of data or results from storage.
• It does not process or store data.

CPU models and manufacturers

The power and performance of a computer depend mainly on the CPU, or Central Processing Unit, of the computer. As the brains of the computer, the CPU is the MVP (most valuable player) of the system.

Computer manufacturers have found that performance is boosted if a computer has more than
one CPU. This arrangement is called Dual-core or Multi-core Processing and harnesses the
power of two processors. In this configuration, one integrated circuit contains two processors,
their caches as well as the cache controllers. These two "cores" have resources to perform tasks
in parallel, almost doubling the efficiency and performance of the computer as a whole. Dual
processor systems on the other hand have two separate physical processors in the system.

In addition to the multitasking of processors, some advances in technology such as hyper-


threading (processing multiple system intensive applications at the same time), extended
memory 64 technology and dual core (two cores in one processor) are also enhancing system
responsiveness. Some resellers are using a method called "overclocking" to increase the
performance of the CPU by enabling it to run at higher speeds than those recommended by the
manufacturers.

43 | P a g e
INTEL CPU'S

Currently, Intel and AMD are the major CPU manufacturers who seem to have the market covered. Most computers, such as Apple Macs, Gateway computers, HP computers and Dell computers, use processors made by these manufacturers. Some examples of CPUs are given below:

Multi-Core Intel Xeon Processors

The core speed of Xeon family of processors ranges from 1.6GHz to 3.2 GHz. These processors
are suited for specific communication applications such as telecommunications servers, search
engines, network management or storage. It provides high memory bandwidth, memory
capacity and I/O bandwidth. Examples of Computer manufacturers that use this processor in
their systems - Dell, Apple, etc.

Intel Core 2 Extreme Quad-core Processors

This first four core desktop processor is designed for multimedia applications such as
audio/video editing and rendering, 3D modelling and other intensive, high CPU demanding
tasks. With multi-core processing, the system response is improved by delegating certain tasks
to specific cores. Quad core configuration also has an unlocked clock for easier overclocking.
Examples of Computer manufacturers that use this processor in their systems - Dell, Gateway.

Intel Core 2 Duo Processors

The new Intel chipsets that feature 1333MHz bus speeds are enabling the creation of higher
performance processors at competitive prices. There are four processors: Core 2 Duo processor
E6400, Core 2 Duo processor E4300, Core 2 Duo processor T7400 and the Core 2 Duo
processor L7400. This family of processors delivers more instructions per cycle, improves
system performance by efficiently using the memory bandwidth and is more environmentally
friendly because of its low energy consumption. Examples of Computer manufacturers that use
this processor in their systems - Apple computers, Gateway.

Intel Pentium Processors - Pentium M, Pentium 4

This family of Pentium Processors uses a micro architecture for high-performance computing
using low-power. These processors are designed for medium to large enterprise
communications applications, transaction terminals, etc. The cheapest Intel CPUs now
available in the market are the Intel Pentium models.

Figure 4.10: Intel Pentium Processors

44 | P a g e
Intel Celeron Processors

The Celeron M family is designed for the next generation mobile applications. Combining
Intel's trademark high performance stats with low power consumption, these processors are
perfect for thermally sensitive embedded and communications applications. This family of
processors will probably be used for small to medium businesses and for enterprise
communications, Point of Sale appliances and ATMs.

AMD Athlon 64 X2 Dual Core processors

A popular processor for medium- to high-performance computing. It boosts performance by using dual-core technology. The Athlon 64 X2 is the best AMD value for its price.
Examples of companies that use this processor in their systems - Acer, HP, Gateway.

Other processors by AMD include: AMD Athlon X2, AMD Sempron, AMD Turion, AMD
Opteron, AMD Athlon 64 FX.

Memory or Storage Unit

This unit stores data, instructions and intermediate results, and holds the final results of processing before they are released to an output device. All inputs and outputs are transmitted through the main memory. The memory is divided into a large number of small parts called cells. Each location or cell has a unique address, which varies from zero to the memory size minus one. For example, if a computer has 64K words, then this memory unit has 64 * 1024 = 65536 memory locations, and the addresses of these locations vary from 0 to 65535.

Memory is primarily of three types

• Cache Memory
• Primary Memory/Main Memory
• Secondary Memory

Cache Memory

Cache memory is a very high-speed semiconductor memory that can speed up the CPU. It acts as a buffer between the CPU and main memory. It is used to hold those parts of data and programs that are most frequently used by the CPU. These parts of data and programs are transferred from the disk to cache memory by the operating system, from where the CPU can access them.

Advantages

The advantages of cache memory are as follows:

✓ Cache memory is faster than main memory.


✓ It has a shorter access time than main memory.
✓ It stores the program that can be executed within a short period of time.
✓ It stores data for temporary use.

45 | P a g e
Disadvantages

The disadvantages of cache memory are as follows:

- Cache memory has limited capacity.


- It is very expensive.

Main Memory Unit

This unit can store instructions, data and intermediate results. This unit supplies information to
the other units of the computer when needed. It is also known as internal storage unit or main
memory or primary storage.

Primary memory holds only those data and instructions on which computer is currently
working. It has limited capacity and data is lost when power is switched off. It is generally
made up of semiconductor devices. These memories are not as fast as registers. The data and instructions required for processing reside in the main memory.

Characteristics of Main Memory

• These are semiconductor memories


• It is known as main memory.
• Usually volatile memory.
• Data is lost in case power is switched off.
• It is working memory of the computer.
• Faster than secondary memories.
• A computer cannot run without primary memory.

Random Access Memory

RAM (Random Access Memory) is computer memory that is directly accessible by the CPU. RAM stores temporary data; that is, in case of power loss the stored information is lost. In simple words, it stores the data that the CPU is currently processing. It is the internal memory of the CPU for storing data, programs and program results. It is read/write memory that holds data while the machine is working; as soon as the machine is switched off, the data is erased.

Figure 4.11: RAM


46 | P a g e
Access time in RAM is independent of the address; that is, each storage location inside the memory is as easy to reach as any other location and takes the same amount of time. Data in RAM can be accessed randomly, but it is very expensive.

RAM is volatile, i.e., data stored in it is lost when we switch off the computer or if there is a power failure. Hence, a backup uninterruptible power supply (UPS) is often used with computers. RAM is small, both in terms of its physical size and in the amount of data it can hold.

RAM is of two types: Static RAM (SRAM) and Dynamic RAM (DRAM)

Static Random Access Memory (SRAM)

Data is stored in transistors and requires a constant power flow. Because the power is continuous, SRAM does not need to be refreshed to remember the data being stored. SRAM is called static because no change or action (i.e., refreshing) is needed to keep the data intact. It is used in cache memories.

Figure 4.12: Static RAM7

Characteristics of Static RAM

• Static RAM is much faster than DRAM.
• Static RAM offers less storage capacity than DRAM.
• Static RAM takes less power to operate.

Advantages of Static RAM

• Static RAM has low power consumption.


• Static RAM has faster access speeds than DRAM.
• Static RAM helps in creating a speed-sensitive cache.

Disadvantages of Static RAM

7
https://www.geeksforgeeks.org/difference-between-sram-and-dram/

47 | P a g e
• Static RAM has less memory capacity.
• Static RAM has high costs of manufacturing than DRAM.
• Static Ram comprises of more complex design.

Dynamic Random Access Memory (DRAM)

Data is stored in capacitors. The capacitors that store data in DRAM gradually discharge; once the charge is gone, the data is lost. So, a periodic refresh is required in order for DRAM to function. DRAM is called dynamic because constant action (refreshing) is needed to keep the data intact. It is used to implement main memory.

Figure 4.13: DRAM8

Characteristics of Dynamic RAM

• Dynamic RAM is slower in comparison to SRAM.


• Dynamic RAM is less costly than SRAM.
• Dynamic RAM has high power consumption.

Advantages of Dynamic RAM

• Dynamic RAM has lower manufacturing costs than SRAM.
• Dynamic RAM has greater memory capacities.
• Dynamic RAM has a simpler cell structure (one transistor and one capacitor per bit).

8
https://www.geeksforgeeks.org/difference-between-sram-and-dram/

48 | P a g e
Disadvantages of Dynamic RAM

• Dynamic RAM has a slow access speed.


• Dynamic RAM has high power consumption.
• Dynamic RAM data can be lost in case of Power Loss.

Below is table 4.1 showing difference between SRAM and DRAM

Table 4.1: SRAM & DRAM differences

49 | P a g e
SRAM: It stores information as long as the power is supplied.
DRAM: It stores information as long as the power is supplied, or only for a few milliseconds after the power is switched off.

SRAM: Transistors are used to store information in SRAM.
DRAM: Capacitors are used to store data in DRAM.

SRAM: Capacitors are not used, hence no refreshing is required.
DRAM: To store information for a longer time, the contents of the capacitor need to be refreshed periodically.

SRAM: SRAM is faster compared to DRAM.
DRAM: DRAM provides slower access speeds.

SRAM: It does not have a refreshing unit.
DRAM: It has a refreshing unit.

SRAM: These are expensive.
DRAM: These are cheaper.

SRAM: SRAMs are low-density devices.
DRAM: DRAMs are high-density devices.

SRAM: Bits are stored in voltage form.
DRAM: Bits are stored in the form of electric energy (charge).

SRAM: These are used in cache memories.
DRAM: These are used in main memories.

SRAM: Consumes less power and generates less heat.
DRAM: Uses more power and generates more heat.

SRAM: SRAMs have lower latency.
DRAM: DRAM has higher latency than SRAM.

SRAM: SRAMs are more resistant to radiation than DRAMs.
DRAM: DRAMs are less resistant to radiation than SRAMs.

SRAM: SRAM has a higher data transfer rate.
DRAM: DRAM has a lower data transfer rate.

SRAM: SRAM is used in high-speed cache memory.
DRAM: DRAM is used in lower-speed main memory.
50 | P a g e
SRAM: SRAM is used in high-performance applications.
DRAM: DRAM is used in general-purpose applications.

Read Only Memory

ROM stands for Read Only Memory. It can only be read from, not written to. This type of memory is non-volatile; the information is stored permanently in such memories during manufacture. A ROM stores the instructions that are required to start a computer; this operation is referred to as bootstrapping. ROM chips are used not only in computers but also in other electronic items such as washing machines and microwave ovens.

Following are the various types of ROM:

• MROM (Masked ROM)

The very first ROMs were hard-wired devices that contained a pre-programmed set of data or
instructions. These kinds of ROMs are known as masked ROMs which are inexpensive.

• PROM (Programmable Read only Memory)

PROM is read-only memory that can be modified only once by a user. The user buys a blank PROM and enters the desired contents using a PROM programmer. It can be programmed only once and is not erasable.

• EPROM (Erasable and Programmable Read Only Memory)

The EPROM can be erased by exposing it to ultra-violet light for a duration of up to 40 minutes.
Usually, an EPROM eraser achieves this function.

• EEPROM (Electrically Erasable and Programmable Read Only Memory)

The EEPROM is programmed and erased electrically. It can be erased and reprogrammed about
ten thousand times. EEPROMs can be erased one byte at a time, rather than erasing the entire
chip. Hence, the process of re-programming is flexible but slow.

Advantages of ROM

The advantages of ROM are as follows:


✓ Non-volatile in nature
✓ These cannot be accidentally changed
✓ Cheaper than RAMs
✓ Easy to test
✓ More reliable than RAMs

51 | P a g e
✓ These are static and do not require refreshing
✓ Its contents are always known and can be verified

Secondary Memory

This type of memory is also known as external or non-volatile memory. It is slower than main memory. It is used for storing data/information permanently. The CPU does not access these memories directly; instead, they are accessed via input/output routines. The contents of secondary memories are first transferred to the main memory, and then the CPU can access them. Examples include: hard disk, CD-ROM, DVD, etc.

Characteristic of Secondary Memory

✓ These are magnetic and optical memories


✓ It is known as backup memory.
✓ It is non-volatile memory.
✓ Data is permanently stored even if power is switched off.
✓ It is used for storage of data in a computer.
✓ Computer may run without secondary memory.
✓ Slower than primary memories.

Figure 4.14: Hard drive

Hard drive interfaces

Hard drives come with one of several different connectors built in. The five types are ATA/IDE
and SATA for consumer-level drives, and SCSI, Serial Attached SCSI (SAS), and Fibre
Channel for enterprise-class drives.

ATA/IDE Cable

For many years, Advanced Technology Attachment (ATA) connections were the favoured
internal drive connection in PCs. Apple adopted ATA with the Blue and White G3 models.
ATA drives must be configured as either a master or a slave when connecting. This is usually
accomplished by the use of a hardware jumper or, more recently, through the use of a cable
that can tell the drive to act as either a master or slave.

52 | P a g e
ATA also goes by the name ATAPI, IDE, EIDE and PATA, which stands for Parallel ATA.
ATA is still in use in many computers today, but most drive manufacturers are switching over
to SATA (Serial ATA).

SATA

As of 2007, most new computers (Macs and PCs, laptops and desktops) use the newer SATA
interface. It has a number of advantages, including longer cables, faster throughput, multidrive
support through port multiplier technology, and easier configuration. SATA drives can also be
used with eSATA hardware to enable fast, inexpensive configuration as an external drive. Most
people investing in new hard drive enclosures for photo storage should be using SATA drives.

SCSI/SAS and Fibre Channel

SCSI, SAS, and Fibre Channel drives are rare in desktop computers, and are typically found in
expensive enterprise-level storage systems. You can also find SAS drives (along with the
necessary SAS controller cards) in video editing systems where maximum throughput is
needed.

Power Supply

The power supply, known as a switch-mode power supply (SMPS), is an electronic circuit that converts power using switching devices that are turned on and off at high frequencies, together with storage components such as inductors or capacitors that supply power while the switching device is in its non-conducting state. Switching power supplies have high efficiency and are widely used in a variety of electronic equipment, including computers and other sensitive equipment requiring a stable and efficient power supply.

Figure 4.15: SMPS

53 | P a g e
Cooling System

Cooling may be required for CPU, Video Card, Mother Board, Hard Drive, etc. Without proper
cooling, the computer hardware may suffer from overheating. This overheating causes
slowdowns, system error messages, and crashing. Also, the life expectancy of the PC's
components is likely to diminish. The following are commonly used techniques for cooling the
PC or Server components:

• Heat Sinks
• CPU/Case Fans
• Thermal Compound
• Liquid Cooling Systems

Heat Sinks: The purpose of a heatsink is to conduct the heat away from the processor or any
other component (such as chipset) to which it is attached. Thermal transfer takes place at the
surface of a heatsink. Therefore, heat sinks should have a large surface area. A commonly used
technique to increase the surface area is by using fins. A typical processor heat sink is shown
in the figure below:

Figure 4.16: Heat sink

Fan: The Fan is primarily used to force cooler air in to the system or remove hot air out of the
system. A fan keeps the surrounding cooler by displacing air around the heatsink and other
parts of the computer. A typical CPU fan is shown below.

Figures 4.17; CPU fan/heat sink with fan

54 | P a g e
Thermal Compound: A thermal compound is used for maximum transfer of heat from CPU
to the heatsink. The surface of a CPU or a heatsink is not perfectly flat. If you place a heatsink
directly on a CPU, there will be some air gaps between the two. Air is a poor conductor of heat.
Therefore, an interface material with a high thermal conductivity is used to fill these gaps, and
thus improve heat conductivity between CPU and heatsink.

Liquid Cooling Systems: Like a radiator for a car, a liquid cooling system circulates a liquid
through a heat sink attached to the processor. First, the cooler liquid passes through the
heatsink, and then gets hot due to transfer of heat from the processor to the heatsink. Then the
hot liquid passes through the radiator at the back of the case, and transfers the heat to the
secondary coolant (air). Now, the liquid is cool enough to pass through the hot processor
heatsink, and the cycle repeats. The chief advantage of LCS (Liquid Cooling System) is that
the cooling takes place very efficiently (since liquids transfer heat more efficiently than
air/solids). The disadvantages include bulkier cooling system, cost, and additional reliability
issues associated with LCS.

References

https://en.wikipedia.org/wiki/computer-hardware

www.tutorialspoint.com/computer-fundamentals/computer_ram.html

www.geekswhoknow.com/articles/cpu.php

http://www.dpbestflow.org/data-storage-hardware/hard-drive-101#interfaces

https://homepage.cs.uri.edu/faculty/wolfe/book/Readings/Reading01.htm

https://www.learncomputerscienceonline.com/introduction-to-computer-system/

https://www.tutorialspoint.com/computer_concepts/computer_concepts_introduction_to_comput
er.htm

https://www.geeksforgeeks.org/difference-between-sram-and-dram/

55 | P a g e
LECTURE 5

COMPONENT OF COMPUTER: SOFTWARE

Description

The proliferation of software systems has left most non-computer professionals bewildered, especially with the various operating systems such as Windows and Linux, mobile operating systems such as Android and iOS, software applications in terms of open and closed source and, finally, firmware. In this lecture, the software component of a computer will be discussed.

Objectives

Students will learn:

• Different types of computer operating systems


• Understand the different types of mobile operating systems
• Differentiate between open-source software and proprietary application software
• Understand firmware.

Software

A set of instructions that directs a computer to perform stipulated tasks is called a program. Software instructions are written in a programming language, translated into machine language, and executed by the computer. Software can be categorized into two types:

• System software
• Application software

System software (systems software) is computer software designed to operate and control the
computer hardware and to provide a platform for running application software. System
software can be separated into two different categories, operating systems and utility software.

An operating system (OS) is software that manages computer hardware and software
resources and provides common services for computer programs. The operating system is an
essential component of the system software in a computer system. Application programs
usually require an operating system to function.

56 | P a g e
Figure 5.1: Operating System

It is necessary to have at least one operating system installed in the computer to run basic programs like browsers. Figure 5.2 shows a user interacting with a computer: the user employs the input unit to call the required application software, which is managed by the OS to produce the required output via the output devices.

Figure 5.2: Simple user/computer interaction9

Functions of Operating System

Processor Management: An operating system manages the processor’s work by allocating


various jobs to it and ensuring that each process receives enough time from the processor to
function properly.

Memory Management: An operating system manages the allocation and deallocation of memory to various processes and ensures that one process does not consume the memory allocated to another process.

9
https://www.learncomputerscienceonline.com/introduction-to-computer-system/
57 | P a g e
Device Management: There are various input and output devices. An OS controls the working
of these input-output devices. It receives the requests from these devices, performs a specific
task, and communicates back to the requesting process.

File Management: An operating system keeps track of information regarding the creation,
deletion, transfer, copy, and storage of files in an organized way. It also maintains the integrity
of the data stored in these files, including the file directory structure, by protecting against
unauthorized access.

Security: The operating system provides various techniques which assure the integrity and
confidentiality of user data. Following security measures are used to protect user data:

o Protection against unauthorized access through login.
o Protection against intrusion by keeping the firewall active.
o Protecting the system memory against malicious access.
o Displaying messages related to system vulnerabilities.

Error Detection: From time to time, the operating system checks the system for any external
threat or malicious software activity. It also checks the hardware for any type of damage. This
process displays several alerts to the user so that the appropriate action can be taken against
any damage caused to the system.

Job Scheduling: In a multitasking OS where multiple programs run simultaneously, the


operating system determines which applications should run in which order and how time should
be allocated to each application.

Components of Operating System

In order to perform its functions, the operating system has two components: Shell and Kernel.

• Shell

Shell handles user interactions. It is the outermost layer of the OS and manages the interaction
between user and operating system by doing the following:

➢ Prompting the user to give input


➢ Interpreting the input for the operating system
➢ Handling the output from the operating system.

Shell provides a way to communicate with the OS by either taking the input from the user or
the shell script. A shell script is a sequence of system commands that are stored in a file.

• Kernel

The kernel is the core component of an operating system for a computer (OS). All other
components of the OS rely on the core to supply them with essential services. It serves as the
primary interface between the OS and the hardware and aids in the control of devices,
networking, file systems, and process and memory management.

58 | P a g e
Figure 5.3: Kernel

Functions of kernel

The kernel is the core component of an operating system which acts as an interface between
applications, and the data is processed at the hardware level. When an OS is loaded into
memory, the kernel is loaded first and remains in memory until the OS is shut down. After that,
the kernel provides and manages the computer resources and allows other programs to run and
use these resources. The kernel also sets up the memory address space for applications, loads
the files with application code into memory, and sets up the execution stack for programs.

The kernel is responsible for performing the following tasks:

• Input-Output management
• Memory Management
• Process Management for application execution
• Device Management
• System calls control

Types of Operating Systems

There are several different types of operating systems present. In this section, we will discuss
the advantages and disadvantages of these types of OS.

➢ Batch OS
➢ Distributed OS
➢ Multitasking OS
➢ Network OS
➢ Real-Time OS
➢ Mobile OS

• Batch OS

59 | P a g e
Batch OS is the first operating system for second-generation computers. With this OS, users do not interact with the computer directly. Instead, an operator takes up similar jobs, groups them together into a batch, and then these batches are executed one by one on a first-come, first-served basis.

Advantages of Batch OS

• Grouping similar jobs into batches reduces setup time, so overall throughput is improved.
• Multiple users can share batch systems.
• Managing large amounts of work becomes easy in batch systems.
• The idle time for a single batch is very small.

Disadvantages of Batch OS

• It is hard to debug batch systems.


• If a job fails, then the other jobs have to wait for an unknown time till the issue is
resolved.
• Batch systems are sometimes costly.

Examples of Batch OS: payroll system, bank statements, data entry, etc.

• Distributed OS

A distributed OS is a recent advancement in the field of computer technology and is being adopted all over the world at a great pace. In a distributed OS, various computers are connected through a single communication channel. These independent computers have their own memory unit and CPU and are known as loosely coupled systems. The system processes can be of different sizes and can perform different functions. The major benefit of this type of operating system is that a user can access files that are not present on his own system but on another connected system. In addition, remote access is available to the systems connected to this network.

Advantages of Distributed OS

• Failure of one system will not affect the other systems because all the computers are
independent of each other.
• The load on the host system is reduced.
• The size of the network is easily scalable as many computers can be added to the
network.
• As the workload and resources are shared therefore the calculations are performed at a
higher speed.
• Data exchange speed is increased with the help of electronic mail.

Disadvantages of Distributed OS

• The setup cost is high.


• Software used for such systems is highly complex.

60 | P a g e
• Failure of the main network will lead to the failure of the whole system.

Examples of Distributed OS: LOCUS, etc.

• Multitasking OS

The multitasking OS is also known as a time-sharing operating system, as each task is given some time so that all the tasks work efficiently. This system provides access to a large number of users, and each user gets a share of CPU time as if they were working on a single-user system. The tasks may be submitted by a single user or by different users. The time allotted to execute one task is called a quantum, and as soon as the time to execute one task is completed, the system switches over to another task.
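The idea of a time quantum can be sketched with a tiny round-robin simulation (illustrative Python only; the task names and times are invented for the example):

    from collections import deque

    # Round-robin time sharing: each task runs for at most one quantum,
    # then the CPU switches to the next task waiting in the queue.
    QUANTUM = 2                                                    # time units per turn
    tasks = deque([("Editor", 5), ("Browser", 3), ("Music", 4)])   # (name, time still needed)

    clock = 0
    while tasks:
        name, remaining = tasks.popleft()
        run = min(QUANTUM, remaining)
        clock += run
        remaining -= run
        print(f"t={clock:2}: ran {name} for {run} unit(s), {remaining} left")
        if remaining > 0:
            tasks.append((name, remaining))                        # not finished: rejoin the queue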

Advantages of Multitasking OS

• Each task gets equal time for execution.


• The idle time for the CPU will be the lowest.
• There are very few chances for the duplication of the software.

Disadvantages of Multitasking OS

• Processes with higher priority cannot be executed first, as equal priority is given to each process or task.
• The data of the various users must be protected from unauthorized access.
• Sometimes there is a data communication problem.

Examples of Multitasking OS: UNIX, etc.

• Network OS

Network operating systems are the systems that run on a server and manage all the networking
functions. They allow sharing of various files, applications, printers, security, and other
networking functions over a small network of computers like LAN or any other private
network. In the network OS, all the users are aware of the configurations of every other user
within the network, which is why network operating systems are also known as tightly coupled
systems.

Advantages of Network OS

• New technologies and hardware can easily upgrade the systems.


• Security of the system is managed over servers.
• Servers can be accessed remotely from different locations and systems.
• The centralized servers are stable.

Disadvantages of Network OS

• Server costs are high.


• Regular updates and maintenance are required.
61 | P a g e
• Users are dependent on the central location for the maximum number of operations.

Examples of Network OS: Microsoft Windows server 2008, LINUX, etc.

• Real-Time OS

Real-Time operating systems serve real-time systems. These operating systems are useful when
many events occur in a short time or within certain deadlines, such as real-time simulations.

Types of the real-time OS are:

• Hard real-time OS

The hard real-time OS is the operating system for mainly the applications in which the slightest
delay is also unacceptable. The time constraints of such applications are very strict. Such
systems are built for life-saving equipment like parachutes and airbags, which immediately
need to be in action if an accident happens.

• Soft real-time OS

The soft real-time OS is the operating system for applications where time constraint is not very
strict.

In a soft real-time system, an important task is prioritized over less important tasks, and this
priority remains active until the completion of the task. Furthermore, a time limit is always set
for a specific job, enabling short time delays for future tasks, which is acceptable. For Example,
virtual reality, reservation systems, etc.

Advantages of Real-Time OS

• It provides more output from all the resources as there is maximum utilization of
systems.
• It provides the best management of memory allocation.
• These systems are designed to be highly reliable and error-free.
• These operating systems focus more on running applications than those in the queue.
• Shifting from one task to another takes very little time.

Disadvantages of Real-Time OS

• System resources are extremely expensive.


• The algorithms used are very complex.
• Only limited tasks can run at a single time.
• In such systems, we cannot set thread priority as these systems cannot switch tasks
easily.

Examples of Real-Time OS: Medical imaging systems, robots, etc.

62 | P a g e
• Mobile OS

A mobile OS is an operating system for smartphones, tablets and PDAs. It is a platform on which other applications can run on mobile devices.

Advantages of Mobile OS

• It provides ease to users.

Disadvantages of Mobile OS

• Some mobile operating systems give poor battery life.
• Some mobile operating systems are not user-friendly.

Examples of Mobile OS: Android OS, iOS, Symbian OS, and Windows mobile OS.

Popular Operating Systems

Some of the most popular operating systems in use today include:

• Windows: Windows is the most popular desktop operating system, used by over 1
billion users worldwide. It has a wide range of features and applications, including the
Office suite, gaming, and productivity tools.
• macOS: macOS is the desktop operating system used by Apple Mac computers. It is
known for its clean, user-friendly interface and is popular among creative professionals.
• Linux: Linux is an open-source operating system that is available for free and can be
customized to meet specific needs. It is used by developers, businesses, and individuals
who prefer an open-source, customizable operating system.
• iOS: iOS is the mobile operating system used by Apple iPhones and iPads. It is known
for its user-friendly interface, tight integration with Apple’s hardware and software, and
robust security features.
• Android: Android is the most popular mobile operating system, used by over 2 billion
users worldwide. It is known for its open-source nature, customization options, and
compatibility with a wide range of devices.

Utility software helps to analyse, configure, optimize and maintain the computer, such as virus
protection.

Application Software

Application software is used to accomplish specific tasks other than just running the computer
system. Application software may consist of a single program, such as an image viewer; a small
collection of programs (often called a software package) that work closely together to
accomplish a task, such as a spreadsheet or text processing system; a larger collection (often
called a software suite) of related but independent programs and packages that have a common
user interface or shared data format, such as Microsoft Office, which consists of closely
63 | P a g e
integrated word processor, spreadsheet, database, etc.; or a software system, such as a database
management system, which is a collection of fundamental programs that may provide some
service to a variety of other independent applications. In contrast to system software, application software allows users to do things like create text documents, play games, listen to music, or surf the web with a browser.

Figure 5.4: Application software

Firmware

Firmware is a software program or set of instructions programmed on a hardware device. It


provides the necessary instructions for how the device communicates with the other computer
hardware.
Firmware is typically stored in the flash ROM of a hardware device. While ROM is "read-only
memory," flash ROM can be erased and rewritten because it is actually a type of flash memory.

Firmware can be thought of as "semi-permanent" since it remains the same unless it is updated
by a firmware updater. You may need to update the firmware of certain devices, such as hard
drives and video cards in order for them to work with a new operating system. CD and DVD
drive manufacturers often make firmware updates available that allow the drives to read faster
media. Sometimes manufacturers release firmware updates that simply make their devices
work more efficiently. Firmware updates are usually found by going to the "Support" or
"Downloads" area of a manufacturer's website. Keeping your firmware up-to-date is often not
necessary, but it is still a good idea if you can. Just make sure that once you start a firmware updater,
you let the update finish, because most devices will not function if their firmware is not
recognized.

64 | P a g e
Figure 5.5: Devices with firmware

A remote control is a very simple example of an engineered product that contains firmware.
The firmware monitors the buttons, controls the LEDs, and processes the button presses in
order to send data in a format the receiving device (a TV set, for example) can understand and
process.
In electronic systems and computing, firmware is a type of software that provides control,
monitoring and data manipulation of engineered products and systems. Typical examples of
devices containing firmware are embedded systems (such as traffic lights, consumer
appliances, and digital watches), computers, computer peripherals, mobile phones, and digital
cameras. The firmware contained in these devices provides the low-level control program for
the device. Presently, most firmware can be updated.
Firmware is held in non-volatile memory devices such as ROM, EPROM, or flash memory.
Changing the firmware of a device may rarely or never be done during its economic lifetime;
some firmware memory devices are permanently installed and cannot be changed after
manufacture. Common reasons for updating firmware include fixing bugs or adding features
to the device. This may require ROM integrated circuits to be physically replaced, or flash
memory to be reprogrammed through a special procedure. Firmware such as the ROM BIOS of
a personal computer may contain only elementary basic functions of a device and may only
provide services to higher-level software. Firmware such as the program of an embedded
system may be the only program that will run on the system and provide all of its functions.

References

http://www.uswitch.com/mobiles/guides/mobile-operating-systems/

https://www.tutorialspoint.com/operating_system/os_overview.htm

https://mechanicalnotes.com/computer/

https://www.learncomputerscienceonline.com/introduction-to-computer-system/

65 | P a g e
LECTURE 6

VON NEUMANN MODEL OF COMPUTATION

Introduction

John von Neumann is perhaps best known for his work in the early development of computers. As director of the Electronic Computer Project at Princeton's Institute for Advanced Study (1945-1955), he developed MANIAC (mathematical analyser, numerical integrator and computer), which was at the time the fastest computer of its kind. He also made important contributions in the fields of mathematical logic, the foundations of quantum mechanics, economics and game theory10.

Objectives

Students will learn

• Von-Neumann model
• Computer memory
• I/O interfaces
• Instruction cycle
• Basic CPU structures

Von-Neumann Model

Von Neumann’s model is composed of three specific components (or sub-systems) including
a central processing unit (CPU), memory, and input/output (I/O) interfaces. Figure 6.1 defines
one of the various possible methods of interconnecting these components.

Figure 6.1: Von Neumann model

10
http://www.ams.org/about-us/presidents/31-von-
neumann#:~:text=John%20von%20Neumann%20is%20perhaps,at%20the%20time%20the%20fastest

66 | P a g e
CPU

The CPU, which can be regarded as the soul of the computing system, includes three main components: the control unit (CU), one or more arithmetic logic units (ALUs), and multiple registers. The control unit decides the order in which instructions should be executed and controls the retrieval of the required operands. It interprets the instructions of the machine.

The execution of each instruction is carried out by a sequence of control signals created by the control unit. The control unit directs the flow of data through the system by issuing control signals to the various components. Each operation triggered by a control signal is known as a micro-operation (MO).

Computer Memory

It stores program instructions and data. The two types of memories are RAM (random-access
memory) and ROM (read-only memory).

RAM stores the data and general-purpose programs that the machine executes. RAM is temporary: its contents can be modified at any time, and they are lost when the power to the device is turned off.

ROM is permanent and can store the original boot-up instructions of the machine.

I/O Interfaces

The I/O interfaces enable the computer's memory to receive data and to send information to output devices. They also enable the computer to connect to the user and to secondary storage devices such as disk and tape drives. These components are linked through sets of signal lines called buses. As shown in the figure, the main buses carrying data are the control bus, the data bus and the address bus. Each bus consists of multiple wires that enable the parallel transmission of data between several hardware components.

• The address bus identifies a memory location or an I/O device.
• The bidirectional data bus carries information to or from a component.
• The control bus carries signals that allow the CPU to communicate with the memory and I/O devices.

The implementation of software in a von Neumann computing device requires the use of the
three main components just defined. Generally, a software package, known as an operating
system, controls how these three components work together. The operating system makes use
of the I/O interfaces to fetch the application from secondary storage and load it into the
memory. Once the program is in memory, the operating system then schedules the CPU to start executing the program instructions. Each instruction to be executed must first be retrieved from memory; this retrieval is known as an instruction fetch.

After an instruction is fetched, it is placed into a specific register in the CPU known as the instruction register (IR). While in the IR, the instruction is decoded to determine what kind of operation must be performed. If the instruction requires operands, these are fetched from memory or from various registers and placed into the appropriate locations (certain registers or specially designated storage areas referred to as buffers).

Instruction Cycle: Fetch, Decode and Execute Cycle

The Central Processing Unit (CPU) is the brain of the computer because it is what makes all
the decisions and it is where all programs are executed. The purpose of the CPU is to
process data. The CPU works by following a process known as ‘fetch, decode and
execute’. The CPU fetches an instruction from memory, decodes this instruction and then
executes it. The CPU carries out this cycle continuously, millions of times per
second. Whatever software is being used on the computer, the CPU processes the data it works with.

Every processor follows this three-step instruction cycle: fetch, decode and execute.

Figure 6.2: Instruction cycle

Before this process can begin, both the program and its data are loaded into random-access memory (RAM) by the operating system. The address of the next instruction is then loaded into a temporary memory area inside the CPU called the memory address register (MAR). The CPU is now ready to carry out the fetch-decode-execute cycle.

STEP 1. Fetch

The Control Unit sends a signal to the RAM in order to fetch the program and data, which is
then stored in one of the CPU’s registers. To do so, the CPU makes use of a vital hardware
path called the “address bus” along which the program and data travels.

The Control Unit then increments the Program Counter (PC). The PC is an important register
that keeps track of the running order of the instructions and shows which instruction in the
program is due to be executed next.

The CPU then places the address of the next item to be fetched on to the address bus. Data
from this address then moves from main memory into the CPU by travelling along another
hardware path called the ‘data bus’.
STEP 2. Decode

The ‘instruction set’ of the CPU is designed to understand a specific set of commands, which
serves to make sense of the instruction it has just fetched. This process is called ‘decode’. A
single piece of program code might require several instructions.

STEP 3. Execute

This is the part of the cycle when data processing takes place and the instruction is executed. For example, for an instruction that computes the area of a rectangle, the values of the two variables length and width would be multiplied together. The result of this processing is then stored as the variable area in yet another register.

Once the execute stage is complete, the CPU begins the cycle all over again.
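
To make the cycle concrete, the short Python sketch below simulates a toy von Neumann machine. The one-address instruction set (LOAD, MUL, STORE, HALT), the memory layout and the variable names are invented purely for illustration; they do not describe any real processor.

# A toy fetch-decode-execute loop (illustrative only).
memory = [
    ("LOAD", 4),     # address 0: copy the value at address 4 into the accumulator
    ("MUL", 5),      # address 1: multiply the accumulator by the value at address 5
    ("STORE", 6),    # address 2: store the accumulator at address 6 (the "area")
    ("HALT", None),  # address 3: stop the machine
    4,               # address 4: data - length
    5,               # address 5: data - width
    0,               # address 6: data - area (the result goes here)
]

pc = 0               # program counter: address of the next instruction
accumulator = 0      # holds intermediate arithmetic results

while True:
    instruction = memory[pc]       # FETCH: read the instruction the PC points to
    pc = pc + 1                    # increment the PC for the next cycle
    opcode, operand = instruction  # DECODE: split into operation and operand
    if opcode == "LOAD":           # EXECUTE: carry out the decoded operation
        accumulator = memory[operand]
    elif opcode == "MUL":
        accumulator = accumulator * memory[operand]
    elif opcode == "STORE":
        memory[operand] = accumulator
    elif opcode == "HALT":
        break

print(memory[6])                   # prints 20 (length 4 x width 5)

Note how the program and its data sit in the same memory, which is the defining feature of the stored-program (von Neumann) design.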

Basic CPU structure

Figure 6.3: Detailed CPU structure11

Main Memory Unit (Registers)

➢ Accumulator: Stores the results of calculations made by the ALU. It holds the intermediate results of arithmetic and logical operations and acts as a temporary storage location or device.
➢ Program Counter (PC): Keeps track of the memory location of the next instructions to
be dealt with. The PC then passes this next address to the Memory Address Register
(MAR).
➢ Memory Address Register (MAR): It stores the memory locations of instructions that
need to be fetched from memory or stored in memory.

11
https://www.geeksforgeeks.org/computer-organization-von-neumann-architecture/

➢ Memory Data Register (MDR): It stores instructions fetched from memory or any
data that is to be transferred to, and stored in, memory.
➢ Current Instruction Register (CIR): It stores the most recently fetched instruction while it is waiting to be decoded and executed.
➢ Instruction Buffer Register (IBR): An instruction that is not to be executed immediately is placed in the instruction buffer register (IBR).

Input/Output Devices

A program or data is read into main memory from an input device or from secondary storage under the control of a CPU input instruction. Output devices are used to output information from the computer: once results have been computed and stored by the computer, they can be presented to the user with the help of output devices.

Registers

Registers refer to high-speed storage areas in the CPU. The data processed by the CPU are
fetched from the registers. There are different types of registers used in architecture.

Buses

Data is transmitted from one part of a computer to another by means of buses, which connect all major internal components to the CPU and memory. Types:

➢ Data Bus: It carries data among the memory unit, the I/O devices, and the processor.
➢ Address Bus: It carries the address of data (not the actual data) between memory
and processor.
➢ Control Bus: It carries control commands from the CPU (and status signals from
other devices) in order to control and coordinate all the activities within the computer.

The general representation of basic machine organisation is shown in Figure 6.4, which contains the three fundamental components of a computer: the CPU (ALU, control unit, registers), the memory subsystem (stored data and instructions) and the I/O subsystem (I/O devices). Each of these components is connected through buses.

Figure 6.4: Basic machine organisation12

References
https://www.tutorialspoint.com/what-is-von-neumann-model#:~:text=Von%20Neumann's%20model%20is%20composed,methods%20of%20interconnecting%20these%20components.

https://technologyforlearners.com/spotlight-on-the-fetch-decode-execute-cycle/

https://www.geeksforgeeks.org/computer-organization-von-neumann-architecture/

https://www.computerscience.gcse.guru/theory/von-neumann-architecture

12: https://www.learncomputerscienceonline.com/introduction-to-computer-system/
LECTURE 7

PROGRAMMING LANGUAGES AND THE APPLICATION PROCESS: OVERVIEW

Description

This topic deals with an overview of programming languages and the compilation processes.
Programming languages are not very different from spoken languages. Learning any language
requires an understanding of the building blocks and the grammar that govern the construction
of statements in that language. In addition, the history of programming languages, a brief survey of programming paradigms, and an introduction to language translation will be covered.

Objectives

At the end, students will be able to:

• Understand the history of programming languages
• Comprehend programming paradigms
• Know the categories of programming languages
• Understand language translators

Introduction

In this topic, students will learn the fundamentals of programming. As noted, a computer is not
a useful device as an entity without a programming force driving its operations. Consequently,
programming plays a very essential role in the usefulness of a computer system, forming the
interface link between human users and the computer machinery. In this regard, a computer
program is designed to solve a specific problem following the execution of the program
instructions and every program has to be properly designed so as to solve the problem behind
its development effectively and accurately. Note that a program is a set of instructions that helps the computer to perform tasks; such a set of instructions may also be called a script. Programs are executed by the processor, whereas scripts are interpreted. The languages that are used to write a program or set of instructions are called "programming languages".

Brief History of Programming Language

Computer programming is essential in our world today, running the systems for almost every
device. It is used to tell electronic devices/machines what to do. Machines and humans “think”
very differently, so programming languages are necessary to bridge that gap. The first computer
programming language was created in 1883, when a woman named Ada Lovelace worked with
Charles Babbage on his very early mechanical computer, the Analytical Engine. While
Babbage was concerned with simply computing numbers, Lovelace saw that the numbers the
computer worked with could represent something other than just amounts of things. She wrote
an algorithm for the Analytical Engine that was the first of its kind. Because of her contribution,
Lovelace is credited with creating the first computer programming language. As different needs
have arisen and new devices have been created, many more languages have followed. These
are listed with dates as follows:
Algorithm for the Analytical Engine (1883): Created by Ada Lovelace for Charles Babbage’s
Analytical Engine to compute Bernoulli numbers, it’s considered to be the first computer
programming language.

Assembly Language (1949): First widely used in the Electronic Delay Storage Automatic
Calculator, assembly language is a type of low-level computer programming language that
simplifies the language of machine code, the specific instructions needed to tell the computer
what to do.

Autocode (1952): Autocode was a generic term for a family of early computer programming
languages. The first was developed by Alick Glennie for the Mark 1 computer at the University
of Manchester in the U.K. Some consider autocode to be the first compiled computer
programming language, meaning that it can be translated directly into machine code using a
program called a compiler.

Fortran (1957): Created by John Backus for complicated scientific, mathematical, and
statistical work, Fortran stands for Formula Translation. It is one of the oldest computer
programming languages still used today.

Algol (1958): Created by a committee for scientific use, Algol stands for Algorithmic Language. Algol served as a starting point in the development of languages such as Pascal, C, C++, and Java.

COBOL (1959): Created by Dr. Grace Murray Hopper as a computer programming language
that could run on all brands and types of computers, COBOL stands
for COmmon Business Oriented Language. It is used in ATMs, credit card processing,
telephone systems, hospital and government computers, automotive systems, and traffic
signals. In the movie The Terminator, pieces of COBOL source code were used in the
Terminator’s vision display.

LISP (1959): Created by John McCarthy of MIT, LISP is still in use. It stands
for LISt Processing language. It was originally created for artificial intelligence research but
today can be used in situations where Ruby or Python are used.

BASIC (1964): Developed by John G. Kemeny and Thomas E. Kurtz at Dartmouth College so
that students who did not have a strong technical or mathematical understanding could still use
computers, it stands for Beginner’s All-purpose Symbolic Instruction Code. A modified
version of BASIC was written by Bill Gates and Paul Allen. This was to become the first
Microsoft product.

Pascal (1970): Developed by Niklaus Wirth, Pascal was named in honor of the French
mathematician, physicist, and philosopher Blaise Pascal. It is easy to learn and was originally
created as a tool for teaching computer programming. Pascal was the main language used for
software development in Apple’s early years.

Smalltalk (1972): Developed by Alan Kay, Adele Goldberg, and Dan Ingalls at Xerox Palo
Alto Research Center, Smalltalk allowed computer programmers to modify code on the fly and
also introduced other aspects now present in common computer programming languages
including Python, Java, and Ruby.
C (1972): Developed by Dennis Ritchie at Bell Labs, C is considered by many to be the first
high-level language. A high-level computer programming language is closer to human
language and more removed from the machine code. C was created so that an operating system
called Unix could be used on many different types of computers. It has influenced many other
languages, including Ruby, C#, Go, Java, JavaScript, Perl, PHP, and Python.

SQL (1972): SQL was developed by Donald D. Chamberlin and Raymond F. Boyce at IBM.
SQL stands for Structured Query Language. It is used for viewing and changing information
that is stored in databases. SQL uses command sentences called queries to add, remove, or
view data.

MATLAB (1978): Developed by Cleve Moler. MATLAB stands for Matrix Laboratory. It is
one of the best computer programming languages for writing mathematical programs and is
mainly used in mathematics, research, and education. It can also be used to create two- and
three-dimensional graphics.

Objective-C (1983): Created by Brad Cox and Tom Love, Objective-C is the main computer
programming language used when writing software for macOS and iOS, Apple’s operating
systems.

C++ (1983): C++ is an extension of the C language and was developed by Bjarne Stroustrup.
It is one of the most widely used languages in the world. C++ is used in game engines and
high-performance software like Adobe Photoshop. Most packaged software is still written in
C++.

Perl (1987): Perl was originally developed by Larry Wall in 1987 as a scripting language
designed for text editing. Its purpose was to make report processing easier. It is now widely
used for many purposes, including Linux system administration, Web development, and
network programming.

Haskell (1990): Named after Haskell Brooks Curry, an American logician and mathematician.
Haskell is called a purely functional computer programming language, which basically means
that it is mostly mathematical. It is used by many industries, especially those that deal with
complicated calculations, records, and number-crunching.

Python (1991): Designed by Guido Van Rossum, Python is easier to read and requires fewer
lines of code than many other computer programming languages. It was named after the British
comedy group Monty Python. Popular sites like Instagram use frameworks that are written in
Python.

Visual Basic (1991): Developed by Microsoft, Visual Basic allows programmers to choose
and change pre-selected chunks of code in a drag-and-drop fashion through a graphical user
interface (GUI).

R (1993): Developed by Ross Ihaka and Robert Gentleman at the University of Auckland, New
Zealand, R is named after the first names of the first two authors. It is mostly used by
statisticians and those performing different types of data analysis.

Java (1995): Originally called Oak, Java was developed by Sun Microsystems. It was intended
for cable boxes and hand-held devices but was later enhanced so it could be used to deliver
information on the World Wide Web. Java is everywhere, from computers to smartphones to
parking meters. Three billion devices run Java.

PHP (1995): Created by Rasmus Lerdorf, PHP is used mostly for Web development and is
usually run on Web servers. It originally stood for Personal Home Page, as it was used by
Lerdorf to manage his own online information. PHP is now widely used to build websites and
blogs. WordPress, a popular website creation tool, is written using PHP.

Ruby (1995): Ruby was created by Yukihiro “Matz” Matsumoto, who combined parts of his
favorite languages to form a new general-purpose computer programming language that can
perform many programming tasks. It is popular in Web application development. Ruby code
executes more slowly, but it allows for computer programmers to quickly put together and run
a program.

JavaScript (1995): Created in just 10 days by Brendan Eich, this language is mostly used to
enhance many Web browser interactions. Almost every major website uses JavaScript.

C# (2000): Developed by Microsoft with the goal of combining the computing ability of C++
with the simplicity of Visual Basic, C# is based on C++ and is similar to Java in many aspects.
It is used in almost all Microsoft products and is primarily used for developing desktop
applications.

Scala (2003): Created by Martin Odersky. Scala is a computer programming language that
combines functional programming, which is mathematical, with object-oriented programming,
which is organized around data that controls access to code. Its compatibility with Java makes
it helpful in Android development.

Groovy (2003): Developed by James Strachan and Bob McWhirter, Groovy is derived from
Java and improves the productivity of developers because it is easy to learn and concise.

Go (2009): Go was developed by Google to address problems that can occur in large software
systems. Since computer and technology use is much different today than it was when
languages such as C++, Java, and Python were introduced and put to use, problems arose when
huge computer systems became common. Go was intended to improve the working
environment for programmers so they could write, read, and maintain large software systems
more efficiently.

Swift (2014): Developed by Apple as a replacement for C, C++, and Objective-C, Swift is
supposed to be easier to use and allows less room for mistakes. It is versatile and can be used
for desktop and mobile apps and cloud services.

Computer Programming Languages Today

Most computer programming languages were inspired by or built upon concepts from previous
computer programming languages. Today, while older languages still serve as a strong
foundation for new ones, newer computer programming languages make programmers’ work

simpler. Businesses rely heavily on programs to meet all of their data, transaction, and customer
service needs. Science and medicine need accurate and complex programs for their research.
Mobile applications must be updated to meet consumer demands. And all of these new and
growing needs ensure that computer programming languages, both old and new, will remain
an important part of modern life.

Survey of Programming Paradigm

Programming paradigms are different ways or styles in which a given program or programming
language can be organized. Each paradigm consists of certain structures, features, and opinions
about how common programming problems should be tackled. They are more like a set of
ideals and guidelines that many people have agreed on, followed, and expanded upon. Certain
paradigms are better suited for certain types of problems, so it makes sense to use different
paradigms for different kinds of projects. Consequently, there are many options to choose from
when the need to write and structure a given program arises.

Common Programming Paradigms

Now that we have introduced what programming paradigms are, let us consider the most popular
ones, explain their main characteristics, and compare them. Keep in mind this list is not
exhaustive. There are other programming paradigms not included here. It can be shown that
anything solvable using one of these paradigms can be solved using the others; however, certain
types of problems lend themselves more naturally to specific paradigms.

• Imperative Programming

Imperative programming consists of sets of detailed instructions that are given to the computer
to execute in a given order. Control flow in imperative programming is explicit: commands
show how the computation takes place, step by step. Each step affects the global state of the
computation. It is called "imperative" because programmers dictate exactly what the computer
has to do, in a very specific way. Imperative programming focuses on describing how a
program operates, step by step. Example: from a list of people, build a sorted list of the upper-cased names of those whose names are longer than five characters.

result = []
i=0
start:
numPeople = length(people)
if i >= numPeople goto finished
p = people[i]
nameLength = length(p.name)
if nameLength <= 5 goto nextOne
upperName = toUpper(p.name)
addToList(result, upperName)
nextOne:
i=i+1
goto start
finished:
return sort(result)

• Procedural Programming
Procedural programming is a derivation of imperative programming, adding to it the feature of
functions (also known as "procedures" or "subroutines"). In procedural programming, the user
is encouraged to subdivide the program execution into functions, as a way of improving
modularity and organization. It is an Imperative programming with procedure calls. Example,
baking a cake, processes may be divided into separate functions groups;

function pourIngredients() {
- Pour flour in a bowl
- Pour a couple eggs in the same bowl
- Pour some milk in the same bowl
}
function mixAndTransferToMold() {
- Mix the ingredients
- Pour the mix in a mold
}
function cookAndLetChill() {
- Cook for 35 minutes
- Let chill
}
pourIngredients()
mixAndTransferToMold()
cookAndLetChill()

The functions could be written and stored elsewhere and only need to be called. A reader could simply look at the three function calls at the end of the file and get a good idea of what the program does. That simplification and abstraction is one of the benefits of procedural programming; but within the functions, we still have the same old imperative code.

• Functional Programming

Functional programming takes the concept of functions a little bit further. In functional
programming, functions are treated as first-class citizens, meaning that they can be assigned
to variables, passed as arguments, and returned from other functions. Another key concept is
the idea of pure functions. A pure function is one that relies only on its inputs to generate its
result. And given the same input, it will always produce the same result. Besides, it produces
no side effects (any change outside the function's environment). With these concepts in mind,
functional programming encourages programs written mostly with functions.
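
As a minimal sketch of these ideas (the list of names and the helper functions below are invented for illustration), the Python code performs the same task as the imperative example shown earlier, but in a functional style:

# 'people' is a sample list; is_long and shout are pure functions.
people = ["ada", "grace", "dennis", "guido", "bjarne"]

def is_long(name):     # pure: the result depends only on the input
    return len(name) > 5

def shout(name):       # pure: no side effects; same input always gives same output
    return name.upper()

# Functions are first-class values: they are passed to filter and map as arguments.
result = sorted(map(shout, filter(is_long, people)))
print(result)          # ['BJARNE', 'DENNIS']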

• Declarative Programming

Declarative programming is all about hiding away complexity and bringing programming
languages closer to human language and thinking. Programming by specifying the result you
want, not how to get it. It is the direct opposite of imperative programming in the sense that the
programmer does not give instructions about how the computer should execute the task, but
rather on what result is needed.

Control flow in declarative programming is implicit: the programmer states only what the
result should look like, not how to obtain it. Example;

select upper(name)
from people
where length(name) > 5
order by name

No loops, no assignments, etc. Whatever engine interprets this code is simply expected to go and get the desired information, and it can use whatever approach it wants. (The logic and constraint paradigms are generally declarative as well.) What is nice about this is that declarative code is easier to read and comprehend, and often shorter to write.

• Object-Oriented Programming

One of the most popular programming paradigms is object-oriented programming (OOP). The
core concept of OOP is to separate concerns into entities which are coded as objects. Each
entity will group a given set of information (properties) and actions (methods) that can be
performed by the entity. So, OOP is based on the sending of messages to objects. Objects
respond to messages by performing operations, generally called methods. Messages can have
arguments. OOP makes heavy usage of classes (which are a way of creating new objects
starting out from a blueprint or boilerplate that the programmer sets). Objects that are created
from a class are called instances.

Because objects operate independently, they are encapsulated into modules which contain both
local environments and methods. Communication with an object is done by message passing.
Objects are organized into classes, from which they inherit methods and equivalent variables.
The object-oriented paradigm provides key benefits of reusable code and code extensibility.
The ability to use inheritance is the single most distinguishing feature of the OOP paradigm.
Inheritance gives OOP its chief benefit over other programming paradigms - relatively
easy code reuse and extension without the need to change existing source code. The
mechanism of modelling a program as a collection of objects of various classes, and
furthermore describing many classes as extensions or modifications of other classes, provides
a high degree of modularity.
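
As a small, hedged illustration (the class and variable names below are invented for this sketch), the Python code shows a class, instances created from it, message passing via method calls, and inheritance used to extend a class without changing its source code:

class Account:
    # A class is a blueprint; objects created from it are instances.
    def __init__(self, owner, balance):
        self.owner = owner        # properties (data) grouped with...
        self.balance = balance

    def deposit(self, amount):    # ...methods (actions) in a single entity
        self.balance = self.balance + amount

class SavingsAccount(Account):
    # Inheritance: extend Account without modifying its existing code.
    def add_interest(self, rate):
        self.deposit(self.balance * rate)

acct = SavingsAccount("Ada", 1000)   # an instance of the SavingsAccount class
acct.deposit(500)                    # "sending a message" to the object
acct.add_interest(0.10)
print(acct.balance)                  # 1650.0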

Programming paradigms as discussed are different ways in which one can face programming
problems, and organize code. Imperative, procedural, functional, declarative, and object-
oriented paradigms are some of the most popular and widely used paradigms today. And
knowing the basics about them is good for general knowledge and also for better understanding
of other topics with regards to coding.

Category of Programming Languages


Programming languages are broadly categorized into three types:

• Machine level language
• Assembly level language
• High-level language

Machine Level Language

Machine Language is generally called the lowest-level language, and it was the first language available to computer programmers. It is very fast since the language needs no translation, because its instructions are made up of zeros and ones (0's and 1's). Thus, machine language is merely data to the computer, and a program can modify itself during execution. It handles binary data, i.e. 0's and 1's, and interacts directly with the system. Machine language is difficult for human beings to understand since it comprises combinations of 0's and 1's. In this language there is no need for compilers or interpreters for conversion, and hence the time consumption is less. However, it is not portable and is not readable by humans.

The advantages of Machine Language (ML) can be summarized as follows:

✓ Fast execution speed
✓ Storage saving
✓ Programmer's full control of the computer and its capabilities

However, ML has some disadvantages, as follows:

- Difficult to learn
- Highly prone to errors
- Totally machine-dependent — programs written in ML will only execute on the specific machine for which they were written.

Assembly Level Language

Assembly Language was developed in the early 1950s to alleviate some of the difficulties
associated with the Machine Language. Symbolic names or mnemonics are used to replace the
binary code of the Machine Language. Remember that a mnemonic means a memory aid.
Hence AL instructions are easier to remember than the 0's and 1's of ML instructions. Assembly language is very close to machine-level language, and the computer must have an assembler to translate an assembly-level program into a machine-level program; each processor family has its own assembly language. It is in a human-readable format and takes less time than machine language to write and debug a program. However, it is still a machine-dependent language. Example: LDA #$01, where LDA is a mnemonic meaning LoaD the Accumulator with the value that follows.

Assembly Language    Machine Code
SUB AX, BX           0010101110000011
MOV CX, AX           100010111001000
MOV DX, 0            10111010000000000000000

Below are the advantages of the AL:

✓ It is efficient in processing time and in the use of memory space.
✓ It encourages modular programming, where programs are broken into modules.
✓ It provides an error listing which is useful in debugging.
✓ Since it is very close to machine language, it is also fast.

Some disadvantages of AL are stated below:

- It is cumbersome in usage.
- Assembly Language has one-to-one-relationship with machine language, meaning that
one Assembly Language instruction is translated into one Machine Language
instruction. This process leads to long program preparation time.
- Assembly Language is machine-dependent like the Machine Language.

High-level Language

Low-level language is fundamentally machine-dependent. In contrast to the machine dependence of low-level language, High-Level Languages (HLLs) are machine-independent. An HLL uses a format or language that is most familiar to users. The instructions in such a language are called code or scripts. The computer needs a compiler or an interpreter to convert a high-level language program to machine-level language. Examples include C, C++, Python, Java, ADA, LISP, PHP, JavaScript, PASCAL, FORTRAN, COBOL, BASIC, SQL, etc. It is easy to write a program using a high-level language and it is less time-consuming. Debugging is also easy, and it is a human-readable language. The main disadvantages are that it takes more time to execute and occupies more space when compared to assembly or machine-level languages.

Translation Method Classification

High-level programs are generally written in human-readable programming languages such as Python, Java, C, C++, etc. These programs are first required to be converted to low-level (machine code) instructions in binary. The machine instructions in binary can be directly decoded and executed by the computer. This conversion is called compilation. So, among all the computer programming languages, only machine language is in machine-executable form. All other languages must be translated into 0's and 1's, the only things understood by the computer.

Figure 7.1: Program compilation13

Translators take the forms of the following

• Interpreter
• Compiler
• Assembler

Interpreter

Some languages are interpreted by converting the "source program" into machine language as the program is being executed. Interpreters translate code line by line, which makes interpreted programs run more slowly than translated ones. For example, some BASIC language versions only interpret programs instead of compiling them. However, in Turbo Basic, KBASIC or BASIC 4GL you can compile the BASIC source code into an executable code.

Interpreters have several advantages:

• Instructions are executed as soon as they are translated.
• Errors can be quickly spotted - once an error is found, the program stops running and the user is notified at which part of the program the interpretation has failed. This makes interpreters extremely useful when developing programs.

Interpreters also have several disadvantages:

- Interpreted programs run slowly as the processor has to wait for each instruction to be
translated before it can be executed.
- Additionally, the program has to be translated every time it is run.

13
https://www.learncomputerscienceonline.com/introduction-to-computer-system/
- Interpreters do not produce an executable file that can be distributed. As a result, the
source code program has to be supplied, and this could be modified without permission.
- Interpreters do not optimise code - the translated code is executed as it is.

Compiler

Unlike an Interpreter, a Compiler translates an entire program into machine language before
the execution of the program. A Compiler usually translates the SOURCE program into another
program called the OBJECT program which is the machine language version of the source
code. With the object program created by your compiler, you will never use the source program
again except when you want to modify it. Generally, a compiled program runs faster than an
interpreted program.

Compilers have several advantages:

• Compiled programs run quickly, since they have already been translated.
• A compiled program can be supplied as an executable file. An executable file is a file
that is ready to run. Since an executable file cannot be easily modified, programmers
prefer to supply executables rather than source code.
• Compilers optimise code. Optimised code can run quicker and take up
less memory space.

Compilers also have disadvantages:

- The source code must be re-compiled every time the programmer changes the program.
- Source code compiled on one platform will not run on another - the machine code is
specific to the processor's architecture.

Assembler

As noted earlier, Assembly Language is neither compiled nor interpreted; it is simply assembled. Since the assembly language is already close to the machine language,
assembling a program is therefore less time consuming than compilation. Whereas compilers
and interpreters generate many machine code instructions for each high-level instruction,
assemblers create one machine code instruction for each assembly instruction.
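
As a rough illustration of the point that one high-level statement maps to several lower-level instructions, Python's standard dis module can display the bytecode (an interpreter's internal low-level form, used here only as an analogy for machine code) generated for a single assignment:

import dis

source = "area = length * width"           # one high-level statement
code = compile(source, "<example>", "exec") # translate it to bytecode
dis.dis(code)                               # typically lists separate LOAD, MULTIPLY and STORE steps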

It is worth noting that advancement in computer programming languages has added a new
classification of languages, called Very High-Level Languages (VHLL's) or what are usually
referred to as the 4th Generation Languages (4GLs). A VHLL is a programming language with a very high level of abstraction, used mainly as a professional productivity tool. VHLLs are domain-specific languages, limited to specific applications, purposes or types of tasks. Below are the basic characteristic layers of a simple 4GL:

• Database
• Data Communication
• Data Processing

• End User Facilities (EUF)

From the above, database languages such as FoxPro, Dbase and Foxbase are 4GL's. A language
such as Visual Basic which equally has database capabilities can also be classified as a 4GL
from the above characteristics.

References

https://learnacademy.org/blog/first-programming-language-use-microsoft-apple/

Online College Plan: https://www.onlinecollegeplan.com/computer-programming-languages/

Tutorials Point: Computer concept: https://www.tutorialspoint.com/computer_concepts/computer_concepts_introduction_to_computer.htm

Programming paradigms: https://www.freecodecamp.org/news/an-introduction-to-programming-paradigms/#:~:text=What%20is%20a%20Programming%20Paradigm%3F,programming%20problems%20should%20be%20tackled.

https://www.cs.ucf.edu/~leavens/ComS541Fall97/hw-pages/paradigms/major.html

https://cs.lmu.edu/~ray/notes/paradigms/#:~:text=functional%20language%20paradigm%E2%80%9D.-,Some%20Common%20Paradigms,%2Dfree%2C%20nested%20control%20structures.

https://www.bbc.co.uk/bitesize/guides/z4cck2p/revision/3#:~:text=Generally%2C%20there%20are%20three%20types,assemblers

LECTURE 8

INTRODUCTION TO PROGRAMMING

Description
This topic takes students through an introduction to programming, algorithms, pseudocode, and flowcharts. BASIC programming and fundamental programming constructs will also be treated.
Objectives
Students will learn
• Computer program
• Basic principles of programming
• Stages of program development
• Algorithm, Pseudocodes, and Flowchart
• BASIC programming fundamentals

Computer Program
Recall that a program is a set of instructions that helps the computer to perform tasks. These instructions direct the computer on exactly what to do to provide solution(s) to problems. Although a program is a set of instructions, the statements must be submitted to the computer as a unit to direct the computer's behaviour. In other words, a program is a set of instructions following the rules of the chosen language. Without programs, computers are useless. A program is like a recipe: it contains a list of ingredients (called variables) and a list of directions (called statements) that tell the computer what to do with the variables. The languages that are used to write a program or set of instructions are called "programming languages".

Computer Programming

Programming is the process of creating a set of instructions that tell a computer how to perform
a task. It is the process of writing, testing, debugging/troubleshooting, and maintaining the
source code of computer programs. This source code is written in a programming language
like C++, JAVA, Python etc.

Basic Principles of Programming

Again, a computer program is designed to solve a specific problem following the execution of
the program instructions. However, it is important to know that a computer does not solve
problems the way humans do. Human beings make use of reasoning, intelligence and intuition
to solve problems while computers solve problems according to instructions supplied by
programmers. In this regard, every program has to be properly designed so as to solve the
problem behind its development effectively and accurately. The aims guiding design of a good
computer program are as follows:

• Reliability
• Maintainability
• Portability
• Readability
• Performance
• Memory Saving

Reliability: Reliability means that one should be able to depend on the program to always do
what it has been designed to do.

Maintainability: It should be possible to modify the program when the need arises.

Portability: The concept of portability in programming is that a program should be capable of being transferred to a different computer platform with minimal modification, if any at all.

Readability: A program should be easy for other programmers to read and understand. For
example, a readable program is easier to maintain.

Performance: A program that does not carry out the expected tasks quickly and efficiently has
lost the performance aim. Therefore, a major aim in program design is that the program should
execute quickly and efficiently too.

Memory Saving: What is meant here is simply that a program should not be unnecessarily long or require excessively large memory to execute.

Stages of Program Development

Having gone through the aims underlying the designs of programs, let’s see below the different
stages involved in program development:

• Problem Definition
• Solution Design
• Program Coding or Writing
• Program Testing
• Program Documentation and Maintenance

Problem Definition

Problem formulation or definition is essential in programming, and it begins with the recognition of a need for information by a user or an organization. The programmer is expected to analyse the problem thoroughly in order to understand what is required of its solution. Generally, if one describes a problem carefully at the beginning of the programming process, the program will be better and might cost less to develop.

One way of defining a problem is to do that in terms of the following:

▪ Input
▪ Output
▪ Processing

Starting with OUTPUT, this represents the information requirements of the users of the program. This is the reason why, most of the time, the programmer can simply use a report generated by the program to design the corresponding input form or interface.

Having determined the output requirements, then the INPUT required to provide the output
should be determined too. Finally, based on the input and output requirements, the
PROCESSING can then be determined.

Solution Design

After the definition of the problem is completed, the design of the solution is the next step and
this may take the form of one or more programs. What is best to do here is for the programmer
to take each step or segment of the problem definition and then work out a tentative program
flow. With this approach, by handling each segment separately, one can concentrate on
developing an efficient and logical flow for that segment. This is the approach employed in
what is called "Modular Programming", where a program is divided into parts or modules
for easy development and maintenance.

To develop a program logical flow, there are two major aspects:

▪ General Logic
▪ Detailed Logic

The "general logic" flow design can be done by using a "Structure Chart" which shows the
major elements of the program and their relationships to each other. After the general logic
description, one must deal with the "detailed logic". This is simply the step-by-step operation
of each block in your structure chart. Figure 8.1 shows an example of a structure chart for a
simple Payroll program with main segment that contains sub-segments and each with
additional sub-segments, etc.

Figure 8.1: Structure chart

By detailed logic, we mean each of the boxes in the above chart should be described in clear
terms. In detailed logic description, one will definitely need to use such tools as Algorithm,
Flowcharts and Pseudocode which will be treated later.

In general, there is nothing frightening about writing a computer program after describing the problem and its solution; one simply needs to understand the basic syntax/logic patterns involved in programming.

Program Coding or Writing

Next in program development is the coding or writing of the program itself in a specific
programming language, especially High-Level Languages (HLL), such as Basic, Pascal or
C++, etc. Generally, the definition and solution of a problem do not depend on a particular
programming language. But most of the times, the proposed solution may limit the choices of
languages that can be employed. It is also necessary to know that some languages are better
suited for some types of problems.

Program Testing

After the coding or writing of a program, it is submitted to the computer for testing. Generally,
testing involves the following:

▪ Debugging
▪ Compiling
▪ Testing (in stages)
Debugging: Errors in programs are usually called bugs, and the process of removing errors from your programs is called debugging.

Compiling: Program has to be translated before the computer can execute it. Compiling is one
way of translating program(s). The other is by using Interpreters as discussed earlier.

Testing: Usually, for a large program that has been developed using the modular method, there are various stages of testing, as follows:

▪ Unit Testing
▪ Integration Testing
▪ System Testing
▪ User Testing

Unit Testing involves testing the separate components or modules as they are being developed.
Integration Testing involves testing the program as separate modules are put together. Finally,
on the part of the programmer, System Testing occurs when the whole program is being tested
in its final form to be ready for use. However, there is also the usual need for the user testing.
This is when the user of the program tests the final program to see whether it meets his or her
needs.

Program Documentation and Maintenance

There is no good programming without documentation. This is the documentation of all the
work involved in the program development. The documentation should consist of all written
descriptions and explanations of the program and other materials associated with the
development. Generally, proper documentation serves as a reference guide for programmers
and system analysts who are to modify the programs and their procedures when the needs arise.
When an organization grows for example, program modifications must keep pace with its
changing needs. Hence the process of documentation is an ongoing one. Maintenance includes
any activity aimed at keeping programs in working condition, error-free, and up to date.

Algorithm

As discussed, a computer accepts input (data) from the user and, based on the instructions given, processes the data and produces output. These instructions are referred to as an algorithm. An algorithm is a prescribed set of well-defined rules or instructions for the solution of a problem in a finite number of steps. Put differently, an algorithm is a sequence of well-defined, finite steps, ordered sequentially to accomplish a task, or a set of step-by-step measures or rules followed to complete a given task or solve a particular problem.

Algorithms are used for calculation, data processing, and many other fields. They are essential because algorithms provide the organized procedures that computers require. A good algorithm is like using the right tool in a workshop: the job is done with little effort. Using the wrong algorithm, or one that is not plainly defined, is like trying to cut a piece of plywood with a pair of scissors: the job may eventually get done, but not effectively. So, when solving a problem, the key to arriving at the best solution is often choosing the right approach.

Properties of an Algorithm

Figure 8.2: Properties of an algorithm

▪ Finiteness: An algorithm must always terminate after a finite number of steps. After every step one gets closer to the solution of the problem, and after a finite number of steps the algorithm reaches an end point.
▪ Definiteness: Each step must be precisely defined; the actions to be carried out must be unambiguously specified for every activity.
▪ Input: An algorithm should have some inputs. These values are given to the algorithm before it begins.
▪ Output: An algorithm has one or more outputs after completing the specified task based on the inputs.
▪ Effectiveness: Algorithm operations should be basic enough to be done exactly, in a finite amount of time, by a person with pen and paper. There must never be an impossible task included in the algorithm's steps; in actual programming, an example of an impossible task is dividing a quantity by zero.

Common Elements of Algorithms

• Acquire data (input): Some means of reading values from an external source; most
algorithms require data values to define the specific problem.
• Computation: Performing arithmetic computations, comparisons, testing logical
conditions, etc.
• Selection: Choosing among two or more possible courses of action, based upon initial
data, user input and/or computed results.
• Iteration: Repeatedly executing a collection of instructions, for a fixed number of times
or until some logical condition holds
• Report results (output): Reporting computed results to the user, or requesting additional
data from the user.

Types of Algorithms

Algorithms are classified according to what they are being used to achieve. The basic types of
computer science algorithms include:

• Divide and conquer algorithms – divide the problem into smaller sub-problems of the same type, solve those smaller problems, and combine their solutions to solve the original problem (binary search, sketched after this list, is a classic example).
• Brute force algorithms – try all possible solutions until a satisfactory solution is found.
• Randomized algorithms – use a random number at least once during the computation
to find a solution to the problem.
• Greedy algorithms – find an optimal solution at the local level with the intent of
finding an optimal solution for the whole problem.
• Recursive algorithms – solve the lowest and simplest version of a problem then, solve
increasingly larger versions of the problem until the solution to the original problem is
found.
• Backtracking algorithms – divide the problem into sub problems, each which can be
attempted to be solved; however, if the desired solution is not reached, move backwards
in the problem until a path is found that moves it forward.
• Dynamic programming algorithms – break a complex problem into a collection of
simpler sub problems, and then solve each of those sub problems only once, storing
their solution for future use instead of re-computing their solutions.
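
As a small sketch of the divide-and-conquer idea (the example list and values below are invented for illustration), binary search repeatedly halves the portion of a sorted list that could still contain the target:

def binary_search(items, target):
    # Divide and conquer: repeatedly halve the sorted list being searched.
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2    # divide: look at the middle element
        if items[mid] == target:
            return mid             # found the target at index mid
        elif items[mid] < target:
            low = mid + 1          # keep only the upper half
        else:
            high = mid - 1         # keep only the lower half
    return -1                      # the target is not in the list

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))   # prints 4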

Algorithm Representation

Algorithms can be expressed in many notations including:

• Natural languages: tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms.
• Programming language: intended for expressing algorithm in the form that can be
executed by a computer.
• Flow charts and Pseudocodes: both are structured ways to express algorithms that
avoid many ambiguities common in natural language statements, while remaining
independent of a particular implementation language.

Pseudocode

A pseudocode is a logical representation of an algorithm written in the style of a third-generation language. Literally, pseudocode means "fake code". It is an artificial and informal approach to writing a sequence of actions and instructions (an algorithm) in a form that humans can understand with ease. In other words, it is a means of describing computer algorithms using a combination of natural language and programming language. This helps programmers concentrate on the organisation and sequence of a computer algorithm without the need to follow exact coding syntax.

In pseudocode, one does not have to think about semicolons, curly braces, the syntax for arrow functions, how to define promises, DOM methods or other core language details. It is essentially just a way of explaining one's thinking, and the writer makes the rules. It does not matter what language you use to write your pseudocode; all that matters is comprehension. It is important to know that a pseudocode is not directly executable on a computer unless it is transformed into high-level language code.
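
As a small, hypothetical example, the sketch below first states a pseudocode for finding the largest of three numbers (written as comments, since pseudocode is meant for human readers), and then gives a direct Python rendering of the same steps; the sample values are invented for illustration:

# Pseudocode:
#   READ a, b, c
#   SET largest TO a
#   IF b > largest THEN SET largest TO b
#   IF c > largest THEN SET largest TO c
#   PRINT largest

a, b, c = 12, 47, 30    # READ a, b, c (sample values)
largest = a             # SET largest TO a
if b > largest:         # IF b > largest THEN SET largest TO b
    largest = b
if c > largest:         # IF c > largest THEN SET largest TO c
    largest = c
print(largest)          # PRINT largest  ->  47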

Flowchart

The emphasis in this sub-section will be on program flowcharts; however, brief descriptions of the following charts are given first.

Data Flow Diagram (DFD)

A data-flow diagram is a way of representing a flow of data through a process or a system (usually an information system). The DFD also provides information about the outputs and
inputs of each entity and the process itself. A data-flow diagram has no control flow - there are
no decision rules and no loops. A data flow diagram (DFD) maps out the flow of information
for any process or system. It uses defined symbols like rectangles, circles and arrows, plus short
text labels, to show data inputs, outputs, storage points and the routes between each
destination. Data flowcharts can range from simple, even hand-drawn process overviews, to
in-depth, multi-level DFDs that dig progressively deeper into how the data is handled.

Figure 8.3: Common DFD symbols14

System Flowchart

14
https://en.wikipedia.org/wiki/Data-flow_diagram

A system flowchart shows the path taken by data in a system and the decisions made during
different levels. Different symbols are combined together to show data flow, including what
happens to data and where it goes.

Forms Flowcharts

The DFDs and system flowcharts already considered give no indication of the units of an organization that perform the data processing task, or of which units use the information. Forms flowcharts are employed to supply this information on how documents and forms flow among the organizational units. They do not indicate how data are processed.

Program Flowcharts

Program flowchart shows control over a program in a system. This flowchart is very important
while writing code or a program. It shows how data flows while writing an algorithm.

Generally, a flowchart is a diagram that depicts the flow of data in sequential order through processing systems; it makes the operations, and the sequence of these operations, in a system known. In other words, a flowchart is a means to visually present the flow of data through an information processing system, the operations performed within the system, and the sequence in which they are performed.

As stated, focus is on the program flowchart, which describes what operations (and in what
sequence) are required to solve a given problem. The program flowchart can be likened to the
blueprint of a building. A designer draws a blueprint before starting construction on a building.
Similarly, a programmer prefers to draw a flowchart prior to writing a computer program. As
in the case of the drawing of a blueprint, the flowchart is drawn according to defined rules and
using standard flowchart symbols prescribed by the American National Standard Institute
(ANSI), Inc.

Meaning of a Flowchart

A flowchart is a diagrammatic representation that illustrates the sequence of operations to be performed to get the solution of a problem. Flowcharts are generally drawn in the early stages
of formulating computer solutions. Flowcharts facilitate communication between programmers
and business people. These flowcharts play a vital role in the programming of a problem and
are quite helpful in understanding the logic of complicated and lengthy problems. Once the
flowchart is drawn, it becomes easy to write the program in any high-level language.
Frequently, use of flowcharts is helpful in explaining the program to others. Hence, it is correct
to say that a flowchart is a must for the better documentation of a complex program.

The "Logic or Block" diagrams of charts are employed to depict the step-by-step procedures
to be used by a computer system to process data. Usually, the flowchart symbols are arranged
in the same logical sequence in which corresponding program statements will appear in the
program. A programmer is expected to be very familiar with how to construct a good program
flowchart. This is because flowcharts provide excellent way of documenting your program.
Moreover, flowcharts serve as good maintenance tools since they can be employed to guide
the programmer to determine what statements are required to make any necessary

92 | P a g e
modifications and where to locate them. Thus, an updated flowchart provides good
documentation for a revised program.

Flowchart Symbols

Flowcharts are generally drawn by means of some standard symbols; nevertheless, some
special symbols can also be developed when needed. Below are some standard symbols that
are frequently used in many computer programs.

Table 8.1: Flowchart symbols

Guidelines For Drawing a Flowchart

(a) In drawing a proper flowchart, all necessary requirements should be listed out in
logical order.

(b) The flowchart should be clear, neat and easy to follow. There should not be any
room for ambiguity in understanding the flowchart.
(c) The usual direction of the flow of a procedure or system is from left to right or top
to bottom.
(d) Only one flow line should come out from a process symbol.



(e) Only one flow line should enter a decision symbol, but two exit flow lines, one for each possible answer (e.g. Yes/No), should leave the decision symbol.

(f) Only one flow line is used in conjunction with terminal symbol.
(g) If the flowchart becomes complex, it is better to use connector symbols to reduce the number of flow lines. Avoid the intersection of flow lines if you want the flowchart to be a more effective and better means of communication.

(h) Ensure that the flowchart has a logical start and finish.
(i) It is useful to test the validity of the flowchart by passing through it with simple test data.

Advantages Of Using Flowcharts

The benefits of flowcharts are as follows:

i. Communication: Flowcharts are a better way of communicating the logic of a system to all concerned.
ii. Effective analysis: With the help of a flowchart, a problem can be analysed in a more effective way.
iii. Proper documentation: Program flowcharts serve as good program documentation, which is needed for various purposes.
iv. Efficient coding: The flowcharts act as a guide or blueprint during the systems analysis and program development phase.
v. Proper debugging: The flowchart helps in the debugging process.
vi. Efficient program maintenance: The maintenance of an operating program becomes easy with the help of its flowchart; it helps the programmer to direct effort more efficiently to the part that needs it.

BASIC Programming Fundamentals

This sub-section is aimed at introducing the rudiments of BASIC programming language.


BASIC stands for Beginners’ All-purpose Symbolic Instruction Code and is a family
of general-purpose, high-level programming languages designed for ease of use. The language
was developed in the early 1960s by John G. Kemeny and Thomas Kurtz of Dartmouth College,
as a teaching language. The emergence of microcomputers in the mid-1970s led to the
development of multiple BASIC dialects, including Microsoft BASIC in 1975. Due to the
tiny main memory available on these machines, often 4 KB, a variety of Tiny BASIC dialects
were also created. BASIC was available for almost any system of the era, and became the de
facto programming language for home computer systems that emerged in the late 1970s.
These PCs almost always had a BASIC interpreter installed by default, often in the
machine's firmware or sometimes on ROM cartridge.

There are many versions of the language. Examples are: BASICA (Advanced BASIC), GW BASIC (Eagle BASIC), QBASIC (Quick BASIC), Turbo BASIC, Visual BASIC, BASIC 4GL, KBASIC, FreeBASIC, SpeedBASIC, ExtremeBASIC, MediaBASIC, YaBASIC, xBASIC, Liberty BASIC, Just BASIC, wxBASIC, smallBASIC, Gambas.

Versions like BASICA and GW BASIC have already been superseded by other versions listed above. Visual BASIC is a version of the language specially designed for Windows platforms and is today one of the programming tools with very high preference among programmers. It was released in 1991 by Microsoft, combining an updated version of BASIC with a visual forms builder. This reignited use of the language, and "VB" remains a major programming language in the form of VB.NET. However, it is interesting to know that the BASIC language remains one of the best programming tools, gaining wide acceptance even in the face of rapid developments of various computing platforms.

Basic Variables and Characters

The BASIC Interpreter or Compiler (BASIC program with Turbo BASIC, KBASIC,
MediaBASIC and others, for Example) handles variables as names of memory locations.

Memory can be visualized as a Post Office, with its boxes as memory locations or cells. Each
Post Office box has an address that identifies it; the contents of the box can change from time
to time, but the address remains unique and unchanged. The same applies to computer memory.
Thus, variables are like containers for storing data values. In addition to identifying memory
locations, variable names are also used to identify the types of data being stored in the
computer memory. Though there are several types of data, two will be introduced in this unit
on BASIC, and hence two types of variables, namely

• Numeric Variables
• String Variables

Numeric Variables

Generally, in BASIC, variable names must start with a letter and can be followed by other
letters or digits; a variable name can be as long as 40 characters (as in MS QBasic).
Examples of variable names are A1, B3, AB4, JOHN, REJ2, Y5, and so on.

Numeric variables are simply those that represent numbers. In BASIC, such numbers can be
whole numbers, decimals, zero, or positive or negative integers. In several other programming
languages, you may need to declare the type of a variable specifically, for example as REAL,
INTEGER, LOGICAL, COMPLEX or CHARACTER. BASIC, however, is a more accommodating
language, and most versions do not insist on declaring the variable type.

String Variables

Variable names are also employed to represent character data in memory. String variable
names are similar to numeric variable names, but they refer to memory locations containing
character strings (collections of characters). To distinguish a string variable, the dollar
symbol $ is placed at the end of the variable name. Examples of string variable names are
therefore: A$, JOHN$, B2$, REJ$, etc. It is also good to know that the dollar sign $ appears
at the end of almost every BASIC reserved word that deals with strings. While the values of
numeric variables are simply numbers, the values of string variables are given as characters
enclosed between double quotes, such as "NAME", "OK", etc. The space character can also form
part of a string. The following guidelines may be helpful in choosing variable names:

✓ Keep a variable name as short as possible, since it has to be typed every time it is
needed in the program code (though copy and paste can also be used).
✓ Select a meaningful variable name to aid remembering what it represents.
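
As a quick illustration (a minimal sketch in the QBasic-style notation used in this material;
the particular names and values are invented for the example), the following snippet stores a
numeric value and a string value and prints both:

10 REM Numeric and string variables
20 LET AGE = 25
30 NAME$ = "JOHN"
40 PRINT NAME$; " is"; AGE; "years old"
50 END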

Below are some characters or symbols used in BASIC language. The characters will be grouped
for your understanding as follows:

• Alphabetic characters
• Numeric characters
• Data — type Suffixes
• Mathematical Operators
• Special or other characters

Alphabetic Characters

a — z or A — Z

That is, BASIC accepts letters A to Z, either in lower or upper cases.

Numeric Characters

0 — 9 and A — F (or a — f) for hexadecimal numbers

Data-Type Suffixes

$ String
! Single Precision
# Double Precision
% Integer
& Long Integer

Mathematical Operators

+ Addition
* Multiplication
- Subtraction
= Equal to
< Less than
> Greater than
. Decimal point
^ Exponentiation
<> Not equal to
/ Division
<= Less than or equal to
>= Greater than or equal to
\ Integer division
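
As a hedged illustration of how some of these operators behave (assuming a QBasic-style
interpreter; the operand values are arbitrary, and the way a true comparison is displayed can
vary between dialects):

10 REM Operator demonstration
20 A = 7 : B = 2
30 PRINT A / B         ' ordinary division: 3.5
40 PRINT A \ B         ' integer division: 3
50 PRINT A ^ B         ' exponentiation: 49
60 PRINT A <> B        ' relational comparison; many dialects print -1 for true
70 END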

Special Characters

: (Colon) separates multiple statements on a single line


; (Semicolon), controls PRINT statement output
, (Comma) also controls PRINT statement output
' (Single Quote) used for comment line in place of REM.
? Used in place of PRINT. BASIC also uses it as INPUT statement prompt

Reserved Words or Keywords in Basic

Just as words have their meanings in natural language, the same applies in programming
languages: reserved words (also referred to as keywords or BASIC statements) generally
describe the operations to be performed by the computer. A reserved word is an instruction in
BASIC which has a specific meaning to the computer or interpreter. If reserved words are
wrongly coded, the system gives a syntax error message when the program is run. As a vital
programming rule, avoid using any of the reserved words as a variable name, to prevent
program errors during execution. These BASIC reserved words are grouped below according to
their programming tasks.

Data manipulation

LET: It is used to assign a value (which may be the result of an expression) to a
variable. In most dialects of BASIC, it is optional, and a line with no other identifiable
keyword will assume the keyword to be LET. Example:
20 LET A = C + D
DATA: holds a list of values which are assigned sequentially using the READ
command.

READ: reads a value from a DATA statement and assigns it to a variable. An internal
pointer keeps track of the last DATA element that was read and moves it one position
forward with each READ. Most dialects allow multiple variables as parameters,
reading several values in a single operation.

RESTORE: resets the internal pointer to the first DATA statement, allowing the
program to begin READing from the first value. Many dialects allow an optional line
number or ordinal value to allow the pointer to be reset to a selected location.

DIM: Sets up an array.
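
To tie these data-manipulation keywords together, here is a minimal sketch (assuming a
QBasic-style dialect; the data values, array size and variable names are invented for the
illustration):

10 DIM SCORES(3)                          ' set up an array with upper bound 3
20 DATA 65, 72, 88
30 READ SCORES(1), SCORES(2), SCORES(3)   ' assign the DATA values in sequence
40 RESTORE                                ' reset the DATA pointer to the first value
50 READ FIRSTSCORE
60 PRINT "First score ="; FIRSTSCORE
70 END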

Program Flow Control

IF … THEN … {ELSE}: used to perform comparisons or make decisions. Early
dialects only allowed a line number after the THEN, but later versions allowed any
valid statement to follow. ELSE was not widely supported, especially in earlier
versions.

FOR … TO … {STEP} … NEXT: repeat a section of code a given number of times.
A variable that acts as a counter, the "index", is available within the loop.

WHILE … WEND and REPEAT … UNTIL: repeat a section of code while the
specified condition is true. The condition may be evaluated before each iteration of the
loop, or after. Both of these commands are found mostly in later dialects.

DO … LOOP {WHILE} or {UNTIL}: repeat a section of code indefinitely or
while/until the specified condition is true. The condition may be evaluated before each
iteration of the loop, or after. Similar to WHILE, these keywords are mostly found in
later dialects.

GOTO: jumps to a numbered or labelled line in the program. Most dialects also
allowed the form GO TO.

GOSUB … RETURN: jumps to a numbered or labelled line, executes the code it finds
there until it reaches a RETURN command, at which point it jumps back to the statement
following the GOSUB, either after a colon or on the next line. This is used to
implement subroutines.

ON … GOTO/GOSUB: chooses where to jump based on the specified condition.
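
The following short sketch (a hedged illustration in the line-numbered style of this material;
the loop bound and threshold are arbitrary) combines a FOR … NEXT loop, an IF … THEN decision
and a GOSUB subroutine:

10 FOR I = 1 TO 5
20   IF I > 3 THEN GOSUB 100
30 NEXT I
40 END
100 PRINT "Counter is now"; I
110 RETURN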

Input and output

LIST: displays the full source code of the current program.

PRINT: The PRINT statement tells computer to display the output of the executed
program on the screen.
Example
40 PRINT SUM
INPUT: This is used to assign or give value to variables while the program is running.
It can be used for both numeric and string variables. The statement may include a
prompt message. Example
30 INPUT A, B, C
TAB: used with PRINT to set the position where the next character will be shown on
the screen or printed on paper. AT is an alternative form.

SPC: prints out a number of space characters. Similar in concept to TAB but moves by
a number of additional spaces from the current column rather than moving to a specified
column.
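
A brief hedged sketch of these input/output statements (QBasic-style; the prompt text and
column position are arbitrary choices for the example):

10 INPUT "Enter your name: ", N$
20 PRINT TAB(10); "Hello, "; N$
30 END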

Mathematical Functions

ABS: Absolute value
ATN: Arctangent (result in radians)
COS: Cosine (argument in radians)
EXP: Exponential function
INT: Integer part (typically floor function)
LOG: Natural logarithm
RND: Random number generation
SIN: Sine (argument in radians)
SQR: Square root
TAN: Tangent (argument in radians)
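
For instance (a minimal sketch; the argument values are chosen only to illustrate, and the
exact behaviour of RND varies slightly between dialects):

10 PRINT SQR(16)       ' square root: 4
20 PRINT INT(3.7)      ' integer part: 3
30 PRINT ABS(-5)       ' absolute value: 5
40 PRINT RND           ' a pseudo-random number between 0 and 1
50 END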

Miscellaneous

REM: It is a remark statement used to insert remark in a program. Such remarks are
used to explain what the program is all about.
Example:
10 REM This program finds the sum of 8 numbers
END statement: This is an instruction to terminate a program. Once a computer
encounters the END, it automatically terminates the program. Example
70 END

USR: transfers program control to a machine language subroutine, usually entered as
an alphanumeric string or in a list of DATA statements.
CALL: alternative form of USR found in some dialects. Does not require an artificial
parameter to complete the function-like syntax of USR, and has a clearly defined
method of calling different routines in memory.
TRON / TROFF: turns on display of each line number as it is run ("TRace ON"). This
was useful for debugging or correcting problems in a program. TROFF turns it back
off again.
RUN statement: It is used to execute a program. In Q-BASIC, F5 is used to RUN a
program. Keep in mind that the program will not RUN if any error is detected in it.

Commonly Available BASIC Interpreter Environments

• Microsoft QUICK BASIC
• Turbo Basic

Microsoft QBASIC Interpreter

Below is the Quick BASIC Interpreter window environment:

Figure 8.4: Microsoft QBASIC environment

The Quick BASIC program is a DOS program usually shipped with the MS-DOS operating
system. Thus, the program is readily available on any computer system running a fully
installed MS-DOS (Version 5, for example). As shown in the above figure, the interpreter is a
menu-driven environment where one can type BASIC code and run the program immediately.
With this version of the BASIC coding environment, you cannot compile your program into an
executable form as you can in the Turbo Basic environment shown below.

Turbo BASIC Interpreter

The Turbo Basic Interpreter or Compiler environment is as seen in the following figure 8.5

Figure 8.5: Turbo BASIC interpreter environment

As can be observed from the above figure, the environment is also menu-driven, but with an
available option to compile the source program into an EXEcutable file. This is shown in the
next figure below:

Figure 8.6: Executable environment

The editing window changes to the window where BASIC source code can be typed.

Let us see some very common statements:

• REM Statement
• Assignment or LET Statement
• INPUT Statement
• PRINT Statement
• GOTO Statement
• IF... THEN... ELSE Statement
• END Statement

Before discussing the above BASIC statements, it is good to state that a BASIC statement has
the following general form:

Line Number (optional) statement

In some versions of BASIC, like GW-BASIC, a line number is compulsory in every statement, but
in MS-QuickBASIC or Turbo Basic, the line number is optional. Line numbers are necessary when
your program contains GOTO statement(s) or GOSUB statement(s). We can now look at the BASIC
statements listed above.

REM Statement

The REM or REMark statement is simply a comment statement that provides information about
the program or any of its segments to the programmer. The REM statement provides no
information to the computer itself and hence the statement is never executed during the
execution of the program. For example, a REM statement may explain what a variable name
represents or what program segment (or module) does. Example of a REM statement is as
follows:

10 REM This program finds the average of two numbers

The above code has a REM statement describing what the program is about. A REM statement may
be written without any comment, just to create an empty line within the code for good
readability. Thus, REM statements may look like the following:

20 REM
30 REM

Assignment or LET Statement

The assignment or what is also called the LET statement is simply used to assign values to
variables. The general format is as follows:

line # (optional) LET variable = expression

or

line # (optional) variable = expression

where line # represents LINE NUMBER

In BASIC, the equality (=) sign is the assignment symbol. In the above general format, the
expression on the right-hand side can be any of the following:

• A constant
• An Arithmetic formula
• A variable

Regarding the above general format, some versions of BASIC (such as MS-Quick BASIC and
Turbo Basic) do not require the reserved word LET in an assignment statement. Hence it is
optional. Examples of assignment statements are as follows:

10 X = 25 : B$ = "NOUN"
20 Y=X^2 + 3*X-4
30 N$ = B$
40 T=Y

INPUT Statement

When a computer program is designed to run many times, each time working with different
input data, an INPUT statement is necessary in the code. In using the INPUT statement, a
single statement can supply several variables in place of multiple INPUT statements.

Examples of INPUT statements are as follows:

10 INPUT X1: INPUT A$
20 INPUT X2, X3, B4
30 INPUT "SCORE =", EX

In line 30, a string constant is used in the statement to help the program user see
information on the screen about the value to be entered for the variable "EX". That is, when
line 30 is executed, you will see something like the following on the screen:

SCORE =?

The question mark (?) simply indicates that the program is requesting for an input from the
keyboard.

PRINT Statement

Very central to the output phase of any BASIC program is the PRINT statement. This is used
to display the results of computer processing. To send an output to a printer, you use LPRINT
instead of the PRINT statement.

The general format is as follows:

Line # (optional) PRINT expression.

The expression in the above statement can take the form of the following:

• Variables
• Arithmetic expressions
• Literals
• Combination of the above three types.

By a literal, we mean an expression that consists of alphabetic, numeric or special characters.

Examples of PRINT statements are as follows:

10 PRINT X : PRINT
20 PRINT A, B
30 ? C; D
40 PRINT "VALUE=", V
50 PRINT 3*A - B
60 PRINT T$
70 PRINT "THE SOLUTION IS"

80 LPRINT W

In line 10, the second PRINT statement prints an empty line. This is necessary to provide a
good space between output lines. In line 30, the question mark (?) is used in place of the
reserved word PRINT. In QuickBasic, the symbol is automatically changed to the word PRINT
when one moves out of the line to the next. When line 80 is executed, the value of the numeric
variable W is sent to the printer.

Observe that a semicolon (;) is used in one PRINT statement and a comma (,) in another.
These two characters give different formats to the output of a PRINT statement. Assuming that
the values of A, B, C and D are A = 2, B = 1, C = 23 and D = 0, the values of C and D are
printed immediately one after the other, while the values of A and B are printed with spaces
between them, because the comma moves the output position to the next print zone.

Generally, BASIC divides the output screen into what are called PRINT ZONES. There are five
(5) of them: the first starting at column 1, the second at 15, the third at 29, the fourth at
43, and the fifth at 57. Using the comma in a PRINT statement spaces the printed values out
into these zones, while the semicolon keeps them close together. However, a more flexible way
to format output is to use the TAB function.
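
As a quick hedged illustration of these formatting effects (QBasic-style; the values are
arbitrary):

10 A = 2 : B = 1 : C = 23 : D = 0
20 PRINT A, B          ' comma: B starts at the next print zone (column 15)
30 PRINT C; D          ' semicolon: values printed close together
40 PRINT TAB(20); "X"  ' TAB: X is printed starting at column 20
50 END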

GOTO Statement

Very often, when an IF ... THEN statement is used, one condition causes the computer to
execute the next line of BASIC code, while another condition requires it to execute some code
somewhere else in the program. Therefore, in order to move or branch to code that does not
follow the IF ... THEN statement, an unconditional branch statement is needed, which is the
GOTO statement. This means that every time the computer encounters a GOTO statement, it
branches to the specified program line irrespective of any condition in the program. Example
of a GOTO statement:

10 CLS : INPUT A
20 IF A < 3 THEN GOTO 40
30 ? : PRINT
40 END

IF ... THEN... ELSE Statement

This statement works just like the IF ... THEN statement used in the BASIC code above, except
that the ELSE part is executed if the condition is not satisfied.

An example is as follows:

10 CLS
20 INPUT X
30 IF X > 0 THEN PRINT "POSITIVE" ELSE ? X
40 END

In the above code, the CLS statement simply clears the screen.

END Statement

Every program has a terminal point. In BASIC, the END statement signifies the end of the
program, as used in the two BASIC codes immediately above. Since line numbers are optional
in some BASIC versions, an END statement can appear anywhere the program logically ends.
Some BASIC versions, such as QuickBASIC, do not even require an END statement.
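
Putting the statements above together, here is a complete hedged sketch (in the QBasic style
of this material; the prompts and variable names are illustrative choices) of the "average of
two numbers" program mentioned in the REM example earlier:

10 REM This program finds the average of two numbers
20 CLS
30 INPUT "Enter first number: ", A
40 INPUT "Enter second number: ", B
50 LET AVG = (A + B) / 2
60 PRINT "The average is"; AVG
70 END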

Practice Exercises

1. Identify the following variables as acceptable or unacceptable, giving your reason(s) if
unacceptable in BASIC.

i. ABA
ii. $w
iii. KT2
iv. 8BIG
v. W.3

2. Identify the type of each expression below:

10 X = 25 : B$ = "NOUN"
20 Y = X^2 + 3*X-4
30 R$ = A$
40 M=P

Further Readings

Borland International Inc., Turbo Basic Version 1.1, 1987.

Brightman, R.W. and Dimsdale, J.M., Using Computers in an Information Age, Delmar
Publishers Inc., 1986.

Mandell, S.L., Computers and Data Processing, West Publishing Company, 1985.

Microsoft Corporation, MS-QUICKBASIC 1.1, 1992.

Reference

ACETEL. (2020). CIT 215: Introduction to Programming Language. National Open University.

https://www.edrawsoft.com/article/what-is-system-flowchart.html

https://www.lucidchart.com/pages/data-flow-diagram

https://en.wikipedia.org/wiki/Data-flow_diagram


https://www.codecademy.com/article/pseudocode-and-flowcharts

https://www.zenflowchart.com/flowchart-symbols

https://en.wikipedia.org/wiki/BASIC

https://classnotes.ng/lesson/introductionto-basic-programming-ss2/

LECTURE 9

ROLE OF COMPUTER AND INFORMATION PROCESSING IN THE SOCIETY

Description

Everyone knows that this is the age of the computer, and the vast majority of people use
computers. The development of science and technology has a direct effect on our daily life as
well as on our social life. Computer technology has made it possible to communicate from one
part of the world to another in seconds; people can follow transactions in one part of the
world while staying in another. Computer development is one of the greatest scientific
achievements of the 20th century. Since their invention, computers have evolved in terms of
increased computing power and decreased size. Owing to the widespread use of computers in
every sphere, life in today's world would be unimaginable without them. They have made human
lives better and happier. Computers are used in various fields, as well as in teaching and
learning. Some of the major impacts of computers on society are listed below.

Objective

Students will learn:

• How computers have impacted different fields, both positively and negatively

Introduction

Computers play a vital role in our society; they are central tools in modern societies.
Nowadays, computers are used in almost every field of society: education, medicine,
entertainment, industry, banking, homes, science and technology, transportation,
communication, and business. The terms 'computer' and 'society' have become complementary to
each other: if computers were removed from society, many of its systems would stop working,
and computers now influence virtually all human activities.

During the past few decades, computers and electronic technologies have been incorporated
into almost every aspect of society. They now play a role in how we learn, how we take care
of our money, and how we are entertained. Today, there is probably no better indication of how
advanced a society is than how computerized it is. In our society, computers are now a
fundamental component of our jobs, our schools, our stores, our means of transportation, and
our health care. Our complex systems of banking and investment could not operate without
computers. Essentially, all of our medical and scientific facilities now depend entirely upon
incredibly complex computer-based systems. Almost all of our businesses now use the
computer to maintain information about customers and products. Our schools use computers
to teach and to maintain student records. Computers are now commonly used in medicine for

diagnosis and treatment. In fact, every day it gets harder to find any type of business,
educational institution, or government office that does not use computers in some way.

A variety of new types of specialized hardware and software tools have made the computer
valuable for everything from the most repetitive tasks, such as scanning items in a supermarket,
to incredibly detailed and complex tasks, such as designing spacecraft. Because computers can
store accurate information, they are used to help people make better decisions. Because
computers can continue to operate day or night, 24 hours a day, they are now used to provide
a level of services to humans that were unknown before their invention. Some of the major
computer application fields are listed below:

Business

Business was one of the first areas to incorporate the computer. Because of their powerful
capability to store and retrieve vast amounts of information, computers are now a vital
component of almost every type of business. They are used to record sales, maintain
information about inventories, maintain payroll records, and generate pay checks. Business
workers now use computers to keep track of meetings, write letters and memos, create charts
and presentation graphics, create newsletters, and examine trends.

All of us have by now experienced how the point-of-sale (POS) product scanning systems in
stores have speeded up the check-out process and made it more accurate by eliminating the
need for checkers to punch in the price for each individual item. These point-of-sale systems
not only make it more convenient for shoppers, but they also provide an accurate inventory of
product availability for the store's management.

A computer offers high speed of calculation, diligence, accuracy, reliability and versatility,
which has made it an integral part of all business organisations. Other uses of the computer
in business are:

• Payroll calculations
• Budgeting
• Sales analysis
• Financial forecasting
• Managing employee databases
• Maintenance of stocks etc.

Banking and finance

Computers have become an indispensable tool in the handling of money and finances.
Computerized ATM machines and credit card machines are now familiar throughout the world.
Although they have only been in existence for a short while, many of us now take them for
granted and expect our bank to provide these computerized services whenever and wherever
we need them. Many do not realize that these machines are part of the huge electronic network
that has been put in place in the banking and financial services industries. The ATM machines
and the credit card machines provide our interface with the bank's computers.

Manufacturing Industries
Computers have made their way into jobs that are unpleasant or too dangerous for
humans to do, such as working hundreds of feet below the earth or opening a package that
might contain an explosive device. In other industries, computers are used to control the
production of resources very precisely. All robots and machinery are now controlled by various
computers, making the production process faster and cheaper. All the stages of manufacturing,
from designing to production, can be done with the use of computer technology with greater
diversity.

Engineering Design

Computers are widely used for engineering purposes. Computer-aided engineering (CAE)
programs simulate the effects of conditions such as wind, temperature, weight, and stress on
product designs and materials. Examples include the use of computers to test stresses on
bridges or on airplane wings before the products are built. Another major area is CAD
(computer-aided design), which supports the creation and modification of design drawings and
images.

Health Care and Medicine

Computers are now so widely used in medicine they are changing the very structure of our
society's health care system. They are used extensively for basic tasks such as keeping track of
patient appointments and they are used widely for diagnostic and treatment procedures.
Diagnosis of illness can be aided through the use of databases that contain information on
diseases and symptoms, and laboratory tests on blood and tissue chemistry have become
dependent on computer analysis. In addition, computer-based technologies such as computed
tomography (CAT) scans and magnetic resonance imaging (MRI), which allow the physician to
see the organs of the body in three dimensions, can provide direct evidence of disease.
Computers have also become an important part of hospitals, laboratories, and dispensaries.
They are used in hospitals to keep records of patients and medicines, and in scanning and
diagnosing different diseases.

Telephones

Computerized telephone exchanges handle an ever-increasing volume of calls very efficiently.

Law Enforcement

Recent innovations in computerized law enforcement include:

• Fingerprint files for crime detection
• Databases on the modes of operation of serial offenders
• Computer modelling of DNA, which can be used to match traces from an alleged
criminal's body, such as blood found at a crime scene
• Databases containing the names, pictures and other information of people who break
the law
• Computer forensics: a branch of digital forensic science pertaining to legal
evidence found in computers and digital storage media

Military

Computers are widely used in defence; modern tanks, missiles and other weapons depend on
them, and the military also employs computerised control systems. Some military areas where
computers are used are:

• Missile control: systems that maintain attitude stability and correct deflections
• Military communication: conveying messages and ideas during operations
• Military operation and planning
• Smart weapons (smart munitions, smart bombs): guided munitions intended to hit a
specific target precisely and to minimize collateral damage

Marketing

In marketing, the uses of computers are as follows:

• Advertising: with computers, advertising professionals create art and graphics, write
and revise copy, and print and disseminate ads with the goal of selling more products.
• Online shopping: online shopping has been made possible through the use of
computerised catalogues that provide access to product information and permit
customers to enter orders directly.

Education

Today, computers can be found in every school. From kindergarten to graduate school, the
computer is being used for learning, for record keeping, and for research. A variety
of computer-assisted instruction (CAI) programs are now being used to facilitate the learning
of nearly every educational topic. Multimedia-based learning systems can deliver information
to students in the form of sound and video in addition to text and pictures. Using these new
tools, students can gain control over their own learning as the computer delivers the instruction
at the student's desired pace, monitors their progress, and provides instantaneous feedback.
And, because computers can now take over some of the instruction that used to take place in
the classroom, teachers are free to work with students who need more concentrated attention.

The progress of any country is often measured on the basis of its advancement in Information
Technology (IT). Nevertheless, there exist positive as well as negative impacts of computers
on society, as there is hardly any field without a computer system. Computers affect both our
daily life and our social life, yet the overall development of a nation is faster with the
application of computers in industry and education. Both the positive and negative impacts of
computers are given below:

Positive Impacts

Employment

Computer Science has opened many opportunities for present and future employment.
Nowadays, the world is computerized and computers have become essential requirements for
human life. Because of the development of computers, people get opportunities related to the
development of computer software and hardware. So, computers play a vital role in creating
new employment opportunities in our society.

Education

Computers have made a direct impact on education. Computers are involved in teaching and
learning processes. They can instruct each student differently, animate important concepts, and
use interactivity to involve students in the learning process. CAI (Computer Aided
Instruction) refers to interactive instructional strategies that use computers to convey and
teach instructional material to students. Similarly, CAL (Computer Assisted Learning) is used
to convey a vast amount of information in a very short period of time. A computer's voice
recognition capability and its connection to the Internet make it possible for students to
participate in learning activities.

Communication

One of the most exciting and fastest-growing applications of computer technology is
communication. The Internet forms a vast web around the world through which computers can
exchange information at high speed. People sitting in one corner of a country can easily
communicate with people around the world. Workers use a computer terminal or microcomputer
with telecommunication capability to access their company's computer networks and databases.

Commerce

Electronic commerce is the online activity of business. It is a global phenomenon of
marketing in which goods and services are offered through the Internet. Business
organizations arrange questionnaires and collect customers' views about products.
Advertisement, employment communication, order taking, and supply are all done through the
computer.

Health care

Computers are used in the medical field to perform a wide variety of tasks, from diagnosing
illnesses and monitoring patients to controlling the movements of robotic surgical
assistants. There are many expert systems to guide physicians and surgeons, and computerized
systems automate billing and other administrative processes.

Entertainment

Computers have now become an important part of the entertainment industry. They are widely
used to create special effects in movies. They are used in editing movies and multimedia

presentations. Most films and videos today are created with the help of computers, so there
is little chance of entertainment without the use of computers in our society. Similarly,
Virtual Reality (VR) and artificial environments enhance the experience of playing games and
watching 3D movies.

Science and technology

Computers have helped to drive scientific and technological advancement. They are also used
in the design and operation of man-made satellites, space exploration vehicles, rockets, and
weather forecasting.

Multimedia

A multimedia presentation delivers information to a target group using a variety of media
such as text, voice, animation, sound and pictures. Multimedia content makes concepts much
clearer for viewers to grasp. Multimedia technology is used in our society, especially for
teaching-learning processes and advertisements.

Besides the above-mentioned points, the positive impacts of computers can be listed as:

✓ Work can be done in a very short time.
✓ More information can be stored in a small space.
✓ Multitasking and multiprocessing capabilities are available.
✓ It is easy to access data.
✓ Impartiality.
✓ The documents can be kept secret.
✓ It is helpful to produce an error-free result.
✓ It can be used for various purposes.

Negative Impacts

Reduction of employment

When computers are installed in organizations, only a few expert employees are needed and
the rest may be laid off, which creates a problem of unemployment. Because of the lack of
computer education in our society, many people without computer skills are becoming jobless;
as society becomes more computerized, fewer workers are required. This is a negative impact
of computers.

Computer Crime

Computer crime includes theft of computers and computer software, theft of data, and
destruction of data by computer viruses. Many computer crimes involved the theft of money.
Computer-related crimes are numerous and widespread. If data is lost or damaged, we may not
be able to retrieve it in the desired time, which creates adverse effects on business.

Privacy

The growth of computerized record-keeping brings dangers to privacy. Computers contain
important and valuable data, and they allow this data to be transmitted, copied, and combined
in ways that were never possible with earlier manual systems. So, important data and
information may be changed illegally by computer users, creating a privacy problem.

Creativity

Nowadays, people are not using their minds for common arithmetic which gradually results in
loss of their numerical ability. On the other hand, people are busy playing computer games
instead of outdoor games. This leads to a decrease in physical and creative activities.

Besides these points, the negative impacts of computers can be listed as:

- Highly expensive.
- Sometimes, huge amounts of data and information can be lost.
- Fast-changing computer technology.
- Computer illiteracy.

As mentioned above, computers have both positive and negative impacts on our society. But
the use of computers is increasing day by day because its positive impacts overshadow the
negative impacts.

Reference

https://www.werecoverdata.com/blog/various-roles-computers-different-sectors-society/

https://tech.aakarpost.com/2007/11/impact-of-computer-on-society.html

https://computenepal.com/what-are-the-impact-of-computer-in-society-computer-fundamentals/

Uses of computers in various fields | Essay on uses of computers,
www.byte-notes.com/uses-computers-various-fields, 2013.

