

INTRODUCTION TO
COMPUTING
COMP 20013

Compiled by:

Marian G. Arada
Monina D. Barretto
Melvin C. Roxas

Table of Contents

Message to the Student

Course Syllabus

UNIT I     OVERVIEW OF INFORMATION AND COMMUNICATIONS TECHNOLOGY (ICT)

UNIT II    DATA REPRESENTATION

UNIT III   HARDWARE

UNIT IV    PEOPLEWARE

UNIT V     SOFTWARE

UNIT VI    NETWORK, INTERNET, AND INTERNET PROTOCOLS

UNIT VII   TRENDS AND ISSUES IN INFORMATION AND COMMUNICATIONS TECHNOLOGY (ICT)

UNIT VIII  SPECIAL INTEREST TOPICS IN INFORMATION AND COMMUNICATION TECHNOLOGY (ICT)


Message to the Student

This instructional material presents all topics covered in the course syllabus. It is
presented in a concise, simple manner intended to guide you through the different topics of the
course. Please read the material thoroughly for a better understanding of the lessons. You are
encouraged to read the additional learning materials available to you. There are suggested
references at the end of each topic.

All course materials and/or activities that require internet access are optional. You may
work on the internet-based activities only if you have access to the internet.

The assessments/activities at the end of each module must be answered. They are
intended to gauge your understanding of what you have learned from the lessons. Your professor
will get in touch with you at the start of the semester regarding the submission of answers to
assessments/activities and will give further instructions on how distance learning will be
implemented.

Thank you, God bless, and keep safe.


Course Syllabus

POLYTECHNIC UNIVERSITY OF THE PHILIPPINES


College of Computer and Information Sciences

COURSE TITLE Introduction to Computing

COURSE CODE COMP 20013

CREDIT UNITS 3 Units / 5 Hours (2 Units Lecture and 1 Unit Lab)

COURSE PREREQUISITE None

COURSE DESCRIPTION This course is designed to provide students with a breadth-first
overview of computing fundamentals. The material covered in this course includes concepts
such as components of the computer system, number system operation and conversion, data
representation, digital logic systems, levels of programming, computer networks, computer
applications, and current trends and issues.

The following matrix relates Institutional Learning Outcomes, Program Outcomes, and Course
Outcomes:

1. Creative and Critical Thinking
Institutional Learning Outcome: Graduates use their imaginative as well as their rational
thinking abilities in life situations in order to push boundaries, realize possibilities, and deepen
their interdisciplinary and general understanding of the world.
Program Outcomes:
- Apply knowledge of computing fundamentals, knowledge of a computing specialization, and
mathematics, science, and domain knowledge appropriate for the computing specialization to
the abstraction and conceptualization of computing models from defined problems and
requirements.
- Identify, analyze, formulate, research literature, and solve complex computing problems and
requirements, reaching substantiated conclusions using fundamental principles of
mathematics, computing sciences, and relevant domain disciplines.
- Knowledge and understanding of information security issues in relation to the design,
development and use of information systems.
Course Outcomes:
- Explain fundamental principles, concepts, and the evolution of computing systems as they
relate to different fields.
- Identify and define the components of the computer system.
- Compare and understand the different number systems, such as the binary, decimal, and
hexadecimal number systems.
- Perform number conversion, fixed point and floating point number representation.


2. Effective Communication
Institutional Learning Outcome: Graduates are proficient in the four macro skills in
communication (reading, writing, listening, and speaking) and are able to use these skills in
solving problems, making decisions, and articulating thoughts when engaging with people in
various circumstances.
Program Outcome: Communicate effectively with the computing community and with society
at large about complex computing activities by being able to comprehend and write effective
reports, design documentation, make effective presentations, and give and understand clear
instructions.
Course Outcome: Understand the concepts of data communication, network components,
protocols, and internet issues.

3. Strong Service Orientation
Institutional Learning Outcome: Graduates exemplify the potentialities of an efficient,
well-rounded and responsible professional deeply committed to service excellence.
Program Outcome: Design and evaluate solutions for complex computing problems, and
design and evaluate systems, components, or processes that meet specified needs with
appropriate consideration for public health and safety, and cultural, societal, and
environmental considerations.
Course Outcome: Analyze solutions employed by organizations to address different
computing issues.

4. Community Engagement
Institutional Learning Outcome: Graduates take an active role in the promotion and fulfillment
of various advocacies (educational, social and environmental) for the advancement of
community welfare.
Program Outcome: Create, select, adapt and apply appropriate techniques, resources and
modern computing tools to complex computing activities, with an understanding of their
limitations, to accomplish a common goal.
Course Outcome: Evaluate tools and techniques for purposes of identifying best practices in
computing development.

5. Adeptness in the Responsible Use of Technology
Institutional Learning Outcome: Graduates demonstrate optimized use of digital learning
abilities, including technical and numerical skills.
Program Outcome: An ability to apply mathematical foundations, algorithmic principles and
computer science theory in the modeling and design of computer-based systems in a way
that demonstrates comprehension of the tradeoffs involved in design choices.
Course Outcomes: Expound on the recent developments in the different computing
knowledge areas. Understand the basics of digital logic systems. Identify the different levels
of programming.

6. Passion for Lifelong Learning
Institutional Learning Outcome: Graduates are enabled to perform and function in society by
taking responsibility in their quest to know more about the world through lifelong learning.
Program Outcome: Recognize the need, and have the ability, to engage in independent
learning for continual development as a computing professional.

7. High Level of Leadership and Organizational Skills
Institutional Learning Outcome: Graduates are developed to become the best professionals
in their respective disciplines by manifesting the appropriate skills and leadership qualities.
Program Outcome: Function effectively as an individual and as a member or leader in
diverse teams and in multidisciplinary settings.


8. Sense of Personal and Professional Ethics
Institutional Learning Outcome: Graduates show desirable attitudes and behavior in both
their personal and professional circumstances.
Program Outcome: An ability to recognize the legal, social, ethical and professional issues
involved in the utilization of computer technology and be guided by the adoption of
appropriate, ethical and legal practices.
Course Outcomes: Be aware of the important social issues and the impact computing is
having on society. Demonstrate awareness of the important social and ethical issues and
computing technologies' impact on such concerns.

9. Sense of National and Global Responsiveness
Institutional Learning Outcome: Graduates' deep sense of nationalism complements the
need to live in a global village where one's culture and other people's cultures are respected.
Program Outcome: Function effectively as an individual and as a member or leader in
diverse teams and in multidisciplinary settings.

Course Plan
Each week below lists the topic, learning outcomes, methodology, resources, and assessment*.

Week 1
Topics: 1. Introduction to the Course: a. Vision, Mission, Goals and Objectives of the
University and College; b. Self-Introduction; c. Course Overview; d. Grading System;
e. Classroom Management
Learning Outcomes: a. Demonstrate an understanding of what the subject is all about, what
will be in scope for the semester, and what students are expected to learn; b. Communicate
with fellow students and the teacher and begin to establish rapport; c. Identify and explain the
course assessment and validation criteria, including the grading system, to understand how
to pass the subject; d. Explain the do's and don'ts while the class is ongoing
Methodology: Orientation; Self-introduction (online)
Resources: University Student Handbook; College Manual; Course Syllabus; Online application
Assessment*: Quick recitation to get students' thoughts and questions using an online
application

Week 2
Topics: Unit I: Overview of Information and Communications Technology: 1. Introduction to
Computers; 2. Elements of a Computer System; 3. Classification of Computers;
4. Capabilities and Limitations of Computers; 5. History of Computing
Learning Outcomes: a. Categorize computers; b. Contrast elements of a computer system;
c. Identify various events/improvements in the computing world; d. Qualify the understanding
of computer usage
Methodology: Lecture; Video presentation; Interactive learning
Resources: PowerPoint material; Reference books
Assessment*: Short quiz

Weeks 3 to 8
Topics: Unit II: Data Representation: A. Number System Operation and Conversion; B. Data
Representation: 1. Numeric (a. Unsigned; b. Signed; c. Fixed; d. Floating); 2. Non-numeric
Learning Outcomes: a. Distinguish the various number systems and data representations;
b. Perform number system operations such as addition, subtraction, and complements;
c. Explain the workflow of the processor/memory/input and output devices and architecture;
d. Perform number conversion, fixed point and floating point number representation
Methodology: Lecture; Video presentation; Exercises
Resources: PowerPoint material; Reference books
Assessment*: Short quiz; Seat work; Problem solving

Week 9
MIDTERM EXAMINATION
Assessment*: Midterm Examination

Week 10
Topics: Unit III: Hardware: 1. Digital Logic System; 2. Processor; 3. Memory; 4. Input and
Output Devices
Learning Outcomes: a. Explain the functions of the main units of a physical computer system;
b. Describe what input and output devices are; c. Differentiate the primary and secondary
storage devices; d. Give examples of I/O devices; e. Define what Boolean algebra is;
f. Identify the different logic gates; g. Illustrate the representation of the different logic gates;
h. Convert a Boolean algebra expression into a logic circuit; i. Create truth tables for the
corresponding logic circuits and Boolean expressions; j. Explain basic theorems and
postulates on digital logic systems
Methodology: Lecture; Exercises
Resources: PowerPoint material; Reference books
Assessment*: Short quiz; Exercises

Week 11
Topics: Unit IV: Peopleware: 1. Roles and Job Titles; 2. Code of Ethics for ICT Professionals
Learning Outcomes: a. Contrast roles and jobs in the ICT profession; b. Discuss a typical day
of an ICT professional
Methodology: Lecture; Video presentation
Resources: PowerPoint material; Reference books
Assessment*: Short quiz

Week 12
Topics: Unit V: Software: 1. Assembly and Machine Language; 2. Compilers and Translators;
3. Programming Languages; 4. Operating Systems; 5. Application Software
Learning Outcomes: a. Differentiate an application software from a system software;
b. Explain the types of system software; c. Explain the different functions of an operating
system; d. Discuss the different means by which application software are made available
Methodology: Lecture; Video presentation
Resources: PowerPoint material; Reference books
Assessment*: Short quiz

Week 13
Topics: Unit VI: Computer Networks: 1. Concepts in Computer Networking; 2. Network
Services; 3. Internet and the World Wide Web
Learning Outcomes: a. Discuss network models; b. Explain network topologies; c. Explain
internet concepts
Methodology: Student lecture/demonstration; Video presentation; Interactive learning;
Simulation; Interactive lecturing
Resources: PowerPoint material; Reference books
Assessment*: Short quiz

Week 14
Topics: Unit VII: Computer Trends and Issues in ICT: 1. Current Trends; 2. Social and Legal
Issues
Learning Outcomes: a. Explain the current ICT trends and social issues; b. Initiate discipline
and relate knowledge of ICT trends and issues to study works
Methodology: Student lecture/demonstration; Video presentation; Debate discussion;
Interactive lecturing; Assigned reading
Resources: PowerPoint material; Reference books
Assessment*: Short quiz

Weeks 15 to 16
Topics: Unit VIII: Special Interest Topics in ICT: 1. Artificial Intelligence; 2. Data Science;
3. Social Networking and Society
Learning Outcomes: a. Explain the difference between AI, machine learning, and deep
learning; b. Provide applications of AI in different industries and in daily use; c. Identify
important milestones in the history of AI; d. Explain supervised and unsupervised learning
and other concepts related to AI; e. Explain what the field of data science is; f. Identify the
skills/expertise needed to be a data scientist; g. Discuss what big data is and how it relates to
data science; h. Discuss where specific popular media sites are commonly used; i. Analyze
the benefits of social media to society; j. Discuss the disadvantages of social media
Methodology: Student lecture/demonstration; Video presentation; Debate discussion;
Interactive lecturing; Assigned reading
Resources: PowerPoint material; Reference books
Assessment*: Short quiz

Week 17
FINAL EXAMINATION
Assessment*: Final Examination

Week 18
Round-up Activities
*Activities under methodology / assessment will all be done online (i.e. distance learning)
Suggested Readings and References
REFERENCES
1. Burd, Stephen D. Systems Architecture. 5th Edition, 2006.
2. Cashman, Shelley. Discovering Computers. Course Technology, 2006.
3. Norton, Peter. Introduction to Computers. 6th Edition, 2006.
4. Albano, Gisela May; Atole, Ronnel; Ariola, Rose Joy. Introduction to Information Technology. 2003.
5. Parsons, June Jamrich; Oja, Dan. Computer Concepts. 5th Edition, 2003.
6. Stallings, William. Computer Organization and Architecture. 6th Edition, 2003.
7. Long, Larry. Computers: Information Technology in Perspective. 2002.
8. Schneider, G. Michael; Gersting, Judith. An Invitation to Computer Science. 2000.
9. Sawyer, S. Using Information Technology: A Practical Introduction to Computers and
Communication: Intro Version. 2000.
10. Farrel, Joyce. Technology Now. 2018.

Note: Additional readings and references may be given by the professor.


Course Grading System*


COURSE ASSESSMENT & EVALUATION CRITERIA (GRADING & REQUIREMENTS)

- Assignments / Quizzes / Exercises
- Major Requirements: Midterm and Final Exam
GRADING SYSTEM:
FIRST GRADING = Class Standing (70%): Quizzes, Assignment, Exercises; Midterm
Examination (30%)
SECOND GRADING = Class Standing (70%): Quizzes, Assignment, Exercises; Final
Examination (30%)
FINAL GRADE = [(FIRST GRADING + SECOND GRADING) / 2]

*Some assessment criteria may not apply with a different teaching modality (i.e. online/distance
learning)

Classroom Policy

Aside from what is prescribed in the student handbook, the following are the professor’s
additional house rules:

1. The course is expected to have a minimum of four (4) quizzes. No makeup tests will be given.
2. Assignments and research projects/report works will be given throughout the semester. Such
requirements shall be due as announced in class. Late submission shall be penalized with grade
deductions (5% per day) or shall no longer be accepted, depending on the subject facilitator’s
discretion. Assignments and exercises are designed to assist you in understanding the materials
presented in class, and to prepare you for the exams.
3. Students are required to attend classes regularly, including possible make-up classes. The student
will be held liable for all topics covered and assignments made during his/her absence. The
university guidelines on attendance and tardiness will be implemented.
4. Any evidence of copying or cheating during any examinations may result in a failing grade from the
examination for all parties involved. Note that other university guidelines shall be used in dealing
with this matter.
5. Students are advised to keep graded work until the semester has ended.
6. Contents of the syllabus are subject to modification with notification.
7. Cell phones, radios or other listening devices are not allowed to be used inside lecture and
laboratory rooms to prevent any distractive interruption of the class activity. *
8. No foods, drinks, cigarettes nor children are allowed inside the lecture and laboratory rooms. *
9. Withdrawal and dropping from the subject should be done in accordance with existing university
policies and guidelines regarding the matter.
*May not apply with a different teaching modality (i.e. distance learning, non F2F mode)
Consultation Time

Prepared by:
Melvin C. Roxas, MSGITS
Monina D. Barretto, MBA
Marian G. Arada, MIT
Faculty Members from the Main Campus

Reviewed by:
Melvin C. Roxas, MSGITS
Department Chair

Recommending Approval:
Benilda Eleonor V. Comendado, PhD
Dean

Approved by:
Emanuel C. De Guzman, PhD
Vice President for Academic Affairs


UNIT I: OVERVIEW OF INFORMATION AND COMMUNICATIONS TECHNOLOGY

The word computer is derived from the word compute, which means to calculate. The
computer originally had the capacity to solve complex arithmetic and scientific problems at very
high speed. Nowadays, computers also perform many other tasks, such as accepting, sorting,
selecting, moving, and comparing various types of information. They also perform arithmetic and
logical operations on alphabetic, numeric, and other types of information. The information
provided by the user to the computer is data. The information in the form in which it is presented
to the computer is the input information or input data.

A computer is defined as a fast and accurate data processing system that accepts data,
performs various operations on the data, has the capability to store the data, and produces
results on the basis of detailed step-by-step instructions given to it.

LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Identify and define the components of a computer system.
2. Categorize computers; compare and understand the different types/classifications of
computers.
3. Identify various events/improvements in the computing world.
4. Qualify the understanding of computer usage.

COURSE MATERIALS

ELEMENTS OF COMPUTER SYSTEM

Hardware:
Hardware refers to the tangible component of a computer system. The hardware is the
machinery itself. It is made up of the physical parts or devices of the computer system like the
electronic Integrated Circuits (ICs), magnetic storage media and other mechanical devices like
input devices, output devices etc. Various hardware are linked together to form an effective
functional unit.
The hardware used in computers has evolved from the vacuum tubes of the first
generation to the Ultra Large Scale Integrated circuits of the present generation.

Software:
Software refers to the intangible component of a computer system. The computer
hardware itself is not capable of doing anything on its own. It has to be given explicit instructions
to perform the specific task. The computer program is the one which controls the processing
activities of the computer. The computer thus functions according to the instructions written in the
program. Software mainly consists of these computer programs, procedures and other


documentation used in the operation of a computer system. Software is a collection of programs


which utilize and enhance the capability of the hardware.

Peopleware
Peopleware is regarded as the most important element of the computer and
communication system. It is said that without this element, there would be no computer
hardware to use, no software systems to run computers, and no outputs to interpret as valid
sources of information. Thanks to the founding men and women behind the innovations in the
field of computing, the likes of Charles Babbage, Lady Ada Lovelace, Alan Turing, and others,
the world we live in today has made it a necessity for computers and their systems to be part of
our daily lives.

CLASSIFICATION OF COMPUTERS
Computer systems can be classified as follows:
1. According to size
2. According to type of data handled
3. According to purpose

I. Classification on the basis of size

1. Super computers

Supercomputers are the highest-performing systems. A supercomputer is a computer
with a high level of performance compared to a general-purpose computer. The actual
performance of a supercomputer is measured in FLOPS (floating point operations per second)
instead of MIPS.

Supercomputers play an important role in the field of computation and are used for
intensive computation tasks in various fields, including quantum mechanics, weather
forecasting, climate research, oil and gas exploration, molecular modeling, and physical
simulations. Throughout history, supercomputers have also been essential in the field of
cryptanalysis.

2. Mainframe computers

These are commonly called big iron. They are usually used by big organizations for bulk
data processing such as statistics, census data processing, and transaction processing, and
are widely used as servers, since these systems have a higher processing capability than the
other classes of computers. Most of these mainframe architectures were established in the
1960s; research and development continued over the years, and the mainframes of today are
far better than the earlier ones in size, capacity, and efficiency.

3. Mini computers
These computers came into the market in the mid-1960s and were sold at a much
cheaper price than the mainframes. They were actually designed for control, instrumentation,
human interaction, and communication switching, as distinct from calculation and record
keeping; later, as they evolved, they became very popular for personal use.

The term "minicomputer" was coined in the 1960s to describe the smaller computers
that became possible with the use of transistors and core memory technologies, minimal
instruction sets, and less expensive peripherals such as the ubiquitous Teletype Model 33 ASR.
They usually took up one or a few 19-inch rack cabinets, compared with the large mainframes
that could fill a room.

4. Micro computers
A microcomputer is a small, relatively inexpensive computer with a microprocessor
as its CPU. It includes a microprocessor, memory, and minimal I/O circuitry mounted on
a single printed circuit board. Their predecessors, mainframes and minicomputers, were
comparatively much larger, harder to maintain, and more expensive. Early microcomputers
formed the foundation for the present-day computers and smart gadgets that we use in daily
life.

II. Classification on the types of data handling

1. Analog computers

An analog computer is a form of computer that uses the continuously changeable
aspects of physical phenomena, such as electrical, mechanical, or hydraulic quantities, to
model the problem being solved. Anything that is variable with respect to time and continuous
can be claimed to be analog, just as an analog clock measures time by means of the distance
traveled by the hands of the clock around the circular dial.

2. Digital computers

A digital computer performs calculations and logical operations with quantities
represented as digits, usually in the binary number system of "0" and "1". It is capable of
solving problems by processing information expressed in discrete form. By manipulating
combinations of binary digits, it can perform mathematical calculations, organize and analyze
data, control industrial and other processes, and simulate dynamic systems such as global
weather patterns.

3. Hybrid computers

A hybrid computer processes both analog and digital data. It is a digital computer that
accepts analog signals, converts them to digital form, and processes them in digital form.

Classification of Computer According to Purpose


1. General Purpose Computer

General purpose computers are computers that are utilized for ordinary work. These
computers can do numerous sorts of work, yet all of those tasks are ordinary ones. For
example: writing a letter with a word processor, setting up a record, printing reports, or
making a database. The CPU capacity of these computers is likewise modest, so only
ordinary work can be done on them.
2. Special Purpose Computer
These computers are built for a particular task, and the CPU capabilities likewise
correspond to that particular function. If more than one CPU is required, then multiple
processors are installed in these computers. Apart from this, if the work requires particular
hardware or devices, those devices can be added to these computers as well.

CAPABILITIES AND LIMITATIONS OF COMPUTER

I. Capabilities of computer

A computer system is better than human beings in a way that it possesses the following
capabilities:

1. Speed
Speed is the amount of time taken by the computer in accomplishing a task or an
operation. The time taken by a computer to perform a particular task is far less than that taken
by a human being. Different computers are classified on the basis of their speed by
comparing their MIPS (Million Instructions Per Second).

2. Accuracy

Accuracy refers to the degree of correctness and exactness of operations performed


by a computer. In the absence of bad programming, computers do not commit errors and are
capable of handling complex instructions accurately. If the data fed into a computer is not
error free, it is likely to produce inaccurate results.

3. Reliability

Computer systems are not affected by human factors like fatigue, tiredness, or
boredom. Therefore, they are able to work repeatedly and efficiently. In case of any failure in
a computer system, there are provisions for immediate backup of information and programs.

4. Versatility

Computers are capable of performing tasks at all levels, simple or complex. Therefore,
they can be used in any area: science, technology, business, finance, accounting,
communications, and so on.

5. Storage:


Storage refers to the capacity of a computer to store data and programs. Storage is
done on storage media such as CDs, floppy disks, DVDs, RAM (Random Access Memory),
and ROM (Read Only Memory).

Limitations of a Computer

Although a computer is far better in performance than a human being, it fails in certain
ways as follows:

1. Dependent on User Input:

Computers cannot think, and they cannot do any job unless they are first programmed
with specific instructions for it. They work as per stored instructions. Algorithms are designed
by humans to make a computer perform a specific task; making computers mimic human
thinking is the subject of artificial intelligence.

2. Cannot Decide on Their Own:

Computers are incapable of decision making as they do not possess the essential
elements necessary to make a decision, i.e., knowledge, information, wisdom, intelligence,
and the ability to judge.

3. No Feeling

Lack of feeling is another limitation of the computer. A computer cannot feel the way we
do; it does not have emotions, feelings, or knowledge. On the other hand, it does not get tired
and keeps on doing its tasks, and it can do very risky work that human beings are not capable
of.

4. Computers can’t Implement:

Though computers are helpful in storing data and can even contain the contents of
encyclopedias, only humans can decide on and implement policies.

HISTORY OF COMPUTING

I. Earliest computing devices

YEAR | PLACE | MACHINE | INVENTOR | DESCRIPTION

Ancient times | China | Abacus | | A frame with beads strung on wires and rods; arithmetic
calculations are performed by manipulating the beads.

17th Century | Europe | Napier's Logs and Bones | John Napier | Simple device for
multiplying.

17th Century | Europe | Oughtred's Slide Rule | William Oughtred | Consists of two movable
rulers placed side by side. Each ruler is marked off in such a way that the actual distances
from the beginning of the ruler are proportional to the logarithms of the numbers printed on
the ruler.

1642 | France | Calculator (Pascaline) | Blaise Pascal | Capable of adding and subtracting
numbers; Gottfried Wilhelm von Leibniz later extended the design to multiply and divide.

1896 | America | Punch Card Machine | Herman Hollerith | Automatically read information
that had been punched into cards, without human intermediation.

1944 | America | Automatic Calculating Machine (Harvard Mark I) | Howard Aiken | Handled
23-decimal-place numbers and could perform all four arithmetic operations. It had built-in
special programs, or subroutines, to handle logarithms and trigonometric functions.

19th Century | England | Analytical Engine / Difference Engine | Charles Babbage | Designed
to calculate and print mathematical tables.

1930 | America | Differential Analyzer | Dr. Vannevar Bush | Used to calculate artillery
trajectories during World War II.

1943-1946 | America | Electronic Numerical Integrator and Computer (ENIAC) | J. Presper
Eckert Jr. and John Mauchly | The first large-scale vacuum tube computer.

1946 | America | EDVAC | John von Neumann | A stored-program successor to the ENIAC.

II. Generations of computers

1. First Generation (1946-1959)


The use of vacuum tubes as a means of storing data in memory, and the use of the
stored-program concept.

Advantages:
1. It made use of vacuum tubes, which were the only electronic components available
during those days.
2. These computers could calculate in milliseconds.

Disadvantages:
1. These were very big in size; a machine could weigh about 30 tons.
2. These computers were based on vacuum tubes.
3. These computers were very costly.
4. They could store only a small amount of information due to the use of magnetic drums.
5. As first-generation computers were built from vacuum tubes, another disadvantage was
that vacuum tubes require a large cooling system.
6. Very low work efficiency.
7. Limited programming capabilities; punch cards were used to take input.
8. Large amount of energy consumption.
9. Not reliable, and constant maintenance was required.

2. Second Generation (1959 – 1965)


The use of transistors, diodes, and magnetic storage, and a built-in error-detecting
device.

Advantages:
1. Due to the use of transistors instead of vacuum tubes, the size of the electronic
components decreased. This resulted in reducing the size of the computer as compared
to first generation computers.
2. Consumed less energy and did not produce as much heat as the first generation.
3. Assembly language and punch cards were used for input.
4. Lower cost than first generation computers.
5. Better speed; could calculate data in microseconds.
6. Better portability as compared to the first generation.

Disadvantages:
1. A cooling system was required.
2. Constant maintenance was required.
3. Only used for specific purposes.

3. Third Generation (1965 – 1971)


The use of integrated solid-state circuitry, improved secondary storage devices, and
new input/output devices were the most important advances. The IC was invented by Robert
Noyce and Jack Kilby in 1958-1959. An IC is a single component containing a number of
transistors.

Advantages:
1. These computers were cheaper as compared to second-generation computers.
2. They were fast and reliable.
3. The use of ICs reduced the size of the computer.
4. The IC not only reduced the size of the computer but also improved its performance
as compared to previous computers.
5. This generation of computers had a bigger storage capacity.
6. Instead of punch cards, a mouse and keyboard were used for input.
7. They used an operating system for better resource management and used the
concepts of time-sharing and multiprogramming.
8. These computers reduced computational time from microseconds to nanoseconds.

Disadvantages:
1. IC chips are difficult to maintain.
2. Highly sophisticated technology was required for the manufacture of IC chips.
3. Air conditioning was required.

4. Fourth Generation (1971 – 1980)


The development of different areas in computer technology such as multiprocessing,
multiprogramming, miniaturization, time-sharing, operating speed, and virtual storage. This
technology is based on the microprocessor. A microprocessor is used in a computer for any
logical and arithmetic function to be performed in any program. Graphical User Interface
(GUI) technology was exploited to offer more comfort to users.

Advantages:
1. Fastest in computation, and size was reduced as compared to the previous generation
of computers.
2. Heat generated was negligible.
3. Small in size as compared to previous generation computers.
4. Less maintenance was required.
5. All types of high-level languages can be used on this type of computer.

Disadvantages:
1. Microprocessor design and fabrication are very complex.
2. Air conditioning is required in many cases due to the presence of ICs.
3. Advanced technology is required to make the ICs.

5. Fifth Generation (1980 – onwards)


This generation is based on artificial intelligence. The aim of the fifth generation is to
make a device which can respond to natural language input and is capable of learning and
self-organization. This generation is based on ULSI (Ultra Large Scale Integration)
technology, resulting in the production of microprocessor chips having ten million electronic
components.

Advantages:
1. It is more reliable and works faster.
2. It is available in different sizes and unique features.
3. It provides computers with more user-friendly interfaces with multimedia features.


UNIT ASSESSMENTS/ACTIVITIES

1. Discuss your understanding of the elements of a computer system and how they are
interrelated with one another.

2. Aside from the examples discussed in the classification of computers, give and explain
examples for each classification of computers.

3. Give and discuss additional capabilities and limitations of computers not
mentioned/discussed in the lesson.

4. Research recent hardware and software developments in ICT. Discuss their significant
contributions to ICT. Support your discussion with pictures and include references in the
paper.


UNIT II: DATA REPRESENTATION

OVERVIEW
This module describes the various ways in which computers can manipulate numbers and
characters. The module is subdivided into two parts: the first part discusses number system
operations and conversions; the second part covers the different data representations,
including the numeric and non-numeric representation of data.

LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Distinguish the various number systems and data representations.
2. Compute number system operations and conversions.
3. Manipulate various operations and conversions in number systems.
4. Answer/practice various operations and conversions.

NUMBER SYSTEMS

There are several number systems which we normally use, such as decimal, binary, octal,
and hexadecimal. Among them, we are most familiar with the decimal number system. These
systems are classified according to the value of the base of the number system.

Binary Number System


A binary number system has a base of two. A binary number has only two (2) different
digits, 0 and 1; hence, a binary number cannot have any digit other than 0 or 1. Dealing with a
binary number system is therefore quite easier than with a decimal system.

Octal Number System


The octal number system is a base eight number system. In an octal number system
there are 8 digits: 0, 1, 2, 3, 4, 5, 6, and 7. Hence, an octal number cannot have any digit
greater than 7.

Decimal Number System


The number system having a base of 10 is called the decimal number system. With the
decimal system we have 10 different digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.

Hexadecimal Number System


The hexadecimal number system is a base 16 number system. A hexadecimal number
system has 16 digits: 0 to 9, and the remaining six digits are specified by the letter symbols A,
B, C, D, E, and F. Here A, B, C, D, E, and F represent decimal 10, 11, 12, 13, 14, and 15
respectively.

In general, we can express any number in any base or radix X. Any number with base
X, having n digits to the left and m digits to the right of the radix point, can be expressed as:

d(n-1) x X^(n-1) + d(n-2) x X^(n-2) + ... + d(1) x X^1 + d(0) x X^0 + d(-1) x X^(-1) + ... + d(-m) x X^(-m)

where d(i) is the digit in the ith position.

1. Number System Conversion: Conversion of Whole Numbers


A number can be converted to any number system; e.g., it may be required to
convert a decimal number to binary, octal, or hexadecimal. The conversion can also be done
in reverse: a binary number may be converted into decimal, and so on.

Integer Number Conversion

Decimal
Here's the decimal number system as an example:
digits (or symbols) allowed: 0,1,2,3,4,5,6,7,8,9
base (or radix): 10
the order of the digits is significant

345 is represented as

3 x 100 + 4 x 10 + 5 x 1
= 3 x 10^2 + 4 x 10^1 + 5 x 10^0

3 is the most significant symbol (it carries the most weight)

5 is the least significant symbol (it carries the least weight)

Binary to Decimal
Here's a binary number system:
digits (symbols) allowed: 0, 1
base (radix): 2
each binary digit is called a BIT
the order of the digits is significant

numbering of the digits: position n-1 holds the MSB and position 0 holds the LSB,

where n is the number of digits in the number

MSB stands for most significant bit


LSB stands for least significant bit

1001 (base 2) is represented as


1 x 2^3 + 0 x 2^2 + 0 x 2^1 + 1 x 2^0
= 9 (base 10)

11000 (base 2) is represented as


1 x 2^4 + 1 x 2^3 + 0 x 2^2 + 0 x 2^1 + 0 x 2^0
= 24 (base 10)

Octal to Decimal
Here's an octal number system:
digits (symbols) allowed: 0,1,2,3,4,5,6,7
base (radix) 8
the order of the digits is significant

345 (base 8) is represented as


3 x 8^2 + 4 x 8^1 + 5 x 8^0
= 192 + 32 + 5
= 229 (base 10)

1001 (base 8) is represented as


1 x 8^3 + 0 x 8^2 + 0 x 8^1 + 1 x 8^0
= 512 + 0 + 0 + 1
= 513 (base 10)

Hexadecimal to Decimal
Here's a hexadecimal number system:
digits (symbols) allowed: 0-9, A,B,C,D,E,F
base (radix) 16
the order of the digits is significant

Hex Decimal Binary


0 0 0000
1 1 0001
.
.
.
9 9 1001
A 10 1010
B 11 1011
C 12 1100
D 13 1101
E 14 1110
F 15 1111

A3 (base 16) is represented as


A x 16^1 + 3 x 16^0
= 160 + 3
= 163 (base 10)


A common syntax used to represent hexadecimal values (in code) is to place the
symbols "0x" as a prefix to the value.

Example: 0x8d is the hexadecimal value 8d (10001101 binary)

A second common syntax is to place a suffix of 'h' onto a value, indicating that it is
hexadecimal.

Example: 8dh (same example as just given)

Note that h is not a symbol used in hexadecimal, so it can indicate the representation
used. The Intel architectures do this in their assembly languages. This representation
is actually more time consuming (meaning the execution time of the code) to interpret,
since the entire number must be read before it can be decided what number system is
being used.

In General
Given all these examples, here's a set of formulas for the general case.

Given an n-digit number (in weighted positional notation):

S(n-1) S(n-2) . . . S(2) S(1) S(0)

the subscript gives us a numbering of the digits. Given a base b, the decimal value is

the summation (from i = 0 to i = n-1) of S(i) x b^i
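
If Python is available to you, this summation can be checked mechanically. The sketch below is only an illustration (the function name digits_to_decimal is ours, not a standard routine); it folds the digits in, most significant symbol first, assuming the symbols are valid for the given base:

    # A sketch of the positional-notation formula above.
    # int(s, 16) maps one symbol to its value (handles 0-9 and A-F).
    def digits_to_decimal(symbols, base):
        value = 0
        for s in symbols:                      # most significant symbol first
            value = value * base + int(s, 16)
        return value

    print(digits_to_decimal("345", 10))   # 345
    print(digits_to_decimal("1001", 2))   # 9
    print(digits_to_decimal("345", 8))    # 229
    print(digits_to_decimal("A3", 16))    # 163

Python's built-in int("A3", 16) computes the same 163 directly.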

Transformations Between Bases


Any base to decimal

Just use the definition (summation) given above.

134 (base 5)
1 x 5^2 + 3 x 5^1 + 4 x 5^0
= 25 + 15 + 4
= 44 (base 10)

Decimal to another base

Divide decimal value by the base until the quotient is 0.

The remainders give the digits of the value.

Note: This algorithm works for decimal to ANY base. Just change the base you want
to convert.


Decimal to Binary

examples:

36 (base 10) to base 2 (binary)

36/2 = 18 r = 0 (LSB)
18/2 = 9 r = 0
9/2 = 4 r = 1
4/2 = 2 r = 0
2/2 = 1 r = 0
1/2 = 0 r = 1 (MSB)

36 (base 10) == 100100 (base 2)

14 (base 10) to base 2 (binary)

14/2 = 7 r = 0 (LSB)
7/2 = 3 r = 1
3/2 = 1 r = 1
1/2 = 0 r = 1 (MSB)

14 (base 10) == 1110 (base 2)

Decimal to Octal

229/8 = 28 r = 5 (LSB)
28/8 = 3 r = 4
3 (MSB)
229 (base 10) == 345 (base 8)

513/8 = 64 r = 1 (LSB)
64/8 = 8 r = 0
8/8 = 1 r = 0
1 (MSB)
513 (base 10) == 1001 (base 8)

Decimal to Hexadecimal

759/16 = 47 r = 7 (LSB)
47/16 = 2 r = (15) F
2 (MSB)
759 (base 10) == 2F7 (base 16)

845/16 = 52 r = (13) D (LSB)
52/16 = 3 r = 4
3 (MSB)
845 (base 10) == 34D (base 16)
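
A minimal Python sketch of this repeated-division algorithm (decimal_to_base is our own name, not a library function):

    # Repeated division: the remainders, read LSB first, are the digits.
    DIGITS = "0123456789ABCDEF"

    def decimal_to_base(n, base):
        if n == 0:
            return "0"
        out = ""
        while n > 0:
            n, r = divmod(n, base)   # quotient and remainder
            out = DIGITS[r] + out    # prepend: remainders come out LSB first
        return out

    print(decimal_to_base(36, 2))    # 100100
    print(decimal_to_base(229, 8))   # 345
    print(decimal_to_base(845, 16))  # 34D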


Binary to Octal

1. Group into 3's starting at least significant symbol


(if the number of bits is not evenly divisible by 3, then add 0's at the most significant
end)

2. Write 1 octal digit for each group

Examples:

100 010 111 (binary)


4 2 7 (octal)

10 101 110 (binary)


2 5 6 (octal)

Binary to Hexadecimal
(just like binary to octal!)

1. Group into 4's starting at least significant symbol


(if the number of bits is not evenly divisible by 4, then add 0's at the most significant
end)

2. Write 1 hex digit for each group

Examples:

1001 1110 0111 0000 (binary)


9 E 7 0 (hexadecimal)

1 1111 1010 0011 (binary)


1 F A 3 (hexadecimal)

Hexadecimal to Binary
Just write down the four (4) bit binary code for each hexadecimal digit

Example:

3 9 C 8 (hexadecimal)
0011 1001 1100 1000 (binary)

Octal to Binary
Like hex to binary, just write down the 3-bit binary code for each octal digit

Example:


5 0 1 (octal)
101 000 001 (binary)

Hexadecimal to Octal
Do it in 2 steps:

1. hex to binary
2. binary to octal

2. Number System Conversion: Fractional Conversion

The above discussion is for integer numbers only. If the number contains a fractional
part, we have to deal with it in a different way when converting the number from a different
number system (i.e., binary, octal, or hexadecimal) to the decimal number system or vice
versa. We illustrate this with examples.

Examples

Binary to Decimal

Convert 1110.011 (base 2) into a decimal number.

The binary number given is 1 1 1 0 . 0 1 1

Positional weights 3 2 1 0 . -1 -2 -3

The positional weights for each of the digits are written below each digit.

The decimal equivalent number is given as:

1 x 2^3 + 1 x 2^2 + 1 x 2^1 + 0 x 2^0 + 0 x 2^-1 + 1 x 2^-2 + 1 x 2^-3

= 8 + 4 + 2 + 0 + 0 + 0.25 + 0.125

1110.011 (base 2) = 14.375 (base 10)

Octal to Decimal

Convert 362.358 into a decimal number.

The octal number given is 3 4 5. 3 5


Positional weights 2 1 0 -1-2

The positional weights for each of the digits are written in italics below each digit.

Hence the decimal equivalent number is given as:


3 × 82 + 4 × 81 + 5 × 80 + 3 × 8–1 + 5 × 8–2
= 192 + 32 + 5 + 0.375 + 0.078125

345.358 = 229.4531251010


Hexadecimal to Decimal

Convert 51B.1216 into a decimal number.

The hexadecimal number given is 5 1 B. 1 2


Positional weights 2 1 0 -1-2

The positional weights for each of the digits are written in italics below each digit.

Hence the decimal equivalent number is given as:


5 × 162 + 1 × 161 + 11 × 160 + 1 × 16–1 + 1 × 16–2
= 1280 + 16 + 11 + 0.0625 + 0.00390625

51B.1216 = 1307.066406251010
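
A small Python sketch of this fractional conversion (to_decimal is our own helper name); digits to the right of the point simply carry negative powers of the base:

    # Whole part via int(); fractional digits weighted base**-1, base**-2, ...
    def to_decimal(number, base):
        whole, _, frac = number.partition(".")
        value = int(whole, base) if whole else 0
        for i, s in enumerate(frac, start=1):
            value += int(s, 16) * base ** -i   # int(s, 16) reads one digit 0-F
        return value

    print(to_decimal("1110.011", 2))   # 14.375
    print(to_decimal("345.35", 8))     # 229.453125
    print(to_decimal("51B.12", 16))    # 1307.0703125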

3. Number Systems Arithmetic


Addition
The procedure for adding two numbers (octal, hexadecimal, or binary) is the same as
that for two decimal numbers.
Addition is carried out from the least significant digit and proceeds to the higher
significant digits, adding the carry resulting from the addition of the two previous digits each
time.
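
One way to practice base-r addition is to let Python check your hand computation: int() parses a number written in a given base, and format() renders the sum back in that base. A small sketch with arbitrary example values:

    # Octal: 345 + 162 = 527 (base 8)
    print(format(int("345", 8) + int("162", 8), "o"))    # 527

    # Hexadecimal: 1A + 2B = 45 (base 16)
    print(format(int("1A", 16) + int("2B", 16), "X"))    # 45

    # Binary: 1011 + 110 = 10001 (base 2)
    print(format(int("1011", 2) + int("110", 2), "b"))   # 10001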


Subtraction
The procedure for subtracting two (octal, hexadecimal, binary) numbers using the
direct method is the same as for decimal subtraction.
The direct method of subtraction uses the concept of the borrow. In this method, we
borrow from a higher significant position when the minuend digit is smaller than the
corresponding subtrahend digit.


Binary Subtraction
There are three (3) ways:
- The direct method
- 2's complement
- 1's complement

Subtraction by Complements
Complements are used in digital computers for simplifying the subtraction operation
and for logical manipulations.
There are two types of complements for each number system of base r:
- the r's complement
- the (r-1)'s complement
So for binary the value of r is 2, and we have the 2's (r's) complement and the 1's
((r-1)'s) complement.

1's Complement
To get the 1's complement of a binary number, the "0" and "1" bits of the original
bit string are switched.
Ex. 10110 (base 2) becomes 01001 (base 2)

2's Complement
The 2's complement is the 1's complement bit string plus 1.
Ex. the 2's complement of 10110 (base 2):

01001 (1's complement of 10110)
+   1
01010 (2's complement of 10110)


Binary Subtraction (Using 1’s complement)


Procedure
1. Get the 1's complement of the subtrahend.
2. Add the 1's complement of the subtrahend to the minuend.
3. If a carry is generated, remove the carry and add it to the result (end-around carry).

Binary Subtraction (Using 2’s complement)


Procedure
1. Get the 2's complement of the subtrahend.
2. Add the 2’s complement of the subtrahend to the minuend
3. If carry is generated, remove the carry.

Binary Subtraction (Using 1’s complement), if the subtrahend is larger than the
minuend
If the subtrahend is larger than the minuend, then no carry is generated.
The answer is obtained as the 1's complement of the true result, and is opposite in sign.


Binary Subtraction (Using 2's complement), if the subtrahend is larger than the minuend
If the subtrahend is larger than the minuend, then no carry is generated. Add the 2's
complement of the subtrahend to the minuend.
The answer is obtained by taking the 2's complement of the result and changing the sign.

Therefore, the reason why negative numbers are represented using the 2's complement
method in computing is that subtractions can be performed as additions.
Since subtractions can be performed with addition circuits, separate subtraction circuits are
unnecessary, thereby simplifying the hardware structure.
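
A minimal Python sketch of this idea, assuming fixed 5-bit words as in the examples above; the subtraction 01111 - 00101 is carried out using only inversion and addition:

    BITS = 5
    MASK = 2 ** BITS - 1                 # 11111: keeps results at 5 bits

    def twos_complement(x):
        return ((~x) + 1) & MASK         # invert all bits, add 1

    minuend, subtrahend = 0b01111, 0b00101          # 15 - 5
    total = minuend + twos_complement(subtrahend)   # addition replaces subtraction
    result = total & MASK                           # the carry out is discarded
    print(format(result, "05b"))                    # 01010 (decimal 10)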

Multiplication
The procedure is similar to decimal multiplication but much simpler.
The multiplication is done by repeated addition of all partial products to obtain the full
product

Division
Binary division follows the same procedure as decimal division.


2 DATA REPRESENTATION

Computers and other digital circuits process data in binary format.


Various binary codes are used to represent data, which may be numeric, alphabetic, or
special characters.
In digital systems, whatever code is used, the information is represented in binary form,
but interpretation of the data is only possible if the code in which the data is being represented
is known. For example:

1000010 (base 2) = 66 (decimal) in straight binary
= 42 in Binary Coded Decimal
= B in ASCII code
Decimal Digit Representation

1) Binary Coded Decimal (BCD)


2) Unpacked Decimal Format (UDF)
3) Packed Decimal Format (PDF)

Binary Coded Decimal (BCD) - a coding scheme relating decimal and binary numbers.
Four (4) bits are required to code each decimal digit.

Decimal Number   Binary Number   BCD Code
0 0000 0000
1 0001 0001
2 0010 0010
3 0011 0011
4 0100 0100
5 0101 0101
6 0110 0110
7 0111 0111
8 1000 1000
9 1001 1001
10 1010 0001 0000
11 1011 0001 0001
12 1100 0001 0010
13 1101 0001 0011
14 1110 0001 0100
15 1111 0001 0101

Example 1
789 becomes 0111 1000 1001 (BCD)

7    8    9
0111 1000 1001

Example 2
105 becomes 0001 0000 0101 (BCD)

1    0    5
0001 0000 0101
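
A short Python sketch of BCD encoding (to_bcd is our own helper name): each decimal digit is rendered independently as its own 4-bit group.

    def to_bcd(number):
        # format(d, "04b") gives the 4-bit binary code of one decimal digit
        return " ".join(format(int(d), "04b") for d in str(number))

    print(to_bcd(789))   # 0111 1000 1001
    print(to_bcd(105))   # 0001 0000 0101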

Unpacked Decimal Format (UDF) - also called zoned decimal format


- Uses 1 byte for each digit of the decimal number
- Represents the values 0 to 9 in the least significant 4 bits of 1 byte
- In the most significant 4 bits, called the zone bits, 1111 is stored
- The 4 bits that represent the sign are stored in the zone bits of the least significant digit
(positive and 0 are represented by 1100, while negative is 1101)

Example 1
789 becomes 11110111 11111000 11001001

7        8        9
11110111 11111000 11001001

Example 2
-105 becomes 11110001 11110000 11010101

1        0        5
11110001 11110000 11010101

Packed Decimal Format (PDF)


- One byte represents 2 digits of the decimal number
- The least significant 4 bits represent the sign
(the bit pattern of the sign is the same as that of the unpacked decimal format: positive
and 0 are represented by 1100, while negative is 1101)

Example 1
789 becomes 0111 1000 1001 1100

7    8    9    + (sign)
0111 1000 1001 1100

Example 2
-105 becomes 0001 0000 0101 1101

1    0    5    - (sign)
0001 0000 0101 1101
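
A Python sketch of the packed decimal layout just described (to_packed_decimal is a hypothetical helper): the sign nibble, 1100 for positive or zero and 1101 for negative, is appended after the digit nibbles.

    def to_packed_decimal(n):
        sign = "1100" if n >= 0 else "1101"   # sign goes in the last nibble
        nibbles = [format(int(d), "04b") for d in str(abs(n))]
        return " ".join(nibbles + [sign])

    print(to_packed_decimal(789))    # 0111 1000 1001 1100
    print(to_packed_decimal(-105))   # 0001 0000 0101 1101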


Binary Digit Representation

1. Signed Binary Numbers

- Sign-magnitude Representation
- Absolute value representation
- Complement representation

Sign-magnitude Representation

An additional bit is used as the sign bit, usually placed as the MSB (most significant
bit).

Generally, 0 is reserved for a positive number and 1 is reserved for a negative number.

Examples
Magnitude: 101100 (base 2) = 44 (base 10)
0101100 (base 2) = +44 (base 10)

Magnitude: 111 (base 2) = 7 (base 10)
1111 (base 2) = -7 (base 10)

Absolute Value Representation

Uses an 8-bit representation where the first bit corresponds to the sign and the last
seven bits to the value of the number. 0 for positive and 1 for negative.

Limitations:
1. With the 8-bit representation, the range of numeric values that can be
represented is only -127 to 127

2. Value 0 can be represented as 00000000 (positive) and 10000000 (negative)


which makes the operation more complicated

Examples
10001100 (base 2) = -12 (base 10)
00001100 (base 2) = +12 (base 10)
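
A small Python sketch of reading an 8-bit sign-magnitude pattern, matching the example above (sign_magnitude_value is our own name):

    def sign_magnitude_value(bits8):
        sign = -1 if bits8[0] == "1" else 1   # MSB is the sign bit
        return sign * int(bits8[1:], 2)       # remaining 7 bits: the magnitude

    print(sign_magnitude_value("10001100"))   # -12
    print(sign_magnitude_value("00001100"))   # 12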

Complement Representation has been discussed in Number Systems Arithmetic


Binary Digit Representation

2. Floating Point

Representation of Floating Point Numbers

Computers represent real values in a form similar to that of scientific notation. There
are standards which define what the representation means so that across computers there
will be consistency. Note that this is not the only way to represent floating point numbers,
it is just the IEEE standard way of doing it.

Here's what we do:

the representation

-------------------
|S| E | F |
-------------------

S is one bit representing the sign of the number


E is an 8 bit biased integer representing the exponent
F is an unsigned integer

the true value represented is:

(-1)^S x f x 2^e

where
e = E - bias
f = F/2^n + 1

for single precision numbers (the emphasis in this class)

n = 23
bias = 127

Now, what does all this mean?

--> S, E, F all represent fields within a representation. Each is just a bunch of bits.

--> S is just a sign bit. 0 for positive, 1 for negative.

--> E is an exponent field. The E field is a biased-127 representation. So, the true
exponent represented is (E - bias). The radix for the number is ALWAYS 2.

Note: Computers that did not use this representation, like those built before the
standard, did not always use a radix of 2.

Example: some IBM machines had radix of 16.


--> F is the mantissa. It is in a somewhat modified form. There are 23 bits available for the
mantissa. It turns out that if floating point numbers are always stored in their normal form, then
the leading bit (the one on the left, or MSB) is always a 1. So, why store it at all? It gets put
back into the number (giving 24 bits of precision for the mantissa) for any calculation, but we
only have to store 23 bits.

This MSB is called the HIDDEN BIT.

An example: Put the decimal number 64.2 into the standard single precision
representation.

First step:
Get a binary representation for 64.2. To do this, get binary representation for the
stuff to the left, and right of the decimal point separately.

64 is 1000000

.2 can be converted using the algorithm:

.2 x 2 = 0.4 0
.4 x 2 = 0.8 0
.8 x 2 = 1.6 1
.6 x 2 = 1.2 1

.2 x 2 = 0.4 0 now this whole pattern (0011) repeats.


.4 x 2 = 0.8 0
.8 x 2 = 1.6 1
.6 x 2 = 1.2 1

so a binary representation for .2 is .001100110011. . .

Putting the halves back together again:


64.2 is 1000000.0011001100110011. . .

Second step:
Normalize the binary representation. (make it look like scientific notation)

1.000000 00110011. . . x 2^6

Third step:
Six (6) is the true exponent. For the standard form, it needs to be in biased-127
form.

6
+ 127
133

133 in 8 bit, unsigned representation is 1000 0101


this is bit pattern used for E in the standard form.


Fourth step:
The mantissa stored (F) is the stuff to the right of the radix point in the normal form.
We need 23 bits of it.

000000 00110011001100110

put it all together (and include the correct sign bit):

S E F
0 10000101 00000000110011001100110

the values are often given in hex, so here is the final answer

0100 0010 1000 0000 0110 0110 0110 0110
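
If Python is at hand, the whole hand conversion can be checked with the standard struct module, which packs a float into its IEEE single-precision bytes:

    import struct

    # Big-endian single-precision encoding of 64.2
    print(struct.pack(">f", 64.2).hex())   # 42806666

The hex string 42806666 matches the bit pattern 0100 0010 1000 0000 0110 0110 0110 0110 derived above.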

Overflow

Sometimes a value cannot be represented in the limited number of bits allowed.

Examples:

unsigned, 3 bits: 8 would require at least 4 bits (1000)


sign mag., 4 bits: 8 would require at least 5 bits (01000)

When a value cannot be represented in the number of bits allowed, we say that overflow has
occurred. Overflow occurs when doing arithmetic operations.

example: 3 bit unsigned representation

011 (3)
+ 110 (6)
---------
? (9) it would require 4 bits (1001) to represent
the value 9 in unsigned rep.
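
A minimal Python sketch of detecting this overflow, assuming a fixed 3-bit unsigned word:

    BITS = 3
    a, b = 0b011, 0b110              # 3 + 6
    total = a + b
    overflow = total >= 2 ** BITS    # 9 does not fit in 3 bits
    print(total, overflow)           # 9 True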

Character Representation

Everything represented by a computer is represented by binary sequences. A common
non-integer type that needs to be represented is characters. We use standard encodings
(binary sequences) to represent characters.


REMEMBER: bit patterns do NOT imply a representation.

I/O devices work with 8-bit quantities. A standard code, ASCII (American Standard Code for
Information Interchange), defines what character is represented by each sequence.

examples:

0100 0001 is 41 (hex) or 65 (decimal). It represents `A'


0100 0010 is 42 (hex) or 66 (decimal). It represents `B'

Different bit patterns are used for each different character that needs to be represented.
The code has some nice properties. If the bit patterns are compared, (pretending they represent
integers), then `A' < `B' This is good, because it helps with sorting things into alphabetical order.

Notes: `a' (61 hex) is different than `A' (41 hex)


`8' (38 hex) is different than the integer 8

the digits:
`0' is 48 (decimal) or 30 (hex)
`9' is 57 (decimal) or 39 (hex)

Character Representation
Coding of Alphanumeric
1. American Standard Code for Information Interchange (ASCII)
- This coding scheme was adopted by the American National Standards Institute.
- This code uses bit patterns of length 7 to represent the upper- and lower-case
letters of the English alphabet, punctuation, the digits 0 through 9, and certain
control information such as line feeds, carriage returns, and tabs.
- ASCII is often extended to 8-bit patterns.
- A parity bit or check bit is used to detect errors in data transmission. Parity bits
are used to signal the computer that the bits in a byte have stayed the way they
are supposed to during transmission.

Character ASCII CODE


A–I 65 - 73
J–R 74 - 82
S-Z 83 - 90
a–i 97 - 105
j-r 106 - 114
s–z 115 -122
0 -9 48 - 57
space 32
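
Python's ord() and chr() expose these same codes, so the table above can be explored directly:

    print(ord("A"), hex(ord("A")))   # 65 0x41
    print(ord("a"), hex(ord("a")))   # 97 0x61
    print(chr(66))                   # B
    print(ord("8"))                  # 56: the character '8' is not the integer 8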


2. Extended Binary Coded Decimal Interchange Code (EBCDIC)
- An 8-bit code representation plus a parity bit
- It was developed by IBM and introduced with their System/360 line.

Character   Zone   Digit
A–I         12     1–9
J–R         13     1–9
S–Z         14     2–9
a–i         8      1–9
j–r         9      1–9
s–z         10     2–9
0–9         15     0–9
Space       4      0

PARITY BIT
In the process of transmitting binary information, any external noise introduced may
change bit values from 0 to 1 or vice versa.
An error detection code can be used to detect errors during transmission.
A parity bit is an extra bit added in a string of binary code to make the total of 1s either
odd or even.

How does it work?


 At the sending end, the message is applied to a “parity generation” circuit where the
required parity bit is generated
 The message, including the parity bit, is transferred to its destination
 At the receiving end, all the incoming bits are applied to a “parity check” circuit to verify
that they match the adopted parity
 An error is detected if the checked parity does not correspond to the adopted one

This mechanism enables the detection of single bit errors


Two types:
1. Even parity - for a given set of bits, the occurrences of bits whose value is 1 is counted.
If that count is odd, the parity bit value is set to 1, making the total count of occurrences
of 1s in the whole set (including the parity bit) an even number. If the count of 1s in a
given set of bits is already even, the parity bit's value is 0.
If EVEN Parity
Count the no. of 1’s
If Even No., parity = 0
If Odd No., parity = 1

2. Odd parity - For a given set of bits, if the count of bits with a value of 1 is even, the parity
bit value is set to 1 making the total count of 1s in the whole set (including the parity bit)
an odd number. If the count of bits with a value of 1 is odd, the count is already odd so the
parity bit's value is 0.
If ODD Parity
Count the no. of 1’s
If Odd No., parity = 0
If Even No., parity = 1
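
A minimal Python sketch (illustrative; it assumes 7-bit ASCII data with the parity bit prepended, and the function name is made up) that follows the rules above:

    def parity_bit(data_bits, even=True):
        # Count the 1s; choose the parity bit so the total count is even (or odd).
        ones = data_bits.count("1")
        if even:
            return "0" if ones % 2 == 0 else "1"
        return "1" if ones % 2 == 0 else "0"

    data = format(ord("A"), "07b")             # 7-bit ASCII for 'A': 1000001
    sent = parity_bit(data, even=True) + data  # transmitted byte: 01000001
    print(sent)

    # Receiving end: regenerate the parity over the data bits and compare.
    assert sent[0] == parity_bit(sent[1:], even=True), "transmission error detected"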

UNIT ASSESSMENTS
Solve the following problems on a separate answer sheet. Show your complete
solution for each problem.
I. Number System Conversion. Complete the table below.

Convert the ff. to:

Binary (Base 2)                    Octal (Base 8)      Decimal (Base 10)    Hexadecimal (Base 16)
10110110₂                          (1)                 (2)                  (3)
111011100101₂                      (4)                 (5)                  (6)
(7)                                762₈                (8)                  (9)
(10)                               4735₈               (11)                 (12)
(13)                               (14)                785₁₀                311₁₆
(15)                               (16)                2019₁₀               (17)
(18)                               5756₈               (19)                 BEE₁₆
(20)                               (21)                (22)                 FF73D₁₆
(23)                               (24)                (25)                 ACE₁₆
0.111011₂                          (26)
0.1101011110₂                      (27)                (28)
10111.011101₂                      (29)                (30)                 (31)
1101011.111011₂                    (32)                (33)                 (34)
110100101101111.110101101011₂      (35)                (36)
(37)                               0.347₈              (38)                 (39)
(40)                               23.75₈              (41)                 (42)
(43)                               2641.337₈           (44)                 (45)
(46)                               (47)                0.1275₁₀             (48)
(49)                               (50)                849.274₁₀            (51)
(52)                               (53)                                     CAB.3B₁₆
(54)                               (55)                                     D4BFC.ED4₁₆

II. Arithmetic

2.1. Add the following octal numbers


1.a 756 + 345 =
1.b 654 + 555 + 765 =
2.2. Add the following hexadecimal numbers
2.a ACE + BEAD =
2.b DEED + BADE + 109 =
2.3. Add the following binary numbers
3.a 101010 + 111111 + 101
3.b 111110 + 100111
2.4. Subtract the following numbers
4.a Octal numbers 711 - 345 =
4.b Hexadecimal numbers BEAD - ACE =
2.5. Subtract the binary number 1111 – 101 using
5.a direct method
5.b one’s complement
5.c two’s complement

2.6. Subtract the binary number 1001 – 1101 using


6.a one’s complement
6.b two’s complement

Try to do more exercises using numbers you provide.


III. Represent the following numbers accordingly:


a. Binary Coded Decimal (BCD)
1. 3421
2. 7684
3. 32884
4. 55937
b. Unpacked Decimal Format (UDF)
1. +3245
2. -7448
3. -19846
4. +6781
5. - 36721
c. Packed Decimal Format (PDF)
1. +33485
2. -7438
3. -943276
4. +6781
5. - 36721
d. Represent the following in eight (8) bit format:
1. –87 in Sign Magnitude, 1’s complement, and 2’s complement

e. What is the decimal equivalent of the 1’s complement no. 1011 1011?
f. What is the decimal equivalent of the 2’s complement no. 1111 0101?
g. Give the floating-point representation (exponent and mantissa) for the following numbers
1. 25.625₁₀
2. 1063.89₁₀
3. -0.65625₁₀
4. 353.34₈
5. 0.000524₈
6. -0.05C₁₆
7. -4.25₁₀
8. -0.0011₂

i. Non Numeric Representation

1. What is the ASCII equivalent of “bSCs”?


2. What is the ASCII equivalent of “roxas” using even parity?
3. What is the ASCII equivalent of “PUP” using odd parity?
4. What is the ASCII equivalent of “77Me”?
5. What is the hexadecimal equivalent of your answer in question no. 4?
6. What is the EBCDIC equivalent of “computer123” using even parity?
7. What is the EBCDIC equivalent of “hello”?
8. What is the EBCDIC equivalent of “zxyABC” using odd parity?
9. What is the EBCDIC equivalent of “OOTD”?


Read:
https://www.makeuseof.com/tag/audio-file-format-right-needs/
https://blog.hubspot.com/insiders/different-types-of-image-files
https://blog.hubspot.com/marketing/best-video-format
https://www.google.com.ph/amp/s/www.wix.com/blog/photography/amp/2018/10/25/video-
formats


UNIT III: HARDWARE

OVERVIEW

The topic on hardware is divided into two parts. The first part discusses the electronic
components that make up the computer system, covering the main units of the computer,
the input and output devices, and other related computer equipment.
The second part is an introduction to digital logic systems, which discusses logic circuits
and how they are used to implement circuit design.

PART 1: COMPONENTS OF THE SYSTEM UNIT

LEARNING OUTCOMES

At the end of this module, the student is expected to:

1. Explain the functions of the main units of a physical computer system


2. Describe what input and output devices are
3. Differentiate the primary and secondary storage devices
4. Give examples of I-O devices

COURSE MATERIALS

Definition: Hardware consists of the tangible, physical parts of the computer and related devices.
Main Units of a Computer
o Processor – interprets and carries out the basic instructions that operate a computer
o Main Storage – also called the memory or the primary storage
o Input – device used to send data to a computer
o Output – device used to send data from a computer to other devices


Figure 3.1. Main Units of a Computer

Processor - interprets and carries out the basic instructions that operate a computer; it may
also be called the central processing unit or the CPU
The processor contains:
1. Control Unit – directs the flow of instructions and data inside the CPU and acts as a traffic
controller; it interprets each instruction and initiates the appropriate action to carry it out.
2. Arithmetic and Logic Unit (ALU) – performs the arithmetic and logical calculations inside the
CPU
3. Registers – temporarily hold data and instructions; they are small, high-speed locations
inside the processor
4. System clock – controls the timing of computer operations; it generates regular electronic
impulses (ticks) that set the operating pace of the system unit components

Main Storage
The memory stores instructions waiting to be executed by the processor, the data needed by
those instructions and the results of processed data (information).
Memory stores three (3) basic types of items:
1. Operating system and other system software
2. Application programs
3. Data / Information

Types of Memory:
1. RAM (Random Access Memory) - stores data and instructions for processing; volatile
(Volatile means the data/program in memory are erased once power is cut off)


Cache memory – a high-speed holding area for data and instructions that are frequently used
by the CPU
2. ROM (Read Only Memory) - contains stored instructions that a computer requires to be able
to do its basic routine operations; non-volatile
3. CMOS (complementary metal-oxide semiconductor) – stores configuration information needed
every time the computer is turned on, e.g. RAM capacity, date/time

Input Device

An input device is a hardware or peripheral device used to send data to a computer. An


input device allows users to communicate and feed instructions and data to computers for
processing, display, storage and/or transmission.
(https://www.techopedia.com/definition/2344/input-device)

Figure 3.2. Images of most common input devices


*Image from Pinterest

Other examples of Input Devices

 Optical Mark Recognition


 Sensors
 Touch Screen
 Light Pen
 Speech Recognition / Speech Synthesizers – speech recognition converts human speech
into digital form; speech synthesizers convert written text into computerized voice


 Magnetic Ink Character Recognition (MICR)- used to read the numbers printed at the
bottom of checks

Output Device

An output device is any device used to send data from a computer to another device or
user (see Figure 3.3).

Other Hardware

Secondary or Auxiliary Storage Devices

Secondary storage is where data are stored permanently. It is outside the primary storage and
serves much like a filing cabinet.

We store data in an auxiliary storage device for 2 reasons:

1. Primary storage can only store a limited amount of data


2. Data stored in primary storage are volatile and temporary

Figure 3.3. Images of Output Devices


*Image from Pinterest


Examples of Secondary or Auxiliary Storage Devices


- Magnetic tape - Data are stored serially and can only be accessed in a serial manner;
high capacity; cheap
- Magnetic Disk - Direct access storage media; high capacity and fast retrieval speed;
reads/writes data through the use of electromagnetism
- Optical Disc (CD, DVD, Blu-ray) – Read / write data through light/laser beams
- Solid State Drives - Use integrated circuit assemblies as memory
- External Drive
- Flash Drive
- Cloud

Read:

Discovering Computers by Shelly, Cashman, Vermaat


Introduction to Information Technology by Albano, Atole, and Ariola

UNIT ASSESSMENTS/ACTIVITIES

1. Differentiate a primary storage from a secondary storage


2. Differentiate a RAM from a ROM
3. Identify and explain the main units of a computer
4. What are the reasons why a secondary storage is needed?
5. Differentiate a magnetic disk from an optical disc
6. Give examples of cloud storage
7. Enumerate and explain what a processor contains.
8. What secondary storage do you normally use?
9. From what you remembered during the lesson on History of computers, how have memory
and secondary storages evolved?
10. What I/O devices are used by magnetic tape and magnetic disk?


PART 2: DIGITAL LOGIC SYSTEM

LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Define what boolean algebra is
2. Identify the different logic gates
3. Illustrate the representation of the different logic gates
4. Convert boolean algebra expression into a logic circuit
5. Create truth tables for the corresponding logic circuits and boolean expression
6. Explain basic theorems and postulates on digital logic system

COURSE MATERIALS
Introduction

George Boole (1815 – 1864) developed an algebraic system to treat logic functions, which
is now called Boolean algebra.
Claude Shannon (1916 – 2001) is said to be the founder of digital circuit design. In 1938,
Shannon applied Boolean algebra to telephone switching circuits, and engineers then realized
that Boolean algebra could be used to analyze and design computer circuits.

Boolean Algebra

- A branch of mathematics developed by George Boole


- Provides a system of logic and reasoning using true and false statements suitable for
representing switching circuits
- The basic operations are complementation, multiplication, and addition. In digital systems,
these operations are performed by inverters, AND gates, and OR gates respectively

Boolean algebra differs from ordinary algebra


- Ordinary algebra deals with real numbers, which consist of an infinite set of elements.
Boolean algebra deals with only two elements, 0 and 1.
- Boolean algebra defines an operator called complement which is not available in ordinary
algebra.
- Boolean algebra does not have additive or multiplicative inverses, so there are no
subtraction or division operations

Logic Gates

Computer circuits are often called logic circuits because they simulate mental processes.
These logic circuits are called GATES. A GATE is a digital circuit having one or more input signals
but only one output signal. The basic gates are NOT, AND, OR.


Operation          Gate     Symbol
Inversion          NOT      ' or an overbar
Multiplication     AND      · (dot)
Addition           OR       +

Using 1s and 0s, the representation is as follows:

Inversion       0' = 1
                1' = 0

Multiplication  0 · 0 = 0
                0 · 1 = 0
                1 · 0 = 0
                1 · 1 = 1

Addition        0 + 0 = 0
                0 + 1 = 1
                1 + 0 = 1
                1 + 1 = 1

The Basic Gates : NOT, AND, OR Gates

NOT Gate - Inverter

OR Gate – Addition


AND Gate – Multiplication

Example of a logic circuit using the basic AND and OR gates

Boolean expression : Z = XY + W
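
The behavior of this circuit can be tabulated with a short Python sketch (added for illustration); each line of its output is one row of the circuit's truth table.

    from itertools import product

    for x, y, w in product((0, 1), repeat=3):
        z = (x and y) or w        # the AND gate feeds one input of the OR gate
        print(x, y, w, "->", z)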

Universal Gates : NAND and NOR Gates

A universal logic gate is a logic gate that can be used to construct all other logic gates.
This will be discussed in further detail in later topics.

NAND Gate


NAND can also be drawn as below,

NOR Gate

NOR can also be drawn as below,

Exclusive OR (XOR) Gate


Exclusive NOR (XNOR) Gate

Circuits that can perform binary addition and subtraction are constructed by combining
logic gates. These circuits are used in the design of the arithmetic logic unit (ALU). The electronic
circuits are capable of very fast switching action, and thus an ALU can operate at high clock rates.

Example of two inverters entering an AND gate, with the corresponding truth table


Postulates and theorems useful for two-valued Boolean algebra

Examples applying Theorem 4 (Distributive) and Postulate 5.

F = AB + BC + B′C
= AB + C(B + B′)
= AB + C

F = A + A′B
= (A + A′) (A + B)
=A+B

F = A′B′C + A′BC + AB′


= A′C (B′+B) + AB′
= A′C + AB′
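
Each simplification can be verified exhaustively. The Python sketch below (illustrative) checks the first identity over all eight input combinations; the other two can be checked the same way.

    from itertools import product

    for a, b, c in product((False, True), repeat=3):
        lhs = (a and b) or (b and c) or ((not b) and c)   # AB + BC + B'C
        rhs = (a and b) or c                               # AB + C
        assert lhs == rhs
    print("AB + BC + B'C == AB + C for all inputs")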

DeMorgan's Theorems

DeMorgan’s Theorems are two additional simplification techniques that can be used to simplify
Boolean expressions.

Theorem 1 : (X + Y)’ = X’Y’ -> A NOR gate is the same as a bubbled AND gate


Theorem 2 : (XY)’ = X’ + Y’ -> A NAND gate is the same as a bubbled OR gate
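
Both theorems can be confirmed by brute force over the two Boolean values (a short illustrative sketch):

    for x in (False, True):
        for y in (False, True):
            assert (not (x or y)) == ((not x) and (not y))   # Theorem 1: (X + Y)' = X'Y'
            assert (not (x and y)) == ((not x) or (not y))   # Theorem 2: (XY)' = X' + Y'
    print("De Morgan's theorems hold for all inputs")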

Equivalence among circuits

Double inversion has no effect on the logic state. If you invert the signal twice, you get the
original signal back. Double invert a low, and you still have a low. Double invert a high, and
you still have a high.

The following three circuits will generate the same output. Using De Morgan’s theorem,
we convert an OR-AND circuit to an all NOR circuit.


Figure 3.4

Figure 3.5

Figure 3.5 adds a double inversion, which makes it the same as Figure 3.4.

Applying De Morgan’s Theorem # 1, where a bubbled AND gate is the same as a NOR gate, we
come up with the following all-NOR gate circuit

Figure 3.6

Universal Gates


A universal logic gate is a logic gate that can be used to construct all other logic gates.

NAND gates and NOR gates are called universal gates as any type of gates or logic functions
can be implemented by these gates.

Basic gates NOT, AND, OR, implemented using all NAND gates

Basic gates NOT, AND, OR, implemented using all NOR gates
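
The constructions in the two figures above can also be written out in code. This Python sketch (illustrative; the function names are made up) builds NOT, AND, and OR out of a single NAND primitive and checks them against the expected outputs.

    def nand(a, b):
        # The only primitive gate used below.
        return 0 if (a and b) else 1

    def not_(a):
        return nand(a, a)              # tie both NAND inputs together

    def and_(a, b):
        return not_(nand(a, b))        # NAND followed by an inverter

    def or_(a, b):
        return nand(not_(a), not_(b))  # De Morgan: (A'B')' = A + B

    for a in (0, 1):
        for b in (0, 1):
            assert not_(a) == (1 - a)
            assert and_(a, b) == (1 if (a and b) else 0)
            assert or_(a, b) == (1 if (a or b) else 0)
    print("NOT, AND, OR built from NAND behave correctly")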


Fabrication of an integrated circuit that performs a logic operation becomes easier when only
one kind of gate is used.

The advantage of using universal gates for implementation of logic functions is that it
reduces the number of varieties of gates.

Read:

Digital Principles and Logic Design by A. Saba & N. Manna


Digital Principles and Applications, 7th ed. By Leach, Malvino, Saha
https://www.csus.edu/indiv/p/pangj/class/cpe64/ademo/L1_Demo_Demorgan.pdf
www.secs.oakland.edu/~polis/Lectures/EGR240%20D5.1%20BasicLogicGates.ppt

UNIT ASSESSMENTS/ACTIVITIES
1. Create two boolean expressions for each of the circuits below. For the second boolean
expression for each, apply De Morgan’s theorem

2. Create the truth tables for each of the circuits used in the topic ‘Equivalence among circuits’
to confirm that the 3 circuits generate the same output.
3. Per De Morgan’s theorem # 1, what is equivalent to a NOR gate?
4. Why are NAND and NOR gates called universal gates?
5. Double inversion puts my logic circuit to a much lower state. Is this statement correct?
6. Draw an XOR gate and provide the truth table
7. Draw a NOR gate and its equivalent gate based on De Morgan’s theorem
8. Provide the corresponding Boolean equations for the 2 gates in #7
9. Draw a NAND gate and its equivalent gate based on De Morgan’s theorem
10. Provide the corresponding Boolean equations for the 2 gates in #9
11. Create the circuit for this Boolean expression : V = WX + YZ
12. Draw the circuit for the OR function using all NOR gates.
13. Draw the circuit for the AND function using all NAND gates.
14. A NOR and a bubbled OR will have the same output? True or false? Draw the truth table
to prove your answer.
15. A NAND and a bubbled OR will have the same output? True or false? Draw the truth table
to prove your answer.


UNIT IV: PEOPLEWARE

OVERVIEW
Peopleware is regarded as the most important element of the computer and
communication system. It is said that without this element, there would not be any hardware
computers to be used, no software systems that would run computers, and no outputs to be
interpreted as a valid source of information. But thanks to the founding men and women behind
the innovations in the field of computing, the likes of Charles Babbage, Lady Ada Lovelace, Alan
Turing, and others, the world we live in today has made it a necessity for computers and their
systems to be part of our daily lives.
Nowadays, as information and communications technology continues to evolve, not
much credit is given to the people who continuously improve it. Various careers in ICT are part
and parcel of the vast use of computers and their enabling technologies, making all industries'
operations simpler, if not better. In this module, we will discuss various ICT professions,
differentiate them from one another, and examine how they contribute to the increasing demand
in the utilization of computers.

LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Contrast roles and jobs in the ICT profession.
2. Summarize and report insights gained from ICT professionals.
3. Discuss a typical day of an ICT professional.

COURSE MATERIALS
Most professional ICT work can be classified into three (3) broad areas:
1. Information systems / Information Technology
2. Computer systems engineering
3. Computer science

People in ICT
1. Business Analysis Career – evaluate customer business needs, and provides business
solutions.

 Business Analyst - guide businesses in improving processes, products, services and


software through data analysis. These agile workers straddle the line between IT and
the business to help bridge the gap and improve efficiency.

 Business Systems Analyst – solve organizational information problems and requirements


by analyzing requirements; designing computer programs; recommending system
controls and protocols.


2. Computer Engineering Career – design, install, repair and service of computers.

 Computer Engineer - evaluate, design, and maintain computer hardware and software
systems. They develop, test, and design, computer processors, circuit boards, and
network systems.

 Hardware Design Engineer - develop, improve, and test components and systems
including circuit boards, processors, and memory cards for computers and other devices.

 Technical Support Engineer - also known as an IT support engineer; helps resolve
technical issues within different components of computer systems, such as software,
hardware, and other network-related IT problems.

 Computer Systems Engineer - develop, test, and evaluate software and personal
computers by combining their knowledge of engineering, computer science, and math
analysis.

3. Database Administration Career – monitor system performance, managing data, sharing of


data.

 Database Administrator - specialized computer systems administrator who maintains a


successful database environment by directing or performing all related activities to keep
the data secure.

4. ICT Education Career – specializes in ICT teaching and trainings, and ICT education
management.

 IT Lecturer - educate students on how computers work, from the basic science and
mathematics behind their operation to the actual hardware and the software built on those
foundations.

 Training Officer - identify staff training and development needs, and for planning,
organizing and overseeing appropriate training.

 Education Manager - develop policy, inform course curricula and teaching methods,
manage educational systems, recruitment, financial and physical resources.

5. Internet and E-commerce Careers – special instances of other ICT careers.

 Web Architect – create and implement interactive programs.

 Web Designer – develop and create websites and associated applications.

 Web Programmer - use a variety of programming languages to create web applications.


They also create these applications based on requests from clients and feedback from
end users.


 Web Administrator – maintain and update their company's website or websites. They help
ensure websites are user friendly and offer an optimal user experience.

6. Multimedia – create and manipulate graphic images, animation, sound, text and video.

 Multimedia Graphics Designer - are the creative minds behind advertisements,


informational videos, and many other types of content you come across online or while
watching television. They combine text, audio, animation, photography, and video to
create informational and impactful content.

 Multimedia Content Author - generate and manipulate graphic images, animations, sound,
text and video into consolidated and seamless multimedia programs.

 Animator – create extensive series of images that form the animation seen in movies,
commercials, television programs, and video games. They typically specialize in one of
these media and may further concentrate on a specific area, such as characters, scenery,
or background design.

7. Software Development Career – translation of requirements into set of instructions.

 Programmer - code and test programming for software and mobile apps.

 Software Engineer - develop information systems by designing, developing, and installing


software solutions. They determine operational feasibility by evaluating analysis, problem
definition, requirements, solution development, and proposed solutions.

8. Project Management Career – problem solving process involving planning, implementation


of project.

 Project Manager – establish project scope by studying strategic business drivers;


discovering and validating business and technical requirements and parameters;
obtaining input from subject-matter experts; examining and recommending changes to
current business practices; developing and writing proposals. They also develop solution
by formulating objectives; planning project life-cycle deliverables and resource availability
and application; preparing installation and modification specifications; leading the
exploration, evaluation, and design of technical solutions.

9. Systems Analysis and Design Career – partner of project managers and system developers.

 Systems Analyst - implement computer system requirements by defining and analyzing


system problems; designing and testing standards and solutions. They also define
application problem by conferring with clients; evaluating procedures and processes.

 Systems Architect - develop computer hardware, software, and network systems. They
are responsible for implementing, maintaining, and operating these systems. Systems
architects customize systems to meet the needs of specific clients.


10. Systems Management and Administration – connections, communication of IT


infrastructure.

 Systems Administrator - is a professional who is held accountable for network


setup, annual server maintenance such as mail servers and file servers, and much more.
Based upon an organization’s requirements and other IT-related infrastructure, a system
administrator is tasked with providing a reliable work environment, particularly whereby
multi-user computers are associated with the LAN network.

 Network Administrator – assist in network design and implementation; provide network


support with a variety of operating systems; install and configure computer network
equipment; and maintain network connectivity of all computer workstations.

Code of Ethics of the Filipino Computing and Information Technology Professional


(Source: http://www.philippinecomputersociety.org/code-of-ethics)

For purposes of this Code, the following terms are defined as follows:

Information Technology - the preparation, collection, creation, transport, retrieval, storage,


access, presentation and transformation of electronic information in all its forms including, but
not limited to, voice, graphics, text, video, data and image.

Information Technology Professional - one who develops or provides information technology


products and/or services to the public.

Preamble:

I will use my special knowledge and skills for the benefit of the public. I will serve employers and
clients with integrity, subject to an overriding responsibility to the public interest, and I will strive
to enhance the competence and prestige of the professional. By these, I mean:

 I will promote public knowledge, understanding and appreciation of information


technology;
 I will consider the general welfare and public good in the performance of my work;
 I will advertise goods or professional services in a clear and truthful manner;
 I will comply and strictly abide by the intellectual property laws, patent laws and other
related laws in respect of information technology;
 I will accept full responsibility for the work undertaken and will utilize my skills with
competence and professionalism;
 I will make truthful statements on my areas of competence as well as the capabilities and
qualities of my products and services;
 I will not disclose or use any confidential information obtained in the course of professional
duties without the consent of the parties concerned, except when required by law;
 I will try to attain the highest quality in both the products and services I offer;
 I will not knowingly participate in the development of Information Technology Systems that
will promote the commission of fraud and other unlawful acts;
 I will uphold and improve the IT professional standards through continuing professional
development in order to enhance the IT profession.


UNIT ASSESSMENTS/ACTIVITIES

1. Aside from the examples of ICT professions discussed, identify ten (10) more jobs and
differentiate their specific roles and responsibilities. Your answers may come from each
of the careers or you may select from any of the careers. Write your answers on a separate
paper.

2. Identify two (2) individuals in your community that are working in the field of ICT (Computer
Science, Information Technology, Information Systems, Computer Engineering). After
which, interview them about what a typical day in their profession is like. You may inquire
about their roles and how they manage their daily job. Write your answer on a
separate paper. As a matter of privacy, you are not to disclose their personal information
or the company they work for.

3. What particular ICT profession do you want to pursue in the future and why? Write your
explanation on a separate paper.

References:

www.aapathways.com.au
www.cio.com
www.fieldengineer.com
www.hiring.monster.com
www.jobhero.com
www.payscale.com
www.study.com
www.targetjobs.co.uk
www.thebalancecareers.com
www.uwa.edu.au
www.yourfreecareertest.com
http://www.philippinecomputersociety.org/code-of-ethics


UNIT V: SOFTWARE

OVERVIEW
The portion of the computer system which provides instructions to the hardware on how
to perform tasks is the software. This module covers topics about software, its major
classifications, and the functions of the different types of software.

LEARNING OUTCOMES
At the end of this module, the student is expected to:

1. Differentiate an application software from a system software


2. Explain the types of a system software
3. Explain the different functions of an operating system
4. Discuss the different means by which application software are made available

COURSE MATERIALS

Definition: Software consists of programs, which are step-by-step instructions that tell the
computer how to perform a task.

Two Categories of Software:

1. System Software
2. Application Software

System Software

 Consists of programs that control or maintain the operations of the computer and its
devices
 Serves as the interface between the user, the application software, and the computer’s
hardware

Two (2) types of System Software


 Operating System
 Utility Programs

Operating System is a collection of programs that :

 coordinates all the activities among computer hardware devices


 manages a computer system's internal workings, its memory, processors, devices,
and file system*
 Provides a means for users to communicate with the computer and other software

*https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_ops
ysintro.htm

An operating system manages a computer’s resources and acts as intermediary


between a user and the computer resources

Functions of an Operating System


1. Helps in the boot operation
Steps in the boot operation

 When a computer is first powered on, it must initially rely only on the code and
data stored in nonvolatile portions of the system’s memory.
 This code is referred to as the BIOS (basic input/output system), firmware
which resides in the ROM.
 The BIOS performs a series of tests called the POST (power-on self test). POST
checks various system components including the system clock, adapter cards,
RAM chips, mouse, keyboard, etc.
 POST results are compared with data in the CMOS. CMOS stores
configuration information such as the amount of memory, current date/time,
types of drives, etc. If any problems are identified, error messages may
display.
 If POST completes successfully, the BIOS searches for the system files and loads
them into memory from storage (usually the hard disk).
 Next, the kernel of the OS loads into memory. Then the OS in memory
takes control of the computer.

2. Provides user interface

Interaction with a software is through its user interface (UI). Three (3) types of
UI:
 Command-line interface – displays a prompt; the user types on the keyboard, and the
computer executes the command and provides textual output
 Menu-Driven interface - user has a list of items to choose from and can make
selection by highlighting one
 Graphical User interface (GUI) - uses windows, icons, pointers, menus

3. Manages programs

The following are the different ways by which an OS handles programs

 Single user / single tasking operating system – allows one user to run one
program at a time
 Single user / multitasking operating system – allows a single user to work on
two or more programs at the same time
 Multiuser operating system – allows two or more users to run programs
simultaneously
 Multiprocessing operating system – supports two or more processors running
programs at the same time

4. Manages memory

This function optimizes the use of the RAM


 OS allocates, or assigns, data and instructions to an area of memory while they


are being processed
 Monitors contents of memory and releases items from memory when the
processor no longer requires them

5. Schedules jobs

The OS determines the order in which jobs are processed. Jobs may include the
following:

 Receiving data from an input device


 Processing instructions
 Sending information to an output device
 Transferring items from storage to memory and vice versa

6. Configures devices

 A Device driver is a small program that tells the OS how to communicate with a
specific device.
 Each I/O device has its own specialized set of commands and thus requires its
own specific driver.
 When you boot the computer, the OS loads each device’s driver.

7. Provides file management and other utilities

OS provides users with the capability of managing files, viewing images, and
other functions such as uninstalling programs, scanning disks, setting screensavers,
etc.

8. Controls network

A network OS organizes and coordinates how multiple users access and share
resources on a network. Resources include hardware, software, data, and information.

Category of OS

 Stand Alone OS – can operate with or without a network

 Network OS – designed for a network; resides on a server

9. Administers security

The network administrator uses the network OS to establish permissions to


resources.


10. Monitors performance

OS assesses and reports information about various computer resources and


devices such as the processor, disk, memory, and network usage

Two (2) types of System Software

 Operating System
 Utility Programs
Utility Program
is a type of system software that allows a user to perform maintenance-type tasks usually
related to managing a computer, its devices, or its programs. Although the OS usually has
built-in utility programs, users oftentimes prefer stand-alone utilities because they offer
improvements.
Some examples of stand-alone utility programs are anti-virus programs, spyware
removers, file compression programs, etc.

Compiler and Interpreter

Compiler – converts the entire source program into machine language; the result is called
the object code. It produces a program listing containing the source code and a list of any
errors.

Interpreter - translates and executes one statement at a time; it reads a code statement,
converts it to one or more machine language instructions, and then executes those
machine language instructions. An interpreter does not produce an object code. One of its
advantages is that when it finds errors, it displays feedback immediately. A disadvantage is
that interpreted programs do not run as fast as compiled programs.

Two Categories of Software:

1. System Software
2. Application Software

Application Software

 can be called end-user programs since they allow users to perform tasks such as
creating documents, spreadsheets, publications, running business, playing games, etc.

 consists of programs designed to make users more productive and assist them with
personal tasks


Application Software Categories


1. Business
 Word Processing
 Spreadsheet
 Database
 Project Management
 Accounting

2. Graphics and Multimedia


 Computer-Aided Design (CAD)
 Desktop Publishing
 Paint-Image Editing
 Video-Audio Editing
 Web Page Authoring
3. Home/Personal/Educational
 Software Suite
 Personal Finance
 Photo/Video Editing
 Educational
 Entertainment

4. Communications
 E-mail
 Chat Facility
 Videoconferencing

Two Categories of Application Software :


 Packaged - mass-produced, copyrighted retail software that meets needs of a wide
variety of users
 Custom - performs functions specific to a business or industry

Software is available in a variety of forms


 Open source software - software provided for use, modification, and redistribution; no
restrictions from the copyright holder regarding modification of the software
 Shareware - copyrighted software that is distributed at no cost for a trial period
 Freeware - copyrighted software provided at no cost by an individual or a company that
retains all rights to the software; distributed for free with the aim of expanding the market
share of a "premium" product.
 Public-domain software - software that has been placed in the public domain: in other
words, there is absolutely no ownership such as copyright, trademark, or patent.
Software in the public domain can be modified, distributed, or sold even without any
attribution by anyone*
* https://en.wikipedia.org/wiki/Public-domain_software

Programming Languages
Low Level Languages
 1st GL Machine Language – Instructions are in the form of machine code, 1’s and
0’s
 2nd GL Assembly Language – uses short, English-like, abbreviations to represent
common elements of machine code

Procedural Languages (3rd GL) - use English-like words to write instructions

 COBOL (Common Business Oriented Language)


 C Language

Object Oriented Programming (OOP) Languages - implement an object-oriented
design. An advantage is the ability to reuse and modify existing objects: the objects
can be reused in many systems, are designed for repeated use, and become stable over
time.

 Java


 C++
 C#

Read:

Discovering Computers by Shelly, Cashman, Vermaat
Introduction to Information Technology by Albano, Atole & Ariola

UNIT ASSESSMENTS/ACTIVITIES
1. Enumerate 10 available operating systems. (Get familiar with their corresponding logos).
2. Explain how an OS manages memory.
3. Differentiate a system software from an application software.
4. Give your own example of an application software.
5. Give your own example of a system software.
6. Discuss the difference between a freeware, a shareware, and an open source software.
7. Enumerate programming languages that are considered object-oriented which have not
been mentioned in this IM.
8. Differentiate a compiler from an interpreter.
9. Read the topic on how the OS participates in the boot operation and enumerate the steps
which happen before the OS takes control of the computer.
10. Give examples of program codes which are interpreted rather than compiled.
11. Categorize the following software, application or system software.
11.1 Payroll system 11.6 Microsoft Word
11.2 Avast anti-virus 11.7 MySQL
11.3 Ubuntu 11.8 Defragmenter
11.4 Inventory System 11.9 Screen saver
11.5 Image viewer 11.10 Disk Scanner
12.Give one example each of a freeware, shareware, and an open source software.


UNIT VI: NETWORK, INTERNET, AND INTERNET PROTOCOLS

OVERVIEW

From the early times, people had seen the need to communicate over a distance
(telecommunication). They used various means to communicate, such as smoke signals,
sound (drums), and homing pigeons. In later years, with the advent of electricity, other devices
were invented to facilitate telecommunication, such as the telegraph, telephone, and radio.

Today, we see a tremendous leap in communications. Modern communication is


now about the movement of data in a vast network of computers. This network of computer
networks is called the Internet. This module covers the basics of networking and Internet
concepts.

LEARNING OUTCOMES

At the end of this module, the student is expected to:

1. Discuss network models


2. Explain network topologies
3. Explain internet concepts

COURSE MATERIALS

Concepts in Computer Networking

What is Data Communication


(https://www.tutorialspoint.com/data_communication_computer_network/index.htm)

Data communications refers to the transmission of digital data between two or more
computers. A computer network or data network is a telecommunications network that allows
computers to exchange data.
History of Data Communication

 1753 - A proposal was submitted to a Scottish magazine to run a communication line between
villages, composed of 26 parallel wires, one representing each letter of the alphabet
 1833 - Carl Friedrich Gauss developed a system based on a 5x5 matrix representing 25
letters so messages could be sent over a single wire
 1832 - Samuel F. B. Morse invented the telegraph, the first successful data communication
system that used binary coded electrical signals to transmit information.
Samuel Morse also developed the Morse code, which used dots and dashes to represent
information
 1840 - Morse secured an American patent for the telegraph
 1844 - The first telegraph line was established between Baltimore and Washington D.C. with
the first message conveyed “What hath God wrought!”


 1875 - Alexander Graham Bell invented the telephone


 1899 - Guglielmo Marconi succeeded in sending radio (wireless) telegraph messages
 1920 - First radio transmissions of the human voice. Birth of sound-broadcasting in the
improvised studios of the Marconi company
 1957 - Launch of Sputnik-1, the Earth’s first artificial satellite
 1963 - Launch of the world’s first telecommunication satellite, Syncom-1, in geostationary
orbit
Computer network - two or more computers interconnected with one another for the purpose of
sharing resources such as databases, backup devices, and others.
The elements of a computer network are (Source: Network Fundamentals – Cisco):
 Protocols – rules and agreements on how the different parts of the network will operate.
A protocol stack is a list or set of protocols used by a system.
 Data and Messages –information used or transmitted / received in the network.
 Communications medium – interconnects the different devices in the network. Ex.
copper and fiber optic cables
 Devices - includes computers, routers, switches, hubs, bridges and others.

Classification of Computer Networks According to Geographic Scope


• Local Area Network
- Computers confined to one building or cluster of buildings
- Relatively high speed of transmission
- Usually privately owned
• Metropolitan Area Network
- Computers located within a city or cluster of cities
- Usually use facilities of telecom or network service providers
• Wide Area Network
- Computers located outside a building or cluster of buildings
- Computers may be located between two or more cities, or between two or more
countries
• Global Area Network
- Computers located in different countries around the world. Ex. Internet
• Personal Area Network (PAN)
- interconnection of information technology devices within the range of an individual
person, typically within a range of 10 meters

Two Basic Network Models

 Peer-to-peer – all computers share their resources with all the other computers
in the network.
 Dedicated client / server – one or more computers are assigned as a server and the rest of
the computers are clients.
- A network architecture where one centralized, powerful computer (called the server) is a
hub to which many less powerful personal computers or workstations (called clients) are
connected.
- Server manages all network resources; dedicated; engineered to manage, store, send
and process data; provides the service


- Clients are workstations on which users run applications. Clients rely on servers for
resources; request the service

Network Topology refers to the appearance or the way a network is laid out.

Network topology could be:

 Physical Topology - refers to the physical lay out (geometric representation) of the computers
in a network.
 Logical Topology – Describes how data actually flow through the network. It refers to the
logical layout of the computers in a network (how computers access other computers in the
network)
Most Basic topologies

 Point to point - two stations are connected


 Multipoint – connects three or more stations

Point to point - Only two stations are connected by a transmission medium.

Advantages
• Very simple
• Transmission medium is ready for use anytime by the two stations.

Disadvantage
• Fewer stations can communicate with each other directly.

Multipoint – Connects more than 2 stations

The following are examples of multipoint topologies: star, bus, ring, mesh, tree, and hybrid.

Figure 6-1. Multipoint topologies
(Image Source: systemzone.net)


Physical Star Topology

• Stations are connected directly to a centrally located device such as a computer or hub which
acts like a multipoint connector.
• The central node is sometimes called central control, star coupler, or central switch.

Advantages
• If link of one computer fails, others can still communicate
• Requires less cable and communication ports than mesh topology
• Could be less expensive than mesh topology
• Easier to install compared to mesh topology
• Easier fault isolation compared to bus

Disadvantages
• If central hub breaks down, all communications are down
• Less robust compared to mesh topology
• Often requires more cable than bus

Physical Bus Topology

• It uses a multipoint data communications circuit.


• All stations are connected to a single transmission medium, which allows all stations to receive
transmitted packets.
• Also called multidrop, linear bus, or horizontal bus.

Advantages
• Requires no special routing or circuit switching.
• Not necessary to store and forward messages.
• Requires less cable than other topology
• Easier to install compared to other topology
• Requires less communication ports than mesh and ring topology
• Could be less expensive than mesh topology

Disadvantages
• Computers cannot always communicate immediately (because of collisions)
• If the cable breaks down, the entire network could be disrupted
• More difficult fault isolation
• Not suitable when stations are transmitting most of the time (because of too many collisions).

Physical Ring Topology

• All stations are connected in tandem (series) to form a closed loop or circle.

Advantages
• Requires less cable than mesh topology
• Requires less communication ports than mesh topology
• Relatively easy to install
• Could be less expensive than mesh topology


Disadvantages
• Delay is longer for non-adjacent stations.
• If one cable breaks down, entire network could be disrupted
• Requires more communication port than bus or star topology

Physical Mesh Topology

• Every station has a direct two-point communication to every other station.


• Also called fully connected.
• A fully connected circuit requires n(n-1)/2 duplex transmission links to interconnect n stations
(n(n-1) links if each direction is counted separately); for example, 5 stations need 5x4/2 = 10 links.

Advantages
• Computers can communicate anytime (no contention for use of medium)
• Robust (Data could have alternate routes)
• Has more privacy and security
• Easier fault isolation

Disadvantages
• More expensive and bulkier cabling / communication lines
• More communication ports are needed
• More cumbersome installation and reconnection
• Could have higher total cost of ownership

Physical Hybrid Topology

• It combines two or more of the traditional topologies to form a larger, more complex
topology.

Advantages
• Combines the benefits of traditional topologies used.

Disadvantages
• Combines the disadvantages of traditional topologies used.

Physical Tree Topology


(https://systemzone.net/computer-network-topology-outline/)

• A central ‘root’ node (the top level of the hierarchy) is connected to one or more other nodes
that are one level lower in the hierarchy with a point-to-point physical link.
• A second-level node may also be connected to one or more other nodes that are one
level lower in the hierarchy with another point-to-point link.
• The top-level node, i.e. the root node, is the only node that has no other node above it in the
hierarchy.

INTERNET CONCEPTS

Internet

A network of computer networks. It allows any connected computer to send data to and receive
data from any other connected computer.


Internet History

 1962 - J.C.R. Licklider of MIT envisioned a globally interconnected set of computers through
which everyone could quickly access data and programs from any site. In spirit, the concept
was very much like the Internet of today.
 1969 - The Pentagon’s ARPANET (Advanced Research Projects Agency Network) became
functional, linking scientific and academic researchers across the US
 1972 – First public demonstration of the ARPANET to the public; Initial application of the
electronic mail was introduced
 1983 -
o ARPANET adopted the Transmission Control Protocol and Internet Protocol (TCP/IP)
;
o ARPANET was being used by a significant number of R&D and operational
organizations;
o Widespread development of LANs, PCs, and workstations allowed the internet to
flourish
 1987 - there were nearly 30,000 hosts on the Internet. The original Arpanet protocol had been
limited to 1,000 hosts, but the adoption of the TCP/IP standard made larger numbers of hosts
possible.
 1989 – The World Wide Web was born
 1995
o is often considered the first year the web became commercialized. While there were
commercial enterprises online prior to ’95, there were a few key developments that
happened that year. First, SSL (Secure Sockets Layer) encryption was developed by
Netscape, making it safer to conduct financial transactions (like credit card
payments) online.
o The Federal Networking Council (FNC) unanimously passed a resolution defining the
term Internet. “Internet” refers to the global information system that - (i) is logically
linked together by a globally unique address space based on the Internet Protocol (IP)
or its subsequent extensions/follow-ons; (ii) is able to support communications using
the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of its subsequent
extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or
makes accessible, either publicly or privately, high level services layered on the
communications and related infrastructure described herein
 Today, the Internet remains a public cooperative and independent network
Each organization on the Internet is responsible only for maintaining its own network

The World Wide Web

• collection of interlinked multimedia documents that are stored on the Internet and accessed
using a common protocol (HTTP).
• Each electronic document on the web is called a web page
• A collection of web pages is called a web site

World Wide Web Consortium (W3C)

The World Wide Web Consortium (W3C) oversees research and sets standards and
guidelines for many areas of the Internet


About 350 organizations are members of W3C. They advise, define standards, and address
other issues.

Sir Tim Berners-Lee, a British computer scientist, invented the World Wide Web in 1989.

By October of 1990, Tim had written the three fundamental technologies that remain the
foundation of today’s web (and which you may have seen appear on parts of your web browser):

 HTML: HyperText Markup Language. The markup (formatting) language for the web.
 URI: Uniform Resource Identifier. A kind of “address” that is unique and used to identify
each resource on the web. It is also commonly called a URL.
 HTTP: Hypertext Transfer Protocol. Allows for the retrieval of linked resources from across
the web.

Internet service provider (ISP),

An Internet service provider (ISP), often a telephone company or other
telecommunications provider, provides services such as Internet access, Internet transit, domain
name registration and hosting, dial-up access, leased line access, and colocation, e.g. America
Online (AOL), CompuServe.

The ISP connects to its customers using a data transmission technology such as
• Dial-up
• DSL (Digital Subscriber Line)
• Cable modem
• Wireless
• Fiber optics

Internet Protocol Address

IP address is short for Internet Protocol (IP) address. An IP address is a number that uniquely
identifies each computer or device connected to the Internet. IP version 4 addresses are
composed of four groups of numbers separated by dots. Each number is between 0 and 255.

Ex.
127.0.0.1
253.16.44.22
72.48.108.101
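
These dotted-decimal addresses are just 32-bit numbers. A short sketch using Python's standard ipaddress module (added for illustration) makes that visible:

    import ipaddress

    for text in ("127.0.0.1", "253.16.44.22", "72.48.108.101"):
        addr = ipaddress.IPv4Address(text)
        print(addr, "=", int(addr))   # each address is a single 32-bit integer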

Domain Name

Domain Name is the text version of an IP address. The Domain Name System (DNS) is the
method that the Internet uses to store domain names and their corresponding IP addresses.
When you specify a domain name, a DNS server translates the domain name to its associated IP
address

Ex.
www.google.com
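
The translation a DNS server performs can be tried from Python's standard socket module (an illustrative sketch; it needs a working network connection, and the address returned will vary):

    import socket

    # Ask the resolver (and ultimately DNS) for an IP address for the name.
    print(socket.gethostbyname("www.google.com"))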


Web Addresses - Uniform Resource Locator (URL)

A uniform resource locator, abbreviated URL (also known as a web address), is the full
address of a web page or file/program.
• The full address usually starts with "http://" for Web pages, or "https://" for secure Web pages
• Following these prefixes are the "www.", the domain name, the path, and the file name:
http://www.domain_name/path/filename
• As with physical addresses, the exact layout can vary.
• Sometimes there will be more parts to the address. Domains can be divided into multiple
subdomains.
• Sometimes there will be fewer parts - typically the larger the organization, the shorter their
domain name, ibm.com for example.
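
Python's standard urllib.parse module splits a URL into exactly these parts (a brief sketch; the address below is only an example):

    from urllib.parse import urlparse

    url = urlparse("http://www.example.com/some/path/page.html")
    print(url.scheme)   # http
    print(url.netloc)   # www.example.com (the domain name)
    print(url.path)     # /some/path/page.html (the path and file name)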

Protocol

In the networking and communications area, a protocol is the formal specification that
defines the procedures that must be followed when transmitting or receiving data. Protocols define
the format, timing, sequence, and error checking used on the network.

TCP/IP
Transmission Control Protocol / Internet Protocol
Foundation protocols for the internet
Manages conversations between servers and web clients

HTTP
HTTP stands for HyperText Transfer Protocol.
It is what browsers and web servers rely on for exchanging data.
HTTP underpins the World Wide Web (WWW).
HTTP is the protocol between the client (your computer using a web browser) and the server
(the web server serving web pages and similar online resources).
It is the standard information-exchange procedure between two communicating parties or
computers, such as the client and the server.

HTTPS
stands for HyperText Transfer Protocol Secure and is a secure version of HTTP. It is
basically an encrypted HTTP channel that encrypts all the information being exchanged,
making the transfer of confidential information secure against eavesdropping.

Other protocols
FTP File transfer protocol – used for interactive file transfer between systems
SMTP Simple Mail Transfer Protocol - for transfer of electronic messages (and
attachments)

Intranet

 private network accessible only by the organization's members, employees, or others with
authorization
 Internal website that takes advantage of the same basic technology as the Internet;


 a local or restricted communications network, especially a private network created using
World Wide Web software.

UNIT ASSESSMENTS

1. Differentiate a ring from a star topology.
2. Discuss the advantages of a mesh topology.
3. Differentiate peer-to-peer from client-server architecture.
4. Explain the different network protocols.
5. Differentiate the classification of network according to geographical scope.
6. Differentiate internet from an intranet.
7. Give examples where intranet is used.

References:

Data Communication and Networking by Wayne Tomasi
Data Communication and Networking by Behrouz A. Forouzan
Data Communication and Networking by William Stallings
Network Fundamentals by Mark Dye, Rick McDonald, and Antoon Rufi
https://www.webfx.com/blog/web-design/the-history-of-the-internet-in-a-nutshell/
https://www.internetsociety.org/internet/history-internet/brief-history-internet/
https://webfoundation.org/about/vision/history-of-the-web/
http://www.igoldrush.com/domain-guide/domain-name-basics/the-difference-between-urls-and-
domain-names
http://www.firewall.cx/networking-topics/protocols/123-introduction-to-protocols.html
http://www.kavoir.com/2009/03/http-explained-what-does-http-stand-for-what-is-http-meaning-
and-https-definition.html


UNIT VII: TRENDS AND ISSUES IN INFORMATION AND COMMUNICATIONS TECHNOLOGY (ICT)

OVERVIEW
This module covers the advancement and application of information technology. Some of
the trends in information technology are cloud computing, mobile applications, analytics, the
Internet of Things, and data security.

LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Demonstrate awareness of current ICT trends and social issues.
2. Explain the current ICT trends and social issues and the impact they are having on society.
3. Apply knowledge of ICT trends and issues to one's own studies and work.

COURSE MATERIALS
TRENDS IN ICT
The 21st century has been defined by the application of and advancement in information
technology. Information technology has become an integral part of our daily life. According to
the Information Technology Association of America, information technology is defined as "the
study, design, development, application, implementation, support or management of
computer-based information systems."
Information technology has served as a big change agent in different aspects of business
and society. It has proven to be a game changer in resolving economic and social issues.
Some of the advanced developments in information technology are:
1. Cloud Computing
One of the most talked-about concepts in information technology is cloud computing.
Cloud computing is defined as the utilization of computing services, i.e., software as well as
hardware, delivered as a service over a network. Typically, this network is the internet.
More and more businesses around the world are turning to cloud computing to help
support their business development demands. Cloud services allow companies to offload data
management, backend development, and even design so that their talent can focus on
innovation. To achieve better IT results, companies must build or reconfigure the appropriate
policies and workflow for a cloud-based approach. Cloud computing is expected to continue
being one of the most vital future trends in information technology.
Cloud computing offers three broad types of services: Infrastructure as a Service
(IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).


Some of the benefits of cloud computing are as follows:
1. Cloud computing reduces the IT infrastructure cost of the company.
2. Cloud computing promotes the concept of virtualization, which enables servers and
storage devices to be utilized across the organization.
3. Cloud computing makes maintenance of software and hardware easier, as
installation is not required on each end user's computer.
Some issues concerning cloud computing are privacy, compliance, security, legal
exposure, abuse, and IT governance.

2. Internet of Things
The Internet of Things (IoT) is transforming our physical world into a complex and
dynamic system of connected devices on an unprecedented scale.
Advances in technology are making possible a more widespread adoption of IoT, from
pill-shaped micro-cameras that can pinpoint thousands of images within the body, to smart
sensors that can assess crop conditions on a farm, to the smart home devices that are
becoming increasingly popular. But what are the building blocks of IoT? And what are the
underlying technologies that drive the IoT revolution?
The explosive growth of the “Internet of Things” is changing our world and the rapid
drop in price for typical IoT components is allowing people to innovate new designs and
products at home.
Internet of Things (IoT) devices are rapidly making their way into corporate spaces.
From gathering new data to the automation of infrastructure, companies are finding many
benefits from adding connectivity and intelligence to physical infrastructure. According to
CompTIA, adding digital capabilities to everyday components will drastically increase the
scope of IT responsibilities.

3. Mobile Application
Another emerging trend within information technology is mobile applications
(software applications on smartphones, tablets, etc.).
Mobile applications, or mobile apps, have been a success since their introduction. They
are designed to run on smartphones, tablets, and other mobile devices, and are available for
download from the stores of various mobile platforms such as Apple, BlackBerry, and Nokia.
Some mobile apps are free, whereas others involve a download cost. The revenue collected
is shared between the app distributor and the app developer.

4. Human Computer Interaction


Human-computer interaction (HCI) is a multidisciplinary field of study focusing on the
design of computer technology and, in particular, the interaction between humans (the users)
and computers. While initially concerned with computers, HCI has since expanded to cover
almost all forms of information technology design.
HCI surfaced in the 1980s with the advent of personal computing, just as machines
such as the Apple Macintosh, IBM PC 5150 and Commodore 64 started turning up in homes


and offices in society-changing numbers. For the first time, sophisticated electronic systems
were available to general consumers for uses such as word processors, games units and
accounting aids. Consequently, as computers were no longer room-sized, expensive tools
exclusively built for experts in specialized environments, the need to create human-computer
interaction that was also easy and efficient for less experienced users became increasingly
vital. From its origins, HCI would expand to incorporate multiple disciplines, such as computer
science, cognitive science and human-factors engineering.
HCI soon became the subject of intense academic investigation. Those who studied
and worked in HCI saw it as a crucial instrument to popularize the idea that the interaction
between a computer and the user should resemble a human-to-human, open-ended dialogue.
Initially, HCI researchers focused on improving the usability of desktop computers (i.e.,
practitioners concentrated on how easy computers are to learn and use). However, with the
rise of technologies such as the Internet and the smartphone, computer use would
increasingly move away from the desktop to embrace the mobile world.

5. Data Analytics
The field of analytics has grown manyfold in recent years. Analytics is a process
that helps in discovering informational patterns in data. The field of analytics is a
combination of statistics, computer programming, and operations research.
The field of analytics has shown growth in data analytics, predictive analytics, and
social analytics.
Data analytics is a tool used to support the decision-making process. It converts raw
data into meaningful information.
Predictive analytics is a tool used to predict future events based on current and
historical information.
Social media analytics is a tool used by companies to understand and accommodate
customer needs.
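
As a tiny, made-up illustration of converting raw data into meaningful information (a sketch,
not from the original module), the Python snippet below summarizes a week of hypothetical
daily sales figures:

import statistics

daily_sales = [1200, 1450, 980, 1760, 1320, 1510, 1105]  # raw data (hypothetical)

print("Total sales:", sum(daily_sales))                        # overall volume
print("Average per day:", round(statistics.mean(daily_sales)))
print("Best day:", max(daily_sales))
print("Day-to-day variability:", round(statistics.stdev(daily_sales), 2))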
The ever-changing field of information technology has seen great advancement and
change in the last decade. From the emerging trends, it can be concluded that its influence
on business is ever growing and that it will help companies serve customers better.

6. Artificial Intelligence
Artificial intelligence (AI) requires significant computing resources (which can be
procured in the cloud), algorithms that allow learning (which can be baked into products
or provided as a service), and contextual awareness (which can come from IoT devices or
massive collections of data). By adding a layer of intelligence to the technical solutions they
are building, companies can both manage a more extensive IT architecture and solve a
broader range of problems.


7. Data Security
Data security is one of the top trending technologies in computer science. As IT services
rely on digital technology to work faster, data security becomes a top priority. It is difficult to
improve security efforts when technology is updating so rapidly. Many businesses have
increased investments in security, but beyond the technical aspects, organizations will also
begin building business processes that enhance security. In order to adapt to rapid IT
development, companies will have to shift their security mindset from technology-based
defenses to proactive steps that include technology, process, and education. Data security
will remain important among the latest trends in information technology.

ISSUES IN ICT
1. Data Privacy
Data privacy refers to the act of ensuring the integrity, confidentiality, and availability
of personal information that is collected, stored, and processed. Data privacy, also called
information privacy, is the aspect of IT that deals with the ability an organization or individual
has to determine what data in a computer system can be shared with third parties.
Data privacy is challenging since it attempts to use data while protecting an individual's
privacy preferences and personally identifiable information. The fields of computer security,
data security, and information security all design and use software, hardware and human
resources to address this issue.
To ensure data privacy, the Philippines passed Republic Act No. 10173, also known
as the Data Privacy Act of 2012.

Figure 7.1. RA 10173 - Data Privacy Act Infographics



Figure 7.2. Data Privacy Rights (Infographic by Jessa Malapit)

2. Cybersecurity
The cybersecurity challenge is two-fold. First, cyberattacks are growing in size
and sophistication; second, millions of cybersecurity jobs remain unfilled.
Organizations cannot take IT security lightly. An analysis of worldwide identity and
access management by the International Data Corporation revealed that 55% of consumers
would switch platforms or providers due to the threat of a data breach, and 78% would switch
if a breach impacted them directly. Customers aren’t willing to put their data at risk.
The problem is there aren’t enough IT professionals with cybersecurity expertise. Forty
percent of IT decision-makers say they have cybersecurity skills gaps on their teams. It’s also
identified as the most challenging hiring area in IT.


There isn’t an immediate solution to this problem, but a long-term fix is to build your
cyber workforce from the inside. Invest in cybersecurity training and upskill your current staff.
Hiring and outsourcing isn’t always a viable (or cheap) solution. Current IT professionals who
know the industry are more apt to transition into successful cybersecurity professionals.

Figure 7.3. Electronic and Cybercrime Prevention Act


UNIT ASSESSMENTS
1. What new technology coming out in the next 10 years do you think will disrupt the global IT
industry?
2. Make an analysis on how cybersecurity is being implemented in the Philippines.
3. From recent technology updates, what new devices are being connected to the internet?
4. How do students apply the concept of cloud computing?
5. Give examples of cybersecurity attacks which became headlines in the past year
(Philippines or abroad)
6. Related to question #5, give an example specific to the intrusion of data privacy.
7. Give examples of data security measures which are being implemented in certain
institutions, e.g., banks, schools, offices.
8. Identify and discuss one or two application of Internet of Things that you think might be
useful in this time of health crisis.
9. If you are to create a mobile application, conceptualize an application that might be effective
in this situation of health crisis.
10. Why do you think access to correct and accurate data is essential these days with respect
to politics, health, world events, and the like?

References:
https://www.interaction-design.org/literature/topics/human-computer-interaction
https://online.stanford.edu/courses/xee100-introduction-internet-things
https://www.globalknowledge.com/us-en/resources/resource-library/articles/12-challenges-
facing-it-professionals/#2
https://www.bizvibe.com/blog/it-solutions-outsourcing/latest-technology-trends-information-
technology/
https://www.managementstudyguide.com/emerging-trends-in-information-technology.htm
https://www.coursera.org/learn
https://insidemanila.ph/article/293/heres-what-we-know-so-far-about-the-dfa-data-breach
https://www.privacy.gov.ph/data-privacy-act/
https://lawphil.net/statutes/repacts/ra2012/ra_10175_2012.html


UNIT VIII: SPECIAL INTEREST TOPICS IN ICT

OVERVIEW
This module gives an introduction to three special interest topics related to information
technology: Artificial Intelligence (AI), Data Science, and Social Networking and Society.
The topic on artificial intelligence defines AI, lists down the milestones in AI’s history and
explains the two buzzwords related to AI, machine learning and deep learning. It also discusses
the different fields where we would see the application of AI.
Data science, on the other hand, discusses how this field of science came about. The
emergence of big data and the need to analyze this huge amount of data prompted the beginnings
of data science. It also explains the roles and skills of a data scientist.
Spending time in social networking sites has become a part of almost everybody’s daily
routine. The topic on social networking delves into the pros and cons of social media. It also briefly
discusses the most popular social networking sites.

PART 1: ARTIFICIAL INTELLIGENCE


LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Explain the difference between AI, machine learning, and deep learning
2. Provide applications of AI in different industries and in daily use.
3. Identify important milestones in the history of AI
4. Explain supervised and unsupervised learning, and other concepts related to AI

COURSE MATERIALS
Artificial Intelligence
(https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp
https://www.britannica.com/technology/artificial-intelligence
https://pathmind.com/wiki/ai-vs-machine-learning-vs-deep-learning)

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that
are programmed to think like humans and mimic their actions. AI is frequently applied to the
project of developing systems endowed with the intellectual processes characteristic of humans,
such as the ability to reason, discover meaning, generalize, or learn from past experience.

John McCarthy, widely recognized as one of the godfathers of AI, defined it as “the science
and engineering of making intelligent machines.”

Other definitions of artificial intelligence:

 A branch of computer science dealing with the simulation of intelligent behavior in
computers.
 The capability of a machine to imitate intelligent human behavior.


 A computer system able to perform tasks that normally require human intelligence, such
as visual perception, speech recognition, decision-making, and translation between
languages.

History of Artificial Intelligence


(https://www.javatpoint.com/history-of-artificial-intelligence)

o Year 1943: The first work which is now recognized as AI was done by Warren McCulloch
and Walter Pitts in 1943. They proposed a model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950.
Turing published "Computing Machinery and Intelligence," in which he proposed a test
that can check a machine's ability to exhibit intelligent behavior equivalent to human
intelligence, called the Turing test.
o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence
program," named "Logic Theorist." This program proved 38 of 52 mathematics theorems
and found new and more elegant proofs for some of them.
o Year 1956: The term "Artificial Intelligence" was first adopted by American computer
scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined
as an academic field.
o Year 1966: Researchers emphasized developing algorithms which can solve
mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, named
ELIZA.
o Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
o The period between 1974 and 1980 was the first AI winter. An AI winter refers to a period
when computer scientists dealt with a severe shortage of government funding for AI
research.
o During AI winters, public interest in artificial intelligence decreased.
o Year 1980: After the AI winter, AI came back with "expert systems." Expert systems were
programs that emulate the decision-making ability of a human expert.
o In 1980, the first national conference of the American Association for Artificial Intelligence
was held at Stanford University.
o The period between 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research due to its high cost and
inefficient results, even though expert systems such as XCON had been very cost
effective.
o Year 1997: IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the
first computer to beat a reigning world chess champion.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic
vacuum cleaner.
o Year 2006: AI made its way into the business world. Companies like Facebook, Twitter,
and Netflix started using AI.
o Year 2011: IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex
questions as well as riddles. Watson proved that it could understand natural language
and solve tricky questions quickly.


o Year 2012: Google launched the Android app feature "Google Now," which was able to
provide information to the user in the form of predictions.
o Year 2014: The chatbot "Eugene Goostman" won a competition based on the famous
"Turing test."
o Year 2018: IBM's "Project Debater" debated complex topics with two master debaters
and performed extremely well.
o In the same period, Google demonstrated "Duplex," an AI virtual assistant that booked a
hairdresser appointment over the phone; the person on the other end did not notice that
she was talking with a machine.

Applications of Artificial Intelligence


(https://www.javatpoint.com/application-of-ai
https://www.valluriorg.com/blog/artificial-intelligence-and-its-applications/)

Figure 8.1 Applications of AI


(Image Source: gettingsmart.com)

The applications for artificial intelligence are endless. The technology can be applied to
many different sectors and industries.

AI in Healthcare: Companies are applying machine learning to make better and faster diagnoses
than humans. One of the best-known technologies is IBM’s Watson. It understands natural
language and can respond to questions asked of it. The system mines patient data and other
available data sources to form a hypothesis, which it then presents with a confidence scoring
schema.

AI in Finance: The finance industry is implementing automation, chatbots, adaptive
intelligence, algorithmic trading, and machine learning into financial processes. AI is used to
detect and flag activity in banking and finance, such as unusual debit card usage and large
account deposits, all of which help a bank's fraud department. Applications of AI are also
being used to help streamline trading and make it easier, by making supply, demand, and
pricing of securities easier to estimate.
AI in Business: Robotic process automation is being applied to highly repetitive tasks normally
performed by humans. Machine learning algorithms are being integrated into analytics and CRM
(Customer relationship management) platforms to uncover information on how to better serve


customers. Chatbots have already been incorporated into websites and e-commerce
platforms to provide immediate service to customers. Automation of job positions has also
become a talking point among academics and IT consultancies.
AI in Education: It automates grading, giving educators more time. It can also assess students
and adapt to their needs, helping them work at their own pace.

AI in Automotive Industry: Some automotive companies are using AI to provide virtual
assistants to their users for better performance. For example, Tesla has introduced TeslaBot,
an intelligent virtual assistant. Various companies are currently working on developing
self-driving cars, which can make your journey safer and more secure. Just like humans,
self-driving cars need sensors to understand the world around them and a brain to collect
and process information and choose specific actions based on what is gathered. Autonomous
vehicles are equipped with advanced tools to gather information, including long-range radar,
cameras, and LiDAR (light detection and ranging).

AI in Gaming: AI can be used for gaming purposes. AI machines can play strategic games
like chess, where the machine needs to think through a large number of possible positions.
AI in Data Security: The security of data is crucial for every company, and cyber-attacks are
growing very rapidly in the digital world. AI can be used to make your data safer and more
secure. Examples such as the AEG bot and the AI2 platform are used to detect software bugs
and cyber-attacks more effectively.

AI in Social Media: Social media sites such as Facebook, Twitter, and Snapchat contain
billions of user profiles, which need to be stored and managed in a very efficient way. AI can
organize and manage massive amounts of data. AI can analyze lots of data to identify the
latest trends, hashtags, and requirements of different users.

AI in Travel & Transport: AI is in high demand in the travel industry. It is capable of doing
various travel-related work, from making travel arrangements to suggesting hotels, flights,
and the best routes to customers. The travel industry uses AI-powered chatbots that can
interact with customers in a human-like way for better and faster responses.

AI in Robotics: Artificial intelligence has a remarkable role in robotics. Usually, general robots
are programmed to perform repetitive tasks, but with the help of AI we can create intelligent
robots that can perform tasks from their own experience without being pre-programmed.
Humanoid robots are the best examples of AI in robotics; recently, the intelligent humanoid
robots Erica and Sophia have been developed, which can talk and behave like humans.

AI in Entertainment: We currently use AI-based applications in our daily life through
entertainment services such as Netflix or Amazon. With the help of ML/AI algorithms, these
services show recommendations for programs or shows. The role of AI in film, television and
media can also be felt in marketing and advertising, personalization of user experience, and
search optimization. (https://emerj.com/ai-sector-overviews/ai-in-movies-entertainment-visual-media/)


Machine learning (ML)


(https://www.deeplearningbook.org/contents/ml.html)

Machine Learning is seen as a subset of artificial intelligence. It is the study of computer
algorithms that improve automatically through experience. Machine learning algorithms build
a mathematical model based on sample data, known as "training data", in order to make
predictions or decisions without being explicitly programmed to do so.
Machine learning enables us to tackle tasks that are too difficult to solve with fixed
programs written and designed by human beings. From a scientific and philosophical point of
view, machine learning is interesting because developing our understanding of it entails
developing our understanding of the principles that underlie intelligence.

Learning Algorithms
(https://www.deeplearningbook.org/contents/ml.html)

A machine learning algorithm is an algorithm that is able to learn from data.


Learning has been defined by Mitchell (1997) as follows: "A computer program is said to learn
from experience E with respect to some class of tasks T and performance measure P, if its
performance at tasks in T, as measured by P, improves with experience E."

In this relatively formal definition of the word “task,” the process of learning itself is not the
task. Learning is our means of attaining the ability to perform the task. For example, if we want a
robot to be able to walk, then walking is the task. We could program the robot to learn to walk, or
we could attempt to directly write a program that specifies how to walk manually.

Categories of Machine Learning


(https://en.wikipedia.org/wiki/Machine_learning Bishop, C.M. (2006), Pattern Recognition and Machine Learning)

Machine learning approaches are traditionally divided into three broad categories, depending
on the nature of the "signal" or "feedback" available to the learning system:

 Supervised learning: The computer is presented with example inputs and their desired
outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to
outputs.
 Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to
find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden
patterns in data) or a means towards an end (feature learning).
 Reinforcement learning: A computer program interacts with a dynamic environment in which
it must perform a certain goal (such as driving a vehicle or playing a game against an
opponent). As it navigates its problem space, the program is provided feedback that's
analogous to rewards, which it tries to maximize.
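
To ground the idea of supervised learning, here is a minimal sketch (illustrative, made-up
data, not from the original module) in which the "teacher" supplies example inputs and desired
outputs and the program learns the rule y = w*x + b by least squares. In Mitchell's terms, the
task T is prediction, the experience E is the labeled examples, and the performance measure
P is how closely predictions match the desired outputs:

# Example inputs and desired outputs supplied by the "teacher".
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]   # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates of the slope w and intercept b.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(f"learned rule: y = {w:.2f}*x + {b:.2f}")
print("prediction for unseen input x = 6:", round(w * 6 + b, 2))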

Deep learning
(https://www.investopedia.com/terms/d/deep-learning.asp
https://orbograph.com/deep-learning-how-will-it-change-healthcare/)

Deep Learning is an artificial intelligence (AI) function that imitates the workings of the
human brain in processing data and creating patterns for use in decision making. Deep learning
is a subset of machine learning in artificial intelligence that has networks capable of learning


unsupervised from data that is unstructured or unlabeled. It is also known as deep neural
learning or a deep neural network.

Deep learning, a subset of machine learning, utilizes a hierarchical level of artificial neural
networks to carry out the process of machine learning. The artificial neural networks are built like
the human brain, with neuron nodes connected together like a web. While traditional programs
build analysis with data in a linear way, the hierarchical function of deep learning systems enables
machines to process data with a nonlinear approach.

Figure 8.2. An illustration of a deep learning neural network (Source: University of Cincinnati)

Deep learning, also known as hierarchical learning or deep structured learning, is a type
of machine learning that uses a layered algorithmic architecture to analyze data.
In deep learning models, data is filtered through a cascade of multiple layers, with each
successive layer using the output from the previous one to inform its results. Deep learning
models can become more and more accurate as they process more data, essentially learning
from previous results to refine their ability to make correlations and connections.
Deep learning is loosely based on the way biological neurons connect with one another to
process information in the brains of animals. Similar to the way electrical signals travel across
the cells of living creatures, each subsequent layer of nodes is activated when it receives stimuli from
its neighboring neurons.
In artificial neural networks (ANNs), the basis for deep learning models, each layer may
be assigned a specific portion of a transformation task, and data might traverse the layers multiple
times to refine and optimize the ultimate output.
These “hidden” layers serve to perform the mathematical translation tasks that turn raw
input into meaningful output.
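
The toy sketch below (weights chosen by hand rather than learned; purely illustrative) shows
the layered filtering described above: each layer transforms the output of the previous one,
applying a nonlinearity at every node:

import math

def layer(inputs, weights, biases):
    # One layer: a weighted sum of the inputs per node, then a tanh nonlinearity.
    return [math.tanh(sum(w * x for w, x in zip(node_weights, inputs)) + b)
            for node_weights, b in zip(weights, biases)]

x = [0.5, -1.0]                                            # raw input
hidden = layer(x, [[0.8, -0.2], [0.4, 0.9]], [0.1, -0.3])  # hidden layer
output = layer(hidden, [[1.5, -1.1]], [0.05])              # uses the hidden output
print("hidden layer:", hidden)
print("network output:", output)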


Watch:

Understanding Artificial Intelligence and its Future


https://www.youtube.com/watch?v=SN2BZswEWUA

Deep Learning in 5 Minutes


https://www.youtube.com/watch?v=6M5VXKLf4D4

Read:

Recent use of Machine Learning


https://ph.yahoo.com/news/covid-19-symptom-clusters-223755338.html

UNIT ASSESSMENTS/ACTIVITIES

1. Explain artificial intelligence.
2. Differentiate machine learning from deep learning.
3. Give other specific applications of AI in the fields of manufacturing, education, business
(those not mentioned above)
4. How is machine learning different from traditional programming?
5. How does Netflix use AI?
6. List down the applications of AI in games which were mentioned in the History of AI.
7. List down other games which applied AI which were not mentioned in the History of AI.
8. What do you understand by training data?
9. Differentiate supervised and unsupervised learning. You may give an example.
10. Discuss what a Turing Test is.

References:
https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp
https://www.britannica.com/technology/artificial-intelligence
https://pathmind.com/wiki/ai-vs-machine-learning-vs-deep-learning
https://www.javatpoint.com/history-of-artificial-intelligence
https://www.javatpoint.com/application-of-ai
https://www.valluriorg.com/blog/artificial-intelligence-and-its-applications/
https://emerj.com/ai-sector-overviews/ai-in-movies-entertainment-visual-media/
https://www.deeplearningbook.org/contents/ml.html
https://en.wikipedia.org/wiki/Machine_learning
Bishop, C.M. (2006), Pattern Recognition and Machine Learning
https://www.investopedia.com/terms/d/deep-learning.asp
https://orbograph.com/deep-learning-how-will-it-change-healthcare/


PART 2: DATA SCIENCE

LEARNING OUTCOMES

At the end of this module, the student is expected to:


1. Explain what the field of data science is
2. Identify the skills/expertise needed to be a data scientist
3. Discuss what big data is and how it relates to data science

COURSE MATERIALS

What Is Data Science?


(https://www.investopedia.com/terms/d/data-science.asp)

Data science provides meaningful information based on large amounts of complex data
or big data. Data science, or data-driven science, combines different fields of work in statistics
and computation to interpret data for decision-making purposes.

Data is drawn from different sectors, channels, and platforms including cell phones, social
media, e-commerce sites, healthcare surveys, and Internet searches. The increase in the amount
of data available opened the door to a new field of study based on big data—the massive data
sets that contribute to the creation of better operational tools in all sectors.

The continually increasing access to data is possible due to advancements in technology
and collection techniques. Individuals' buying patterns and behavior can be monitored, and
predictions can be made based on the information gathered.

However, the ever-increasing data is unstructured and requires parsing for effective
decision making. This process is complex and time-consuming for companies—hence, the
emergence of data science.

What is Big Data?


(https://www.forbes.com/sites/peterpham/2015/08/28/the-impacts-of-big-data-that-you-may-not-have-heard-of)
(https://www.investopedia.com/terms/b/big-data.asp)

Historically, data was used as an ancillary to core business and was gathered for specific
purposes. Retailers recorded sales for accounting. Manufacturers recorded raw materials for
quality management. But as the demand for Big Data analytics emerged, data no longer serves
only its initial purpose. Companies able to access huge amounts of data possess a valuable
asset that, when combined with the ability to analyze it, has created a whole new industry.
Big data refers to the large, diverse sets of information that grow at ever-increasing rates.
It encompasses the volume of information, the velocity or speed at which it is created and
collected, and the variety or scope of the data points being covered. Big data often comes from
multiple sources and arrives in multiple formats.
Successful players in big data are recognized well by the market. Some examples of
companies with big data are Amazon, Facebook, Google, Twitter, and SAP, to name a few.


History of Data Science


(https://www.forbes.com/sites/gilpress/2013/05/28/a-very-short-history-of-data-science/#5ca4865a55cf)

1962 John W. Tukey writes in “The Future of Data Analysis”: … Data analysis, and the parts of
statistics which adhere to it, must…take on the characteristics of science rather than those of
mathematics… data analysis is intrinsically an empirical science
1974 Peter Naur publishes Concise Survey of Computer Methods in Sweden and the United
States. Naur offers the following definition of data science: “The science of dealing with data,
once they have been established, while the relation of the data to what they represent is delegated
to other fields and sciences.”
1977 The International Association for Statistical Computing (IASC) is established as a Section
of the ISI. “It is the mission of the IASC to link traditional statistical methodology, modern computer
technology, and the knowledge of domain experts in order to convert data into information and
knowledge.”

1989 Gregory Piatetsky-Shapiro organizes and chairs the first Knowledge Discovery in Databases
(KDD) workshop. In 1995, it became the annual ACM SIGKDD Conference on Knowledge
Discovery and Data Mining (KDD).

1996 Members of the International Federation of Classification Societies (IFCS) meet in Kobe,
Japan, for their biennial conference. For the first time, the term “data science” is included in the
title of the conference (“Data science, classification, and related methods”).

1997 In his inaugural lecture for the H. C. Carver Chair in Statistics at the University of Michigan,
Professor C. F. Jeff Wu (currently at the Georgia Institute of Technology), calls for statistics to be
renamed data science and statisticians to be renamed data scientists.

May 2005 Thomas H. Davenport, Don Cohen, and Al Jacobson publish “Competing on Analytics,”
a Babson College Working Knowledge Research Center report, describing “the emergence of a
new form of competition based on the extensive use of analytics, data, and fact-based decision
making... Instead of competing on traditional factors, companies are beginning to employ
statistical and quantitative analysis and predictive modeling as primary elements of competition."

July 2008 “The Skills, Role & Career Structure of Data Scientists & Curators: Assessment of
Current Practice & Future Needs,” defines data scientists as “people who work where the
research is carried out--or, in the case of data centre personnel, in close collaboration with the
creators of the data--and may be involved in creative enquiry and analysis, enabling others to
work with digital data, and developments in database technology.”

January 2009 Hal Varian, Google’s Chief Economist, tells the McKinsey Quarterly: “The ability to
take data—to be able to understand it, to process it, to extract value from it, to visualize it, to
communicate it—that’s going to be a hugely important skill in the next decades… Because now
we really do have essentially free and ubiquitous data. So the complimentary scarce factor is the
ability to understand that data and extract value from it… I do think those skills—of being able to
access, understand, and communicate the insights you get from data analysis—are going to be
extremely important."

May 2011 David Smith writes in "’Data Science’: What's in a name?”: “The terms ‘Data Science’
and ‘Data Scientist’ have only been in common usage for a little over a year, but they've really


taken off since then: many companies are now hiring for ‘data scientists’, and entire conferences
are run under the name of 'data science'."

September 2011 D.J. Patil writes in “Building Data Science Teams”: “Starting in 2008, Jeff
Hammerbacher (@hackingdata) and I sat down to share our experiences building the data and
analytics groups at Facebook and LinkedIn. In many ways, that meeting was the start of data
science as a distinct professional specialization."

2012 Tom Davenport and D.J. Patil publish “Data Scientist: The Sexiest Job of the 21st Century”
in the Harvard Business Review

The Data Scientist


(https://searchenterpriseai.techtarget.com/definition/data-scientist
https://towardsdatascience.com/how-data-science-will-impact-future-of-businesses-7f11f5699c4d)

A data scientist is a professional responsible for collecting, analyzing, and interpreting
extremely large amounts of data. The data scientist role is an offshoot of several traditional
technical roles, including mathematician, scientist, statistician and computer professional. This
job requires the use of advanced analytics technologies, including machine learning
and predictive modeling.

Figure 8.3 Domains of Data Science


(Image Source: kainos.com)

Since data scientists have an in-depth understanding of data, they work very well in moving
organizations towards deep learning, machine learning, and AI adoption as these companies
generally have the same data-driven aims. They also help in software development services
for software that involves lots of data and analytics.

Data scientists help companies of all sizes to figure out the ways to extract useful
information from an ocean of data to help optimize and analyze their organizations based on these


findings. Data scientists focus on asking data-centric questions, analyzing data, and applying
statistics & mathematics to find relevant results.

Data scientists have their background in statistics & advanced mathematics, AI and
advanced analysis & machine learning. For companies that want to run an AI based project, it is
crucial to have a data scientist on the team in order to customize algorithms, make the most of
their data, and weigh data-centric decisions.

UNIT ASSESSMENTS/ACTIVITIES
1. What fields of science are associated with data science?
2. Why do you think there is a lack of data scientists in the industry?
3. What are the usual sources of big data?
4. What data do you provide by using your social networking account, e.g., Facebook?

References:
https://www.investopedia.com/terms/d/data-science.asp
https://www.forbes.com/sites/peterpham/2015/08/28/the-impacts-of-big-data-that-you-may-not-
have-heard-of
https://www.investopedia.com/terms/b/big-data.asp
https://www.forbes.com/sites/gilpress/2013/05/28/a-very-short-history-of-data-science
https://searchenterpriseai.techtarget.com/definition/data-scientist
https://towardsdatascience.com/how-data-science-will-impact-future-of-businesses


PART 3: SOCIAL NETWORKING AND SOCIETY

LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Discuss where each specific popular media sites are commonly used
2. Analyze the benefits of social media to society
3. Discuss the disadvantages of social media

COURSE MATERIALS
What is Social Networking

A social networking service (also social networking site or social media) is an online platform
which people use to build social networks or social relationships with other people who share
similar personal or career interests, activities, backgrounds or real-life connections. Social
networking sites allow users to share ideas, digital photos and videos, posts, and to inform others
about online or real-world activities and events with people in their network.

Popular Social Media Sites


(Source: A Study on Positive and Negative Effects of Social Media on Society, W.Akram, R.Kumar, 2017)

The following are the most popular social media sites

Facebook. This is the largest social media network on the Internet, both in terms of total number
of users and name recognition. Facebook came into existence on February 4, 2004, and
within 12 years managed to collect more than 1.59 billion monthly active users, which
automatically makes it one of the best mediums for connecting people from all over the world
with your business. It is estimated that more than 1 million small and medium-sized
businesses use the platform to advertise their business.
Twitter. We might think that restricting our posts to 140 characters is no way to advertise our
business, but we will be surprised to know that this social media platform has more than 320
million active monthly users who can make use of the 140-character limit to pass on
information. Businesses can use Twitter to interact with prospective clients, answer questions,
release the latest news, and at the same time use targeted ads with specific audiences. Twitter
was founded on March 21, 2006, and has its headquarters in San Francisco, California.
Google+. Google+ is one of the popular social media sites these days. Its SEO value alone
makes it a must-use tool for any small business. Google+ was launched on December 15,
2011, and joined the major leagues, enlisting 418 million active users as of December 2015.
YouTube. The biggest and most well-known video-based social networking site, YouTube was
established on February 14, 2005, by three former PayPal employees. It was later purchased
by Google in November 2006 for $1.65 billion. YouTube has more than 1 billion site visitors
per month and is the second most popular search engine behind Google.


Pinterest. Pinterest is a relative newcomer to the social networking field. The platform consists
of digital bulletin boards where businesses can pin their content. Pinterest reported in
September 2015 that it had acquired 100 million users. Small businesses whose target
audience is mostly composed of women should invest in Pinterest, as the greater part of its
visitors are women.
Instagram. Instagram is a visual social networking platform. The site has more than 400 million
active users and is owned by Facebook. Many of its users utilize it to post information about
travel, fashion, food, art, and similar subjects. The platform is also distinguished by its unique
filters together with video and photo editing features. Around 95 percent of Instagram users
also use Facebook.
Tumblr. Tumblr is one of the hardest-to-use social networking platforms, but at the same time
it is one of the most fascinating sites. The platform permits several different post formats,
including quote posts, chat posts, video and photo posts, as well as audio posts, so you are
never constrained in the kind of content that you can share. Like Twitter, reblogging, which is
similar to retweeting, is quick and simple. The social networking site was established by David
Karp in February 2007 and at present hosts more than 200 million blogs.
Flickr. Flickr, pronounced "Flicker," is an online image and video hosting platform that was
created by the then Vancouver-based Ludicorp on February 10, 2004, and later acquired by
Yahoo in 2005. The platform is popular with users who share and embed photographs. Flickr
has more than 112 million users and has its footprint in more than 63 countries. Millions of
photographs are shared daily on Flickr.
Reddit. This is a social news and entertainment networking site where registered users can
submit content, such as direct links and text posts. Users are also able to organize and
determine the position of content on the site's pages by voting entries up or down. Entries
with the most votes appear in the top category or on the main page.
Snapchat. Snapchat is an image messaging application that was created by Reggie Brown,
Evan Spiegel, and Bobby Murphy when they were students at Stanford University. The
application was officially released in September 2011, and within a short span of time it grew
hugely, enrolling an average of 100 million daily active users as of May 2015. More than 18
percent of all social media users use Snapchat.
WhatsApp. WhatsApp Messenger is a cross-platform instant messaging client for
smartphones, PCs, and tablets. This application needs an Internet connection to send images,
texts, documents, audio, and video messages to other users that have the app installed on
their devices. Launched in January 2010, WhatsApp Inc. was purchased by Facebook on
February 19, 2014, for about $19.3 billion. Today, more than 1 billion people make use of the
service to communicate with their friends, family, and even customers.
TikTok
(Source: https://www.commonsensemedia.org/blog/parents-ultimate-guide-to-tiktok
https://slate.com/technology/2018/09/tiktok-app-musically-guide.html)

TikTok is a Chinese video-sharing social networking service owned by ByteDance, a Beijing-
based internet technology company founded in 2012 by Zhang Yiming. It is used to create
short dance, lip-sync, comedy, and talent videos. It lets you watch and share videos, often set
to a soundtrack of the top hits in music, right from your phone. As with the lip-synching app
Dubsmash, users can watch and record videos of themselves lip-synching to popular music
and sound bites.


Impact of Social Media on Society


Social media have had profound impacts on the modern world. According to a recent study
by Allcott et al. (2020), the number of monthly active users for Facebook has reached 2.3
billion worldwide (Facebook 2018). The authors note that as of 2016, the average user was
spending 50 minutes per day on Facebook and its sister platforms Instagram and Messenger
(Facebook 2016). There may be no technology since television that has so dramatically
reshaped the way people get information and spend their time (Allcott et al., 2020).
Although there are a number of positive effects of social media, there have, however, been
speculations about its negative impact. The study of Allcott et al. focused on Facebook and
its effect on its users. It says that the results leave little doubt that Facebook provides large
benefits for its users. Among the benefits are: it is an important source of news and
information, it is a source of entertainment, it is a means to organize a charity or an activist
group, and it is a vital social lifeline for those who are otherwise isolated (Allcott et al., 2020).
On the downside, the conclusion of the study says, "We find that while (Facebook)
deactivation makes people less informed, it also makes them less polarized by at least some
measures, consistent with the concern that social media have played some role in the recent
rise of polarization in the United States." The study further said that although the negative
effects could be real concerns, they could be smaller than what might have been expected
given prior studies and research on the topic (Allcott et al., 2020).

Akram and Kumar (2017) listed down the positive effects of social media on society. They are:
 Connectivity – easier for people to connect with anyone regardless of location,
 Education – easy for experts and professionals to educate via social media, regardless of
location, education background, and it is also free,
 Help – A person's issues can be shared with a group for help and support,
 Information and updates – Availability of most recent happenings around the planet,
 Advertising – Business can be promoted to a very wide audience,
 Noble cause – Effective way to solicit contribution for needy people, and
 Helps in building communities – People of different communities can connect to discuss
and share related stuff.
While the negative effects are:
 Cyberbullying – Because of anonymity on the net, it is exceptionally easy to bully
people on the internet,
 Hacking – Personal information can be stolen through hacking,
 Addiction – People spend much more time than is necessary and lose a sense of
productiveness,
 Fraud and scams – Fraudulent activities involving money come in many forms, and
 Reputation – Damage to reputation by spreading false stories on the internet.

In general, social media has contributed positively to the society in many ways. One
advantage which everybody would be able to relate to would be our connectivity. Connecting
with people has never been so easy. Friends and family we have not seen or talked with for
quite some time suddenly become just a message away. It has given people more opportunities
for socialization and for keeping updated on what's going on with friends, family, business


partners, or just mere acquaintances. The other advantages, such as education, ease of
sharing information, the help provided by being able to link with people who can give guidance
and assistance, and the building of communities, are major benefits that people enjoy with
social media.
Some of the negative effects could be avoided by making sure our user profile is secure
so that it will not be available to people we do not know. It will also help to use strong passwords.
People in social media should also study and examine businesses and investment opportunities
being offered before entering into any deal online. It is necessary to be circumspect when dealing
with people we talk with, most of the time, using only chats or messages. Setting a time limit
for using social media is also a good practice, as it makes us monitor our use and makes us
conscious of just how much time we have already spent on social media.
Depending on the individual and his discipline on the use of social media, the benefits may
outweigh the disadvantages or the downside may overwhelm the advantages.

UNIT ASSESSMENTS
1. Give three social media sites and differentiate them.
2. From the study made by Allcott, et.al, would you say that there are more harmful effects of
the use of Facebook?
3. From the advantages of social media (Akram & Kumar), give three which are most important
to you.
4. Which among the harmful effects of social media have you experienced? Elaborate on your
answer.

References:
https://en.wikipedia.org/wiki/Social_networking_service
A Study on Positive and Negative Effects of Social Media on Society, W.Akram, R.Kumar, 2017
https://www.commonsensemedia.org/blog/parents-ultimate-guide-to-tiktok
https://slate.com/technology/2018/09/tiktok-app-musically-guide.html
The Welfare Effects of Social Media By Hunt Allcott, Luca Braghieri, Sarah Eichmeyer and
Matthew Gentzkow* American Economic Review 2020, 110(3): 629–676
https://doi.org/10.1257/aer.20190658
