INTRODUCTION TO
COMPUTING
COMP 20013
Compiled by:
Marian G. Arada
Monina D. Barretto
Melvin C. Roxas
COMP 20013 - Introduction to Computing
Table of Contents
Course Syllabus
UNIT IV PEOPLEWARE
UNIT V SOFTWARE
This instructional material presents all topics based on the course syllabus. It is
presented in a concise, simple manner intended to guide you through the different topics of the
course. Please read the material thoroughly for better understanding of the lessons. You are
encouraged to read additional learning materials available to you. There are suggested
references at the end of each topic.
All course materials and/or activities where there is a need to access the internet are
optional. You may work on the internet-based activities only if you have access to the internet.
The assessments/activities at the end of each module must be answered. They are
intended to gauge your understanding of what you have learned from the lessons. Your professor
should get in touch with you at the start of the semester regarding the submission of answers to
assessments/activities and will give further instructions on how distance learning will be
implemented.
Course Syllabus
COURSE PREREQUISITE: None
Institutional Learning Outcomes, Program Outcomes, and Course Outcomes

1. Creative and Critical Thinking
Graduates use their imaginative as well as rational thinking ability in life situations in order to push boundaries, realize possibilities, and deepen their interdisciplinary and general understanding of the world.
Program Outcomes: Apply knowledge of computing fundamentals, knowledge of a computing specialization, and mathematics, science, and domain knowledge appropriate for the computing specialization to the abstraction and conceptualization of computing models from defined problems and requirements. Identify, analyze, formulate, research literature, and solve complex computing problems and requirements, reaching substantiated conclusions using fundamental principles of mathematics, computing sciences, and relevant domain disciplines.
Course Outcomes: Explain fundamental principles, concepts, and evolution of computing systems as they relate to different fields. Identify and define the components of the computer system. Compare and understand the different number systems such as binary, decimal, and hexadecimal. Perform number conversion, fixed point and floating point number representation.

2. Effective Communication
Graduates are proficient in the four macro skills in communication (reading, writing, listening, and speaking) and are able to use these skills in solving problems, making decisions, and articulating thoughts when engaging with people in various circumstances.
Program Outcomes: Communicate effectively with the computing community and with society at large about complex computing activities by being able to comprehend and write effective reports, design documentation, make effective presentations, and give and understand clear instructions.
Course Outcomes: Understand the concepts of data communication, network components, protocols, and internet issues.

3. Strong Service Orientation
Graduates exemplify the potentialities of an efficient, well-rounded, and responsible professional deeply committed to service excellence.
Program Outcomes: Design and evaluate solutions for complex computing problems, and design and evaluate systems, components, or processes that meet specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental considerations.
Course Outcomes: Analyze solutions employed by organizations to address different computing issues.

4. Community Engagement
Graduates take an active role in the promotion and fulfillment of various advocacies (educational, social, and environmental) for the advancement of community welfare.
Program Outcomes: Create, select, adapt, and apply appropriate techniques, resources, and modern computing tools to complex computing activities, with an understanding of the limitations, to accomplish a common goal.
Course Outcomes: Evaluate tools and techniques for purposes of identifying best practices in computing development.

5. Adeptness in the Responsible Use of Technology
Graduates demonstrate optimized use of digital learning abilities, including technical and numerical skills.
Program Outcomes: An ability to apply mathematical foundations, algorithmic principles, and computer science theory in the modeling and design of computer-based systems in a way that demonstrates comprehension of the tradeoffs involved in design choices.
Course Outcomes: Expound on the recent developments in the different computing knowledge areas. Understand the basics of digital logic systems. Identify the different levels of programming.

6. Passion to Lifelong Learning
Graduates are enabled to perform and function in the society by taking responsibility in their quest to know more about the world through lifelong learning.
Program Outcomes: Recognize the need, and have the ability, to engage in independent learning for continual development as a computing professional.

7. High Level of Leadership and Organizational Skills
Graduates are developed to become the best professionals in their respective disciplines by manifesting the appropriate skills and leadership qualities.
Program Outcomes: Function effectively as an individual and as a member or leader in diverse teams and in multidisciplinary settings.
Course Plan

Week 1
Topic: 1. Introduction to the Course: a. Vision, Mission, Goals and Objectives of the University and College; b. Self-Introduction; c. Course Overview; d. Grading System; e. Classroom Management
Learning Outcomes: a. Demonstrate an understanding of what the subject is all about, what will be in scope for the semester, and what students are expected to learn; b. Communicate with fellow students and the teacher and begin to establish rapport; c. Identify and explain the course assessment and validation criteria, including the grading system, to understand how to pass the subject; d. Explain the do's and don'ts while the class is ongoing
Methodology*: Orientation; Self-Introduction (on-line)
Resources: University Student Handbook; College Manual; Course Syllabus; Online application
Assessment*: Quick recitation to get students' thoughts and questions using an online application

Week 2
Topic: Unit I: Overview of Information and Communications Technology; 1. Introduction to Computers
Learning Outcomes: a. Categorize computers; b. Contrast elements of a computer system; c. Identify various events/improvements in the computing world
Methodology*: Lecture; Video presentation; Interactive learning
Resources: Powerpoint Material; Reference Books
Assessment*: Short Quiz
h. Convert boolean algebra expressions into a logic circuit
i. Create truth tables for the corresponding logic circuits and boolean expressions
j. Explain basic theorems and postulates on digital logic systems
Week 15 to 16
Topic: Unit VIII: Special Interest Topics in ICT; 1. Artificial Intelligence; 2. Data Science; 3. Social Networking and Society
Learning Outcomes: a. Explain the difference between AI, machine learning, and deep learning; b. Provide applications of AI in different industries and in daily use; c. Identify important milestones in the history of AI; d. Explain supervised, unsupervised learning and other concepts related to AI; e. Explain what the field of data science is; f. Identify the skills/expertise needed to be a data scientist; g. Discuss what big data is and how it relates to data science; h. Discuss where specific popular media sites are commonly used; i. Analyze the benefits of social media to society; j. Discuss the disadvantages of social media
Methodology*: Student lecture/demonstration; Video presentation; Debate discussion; Interactive lecturing; Assigned reading
Resources: Powerpoint Material; Reference Books
Assessment*: Short Quiz

Week 17
FINAL EXAMINATION
Assessment: Final Examination

Week 18
Round-up Activities
*Activities under methodology / assessment will all be done online (i.e. distance learning)
Suggested Readings and References
REFERENCES
1. Burd, Stephen D. Systems Architecture, 5th Edition. 2006.
2. Cashman, Shelley. Discovering Computers. Course Technology, 2006.
3. Norton, Peter. Introduction to Computers, 6th Edition. 2006.
4. Albano, Gisela May; Atole, Ronnel; Ariola, Rose Joy. Introduction to Information Technology. 2003.
5. Parsons, June Jamrich; Oja, Dan. Computer Concepts, 5th Edition. 2003.
6. Stallings, William. Computer Organization and Architecture, 6th Edition. 2003.
7. Long, Larry. Computers: Information Technology in Perspective. 2002.
8. Schneider, G. Michael; Gersting, Judith. An Invitation to Computer Science. 2000.
9. Sawyer, S. Using Information Technology: A Practical Introduction to Computers and Communication, Intro Version. 2000.
10. Farrell, Joyce. Technology Now. 2018.
*Some assessment criteria may not apply with a different teaching modality (i.e. online/distance
learning)
Classroom Policy
Aside from what is prescribed in the student handbook, the following are the professor’s
additional house rules:
1. The course is expected to have a minimum of four (4) quizzes. No makeup tests will be given.
2. Assignments and research projects/report works will be given throughout the semester. Such
requirements shall be due as announced in class. Late submission shall be penalized with grade
deductions (5% per day) or shall no longer be accepted, depending on the subject facilitator’s
discretion. Assignments and exercises are designed to assist you in understanding the materials
presented in class, and to prepare you for the exams.
3. Students are required to attend classes regularly, including possible make-up classes. The student
will be held liable for all topics covered and assignments made during his/her absence. The
university guidelines on attendance and tardiness will be implemented.
4. Any evidence of copying or cheating during any examinations may result in a failing grade from the
examination for all parties involved. Note that other university guidelines shall be used in dealing
with this matter.
5. Students are advised to keep graded work until the semester has ended.
6. Contents of the syllabus are subject to modification with notification.
7. Cell phones, radios or other listening devices are not allowed to be used inside lecture and
laboratory rooms to prevent any distractive interruption of the class activity. *
8. No foods, drinks, cigarettes nor children are allowed inside the lecture and laboratory rooms. *
9. Withdrawal and dropping from the subject should be done in accordance with existing university
policies and guidelines regarding the matter.
*May not apply with a different teaching modality (i.e. distance learning, non F2F mode)
Consultation Time
Prepared by:

Reviewed by:
Melvin C. Roxas, MSGITS
Department Chair

Recommending Approval:
Emanuel C. De Guzman, PhD
Vice President for Academic Affairs
Introduction to Computing
The word computer is derived from the word compute, which means to calculate. Originally, the computer was a machine with the capacity to solve complex arithmetic and scientific problems at very high speed. Nowadays, computers also perform many other tasks, such as accepting, sorting, selecting, moving, and comparing various types of information. They also perform arithmetic and logical operations on alphabetic, numeric, and other types of information. The information provided by the user to the computer is data. The information in the form in which it is presented to the computer is the input information or input data.
A computer is defined as a fast and accurate data processing system that accepts data, performs various operations on the data, has the capability to store the data, and produces results on the basis of detailed step-by-step instructions given to it.
LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Identify and define the components of a computer system
2. Categorize computers. Compare and understand the different types/classifications of computers
3. Identify various events/improvements in the computing world.
4. Qualify the understanding of computer usage.
COURSE MATERIALS
Hardware:
Hardware refers to the tangible component of a computer system. The hardware is the
machinery itself. It is made up of the physical parts or devices of the computer system like the
electronic Integrated Circuits (ICs), magnetic storage media and other mechanical devices like
input devices, output devices etc. Various hardware are linked together to form an effective
functional unit.
The various types of hardware used in computers have evolved from the vacuum tubes of the first generation to the Ultra Large Scale Integrated Circuits of the present generation.
Software:
Software refers to the intangible component of a computer system. The computer hardware itself is not capable of doing anything on its own. It has to be given explicit instructions to perform a specific task. The computer program is what controls the processing activities of the computer. The computer thus functions according to the instructions written in the program. Software mainly consists of these computer programs, procedures, and other associated documentation.
Peopleware
Peopleware is regarded as the most important element of the computer and
communication system. It is said that without this element, there would not be any hardware
computers to be used, no software systems that would run computers, and no outputs to be
interpreted as a valid source of information. But thanks to the founding men and women behind
the innovations in the field of computing, the likes of Charles Babbage, Lady Ada Lovelace, Alan
Turing, and others, the world we live in today has made it a necessity for computers and its
systems to be part of our daily lives.
CLASSIFICATION OF COMPUTERS
The computer systems can be classified on the following:
1. According to Size.
2. According to Types of Data Handling.
3. According to Purpose
1. Super computers
Supercomputers actually play an important role in the field of computation, and are
used for intensive computation tasks in various fields, including quantum mechanics,
weather forecasting, climate research, oil and gas exploration, molecular modeling, and
physical simulations. Throughout history, supercomputers have been essential in the field of cryptanalysis.
2. Mainframe computers
These are commonly called "big iron." They are usually used by big organizations for bulk data processing such as statistics, census data processing, and transaction processing, and are widely used as servers, since these systems have a higher processing capability than the other classes of computers. Most of these mainframe architectures were established in the 1960s; research and development have continued over the years, and the mainframes of today are far better than the earlier ones in size, capacity, and efficiency.
3. Mini computers
These computers came into the market in the mid-1960s and were sold at a much cheaper price than the mainframes. They were actually designed for control, instrumentation, human interaction, and communication switching, as distinct from calculation and record keeping; later they became very popular for personal use as they evolved.
The term "minicomputer" was coined in the 1960s to describe the smaller computers that became possible with the use of transistor and core memory technologies, minimal instruction sets, and less expensive peripherals such as the ubiquitous Teletype Model 33 ASR. They usually took up one or a few 19-inch rack cabinets, compared with the large mainframes that could fill a room.
4. Micro computers
A microcomputer is a small, relatively inexpensive computer with a microprocessor
as its CPU. It includes a microprocessor, memory, and minimal I/O circuitry mounted on
a single printed circuit board. Their predecessors, mainframes and minicomputers, were comparatively much larger, harder to maintain, and more expensive. Microcomputers actually formed the foundation for the present-day microcomputers and smart gadgets that we use in day-to-day life.
1. Analog computers
2. Digital computers
3. Hybrid computers
A hybrid computer processes both analog and digital data. It is a digital computer that accepts analog signals, converts them to digital, and processes them in digital form.
1. General Purpose Computer
General purpose computers are computers that are used for ordinary work. These computers can do many kinds of work, but all of those tasks are routine ones, for example, writing a letter with a word processor, preparing a record, printing reports, or creating a database. The CPU capacity of these computers is also lower; therefore, only ordinary work can be done on them.
2. Special Purpose Computer
These computers are built for a particular task. The CPU capabilities likewise correspond to that particular function. If more than one CPU is required, then multiple CPUs are installed in these computers. Aside from this, if the work requires particular hardware or devices, then those devices can be included in these computers.
I. Capabilities of computer
A computer system is better than human beings in a way that it possesses the following
capabilities:
1. Speed
Speed is the amount of time taken by the computer in accomplishing a task or an operation. The time taken by a computer to perform a particular task is far less than that taken by a human being. Different computers are classified on the basis of their speed by comparing their MIPS (Million Instructions Per Second).
2. Accuracy
3. Reliability
4. Versatility
5. Storage:
It refers to the capacity of a computer to store data and programs. Storage is done
in storage media such as CDs, Floppies, DVDs, RAM (Random Access Memory), ROM
(Read Only Memory).
Limitations of a Computer
Although a computer is far better in performance than a human being, it fails in certain
ways as follows:
Computers cannot think, and they cannot do any job unless they are first programmed with specific instructions for it. They work as per stored instructions. Algorithms are designed by humans to make a computer perform a specific task. This is also what is referred to as artificial intelligence.
Computers are incapable of decision making as they do not possess the essential
elements necessary to take a decision i.e. knowledge, information, wisdom, intelligence
and the ability to judge.
3. No Feeling
Lack of feeling is another limitation of the computer. A computer cannot feel like we do. It does not have emotions, feelings, knowledge, etc. It does not get tired and keeps on doing its tasks. It can do very risky work that human beings are not capable of.
Though computers are helpful in the storage of data and can even contain the contents of encyclopedias, only humans can decide on and implement policies.
HISTORY OF COMPUTING
1943-1946, America: ENIAC (Electronic Numerical Integrator and Calculator), developed by J. Presper Eckert Jr. and John Mauchly. The first large-scale vacuum tube computer.
1946: EDVAC, John von Neumann. A modified version of the ENIAC.
First Generation Computers (Vacuum Tubes)
Advantages:
1. They made use of vacuum tubes, which were the only electronic components available during those days.
2. These computers could calculate in milliseconds.
Disadvantages:
1. They were very big in size; the weight was about 30 tons.
2. These computers were based on vacuum tubes.
3. These computers were very costly.
4. They could store only a small amount of information due to the use of magnetic drums.
5. Because first-generation computers used vacuum tubes, another disadvantage was that vacuum tubes require a large cooling system.
6. Very low work efficiency.
7. Limited programming capabilities; punched cards were used to take input.
8. Large amount of energy consumption.
9. Not reliable, and constant maintenance was required.
Second Generation Computers (Transistors)
Advantages:
1. Due to the use of transistors instead of vacuum tubes, the size of the electronic components decreased. This resulted in reducing the size of the computer compared to first-generation computers.
2. They consumed less energy and did not produce as much heat as first-generation computers.
3. Assembly language and punched cards were used for input.
4. Lower cost than first-generation computers.
5. Better speed; they could calculate data in microseconds.
6. Better portability compared to the first generation.
Disadvantages:
1. A cooling system was required.
2. Constant maintenance was required.
3. They were only used for specific purposes.
Third Generation Computers (Integrated Circuits)
Advantages:
1. These computers were cheaper compared to second-generation computers.
2. They were fast and reliable.
3. The use of ICs made the computers smaller in size.
4. ICs not only reduced the size of the computer but also improved its performance compared to previous computers.
5. This generation of computers had a bigger storage capacity.
6. Instead of punched cards, the mouse and keyboard were used for input.
7. They used an operating system for better resource management and used the concepts of time-sharing and multiprogramming.
8. These computers reduced the computational time from microseconds to nanoseconds.
Disadvantages:
1. IC chips are difficult to maintain.
2. Highly sophisticated technology was required for the manufacturing of IC chips.
3. Air conditioning was required.
Fourth Generation Computers (Microprocessors)
Advantages:
1. Fastest in computation, and the size was reduced compared to the previous generation of computers.
2. Heat generated is negligible.
3. Small in size compared to previous generation computers.
4. Less maintenance is required.
5. All types of high-level languages can be used in this type of computer.
Disadvantages:
1. Microprocessor design and fabrication are very complex.
2. Air conditioning is required in many cases due to the presence of ICs.
3. Advanced technology is required to make the ICs.
Fifth Generation Computers
Advantages:
1. They are more reliable and work faster.
2. They are available in different sizes and with unique features.
3. They provide computers with more user-friendly interfaces and multimedia features.
UNIT ASSESSMENTS/ACTIVITIES
1. Discuss your understanding of the elements of a computer system and how they are interrelated with one another.
2. Aside from the examples on the classification of computers discussed, give and explain
examples for each classification of computers.
4. Research recent hardware and software developments in ICT. Discuss their significant contributions to ICT. Support your discussion with pictures and include references in the paper.
OVERVIEW
This module describes the various ways in which computers can manipulate numbers and characters. The module is subdivided into two parts: the first part discusses number system operations and conversions, and the second part covers the different data representations, including the numeric and non-numeric representation of data.
LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Distinguish the various number systems and data representation.
2. Compute number system operation and conversion.
3. Manipulate various operations and conversions in number systems
4. Answer/practice various operations and conversions.
NUMBER SYSTEMS
There are several number systems which we normally use, such as decimal, binary, octal,
hexadecimal. Amongst them we are most familiar with the decimal number system. These
systems are classified according to the values of the base of the number system.
In general, we can express any number in any base or radix "X." Any number with base X, having n digits to the left and m digits to the right of the radix point, can be expressed as:
d(n-1) d(n-2) . . . d1 d0 . d(-1) d(-2) . . . d(-m)
= d(n-1) x X^(n-1) + . . . + d1 x X^1 + d0 x X^0 + d(-1) x X^(-1) + . . . + d(-m) x X^(-m)
where each d is one of the allowed digits of base X.
Decimal
Here's the decimal number system as an example:
digits (or symbols) allowed: 0,1,2,3,4,5,6,7,8,9
base (or radix): 10
the order of the digits is significant
345 is represented as
3 x 100 + 4 x 10 + 5 x 1
= 3 x 10^2 + 4 x 10^1 + 5 x 10^0
Binary to Decimal
Here's a binary number system:
digits (symbols) allowed: 0, 1
base (radix): 2
each binary digit is called a BIT
the order of the digits is significant
1001 (base 2) is represented as
1 x 2^3 + 0 x 2^2 + 0 x 2^1 + 1 x 2^0
= 9 (base 10)
Octal to Decimal
Here's an octal number system:
digits (symbols) allowed: 0,1,2,3,4,5,6,7
base (radix) 8
the order of the digits is significant
Hexadecimal to Decimal
Here's a hexadecimal number system:
digits (symbols) allowed: 0-9, A,B,C,D,E,F
base (radix) 16
the order of the digits is significant
A common syntax used to represent hexadecimal values (in code) is to place the
symbols "0x" as a prefix to the value.
A second common syntax is to place a suffix of 'h' onto a value, indicating that it is
hexadecimal.
Note that h is not a symbol used in hexadecimal, so it can indicate the representation
used. The Intel architectures do this in their assembly languages. This representation
is actually more time consuming (meaning the execution time of the code) to interpret,
since the entire number must be read before it can be decided what number system is
being used.
In General
Given all these examples, here's a set of formulas for the general case.
S(n-1) S(n-2) . . . S2 S1 S0
The subscripts give us a numbering of the digits. Given a base b, the decimal value is:
S(n-1) x b^(n-1) + S(n-2) x b^(n-2) + . . . + S1 x b^1 + S0 x b^0
Example: 134 (base 5)
1 x 5^2 + 3 x 5^1 + 4 x 5^0
= 25 + 15 + 4
= 44 (base 10)
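As an optional illustration, the positional expansion above can be sketched in Python. The helper name to_decimal and the digit table are hypothetical, used only for this example; they are not part of the course material.

```python
# A minimal sketch: convert a numeral string in any base (2-16) to its
# decimal value using the positional expansion shown above.
DIGITS = "0123456789ABCDEF"

def to_decimal(numeral, base):
    value = 0
    for symbol in numeral.upper():
        value = value * base + DIGITS.index(symbol)  # same as summing digit x base^position
    return value

print(to_decimal("134", 5))   # 1x5^2 + 3x5^1 + 4x5^0 = 44
print(to_decimal("1001", 2))  # 9
print(to_decimal("2F7", 16))  # 759
```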
Note: This algorithm works for decimal to ANY base. Just change the base you want
to convert.
Decimal to Binary
Examples:
36 (base 10) == 100100 (base 2)
14 (base 10) == 1110 (base 2)
Decimal to Octal
229/8 = 28 r 5 (LSB)
28/8 = 3 r 4
3 (MSB)
229 (base 10) == 345 (base 8)

513/8 = 64 r 1 (LSB)
64/8 = 8 r 0
8/8 = 1 r 0
1 (MSB)
513 (base 10) == 1001 (base 8)
Decimal to Hexadecimal
759/16 = 47 r 7 (LSB)
47/16 = 2 r 15 (F)
2 (MSB)
759 (base 10) == 2F7 (base 16)
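The repeated-division method shown above can likewise be sketched in Python. The helper name from_decimal is hypothetical, chosen only for this optional illustration.

```python
# A minimal sketch of the repeated-division method: divide by the target
# base, collect the remainders (LSB first), then reverse them.
DIGITS = "0123456789ABCDEF"

def from_decimal(n, base):
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        n, r = divmod(n, base)
        remainders.append(DIGITS[r])   # each remainder becomes the next digit (LSB first)
    return "".join(reversed(remainders))

print(from_decimal(229, 8))    # 345
print(from_decimal(513, 8))    # 1001
print(from_decimal(759, 16))   # 2F7
print(from_decimal(36, 2))     # 100100
```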
Binary to Octal
Group the bits in threes (starting from the radix point) and write the octal digit for each group.
Binary to Hexadecimal
(just like binary to octal!) Group the bits in fours and write the hexadecimal digit for each group.
Hexadecimal to Binary
Just write down the four (4) bit binary code for each hexadecimal digit.
Example:
3 9 C 8 (hexadecimal)
0011 1001 1100 1000 (binary)
Octal to Binary
Like hex to binary, just write down the three (3) bit binary code for each octal digit.
Example:
5 0 1 (octal)
101 000 001 (binary)
Hexadecimal to Octal
Do it in 2 steps:
1. hex to binary
2. binary to octal
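For the grouping shortcuts above, here is an optional Python sketch. The function names hex_to_binary and octal_to_binary are made up for this illustration only.

```python
# A minimal sketch of the grouping shortcut: each hexadecimal digit maps to
# 4 bits and each octal digit maps to 3 bits, so conversion is digit-by-digit.
def hex_to_binary(h):
    return " ".join(format(int(d, 16), "04b") for d in h)   # 4-bit code per hex digit

def octal_to_binary(o):
    return " ".join(format(int(d, 8), "03b") for d in o)    # 3-bit code per octal digit

print(hex_to_binary("39C8"))   # 0011 1001 1100 1000
print(octal_to_binary("501"))  # 101 000 001
```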
The above discussion is for integer numbers only. Now if the number contains the
fractional part we have to deal in a different way when converting the number from a different
number system (i.e., binary, octal, or hexadecimal) to a decimal number system or vice versa.
We illustrate this with examples.
Examples
Binary to Decimal
The positional weights for each of the digits are written below each digit.
1010.011 (base 2) = 10.375 (base 10)
Octal to Decimal
The positional weights for each of the digits are written below each digit.
345.35 (base 8) = 229.453125 (base 10)
Hexadecimal to Decimal
The positional weights for each of the digits are written below each digit.
51B.12 (base 16) = 1307.0703125 (base 10)
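The fractional examples above can be checked with a small Python sketch. The helper name fractional_to_decimal is hypothetical; it only illustrates the positive and negative positional weights.

```python
# A minimal sketch: digits left of the point carry positive powers of the
# base, digits right of the point carry negative powers.
DIGITS = "0123456789ABCDEF"

def fractional_to_decimal(numeral, base):
    whole, _, frac = numeral.upper().partition(".")
    value = sum(DIGITS.index(d) * base ** p
                for p, d in zip(range(len(whole) - 1, -1, -1), whole))
    value += sum(DIGITS.index(d) * base ** -(i + 1) for i, d in enumerate(frac))
    return value

print(fractional_to_decimal("1010.011", 2))   # 10.375
print(fractional_to_decimal("345.35", 8))     # 229.453125
print(fractional_to_decimal("51B.12", 16))    # 1307.0703125
```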
Subtraction
The procedure of subtracting two (octal, hexadecimal, binary) numbers using the
direct method is same as decimal subtraction.
The direct method of subtraction uses the concept of borrow. In this method, we
borrow from a higher significant position when the minuend digit is smaller than the
corresponding subtrahend digit.
Binary Subtraction
There are Three (3) ways:
The direct method
2’s complement
1’s complement
Subtraction - By Complements
Complements are used in digital computers for simplifying the subtraction operation
and for logical manipulations.
There are two types of complements for each number system of base r:
- the r complement
- The r-1 complement
So for binary the value of r is 2 so we have the 2’s (r’s) complement and the 1’s (r-
1’s) complement
1’s Complement
To get the 1’s complement of a binary number, the “0” and “1” bits of the original bit string are switched.
Ex. 10110 (base 2) -> 01001 (base 2)
2’s Complement
2’s complement is the 1’s complement bit string plus 1.
Ex. the 2’s complement of 10110 (base 2):
  01001    1’s complement of 10110
+     1
  01010    2’s complement of 10110
Binary Subtraction (Using 1’s complement), if the subtrahend is larger than the minuend
If the subtrahend is larger than the minuend, then no end carry is generated.
The answer is obtained as the 1’s complement of the true result and is opposite in sign.
Binary Subtraction (Using 2’s complement), if the subtrahend is larger than the minuend
If the subtrahend is larger than the minuend, then no end carry is generated. Add the 2’s complement of the subtrahend to the minuend.
The answer is obtained in 2’s complement form; take its 2’s complement and change the sign.
Therefore, the reason why negative numbers are represented using the 2’s complement method in computing is that subtractions can be performed as additions.
Since subtractions can be performed with addition circuits, separate subtraction circuits are unnecessary, thereby simplifying the hardware structure.
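To make the "subtraction as addition" idea concrete, here is an optional Python sketch assuming a 5-bit word to match the 10110 example above. The helper names (ones_complement, twos_complement, subtract) are ours, used only for illustration.

```python
# A minimal sketch: with a fixed word size, A - B can be computed as
# A + twos_complement(B), discarding any carry out of the word.
BITS = 5  # word size chosen to match the 10110 example

def ones_complement(x):
    return x ^ ((1 << BITS) - 1)                         # flip every bit

def twos_complement(x):
    return (ones_complement(x) + 1) & ((1 << BITS) - 1)  # 1's complement plus 1

def subtract(a, b):
    return (a + twos_complement(b)) & ((1 << BITS) - 1)  # drop the end carry

print(format(twos_complement(0b10110), "05b"))           # 01010, as in the example above
print(format(subtract(0b10110, 0b01101), "05b"))         # 22 - 13 = 9 -> 01001
```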
Multiplication
The procedure is similar to decimal multiplication but much simpler.
The multiplication is done by repeated addition of all partial products to obtain the full
product
Division
Binary division follows the same procedure as decimal division.
2 DATA REPRESENTATION
Binary Coded Decimal (BCD) - coding scheme relating decimal and binary numbers.
Four (4) bits are required to code each decimal number
Decimal Number    Binary Number    BCD Code
0 0000 0000
1 0001 0001
2 0010 0010
3 0011 0011
4 0100 0100
5 0101 0101
6 0110 0110
7 0111 0111
8 1000 1000
9 1001 1001
10 1010 0001 0000
11 1011 0001 0001
12 1100 0001 0010
13 1101 0001 0011
14 1110 0001 0100
15 1111 0001 0101
Example 1
789 -- 0111 1000 1001 (BCD)
7    8    9
0111 1000 1001
Example 2
105 -- 0001 0000 0101 (BCD)
1    0    5
0001 0000 0101
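A one-function Python sketch of BCD encoding, matching the two examples above, is shown next. The name to_bcd is hypothetical and used only for this optional illustration.

```python
# A minimal sketch: BCD encodes each decimal digit separately as a 4-bit group.
def to_bcd(decimal_string):
    return " ".join(format(int(d), "04b") for d in decimal_string)

print(to_bcd("789"))  # 0111 1000 1001
print(to_bcd("105"))  # 0001 0000 0101
```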
Example 1
789 -- 11110111 11111000 11001001
7        8        9
11110111 11111000 11001001
Example 2
-105 -- 11110001 11110000 11010101
1        0        5
11110001 11110000 11010101
Example 1
789 -- 0111 1000 1001 1100
7    8    9    + (sign bit)
0111 1000 1001 1100
Example 2
-105 -- 0001 0000 0101 1101
1    0    5    - (sign bit)
0001 0000 0101 1101
- Sign-magnitude Representation
- Absolute value representation
- Complement representation
Sign-magnitude Representation
An additional bit is used as the sign bit, usually placed as the MSB (most significant
bit)
Examples
Magnitude: 101100 (base 2) = 44 (base 10)
0101100 (base 2) = +44 (base 10)
Uses an 8-bit representation where the first bit corresponds to the sign and the last
seven bits to the value of the number. 0 for positive and 1 for negative.
Limitations:
1. With the 8-bit representation, the range of numeric values that can be
represented is only -127 to 127
Examples
10001100 (base 2) = -12 (base 10)
00001100 (base 2) = +12 (base 10)
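A small Python sketch of 8-bit sign-magnitude encoding, consistent with the examples above, follows. The helper name sign_magnitude is made up for this optional illustration.

```python
# A minimal sketch of 8-bit sign-magnitude: the MSB holds the sign
# (0 positive, 1 negative) and the remaining 7 bits hold the magnitude.
def sign_magnitude(n):
    assert -127 <= n <= 127, "8-bit sign-magnitude range is -127..+127"
    sign = "1" if n < 0 else "0"
    return sign + format(abs(n), "07b")

print(sign_magnitude(+12))  # 00001100
print(sign_magnitude(-12))  # 10001100
print(sign_magnitude(+44))  # 00101100
```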
2. Floating Point
Computers represent real values in a form similar to that of scientific notation. There
are standards which define what the representation means so that across computers there
will be consistency. Note that this is not the only way to represent floating point numbers,
it is just the IEEE standard way of doing it.
the representation
-------------------
| S |  E  |   F   |
-------------------

value = (-1)^S x f x 2^e

where
e = E - bias
f = F/2^n + 1
(n = the number of bits in the F field)
--> S, E, F all represent fields within a representation. Each is just a bunch of bits.
--> E is an exponent field. The E field is a biased-127 representation. So, the true
exponent represented is (E - bias). The radix for the number is ALWAYS 2.
Note: Computers that did not use this representation, like those built before the
standard, did not always use a radix of 2.
--> F is the mantissa. It is in a somewhat modified form. There are 23 bits available for the
mantissa. It turns out that if fl. pt. numbers are always stored in their normal form, then
the leading bit (the one on the left, or MSB) is always a 1. So, why store it at all? It gets
put back into the number (giving 24 bits of precision for the mantissa) for any calculation,
but we only have to store 23 bits.
An example: Put the decimal number 64.2 into the standard single precision
representation.
First step:
Get a binary representation for 64.2. To do this, get binary representation for the
stuff to the left, and right of the decimal point separately.
64 is 1000000
.2 x 2 = 0.4 0
.4 x 2 = 0.8 0
.8 x 2 = 1.6 1
.6 x 2 = 1.2 1
Second step:
Normalize the binary representation. (make it look like scientific notation)
1.0000000011001100110011 . . . x 2^6
Third step:
Six (6) is the true exponent. For the standard form, it needs to be in biased-127
form.
6
+ 127
133
Fourth step:
The mantissa stored (F) is the stuff to the right of the radix point in the normal form.
We need 23 bits of it: 00000000110011001100110

S   E          F
0   10000101   00000000110011001100110

The values are often given in hex. Grouping the 32 bits in fours gives 0100 0010 1000 0000 0110 0110 0110 0110, so the final answer is 42806666 (hex).
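As an optional cross-check of this worked example, the following Python sketch uses the standard struct module to pack 64.2 as an IEEE 754 single-precision value and pull the S, E, and F fields back out.

```python
# A minimal sketch: pack 64.2 as a 32-bit IEEE 754 float and extract the fields.
import struct

bits = int.from_bytes(struct.pack(">f", 64.2), "big")
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF
mantissa = bits & 0x7FFFFF

print(format(bits, "08X"))                                   # 42806666
print(sign, format(exponent, "08b"), format(mantissa, "023b"))
# 0 10000101 00000000110011001100110
print(exponent - 127)                                        # true exponent: 6
```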
Overflow
When a value cannot be represented in the number of bits allowed, we say that overflow has occurred. Overflow occurs when doing arithmetic operations.
Example:
  011 (3)
+ 110 (6)
---------
    ? (9)   it would require 4 bits (1001) to represent the value 9 in unsigned representation.
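A tiny Python sketch of the same overflow situation follows; the 3-bit word size and the variable names are chosen only for this optional illustration.

```python
# A minimal sketch: with only 3 unsigned bits, 3 + 6 = 9 cannot be stored,
# so the result wraps around (overflow).
BITS = 3
mask = (1 << BITS) - 1

a, b = 0b011, 0b110                   # 3 and 6
full = a + b                          # 9, needs 4 bits (1001)
stored = full & mask                  # what actually fits in 3 bits

print(full, format(full, "b"))        # 9 1001
print(stored, format(stored, "03b"))  # 1 001
print("overflow:", full != stored)    # True
```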
Character Representation
Different bit patterns are used for each different character that needs to be represented. The code has some nice properties: if the bit patterns are compared (pretending they represent integers), then 'A' < 'B'. This is good, because it helps with sorting things into alphabetical order.
Examples, the digits:
'0' is 48 (decimal) or 30 (hex)
'9' is 57 (decimal) or 39 (hex)
Character Representation
Coding of Alphanumeric
1. American Standard Code for Information Interchange (ASCII)
This coding scheme was adopted by the American National Standards Institute. The code uses bit patterns of length 7 to represent the upper and lower case letters of the English alphabet, punctuation, the digits 0 through 9, and certain control information such as line feeds, carriage returns, and tabs.
ASCII is often extended to an 8-bit pattern.
A parity bit or check bit is used to detect errors in data transmission. Parity bits are used to signal the computer that the bits in a byte have stayed the way they are supposed to during transmission.
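These ASCII properties can be observed directly in Python using the built-in ord() and chr() functions, as in this optional sketch.

```python
# A minimal sketch of the ASCII properties mentioned above.
print(ord("A") < ord("B"))        # True: codes preserve alphabetical order
print(ord("0"), hex(ord("0")))    # 48 0x30
print(ord("9"), hex(ord("9")))    # 57 0x39
print(chr(65), chr(97))           # A a
print(format(ord("A"), "07b"))    # 1000001: the 7-bit ASCII pattern for 'A'
```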
EBCDIC
Character    Zone    Digit
A–I 12 1–9
J–R 13 1–9
S–Z 14 2–9
a–i 8 1–9
j–r 9 1–9
s–z 10 2–9
0–9 15 0–9
Space 4 0
PARITY BIT
In the process of transmitting binary information, any external noise introduced may
change bit values from 0 to 1 or vice versa.
An error detection code can be used to detect errors during transmission.
A parity bit is an extra bit added in a string of binary code to make the total of 1s either
odd or even.
Two types:
1. Even parity - for a given set of bits, the occurrences of bits whose value is 1 is counted.
If that count is odd, the parity bit value is set to 1, making the total count of occurrences
of 1s in the whole set (including the parity bit) an even number. If the count of 1s in a
given set of bits is already even, the parity bit's value is 0.
If EVEN Parity
Count the no. of 1’s
If Even No., parity = 0
If Odd No., parity = 1
2. Odd parity - For a given set of bits, if the count of bits with a value of 1 is even, the parity
bit value is set to 1 making the total count of 1s in the whole set (including the parity bit)
an odd number. If the count of bits with a value of 1 is odd, the count is already odd so the
parity bit's value is 0.
If ODD Parity
Count the no. of 1’s
If Odd No., parity = 0
If Even No., parity = 1
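A short Python sketch of both parity rules follows; the helper name parity_bit and the sample 7-bit pattern are ours, used only for this optional illustration.

```python
# A minimal sketch: compute the parity bit for a bit string under even or
# odd parity by counting the 1s.
def parity_bit(bits, even=True):
    ones = bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"   # keep the total count of 1s even
    return "1" if ones % 2 == 0 else "0"       # make the total count of 1s odd

data = "1000001"                     # 7-bit ASCII 'A', which has two 1s
print(parity_bit(data, even=True))   # 0: the count of 1s is already even
print(parity_bit(data, even=False))  # 1: makes the total count of 1s odd
```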
UNIT ASSESSMENTS
Solve the following problems in a separate answer sheet. Show/present all your
complete solution in solving the given problems.
I. Number System Conversion. Complete the table below.
II. Arithmetic
e. What is the decimal equivalent of the 1’s complement no. 1011 1011?
f. What is the decimal equivalent of the 2’s complement no. 1111 0101?
g. Give the floating point and mantissa for the following numbers
1. 25.625 (base 10)
2. 1063.89 (base 10)
3. -0.65625 (base 10)
4. 353.34 (base 8)
5. 0.000524 (base 8)
6. -0.05C (base 16)
7. -4.25 (base 10)
8. -0.0011 (base 2)
Watch:
https://www.makeuseof.com/tag/audio-file-format-right-needs/
https://blog.hubspot.com/insiders/different-types-of-image-files
https://blog.hubspot.com/marketing/best-video-format
https://www.google.com.ph/amp/s/www.wix.com/blog/photography/amp/2018/10/25/video-formats
OVERVIEW
The topic on hardware is divided into two. The first part discusses the electronic
components that make up the computer system. It includes topics on the main units of the
computer including the input and output devices and other related computer equipment.
The second part covers the introduction to digital logic system which discusses the logic circuits
and how they are used to implement circuit design.
LEARNING OUTCOMES
COURSE MATERIALS
Definition: Hardware is the tangible, physical parts of the computer and related devices.
Main Units of a Computer
o Processor – interprets and carries out the basic instructions that operate a computer
o Main Storage – also called the memory or the primary storage
o Input – device used to send data to a computer
o Output – device used to send data from a computer to other devices
Processor - interprets and carries out the basic instructions that operate a computer; it may
also be called the central processing unit or the CPU
The processor contains:
1. Control Unit – directs the flow of instructions and data inside the CPU and acts as a traffic
controller; it interprets each instruction and initiates the appropriate action to carry out.
2. Arithmetic and Logic Unit (ALU) – performs the arithmetic and logical calculations inside the
CPU
3. Registers – temporarily hold data and instructions; they are small, high-speed storage locations inside the processor
4. System clock – controls the timing of computer operations; it generates regular electronic impulses (ticks) that set the operating pace of the system unit components
Main Storage
The memory stores instructions waiting to be executed by the processor, the data needed by
those instructions and the results of processed data (information).
Memory stores three (3) basic types of items:
1. Operating system and other system software
2. Application programs
3. Data / Information
Types of Memory:
1. RAM (Random Access Memory) - stores data and instructions for processing; volatile
(Volatile means the data/program in memory are erased once power is cut off)
Cache memory – a high-speed holding area for information that is frequently used by the CPU
2. ROM (Read Only Memory) - contains stored instructions that a computer requires to be able
to do its basic routine operations; non-volatile
3. CMOS (complementary metal-oxide semiconductor) – provides information every time
computer is turned on, e.g. RAM capacity, date/time
Input Device
Magnetic Ink Character Recognition (MICR)- used to read the numbers printed at the
bottom of checks
Output Device
An output device is any device used to send data from a computer to another device or
user (see Figure 3-2).
Other Hardware
Secondary storage – where data are stored permanently. It is outside the primary storage and serves just like a filing cabinet.
Read:
UNIT ASSESSMENTS/ACTIVITIES
LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Define what boolean algebra is
2. Identify the different logic gates
3. Illustrate the representation of the different logic gates
4. Convert boolean algebra expression into a logic circuit
5. Create truth tables for the corresponding logic circuits and boolean expression
6. Explain basic theorems and postulates on digital logic system
COURSE MATERIALS
Introduction
George Boole (1815 – 1864) - developed an algebraic system to treat the logic functions, which
is now called Boolean algebra.
Claude Shannon (1916-2001) - is said to be the founder of digital circuit design. It was in 1938 when Shannon applied boolean algebra to telephone switching circuits, and it was then that engineers realized boolean algebra could be used to analyze and design computer circuits.
Boolean Algebra
Logic Gates
Computer circuits are often called logic circuits because they simulate mental processes.
These logic circuits are called GATES. A GATE is a digital circuit having one or more input signals
but only one output signal. The basic gates are NOT, AND, OR.
Operation        Gate   Symbol
Inversion        NOT    ' (or an overbar)
Multiplication   AND    . (dot, often omitted)
Addition         OR     +

Inversion:      0' = 1    1' = 0
Multiplication: 0 . 0 = 0    0 . 1 = 0    1 . 0 = 0    1 . 1 = 1
Addition:       0 + 0 = 0    0 + 1 = 1    1 + 0 = 1    1 + 1 = 1
OR Gate – Addition
Boolean expression : Z = XY + W
A universal logic gate is a logic gate that can be used to construct all other logic gates.
This will be discussed in further details in later topics.
NAND Gate
NOR Gate
Circuits that can perform binary addition and subtraction are constructed by combining
logic gates. These circuits are used in the design of the arithmetic logic unit (ALU). The electronic
circuits are capable of very fast switching action, and thus an ALU can operate at high clock rates.
Example of two inverters entering an AND gate, with the corresponding truth table
F = AB + BC + B′C
= AB + C(B + B′)
= AB + C
F = A + A′B
= (A + A′) (A + B)
= A + B
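As an optional check, the two simplifications derived above can be verified by enumerating their truth tables in Python; the variable names follow the expressions above.

```python
# A minimal sketch: enumerate all input combinations to confirm that
# AB + BC + B'C = AB + C and A + A'B = A + B.
from itertools import product

for a, b, c in product([0, 1], repeat=3):
    f1 = (a and b) or (b and c) or ((not b) and c)   # AB + BC + B'C
    f1_simplified = (a and b) or c                   # AB + C
    f2 = a or ((not a) and b)                        # A + A'B
    f2_simplified = a or b                           # A + B
    assert bool(f1) == bool(f1_simplified)
    assert bool(f2) == bool(f2_simplified)
    print(a, b, c, int(bool(f1)), int(bool(f2)))
print("Both simplifications hold for every input combination.")
```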
DeMorgan's Theorems
DeMorgan’s Theorems are two additional simplification techniques that can be used to simplify
Boolean expressions.
Theorem 1 : (X + Y)' = X'Y' -> A NOR gate is the same as a bubbled AND gate
Theorem 2 : (XY)' = X' + Y' -> A NAND gate is the same as a bubbled OR gate
Double inversion has no effect on the logic state. If you invert the signal twice, you get the
original signal back. Double invert a low, and you still have a low. Double invert a high, and
you still have a high.
The following three circuits will generate the same output. Using De Morgan’s theorem,
we convert an OR-AND circuit to an all NOR circuit.
Figure 3.4
Figure 3.5
Figure 3.5 applies double inversion, which makes it the same as Figure 3.4.
Applying De Morgan’s Theorem # 1, where a bubbled AND gate is the same as a NOR gate, we come up with the following all-NOR gate circuit:
Figure 3.6
Universal Gates
A universal logic gate is a logic gate that can be used to construct all other logic gates.
NAND gates and NOR gates are called universal gates as any type of gates or logic functions
can be implemented by these gates.
Basic gates NOT, AND, OR, implemented using all NAND gates
Basic gates NOT, AND, OR, implemented using all NOR gates
Fabrication of an integrated circuit that performs a logic operation becomes easier when gates of only one kind are used.
The advantage of using universal gates for the implementation of logic functions is that it reduces the number of varieties of gates needed.
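To make the idea of a universal gate concrete, here is an optional Python sketch that builds NOT, AND, and OR out of a single 2-input NAND function. The function names are ours, used only for this illustration.

```python
# A minimal sketch: NOT, AND, and OR built only from a 2-input NAND,
# illustrating why NAND is called a universal gate.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))          # NAND followed by an inverter

def or_(a, b):
    return nand(not_(a), not_(b))    # De Morgan: A + B = (A'B')'

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NOT a =", not_(a), "AND =", and_(a, b), "OR =", or_(a, b))
```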
Read:
UNIT ASSESSMENTS/ACTIVITIES
1. Create two boolean expressions for each of the circuits below. For the second boolean
expression for each, apply De Morgan’s theorem
2. Create the truth tables for each of the circuits used in the topic ‘Equivalence among circuits’ to confirm that the 3 circuits generate the same output.
3. Per De Morgan’s theorem # 1, what is equivalent to a NOR gate?
4. Why are NAND and NOR gates called universal gates?
5. Double inversion puts my logic circuit to a much lower state. Is this statement correct?
6. Draw an XOR gate and provide the truth table
7. Draw a NOR gate and its equivalent gate based on De Morgan’s theorem
8. Provide the corresponding Boolean equations for the 2 gates in #7
9. Draw a NAND gate and its equivalent gate based on De Morgan’s theorem
10. Provide the corresponding Boolean equations for the 2 gates in #9
11. Create the circuit for this Boolean expression : V = WX + YZ
12. Draw the circuit for the OR function using all NOR gates.
13. Draw the circuit for the AND function using all NAND gates.
14. A NOR and a bubbled OR will have the same output? True or false? Draw the truth table
to prove your answer.
15. A NAND and a bubbled OR will have the same output? True or false? Draw the truth table
to prove your answer.
OVERVIEW
Peopleware is regarded as the most important element of the computer and
communication system. It is said that without this element, there would not be any hardware
computers to be used, no software systems that would run computers, and no outputs to be
interpreted as a valid source of information. But thanks to the founding men and women behind
the innovations in the field of computing, the likes of Charles Babbage, Lady Ada Lovelace, Alan
Turing, and others, the world we live in today has made it a necessity for computers and its
systems to be part of our daily lives.
Nowadays, as the information and communications technology continues to evolve, not
much of its credit is given to the people who continuously improve it. Various careers in ICT are
part and parcel of the vast use of computers and their enabling technologies to make all industries’
operations simpler, if not better. And in this module, we will discuss various ICT professions and
differentiate them from one another, and how they contribute to the increasing demand in the
utilization of computers.
LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Contrast roles and jobs in the ICT profession.
2. Summarize and report insights gained from ICT professionals.
3. Discuss a typical day of an ICT professional.
COURSE MATERIALS
Most professional ICT work can be classified into three (3) broad areas:
1. Information systems / Information Technology
2. Computer systems engineering
3. Computer science
People in ICT
1. Business Analysis Career – evaluate customer business needs, and provides business
solutions.
Computer Engineer - evaluate, design, and maintain computer hardware and software
systems. They develop, test, and design, computer processors, circuit boards, and
network systems.
Hardware Design Engineer - develop, improve, and test components and systems
including circuit boards, processors, and memory cards for computers and other devices.
Technical Support Engineer - also known as an IT support engineer, they help in resolving
technical issues within different components of computer systems, such as software,
hardware, and other network-related IT related problems.
Computer Systems Engineer - develop, test, and evaluate software and personal
computers by combining their knowledge of engineering, computer science, and math
analysis.
4. ICT Education Career – specializes in ICT teaching and trainings, and ICT education
management.
IT Lecturer - educate students on how computers work, from the basic science and
mathematics behind their operation to the actual hardware and the software built on those
foundations.
Training Officer - identify staff training and development needs, and for planning,
organizing and overseeing appropriate training.
Education Manager - develop policy, inform course curricula and teaching methods,
manage educational systems, recruitment, financial and physical resources.
Web Administrator – maintain and update their company's website or websites. They help
ensure websites are user friendly and offer an optimal user experience.
6. Multimedia – create and manipulate graphic images, animation, sound, text and video.
Multimedia Content Author - generate and manipulate graphic images, animations, sound,
text and video into consolidated and seamless multimedia programs.
Animator – create extensive series of images that form the animation seen in movies,
commercials, television programs, and video games. They typically specialize in one of
these media and may further concentrate on a specific area, such as characters, scenery,
or background design.
Programmer - code and test programming for software and mobile apps.
9. Systems Analysis and Design Career – partner of project managers and system developers.
Systems Architect - develop computer hardware, software, and network systems. They
are responsible for implementing, maintaining, and operating these systems. Systems
architects customize systems to meet the needs of specific clients.
For purposes of this Code, the following terms are defined as follows:
Preamble:
I will use my special knowledge and skills for the benefit of the public. I will serve employers and
clients with integrity, subject to an overriding responsibility to the public interest, and I will strive
to enhance the competence and prestige of the professional. By these, I mean:
UNIT ASSESSMENTS/ACTIVITIES
1. Aside from the examples of ICT professions discussed, identify ten (10) more jobs and
differentiate their specific roles and responsibilities. Your answers may come from each
of the careers or you may select from any of the careers. Write your answers on a separate
paper.
2. Identify two (2) individuals in your community that are working in the field of ICT (Computer
Science, Information Technology, Information Systems, Computer Engineering). After
which, interview them on what a typical day is like in their profession. You may inquire what their roles are and how they manage their daily job. Write your answer on a separate paper. As a matter of privacy, you are not to disclose their personal information or the company they are working for.
3. What particular ICT profession do you want to pursue in the future and why? Write your
explanation on a separate paper.
References:
www.aapathways.com.au
www.cio.com
www.fieldengineer.com
www.hiring.monster.com
www.jobhero.com
www.payscale.com
www.study.com
www.targetjobs.co.uk
www.thebalancecareers.com
www.uwa.edu.au
www.yourfreecareertest.com
http://www.philippinecomputersociety.org/code-of-ethics
UNIT V: SOFTWARE
OVERVIEW
The portion of the computer system which provides instructions to the hardware on how to perform tasks is the software. This module covers topics about software, its major classifications, and the functions of the different types of software. It also explains how the operating system manages the operations of the computer.
LEARNING OUTCOMES
At the end of this module, the student is expected to:
COURSE MATERIALS
Definition: Software consists of programs made up of step-by-step instructions that tell the computer how to perform a task.
1. System Software
2. Application Software
System Software
Consists of programs that control or maintain the operations of the computer and its
devices
Serves as the interface between the user, the application software, and the computer’s
hardware
*https://www.ibm.com/support/knowledgecenter/zosbasics/com.ibm.zos.zmainframe/zconc_opsysintro.htm
When a computer is first powered on, it must initially rely only on the code and
data stored in nonvolatile portions of the system's memory.
This code is referred to as the BIOS (basic input/output system), a firmware
which resides in the ROM.
BIOS performs a series of tests called the POST (power-on self test). POST
checks for various system components including system clock, adapter cards,
RAM chips, mouse, keyboards etc.
POST results are compared with data in the CMOS. CMOS stores
configuration information such as the amount of memory, current date/time,
types of drives, etc. If any problems are identified, error messages may
display.
If POST completes successfully, the BIOS searches for system files and loads them into memory from storage (usually the hard disk).
Next the kernel of the OS loads into memory. Then the OS in the memory
takes control of the computer
Interaction with software is through its user interface (UI). Three (3) types of UI:
Command-line interface – displays a prompt; the user types on the keyboard, and the computer executes the command and provides textual output
Menu-Driven interface - user has a list of items to choose from and can make
selection by highlighting one
Graphical User interface (GUI) - uses windows, icons, pointers, menus
3. Manages programs
Single user / single tasking operating system – allows one user to run one
program at a time
Single user / multitasking operating system – allows a single user to work on
two or more programs at the same time
Multiuser operating system – allows two or more users to run programs
simultaneously
Multiprocessing operating system – supports two or more processors running
programs at the same time
4. Manages memory
5. Schedules jobs
The OS determines the order in which jobs are processed. Jobs may include the
following:
6. Configures devices
A Device driver is a small program that tells the OS how to communicate with a
specific device.
Each I/O device has its own specialized set of commands and thus requires its own specific driver.
When you boot the computer, the OS loads each device’s driver.
OS provides users with the capability of managing files, viewing images, and
other functions such as uninstalling programs, scanning disks, setting screensavers,
etc.
8. Controls network
A network OS organizes and coordinates how multiple users access and share
resources on a network. Resources include hardware, software, data, information
Category of OS
9. Administers security
Operating System
Utility Programs
Utility Program
is a type of system software that allows a user to perform maintenance-type tasks usually
related to managing a computer, its devices, or its programs. Although the OS usually has
built-in utility programs, users oftentimes prefer stand-alone utilities because they offer
improvements.
Some examples of stand-alone utility programs are anti-virus programs, spyware
removers, file compression programs, etc.
Compiler – converts the entire source program into machine language; Result is called
the object code. It produces a program listing containing the source code and a list of any
errors.
Interpreter - translates and executes one statement at a time; reads a code statement,
converts it to one or more machine language instructions, and then executes those
machine language. An interpreter does not produce an object code. One of the advantages is that when it finds errors, it displays feedback immediately. A disadvantage is that it does not run as fast as compiled programs.
1. System Software
2. Application Software
Application Software
can be called end-user programs since they allow users to perform tasks such as
creating documents, spreadsheets, publications, running business, playing games, etc.
consists of programs designed to make users more productive and assist them with
personal tasks
4. Communications
E-mail
Chat Facility
Videoconferencing
5. Business
Word Processing
Spreadsheet
Database
Project Management
Accounting
7. Home/Personal/Educational
Software Suite
Personal Finance
Photo/Video Editing
Educational
Entertainment
8. Communications
E-mail
Chat Facility
Videoconferencing
Programming Languages
Low Level Languages
1st GL Machine Language – Instructions are in the form of machine code, 1’s and
0’s
2nd GL Assembly Language – uses short, English-like, abbreviations to represent
common elements of machine code
Java
C++
C#
Read:
UNIT ASSESSMENTS/ACTIVITIES
1. Enumerate 10 available operating systems. (Get familiar with their corresponding logos).
2. Explain how an OS manages memory.
3. Differentiate a system software from an application software.
4. Give your own example of an application software.
5. Give your own example of a system software.
6. Discuss the difference between a freeware, a shareware, and an open source software.
7. Enumerate programming languages that are considered object-oriented which have not
been mentioned in this IM.
8. Differentiate a compiler from an interpreter.
9. Read the topic on how the OS participates in the boot operation and enumerate the steps which happen before the OS takes control of the computer.
10. Give examples of program codes which are interpreted rather than compiled.
11. Categorize the following software, application or system software.
11.1 Payroll system
11.2 Avast anti-virus
11.3 Ubuntu
11.4 Inventory System
11.5 Image viewer
11.6 Microsoft Word
11.7 MySQL
11.8 Defragmenter
11.9 Screen saver
11.10 Disk Scanner
12. Give one example each of a freeware, a shareware, and an open source software.
OVERVIEW
From the early times, people had seen the need to communicate over a distance
(telecommunication). They used various means to communicate such as smoke signals,
sound (drums), and homing pigeons. In later years, with the advent of electricity, other
devices were invented to facilitate telecommunication such as telegraph, telephone, and radio.
LEARNING OUTCOMES
COURSE MATERIALS
Data communications refers to the transmission of digital data between two or more
computers. A computer network or data network is a telecommunications network that allows
computers to exchange data.
History of Data Communication
Peer-to-peer – all computers share their resources with all the other computers in the network.
Dedicated client / server – one or more computers are assigned as a server and the rest of
the computers are clients.
- A network architecture where one centralized, powerful computer (called the server) is a
hub to which many less powerful personal computers or workstations (called clients) are
connected.
- Server manages all network resources; dedicated; engineered to manage, store, send
and process data; provides the service
- Clients are workstations on which users run applications. Clients rely on servers for
resources; request the service
Network Topology refers to the appearance or the way a network is laid out.
Physical Topology - refers to the physical layout (geometric representation) of the computers
in a network.
Logical Topology – Describes how data actually flow through the network. It refers to the
logical layout of the computers in a network (how computers access other computers in the
network)
Most Basic Topologies
Point-to-Point Topology
• Two stations are connected directly by a dedicated link.
Advantages
• Very simple
• Transmission medium is ready for use anytime by the two stations.
Disadvantage
• Fewer stations can communicate with each other directly.
Figure 6-1
(Image Source: systemzone.net)
Star Topology
• Stations are connected directly to a centrally located device such as a computer or hub which
acts like a multipoint connector.
• The central node is sometimes called central control, star coupler, or central switch.
Advantages
• If link of one computer fails, others can still communicate
• Requires less cable and communication ports than mesh topology
• Could be less expensive than mesh topology
• Easier to install compared to mesh topology
• Easier fault isolation compared to bus
Disadvantages
• If central hub breaks down, all communications are down
• Less robust compared to mesh topology
• Often requires more cable than bus
Bus Topology
• All stations are attached to a single shared cable (the bus).
Advantages
• Requires no special routing or circuit switching.
• Not necessary to store and forward messages.
• Requires less cable than other topologies
• Easier to install compared to other topologies
• Requires fewer communication ports than mesh and ring topology
• Could be less expensive than mesh topology
Disadvantages
• Computers cannot all communicate at any time (because of collisions)
• If the cable breaks down, the entire network could be disrupted
• More difficult fault isolation
• Not suitable when stations are transmitting most of the time (because of too many collisions).
Ring Topology
• All stations are connected in tandem (series) to form a closed loop or circle.
Advantages
• Requires less cable than mesh topology
• Requires fewer communication ports than mesh topology
• Relatively easy to install
• Could be less expensive than mesh topology
Disadvantages
• Delay is longer for non-adjacent stations.
• If one cable breaks down, the entire network could be disrupted
• Requires more communication ports than bus or star topology
Mesh Topology
• Every station has a dedicated point-to-point link to every other station.
Advantages
• Computers can communicate anytime (no contention for use of medium)
• Robust (data could take alternate routes)
• Has more privacy and security
• Easier fault isolation
Disadvantages
• More expensive and bulkier cabling / communication lines
• More communication ports are needed
• More cumbersome installation and reconnection
• Could have a higher total cost of ownership
Hybrid Topology
• It combines two or more of the traditional topologies to form a larger, more complex
topology.
Advantages
• Combines the benefits of the traditional topologies used.
Disadvantages
• Combines the disadvantages of the traditional topologies used.
Tree (Hierarchical) Topology
• A central ‘root’ node (top level of the hierarchy) is connected to one or more other nodes that
are one level lower in the hierarchy with a point-to-point physical link.
• A second-level node may also be connected to one or more other nodes that are one
level down in the hierarchy with another point-to-point link.
• The top-level node, i.e., the root node, is the only node that has no other node above it in the
hierarchy.
INTERNET CONCEPTS
Internet
A network of computer networks. It allows any computer connected to it to send and receive data
from any other computer connected to it.
Internet History
1962 - J.C.R. Licklider of MIT envisioned a globally interconnected set of computers through
which everyone could quickly access data and programs from any site. In spirit, the concept
was very much like the Internet of today.
1969 - The Pentagon’s ARPANET (Advanced Research Projects Agency Network) became functional,
linking scientific and academic researchers across the US
1972 – First public demonstration of the ARPANET; the initial application of
electronic mail was introduced
1983 -
o ARPANET adopted the Transmission Control Protocol and Internet Protocol (TCP/IP);
o ARPANET was being used by a significant number of R&D and operational
organizations;
o Widespread development of LANs, PCs, and workstations allowed the internet to
flourish
1987 - there were nearly 30,000 hosts on the Internet. The original ARPANET protocol had been
limited to 1,000 hosts, but the adoption of the TCP/IP standard made larger numbers of hosts
possible.
1989 – The World Wide Web was born
1995
o is often considered the first year the web became commercialized. While there were
commercial enterprises online prior to ’95, there were a few key developments that
happened that year. First, SSL (Secure Sockets Layer) encryption was developed by
Netscape, making it safer to conduct financial transactions (like credit card
payments) online.
o The Federal Networking Council (FNC) unanimously passed a resolution defining the
term Internet. “Internet” refers to the global information system that - (i) is logically
linked together by a globally unique address space based on the Internet Protocol (IP)
or its subsequent extensions/follow-ons; (ii) is able to support communications using
the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent
extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or
makes accessible, either publicly or privately, high level services layered on the
communications and related infrastructure described herein
Today, the Internet remains a public cooperative and independent network
Each organization on the Internet is responsible only for maintaining its own network
World Wide Web (WWW)
• collection of interlinked multimedia documents that are stored on the Internet and accessed
using a common protocol (HTTP).
• Each electronic document on the web is called a web page
• A collection of web pages is called a web site
The World Wide Web Consortium (W3C) oversees research and sets standards and
guidelines for many areas of the Internet
About 350 organizations are members of W3C. They advise, define standards, and address
other issues.
Sir Tim Berners-Lee, a British computer scientist, invented the World Wide Web in 1989.
By October of 1990, Tim had written the three fundamental technologies that remain the
foundation of today’s web (and which you may have seen appear on parts of your web browser):
HTML: HyperText Markup Language. The markup (formatting) language for the web.
URI: Uniform Resource Identifier. A kind of “address” that is unique and used to identify
each resource on the web. It is also commonly called a URL.
HTTP: Hypertext Transfer Protocol. Allows for the retrieval of linked resources from across
the web.
An ISP (Internet Service Provider) connects its customers to the Internet using a data transmission technology such as
• Dial-up
• DSL (Digital Subscriber Line)
• Cable modem
• Wireless
• Fiber optics
IP address is short for Internet Protocol (IP) address. An IP address is a number that uniquely
identifies each computer or device connected to the Internet. IP version 4 addresses are
composed of four groups of numbers separated by dots. Each number is between 0 and 255.
Ex.
127.0.0.1
253.16.44.22
72.48.108.101
Domain Name
Domain Name is the text version of an IP address. The Domain Name System (DNS) is the
method that the Internet uses to store domain names and their corresponding IP addresses.
When you specify a domain name, a DNS server translates the domain name to its associated IP
address
Ex.
www.google.com
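As a small hands-on illustration (a sketch using Python's standard socket module; it needs an Internet connection, and the printed address is only an example, since the result varies by location), the lines below ask the DNS system to translate a domain name into its IP address, the same lookup a browser performs before it can contact a web server.

    import socket

    domain = "www.google.com"                    # example domain from the text
    ip_address = socket.gethostbyname(domain)    # ask DNS for the IPv4 address
    print(domain, "->", ip_address)              # e.g. www.google.com -> 142.250.66.36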
A uniform resource locator, abbreviated URL (also known as a web address), is the full
address of a web page or file/program.
• The full address usually starts with "http://" for Web pages, "https://" for secure Web pages
• Following these prefixes are the "www.", the domain name, the path, and the file name
http://www.domain_name/path/filename
• As with physical addresses, the exact layout can vary.
• Sometimes there will be more parts to the address. Domains can be divided into multiple
subdomains.
• Sometimes there will be fewer parts - typically the larger the organization, the shorter their
domain name, ibm.com for example.
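To see how the parts of a URL fit together, the sketch below uses Python's standard urllib.parse module to split a web address into its protocol, domain name, and path. The URL itself is hypothetical and used only for illustration.

    from urllib.parse import urlparse

    url = "https://www.example.com/products/list.html"   # hypothetical address
    parts = urlparse(url)

    print(parts.scheme)   # https                -> the protocol prefix
    print(parts.netloc)   # www.example.com      -> the domain name
    print(parts.path)     # /products/list.html  -> the path and file name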
Protocol
In the networking and communications area, a protocol is the formal specification that
defines the procedures that must be followed when transmitting or receiving data. Protocols define
the format, timing, sequence, and error checking used on the network.
TCP/IP
Transmission Control Protocol / Internet Protocol
Foundation protocols for the internet
Manages conversations between servers and web clients
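A minimal sketch of that client/server conversation, using Python's standard socket library: the client opens a TCP connection to a server, sends a request, and reads the reply. The host example.com and the request text are assumptions for illustration only, and the code needs an Internet connection to run.

    import socket

    # Open a TCP connection to a web server; port 80 is the standard HTTP port.
    with socket.create_connection(("example.com", 80)) as connection:
        # Send a minimal HTTP request over the TCP connection.
        connection.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        reply = connection.recv(1024)            # read the first part of the server's reply
        print(reply.decode(errors="replace"))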
HTTP
HTTP stands for HyperText Transfer Protocol.
It’s what browsers and web servers rely on for exchanging data
HTTP is the protocol used by the World Wide Web (WWW).
HTTP is the protocol between the client (your computer using a web browser) and the server
(the web server serving web pages and similar online resources).
It is the standard procedure for exchanging information between two communicating parties or
computers, such as the client and the server.
HTTPS
stands for HyperText Transfer Protocol Secure and is a secure version of HTTP. It is
basically an encrypted HTTP channel that encrypts all the information being exchanged,
making the transfer of confidential information secure from eavesdropping.
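As a small illustration (a sketch using Python's built-in urllib.request; the address is only an example and an Internet connection is assumed), the lines below fetch a page over HTTPS, so everything exchanged with the server travels through an encrypted channel.

    from urllib.request import urlopen

    # Fetch a page over HTTPS; the library sets up the encrypted channel.
    with urlopen("https://www.example.com/") as response:
        print(response.status)                                # 200 means the request succeeded
        print(response.read(200).decode(errors="replace"))    # first bytes of the page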
Other protocols
FTP (File Transfer Protocol) – used for interactive file transfer between systems
SMTP (Simple Mail Transfer Protocol) – for transfer of electronic messages (and
attachments)
Intranet
a private network accessible only by the organization's members, employees, or others with
authorization;
an internal website that takes advantage of the same basic technology as the Internet;
a local or restricted communications network, especially a private network created using World Wide
Web software.
UNIT ASSESSMENTS
References:
OVERVIEW
This module covers the advancement and application of information technology. Some of
the trends in information technology are cloud computing, mobile applications, analytics, the
Internet of Things, and data security.
LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Demonstrate awareness of current ICT trends and social issues.
2. Explain the current ICT trends and social issues and the impact they are having on society.
3. Initiate discipline and relate knowledge of ICT trends and issues to study work.
COURSE MATERIALS
TRENDS IN ICT
The 21st century has been defined by the application of and advancement in information
technology. Information technology has become an integral part of our daily life. According to
the Information Technology Association of America, information technology is defined as “the study,
design, development, application, implementation, support or management of computer-based
information systems.”
Information technology has served as a big change agent in different aspects of business
and society. It has proven to be a game changer in resolving economic and social issues.
Some of the advanced developments in information technology are:
1. Cloud Computing
One of the most talked about concepts in information technology is cloud computing.
Cloud computing is defined as the utilization of computing services, i.e. software as well as
hardware, as a service over a network. Typically, this network is the internet.
More and more businesses around the world are turning to cloud computing to help
support their business development demands. Cloud services allow companies to offload data
management, backend development, and even design so that their talent can focus on
innovation. To achieve better IT results, companies must build or reconfigure the appropriate
policies and workflow for a cloud-based approach. Cloud computing is expected to continue
being one of the most vital future trends in information technology.
Cloud computing offers 3 types of broad services mainly Infrastructure as a Service
(IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).
2. Internet of Things
The Internet of Things (IoT) is transforming our physical world into a complex and
dynamic system of connected devices on an unprecedented scale.
Advances in technology are making possible a more widespread adoption of IoT, from
pill-shaped micro-cameras that can pinpoint thousands of images within the body, to smart
sensors that can assess crop conditions on a farm, to the smart home devices that are
becoming increasingly popular. But what are the building blocks of IoT? And what are the
underlying technologies that drive the IoT revolution?
The explosive growth of the “Internet of Things” is changing our world and the rapid
drop in price for typical IoT components is allowing people to innovate new designs and
products at home.
Internet of Things (IoT) devices are rapidly making their way into corporate spaces.
From gathering new data to the automation of infrastructure, companies are finding many
benefits from adding connectivity and intelligence to physical infrastructure. According to
CompTIA, adding digital capabilities to everyday components will drastically increase the
scope of IT responsibilities.
3. Mobile Application
Another emerging trend within information technology is mobile applications
(software applications on smartphones, tablets, etc.)
A mobile application, or mobile app, has been a success since its introduction. Mobile apps
are designed to run on smartphones, tablets and other mobile devices. They are available as
downloads from the app stores of various mobile platforms such as Apple, BlackBerry, Nokia, etc.
Some mobile apps are available for free, whereas some involve a download cost. The revenue
collected is shared between the app distributor and the app developer.
4. Human-Computer Interaction (HCI)
HCI surfaced with the advent of personal computing, as computers began appearing in homes
and offices in society-changing numbers. For the first time, sophisticated electronic systems
were available to general consumers for uses such as word processors, games units and
accounting aids. Consequently, as computers were no longer room-sized, expensive tools
exclusively built for experts in specialized environments, the need to create human-computer
interaction that was also easy and efficient for less experienced users became increasingly
vital. From its origins, HCI would expand to incorporate multiple disciplines, such as computer
science, cognitive science and human-factors engineering.
HCI soon became the subject of intense academic investigation. Those who studied
and worked in HCI saw it as a crucial instrument to popularize the idea that the interaction
between a computer and the user should resemble a human-to-human, open-ended dialogue.
Initially, HCI researchers focused on improving the usability of desktop computers (i.e.,
practitioners concentrated on how easy computers are to learn and use). However, with the
rise of technologies such as the Internet and the smartphone, computer use would
increasingly move away from the desktop to embrace the mobile world.
5. Data Analytics
The field of analytics has grown manyfold in recent years. Analytics is a process
that helps in discovering informational patterns within data. The field of analytics is a
combination of statistics, computer programming and operations research.
The field of analytics has shown growth in the areas of data analytics, predictive
analytics and social analytics.
Data analytics is a tool used to support the decision-making process. It converts raw data
into meaningful information.
Predictive analytics is a tool used to predict future events based on current and
historical information.
Social media analytics is a tool used by companies to understand and accommodate
customer needs.
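As a tiny, hypothetical illustration of turning raw data into meaningful information (plain Python with made-up sales figures), the sketch below summarizes a list of raw transaction records into totals per product, the kind of summary a decision maker might act on.

    # Raw data: hypothetical sales records (product, amount).
    sales = [
        ("notebook", 120.0), ("pen", 15.0), ("notebook", 95.0),
        ("pen", 15.0), ("bag", 450.0), ("notebook", 110.0),
    ]

    # Convert the raw records into meaningful information: total sales per product.
    totals = {}
    for product, amount in sales:
        totals[product] = totals.get(product, 0.0) + amount

    # Report the totals from highest to lowest.
    for product, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
        print(product, total)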
The ever-changing field of information technology has seen great advancement and
changes in the last decade. And from the emerging trend, it can be concluded that its
influence on business is ever growing, and it will help companies to serve customers better.
6. Artificial Intelligence
Artificial intelligence (AI) requires significant computing resources (which can be
procured in the cloud), algorithms that allow learning (which can be baked into products
or provided as a service), and contextual awareness (which can come from IoT devices or
massive collections of data). By adding a layer of intelligence to the technical solutions they
are building, companies can both manage a more extensive IT architecture and solve a
broader range of problems.
7. Data Security
Data security is one of the top trending areas in computer science. As IT services rely on digital
technology to work faster, data security becomes a top priority. It is difficult to improve security
efforts when technology is updating so rapidly. Many businesses have increased investments
in security, but beyond the technical aspects, organizations will also begin building business
processes that enhance security. In order to adapt to rapid IT development, companies
will have to shift their security mindset from technology-based defenses to proactive steps
that include technology, process, and education. Data security will always be
important among the latest technology trends in information technology.
ISSUES IN ICT
1. Data Privacy
Data privacy refers to ensuring the integrity, confidentiality, and availability
of personal information that is collected, stored and processed. Data privacy, also called
information privacy, is the aspect of IT that deals with the ability an organization or individual
has to determine what data in a computer system can be shared with third parties.
Data privacy is challenging since it attempts to use data while protecting an individual's
privacy preferences and personally identifiable information. The fields of computer security,
data security, and information security all design and use software, hardware and human
resources to address this issue.
To ensure data privacy, the Philippines passed Republic Act No. 10173, known
as the Data Privacy Act of 2012.
2. Cybersecurity
The cybersecurity challenge is two-fold. First is that Cyberattacks are growing in size
and sophistication and second, millions of cybersecurity jobs remain unfilled.
Organizations cannot take IT security lightly. An analysis of worldwide identity and
access management by the International Data Corporation revealed that 55% of consumers
would switch platforms or providers due to the threat of a data breach, and 78% would switch
if a breach impacted them directly. Customers aren’t willing to put their data at risk.
The problem is there aren’t enough IT professionals with cybersecurity expertise. Forty
percent of IT decision-makers say they have cybersecurity skills gaps on their teams. It’s also
identified as the most challenging hiring area in IT.
There isn’t an immediate solution to this problem, but a long-term fix is to build your
cyber workforce from the inside. Invest in cybersecurity training and upskill your current staff.
Hiring and outsourcing isn’t always a viable (or cheap) solution. Current IT professionals who
know the industry are more apt to transition into successful cybersecurity professionals.
UNIT ASSESSMENTS
1. What new technology coming out in the next 10 years do you think will disrupt the global IT
industry?
2. Make an analysis on how cybersecurity is being implemented in the Philippines.
3. From recent technology updates, what new devices are being connected to the internet?
4. How do students apply the concept of cloud computing?
5. Give examples of cybersecurity attacks which became headlines in the past year
(Philippines or abroad)
6. Related to question #5, give an example very specific to intrusion of data privacy.
7. Give examples of data security measure which are being implemented in certain institutions,
e.g. banks, school, offices.
8. Identify and discuss one or two applications of the Internet of Things that you think might be
useful in this time of health crisis.
9. If you are to create a mobile application, conceptualize an application that might be effective
in this situation of health crisis.
10. Why do you think access to correct and accurate data is essential these days with respect
to politics, health, world events, and the like?
References:
https://www.interaction-design.org/literature/topics/human-computer-interaction
https://online.stanford.edu/courses/xee100-introduction-internet-things
https://www.globalknowledge.com/us-en/resources/resource-library/articles/12-challenges-
facing-it-professionals/#2
https://www.bizvibe.com/blog/it-solutions-outsourcing/latest-technology-trends-information-
technology/
https://www.managementstudyguide.com/emerging-trends-in-information-technology.htm
https://www.coursera.org/learn
https://insidemanila.ph/article/293/heres-what-we-know-so-far-about-the-dfa-data-breach
https://www.privacy.gov.ph/data-privacy-act/
https://lawphil.net/statutes/repacts/ra2012/ra_10175_2012.html
OVERVIEW
This module gives an introduction to three special interest topics related to information
technology: Artificial Intelligence (AI), Data Science, and Social Networking and Society.
The topic on artificial intelligence defines AI, lists down the milestones in AI’s history and
explains the two buzzwords related to AI, machine learning and deep learning. It also discusses
the different fields where we would see the application of AI.
Data science, on the other hand, discusses how this field of science came about. The
emergence of big data and the need to analyze this huge amount of data prompted the beginnings
of data science. The topic also explains the roles and skills of a data scientist.
Spending time on social networking sites has become a part of almost everybody’s daily
routine. The topic on social networking delves into the pros and cons of social media. It also briefly
discusses the most popular social networking sites.
COURSE MATERIALS
Artificial Intelligence
(https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp
https://www.britannica.com/technology/artificial-intelligence
https://pathmind.com/wiki/ai-vs-machine-learning-vs-deep-learning)
Artificial intelligence (AI) refers to the simulation of human intelligence in machines that
are programmed to think like humans and mimic their actions. AI is frequently applied to the
project of developing systems endowed with the intellectual processes characteristic of humans,
such as the ability to reason, discover meaning, generalize, or learn from past experience.
John McCarthy, widely recognized as one of the godfathers of AI, defined it as “the science
and engineering of making intelligent machines.”
A computer system able to perform tasks that normally require human intelligence, such
as visual perception, speech recognition, decision-making, and translation between
languages.
o Year 1943: The first work which is now recognized as AI was done by Warren McCulloch
and Walter Pitts in 1943. They proposed a model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950.
Alan Turing published "Computing Machinery and Intelligence", in which he proposed a
test that can check a machine's ability to exhibit intelligent behavior equivalent to human
intelligence, now called the Turing test.
o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence
program", which was named "Logic Theorist". This program proved 38 of 52
mathematics theorems and found new and more elegant proofs for some of them.
o Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist
John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic
field.
o Year 1966: Researchers emphasized developing algorithms which can solve
mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which was
named ELIZA.
o Year 1972: The first intelligent humanoid robot was built in Japan; it was named
WABOT-1.
o The period between 1974 and 1980 was the first AI winter. An AI winter refers
to a time period during which computer scientists dealt with a severe shortage of
government funding for AI research.
o During AI winters, public interest in artificial intelligence decreased.
o Year 1980: After the AI winter, AI came back with "expert systems". Expert systems
were programs that emulate the decision-making ability of a human expert.
o In the year 1980, the first national conference of the American Association for Artificial
Intelligence was held at Stanford University.
o The period between 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research because of the high cost
and limited results, even though expert systems such as XCON had initially been very cost effective.
o Year 1997: In the year 1997, IBM's Deep Blue beat world chess champion Garry Kasparov
and became the first computer to beat a world chess champion.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum
cleaner.
o Year 2006: AI entered the business world. Companies like Facebook,
Twitter, and Netflix started using AI.
o Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to
answer complex questions as well as riddles. Watson proved that it could
understand natural language and solve tricky questions quickly.
o Year 2012: Google launched the Android feature "Google Now", which was able
to provide information to the user as a prediction.
o Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition based on the
famous "Turing test."
o Year 2018: "Project Debater" from IBM debated complex topics with two master
debaters and performed extremely well.
o Google demonstrated an AI program, "Duplex", a virtual assistant that
booked a hairdresser appointment over the phone, and the lady on the other side did not
notice that she was talking with a machine.
The applications for artificial intelligence are endless. The technology can be applied to
many different sectors and industries.
AI in Healthcare: Companies are applying machine learning to make better and faster diagnoses
than humans. One of the best-known technologies is IBM’s Watson. It understands natural
language and can respond to questions asked of it. The system mines patient data and other
available data sources to form a hypothesis, which it then presents with a confidence scoring
schema.
AI in Business: Machine learning algorithms are being integrated into analytics and customer
relationship management (CRM) platforms to uncover information on how to better serve
customers. Chatbots have already been incorporated into websites and e-commerce platforms to provide
immediate service to customers. Automation of job positions has also become a talking point
among academics and IT consultancies.
AI in Education: It automates grading, giving educators more time. It can also assess students
and adapt to their needs, helping them work at their own pace.
AI in Automotive Industry: Some automotive companies are using AI to provide virtual assistants
to their users for better performance; for example, Tesla has introduced TeslaBot, an intelligent virtual
assistant. Various companies are currently working on developing self-driving cars, which can
make your journey safer and more secure. Just like humans, self-driving cars need to have sensors
to understand the world around them and a brain to collect and process information and choose specific actions
based on the information gathered. Autonomous vehicles are equipped with advanced tools to gather
information, including long-range radar, cameras, and LiDAR (light detection and ranging).
AI in Gaming: AI can be used for gaming purposes. AI machines can play strategic games
like chess, where the machine needs to think of a large number of possible moves.
AI in Data Security: The security of data is crucial for every company, and cyber-attacks are
growing very rapidly in the digital world. AI can be used to make your data more safe and secure.
Examples such as the AEG bot and the AI2 platform are used to detect software bugs and cyber-
attacks more effectively.
AI in Social Media: Social Media sites such as Facebook, Twitter, and Snapchat contain billions
of user profiles, which need to be stored and managed in a very efficient way. AI can organize
and manage massive amounts of data. AI can analyze lots of data to identify the latest trends,
hashtags, and requirements of different users.
AI in Travel & Transport: AI is in high demand in the travel industry. AI is capable
of performing various travel-related tasks, from making travel arrangements to suggesting
hotels, flights, and the best routes to customers. Travel companies are using AI-powered chatbots
that can interact with customers in a human-like way for better and faster responses.
AI in Robotics: Artificial intelligence has a remarkable role in robotics. Usually, general robots
are programmed to perform some repetitive task, but with the help of AI, we can
create intelligent robots that can perform tasks based on their own experience without being pre-
programmed. Humanoid robots are among the best examples of AI in robotics; recently, the intelligent
humanoid robots named Erica and Sophia have been developed, which can talk and behave like
humans.
AI in Entertainment: We are currently using some AI based applications in our daily life with
some entertainment services such as Netflix or Amazon. With the help of ML/AI algorithms, these
services show the recommendations for programs or shows. The role of AI in film, television and
media can also be felt in marketing and advertising, personalization of user experience, and
search optimization. (https://emerj.com/ai-sector-overviews/ai-in-movies-entertainment-visual-media/)
Learning Algorithms
(https://www.deeplearningbook.org/contents/ml.html)
In this relatively formal definition of the word “task,” the process of learning itself is not the
task. Learning is our means of attaining the ability to perform the task. For example, if we want a
robot to be able to walk, then walking is the task. We could program the robot to learn to walk, or
we could attempt to directly write a program that specifies how to walk manually.
Machine learning approaches are traditionally divided into three broad categories, depending
on the nature of the "signal" or "feedback" available to the learning system:
Supervised learning: The computer is presented with example inputs and their desired
outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to
outputs (a short sketch of this idea follows this list).
Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to
find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden
patterns in data) or a means towards an end (feature learning).
Reinforcement learning: A computer program interacts with a dynamic environment in which
it must perform a certain goal (such as driving a vehicle or playing a game against an
opponent). As it navigates its problem space, the program is provided feedback that's
analogous to rewards, which it tries to maximize.
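A minimal sketch of supervised learning (plain Python, with made-up data): the "teacher" supplies example inputs together with their desired outputs, and the program learns a general rule, here a line y = w*x + b fitted by simple gradient descent, that it can then apply to new inputs.

    # Labeled examples supplied by the "teacher": inputs x with desired outputs y.
    # The (made-up) data roughly follow y = 2x + 1.
    examples = [(1.0, 3.0), (2.0, 5.1), (3.0, 6.9), (4.0, 9.2)]

    w, b = 0.0, 0.0            # the rule y = w*x + b, starting from a guess
    learning_rate = 0.01

    for _ in range(5000):      # repeat: predict, measure the error, adjust the rule
        for x, y in examples:
            error = (w * x + b) - y
            w -= learning_rate * error * x
            b -= learning_rate * error

    print("learned rule: y =", round(w, 2), "* x +", round(b, 2))
    print("prediction for x = 5:", round(w * 5 + b, 2))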
Deep learning
(https://www.investopedia.com/terms/d/deep-learning.asp
https://orbograph.com/deep-learning-how-will-it-change-healthcare/)
Deep Learning is an artificial intelligence (AI) function that imitates the workings of the
human brain in processing data and creating patterns for use in decision making. Deep learning
is a subset of machine learning in artificial intelligence that has networks capable of learning
unsupervised from data that is unstructured or unlabeled. It is also known as deep neural learning or
a deep neural network.
Deep learning, a subset of machine learning, utilizes a hierarchical level of artificial neural
networks to carry out the process of machine learning. The artificial neural networks are built like
the human brain, with neuron nodes connected together like a web. While traditional programs
build analysis with data in a linear way, the hierarchical function of deep learning systems enables
machines to process data with a nonlinear approach.
Figure 8.2. An illustration of a deep learning neural network (Source: University of Cincinnati)
Deep learning, also known as hierarchical learning or deep structured learning, is a type
of machine learning that uses a layered algorithmic architecture to analyze data.
In deep learning models, data is filtered through a cascade of multiple layers, with each
successive layer using the output from the previous one to inform its results. Deep learning
models can become more and more accurate as they process more data, essentially learning
from previous results to refine their ability to make correlations and connections.
Deep learning is loosely based on the way biological neurons connect with one another to
process information in the brains of animals. Similar to the way electrical signals travel across the
cells of living creatures, each subsequent layer of nodes is activated when it receives stimuli from
its neighboring neurons.
In artificial neural networks (ANNs), the basis for deep learning models, each layer may
be assigned a specific portion of a transformation task, and data might traverse the layers multiple
times to refine and optimize the ultimate output.
These “hidden” layers serve to perform the mathematical translation tasks that turn raw
input into meaningful output.
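To make the idea of layered processing concrete, the sketch below (plain Python with hand-picked, hypothetical weights) passes one input through two "hidden" layers of a tiny neural network; each layer transforms the output of the previous one before the final result is produced. Real deep learning systems do the same thing at a much larger scale and, crucially, learn the weights from data instead of having them written by hand.

    import math

    def layer(inputs, weights, biases):
        # One layer: weighted sums of the inputs, then a non-linear "squashing" function.
        outputs = []
        for neuron_weights, bias in zip(weights, biases):
            total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
            outputs.append(1.0 / (1.0 + math.exp(-total)))   # sigmoid activation
        return outputs

    x = [0.5, 0.8]   # raw input with two features

    # Hypothetical weights; a real network learns these from data.
    hidden1 = layer(x, [[0.2, -0.4], [0.7, 0.1], [-0.3, 0.9]], [0.0, 0.1, -0.1])
    hidden2 = layer(hidden1, [[0.5, -0.2, 0.3], [0.1, 0.8, -0.6]], [0.05, -0.05])
    output = layer(hidden2, [[1.2, -0.7]], [0.0])

    print("output of the network:", output[0])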
Watch:
Read:
UNIT ASSESSMENTS/ACTIVITIES
References:
https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp
https://www.britannica.com/technology/artificial-intelligence
https://pathmind.com/wiki/ai-vs-machine-learning-vs-deep-learning
https://www.javatpoint.com/history-of-artificial-intelligence
https://www.javatpoint.com/application-of-ai
https://www.valluriorg.com/blog/artificial-intelligence-and-its-applications/
https://emerj.com/ai-sector-overviews/ai-in-movies-entertainment-visual-media/
https://www.deeplearningbook.org/contents/ml.html
Bishop, C.M. (2006), Pattern Recognition and Machine Learning
https://www.investopedia.com/terms/d/deep-learning.asp
https://orbograph.com/deep-learning-how-will-it-change-healthcare/
LEARNING OUTCOMES
COURSE MATERIALS
Data science provides meaningful information based on large amounts of complex data
or big data. Data science, or data-driven science, combines different fields of work in statistics
and computation to interpret data for decision-making purposes.
Data is drawn from different sectors, channels, and platforms including cell phones, social
media, e-commerce sites, healthcare surveys, and Internet searches. The increase in the amount
of data available opened the door to a new field of study based on big data—the massive data
sets that contribute to the creation of better operational tools in all sectors.
However, the ever-increasing data is unstructured and requires parsing for effective
decision making. This process is complex and time-consuming for companies—hence, the
emergence of data science.
Historically, data was used as an ancillary to core business and was gathered for specific
purposes. Retailers recorded sales for accounting. Manufacturers recorded raw materials for
quality management. But as the demand for Big Data analytics emerged, data no longer serves
only its initial purpose. Companies able to access huge amounts of data possess a valuable asset
that, when combined with the ability to analyze it, has created a whole new industry.
Big data refers to the large, diverse sets of information that grow at ever-increasing rates.
It encompasses the volume of information, the velocity or speed at which it is created and
collected, and the variety or scope of the data points being covered. Big data often comes from
multiple sources and arrives in multiple formats.
Successful players in Big Data are recognized well by the market. Some examples of companies
with big data are Amazon, Facebook, Google, Twitter, and SAP, to name a few.
1962 John W. Tukey writes in “The Future of Data Analysis”: … Data analysis, and the parts of
statistics which adhere to it, must…take on the characteristics of science rather than those of
mathematics… data analysis is intrinsically an empirical science
1974 Peter Naur publishes Concise Survey of Computer Methods in Sweden and the United
States. Naur offers the following definition of data science: “The science of dealing with data,
once they have been established, while the relation of the data to what they represent is delegated
to other fields and sciences.”
1977 The International Association for Statistical Computing (IASC) is established as a Section
of the ISI. “It is the mission of the IASC to link traditional statistical methodology, modern computer
technology, and the knowledge of domain experts in order to convert data into information and
knowledge.”
1989 Gregory Piatetsky-Shapiro organizes and chairs the first Knowledge Discovery in Databases
(KDD) workshop. In 1995, it became the annual ACM SIGKDD Conference on Knowledge
Discovery and Data Mining (KDD).
1996 Members of the International Federation of Classification Societies (IFCS) meet in Kobe,
Japan, for their biennial conference. For the first time, the term “data science” is included in the
title of the conference (“Data science, classification, and related methods”).
1997 In his inaugural lecture for the H. C. Carver Chair in Statistics at the University of Michigan,
Professor C. F. Jeff Wu (currently at the Georgia Institute of Technology), calls for statistics to be
renamed data science and statisticians to be renamed data scientists.
May 2005 Thomas H. Davenport, Don Cohen, and Al Jacobson publish “Competing on Analytics,”
a Babson College Working Knowledge Research Center report, describing “the emergence of a
new form of competition based on the extensive use of analytics, data, and fact-based decision
making... Instead of competing on traditional factors, companies are beginning to employ
statistical and quantitative analysis and predictive modeling as primary elements of competition.
July 2008 “The Skills, Role & Career Structure of Data Scientists & Curators: Assessment of
Current Practice & Future Needs,” defines data scientists as “people who work where the
research is carried out--or, in the case of data centre personnel, in close collaboration with the
creators of the data--and may be involved in creative enquiry and analysis, enabling others to
work with digital data, and developments in database technology.”
January 2009 Hal Varian, Google’s Chief Economist, tells the McKinsey Quarterly: “The ability to
take data—to be able to understand it, to process it, to extract value from it, to visualize it, to
communicate it—that’s going to be a hugely important skill in the next decades… Because now
we really do have essentially free and ubiquitous data. So the complimentary scarce factor is the
ability to understand that data and extract value from it… I do think those skills—of being able to
access, understand, and communicate the insights you get from data analysis—are going to be
extremely important.
May 2011 David Smith writes in "’Data Science’: What's in a name?”: “The terms ‘Data Science’
and ‘Data Scientist’ have only been in common usage for a little over a year, but they've really
taken off since then: many companies are now hiring for ‘data scientists’, and entire conferences
are run under the name of ‘data science’
September 2011 D.J. Patil writes in “Building Data Science Teams”: “Starting in 2008, Jeff
Hammerbacher (@hackingdata) and I sat down to share our experiences building the data and
analytics groups at Facebook and LinkedIn. In many ways, that meeting was the start of data
science as a distinct professional specialization.
2012 Tom Davenport and D.J. Patil publish “Data Scientist: The Sexiest Job of the 21st Century”
in the Harvard Business Review
Since data scientists have an in-depth understanding of data, they work very well in moving
organizations towards deep learning, machine learning, and AI adoption, as these companies
generally have the same data-driven aims. They also help in developing software
that involves large amounts of data and analytics.
Data scientists help companies of all sizes to figure out the ways to extract useful
information from an ocean of data to help optimize and analyze their organizations based on these
findings. Data scientists focus on asking data-centric questions, analyzing data, and applying
statistics & mathematics to find relevant results.
Data scientists have their background in statistics & advanced mathematics, AI and
advanced analysis & machine learning. For companies that want to run an AI based project, it is
crucial to have a data scientist on the team in order to customize algorithms, make the most of
their data, and weigh data-centric decisions.
UNIT ASSESSMENTS/ACTIVITIES
1. What fields of science are associated with data science?
2. Why do you think there is a lack of data scientists in the industry?
3. What are the usual sources of big data?
4. What data do you provide by using your social networking account, e.g. Facebook?
References:
https://www.investopedia.com/terms/d/data-science.asp
https://www.forbes.com/sites/peterpham/2015/08/28/the-impacts-of-big-data-that-you-may-not-
have-heard-of
https://www.investopedia.com/terms/b/big-data.asp
https://www.forbes.com/sites/gilpress/2013/05/28/a-very-short-history-of-data-science
https://searchenterpriseai.techtarget.com/definition/data-scientist
https://towardsdatascience.com/how-data-science-will-impact-future-of-businesses
LEARNING OUTCOMES
At the end of this module, the student is expected to:
1. Discuss where specific popular social media sites are commonly used
2. Analyze the benefits of social media to society
3. Discuss the disadvantages of social media
COURSE MATERIALS
What is Social Networking
A social networking service (also social networking site or social media) is an online platform
which people use to build social networks or social relationships with other people who share
similar personal or career interests, activities, backgrounds or real-life connections. Social
networking sites allow users to share ideas, digital photos and videos, posts, and to inform others
about online or real-world activities and events with people in their network.
Facebook. This is the largest social media network on the Internet, both in terms of total number
of users and name recognition. Facebook came into existence on February 4, 2004, and within
12 years it managed to gather more than 1.59 billion monthly active users; this
automatically makes it one of the best mediums for connecting people from all over the world with
your business. It is estimated that more than 1 million small and medium-sized businesses use
the platform to advertise their business.
Twitter. You might think that restricting posts to 140 characters is no way to advertise
a business, but you may be surprised to know that this social media platform has more than 320
million monthly active users who can make use of the 140-character limit to pass on information.
Businesses can use Twitter to interact with prospective clients, answer questions, release the latest
news and, at the same time, use targeted ads with specific audiences. Twitter was founded on
March 21, 2006, and has its headquarters in San Francisco, California.
Google+. Google+ is one of the popular social media sites these days. Its SEO value alone
makes it a must-use tool for any small business. Google+ was launched on December 15, 2011,
and has joined the big leagues, registering 418 million active users as of December 2015.
YouTube. YouTube, the biggest and most well-known video-based social networking site, was
founded on February 14, 2005, by three former PayPal employees. It was later purchased by
Google in November 2006 for $1.65 billion. YouTube has more than 1 billion site visitors every
month and is the second most popular search engine behind Google.
Pinterest. Pinterest is a relative newcomer in the social networking field. The platform consists
of digital bulletin boards where businesses can pin their content. Pinterest
announced in September 2015 that it had reached 100 million users. Small businesses whose target
audience is mostly made up of women should invest in Pinterest, as the
greater part of its visitors are women.
Instagram. Instagram is a visual social networking platform. The site has more than 400 million
active users and is owned by Facebook. A significant number of its users use it to post
about travel, fashion, food, art and similar subjects. The platform is also
distinguished by its unique filters together with its video and photo editing features.
Around 95 percent of Instagram users also use Facebook.
Tumblr. Tumblr is one of the hardest-to-use social networking platforms, but
at the same time it is one of the most interesting sites. The platform permits several
different post formats, including quote posts, chat posts, video and photo posts, as well as
audio posts, so you are never limited in the kind of content that you can share. Like
Twitter, reblogging, which is similar to retweeting, is quick and simple. The
social networking site was founded by David Karp in February 2007 and at present hosts
more than 200 million blogs.
Flickr. Flickr, pronounced "Flicker," is an online image and video hosting platform created by
the then Vancouver-based Ludicorp on February 10, 2004, and later acquired by Yahoo
in 2005. The platform is popular with users who share and embed photographs. Flickr has more than
112 million users and has a presence in more than 63 countries. Millions of photographs are
shared daily on Flickr.
Reddit. This is a social news and entertainment networking site where registered users can submit
content, for example, direct links and text posts. Users are also able to
organize and determine their position on the site's pages by voting submissions up or down.
Submissions with the most votes appear in the top category or on the main page.
Snapchat. Snapchat is an image messaging application that was created by Reggie
Brown, Evan Spiegel and Bobby Murphy when they were students at Stanford University.
The application was officially released in September 2011, and within a short span of
time it grew enormously, recording an average of 100 million daily active users
as of May 2015. More than 18 percent of all social media users use Snapchat.
WhatsApp. WhatsApp Messenger is a cross-platform instant messaging client for smartphones,
PCs and tablets. This application needs an Internet connection to send images, texts, documents,
audio and video messages to other users that have the app installed on their devices. Launched
in January 2010, WhatsApp Inc. was purchased by Facebook on February 19, 2014, for about
$19.3 billion. Today, more than 1 billion people make use of the service to communicate with
their companions, friends and family, and even clients.
TikTok
(Source: https://www.commonsensemedia.org/blog/parents-ultimate-guide-to-tiktok
https://slate.com/technology/2018/09/tiktok-app-musically-guide.html)
Akram and Kumar (2017) listed down the positive effects of social media on society. They are:
Connectivity – easier for people to connect with anyone regardless of location,
Education – easy for experts and professionals to educate via social media, regardless of
location, education background, and it is also free,
Help – A person’s issues can be shared with a group for help and support,
Information and updates – Availability of most recent happenings around the planet,
Advertising – Business can be promoted to a very wide audience,
Noble cause – Effective way to solicit contribution for needy people, and
Helps in building communities – People of different communities can connect to discuss
and share related stuff.
While the negative effects are:
Cyber harassment – Because of anonymity on the net, it is extremely easy to
bully people on the internet,
Hacking – Personal information can be stolen through hacking,
Addiction – People spend so much more time than is necessary and lose a sense of
productiveness,
Fraud and scams – Fraudulent activities involving money come in many forms, and
Reputation – Damage to reputation by spreading false stories on the internet.
In general, social media has contributed positively to the society in many ways. One
advantage which everybody would be able to relate to would be in our connectivity. Connecting
with people has never been so easy. Friends and family we have not seen or talked with for
quite some time suddenly become just a message away. It has given people more opportunities
for socialization and for keeping updated on what’s going on with friends, family, business
partners, or mere acquaintances. The other advantages, such as education, ease of sharing
information, the help that comes from being able to link with people who can provide guidance
and assistance, and the building of communities, are major benefits that people enjoy with social
media.
Some of the negative effects could be avoided by making sure our user profile is secure
so that it will not be available to people we do not know. It will also help to use strong passwords.
People in social media should also study and examine businesses and investment opportunities
being offered before entering into any deal online. It is necessary to be circumspect when dealing
with people we only talk with, most of the time, using only chats or messages. Setting a time limit
for using social media is also a good practice, as it makes us monitor our use and makes
us conscious of just how much time we have already spent on social media.
Depending on the individual and his discipline on the use of social media, the benefits may
outweigh the disadvantages or the downside may overwhelm the advantages.
UNIT ASSESSMENTS
1. Give three social media sites and differentiate them
2. From the study made by Allcott et al., would you say that there are more harmful effects of
the use of Facebook?
3. From the advantages of social media (Akram & Kumar), give three which are most important
to you.
4. Which among the harmful effects of social media have you experienced? Elaborate on your
answer.
References:
https://en.wikipedia.org/wiki/Social_networking_service
A Study on Positive and Negative Effects of Social Media on Society, W.Akram, R.Kumar, 2017
https://www.commonsensemedia.org/blog/parents-ultimate-guide-to-tiktok
https://slate.com/technology/2018/09/tiktok-app-musically-guide.html
The Welfare Effects of Social Media By Hunt Allcott, Luca Braghieri, Sarah Eichmeyer and
Matthew Gentzkow* American Economic Review 2020, 110(3): 629–676
https://doi.org/10.1257/aer.20190658