
Learning Objectives:

 Define the term computer and its components.
 Explain the concepts of hardware and software.
 Differentiate application software from system software.
 Define what SDLC is, recognize its importance, and identify
the different phases of SDLC.

Introduction to Programming

Programming is the process of taking an algorithm and encoding it into a
notation, a programming language, so that it can be executed by a computer.
Although many programming languages and many different types of computers
exist, the important first step is the need to have the solution. Without an
algorithm there can be no program. Computer science is not the study of
programming. Programming, however, is an important part of what a computer
scientist does. Programming is often the way that we create a representation for
our solutions. Therefore, this language representation and the process of
creating it becomes a fundamental part of the discipline. Algorithms describe the
solution to a problem in terms of the data needed to represent the problem
instance and the set of steps necessary to produce the intended result.
Programming languages must provide a notational way to represent both the
process and the data. To this end, languages provide control constructs and data
types.
What is a Computer?

A computer is an electronic device capable of performing complex
computations in a short time. It is a fast, electronic calculating machine that
accepts input information, processes it according to a list of internally stored
instructions called a program, and produces the resultant output information.

A program is a set of instructions telling a computer what to do or how to behave.

Programming is the craft of implementing one or more interrelated
abstract algorithms using a particular programming language to produce a
concrete computer program.

Today, computers help make jobs that used to be complicated much
simpler. For example, you can write a letter in a word processor, edit it anytime,
spell check it, print copies, and send it to someone across the world in a matter
of seconds. All of these activities would have taken someone days, if not months,
to do before computers. And all of the above is just a small fraction of what
computers can do.

Hardware and its Concepts

Hardware consists of devices, like the computer itself, the monitor,
keyboard, printer, mouse, and speakers.

Inside your computer, there are more bits of hardware including the
motherboard where you would find the main processing chips that make up the
central processing unit (CPU). The hardware processes the commands it receives
from the software, and performs tasks or calculations.

Hardware represents the physical and tangible components of a computer,
i.e. the components that can be seen and touched. Examples of hardware are
the following:
 Input devices -- keyboard, mouse etc.
 Output devices -- printer, monitor etc.
 Secondary storage devices -- Hard disk, CD, DVD etc.
 Internal components -- CPU, motherboard, RAM etc.

Data Representation

Data representation is the conversion of images, letters and sounds to
digital electrical signals. Here, a digital electrical signal refers to a combination
or sequence of on and off signals. It refers to the methods used internally to
represent information stored in a computer. Computers store lots of different
types of information:

 Numbers – normally we write numbers using digits 0 to 9. This
is called base 10. However, any positive integer (whole number)
can be easily represented by a sequence of 0's and 1's. Numbers
in this form are said to be in base 2 and they are called binary
numbers. Base 10 numbers use a positional system based on
powers of 10 to indicate their value.
 Text – text can be represented easily by assigning a unique
numeric value for each symbol used in the text. For example, the
widely used ASCII code (American Standard Code for Information
Interchange) defines 128 different symbols (all the characters
found on a standard keyboard, plus a few extra), and assigns to
each a unique numeric code between 0 and 127.
 Graphics of many varieties (stills, video, animation) – graphics
that are displayed on a computer screen consist of pixels: the tiny
"dots" of color that collectively "paint" a graphic image on a
computer screen.
The pixels are organized into many rows on the screen.
 Sound – needs to be converted into binary for computers to be
able to process it. To do this, sound is Captured - usually by a
microphone - and then converted into a digital signal.

At least, these all seem different to us. However, ALL types of information stored
in a computer are stored internally in the same simple format: a sequence of 0's
and 1's.
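The points above can be made concrete with a short sketch. This minimal Python example (the values are illustrative, not from the text) shows a number and a text string both reduced to sequences of 0's and 1's:

```python
# A number and a piece of text, both viewed as sequences of 0's and 1's.

n = 19
print(bin(n))                # '0b10011' -- the number 19 written in base 2

text = "Hi"
codes = [ord(c) for c in text]            # the numeric code of each character
print(codes)                              # [72, 105]
print([format(c, '08b') for c in codes])  # each code as 8 binary digits
```

For ASCII characters the code points produced by `ord` match the ASCII table, so `'H'` is 72 and `'i'` is 105.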

Digital Electronics

Digital electronics is the technology used in the manipulation of on and off
(digital electrical) signals. Most computers are digital, as opposed to analog,
devices. A digital device works with discrete signals such as 0 and 1 while an
analog device works with continuous data.

As an analogy, a traditional light switch has two discrete states — on and
off. A dimmer switch, on the other hand, has a rotating dial that controls a
continuous range of brightness — from none (off) to full brightness. The first is
therefore, a digital device while the latter is an analog device.

Digital Data Representation

One major distinction between kinds of data is between digital and analog
data. Digital data code everything they represent in terms of figures (inside the
computer, in binary digits); analog data bear an iconic resemblance to the
phenomena they represent. Digital computers can only process digital data
(because a bit is either set or not set, but not 37% set).

Analog data therefore have to be digitized before they enter the digital
computer (and converted back to analog form when they leave it, in order to be
perceived). How can a computer represent numbers using bits? The 0 and 1 are
also known as bits or binary digits.

A 5-bit sequence or combination of 0 and 1 could be written as 10011. At
face value, this sequence may have no clear significance. However, this gains
meaning with additional information. For instance, this may pertain to a certain
character, number, or even the color of a dot on the screen, depending on the
context where this appears.

Numeric data like your age, salary, and electricity bill are easily
understood by humans but how is it possible for a computer to show a number
using only binary digits? The computer uses the binary number system which
uses only two digits: 0 and 1. There are no 2’s or 3’s. A series of 0’s and 1’s
results in a particular number much in the same way as the decimal number
system we use.
The table below shows how the binary system works. The decimal system
uses ten symbols or numerals (0 – 9) to represent numbers while the binary
number system uses only two symbols (0 and 1).

Notice in the figure above that there are 4 digits, namely, 0, 1, 0 and 1.
The positions of the 0’s and 1’s hold significant values. The illustration will show
the corresponding position and value in the binary number system.

If the box contains 0, the value is turned “off”. However, if the box
contains 1, the value of that box is turned “on” and will be added to the value
of the number. Such a number system, where the placement of a symbol or
numeral carries weight, is called a “positional number system”. The decimal
number system is considered a positional number system. But how then does a
computer represent words and letters using bits?
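The positional weighting described above can be worked through in a few lines of Python. This sketch evaluates the 4-digit binary number 0101 by adding the weight (a power of 2) of every position that is "on":

```python
# Evaluate the binary number 0101: each position carries a weight,
# a power of 2, and the "on" (1) positions contribute their weight.
bits = "0101"
value = 0
for position, bit in enumerate(reversed(bits)):
    weight = 2 ** position       # 1, 2, 4, 8, ... from right to left
    if bit == "1":
        value += weight
print(value)                     # 5  (0*8 + 1*4 + 0*2 + 1*1)

# Python's built-in conversion agrees:
assert value == int(bits, 2)
```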

Bits can also be used to represent character data. It is analogous to Morse
code that uses dots and dashes to represent letters in the alphabet. In this
case, computers make use of 0 and 1 as a replacement to dots and dashes.
Common Types of Codes

Some of the common types of codes are:

 ASCII stands for American Standard Code for Information
Interchange. It defines a set of characters that are displayed on screen.
Standard ASCII uses only seven bits for each character but it has been
extended to 8 bits to accommodate more symbols. See the table below for
the ASCII codes equivalent.

 EBCDIC stands for Extended Binary Coded Decimal Interchange Code,
which is an alternative 8-bit code used by older IBM mainframe
computers.
 UNICODE is an 8, 16, or 32-bit character encoding scheme that provides
codes for more than 65,000 characters. It was developed to represent all the
writing systems of the world.
 Extended ASCII code makes use of a series of 0’s and 1’s to represent
256 characters (including letters for uppercase and lowercase, numbers,
and symbols).
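The differences between these codes can be observed directly in Python, whose standard library includes codecs for ASCII, the Unicode encodings, and an EBCDIC code page (`cp500`). This sketch encodes the same characters under each scheme:

```python
# One character, several encoding schemes, different byte sequences.
ch = "A"
print(ch.encode("ascii"))       # b'A'    -- 1 byte, code 65
print(ch.encode("cp500"))       # b'\xc1' -- EBCDIC code page: 'A' is 0xC1
print(ch.encode("utf-16-le"))   # 2 bytes per character
print(ch.encode("utf-32-le"))   # 4 bytes per character

# A symbol outside the 128 standard ASCII characters needs an
# extended or Unicode encoding:
euro = "€"
print(euro.encode("utf-8"))     # b'\xe2\x82\xac' -- 3 bytes in UTF-8
```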

Bits and Bytes

A bit (short for binary digit) is the smallest unit of data in a computer. A
bit has a single binary value, either 0 or 1. Although computers usually provide
instructions that can test and manipulate bits, they generally are designed to
store data and execute instructions in bit multiples called bytes.

Some people interchange the use of bits and bytes. Note that bit is a
contraction of binary digit. The abbreviation for bit is a lowercase “b”.

A byte is a collection of bits (8 to be exact) and is usually abbreviated as
an uppercase “B”. Usually, the unit bits are used when measuring transmission
speeds. An example of a speed transmission measurement is the transmission
rate of a modem (e.g., 56 kilobits per second or kbps).

On the other hand, the unit bytes are usually used in measuring capacity.
For instance, bytes are used to measure the capacity of hard disks (e.g., 80 GB
or Giga Bytes).
Bits and bytes use prefixes to indicate large values. Note that the
measurement is in base 2. A common mistake is to think that the binary kilo
prefix refers to a factor of 1000.

Actually, the closest to 1000 in binary is 2^10, which is 1024 and not 1000.
Knowing that, we can then see that a kilobit (a.k.a. Kb or Kbit) is 1024 bits
and a kilobyte (a.k.a. KB or Kbyte) is 1024 bytes. Note that the prefix is lowercase
for the decimal kilo and uppercase for the binary version. The table below shows
the prefixes with the corresponding abbreviation and capacity/value.
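The binary prefixes are simple powers of 2, which a short Python sketch can verify. The "80 GB drive" figure below is an illustrative example, not a claim about any particular product:

```python
# Binary prefixes are powers of 2: "kilo" here is 2**10 = 1024, not 1000.
KB = 2 ** 10      # kilobyte: 1024 bytes
MB = 2 ** 20      # megabyte: 1,048,576 bytes
GB = 2 ** 30      # gigabyte: 1,073,741,824 bytes

print(KB, MB, GB)

# A drive marketed as "80 GB" using decimal prefixes holds 80 * 10**9
# bytes, which is noticeably less when measured in binary gigabytes:
print(round(80 * 10**9 / GB, 1))   # 74.5
```

This gap between decimal and binary prefixes is exactly why a formatted drive appears smaller than its advertised capacity.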

Hard drive manufacturers provide approximate non-formatted capacities.
Hence, an 80-GB HD will not show 80 GB after formatting. Sometime in the 90's,
some manufacturers used megabytes to mean million (10^6) bytes. In 2005, a
new standard for binary prefixes was introduced that includes kibi- (Ki) for 2^10,
mebi- (Mi) for 2^20, gibi- (Gi) for 2^30, etc. These prefixes haven't been well adopted
so far.

Digital Electronics

Digital electronics is the foundation of modern computers and digital
communications. Massively complex digital logic circuits with millions of gates
can now be built onto a single integrated circuit such as a microprocessor, and
these circuits can perform millions of operations per second. But how are bits
stored and transferred from one point to another?

Storage is either via electronic switches or electronic charges. Electronic
switches are either on or off while charging devices are either charged or
discharged. One of the two states is identified as 0 while the other is 1. Since most
computers are electronic devices, bits take the form of electrical pulses traveling
over the circuits. All circuits, chips, and mechanical components forming a
computer are designed to work with bits.

John von Neumann wrote some papers around 1945 summarizing his
view of what a general-purpose electronic computer ought to be like. A computer
has a "von Neumann architecture" if it follows his recipe:

 Consists of ALU, control unit, memory, and I/O devices.
 The memory just stores numbers (integers of limited size).
 The program is encoded numerically and stored in the memory along
with the data.
 One instruction is executed at a time. It may cause the transfer of a
bounded (and small) amount of data to and from the memory and
I/O devices.
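The recipe above can be sketched as a toy machine. This Python example illustrates the stored-program idea: instructions and data share one memory, and one instruction executes at a time. The tiny instruction set (LOAD/ADD/STORE/HALT) is invented for illustration, not taken from von Neumann's papers:

```python
# A toy von Neumann machine: program and data live in the same memory.
memory = [
    ("LOAD", 4),     # 0: load memory[4] into the accumulator
    ("ADD", 5),      # 1: add memory[5] to the accumulator
    ("STORE", 6),    # 2: store the accumulator into memory[6]
    ("HALT", None),  # 3: stop
    2,               # 4: data
    3,               # 5: data
    0,               # 6: the result will be written here
]

pc = 0    # program counter: address of the next instruction
acc = 0   # accumulator register

while True:
    op, arg = memory[pc]    # fetch the instruction, then advance
    pc += 1
    if op == "LOAD":        # decode and execute, one at a time
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "HALT":
        break

print(memory[6])    # 5
```

Because the program is itself stored in memory as data, a real machine can load, modify, or replace it just like any other data.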

Based on the figure above, each part is explained below.

 Input devices are machines that generate input for the computer, such
as keyboard and mouse.

 Processor or CPU (Central Processing Unit) is the central electronic chip
that controls the processes in the computer. It determines the processing
power of the computer.
 Memory is the part of the computer that stores applications, documents,
and operating system information.

 Output devices are machines that display information from the
computer, such as monitor, speaker, and printer.

Von Neumann considered parallel computers but recognized the problems of
construction and hence settled for a sequential system. For this reason, parallel
computers are sometimes referred to as non-von Neumann architectures. A von
Neumann machine can compute the same class of functions as a universal
Turing machine.

Software and its Concepts

Software is the name given to the programs that you install on the
computer to perform certain types of activities. There are operating system
software, such as the Apple OS for a Macintosh, or Windows 98 or Windows XP
for a PC. There is also application software, like the games we play, or the tools
we use to compose letters or do math problems.

Software Basics

Here are the terms that you need to know in software basics:

 Computer program (or program) - an organized list of instructions that,
when executed, causes the computer to behave in a predetermined
manner. Without programs, computers are useless.
 Support module – an auxiliary set of instructions used in conjunction with
the main software program (Example: dynamic link libraries).
 Data module – contains data (not supplied by the user) necessary for the
execution of certain tasks. Data modules have a variety of extensions like
.dat, .hlp, and .text. Support modules often have .dll extensions. Program
files have .exe extensions.

The examples presented below are eula.txt, which is a data module;
npwmsdrm.dll, which is a support module; and setup_wm.exe, which is a
program file.

Sometimes, the term “software” is used too loosely and could cause
confusion. Previously, the term software was associated with all non-hardware
components of a computer. However, modern definitions make it clear that
documents, spreadsheets, and even materials downloaded from the internet are
now classified as data, which means that not all non-hardware components of a
computer are classified as software.

In computer science, code generation is a compilation stage that outputs
machine code in the target language. Code can appear in a variety of forms. The
code that a programmer writes is called source code. After it has been compiled,
it is called object code. Code that is ready to run is called executable code or
machine code. Initially, a programmer writes a program in a particular
programming language.

This form of the program is called the source program, or more generically,
source code. To execute the program, however, the programmer must translate
it into machine language, the language that the computer understands. The first
step of this translation process is usually performed by a utility called a compiler.
The compiler translates the source code into a form called object code.
Sometimes, the object code is the same as machine code; at other times, it needs
to be translated into machine language by a utility called an assembler.
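Python makes a version of this pipeline easy to observe: its built-in `compile` turns source code into a code object (Python's bytecode, loosely analogous to object code), which the interpreter then executes. This is a sketch of the idea, not a full compiler toolchain:

```python
import dis

# Source code, as a programmer would write it:
source = "result = 2 + 3"

# Compile the source into a code object (Python bytecode):
code_object = compile(source, "<example>", "exec")

# Execute the compiled code and inspect the outcome:
namespace = {}
exec(code_object, namespace)
print(namespace["result"])    # 5

# Inspect the low-level instructions the compiler produced:
dis.dis(code_object)
```

In a language like C the stages are separate tools (compiler, assembler, linker), but the flow from source code to executable form is the same in principle.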
Software is basically categorized into two types:

Application Software

 Application software consists of computer programs used to accomplish
specific or specialized tasks for computer users, such as creating and
editing documents (word processing), making graphic presentations, or
listening to MP3 music. Examples of application software are word
processors, spreadsheets, accounting programs, graphics software, and
games.

System Software

 System software consists of programs that control the basic operations of
a computer system, such as saving files to a storage device like a hard
disk or floppy disk, printing files, accepting input from a keyboard or
mouse, executing programs, etc. Examples of system software include
operating systems, communications control programs, interpreters,
compilers, debuggers, text editors, linkers, and loaders.
Software Development Life Cycle
The systems development life cycle (SDLC) is a conceptual model used in
project management that describes the stages involved in an information system
development project, from an initial feasibility study through maintenance of the
completed application.

SDLC is a methodology that is typically used to develop, maintain, and
replace information systems, improving the quality of the software design and
development process. SDLC follows a logical sequence of stages/phases: the
output of one stage becomes input for the next stage.

In order to create systems with good design, we must take into
consideration that development must involve several phases; you cannot
develop software in one big attempt. Furthermore, it would be very favorable for
companies to have a certain set of standardized steps that would serve as a
guide in systems development.

SDLC involves five phases. Each phase plays an important role in creating
a good system. The said phases are: Planning, Analysis, Design, Implementation,
and Maintenance. Note that implementing SDLC may involve several
approaches. It is possible that other books may have a more detailed explanation
of the approach to SDLC. Our intention is just to have a basic, not
comprehensive, understanding of SDLC.

Planning Phase

Planning is the initial stage in the SDLC that has to be performed. This
phase includes information about the requirements for the proposed software.
Also, this phase is known as the feasibility study phase. For larger companies
that deal with complicated software, it is vital that these requirements are
properly identified so that the development of the software is not delayed and
does not stray from the original objective. The purpose of the planning phase is to
conduct a high-level investigation of the business or project, and come up with
a recommendation for the solution.

Analysis Phase

The Analysis phase requires the analyst to thoroughly study the current
procedures or software used to execute tasks in an organization. The main goal
in this phase is to identify the requirements for new software or simply change
several aspects in the current working software.

The purpose of this phase is to conduct a detailed analysis of the project
or current business needs, and identify what options are available to achieve the
needs. The activities performed by the analyst during this phase are:

 Study the current software
 Determine software requirements
 Write requirements report

Design Phase

During the Design phase, the developer of the software translates the
result of the previous phase into the actual design or specifications of the
software. The design covers everything from the input and output screens to
reports, databases, and computer processes.

The purpose of this phase is to identify and document a solution that will
be constructed according to technical and procedural specifications. A design
document will be created that should include, but is not limited to, technical,
environmental, data, program, procedural, and testing specifications. The
activities performed in this phase are:

 Identify potential solutions
 Evaluate solutions and select the best
 Select hardware and software
 Develop application specifications
 Obtain approval to implement the new software

Implementation Phase

After the design phase, we must put the proposed software to the test.
During this phase, implementing the software will include several steps:

 Coding – creation of the actual program
 Testing – both the programmer and the analyst submit the software to
various “quality tests” to discover if there are any bugs within the
software
 Installation – after coding and testing are done, the actual software
must be installed and gradually or completely replaces the old software.
The purpose of the implementation phase is to release a fully tested and
operational product to an end user or customer. The product should meet all the
requirements.

Maintenance Phase

With every phase being completed, perhaps the maintenance phase would
be the most prevalent phase of all. There are some bugs in the software which
can’t be properly identified without putting the software into actual use. The
maintenance phase is used to make necessary patches to remove found errors.
This is where the software is systematically repaired and improved based on
errors or possible new requirements found.
Waterfall SDLC

The waterfall SDLC suggests that prior to the next phase, the current
phase should be finished first. But applying this strategy strictly in the real
world would be infeasible, since it would be impossible to modularize system
development.

A modified, iterative version of the SDLC is more flexible compared to the
traditional waterfall model. Since the nature of developing software is
unpredictable, using such a model allows the developer to adjust to certain
situations, such as unforeseen requirements or additional features that are
highly needed in the software.
Different I.T. Professionals

Hundreds, if not thousands, of IT job titles have been developed since the
dawn of the information era. Technology is continuously evolving, and it appears
that with each new innovation or upgrade, more and more specialized IT job titles
are being developed to match them.

Because information technology is a rapidly developing industry, people
with the necessary education and technical abilities have relatively excellent job
stability and salary. IT experts are in great demand across a wide range of
sectors.

Although it would be nearly impossible to include all IT job titles owing to
their large and ever-changing number, we have described the most popular and
frequent IT job titles in the space below to help you better understand them.

Software Developer

Someone must build computer applications before firms can use them.
Web functionality tools, video games, device drivers, operating systems,
corporate productivity software, and other programs are all created by software
developers.

Using programming languages such as JavaScript, SQL, and HTML, these
developers conceive and create an idea before taking it through the testing and
execution stages. These developers often consult with clients and coworkers to
determine what system or solution is required before developing it. Internet
developers and web developers are two more IT job names that are comparable.
Network Engineer

A network engineer creates, installs, manages, and improves computer
and telecommunications networks. This is one of the most technically
challenging and time-consuming IT jobs. This expert is frequently in charge of
disaster recovery plans, security, and data storage.

Network Administrator

The network administrator is in charge of maintaining and supporting
existing networks such as local area networks (LANs) and wide area networks
(WANs). This covers internet and intranet communications, server
maintenance and debugging, and network protection against online attacks.
Other typical IT job titles for network administrators are information security
analyst and information technology expert.

Computer Scientist

Computer scientists create and design computer devices and hardware,
often specializing in a single component, such as motherboards, routers, or
modems.

System Analyst

Solutions specialist, systems engineer, technical designer, and product
specialist are all other names for a systems analyst. This expert solves business
problems by examining difficulties, evaluating them, and then creating
information systems in response to the issue, as well as determining the costs
and needs connected with the solution. To be successful in this field, you must
have a combination of business and technical expertise.
Business Analyst

The business analyst consults with business managers, technology
professionals, and end users to enhance company operations and procedures.
This IT specialist will assess a client's needs, collect and record requirements,
and develop project plans. While a grasp of technology is required for this
position, a technical degree is not necessarily required.

Tech Support

The person in charge of technical assistance troubleshoots information
technology issues. These IT experts react to assistance and support inquiries, as
well as monitor and manage office technology. Technical assistance is sometimes
known as helpdesk support, issue manager, or operations analyst in the IT
industry.

IT Consultant

IT consultants or specialists give technical knowledge to external clients
on a project or contract basis. The consultant may design and execute
information technology systems, manage information technology projects, offer
after-sales support, or even write code.

Software Tester

A software tester, also known as a software quality assurance tester or test
analyst, predicts the many ways an information system or application may fail
in order to limit the occurrence of defects. This expert will put macros and scripts
through their paces, assess the outcomes, and give comments.
