
Microtek Group Of Institutions

Subject Name : Computer Fundamentals


Name of Faculty : Mr Ashutosh Chaubey
Syllabus Covered : BCA –I Semester
(Purvanchal University)
B.Sc. –I Year

Computer Fundamentals

A computer is a general-purpose device that can be programmed to carry out a set of arithmetic
or logical operations automatically. Since a sequence of operations can be readily changed, the
computer can solve more than one kind of problem.

Conventionally, a computer consists of at least one processing element, typically a central
processing unit (CPU), and some form of memory. The processing element carries out arithmetic
and logic operations, and a sequencing and control unit can change the order of operations in
response to stored information. Peripheral devices allow information to be retrieved from an
external source, and the result of operations saved and retrieved.

Mechanical analog computers started appearing in the first century and were later used in the
medieval era for astronomical calculations. In World War II, mechanical analog computers were
used for specialized military applications such as calculating torpedo aiming. During this time
the first electronic digital computers were developed. Originally they were the size of a large
room, consuming as much power as several hundred modern personal computers (PCs).[1]

Modern computers based on integrated circuits are millions to billions of times more capable
than the early machines, and occupy a fraction of the space.[2] Computers are small enough to fit
into mobile devices, and mobile computers can be powered by small batteries. Personal
computers in their various forms are icons of the Information Age and are generally considered
as "computers". However, the embedded computers found in many devices from MP3 players to
fighter aircraft and from electronic toys to industrial robots are the most numerous.

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a
programmable computer. Considered the "father of the computer",[16] he conceptualized and
invented the first mechanical computer in the early 19th century. After working on his
revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized
that a much more general design, an Analytical Engine, was possible. The input of programs and
data was to be provided to the machine via punched cards, a method being used at the time to
direct mechanical looms such as the Jacquard loom. For output, the machine would have a
printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards
to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of
conditional branching and loops, and integrated memory, making it the first design for a general-
purpose computer that could be described in modern terms as Turing-complete.[17][18]

The machine was about a century ahead of its time. All the parts for his machine had to be made
by hand — this was a major problem for a device with thousands of parts. Eventually, the project
was dissolved with the decision of the British Government to cease funding. Babbage's failure to
complete the analytical engine can be chiefly attributed to difficulties not only of politics and
financing, but also to his desire to develop an increasingly sophisticated computer and to move
ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a
simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a
successful demonstration of its use in computing tables in 1906.

Later Analog computers

Sir William Thomson's third tide-predicting machine design, 1879–81

During the first half of the 20th century, many scientific computing needs were met by
increasingly sophisticated analog computers, which used a direct mechanical or electrical model
of the problem as a basis for computation. However, these were not programmable and generally
lacked the versatility and accuracy of modern digital computers.[19]

The first modern analog computer was a tide-predicting machine, invented by Sir William
Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve
differential equations by integration using wheel-and-disc mechanisms, was conceptualized in
1876 by James Thomson, the brother of the more famous Lord Kelvin.[15]

The art of mechanical analog computing reached its zenith with the differential analyzer, built by
H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical
integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of
these devices were built before their obsolescence became obvious.

By the 1950s the success of digital electronic computers had spelled the end for most analog
computing machines, but analog computers remain in use in some specialized applications such
as education (control systems) and aircraft (slide rule).

Digital computer development

The principle of the modern computer was first described by mathematician and pioneering
computer scientist Alan Turing, who set out the idea in his seminal 1936 paper,[20] On
Computable Numbers. Turing reformulated Kurt Gödel's 1931 results on the limits of proof and
computation, replacing Gödel's universal arithmetic-based formal language with the formal and
simple hypothetical devices that became known as Turing machines. He proved that some such
machine would be capable of performing any conceivable mathematical computation if it were
representable as an algorithm. He went on to prove that there was no solution to the
Entscheidungsproblem by first showing that the halting problem for Turing machines is
undecidable: in general, it is not possible to decide algorithmically whether a given Turing
machine will ever halt.

He also introduced the notion of a 'Universal Machine' (now known as a Universal Turing
machine), with the idea that such a machine could perform the tasks of any other machine, or in
other words, it is provably capable of computing anything that is computable by executing a
program stored on tape, allowing the machine to be programmable. Von Neumann
acknowledged that the central concept of the modern computer was due to this paper.[21] Turing
machines are to this day a central object of study in theory of computation. Except for the
limitations imposed by their finite memory stores, modern computers are said to be Turing-
complete, which is to say, they have algorithm execution capability equivalent to a universal
Turing machine.

Electromechanical

By 1938 the United States Navy had developed an electromechanical analog computer small
enough to use aboard a submarine. This was the Torpedo Data Computer, which used
trigonometry to solve the problem of firing a torpedo at a moving target. During World War II
similar devices were developed in other countries as well.

Replica of Zuse's Z3, the first fully automatic, digital (electromechanical) computer.

Early digital computers were electromechanical; electric switches drove mechanical relays to
perform the calculation. These devices had a low operating speed and were eventually
superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created
by German engineer Konrad Zuse in 1939, was one of the earliest examples of an
electromechanical relay computer.[22]

In 1941, Zuse followed his earlier machine up with the Z3, the world's first working
electromechanical programmable, fully automatic digital computer.[23][24] The Z3 was built with
2000 relays, implementing a 22 bit word length that operated at a clock frequency of about 5–
10 Hz.[25] Program code was supplied on punched film while data could be stored in 64 words of
memory or supplied from the keyboard. It was quite similar to modern machines in some
respects, pioneering numerous advances such as floating point numbers. Replacement of the
hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler
binary system meant that Zuse's machines were easier to build and potentially more reliable,
given the technologies available at that time.[26] The Z3 was, in principle, Turing complete, though this was only shown much later.

Vacuum tubes and digital electronic circuits

Purely electronic circuit elements soon replaced their mechanical and electromechanical
equivalents, at the same time that digital calculation replaced analog. The engineer Tommy
Flowers, working at the Post Office Research Station in London in the 1930s, began to explore
the possible use of electronics for the telephone exchange. Experimental equipment that he built
in 1934 went into operation 5 years later, converting a portion of the telephone exchange
network into an electronic data processing system, using thousands of vacuum tubes.[19] In the
US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested
the Atanasoff–Berry Computer (ABC) in 1942,[27] the first "automatic electronic digital
computer".[28] This design was also all-electronic and used about 300 vacuum tubes, with
capacitors fixed in a mechanically rotating drum for memory.[29]

Colossus was the first electronic digital programmable computing device, and was used to break
German ciphers during World War II.

During World War II, the British at Bletchley Park achieved a number of successes at breaking
encrypted German military communications. The German encryption machine, Enigma, was first
attacked with the help of the electro-mechanical bombes. To crack the more sophisticated
German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman
and his colleagues commissioned Flowers to build the Colossus.[29] He spent eleven months from
early February 1943 designing and building the first Colossus.[30] After a functional test in
December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January
1944[31] and attacked its first message on 5 February.[29]

Colossus was the world's first electronic digital programmable computer.[19] It used a large
number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to
perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine
Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was
both five times faster and simpler to operate than Mark I, greatly speeding the decoding
process.[32][33]
ENIAC was the first Turing-complete device, and performed ballistics trajectory calculations for
the United States Army.

The US-built ENIAC[34] (Electronic Numerical Integrator and Computer) was the first electronic
programmable computer built in the US. Although the ENIAC was similar to the Colossus it was
much faster and more flexible. It was unambiguously a Turing-complete device and could
compute any problem that would fit into its memory. Like the Colossus, a "program" on the
ENIAC was defined by the states of its patch cables and switches, a far cry from the stored
program electronic machines that came later. Once a program was written, it had to be
mechanically set into the machine with manual resetting of plugs and switches.

It combined the high speed of electronics with the ability to be programmed for many complex
problems. It could add or subtract 5000 times a second, a thousand times faster than any other
machine. It also had modules to multiply, divide, and square root. High speed memory was
limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper
Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from
1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200
kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds
of thousands of resistors, capacitors, and inductors.[35]

Stored programs

A section of the Manchester Small-Scale Experimental Machine, the first stored-program
computer.

Early computing machines had fixed programs. Changing the function of such a machine required
re-wiring and re-structuring it.[29] With the proposal of the stored-program computer this changed.
A stored-program computer includes by design an instruction set and can store in memory a set
of instructions (a program) that details the computation. The theoretical basis for the stored-
program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the
National Physical Laboratory and began work on developing an electronic stored-program digital
computer. His 1945 report 'Proposed Electronic Calculator' was the first specification for such a
device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a
Report on the EDVAC in 1945.[19]
Ferranti Mark 1, c. 1951.

The Manchester Small-Scale Experimental Machine, nicknamed Baby, was the world's first
stored-program computer. It was built at the Victoria University of Manchester by Frederic C.
Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[36] It was
designed as a testbed for the Williams tube, the first random-access digital storage device.[37]
Although the computer was considered "small and primitive" by the standards of its time, it was
the first working machine to contain all of the elements essential to a modern electronic
computer.[38] As soon as the SSEM had demonstrated the feasibility of its design, a project was
initiated at the university to develop it into a more usable computer, the Manchester Mark 1.

The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first
commercially available general-purpose computer.[39] Built by Ferranti, it was delivered to the
University of Manchester in February 1951. At least seven of these later machines were
delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[40] In October 1947,
the directors of British catering company J. Lyons & Company decided to take an active role in
promoting the commercial development of computers. The LEO I computer became operational
in April 1951 [41] and ran the world's first regular routine office computer job.

Transistors

A bipolar junction transistor

The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum
tubes in computer designs, giving rise to the "second generation" of computers. Compared to
vacuum tubes, transistors have many advantages: they are smaller and require less power than
vacuum tubes, so they give off less heat. Silicon junction transistors were much more reliable than
vacuum tubes and had a longer, potentially indefinite, service life. Transistorized computers could contain
tens of thousands of binary logic circuits in a relatively compact space.

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built
a machine using the newly developed transistors instead of valves.[42] Their first transistorised
computer, and the first in the world, was operational by 1953, and a second version was
completed there in April 1955. However, the machine did make use of valves to generate its
125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory,
so it was not the first completely transistorized computer. That distinction goes to the Harwell
CADET of 1955,[43] built by the electronics division of the Atomic Energy Research
Establishment at Harwell.[44][45]

Integrated circuits

The next great advance in computing power came with the advent of the integrated circuit. The
idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar
Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first
public description of an integrated circuit at the Symposium on Progress in Quality Electronic
Components in Washington, D.C. on 7 May 1952.[46]

The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at
Fairchild Semiconductor.[47] Kilby recorded his initial ideas concerning the integrated circuit in
July 1958, successfully demonstrating the first working integrated example on 12 September
1958.[48] In his patent application of 6 February 1959, Kilby described his new device as "a body
of semiconductor material ... wherein all the components of the electronic circuit are completely
integrated."[49][50] Noyce also came up with his own idea of an integrated circuit half a year later
than Kilby.[51] His chip solved many practical problems that Kilby's had not. Produced at
Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium.

This new development heralded an explosion in the commercial and personal use of computers
and led to the invention of the microprocessor. While the subject of exactly which device was the
first microprocessor is contentious, partly due to lack of agreement on the exact definition of the
term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the
Intel 4004,[52] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[53]

Mobile computers become dominant

With the continued miniaturization of computing resources, and advancements in portable
battery life, portable computers grew in popularity in the 2000s.[54] The same developments that
spurred the growth of laptop computers and other portable computers allowed manufacturers to
integrate computing resources into cellular phones. These so-called smartphones and tablets run
on a variety of operating systems and have become the dominant computing device on the
market, with manufacturers reporting having shipped an estimated 237 million devices in 2Q
2013.[55]
Programs

The defining feature of modern computers which distinguishes them from all other machines is
that they can be programmed. That is to say that some type of instructions (the program) can be
given to the computer, and it will process them. Modern computers based on the von Neumann
architecture often have machine code in the form of an imperative programming language.

In practical terms, a computer program may be just a few instructions or extend to many millions
of instructions, as do the programs for word processors and web browsers for example. A typical
modern computer can execute billions of instructions per second (gigaflops) and rarely makes a
mistake over many years of operation. Large computer programs consisting of several million
instructions may take teams of programmers years to write, and due to the complexity of the task
almost certainly contain errors.

Stored program architecture

Main articles: Computer program and Computer programming

Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program
computer, at the Museum of Science and Industry in Manchester, England

This section applies to most common RAM machine-based computers.

In most cases, computer instructions are simple: add one number to another, move some data
from one location to another, send a message to some external device, etc. These instructions are
read from the computer's memory and are generally carried out (executed) in the order they were
given. However, there are usually specialized instructions to tell the computer to jump ahead or
backwards to some other place in the program and to carry on executing from there. These are
called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen
conditionally so that different sequences of instructions may be used depending on the result of
some previous calculation or some external event. Many computers directly support subroutines
by providing a type of jump that "remembers" the location it jumped from and another
instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each
word and line in sequence, they may at times jump back to an earlier place in the text or skip
sections that are not of interest. Similarly, a computer may sometimes go back and repeat the
instructions in some section of the program over and over again until some internal condition is
met. This is called the flow of control within the program and it is what allows the computer to
perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such
as adding two numbers with just a few button presses. But to add together all of the numbers
from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of
making a mistake. On the other hand, a computer may be programmed to do this with just a few
simple instructions. The following example is written in the MIPS assembly language:

begin:
    addi $8, $0, 0       # initialize the sum to 0
    addi $9, $0, 1       # set the first number to add = 1
loop:
    slti $10, $9, 1001   # set $10 to 1 while the number is still <= 1000
    beq  $10, $0, finish # if the number has passed 1000, exit the loop
    add  $8, $8, $9      # add the current number to the sum
    addi $9, $9, 1       # move on to the next number
    j    loop            # repeat the summing process
finish:
    add  $2, $8, $0      # put the sum in the output register

Once told to run this program, the computer will perform the repetitive addition task without
further human intervention. It will almost never make a mistake and a modern PC can complete
the task in a fraction of a second.
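For comparison, the same computation can be expressed in a few lines of a high-level language. The following Python fragment is purely an illustration added here (it is not part of the original MIPS example) and performs the identical summation:

# Sum the numbers from 1 to 1,000, mirroring the MIPS program above.
total = 0          # corresponds to register $8 (the running sum)
n = 1              # corresponds to register $9 (the current number)
while n <= 1000:   # loop until the number exceeds 1000
    total += n     # update the sum
    n += 1         # get the next number
print(total)       # prints 500500

A compiler or interpreter translates such high-level statements into many low-level machine instructions like the ones shown above.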

Machine code

In most computers, individual instructions are stored as machine code with each instruction
being given a unique number (its operation code or opcode for short). The command to add two
numbers together would have one opcode; the command to multiply them would have a different
opcode, and so on. The simplest computers are able to perform any of a handful of different
instructions; the more complex computers have several hundred to choose from, each with a
unique numerical code. Since the computer's memory is able to store numbers, it can also store
the instruction codes. This leads to the important fact that entire programs (which are just lists of
these instructions) can be represented as lists of numbers and can themselves be manipulated
inside the computer in the same way as numeric data. The fundamental concept of storing
programs in the computer's memory alongside the data they operate on is the crux of the von
Neumann, or stored-program, architecture. In some cases, a computer might store some
or all of its program in memory that is kept separate from the data it operates on. This is called
the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers
display some traits of the Harvard architecture in their designs, such as in CPU caches.
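To make the idea of "programs as numbers" concrete, here is a small hypothetical illustration in Python: a toy machine whose memory is just a list of numbers, where some numbers are interpreted as opcodes and others as operands. The opcode values (1 = add, 2 = print, 0 = halt) are invented for this sketch and do not correspond to any real CPU.

# A toy stored-program machine: the program is simply a list of numbers.
# Invented opcodes: 1 = add the next two operands into the accumulator,
#                   2 = print the accumulator, 0 = halt.
program = [1, 30, 12,   # add 30 and 12 into the accumulator
           2,           # print the accumulator
           0]           # halt

def run(memory):
    acc = 0             # accumulator register
    pc = 0              # program counter: index of the next instruction
    while True:
        opcode = memory[pc]
        if opcode == 1:                     # ADD
            acc = memory[pc + 1] + memory[pc + 2]
            pc += 3
        elif opcode == 2:                   # PRINT
            print(acc)
            pc += 1
        elif opcode == 0:                   # HALT
            break

run(program)            # prints 42

Because the program lives in the same list as any other data, the machine could in principle read or even modify its own instructions, which is exactly the property the stored-program (von Neumann) design exploits.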

While it is possible to write computer programs as long lists of numbers (machine language) and
while this technique was used with many early computers,[56] it is extremely tedious and
potentially error-prone to do so in practice, especially for complicated programs. Instead, each
basic instruction can be given a short name that is indicative of its function and easy to
remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are
collectively known as a computer's assembly language. Converting programs written in assembly
language into something the computer can actually understand (machine language) is usually
done by a computer program called an assembler.
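At its simplest, an assembler is just a translation table from mnemonics to opcodes. The sketch below is a hypothetical Python illustration that reuses the invented opcodes from the previous example rather than any real instruction set:

# A minimal "assembler": translate mnemonic lines into the toy machine's numbers.
OPCODES = {"ADD": 1, "PRINT": 2, "HALT": 0}   # mnemonic -> invented opcode

def assemble(source):
    machine_code = []
    for line in source.strip().splitlines():
        parts = line.split()                              # e.g. ["ADD", "30", "12"]
        machine_code.append(OPCODES[parts[0]])            # the opcode
        machine_code.extend(int(x) for x in parts[1:])    # any numeric operands
    return machine_code

print(assemble("""
ADD 30 12
PRINT
HALT
"""))
# -> [1, 30, 12, 2, 0], the same list of numbers as the hand-written program

A real assembler also resolves symbolic labels (like "loop" and "finish" in the MIPS example) into memory addresses.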

A 1970s punched card containing one line from a FORTRAN program. The card reads: "Z(1) =
Y + W(1)" and is labeled "PROJ039" for identification purposes.

Programming language

Main article: Programming language

Programming languages provide various ways of specifying programs for computers to run.
Unlike natural languages, programming languages are designed to permit no ambiguity and to be
concise. They are purely written languages and are often difficult to read aloud. They are
generally either translated into machine code by a compiler or an assembler before being run, or
translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid
method of the two techniques.

Five Generations of Computer


The history of computer development is often described in terms of the different generations
of computing devices. Each generation of computer is characterized by a major technological
development that fundamentally changed the way computers operate, resulting in increasingly
smaller, cheaper, more powerful and more efficient and reliable devices. The sections below
describe each generation and the developments that led to the devices we use today.

First Generation (1940-1956) Vacuum Tubes


The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were
often enormous, taking up entire rooms. They were very expensive to operate and in addition to
using a great deal of electricity, generated a lot of heat, which was often the cause of
malfunctions.
First generation computers relied on machine language, the lowest-level programming language
understood by computers, to perform operations, and they could only solve one problem at a
time. Input was based on punched cards and paper tape, and output was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The
UNIVAC was the first commercially produced computer; the first unit was delivered to the U.S.
Census Bureau in 1951.

Second Generation (1956-1963) Transistors


Transistors replaced vacuum tubes and ushered in the second generation of computers. The
transistor was invented in 1947 but did not see widespread use in computers until the late 1950s.
The transistor was far superior to the vacuum tube, allowing computers to become smaller,
faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors.
Though the transistor still generated a great deal of heat that subjected the computer to damage,
it was a vast improvement over the vacuum tube. Second-generation computers still relied on
punched cards for input and printouts for output.

Second-generation computers moved from cryptic binary machine language to symbolic, or
assembly, languages, which allowed programmers to specify instructions in words. High-level
programming languages were also being developed at this time, such as early versions of
COBOL and FORTRAN. These were also the first computers that stored their instructions in
their memory, which moved from a magnetic drum to magnetic core technology.

The first computers of this generation were developed for the atomic energy industry.

Third Generation (1964-1971) Integrated Circuits

The development of the integrated circuit was the hallmark of the third generation of computers.
Transistors were miniaturized and placed on silicon chips (semiconductor devices), which
drastically increased the speed and efficiency of computers.

Instead of punched cards and printouts, users interacted with third generation computers through
keyboards and monitors and interfaced with an operating system, which allowed the device to
run many different applications at one time with a central program that monitored the memory.
Computers for the first time became accessible to a mass audience because they were smaller
and cheaper than their predecessors.

Fourth Generation (1971-Present) Microprocessors

The microprocessor brought the fourth generation of computers, as thousands of integrated
circuits were built onto a single silicon chip. What in the first generation filled an entire room
could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the
components of the computer—from the central processing unit and memory to input/output
controls—on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many
areas of life as more and more everyday products began to use microprocessors.

As these small computers became more powerful, they could be linked together to form
networks, which eventually led to the development of the Internet. Fourth generation computers
also saw the development of GUIs, the mouse and handheld devices.

Fifth Generation (Present and Beyond) Artificial Intelligence

Fifth generation computing devices, based on artificial intelligence, are still in development,
though there are some applications, such as voice recognition, that are being used today. The use
of parallel processing and superconductors is helping to make artificial intelligence a reality.
Quantum computation and molecular and nanotechnology will radically change the face of
computers in years to come. The goal of fifth-generation computing is to develop devices that
respond to natural language input and are capable of learning and self-organization.
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science
that aims to create it. AI textbooks define the field as "the study and design of intelligent agents",
where an intelligent agent is a system that perceives its environment and takes actions that
maximize its chances of success. John McCarthy, who coined the term in 1956, defined it as "the
science and engineering of making intelligent machines."

The field was founded on the claim that a central property of humans, intelligence—the sapience
of Homo sapiens—can be so precisely described that it can be simulated by a machine. This
raises philosophical issues about the nature of the mind and the ethics of creating artificial
beings, issues which have been addressed by myth, fiction and philosophy since antiquity.
Artificial intelligence has been the subject of optimism, but has also suffered setbacks and,
today, has become an essential part of the technology industry, providing the heavy lifting for
many of the most difficult problems in computer science.
AI research is highly technical and specialized, deeply divided into subfields that often fail to
communicate with each other. Subfields have grown up around particular institutions, the work
of individual researchers, the solution of specific problems, longstanding differences of opinion
about how AI should be done and the application of widely differing tools. The central problems
of AI include such traits as reasoning, knowledge, planning, learning, communication, perception
and the ability to move and manipulate objects. General intelligence (or "strong AI") is still
among the field's long-term goals.

Classification of computers
1. Classification Based on Operating Principles

Based on their operating principles, computers can be classified into one of the following types:

A. Digital Computers
B. Analog Computers
C. Hybrid Computers

A. Digital Computers: Operate essentially by counting. All quantities are expressed as
discrete values (numbers). Digital computers are useful for evaluating arithmetic expressions
and manipulating data (such as the preparation of bills and ledgers, the solution of simultaneous
equations, etc.).

B. Analog Computers: An analog computer is a form of computer that uses the continuously
changeable aspects of physical phenomena, such as electrical, mechanical, or hydraulic quantities,
to model the problem being solved. In contrast, digital computers represent varying quantities
symbolically, as their numerical values change.

C. Hybrid Computers: Computers that exhibit features of both analog and digital computers.
The digital component normally serves as the controller and provides logical operations, while
the analog component normally serves as a solver of differential equations.
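As a rough illustration of what "solving a differential equation by integration" means, the following Python sketch numerically integrates dy/dt = -k*y, the kind of task the analog component would perform continuously with its circuits. The equation, constants and step size here are chosen arbitrarily for illustration only.

# Euler integration of dy/dt = -k * y: a digital sketch of what an
# analog integrator does continuously.
k = 0.5          # decay constant (arbitrary)
y = 1.0          # initial value y(0)
dt = 0.01        # time step
t = 0.0
while t < 5.0:
    y += -k * y * dt    # accumulate the change, like an integrator circuit
    t += dt
print(round(y, 4))       # close to the exact answer exp(-2.5), about 0.082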
Classification of Computers According to Size

According to size, computers can be divided into the following types:

1. Super Computers:

Super computers are the fastest, largest and costliest computers available. The
speed is in the 100 million instructions per second range. They tend to be used for
specific applications in weather forecasting, aircraft design and nuclear research.
Super computers are sometimes used for time sharing as well. Memory size is in
hundreds of megabytes.

2. Mainframe Computers:

Mainframes are the traditional medium and large scale computer systems used in
most business organizations for information processing.

A mainframe typically has an advanced control system and is capable of linking up
with dozens of input/output units and even minis for additional computer power. It
can usually perform at 16 MIPS and upward. Memory size is 2 MB and upward.
Examples are the IBM 4300 and 3300 series, the Honeywell 700 series and the NCR 800
series.

3. Mini Computers:

Mini computers have been very popular in business. Minis are frequently used to
add computer power alongside mainframes. Sometimes an organization decides to
decentralize or distribute its computer power to various stations or locations within
user departments. Mini computers are ideal for processing data in a decentralized
mode since they are small. Moreover, minis have also made it possible for many
smaller organizations to afford a computer for the first time. They have fewer
input/output devices than a mainframe. The speed is usually 10 MIPS and
upward. RAM is 2 MB and upward.

4. Micro Computers:

The increasing use of micros in homes, schools, businesses and professional offices has
been even more revolutionary. Although these computers have limited memory and
speed, their cost makes them very attractive for applications that would otherwise
not be feasible.

Moreover, micros are frequently used to provide additional computer power for
companies that already have mainframes or minis. There are fewer
input/output devices with micro computers, and a micro is mostly used by a single user.
Its speed is usually quoted in MHz rather than MIPS, and is generally
8 MHz and upward. The RAM is 640 KB and upward.
Characteristics of computer

Computers have become part of our life. We use computers on a daily
basis: at school, at home, at the office. Why are we so dependent on
computers? Because they have made our lives easier, they provide us with
entertainment, and they can store our valuable data for as long as we want to
keep it. The following are some of the main characteristics of a computer.

Computers are fast

Speed is one of the main characteristics of a computer. A computer can perform
billions of calculations in a second. The speed of a computer is measured in
megahertz (MHz) or gigahertz (GHz). For example, multiplying 639 and 913
can take a human a minute or two, but a
computer can perform millions of such calculations in a fraction of a second.


Computers are accurate

Computers can perform operations and process data quickly while still producing
accurate results. Results are wrong only if incorrect data is fed to the
computer or if a software bug causes an error.

Storing Data

Storage capacity is another big characteristic of a computer. A computer can
store a large amount of data. This data can be used at any time and also from any
location. The storage capacity of a computer is measured in megabytes (MB),
gigabytes (GB), or terabytes (TB).

Computers are versatile

The computer is a versatile machine. Computers are used in various fields: in
schools and colleges, in hospitals, in government organizations, and at home
for entertainment and work purposes.
They can communicate

Computers have the ability to communicate, but of course this requires some
sort of connection (either a wired or a wireless connection). Two computers can be
connected to send and receive data. Special software is used for text and video
chat. Friends and family can connect over the internet and share files, photos and
videos online.

We can do multitasking

Multitasking is also a computer characteristic. Computers can perform several
tasks at a time. For example, you can listen to songs, download movies, and
prepare word-processing documents all at the same time.

No Intelligence

Computers don't have any intelligence of their own. They simply follow the set of
instructions fed into them. It is the user who decides what to do and
when to perform a specific task.

Types of Operating System

Operating systems have been around since the very first generation of computers and have kept evolving
over time. The following are a few of the important types of operating system that are most commonly
used.

Batch operating system


The users of a batch operating system do not interact with the computer directly. Each user prepares a job on
an off-line device such as punched cards and submits it to the computer operator. To speed up processing, jobs with
similar needs are batched together and run as a group: programmers leave their programs with the
operator, and the operator then sorts the programs into batches with similar requirements.

The problems with batch systems are the following.

 Lack of interaction between the user and the job.

 The CPU is often idle, because mechanical I/O devices are much slower than the CPU.

 It is difficult to provide the desired priority.

Time-sharing operating systems


Time sharing is a technique which enables many people, located at various terminals, to use a particular
computer system at the same time. Time-sharing or multitasking is a logical extension of multiprogramming.
Processor time that is shared among multiple users simultaneously is termed time-sharing. The main
difference between multiprogrammed batch systems and time-sharing systems is that in
multiprogrammed batch systems the objective is to maximize processor use, whereas in time-sharing systems
the objective is to minimize response time.

Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently
that each user receives an immediate response. For example, in transaction processing the processor
executes each user program in a short burst, or quantum, of computation; if n users are present, each user
gets a time quantum in turn. When a user submits a command, the response time is a few seconds at most.

The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of its
time. Computer systems that were designed primarily as batch systems have been modified into time-sharing
systems.
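A minimal sketch of the time-sharing idea, written in Python purely for illustration: the "processor" repeatedly gives each user's job one small quantum of work in turn, so every user sees steady progress. The job names and work amounts are invented for the example.

# Round-robin time sharing: each job gets one quantum per pass.
jobs = {"user1": 5, "user2": 3, "user3": 4}   # remaining work units per user
QUANTUM = 1                                   # work units per time slice

while jobs:                                   # until every job is finished
    for user in list(jobs):
        jobs[user] -= QUANTUM                 # run this job for one quantum
        print(f"{user} ran for one quantum, {max(jobs[user], 0)} left")
        if jobs[user] <= 0:                   # job complete: remove it
            del jobs[user]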

Advantages of time-sharing operating systems are as follows.

 Provides the advantage of quick response.

 Avoids duplication of software.

 Reduces CPU idle time.

Disadvantages of time-sharing operating systems are as follows.

 Problem of reliability.
 Question of security and integrity of user programs and data.

 Problem of data communication.

Distributed operating System


Distributed systems use multiple central processors to serve multiple real-time applications and multiple users.
Data processing jobs are distributed among the processors according to which one can perform each job
most efficiently.

The processors communicate with one another through various communication lines (such as high-speed
buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in
a distributed system may vary in size and function, and are referred to as sites, nodes, computers
and so on.

The advantages of distributed systems are as follows.

 With the resource-sharing facility, a user at one site may be able to use the resources available at another.

 They speed up the exchange of data between sites, for example via electronic mail.

 If one site fails in a distributed system, the remaining sites can potentially continue operating.

 Better service to the customers.

 Reduction of the load on the host computer.

 Reduction of delays in data processing.

Network operating System


A network operating system runs on a server and provides the server with the capability to manage data, users,
groups, security, applications, and other networking functions. The primary purpose of the network operating
system is to allow shared file and printer access among multiple computers in a network, typically a local area
network (LAN), a private network or to other networks. Examples of network operating systems are
Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell
NetWare, and BSD.

The advantages of network operating systems are as follows.

 Centralized servers are highly stable.


 Security is server managed.

 Upgrades to new technologies and hardware can be easily integrated into the system.

 Remote access to servers is possible from different locations and types of systems.

The disadvantages of network operating systems are as follows.

 High cost of buying and running a server.

 Dependency on a central location for most operations.

 Regular maintenance and updates are required.

Real Time operating System


A real-time system is defined as a data processing system in which the time interval required to process and
respond to inputs is so small that it controls the environment. Real-time processing is always online, whereas
an online system need not be real-time. The time taken by the system to respond to an input and display the
required updated information is termed the response time. In this method the response time is much lower
than in online processing.

Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow
of data, and they can be used as a control device in a dedicated application. A real-time operating
system has well-defined, fixed time constraints; otherwise the system will fail. Examples include scientific
experiments, medical imaging systems, industrial control systems, weapon systems, robots, home-
appliance controllers, air traffic control systems, etc.

There are two types of real-time operating systems.

Hard real-time systems


Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems secondary
storage is limited or missing with data stored in ROM. In these systems virtual memory is almost never
found.

Soft real-time systems


Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that
priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples
include multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary
rovers.
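The difference between hard and soft time constraints can be sketched as follows; this is an illustrative Python fragment only, and the deadline value and the stand-in workload are invented for the example. A hard real-time system treats a missed deadline as a failure, while a soft real-time system merely degrades.

import time

DEADLINE = 0.010   # 10 milliseconds allowed per control-loop step (invented value)

def control_step():
    # Stand-in for reading a sensor and computing an actuator command.
    time.sleep(0.002)

start = time.monotonic()
control_step()
elapsed = time.monotonic() - start

if elapsed > DEADLINE:
    # Hard real-time view: a missed deadline is a system failure.
    raise RuntimeError(f"Deadline missed: step took {elapsed:.4f} s")
else:
    print(f"Step finished in {elapsed:.4f} s, within the deadline")
# A soft real-time system would instead just log the overrun and continue.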
List of Input Devices, Output Devices and Both Input–Output Devices
Related to the Computer

Here is a list of the basic input devices, output devices, and both
input–output devices related to the computer.

Input Devices:

a) Graphics Tablets

b) Cameras

c) Video Capture Hardware

d) Trackballs

e) Barcode reader

f) Digital camera

g) Gamepad

h) Joystick

i) Keyboard

j) Microphone

k) MIDI keyboard

l) Mouse (pointing device)

m) Scanner

n) Webcam

o) Touchpads

p) Pen Input

q) Electronic Whiteboard

OUTPUT DEVICES:

1. Monitor
2. Printers (all types)
3. Plotters
4. Projector
5. LCD Projection Panels
6. Computer Output Microfilm (COM)
7. Speaker(s)

Both Input–Output Devices:

1. Modems
2. Network cards
3. Touch Screen

4. Headsets (a headset consists of speakers and a microphone; the
speakers act as an output device and the microphone acts as an input
device)

5. Facsimile (FAX) (it has a scanner to scan the document and also
a printer to print the document)

6. Audio Cards / Sound Cards

Computer virus
A computer virus is a malware program that, when executed, replicates by inserting copies of
itself (possibly modified) into other computer programs, data files, or the boot sector of the hard
drive; when this replication succeeds, the affected areas are then said to be "infected".[1][2][3][4]
Viruses often perform some type of harmful activity on infected hosts, such as stealing hard disk
space or CPU time, accessing private information, corrupting data, displaying political or
humorous messages on the user's screen, spamming their contacts, logging their keystrokes, or
even rendering the computer useless. However, not all viruses carry a destructive payload or
attempt to hide themselves—the defining characteristic of viruses is that they are self-replicating
computer programs which install themselves without user consent.
Virus writers use social engineering and exploit detailed knowledge of security vulnerabilities to
gain access to their hosts' computing resources. The vast majority of viruses target systems
running Microsoft Windows,[5][6][7] employing a variety of mechanisms to infect new hosts,[8] and
often using complex anti-detection/stealth strategies to evade antivirus software.[9][10][11][12] Motives
for creating viruses can include seeking profit, desire to send a political message, personal
amusement, to demonstrate that a vulnerability exists in software, for sabotage and denial of
service, or simply because they wish to explore artificial life and evolutionary algorithms.[13]

Computer viruses currently cause billions of dollars' worth of economic damage each year,[14] due
to causing systems failure, wasting computer resources, corrupting data, increasing maintenance
costs, etc. In response, free, open-source antivirus tools have been developed, and an industry of
antivirus software has cropped up, selling or freely distributing virus protection to users of
various operating systems.[15] Even though no currently existing antivirus software is able to
uncover all computer viruses (especially new ones), computer security researchers are actively
searching for new ways to enable antivirus solutions to more effectively detect emerging viruses
before they become widely distributed.[16]

The first computer viruses

The MacMag virus 'Universal Peace', as displayed on a Mac in March of 1988

The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early
1970s.[74] Creeper was an experimental self-replicating program written by Bob Thomas at BBN
Technologies in 1971.[75] Creeper used the ARPANET to infect DEC PDP-10 computers running
the TENEX operating system.[76] Creeper gained access via the ARPANET and copied itself to
the remote system where the message, "I'm the creeper, catch me if you can!" was displayed. The
Reaper program was created to delete Creeper.[77]

In 1982, a program called "Elk Cloner" was the first personal computer virus to appear "in the
wild"—that is, outside the single computer or lab where it was created.[78] Written in 1981 by
Richard Skrenta, it attached itself to the Apple DOS 3.3 operating system and spread via floppy
disk.[78][79] This virus, created as a practical joke when Skrenta was still in high school, was
injected in a game on a floppy disk. On its 50th use the Elk Cloner virus would be activated,
infecting the personal computer and displaying a short poem beginning "Elk Cloner: The
program with a personality."

In 1984 Fred Cohen from the University of Southern California wrote his paper "Computer
Viruses – Theory and Experiments".[80] It was the first paper to explicitly call a self-reproducing
program a "virus", a term introduced by Cohen's mentor Leonard Adleman. In 1987, Fred Cohen
published a demonstration that there is no algorithm that can perfectly detect all possible
viruses.[81] Fred Cohen's theoretical compression virus[82] was an example of a virus which was not
malware, but was putatively benevolent. However, antivirus professionals do not accept the
concept of benevolent viruses, as any desired function can be implemented without involving a
virus (automatic compression, for instance, is available under the Windows operating system at
the choice of the user). Any virus will by definition make unauthorised changes to a computer,
which is undesirable even if no damage is done or intended. On page one of Dr Solomon's Virus
Encyclopaedia, the undesirability of viruses, even those that do nothing but reproduce, is
thoroughly explained.[2]

An article that describes "useful virus functionalities" was published by J. B. Gunn under the title
"Use of virus functions to provide a virtual APL interpreter under user control" in 1984.[83]

The first IBM PC virus in the wild was a boot sector virus dubbed (c)Brain,[84] created in 1986 by
the Farooq Alvi Brothers in Lahore, Pakistan, reportedly to deter piracy of the software they had
written.[85]

The first virus to specifically target Microsoft Windows, WinVir was discovered in April 1992,
two years after the release of Windows 3.0. The virus did not contain any Windows API calls,
instead relying on DOS interrupts. A few years later, in February 1996, Australian hackers from
the virus-writing crew Boza created the VLAD virus, which was the first known virus to target
Windows 95. In late 1997 the encrypted, memory-resident stealth virus Win32.Cabanas was
released—the first known virus that targeted Windows NT (it was also able to infect Windows
3.0 and Windows 9x hosts).[86]

Even home computers were affected by viruses. The first one to appear on the Commodore
Amiga was a boot sector virus called SCA virus, which was detected in November 1987.[87]

The first social networking virus, Win32.5-0-1, was created by Matt Larose on August 15,
2001.[88] The virus specifically targeted users of MSN Messenger and bulletin boards. Users
would be required to click on a link to activate the virus, which would then send an email
containing user data to an anonymous email address, which was later found to be owned by
Larose. Data sent would contain items such as user IP and email addresses, contacts, site history,
and commonly used phrases. In 2008, larger websites used part of the Win32.5-0-1 code to track
web users' ad-related interests.

Viruses and the Internet

See also: Computer worm

Before computer networks became widespread, most viruses spread on removable media,
particularly floppy disks. In the early days of the personal computer, many users regularly
exchanged information and programs on floppies. Some viruses spread by infecting programs
stored on these disks, while others installed themselves into the disk boot sector, ensuring that
they would be run when the user booted the computer from the disk, usually inadvertently.
Personal computers of the era would attempt to boot first from a floppy if one had been left in
the drive. Until floppy disks fell out of use, this was the most successful infection strategy and
boot sector viruses were the most common in the wild for many years.

Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers
and the resultant increase in bulletin board system (BBS), modem use, and software sharing.
Bulletin board–driven software sharing contributed directly to the spread of Trojan horse
programs, and viruses were written to infect popularly traded software. Shareware and bootleg
software were equally common vectors for viruses on BBSs. Viruses can increase their
chances of spreading to other computers by infecting files on a network file system or a file
system that is accessed by other computers.[89]

Macro viruses have become common since the mid-1990s. Most of these viruses are written in
the scripting languages for Microsoft programs such as Word and Excel and spread throughout
Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also
available for Mac OS, most could also spread to Macintosh computers. Although most of these
viruses did not have the ability to send infected email messages, those that did took
advantage of the Microsoft Outlook COM interface.

Some old versions of Microsoft Word allow macros to replicate themselves with additional blank
lines. If two macro viruses simultaneously infect a document, the combination of the two, if also
self-replicating, can appear as a "mating" of the two and would likely be detected as a virus
unique from the "parents".[90]

A virus may also send a web address link as an instant message to all the contacts on an infected
machine. If the recipient, thinking the link is from a friend (a trusted source) follows the link to
the website, the virus hosted at the site may be able to infect this new computer and continue
propagating.[91]

Viruses that spread using cross-site scripting were first reported in 2002,[92] and were
academically demonstrated in 2005.[93] There have been multiple instances of the cross-site
scripting viruses in the wild, exploiting websites such as MySpace and Yahoo!.

Antivirus software
Screenshot of the open source ClamWin antivirus software running in Wine on Ubuntu Linux

Many users install antivirus software that can detect and eliminate known viruses when the
computer attempts to download or run the executable (which may be distributed as an email
attachment, or on USB flash drives, for example). Some antivirus software blocks known
malicious web sites that attempt to install malware. Antivirus software does not change the
underlying capability of hosts to transmit viruses. Users must update their software regularly to
patch security vulnerabilities ("holes"). Antivirus software also needs to be regularly updated in
order to recognize the latest threats. The German AV-TEST Institute publishes evaluations of
antivirus software for Windows[49] and Android.[50]

Examples of Microsoft Windows antivirus and anti-malware software include the optional
Microsoft Security Essentials[51] (for Windows XP, Vista and Windows 7) for real-time
protection, the Windows Malicious Software Removal Tool[52] (now included with Windows
(Security) Updates on "Patch Tuesday", the second Tuesday of each month), and Windows
Defender (an optional download in the case of Windows XP).[53] Additionally, several capable
antivirus software programs are available for free download from the Internet (usually restricted
to non-commercial use).[54] Some such free programs are almost as good as commercial
competitors.[55] Common security vulnerabilities are assigned CVE IDs and listed in the US
National Vulnerability Database. Secunia PSI[56] is an example of software, free for personal use,
that will check a PC for vulnerable out-of-date software, and attempt to update it. Ransomware
and phishing scam alerts appear as press releases on the Internet Crime Complaint Center
noticeboard.

Other commonly used preventative measures include timely operating system updates, software
updates, careful Internet browsing, and installation of only trusted software.[57] Certain browsers
flag sites that have been reported to Google and that have been confirmed as hosting malware by
Google.[58][59]

There are two common methods that an antivirus software application uses to detect viruses, as
described in the antivirus software article. The first, and by far the most common method of virus
detection is using a list of virus signature definitions. This works by examining the content of the
computer's memory (its RAM, and boot sectors) and the files stored on fixed or removable drives
(hard drives, floppy drives, or USB flash drives), and comparing those files against a database of
known virus "signatures". Virus signatures are just strings of code that are used to identify
individual viruses; for each virus, the antivirus designer tries to choose a unique signature string
that will not be found in a legitimate program. Different antivirus programs use different
"signatures" to identify viruses. The disadvantage of this detection method is that users are only
protected from viruses that are detected by signatures in their most recent virus definition update,
and not protected from new viruses (see "zero-day attack").[60]
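A highly simplified sketch of signature-based detection, written in Python for illustration only: the "signatures" below are made-up byte strings, and real antivirus products use far more sophisticated matching and file parsing.

# Toy signature scanner: report files that contain any known virus signature.
import os

SIGNATURES = {
    "ExampleVirus.A": b"\xde\xad\xbe\xef\x13\x37",   # invented byte patterns
    "ExampleVirus.B": b"CATCH-ME-IF-YOU-CAN",
}

def scan_file(path):
    with open(path, "rb") as f:
        data = f.read()                      # read the whole file as bytes
    return [name for name, sig in SIGNATURES.items() if sig in data]

def scan_directory(root):
    for dirpath, _dirs, files in os.walk(root):
        for filename in files:
            path = os.path.join(dirpath, filename)
            try:
                matches = scan_file(path)
            except OSError:
                continue                     # skip files we cannot read
            if matches:
                print(f"{path}: matches {', '.join(matches)}")

scan_directory(".")                          # scan the current directory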

A second method to find viruses is to use a heuristic algorithm based on common virus
behaviors. This method has the ability to detect new viruses for which antivirus security firms
have yet to define a "signature", but it also gives rise to more false positives than using
signatures. False positives can be disruptive, especially in a commercial environment.
Infection targets and replication techniques

Computer viruses infect a variety of different subsystems on their hosts.[27] One manner of
classifying viruses is to analyze whether they reside in binary executables (such as .EXE or
.COM files), data files (such as Microsoft Word documents or PDF files), or in the boot sector of
the host's hard drive (or some combination of all of these).[28][29]

Resident vs. non-resident viruses[edit]

A memory-resident virus (or simply "resident virus") installs itself as part of the operating
system when executed, after which it remains in RAM from the time the computer is booted up
to when it is shut down. Resident viruses overwrite interrupt handling code or other functions,
and when the operating system attempts to access the target file or disk sector, the virus code
intercepts the request and redirects the control flow to the replication module, infecting the
target. In contrast, a non-memory-resident virus (or "non-resident virus"), when executed, scans
the disk for targets, infects them, and then exits (i.e. it does not remain in memory after it is done
executing).[30][31][32]

Macro viruses[edit]

Many common applications, such as Microsoft Outlook and Microsoft Word, allow macro
programs to be embedded in documents or emails, so that the programs may be run automatically
when the document is opened. A macro virus (or "document virus") is a virus that is written in a
macro language, and embedded into these documents so that when users open the file, the virus
code is executed, and can infect the user's computer. This is one of the reasons that it is
dangerous to open unexpected attachments in e-mails.[33][34]

Boot sector viruses[edit]

Boot sector viruses specifically target the boot sector/Master Boot Record (MBR) of the host's
hard drive or removable storage media (flash drives, floppy disks, etc.).[28][35][36]

Computer programming language, any of various languages for expressing a set of detailed instructions for
a digital computer. Such instructions can be executed directly when they are in the computer manufacturer-
specific numerical form known as machine language, after a simple substitution process when expressed in a
corresponding assembly language, or after translation from some ―higher-level‖ language. Although there are
over 2,000 computer languages, relatively few are widely used.

Machine and assembly languages are ―low-level,‖ requiring a programmer to manage explicitly all of a
computer‘s idiosyncratic features of data storage and operation. In contrast, high-level languages shield a
programmer from worrying about such considerations and provide a notation that is more easily written and
read by programmers.

Language types
Machine and assembly languages

A machine language consists of the numeric codes for the operations that a particular computer can execute
directly. The codes are strings of 0s and 1s, or binary digits (―bits‖), which are frequently converted both from
and to hexadecimal (base 16) for human viewing and modification. Machine language instructions typically
use some bits to represent operations, such as addition, and some to represent operands, or perhaps the location
of the next instruction. Machine language is difficult to read and write, since it does not resemble conventional
mathematical notation or human language, and its codes vary from computer to computer.

Assembly language is one level above machine language. It uses short mnemonic codes for instructions and
allows the programmer to introduce names for blocks of memory that hold data. One might thus write ―add
pay, total‖ instead of ―0110101100101000‖ for an instruction that adds two numbers.

Assembly language is designed to be easily translated into machine language. Although blocks of data may be
referred to by name instead of by their machine addresses, assembly language does not provide more
sophisticated means of organizing complex information. Like machine language, assembly language requires
detailed knowledge of internal computer architecture. It is useful when such details are important, as in
programming a computer to interact with input/output devices (printers, scanners, storage devices, and so
forth).

Algorithmic languages

Algorithmic languages are designed to express mathematical or symbolic computations. They can express
algebraic operations in notation similar to mathematics and allow the use of subprograms that package
commonly used operations for reuse. They were the first high-level languages.

FORTRAN
The first important algorithmic language was FORTRAN (formula translation), designed in 1957 by an IBM
team led by John Backus. It was intended for scientific computations with real numbers and collections of
them organized as one- or multidimensional arrays. Its control structures included conditional IF statements,
repetitive loops (so-called DO loops), and a GOTO statement that allowed nonsequential execution of program
code. FORTRAN made it convenient to have subprograms for common mathematical operations, and built
libraries of them.

FORTRAN was also designed to translate into efficient machine language. It was immediately successful and
continues to evolve.

ALGOL
ALGOL (algorithmic language) was designed by a committee of American and European computer scientists
during 1958–60 for publishing algorithms, as well as for doing computations. Like LISP (described in the next
section), ALGOL had recursive subprograms—procedures that could invoke themselves to solve a problem by
reducing it to a smaller problem of the same kind. ALGOL introduced block structure, in which a program is
composed of blocks that might contain both data and instructions and have the same structure as an entire
program. Block structure became a powerful tool for building large programs out of small components.

ALGOL contributed a notation for describing the structure of a programming language, Backus–Naur Form,
which in some variation became the standard tool for stating the syntax (grammar) of programming languages.
ALGOL was widely used in Europe, and for many years it remained the language in which computer
algorithms were published. Many important languages, such as Pascal and Ada (both described later), are its
descendants.

LISP
LISP (list processing) was developed about 1960 by John McCarthy at the Massachusetts Institute of
Technology (MIT) and was founded on the mathematical theory of recursive functions (in which a function
appears in its own definition). A LISP program is a function applied to data, rather than being a sequence of
procedural steps as in FORTRAN and ALGOL. LISP uses a very simple notation in which operations and their
operands are given in a parenthesized list. For example, (+ a (* b c)) stands for a + b*c. Although this appears
awkward, the notation works well for computers. LISP also uses the list structure to represent data, and,
because programs and data use the same structure, it is easy for a LISP program to operate on other programs
as data.
LISP became a common language for artificial intelligence (AI) programming, partly owing to the confluence
of LISP and AI work at MIT and partly because AI programs capable of ―learning‖ could be written in LISP as
self-modifying programs. LISP has evolved through numerous dialects, such as Scheme and Common LISP.

C
The C programming language was developed in 1972 by Dennis Ritchie and Brian Kernighan at the AT&T
Corporation for programming computer operating systems. Its capacity to structure data and programs through
the composition of smaller units is comparable to that of ALGOL. It uses a compact notation and provides the
programmer with the ability to operate with the addresses of data as well as with their values. This ability is
important in systems programming, and C shares with assembly language the power to exploit all the features
of a computer‘s internal architecture. C, along with its descendant C++, remains one of the most common
languages.

Business-oriented languages
COBOL
COBOL (common business oriented language) has been heavily used by businesses since its inception in 1959.
A committee of computer manufacturers and users and U.S. government organizations established CODASYL
(Committee on Data Systems and Languages) to develop and oversee the language standard in order to ensure
its portability across diverse systems.

COBOL uses an English-like notation—novel when introduced. Business computations organize and
manipulate large quantities of data, and COBOL introduced the record data structure for such tasks. A record
clusters heterogeneous data such as a name, ID number, age, and address into a single unit. This contrasts with
scientific languages, in which homogeneous arrays of numbers are common. Records are an important
example of ―chunking‖ data into a single object, and they appear in nearly all modern languages.

SQL
SQL (structured query language) is a language for specifying the organization of databases (collections of
records). Databases organized with SQL are called relational because SQL provides the ability to query a
database for information that falls in a given relation. For example, a query might be ―find all records with
both last_name Smith and city New York.‖ Commercial database programs commonly use a SQL-like
language for their queries.
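
As a small illustration of such a query, the sketch below uses Python's built-in sqlite3 module; the table name and sample rows are invented for the example.

    import sqlite3

    # Build a tiny in-memory relational database (example data only).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE people (last_name TEXT, city TEXT)")
    conn.executemany("INSERT INTO people VALUES (?, ?)",
                     [("Smith", "New York"), ("Jones", "Boston"), ("Smith", "Chicago")])

    # The query described in the text: all records with last_name Smith and city New York.
    rows = conn.execute(
        "SELECT * FROM people WHERE last_name = 'Smith' AND city = 'New York'"
    ).fetchall()
    print(rows)   # [('Smith', 'New York')]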

Education-oriented languages
BASIC
BASIC (beginner‘s all-purpose symbolic instruction code) was designed at Dartmouth College in the mid-
1960s by John Kemeny and Thomas Kurtz. It was intended to be easy to learn by novices, particularly non-
computer science majors, and to run well on a time-sharing computer with many users. It had simple data
structures and notation and it was interpreted: a BASIC program was translated line-by-line and executed as it
was translated, which made it easy to locate programming errors.

Its small size and simplicity also made BASIC a popular language for early personal computers. Its recent
forms have adopted many of the data and control structures of other contemporary languages, which makes it
more powerful but less convenient for beginners.

Pascal
About 1970 Niklaus Wirth of Switzerland designed Pascal to teach structured programming, which
emphasized the orderly use of conditional and loop control structures without GOTO statements. Although
Pascal resembled ALGOL in notation, it provided the ability to define data types with which to organize
complex information, a feature beyond the capabilities of ALGOL as well as FORTRAN and COBOL. User-
defined data types allowed the programmer to introduce names for complex data, which the language translator
could then check for correct usage before running a program.

During the late 1970s and ‘80s, Pascal was one of the most widely used languages for programming
instruction. It was available on nearly all computers, and, because of its familiarity, clarity, and security, it was
used for production software as well as for education.

Logo
Logo originated in the late 1960s as a simplified LISP dialect for education; Seymour Papert and others used it
at MIT to teach mathematical thinking to schoolchildren. It had a more conventional syntax than LISP and
featured ―turtle graphics,‖ a simple method for generating computer graphics. (The name came from an early
project to program a turtlelike robot.) Turtle graphics used body-centred instructions, in which an object was
moved around a screen by commands, such as ―left 90‖ and ―forward,‖ that specified actions relative to the
current position and orientation of the object rather than in terms of a fixed framework. Together with
recursive routines, this technique made it easy to program intricate and attractive patterns.

Hypertalk
Hypertalk was designed as ―programming for the rest of us‖ by Bill Atkinson for Apple‘s Macintosh. Using a
simple English-like syntax, Hypertalk enabled anyone to combine text, graphics, and audio quickly into
―linked stacks‖ that could be navigated by clicking with a mouse on standard buttons supplied by the program.
Hypertalk was particularly popular among educators in the 1980s and early ‘90s for classroom multimedia
presentations. Although Hypertalk had many features of object-oriented languages (described in the next
section), Apple did not develop it for other computer platforms and let it languish; as Apple‘s market share
declined in the 1990s, a new cross-platform way of displaying multimedia left Hypertalk all but obsolete (see
the section World Wide Web display languages).

Object-oriented languages
Object-oriented languages help to manage complexity in large programs. Objects package data and the
operations on them so that only the operations are publicly accessible and internal details of the data structures
are hidden. This information hiding made large-scale programming easier by allowing a programmer to think
about each part of the program in isolation. In addition, objects may be derived from more general ones,
―inheriting‖ their capabilities. Such an object hierarchy made it possible to define specialized objects without
repeating all that is in the more general ones.

Object-oriented programming began with the Simula language (1967), which added information hiding to
ALGOL. Another influential object-oriented language was Smalltalk (1980), in which a program was a set of
objects that interacted by sending messages to one another.

C++
The C++ language, developed by Bjarne Stroustrup at AT&T in the mid-1980s, extended C by adding objects
to it while preserving the efficiency of C programs. It has been one of the most important languages for both
education and industrial programming. Large parts of many operating systems, such as the Microsoft
Corporation‘s Windows 98, were written in C++.

Ada
Ada was named for Augusta Ada King, countess of Lovelace, who was an assistant to the 19th-century English
inventor Charles Babbage, and is sometimes called the first computer programmer. Ada, the language, was
developed in the early 1980s for the U.S. Department of Defense for large-scale programming. It combined
Pascal-like notation with the ability to package operations and data into independent modules. Its first form,
Ada 83, was not fully object-oriented, but the subsequent Ada 95 provided objects and the ability to construct
hierarchies of them. While no longer mandated for use in work for the Department of Defense, Ada remains an
effective language for engineering large programs.

Java
In the early 1990s, Java was designed by Sun Microsystems, Inc., as a programming language for the World
Wide Web (WWW). Although it resembled C++ in appearance, it was fully object-oriented. In particular,
Java dispensed with lower-level features, including the ability to manipulate data addresses, a capability that is
neither desirable nor useful in programs for distributed systems. In order to be portable, Java programs are
translated by a Java Virtual Machine specific to each computer platform, which then executes the Java
program. In addition to adding interactive capabilities to the Internet through Web ―applets,‖ Java has been
widely used for programming small and portable devices, such as mobile telephones.

Visual Basic
Visual Basic was developed by Microsoft to extend the capabilities of BASIC by adding objects and ―event-
driven‖ programming: buttons, menus, and other elements of graphical user interfaces (GUIs). Visual Basic
can also be used within other Microsoft software to program small routines.

Declarative languages
Declarative languages, also called nonprocedural or very high level, are programming languages in which
(ideally) a program specifies what is to be done rather than how to do it. In such languages there is less
difference between the specification of a program and its implementation than in the procedural languages
described so far. The two common kinds of declarative languages are logic and functional languages.

Logic programming languages, of which PROLOG (programming in logic) is the best known, state a program
as a set of logical relations (e.g., a grandparent is the parent of a parent of someone). Such languages are
similar to the SQL database language. A program is executed by an ―inference engine‖ that answers a query by
searching these relations systematically to make inferences that will answer a query. PROLOG has been used
extensively in natural language processing and other AI programs.

Functional languages have a mathematical style. A functional program is constructed by applying functions to
arguments. Functional languages, such as LISP, ML, and Haskell, are used as research tools in language
development, in automated mathematical theorem provers, and in some commercial projects.

Scripting languages
Scripting languages are sometimes called little languages. They are intended to solve relatively small
programming problems that do not require the overhead of data declarations and other features needed to make
large programs manageable. Scripting languages are used for writing operating system utilities, for special-
purpose file-manipulation programs, and, because they are easy to learn, sometimes for considerably larger
programs.

PERL (practical extraction and report language) was developed in the late 1980s, originally for use with the
UNIX operating system. It was intended to have all the capabilities of earlier scripting languages. PERL
provided many ways to state common operations and thereby allowed a programmer to adopt any convenient
style. In the 1990s it became popular as a system-programming tool, both for small utility programs and for
prototypes of larger ones. Together with other languages discussed below, it also became popular for
programming computer Web ―servers.‖

Document formatting languages

Document formatting languages specify the organization of printed text and graphics. They fall into several
classes: text formatting notation that can serve the same functions as a word processing program, page
description languages that are interpreted by a printing device, and, most generally, markup languages that
describe the intended function of portions of a document.
TeX
TeX was developed during 1977–86 as a text formatting language by Donald Knuth, a Stanford University
professor, to improve the quality of mathematical notation in his books. Text formatting systems, unlike
WYSIWYG (―What You See Is What You Get‖) word processors, embed plain text formatting commands in a
document, which are then interpreted by the language processor to produce a formatted document for display
or printing. TeX marks italic text, for example, as {\it this is italicized}, which is then displayed as this is
italicized.

TeX largely replaced earlier text formatting languages. Its powerful and flexible abilities gave an expert
precise control over such things as the choice of fonts, layout of tables, mathematical notation, and the
inclusion of graphics within a document. It is generally used with the aid of ―macro‖ packages that define
simple commands for common operations, such as starting a new paragraph; LaTeX is a widely used package.
TeX contains numerous standard ―style sheets‖ for different types of documents, and these may be further
adapted by each user. There are also related programs such as BibTeX, which manages bibliographies and has
style sheets for all of the common bibliography styles, and versions of TeX for languages with various
alphabets.

PostScript
PostScript is a page-description language developed in the early 1980s by Adobe Systems Incorporated on the
basis of work at Xerox PARC (Palo Alto Research Center). Such languages describe documents in terms that
can be interpreted by a personal computer to display the document on its screen or by a microprocessor in a
printer or a typesetting device.

PostScript commands can, for example, precisely position text, in various fonts and sizes, draw images that are
mathematically described, and specify colour or shading. PostScript uses postfix, also called reverse Polish
notation, in which an operation name follows its arguments. Thus, ―300 600 20 270 arc stroke‖ means: draw
(―stroke‖) a 270-degree arc with radius 20 at location (300, 600). Although PostScript can be read and written
by a programmer, it is normally produced by text formatting programs, word processors, or graphic display
tools.
The success of PostScript is due to its specification‘s being in the public domain and to its being a good match
for high-resolution laser printers. It has influenced the development of printing fonts, and manufacturers
produce a large variety of PostScript fonts.

SGML
SGML (standard generalized markup language) is an international standard for the definition of markup
languages; that is, it is a metalanguage. Markup consists of notations called tags that specify the function of a
piece of text or how it is to be displayed. SGML emphasizes descriptive markup, in which a tag might be
―<emphasis>.‖ Such a markup denotes the document function, and it could be interpreted as reverse video on a
computer screen, underlining by a typewriter, or italics in typeset text.

SGML is used to specify DTDs (document type definitions). A DTD defines a kind of document, such as a
report, by specifying what elements must appear in the document—e.g., <Title>—and giving rules for the use
of document elements, such as that a paragraph may appear within a table entry but a table may not appear
within a paragraph. A marked-up text may be analyzed by a parsing program to determine if it conforms to a
DTD. Another program may read the markups to prepare an index or to translate the document into PostScript
for printing. Yet another might generate large type or audio for readers with visual or hearing disabilities.

World Wide Web display languages


HTML
The World Wide Web is a system for displaying text, graphics, and audio retrieved over the Internet on a
computer monitor. Each retrieval unit is known as a Web page, and such pages frequently contain ―links‖ that
allow related pages to be retrieved. HTML (hypertext markup language) is the markup language for encoding
Web pages. It was designed by Tim Berners-Lee at the CERN nuclear physics laboratory in Switzerland during
the 1980s and is defined by an SGML DTD. HTML markup tags specify document elements such as headings,
paragraphs, and tables. They mark up a document for display by a computer program known as a Web
browser. The browser interprets the tags, displaying the headings, paragraphs, and tables in a layout that is
adapted to the screen size and fonts available to it.

HTML documents also contain anchors, which are tags that specify links to other Web pages. An anchor has
the form <A HREF= ―http://www.britannica.com‖> Encyclopædia Britannica</A>, where the quoted string is
the URL (universal resource locator) to which the link points (the Web ―address‖) and the text following it is
what appears in a Web browser, underlined to show that it is a link to another page. What is displayed as a
single page may also be formed from multiple URLs, some containing text and others graphics.

XML
HTML does not allow one to define new text elements; that is, it is not extensible. XML (extensible markup
language) is a simplified form of SGML intended for documents that are published on the Web. Like SGML,
XML uses DTDs to define document types and the meanings of tags used in them. XML adopts conventions
that make it easy to parse, such as that document entities are marked by both a beginning and an ending tag,
such as <BEGIN>…</BEGIN>. XML provides more kinds of hypertext links than HTML, such as
bidirectional links and links relative to a document subsection.

Because an author may define new tags, an XML DTD must also contain rules that instruct a Web browser
how to interpret them—how an entity is to be displayed or how it is to generate an action such as preparing an
e-mail message.

Web scripting

Web pages marked up with HTML or XML are largely static documents. Web scripting can add information to
a page as a reader uses it or let the reader enter information that may, for example, be passed on to the order
department of an online business. CGI (common gateway interface) provides one mechanism; it transmits
requests and responses between the reader‘s Web browser and the Web server that provides the page. The CGI
component on the server contains small programs called scripts that take information from the browser system
or provide it for display. A simple script might ask the reader‘s name, determine the Internet address of the
system that the reader uses, and print a greeting. Scripts may be written in any programming language, but,
because they are generally simple text-processing routines, scripting languages like PERL are particularly
appropriate.

Another approach is to use a language designed for Web scripts to be executed by the browser. JavaScript is
one such language, designed by the Netscape Communications Corp., which may be used with both Netscape‘s
and Microsoft‘s browsers. JavaScript is a simple language, quite different from Java. A JavaScript program
may be embedded in a Web page with the HTML tag <script language=―JavaScript‖>. JavaScript instructions
following that tag will be executed by the browser when the page is selected. In order to speed up display of
dynamic (interactive) pages, JavaScript is often combined with XML or some other language for exchanging
information between the server and the client‘s browser. In particular, the XMLHttpRequest command enables
asynchronous data requests from the server without requiring the server to resend the entire Web page. This
approach, or ―philosophy,‖ of programming is called Ajax (asynchronous JavaScript and XML).

VB Script is a subset of Visual Basic. Originally developed for Microsoft‘s Office suite of programs, it was
later used for Web scripting as well. Its capabilities are similar to those of JavaScript, and it may be embedded
in HTML in the same fashion.

Behind the use of such scripting languages for Web programming lies the idea of component programming, in
which programs are constructed by combining independent previously written components without any further
language processing. JavaScript and VB Script programs were designed as components that may be attached to
Web browsers to control how they display information.

Software : Classification

A piece of software is usually classified as being either systems software or applications
software.

Systems Software

Systems software controls the operation of a computer. Without systems software a computer
would not function. The most important piece of systems software is the operating system. The
operating system will perform vital tasks such as :

 Managing communications between software and hardware.


 Allocating computer memory to other software programs.
 Allocating CPU time to other software programs.

Other pieces of system software will perform less essential tasks such as backing up files. These
programs are known as utilities. Example utility programs are :

 Disk Defragmenter : Reorganises files stored on a disk to speed up loading and storing
files.
 Backup Software : Used to produce copies of information stored on a computer so that if
data is lost it can be recovered.

Applications Software

Applications software is software that is written to solve a particular problem or carry out a
particular job. Application software can be either generic or application specific.
(1) Generic (General Purpose) Software

A generic software package is a package that can be put to a wide variety of uses. For example a
spreadsheet package can be used for any task involving calculations or graph plotting. The most
common generic software packages are :

Word Processor     : Used to produce simple documents such as letters and essays.
Desktop Publisher  : Used to produce complicated documents such as leaflets, posters and newspapers.
Graphics Package   : Used to produce pictures and diagrams.
Database           : Used to store information so that it can be easily searched.
Spreadsheet        : Used to perform calculations and draw graphs.

Some generic packages are described as being integrated packages. This means that they
incorporate more than one program. For example an integrated package may include a word
processor, spreadsheet and database. The individual programs in an integrated package are
designed to work well with each other. Information can easily be transferred between each
program. Additionally the user interface of each program will be similar so that if you have
learnt how to use one program it will be relatively easy to learn how to use another.

(2) Application Specific Software

An application specific package is produced to perform one specific task. For example a
program that was written to produce invoices and manage stock levels for a garage would be
application specific software.

Flowcharts
A flowchart is a graphical representation of an algorithm. These flowcharts play a vital role in
the programming of a problem and are quite helpful in understanding the logic of complicated
and lengthy problems. Once the flowchart is drawn, it becomes easy to write the program in any
high level language. Often we see how flowcharts are helpful in explaining the program to
others. Hence, it is correct to say that a flowchart is a must for the better documentation of a
complex program.

Flowcharts are usually drawn using some standard symbols. The most commonly used symbols and their meanings are:

Terminal (rounded rectangle/oval) : Start or end of the program

Process (rectangle) : Computational steps or processing function of a program

Input/Output (parallelogram) : Input or output operation

Decision (diamond) : Decision making and branching

Connector (circle) : Connector or joining of two parts of program

The following are some guidelines in flowcharting:

a. In drawing a proper flowchart, all necessary requirements should be listed out in logical
order.
b. The flowchart should be clear, neat and easy to follow. There should not be any room for
ambiguity in understanding the flowchart.
c. The usual direction of the flow of a procedure or system is from left to right or top to
bottom.
d. Only one flow line should come out from a process symbol.
e. Only one flow line should enter a decision symbol, but two or three flow lines, one for
each possible answer, should leave the decision symbol.
f. Only one flow line is used in conjunction with a terminal symbol.
g. If the flowchart becomes complex, it is better to use connector symbols to reduce the
number of flow lines. Avoid the intersection of flow lines to keep the chart clear and easy
to follow.
h. Ensure that the flowchart has a logical start and finish.
i. It is useful to test the validity of the flowchart by passing simple test data through it.

PART II: Example of a flowchart:

Problem 1: Write an algorithm and draw the flowchart for finding the average of two numbers

Algorithm:

Input: two numbers x and y
Output: the average of x and y

Steps:
1. input x
2. input y
3. sum = x + y
4. average = sum / 2
5. output average

Flowchart (read from top to bottom): START → Input x → Input y → Sum = x + y → Average = Sum / 2 → Output Average → END
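
The algorithm above translates almost line for line into a short program. A minimal Python version (variable names follow the flowchart) might look like this:

    # Average of two numbers, following the flowchart step by step.
    x = float(input("Input x: "))
    y = float(input("Input y: "))
    total = x + y            # Sum = x + y
    average = total / 2      # Average = Sum / 2
    print("Average =", average)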

PART III: An exercise

Problem 2: Write an algorithm for finding the area of a rectangle

Hints:

 define the inputs and the outputs


 define the steps
 draw the flowchart
Number System
When we type letters or words, the computer translates them into numbers, as computers can understand only
numbers. A computer can understand a positional number system where there are only a few symbols called digits and
these symbols represent different values depending on the position they occupy in the number.

The value of each digit in a number can be determined using

 The digit

 The position of the digit in the number

 The base of the number system (where base is defined as the total number of digits available in the number
system).

Decimal Number System


The number system that we use in our day-to-day life is the decimal number system. Decimal number system has base
10 as it uses 10 digits from 0 to 9. In decimal number system, the successive positions to the left of the decimal point
represent units, tens, hundreds, thousands and so on.

Each position represents a specific power of the base (10). For example, the decimal number 1234 consists of the digit 4
in the units position, 3 in the tens position, 2 in the hundreds position, and 1 in the thousands position, and its value can
be written as

(1 x 1000) + (2 x 100) + (3 x 10) + (4 x 1)

= (1 x 10^3) + (2 x 10^2) + (3 x 10^1) + (4 x 10^0)

= 1000 + 200 + 30 + 4

= 1234

As a computer programmer or an IT professional, you should understand the following number systems which are
frequently used in computers.

S.N.   Number System and Description

1      Binary Number System
       Base 2. Digits used : 0, 1

2      Octal Number System
       Base 8. Digits used : 0 to 7

3      Hexadecimal Number System
       Base 16. Digits used : 0 to 9, Letters used : A to F

Binary Number System


Characteristics of binary number system are as follows:
 Uses two digits, 0 and 1.

 Also called base 2 number system.

 Each position in a binary number represents a power of the base (2); the first (rightmost) position represents 2^0.

 The last (leftmost) position of an n-digit binary number represents 2^(n-1).

Example
Binary Number : 10101₂

Calculating Decimal Equivalent:

Step     Binary Number    Decimal Number

Step 1   10101₂           ((1 x 2^4) + (0 x 2^3) + (1 x 2^2) + (0 x 2^1) + (1 x 2^0))₁₀

Step 2   10101₂           (16 + 0 + 4 + 0 + 1)₁₀

Step 3   10101₂           21₁₀

Note : 10101₂ is normally written as 10101.

Octal Number System


Characteristics of octal number system are as follows:
 Uses eight digits, 0,1,2,3,4,5,6,7.

 Also called base 8 number system.

 Each position in an octal number represents a power of the base (8); the first (rightmost) position represents 8^0.

 The last (leftmost) position of an n-digit octal number represents 8^(n-1).

Example
Octal Number : 12570₈

Calculating Decimal Equivalent:

Step     Octal Number     Decimal Number

Step 1   12570₈           ((1 x 8^4) + (2 x 8^3) + (5 x 8^2) + (7 x 8^1) + (0 x 8^0))₁₀

Step 2   12570₈           (4096 + 1024 + 320 + 56 + 0)₁₀

Step 3   12570₈           5496₁₀

Note : 12570₈ is normally written as 12570.

Hexadecimal Number System


Characteristics of hexadecimal number system are as follows:
 Uses 10 digits and 6 letters: 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F.

 Letters represent numbers starting from 10: A = 10, B = 11, C = 12, D = 13, E = 14, F = 15.

 Also called base 16 number system.

 Each position in a hexadecimal number represents a power of the base (16); the first (rightmost) position represents 16^0.

 The last (leftmost) position of an n-digit hexadecimal number represents 16^(n-1).

Example
Hexadecimal Number : 19FDE₁₆

Calculating Decimal Equivalent:

Step     Hexadecimal Number   Decimal Number

Step 1   19FDE₁₆              ((1 x 16^4) + (9 x 16^3) + (F x 16^2) + (D x 16^1) + (E x 16^0))₁₀

Step 2   19FDE₁₆              ((1 x 16^4) + (9 x 16^3) + (15 x 16^2) + (13 x 16^1) + (14 x 16^0))₁₀

Step 3   19FDE₁₆              (65536 + 36864 + 3840 + 208 + 14)₁₀

Step 4   19FDE₁₆              106462₁₀

Note : 19FDE₁₆ is normally written as 19FDE.

Number Conversion
There are many methods or techniques which can be used to convert numbers from one base to another. We'll
demonstrate here the following:

 Decimal to Other Base System

 Other Base System to Decimal

 Other Base System to Non-Decimal

 Shortcut method - Binary to Octal

 Shortcut method - Octal to Binary

 Shortcut method - Binary to Hexadecimal

 Shortcut method - Hexadecimal to Binary

Decimal to Other Base System


Steps

 Step 1 - Divide the decimal number to be converted by the value of the new base.

 Step 2 - Get the remainder from Step 1 as the rightmost digit (least significant digit) of new base
number.

 Step 3 - Divide the quotient of the previous divide by the new base.

 Step 4 - Record the remainder from Step 3 as the next digit (to the left) of the new base number.

Repeat Steps 3 and 4, getting remainders from right to left, until the quotient becomes zero in Step 3.
The last remainder thus obtained will be the most significant digit (MSD) of the new base number.

Example
Decimal Number : 29₁₀

Calculating Binary Equivalent:

Step     Operation   Result   Remainder

Step 1   29 / 2      14       1

Step 2   14 / 2      7        0

Step 3   7 / 2       3        1

Step 4   3 / 2       1        1

Step 5   1 / 2       0        1

As mentioned in Steps 2 and 4, the remainders have to be arranged in the reverse order so that the first
remainder becomes the least significant digit (LSD) and the last remainder becomes the most significant digit
(MSD).

Decimal Number : 29₁₀ = Binary Number : 11101₂
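
The repeated-division procedure above can be written as a short Python function; the function name to_base is our own choice for this sketch.

    def to_base(n, base):
        """Convert a non-negative decimal integer to the given base by repeated division."""
        if n == 0:
            return "0"
        digits = ""
        while n > 0:
            remainder = n % base
            digits = "0123456789ABCDEF"[remainder] + digits   # remainders are read from right to left
            n //= base
        return digits

    print(to_base(29, 2))    # 11101
    print(to_base(29, 8))    # 35
    print(to_base(29, 16))   # 1D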

Other base system to Decimal System


Steps

 Step 1 - Determine the column (positional) value of each digit (this depends on the position of the
digit and the base of the number system).

 Step 2 - Multiply the obtained column values (in Step 1) by the digits in the corresponding columns.

 Step 3 - Sum the products calculated in Step 2. The total is the equivalent value in decimal.

Example
Binary Number : 11101₂

Calculating Decimal Equivalent:

Step     Binary Number    Decimal Number

Step 1   11101₂           ((1 x 2^4) + (1 x 2^3) + (1 x 2^2) + (0 x 2^1) + (1 x 2^0))₁₀

Step 2   11101₂           (16 + 8 + 4 + 0 + 1)₁₀

Step 3   11101₂           29₁₀

Binary Number : 11101₂ = Decimal Number : 29₁₀
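
The positional-value method above is equally easy to express in Python; to_decimal is an illustrative name, and Python's built-in int(text, base) does the same job.

    def to_decimal(text, base):
        """Convert a number written in the given base to decimal using positional values."""
        value = 0
        for position, digit in enumerate(reversed(text)):
            value += int(digit, 16) * base ** position   # int(digit, 16) also accepts A-F
        return value

    print(to_decimal("11101", 2))   # 29
    print(to_decimal("12570", 8))   # 5496
    print(int("19FDE", 16))         # 106462, using the built-in conversion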

Other Base System to Non-Decimal System


Steps

 Step 1 - Convert the original number to a decimal number (base 10).

 Step 2 - Convert the decimal number so obtained to the new base number.

Example
Octal Number : 25₈

Calculating Binary Equivalent:

Step 1 : Convert to Decimal

Step     Octal Number   Decimal Number

Step 1   25₈            ((2 x 8^1) + (5 x 8^0))₁₀

Step 2   25₈            (16 + 5)₁₀

Step 3   25₈            21₁₀

Octal Number : 25₈ = Decimal Number : 21₁₀

Step 2 : Convert Decimal to Binary

Step     Operation   Result   Remainder

Step 1   21 / 2      10       1

Step 2   10 / 2      5        0

Step 3   5 / 2       2        1

Step 4   2 / 2       1        0

Step 5   1 / 2       0        1

Decimal Number : 21₁₀ = Binary Number : 10101₂

Octal Number : 25₈ = Binary Number : 10101₂

Shortcut method - Binary to Octal


Steps

 Step 1 - Divide the binary digits into groups of three (starting from the right).

 Step 2 - Convert each group of three binary digits to one octal digit.

Example
Binary Number : 10101₂

Calculating Octal Equivalent:

Step     Binary Number   Octal Number

Step 1   10101₂          010 101

Step 2   10101₂          2₈ 5₈

Step 3   10101₂          25₈

Binary Number : 10101₂ = Octal Number : 25₈

Shortcut method - Octal to Binary


Steps

 Step 1 - Convert each octal digit to a 3 digit binary number (the octal digits may be treated as decimal
for this conversion).
 Step 2 - Combine all the resulting binary groups (of 3 digits each) into a single binary number.

Example
Octal Number : 25₈

Calculating Binary Equivalent:

Step     Octal Number   Binary Number

Step 1   25₈            2₁₀ 5₁₀

Step 2   25₈            010₂ 101₂

Step 3   25₈            010101₂

Octal Number : 25₈ = Binary Number : 10101₂

Shortcut method - Binary to Hexadecimal


Steps

 Step 1 - Divide the binary digits into groups of four (starting from the right).

 Step 2 - Convert each group of four binary digits to one hexadecimal symbol.

Example
Binary Number : 10101₂

Calculating Hexadecimal Equivalent:

Step     Binary Number   Hexadecimal Number

Step 1   10101₂          0001 0101

Step 2   10101₂          1₁₀ 5₁₀

Step 3   10101₂          15₁₆

Binary Number : 10101₂ = Hexadecimal Number : 15₁₆


Shortcut method - Hexadecimal to Binary
Steps

 Step 1 - Convert each hexadecimal digit to a 4 digit binary number (the hexadecimal digits may be
treated as decimal for this conversion).

 Step 2 - Combine all the resulting binary groups (of 4 digits each) into a single binary number.

Example
Hexadecimal Number : 15₁₆

Calculating Binary Equivalent:

Step     Hexadecimal Number   Binary Number

Step 1   15₁₆                 1₁₀ 5₁₀

Step 2   15₁₆                 0001₂ 0101₂

Step 3   15₁₆                 00010101₂

Hexadecimal Number : 15₁₆ = Binary Number : 10101₂
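
The grouping shortcuts can be checked quickly in Python. The sketch below pads a binary string to whole groups and converts each group; the helper names are ours.

    def binary_to_octal(bits):
        """Group binary digits in threes (from the right) and convert each group."""
        bits = bits.zfill((len(bits) + 2) // 3 * 3)          # pad on the left to a multiple of 3
        return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

    def binary_to_hex(bits):
        """Group binary digits in fours (from the right) and convert each group."""
        bits = bits.zfill((len(bits) + 3) // 4 * 4)          # pad on the left to a multiple of 4
        return "".join("0123456789ABCDEF"[int(bits[i:i + 4], 2)] for i in range(0, len(bits), 4))

    print(binary_to_octal("10101"))   # 25
    print(binary_to_hex("10101"))     # 15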

Ones and Twos Complement


Ones Complement

The complement (or opposite) of +5 is −5. When representing positive and negative numbers in
8-bit ones complement binary form, the positive numbers are the same as in signed binary
notation described in Number Systems Module 1.4, i.e. the numbers 0 to +127 are represented as
00000000₂ to 01111111₂. However, the complement of these numbers, that is their negative
counterparts from −128 to −1, are represented by ‗complementing‘ each 1 bit of the positive
binary number to 0 and each 0 to 1.

For example:

+5₁₀ is 00000101₂

−5₁₀ is 11111010₂

Notice in the above example that the most significant bit (msb) in the negative number −5₁₀ is 1,
just as in signed binary. The remaining 7 bits of the negative number, however, are not the same
as in signed binary notation. They are just the complement of the remaining 7 bits, and these give
the value or magnitude of the number.
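
In a program, the 8-bit ones complement is simply a bit-by-bit inversion. A minimal Python sketch (the function name is ours):

    def ones_complement(value):
        """Invert every bit of an 8-bit value (each 0 becomes 1 and each 1 becomes 0)."""
        return value ^ 0xFF

    plus_five = 0b00000101
    minus_five = ones_complement(plus_five)
    print(format(plus_five, "08b"))    # 00000101  (+5)
    print(format(minus_five, "08b"))   # 11111010  (-5 in ones complement)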

The problem with the signed binary arithmetic described in Number Systems Module 1.4 was
that it gave the wrong answer when adding positive and negative numbers. Does ones
complement notation give better results with negative numbers than signed binary?

Fig. 1.5.1 Adding Positive & Negative Numbers in Ones Complement

Fig. 1.5.1 shows the result of adding −4 to +6 using ones complement (this is the same as
subtracting +4 from +6, and so it is crucial to arithmetic).

The result, 00000001₂, is 1₁₀ instead of 2₁₀.

This is better than subtraction in signed binary, but it is still not correct. The result should be
+2₁₀ but the result is +1 (notice that there has also been a carry into the non-existent 9th bit).

Fig. 1.5.2 shows another example, this time adding two negative numbers, −4 and −3.

Because both numbers are negative, they are first converted to ones complement notation.

+4₁₀ is 00000100 in pure 8 bit binary, so complementing gives 11111011.

Fig. 1.5.2 Adding Positive & Negative Numbers in Ones Complement

This is −4₁₀ in ones complement notation.

+3₁₀ is 00000011 in pure 8 bit binary, so complementing gives 11111100.

This is −3₁₀ in ones complement notation.

The result, 11110111₂, is in its complemented form, so the 7 bits after the sign bit (1110111)
should be re-complemented and read as 0001000, which gives the value 8₁₀. As the most
significant bit (msb) of the result is 1, the result must be negative, which is correct, but the
remaining seven bits give the value of −8. This is still wrong by 1; it should be −7.

End Around Carry

There is a way to correct this, however. Whenever the ones complement system handles negative
numbers, the result is 1 less than it should be (e.g. 1 instead of 2, and −8 instead of −7), but
another thing that happens in negative-number ones complement calculations is that a carry is
‗left over‘ after the most significant bits are added. Instead of just disregarding this carry bit, it
can be added to the least significant bit of the result to correct the value. This process is called
‗end around carry‘ and corrects for the ‗result − 1‘ effect of the ones complement system.
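
A sketch of ones complement addition with end-around carry, using the −4 + (−3) example from Fig. 1.5.2 (the function name is ours):

    def ones_complement_add(a, b):
        """Add two 8-bit ones complement numbers, applying end-around carry if needed."""
        total = a + b
        if total > 0xFF:                 # a carry came out of the most significant bit
            total = (total & 0xFF) + 1   # wrap it around and add it to the lsb
        return total & 0xFF

    minus_four = 0b00000100 ^ 0xFF       # 11111011
    minus_three = 0b00000011 ^ 0xFF      # 11111100
    result = ones_complement_add(minus_four, minus_three)
    print(format(result, "08b"))         # 11111000, the ones complement of 00000111, i.e. -7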

There are, however, still problems with both ones complement and signed binary notation. The
ones complement system still has two ways of writing 0₁₀ (00000000₂ = +0₁₀ and 11111111₂ =
−0₁₀); additionally there is a problem with the way positive and negative numbers are written. In
any number system, the positive and negative versions of the same number should add to
produce zero. As can be seen from Table 1.5.1, adding +45 and −45 in decimal produces a result
of zero, but this is not the case in either signed binary or ones complement.

This is not good enough; however, there is a system that overcomes this difficulty and allows
correct operation using both positive and negative numbers. This is the Twos Complement
system.

Twos Complement Notation

Twos complement notation solves the problem of the relationship between positive and negative
numbers, and achieves accurate results in subtractions.

To perform binary subtraction, the twos complement system uses the technique of
complementing the number to be subtracted. In the ones complement system this produced a
result that was 1 less than the correct answer, but this could be corrected by using the ‗end
around carry‘ system. This still left the problem that positive and negative versions of the same
number did not produce zero when added together.

The twos complement system overcomes both of these problems by simply adding one to the
ones complement version of the number before addition takes place. The process of producing a
negative number in Twos Complement Notation is illustrated in Table 1.5.2.

Fig. 1.5.3 Adding a Number to its Twos Complement Produces Zero


This version of −5 now not only gives the correct answer when used in subtractions but is also
the additive inverse of +5, i.e. when added to +5 it produces the correct result of 0, as shown in Fig.
1.5.3.

Note that in twos complement the (1) carry from the most significant bit is discarded as there is
no need for the ‗end around carry‘ fix.

With numbers electronically stored in their twos complement form, subtractions can be carried
out more easily (and faster) as the microprocessor has simply to add two numbers together using
nearly the same circuitry as is used for addition.

6 − 2 = 4 is the same as (+6) + (−2) = 4
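
A minimal Python sketch of forming a twos complement and using it for subtraction by addition (function names are ours; all arithmetic is kept to 8 bits with a mask):

    def twos_complement(value):
        """Negate an 8-bit value: invert the bits and add 1."""
        return ((value ^ 0xFF) + 1) & 0xFF

    def add_8bit(a, b):
        """Add two 8-bit values, discarding any carry out of the msb."""
        return (a + b) & 0xFF

    print(format(twos_complement(5), "08b"))    # 11111011  (-5)
    print(add_8bit(5, twos_complement(5)))      # 0, a number plus its twos complement
    print(add_8bit(6, twos_complement(2)))      # 4, i.e. 6 - 2 computed by addition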

Twos Complement Examples

Note: When working with twos complement it is important to write numbers in their full 8 bit
form, since complementing will change any leading 0 bits into 1 bits, which will be included in
any calculation. Also during addition, carry bits can extend into leading 0 bits or sign bits, and
this can affect the answer in unexpected ways.

Fig. 1.5.4 Adding Positive Numbers in Twos Complement

Twos Complement Addition

Fig 1.5.4 shows an example of addition using 8 bit twos complement notation. When adding two
positive numbers, their sign bits (msb) will both be 0, so the numbers are written and added
as a pure 8-bit binary addition.

Twos Complement Subtraction


Fig. 1.5.5 Subtracting a Positive Number from a Larger Positive Number

Fig. 1.5.5 shows the simplest case of twos complement subtraction where one positive number
(the subtrahend) is subtracted from a larger positive number (the minuend). In this case the
minuend is 17₁₀ and the subtrahend is 10₁₀.

Because the minuend is a positive number its sign bit (msb) is 0 and so it can be written as a pure
8 bit binary number.

The subtrahend is to be subtracted from the minuend and so needs to be complemented (simple
ones complement) and 1 added to the least significant bit (lsb) to complete the twos complement
and turn +10 into −10.

When these three lines of digits, and any carry 1 bits, are added (remembering that in twos
complement any carry from the most significant bit is discarded), the answer (the difference
between 17 and 10) is 00000111₂ = 7₁₀, which is correct. Therefore the twos complement method
has provided correct subtraction by using only addition and complementing, both operations that
can be simply accomplished by digital electronic circuits.

Subtraction with a negative result


Fig. 1.5.6 Subtraction Producing a Negative Result

Some subtractions will of course produce an answer with a negative value. In Fig. 1.5.6 the result
of subtracting 17 from 10 should be −7₁₀, but the twos complement answer of 11111001₂ certainly
doesn‘t look like −7. However the sign bit is indicating correctly that the answer is negative, so
in this case the 7 bits indicating the value of the negative answer need to be 'twos complemented'
once more to see the answer in a recognisable form.

When the 7 value bits are complemented and 1 is added to the least significant bit, however, like
magic, the answer of 10000111₂ appears, which confirms that the original answer was in fact −7
in 8 bit twos complement form.

It seems then, that twos complement will get the right answer in every situation?

Well guess what − it doesn‘t! There are some cases where even twos complement will give a
wrong answer. In fact there are four conditions where a wrong answer may crop up:

1. When adding large positive numbers.

2. When adding large negative numbers.

3. When subtracting a large negative number from a large positive number.

4. When subtracting a large positive number from a large negative number.

The problem seems to be with the word ‗large‘. What is large depends on the size of the digital
word the microprocessor uses for calculation. As shown in Table 1.5.3, if the microprocessor
uses an 8-bit word, the largest positive number that can appear in the problem OR THE
RESULT is +127₁₀ and the largest negative number will be −128₁₀. The range of positive values
appears to be 1 less than the negative range because 0 is a positive number in twos complement
and has only one occurrence (00000000₂) in the whole range of 256₁₀ values.

With a 16-bit word length the largest positive and negative numbers will be +32767₁₀ and
−32768₁₀, but there is still a limit to the largest number that can appear in a single calculation.
Overflow Problems.

Steps can be taken to accommodate large numbers, by breaking a long binary word down into
byte sized sections and carrying out several separate calculations before assembling the final
answer. However this doesn‘t solve all the cases where errors can occur.

A typical overflow problem that can happen even with single byte numbers is illustrated in Fig.
1.5.7.

Fig. 1.5.7 Carry Overflows into Sign Bit

In this example, the two numbers to be added (115₁₀ and 91₁₀) should give a sum of 206₁₀ and at
first glance 11001110₂ looks like the correct answer of 206₁₀, but remember that in the 8 bit twos
complement system the most significant bit is the sign of the number; therefore the answer
appears to be a negative value, and reading just the lower 7 bits gives 1001110₂ or -78₁₀.
Although twos complement negative answers are not easy to read, this is clearly wrong, as the
result of adding two positive numbers must give a positive answer.

According to the information in Fig 1.5.6, as the answer is negative, complementing the lower 7
bits of 11001110₂ and adding 1 should reveal the value of the correct answer, but carrying out
the complement+1 on these bits and leaving the msb unchanged gives 10110010₂, which is -50₁₀.
This is nothing like the correct answer of 206₁₀, so what has happened?

The 8 bit twos complement notation has not worked here because adding 115 + 91 gives a total
greater than +127, the largest value that can be held in 8-bit twos complement notation.

What has happened is that an overflow has occurred, due to a 1 being carried from bit 6 to bit 7
(the most significant bit, which is of course the sign bit); this changes the sign of the answer.
Additionally it changes the value of the answer by 128₁₀, because that would be the value of the
msb in pure binary. So the answer that appears, 78₁₀, has ‗lost‘ 128₁₀ to the sign bit. The addition
would have been correct if the sign bit had been part of the value, however the calculation was
done in twos complement notation and the sign bit is not part of the value.
Of course in real electronic calculations, a single byte overflow situation does not usually cause a
problem; computers and calculators can fortunately deal with larger numbers than 127₁₀. They
achieve this because the microprocessors used are programmed to carry out the calculation in a
number of steps, and although each step must still be carried out in a register having a set word
length, e.g. 8 bits or 16 bits, corrective action can also be taken if an overflow situation is
detected at any stage.

Microprocessors deal with this problem by using a special register called a status register, flag
register or condition code register, which automatically flags up any problem, such as an
overflow or a change of sign, that occurs. It also provides other information useful to the
programmer, so that whatever problem occurs, corrective action can be taken by software, or in
many cases by firmware permanently embedded within the microprocessor, to deal with a range
of math problems.

Whatever word length the microprocessor is designed to handle however, there must always be a
limit to the word length, and so the programmer must be aware of the danger of errors similar to
that described in Fig. 1.5.7.

Fig. 1.5.8 Typical 8-bit Flag Register

A typical flag register is illustrated in Fig. 1.5.8 and consists of a single 8-bit storage register
located within the microprocessor, in which some bits may be set by software to control the
actions of the microprocessor, and some bits are set automatically by the results of arithmetic
operations within the microprocessor.
Typical flags for an 8-bit microprocessor are listed below:

Bit 0 (C) (set by arithmetic result): 1 = a carry has been created from the result msb.

Bit 1 (Z) (set by arithmetic result): 1 = the calculation resulted in 0.

Bit 2 (I) (set by software): 1 = interrupt disable (prevents software interrupts).

Bit 3 (D) (set by software): 1 = decimal mode (calculations are in BCD).

Bit 4 (B) (set by software): 1 = break (stops software execution).

Bit 5 (X): not used.

Bit 6 (V) (set by arithmetic result): 1 = overflow has occurred (result too big for 8 bits).

Bit 7 (N) (set by arithmetic result): 1 = negative result (msb of result is 1).
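
The arithmetic flags can be modelled in a few lines. The sketch below adds two 8-bit twos complement numbers and works out the C, Z, N and V flags; it is only an illustration of the idea, not the behaviour of any particular microprocessor.

    def add_with_flags(a, b):
        """Add two 8-bit values and return the result plus the C, Z, N and V flags."""
        total = a + b
        result = total & 0xFF
        carry = total > 0xFF                                  # C: carry out of bit 7
        zero = result == 0                                    # Z: result is zero
        negative = bool(result & 0x80)                        # N: msb of result is 1
        overflow = bool(~(a ^ b) & (a ^ result) & 0x80)       # V: signed overflow occurred
        return result, {"C": carry, "Z": zero, "N": negative, "V": overflow}

    # 115 + 91 = 206, which does not fit in 8-bit twos complement (maximum +127):
    print(add_with_flags(115, 91))   # result 11001110 (206 unsigned) with N and V set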

It seems therefore, that the only math that microprocessors can do is to add together two numbers
of a limited value, and to complement binary numbers. Well at a basic level this is true, however
there are some additional tricks they can perform, such as shifting all the bits in a binary word
left or right, as a partial aid to multiplication or division. However anything more complex must
be done by software.

Multiplication and Division

While addition and subtraction can be achieved by adding positive and negative numbers as
described above, this does not include the other basic forms of mathematics, multiplication and
division. Multiplication in its simplest form can however be achieved by adding a number to
itself a number of times, for example, starting with a total of 0, if 5 is added to the total three
times the new total will be fifteen (or 5 x 3). Division can also be accomplished by repeatedly
subtracting (using add) the divisor from the number to be divided until the remainder is zero, or
less than the divisor. Counting the number of subtractions then gives the result, for example if 3
(the divisor) is repeatedly subtracted from 15, after 5 subtractions the remainder will be zero and
the count will be 5, indicating that 15 divided by 3 is exactly 5.
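
The repeated-addition and repeated-subtraction ideas can be sketched directly (the function names are ours; this is only the simple method described above, not the faster shift-based routines real software uses):

    def multiply(a, times):
        """Multiply by adding 'a' to a running total 'times' times."""
        total = 0
        for _ in range(times):
            total += a
        return total

    def divide(dividend, divisor):
        """Divide by repeatedly subtracting the divisor; return (quotient, remainder)."""
        count = 0
        while dividend >= divisor:
            dividend -= divisor
            count += 1
        return count, dividend

    print(multiply(5, 3))    # 15
    print(divide(15, 3))     # (5, 0)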

There are more efficient methods for carrying out multiplication and division using software, or
extra features within some microprocessors and/or the use of embedded maths firmware.

Floating-Point Number Representation

A floating-point number (or real number) can represent a very large value (1.23×10^88) or a very small
value (1.23×10^-88). It could also represent a very large negative number (-1.23×10^88) and a very
small negative number (-1.23×10^-88), as well as zero, as illustrated:
A floating-point number is typically expressed in the scientific notation, with a fraction (F), and an
exponent (E) of a certain radix (r), in the form of F×r^E. Decimal numbers use radix of 10 (F×10^E);
while binary numbers use radix of 2 (F×2^E).

Representation of floating point number is not unique. For example, the number 55.66 can be
represented as 5.566×10^1, 0.5566×10^2, 0.05566×10^3, and so on. The fractional part can be
normalized. In the normalized form, there is only a single non-zero digit before the radix point. For
example, decimal number 123.4567 can be normalized as 1.234567×10^2; binary number
1010.1011B can be normalized as 1.0101011B×2^3.

It is important to note that floating-point numbers suffer from loss of precision when represented
with a fixed number of bits (e.g., 32-bit or 64-bit). This is because there are infinitely many real
numbers (even within a small range, say 0.0 to 0.1). On the other hand, an n-bit binary pattern can
represent only a finite number (2^n) of distinct values. Hence, not all real numbers can be represented. The
nearest approximation will be used instead, resulting in loss of accuracy.

It is also important to note that floating-point arithmetic is much less efficient than integer
arithmetic. It can be sped up with a dedicated floating-point co-processor. Hence, use
integers if your application does not require floating-point numbers.

In computers, floating-point numbers are represented in scientific notation of fraction (F) and
exponent (E) with a radix of 2, in the form of F×2^E. Both E and F can be positive as well as negative.
Modern computers adopt IEEE 754 standard for representing floating-point numbers. There are two
representation schemes: 32-bit single-precision and 64-bit double-precision.

4.1 IEEE-754 32-bit Single-Precision Floating-Point Numbers

In 32-bit single-precision floating-point representation:

The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for negative
numbers.
The following 8 bits represent exponent (E).
The remaining 23 bits represents fraction (F).
Normalized Form

Let's illustrate with an example, suppose that the 32-bit pattern is 1 1000 0001 011 0000 0000
0000 0000 0000, with:

S = 1
E = 1000 0001
F = 011 0000 0000 0000 0000 0000

In the normalized form, the actual fraction is normalized with an implicit leading 1 in the form of 1.F.
In this example, the actual fraction is 1.011 0000 0000 0000 0000 0000 = 1 + 1×2^-2 + 1×2^-3
= 1.375D.

The sign bit represents the sign of the number, with S=0 for positive and S=1 for negative number. In
this example with S=1, this is a negative number, i.e., -1.375D.

In normalized form, the actual exponent is E-127 (so-called excess-127 or bias-127). This is because
we need to represent both positive and negative exponent. With an 8-bit E, ranging from 0 to 255,
the excess-127 scheme could provide actual exponent of -127 to 128. In this example, E-127=129-
127=2D.

Hence, the number represented is -1.375×2^2=-5.5D.
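This worked example can be checked against the machine's own IEEE-754 interpretation with a few lines of Python (the hexadecimal value below is simply the 32-bit pattern written out above):

import struct

# 1 1000 0001 011 0000 0000 0000 0000 0000  ->  0xC0B00000
pattern = 0xC0B00000
value = struct.unpack('>f', pattern.to_bytes(4, 'big'))[0]
print(value)    # -5.5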

De-Normalized Form

The normalized form has a serious problem: with an implicit leading 1 for the fraction, it cannot
represent the number zero! Convince yourself of this.

De-normalized form was devised to represent zero and other numbers.

For E=0, the numbers are in the de-normalized form. An implicit leading 0 (instead of 1) is used for
the fraction; and the actual exponent is always -126. Hence, the number zero can be represented
with E=0 and F=0 (because 0.0×2^-126=0).

We can also represent very small positive and negative numbers in de-normalized form with E=0. For
example, if S=1, E=0, and F=011 0000 0000 0000 0000 0000 . The actual fraction is 0.011=1×2^-
2+1×2^-3=0.375D. Since S=1, it is a negative number. With E=0, the actual exponent is -126. Hence
the number is -0.375×2^-126 = -4.4×10^-39, which is an extremely small negative number (close
to zero).
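Both the normalized and the de-normalized cases can be decoded by hand with a short sketch like the following (illustrative Python only; the special exponent value E = 255, used for infinity and NaN, is not handled here):

def decode_single(bits):
    s = (bits >> 31) & 0x1      # sign bit
    e = (bits >> 23) & 0xFF     # 8-bit exponent field
    f = bits & 0x7FFFFF         # 23-bit fraction field
    if e == 0:                  # de-normalized: implicit leading 0, exponent fixed at -126
        fraction = f / 2**23
        exponent = -126
    else:                       # normalized: implicit leading 1, excess-127 exponent
        fraction = 1 + f / 2**23
        exponent = e - 127
    return (-1)**s * fraction * 2**exponent

print(decode_single(0xC0B00000))    # -5.5            (normalized example above)
print(decode_single(0x80300000))    # about -4.4e-39  (de-normalized example above)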

Binary Codes
In coding, when numbers, letters or words are represented by a specific group of symbols, it is said that
the number, letter or word is being encoded. The group of symbols is called a code. Digital data is
represented, stored and transmitted as groups of binary bits. Such a group is also called a binary code. A
binary code can represent numbers as well as alphanumeric characters.

Advantages of Binary Code


Following is the list of advantages that binary code offers.

 Binary codes are suitable for computer applications.

 Binary codes are suitable for digital communications.

 Binary codes make the analysis and design of digital circuits easier.

 Since only 0 & 1 are being used, implementation becomes easy.

Gray Code
Gray code is a non-weighted code and is not an arithmetic code; that is, there are no specific weights assigned
to the bit positions. It has a very special feature: only one bit changes each time the decimal number is
incremented. Because only one bit changes at a time, the Gray code is called a unit distance code.
The Gray code is a cyclic code. Gray code cannot be used for arithmetic operations.
Application of Gray code
 Gray code is popularly used in the shaft position encoders.

 A shaft position encoder produces a code word which represents the angular position of the shaft.
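A common way to compute the Gray code for a binary number n is n XOR (n >> 1); the short Python sketch below (illustrative only) also shows the unit-distance property:

def binary_to_gray(n):
    return n ^ (n >> 1)     # standard binary-to-Gray conversion

for n in range(8):
    print(n, format(binary_to_gray(n), '03b'))
# 0 000, 1 001, 2 011, 3 010, 4 110, 5 111, 6 101, 7 100
# each code differs from the previous one in exactly one bit position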

Binary Coded Decimal (BCD) code


In this code each decimal digit is represented by a 4-bit binary number. BCD is a way to express each of the
decimal digits with a binary code. With four bits we can represent sixteen combinations (0000 to
1111), but in BCD only the first ten of these are used (0000 to 1001). The remaining six code combinations,
i.e. 1010 to 1111, are invalid in BCD.
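As a quick illustration, the following Python sketch packs each decimal digit into its own 4-bit group (purely illustrative, not a hardware description):

def to_bcd(n):
    """Encode a non-negative decimal integer as a BCD bit string, 4 bits per digit."""
    return ' '.join(format(int(d), '04b') for d in str(n))

print(to_bcd(429))   # 0100 0010 1001  -- one 4-bit group per decimal digit
print(to_bcd(97))    # 1001 0111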

Advantages of BCD Codes

 It is very similar to the decimal system.

 We need to remember the binary equivalents of the decimal numbers 0 to 9 only.


Disadvantages of BCD Codes
 The addition and subtraction of BCD numbers follow different rules.

 BCD arithmetic is a little more complicated.

 BCD needs more bits than straight binary to represent a decimal number, so BCD is less
efficient than binary.

ASCII Character Codes

The ASCII character code charts contain the decimal and hexadecimal values of the extended
ASCII (American Standard Code for Information Interchange) character set. The extended
character set includes the ASCII character set and 128 other characters for graphics and line
drawing, often called the "IBM character set."

The characters that appear in Windows above 127 depend on the selected typeface.

The charts in this section show the default character set for a console application.
ASCII Table
Dec = Decimal Value
Char = Character

'5' has the int value 53


if we write '5'-'0' it evaluates to 53-48, or the int 5
if we write char c = 'B'+32; then c stores 'b'
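The same character arithmetic can be tried in Python using ord() and chr() (a small illustration):

print(ord('5'))             # 53
print(ord('5') - ord('0'))  # 5   -- digit character to its numeric value
print(chr(ord('B') + 32))   # 'b' -- upper case to lower case in ASCII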

Dec Char Dec Char Dec Char Dec Char


--------- --------- --------- ----------
0 NUL (null) 32 SPACE 64 @ 96 `
1 SOH (start of heading) 33 ! 65 A 97 a
2 STX (start of text) 34 " 66 B 98 b
3 ETX (end of text) 35 # 67 C 99 c
4 EOT (end of transmission) 36 $ 68 D 100 d
5 ENQ (enquiry) 37 % 69 E 101 e
6 ACK (acknowledge) 38 & 70 F 102 f
7 BEL (bell) 39 ' 71 G 103 g
8 BS (backspace) 40 ( 72 H 104 h
9 TAB (horizontal tab) 41 ) 73 I 105 i
10 LF (NL line feed, new line) 42 * 74 J 106 j
11 VT (vertical tab) 43 + 75 K 107 k
12 FF (NP form feed, new page) 44 , 76 L 108 l
13 CR (carriage return) 45 - 77 M 109 m
14 SO (shift out) 46 . 78 N 110 n
15 SI (shift in) 47 / 79 O 111 o
16 DLE (data link escape) 48 0 80 P 112 p
17 DC1 (device control 1) 49 1 81 Q 113 q
18 DC2 (device control 2) 50 2 82 R 114 r
19 DC3 (device control 3) 51 3 83 S 115 s
20 DC4 (device control 4) 52 4 84 T 116 t
21 NAK (negative acknowledge) 53 5 85 U 117 u
22 SYN (synchronous idle) 54 6 86 V 118 v
23 ETB (end of trans. block) 55 7 87 W 119 w
24 CAN (cancel) 56 8 88 X 120 x
25 EM (end of medium) 57 9 89 Y 121 y
26 SUB (substitute) 58 : 90 Z 122 z
27 ESC (escape) 59 ; 91 [ 123 {
28 FS (file separator) 60 < 92 \ 124 |
29 GS (group separator) 61 = 93 ] 125 }
30 RS (record separator) 62 > 94 ^ 126 ~
31 US (unit separator) 63 ? 95 _ 127 DEL

Boolean Algebra Truth Tables


Logic Gate Truth Tables
As well as a standard Boolean Expression, the input and output information of any Logic Gate or circuit can
be plotted into a standard table to give a visual representation of the switching function of the system. The
table used to represent the boolean expression of a logic gate function is commonly called a Truth Table. A
logic gate truth table shows each possible input combination to the gate or circuit with the resultant output
depending upon the combination of these input(s).

For example, consider a single 2-input logic circuit with input variables labelled as A and B. There are "four"
possible input combinations (2^2) of "OFF" and "ON" for the two inputs. However, when dealing with Boolean
expressions and especially logic gate truth tables, we do not generally use "ON" or "OFF" but instead give them bit
values which represent a logic level "1" or a logic level "0" respectively.

Then the four possible combinations of A and B for a 2-input logic gate is given as:

 Input Combination 1. – ―OFF‖ – ―OFF‖ or ( 0, 0 )

 Input Combination 2. – ―OFF‖ – ―ON‖ or ( 0, 1 )

 Input Combination 3. – ―ON‖ – ―OFF‖ or ( 1, 0 )

 Input Combination 4. – ―ON‖ – ―ON‖ or ( 1, 1 )

Therefore, a 3-input logic circuit would have 8 possible input combinations (2^3) and a 4-input logic circuit would
have 16 (2^4), and so on as the number of inputs increases. Then a logic circuit with "n" inputs would
have 2^n possible input combinations of both "OFF" and "ON".

So in order to keep things simple to understand, in this tutorial we will only deal with standard 2-input type logic
gates, but the principles are still the same for gates with more than two inputs.

Then the Truth tables for a 2-input AND Gate, a 2-input OR Gate and a single input NOT Gate are given as:

2-input AND Gate


For a 2-input AND gate, the output Q is true if BOTH input A ―AND‖ input B are both true, giving the Boolean
Expression of: ( Q = A and B ).

Symbol Truth Table

A B Q

0 0 0

0 1 0

1 0 0

1 1 1

Boolean Expression Q = A.B Read as A AND B gives Q

Note that the Boolean Expression for a two input AND gate can be written as: A.B or just simply AB without the
decimal point.
2-input OR (Inclusive OR) Gate
For a 2-input OR gate, the output Q is true if EITHER input A ―OR‖ input B is true, giving the Boolean Expression
of: ( Q = A or B ).

Symbol Truth Table

A B Q

0 0 0

0 1 1

1 0 1

1 1 1

Boolean Expression Q = A+B Read as A OR B gives Q

NOT Gate
For a single input NOT gate, the output Q is ONLY true when the input is ―NOT‖ true, the output is the inverse or
complement of the input giving the Boolean Expression of: ( Q = NOT A ).

Symbol Truth Table

A Q

0 1

1 0

Boolean Expression Q = NOT A or Ā Read as inversion of A gives Q


The NAND and the NOR Gates are a combination of the AND and OR Gates with that of a NOT Gate or inverter.

2-input NAND (Not AND) Gate


For a 2-input NAND gate, the output Q is NOT true if BOTH input A and input B are true; that is, Q is false only
when A and B are both true, giving the Boolean Expression of: ( Q = not(A and B) ).

Symbol Truth Table

A B Q

0 0 1
0 1 1

1 0 1

1 1 0

Boolean Expression Q = (A.B)′ Read as A AND B gives NOT-Q

2-input NOR (Not OR) Gate


For a 2-input NOR gate, the output Q is true if BOTH input A and input B are NOT true, giving the Boolean
Expression of: ( Q = not(A or B) ).

Symbol Truth Table

A B Q

0 0 1

0 1 0

1 0 0

1 1 0

Boolean Expression Q = (A+B)′ Read as A OR B gives NOT-Q

As well as the standard logic gates there are also two special types of logic gate function called an Exclusive-OR
Gate and an Exclusive-NOR Gate. The actions of both of these types of gates can be reproduced using the above standard
gates; however, as they are widely used functions, they are also available in standard IC form and have been
included here as reference.

2-input EX-OR (Exclusive OR) Gate


For a 2-input Ex-OR gate, the output Q is true if EITHER input A or if input B is true, but NOT both giving the
Boolean Expression of: ( Q = (A and NOT B) or (NOT A and B) ).

Symbol Truth Table

A B Q

0 0 0

0 1 1

1 0 1
1 1 0

Boolean Expression Q = A ⊕ B

2-input EX-NOR (Exclusive NOR) Gate


For a 2-input Ex-NOR gate, the output Q is true if BOTH input A and input B are the same, either true or false,
giving the Boolean Expression of: ( Q = (A and B) or (NOT A and NOT B) ).

Symbol Truth Table

A B Q

0 0 1

0 1 0

1 0 0

1 1 1

Boolean Expression Q = (A ⊕ B)′

Summary of 2-input Logic Gates

The following Truth Table compares the logical functions of the 2-input logic gates above.

Inputs Truth Table Outputs For Each Gate

A B AND NAND OR NOR EX-OR EX-NOR

0 0 0 1 0 1 0 1

0 1 0 1 1 0 1 0

1 0 0 1 1 0 1 0

1 1 1 0 1 0 0 1
The following table gives a list of the common logic functions and their equivalent Boolean notation.

Logic Function Boolean Notation

AND A.B

OR A+B
NOT A′ (or Ā)

NAND (A.B)′

NOR (A+B)′

EX-OR (A.B′) + (A′.B) or A ⊕ B

EX-NOR (A.B) + (A′.B′) or (A ⊕ B)′

2-input logic gate truth tables are given here as examples of the operation of each logic function, but there are many
more logic gates with 3, 4 or even 8 individual inputs. Multiple-input gates are no different from the simple 2-input
gates above, so a 4-input AND gate would still require ALL 4 inputs to be present to produce the required output at
Q, and its larger truth table would reflect that.
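The summary table above can be reproduced with a few lines of Python, which is a convenient way to check the entries (a sketch only, not a hardware description):

gates = {
    'AND':    lambda a, b: a & b,
    'NAND':   lambda a, b: 1 - (a & b),
    'OR':     lambda a, b: a | b,
    'NOR':    lambda a, b: 1 - (a | b),
    'EX-OR':  lambda a, b: a ^ b,
    'EX-NOR': lambda a, b: 1 - (a ^ b),
}

print('A B ' + ' '.join(gates))
for a in (0, 1):
    for b in (0, 1):
        # one row of the summary truth table per input combination
        print(a, b, ' '.join(str(fn(a, b)) for fn in gates.values()))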

Boolean Algebra Theorems and Laws of Boolean Algebra


Boolean algebra is a different kind of algebra, invented by the
mathematician George Boole in 1854. In digital electronics there are
several methods of simplifying the design of logic circuits. This algebra is one of those
methods, and it is also called switching algebra. According to George Boole,
symbols can be used to represent the structure of logical thoughts. This type of
algebra deals with the rules or laws, known as the laws of Boolean algebra, by
which logical operations are carried out. There are also a few theorems of Boolean
algebra that need to be noted carefully, because they make calculation faster
and easier. In Boolean algebra a digital circuit is represented by a set of input and
output signals, and the function of the circuit is expressed as a set of Boolean
relationships between the symbols. Boolean logic deals with only two values, 1 for
'true' and 0 for 'false'.

We are familiar with the term algebra as it relates to general mathematics, but
in this article we are dealing with Boolean algebra, which is quite different.
In ordinary algebra we are accustomed to the general rules and
formulas of addition, subtraction, multiplication and division, but in
Boolean algebra the ideas of abstract algebra and mathematical logic are
more important. This algebra helps to define the logical operations of digital circuits.
It is therefore a very important part of digital electronics and helps in the smooth running
of the system.

History of Boolean Algebra

Before discussing the basics of Boolean algebra we should know a little about its
history: who invented it, and where the original idea came from. The
mathematician George Boole invented Boolean algebra in 1854 in his book
"An Investigation of the Laws of Thought". Later, using this technique, Claude
Shannon introduced a new type of algebra, which is termed switching algebra.
There are specific rules and laws of Boolean algebra which are discussed below.

Before going to the laws of Boolean algebra and the theorems of Boolean algebra we
must know how a Boolean expression can be formed:

i. A constant is a Boolean expression.

ii. A variable is a Boolean expression.

For example, if B is a Boolean expression then B̄ (NOT B) is also a Boolean expression.


Expressions such as BC and BCA + C are also Boolean expressions, but A – B is not a
Boolean expression.

These expressions are formed using binary constants, binary variables and
basic logical operation symbols. In Boolean algebra a given expression can also be
converted into a logic diagram using logical AND, OR and NOT gates. The basic
logical operations are the AND function (logical
multiplication), the OR function (logical addition) and the NOT function (logical
inversion).

Laws of Boolean Algebra

There are several laws in Boolean algebra.

As we know, this kind of algebra is a mathematical system consisting of a set of


two or more distinct elements. There are two binary operators, OR (+) and AND (.), together with a
complement (NOT) operator denoted by a bar (¯) or a prime (′). These altogether follow the following laws:-

i. Commutative law.
ii. Associative law.

iii. Distributive law.

iv. Absorption laws.

v. Consensus laws.

i) Commutative law :- Using OR operation the Law is given as -

A+B=B+A

By this law, the order in which the OR operation is applied to the variables makes no


difference.

This law using AND operation is -

A.B = B.A

This means the same as before; the only difference is that here the operator is (.). The
order in which the AND operation is applied to the variables makes no difference. This is
an important law in Boolean algebra.

ii) Associative law : This law is given as -

A+(B+C) = (A+B)+C

For several variables, the result of the OR operation is the same regardless of
how the variables are grouped. The law is much the same for the AND operator.
It is

A.(B.C) = (A.B).C

Thus, according to this law, the grouping of Boolean expressions makes no
difference during the AND operation of several variables. Though simple, these laws are
also very important.

iii) Distributive law :- Among the laws of Boolean algebra this law is very famous and
important too. This law is composed of two operators. AND and OR. The law is

A + BC = (A + B)(A + C)
Here the logic is: ORing a single variable A with the AND of two other variables (BC) is equivalent to
ANDing together the two OR terms (A + B) and (A + C). In other words, A is ORed with B,
and A is ORed with C, and the results of the two OR operations are then ANDed together.
The proof of this law in Boolean algebra is
given below :-

Proof
A + BC = A.1 + BC [Since, A.1 = A]

= A(1 + B) + BC [Since, 1 + B = 1]

= A + AB + BC

= A(1 + C) + AB + BC [Since, A(1 + C) = A]

= A + AC + AB + BC

= A.A + AC + AB + BC [Since, A.A = A]

= A(A + C) + B(A + C)

A + BC = (A + B)(A + C)

This law also holds for Boolean multiplication (AND distributing over OR).

Such as - A.(B + C) = A.B + A.C

iv) Absorption laws :- In the laws of Boolean algebra there are several groups of laws.
The absorption laws are one such group. The laws and their respective proofs are
given below.

i) A+AB = A

Proof. A+AB = A.1 + AB [A.1 = A]


= A(1+B) [Since, 1 + B = 1]

= A.1 = A

ii) A(A+B) = A

Proof. A(A+B) = A.A + A.B

= A+AB [Since, A.A = A]

= A(1+B)

= A.1

=A

iii) A+ĀB = A+B

Proof A+ĀB = (A+Ā)(A+B) [Since, A+BC = (A+B)(A+C), using the distributive law.]

= 1 (A+B) [Since, A+Ā = 1]

= A+B

iv) A.(Ā+B) = AB

Proof A.( Ā+B) = A. Ā+AB

= AB [Since, A.Ā = 0]
v) Consensus Laws :- This is another group of laws of Boolean algebra, closely related to
the theorems of Boolean algebra. The laws are as follows.

a) AB + ĀC+BC = AB+ĀC

AB+ĀC+BC = AB + ĀC + BC.1

= AB+ĀC+BC(A+Ā) [Since, A+Ā = 1]

= AB+ĀC+ABC+ĀBC

= AB(1+C)+ ĀC(1+B) [Since, 1+B = 1 = 1+C]

= AB+ĀC

b) (A+B)( Ā+C)(B+C) = (A+B)( Ā+C)

Proof. (A+B)( Ā+C)(B+C) = (A+B)( Ā+C)(B+C+0)

= (A+B)( Ā+C)(B+C+AĀ) [ By distributive law]

= (A+B)(A+B+C)( Ā+C)( Ā+C+B)

= (A+B)( Ā+C) [Since, X(X+Y) = X]

The other small laws of Boolean are given below.

(a) A+0 = A (b) A+1 = 1 (c) A+A = A


(d) A+Ā = 1 (e) A.1 = A (f) A.0 = 0

(g) A.A = A (h) A.Ā = 0 (i) (A′)′ = A
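Because each variable can only take the values 0 and 1, all of the identities above can be verified exhaustively. A short Python check (illustrative; 1 - x is used for the complement):

from itertools import product

def NOT(x):
    return 1 - x

for A, B, C in product((0, 1), repeat=3):
    # distributive law
    assert (A | (B & C)) == ((A | B) & (A | C))
    # absorption laws
    assert (A | (A & B)) == A
    assert (A & (A | B)) == A
    assert (A | (NOT(A) & B)) == (A | B)
    # consensus law
    assert ((A & B) | (NOT(A) & C) | (B & C)) == ((A & B) | (NOT(A) & C))

print("all identities hold for every combination of A, B and C")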

Thus we have completed the laws of Boolean algebra. De Morgan's theorems and the laws
related to them are described later. The problems given
below will make these types of calculations clearer.

Ex 1 :- Simplify the expression :-

Y = (Ā+C)(A+C)

Solution Y = (Ā+C)(A+C)

= C + Ā.A [Since, (A+B)(A+C) = A+BC]

According to distributive law

= C+0

= C.

Therefore, Y = C

Here we have solved the equation using the distributive law. We have to identify the
form of the equation first and then apply the suitable law to that particular equation. In the
next step another law is used, namely A.Ā = 0. That is how it is solved.

Simplify :-
Y = ĀB̄C+ĀBC+AB̄C+ABC
= ĀC(B̄+B)+AC(B̄+B)
= ĀC+AC

= C(Ā+A)

= C.1 = C

Thus Y = C

Here we have used only a few laws of Boolean algebra, but we have to identify the
proper place and the proper formula when applying them. Look at the second line:
the complement law (B̄+B = 1) is used there. The same law (Ā+A = 1) is used again in the
fourth line, and thus the required solution of the given expression is found using Boolean algebra.

DeMorgan‘s Theorems
A mathematician named DeMorgan developed a pair of important rules regarding group
complementation in Boolean algebra. By group complementation, I‘m referring to the complement of a group
of terms, represented by a long bar over more than one variable.

You should recall from the chapter on logic gates that inverting all inputs to a gate reverses that gate‘s essential
function from AND to OR, or vice versa, and also inverts the output. So, an OR gate with all inputs inverted (a
Negative-OR gate) behaves the same as a NAND gate, and an AND gate with all inputs inverted (a Negative-
AND gate) behaves the same as a NOR gate. DeMorgan‘s theorems state the same equivalence in ―backward‖
form: that inverting the output of any gate results in the same function as the opposite type of gate (AND vs.
OR) with inverted inputs:
A long bar extending over the term AB acts as a grouping symbol, and as such is entirely different from the
product of A and B independently inverted. In other words, (AB)‘ is not equal to A‘B‘. Because the ―prime‖
symbol (‘) cannot be stretched over two variables like a bar can, we are forced to use parentheses to make it
apply to the whole term AB in the previous sentence. A bar, however, acts as its own grouping symbol when
stretched over more than one variable. This has profound impact on how Boolean expressions are evaluated
and reduced, as we shall see.
DeMorgan‘s theorem may be thought of in terms of breaking a long bar symbol. When a long bar is broken,
the operation directly underneath the break changes from addition to multiplication, or vice versa, and the
broken bar pieces remain over the individual variables. To illustrate:

When multiple ―layers‖ of bars exist in an expression, you may only break one bar at a time, and it is
generally easier to begin simplification by breaking the longest (uppermost) bar first. To illustrate, let‘s take
the expression (A + (BC)‘)‘ and reduce it using DeMorgan‘s Theorems:
Following the advice of breaking the longest (uppermost) bar first, I‘ll begin by breaking the bar covering the
entire expression as a first step:

As a result, the original circuit is reduced to a three-input AND gate with the A input inverted:

You should never break more than one bar in a single step, as illustrated here:

As tempting as it may be to conserve steps and break more than one bar at a time, it often leads to an incorrect
result, so don‘t do it!
It is possible to properly reduce this expression by breaking the short bar first, rather than the long bar first:
The end result is the same, but more steps are required compared to using the first method, where the longest
bar was broken first. Note how in the third step we broke the long bar in two places. This is a legitimate
mathematical operation, and not the same as breaking two bars in one step! The prohibition against breaking
more than one bar in one step is not a prohibition against breaking a bar in more than one place. Breaking in
more than one place in a single step is okay; breaking more than one bar in a single step is not.
You might be wondering why parentheses were placed around the sub-expression B‘ + C‘, considering the fact
that I just removed them in the next step. I did this to emphasize an important but easily neglected aspect of
DeMorgan‘s theorem. Since a long bar functions as a grouping symbol, the variables formerly grouped by a
broken bar must remain grouped lest proper precedence (order of operation) be lost. In this example, it really
wouldn‘t matter if I forgot to put parentheses in after breaking the short bar, but in other cases it might.
Consider this example, starting with a different expression:
As you can see, maintaining the grouping implied by the complementation bars for this expression is crucial to
obtaining the correct answer.
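DeMorgan's theorems, and the worked reduction of (A + (BC)')' to A'BC described above, can be checked exhaustively with a short Python sketch (illustrative only):

from itertools import product

def NOT(x):
    return 1 - x

for A, B in product((0, 1), repeat=2):
    # DeMorgan's theorems for two variables
    assert NOT(A & B) == (NOT(A) | NOT(B))     # (AB)'   = A' + B'
    assert NOT(A | B) == (NOT(A) & NOT(B))     # (A + B)' = A'B'

for A, B, C in product((0, 1), repeat=3):
    # the worked example: (A + (BC)')' reduces to A'BC
    assert NOT(A | NOT(B & C)) == (NOT(A) & B & C)

print("DeMorgan's theorems verified for all input combinations")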
Let‘s apply the principles of DeMorgan‘s theorems to the simplification of a gate circuit:

As always, our first step in simplifying this circuit must be to generate an equivalent Boolean expression. We
can do this by placing a sub-expression label at the output of each gate, as the inputs become known. Here‘s
the first step in this process:

Next, we can label the outputs of the first NOR gate and the NAND gate. When dealing with inverted-output
gates, I find it easier to write an expression for the gate‘s output without the final inversion, with an arrow
pointing to just before the inversion bubble. Then, at the wire leading out of the gate (after the bubble), I write
the full, complemented expression. This helps ensure I don‘t forget a complementing bar in the sub-expression,
by forcing myself to split the expression-writing task into two steps:
Finally, we write an expression (or pair of expressions) for the last NOR gate:

Now, we reduce this expression using the identities, properties, rules, and theorems (DeMorgan‘s) of Boolean
algebra:

Logic Simplification With Karnaugh Maps


The logic simplification examples that we have done so far could have been performed with Boolean algebra about as quickly. Real world logic simplification problems
call for larger Karnaugh maps so that we may do serious work. We will work some contrived examples in this section, leaving most of the real world applications for
the Combinatorial Logic chapter. By contrived, we mean examples which illustrate techniques. This approach will develop the tools we need to transition to the more
complex applications in the Combinatorial Logic chapter.
We show our previously developed Karnaugh map. We will use the form on the right.

Note the sequence of numbers across the top of the map. It is not in binary sequence, which would be 00, 01, 10, 11. It is 00, 01, 11, 10, which is the Gray code sequence.
Gray code sequence only changes one binary bit as we go from one number to the next in the sequence, unlike binary. That means that adjacent cells will only vary by
one bit, or Boolean variable. This is what we need to organize the outputs of a logic function so that we may view commonality. Moreover, the column and row
headings must be in Gray code order, or the map will not work as a Karnaugh map. Cells sharing common Boolean variables would no longer be adjacent, nor show
visual patterns. Adjacent cells vary by only one bit because a Gray code sequence varies by only one bit.
If we sketch our own Karnaugh maps, we need to generate Gray code for any size map that we may use. This is how we generate Gray code of any size.

Note that the Gray code sequence, above right, only varies by one bit as we go down the list, or bottom to top up the list. This property of Gray code is often useful in
digital electronics in general. In particular, it is applicable to Karnaugh maps.
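The missing figure presumably showed the usual reflect-and-prefix construction; a short Python sketch of that construction (an assumption about the figure, but it does produce the 00, 01, 11, 10 ordering used for the map headings) is:

def gray_sequence(bits):
    """Build the Gray code list for the given number of bits by reflect-and-prefix."""
    seq = ['0', '1']                          # 1-bit Gray code
    for _ in range(bits - 1):
        reflected = seq[::-1]                 # reflect the existing list
        seq = ['0' + c for c in seq] + ['1' + c for c in reflected]
    return seq

print(gray_sequence(2))   # ['00', '01', '11', '10']  -- K-map column headings
print(gray_sequence(3))   # ['000', '001', '011', '010', '110', '111', '101', '100']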
Let us move on to some examples of simplification with 3-variable Karnaugh maps. We show how to map the product terms of the unsimplified logic to the K-map.
We illustrate how to identify groups of adjacent cells which leads to a Sum-of-Products simplification of the digital logic.

Above, we place the 1‘s in the K-map for each of the product terms, identify a group of two, then write a p-term (product term) for the sole group as our simplified
result.
Mapping the four product terms above yields a group of four covered by Boolean A’

Mapping the four p-terms yields a group of four, which is covered by one variable C.

After mapping the six p-terms above, identify the upper group of four, pick up the lower two cells as a group of four by sharing the two with two more from the other
group. Covering these two with a group of four gives a simpler result. Since there are two groups, there will be two p-terms in the Sum-of-Products result A’+B

The two product terms above form one group of two and simplifies to BC
Mapping the four p-terms yields a single group of four, which is B

Mapping the four p-terms above yields a group of four. Visualize the group of four by rolling up the ends of the map to form a cylinder, then the cells are adjacent. We
normally mark the group of four as above left. Out of the variables A, B, C, there is a common variable: C‘. C‘ is a 0 over all four cells. Final result is C’.

The six cells above from the unsimplified equation can be organized into two groups of four. These two groups should give us two p-terms in our simplified result of
A’ + C’.
Below, we revisit the Toxic Waste Incinerator from the Boolean algebra chapter. See Boolean algebra chapter for details on this example. We will simplify the logic
using a Karnaugh map.
The Boolean equation for the output has four product terms. Map four 1‘s corresponding to the p-terms. Forming groups of cells, we have three groups of two. There
will be three p-terms in the simplified result, one for each group. See ―Toxic Waste Incinerator‖, Boolean algebra chapter for a gate diagram of the result, which is
reproduced below.

Below we repeat the Boolean algebra simplification of Toxic waste incinerator for comparison.
Below we repeat the Toxic waste incinerator Karnaugh map solution for comparison to the above Boolean algebra simplification. This case illustrates why the
Karnaugh map is widely used for logic simplification.

The Karnaugh map method looks much easier than the preceding page of
Boolean algebra.
The equivalent gate circuit for this much-simplified expression is as follows:

 REVIEW
 DeMorgan‘s Theorems describe the equivalence between gates with inverted inputs and gates with
inverted outputs. Simply put, a NAND gate is equivalent to a Negative-OR gate, and a NOR gate is
equivalent to a Negative-AND gate.
 When ―breaking‖ a complementation bar in a Boolean expression, the operation directly underneath the
break (addition or multiplication) reverses, and the broken bar pieces remain over the respective terms.
 It is often easier to approach a problem by breaking the longest (uppermost) bar before breaking any bars
under it. You must never attempt to break two bars in one step!
 Complementation bars function as grouping symbols. Therefore, when a bar is broken, the terms
underneath it must remain grouped. Parentheses may be placed around these grouped terms as a help to
avoid changing precedence.
Adder (electronics)
In electronics, an adder or summer is a digital circuit that performs addition of numbers. In
many computers and other kinds of processors, adders are used not only in the arithmetic logic
units, but also in other parts of the processor, where they are used to calculate addresses, table
indices, increment and decrement operators, and similar operations.

Although adders can be constructed for many numerical representations, such as binary-coded
decimal or excess-3, the most common adders operate on binary numbers. In cases where two's
complement or ones' complement is being used to represent negative numbers, it is trivial to
modify an adder into an adder–subtractor. Other signed number representations require a more
complex adder.

Half adder[edit]

Half adder logic diagram

The half adder adds two single binary digits A and B. It has two outputs, sum (S) and carry (C).
The carry signal represents an overflow into the next digit of a multi-digit addition. The value of
the sum is 2C + S. The simplest half-adder design, pictured on the right, incorporates an XOR
gate for S and an AND gate for C. With the addition of an OR gate to combine their carry
outputs, two half adders can be combined to make a full adder.[1]

The half adder adds two input bits and generates a carry and sum, which are the two outputs of a
half adder. The input variables of a half adder are called the augend and addend bits. The output
variables are the sum and carry. The truth table for the half adder is:

Inputs Outputs
A B C S
0 0 0 0
1 0 0 1
0 1 0 1
1 1 1 0
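Behaviourally, the half adder is just an XOR for the sum and an AND for the carry. A minimal Python sketch (not a gate-level netlist):

def half_adder(a, b):
    s = a ^ b     # sum bit (XOR gate)
    c = a & b     # carry bit (AND gate)
    return c, s

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))   # reproduces the truth table above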
Full adder[edit]

Schematic symbol for a 1-bit full adder with Cin and Cout drawn on sides of block to emphasize
their use in a multi-bit adder

A full adder adds binary numbers and accounts for values carried in as well as out. A one-bit
full adder adds three one-bit numbers, often written as A, B, and Cin; A and B are the operands,
and Cin is a bit carried in from the previous less significant stage.[2] The full adder is usually a
component in a cascade of adders, which add 8, 16, 32, etc. bit binary numbers. The circuit
produces a two-bit output, an output carry and a sum, typically represented by the signals Cout and S,
where sum = 2×Cout + S. The one-bit full adder's truth table is:

Full-adder logic diagram


Inputs Outputs
A B Cin Cout S
0 0 0 0 0
1 0 0 0 1
0 1 0 0 1
1 1 0 1 0
0 0 1 0 1
1 0 1 1 0
0 1 1 1 0
1 1 1 1 1

A full adder can be implemented in many different ways, such as with a custom transistor-level
circuit or composed of other gates. One example implementation is with
S = A ⊕ B ⊕ Cin and Cout = (A ⊕ B).Cin + A.B.
In this implementation, the final OR gate before the carry-out output may be replaced by an
XOR gate without altering the resulting logic. Using only two types of gates is convenient if the
circuit is being implemented using simple IC chips which contain only one gate type per chip.

A full adder can be constructed from two half adders by connecting A and B to the input of one
half adder, connecting the sum from that to an input of the second half adder, connecting Cin to the
other input, and ORing the two carry outputs. The critical path of a full adder runs through both
XOR gates and ends at the sum bit S. If an XOR gate is assumed to take 3 gate delays to complete, the
delay imposed by the critical path of a full adder is equal to 2 × 3 = 6 gate delays.

The carry-block subcomponent consists of 2 gates and therefore contributes a delay of 2 gate delays.
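A behavioural sketch of the full adder using the equations above, chained into a small ripple-carry adder (illustrative Python, not tied to any particular IC):

def full_adder(a, b, cin):
    s = a ^ b ^ cin                     # sum bit: three-input XOR
    cout = ((a ^ b) & cin) | (a & b)    # carry out: (A XOR B).Cin + A.B
    return cout, s

def ripple_add(a_bits, b_bits):
    """Add two equal-length bit lists, least significant bit first."""
    carry, result = 0, []
    for a, b in zip(a_bits, b_bits):
        carry, s = full_adder(a, b, carry)
        result.append(s)
    return result, carry

# 0110 (6) + 0011 (3), written LSB first:
print(ripple_add([0, 1, 1, 0], [1, 1, 0, 0]))   # ([1, 0, 0, 1], 0)  ->  1001 = 9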

Types of Encoders and Decoders with Truth Tables



The encoders and decoders play an essential role in digital electronics projects;
encoders & decoders are used to convert data from one form to another. They are frequently used in communication systems such as
telecommunication, networking, etc., to transfer data from one end to the other. Similarly, in the digital domain, for easy transmission of data,
it is often encoded or placed within codes, and then transmitted. At the receiver, the coded data is decoded or recovered from the code and is
processed in order to be displayed or given to the load accordingly.

Types of Encoders and Decoders


An encoder is an electronic device used to convert an analogue signal to a digital signal such as a BCD code. It has a number of input lines, but
only one of the inputs is activated at a given time, and it produces an N-bit output code that depends on the activated input. Encoders and
decoders are used in many electronics projects to compress a larger number of inputs into a smaller number of outputs. An encoder allows 2^N
inputs and generates an N-bit output. For example, a 4-to-2 encoder takes 4 inputs and produces only 2 outputs.
Encoder

Truth Table Of The Encoder


The decoders and encoders are designed with logic gates such as OR gates. There are different types of encoders and decoders, such as 4-input, 8-input, and 16-input
encoders, and the truth table of an encoder depends upon the particular encoder chosen by the user. Here, a 4-input encoder is explained along with
its truth table. The four-input encoder allows only four inputs, A0, A1, A2, A3, and generates the two outputs F0, F1, as shown in the below
diagram.

Simple Encoder
Encoder Truth Table

Priority Encoder
A normal encoder has a number of input lines, only one of which is activated at a given time, while a priority encoder has more
than one input that can be activated, handled according to priority. This means that priority encoders can be used to control interrupt requests by acting
on the highest-priority request. If two or more inputs are equal to one at the same time, the input having the highest priority will
take precedence. Internal hardware checks this condition against the priority that has been set.
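A behavioural sketch of a 4-input priority encoder in Python (D3 is assumed here to be the highest-priority input; the names and ordering are illustrative only):

def priority_encoder(d3, d2, d1, d0):
    """Return (valid, code), where code is the index of the highest-priority active input."""
    for code, d in ((3, d3), (2, d2), (1, d1), (0, d0)):   # D3 is checked first
        if d:
            return 1, code
    return 0, 0     # no input active

print(priority_encoder(0, 1, 1, 0))   # (1, 2) -- D2 wins over D1
print(priority_encoder(0, 0, 0, 0))   # (0, 0) -- nothing active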

Priority Encoder
Multiplexer

The multiplexers and demultiplexers are digital electronic devices that are used in control applications. A multiplexer is a device that accepts
multiple input signals and produces a single output signal. For example, sometimes we need to produce a single output from multiple input lines.
An electronic multiplexer can be considered as having multiple input lines and a single output line. The multiplexer selects which input line is
sent to the output: the digital code applied to the select inputs determines which input is routed to the output. A common application of multiplexing occurs when several embedded system devices share a single transmission
line or bus line while communicating with a device. Each device in succession has a brief time to send and receive data. This is the special
advantage of using a MUX.

Multiplexer

Introduction Of Decoder
The decoder is an electronic device that is used to convert a digital signal to an analogue signal. It allows a single input line and produces multiple
output lines. The decoders are used in many communication projects to communicate between two devices. The decoder allows N
inputs and generates 2^N outputs. For example, 2 inputs produce 4 outputs by using a 2-to-4 decoder.
Decoder

Truth Table Of The Decoder


The encoders and decoders are designed with logic gates such as AND gates. There are different types of decoders, such as 4-output, 8-output, and 16-output decoders, and
the truth table of a decoder depends upon the particular decoder chosen by the user. The subsequent description is about a four-output decoder and its truth
table. The four-output decoder takes the two inputs F0, F1 and generates the four outputs A0, A1, A2, A3, as shown in the below diagram.

Decoder Circuit
Decoder Truth Table
2-to-4 line Decoder

In this type of encoders and decoders, decoders contain two inputs A0, A1, and four outputs represented by D0, D1, D2, and D3. As you can see
in the truth table – for each input combination, one output line is activated.

2-to-4 Decoder
In this example, you can notice that, each output of the decoder is actually a minterm, resulting from a certain inputs combination, that is:

 D0 = A1′.A0′ ( minterm m0) which corresponds to input 00


 D1 = A1′.A0 ( minterm m1) which corresponds to input 01
 D2 = A1.A0′ ( minterm m2) which corresponds to input 10
 D3 = A1.A0 ( minterm m3) which corresponds to input 11

The circuit is implemented with AND gates, as shown in the figure. In this circuit, the logic equation for D0 is A1′.A0′, and so on. Thus, each
output of the decoder corresponds to one particular input combination.
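A behavioural Python sketch of this 2-to-4 decoder, with each output written as the corresponding minterm (illustrative only):

def decoder_2to4(a1, a0):
    d0 = (1 - a1) & (1 - a0)   # A1'.A0'  (input 00)
    d1 = (1 - a1) & a0         # A1'.A0   (input 01)
    d2 = a1 & (1 - a0)         # A1.A0'   (input 10)
    d3 = a1 & a0               # A1.A0    (input 11)
    return d0, d1, d2, d3

for a1 in (0, 1):
    for a0 in (0, 1):
        print(a1, a0, decoder_2to4(a1, a0))   # (D0, D1, D2, D3): exactly one output is 1 per row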
3-8 DECODERS

This type of decoder contains three inputs, A0, A1, A2, and eight outputs represented by D0, D1, D2, D3, D4, D5, D6, and D7. As you can see in
the truth table, for each input combination, one output line is activated. For example, the input combination A2 A1 A0 = 001 activates output line
D1, and so on.

3-to-8 Decoder
In this example, you can notice that, each output of the decoder is actually a minterm, resulting from a certain inputs combination, that is;

 D0 = A2′.A1′.A0′ ( minterm m0) which corresponds to input 000


 D1 = A2′.A1′.A0 ( minterm m1) which corresponds to input 001
 D2 = A2′.A1.A0′ ( minterm m2) which corresponds to input 010
 D3 = A2′.A1.A0 ( minterm m3) which corresponds to input 011
 D4 = A2.A1′.A0′ ( minterm m4) which corresponds to input 100
 D5 = A2.A1′.A0 ( minterm m5) which corresponds to input 101
 D6 = A2.A1.A0′ ( minterm m6) which corresponds to input 110
 D7 = A2.A1.A0 ( minterm m7) which corresponds to input 111

The circuit is implemented with AND gates, as shown in the figure. In this circuit, the logic equation for D0 is A2′.A1′.A0′, and so on. Thus, each
output of the decoder corresponds to one particular input combination.

Decoder Design with NAND Gates

Some decoders are constructed with NAND rather than AND gates. In this case, all decoder outputs will be 1‘s except the one corresponding to
the input code, which will be 0. The figure shows a 2-to-4 line decoder with an enable input constructed with NAND gates. The circuit operates with complemented
outputs and a complemented enable input E′, to match the NAND-gate outputs of the decoder. The decoder is enabled when E′ is equal
to zero. As represented by the truth table, only one output can be equal to zero at any given time, all other outputs being equal to one. The outputs
represent the minterm selected by the inputs A1 and A0. The circuit is disabled when E′ is equal to one, regardless of the values of the other two
inputs. When the circuit is disabled, none of the outputs are equal to zero.

Multiplexer and Demultiplexer

Multiplexer:
Multiplexer means many into one. A multiplexer is a circuit used to select and route any one of
several input signals to a single output. A simple example of a non-electronic
multiplexer is a single-pole multi-position switch.

Multi-position switches are widely used in many electronic circuits. However, circuits that
operate at high speed require the multiplexer input to be selected automatically. A mechanical switch
cannot perform this task satisfactorily. Therefore, multiplexers used to perform high-speed
switching are constructed of electronic components.

Multiplexers handle two types of data, analog and digital. For analog applications,
multiplexers are built of relays and transistor switches. For digital applications, they are built from
standard logic gates.

The multiplexer used for digital applications, also called a digital multiplexer, is a circuit with
many inputs but only one output. By applying control signals, we can steer any input to the
output. Common types of multiplexer are the 2-to-1, 4-to-1, 8-to-1 and 16-to-1 multiplexers.

The following figure shows the general idea of a multiplexer with n input signals, m control signals
and one output signal.

Multiplexer Pin Diagram

Understanding 4-to-1 Multiplexer:

The 4-to-1 multiplexer has 4 input bits, 2 control bits, and 1 output bit. The four input bits are
D0, D1, D2 and D3. Only one of these is transmitted to the output Y. The output depends on the
value of AB, which is the control input. The control input determines which of the input data bits
is transmitted to the output.

For instance, as shown in the figure, when AB = 00, the upper AND gate is enabled while all other
AND gates are disabled. Therefore, data bit D0 is transmitted to the output, giving Y = D0.
4 to 1 Multiplexer Circuit Diagram – ElectronicsHub.Org
If the control input is changed to AB =11, all gates are disabled except the bottom AND gate. In
this case, D3 is transmitted to the output and Y = D3.
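The same selection behaviour can be sketched in a few lines of Python (a behavioural model only, not the internal circuit of IC 74153):

def mux_4to1(d0, d1, d2, d3, a, b):
    """AB selects which data bit reaches the output Y (A is the MSB of the select code)."""
    select = (a << 1) | b              # 00 -> D0, 01 -> D1, 10 -> D2, 11 -> D3
    return (d0, d1, d2, d3)[select]

print(mux_4to1(1, 0, 0, 0, 0, 0))   # AB = 00 -> Y = D0 = 1
print(mux_4to1(0, 0, 0, 1, 1, 1))   # AB = 11 -> Y = D3 = 1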

 An example of 4-to-1 multiplexer is IC 74153 in which the output is same as the input.
 Another example of a 4-to-1 multiplexer is the 45352, in which the output is the complement of
the input.
 Example of 16-to-1 line multiplexer is IC74150.

Applications of Multiplexer:

Multiplexers are used in various fields where multiple data signals need to be transmitted over a single
line. Following are some of the applications of multiplexers –

1. Communication system – A communication system is a set of systems that enable


communication, such as the transmission system, relay and tributary stations, and the communication
network. The efficiency of a communication system can be increased considerably using
multiplexers. Multiplexers allow different types of data, such as
audio and video, to be transmitted at the same time using a single transmission line.
2. Telephone network – In a telephone network, multiple audio signals are integrated on a
single line for transmission with the help of multiplexers. In this way, multiple audio
signals can be isolated and, eventually, the desired audio signals reach the intended
recipients.
3. Computer memory – Multiplexers are used to implement huge amounts of memory in
computers, while at the same time reducing the number of copper lines required to connect the
memory to other parts of the computer circuit.
4. Transmission from the computer system of a satellite – Multiplexer can be used for the
transmission of data signals from the computer system of a satellite or spacecraft to the
ground system using the GPS (Global Positioning System) satellites.

Demultiplexer:

Demultiplexer means one to many. A demultiplexer is a circuit with one input and many outputs.
By applying control signals, we can steer the input to any one of the outputs. Common types of demultiplexer are the
1-to-2, 1-to-4, 1-to-8 and 1-to-16 demultiplexers.

The following figure illustrates the general idea of a demultiplexer with 1 input signal, m control
signals, and n output signals.

Demultiplexer Pin Diagram

Understanding 1- to-4 Demultiplexer:

The 1-to-4 demultiplexer has 1 input bit, 2 control bits, and 4 output bits. An example of a 1-to-4
demultiplexer is IC 74155. The 1-to-4 demultiplexer is shown in the figure below.
1 to 4 Demultiplexer Circuit Diagram – ElectronicsHub.Org
The input bit is labelled as Data D. This data bit is transmitted to the selected output line,
depending on the value of AB, the control input.

When AB = 01, the second AND gate from the top is enabled while the other AND gates are disabled.
Therefore, data bit D is transmitted only to that output, giving Y1 = Data.

If D is low, Y1 is low. If D is high, Y1 is high. The value of Y1 depends upon the value of D. All
other outputs remain in the low state.

If the control input is changed to AB = 10, all the gates are disabled except the third AND gate
from the top. Then, D is transmitted only to the Y2 output, and Y2 = Data.

An example of a 1-to-16 demultiplexer is IC 74154; it has 1 input bit, 4 control bits and 16 output bits.
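A matching behavioural sketch of the 1-to-4 demultiplexer (illustrative Python; the strobe/enable details of IC 74155 itself are not modelled):

def demux_1to4(data, a, b):
    """Route the data bit to one of Y0..Y3 according to the control code AB."""
    select = (a << 1) | b
    outputs = [0, 0, 0, 0]
    outputs[select] = data     # only the selected line follows the data input
    return outputs             # [Y0, Y1, Y2, Y3]

print(demux_1to4(1, 0, 1))   # AB = 01 -> [0, 1, 0, 0], Y1 = Data
print(demux_1to4(1, 1, 0))   # AB = 10 -> [0, 0, 1, 0], Y2 = Data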
Applications of Demultiplexer:

1. A demultiplexer is used to connect a single source to multiple destinations. The main


application area of demultiplexers is communication systems, where multiplexers are used.
Most communication systems are bidirectional, i.e. they function in both ways
(transmitting and receiving signals). Hence, for most applications, the multiplexer
and demultiplexer work in sync. Demultiplexers are also used for the reconstruction of parallel
data and in ALU circuits.
2. Communication System – Communication systems use a multiplexer to carry multiple data
signals, such as audio, video and other forms of data, over a single transmission line. This process
makes the transmission easier. The demultiplexer receives the output signals of the
multiplexer and converts them back to the original form of the data at the receiving end.
The multiplexer and demultiplexer work together to carry out the process of transmission
and reception of data in a communication system.
3. ALU (Arithmetic Logic Unit) – In an ALU circuit, the output of the ALU can be stored in
multiple registers or storage units with the help of a demultiplexer. The output of the ALU is fed
as the data input to the demultiplexer. Each output of the demultiplexer is connected to a
different register, so the result can be stored in whichever register is selected.
4. Serial to parallel converter – A serial to parallel converter is used for reconstructing
parallel data from an incoming serial data stream. In this technique, each bit of the
incoming serial data stream is given as the data input to the demultiplexer at regular
intervals. A counter is attached to the control input of the demultiplexer. This counter directs
the data bit to the selected output of the demultiplexer, where it is stored. When
all data bits have been stored, the outputs of the demultiplexer can be retrieved and read
out in parallel.

Digital Circuits/Flip-Flops

A flip-flop is a device very much like a latch in that it is a bistable multivibrator, having two
states and a feedback path that allows it to store a bit of information. The difference between a
latch and a flip-flop is that a latch is asynchronous, and the outputs can change as soon as the
inputs do (or at least after a small propagation delay). A flip-flop, on the other hand, is edge-
triggered and only changes state when a control signal goes from high to low or low to high.
This distinction is relatively recent and is not formal, with many authorities still referring to flip-
flops as latches and vice versa, but it is a helpful distinction to make for the sake of clarity.

There are several different types of flip-flop each with its own uses and peculiarities. The four
main types of flip-flop are : SR, JK, D, and T.
SR Flip-flops[edit]

SR flip-flop circuit (suggested values: R1, R2 = 1 kΩ; R3, R4 = 10 kΩ).

An SR(Set/Reset) flip-flop is perhaps the simplest flip-flop, and is very similar to the SR latch,
other than for the fact that it only transitions on clock edges. While as theoretically valid as any
flip-flop, synchronous edge-triggered SR flip-flops are extremely uncommon because they retain
the illegal state when both S and R are asserted.

Characteristic table

S R Qnext Comment
0 0 Q     Hold state
0 1 0     Reset
1 0 1     Set
1 1 X     Metastable

Excitation table

Q Qnext S R
0 0     0 X
0 1     1 0
1 0     0 1
1 1     X 0

D flip-flop[edit]

Symbol for a D flip-flop.


The D(DATA) flip-flop is the edge-triggered variant of the transparent latch. On the rising
(usually, although negative edge triggering is just as possible) edge of the clock, the output is
given the value of the D input at that moment. The output can be only changed at the clock edge,
and if the input changes at other times, the output will be unaffected.

Clock        D Qnext Comment

Rising edge  0 0     Express D at Q

Rising edge  1 1     Express D at Q

Otherwise    X Qprev Hold state

D flip-flops are by far the most common type of flip-flops and some devices (for example some
FPGAs) are made entirely from D flip-flops. They are also commonly used for shift-registers and
input synchronisation.
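A behavioural Python sketch of a rising-edge-triggered D flip-flop (a software model only; real devices also have setup and hold timing requirements that are not modelled here):

class DFlipFlop:
    def __init__(self):
        self.q = 0
        self.prev_clock = 0

    def tick(self, clock, d):
        if self.prev_clock == 0 and clock == 1:   # rising edge: capture D
            self.q = d
        self.prev_clock = clock                   # at all other times Q is held
        return self.q

ff = DFlipFlop()
print(ff.tick(1, 1))   # rising edge, D=1 -> Q becomes 1
print(ff.tick(1, 0))   # clock still high, D changes -> Q stays 1
print(ff.tick(0, 0))   # clock low -> Q held
print(ff.tick(1, 0))   # next rising edge, D=0 -> Q becomes 0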

JK Flip-flop[edit]

Symbol for a JK flip-flop

The JK flip-flop is a simple enhancement of the SR flip-flop where the state J=K=1 is not
forbidden. It works just like a SR flip-flop where J is serving as set input and K serving as reset.
The only difference is that for the formerly ―forbidden‖ combination J=K=1 this flip-flop now
performs an action: it inverts its state. As the behavior of the JK flip-flop is completely
predictable under all conditions, this is the preferred type of flip-flop for most logic circuit
designs. In practice, however, there is still a problem: if the clock pulse remains high for longer than the
propagation delay, the output may toggle repeatedly (the race-around condition) while J = K = 1. The
usual remedy is the master-slave JK flip-flop, which suppresses this internal repeated
toggling by means of its pulsed, two-stage clocking arrangement.

Characteristic table

J K Qnext     Comment
0 0 Qprev     Hold state
0 1 0         Reset
1 0 1         Set
1 1 NOT Qprev Toggle

Excitation table

Q Qnext J K Comment
0 0     0 X Hold state
0 1     1 X Set
1 0     X 1 Reset
1 1     X 0 Hold state

T flip-flops[edit]

A circuit symbol for a T-type flip-flop: T is the toggle input and Q is the stored data output.

A T flip-flop is a device which swaps or "toggles" state every time it is triggered if the T input is
asserted, otherwise it holds the current output. This behavior is described by the characteristic
equation:

Qnext = T ⊕ Q (that is, Qnext = T.Q′ + T′.Q)

and can be described by either of the following tables:

Characteristic table

T Q Qnext Comment
0 0 0     Hold state
0 1 1     Hold state
1 0 1     Toggle
1 1 0     Toggle

Excitation table

Q Qnext T
0 0     0
0 1     1
1 0     1
1 1     0

When T is held high, the toggle flip-flop divides the clock frequency by two; that is, if clock
frequency is 4 MHz, the output frequency obtained from the flip-flop will be 2 MHz. This
'divide-by' feature has applications in various types of digital counters. A T flip-flop can also be
built using a JK flip-flop (the J and K pins are connected together and act as T) or a D flip-flop (the T input
and Qprev are connected to the D input through an XOR gate).
The Master-Slave JK Flip-flop
The Master-Slave Flip-Flop is basically two gated SR flip-flops connected together in a series configuration with
the slave having an inverted clock pulse. The outputs Q and Q̄ from the "Slave" flip-flop are fed back to the
inputs of the "Master", with the outputs of the "Master" flip-flop being connected to the two inputs of the "Slave"
flip-flop. This feedback configuration from the slave's output to the master's input gives the characteristic toggle of
the JK flip-flop, as shown below.

The Master-Slave JK Flip Flop

The input signals J and K are connected to the gated ―master‖ SR flip flop which ―locks‖ the input condition while
the clock (Clk) input is ―HIGH‖ at logic level ―1‖. As the clock input of the ―slave‖ flip flop is the inverse
(complement) of the ―master‖ clock input, the ―slave‖ SR flip flop does not toggle. The outputs from the ―master‖
flip flop are only ―seen‖ by the gated ―slave‖ flip flop when the clock input goes ―LOW‖ to logic level ―0‖.

When the clock is ―LOW‖, the outputs from the ―master‖ flip flop are latched and any additional changes to its
inputs are ignored. The gated ―slave‖ flip flop now responds to the state of its inputs passed over by the ―master‖
section.

Then on the ―Low-to-High‖ transition of the clock pulse the inputs of the ―master‖ flip flop are fed through to the
gated inputs of the ―slave‖ flip flop and on the ―High-to-Low‖ transition the same inputs are reflected on the output
of the ―slave‖ making this type of flip flop edge or pulse-triggered.

Then, the circuit accepts input data when the clock signal is ―HIGH‖, and passes the data to the output on the
falling-edge of the clock signal. In other words, the Master-Slave JK Flip flop is a ―Synchronous‖ device as it only
passes data with the timing of the clock signal.

In the next tutorial about Sequential Logic Circuits, we will look at Multivibrators that are used as waveform
generators to produce the clock signals to switch sequential circuits.

Flip-Flops and Registers


Consider the following circuit:
Here are two NAND gates with their outputs cross-connected to one input on each. This
arrangement makes use of feedback. When R and S are 1, the circuit has two stable states: Q can
be 1 and Q' 0, or Q can be 0 and Q' 1. Suppose the circuit has Q = 1. If S goes to 0, even
momentarily, the circuit will flip states and Q will become 0. If S goes back to 1, Q will remain 0
(and Q' 1). The circuit "remembers" that S was 0 at some time in the past. Then, if R goes to 0,
the circuit will flip back to the way it was initially, with Q = 1 and Q' = 0. This behavior is
central to circuits that hold data. These circuits are used in computer memory and registers. The
circuit above is called the R-S (reset-set) latch. It is the central circuit of the more complex
circuits called flip-flops.

Here is a circuit called a data or D-type flip-flop:

It is an RS latch with additional NAND gates that make it simple to control. When Enable is 0,
both control NAND gates will have a 1 output, and the RS latch will remain stable. When Enable
= 1, Q will become equal to Data. If Data changes while Enable = 1, Q will also change. When
Enable goes back to 0, the most recent value of D will remain on the Q output (and Q' will be the
opposite). The D-type flip-flop has its own symbol, of course.
This flip-flop is sometimes called a transparent latch, because while Enable is high, the data
outputs follow the data input. Because the latch holds the data forever while the enable input is
low, it can be used to store information, for example, in a computer memory. Lots of latches plus
a big decoder equals one computer memory.

If two data-type latches are connected as below, the result is an edge-triggered latch:

In this configuration, two transparent D-type latches are connected in tandem. The enable input,
when low, causes the first flip-flop to be in the transparent state, and the second flip-flop to be
locked. When Enable goes high, the second flip flop becomes transparent first, then after a brief
delay the first flip-flop becomes locked. The output of the second flip-flop will show the locked
output of the first. Even though the second latch is in its transparent state, the data on the output
will not change, because the first latch is locked. When enable goes low again, the second flip-
flop becomes locked first, then the first becomes transparent. The data output will remain
unchanging, because now the second flip-flop is locked. The only time new data can be stored in
the circuit, is during the brief moment when the enable input goes from low to high. This
transition is referred to as a rising clock edge, and so this tandem latch configuration is called a
rising edge triggered latch. The two flip-flops and inverter can be enclosed by a box, and
represented by a single symbol. The Enable input of the transparent latch is replaced by the clock
pulse (CP) input. The edge-triggered latch is one of the central circuits in computer design.

It allows the following:


Assume a series of pulses (voltage going from 0 to 5V) is coming in to the CP input. The Q
output will be switching values every other clock pulse. This divides the clock frequency by two.
A chain of these acts as a binary counter.

The most important use of the edge triggered flip-flop is in the data register. The characteristic of
loading data only at the rising edge of the clock pulse is critical in creating the computer machine
cycle, as we will see. A register is simply a collection of edge triggered flip-flops. Here is a four-
bit register:

The edge-triggered flip-flop register is crucial for the computer machine cycle. The machine
cycle can be characterized by the following (very much abbreviated) diagram.
The computer clock provides a continuous stream of pulses to the computer processor. With each
upward swing of the voltage on the clock output, the register loads whatever values are present
on its inputs. As soon as the data is loaded it appears on the outputs, and stays there steady
throughout the rest of the clock cycle. The data on the register outputs is used as input for the
computer circuits that carry out the operations desired. For example, the accumulator register
data output may be directed by a multiplexor to one of the inputs of the ALU, where it serves as
an addend. The result of the addition appears on the ALU outputs, which can be directed by
multiplexors to the inputs of the same accumulator register. The result waits at the inputs of the
register for the next upward swing of voltage on the CP register input, sometimes called the write
input (WR). When this occurs, the ALU result is loaded into the register and displayed on the
outputs. The value held in the register from the previous cycle is overwritten and discarded,
having served its purpose. The cycle is repeated endlessly, the data following paths dictated by the
program instructions, which control the various multiplexors and registers. If the accumulator
had used transparent latches, the ALU result would feed through the register, to the ALU, back
to the register and through again to the computer circuits in an uncontrolled loop. The edge-
triggered register ensures that the result is written in an instant, and then the inputs close,
allowing the computer to perform its operations in an orderly and stable way.
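The register-and-ALU loop described above can be caricatured in a few lines of Python. This is only a sketch of the timing idea (the accumulator updates once per clock edge), not a model of any particular processor; the variable names are assumptions made for the example:

# Illustrative sketch of the machine cycle described above: the accumulator
# register only takes on the ALU result at the clock edge, once per cycle.
accumulator = 0          # value currently on the register outputs
operand = 3              # value selected by a multiplexor as the other addend

for cycle in range(4):
    alu_result = accumulator + operand   # combinational ALU logic settles...
    accumulator = alu_result             # ...and is written at the clock edge
    print("cycle", cycle, "accumulator =", accumulator)
# Because loading happens only at the edge, the result cannot race back
# through the ALU within the same cycle.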

Synchronous Counters
A synchronous counter, in contrast to an asynchronous counter, is one whose output bits change state
simultaneously, with no ripple. The only way we can build such a counter circuit from J-K flip-flops is to
connect all the clock inputs together, so that each and every flip-flop receives the exact same clock pulse at the
exact same time:
Now, the question is, what do we do with the J and K inputs? We know that we still have to maintain the same
divide-by-two frequency pattern in order to count in a binary sequence, and that this pattern is best achieved
utilizing the ―toggle‖ mode of the flip-flop, so the fact that the J and K inputs must both be (at times) ―high‖ is
clear. However, if we simply connect all the J and K inputs to the positive rail of the power supply as we did in
the asynchronous circuit, this would clearly not work because all the flip-flops would toggle at the same time:
with each and every clock pulse!

Let‘s examine the four-bit binary counting sequence again, and see if there are any other patterns that predict
the toggling of a bit. Asynchronous counter circuit design is based on the fact that each bit toggle happens at
the same time that the preceding bit toggles from a ―high‖ to a ―low‖ (from 1 to 0). Since we cannot clock the
toggling of a bit based on the toggling of a previous bit in a synchronous counter circuit (to do so would create
a ripple effect) we must find some other pattern in the counting sequence that can be used to trigger a bit
toggle:
Examining the four-bit binary count sequence, another predictive pattern can be seen. Notice that just before a
bit toggles, all preceding bits are ―high:‖
This pattern is also something we can exploit in designing a counter circuit. If we enable each J-K flip-flop to
toggle based on whether or not all preceding flip-flop outputs (Q) are ―high,‖ we can obtain the same counting
sequence as the asynchronous circuit without the ripple effect, since each flip-flop in this circuit will be
clocked at exactly the same time:

The result is a four-bit synchronous "up" counter. Each of the higher-order flip-flops is made ready to toggle
(both J and K inputs ―high‖) if the Q outputs of all previous flip-flops are ―high.‖ Otherwise, the J and K
inputs for that flip-flop will both be ―low,‖ placing it into the ―latch‖ mode where it will maintain its present
output state at the next clock pulse. Since the first (LSB) flip-flop needs to toggle at every clock pulse, its J and
K inputs are connected to Vcc or Vdd, where they will be ―high‖ all the time. The next flip-flop need only
―recognize‖ that the first flip-flop‘s Q output is high to be made ready to toggle, so no AND gate is needed.
However, the remaining flip-flops should be made ready to toggle only when all lower-order output bits are
―high,‖ thus the need for AND gates.
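Here is a hedged Python sketch of the synchronous "up" counter logic just described: on each common clock pulse a bit toggles only if all lower-order bits are currently 1 (the AND-gate condition). Names and structure are illustrative only:

# Illustrative sketch of the 4-bit synchronous "up" counter logic.
bits = [0, 0, 0, 0]          # LSB first

def clock_pulse(bits):
    toggle = [False] * len(bits)
    toggle[0] = True                         # LSB's J and K are tied high
    for i in range(1, len(bits)):
        toggle[i] = all(bits[:i])            # AND of all lower-order Q outputs
    for i, t in enumerate(toggle):           # all flip-flops clocked together
        if t:
            bits[i] ^= 1

for _ in range(5):
    clock_pulse(bits)
    print(list(reversed(bits)))              # MSB first: 0001, 0010, 0011, ...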
To make a synchronous ―down‖ counter, we need to build the circuit to recognize the appropriate bit patterns
predicting each toggle state while counting down. Not surprisingly, when we examine the four-bit binary count
sequence, we see that all preceding bits are ―low‖ prior to a toggle (following the sequence from bottom to
top):

Since each J-K flip-flop comes equipped with a Q‘ output as well as a Q output, we can use the Q‘ outputs to
enable the toggle mode on each succeeding flip-flop, being that each Q‘ will be ―high‖ every time that the
respective Q is ―low:‖
Taking this idea one step further, we can build a counter circuit selectable between "up" and "down"
count modes by having dual lines of AND gates detecting the appropriate bit conditions for an ―up‖ and a
―down‖ counting sequence, respectively, then use OR gates to combine the AND gate outputs to the J and K
inputs of each succeeding flip-flop:

This circuit isn‘t as complex as it might first appear. The Up/Down control input line simply enables either the
upper string or lower string of AND gates to pass the Q/Q‘ outputs to the succeeding stages of flip-flops. If the
Up/Down control line is ―high,‖ the top AND gates become enabled, and the circuit functions exactly the same
as the first (―up‖) synchronous counter circuit shown in this section. If the Up/Down control line is made
―low,‖ the bottom AND gates become enabled, and the circuit functions identically to the second (―down‖
counter) circuit shown in this section.
To illustrate, here is a diagram showing the circuit in the ―up‖ counting mode (all disabled circuitry shown in
grey rather than black):
Here, shown in the ―down‖ counting mode, with the same grey coloring representing disabled circuitry:

Up/down counter circuits are very useful devices. A common application is in machine motion control, where
devices called rotary shaft encoders convert mechanical rotation into a series of electrical pulses, these pulses
―clocking‖ a counter circuit to track total motion:

As the machine moves, it turns the encoder shaft, making and breaking the light beam between LED and
phototransistor, thereby generating clock pulses to increment the counter circuit. Thus, the counter integrates,
or accumulates, total motion of the shaft, serving as an electronic indication of how far the machine has
moved. If all we care about is tracking total motion, and do not care to account for changes in the direction of
motion, this arrangement will suffice. However, if we wish the counter to increment with one direction of
motion and decrement with the reverse direction of motion, we must use an up/down counter, and an
encoder/decoding circuit having the ability to discriminate between different directions.
If we re-design the encoder to have two sets of LED/phototransistor pairs, those pairs aligned such that their
square-wave output signals are 90° out of phase with each other, we have what is known as a quadrature
output encoder (the word "quadrature" simply refers to a 90° angular separation). A phase detection circuit
may be made from a D-type flip-flop, to distinguish a clockwise pulse sequence from a counter-clockwise
pulse sequence:

When the encoder rotates clockwise, the ―D‖ input signal square-wave will lead the ―C‖ input square-wave,
meaning that the ―D‖ input will already be ―high‖ when the ―C‖ transitions from ―low‖ to ―high,‖ thus setting
the D-type flip-flop (making the Q output ―high‖) with every clock pulse. A ―high‖ Q output places the counter
into the "Up" count mode, and any clock pulses received by the counter from the encoder (from either LED) will
increment it. Conversely, when the encoder reverses rotation, the ―D‖ input will lag behind the ―C‖ input
waveform, meaning that it will be ―low‖ when the ―C‖ waveform transitions from ―low‖ to ―high,‖ forcing the
D-type flip-flop into the reset state (making the Q output ―low‖) with every clock pulse. This ―low‖ signal
commands the counter circuit to decrement with every clock pulse from the encoder.
This circuit, or something very much like it, is at the heart of every position-measuring circuit based on a pulse
encoder sensor. Such applications are very common in robotics, CNC machine tool control, and other
applications involving the measurement of reversible, mechanical motion.
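The direction-detection rule described above (sample the "D" channel at each rising edge of the "C" channel, then count up or down accordingly) can be sketched in Python. This is a simplified software model, not the hardware circuit; the function and signal names are invented for the example:

# Illustrative sketch of quadrature direction detection and counting.
def decode(samples):
    """samples: list of (C, D) level pairs, one pair per time step."""
    count, prev_c = 0, 0
    for c, d in samples:
        if c == 1 and prev_c == 0:          # rising edge of the "C" waveform
            count += 1 if d == 1 else -1    # D high -> count up, D low -> count down
        prev_c = c
    return count

# Clockwise: D leads C, so D is already high on each rising edge of C.
cw = [(0, 1), (1, 1), (1, 0), (0, 0)] * 3
print(decode(cw))    # 3 (counted up three times)
# Counter-clockwise: D lags C, so D is low on each rising edge of C.
ccw = [(0, 0), (1, 0), (1, 1), (0, 1)] * 3
print(decode(ccw))   # -3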
Multivibrators
Individual Sequential Logic circuits can be used to build more complex circuits such as
Multivibrators, Counters, Shift Registers, Latches and Memories etc, but for these types of
circuits to operate in a ―sequential‖ way, they require the addition of a clock pulse or timing
signal to cause them to change their state. Clock pulses are generally continuous square or
rectangular shaped waveforms that are produced by a pulse generator circuit such as a
Multivibrator.
A multivibrator circuit oscillates between a ―HIGH‖ state and a ―LOW‖ state producing a
continuous output. Astable multivibrators generally have an even 50% duty cycle, that is, for
50% of the cycle time the output is "HIGH" and for the remaining 50% of the cycle time the output
is "LOW". In other words, the duty cycle for an astable timing pulse is 1:1.
Sequential logic circuits that use the clock signal for synchronization are dependent upon the
frequency and clock pulse width to activate their switching action. Sequential circuits may also
change their state on either the rising or falling edge, or both, of the actual clock signal as we
have seen previously with the basic flip-flop circuits. The following is a list of terms associated
with a timing pulse or waveform.
Active HIGH - if the state change occurs from a "LOW" to a "HIGH" at the clock pulse's rising
edge or during the clock width.

Clock Signal Waveform


Active LOW - if the state change occurs from a "HIGH" to a "LOW" at the clock pulse's
falling edge.
Duty Cycle - this is the ratio of the clock width to the clock period.
Clock Width - this is the time during which the value of the clock signal is equal to a logic ―1‖,
or HIGH.
Clock Period - this is the time between successive transitions in the same direction, i.e., between
two rising or two falling edges.
Clock Frequency - the clock frequency is the reciprocal of the clock period, frequency = 1/clock
period
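A quick numerical illustration of these definitions, using arbitrarily chosen example values:

# Quick check of the clock terms defined above (example values only).
clock_width  = 2e-6                      # time the signal is HIGH, in seconds
clock_period = 10e-6                     # time between successive rising edges

duty_cycle = clock_width / clock_period  # ratio of clock width to clock period
frequency  = 1 / clock_period            # reciprocal of the clock period

print(f"duty cycle = {duty_cycle:.0%}")            # 20%
print(f"frequency  = {frequency/1e3:.0f} kHz")     # 100 kHz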
Clock pulse generation circuits can be a combination of analogue and digital circuits that
produce a continuous series of pulses (these are called astable multivibrators) or a pulse of a
specific duration (these are called monostable multivibrators). Combining two or more
multivibrators allows generation of a desired pattern of pulses (including pulse width, time
between pulses and frequency of pulses).
There are basically three types of clock pulse generation circuits:
Astable – A free-running multivibrator that has NO stable states but switches continuously
between two states this action produces a train of square wave pulses at a fixed frequency.
Monostable – A one-shot multivibrator that has only ONE stable state and is triggered externally,
after which it returns to its first stable state.
Bistable – A flip-flop that has TWO stable states that produces a single pulse either positive or
negative in value.
One way of producing a very simple clock signal is by the interconnection of logic gates. As
NAND gates contain amplification, they can also be used to provide a clock signal or timing
pulse with the aid of a single Capacitor and a single Resistor to provide the feedback and timing
function.
These timing circuits are often used because of their simplicity and are also useful if a logic
circuit is designed that has unused gates which can be utilised to create the monostable or astable
oscillator. This simple type of RC Oscillator network is sometimes called a ―Relaxation
Oscillator‖.
Monostable Multivibrator Circuits
Monostable Multivibrators or ―one-shot‖ pulse generators are generally used to convert short
sharp pulses into wider ones for timing applications. Monostable multivibrators generate a single
output pulse, either ―HIGH‖ or ―LOW‖, when a suitable external trigger signal or pulse T is
applied.
This trigger pulse signal initiates a timing cycle which causes the output of the monostable to
change state at the start of the timing cycle, ( t1 ) and remain in this second state until the end of
the timing period, ( t2 ) which is determined by the time constant of the timing capacitor, CT and
the resistor, RT.
The monostable multivibrator now stays in this second timing state until the end of the RC time
constant and automatically resets or returns itself back to its original (stable) state. Then, a
monostable circuit has only one stable state. A more common name for this type of circuit is
simply a ―Flip-Flop‖ as it can be made from two cross-coupled NAND gates (or NOR gates) as
we have seen previously. Consider the circuit below.
Simple NAND Gate Monostable Circuit

Suppose that initially the trigger input T is held HIGH at logic level "1" by the resistor R1 so that
the output from the first NAND gate U1 is LOW at logic level "0" (NAND gate principles). The
timing resistor, RT is connected to a voltage level equal to logic level "0", which will cause the
capacitor, CT to be discharged. With the output of U1 LOW and the timing capacitor CT completely
discharged, junction V1 is also equal to "0", so the output from the second NAND gate U2, which is
connected as an inverting NOT gate, will be HIGH.

The output from the second NAND gate, ( U2 ) is fed back to one input of U1 to provide the
necessary positive feedback. Since the junction V1 and the output of U1 are both at logic ―0‖ no
current flows in the capacitor CT. This results in the circuit being Stable and it will remain in this
state until the trigger input T changes.
If a negative pulse is now applied either externally or by the action of the push-button to the
trigger input of the NAND gate U1, the output of U1 will go HIGH to logic ―1‖ (NAND gate
principles).
Since the voltage across the capacitor cannot change instantaneously (capacitor charging
principles) this will cause the junction at V1 and also the input to U2 to also go HIGH, which in
turn will make the output of the NAND gate U2 change to LOW at logic "0". The circuit will now
remain in this second state even if the trigger input pulse T is removed. This is known as the
Meta-stable state.
The voltage across the capacitor will now increase as the capacitor CT starts to charge up from
the output of U1 at a time constant determined by the resistor/capacitor combination. This
charging process continues until the charging current is unable to hold the input of U2 and
therefore junction V1 HIGH.
When this happens, the output of U2 switches HIGH again, logic ―1‖, which in turn causes the
output of U1 to go LOW and the capacitor discharges into the output of U1 under the influence
of resistor RT. The circuit has now switched back to its original stable state.
Thus for each negative going trigger pulse, the monostable multivibrator circuit produces a LOW
going output pulse. The length of the output time period is determined by the capacitor/resistor
combination (RC Network) and is given as the Time Constant T = 0.69RC of the circuit in
seconds. Since the input impedance of the NAND gates is very high, large timing periods can be
achieved.
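As a quick illustration of the T = 0.69RC relationship, with component values that are purely an assumption for the example:

# Quick check of the monostable output pulse width T = 0.69RC.
R = 100e3        # timing resistor RT, in ohms (example value)
C = 10e-6        # timing capacitor CT, in farads (example value)

T = 0.69 * R * C
print(f"output pulse width T = {T:.2f} seconds")   # ~0.69 s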
As well as the NAND gate monostable type circuit above, it is also possible to build simple
monostable timing circuits that start their timing sequence from the rising-edge of the trigger
pulse using NOT gates, NAND gates and NOR gates connected as inverters as shown below.
NOT Gate Monostable Multivibrator

As with the NAND gate circuit above, initially the trigger input T is HIGH at a logic level ―1‖ so
that the output from the first NOT gate U1 is LOW at logic level ―0‖. The timing resistor, RT
and the capacitor, CT are connected together in parallel and also to the input of the second NOT
gate U2. As the input to U2 is LOW at logic ―0‖ its output at Q is HIGH at logic ―1‖.
When a logic level ―0‖ pulse is applied to the trigger input T of the first NOT gate it changes
state and produces a logic level ―1‖ output. The diode D1 passes this logic ―1‖ voltage level to
the RC timing network. The voltage across the capacitor, CT increases rapidly to this new
voltage level, which is also connected to the input of the second NOT gate. This in turn outputs a
logic ―0‖ at Q and the circuit stays in this Meta-stable state as long as the trigger input T applied
to the circuit remains LOW.
When the trigger signal returns HIGH, the output from the first NOT gate goes LOW to logic ―0‖
(NOT gate principles) and the fully charged capacitor, CT starts to discharge itself through the
parallel resistor, RT connected across it. When the voltage across the capacitor drops below the
lower threshold value of the input to the second NOT gate, its output switches back again
producing a logic level ―1‖ at Q. The diode D1 prevents the timing capacitor from discharging
itself back through the first NOT gates output.
Then, the Time Constant for a NOT gate Monostable Multivibrator is given as
T = 0.8RC + Trigger in seconds.
One main disadvantage of Monostable Multivibrators is that the time between the application of
successive trigger pulses T has to be greater than the RC time constant of the circuit.
Astable Multivibrator Circuits
Astable Multivibrators are the most commonly used type of multivibrator circuit. An astable
multivibrator is a free running oscillator that has no permanent "meta" or "steady" state but is
continually changing its output from one state (LOW) to the other state (HIGH) and then back
again. This continual switching action from ―HIGH‖ to ―LOW‖ and ―LOW‖ to ―HIGH‖
produces a continuous and stable square wave output that switches abruptly between the two
logic levels making it ideal for timing and clock pulse applications.
As with the previous monostable multivibrator circuit above, the timing cycle is determined by
the RC time constant of the resistor-capacitor, RC Network. Then the output frequency can be
varied by changing the value(s) of the resistors and capacitor in the circuit.
NAND Gate Astable Multivibrator

The astable multivibrator circuit uses two CMOS NOT gates such as the CD4069 or the 74HC04
hex inverter ICs, or as in our simple circuit below a pair of CMOS NAND gates such as the CD4011
or the 74LS132 and an RC timing network. The two NAND gates are connected as inverting
NOT gates.
Suppose that initially the output from the NAND gate U2 is HIGH at logic level ―1‖, then the
input must therefore be LOW at logic level ―0‖ (NAND gate principles) as will be the output
from the first NAND gate U1. Capacitor, C is connected between the output of the second
NAND gate U2 and its input via the timing resistor, R2. The capacitor now charges up at a rate
determined by the time constant of R2 and C.
As the capacitor, C charges up, the voltage at the junction between the resistor R2 and the capacitor, C, which
is also connected to the input of the NAND gate U1 via the stabilizing resistor, R2, decreases
until the lower threshold value of U1 is reached, at which point U1 changes state and the output
of U1 now becomes HIGH. This causes NAND gate U2 to also change state as its input has now
changed from logic ―0‖ to logic ―1‖ resulting in the output of NAND gate U2 becoming LOW,
logic level ―0‖.
Capacitor C is now reverse biased and discharges itself through the input of NAND gate U1.
Capacitor, C charges up again in the opposite direction determined by the time constant of both
R2 and C as before until it reaches the upper threshold value of NAND gate U1. This causes U1
to change state and the cycle repeats itself over again.
Then, the time constant for a NAND gate Astable Multivibrator is given as T = 2.2RC in seconds
with the output frequency given as f = 1/T.
For example: if the resistor R2 = 10kΩ and the capacitor C = 45nF, the oscillation frequency of
the circuit would be given as:
Then the output frequency is calculated as being approximately 1 kHz, which equates to a period of
about 1 ms, so the output waveform would look like:

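The worked example above can be checked with a couple of lines of Python:

# Checking the worked example: T = 2.2RC with R2 = 10 kohm and C = 45 nF.
R2 = 10e3
C  = 45e-9

T = 2.2 * R2 * C          # oscillation period in seconds
f = 1 / T                 # output frequency in hertz
print(f"T = {T*1e3:.2f} ms, f = {f:.0f} Hz")   # ~0.99 ms, ~1010 Hz (about 1 kHz)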
Bistable Multivibrator Circuits


The Bistable Multivibrator circuit is basically an SR flip-flop that we looked at in the previous
tutorials, with the addition of an inverter or NOT gate to provide the necessary switching
function. As with flip-flops, both states of a bistable multivibrator are stable, and the circuit will
remain in either state indefinitely. This type of multivibrator circuit passes from one state to the
other ―only‖ when a suitable external trigger pulse T is applied and to go through a full ―SET-
RESET‖ cycle two triggering pulses are required. This type of circuit is also known as a
"Bistable Latch", "Toggle Latch" or simply "T-latch".
NAND Gate Bistable Multivibrator

The simplest way to make a Bistable Latch is to connect together a pair of Schmitt NAND gates
to form an SR latch as shown above. The two NAND gates, U2 and U3 form the bistable which is
triggered by the input NAND gate, U1. This U1 NAND gate can be omitted and replaced by a
single toggle switch to make a switch debounce circuit as seen previously in the SR Flip-flop
tutorial.
When the input pulse goes ―LOW‖ the bistable latches into its ―SET‖ state, with its output at
logic level ―1‖, until the input goes ―HIGH‖ causing the bistable to latch into its ―RESET‖ state,
with its output at logic level ―0‖. The output of a bistable multivibrator will stay in this ―RESET‖
state until another input pulse is applied and the whole sequence will start again.
Then a Bistable Latch or ―Toggle Latch‖ is a two-state device in which both states either positive
or negative, (logic ―1‖ or logic ―0‖) are stable.
Bistable Multivibrators have many applications such as frequency dividers, counters or as a
storage device in computer memories but they are best used in circuits such as Latches and
Counters.
555 Timer Circuit.
Simple Monostable or Astable multivibrators can now be easily made using standard commonly
available waveform generator ICs specially designed to create timing and oscillator circuits.
Relaxation oscillators can be constructed simply by connecting a few passive components to
their input pins with the most commonly used waveform generator type IC being the classic 555
timer.
The 555 Timer is a very versatile low cost timing IC that can produce very accurate timing
periods with good stability of around 1% and which has a variable timing period from between a
few micro-seconds to many hours with the timing period being controlled by a single RC
network connected to a single positive supply of between 4.5 and 16 volts.
The NE555 timer and its successors, ICM7555, CMOS LM1455, DUAL NE556 etc, are covered
in the 555 Oscillator tutorial and other good electronics based websites, so are only included here
for reference purposes as a clock pulse generator. The 555 connected as an Astable oscillator is
given below.
NE555 Astable Multivibrator.

Here the 555 timer is connected as a basic Astable Multivibrator producing a continuous output
waveform. Pins 2 and 6 are connected together so that it will re-trigger itself on each timing
cycle, thereby functioning as an Astable oscillator. Capacitor, C1 charges up through resistor, R1
and resistor, R2 but discharges only through resistor, R2 as the other side of R2 is connected to
the discharge terminal, pin 7. Then the timing period of t1 and t2 is given as:
t1 = 0.693 (R1 + R2) C1
t2 = 0.693 (R2) C1
T = t1 + t2
The voltage across the capacitor, C1 ranges between 1/3 Vcc and approximately 2/3 Vcc
depending upon the RC timing period. This type of circuit is very stable as it operates from a
single supply rail resulting in an oscillation frequency which is independent of the supply voltage
Vcc.
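The three 555 timing equations above can be evaluated numerically; the R1, R2 and C1 values below are assumptions chosen only to show the arithmetic:

# Evaluating the NE555 astable equations with example component values.
R1 = 1e3       # ohms (example value)
R2 = 10e3      # ohms (example value)
C1 = 100e-9    # farads (example value)

t1 = 0.693 * (R1 + R2) * C1      # output HIGH time
t2 = 0.693 * R2 * C1             # output LOW time
T  = t1 + t2                     # total period
print(f"t1 = {t1*1e3:.3f} ms, t2 = {t2*1e3:.3f} ms, f = {1/T:.0f} Hz")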
In the next tutorial about Sequential Logic Circuits, we will look at another type of clock controlled
flip-flop called a Data Latch. Data latches are very useful sequential circuits which can be made
from any standard gated SR flip-flop and used for frequency division to produce various ripple
counters, frequency dividers and latches.
Memory Hierarchy

The CPU can only directly fetch instructions and data from cache memory, located directly on
the processor chip. Cache memory must be loaded in from the main system memory (the
Random Access Memory, or RAM). RAM, however, only retains its contents when the power is
on, so data needs to be stored on more permanent storage.

We call these layers of memory the memory hierarchy.

Table 3.1. Memory Hierarchy

Fastest: Cache
Cache memory is memory actually embedded inside the CPU. Cache memory is very fast, typically taking only one cycle to access, but since it is embedded directly into the CPU there is a limit to how big it can be. In fact, there are several sub-levels of cache memory (termed L1, L2, L3), each level slightly larger and slower than the one before.

Slower: RAM
All instructions and storage addresses for the processor must come from RAM. Although RAM is very fast, there is still some significant time taken for the CPU to access it (this is termed latency). RAM is stored in separate, dedicated chips attached to the motherboard, meaning it is much larger than cache memory.

Slowest: Disk
We are all familiar with software arriving on a floppy disk or CDROM, and saving our files to the hard disk. We are also familiar with the long time a program can take to load from the hard disk -- having physical mechanisms such as spinning disks and moving heads means disks are the slowest form of storage. But they are also by far the largest form of storage.

The important point to know about the memory hierarchy is the trade-off between speed and size
-- the faster the memory the smaller it is. Of course, if you can find a way to change this
equation, you'll end up a billionaire!

The reason caches are effective is because computer code generally exhibits two forms of
locality:

1. Spatial locality suggests that data within blocks is likely to be accessed together.
2. Temporal locality suggests that data that was used recently will likely be used again
shortly.

This means that benefits are gained by implementing as much quickly accessible memory
(temporal) storing small blocks of relevant information (spatial) as practically possible.

Cache in depth
Cache is one of the most important elements of the CPU architecture. To write efficient code
developers need to have an understanding of how the cache in their systems works.

The cache is a very fast copy of the slower main system memory. Cache is much smaller than
main memories because it is included inside the processor chip alongside the registers and
processor logic. This is prime real estate in computing terms, and there are both economic and
physical limits to its maximum size. As manufacturers find more and more ways to cram more
and more transistors onto a chip, cache sizes grow considerably, but even the largest caches are
tens of megabytes, rather than the gigabytes of main memory or terabytes of hard disk otherwise
common.

The cache is made up of small chunks of mirrored main memory. The size of these chunks is
called the line size, and is typically something like 32 or 64 bytes. When talking about cache, it is
very common to talk about the line size, or a cache line, which refers to one chunk of mirrored
main memory. The cache can only load and store memory in sizes a multiple of a cache line.
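To illustrate the effect of the line size, here is a toy Python model of a cache (not any real CPU's cache) that counts misses for a sequential access pattern versus a widely strided one; all sizes and names are assumptions made for the example:

# Toy cache model (illustrative only) showing why spatial locality matters:
# sequential accesses keep hitting the same cache line, strided ones do not.
LINE_SIZE = 64          # bytes per cache line (assumed)
NUM_LINES = 128         # lines the toy cache can hold (assumed)

def miss_count(addresses):
    cache = {}                              # slot index -> line currently stored
    misses = 0
    for addr in addresses:
        line = addr // LINE_SIZE            # which cache line the byte falls in
        slot = line % NUM_LINES
        if cache.get(slot) != line:         # not resident -> miss, load the line
            cache[slot] = line
            misses += 1
    return misses

seq = list(range(0, 64 * 1024, 4))                  # walk 64 KiB sequentially
strided = list(range(0, 64 * 1024 * 16, 4 * 16))    # same access count, 64-byte strides
print(miss_count(seq), miss_count(strided))         # far fewer misses for the sequential walk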

Caches have their own hierarchy, commonly termed L1, L2 and L3. L1 cache is the fastest and
smallest; L2 is bigger and slower, and L3 more so.

L1 caches are generally further split into instruction and data caches, known as the "Harvard
Architecture" after the relay based Harvard Mark-1 computer which introduced it. Split caches
help to reduce pipeline bottlenecks as earlier pipeline stages tend to reference the instruction
cache and later stages the data cache. Apart from reducing contention for a shared resource,
providing separate caches for instructions also allows for alternate implementations which may
take advantage of the nature of instruction streaming; they are read-only so do not need
expensive on-chip features such as multi-porting, nor need to handle sub-block reads
because the instruction stream generally uses more regular sized accesses.

Computer - Memory
A memory is just like a human brain. It is used to store data and instructions. Computer memory is the storage
space in the computer where data to be processed and instructions required for processing are stored. The
memory is divided into a large number of small parts called cells. Each location or cell has a unique address
which varies from zero to memory size minus one. For example, if a computer has 64K words, then this memory
unit has 64 * 1024 = 65536 memory locations. The addresses of these locations vary from 0 to 65535.
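A one-line check of the addressing arithmetic above:

# 64K words of memory: number of locations and the address range.
words = 64 * 1024
print(words)             # 65536 memory locations
print(0, words - 1)      # addresses run from 0 to 65535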

Memory is primarily of three types

 Cache Memory

 Primary Memory/Main Memory

 Secondary Memory
Cache Memory
Cache memory is a very high speed semiconductor memory which can speed up the CPU. It acts as a buffer
between the CPU and main memory. It is used to hold those parts of data and program which are most
frequently used by the CPU. The parts of data and programs are transferred from disk to cache memory by
the operating system, from where the CPU can access them.

Advantages
The advantages of cache memory are as follows:

 Cache memory is faster than main memory.

 It consumes less access time as compared to main memory.

 It stores the program that can be executed within a short period of time.

 It stores data for temporary use.

Disadvantages
The disadvantages of cache memory are as follows:

 Cache memory has limited capacity.

 It is very expensive.

Primary Memory (Main Memory)


Primary memory holds only those data and instructions on which the computer is currently working. It has
limited capacity and data is lost when power is switched off. It is generally made up of semiconductor devices.
These memories are not as fast as registers. The data and instructions required to be processed reside in main
memory. It is divided into two subcategories, RAM and ROM.

Characteristics of Main Memory

 These are semiconductor memories

 It is known as main memory.

 Usually volatile memory.

 Data is lost in case power is switched off.

 It is working memory of the computer.

 Faster than secondary memories.

 A computer cannot run without primary memory.

Secondary Memory
This type of memory is also known as external memory or non-volatile. It is slower than main memory. These
are used for storing data/information permanently. The CPU does not access these memories directly; instead they
are accessed via input-output routines. Contents of secondary memories are first transferred to main memory,
and then the CPU can access them. For example: disk, CD-ROM, DVD etc.

Characteristic of Secondary Memory

 These are magnetic and optical memories

 It is known as backup memory.


 It is non-volatile memory.

 Data is permanently stored even if power is switched off.

 It is used for storage of data in a computer.

 Computer may run without secondary memory.

 Slower than primary memories.

Read Only Memory (ROM) Types

There are five basic ROM types:

1. ROM - Read Only Memory


2. PROM - Programmable Read Only Memory
3. EPROM - Erasable Programmable Read Only Memory
4. EEPROM - Electrically Erasable Programmable Read Only Memory
5. Flash EEPROM memory

Each type has unique characteristics, but all types of ROM memory have two things in common:
Data stored in these chips is non-volatile -- it is not lost when power is removed.
Data stored in these chips is either unchangeable or requires a special operation to change.
ROM

A diode normally allows current to flow in only one direction and has a certain threshold, known
as the forward breakover, that determines how much current is required before the diode will
pass it on. In silicon-based items such as processors and memory chips, the forward breakover
voltage is approximately 0.6 volts.

By taking advantage of the unique properties of a diode, a ROM chip can send a charge that is
above the forward breakover down the appropriate column with the selected row grounded to
connect at a specific cell. If a diode is present at that cell, the charge will be conducted through
to the ground, and, under the binary system, the cell will be read as being "on" (a value of 1). If
the cell's value is 0, there is no diode link at that intersection to connect the column and row,
so the charge on the column does not get transferred to the row.

The way a ROM chip works necessitates the programming of complete data when the chip is
created. You cannot reprogram or rewrite a standard ROM chip. If it is incorrect, or the data
needs to be updated, you have to throw it away and start over. Creating the original template for
a ROM chip is often a laborious process. Once the template is completed, the actual chips can
cost as little as a few cents each. They use very little power, are extremely reliable and, in the
case of most small electronic devices, contain all the necessary programming to control the
device.
PROM

Creating ROM chips totally from scratch is time-consuming and very expensive in small
quantities. For this reason, developers created a type of ROM known as programmable read-
only memory (PROM). Blank PROM chips can be bought inexpensively and coded by the user
with a programmer.

PROM chips have a grid of columns and rows just as ordinary ROMs do. The difference is that
every intersection of a column and row in a PROM chip has a fuse connecting them. A charge
sent through a column will pass through the fuse in a cell to a grounded row indicating a value of
1. Since all the cells have a fuse, the initial (blank) state of a PROM chip is all 1s. To change the
value of a cell to 0, you use a programmer to send a specific amount of current to the cell. The
higher voltage breaks the connection between the column and row by burning out the fuse. This
process is known as burning the PROM.

PROMs can only be programmed once. They are more fragile than ROMs. A jolt of static
electricity can easily cause fuses in the PROM to burn out, changing essential bits from 1 to 0.
But blank PROMs are inexpensive and are good for prototyping the data for a ROM before
committing to the costly ROM fabrication process.

EPROM

Working with ROMs and PROMs can be a wasteful business. Even though they are inexpensive
per chip, the cost can add up over time. Erasable programmable read-only memory (EPROM)
addresses this issue. EPROM chips can be rewritten many times. Erasing an EPROM requires a
special tool that emits a certain frequency of ultraviolet (UV) light. EPROMs are configured
using an EPROM programmer that provides voltage at specified levels depending on the type of
EPROM used.

The EPROM has a grid of columns and rows and the cell at each intersection has two transistors.
The two transistors are separated from each other by a thin oxide layer. One of the transistors is
known as the floating gate and the other as the control gate. The floating gate's only link to the
row (wordline) is through the control gate. As long as this link is in place, the cell has a value of
1. To change the value to 0 requires a process called Fowler-Nordheim tunneling.

Tunneling is used to alter the placement of electrons in the floating gate. Tunneling creates an
avalanche discharge of electrons, which have enough energy to pass through the insulating oxide
layer and accumulate on the gate electrode. When the high voltage is removed, the electrons are
trapped on the electrode. Because of the high insulation value of the silicon oxide surrounding
the gate, the stored charge cannot readily leak away and the data can be retained for decades. An
electrical charge, usually 10 to 13 volts, is applied to the floating gate. The charge comes from
the column (bitline), enters the floating gate and drains to a ground.
This charge causes the floating-gate transistor to act like an electron gun. The excited electrons
are pushed through and trapped on the other side of the thin oxide layer, giving it a negative
charge. These negatively charged electrons act as a barrier between the control gate and the
floating gate. A device called a cell sensor monitors the level of the charge passing through the
floating gate. If the flow through the gate is greater than 50 percent of the charge, it has a value
of 1. When the charge passing through drops below the 50-percent threshold, the value changes
to 0. A blank EPROM has all of the gates fully open, giving each cell a value of 1.

To rewrite an EPROM, you must erase it first. To erase it, you must supply a level of energy
strong enough to break through the negative electrons blocking the floating gate. In a standard
EPROM, this is best accomplished with UV light at a wavelength of 253.7 nanometers (2537
angstroms). Because this particular frequency will not penetrate most plastics or glasses, each
EPROM chip has a quartz window on top of it. The EPROM must be very close to the eraser's
light source, within an inch or two, to work properly.

An EPROM eraser is not selective; it will erase the entire EPROM. The EPROM must be
removed from the device it is in and placed under the UV light of the EPROM eraser for several
minutes. An EPROM that is left under the light too long can become over-erased. In such a case, the
EPROM's floating gates are charged to the point that they are unable to hold the electrons at all.

EEPROMs and Flash Memory

Though EPROMs are a big step up from PROMs in terms of reusability, they still require
dedicated equipment and a labor-intensive process to remove and reinstall them each time a
change is necessary. Also, changes cannot be made incrementally to an EPROM; the whole chip
must be erased. Electrically erasable programmable read-only memory (EEPROM) chips remove
the biggest drawbacks of EPROMs.

In EEPROMs:

1. The chip does not have to be removed to be rewritten.


2. The entire chip does not have to be completely erased to change a specific portion of it.
3. Changing the contents does not require additional dedicated equipment.

Instead of using UV light, you can return the electrons in the cells of an EEPROM to normal
with the localized application of an electric field to each cell. This erases the targeted cells of the
EEPROM, which can then be rewritten. EEPROMs are changed 1 byte at a time, which makes
them versatile but slow. In fact, EEPROM chips are too slow to use in many products that make
quick changes to the data stored on the chip.

Manufacturers responded to this limitation with Flash memory, a type of EEPROM that uses in-
circuit wiring to erase by applying an electrical field to the entire chip or to predetermined
sections of the chip called blocks. This erases the targeted area of the chip, which can then be
rewritten. Flash memory works much faster than traditional EEPROMs because instead of
erasing one byte at a time, it erases a block or the entire chip, and then rewrites it. The electrons
in the cells of a Flash-memory chip can be returned to normal ("1") by the application of an
electric field, a higher-voltage charge.

Microsoft DOS and Windows Command Line


Short for Microsoft Disk Operating System, MS-DOS is a non-graphical command line
operating system created for IBM compatible computers. MS-DOS was first introduced by
Microsoft in August 1981 and was last updated in 1994 with MS-DOS 6.22. Although the MS-
DOS operating system is rarely used today, the command shell commonly known as the
Windows command line is still widely used.

MS-DOS top 10 command

DOS and Windows command line Top 10 commands

Below is a listing of the top 10 MS-DOS commands most commonly used and
that you will most likely be using during a normal DOS session.

1. cd
2. dir
3. copy
4. del
5. edit
6. move
7. ren (rename)
8. deltree
9. cls
10. format

About cd

CD (Change Directory) is a command used to switch directories in MS-


DOS and the Windows command line.

Cd examples

cd\

Goes to the highest level, the root of the drive.

cd..

Goes back one directory. For example, if you are within the
C:\Windows\COMMAND> directory and typed the above command, this
would take you to C:\Windows> directory.

Windows 95, 98, and later versions have a feature in the CD command that
allows you to go back more than one directory when using the dots. For
example, typing: cd... with three dots after the cd would take you back two
directories.

cd Windows

If present, would take you into the Windows directory. Windows can be
substituted with any other name.
cd\windows

If present, would first move back to the root of the drive and then go into
the Windows directory.

cd\windows\system32

If present, would move into the system32 directory located in the Windows
directory. If at any time you need to see what directories are available in the
directory you're currently in, use the dir command.

cd /d e:\pics

If for example you were on the C: drive, typing the above command with the
/d option would first switch to the E: drive and then move into the pics
directory.

cd

Typing cd alone will print the working directory. For example, if you're in
c:\windows> and you type cd it will print c:\windows. For those users
who are familiar with Unix or Linux this could be thought of as doing the pwd
(print working directory) command.

About dir

The dir command allows you to see the available files and directories
in the current directory. The dir command also shows the last
modification date and time, as well as the file size.

Dir examples

dir
Lists all files and directories in the current directory. By default the dir
command lists the files and directories in alphabetic order.

dir *.exe

The above command lists any file that ends with the .exe file extension. See
the wildcard definition for further wildcard examples.

dir *.txt *.doc

The above is using multiple filespecs to list any files ending with .txt and
.doc in one command.

dir /ad

List only the directories in the current directory. If you need to move into
one of the directories listed use the cd command.

dir /s

Lists the files in the directory that you are in and all subdirectories after that
directory. If you are at root "C:\>" and type this command, it will list
every file and directory on the C: drive of the computer.

dir /p

If the directory has lots of files and you cannot read all the files as they
scroll by, you can use this command and it displays all files one page at a
time.

dir /w

If you don't need file information you can use this command to list only the
files and directories going horizontally, taking as little space as needed.
dir /s /w /p

This would list all the files and directories in the current directory and the
subdirectories after that, in wide format and one page at a time.

dir /on

List the files in alphabetical order by the names of the files.

dir /o-n

List the files in reverse alphabetical order by the names of the files.

dir \ /s |find "i" |more

A nice command to list all directories on the hard drive, one screen page at a
time, and see the number of files in each directory and the amount of space
each occupies.

dir > myfile.txt

Takes the output of dir and re-routes it to the file myfile.txt instead of
outputting it to the screen.

About copy

Allows the user to copy one or more files to an alternate location.

Copy examples

copy *.txt c:\

In the above copy command we are using a wildcard to copy all .txt files
(multiple files) from the current directory to the c:\ root directory.
copy *.* a:

Copy all files in the current directory to the floppy disk drive.

Note: If there are hidden files they will not be copied. To copy all files
including hidden files use the xcopy command.

copy autoexec.bat c:\windows

Copy the autoexec.bat, usually found at root, into the Windows
directory; the autoexec.bat can be substituted for any file(s).

copy win.ini c:\windows /y

Copy the win.ini file in the current directory to the Windows directory.
Because this file already exists in the Windows directory it normally would
prompt if you want to overwrite the file. However, with the /y switch you will
not receive any prompt.

copy "computer hope.txt" hope

Copy the file "computer hope.txt" into the hope directory. Whenever dealing
with a file or directory with a space, it must be surrounded with quotes.
Otherwise you'll get the "The syntax of the command is incorrect." error.

copy myfile1.txt+myfile2.txt

Copies the contents of myfile2.txt and combines it with the contents of
myfile1.txt.

copy con test.txt

Finally, a user can create a file using the copy con command as shown
above, which creates the test.txt file. Once the above command has been
typed in, a user could type in whatever he or she wishes. When you have
completed creating the file, you can save and exit the file by pressing
CTRL+Z, which would create ^Z, and then press enter. An easier way to
view and edit files in MS-DOS would be to use the edit command.

About del

Del is a command used to delete files from the computer.

Del examples

Note: In Microsoft Windows deleted items go to the Recycle Bin. Keep in
mind that deleting files from MS-DOS or the Windows command line does
not send files to the Recycle Bin.

Tip: Use the rmdir or deltree command to delete directories

del test.tmp

Deletes the test.tmp in the current directory, if the file exists.

del c:\windows\test.tmp

Delete the c:\windows\test.tmp in the Windows directory if it exists.

del c:\windows\temp\*.*

The * (asterisks) is a wild character, *.* indicates that you would like to
delete all files in the c:\windows\temp directory.

del c:\windows\temp\?est.tmp

The ? (question mark) is a single wild character for one letter, which means
this command would delete any file ending with est.tmp such as pest.tmp or
zest.tmp.
About edit

The MS-DOS editor is a command line text editor that allows you to
view, create, or modify any file on your computer. When running the
edit command a screen similar to the picture below is shown.

Edit examples

edit c:\autoexec.bat

This would look at the autoexec.bat. However, if the file is not found a blank
blue screen is shown. When editing this or any file, ensure that you know
what you are placing in the file; improperly editing the file can cause issues
with your computer.

If you are unable to get this program to work, try typing in "path
c:\windows\command" if you have Windows 95 or higher, or type in "path
c:\dos" if you have Dos 5.x/6.x/7.x or Windows 3.x and try again. If you still
are not able to get edit to work, it may not be on the hard drive; Type in dir
edit.com /s at the c:\>. If it says that the file is not found, you may not
have this feature.
Note: If you are using new versions of Windows running under a 64-bit
processor the edit command no longer works. See our how to open and view
the contents of a file on a computer document for other methods of opening
a file from the command line.

Using copy con

If you are running an MS-DOS version 4.x or lower or you are unable to find
edit.com on your hard drive, you can also use the below command to create
a file.

copy con <name of file>

Once you have entered the above command this will create the file with the
name specified.

Once you have typed all the lines you want to be in the file, press and hold
CTRL + Z. This should enter ^Z, once on the screen, press enter and one file
should be copied.

Using edit to create a file

Using edit you can also create files; for example, if you wanted to create a
file called myfile.txt, you would type the below command.

edit myfile.txt

This would bring up a blank edit screen, as long as the file is saved upon exit
this will create the file myfile.txt.

About move

Allows you to move files or directories from one folder to another, or


from one drive to another.
Move examples

move c:\windows\temp\*.* c:\temp

Move the files of c:\windows\temp to the temp directory in root; this is of
course assuming you have the Windows\temp directory. In this example, *.*
is a wildcard telling the computer to match every file with every extension.

move "computer hope" example

If your directory name has a space, it must be surrounded with quotes,


otherwise you will get a "The syntax of the command is incorrect." error
message. In the above example, this command would move the "computer
hope" directory into the example directory contained in the same directory.

move stats.doc, morestats.doc c:\statistics

The above example would move the files stats.doc and morestats.doc into
the c:\statistics folder.

About ren and rename

Used to rename files and directories from the original name to a new name.

In earlier releases of MS-DOS instead of using ren or rename you need to


use the move command to rename your MS-DOS directories or files.

Ren and rename examples

rename c:\computer hope

Rename the directory computer to hope.

rename *.txt *.bak


Rename all text files to files with .bak extension.

rename * 1_*

Rename all files to begin with 1_. The asterisk (*) in this example is an
example of a wild character; because nothing was placed before or after the
first asterisk, this means all files in the current directory will be renamed
with a 1_ in front of the file. For example, if there was a file named hope.txt
it would be renamed to 1_pe.txt.

rename "computer hope.txt" "example file.txt"

Rename the file "computer hope.txt" to "example file.txt". Whenever dealing


with a file or directory with a space, it must be surrounded with quotes.
Otherwise you'll get the "The syntax of the command is incorrect." error.

About deltree

Short for delete tree, deltree is a command used to delete files and
directories permanently from the computer.

Deltree examples

deltree c:\fake010

Deletes the fake010 directory and everything in it.

About cls

Cls is a command that allows you to clear the complete contents of the
screen and leave only a prompt.

Cls Examples

cls
Running the cls command at the command prompt would clear your screen
of all previous text and only return the prompt.

About format

Format is used to erase information off of a computer diskette or fixed drive.

Format examples

Caution: When using the format command, remember all information on


the drive you want to format will be completely erased.

format a:

Would erase all the contents off a floppy disk. Commonly used on a diskette
that has not been formatted or on a diskette you want to erase.

format a: /q

Quickly erases all the contents of a floppy diskette. Commonly used to


quickly erase all information on the diskette.

format c:

This would erase the contents of your C: hard drive. In other words, unless
you want to erase all your computer's information, this command should not
be done unless you're planning to start over. Note: if you're in Windows or
files on the hard drive are in use, this command will not work.

About attrib

Attrib allows a user to change the attributes of a file or files. With


attrib, you can add or remove any of the attributes
below. Note: if you need to change the ACL of a file, see the CACLS
command.

Read-only - allows the file to be only viewed and not written to or changed.

Archived - allows Microsoft Backup and other backup programs to know


what files to back up.

Hidden - makes files invisible to standard users and hidden if show hidden
files is enabled.

System - makes the file an important system file.

Displays or changes file attributes.

ATTRIB [+R | -R] [+A | -A] [+S | -S] [+H | -H] [[drive:][path]filename]
[/S]

+ Sets an attribute.

- Clears an attribute.

R Read-only file attribute.

A Archive file attribute.

S System file attribute.

H Hidden file attribute.

/S Processes files in all directories in the specified path.


Attrib examples

attrib

Typing attrib by itself displays all files in the current directory and each of
their attributes. If any file is hidden it also displays those files.

As can be seen in the above example we typed the dir command to list the
files in the current directory and could only see the "computer.bat" file
listed. However, if you type attrib by itself, there are three files in this
directory, "computer.bat" with read-only, "example.txt" with hidden, and
"hope.txt" with the hidden and read-only attribute.

attrib +r autoexec.bat

Add the read-only attribute to the autoexec.bat file so it cannot be modified


until the read-only attribute is taken off. This command is helpful for
important system files or any other file that you do not want to have
mistakenly edited or changed by another program.

attrib +h config.sys
Add the hidden attribute to the config.sys file causing it to not be seen by
the average user.

attrib -h config.sys

This command does the opposite of the example shown before this
command. Instead of hiding the file this command makes the file visible if
hidden.

attrib +r +h autoexec.bat

Finally, this example adds two attributes to the autoexec.bat and makes the
file read-only as well as hidden.

About ping

Helps in determining a TCP/IP network's IP address, as well as determining issues
with the network and assisting in resolving them. See the ping definition for a
full description.

Windows XP and lower syntax

ping [-t] [-a] [-n count] [-l size] [-f] [-i TTL] [-v TOS] [-r count] [-s count]
[[-j host-list] | [-k host-list]] [-w timeout] destination-list

Options:

-t Pings the specified host until stopped.


To see statistics and continue - Type Control-Break;
To stop - press Ctrl + C.

-a Resolve addresses to hostnames.


-n count Number of echo requests to send.

-l size Send buffer size.

-f Set Don't Fragment flag in packet.

-i TTL Time To Live.

-v TOS Type Of Service.

-r count Record route for count hops.

-s count Timestamp for count hops.

-j host-list Loose source route along host-list.

-k host-list Strict source route along host-list.

-w timeout Timeout in milliseconds to wait for each reply.

Examples

ping localhost

Pings the localhost, which helps determine if the computer can send
information out and receive the information back from itself.

Note: The above command does not send information over the network, but can
indicate if the card can respond.
ping computerhope.com

Ping supports the ability to ping an Internet address. In the above example,
we pinged "computerhope.com" and as can be seen, received four responses
back. If we couldn't reach the server or the server was blocking our request
we would have lost all four packets.

ping 69.72.169.241

Allows you to ping another computer where <69.72.169.241> can be the IP


address of the computer you want to ping. If you do not get a reply or get
lost packets, you have a problem with your network, which could be a cable
issue, network card issue, driver, router, switch, or other network
problem.

About set

Allows you to change one variable or string to another.

Set examples

set path=c:\windows\command
Set the path to c:\windows\command.

Notice: Users in Microsoft Windows 2000 and Windows XP may have


difficulty defining the set values through the MS-DOS prompt. See the link
below for setting variables within Windows.

 How to set the path and environment variables in Windows.

set

Display all of the set environment variables currently defined.

About net

The net command is used to update, fix, or view the network or network
settings. Listed in the syntax are each of the net commands.

Net examples

net use z: \\computer\folder

Map the Z: drive to the network path \\computer\folder.

net send mrhope "There is hope!"

Send a text message to the computer with a host name of mrhope, the
message being "There is hope!". Note: This command only works for Windows
versions that support this command.

Tip: Today's computers disable the messenger service, if this service is


disabled you cannot send or receive net send messages. If you need this
service enabled follow the instructions on this page and choose to enable the
service instead of disabling it.

Note: New versions of Windows no longer support the net send command.
net send * "There is hope!"

The above command would send "There is hope!" to all users in your current
domain. This command should be used with caution since if you're on a
school or work network, many of the computers on that network, if not all,
will be sent a message if the messenger service is enabled on the
computers.

net config workstation

Display additional information about the network such as the computer's


name, workgroup, logon domain, DNS, and other useful information.

net view \\hope

To view the available computers and their shared resources, you may use the net view command.
Typing net view by itself displays the available computers, while the command shown above
displays the shared resources on the hope computer.

net localgroup

Display all groups currently setup on the computer you're running the
command on.

net share

Display all network shares on your computer.

net share hope=c:\hope\files

Create a share called "hope" for the "c:\hope\files" directory.
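
A mapped drive can also be disconnected again when it is no longer needed:

net use z: /delete

Removes the Z: drive mapping created with the net use command above.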

Pipes and Redirection


A number of DOS commands send output to the screen and/or require input from the user.
Redirection is a mechanism whereby the output of a command can be fed either to some other
device (e.g. a printer or a file) or to another program or command.

There are four redirection functions:

> Redirect output
>> Append
< Redirect input
| Pipe

>

Redirects a command's output from the "standard output device" (usually the monitor) to another
device (e.g. a printer) or a file.

Syntax:

To redirect output to a device:


Command > Device

To redirect output to a file:


Command > Filename

Notes:

1. Acceptable Device names are: CON (Monitor); PRN (LPT1 - assumed to be the printer);
LPT1 - 3 (Parallel Ports - usually connected to a printer); COM 1 - 4 (Serial Ports); and
NUL (an electronic void). If anything other than a recognised device is specified, it is
assumed to be the name of a file.
2. If a file already exists with the specified Filename, it is overwritten without any warnings.

Examples:

Probably the most common use of this redirection function is to send directory listings to the
printer or to save them as a file. (One of Windows Explorer's biggest weaknesses is that it does not
enable either of these operations.)

1. To print out a sorted directory listing of all files in the Windows directory:
DIR c:\windows /o/a > PRN
2. To create a file containing the directory listing of the same directory:
DIR c:\windows /o/a > c:\data\directories\windows.txt
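3. Output can also be thrown away entirely by redirecting it to the NUL device, which is useful
when you only want a command to run without cluttering the screen:
DIR c:\windows /o/a > NUL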

>>

Appends the output from a command to the specified file.

Syntax:

Command >> Filename

Note:

If Filename does not exist, it is created. If Filename does exist, the output from the command is
added to it (unlike the > function where the original contents are overwritten).

Example:

To add the directory listing of the files in the c:\windows\system directory to that created above:
DIR c:\windows\system /o/a >> c:\data\directories\windows.txt
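
The >> function is also handy for building a simple log from a batch file. For example (the log file
name here is purely illustrative), each run of the following command adds one more line to the end
of the log:
ECHO Backup completed >> c:\data\backup.log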

<

Directs input to a command from a source other than the default (the default source usually being
the keyboard).

Syntax:

Command < Datasource

Example:

To sort the lines in a text file (c:\data\address list.txt) on the 12th character, the SORT command is
fed input from the file:
SORT /+12 < c:\data\address list.txt
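
Input and output redirection can be combined in a single command. For example, to write the
sorted address list to a new file (the output file name is only an example) instead of displaying it
on the screen:
SORT /+12 < c:\data\address list.txt > c:\data\sorted.txt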

The "pipe" redirects the output of a program or command to a second program or command.
Syntax:

Command1 | Command2

Example:

To sort a directory listing based on the time the files were last modified, the output of a directory
listing is piped to the SORT filter which sorts on the 39th character of each line:
DIR c:\data\docs | SORT /+39

Note that if the output of the DIR command had been redirected to SORT /+39 using >, DOS
would return an "invalid switch" error after attempting to create a file called Sort.
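
Another common use of the pipe is to page through long output one screen at a time by feeding it
to the MORE filter:
DIR c:\windows /s | MORE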

batch file
A file that contains a sequence, or batch, of commands. Batch files are useful for storing sets of
commands that are always executed together because you can simply enter the name of the batch
file instead of entering each command individually.

In DOS systems, batch files end with a .BAT extension. For example, the following DOS batch
file prints the date and time and sets the prompt to GO> (the $g code stands for the > character,
which cannot be typed literally because it would be treated as redirection):
date
time
prompt GO$g

Whenever you boot a DOS-based computer, the system automatically executes the batch file
named AUTOEXEC.BAT, if it exists.

Many operating systems use the terms command file or shell script in place of batch file.
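
As a further illustration, here is a minimal batch file (the file and directory names are only
examples) that copies everything under a data directory to a backup drive with XCOPY and then
reports that it has finished:

@echo off
rem backup.bat - copy everything under c:\data to d:\backup
xcopy c:\data d:\backup /s /e /y
echo Backup finished.

Saving these four lines as backup.bat lets you run the whole sequence simply by typing backup at
the prompt.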

Elements of a Windows Application

A well-designed Windows application should be so easy to use that a manual
isn't really needed; a good menu and thoughtful design of the user interface
go a long way toward achieving this.

All Windows applications share certain common elements. Some of them are
described below, taking the GUI ScreenIO panel editor as an example:
Title Bar

All Windows applications have a title bar that usually displays the name of
the application; here, “GUI ScreenIO Panel Editor”.

At the right-hand end of the title bar are three boxes used to minimize
(remove the window from the screen and “park” it in the Windows taskbar,
out of the way) or maximize (expand the window to fill the screen) the
window. The “X” box closes the application.

Menu

Beneath the title bar is a menu which lists all of the actions available at any
given time.

All Windows applications should have a menu, since it's an essential part of
the interface; it explicitly shows the user what facilities are available.

Toolbar

A toolbar containing buttons is generally found beneath the menu. Toolbar
buttons generally just provide an alternative means of selecting actions that
are also available on the menu.

Note: GUI ScreenIO does not presently allow you to include toolbars in your
application.
Status Bar

The status bar resides at the bottom of the window. The status bar can be
divided into cells, which display a variety of information; in this sample, they
show the dimensions and the COBOL name (CB1) of the checkbox control.
In most applications, the status bar is used to display error or informative
messages.

Border

The border can be used to adjust the size of the window by “dragging” it
with the mouse. All windows have a border.

Client Area

The rest of the window is called the “client area”. The client area is where
your controls are placed.

The client area does not include the space occupied by the menu, borders,
status bar, or other visual elements, so its maximum size will be somewhat
smaller than the physical size of a display.

Dialog box units

The objects in the client area, and the client area itself, are measured and
positioned in dialog box units. A dialog box unit is, approximately, 1/4 of an
"average" character wide, and 1/8 of an "average" character in height. This
allows windows and their contents to scale in proportion and look "right"
regardless of the target system's screen resolution and font settings. This is
a Windows convention, used in all Windows applications, not just GUI
ScreenIO, and it works very well.

If, however, you place two systems side-by-side and they are running
different resolutions and font settings, it is possible that the same window
will be proportioned a bit differently, because of rounding errors in scaling
dialog box units to the physical resolution and character settings of the
target system. This is normal.
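
As a rough worked example (the character size used here is only illustrative): if the average
character of the system font is 8 pixels wide and 16 pixels high, one horizontal dialog box unit is
about 8/4 = 2 pixels and one vertical unit is about 16/8 = 2 pixels, so a control defined as 100 units
wide by 40 units high would occupy roughly 200 by 80 pixels on that system. On a system with a
larger font, the same control scales up in proportion.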

Controls

Controls are standard Windows elements that are used to display your text
and data.

Checkboxes

Checkboxes are used to display on/off conditions; when selected, a checkbox
displays a check mark; when off, it’s empty.

Groupboxes

A groupbox is just a visual element. It groups a set of related controls; in
this case, a pair of radiobuttons. Groupboxes are otherwise inactive.

Radiobuttons

Radiobuttons are like checkboxes except that they are used to implement
mutually exclusive choices. You create a radiobutton for each choice, and
then associate them into a group. When you run your program, Windows
only allows one button to be selected within any given group. If you select
another button in the group, the first one is automatically deselected.

Static Text

A static text control just displays text that would not normally be updated at
runtime.

Edit Controls

An edit control is used to accept and/or display textual data, including
numeric data.

GUI ScreenIO provides a number of other Windows controls, but these are
the most common.

Accessories
Tools to meet special vision, hearing and ability needs

Here is how you find your Windows tools.

Click: Start > Programs > Accessories > Accessibility

Let's start with a few vision, hearing and ability tools. The first one is:

 The Magnifier

In all Windows operating systems

The Magnifier is a display utility that makes the computer screen more readable by people who have
low vision by creating a separate window that displays a magnified portion of the screen. Magnifier
provides a minimum level of functionality for people who have slight visual impairments.
Tip!
Usually the Magnifier is set by default to show up at the top of the screen, but you can move it
around by using your left mouse button: click and hold down the left button, move the Magnifier
window to where you want it, then let go of the button.

When you open the Magnifier, a new window appears: the Magnifier Settings window. From that
window you can change the level of magnification, the tracking and the presentation. To get
out of the Magnifier mode, simply click Exit.

Here is another tool:

 The Narrator

In Windows 2000 and newer

The Narrator is a text-to-speech utility for people who are blind or have low vision. Narrator reads
what is displayed on the screen—the contents of the active window, menu options, or text that has
been typed.

Note! The Narrator is designed to work with Notepad, WordPad, Control Panel programs, Internet
Explorer, the Windows desktop, and some parts of Windows Setup. Narrator may not read words
aloud correctly in other programs. Narrator has a number of options that allow you to customize the
way screen elements are read.

The third tool I will mention is:

 The On-screen Keyboard

On–Screen Keyboard is a utility that displays a virtual keyboard on the computer screen. This tool
allows people with mobility impairments to type data by using a pointing device or joystick. Besides
providing a minimum level of functionality for some people with mobility impairments, On–Screen
Keyboard can also be helpful for people who do not know how to type.

Note! The program in which you want to type characters must be active while you are using On–
Screen Keyboard. The accessibility tools in the Windows operating system are intended to provide a
minimum level of functionality for users with special needs.

Windows tools
to help you optimize your computer's performance

In the Windows Accessories you will also find some very handy tools that will help you keep your
system running smoothly. It's a good idea to get acquainted with these tools and how to use them.
You find the tools by clicking System Tools from the Accessories menu

The most important accessories to know are:

 Back Up
 Disk Cleanup
 Disk Defragmenter

Backup!
Where I come from, we have a saying: "Real men do not back up!". The very same "real men" always
come to me for help when they run into trouble from NOT having backed up anything and their hard
drive crashes. Unfortunately there is nothing I can do to help them - other than point out to them that
in MY opinion "real men" are smart men, who DO back up important files on a weekly or monthly
basis, the time frame depending on how much they use their computers and how important their
work is to them.

So take my advice and learn how to Backup!

The Backup accessory in the Windows Accessories menu makes it easy to back up all your important
files. If you click through the backup wizard presented to you when you click Backup from the System
Tools menu, you will find several choices for backing up your files. If you took my advice on how to
manage your files and saved all your files in the Documents folder or on the D: drive (if you have
one), backing up your personal files is easy.

The Backup wizard also gives you the opportunity to create a "System Recovery Disk". This is a handy
tool if you're not TOO sure how well protected you are from viruses and other malware. The recovery
disc will include all data on your computer and the files necessary to restore Windows in the case of a
major failure.

Note! Backing up means making a copy of files and storing it on another medium, such as a CD-ROM
disc, a Smart drive or another hard drive. Choose the one that suits you the best. For the CD-ROM
backup option - of course - you will need a "burner" to make the copy.

An external hard drive attached to your USB port is a good choice for backup media.

The next of the important accessories is:

 The Disk Defragmenter

When you click Disk Defragmenter from the Windows Accessories > System Tools menu,
the Disk Defragmenter window opens.

The Disk Defragmenter sorts out the files on your drive(s) to optimize the space you are using.
Defragmenting your drive(s) periodically is a good way of optimizing your computer's performance.

If you click the Analyze button first, the defragmenter will analyze the specific drive selected in the
window and tell you whether or not it is time to go through the defragmentation process.

If you want to know more about what defragmentation is and how it works, read on.

What is defragmentation?
The word "Defragmentation" might seem a bit intimidating, but it's really quite simple.

Adding and deleting files is something we all do quite frequently.


The use of the HD space is not always as good and efficient as it could be.
Just imagine that every time you delete a small file, it leaves a small gap of free space on your HD.
Then you add a bigger file, and the HD will split this file into fragments.
Every time you open this bigger file, the computer will still read it as one file, but it will have to
look for it in several places on the HD. Of course this procedure will eventually slow your computer
down.
In order to minimize the search time for each file, you'll need to do some defragmentation.
A defragmentation is pretty similar to cleaning and tidying your kitchen cupboards ;-)
You get more space and it's so much easier to find things afterwards.

SO... how do you do that?

 Click your "Start" button.
If you have many programs on your computer, you may also have to click the "All Programs"
tab.
 Click "Accessories"
 Click "System Tools"
 Click "Disk Defragmenter"

Do a defragmentation of your C: drive. Depending on your HD size and the number of files saved on it,
it may take a while to do. You'll get a report once the procedure is done.
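
If you prefer the command prompt, newer versions of Windows also include a command-line version
of the same tool; for example, typing defrag c: defragments the C: drive (the exact switches available
vary between Windows versions, so check defrag /? on your system).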
