
HISTORY OF COMPUTERS

This chapter is a brief summary of the history of computers. It is supplemented by two PBS documentary videotapes, "Inventing the Future" and "The Paperback Computer". The chapter highlights some of the advances to look for in the documentaries.

In particular, when viewing the movies you should look for two things:

The progression in hardware representation of a bit of data:

Vacuum Tubes (1950s) - one bit on a device the size of a thumb;

Transistors (1950s and 1960s) - one bit on a device the size of a fingernail;

Integrated Circuits (1960s and 70s) - thousands of bits on a chip the size of a hand;

Silicon computer chips (1970s and on) - millions of bits on a chip the size of a fingernail.

The progression of the ease of use of computers:

Almost impossible to use except by very patient geniuses (1950s);

Programmable by highly trained people only (1960s and 1970s);

Useable by just about anyone (1980s and on).

Together, these two progressions show how computers got smaller, cheaper, and easier to use.

First Computers

ENIAC:


The first substantial computer was the giant ENIAC machine, built by John W. Mauchly and J. Presper Eckert at the University of Pennsylvania. ENIAC (Electronic Numerical Integrator and Computer) used a word of 10 decimal digits instead of binary ones like previous automated calculators/computers. ENIAC was also the first machine to use more than 2,000 vacuum tubes; it used nearly 18,000 of them. Housing all those vacuum tubes and the machinery required to keep them cool took up over 167 square meters (1,800 square feet) of floor space. The machine had punched-card input and output and arithmetically had 1 multiplier, 1 divider-square rooter, and 20 adders employing decimal "ring counters," which served as adders and also as quick-access (0.0002 second) read-write register storage.

The executable instructions composing a program were embodied in the separate units of ENIAC, which
were plugged together to form a route through the machine for the flow of computations. These
connections had to be redone for each different problem, together with presetting function tables and
switches. This "wire-your-own" instruction technique was inconvenient, and only with some license
could ENIAC be considered programmable; it was, however, efficient in handling the particular programs
for which it had been designed. ENIAC is generally acknowledged to be the first successful high-speed
electronic digital computer (EDC) and was productively used from 1946 to 1955. A controversy
developed in 1971, however, over the patentability of ENIAC's basic digital concepts, the claim being
made that another U.S. physicist, John V. Atanasoff, had already used the same ideas in a simpler
vacuum-tube device he built in the 1930s while at Iowa State College. In 1973, the court found in favor of the company using Atanasoff's claim, and Atanasoff received the acclaim he rightly deserved.


Progression of Hardware
In the late 1940s and 1950s, two devices were invented that would improve the computer field and set the computer revolution in motion. The first of these two devices was the transistor. Invented in 1947 by William Shockley, John Bardeen, and Walter Brattain of Bell Labs, the transistor was destined to replace the vacuum tube in computers, radios, and other electronics.

The vacuum tube, used up to this time in almost all the computers and calculating machines, had been
invented by American physicist Lee De Forest in 1906. The vacuum tube, which is about the size of a
human thumb, worked by using large amounts of electricity to heat a filament inside the tube until it
was cherry red. One result of heating this filament up was the release of electrons into the tube, which
could be controlled by other elements within the tube. De Forest's original device was a triode, which
could control the flow of electrons to a positively charged plate inside the tube. A zero could then be
represented by the absence of an electron current to the plate; the presence of a small but detectable
current to the plate represented a one.

Vacuum tubes were highly inefficient, required a great deal of space, and needed to be replaced often. A computer like ENIAC had 18,000 tubes in it, and housing all those tubes and cooling the rooms from the heat they produced was not cheap. The transistor promised to solve all of these problems, and it did so. Transistors, however, had their problems too. The main problem was that transistors, like other electronic components, needed to be soldered together. As a result, the more complex the circuits became, the more complicated and numerous the connections between the individual transistors became, and the greater the likelihood of faulty wiring.

In 1958, this problem too was solved by Jack St. Clair Kilby of Texas Instruments. He manufactured the first integrated circuit, or chip. A chip is really a collection of tiny transistors which are connected together when the chip is manufactured. Thus, the need for soldering together large numbers of transistors was practically nullified; now connections were needed only to other electronic components. In addition to saving space, the speed of the machine was increased, since the distance the electrons had to travel was reduced.
Mainframes to PCs

The 1960s saw large mainframe computers become much more common in large industries and with the US military and space program. IBM became the unquestioned market leader in selling these large, expensive, error-prone, and very hard-to-use machines.

A veritable explosion of personal computers occurred in the mid-1970s, and in 1977 Steve Jobs and Steve Wozniak exhibited the first Apple II at the first West Coast Computer Faire in San Francisco. The Apple II boasted a built-in BASIC programming language, color graphics, and 4 KB (about 4,100 characters) of memory for only $1,298. Programs and data could be stored on an everyday audio-cassette recorder. Before the end of the fair, Wozniak and Jobs had secured 300 orders for the Apple II, and from there Apple just took off.

Also introduced in 1977 was the TRS-80, a home computer manufactured by Tandy Radio Shack. Its second incarnation, the TRS-80 Model II, came complete with a 64,000-character memory and a disk drive to store programs and data on. At this time, only Apple and Tandy had machines with disk drives. With the introduction of the disk drive, personal computer applications took off, as a floppy disk was a most convenient publishing medium for the distribution of software.

IBM, which up to this time had been producing mainframes and minicomputers for medium to large-sized businesses, decided that it had to get into the act and started working on the Acorn, which would later be called the IBM PC. The PC was the first computer designed for the home market to feature a modular design, so that pieces could easily be added to the architecture. Most of the components, surprisingly, came from outside IBM, since building it with IBM parts would have cost too much for the home computer market. When it was introduced, the PC came with a 16,000-character memory, a keyboard from an IBM electric typewriter, and a connection for a tape cassette player, for $1,265.

By 1984, Apple and IBM had come out with new models. Apple released the first-generation Macintosh, which was the first widely sold computer to come with a graphical user interface (GUI) and a mouse. The GUI made the machine much more attractive to home computer users because it was easy to use. Sales of the Macintosh soared like nothing ever seen before. IBM was hot on Apple's tail and released the 286-AT, which, with applications like the Lotus 1-2-3 spreadsheet and Microsoft Word, quickly became the favorite of business users.

That brings us up to about ten years ago. Now people have their own personal graphics workstations and powerful home computers. The average computer a person might have at home is more powerful by several orders of magnitude than a machine like ENIAC. The computer revolution has been the fastest-growing technology in human history.

HARDWARE AND SOFTWARE

Hardware

Hardware refers to the physical elements of a computer. It is also sometimes called the machinery or the equipment of the computer. Examples of hardware in a computer are the keyboard, the monitor, the mouse and the processing unit. However, most of a computer's hardware cannot be seen; in other words, it is not an external element of the computer, but rather an internal one, surrounded by the computer's casing (tower). A computer's hardware is composed of many different parts, but perhaps the most important of these is the motherboard. The motherboard is made up of even more parts that power and control the computer.

In contrast to software, hardware is a physical entity. Hardware and software are interconnected: without software, the hardware of a computer would have no function. However, without the creation of hardware to perform tasks directed by software via the central processing unit, software would be useless.

Hardware is limited to specifically designed tasks that are, taken independently, very simple. Software
implements algorithms (problem solutions) that allow the computer to complete much more complex
tasks.
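
To make this concrete, here is a minimal sketch in C (the values and variable names are made up for illustration): software arranges trivially simple hardware steps, comparisons and copies, into an algorithm that solves a larger problem, in this case finding the largest value in a list.

    #include <stdio.h>

    int main(void) {
        int values[] = { 12, 7, 31, 5, 18 };
        int largest = values[0];

        /* Each individual step the hardware performs is simple:
           load a value, compare it, copy it. The algorithm is what
           arranges those steps into a more complex task. */
        for (int i = 1; i < 5; i++)
            if (values[i] > largest)   /* one simple comparison */
                largest = values[i];   /* one simple copy */

        printf("largest = %d\n", largest);
        return 0;
    }
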
Software

Software, commonly known as programs, consists of all the electronic instructions that tell the hardware how to perform a task. These instructions come from a software developer in a form that will be accepted by the platform (operating system + CPU) that they are based on. For example, a program that is designed for the Windows operating system will only work for that specific operating system. Compatibility of software will vary as the design of the software and the operating system differ. Software that is designed for Windows XP may experience a compatibility issue when running under Windows 2000 or NT.

Software is capable of performing many tasks, as opposed to hardware, which can only perform the mechanical tasks it is designed for. Software provides the electronic instructions that tell the computer to perform a task. Practical computer systems divide software into two major classes:

System software: Helps run the computer hardware and the computer system itself. System software includes operating systems, device drivers, diagnostic tools and more. System software is almost always pre-installed on your computer.

Application software: Allows users to accomplish one or more tasks. It includes word processing, web browsing and almost any other task for which you might install software. (Some application software is pre-installed on most computer systems.)

Software is generally created (written) in a high-level programming language, one that is (more or less) readable by people. These high-level instructions are converted into "machine language" instructions, represented in binary code, before the hardware can "run the code". When you install software, it is generally already in this machine-language, binary form.
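
For example, here is a minimal sketch in C (the file name and compiler command below are illustrative; any C compiler would do):

    #include <stdio.h>

    /* A tiny high-level program: readable by people. */
    int main(void) {
        int sum = 2 + 3;               /* high-level arithmetic */
        printf("sum = %d\n", sum);     /* high-level output request */
        return 0;
    }

A compiler translates this human-readable source (say, with a command like "gcc hello.c -o hello") into binary machine-language instructions, and it is that binary form the hardware actually runs.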

OPERATING SYSTEMS

An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function.

For hardware functions such as input and output and memory allocation, the operating system acts as an
intermediary between programs and the computer hardware,[1][2] although the application code is
usually executed directly by the hardware and frequently makes system calls to an OS function or is
interrupted by it. Operating systems are found on many devices that contain a computer – from cellular
phones and video game consoles to web servers and supercomputers.
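
As a small illustration of a program asking the OS for a service, here is a sketch in C for a POSIX system such as Linux (the message text is arbitrary):

    #include <unistd.h>    /* POSIX write() system call wrapper */
    #include <string.h>

    int main(void) {
        const char *msg = "hello via a system call\n";
        /* write() asks the operating system to copy these bytes
           to standard output (file descriptor 1); the program
           never touches the display hardware directly. */
        write(1, msg, strlen(msg));
        return 0;
    }
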
The dominant desktop operating system is Microsoft Windows, with a market share of around 82%. OS X by Apple Inc. is in second place (9.8%), and Linux is in third position (1.5%).[3] In the mobile sector (smartphones and tablets combined), based on Strategy Analytics Q3 2016 data, Android by Google is dominant with 87.5 percent, having grown 10.3 percent in one year, and iOS by Apple is second with 12.1 percent, having declined 5.2 percent in one year, while other operating systems amount to just 0.3 percent.[4] Linux is dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications.

TYPES OF OPERATING SYSTEMS

Single- and multi-tasking

A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing: the available processor time is divided between multiple processes, each of which is interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or cooperative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems such as Solaris and Linux, as well as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking. 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking.
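
The cooperative idea can be sketched in a few lines of C; this is a toy illustration, not any real operating system's scheduler, and the task names are made up:

    #include <stdio.h>

    /* Each "task" does a small unit of work and then returns,
       which is its way of yielding the processor. */
    static void task_a(void) { printf("task A runs\n"); }
    static void task_b(void) { printf("task B runs\n"); }

    int main(void) {
        void (*tasks[])(void) = { task_a, task_b };
        int ntasks = 2;

        /* A cooperative scheduler simply cycles through the tasks,
           trusting each one to return promptly. A preemptive system
           would instead use timer interrupts to force the switch. */
        for (int round = 0; round < 3; round++)
            for (int i = 0; i < ntasks; i++)
                tasks[i]();   /* runs until it voluntarily returns */
        return 0;
    }

Note the weakness this exposes: if one task never returns, nothing else ever runs, which is why a single misbehaving program could freeze 16-bit Windows and why preemptive designs took over.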

Single- and multi-user

Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem.[5] A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and it permits those users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources to multiple users.

Distributed

A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked together and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system.[6]

Templated

In distributed and cloud computing contexts, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses.[7]

Embedded

Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines, such as PDAs, with less autonomy, and they are able to run with a limited number of resources. They are very compact and extremely efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems.

Real-time

A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or on external events, while time-sharing operating systems switch tasks based on clock interrupts.
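
The priority rule an event-driven real-time scheduler applies can be sketched in C; the task names, fields, and numbers below are hypothetical, chosen only to show the selection logic:

    #include <stdio.h>

    struct task { const char *name; int priority; int ready; };

    /* Select the highest-priority task that is ready to run.
       A real RTOS applies a rule like this deterministically
       every time an event or interrupt arrives. */
    static struct task *pick_next(struct task *tasks, int n) {
        struct task *best = NULL;
        for (int i = 0; i < n; i++)
            if (tasks[i].ready &&
                (best == NULL || tasks[i].priority > best->priority))
                best = &tasks[i];
        return best;
    }

    int main(void) {
        struct task tasks[] = {
            { "logging", 1, 1 },
            { "motor-control", 9, 1 },   /* highest-priority and ready */
            { "display", 3, 0 },         /* not ready, so skipped */
        };
        struct task *next = pick_next(tasks, 3);
        if (next != NULL)
            printf("run: %s\n", next->name);
        return 0;
    }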

Library

A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries. These libraries are composed with the application and configuration code to construct unikernels – specialized, single-address-space machine images that can be deployed to cloud or embedded environments.
