
Software history

Plankalkül—the world's first complete high-level language


The world's first complete high-level language was designed in the 1940s (probably between 1941 and 1945, though the concept was first published in 1948) by the German computer pioneer Konrad Zuse, the creator of the first relay computer.

From 1941 to 1946 (at the same time as he developed his Z4 computer), Konrad Zuse worked out ideas for how his machines could be programmed in a very powerful way. The language Plankalkül was initially described in Zuse's planned Ph.D. dissertation in 1943, further developed in his 1945 work "Plankalkül. Theorie der angewandten Logistik." (also unpublished, as it was still wartime), and finally reached the public in a 1948 article (which still did not attract much feedback, and for a long time to come programming a computer would only be thought of as programming with machine code). Plankalkül was eventually published more comprehensively in a 1972 paper, and the first compiler for it was implemented as late as 1998.

In modern programming terminology, Plankalkül is a typed high-level imperative programming language with the following main features:
1. Programs are reusable functions, and functions are not recursive
2. Variables are local to functions (programs)
3. The fundamental data types are arrays and tuples of arrays, but there are also floating-point, fixed-point and complex numbers; records; hierarchical data structures; and lists of pairs.
4. The type of the variables does not need to be declared in a special header
5. There is no GOTO construct
6. Assignment operation (e.g. V1 + V2 => R1).
7. Conditional statement (e.g. V1 = V2 => R1, which means: compare the variables V1 and V2; if they are identical, assign the value true to R1, otherwise assign false. Such operations could also be applied to complicated data structures; see the sketch after this list.)
8. Possibility for defining sub-programs.
9. Possibility for defining repetition of statements (loops), WHILE construct for iteration.
10. Logical operations (predicate logic and Boolean algebra).
11. Arithmetic exception handling.
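
Plankalkül's original two-dimensional notation is hard to reproduce in plain text, but the assignment and conditional examples in items 6 and 7 can be paraphrased in a modern language. The short Python sketch below is only an illustration of the semantics described above (a computed value or comparison result flowing into R1), with the variable names V1, V2 and R1 taken from the examples; it is not actual Plankalkül syntax.

# Modern paraphrase of the Plankalkül examples above (not real Plankalkül syntax).
V1, V2 = 3, 4

# Item 6: "V1 + V2 => R1", the computed sum flows into R1.
R1 = V1 + V2          # R1 is now 7

# Item 7: "V1 = V2 => R1", compare V1 and V2 and store the Boolean result in R1.
R1 = (V1 == V2)       # R1 is now False
print(R1)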


First compiler of Grace Hopper


Rear Admiral Grace Murray Hopper (born Grace Brewster Murray in New York, on December 9th, 1906) held a Ph.D. in mathematics and devoted almost her entire life to computers and programming. She was one of the most incisive strategic futurists in the world of computing in the middle of the 20th century. Perhaps her best-known contribution to computing was the invention of the first compiler, the intermediate program that translates English-language instructions into the language of the target computer.

Hopper started her career in computing in 1943, when she entered the Computation Project at
Harvard University, to join the research team of Howard Aiken. Aiken, known to be rough-spoken,
greeted her with the words, "Where the hell have you been?", then pointed to his
electromechanical Mark I computer, saying "Here, compute the coefficients of the arc tangent series
by next Thursday."

Hopper quickly plunged in and learned to program the machine, putting together a 500-page Manual of Operations for Aiken's computer, in which she outlined the fundamental operating principles
of computing machines. Later she joined the newly formed Eckert-Mauchly Corporation, and
remained associated with its successors (Remington-Rand, Sperry-Rand, and Univac) until her official
retirement in 1971.
In 1952, Hopper completed her first compiler (for a UNIVAC computer), known as the A-0 System. As she said later, she did this because she was lazy and hoped that the programmer could return to being a mathematician.

The A-0 System actually was a set of instructions that could translate symbolic mathematical code into
machine language. In producing A-0, Hopper took all the subroutines she had been collecting over the
years and put them on a tape. Each routine was given a call number, so that the machine could find
it on the tape. As described by Hopper—"All I had to do was to write down a set of call numbers, let
the computer find them on the tape, bring them over and do the additions. This was the first
compiler."
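
Hopper's description suggests a very simple mechanism: every subroutine sits at a known call number on the tape, and a program is just a list of call numbers that the machine looks up and runs in order. The sketch below is a loose modern analogy of that idea in Python; the routines and call numbers are invented for illustration, and it is not A-0 code.

import math

# A loose analogy of the A-0 idea: subroutines stored under call numbers,
# and a "program" that is just a sequence of call numbers to fetch and run.
subroutine_tape = {
    1: lambda x: x + 1,         # call number 1: increment (invented example)
    2: lambda x: x * x,         # call number 2: square
    3: lambda x: math.sqrt(x),  # call number 3: square root
}

def run(program, value):
    # Fetch each call number from the "tape" and apply the routine in order.
    for call_number in program:
        value = subroutine_tape[call_number](value)
    return value

print(run([1, 2, 3], 3))   # ((3 + 1) squared, then square root) -> 4.0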

After the A-0, Grace Hopper and her group produced versions A-1 and A-2, improvements over the
older version. The A-2 compiler was the first compiler to be used extensively, paving the way to the
development of programming languages.

The A-0 System was hardly accepted by the establishment, but Hopper followed her philosophy of "Go ahead and do it. You can apologize later." She was disappointed: "I had a running
compiler, and nobody would touch it because, they carefully told me, computers could only do
arithmetic; they could not do programs. It was a selling job to get people to try it. I think with any new
idea, because people are allergic to change, you have to get out and sell the idea."

Hopper also originated the idea that computer programs could be written in English. She viewed
letters as simply another kind of symbol that the computer could recognize and convert into machine
code. Hopper's compilers later evolved into the FLOW-MATIC compiler, which would become the basis for the extremely important language—COBOL. FLOW-MATIC was aimed at business applications, such as
calculating payroll and automatic billing. By the end of 1956, Hopper had UNIVAC I & II
understanding twenty English-like statements using FLOW-MATIC.

Grace Murray Hopper died in Arlington, Virginia, on January 1st, 1992.

Fortran of John Backus


We have mentioned already Konrad Zuse's Plankalkül as the first high-level language in the world.
Plankalkül however, as well as other ideas and projects, remained only on paper. The first high level
programming language to be put to broad use was Fortran of John Backus.

John Warner Backus (1924-2007), a bachelor of mathematics from Columbia University, saw his first
computer in 1950, when he was hired as a programmer in the IBM Computer Center on Madison
Avenue, to take care of the Selective Sequence Electronic Calculator (SSEC), an electromechanical
computer (a hybrid of some 13000 vacuum tubes and 23000 electromechanical relays), built by IBM
in January 1948. Wallace J. Eckert, the director of IBM’s Watson Scientific Computing Laboratory,
did SSEC calculations of the moon’s orbit that would show up 20 years later in the Apollo space
program.

Programming the SSEC was a real challenge, as there was no set way of doing it. Backus spent three
years working on the SSEC, during which time he created a program called Speedcoding. The program
was the first to include a scaling factor, which allowed both large and small numbers to be easily
stored and manipulated.

In late 1953, Backus wrote a memo to his boss that outlined the design of a programming language for
IBM’s new computer, the IBM 701, which, only the year before, had launched the company into a
brand new world of electronic data processing. IBM approved Backus's proposal, and in 1954 he was put in charge of a small team of four people at the IBM Watson Scientific Laboratory. When IBM launched a new computer in May 1954, the IBM 704 Data Processing System (an advanced computer with high-speed magnetic core memory, a magnetic drum storage unit and a tape device holding up to 5 million characters), the group switched to it.
The IBM 704 also had a built-in scaling factor (automatic floating-point operation) and index registers, which significantly reduced operating time. The inefficient computer programs of the time, however, would hamper the 704's high performance, which is why Backus wanted to design not only a better language, but one that would be easier and faster for programmers to use when working with the machine. He wanted it to accept a concise formulation of a problem in terms of mathematical notation and to automatically produce a high-speed 704 program for its solution. Thus he decided to
create a device that would translate the new language into something the machine could understand.
This device, known as a translator, would eliminate the laborious hand-coding that characterized
computer programming at the time. It contained an element known as a parser, which identified the
various components of the program and translated them from a high-level language (one that people
understand) into the binary language of the computer.
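
To make the role of such a translator concrete, the toy example below lets Python's own compiler do for a one-line formula what early translators did for whole programs: turn mathematical notation into a list of low-level instructions. It is only an analogy to illustrate the idea of a parser and translator, not how the actual FORTRAN system worked.

import dis

def formula(a, b):
    return a + b * 2      # high-level mathematical notation...

dis.dis(formula)          # ...printed as the low-level instructions it becomes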

In November 1954, Backus and his team published the first formal proposal for the language—
Preliminary Report, Specifications for the IBM Mathematical FORmula TRANslating System,
FORTRAN. At the time, Backus anticipated completion of the compiler in six months. Instead, it took almost three years, and the compiler was released commercially in 1957.

When completed, the Fortran compiler consisted of some 25000 lines of machine code, stored on a
magnetic tape. A copy of the program was provided with every IBM 704 installation, along with a 51-
page manual. The first versions of the program were understandably buggy, but later releases refined the compiler and eliminated the bugs.

The new language was designed primarily for mathematicians and scientists, and it remained the preeminent programming language in these areas for more than three decades. It was the first widely used language that allowed people to work with their computers without having to understand how the machines actually work, and without having to learn the machine's assembly language.

After Fortran, Backus turned his focus to other elements of computer programming. In 1959, in order
to express the grammar of the new ALGOL language, he developed a notation (a formal way to describe formal languages) which would later be called the Backus-Naur Form. It describes grammatical
rules for high-level languages, and has been adapted for use in a number of languages. Backus-Naur
Form quickly became the de facto worldwide standard for publishing algorithms. This contribution
helped Backus win the Turing Award.
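
To give a flavor of the kind of grammatical rules BNF expresses, the sketch below states a tiny textbook-style grammar for unsigned integers as BNF comments and follows its two rules directly in Python. The grammar is a generic example, not one taken from the ALGOL report.

# A tiny grammar in BNF (Backus-Naur Form), textbook-style:
#   <integer> ::= <digit> | <digit> <integer>
#   <digit>   ::= "0" | "1" | ... | "9"
DIGITS = set("0123456789")

def is_integer(s: str) -> bool:
    # True if s can be derived from <integer> using the two rules above.
    if len(s) == 0:
        return False
    if len(s) == 1:
        return s in DIGITS                           # <integer> ::= <digit>
    return s[0] in DIGITS and is_integer(s[1:])      # <integer> ::= <digit> <integer>

print(is_integer("1959"), is_integer("19a9"))        # True False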

In the 1970s, Backus worked on finding better programming methods, and developed what he called
a function-level language, or FP (for functional programming).

LISP of John McCarthy


John McCarthy (1927-2011) is a legendary figure in the fields of computer science and AI (artificial intelligence). Primarily known as the creator of one of the longest-lived computer languages in use—LISP (in 1958), McCarthy was one of the first people to take an interest in AI (from 1948 on), and he coined the term in 1955. He also developed the concept of timesharing in the late fifties and early sixties, and made substantial contributions to the theory of computation and knowledge representation.

John McCarthy was born in Boston, Massachusetts, on September 4, 1927, to the immigrant family of
John Patrick McCarthy (Irish Catholic) and Ida Glatt-McCarthy (1893-1957) (Lithuanian Jewish).

When the Great Depression started in the beginning of 1930s, McCarthy's parents lost their house,
and the family (which now included a second child, Patrick), became briefly peripatetic. They lived for
a short while in New York and then in Cleveland, before finally settling in Los Angeles (in part because
of John's respiratory problems), where the senior John McCarthy was hired as a labor organizer for
the Amalgamated Clothing Workers and developed a hydraulic orange juice squeezer. Ida McCarthy worked as a journalist and had been active in the women's suffrage movement; both of John's parents were active members of the US Communist Party.

Like many child prodigies, John McCarthy was partly self-educated. Due to childhood illness
(respiratory problems), he began school a year late, but he quickly made up the time on his own,
skipped several grades, and wound up graduating from Belmont High School in Los Angeles two years
early, in 1943.

As a teenager McCarthy developed an interest in mathematics and decided he wanted to go to Caltech—the California Institute of Technology. In his application to Caltech he wrote a one-sentence statement of purpose: "I intend to be a professor of mathematics."

Receiving a B.S. in Mathematics in 1948, McCarthy initially continued his graduate studies at Caltech, but in 1949 moved to Princeton University, where he received a Ph.D. in Mathematics in 1951.

McCarthy remained as an instructor at Princeton from 1951 until 1953, when he came to Stanford as an assistant professor. In 1955, he left for Dartmouth and then for MIT before returning to Stanford in 1962 as a full professor of computer science, where he stayed until his retirement almost 40 years later.

McCarthy's idea for AI originated in September 1948, when he went to the Hixon Symposium on Cerebral Mechanisms in Behavior, a conference that brought together leading researchers in different areas related to cognitive science, including the famous psychologist Karl Lashley, as well as mathematicians Alan Turing and Claude Shannon. As McCarthy listened to the discussions comparing computers and the brain, he had a watershed moment. From that time on, his chief interests related to the development of machines that could think like people.

In the 1950s, McCarthy was not the only researcher dabbling in what would be called artificial intelligence. There were several scientists (including Marvin Minsky, Herbert Simon, Allen Newell and
Oliver Selfridge) working in this field. What distinguished McCarthy's work was his emphasis on using
mathematical logic both as a language for representing the knowledge that an intelligent machine
should have, and as a means for reasoning with that knowledge. This emphasis on mathematical logic
was to lead to the development of the logicist approach to artificial intelligence, as well as to the
development of the computer language LISP in 1958.

The other difference between McCarthy's approach to AI and that of others was that previous work in AI had
focused on getting a computer to replicate activities that are challenging for humans, such as playing
chess and proving theorems of mathematics. In contrast, McCarthy was concerned with mundane and
seemingly trivial tasks, such as constructing a plan to get to the airport.

McCarthy maintained that there were aspects of the human mind that could be described precisely
enough to be replicated: "The speeds and memory capacities of present computers may be insufficient
to simulate many of the higher functions of the human brain," he wrote in 1955, "but the major
obstacle is not lack of machine capacity but our inability to write programs taking full advantage of
what we have."

The term "artificial intelligence" was proposed by McCarthy in 1955, when he began writing (with
Minsky, Shannon, and Nathaniel Rochester), the proposal to fund the first conference dedicated to the
topic—the famous Dartmouth conference on artificial intelligence, which took place in the summer of
1956.

In 1961, McCarthy was the first to publicly suggest that computer timesharing technology might lead
to a future in which computing power and even specific applications could be sold through the utility
business model (just like water or electricity). He claimed that "computing may some day be organized
as a public utility". This idea of a computer or information utility became very popular in the late
1960s, but faded by the mid-1990s. In the early 2000s, however, the idea resurfaced in new forms (cloud services).
In 1966, McCarthy hosted a series of four simultaneous computer chess matches carried out via telegraph against rivals in the Soviet Union. Although he helped pioneer computer chess, he came to think the game was a distraction for programmers.

During the 1970s McCarthy presented a paper on buying and selling by computer, prophesying what
has become known as e-commerce. He also invited a local computer hobby group, the Homebrew Computer Club, to meet at Stanford. Its members included Steve Jobs and Steve Wozniak, who would later go on to found Apple Inc. However, his own interest in developing time-sharing systems
led him to underestimate the potential of personal computers. When the first PCs emerged in the
1970s he dismissed them as "toys".

In 1958, McCarthy specified LISP (the name derives from "LISt Processing")—the second-oldest high-level programming language in widespread use today (only Fortran is older, by one year). Like
Fortran, Lisp has changed a great deal since its early days, and a number of dialects have existed over
its history. Today, the most widely known general-purpose Lisp dialects are Common Lisp, Scheme,
and Clojure.

Linked lists are one of Lisp's major data structures, and Lisp source code is itself made up of lists.

Originally created as a practical mathematical notation for computer programs, LISP is based on the
notation of Alonzo Church's lambda calculus. It quickly became the favored programming language
for AI research. As one of the earliest programming languages, Lisp pioneered many ideas in
computer science, including tree data structures, automatic storage management, dynamic typing,
and the self-hosting compiler.
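
The remark that Lisp source code is itself made up of lists can be illustrated with nested lists in a modern language: a Lisp expression such as (+ 1 (* 2 3)) is just a list whose first element is the operator. The toy evaluator below, written in Python, only illustrates that idea of code represented as data; it is not Lisp itself.

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    # A tiny Lisp-like expression is a nested list: [operator, argument, argument].
    if isinstance(expr, list):
        op, *args = expr
        return OPS[op](*[evaluate(a) for a in args])
    return expr                      # a plain number evaluates to itself

print(evaluate(["+", 1, ["*", 2, 3]]))   # the Lisp expression (+ 1 (* 2 3)) -> 7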

In 1982 McCarthy appears to have originated the idea of the space fountain, a form of "space
elevator", a tremendously tall tower extending up from the ground.

John McCarthy was a holder of many awards, including the Turing Award (1971), the first IJCAI
Award for Research Excellence (1985), the Kyoto Prize (1988), the National Medal of Science (1990),
and the Benjamin Franklin Medal in Computer and Cognitive Sciences (2003).

John McCarthy married three times. With his first wife, Martha Coyote, he had two daughters. His
second wife, Vera Watson (1932-1978), a programmer and mountaineer, the first woman to complete
a solo ascent of Aconcagua in South America, died in October, 1978, attempting to scale Annapurna,
Nepal. He was survived by his third wife, Carolyn Talcott, and their son.

John McCarthy died on October 24, 2011, in Palo Alto, CA, of heart failure.

OXO game
In 1952 Alexander Shafto Douglas (21 May 1921 – 29 April 2010), known as "Sandy", a graduate
student in Cambridge, was writing his Ph.D. thesis on human-computer interaction for the University
of Cambridge. As part of his thesis, he decided to create a computer game on the University's EDSAC computer (one of the first stored-program computers in the world), thus creating the first graphical computer game, OXO (also known as Noughts and Crosses, an old non-computer game that people
can play with pen and paper).

The Electronic Delay Storage Automatic Calculator (EDSAC) was an early (1949) British vacuum-tube computer (it contained 3000 tubes), executing some 600 instructions per second. It used 32 mercury delay lines (or "long tanks"), each of which stored 32 words of 18 bits, so the total memory capacity of the EDSAC was the equivalent of about 2 kilobytes. Viewed another way, each tank stored 16 long words of 35 bits. A useful feature of this serial memory technology was that it was possible to display the contents of the store on cathode-ray tube (CRT) monitors. EDSAC used 3 CRTs, one of which displayed the contents of one of the long tanks. Thus, the display was a matrix of 35 x 16 dots.
The input of EDSAC was via five-hole punched tape (thus the actual program for the OXO game was punched on a strip of paper) and output was via a teleprinter or CRT.

To play the OXO game, the player would enter input (where he wanted to place his nought or cross) using a rotary telephone controller, and the output was displayed on the computer's dot-matrix cathode-ray tube. Each game was played against an artificially intelligent opponent (EDSAC), and the player determined who played first (EDSAC or USER).

The text output of the OXO game was something like:

9 8 7 NOUGHTS AND CROSSES
6 5 4 BY
3 2 1 A S DOUGLAS, C.1952

LOADING PLEASE WAIT...

EDSAC/USER FIRST (DIAL 0/1):1


DIAL MOVE:6
DIAL MOVE:1
DIAL MOVE:2
DIAL MOVE:7
DIAL MOVE:9
DRAWN GAME...

EDSAC/USER FIRST (DIAL 0/1):
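
The board numbering visible in the transcript above (top row 9 8 7, middle row 6 5 4, bottom row 3 2 1) is enough to sketch how a dialled digit selects a cell. The snippet below is a modern toy reconstruction of that mapping only, written in Python; it is not Douglas's EDSAC program, and the second move shown is purely illustrative.

# Toy reconstruction of the OXO board numbering shown in the transcript:
#   9 8 7
#   6 5 4
#   3 2 1
LAYOUT = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]
board = {n: None for row in LAYOUT for n in row}

def dial_move(digit, mark):
    board[digit] = mark              # a dialled digit places a nought or cross

def show():
    for row in LAYOUT:
        print(" ".join(board[n] or str(n) for n in row))

dial_move(6, "X")                    # the first dialled move in the transcript
dial_move(5, "O")                    # an invented reply, for illustration
show()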


Alexander Douglas's thesis was a success, earning him his Ph.D. and starting his career in science; however, he would never program another video or computer game.

The First Video Game of William Higinbotham


The American physicist William (Willy) Alfred Higinbotham (October 25, 1910—November 10, 1994)
is credited with creating the first computer video game to display motion and allow interactive control
with handheld controllers in 1958.

William A. Higinbotham earned an undergraduate degree from Williams College in 1932 and
continued his studies at Cornell University. During WW2 he went to work on the radar system at MIT.
In the later years of the war he worked at Los Alamos National Laboratory (where the first atomic
bomb was developed) and headed the lab's electronics group.

In 1947 Higinbotham entered the Brookhaven National Laboratory in Upton, New York as a senior
physicist and later became head of the Instrumentation Division. When the Lab organized its annual visitors' days in October 1958, Higinbotham realized how static and non-interactive most science exhibits were at that time and tried to change that by introducing a game as an element of entertainment. He later wrote that it might liven up the place to have a game that people could play, and which would convey the message that our scientific endeavors have relevance for society.

Higinbotham decided to create a game, Tennis for Two, and despite the fact that he had only two weeks for the purpose, he managed to make it. In 1983 he recalled: "It took me about two hours to rough out the design and a couple of weeks to get it debugged and working. It didn't take long and it was a big hit."

Tennis for Two was first introduced on October 18, 1958, at one of the Lab's annual visitors' days. Two people played the electronic tennis game with separate controllers that connected to an analog computer and used an oscilloscope for a screen.

Visitors playing Tennis for Two saw a two-dimensional, side view of a tennis court on the oscilloscope
screen, which used a cathode-ray tube. The ball, a brightly lit, moving dot, left trails as it bounced to
alternating sides of the net. Players served and volleyed using controllers with buttons and rotating
dials to control the angle of an invisible tennis racquet’s swing.

Liven up the place it did! Hundreds of visitors lined up for a chance to play the pioneering electronic
tennis game. And Higinbotham could not have dreamed that his game would be a forerunner to an entire industry that, less than fifty years later, would account for $9.5 billion in sales in 2006 and 2007
in the USA alone.

Higinbotham had more than 20 patents on electronic circuits to his credit, but he never patented his
video game, which associates said was the forerunner of the early 1970's video game "Pong."
Higinbotham, discussing in 1983 his decision not to seek the patent, said, "It wasn't something the
Government was interested in" and that he "didn't think it was worth it."

Logic Theorist
The first artificial intelligence program (the first program specially engineered to mimic the problem
solving skills of a human being) was created in 1955-56 by Herbert Simon, Allen Newell and John
Shaw.

Herbert Alexander Simon (1916–2001), a Nobel Prize (in Economics) winner from 1978, was an
American political scientist, economist, sociologist, and psychologist, whose research ranged across
the fields of cognitive psychology, cognitive science, computer science, public administration,
economics, management, philosophy of science, sociology, and political science. With almost a
thousand highly cited publications, he was one of the most influential social scientists of the 20th century.

Simon was consulting for the RAND Corporation in the early 1950s, and when he saw a printer there typing out a map, using ordinary letters, digits and punctuation as symbols, he realized that a machine that
could manipulate symbols could just as well simulate decision making and possibly even the process
of human thought.

The program that printed the map had been written by Allen Newell (1927-1992), a RAND
Corporation scientist studying logistics and organization theory. For Newell, the decisive moment came in 1954, when he watched a presentation on pattern matching and suddenly understood how the interaction
of simple, programmable units could accomplish complex behavior, including the intelligent behavior
of human beings.

Newell and Simon began to talk about the possibility of teaching machines to think. Their first project
was a program that could prove mathematical theorems like the ones used in Russell and
Whitehead's Principia Mathematica. Newell enlisted the help of a computer programmer from
RAND, John Clifford Shaw (1922–1991), to develop the program.

In the summer of 1956, John McCarthy, Marvin Minsky and Claude Shannon organized a conference
at Dartmouth College on the subject of what they called "artificial intelligence" (a term coined by
McCarthy for the occasion). Simon and Newell proudly presented the group with their Logic Theorist
and were somewhat surprised when the program received a lukewarm reception. Later Simon confided: "They didn't want to hear from us, and we sure didn't want to hear from them: we had something to show them! … In a way it was ironic because we already had done the first example of what they were after; and second, they didn't pay much attention to it."

Cliff Shaw coded the Logic Theorist using an early version of the IPL (Information Processing Language) programming language, running on a computer at RAND's Santa Monica research facility.

The Logic Theorist established the field of heuristic programming and soon proved 38 of the first 52
theorems in chapter 2 of the Principia Mathematica. The proof of one of the theorems was
surprisingly more elegant than the proof produced laboriously by hand by Russell and Whitehead.
Simon was able to show the new proof to Bertrand Russell himself who responded with delight.
A detailed description of the Logic Theorist can be found in a RAND memorandum from 1963.

Spacewar game
Spacewar was not the first computer game ever written (let's mention only OXO of Alexander Douglas
and Tennis for Two of William Higinbotham), but it has an unquestioned place in the dawn of the
computer age and the history of computer games. Spacewar was the first to gain widespread recognition, and it is generally regarded as the first of the "shoot 'em up" genre.

Work on the Spacewar game was begun in 1961 by the young MIT computer programmer Steve "Slug" Russell (born 1937), who was inspired by the writings of the early science fiction author Edward Elmer Smith.

Russell wrote Spacewar on a PDP-1, an early DEC interactive minicomputer (the first commercial time-sharing computer), which used a cathode-ray-tube display and keyboard input.

Spacewar was a two-player game, with each player taking control of a spaceship and attempting to
destroy the other. A massive star in the center of the screen pulls on both ships (called "the needle"
and "the wedge") and requires maneuvering to avoid falling into it. In an emergency, a player can
enter hyperspace to return at a random location on the screen, but only at the risk of exploding if it is
used too often (there was an increasing probability of the ship exploding with each use).
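
Two of the mechanics described above, the central star pulling on both ships and hyperspace growing riskier with each use, can be sketched in a few lines. The Python sketch below is only illustrative; the constants (gravitational strength, risk per use, screen size) are invented, not taken from the original game.

import math, random

def gravity_accel(ship_pos, star_pos=(0.0, 0.0), g=1000.0):
    # Inverse-square pull of the central star on a ship (constants illustrative).
    dx, dy = star_pos[0] - ship_pos[0], star_pos[1] - ship_pos[1]
    dist = math.hypot(dx, dy)
    a = g / dist ** 2
    return (a * dx / dist, a * dy / dist)

def hyperspace(uses, risk_per_use=0.2, width=1024, height=1024):
    # Jump to a random position; each previous use raises the chance of exploding.
    exploded = random.random() < uses * risk_per_use
    new_pos = None if exploded else (random.uniform(0, width), random.uniform(0, height))
    return new_pos, uses + 1

print(gravity_accel((100.0, 0.0)))   # acceleration pointing toward the star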

Steve Russell needed about 200 man-hours to write the first version of Spacewar, and he was assisted
by his friends from the fictitious "Hingham Institute": Martin Graetz and Wayne Wiitanen. Additional
features were later developed by Dan Edwards and Peter Samson.

The game spread rapidly to other programmers, who began coding their own variants, including
features such as space mines, cloaking devices, and even a first-person perspective version, played
with two screens, that simulated each pilot's view out of the cockpit. The game became extremely popular and was widely ported to other computer systems.

Sketchpad of Ivan Sutherland


Ivan Sutherland is considered by many to be the creator of Computer Graphics and an Internet
pioneer. Starting with his Ph.D. thesis, named Sketchpad, which is one of the most influential
computer programs ever written by an individual, Sutherland has contributed numerous ideas to the
study of Computer Graphics and Computer Interaction. He introduced concepts such as 3-D computer
modeling, visual simulations, computer aided design (CAD), virtual reality, etc.

Ivan Edward Sutherland was born in Hastings, Nebraska, on May 16, 1938. He was immersed in learning from a young age. His father, a Ph.D. in Civil Engineering, as well as his mother, a teacher, led Sutherland to appreciate learning. His favorite subject in high school was geometry; he said that "…if I can picture possible solutions, I have a much better chance of finding the right one." Sutherland has always described himself as a visual thinker, hence his interest in computer graphics.

His first computer experience was with the famous computer Simon of Edmund Berkeley. Ivan's first
big computer program was to make Simon divide. To make division possible, he added a conditional
stop to Simon's instruction set. This program was a great accomplishment: it was the longest program ever written for Simon, a total of eight pages of paper tape.

Sutherland went on to study at Carnegie Mellon University, where he earned a Bachelor's degree in Electrical Engineering, and then earned an M.S., also in Electrical Engineering, from Caltech (the California Institute of Technology). For his Ph.D., Sutherland went to MIT (the Massachusetts Institute of Technology), where he studied under Claude Shannon and Marvin Minsky and developed his revolutionary thesis, "Sketchpad: A Man-machine Graphical Communications System", the first Graphical User Interface.
Sketchpad was influenced by the conceptual Memex of Vannevar Bush, as it was envisioned in his
fundamental paper "As We May Think". Sketchpad, in turn, influenced Douglas Engelbart's NLS (oN-
Line System).

Sketchpad ran on the Lincoln TX-2 computer, an innovative machine designed in 1956 (it had a large
amount of memory for its time: a vacuum-tube-driven core of 64K words, a faster, transistor-driven
core of 4K words, and a paper-tape reader, and it could also use magnetic tape as auxiliary storage). TX-2
was an "on-line" computer (at that time most computers would run "batches" of jobs and were not
interactive), used to investigate the use of Surface Barrier transistors for digital circuits. TX-2 included
a nine-inch CRT and a light pen, which first gave Sutherland his idea. He imagined that one should be
able to draw on the computer. Sketchpad was able to do just this, creating highly precise drawings,
and also introduced important innovations such as memory structures to store objects and the ability
to zoom in and out.

Sketchpad uses drawing as a novel communication medium for a computer. The system contains
input, output, and computation programs which enable it to interpret information drawn directly on a
computer display. It was a general purpose system and has been used to draw electrical, mechanical,
scientific, mathematical, and animated drawings. Sketchpad has shown the most usefulness as an aid
to the understanding of processes, such as the notion of linkages, which can be described with
pictures. Sketchpad also makes it easy to draw highly repetitive or highly accurate drawings and to
change drawings previously drawn with it.

A Sketchpad user sketches directly on a computer display with a "light pen." The light pen is used both
to position parts of the drawing on the display and to point to them to change them. A set of push
buttons controls the changes to be made such as "erase", "move", etc.

Information sketched can include straight line segments and circle arcs. Arbitrary symbols may be
defined from any collection of line segments, circle arcs, and previously defined symbols. A user may
define and use as many symbols as he wishes. Any change in the definition of a symbol is at once seen
wherever that symbol appears.

Sketchpad stores explicit information about the topology of a drawing. If the user moves one vertex of
a polygon, both adjacent sides will be moved. If the user moves a symbol, all lines attached to that
symbol will automatically move to stay attached to it. The topological connections of the drawing are
automatically indicated by the user as he sketches. Since Sketchpad is able to accept topological
information from a human being in a picture language perfectly natural to the human, it can be used
as an input program for computation programs which require topological data, e.g., circuit simulators.

Sketchpad itself is able to move parts of the drawing around to meet new conditions which the user
may apply to them. The user indicates conditions with the light pen and push buttons. For example, to
make two lines parallel, he successively points to the lines with the light pen and presses a button. The
conditions themselves are displayed on the drawing so that they may be erased or changed with the
light pen language. Any combination of conditions can be defined as a composite condition and
applied in one step.

It is easy to add entirely new types of conditions to Sketchpad's vocabulary. Since the conditions can
involve anything computable, Sketchpad can be used for a very wide range of problems. It has been
used, for example, to find the distribution of forces in the members of truss bridges drawn with it.

Sketchpad drawings are stored in the computer in a specially designed "ring" structure. The ring
structure features rapid processing of topological information with no searching at all. The basic operations used in Sketchpad for manipulating the ring structure are described in Sutherland's thesis.
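
The ring structure described above, in which every element points directly to its neighbours so that related items can be visited without searching, corresponds to what would now be called a circular doubly linked list. The Python sketch below shows that general data structure as a modern analogy; it is not Sutherland's actual implementation.

class RingNode:
    # One element of a circular doubly linked "ring".
    def __init__(self, value):
        self.value = value
        self.prev = self.next = self          # a single node is a ring by itself

    def insert_after(self, value):
        node = RingNode(value)
        node.prev, node.next = self, self.next
        self.next.prev = node
        self.next = node
        return node

    def members(self):
        # Walk the ring once; no searching is needed to find related elements.
        node, out = self, []
        while True:
            out.append(node.value)
            node = node.next
            if node is self:
                return out

# e.g. a point and the line segments attached to it, kept in one ring
ring = RingNode("point P")
ring.insert_after("segment A").insert_after("segment B")
print(ring.members())    # ['point P', 'segment A', 'segment B']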

Sutherland's contribution is not only the revolutionary Sketchpad, however. In 1964 he replaced J. C. R. Licklider as the head of the US Defense DARPA's Information Processing Techniques Office (IPTO), the motive force behind the Internet.

In 1968, together with his student Bob Sproull, Sutherland created the first virtual reality and
augmented reality head-mounted display system, named The Sword of Damocles.
Among his students were the famous computer scientist Alan Kay; Henri Gouraud, who devised the Gouraud shading technique; and Frank Crow, who developed antialiasing methods.

His company Evans and Sutherland (founded together with his friend David Evans) has done
pioneering work in the field of real-time hardware, accelerated 3D computer graphics, and printer
languages. Former employees of the company included the future founders of Adobe (John Warnock) and Silicon Graphics (Jim Clark).

Sutherland received the Turing Award from the Association for Computing Machinery in 1988 for the
invention of Sketchpad. He is a member of the National Academy of Engineering, as well as the
National Academy of Sciences among many other major awards.

Unix of Ken Thompson


At the end of the 1960s, Kenneth (Ken) Thompson, a young engineer at AT&T Bell Labs, worked on the Multics operating system project. Multics (Multiplexed Information and Computing Service) was an experimental operating system for the GE-645 mainframe, developed in the 1960s by the Massachusetts Institute of Technology, Bell Labs, and General Electric. It introduced many innovations but had many problems, and at the end of the 1960s Bell Labs, frustrated by the slow progress and difficulties, pulled out of the project. Thompson, with the help of his colleagues Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, decided to experiment with some Multics concepts and to redo the system on a much smaller scale. Thus in 1969 the idea of the now ubiquitous Unix was born.

While Ken Thompson still had access to the Multics environment, he wrote simulations for the new
file and paging system on it. Later the group continued the work on blackboards and scribbled notes. Also in 1969, Thompson developed a very attractive game, Space Travel, first written on Multics, then transliterated into Fortran for GECOS, and finally rewritten for a little-used PDP-7 at Bell Labs. He then decided to use the same PDP-7 for the implementation of the first Unix. On this PDP-7, using its assembly language, the team of researchers (initially without financial support from Bell Labs) led by Thompson and Ritchie developed a hierarchical file system, the concepts of computer processes and
device files, a command-line interpreter, and some small utility programs.

The name Unics was coined in 1970 by team member Brian Kernighan as a play on the name Multics. Unics (Uniplexed Information and Computing System) could eventually support multiple simultaneous users, and the name was later shortened to Unix.

Structurally, the file system of PDP-7 Unix was nearly identical to today's; for example, it had the following (a toy model in modern code follows the list):
1. An i-list: a linear array of i-nodes each describing a file. An i-node contained less than it does now,
but the essential information was the same: the protection mode of the file, its type and size, and the
list of physical blocks holding the contents.
2. Directories: a special kind of file containing a sequence of names and the associated i-number.
3. Special files describing devices. The device specification was not contained explicitly in the i-node,
but was instead encoded in the number: specific i-numbers corresponded to specific files.
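
A toy model of the structures just listed, with an i-list of i-nodes indexed by i-number and directories as tables of name-to-i-number pairs, might look like the Python sketch below. The field names and example entries are simplified inventions for illustration, not real Unix code.

from dataclasses import dataclass, field

@dataclass
class INode:                       # one entry in the i-list
    mode: str                      # protection mode of the file
    kind: str                      # "file", "directory" or "device"
    size: int = 0
    blocks: list = field(default_factory=list)   # physical blocks holding the contents

ilist = {                          # the i-list: i-number -> i-node
    1: INode("rwxr-xr-x", "directory"),
    2: INode("rw-r--r--", "file", size=120, blocks=[7, 8]),
    3: INode("rw-rw-rw-", "device"),              # a "special file" for a device
}

directories = {                    # a directory maps names to i-numbers
    1: {"readme": 2, "tty": 3},
}

def lookup(dir_inumber, name):
    # Resolve a name in a directory to the corresponding i-node.
    return ilist[directories[dir_inumber][name]]

print(lookup(1, "readme"))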

In 1970 Thompson and Ritchie wanted to use Unix on a much larger machine than the PDP-7, and traded the promise of adding text-processing capabilities to Unix for some financial support from Bell, porting the code to a PDP-11/20 machine. Thus, for the first time, in 1970 the Unix operating system was officially named and ran on the PDP-11/20. A text-formatting program called roff and a text editor were added; all three were written in PDP-11/20 assembly language. Bell Labs used this initial "text
processing system", made up of Unix, roff, and the editor, for text processing of patent applications.
Roff soon evolved into troff, the first electronic publishing program with a full typesetting capability.

In 1972, Unix was rewritten in the C programming language, contrary to the general notion at the time "that something as complex as an operating system, which must deal with time-critical events, had to be written exclusively in assembly language" (Unix was not the first OS written in a high-level language, though; the Burroughs B5000 system software from 1961 preceded it). The C language was created by Ritchie as an improved version of the B language, which Thompson had derived from Martin Richards's BCPL. The
migration from assembly language to the higher-level language C resulted in much more portable
software, requiring only a relatively small amount of machine-dependent code to be replaced when
porting Unix to other computing platforms.

AT&T made Unix available to universities and commercial firms, as well as the United States
government, under licenses. The licenses included all source code including the machine-dependent
parts of the kernel, which were written in PDP-11 assembly code. Copies of the annotated Unix kernel
sources circulated widely in the late 1970s in the form of a much-copied book, which led to
considerable use of Unix as an educational example. At some point, ARPA (the Advanced Research Projects Agency) adopted Unix as a standard platform for the Arpanet (the predecessor of the Internet) community.

During the late 1970s and early 1980s, the influence of Unix in academic circles led to large-scale adoption of Unix (particularly of the BSD version, originating from the University of California, Berkeley) by many commercial vendors, resulting in systems such as Solaris, HP-UX and AIX. Today, in addition to
certified Unix systems such as those already mentioned, Unix-like operating systems such as Linux
and BSD descendants (FreeBSD, NetBSD, and OpenBSD) are commonly encountered.

BASIC of John Kemeny and Thomas Kurtz


In 1962, John George Kemeny (born János György Kemény) (1926–1992), chairman of the Dartmouth College Mathematics Department, and his colleague Thomas Eugene Kurtz (b. 1928) submitted a grant proposal to the NSF for the development of a new time-sharing system, with the aim of providing easy access to computing facilities for all members of the college. Its implementation began in 1963 by a student team under the direction of Kemeny and Kurtz. On May 1, 1964, the system, named the Dartmouth Time-Sharing System, or DTSS for short, originally implemented to run on a GE-200 series computer (the GE-200 series was a family of small mainframe computers of the 1960s, manufactured by General Electric), began operations and remained in operation until the end of 1999!

Having removed one of the main barriers to computer use, Kurtz and Kemeny went on to simplify the
user interface, so that a student could essentially learn enough to use the system in no time. But
writing programs in the computer languages then in use was quite a challenging task. Kurtz initially tried to simplify certain existing languages, namely Algol and FORTRAN, but eventually decided together with Kemeny that a new, simplified programming language was needed. The resulting programming language was called BASIC (an acronym for Beginner's All-purpose Symbolic Instruction Code), and it became one of the most widely used languages in the world.

The BASIC language was initially based on FORTRAN II, with some influences from ALGOL 60 and
with additions to make it suitable for timesharing systems like DTSS. Initially, BASIC concentrated on
supporting straightforward mathematical work, with matrix arithmetic support from its initial
implementation as a batch language and full string functionality being added by 1965.

The Golden Era of BASIC came with the introduction of the first microcomputers in the mid-1970s.
BASIC had the advantage that it was fairly well known to the young designers and computer hobbyists
who took an interest in microcomputers.

In 1983, Kemeny, Kurtz along with several others formed True BASIC, Inc., with the intention of
creating a personal computer version of BASIC for educational purposes.

BASIC is a very powerful language as a tool for the novice programmer. BASIC allows for a wide range of applications, and it has many versions. For example, to write a program that prints the phrase "Hello World" endlessly, one has to enter only two lines of code:

10 PRINT "Hello World!"
20 GOTO 10

VisiCalc of Dan Bricklin and Bob Frankston
In the spring of 1978, a Harvard Business School student, Dan Bricklin, came up with the idea for an interactive visible calculator: the program (called VisiCalc) that would later be called the first "killer app" of the computer era.

Bricklin certainly was not the inventor of the electronic spreadsheet. The first known ideas for such a program date from 1961, when professor Richard Mattessich pioneered the development of computerized spreadsheets for use in business accounting. Then in 1969 Rene Pardo and Remy Landau co-invented "LANPAR" (LANguage for Programming Arrays at Random), an electronic spreadsheet-type application which was used for budgeting at Bell Canada, AT&T, the Bell operating companies, and General Motors. The work of Mattessich, Pardo and Landau, and that of other developers of spreadsheets on mainframe computers, probably had no influence on Bricklin, however. Thus, a history of the modern era of microcomputer-based electronic spreadsheets should begin with VisiCalc.

Daniel Singer Bricklin was born on 16 July, 1951, in Philadelphia, USA, where he attended Akiba
Hebrew Academy during his high school years. He then received a B.S. in electrical engineering/computer science from MIT (the Massachusetts Institute of Technology), before going to Harvard University for an MBA in 1977.

As he later remembered, sitting in his room: "...I would daydream. 'Imagine if my calculator had a ball in its back, like a mouse...' (I had seen a mouse previously, I think in a demonstration at a conference by Doug Engelbart, and maybe the Alto). And '...imagine if I had a heads-up display, like in a fighter plane, where I could see the virtual image hanging in the air in front of me. I could just move my mouse/keyboard calculator around on the table, punch in a few numbers, circle them to get a sum, do some calculations, and answer "10% will be fine!"' (10% was always the answer in those days when we couldn't do very complicated calculations...)."

Later, in the summer of 1978, between the first and second years of the MBA program, while riding a bike along a path on Martha's Vineyard, he decided that he wanted to pursue this idea and create "a real product to sell after I graduated."

So in the spring of 1978 Bricklin tried prototyping the product's display screen in Basic on a video terminal connected to the Business School's timesharing system. His hope for using a mouse was replaced in the first personal computer prototype in the early fall of 1978 by the game paddle of the Apple ][ (a dial one could turn to move game objects back and forth). One could move the cursor left or right, push the "fire" button, and then turning the paddle would move the cursor up and down. As Bricklin recalled, "The R-C circuit or whatever in the Apple ][ was too sluggish and my pointing too imprecise to accurately position the cursor that way, so I switched to the two arrow keys of the Apple ][ keyboard (it only had 2) and used the space bar instead of the button to switch from horizontal movement to vertical."

The first PC prototype of VisiCalc was created over a weekend on an Apple ][ (using Apple Integer Basic) borrowed for the purpose from a friend, Dan Fylstra, later his publisher. It did not scroll yet, but it had the columns and rows and some arithmetic.
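
"Columns and rows and some arithmetic" is the core of any spreadsheet, and the recalculation idea can be shown in a few lines: cells hold either values or formulas that refer to other cells. The Python sketch below is only a schematic illustration of that idea, with invented cell contents; it is not how VisiCalc was implemented.

import re

# Cells hold plain numbers or formulas (strings starting with "=") referring to other cells.
cells = {
    "A1": 120,
    "A2": 80,
    "A3": "=A1+A2",       # a formula cell
    "B1": "=A3*0.1",      # "10% will be fine!"
}

def value(ref):
    # Recalculate a cell, following formula references recursively.
    content = cells[ref]
    if isinstance(content, str) and content.startswith("="):
        expr = re.sub(r"[A-Z]\d+", lambda m: str(value(m.group())), content[1:])
        return eval(expr)                  # acceptable in a toy example
    return content

print(value("B1"))    # (120 + 80) * 0.1 -> 20.0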

Bricklin then decided to recruit a more experienced programmer to produce a real, assembler version of the program (first for the MOS Technology 6502 microprocessor used in the Apple ][). Thus he called his MIT colleague Bob Frankston to build production code (faster speed, better arithmetic, scrolling, etc.). Frankston not only managed to code the program in assembler (using an assembler which ran on a minicomputer equipped with the Multics operating system), but also expanded the program and packed the code into a mere 20K of machine memory, making it both powerful and practical enough to run on a microcomputer. In fact, the size of the program was the biggest problem for Frankston, because the low-end Apple II had only 16 KB of RAM. No matter how hard Frankston tried, however, he could not fit VisiCalc into 16 KB, which is why VisiCalc would only be available for the much more expensive 32 KB Apple II.
Bricklin and Frankston formed Software Arts Corporation in January 1979. In May 1979, Dan Fylstra's firm Personal Software (later renamed VisiCorp) began marketing VisiCalc with a teaser ad in Byte Magazine. Initially Bricklin had conceived several names for the program, among them Calcu-ledger and Calcu-paper, but the name "VisiCalc", an abbreviated form of the phrase "visible calculator", was chosen by Dan Fylstra.

VisiCalc was one of the key products that helped bring the microcomputer from the hobbyist's desk
into the office. Before the release of this groundbreaking software, microcomputers were thought of as
toys; VisiCalc changed that.

VisiCalc went on sale in November of 1979 and became immediately a big hit. It retailed for US$100
and sold so well that many dealers started bundling the Apple II with VisiCalc. The success of VisiCalc was one of the main reasons Apple turned into a successful company, selling tens of thousands of the pricey 32 KB Apple IIs to businesses that wanted them only for the spreadsheet.

In 1981, Software Arts made over $12 million in royalties from VisiCalc. It became Personal Software's
flagship product, financing the groundbreaking VisiOn office suite and GUI. Just before the release of
VisiOn, Personal Software was renamed VisiCorp.

The success wouldn't last long, though. Soon, more powerful clones of VisiCalc were released.

In 1983, Lotus 1-2-3 was released. It was available exclusively for the IBM PC and other MS-DOS
computers, and it quickly outsold VisiCalc. Lotus worked a lot like VisiCalc, which made migration
easy, and it took full advantage of the PC's 80-column display and vast amounts of memory, which
allowed much bigger spreadsheets than the Apple II could handle.

Microsoft also released a spreadsheet, Multiplan, in 1982, and then Excel in 1985. As competition from countless other developers heated up, tensions developed between VisiCorp and Software Arts. Eventually, VisiCorp sued Software Arts when the company delayed development of VisiCalc for the IBM PC so it could first finish a version for the Apple IIe and III.

Software Arts' assets were eventually sold to Lotus, which unsurprisingly stopped development of
VisiCalc.

Simula of Ole-Johan Dahl and Kristen Nygaard


The first object-oriented programming language was developed in the 1960s at the Norwegian
Computing Center in Oslo, by two Norwegian computer scientists—Ole-Johan Dahl (1931-2002) and
Kristen Nygaard (1926-2002).

Kristen Nygaard, who held an MS in mathematics from the University of Oslo, started writing computer simulation programs in 1957. He was seeking a better way to describe the heterogeneity and the operation of a system. To go further with his ideas on a formal computer language for describing a system, Nygaard realized that he needed someone with more computer programming skill than he had, so he contacted Ole-Johan Dahl, also an MS in mathematics and one of Norway's foremost computer scientists, who joined him in January 1962.

By May 1962 the main concepts for a simulation language were set. "SIMULA I" was born, a special
purpose programming language (similar to ALGOL 60) for simulating discrete event systems.
SIMULA I was fully operational on the UNIVAC 1107 by January 1965. In the following years Dahl and
Nygaard spent a lot of time teaching Simula. Simula spread to several countries around the world and
was later implemented on Burroughs B5500 computers and the Russian URAL-16 computer.

In 1966 the British computer scientist Tony Hoare introduced the concept of the record class construct, which Dahl and Nygaard extended with the concept of prefixing and other features to meet their requirements for a new, generalized process concept. The first formal definition of Simula 67 appeared in May 1967. In June 1967 a conference was held to standardize the language and initiate a number of implementations. Dahl proposed to unify the type and the class concepts; this led to serious discussions, and the proposal was rejected by the board. SIMULA 67 was formally standardized at the first meeting of the SIMULA Standards Group in February 1968.

Simula 67 contained many of the concepts that are now available in mainstream OO languages such as Java, C++, and C# (a brief modern-language sketch of the first few concepts follows this list):
• Class and object. The class concept as a template for creating instance (objects).
• Subclass. Classes may be organized in a classification hierarchy by means of subclasses.
• Virtual methods. A Simula class may define virtual methods that can be redefined in subclasses.
• Active objects. An object in Simula may be the head of an active thread (technically it is a
coroutine).
• Action combination. Simula has an inner-construct for combining the action-parts of a class and its
subclass.
• Processes and schedulers. It is easy in Simula to write new concurrency abstractions including
schedulers.
• Frameworks. Simula provided the first OO framework in the form of Class Simulation. The simulation features of Simula I were made available through Class Simulation.
• Automatic memory management. Simula had automatic memory management, including garbage
collection.
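
Several of the concepts in the list above, in particular class and object, subclass, and virtual methods, map directly onto any modern OO language. The brief Python sketch below shows those three general ideas only; it uses invented example classes and is not Simula syntax.

class Vehicle:                        # a class is a template for creating objects
    def __init__(self, speed):
        self.speed = speed

    def describe(self):               # a "virtual" method: subclasses may redefine it
        return f"a vehicle moving at {self.speed}"

class Car(Vehicle):                   # a subclass in the classification hierarchy
    def describe(self):
        return f"a car moving at {self.speed}"

for obj in (Vehicle(10), Car(90)):    # objects are instances of the classes
    print(obj.describe())             # the redefinition is picked at run time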

The Relational Databases of Edgar Codd


Edgar Codd is the creator of the relational database model, an extremely influential general theory of data management and the foundation of RDBMSs (Relational Database Management Systems), used everywhere nowadays.

Edgar (Ted) Frank Codd was born on August 23, 1923, in Portland Bill, Dorset, in England. He was
the youngest of seven children of a leather manufacturer father and a schoolteacher mother. Edgar
earned degrees in mathematics and chemistry at Exeter College, Oxford, before serving as a pilot in the Royal Air Force during World War II.

In 1948, Codd moved to New York to work for IBM as a programmer for the Selective Sequence Electronic Calculator, IBM's first electronic computer, an experimental machine with 12500 vacuum tubes. He then invented a novel "multiprogramming" method for the pioneering IBM 7030 STRETCH computer. This method enabled STRETCH, the forerunner of modern mainframe computers, to run several programs at the same time. In 1953, disappointed by US policy at the time, Codd moved to Ottawa, Canada. A decade later he returned to the USA and received his doctorate in computer science from the University of Michigan. Two years later he moved to San Jose, California, to work at IBM's San Jose Research Laboratory.

In the 1960s and 1970s Codd worked out his theories of data arrangement, based on mathematical set
theory. He wanted to store data in cross-referenced tables, allowing the information to be presented in
multiple permutations. It was a revolutionary approach. In 1969 he published an internal IBM paper describing his ideas for replacing the hierarchical or navigational structure with simple tables containing rows and columns, but it met with little interest. Codd firmly believed that computer users should be able to work at a more natural-language level and not be concerned about the details of where or how the data was stored. In 1970 Codd published his landmark paper, "A Relational Model of Data for Large Shared Data Banks."

Codd's concept of data arrangement was seen within IBM as an "intellectual curiosity" at best and, at
worst, as undermining IBM's existing products. Codd's ideas however were picked up by local
entrepreneurs and resulted in the formation of firms such as Oracle (today the number two
independent software firm after Microsoft), Ingres, Informix and Sybase.

Let's see how Don Chamberlin, an IBM colleague of Codd and co-inventor of SQL, described his first encounter with Codd's ideas: "...since I'd been studying CODASYL (the language used to query navigational
databases), I could imagine how those queries would have been represented in CODASYL by
programs that were five pages long, that would navigate through this labyrinth of pointers and stuff.
Codd would sort of write them down as one-liners. ... They weren't complicated at all. I said, 'Wow.'
This was kind of a conversion experience for me. I understood what the relational thing was about
after that."
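
The contrast Chamberlin describes, a page-long navigation program versus a one-line query over tables of rows and columns, can be illustrated with any modern relational database. The Python sketch below uses the standard library's SQLite module and an invented two-table example; it shows the general relational style, not System R or Codd's own notation.

import sqlite3

# Two cross-referenced tables of rows and columns, and a short declarative query
# joining them. The data is invented for illustration.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employees   (name TEXT, dept_id INTEGER REFERENCES departments(id));
    INSERT INTO departments VALUES (1, 'Research'), (2, 'Sales');
    INSERT INTO employees   VALUES ('Ada', 1), ('Grace', 1), ('Edgar', 2);
""")

# The relational "one-liner": which employees work in Research?
rows = db.execute("""
    SELECT e.name FROM employees e
    JOIN departments d ON e.dept_id = d.id
    WHERE d.name = 'Research'
""").fetchall()

print(rows)    # [('Ada',), ('Grace',)]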

To Codd's disappointment, IBM proved slow to exploit his suggestions until commercial rivals started
implementing them. Initially, IBM refused to implement the relational model at all for business reasons (to preserve revenue from its existing database product, IMS/DB).

In 1973 IBM finally included Codd's relational model in its plans, in the System R subproject, but Codd was not involved in the project. Among the critical technologies developed for System R was the Structured Query Language (SQL), initially called SEQUEL, developed by Chamberlin and Ray Boyce. Boyce later worked with Codd to develop the Boyce-Codd Normal Form for efficiently
designing relational database tables so information was not needlessly duplicated in different tables.

In 1981 IBM released to market its first relational database product, SQL/DS. DB2, initially for large
mainframe machines, was announced in 1983. IBM's DB2 family of databases proved to be one of IBM's most successful software products and is incorporated into the company's mainframe operating systems and middleware servers.

Still at IBM, Codd continued to develop and extend his relational model. As the relational model
started to become fashionable in the early 1980s, Codd fought a sometimes bitter campaign to prevent
the term being misused by database vendors who had merely added a relational veneer to older
technology. As part of this campaign, he published his famous 12 rules to define what constituted a
relational database.

Codd retired from IBM in 1984 at the age of 61, after a serious injury resulting from a fall.

Later he joined up with the British database guru Chris Date, whom Codd had introduced to San Jose
in 1971, to form the Codd and Date Consulting Group. The company, which included Codd's second
wife Sharon Weinberg, made a good living from conducting seminars, writing books and advising
major database vendors. Codd never became rich like entrepreneurs such as Larry Ellison, who exploited his ideas. He remained active as a consultant until 1999.

Codd received the Turing Award in 1981, and in 1994 he was inducted as a Fellow of the
Association for Computing Machinery.

Edgar Codd died of heart failure at his home in Williams Island, Florida, on April 18, 2003, survived
by his wife, four children and six grandchildren.

The DOS of Tim Paterson


In the second half of 1980 IBM was in a hurry, searching for software for its upcoming personal
computer, which would become the original IBM PC. In July they approached Microsoft, and Bill Gates
proposed that Microsoft would write the BASIC interpreter. When IBM's representatives mentioned that
they also needed an operating system, Gates referred them to Digital Research (DRI), makers of the
widely used CP/M operating system.

CP/M was by far the most popular operating system in use at the time, and IBM felt it needed
CP/M specifically in order to compete. So IBM's representatives visited DR and discussed licensing, but
the parties did not reach an agreement. Initially Digital Research hesitated to sign IBM's non-disclosure
agreement. The NDA was later accepted, but DR did not accept IBM's proposal of $250000 in
exchange for as many copies as IBM could sell, insisting on the usual royalty-based plan.

So in October 1980, IBM returned to Microsoft, asking again for an operating system. At that point,
Gates mentioned the existence of an operating system (initially named QDOS, then 86-DOS) from a
Seattle company, Seattle Computer Products (SCP), a manufacturer of an 8086 computer kit, which sold
its board with Microsoft's Stand-alone Disk BASIC-86. IBM representative Jack Sams told Gates to
get a license for it, and in December 1980 Microsoft purchased a non-exclusive license for 86-DOS
from SCP for $25000. (In July 1981, Microsoft purchased all rights to 86-DOS from SCP for $50000.)
How did SCP come to QDOS? In the beginning of 1980 SCP wanted to offer its board with the 8086
version of CP/M that Digital Research had announced, but its release date was uncertain, so
SCP decided to make its own operating system, assigning the task in April 1980 to the engineer Tim
Paterson, a young graduate of the University of Washington in Seattle.

Tim Paterson (born June 1, 1956) studied electrical engineering at the University of Washington, and in
late 1976 he got a job as a technician at a Seattle-area retail computer store, where he started toying
around with designing his own peripheral boards for computers. The store was selling the boards of
SCP, and in 1977 the manager of SCP asked Tim to consult for the company. In June 1978, after getting
his bachelor of science degree, Tim became a salaried employee of Seattle Computer.

At Seattle Computer Tim at first worked on redesigning a memory board, and then on an 8086 CPU
card. Once the prototype of the 8086 CPU card was up and running, SCP was approached by Digital
Research to see if it could get CP/M to run on it. Microsoft wanted to see if some of its programs
would work, too. At the end of May 1979, Paterson went to Microsoft, and in a week or so he got
all 32K of Microsoft's BASIC running on the 8086 card.

In April 1980, Paterson began work on an operating system for SCP's 8086 card. He recalled: "Step
one was to write down what CP/M-80 did. Step two was to design a file system that was fast and
efficient."

By July, Paterson had finished some 50 percent of the operating system and called it QDOS 0.10 (for
Quick and Dirty Operating System). He quickly found a bug, so it became QDOS 0.11. By the end of
August 1980, QDOS 0.11 worked fine and was being shipped as a complete operating system, including
an assembler resident in the operating system, a debugger, and a simple line editor.

In late 1980 QDOS was renamed to 86-DOS, and in December SCP came out with 86-DOS 0.33,
which had significant improvements over QDOS. In May 1981 Paterson left SCP and went to Microsoft
to work full-time on the PC-DOS version of 86-DOS. He finished PC-DOS in July, one month before
the IBM PC was officially announced to the world. By this time, 86-DOS had become MS-DOS.

Microsoft Windows
On November 10, 1983, at the Plaza Hotel in New York City, a modest event took place that would
have a very important impact on the software industry in the following decades—the little-known
company Microsoft Corporation formally announced a graphical user interface (GUI) for its own
operating system, MS-DOS, which had shipped for the IBM PC and compatible computers since 1981.

Initially the new product was developed under the name Interface Manager, but before the official
introduction in 1985, the marketing gurus convinced Bill Gates that Windows was a more suitable name.

Microsoft's main partner since 1981 had been IBM, on whose computers MS-DOS had become the
highly successful bundled operating system. That's why, in that same November of 1983, Microsoft's
co-founder Bill Gates decided to show a beta version of Windows to IBM's management. Their
response was negative though, probably because IBM was working on its own windowing product,
called TopView.

IBM TopView was released in February 1985 as a DOS-based multitasking program manager
without any GUI features. IBM promised that future versions of TopView would have a GUI. That
promise was never kept, and the program was discontinued barely two years later.

It seems Bill Gates had realized how profitable a successful GUI for IBM computers would be after he
had seen Apple's Lisa computer and later the more successful Macintosh computer. Both Apple
computers came with a stunning graphical user interface.

Microsoft needed more than 2 years to launch the announced product—Microsoft Windows 1.0 (see
the upper image), which was introduced on November 20, 1985, and was initially sold for $100.
MS Windows version 1.0 was considered buggy, crude, and slow. Its rough start was made worse by a
threatened lawsuit from Apple. In September 1985, Apple lawyers warned Bill Gates that Windows
infringed on Apple copyrights and patents, and that his corporation had stolen Apple's trade secrets.
Windows had drop-down menus, tiled windows and mouse support similar to Apple's operating
system. Gates decided to make an offer to license features of Apple's OS. Apple agreed and a contract
was drawn up. A couple of years later, though, Bill Gates would again have copyright infringement
problems with Apple (the Apple v. Microsoft & Hewlett-Packard copyright suit), and then he decided
to claim that Apple had itself taken ideas from the graphical user interface developed by Xerox for its
Alto and Star computers.

Microsoft Windows 2.0 was released in December 1987 and was also initially sold for $100. It was a
much-improved Windows that made Windows-based computers look more like a Macintosh,
introducing icons to represent programs and files, improved support for expanded-memory hardware,
and windows that could overlap.

In May 1990 the critically acclaimed Windows 3.0 was released (see the upper screenshot); the
full version was priced at $149.95 and the upgrade version at $79.95. Windows 3.0 had an improved
program manager and icon system, a new file manager, support for sixteen colors, and improved
speed and reliability; most important, Windows 3.0 gained widespread third-party support.
Programmers started writing Windows-compatible software, giving end users a reason to buy
Windows 3.0. Three million copies were sold the first year, and Windows finally came of age.

In April 1992, Windows 3.1 was released and became a smash hit, selling almost 3 million copies
within the first two months of its release. It featured new TrueType scalable font support, along
with multimedia capability, object linking and embedding (OLE), application reboot capability, and
more. Windows 3.x remained the number-one operating system installed on PCs until 1997, when
Windows 95 took over.

In July 1993 the first version of a new family of Microsoft operating systems was introduced—Windows NT
3.1 (the version number was chosen to match that of Windows 3.1, the then-latest operating
environment from Microsoft, on account of the similar visual appearance of the user interface).
Windows NT 3.1 was the first version of Windows to utilize 32-bit "flat" virtual memory addressing on
32-bit processors. Its companion product, Windows 3.1, used segmented addressing and switched
from 16-bit to 32-bit addressing in pages. Windows NT was originally designed to be a powerful high-
level-language-based, processor-independent, multiprocessing, multiuser operating system with
features comparable to Unix. It was intended to complement consumer versions of Windows that were
based on MS-DOS. NT was the first fully 32-bit version of Windows, whereas its consumer-oriented
counterparts, Windows 3.1x and Windows 9x, were 16-bit/32-bit hybrids. Windows 2000, Windows
XP, Windows Server 2003, Windows Vista, Windows Home Server, Windows Server 2008 and
Windows 7 are based on Windows NT, although they are not branded as Windows NT.

On August 24, 1995, Microsoft Windows 95 (see the lower screenshot) was released amid a buying fever
so great that even consumers without home computers bought copies of the program. More than 1
million copies were sold within 4 days. Windows 95 (code-named Chicago) was considered to be very
user-friendly. It included an integrated TCP/IP stack, dial-up networking, and long filename support. It
was also the first version of Windows that did not require MS-DOS to be installed beforehand (though it
was still based on the MS-DOS kernel).

In June 1998, Microsoft released Windows 98. It was the last version of Windows based on the MS-
DOS kernel. Windows 98 had Microsoft's Internet Explorer 4 browser built in and supported new
devices connected via USB.

Windows 2000 was released in February 2000 and was based on Microsoft's NT technology.
Starting with Windows 2000, Microsoft offered automatic software updates for Windows over the
Internet.

Windows XP (see the lower screenshot) was released in October 2001 and offered better multi-media
support and increased performance. According to Microsoft, "the XP in Windows XP stands for
experience, symbolizing the innovative experiences that Windows can offer to personal computer
users."

The next desktop OS of Microsoft was released more than five years after the introduction of its
predecessor, the longest time span between successive releases of Windows desktop operating
systems. Windows Vista (known by its code name "Longhorn") was released in January, 2007. It
contains many changes and new features, including an updated graphical user interface and visual
style dubbed Aero, a redesigned search function, multimedia tools including Windows DVD Maker,
and redesigned networking, audio, print, and display sub-systems. Vista aims to increase the level of
communication between machines on a home network, using peer-to-peer technology to simplify
sharing files and media between computers and devices. Windows Vista includes version 3.0 of the
.NET Framework, allowing software developers to write applications without traditional Windows
APIs.

Microsoft released Windows 7 in October 2009. Windows 7 appeared to be the best-selling Microsoft
operating system in history, with 15 million copies sold during the first 9 months.

Unlike its predecessor, which introduced a large number of new features, Windows 7 was intended to
be a more focused, incremental upgrade to the Windows line, with the goal of being compatible with
applications and hardware with which Windows Vista is already compatible. It is focused on multi-
touch support, a redesigned Windows Shell with a new taskbar, referred to as the Superbar, a home
networking system called HomeGroup, and performance improvements. Some standard applications
that have been included with prior releases of Microsoft Windows, including Windows Calendar,
Windows Mail, Windows Movie Maker, and Windows Photo Gallery, are not included in Windows 7;
most are instead offered separately at no charge as part of the Windows Live Essentials suite.

In October 2012, Microsoft launched its latest desktop operating system, called (surprise!) Windows
8. It is the first Microsoft OS intended to be used on tablets and smartphones as well. So, what has
changed?

The desktop was changed radically, as it is now relegated to the sidelines to make way for the new so-
called Modern UI (User Interface). There is no Start button anymore, as this interface is designed to be
used with touchscreens as well as with a mouse and keyboard, and requires programs to be written
specially for it. These programs will be downloaded via the new Windows Store, or from developers'
websites (one can still run programs written for older versions of Windows, but this is possible only on
PCs).

Microsoft also claims that Windows 8 is sleek, fast and fun (on the right hardware :-), has huge security
improvements, better battery life, faster boot, etc. Windows 8 features the new "Hybrid Boot" mode,
which hibernates the Windows kernel on shutdown to speed up the next bootup.

Task Manager has also been redesigned, including a new processes tab with the option to display
fewer or more details of running applications and background processes, a heat map using different
colors indicating the level of resource usage, etc.

And the most important improvement: the sinister BSOD (Blue Screen of Death), the nightmare of
generations of poor users, has been updated with a simpler and more modern design with less technical
information displayed. Rejoice, O People!

Mac OS of Apple
On January 24, 1984, Apple Computer Inc.'s chairman Steve Jobs took to the stage at Apple's
annual shareholders meeting in Cupertino to show off the very first Macintosh personal computer in a
live demonstration. The Macintosh 128K came bundled with what was later called the Mac OS, but was
then known simply as the System Software (or System).

The original System Software was partially based on the Lisa OS, previously released by Apple for the
Lisa computer in 1983, and both operating systems were directly inspired by the Xerox Alto. It is known
that Steve Jobs and a number of Apple engineers visited Xerox PARC (in exchange for Apple stock
options) in December 1979 to see the Alto's WYSIWYG concept and mouse-driven graphical user
interface, three months after the Lisa and Macintosh projects had begun. The final Lisa and Macintosh
operating systems extended the concepts of the Xerox Alto with menu bars, pop-up menus and
drag-and-drop actions.

The primary software architect of the Mac OS was Andy Hertzfeld (see the lower photo, he is standing
in the middle). He coded much of the original Mac ROM, the kernel, the Macintosh Toolbox and some
of the desktop accessories. The icons of the operating system were designed by Susan Kare (the only
woman in the lower photo). Macintosh system utilities and Macintosh Finder were coded by Bruce
Horn and Steve Capps. Bill Atkinson (the man with the moustache in the lower photo) was the creator of
the ground-breaking MacPaint application, as well as QuickDraw, the fundamental toolbox that the
Mac used for graphics. Atkinson also designed and implemented HyperCard, the first popular
hypermedia system.

Just like its direct rival, the IBM PC, the Mac used a system ROM for the key OS code. However, the IBM PC
used only 8 kB of ROM for its power-on self-test (POST) and basic input/output system (BIOS), while
the Mac ROM was significantly larger (64 kB), because it contained both low-level and high-level
code. The low-level code was for hardware initialization, diagnostics, drivers, etc. The higher-level
Toolbox was a collection of software routines meant for use by applications, quite like a shared library.
Toolbox functionality included the following: management of dialog boxes; fonts, icons, pull-down
menus, scroll bars, and windows; event handling; text entry and editing; arithmetic and logical
operations.

The first version of the Mac OS (the System Software, which resided on a single 400KB floppy disk)
was easily distinguished from other operating systems of the time because it did not use a command-
line interface—it was one of the first operating systems to use an entirely graphical user interface, or
GUI. In addition to the ROM and the system kernel there is the Finder, an application used for file
management, which also displays the Desktop. The two files were contained in a folder labeled System
Folder, which also contained other resource files, such as a printer driver, needed to interact with the
System Software.

The first releases were single-user and single-tasking (they could only run one application at a time),
though special application shells could work around this to some extent. They used a flat file system
called the Macintosh File System (MFS), in which all files were stored in a single directory. The Finder
provided virtual folders that could be used to organize files in a hierarchical view with nested folders,
but these were not visible from any other application and did not actually exist in the file system.

Aldus Pagemaker of Paul Brainerd


The US company Aldus Corporation was founded in February 1984 in Seattle, Washington, by Paul
Brainerd (b. 1947). Brainerd had a graduate degree in journalism from the University of Minnesota and
had worked at the Minneapolis newspaper Star and Tribune. In 1980 he joined Atex in Redmond,
Washington, a company that sold computer-assisted publishing equipment to the newspaper industry.
In 1983 the plant was closed, so Brainerd and five other Atex engineers decided to stay in the
Pacific Northwest and start up their own company.

Brainerd and his partners decided to name their company Aldus, after Aldus Pius Manutius (Teobaldo
Mannucci) (1449–1515), a famous fifteenth-century Venetian pioneer in publishing, known for
standardizing the rules of punctuation and also presenting several typefaces, including the first italic.
Manutius went on to found the first modern publishing house, the Aldine Press.

The flagship program of Aldus—PageMaker—was released in July 1985. This groundbreaking
program was the first ever desktop publishing application and revolutionized the use of personal
computers, virtually creating the desktop publishing industry. The term desktop publishing was itself
coined by Brainerd.

PageMaker relied on a graphical user interface and was initially released for the then-new Apple
Macintosh, and in 1987 for PCs running the then-new Windows 1.0. PageMaker relied on Adobe
Systems' PostScript page
description language. Suddenly anyone could design brochures. Publishers had to become computer
literate, and Apple started selling Macs and LaserWriters in large numbers. Aldus helped Apple to
market its hardware to customers who wanted desktop publishing capabilities. In return, Apple
featured Aldus's software in much of its advertising, and also helped the fledgling company distribute
its program.

Retailing for $495, PageMaker 1.0 (see the upper screenshot) included all the basic elements needed to
lay out pages: free-form drag-and-drop positioning of page elements, sophisticated type tools, a well-
chosen selection of drawing tools, the ability to import text and graphics (most importantly, EPS files)
from other applications, and the ability to print to high resolution PostScript printers with WYSIWYG
accuracy. The users could easily create professional-quality books, newspapers, newsletters,
brochures, pamphlets, and other graphic products.

PageMaker not only made desktop publishing possible, it spawned entire cottage industries for clip
art, fonts, service bureau output and scanning, and specialty products for laser printing such as foil
overlays.

Aldus succeeded in making the PageMaker program available on many kinds of computers and operating
systems, signing agreements with Hewlett-Packard, Microsoft Corporation, IBM, Wang Laboratories
and Digital Equipment. Then it decided to offer its program to computer users outside the United
States, establishing itself as Europe's leading producer of desktop publishing software by the spring of
1988. Controlling nearly half of the British, French, and German markets for these products,
PageMaker was the world's fourth most popular software program.

Despite the popularity of the PageMaker, Aldus's status as a single-product company caused some
concern among management and investors. The company needed to move beyond PageMaker to other
products and functions in order to continue its rapid growth and remain profitable. Thus in 1987
Aldus introduced FreeHand, a vector graphics drawing program, and SnapShot, an instant electronic
photography software package for use on personal computers. Later Aldus presented SuperPaint,
Personal Press, Digital Darkroom, PhotoStyler, PressWise and PageAhead.

In January 1988, sales of the PageMaker program topped 200000, with 90000 copies of the program
shipped in the preceding year. That spring, Aldus introduced updated versions of PageMaker for use
on the Macintosh and on IBM-compatible PCs. With this advance, Aldus maintained its dominant grip
on the desktop publishing market.

PageMaker continued to evolve, reaching new heights with its 1992 version 4.2, which had such
essential features as text rotation and a story editor. Despite some interesting marketing ploys,
however, Aldus had lost significant market share to its rival QuarkXPress, which had some powerful
features, such as color separation, which PageMaker then lacked. Later QuarkXPress continued to
gain market share, due largely to ignorance and mythology regarding PageMaker's capabilities. Most
people who used both programs preferred PageMaker, but that didn't seem to help Aldus, and their
dwindling revenues eventually led to their acquisition in 1995 by Adobe, which re-introduced
PageMaker as Adobe PageMaker and subsequently updated it to versions 6, 6.5, and finally 7 in 2001.
Adobe then rebranded the successor of PageMaker as Adobe InDesign. InDesign was developed in
Seattle by the PageMaker product team.

OS/400 of IBM
On June 21, 1988, IBM introduced its new minicomputer and enterprise server platform AS/400
(later renamed to iSeries, System i and Power Systems), bundled with the operating system OS/400
(later renamed to i5/OS and IBM i). IBM's Chief Scientist for the AS/400 computers was Frank Gerald
Soltis (see the nearby image), an American computer scientist with pioneering contributions to the
architecture of technology-independent machine interfaces (TIMI) and single-level storage.

What are the key differences between OS/400 and other common operating systems?
1. OS/400 is an object-based system with an integrated database management system, DB2.
2. Single-level storage—a computer storage concept where the entire storage of a computer is thought
of as a single two-dimensional plane of addresses. Pages may be in primary storage (RAM) or in
secondary storage (disk); however, the current location of an address is unimportant to a process.
3. Technology Independent Machine Interface (TIMI)—a virtual instruction set, which allows the
operating system and application programs to take advantage of advances in hardware and software
without recompilation. All user-mode programs are stored as TIMI instructions, which means that it
is not possible for them to use the instruction set of the underlying CPU, thus ensuring hardware
independence.

Key features of IBM i:

 DB2, a 64-bit relational database management system (RDBMS), embedded into the OS
 HTTP Server (powered by Apache)
 IBM i NetServer (formerly AS/400 NetServer) is IBM i support for Windows Network
Neighborhood
 A rich set of security features and services that pertain to the goals of authentication,
authorization, integrity, confidentiality, and auditing; a simple-to-deploy, virus-resistant,
object-based security model
 Integration with SQL, .NET, DRDA/CLI, ODBC and JDBC; Support for MySQL database
 Multiple file systems (Windows, UNIX, NFS)
 Integrated Web application server, Java and J2EE Web services environment
 Zend Server Community Edition providing PHP runtime environment
 PowerVM technology providing Micro-Partitioning and shared processor pools
 Automatic balancing of processor and memory resources
 Transaction-based journaling leveraged by IBM iCluster and complementary high availability
solutions
 Highly resilient with built-in cluster architecture
 Encryption of data on disk and backups
 IBM Rational development tools, C, RPG, COBOL, C++, Java, EGL, PHP, CL; Supports open
source applications built to Apache, MySQL, and PHP stack
 Web-based systems management, IBM Systems Navigator, IBM Systems Director
 Logical partitioning (LPARs) with IBM i to support multiple virtual systems on a single
hardware footprint
 Integrated storage management, Hierarchal storage management for Solid State Drives
 Support for IBM POWER processor-based systems and blades
 Capacity on Demand (processor & memory)

The latest release of IBM i is 7.1, launched in April, 2010.

Photoshop of Thomas Knoll


Sometime in the fall of 1987, Thomas Knoll, a 25-year-old Ph.D. candidate in computer vision at the
University of Michigan, was trying to write a computer program to display grayscale images on a
monochrome bitmap monitor. Because the program wasn't directly related to his thesis on computer
vision, Knoll thought it had limited value at best. He wrote the program, called Display, on his
Macintosh Plus computer at home. Little did he know that this initial code would be the very
beginning of the real phenomenon that would be known as Photoshop.

Thomas was the son of Glenn Knoll, a professor at the University of Michigan, who lived with his family in
Ann Arbor, Michigan. Glenn was a photo enthusiast, who maintained a darkroom in the family
basement. He was also a technology aficionado intrigued by the emergence of the personal computer.
His two sons, Thomas and John, inherited their father's inquisitive nature. And the vision for future
greatness began with their exposure to Glenn's basement darkroom and with the Apple II Plus that he
brought home for research projects.

Even though Thomas loved hands-on darkroom work, developing film and making prints, he also had
a keen interest in computers and programming. In 1987 he purchased an Apple Macintosh Plus to
help him with his Ph.D. work. Much to his disappointment, the Mac couldn't display gray-scale levels
in his images. To solve that problem, Thomas wrote some command-line utilities that performed
various image-processing steps (e.g. simulating the gray-scale effect), before tackling something
bigger, the Display program.

When Thomas demonstrated his image processing tools to his older brother John, who was on
vacation in Ann Arbor, John embraced the idea. John worked at Industrial Light and Magic (ILM) in
Marin County, California. ILM was the visual effects arm of the famous film production company
Lucasfilm, and John was experimenting with computers to create special effects. In the book CG 101:
A Computer Graphics Industry Reference, John says: "Image processing is the fundamental basis of
any of that kind of work, and Tom had written a bunch of image processing tools. As Tom showed me
his work, it struck me how similar it was to the image processing tools on the Pixar [the Pixar was an
image computer John had just seen a graphics demo on at ILM]". Thus the pair began to collaborate on a
larger, more cohesive application, which they dubbed Display.

At the end of 1987 John arranged to purchase a new Macintosh II, the first color-capable model of the
series, which cost almost 10000 USD. At the same time, Thomas rewrote the code for Display to work
in color. In the ensuing months, the brothers worked on expanding Display's capability. At John's
urging, Thomas added gamma correction tools, as well as the ability to read and write various file
formats, while John developed image processing routines that would later become filter plug-ins.
Thomas also developed the unique capability to create soft-edged selections that would allow local
changes and also developed such features as Levels for adjusting tonality, Color Balance, Hue, and
Saturation for adjusting color, and painting capabilities.

At the beginning of 1988, however, the project was close to being cancelled. Thomas' fellowship money
had run out and his wife was expecting their first child. Thomas was feeling pressure to finish what he
was doing and find a permanent job. Fortunately, he decided to give himself six more months to finish
a beta version and let John shop it around Silicon Valley. John thought they might have the basis of a
commercially viable product, while Thomas was reluctant: "Do you have any idea how much work it is
to write a commercial application?" he asked John. But with his naive optimism, John convinced
Thomas it would be worth the effort. "I'll figure out how to make money with this," he told his brother.
Well, John was right, but so was Thomas. It did take a lot of work, but they would also make a lot of money.

The brothers decided to choose a more appropriate name for the product. The initial name, Display,
was not appropriate anymore, because the program had become much more powerful, so they needed
a name that would reflect what it did. The next working names were Image Pro and then Photo Lab (even
PhotoHut was considered), but each of those was shot down in turn by similar products in that
market space.

Then, during a program demo to some company, Thomas confided to someone that he was having
problems naming the program. The confidant suggested PhotoShop, and that became the program's
working name. (The 'S' of Shop was initially capitalized, but the intercap was later removed.)

When John started shopping around for a company to invest in Photoshop, travelling all over Silicon
Valley, he gave program demos to a number of companies, including a company named Adobe Systems.
This would prove to be a lucky strike. At this time Thomas remained in Ann Arbor, Michigan, fine-tuning
the program, while John kept pushing his brother to add new features; John even wrote a simple
manual to make the program more understandable.

John finally succeeded in attracting the attention of somebody, namely the scanner manufacturer
BarneyScan, which decided that the program would be of use to people who owned its scanners. A
short-term deal was worked out, and the first public iteration of the software was introduced as
Barneyscan XP. About 200 copies of the program, now in Version 0.87, were shipped with Barneyscan
scanners.

Around this time, John demonstrated the program to engineers at Apple Computer. It was a huge hit.
Apple asked John to leave a couple of copies. There followed the first incident of Photoshop pirating:
it seems the Apple engineers shared the program with some friends—a lot of friends!
Subsequently, John returned to Adobe for another demonstration. Russell Brown, Adobe's primary
art director, was blown away by the program. He had just signed a non-disclosure agreement with
Letraset to view their new image-editing program, ColorStudio. He was convinced that Photoshop
was better. Adobe's whole internal creative team loved the product.

With a great deal of enthusiasm, Adobe decided to buy the license to distribute Photoshop. In
September 1988, the Knoll brothers and Fred Mitchell, head of Adobe Acquisitions, made the deal
with a handshake. It would be April 1989 before the final legal agreements were worked out. The key
phrase in that deal was "license to distribute." Adobe didn't completely buy out the program until
years after Photoshop had become a huge success. It was a smart move on the Knolls' part to work out
a royalty agreement based upon distribution. After the legal agreements were signed, Thomas and
John started developing a shipping version—Adobe Photoshop 1.0, which was shipped in February
1990 on a single diskette (see the nearby image). Adobe decided to keep the working
name Photoshop, though only after an exhaustive attempt to find a better name turned up nothing better.

The first release was certainly a success, despite the usual slew of bugs. Adobe's key marketing
decision was to present Photoshop as a mass-market, fairly simple tool for anyone to use, unlike
most graphics software of the time, which was aimed at specialists. With Photoshop, you could
achieve the same things on your home desktop Mac that were previously possible only with
thousands of dollars of advanced equipment—at least, that was the implicit promise. There was also the
matter of pricing. Letraset's ColorStudio, which had launched shortly before, cost $1995; Photoshop
was less than $1000.

Thomas wrote all the code for Photoshop in Ann Arbor, while John developed and wrote plug-ins in
California. Some of the Adobe people thought John's features were gimmicky and didn't belong in a
serious application. They viewed the product as a tool for retouching, not special effects, so John had
to find a way to "sneak" them into the program. Those plug-ins later became one of the most
powerful aspects of Photoshop, extending its functionality and acting like mini-editors that modify
the image. The most common type are filter plug-ins, which provide various image effects; they are
located in the Filter menu. The Photoshop plug-in API has become a standard, and many other image
editors also support Photoshop plug-ins.

The second version of Photoshop, 2.0, also only for Mac OS, was shipped in June 1991. It
introduced paths, CMYK color and EPS rasterization.

Until 1993 Photoshop was still a Mac-only application, but its success warranted a version for the
burgeoning Windows graphics market. Porting it was not a trivial task: a whole new team, headed by
Bryan Lamkin, was brought in for the PC. Oddly, although there were other significant new features
such as 16-bit file support, this iteration was shipped as version 2.5. From this point on, all versions
were provided for both Mac and Windows operating systems; there were also several versions for IRIX
and Solaris.

By general consensus, the addition of layers (together with Tabbed Palettes) in version 3.0, launched
in November, 1994, has been the single most important aspect of Photoshop development, and
probably the feature which finally persuaded many artists to try it. Yet the concept of layers wasn't
unique to Photoshop. HSC (later to become MetaCreations), was concurrently developing Live
Picture, an image-editing application including just such a facility. While an excellent program in its
own right, Live Picture was vastly overpriced on its launch, leaving Photoshop 3.0 for both Mac and
Windows to clean up.

Version 5 (launched in May, 1998) introduced color management and the History palette, with its
extra 'nonlinear history' behavior, which certainly opened up whole new creative possibilities. A major
update, version 5.5, bundled Adobe's package ImageReady in an entirely new iteration, giving
Photoshop excellent Web-specific features.

Layer styles and improved text handling popped up in version 6 in September, 2000, and the Healing
brush in version 7, in March, 2002.

The latest version of Photoshop, 12.0 (called Adobe Photoshop CS5), was launched in April 2010.
Tetris of Alexey Pajitnov
One of the most ingenious and popular computer games of all time, Tetris, was conceived by the Russian
programmer Alexey Pajitnov in 1984.

Алексей Леонидович Пажитнов (Alexey Leonidovich Pajitnov) was born on 14th of March, 1956, in
Moscow, Russia. After graduating from Moscow Mathematical School No. 91 and the Moscow Aviation
Institute, he was hired as a programmer at the Computing Center of the Soviet Academy of Sciences, a
Soviet government-funded R&D center. Pajitnov worked in the field of artificial intelligence and
speech recognition.

Pajitnov had been interested in computer games from the very beginning of his career, and
Tetris was not the first game he conceived. Initially he created several psychology-related
games. Sometime in 1984 he brought to his workplace a puzzle game he had just bought for 35 kopecks
in the "Детский мир" ("Children's World", a big children's store in Moscow).

The game was a pentomino puzzle. It consisted of plastic figures made of five squares each, similar to
the pieces in Tetris. Pajitnov started to twiddle the figures, trying to imagine how to implement this
game on a computer. First he decided to remove one square from each piece, in order to simplify the
shapes of the figures and thus speed up the computer processing. Then he created a simpler computer
game, called Genetic Engineering, in which the player had to move the 4-square pieces (called
tetrominoes) around the screen using the cursor keys. The player could assemble various shapes. The
game, however, was not very successful.

The breakthrough came when Pajitnov decided to drop the figures into a rectangular glass and let them
pile up at the bottom of the glass. Two weeks later, at the beginning of 1985, the first version was ready,
written in Pascal for the Electronica 60 (the Электроника 60 was a Russian copy of the DEC LSI-11
computer). The game occupied only 2.7 KB of memory.

The name of the game "Tetris" Pajitnov combined from 2 words: the name of the original game—
"tetramino" and his favorite sport—"tennis".

Pajitnov was assisted in the creation of Tetris initially by Dmitry Pavlovsky, his colleague from the
Computing Center, and later by Vadim Gerasimov, a 16-year-old schoolboy, who worked and played
with IBM PCs at the Computing Center. Gerasimov ported the game to the MS-DOS operating system
on the IBM PC (using Borland's Turbo Pascal) and was in charge of the graphical design.

Pajitnov, Pavlovsky and Gerasimov formed a team, planning to make a dozen or so addictive computer
games for the PC and put them together in one system, called a Computer Funfair. They also dreamed
of selling the games, but this part seemed unusual and difficult in the communist Soviet Union,
where making and selling something privately was a dangerous affair.

The first Tetris ran in MS-DOS text mode, using colored space characters to represent the squares of
the tetrominoes. The game could even automatically recognize the IBM monochrome video card,
adjusting the way it drew on the screen. The last version of the game was one of the first to use proper
timer delays, so as not to run too fast on newer and faster machines, which is what almost all other
games of the time did.
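
The idea behind those timer delays can be sketched in a few lines (the snippet below is a modern Python illustration, not Pajitnov's Pascal code): the piece drops only after a fixed amount of real time has elapsed, rather than once per loop iteration, so the game runs at the same speed on fast and slow machines.

```python
import time

DROP_INTERVAL = 0.5        # seconds between automatic drops, independent of CPU speed
FIELD_HEIGHT = 20          # rows in the playing field (illustrative value)
row = 0                    # current row of the falling piece (toy stand-in for real game state)
last_drop = time.monotonic()

while row < FIELD_HEIGHT:
    # ... a real game would read the keyboard and redraw the screen here ...
    now = time.monotonic()
    if now - last_drop >= DROP_INTERVAL:   # drop only when enough real time has passed
        row += 1
        last_drop = now
        print("piece is now at row", row)
    time.sleep(0.01)       # yield the CPU instead of spinning as fast as the machine allows
```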

When all efforts to sell the games failed, Pajitnov's group decided to give free copies of all the games,
including Tetris, to friends. Thus the games quickly spread around the country. When in 1986 the
freely distributed PC version of Tetris got outside the Soviet Union and a foreign company
expressed an interest in licensing Tetris, Pajitnov decided to abandon all the games except Tetris, which
made Pavlovsky very unhappy and destroyed the team.
In 1991 Pajitnov moved to the United States and later, in 1996, founded The Tetris Company.
In the same year he began working for Microsoft, where he worked in the Microsoft Entertainment
Pack: The Puzzle Collection, MSN Mind Aerobics and MSN Games groups. He left Microsoft in 2005.

Linux of Linus Torvalds


In April 1991, Linus Torvalds, a 21-year-old computer science student at the University of
Helsinki, Finland, began a personal project—to create a new operating system kernel. This modest
project would evolve into the world-famous Linux operating system and would make Linus one of the
most influential people in the world of technology today.

Linus Benedict Torvalds (St Linus was the second pope, 64-76 AD) was born on December 28, 1969, in
Helsinki, Finland. He is the son of Nils and Anna Torvalds. Nils was a reporter and Anna a translator,
and both pursued careers in journalism. Torvalds's grandfather was a noted Finnish poet, Ole Torvalds
(1916–1995). His parents divorced when Linus was young, and he was raised by his mother and
grandparents. The Torvalds family belongs to the Swedish-speaking minority in Finland, which
numbers about 300000.

Linus took an early interest in computers, mainly through the influence of his maternal grandfather—
Leo Törnqvist, a professor of statistics at the University of Helsinki. Around 1980, Törnqvist
bought one of the first home computers, a Commodore VIC-20. Linus soon became bored with the
few programs that were available for it, and by the time he was 10 he had begun to create new ones,
first using the BASIC programming language and then using the much more difficult but also more
powerful assembly language.

Programming and mathematics became Torvalds' passions in secondary school. His father's efforts to
interest him in sports, girls and other social activities were in vain, and Linus does not hesitate to
admit that he had little talent for or interest in such pursuits. In 1988 Torvalds followed in the
footsteps of his parents and enrolled in the University of Helsinki, the premier institution of higher
education in Finland. By that time he was already an accomplished programmer, and, naturally, he
majored in computer science. In 1990 he took his first class in the C programming language, the
language that he would soon use to write the Linux kernel (i.e., the core of the operating system).

At the beginning of 1991 he purchased an IBM-compatible personal computer with a 33MHz Intel 386
processor and, huge for the time, 4MB of memory. This processor greatly appealed to him because it
represented a tremendous improvement over earlier Intel chips. As intrigued as he was with the
hardware, however, Linus was disappointed with the MS-DOS operating system that came with it.
That operating system had not advanced sufficiently to even begin to take advantage of the vastly
improved capabilities of the 386 chip, and he thus strongly preferred the much more powerful and
stable UNIX operating system that he had become accustomed to using on the university's computers.

Trying to obtain a version of UNIX for his new computer, he didn't manage to find even a basic system
for less than 5000 USD. That's why he obtained MINIX, a small clone of UNIX that was created in
1987 by operating systems expert Andrew Tanenbaum in the Netherlands to teach UNIX to university
students. However, although much more powerful than MS-DOS and designed to run on Intel x86
processors, MINIX still had some serious disadvantages: its source code could not be freely modified
and redistributed, it lacked some of the features and performance of UNIX, and there was a
not-insignificant (although cheaper than for many other operating systems) licensing fee.

Linus in particular lamented MINIX's inability to do terminal emulation, which he needed in order to
connect to the university's Unix computers. Linus decided to create a terminal emulation program
himself, independently of MINIX. These were the first steps toward creating Linux.

Development was done on MINIX using the GNU C compiler, which is still the main choice for
compiling Linux today (although the code can be built with other compilers, such as the Intel C
Compiler). Linus quickly developed the terminal emulation program and it was sufficient for his needs
for a while. However, he began thinking that it would be nice to be able to do other things with it like
transferring and saving files. This is where Linux was really born.
Originally, Linus wanted to name his creation Freax, a combination of "freak", "free", and "x" (as an
allusion to Unix), but when, in September 1991, the files were uploaded to the FTP server (ftp.funet.fi)
of FUNET (in order to facilitate development), his friend Ari Lemmke, one of the administrators of the
FTP server, thinking that "Freax" was not a good name, named the directory "Linux" on the server
without consulting Linus. Later, however, Torvalds consented to "Linux".

In August, 1991, Linus announced that he was working on this operating system in a Usenet
newsgroup of Minix users:
Hello everybody out there using minix -
I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for
386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any
feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical
layout of the file-system (due to practical reasons) among other things).
I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get
something practical within a few months, and I'd like to know what features most people would
want. Any suggestions are welcome, but I won't promise I'll implement them :-)
Linus (torvalds@kruuna.helsinki.fi)
PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task
switching etc), and it probably never will support anything other than AT-harddisks, as that's all I
have :-(.

On September 17 of the same year, 1991, after a period of self-imposed isolation and intense concentration,
Linus completed a crude version (0.01) of his new operating system. Shortly thereafter, on October 5,
he announced version 0.02, the first official version. It featured the ability to run both the bash shell
(a program that provides the traditional, text-only user interface for Unix-like operating systems) and
the GCC (the GNU C Compiler), two key system utilities.

Since then the resulting Linux kernel has been marked by constant growth throughout its history.
Since the initial release of its source code in 1991, it has grown from a small number of C files under a
license prohibiting commercial distribution to its state in 2009 of over 370 megabytes of source under
the GNU General Public License.

Linux is a modular Unix-like operating system. It derives much of its basic design from principles
established in Unix during the 1970s and 1980s. Such a system uses a monolithic kernel, the Linux
kernel, which handles process control, networking, and peripheral and file system access. Device
drivers are either integrated directly with the kernel or added as modules loaded while the system is
running. An interesting debate took place in 1992 between Tanenbaum, the author of MINIX, and
Torvalds; for details see http://www.linfo.org/linuxobsolete.html .
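
That modularity is easy to observe on a modern Linux machine: the kernel publishes its currently loaded modules through /proc/modules, which is what the lsmod utility reads. A minimal sketch in Python (for illustration only; it simply formats that file):

```python
# Print the kernel modules currently loaded into a running Linux kernel.
# Each line of /proc/modules starts with: module name, memory size, use count, ...
with open("/proc/modules") as f:
    for line in f:
        fields = line.split()
        name, size, use_count = fields[0], int(fields[1]), fields[2]
        print(f"{name:<28} {size:>9} bytes  (use count: {use_count})")
```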

One of the best decisions of Linus was to release Linux under the GPL (GNU General
Public License) rather than under the more restrictive license that he had earlier planned. Developed
by Richard Stallman, a notable programmer and a leading advocate of free software, this most popular
of the free software licenses allows anyone to study, use, modify, extend and redistribute the software
as long as they make the source code freely available for any modified versions that they create and
then redistribute.

The development of Linux is one of the most prominent examples of free and open source software
collaboration; typically all the underlying source code can be used, freely modified, and redistributed,
both commercially and non-commercially, by anyone under licenses such as the GNU General Public
License. Linux is not the only such operating system, although it is by far the most widely used.

Typically Linux is packaged in a format known as a Linux distribution for desktop and server use.
Linux distributions include the Linux kernel and all of the supporting software required to run a
complete system, such as utilities and libraries, the X Window System, the GNOME and KDE desktop
environments, the world's most popular HTTP server—Apache, etc. Commonly used applications
with desktop Linux systems include the Mozilla Firefox web-browser, the OpenOffice.org office
application suite and the GIMP image editor.
In 1993, Linus was teaching an introductory computer course at the University of Helsinki. A young
woman in the class named Tove Monni, a kindergarten teacher and six-time karate champion of
Finland, e-mailed him and asked him out on a date. She would later become his wife. Tove and Linus
went on to have three daughters, Patricia (1996), Daniela (1998) and Celeste (2000).

In late 1996 Linus accepted an invitation to visit the California headquarters of Transmeta, a start-up
company in the first stages of designing an energy saving CPU. Linus was intrigued by their work and
in early 1997 he accepted a position at Transmeta and moved to California with his family. Along with
his work for Transmeta, Linus continued to oversee kernel development.

In June of 2003, Linus left Transmeta in order to focus exclusively on the Linux kernel and began to
work under the auspices of the Open Source Development Labs (OSDL), a consortium formed by high-
tech companies including IBM, Hewlett-Packard, Intel, AMD, Red Hat, Novell and many others.
The purpose of the consortium is to promote Linux development. OSDL merged with The Free
Standards Group in January 2007 to become The Linux Foundation.
Internet conquers the world
Why did the Internet become such a ubiquitous technology and conquer the world? What are the "killer"
applications and ideas that allowed this to happen? Who are the smart people who had the inspiration
and energy to create such applications?

IRC (Internet Relay Chat) of Jarkko Oikarinen


It was already mentioned on this site that the first chat program in the world (EMISARI) was
designed in 1971 by Murray Turoff. EMISARI however was used mainly for government and
educational purposes and never became popular.

The program that gave birth to the modern, extremely popular chat movement was the Internet
Relay Chat (IRC) of Jarkko Oikarinen.

During the summer of 1988, Jarkko Oikarinen (born 16 August 1967, in Kuusamo, Finland), a 2nd
year student in the Department of Electrical Engineering at the University of Oulu, Finland, was
working at the university's Department of Information Processing Science, where he administered the
department's Sun Unix server "tolsun.oulu.fi", which ran a public-access BBS (bulletin board system)
called OuluBox.

The work on server administration didn't take up all his time, so Jarkko started writing a communication
program, which was meant to make OuluBox a little more usable. Partly inspired by Jyrki Kuoppala's
"rmsg" program for sending messages to people on other machines, and partly by Bitnet Relay Chat,
Oikarinen decided to improve the existing multi-user chat program on OuluBox called MultiUser
Talk (MUT) (which had a bad habit of not working properly), itself based on the basic talk program
then available on Unix computers. He called the resulting program IRC (for Internet Relay Chat), and
first deployed it at the end of August, 1988.

When IRC started occasionally having more than 10 users (the first IRC server was the above-
mentioned tolsun.oulu.fi), Jarkko asked some friends at Tampere University of Technology and
Helsinki University of Technology to start running IRC servers to distribute the load. Some other
universities soon followed. Markku Järvinen made the IRC client program more usable by including
support for Emacs editor commands, and before long IRC was in use across Finland on the Finnish
network FUNET, and then on the Scandinavian network NORDUNET.

In 1989 Oikarinen managed to get an account on the legendary machine "ai.ai.mit.edu" at MIT,
from which he recruited the first IRC users outside Scandinavia and arranged the start of the first
IRC server outside Scandinavia. Two other IRC servers soon followed, at the University of
Denver and at Oregon State University, "orion.cair.du.edu" and "jacobcs.cs.orst.edu" respectively. The
administrators emailed Jarkko and obtained connections to the Finnish IRC network to create a
transatlantic connection, and the number of IRC servers began to grow quickly across both North
America and Europe.

IRC became well known to the general public around the world in 1991, when its use skyrocketed as a
lot of users logged on to get up-to-date information on Iraq's invasion of Kuwait, through a functional
IRC link into the country that stayed operational for a week after radio and television broadcasts were
cut off.

The Internet Relay Chat protocol was defined in May 1993 in RFC 1459, by Jarkko Oikarinen and
Darren Reed. It mainly describes a protocol for group communication in discussion forums,
called channels, but it also allows one-to-one communication via private message, as well as chat and
data transfers via Direct Client-to-Client.
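
The protocol itself is plain text over TCP, which is one reason IRC spread so easily. The sketch below is a minimal, hypothetical client in Python (the server name, nickname and channel are made up): it sends the NICK, USER, JOIN and PRIVMSG commands defined in RFC 1459 and answers the server's PING messages so the connection stays open.

```python
import socket

SERVER, PORT = "irc.example.net", 6667   # hypothetical server; 6667 is the traditional IRC port
NICK, CHANNEL = "historybot", "#retro"   # hypothetical nickname and channel

sock = socket.create_connection((SERVER, PORT))

def send(line: str) -> None:
    """IRC commands are plain text lines terminated by CR LF."""
    sock.sendall((line + "\r\n").encode("utf-8"))

send(f"NICK {NICK}")
send(f"USER {NICK} 0 * :IRC history demo")
send(f"JOIN {CHANNEL}")
send(f"PRIVMSG {CHANNEL} :Hello from 1988!")

# Minimal read loop: print server traffic and answer PINGs so the server does not drop us.
buffer = b""
while True:
    data = sock.recv(4096)
    if not data:
        break
    buffer += data
    while b"\r\n" in buffer:
        line, buffer = buffer.split(b"\r\n", 1)
        text = line.decode("utf-8", errors="replace")
        print(text)
        if text.startswith("PING"):
            send("PONG" + text[4:])
```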

As of the end of 2009, the top 100 IRC networks served more than half a million users at a time, with
hundreds of thousands of channels, operating on a total of some 1500 servers worldwide.
Archie of Alan Emtage
The Internet's first search engine—the Archie system—was created in 1989 by Alan Emtage, a student
at McGill University in Montreal, Canada.

Emtage (born November 27, 1964, in Barbados) conceived the first version of Archie, which was
actually a pre-Web internet search engine for locating material in public FTP archives.

A native of Barbados, Alan attended high school at Harrison College from 1975 to 1983 (in 1981
becoming the owner of a Sinclair ZX81 with 1K of memory), where he graduated at the top of his class,
winning the Barbados Scholarship. Alan was always crazy about computers, and while a student at
Harrison College he tossed around a number of other career choices, including meteorology and
organic chemistry, but chose computer science.

In 1983 Alan entered McGill University in Montreal, Canada, to study for a Bachelor's degree in
computer science. In 1987 he continued his studies for a Master's degree, which he obtained in 1991. He
was part of the team that brought the first Internet link to eastern Canada (and only the second link in
the country) in 1986.

In 1989, while a student working as a systems administrator for the School of Computer Science,
Alan conceived and implemented the original version of the Archie search engine, the world's first
Internet search engine and the start of a line which leads directly to today's giants Yahoo and Google.
(The name Archie stands for "archives" without the "v", not the kid from the comics.)

Working as a systems administrator, Alan was responsible for locating software for the students and
staff of the faculty. The necessity for searching information became the mother of invention.

He decided to develop a set of programs that would go out and look through the repositories of
software (public anonymous FTP (File Transfer Protocol) sites) and build basically an index of the
available software, a searchable database of filenames. One thing led to another, and word got out that
he had an index available, and people started writing in and asking if he could search the index on
their behalf.

As a result, rather than doing it himself, he wrote software that allowed people to come in and
search the index themselves. That was the beginning.
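
In spirit, Archie's index was little more than a list of (filename, server, path) records matched against a search term. A toy sketch in Python (with made-up hosts and files; the real Archie periodically fetched directory listings from anonymous FTP servers) might look like this:

```python
# Toy version of Archie's idea: an index of filenames gathered from several
# (hypothetical) anonymous FTP servers, searched by substring.
listings = {
    "ftp.site-a.example": ["/pub/editors/uemacs310.tar.Z", "/pub/games/tetris.zip"],
    "ftp.site-b.example": ["/mirrors/gnu/gcc-1.40.tar.Z", "/pub/tools/archie-client.c"],
}

# Build the index: one record per file, keyed by its lower-cased filename.
index = []
for host, paths in listings.items():
    for path in paths:
        filename = path.rsplit("/", 1)[-1].lower()
        index.append((filename, host, path))

def search(term: str):
    """Return every (host, path) whose filename contains the search term."""
    term = term.lower()
    return [(host, path) for filename, host, path in index if term in filename]

print(search("gcc"))   # -> [('ftp.site-b.example', '/mirrors/gnu/gcc-1.40.tar.Z')]
```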

It seems that the administration of the university was the last to find out about what Alan had done.
As Alan remembered: "We had no permission from the school to provide this service; and as a matter
of fact, the head of our department found out about it for the first time by going to a conference.
Somebody went up to him and said they really wanted to congratulate him for providing this service
and he graciously smiled, said 'You're welcome' and went back to McGill and said 'what the hell is all
of this? I have no idea what they're talking about'."
"That was a once in a lifetime opportunity. It was largely being in the right place at the right time with
the right idea. There were other people who had similar ideas and were working on similar projects, I
just happened to get there first."

Archie is considered the original search engine, and a lot of the techniques that Emtage and the other
people who worked with him on Archie came up with are basically the same techniques that Google,
Yahoo! and all the other search engines use.

Later Alan and his colleagues developed various versions that allowed them to split up the service so
that it would be available at other universities rather than taxing the facility at McGill.

In 1992, Emtage along with Peter Deutsch formed Bunyip Information Systems—the world's first
company expressly founded for and dedicated to providing Internet information services with a
licensed commercial version of the Archie search engine used by millions of people worldwide.
Emtage was a founding member of the Internet Society and went on to create and chair several
Working Groups at the Internet Engineering Task Force, the standard-setting body for the Internet.
Working with other pioneers such as Tim Berners-Lee, Marc Andreessen, Mark McCahill (creator of
Gopher) and Jon Postel, Emtage co-chaired the Uniform Resource Identifier (URI) Working Group
which created and codified the standard for Uniform Resource Locators (URLs).

Emtage is currently Chief Technical Officer at Mediapolis, Inc., a web engineering company in New
York City. Besides computers, traveling and photography are his passions. He has been sky-diving in
Mexico, hang-gliding in Brazil, diving in Fiji, hot-air ballooning in Egypt and white-water rafting in
the Arctic Circle.

NCSA Mosaic of Marc Andreessen and Eric Bina


NCSA Mosaic was neither the first web browser (first was the WorldWideWeb of Berners-Lee) nor the
first graphical web browser (it was preceded by the lesser-known Erwise and ViolaWWW), but it was
the web browser credited with popularizing the World Wide Web. Its clean, easily understood user
interface, reliability, Windows port and simple installation all contributed to making it the application
that opened up the Web to the general public.

In 1992 Marc Andreessen was a student in Computer Science and a part-time assistant at the NCSA
(National Center for Supercomputing Applications) at the University of Illinois. His position at NCSA
allowed him to become quite familiar with the Internet and the World Wide Web, which were beginning
to take off.

There were several web browsers available then, but they were for Unix machines, which were rather
expensive. This meant that the Web was mostly used by academics and engineers who had access to
such machines. The user interfaces of the available browsers also tended not to be very user-friendly,
which further hindered the spread of the WWW. That's why Marc decided to develop a browser that was
easier to use and more graphically rich.

Later in 1992, Andreessen recruited his colleague from NCSA and the University of Illinois, Eric Bina
(who had received his Master's in Computer Science from the University of Illinois in 1988), to help
with his project. The two worked tirelessly. Bina remembers that they would work three to four days
straight, then crash for about a day. They called their new browser Mosaic. It was much more
sophisticated graphically than other browsers of the time. Like other browsers it was designed to
display HTML documents, but new formatting tags like center were included.

The most important feature was the inclusion of the image tag, which allowed images to be embedded in
web pages. Earlier browsers allowed the viewing of pictures, but only as separate files. NCSA Mosaic
made it possible for images and text to appear on the same page. It also featured a graphical interface
with clickable buttons that let users navigate easily and controls that let users scroll through text with
ease. Another innovative feature was the new form of hyperlink. In earlier browsers hypertext links
had reference numbers that the user typed in to navigate to the linked document. The new hyperlinks
allowed the user to simply click on a link to retrieve a document.

NCSA Mosaic was also a client for earlier protocols such as FTP, NNTP, and gopher.

In January 1993, Mosaic was posted for free download on NCSA's servers and became immediately
popular; more than 5,000 copies were being downloaded each month. Within weeks tens of thousands
of people had downloaded the program. The original version was for Unix, but Andreessen and Bina
quickly put together a team to develop PC and Mac versions, which were released in the late spring of
the same year. With Mosaic now available for more popular platforms, its popularity soon
skyrocketed. More users meant a bigger Web audience. The bigger audiences spurred the creation of
new content, which in turn further increased the audience on the Web and so on. As the number of
users on the Web increased, the browser of choice was Mosaic so its distribution increased
accordingly.

By December 1993, Mosaic's growth was so great that it made the front page of the New York Times
business section. The article concluded that Mosaic was perhaps "an application program so different
and so obviously useful that it can create a new industry from scratch". NCSA administrators were
quoted in the article, but there was no mention of either Andreessen or Bina. Marc realized that when
he was through with his studies NCSA would take over Mosaic for themselves. So when he graduated
in December 1993, he left and moved to Silicon Valley in California.

Later Andreessen and Jim Clark, the founder of Silicon Graphics, incorporated Mosaic
Communications Corp., and developed the famous Netscape browser and server products.

NCSA Mosaic won multiple technology awards, including being named 1993 Product of the Year by
InfoWorld magazine and 1994 Technology of the Year by Industry Week magazine.

NCSA discontinued support for Mosaic in 1997, shifting its focus to other research and development
projects, but the Mosaic browser is still available for download along with documentation at NCSA's FTP
site—ftp://ftp.ncsa.uiuc.edu/Mosaic/.

World Wide Web Wanderer of Matthew Gray


The brilliant idea of the World Wide Web was devised in the spring of 1989 in the head of Tim Berners-
Lee, a physicist at CERN, but it didn't gain any widespread popular use until the remarkable NCSA
Mosaic web browser was introduced at the beginning of 1993.

In the spring of 1993, just months after the release of Mosaic, Matthew Gray, who studied physics at the
Massachusetts Institute of Technology (MIT) and was one of the three members of the Student
Information Processing Board (SIPB) who set up the site www.mit.edu, decided to write a program,
called the World Wide Web Wanderer, to systematically traverse the Web and collect sites. The Wanderer
was first functional in the spring of 1993 and became the first automated Web agent (spider or web
crawler). The Wanderer certainly did not reach every site on the Web, but it was run with a consistent
methodology, hopefully yielding consistent data for the growth of the Web.

Matthew was initially motivated primarily to discover new sites, as the Web was still a relatively small
place (in early 1993 the total number of web sites all over the world was about 100, and by June of
1995, even with the phenomenal growth of the Internet, the number of Web servers had increased to a
point where one in every 270 machines on the Internet was a Web server). As the Web started to grow
rapidly after 1993, the focus quickly changed to charting the growth of the Web. The first report,
compiled using the data collected by the Wanderer (see the table below), covers the period from June
1993 to June 1995.

Wanderer was written using the Perl language and while crawling the Web, it generated an index
called Wandex—the first web database. Initially, the Wanderer counted only Web servers, but shortly
after its introduction, it started to capture URLs as it went along.
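
The Wanderer itself was written in Perl and its code is not reproduced here; the Python sketch below only illustrates the general idea of a crawler that follows links, records the URLs it captures and counts the distinct Web servers it has seen (the starting URL is a placeholder):

    # A minimal crawler sketch in the spirit of the Wanderer: follow links
    # breadth-first, record every URL captured and count distinct hosts (servers).
    import re
    import urllib.request
    from collections import deque
    from urllib.parse import urljoin, urlparse

    def crawl(start_url, max_pages=10):
        seen_urls, hosts = set(), set()
        queue = deque([start_url])
        while queue and len(seen_urls) < max_pages:
            url = queue.popleft()
            if url in seen_urls:
                continue
            seen_urls.add(url)
            hosts.add(urlparse(url).netloc)      # one entry per Web server
            try:
                page = urllib.request.urlopen(url, timeout=5)
                html = page.read().decode("utf-8", "ignore")
            except Exception:
                continue
            for link in re.findall(r'href="(http[^"]+)"', html):
                queue.append(urljoin(url, link)) # capture URLs as it goes along
        return seen_urls, hosts

    urls, servers = crawl("http://example.com/")
    print(len(urls), "URLs captured on", len(servers), "servers")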

Matthew Gray's Wanderer created quite a controversy at the time, partially because early versions of
the program ran rampant through the Web and caused a noticeable network performance
degradation. This degradation occurred because it would access the same page hundreds of times a
day. The Wanderer soon amended its ways, but the controversy over whether spiders were good or
bad for the Internet remained for some time.

The Wanderer was certainly not the Internet's first search engine (that was the Archie of Alan Emtage),
but it was the first web robot and, with its index Wandex, clearly had the potential to become
the first general-purpose Web search engine, years before Yahoo and Google. Matthew Gray himself,
however, never made this claim and always stated that this was not its purpose.
In any case, the Wanderer inspired a number of programmers to follow up on the idea of web robots.

Yahoo of David Filo and Jerry Yang


In the beginning of 1994 two Ph.D. candidates in Electrical Engineering at Stanford University—Jerry
Yang (born November 6, 1968, in Taipei, Taiwan) and David Filo (born April 20, 1966, in Wisconsin)
were looking for a single place to find useful Web sites and for a way to keep track of their personal
interests on the Internet. As they didn't manage to find such a tool, they decided to create their own.
Thus the now ubiquitous web portal and a global brand Yahoo! began as a student hobby and evolved
into a site that has changed the way people communicate with each other, find and access
information.

Filo and Yang started work on their project in a campus trailer in February 1994, and before long
they were spending more time on their home-brewed lists of favorite links than on their doctoral
dissertations. Eventually, Jerry and David's lists became too long and unwieldy, and they broke them
out into categories. When the categories became too full, they developed subcategories, and thus the
core concept behind Yahoo was born.
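
Conceptually, such a guide is just a tree of categories with lists of links at the leaves. A tiny Python sketch of that structure follows; the category names and URLs are invented for illustration, not Yahoo's actual data:

    # A toy hierarchical directory: categories contain subcategories,
    # and each subcategory holds a list of links.
    directory = {
        "Computers": {
            "World Wide Web": ["http://info.cern.ch/"],
            "Programming": [],
        },
        "Recreation": {
            "Travel": [],
        },
    }

    def add_link(directory, category, subcategory, url):
        """File a link under category/subcategory, creating the path if needed."""
        directory.setdefault(category, {}).setdefault(subcategory, []).append(url)

    add_link(directory, "Computers", "Programming", "http://example.edu/perl.html")
    print(directory["Computers"]["Programming"])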

The Web site started out as Jerry and David's Guide to the World Wide Web, but eventually received
a new moniker with the help of a dictionary. Filo and Yang decided to select the name Yahoo because
they liked the general definition of the word (which comes from Gulliver's Travels by Jonathan Swift,
where a Yahoo is a brutish, uncivilized creature): rude, unsophisticated, uncouth. Later the name Yahoo
was popularized as a backronym for Yet Another Hierarchical Officious Oracle.

Yahoo! itself first resided on Yang's student workstation, Akebono (the URL was
akebono.stanford.edu/yahoo), while the software was lodged on Filo's computer, Konishiki; both
machines were named after legendary sumo wrestlers.

To their surprise, Jerry and David soon found they were not alone in wanting a single place to find
useful Web sites. Before long, hundreds of people were accessing their guide from well beyond the
Stanford trailer. Word spread from friends to what quickly became a significant, loyal audience
throughout the closely-knit Internet community. Yahoo! celebrated its first million-hit day in the fall
of 1994, translating to almost 100 thousand unique visitors.

The Yahoo! domain was created on January 18, 1995. Due to the torrent of traffic and enthusiastic
reception Yahoo! was receiving, the founders knew they had a potential business on their hands. In
March 1995, the pair incorporated the business and met with dozens of Silicon Valley venture
capitalists, looking for financing. They eventually came across Michael Moritz of Sequoia Capital, the
well-regarded firm whose most successful investments included Apple Computer, Atari, Oracle and
Cisco Systems. Sequoia Capital agreed to fund Yahoo! in April 1995 with an initial investment of
nearly $2 million.

Like many other web search engines, Yahoo started as a web directory, but soon diversified into a web
portal and a search engine.

Realizing their new company had the potential to grow quickly, the founders began to shop for a
management team. They hired Tim Koogle, a veteran of Motorola, as chief executive officer and
Jeffrey Mallett, founder of Novell's WordPerfect consumer division, as chief operating officer. After
securing a second round of funding in the fall of 1995, Yahoo held its initial public offering in April
1996, raising $33.8 million, with a total of 49 employees.

Here you can see the earliest known Yahoo! website from 1996.

Today, Yahoo! Inc. is a leading global Internet communications, commerce and media company that
offers a comprehensive branded network of services to more than 350 million individuals each month
worldwide. It provides internet communication services (such as Yahoo! Messenger and Yahoo! Mail),
social networking services and user-generated content (such as My Web, Yahoo! Personals, Yahoo!
360°, Delicious, Flickr, and Yahoo! Buzz), media contents and news (such as Yahoo! Sports, Yahoo!
Finance, Yahoo! Music, Yahoo! Movies, Yahoo! News, Yahoo! Answers and Yahoo! Games), etc.
Headquartered in Sunnyvale, California, Yahoo! has offices in Europe, Asia, Latin America, Australia,
Canada and the United States.

Amazon of Jeff Bezos

Jeffrey Preston Bezos, the founder of the famous Amazon.com, was born on January 12, 1964, in
Albuquerque, New Mexico, when his mother, Jackie, was still in her teens. Her marriage to his father
lasted little more than a year. She remarried when Bezos was five and Jeffrey took the name of his
stepfather, Miguel Bezos.

In 1971 the family moved to Houston, Texas, where Jeffrey attended an elementary school, showing
intense and varied scientific interests. He rigged an electric alarm to keep his younger siblings out of
his room and maintain his privacy and converted his parents' garage into a laboratory for his science
projects.

Later the family moved to Miami, Florida, where Bezos attended a high school. While in high school,
he attended a student science training program at the University of Florida, which helped him
receive a Silver Knight Award in 1982. He entered Princeton University, planning to study physics, but
soon returned to his love of computers and graduated summa cum laude, Phi Beta Kappa with a
degree in computer science and electrical engineering.

After graduating from Princeton, Bezos worked on Wall Street in the computer science field. He then
worked on building a network for international trade for a company known as Fitel, and later for
Bankers Trust, becoming a vice-president. After that he also worked in computer science for
D. E. Shaw & Co.

In 1994, Bezos decided to take part in the Internet gold rush, developing the idea of selling books to a
mass audience through the Internet. There is an apocryphal legend that Bezos decided to found
Amazon after making a cross-country drive with his wife from New York to Seattle, writing up the
Amazon business plan on the way and setting up the original company in his garage.

Bezos decided to name the company Amazon after the world's largest river, and reserved the domain
name Amazon.com. The company was incorporated in the state of Washington, beginning service in
July 1995. The initial Web site was text-heavy and gray; it wasn't pretty and didn't even list
book publication dates and other key information. But that didn't concern Madrona Venture Group's
Tom Alberg, who invested $100,000 in Amazon in 1995.

By the fourth month in business, the company was selling more than 100 books a day.

Bezos succeeded in creating more than a bookstore; he created an online community. The site was
revolutionary early on for allowing average consumers to create online product reviews. It not only
drew people who wanted to buy books, but also those who wanted to research them before buying.

The company began as an online bookstore, but gradually incorporated a number of products and
services into its shopping model, either through development or acquisition.

In 1997, Amazon added music CDs and movie videos to the Web site, which many considered to be a
wise move designed to complement the company's expansive book collection. Soon Amazon added
five more product categories—toys, electronics, software, video games and home improvement.

In 1999, Time magazine (the cover is shown on the lower image) named Bezos Person of the Year,
recognizing the company's success in popularizing online shopping.

Amazon's initial business plan was unusual: the company did not expect a profit for four to five years
and the strategy was effective. In 1996, its first full fiscal year in business, Amazon generated $15.7
million in sales, a figure that would increase by 800 percent the following year. The company
successfully survived the dot-com bubble and remains profitable to this day. Revenues increased thanks
to product diversification and an international presence: $3.9 billion in 2002, $5.3 billion in 2003,
$6.9 billion in 2004, $8.5 billion in 2005, and $10.7 billion in 2006. In May 1997 Amazon.com issued
its initial public offering of stock.
In 2007 Amazon launched its remarkable series of Kindle e-book readers. In 2011 Amazon entered
the tablet business with the Kindle Fire.

The site amazon.com attracted over 900 million visitors annually by 2011. By 2012 the company had
over 56,000 employees.

In 2004, Bezos founded a human space flight startup company called Blue Origin. He is known for his
attention to business process details, trying to know about everything from contract minutiae to how
he is quoted in all Amazon press releases.

eBay of Pierre Omidyar


On September 4, 1995, the 28-year-old software developer and entrepreneur Pierre Omidyar launched
the famous eBay auction site as an experiment in how a level playing field would affect the efficiency
of a marketplace.

Pierre Morad Omidyar was born on June 21, 1967 in Paris, France, to Iranian immigrant parents, both
of whom had been sent by his grandparents to attend university there. The family later moved to
Washington, D.C., in the US. Pierre graduated with a degree in computer science from Tufts University
in 1988. Shortly after, Omidyar went to work for Claris, an Apple Computer subsidiary, where he helped
write the vector-based drawing application MacDraw.

In 1991 Pierre started his career as an entrepreneur, co-founding Ink Development, a pen-based
computing startup that was later rebranded as an e-commerce company and renamed eShop.

Over a long holiday weekend in the middle of 1995, Pierre sat down in his living room in San
Jose, California, to write the original computer code for what eventually became an internet
superbrand—the auction site eBay. Initially, he wanted to call his site echobay, but the name had
already been registered. Thus the word eBay was made up on the fly by Omidyar.

The site www.ebay.com was launched on Labor Day, 1995, under the more prosaic title of Auction
Web, and was hosted on a site Omidyar had created for information on the Ebola virus. The site began
with the listing of a single broken laser pointer. Though Pierre had intended the listing to be a test
more than a serious offer to sell at auction, he was shocked when the item soon sold for $14.83.

Auction Web was later renamed eBay. The service, meant to be a marketplace for the sale of goods
and services between individuals, was free at first, but started charging in order to cover internet
service provider costs, and soon it began making a profit.

What was eBay's profitable business model?

It was built on the idea of an online person-to-person trading community on the Internet, using the
World Wide Web. Buyers and sellers are brought together in a manner where sellers are permitted to
list items for sale, buyers can bid on items of interest, and all eBay users can browse through listed
items in a fully automated way. The items are arranged by topic, and each type of auction has its own
category.

eBay has both streamlined and globalized person-to-person trading, which had traditionally been
conducted through such forms as garage sales, collectibles shows and flea markets, with its web
interface. This facilitates easy exploration for buyers and enables sellers to list an item for sale
within minutes of registering.

Browsing and bidding on auctions is free of charge, but sellers pay several kinds of fees (a worked example follows the list):
• When an item is listed on eBay, a nonrefundable Insertion Fee is charged, which ranges between 30
cents and $3.30, depending on the seller's opening bid on the item.
• A fee is charged for additional listing options to promote the item, such as highlighted or bold listing.
• A Final Value (final sale price) fee is charged at the end of the seller's auction. This fee generally
ranges from 1.25% to 5% of the final sale price.
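
As a worked example of the fees listed above: the exact insertion-fee tiers and the final-value percentage in the Python sketch below are hypothetical placeholders chosen within the quoted ranges, not eBay's actual fee schedule:

    # Hypothetical fee calculation within the ranges quoted above.
    def insertion_fee(opening_bid):
        # Invented tier breakpoints between the quoted $0.30 and $3.30.
        if opening_bid < 10:
            return 0.30
        elif opening_bid < 50:
            return 1.10
        return 3.30

    def final_value_fee(sale_price, rate=0.05):
        # A flat 5% is assumed; the text quotes 1.25% to 5%.
        return sale_price * rate

    opening_bid, sale_price = 9.99, 41.00
    total = insertion_fee(opening_bid) + final_value_fee(sale_price)
    print(f"Seller pays ${total:.2f} in fees on a ${sale_price:.2f} sale")
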
eBay notifies the buyer and seller via e-mail at the end of the auction if a bid exceeds the seller's
minimum price, and the seller and buyer finish the transaction independently of eBay. The binding
contract of the auction is between the winning bidder and the seller only.

This appeared to be an excellent business model.

By 1996 the company was large enough to require the skills of a Stanford MBA in Jeffrey Skoll, who
came aboard an already profitable ship. Meg Whitman, a Harvard graduate, soon followed as
president and CEO, along with a strong business team under whose leadership eBay grew rapidly,
branching out from collectibles into nearly every type of market. eBay's vision for success transitioned
from one of commerce—buying and selling things—to one of connecting people around the world.

With exponential growth and strong branding, eBay thrived, eclipsing many of the other upstart
auction sites that dotted the dot-com bubble. By the time eBay had gone public in 1998, both Omidyar
and Skoll were billionaires.

In 2009 the net worth of the company reached $5.5 billion. Over one million people all over the
world now rely on their eBay sales as part of their income.

Google of Larry Page and Sergey Brin


In January 1996, exactly two years from the moment when two Ph.D. candidates at Stanford
University (Jerry Yang and David Filo) started work on the project that would become the now
ubiquitous web portal Yahoo, two other Stanford graduate students in computer science, Larry Page
and Sergey Brin, working on the Stanford Digital Library Project (to develop the enabling technologies
for a single, integrated and universal digital library), began collaborating on a research project that
would evolve into the world-renowned web portal Google.

Sergey Brin was born in 1973 as Сергей Михайлович Брин in Moscow, Soviet Union, to a Russian
Jewish family, which moved to the USA in 1979. In 1990 Brin enrolled in the University of Maryland to
study computer science and mathematics, where he received his Bachelor of Science degree in 1993
and began his graduate study in Computer Science at Stanford University on a graduate fellowship
from the National Science Foundation.

Lawrence "Larry" Page, was born in 1973 in East Lansing, Michigan, in a Jewish family of computer
science professors at Michigan State University. He holds a Bachelor of Science degree in computer
engineering from the University of Michigan and a Masters degree in Computer Science from Stanford
University.

In 1995, searching for a dissertation theme, Page considered (among other things) exploring the
mathematical properties of the World Wide Web, understanding its link structure as a huge graph.
His supervisor Terry Winograd encouraged him to pick this idea (which Page later recalled as "the best
advice I ever got") and Page focused on the problem of finding out which web pages link to a given
page, considering the number and nature of such backlinks to be valuable information about that page
(with the role of citations in academic publishing in mind).

In his research project, nicknamed BackRub, Page was soon joined by Sergey Brin, a fellow Stanford
Ph.D. student, who Page met in 1995. Page's web crawler began exploring the web in March 1996,
setting out from Page's own Stanford home page as its only starting point. To convert the backlink
data that it gathered into a measure of importance for a given web page, Brin and Page developed
the PageRank algorithm.
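
The idea behind PageRank can be shown in a few lines: a page's rank is, roughly, a damped sum of the ranks of the pages that link to it, each divided by that linking page's number of outgoing links. The Python sketch below iterates this on a toy graph; it only illustrates the published idea, not Google's code:

    # Minimal PageRank iteration on a toy link graph.
    def pagerank(links, damping=0.85, iterations=50):
        """links: {page: [pages it links to]} -> {page: rank}."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / n for p in pages}
            for page, targets in links.items():
                if not targets:
                    continue
                share = damping * rank[page] / len(targets)
                for target in targets:          # each backlink passes on a share
                    new_rank[target] += share
            rank = new_rank
        return rank

    web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))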

Analyzing BackRub's output, which for a given URL consisted of a list of backlinks ranked by
importance, it occurred to them that a search engine based on PageRank would produce better results
than existing techniques (existing search engines at the time essentially ranked results according to
how many times the search term appeared on a page). This was not an entirely original invention, as a
small search engine called Rankdex was already exploring a similar strategy.
BackRub was written in Java and Python and ran on several Sun Ultra and Intel Pentium boxes
running Linux. The primary database was kept on a Sun Ultra II with 28 GB of disk storage.

As the search engine grew in popularity at Stanford, installed on the Stanford website (domain name
google.stanford.edu), both Page and Brin decided that BackRub needed a new name. Page turned to
fellow graduate student Sean Anderson for help, and they discussed several possible new names. After
several days of brainstorming in their graduate student office, Anderson verbally suggested the
word googleplex (a googolplex is the number one followed by a googol zeros. The term googol was
coined in 1938 by Milton Sirotta (1929–1980), nephew of American mathematician Edward Kasner).

Page and Brin liked googleplex, but Page suggested they shorten it to googol. When Anderson
searched to see if the domain name was available, he misspelled googol as google, which was available.
Because googol was unavailable, and because Page thought google had an Internet ring to it like Yahoo!
or Amazon, he registered their search engine as google.com on the Internet domain name registry.

In contrast to other busy-looking pages with flashy banners and blinking lights, Brin and Page decided
to keep google.com clean and simple to allow for faster searches. This appeared to be a wise decision.

Though the Stanford Digital Library gave them $10,000 in funding, which Brin and Page used to build
and string together inexpensive PCs, they were still perpetually short on money. As google.com’s
popularity continued to grow at Stanford, they were faced with a decision: finish their graduate work
or create a business around their growing search engine. Reluctant to leave their studies, Page and
Brin first offered to sell their search engine, for one million US dollars, to the AltaVista search portal. To their
disappointment however, AltaVista passed, as did Yahoo, Excite, and other search engines.

They were rejected in part because many search engines wanted people to spend more time and
money on their Web site, while Google was designed to give people fast answers to their questions by
quickly sending them to relevant Web pages. Then Yahoo's cofounder David Filo advised them to take
a leave of absence from Stanford to start their own business. Filo initially encouraged Brin and Page
not only because they were friends, but also because Yahoo was interested in cultivating a field of
healthy search engines they could use.

The domain google.com was registered on September 15, 1997. Page and Brin formally incorporated
their company, Google Inc., on September 4, 1998 at a friend's garage in Menlo Park, California.

Both Brin and Page had been against using advertising pop-ups in a search engine, or an "advertising
funded search engines" model, and they wrote a research paper in 1998 on the topic while still
students. However, they soon changed their minds on allowing simple text ads.

By the end of 1998, Google had an index of about 60 million pages. The home page was still marked as
"BETA", but some observers already argued that Google's search results were better than those of
competitors like Hotbot or Excite.com, and praised it for being more technologically innovative than
the overloaded portal sites (like Yahoo!, Excite.com, Lycos, Netscape's Netcenter, AOL.com and
MSN.com), which at that time, during the growing dot-com bubble, were seen as "the future of the
Web", especially by stock market investors.

The Google search engine attracted a loyal following among the growing number of Internet users,
who liked its simple design. In 2000, Google began selling advertisements associated with search
keywords. The ads were text-based to maintain an uncluttered page design and to maximize page
loading speed. Keywords were sold based on a combination of price bid and click-throughs, with
bidding starting at $0.05 per click. This was not a pioneering approach, as this model of selling
keyword advertising was already used by Goto.com. While many of its dot-com rivals failed in the new
Internet marketplace, Google quietly rose in stature while generating revenue.
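
One common way to combine "price bid and click-throughs" is to order the ads for a keyword by bid multiplied by click-through rate, so that a cheap but frequently clicked ad can outrank an expensive but ignored one. The sketch below illustrates that combination with invented figures; it is not a description of Google's actual auction:

    # Rank ads for a keyword by bid * click-through rate (invented figures).
    ads = [
        {"advertiser": "shop-a", "bid": 0.05, "ctr": 0.04},
        {"advertiser": "shop-b", "bid": 0.12, "ctr": 0.01},
        {"advertiser": "shop-c", "bid": 0.07, "ctr": 0.03},
    ]
    for ad in sorted(ads, key=lambda a: a["bid"] * a["ctr"], reverse=True):
        print(ad["advertiser"], round(ad["bid"] * ad["ctr"], 4))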

Now Google runs over one million servers in data centers around the world, and processes over one
billion search requests and twenty petabytes of user-generated data every single day. Google's rapid
growth since its incorporation has triggered a chain of products, acquisitions and partnerships beyond
the company's core product—the search engine. Google offers online productivity software, such as
its Gmail e-mail software, and social networking tools, including Orkut and, more recently, Google
Buzz. Google's products extend to the desktop as well, with applications such as the web browser
Google Chrome, the Picasa photo organization and editing software, and the Google Talk instant
messaging application. Recently, Google created the Android mobile phone operating system (based
on Linux), used on a number of GSM smartphones.

Napster of Shawn Fanning


Shawn Fanning was born on November 22, 1980, in Brockton, Massachusetts. He worked summers at
his uncle John Fanning's Internet company, Chess.net. During this work, wanting to create an
easier method of finding music than by searching IRC or Lycos, he spent months writing the code for
Napster, a program that could provide an easy way to download music, using an anonymous P2P
(peer-to-peer) file sharing service.

After graduating from Harwich High School in 1998, Shawn enrolled at Boston's Northeastern
University, but he rarely attended class and spent Christmas break working at the Hull, Massachusetts
chess.net office with his uncle John, pushing himself to get the Napster system completed. In January,
1999, Shawn dropped out of Northeastern University after the first semester to finish writing the
software.

The service, named Napster after Fanning's hairstyle-based nickname, was launched in June 1999 and
operated until July 2001, when it was shut down by court order.

Shawn's uncle ran all aspects of the company's operations for a period from their office. The final
agreement gave Shawn 30% control of the company, with the rest going to his uncle.

Napster was the first of the massively popular P2P file distribution systems, although it was not fully
peer-to-peer, since it used central servers, in order to maintain lists of connected systems and the files
they provided, while actual transactions were conducted directly between client machines. Actually
there were already networks that facilitated the distribution of files across the Internet, such as IRC,
Hotline, and USENET, but Napster specialized exclusively in music in the form of MP3 files and
presented a user-friendly interface.
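
The hybrid architecture can be pictured as a central catalog that only answers "who has this file", while the download itself happens between the two clients. A toy Python sketch of that idea follows; the peer addresses and file names are invented, and this is not Napster's actual code:

    # Toy central index for a Napster-like hybrid P2P system.
    from collections import defaultdict

    class CentralIndex:
        def __init__(self):
            self.catalog = defaultdict(set)      # song title -> peers that share it

        def register(self, peer, shared_files):  # a client announces its MP3 list
            for title in shared_files:
                self.catalog[title].add(peer)

        def lookup(self, title):                 # the server only says who has it
            return sorted(self.catalog.get(title, ()))

    index = CentralIndex()
    index.register("10.0.0.5:6699", ["song_a.mp3", "song_b.mp3"])
    index.register("10.0.0.9:6699", ["song_b.mp3"])
    print(index.lookup("song_b.mp3"))  # the transfer itself goes peer to peer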

The result was a system whose popularity generated an enormous selection of music to download.
Napster made it relatively easy for music enthusiasts to download copies of songs that were otherwise
difficult to obtain, like older songs, unreleased recordings, and songs from concert bootleg recordings.
Many users felt justified in downloading digital copies of recordings they had already purchased in
other formats, like LP and cassette tape, before the compact disc emerged as the dominant format for
music recordings.

In 2000, Fanning and Napster were featured on the covers of two of the most popular magazines—
Newsweek and Time (see the lower image).

Napster's facilitation of transfer of copyrighted material (songs) raised the ire of the Recording
Industry Association of America (RIAA), which almost immediately (in December, 1999), filed a
lawsuit against it. In April, 2000, the rock band Metallica also sued Napster for copyright
infringement. The service would only get bigger as the trials, meant to shut down Napster, also gave it
a great deal of publicity. Soon millions of users, many of them college students, flocked to it.

An injunction was issued in March, 2001 ordering Napster to prevent the trading of copyrighted music
on its network. In July 2001, Napster shut down its entire network in order to comply with the
injunction. In September, 2001, the case was partially settled, as Napster agreed to pay music creators
and copyright owners a $26 million settlement for past, unauthorized uses of music, as well as an
advance against future licensing royalties of $10 million. In order to pay those fees, Napster attempted
to convert their free service to a subscription system. Thus traffic to Napster was reduced.

Although the original service was shut down, it paved the way for decentralized peer-to-peer file-
distribution programs, which have been much harder to control.

BitTorrent of Bram Cohen

In 2001 an American computer programmer named Bram Cohen began his work on a protocol and a
program which would change the entertainment industry and the exchange of information on the Web.
Both the peer-to-peer (P2P) protocol and the first file-sharing program to use it were named
BitTorrent.

Bram Cohen was born in 1975 in New York in a Jewish family, and grew up in Manhattan's Upper
West Side, where his father taught him the basics of computer coding at age 6. In first grade, he
flummoxed friends with comparisons of the Commodore 64 vs. the Timex Sinclair personal
computers, and was actively programming by age 10. Cohen graduated from Stuyvesant High
School and later attended the State University of New York at Buffalo for two years. In 1995 he decided
to drop out of college ("bored out of his mind", he said) to work for several dot-com companies
throughout the mid-to-late 1990s, the last being MojoNation, an ambitious but ill-fated project.

MojoNation allowed people to break up confidential files into encrypted chunks and distribute those
pieces on computers also running the software. If someone wanted to download a copy of this
encrypted file, he would have to download it simultaneously from many computers. Cohen found this
concept perfect for a file-sharing program and protocol, since with programs like the then-popular
KaZaA it takes a long time to download a large file because the file is (usually) coming from one
source.

That's why Cohen designed BitTorrent to be able to download files from many different sources, thus
speeding up the download time, especially for users with faster download than upload speeds, which
is the case with the vast majority of users. Thus, the more popular a file is, the faster a user will be able
to download it, since many people will be downloading it at the same time, and these people will also
be uploading the data to other users.

In April 2001, Cohen quit MojoNation and began work on BitTorrent. He wrote the first BitTorrent
client implementation in the Python language, and several other programs have since implemented the
protocol.

BitTorrent gained its fame for its ability to quickly share large music and movie files online. Cohen
himself has claimed he has never violated copyright law using his software and initially he designed it
for the community at etree.org, a site that shares only music by artists who expressly allow the
practice. However, it didn't take long for the software to take hold for illegal music and movie
swapping.

So how exactly does BitTorrent work?

BitTorrent is a protocol that offloads some of the file tracking work to a central server (called
a tracker). Another basic rule is that it uses a principle called tit-for-tat, which means that in order to
receive files, you have to give them. This solves the problem of leeching, which was one of developer
Bram Cohen's primary goals. With BitTorrent, the more files you share with others, the faster your
downloads are. And as it was already mentioned, to make better use of available network bandwidth
(the pipeline for data transmission), BitTorrent downloads different pieces of the file you want
simultaneously from multiple computers.

To download a file with BitTorrent, you open a Web page and click on a link for the file you want.
The BitTorrent client software communicates with a tracker to find other computers running BitTorrent
that have the complete file (so-called seed computers) and those with a portion of the file (peers that
are usually in the process of downloading the file).

The tracker identifies the swarm, which is the connected computers that have all of or a portion of the
file and are in the process of sending or receiving it.

The tracker helps the client software trade pieces of the file you want with other computers in
the swarm. Your BitTorrent client receives multiple pieces of the file simultaneously.
If you continue to run the BitTorrent client software after your download is complete, others can
receive pieces of the file from your computer, and your future download rates improve because you are
ranked higher in the tit-for-tat system.
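
Stripped of all protocol detail, the swarm logic amounts to this: the file is split into numbered pieces, every peer advertises which pieces it already has, and the client requests its missing pieces from whichever members of the swarm hold them. The Python sketch below only illustrates that bookkeeping, not the real BitTorrent wire protocol:

    # Decide which missing piece to request from which swarm member.
    def pieces_to_request(have, peers):
        """have: piece indices we own; peers: {peer: set of pieces it owns}.
        Returns {missing piece index: a peer that can supply it}."""
        plan = {}
        for peer, pieces in peers.items():
            for index in pieces - have:
                plan.setdefault(index, peer)     # first peer found to hold the piece
        return plan

    have = {0, 1}
    swarm = {
        "seed":   {0, 1, 2, 3, 4},               # a seed has the complete file
        "peer-a": {2, 3},
        "peer-b": {4},
    }
    print(pieces_to_request(have, swarm))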

By 2004, Cohen formed BitTorrent, Inc., with his brother Ross Cohen and business partner Ashwin
Navin.

Soon the P2P network BitTorrent became extremely popular. According to some estimates, in 2004 its
traffic represented from 20 to 35% of all traffic on the Internet.

Due to its practice of contacting many peers (up to 300-500 connections per second), BitTorrent led to
an interesting network issue. Routers that use NAT (Network Address Translation) must maintain tables
of source and destination IP addresses and ports. Typical home routers are limited to about 2000
table entries, while some more expensive routers have larger table capacities. As BitTorrent frequently
contacts many peers per second, it rapidly fills the NAT tables, causing home routers to lock up.

Another "copyright related" issue is that, as the BitTorrent protocol provides no way to index torrent
files, comparatively small number of websites have hosted a large majority of torrents, many linking to
copyrighted material, rendering those sites especially vulnerable to lawsuits.

There has been much controversy over the use of BitTorrent trackers. BitTorrent metafiles themselves
do not store copyrighted data. Whether the publishers of BitTorrent metafiles violate copyrights by
linking to copyrighted material is controversial. Various jurisdictions have pursued legal action
against websites that host BitTorrent trackers. High-profile examples include the closing of the
trackers Suprnova.org, Torrentspy, LokiTorrent, Demonoid, Mininova and OiNK.cd. The Pirate Bay
torrent website was noted for the "legal" section of its website in which letters and replies on the
subject of alleged copyright infringements are publicly displayed. In 2006 the Pirate Bay's servers in
Sweden were raided by Swedish police on allegations of copyright infringement, the tracker however
was up and running again three days later.

Wikipedia of Jimmy Wales and Larry Sanger


The old dream of Paul Otlet and H. G. Wells from the 1930s for a world encyclopedia became possible
in the 1990s, after the rapid progress of the Internet and the World Wide Web.

The first known proposal for an online Internet encyclopedia was made by the Internet enthusiast Rick
Gates in October, 1993. Gates proposed, in a message titled The Internet Encyclopedia published in the
Usenet newsgroup alt.internet.services, to collaboratively create an encyclopedia on the Internet,
which would allow anyone to contribute by writing articles and submitting them to the central catalog
of all encyclopedia pages. Later the term Interpedia was coined for this encyclopedia, but the project
never left the planning stages and finally died.

Several years later, in 1999, the famous open-source activist Richard Stallman popularized his concept
of an open source web-based online encyclopedia, but his idea also remained unrealized.

The first working collaborative online encyclopedia was Wikipedia, launched in January 2001 by Jimmy
Wales and Larry Sanger. The founders used the concept and technology of a wiki, devised in 1994 by the
computer programmer Ward Cunningham. A wiki is a website that allows the easy creation and
editing of any number of interlinked web pages via a web browser, using a simplified markup language
or a WYSIWYG text editor. Special wiki software has been created and is often used to build
collaborative websites.
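
The essence of a wiki can be illustrated in a few lines: page sources live in an editable store, and a simplified markup such as [[Page Name]] is turned into a hyperlink to another editable page. The markup and names in the Python sketch below are generic illustrations, not the syntax of any particular wiki engine:

    # Render [[Page Name]] links in a page's source as HTML anchors.
    import re

    pages = {
        "Home": "Welcome! See [[History of Computers]] and [[Sandbox]].",
        "History of Computers": "Back to [[Home]].",
    }

    def render(title):
        def link(match):
            name = match.group(1)
            return f'<a href="/wiki/{name.replace(" ", "_")}">{name}</a>'
        return re.sub(r"\[\[([^\]]+)\]\]", link, pages[title])

    print(render("Home"))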

Jimmy Donal Wales was born on August 7, 1966, in Huntsville, Alabama. He received his bachelor's
degree in finance from Auburn University and entered the Ph.D. finance program at the University of
Alabama before leaving with a master's degree to enter the Ph.D. finance program at Indiana
University. He taught at both universities during his postgraduate studies, but did not write the
doctoral dissertation required for a Ph.D., something which he has ascribed to boredom.
In 1994, rather than writing his doctoral dissertation, Wales took a job in a Chicago futures and
options trading firm. By speculating on interest rate and foreign-currency fluctuations, he had soon
earned enough to support himself and his wife for the rest of their lives.

Wales was addicted to the Internet from an early stage and used to write computer code as a pastime.
Inspired by the remarkable initial public offering of Netscape in 1995, he decided to become an
internet entrepreneur, and in 1996 founded the web portal Bomis with two partners. The website
featured user-generated webrings and for a time sold erotic photographs. It was something like a
"guy-oriented search engine" with a market similar to that of a male magazine, and was positioned as
the Playboy of the Internet. Bomis did not become very successful, but in 2000 it hosted and provided
the initial funding for the Nupedia project.

Wales began thinking about an open-content online encyclopedia built by volunteers in the fall of
1999, and in January 2000, he hired Sanger to oversee its development. The project, called Nupedia,
officially went online in March, 2000. Nupedia was designed as a free content encyclopedia, whose
articles were written by experts, and was intended to generate revenue from online ads. It was initially
not based on the wiki concept; it was instead characterized by an extensive peer-review process,
designed to make its articles of a quality comparable to that of professional encyclopedias. In 2001
Sanger brought the wiki concept to Wales and suggested it be applied to Nupedia and then, after some
initial skepticism, Wales agreed to try it. Nupedia, however, ceased operating in 2003, having produced
only 24 articles that completed its complex review process.

In January 2001, Wales decided to switch to the GNU Free Documentation License at the urging of
Richard Stallman and the Free Software Foundation, and started the Wikipedia project,
reserving Wikipedia.com and Wikipedia.org domain names. Initially Wikipedia (the word was
devised by Sanger) was created as a side-project and feeder to Nupedia, in order to provide an
additional source of draft articles and ideas, but it quickly overtook Nupedia, growing to become a
large global project, and originating a wide range of additional reference projects. Jimmy Wales
announced that he would never run commercial advertisements on Wikipedia.

There was another similar project at this time, GNUPedia, but it had a somewhat bureaucratic
structure; as a result it never really developed and soon died, surpassed by Wikipedia.

Initially Wikipedia ran on UseModWiki, written in Perl. The server has run on Linux to this day,
although the original text was stored in files rather than in a database. In 2002 it was replaced by
new software, written specifically for the project, which included a PHP wiki engine. Later a new
version, MediaWiki, was developed, which has been updated many times and is still in use today.

As Wikipedia grew and attracted contributors, it quickly developed a life of its own and began to
function largely independently of Nupedia, although Sanger initially led activity on Wikipedia by
virtue of his position as Nupedia's editor-in-chief. Due to the collapse of the internet economy at that
time, Jimmy Wales decided to discontinue funding for a salaried editor-in-chief in December 2001,
and the next year Sanger resigned.

During the years, a lot of sister Wikipedia projects evolved—Wiktionary, 2002 (dictionary and
thesaurus); Wikiquote, Wikibooks and Wikisource, 2003; etc.

Wikipedia has been blocked on some occasions by national authorities. To date these have related to
the People's Republic of China, Iran, Syria, Pakistan, Thailand, Tunisia, the United Kingdom and
Uzbekistan.

As of February 2010 there are some 14.4 million articles in Wikipedia, with approximately 3.2 million
articles in the English Wikipedia.

Skype of Niklas Zennström and Janus Friis


Skype is an extremely popular software application that allows users to make voice calls over the
Internet, as well as instant messaging, file transfer and video conferencing.
Its development in 2002 was financed by the Skype Group, founded by the Swedish entrepreneur Niklas
Zennström and the Dane Janus Friis, who already had experience in the IT venture business, having
founded Kazaa Media Desktop in 2001 (once capitalized as "KaZaA", but now usually written "Kazaa")—the
famous peer-to-peer file sharing application. Skype calls to other users of the program and, in some
countries, to free-of-charge numbers, are free, while calls to other land lines and mobile phones can be
made for a fee ($2.95/month gets you unlimited calls in the USA).

The name Skype came from one of the initial names for the project—SKY PEer-to-peer, which was
then abbreviated to Skyper. It appeared however, that some of the domain names associated
with Skyper were already taken. Dropping the final r left the current title Skype, for which domain
names were available.

The domain names Skype.com and Skype.net were registered in April 2003. The first public beta
version of the program was released in August 2003.

The code of Skype was written by the Estonian developers Ahti Heinla, Priit Kasesalu and Jaan Tallinn,
who had also originally developed Kazaa. They used several IDEs and languages (Delphi, C and C++),
initially developing Skype clients for several operating systems: Windows, Linux and Mac OS.

Skype is a real nightmare for phone companies, because it provides almost free phone services to every
user with the Skype program and an Internet connection. With the popularization of wireless networks,
the business of the pure phone service providers is set to decline. Moreover, Skype now runs not
only on computers, but also on many models of mobile phones and smartphones.

Skype uses a proprietary Internet telephony (VoIP) network, called the Skype protocol. The protocol
has not been made publicly available by Skype and official applications using the protocol are closed-
source. The main difference between Skype and standard VoIP clients is that Skype operates on a
peer-to-peer model (originally based on the Kazaa software), rather than the more usual client-server
model. The Skype user directory is entirely decentralized and distributed among the nodes of the
network, i.e. users' computers, thus allowing the network to scale very easily to large sizes without a
complex centralized infrastructure costly to the Skype Group.

Skype uses a secure communication protocol; encryption cannot be disabled, and is invisible to the
user. Skype reportedly uses non-proprietary, widely trusted encryption techniques: RSA for key
negotiation and the Advanced Encryption Standard to encrypt conversations. Skype provides an
uncontrolled registration system for users with no proof of identity.
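
Since the Skype protocol is proprietary and closed, its exchange cannot be reproduced here; the Python sketch below only illustrates the generic hybrid pattern the text describes (RSA to negotiate a session key, AES to encrypt the actual data), using the third-party cryptography package:

    # Generic RSA-key-negotiation + AES-payload pattern; NOT Skype's protocol.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # The callee publishes an RSA public key; the caller wraps a session key with it.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    session_key = AESGCM.generate_key(bit_length=256)
    wrapped_key = private_key.public_key().encrypt(session_key, oaep)

    # The callee unwraps the session key; both sides then encrypt data with AES.
    unwrapped = private_key.decrypt(wrapped_key, oaep)
    nonce = os.urandom(12)
    ciphertext = AESGCM(unwrapped).encrypt(nonce, b"hello over VoIP", None)
    print(AESGCM(session_key).decrypt(nonce, ciphertext, None))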

Now versions of Skype exist not only for Linux, Mac OS and Windows, but also for Maemo, iPhone
OS, Android and even Sony's PSP.

In April 2006, the number of registered users reached 100 million. In 2009 this number reached 530
million. In January 2010, over 22 million concurrent Skype users were reported.

Facebook of Mark Zuckerberg


In the late evening of October 28, 2003, a Harvard sophomore named Mark Zuckerberg,
upset over a girl who had dumped him, was sitting in his room in the dormitory, trying
to think of something to do to get her off his mind. A dormitory facebook was open on his
desktop, and Mark found that some of these people had pretty horrendous facebook pictures. He got
the idea to put some of these faces next to pictures of farm animals and have people vote on which was
more attractive :-)

Mark Elliot Zuckerberg was born on May 14, 1984, in White Plains, New York and raised in Dobbs
Ferry, New York. He started programming when he was in middle school. Early on, Zuckerberg
enjoyed developing computer programs, especially communication tools and games, and a music
player named Synapse that used artificial intelligence to learn the user's listening habits. Microsoft
and AOL tried to purchase Synapse and recruit Zuckerberg, but he decided to attend Harvard
University instead.

"Let the hacking begin", wrote Zuckerberg about midnight and started.

Thus Zuckerberg decided to create a student directory with photos and basic personal information,
called Facemash, which used photos compiled from the online facebooks of nine dormitory Houses,
placing two next to each other at a time and asking users to choose the hotter person. To accomplish
this, Mark hacked into the protected areas of Harvard's computer network and copied the houses'
private dormitory ID images.

Harvard at that time did not have a student directory with photos and basic information, and the
Facemash site generated 450 visitors and 22,000 photo-views in its first several hours online. The fact
that the initial site mirrored people’s physical community, with their real identities, represented a key
aspect of what later became Facebook.

The site was quickly forwarded to several campus group list-servers but was shut down a few days
later by the Harvard administration. Zuckerberg got into trouble, being charged by the administration
with breach of security, violating copyrights and violating individual privacy, and he faced expulsion,
but ultimately the charges were dropped.

The following semester, in January 2004, Mark began writing code for a new website. In February,
2004, he launched Thefacebook site, initially located at URL thefacebook.com. When Zuckerberg
finished the site, he told a couple of friends, and one of them put it on an online mailing list.
Immediately several dozen people joined, and then they were telling people at the other houses. It was
like an avalanche: within twenty-four hours, Thefacebook had somewhere between twelve hundred and
fifteen hundred registrants.

Membership was initially restricted to students of Harvard College, and within the first
month, more than half the undergraduate population at Harvard was registered on the site. Mark soon
attracted assistants in order to promote the website—Eduardo Saverin (business aspects), Dustin
Moskovitz (programmer), Andrew McCollum (graphic artist), and Chris Hughes. In March 2004,
Facebook expanded to 3 other Universities—Stanford, Columbia, and Yale. This expansion continued
when it opened to all Ivy League and Boston area schools, and gradually most universities in Canada
and the United States.

The company Facebook incorporated in the summer of 2004 and the entrepreneur Sean Parker, who
had been informally advising Zuckerberg, became the company's president. At the same time the
company received its first investment of US$500,000 from PayPal co-founder Peter Thiel and moved
its base of operations to Palo Alto, California. The company dropped The from its name after
purchasing the domain name facebook.com in 2005 for $200,000.

Facebook launched a high school version in September 2005; at that time, high school networks
required an invitation to join. Facebook later expanded membership eligibility to employees of several
companies, including Microsoft and Apple. Finally, on September 26, 2006, Facebook was opened
to everyone aged 13 and older with a valid e-mail address.

Users of Facebook can create profiles with photos, lists of personal interests, contact information and
other personal information. Communicating with friends and other users can be done through private
or public messages or a chat feature. Users can also create and join interest and fan groups, some of
which are maintained by organizations as a means of advertising. To combat privacy concerns,
Facebook enables users to choose their own privacy settings and choose who can see what parts of
their profile.

Facebook is free to users and generates revenue from advertising, such as banner ads. By default, the
viewing of detailed profile data is restricted to users from the same network, subject to reasonable
community limitations.
Microsoft is Facebook's exclusive partner for serving banner advertising, and as such Facebook only
serves advertisements that exist in Microsoft's advertisement inventory. According to comScore, an
internet marketing research company, Facebook collects as much data from its visitors as Google and
Microsoft, but considerably less than Yahoo!

Facebook has a number of features, with which users may interact. They include the Wall, a space on
every user's profile page that allows friends to post messages for the user to see and to post
attachments, depending on privacy settings, anyone who can see a user's profile can also view that
user's Wall; Pokes, which allows users to send a virtual poke to each other (a notification then tells a
user that they have been poked); Photos, where users can upload albums and photos; Status, which
allows users to inform their friends of their whereabouts and actions; etc.

The front-end servers of Facebook are running a PHP LAMP stack with the addition of Memcache,
and the back-end services are written in a variety of languages including C++, Java, Python and
Erlang. Other components of the Facebook infrastructure (which have been released as open source
projects) include Scribe, Thrift and Cassandra, as well as existing open-source components such as
ODS.
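
Memcache is typically used in a cache-aside style: look in the cache first, fall back to the database on a miss, then store the result with a time-to-live. The Python sketch below shows only that generic pattern; the helper names are invented and it is not Facebook's actual code:

    # Generic cache-aside pattern (illustrative, not Facebook's code).
    import time

    cache = {}            # stand-in for a memcached client
    TTL_SECONDS = 60

    def fetch_profile_from_db(user_id):
        return {"id": user_id, "name": f"user-{user_id}"}   # placeholder query

    def get_profile(user_id):
        key = f"profile:{user_id}"
        entry = cache.get(key)
        if entry and entry["expires"] > time.time():
            return entry["value"]                 # cache hit
        value = fetch_profile_from_db(user_id)    # cache miss: go to the database
        cache[key] = {"value": value, "expires": time.time() + TTL_SECONDS}
        return value

    print(get_profile(42))   # first call misses and fills the cache
    print(get_profile(42))   # second call is served from the cache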

Today, Facebook is the leading social networking site based on monthly unique visitors, especially in
English-speaking countries, including Canada, the United Kingdom and the United States, having
overtaken main competitor MySpace in April 2008. ComScore reports that Facebook attracted 132.1
million unique visitors in June 2008, compared to MySpace, which attracted 117.6 million.

The website has won awards such as placement into the "Top 100 Classic Websites" by PC Magazine in
2007, and winning the "People's Voice Award" from the Webby Awards in 2008.

As of January 2010 Zuckerberg is the youngest self-made businessman worth more than a billion
dollars.

Digg of Kevin Rose


Kevin Rose is an American Internet entrepreneur, known for his several Internet start-ups. He is the
co-founder of Revision3, Pownce, WeFollow and Diggnation. His masterpiece, however, is the social
news website Digg.

Robert Kevin Rose was born on February 21, 1977, in Redding, California. Most of his childhood was
spent in Las Vegas, where at eight he had his first experience with computers, when his father
purchased a Gateway 80386 SX16. Rose soon got into the world of BBSes in the late 1980s. Eventually
he was running a two-node Wildcat! BBS (and PCBoard) with a CD-ROM full of shareware for people
to access.

In 1992 Rose transferred to Vo-Tech High School in Las Vegas, to study computers and animation.
Upon graduation from high school, he attended the University of Nevada Las Vegas, majoring in
computer science, but dropped out in 1998 to pursue the 1990s tech boom. After dropping out, he
worked for the Department of Energy, at the Nevada Test Site, as a technology advisor. He later
worked for several dot-com startups through the American technology and venture capital company
CMGI.

In October 2004, Kevin decided to invest $6000 (which was supposed to be for a deposit on a house
for him and his girlfriend) into a site, called Digg, together with his friends Owen Byrne, Ron
Gorodetzky and Jay Adelson.

Digg was designed as a social news website, made for people to discover and share content from
anywhere on the Internet by submitting links and stories, and voting and commenting on submitted
links and stories. Voting stories up and down is the site's cornerstone function, respectively
called digging and burying. Many stories get submitted every day, but only the most dugg stories
appear on the front page.
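
The core mechanic reduces to a running score per story: a digg adds one, a bury subtracts one, and the front page lists the highest-scoring submissions. A toy Python illustration (the story titles are invented):

    # Toy digg/bury scoring with a front page of the top stories.
    from collections import Counter

    votes = Counter()

    def digg(story):
        votes[story] += 1

    def bury(story):
        votes[story] -= 1

    def front_page(top_n=2):
        return [story for story, _ in votes.most_common(top_n)]

    for _ in range(5):
        digg("New search engine launched")
    digg("Cat photo")
    bury("Cat photo")
    digg("Open-source browser released")
    print(front_page())
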
The site digg.com was launched to the world on December 5, 2004. A friend of Rose's proposed calling
the site Diggnation, but Rose wanted a simpler name. He wanted the name Dig, because users are
able to dig stories, out of those submitted, up to the front page, but as the domain name dig.com had
already been registered, the site was called Digg.com.

The original design of Digg was free of advertisements, but as the site became more popular, Google
AdSense was added to the website. In 2005, the site was updated to "Version 2.0", which featured a
friends list, the ability to "digg" a story without being redirected to a "success" page, and a new
interface. On June 26, 2006, Version 3 of Digg was released, with specific categories for
Technology, Science, World & Business, Videos, Entertainment and Gaming, as well as a View All
section where all categories are merged.

On August 27, 2007, Digg altered its main interface, mostly in the profile area. In April, 2007, Digg
had opened its API (Application Programming Interface) to the public, thus allowing software developers
to write tools and applications based on queries of Digg's public data, dating back to 2004. The
domain digg.com attracted at least 236 million visitors annually by 2008.

YouTube of Chad Hurley and Steve Chen


In February, 2005, a dinner party of fellow PayPal (the famous e-commerce Internet site for money
transfers and payments) employees took place in San Francisco, California, in the apartment of a certain
Steve Chen. Chen was born in August 1978 in Taiwan; his family came to the United States when he
was eight years old, and he later studied computer science at the University of Illinois at Urbana-
Champaign. At this party, Chen and his friend Chad Hurley (born in 1977 in Philadelphia, holder of a
degree in fine arts from Indiana University of Pennsylvania, and the first-ever graphic designer of PayPal
in the period 1999-2002) spent much of the party shooting videos and digital photos of each other.
They easily uploaded the photos to the Web. But the videos? Not a chance.

Realizing that digital photographs had become easy to share thanks to new Web sites like Flickr, they
reasoned that a similar software package for sharing videos should be possible too. Having stumbled
across the need to publish video to the Internet, the friends decided to create a video-sharing website
on which users could upload and share videos. And they had the means to address this need, because
Chen was an exceptional code writer, and Hurley's gift for design could give a new Web site a
compelling look.

Hurley was living in Menlo Park, California, and had a garage on his property, where in February of
2005 he and Chen set to work on creating what became YouTube. Two months later Chen and Hurley
called in their ex-colleague from PayPal, Jawed Karim, also a very good programmer, to help. The
domain name www.youtube.com was activated the same month. The founders agreed on a few
requirements: the site had to be easy to use for a person with only a minimum of computer skills, and
it would not require users to download any special software in order to upload or view videos. They
also made it easy for site visitors to view clips without going through a registration process, and
created a quick search function to access the archives.

YouTube relocated its work from the garage to a modest office situated above a pizzeria and a
Japanese restaurant in San Mateo, California, and offered the public a beta test of the site in May
2005, six months before the official launch in November 2005. In the same month YouTube got a
venture-capital kick-off: a US$11.5 million investment by Sequoia Capital. The first YouTube
video was uploaded on April 23, 2005 (entitled Me at the zoo, it shows founder Jawed Karim at the
San Diego Zoo and is still available on the site).

Before the launch of YouTube, there were few easy methods available for ordinary computer users
who wanted to post videos online. With its simple interface, YouTube made it possible for anyone with
an Internet connection to post a video that a worldwide audience could watch within a few minutes.
The wide range of topics covered by YouTube has turned video sharing into one of the most important
parts of Internet culture.

The site grew rapidly, and in July 2006 the company announced that more than 65000 new videos
were being uploaded every day, and that the site was receiving 100 million video views per day. Hurley
and Chen's idea proved to be one of the biggest Internet phenomena of 2006. YouTube soon became
the dominant provider of online video in the United States, with a market share of around 43 percent
and more than six billion videos viewed in January 2009. It was estimated that 20 hours of new
video were being uploaded to the site every minute, and that around three quarters of the material
came from outside the United States. It was also estimated that in 2007 YouTube consumed as much
bandwidth as the entire Internet in 2000. In March 2008, YouTube's bandwidth costs were estimated
at approximately US$1 million a day.

In October 2006, Google Inc. announced that it had acquired YouTube for US$1.65 billion in Google
stock. Thus the YouTube founders became multi-millionaires overnight.

Viewing YouTube videos on a personal computer requires the Adobe Flash Player plug-in to be
installed in the browser. The Adobe Flash Player plug-in is one of the most common pieces of software
installed on personal computers, and Flash video accounts for almost 75% of online video material.
YouTube accepts videos uploaded in most container formats, including AVI, MKV, MOV, MP4, DivX,
FLV, and OGG, as well as video formats such as MPEG-4, MPEG, and WMV.

Videos uploaded to YouTube by standard account holders are limited to ten minutes in length and a
file size of 2 GB. Initially it was possible to upload longer videos, but a ten-minute limit was introduced
in March 2006 after YouTube found that the majority of videos exceeding this length were
unauthorized uploads of television shows and films. Partner accounts are permitted to upload videos
longer than ten minutes, subject to acceptance by YouTube.

Twitter of Jack Dorsey


Jack Dorsey was born on November 19, 1976, in St. Louis, Missouri. He started working with
computers early, and at age 14 he created an open-source program for dispatch routing that remained
in use until recently. Jack went to high school at Bishop DuBourg High School, and attended the
Missouri University of Science and Technology. While working as a programmer on dispatching
software, he later moved to Oakland, California.

In California in 2000, Dorsey started a company to dispatch couriers, taxis, and emergency services
from the Web. His other projects and ideas at this time included networks of medical devices and
a frictionless service market. In July 2000, building on his dispatching work and inspired partially by
services like LiveJournal and AOL Instant Messenger, he had the idea for a real-time status-communication service.

When he first saw implementations of instant messaging, he wondered whether the software's user-status
output could be shared easily among friends. He approached Biz Stone from Odeo (a directory
and search website for RSS-syndicated audio and video), who at the time happened to be
interested in text messaging. Dorsey and Stone decided that SMS text suited the status-message idea,
and built a prototype of Twitter in about two weeks. The idea attracted many users at Odeo, as well as
investment from Evan Williams, the founder of Pyra Labs and Blogger.

The working concept of Twitter is partially inspired by the cell phone text messaging service TXTMob.

The name Twitter was chosen later, as Dorsey recalled:


We wanted to capture that in the name — we wanted to capture that feeling: the physical sensation
that you’re buzzing your friend’s pocket. It’s like buzzing all over the world. So we did a bunch of
name-storming, and we came up with the word "twitch," because the phone kind of vibrates when it
moves. But "twitch" is not a good product name because it doesn’t bring up the right imagery. So we
looked in the dictionary for words around it, and we came across the word "twitter," and it was just
perfect. The definition was "a short burst of inconsequential information," and "chirps from birds."
And that’s exactly what the product was.

Work on the project started on March 21, 2006, when Dorsey published the first Twitter message at
9:50 PST: "just setting up my twttr".

In July 2006 Twitter moved from an internal service for Odeo employees to a full-scale public version.
In October 2006, Biz Stone, Evan Williams, Dorsey and other members of Odeo
formed Obvious Corporation and acquired Odeo and all of its assets, including Odeo.com and
Twitter.com, from the investors and other shareholders.

The tipping point for Twitter's popularity was the 2007 South by Southwest festival. During the event,
usage went from 20000 tweets per day up to 60000. The Twitter people cleverly placed two 60-inch
plasma screens in the conference hallways, exclusively streaming Twitter messages. Hundreds of
conference-goers kept tabs on each other via constant tweets. Panelists and speakers mentioned the
service, and the bloggers in attendance touted it. Soon everyone was buzzing and posting about this
new thing that was sort of instant messaging and sort of blogging and maybe even a bit of sending a
stream of telegrams.

The 140-character limit on message length was initially set for compatibility with SMS messaging, and
it has brought to the web the kind of shorthand notation and slang commonly used in SMS messages.
The limit has also spurred the use of URL-shortening services such as bit.ly, goo.gl, and tr.im, and of
content-hosting services such as Twitpic and NotePub, to accommodate multimedia content and text
longer than 140 characters.
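
As a rough sketch of why these shortening services mattered to Twitter clients, the Python function below replaces long links in a draft message with short ones before checking the 140-character limit. The shorten function here is a placeholder standing in for a call to a service such as bit.ly, not a real API client:

# Sketch: squeezing a draft message under the 140-character limit by
# replacing long URLs with shortened ones. `shorten` is a placeholder
# for a real URL-shortening service client.
import re

URL_RE = re.compile(r"https?://\S+")

def shorten(url):
    # Placeholder: a real client would call a shortener service here.
    return "http://bit.ly/xxxxxx"

def fit_to_140(text):
    shortened = URL_RE.sub(lambda m: shorten(m.group(0)), text)
    if len(shortened) > 140:
        raise ValueError("still over 140 characters; trim the text")
    return shortened
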

The Twitter Web interface uses the Ruby on Rails framework, deployed on Ruby Enterprise Edition.
The messages were handled initially by a Ruby persistent queue server called Starling, but since 2009
this has been gradually replaced with software written in Scala. The service's API allows other web
services and applications to integrate with Twitter. To group posts together by topic or type, users
make use of hash tags: words or phrases prefixed with a #. Similarly, the letter d followed by a
username allows users to send a private (direct) message to that user. Otherwise, the @ sign followed
by a username publicly marks the attached tweet as a reply to (or a mention of) that specific user,
who can find such recent tweets logged in their own interface.
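
A small sketch of how a client might recognize these conventions in a raw message is shown below. The regular expressions and the handling of the d command are illustrative guesses for the purpose of the example, not Twitter's actual parser:

# Sketch: recognizing Twitter message conventions (#hashtags, @mentions,
# and "d username" direct messages). Illustrative only.
import re

HASHTAG_RE = re.compile(r"#(\w+)")
MENTION_RE = re.compile(r"@(\w+)")
DM_RE = re.compile(r"^d\s+(\w+)\s+(.*)$")

def classify(message):
    dm = DM_RE.match(message)
    if dm:
        return {"type": "direct_message", "to": dm.group(1), "text": dm.group(2)}
    return {
        "type": "tweet",
        "hashtags": HASHTAG_RE.findall(message),
        "mentions": MENTION_RE.findall(message),
        "text": message,
    }

print(classify("d alice see you at #sxsw"))
print(classify("@bob the #sxsw screens are streaming tweets"))
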

In late 2009, the new Twitter Lists feature was added, making it possible for users to follow (and
mention/reply to) lists of authors instead of following individual authors.

As of March 2010, Twitter had recorded 1500 percent growth in the number of registered users, the
number of its employees had grown 500 percent, and over 70000 registered applications had been
created for the microblogging platform.
