
ASSIGNMENT 1

NAME: NITESH KUMAR SHARMA

ROLL NO.: 14

SUBJECT: INTRODUCTION TO IT

TOPIC:
1) HISTORICAL EVOLUTION OF COMPUTERS

2) CLASSIFICATION OF COMPUTERS

3) DIFFERENCE BETWEEN COMPUTER ORGANISATION AND COMPUTER ARCHITECTURE

4) VON NEUMANN MODEL & HARVARD ARCHITECTURE

Teacher's signature
● History of Computers
What is a Computer?
A computer is a programmable electronic device that
performs arithmetic and logical operations automatically using
a set of instructions provided by the user.

A computer is an electronic machine that collects information, stores it, processes it according to user instructions, and then returns the result.

Early Computing Devices


People used sticks, stones, and bones as counting tools
before computers were invented.

For example:
1. Abacus
2. Napier's Bones
3. Pascaline
4. Stepped Reckoner or Leibniz wheel
5. Difference Engine
6. Analytical Engine
7. Tabulating Machine
8. Differential Analyzer, etc.

History of Computer Generations

In 1822, Charles Babbage, the father of computers, began developing what would be the first mechanical computer, the Difference Engine. Then, in 1833, he designed the Analytical Engine, a general-purpose computer.
Generations of Computers

1st Generation: This was the period from 1940 to 1955. This was when machine language was developed for use in computers. They used vacuum tubes for the circuitry and magnetic drums for memory. These machines were complicated, large, and expensive. They mostly relied on batch operating systems and punch cards. Magnetic tape and paper tape were used as input and output devices. For example, ENIAC, UNIVAC-1, EDVAC, and so on.

2nd Generation: The years 1957-1963 are referred to as the second generation of computers. In this generation, computers advanced from vacuum tubes to transistors, which made them smaller, faster, and more energy-efficient, and programming advanced from binary machine code to assembly languages, with high-level programming languages such as COBOL and FORTRAN also coming into use. For instance, IBM 1620, IBM 7094, CDC 1604, CDC 3600, and so forth.

3rd Generation: The hallmark of this period (1964-1971) was the development of the integrated circuit. A single integrated circuit (IC) is made up of many transistors, which increases the power of a computer while simultaneously lowering its cost. These computers were quicker, smaller, more reliable, and less expensive than their predecessors. High-level programming languages such as FORTRAN-II to IV, COBOL, PASCAL, and PL/1 were utilized. For example, the IBM-360 series, the Honeywell-6000 series, and the IBM-370/168.
4th Generation: The invention of the microprocessor brought along the fourth generation of computers. The years 1971-1980 were dominated by fourth-generation computers. High-level languages such as C (and later C++ and Java) were used on these machines. For instance, the STAR 1000, PDP 11, CRAY-1, CRAY X-MP, and Apple II. This was when we started producing computers for home use.

5th Generation: These computers have been in use since 1980 and continue to be used now. This is the present and the future of the computer world. The defining aspect of this generation is artificial intelligence. The use of parallel processing and superconductors is helping to make this a reality and provides a lot of scope for the future. Fifth-generation computers use ULSI (Ultra Large Scale Integration) technology. These are the most recent and sophisticated computers. Programming languages such as C, C++, Java, and .NET are used. For instance, IBM, Pentium, desktop, laptop, notebook, ultrabook, and so on.

● Classification of Computers
Computer systems can be classified on the following bases:

1. On the basis of size.
2. On the basis of functionality.
3. On the basis of data handling.

Classification on the basis of size


Supercomputers: Supercomputers are the highest-performing systems. A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is measured in FLOPS (floating-point operations per second) rather than MIPS. All of the world's 500 fastest supercomputers run Linux-based operating systems. Additional research is being conducted in China, the US, the EU, Taiwan and Japan to build even faster, higher-performing and more technologically advanced supercomputers. Supercomputers play an important role in the field of computation and are used for intensive computation tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling, and physical simulations. Throughout history, supercomputers have also been essential in the field of cryptanalysis.
Eg: PARAM, Jaguar, Roadrunner.
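To illustrate what measuring performance in FLOPS means, here is a small, purely illustrative Python calculation; the core count, clock speed and FLOPs-per-cycle figures below are assumed for the example and do not describe any real machine.

# Rough theoretical peak-FLOPS estimate (all figures are made up for illustration).
cores = 10000                     # assumed number of CPU cores in the machine
clock_hz = 2.5e9                  # assumed clock speed: 2.5 GHz
flops_per_cycle = 16              # assumed floating-point operations per core per cycle
peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops:.2e} FLOPS")  # 4.00e+14 FLOPS, i.e. about 400 teraFLOPS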

Mainframe computers: These are commonly called "big iron". They are usually used by big organisations for bulk data processing such as statistics, census data processing and transaction processing, and are widely used as servers, since these systems have a higher processing capability than the other classes of computers. Most of these mainframe architectures were established in the 1960s; research and development has continued over the years, and the mainframes of today are far better than the earlier ones in size, capacity and efficiency.
Eg: IBM z Series, System z9 and System z10 servers.

Minicomputers: These computers came into the market in the mid-1960s and were sold at a much cheaper price than the mainframes. They were originally designed for control, instrumentation, human interaction, and communication switching, as distinct from calculation and record keeping; later, with evolution, they became very popular for personal uses. The term "minicomputer" was coined in the 1960s to describe the smaller computers that became possible with the use of transistors and core memory technologies, minimal instruction sets, and less expensive peripherals such as the ubiquitous Teletype Model 33 ASR. They usually took up one or a few 19-inch rack cabinets, compared with the large mainframes that could fill a room.
Eg: PDP-8, PDP-11, VAX.

Microcomputers: A microcomputer is a small, relatively inexpensive computer with a microprocessor as its CPU. It includes a microprocessor, memory, and minimal I/O circuitry mounted on a single printed circuit board. The computers that preceded these, mainframes and minicomputers, were comparatively much larger, harder to maintain and more expensive. Microcomputers formed the foundation for the present-day personal computers and smart gadgets that we use in day-to-day life.
Eg: Desktop PCs, laptops, tablets, smartwatches.

Classification on the basis of functionality


Servers: Servers are dedicated computers which are set up to offer services to clients. They are named after the type of service they offer. Eg: security server, database server.

Workstation: These are computers designed primarily to be used by a single user at a time. They run multi-user operating systems. They are the ones we use for our day-to-day personal and commercial work.

Information Appliances: These are portable devices which are designed to perform a limited set of tasks like basic calculations, playing multimedia, browsing the internet, etc. They are generally referred to as mobile devices. They have very limited memory and flexibility and generally run on an "as-is" basis.
Embedded computers: These are computing devices which are used inside other machines to serve a limited set of requirements. They execute instructions from non-volatile memory and do not require a reboot or reset. The processing units used in such devices serve only those basic requirements and are different from the ones used in personal computers, better known as workstations.

Classification on the basis of data handling


Analog: An analog computer is a form of computer that uses continuously variable physical quantities, such as electrical, mechanical, or hydraulic quantities, to model the problem being solved. Anything that varies continuously with respect to time can be regarded as analog, just as an analog clock measures time by the distance travelled by the hands of the clock around the circular dial.

Digital: A computer that performs calculations and logical operations with quantities represented as digits, usually in the binary number system of "0" and "1". A digital computer is capable of solving problems by processing information expressed in discrete form. By manipulating combinations of binary digits, it can perform mathematical calculations, organize and analyze data, control industrial and other processes, and simulate dynamic systems such as global weather patterns.
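As a small illustration (not from the original text) of how such a machine works on binary digits, the Python lines below print the binary form of two numbers and a few operations carried out on their bits:

# A digital computer ultimately manipulates combinations of binary digits.
a, b = 6, 3
print(bin(a), bin(b))        # 0b110 0b11  -> binary representations of 6 and 3
print(a + b, bin(a + b))     # 9 0b1001    -> arithmetic performed on those bits
print(a & b, a | b, a ^ b)   # 2 7 5       -> bitwise AND, OR and XOR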

Hybrid: A computer that processes both analog and digital data. A hybrid computer is a digital computer that accepts analog signals, converts them to digital form, and processes them digitally.
● DIFFERENCES BETWEEN COMPUTER ARCHITECTURE AND COMPUTER ORGANIZATION

Computer architecture refers to the attributes of a system that are visible to the programmer, such as the instruction set, data types, addressing modes and I/O mechanisms; it is concerned with what the computer does. Computer organization refers to the operational units and their interconnections that realise the architecture, such as control signals, memory technology and other hardware details transparent to the programmer; it is concerned with how the computer does it. The architecture is decided first, and the organization is then designed to implement that architecture.

● Von Neumann Architecture


What is von Neumann Architecture?
Von Neumann architecture is a computing model that was first proposed by the mathematician and physicist John von Neumann in 1945. The key features of this architecture are a Central Processing Unit (CPU), which contains an Arithmetic Logic Unit (ALU) for performing calculations and a control unit for controlling the sequence of operations, together with a single memory system that stores both instructions and data. This model has been used as the basis for most digital computers since the 1950s.

How does it work?


The Von Neumann architecture stores program instructions and data in a single, shared memory and moves them over a common bus. To run a program, the CPU repeatedly fetches an instruction from this memory, decodes it, and executes it, reading and writing data in the same memory as it goes. The Von Neumann architecture is used in most modern computers and many other devices, such as smartphones and tablets.
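To make the fetch-decode-execute cycle concrete, here is a minimal Python sketch of a toy Von Neumann machine (it is not part of the original assignment, and the tiny LOAD/ADD/STORE/HALT instruction set is invented purely for illustration): a single list serves as the memory for both instructions and data.

# Toy Von Neumann machine: one memory holds both the program and the data.
memory = [
    ("LOAD", 6),     # 0: acc <- memory[6]
    ("ADD", 7),      # 1: acc <- acc + memory[7]
    ("STORE", 8),    # 2: memory[8] <- acc
    ("HALT", None),  # 3: stop
    None, None,      # 4-5: unused cells
    2, 3,            # 6-7: data operands
    0,               # 8: result is written here
]

pc = 0    # program counter
acc = 0   # accumulator

while True:
    opcode, addr = memory[pc]   # fetch: instructions come from the same memory as data
    pc += 1                     # advance to the next instruction
    if opcode == "LOAD":
        acc = memory[addr]
    elif opcode == "ADD":
        acc += memory[addr]
    elif opcode == "STORE":
        memory[addr] = acc
    elif opcode == "HALT":
        break

print(memory[8])  # prints 5 (2 + 3)

Because every instruction fetch and every data read or write goes through the same memory, this sketch also hints at where the bottleneck described below comes from.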

Drawbacks of von Neumann architecture


The major drawback of the system is the Von Neumann bottleneck, a term used in computer architecture to describe the limitation that the CPU can fetch and execute only one instruction at a time from memory, because instructions and data share a single bus. The limitation is named after John von Neumann, whose 1945 report described the stored-program design that gives rise to it. How severe the bottleneck is depends on a number of factors, including the width of the CPU's data bus, the speed of the memory system, and the latency of the cache.

The most common way to work around the Von Neumann bottleneck is to use multiple processors, each with its own independent memory system. This approach is known as parallel processing. Parallel processing can speed up applications whose work can be divided into independent parts, and caches between the CPU and main memory also help to reduce the bottleneck.
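As a rough sketch of parallel processing (the workload and process count below are chosen only for illustration), the following Python snippet divides an independent, CPU-bound task among several worker processes, each of which runs in its own address space:

# Split an independent workload across several processes, each with its own memory.
from multiprocessing import Pool

def count_primes(bounds):
    # Deliberately naive, CPU-bound work used only for illustration.
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    ranges = [(2, 25000), (25000, 50000), (50000, 75000), (75000, 100000)]
    with Pool(processes=4) as pool:
        counts = pool.map(count_primes, ranges)  # each range is handled by a separate process
    print(sum(counts))  # 9592 primes below 100000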

● HARVARD ARCHITECTURE
What is Harvard Architecture?
Harvard architecture is a type of computer architecture that has separate memory spaces for instructions and data. It is named after the Harvard Mark I, an early computer developed at Harvard University. In a Harvard architecture system, the CPU accesses the instruction and data memory spaces separately, which can lead to improved performance.

The Harvard architecture consists of the following main components (a small illustrative sketch follows this list):
1. CPU
2. Instruction memory
3. Data memory
4. Input/output devices
5. System bus (a collection of wires)
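To contrast with the earlier Von Neumann sketch, here is an equally minimal, purely illustrative Python model of a Harvard-style machine, in which instructions and data live in two separate memories:

# Toy Harvard machine: separate memories for instructions and for data.
instruction_memory = [
    ("LOAD", 0),     # acc <- data_memory[0]
    ("ADD", 1),      # acc <- acc + data_memory[1]
    ("STORE", 2),    # data_memory[2] <- acc
    ("HALT", None),
]
data_memory = [2, 3, 0]   # two operands and a slot for the result

pc = 0    # program counter
acc = 0   # accumulator
while True:
    opcode, addr = instruction_memory[pc]  # instruction fetch uses the instruction memory
    pc += 1
    if opcode == "LOAD":
        acc = data_memory[addr]            # data access uses the separate data memory
    elif opcode == "ADD":
        acc += data_memory[addr]
    elif opcode == "STORE":
        data_memory[addr] = acc
    elif opcode == "HALT":
        break

print(data_memory[2])  # prints 5; instruction fetches never touch data_memory

In real hardware the two memories sit on separate buses, so the next instruction can be fetched while the current one is still accessing data; the toy model only shows the separation of the address spaces.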

Advantages of Harvard Architecture

The CPU can access instruction and data memory simultaneously.

1. This can lead to improved performance, because instruction fetches and data accesses do not compete for a single memory bus as they do in a Von Neumann architecture.

2. Additionally, because the instruction memory is typically implemented as ROM or flash memory, it is non-volatile, meaning that it does not lose its contents when power is turned off.

3. This makes it well-suited for embedded systems that need to operate without a constant power source.

Disadvantages of Harvard Architecture

1. As the CPU accesses instruction and data memory separately, it can be more difficult to write programs that require the CPU to modify its own code.

2. Additionally, because the instruction and data memories are separate, it can be more difficult to share data between different parts of a program.

THE END
