
IT 9626 Chapter 2: Hardware & Software Fahim Siddiq 03336581412

Mainframe Computers:

Mainframe computers are often referred to simply as mainframes. They are used
mainly by large organizations for bulk data-processing applications such as
censuses, industry and consumer statistics, and transaction processing.

A mainframe computer can have hundreds of processor cores and can process a
large number of small tasks at the same time very quickly. A mainframe is a
multitasking, multi-user computer, meaning it is designed so that many different
people can work on many different problems, all at the same time. Mainframe
computers are now the size of a large cupboard, but between 1950 and 1990 a
mainframe was big enough to fill a large room, so you can see how much the
average mainframe has decreased in size while also improving its performance.

Mainframe computers have almost total reliability, being very resistant to viruses
and Trojan horses. The most advanced mainframe is the IBM z15, with up to 190
processor cores; its cabinet is taller than the average person.

Supercomputers:
Even more powerful are modern supercomputers, which can have in excess of 100
000 processing cores. In Chapter 1, batch processing of payroll was described. A
supercomputer can multiply different hourly wage rates from a master file by a list
of hours in a transaction file for hundreds of workers in roughly the same time that
it would take a personal computer to calculate the wages of just one employee.

The Oak Ridge National Laboratory in the USA launched its Summit supercomputer
in 2018. It claimed, ‘if every person on Earth completed one calculation per second,
it would take the world population 305 days to do what Summit can do in 1 second’.
The Summit supercomputer, however, fills a room the size of two tennis courts.
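The quoted claim can be sanity-checked with rough arithmetic. The peak speed (about 200 petaflops) and the world population (about 7.6 billion in 2018) used below are assumptions for illustration, not figures from the text:

```python
# Rough check of the Summit claim: how long would the whole world population,
# at one calculation per person per second, take to match one Summit-second?
summit_flops = 200e15    # assumed peak speed: ~200 petaflops
population = 7.6e9       # assumed 2018 world population

seconds_needed = summit_flops / population
days_needed = seconds_needed / (60 * 60 * 24)
print(f"{days_needed:.0f} days")
```

The result comes out at roughly 300 days, in line with the quoted "305 days".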

The Cray XC40 supercomputer can have up to 172 000 processor cores. Several
countries have one of these and use them in fields such as weather forecasting
and scientific research, including astrophysics and mathematical and
computational modelling.

Characteristics of Mainframe Computers and Supercomputers

1- Longevity
Mainframe computers have great longevity, or lifespans. This is because they can
run continuously for very long periods of time. Governments, banking
organizations and telecommunications companies still base their business dealings
on mainframes. The mainframe is still able to process more transactions and
calculations in a set period of time when compared with alternatives and it
continues to operate with a minimum of downtime, which means that companies
can operate 24 hours a day, every day.

In contrast, supercomputers have a lifespan of about five years. Research
institutions and meteorology organizations are always looking for faster ways to
process their data and so, unlike companies using mainframes, will tend to look at
replacing their existing systems whenever much faster supercomputers come on to
the market.

2- RAS
Reliability

Mainframes are the most reliable computers because their processors are able to
check themselves for errors and are able to recover without any undue effects on
the mainframe’s operation. The system’s software is also very reliable, as it is
thoroughly tested and updates are made quickly to overcome any errors.

Availability
This refers to the fact that a mainframe is available at all times and for extended
periods. Mean time between failures (MTBF) is a common measure of systems,
not just those involving computers. It is the average period of time that exists
between failures (or downtimes) of a system during its normal operation.
Mainframes give months or even years of system availability between system
downtimes. In addition to that, even if the mainframe becomes unavailable due to
failure, the length of time it is unavailable is very short. It is possible for a
mainframe to recover quickly, even if one of its components fails, by automatically
replacing failed components with spares. Spare CPUs are often included in
mainframes so that when errors are found with one, the mainframe is programmed
to switch to the other automatically. The operator is then alerted and the faulty
CPU is replaced, but all the time the system continues to work.
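MTBF and availability can be computed from simple operating records. A minimal sketch, with invented figures:

```python
# MTBF = total uptime / number of failures.
# Availability = uptime / (uptime + downtime).
uptime_hours = 8760 * 2    # two years of operation (illustrative figure)
downtime_hours = 3         # total repair time across all failures (illustrative)
failures = 2               # number of recorded failures (illustrative)

mtbf = uptime_hours / failures
availability = uptime_hours / (uptime_hours + downtime_hours)
print(f"MTBF: {mtbf:.0f} hours, availability: {availability:.5%}")
```

A mainframe giving months or years between downtimes, with only minutes of repair time, pushes availability very close to 100%.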

Serviceability
This is the ability of a mainframe to discover why a failure occurred and means that
hardware and software components can be replaced without having too great an
effect on the mainframe’s operations.

3- Security
In addition to their other characteristics, mainframes have greater security than
other types of computer systems. Mainframes are used to handle large volumes of
data. Most of this is personal data and, especially in the banking sector, it has to be
shared by the banks with customers. Fortunately, the mainframe computer has
wide-ranging security that enables it to share a company’s data among several
users but still be in a position to protect it.
In addition to the use of supercomputers to perform massive calculations, they
may also be used to store sensitive data such as DNA profiles. This obviously
requires a very high level of security. Most supercomputers use end-to-end
encryption, which means that only the sender or recipient is able to decrypt and
understand the data.

4- Performance metrics
The speed of a mainframe’s CPU is measured in millions of instructions per second
(MIPS). An application using five million simple instructions will take a lot less time
than one using five million complex ones. MIPS are often linked to cost by
calculating how much a mainframe costs per one million instructions per second.
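The cost-per-MIPS link can be expressed as a one-line calculation; the price and speed figures below are invented for illustration:

```python
# Cost per MIPS: purchase price divided by rated speed in MIPS.
price_usd = 2_000_000    # illustrative mainframe price
speed_mips = 50_000      # illustrative rated speed in MIPS

cost_per_mips = price_usd / speed_mips
print(f"${cost_per_mips:.2f} per MIPS")
```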

Supercomputers use a different set of metrics. As they are used mainly with
scientific calculations, their performance is measured by how many Floating-point
Operations can be carried out Per Second (FLOPS). One petaflop is
1 000 000 000 000 000 (10^15) floating point operations per second. Experts are already using the term
exaflops, which are 1000 times faster than petaflops, and are expecting the first
supercomputer to attain this speed sometime in the current decade. The speed of
the current fastest supercomputer, at the time of publication, is 148 petaflops and
even the tenth fastest operates at 18 petaflops.
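The FLOPS units above differ by factors of 1000, which a short sketch makes concrete, using the speeds quoted in the text:

```python
# FLOPS unit ladder: giga, tera, peta and exa differ by factors of 1000.
giga = 10**9
tera = 10**12
peta = 10**15
exa = 10**18

fastest = 148 * peta    # fastest supercomputer quoted in the text
tenth = 18 * peta       # tenth fastest quoted in the text
print(fastest / exa)    # 0.148 -- still well short of one exaflop
print(exa // peta)      # 1000 -- an exaflop is 1000 times a petaflop
```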

5- Volume of input, output and throughput

Mainframes have specialized hardware, called peripheral processors, that deals
specifically with all input and output operations, leaving the CPU to concentrate
on the processing of data. This enables mainframes to handle very large volumes
of input data (terabytes or more) and very large numbers of records being
accessed. A supercomputer is designed for maximum processing power and speed,
whereas high throughput is a distinctive mainframe characteristic.

6- Fault tolerance
A computer with fault tolerance means that it can continue to operate even if one
or more of its components has failed. Mainframe computers have the
characteristic of being fault-tolerant in terms of their hardware. While in operation,
if a processor fails to function, the system is able to switch to another processor
without disrupting the processing of data. The system is also able to deal with
software problems by having two different versions of the software. If the first
version produces errors, the other version is automatically run.
Supercomputers have far more components than a mainframe, with up to a
thousand times more processors alone. This means that statistically, a failure is
more likely to occur and consequently interrupt the operation of the system. The
approaches to fault tolerance are much the same as those for mainframe
computers, but with millions of components, the system can go down at any time,
even though it tends to be up and running again quite quickly.
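The statistical point above follows from basic probability: if each component fails independently with probability p over some period, the chance that at least one of n components fails is 1 - (1 - p)^n, which grows quickly with n. A short sketch (the per-component probability is an assumption for illustration):

```python
# P(at least one failure) = 1 - (1 - p)^n for n independent components.
def any_failure_prob(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 1e-6    # assumed per-component failure probability for some period
for n in (1_000, 100_000, 10_000_000):
    print(n, round(any_failure_prob(p, n), 4))
```

With a thousand times more components, a supercomputer's chance of seeing at least one failure in any period is dramatically higher than a mainframe's.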

7- Operating system
Most mainframes run more than one operating system (OS) at any given time. The
OS on a mainframe divides a task into various sub-tasks, assigning each one to a
different processor core. When each sub-task has been processed, the results are
recombined to provide meaningful output.

Supercomputers tend to have just one OS. Most utilize massively parallel
processing, with many processor cores, each with its own copy of the OS. Linux is
the most popular, mainly because it is open-source software, that is, free to use.
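The divide-and-recombine pattern described for mainframe operating systems can be sketched with Python's standard multiprocessing pool. This is an illustration of the idea, not actual mainframe OS code:

```python
# Split a task into sub-tasks, process them in parallel, recombine the results.
from multiprocessing import Pool

def subtask(chunk):
    # Each worker processes its own slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]    # divide into four sub-tasks
    with Pool(4) as pool:
        partials = pool.map(subtask, chunks)   # one sub-task per core
    total = sum(partials)                      # recombine into the final output
    print(total)                               # equals sum(data)
```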

8- Type of processor
Early mainframes had just one processor (the CPU), but as they evolved more and
more processors were included in the mainframe system and the distinction
between the terms ‘CPU’ and ‘processor’ became confused. One major mainframe
manufacturer called them ‘CPU complexes’, which contained many processors. The
number of processor cores found in a mainframe is now measured in the hundreds.

By contrast, supercomputers have hundreds of thousands of processor cores.


Unlike mainframes, modern supercomputers use more than one GPU (graphics
processing unit).

9- Heat maintenance
Because of the large number of processors in both mainframes and
supercomputers, overheating becomes a major problem and heat maintenance or
heat management, as it is often called, has to be implemented. In the early days of
mainframe computing, the heat produced could not be controlled by fan cooling
alone, so liquid-cooling systems had to be used. Air-cooling systems, once
considered a relatively cheap option, are becoming more complex and more
expensive to use in more powerful systems. At the same time, water-cooling
solutions have become more cost-effective. We now appear to have come full
circle, with mainframe manufacturers once again recommending water-cooled
systems for larger machines.

The overheating problem has always been present with the use of supercomputers.
The large amount of heat produced by a system also has an effect on the lifetime
of components other than the processors. Some supercomputers draw four or
more megawatts of power to keep them operating at high efficiency. That is
enough to power several thousand homes! This, together with having so many
processors very close together, results in a great deal of heat being produced and
requires the use of direct liquid cooling to remove any excess heat.

Uses of Mainframe Computer


1- Census

The processing of census data has long been associated with computers. The
UNIVAC 1, which was the first computer on general sale to organizations or
businesses, was purchased by the United States Bureau of the Census in 1951. A
census usually takes place every ten years. Not surprisingly, as populations
increased, each census produced more data and so the use of mainframes to
process the data was crucial.

2- Industry statistics
Some businesses in certain sectors of industry need mainframes to process the vast
amount of data which helps to identify their major competitors. It shows their
competitors’ share of the market and the trends in their sales. This helps to identify
those products which could compete profitably with other businesses.

3- Consumer statistics
Consumer statistics allow businesses to assess the demand for their product, that
is how many people need or want that type of product. They can inform them about
the range of household incomes and employment status of those consumers who
might be interested in the product so that a price can be set. It may also allow
businesses to know how many similar products are already available to consumers
and what price they pay for them.

4- Transaction processing
Mainframe computers play a vital role in the daily operations of many
organizations. Finance companies, health care providers, insurance companies,
energy providers, travel agencies, and airlines all make use of mainframes. By far
and away, however, the greatest use of mainframes is in the banking sector, with
banks throughout the world using mainframes to process billions of transactions.

Uses of Supercomputer

The first use of supercomputers was in national defense, for example, designing
nuclear weapons and data encryption.

1- Quantum mechanics
Quantum mechanics is the study of the behavior of matter and light on the atomic
and subatomic scale. You do not need to understand quantum mechanics, but it is
important to realize that there are a very large number of calculations which
require great accuracy and thus require the use of a supercomputer.

2- Weather forecasting
Weather forecasting is based on the use of very complex computer models. Data
from the sensors at weather stations around the world is input to the model and
then many calculations are performed. Records of previous weather conditions
have also been collected over a very long period. Using the past weather readings,
the computer examines similar patterns of weather to those being experienced at
the moment and is able to predict the resulting weather.

3- Climate research
This is an extension of the use of IT in weather monitoring. Climate is measured
over a much longer timescale. The data collected over several decades can be used
to show the trends of different variables over time. For example, the levels of
nitrogen dioxide, sulphur dioxide and ozone are monitored to determine air quality.
Advantages of mainframe computers
1- Mainframes are very reliable and rarely have any system downtime. This is
one reason why organizations such as banks use them. Hardware and
software upgrades can occur while the mainframe system is still up and
running.
2- Mainframes can deal with the huge amounts of data that some organizations
need to store and process.
3- Because of a mainframe’s ability to run different operating systems, it can
cope with data coming in a variety of database formats which other
platforms would find problematic.
4- Mainframes have stronger security than other systems, as they have complex
encryption systems and authorization procedures in place.

Disadvantages of mainframe computers


1- Mainframes are very expensive to buy and can only be afforded by large
organizations such as multinational banks.
2- Large rooms are required to house the system, which is not the case with
other systems.
3- As mainframes become more advanced, the cooling systems needed become
more expensive to install and run.
4- Many organizations are migrating to Cloud-based services so they do not
have to buy their own system or hire the expertise required. The software
required to run mainframe systems is more expensive to buy than using the
Cloud.

Advantages of Supercomputer
1. Can decrypt passwords: as supercomputers are so much faster, they can attempt
vast numbers of guesses per second, so they can crack passwords and decrypt
protected phrases used on a computer or any other device.
2. High processing speed: a computer's processing speed is measured in FLOPS
(floating-point operations per second). A normal home computer can manage 10 to
100 gigaflops, while a supercomputer is hundreds to thousands of times faster. A
supercomputer can complete in seconds tasks that a normal computer would take
hours or days to complete.
3. Environmental safety: sometimes it is not safe to carry out real-world testing,
such as nuclear weapons tests or some medical trials. Supercomputers allow such
tests to be simulated, which is safer for the surrounding environment.
4. Cost efficiency: companies that need fast, efficient processing can rent
supercomputer time on a part-time basis. By doing this a company can save both
money and time.

Key Differences Between Supercomputer and Mainframe Computer


1. Supercomputers are used by scientists and engineers to solve very complex and
large mathematical and scientific calculations. Mainframe computers are used for
the storage of large databases and can serve a large number of people at a time.
2. The supercomputer is known for its fast computation of complex mathematical
operations; it executes billions of floating-point operations per second. The
mainframe computer acts as a server; it supports large databases, multiple users
and multiprogramming, and is used mainly for large business transactions. Its
speed is measured in millions of instructions per second.
3. The supercomputer is the fastest type of computer in the world; the mainframe
computer is also fast, but less so than a supercomputer.
4. The supercomputer is the largest computer; the mainframe computer is also
large, but smaller than a supercomputer.
5. Supercomputers are more costly than mainframe computers.
6. The modern supercomputer operates on Linux or its derivative variants.
However, the mainframe computer can run multiple operating systems as a single
entity.
7. Supercomputers are mostly purpose-specific: they are built to run particular
workloads such as physical simulations (for example, simulations of the early
moments of the universe, airplane and spacecraft aerodynamics, the detonation of
nuclear weapons, and nuclear fusion). Mainframe computers are not purpose-specific;
they can serve to perform many different general tasks.

Software: Programs which give instructions to the computer.

System Software: System software refers to the programs that run and control
a computer’s hardware and application software. Examples of system software are
compilers, interpreters, linkers, device drivers, operating systems and utilities.
Operating Systems:
An operating system manages the hardware within a computer system. When a
computer is turned on and after the Basic Input/Output System (BIOS) has loaded,
the operating system is the first piece of software that will load. Examples
Windows, Android, iOS.
An operating system manages hardware by carrying out tasks such as:
1. dealing with user login and security;
2. giving each running task a fair share of processor and memory time;
3. responding to input from input devices;
4. sending data and instructions to output devices;
5. sending error messages or status messages to applications or users;
6. managing the storing of files on, and the loading of them from, backing
storage; it knows the name of each file and exactly where it is stored on
the hard disk;
7. allocating time to each task or program fairly, so that all tasks or
programs get a reasonable amount of time.
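The fair time allocation in point 7 is often implemented as round-robin scheduling, where each task runs for a fixed time slice in turn. A toy sketch (the task names and timings are invented):

```python
# Round-robin scheduling sketch: each task gets one fixed time slice in turn,
# then goes to the back of the queue until its work is done.
from collections import deque

def round_robin(tasks: dict, slice_ms: int) -> list:
    order = []
    queue = deque(tasks.items())             # (name, remaining ms) pairs
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                   # the task runs for one slice
        remaining -= slice_ms
        if remaining > 0:
            queue.append((name, remaining))  # not finished: back of the queue
    return order

print(round_robin({"editor": 30, "printer": 10, "backup": 20}, 10))
```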

Compilers:
A compiler translates a program written in a high-level programming language into
machine code which a computer can understand. The machine code output is known as
the object file, and the original high-level programming language file is known as
the source file. For example, programs that have been compiled for Windows will
not work on Android unless they are compiled again for Android.
Interpreters:
Interpreters also translate a program written in a high-level programming language
into machine code, but use a different method: instead of translating the whole
source code at once, they translate and execute it one line at a time. Interpreters
are also used when testing programs so that parts of the program can be executed
without having to compile the whole program. They do not make an executable file.
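The line-at-a-time behaviour can be illustrated with a toy interpreter for an invented two-command language. Note how execution stops at the first bad line instead of checking the whole program first:

```python
# Toy interpreter: translate and execute one line at a time, stopping at the
# first error -- unlike a compiler, which translates the whole program first.
def interpret(source: str) -> dict:
    variables = {}
    for lineno, line in enumerate(source.splitlines(), start=1):
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "SET" and len(parts) == 3:    # e.g. SET x 5
            variables[parts[1]] = int(parts[2])
        elif parts[0] == "ADD" and len(parts) == 3:  # e.g. ADD x 2
            variables[parts[1]] += int(parts[2])
        else:
            # The error is reported immediately, with the offending line number.
            raise SyntaxError(f"line {lineno}: cannot interpret {line!r}")
    return variables

print(interpret("SET x 5\nADD x 2"))   # {'x': 7}
```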

Advantages of interpreter:
1. The programmer can correct errors as they are found, whereas while a major
application is being compiled, which can take a long time, the programmer has
to wait.
2. Debugging is easier with an interpreter, as error messages are output as soon
as an error in a statement is encountered.
3. An interpreted program is still in its original source code, so it will work
on any system with a suitable interpreter, whereas a compiled program will only
run on a computer with the same operating system as the one it was originally
compiled on.
4. An interpreter uses less memory than a compiler.

Advantages of compiler:
1. Once a program has been compiled, it executes all in one go.
2. With a compiled program, the source code has already been translated; the
machine code is difficult for others to understand and alter, and the source
code does not need to be translated each time the program runs.
3. A compiler creates an executable file of the program.

Linkers:
Computer programs often consist of several modules (parts) of programming code.
Each module carries out a specified task within the program. Each module will have
been compiled into a separate object file. The function of a linker (also known as a
link editor) is to combine the object files together to form a single executable file.

Device Driver:
A device driver is the software that comes with an external hardware component
and sends customized instructions to that specific component, for example a
printer driver.

Utility Software:
Utilities are part of system software. They are designed to perform functions which
maintain the computer system. Utility software is required to manage the
allocation of computer memory in order to improve the computer’s performance
and so that users can customize the appearance of their desktop.
Anti-Virus: Anti-virus software is sometimes referred to as anti-malware software
as it deals with other threats such as adware and spyware as well as viruses. It has
two main functions. The first is an anti-virus monitor that continually monitors
the system for viruses and other malware. After scanning the system, the user is
usually given a choice to delete the infected file, quarantine it, or ignore it. If
the user chooses to delete the file, there is a chance that useful data is also
deleted; ignoring is not a good option; quarantining allows further analysis of
the file. There are different methods employed by anti-virus software to detect
viruses. One such method is signature-based detection, which is based on
recognizing existing viruses. The heuristic-based detection method, sometimes
referred to as static heuristic, was devised whereby a program is decompiled and
its source code compared to that of known viruses; if a specific percentage
matches, it is flagged as a potential threat and quarantined.
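Signature-based detection can be sketched by comparing a file's hash against a set of known-bad hashes. The "signature" below is invented for illustration; real scanners use large, regularly updated signature databases:

```python
# Signature-based detection sketch: hash the file contents and look the hash
# up in a set of known malware signatures (invented here for illustration).
import hashlib

KNOWN_SIGNATURES = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def is_infected(contents: bytes) -> bool:
    return hashlib.sha256(contents).hexdigest() in KNOWN_SIGNATURES

print(is_infected(b"malicious payload"))   # True
print(is_infected(b"harmless document"))   # False
```

This also shows the method's weakness: any new or modified virus whose hash is not yet in the database goes undetected, which is why heuristic methods exist.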

Backup: Back-up software is a program that is used to keep copies of files from a
computer or copy the content of a server’s backing storage. The back-up is an exact
duplicate of the files. It can be used for restoring the original files, should the
original files become corrupted or deleted, accidentally or deliberately. Backup
software allows the user to select the time and type of back-up they want and how
regularly the back-up is to take place.

Data Compression: It is the modifying of data so that it occupies less storage space
on a disk. It can be lossless or lossy. Lossless compression is where, after
compression, the file is converted back into its original state, without the loss of a
single bit (binary digit) of data. When the lossless compression software sees a
repeated sequence of bits it replaces the repeated sequences with a special
character which indicates what is being repeated and how many times. This type
of compression is normally used with spreadsheets, databases and word-processed
files, where the loss of just one bit could change the meaning completely. Lossy
compression permanently deletes data bits that are unnecessary, such as the
background in a frame of a video which might not change over several frames.
Lossy compression is commonly used with images and sound, where the loss of
some data bits would have little effect.

Structure of hard disk storage


A hard-disk drive consists of several platters (individual disks) with each side
(surface) of the platter (top and bottom) having its own read–write head. The read–
write heads move across the platters, floating on a film of air and, when they stop,
they either read data from or write data to the surface. They never touch the disk
surface and each surface is used to store data. The platters are stacked one above
the other and spin together at the same speed. Each surface is divided into several
tracks and each track is divided into sectors. The track on the top platter together
with the tracks exactly below it, form a cylinder.
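The disk's capacity follows directly from this geometry. A sketch with invented figures:

```python
# Disk capacity = surfaces x tracks per surface x sectors per track
#                 x bytes per sector (all figures below are illustrative).
platters = 4
surfaces = platters * 2        # each platter stores data on top and bottom
tracks_per_surface = 16_384
sectors_per_track = 256
bytes_per_sector = 512

capacity = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
print(f"{capacity / 10**9:.1f} GB")
```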

Disc Defragmentation: When files are deleted, gaps are left on the disc. When all
the cylinders have been used, the only space to store files is within the gaps. If the
gaps are not big enough, then files have to be split across gaps, meaning that they
become defragmented. A defragmentation utility will reorganize all the files so that
each file is contiguous (kept together).
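A defragmenter's job can be sketched as rewriting the disk so each file's blocks sit together. A toy model where each list entry is one block and None marks a free gap:

```python
# Toy defragmenter: given block -> file ownership, rewrite the disk so every
# file's blocks are contiguous, with the free space collected at the end.
def defragment(disk: list) -> list:
    files = []
    for block in disk:
        if block is not None and block not in files:
            files.append(block)                    # preserve first-seen order
    packed = []
    for name in files:
        packed += [name] * disk.count(name)        # a file's blocks together
    packed += [None] * (len(disk) - len(packed))   # free space at the end
    return packed

disk = ["A", None, "B", "A", None, "B", "A"]
print(defragment(disk))   # ['A', 'A', 'A', 'B', 'B', None, None]
```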
Formatting: When a disc is prepared for first time use, it needs to be formatted.
Low level formatting prepares the actual structure of the disk by dividing the disk
into cylinders and tracks and then dividing the tracks into sectors. This type of
formatting is usually carried out by manufacturers rather than individual users.
When high-level formatting is carried out, it does not permanently erase data files
but deletes the pointers on the disk that tell the OS where to find them. High-level
formatting is typically done to erase the hard disk and reinstall the OS back onto
the disk drive.
File Copying and Deleting: Files can be copied or deleted using features within an
operating system’s own interface. However, this can be slow and options are
limited. File-copying utilities enable users to have more control over which files are
copied or deleted and how they are done. The delete utility is a piece of software
which deletes the pointers that tell the OS where to find the file. The OS now
considers these sectors to be available for
the writing of fresh data. Until new data is written to the space the file occupies,
users can still retrieve the data. Data recovery software is available which allows
users to retrieve deleted data.
Types of User Interface in Operating Systems:
1. Command line interface (CLI): It requires a user to type in instructions in
order to choose options from a menu. It is usually used by programmers,
analysts or technicians who want direct communication with a computer or
to develop new software.

Advantages:
a. User is in direct communication with the computer.
b. It is possible to alter computer configuration.
c. User is not restricted to a number of predetermined options.

Disadvantages:

a. User needs to learn commands.


b. All commands are to be typed in.
c. Commands must be in correct format.

2. Graphical User Interface (GUI): It allows the user to interact with a computer
using pictures or icons (symbols). It uses technologies such as WIMP
(windows, icons, menus and pointing device), in which a mouse is used to
control the cursor and icons are selected to carry out operations.

Advantages:

a. The user does not need to learn the commands.
b. It is user-friendly.
c. A pointing device is used to click an icon to launch the application.

Disadvantages:
a. This type of OS needs more memory.
b. The user is limited to icons provided on the screen.

GUIs can include some or all of the following elements:

Windows: An area of the screen devoted to a specific task.
Icons: An image that is used to represent a program, file or task.
Menus: Menus are words on the screen which represent a list of options that can
be expanded into further sub-menus.
Pointers: This is the method of representing movement from a pointing device such
as a mouse or the human finger on a touch screen.

3. Dialogue Interface: A dialogue interface refers to using the spoken word to
communicate with a computer system. A user can give commands using their
voice and the computer system can respond by carrying out an action or with
further information using a synthesized voice.

Advantages:
a. No hands are required, which makes them suitable for use in cars
or when holding a telephone.
b. Words can be spoken by a user more quickly than a user can type
them.
c. There is no need for a physical interface.

Disadvantages:
a. The computer system can struggle to recognize the spoken word because
of different accents, different voices, stammers and background noise.
b. Systems are not intelligent enough to understand requests in any
format.

4. Gesture-Based Interface: Gesture-based interfaces recognize human motion.
This could include tracking eyes and lips, identifying hand signals or
monitoring whole body movement. There are many applications of gesture-based
interfaces, including gaming. The original Nintendo Wii enabled gamers to
move their hands while holding a remote controller and that movement
would be mimicked in games such as ten pin bowling and boxing. Microsoft’s
Xbox took this a stage further and was able to track whole body movement
without any devices being held or worn by the user. This enabled gamers to
fully engage with a game using their whole body, so boxing could now
become kick-boxing and ten pin bowling could include a run-up.

Advantages & Disadvantages:
Gesture-based interfaces provide a means of interaction for some disabled users
who cannot use conventional input devices; such a system can even track the
movement of an eye. One of the biggest problems with gesture interfaces, however,
is accuracy.

Custom Written Software vs Off-the-Shelf Software:
1. Custom written software is written specially to meet the requirements of a
client; off-the-shelf software is general purpose software available to a large
market.
2. The whole expense of custom software is met by the client for whom it is
written; the cost of off-the-shelf software is spread between many customers, so
it is not very expensive.
3. Custom software takes a long time to develop; off-the-shelf software is
readily available in the market and can be used immediately.
4. All the features the client requires are present in custom software;
off-the-shelf software often does not meet all of the requirements.
5. Custom software is compatible with the client's hardware; with off-the-shelf
software, compatibility issues may occur.
6. The client will have support from the company that wrote the custom software;
with off-the-shelf software, such support is usually not available.
7. Custom software may contain bugs; off-the-shelf software will have been used
by thousands of customers, so bugs will have been identified and fixed.
