
Types of Computers

Mainframes
In the early days of computing, mainframes were huge computers that could fill an entire room or even
a whole floor! As the size of computers has diminished while their power has increased, the term
mainframe has fallen out of use in favor of enterprise server. You'll still hear the term mentioned,
though, particularly in large companies to describe the huge machines processing millions of
transactions every day, while simultaneously working to fulfill the needs of hundreds, if not thousands,
of individual users. Although mainframes traditionally meant a centralized computer linked to less
powerful devices like workstations, this definition is blurring as smaller machines gain more power and
mainframes get more flexible.
Mainframes first came to life in the post-World War II era, as the U.S. Department of Defense ramped
up its energies to fight the Cold War. Even as servers become more numerous, mainframes are still
used to crunch some of the biggest and most complex databases in the world. They help to secure
countless sensitive transactions, from mobile payments to top-secret corporate information.
Indeed, IBM, one of the world's most enduring makers of mainframes for more than half a century, saw
a spike in mainframe sales in 2018, for the first time in five years. That's in part because mainframes
can pack so much calculating muscle into an area that's smaller than a rack of modern, high-speed
servers.

Supercomputers
This type of computer usually costs hundreds of thousands or even millions of dollars. Although some
supercomputers are single computer systems, most are composed of multiple high-performance
computers working in parallel as a single system. The best-known supercomputers are built by Cray.
Supercomputers are different from mainframes, although both types of computers wield incredible computing power for Earth's most intense industrial and scientific calculations. Mainframes are generally tuned to provide the ultimate in data reliability.
Supercomputers, on the other hand, are the Formula 1 race cars of the computer world, built for
breakneck processing speed, so that companies can hurtle through calculations that might take other
systems days, weeks, or even months to complete. They're often found at places like atomic research
centers, spy agencies, scientific institutes, or weather forecasting stations, where speed is of vital
concern. For example, the United States' National Oceanic and Atmospheric Administration, which has
some of the world's most advanced weather forecasting capabilities, uses some of the world's fastest
computers — capable of more than 8 quadrillion calculations per second.

Handheld Computers
Early computers of the 20th century famously required entire rooms. These days, you can carry much
more processing power right in your pants pocket. Handheld computers like smartphones and PDAs are among our era's most iconic devices.
Debuting in the 1990s, personal digital assistants (PDAs) were tightly integrated computers that often
used flash memory instead of a hard drive for storage. These computers usually didn't have keyboards
but relied on touchscreen technology for user input. PDAs were typically smaller than a paperback
novel and very lightweight, with reasonable battery life. For a time, they were the go-to devices for
calendars, email, and simple messaging functions.
But as the smartphone revolution began, PDAs lost their luster. Smartphones like the iPhone and
Samsung Galaxy blend calling features and PDA functionality along with full-blown computer
capabilities that get more jaw-dropping by the day. They feature touch-screen interfaces, high-speed
processors, many gigabytes of memory, complete connectivity options (including Bluetooth, Wi-Fi,
and more), dual-lens cameras, high-quality audio systems, and other features that would startle
electronics engineers from half a century ago. Although smartphones have existed in some fashion since 2000, it was the heavily hyped debut of the original iPhone in 2007 that brought the device to the masses. The look, feel and functionality of that iPhone set the template for all the other smartphones
that have followed.

Laptops
Once upon a time, if you wanted to use a PC, you had to use a desktop. Engineers simply couldn't
condense the sophisticated systems in a PC into a portable box. In the mid-1980s, though, many big
computer manufacturers made a push to popularize laptop computers.
Laptops are portable computers that integrate the display, keyboard, a pointing device or trackball,
processor, memory and hard drive all in a battery-operated package slightly larger than an average
hardcover book.
The first true commercial laptop, though, was a far cry from the svelte devices crowding retail shops
today. The Osborne 1, released in 1981, sold for around $1,800, had 64 KB of memory — and weighed
about 24 pounds (10 kilograms). As it toned your biceps, the Osborne 1 also gave your eyes a workout,
as the screen measured just 5 inches (12 centimeters) diagonally.

Fortunately, manufacturers quickly improved upon the look and feel of laptops. Just two years later,
Radio Shack's TRS-80 Model 100 packed its components into a 4-pound (1.8-kilogram) frame, but it
lacked power. By the end of the decade, NEC's UltraLite smashed barriers by cramming real
computing efficiency into the first true notebook (i.e. very light laptop) style, which weighed just 5
pounds (2.2 kilograms). The race to ultra-portability was officially on. However, laptops didn't overtake desktops in sales until 2005.

Microcomputers
The term personal computer (PC) describes a computer designed for general use by a single person. While an iMac is definitely a PC, most people associate the acronym with computers that run the Windows operating system instead. PCs were first known as microcomputers because they were
complete computers but built on a smaller scale than the huge systems in use by most businesses.
In 1981, iconic tech maker IBM unveiled its first PC, which relied on Microsoft's now-legendary
operating system — MS-DOS (Microsoft Disk Operating System). Apple followed up in 1983 by
creating the Lisa, one of the first PCs with a GUI (graphical user interface). That's a fancy way of
saying "icons" were visible on the screen. Before that, computer screens were pretty plain.

Along the way, critical components such as CPUs (central processing units) and RAM (random access
memory) evolved at a breakneck pace, making computers faster and more efficient. In 1986, Compaq's Deskpro 386 became the first PC built around Intel's 32-bit 386 processor. And of course, Intel grabbed a place in computer history
in 1993 with its first Pentium processor.
Now, personal computers have touchscreens, all sorts of built-in connectivity (like Bluetooth and
WiFi), and operating systems that morph by the day. So do the sizes and shapes of the machines
themselves.

The problem-solving process

The problem-solving process starts with the problem specification and ends with a concrete (and
correct) program.

The steps in the problem-solving process are typically:

1. problem definition

2. problem analysis

3. algorithm development

4. coding

5. program testing and debugging

6. documentation.

1st. Defining/Specifying the problem

• What will the computer program do? What tasks will it perform?

• What kind of data will it use, and where will it get its data from?

• What will be the output of the program?

• How will the program interact with the computer user?

Specifying the problem requirements forces you to state the problem clearly and unambiguously and to
gain a clear understanding of what is required for its solution. Your objective is to eliminate
unimportant aspects and to focus on the root problem, and this may not be as easy as it sounds.

2nd. Analyzing the problem


Analysis involves identifying the problem's (a) inputs, that is, the data you have to work with; (b) outputs, the desired results; and (c) any additional requirements or constraints on the solution.

3rd. Algorithm development

Find an algorithm for the problem's solution. Write a step-by-step procedure and then verify that the algorithm solves the problem as intended.

The development can be expressed as:

- pseudocode – a narrative description of the flow and logic of the intended program, written in plain
language that expresses each step of the algorithm;

- flowchart – a graphical representation that uses graphic symbols and arrows to express the algorithm.

After you write the algorithm, you should simulate the computer's execution of it step by step in a so-called desk-check process (verifying the algorithm).

4th. Coding (or programming)

This is the process of translating the algorithm into the syntax of a given programming language. You must convert each algorithm step into one or more statements in a programming language.

5th. Testing and debugging

- testing means running the program, executing all its instructions/functions, and testing the logic by
entering sample data to check the output;

- debugging is the process of finding and correcting program code mistakes (examples follow this list):

• syntax errors;
• run-time errors;
• logic errors (so-called bugs).

- field testing is performed by users who operate the software in order to locate problems.
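To make the three kinds of errors concrete, here is a minimal Python sketch (the failing lines are shown as comments so the example itself runs):

# Syntax error: the code breaks the language's grammar and will not run at all.
# print('Hello'          <- missing closing parenthesis

# Run-time error: the code is grammatical but fails while executing.
items = []
# first = items[0]       <- IndexError: the list is empty

# Logic error (a bug): the code runs but gives the wrong answer.
def average(values):
    return sum(values) / 2     # bug: should divide by len(values)

print(average([2, 4, 6]))      # prints 6.0, but the correct average is 4.0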

6th. Documenting the program by:

• internal documentation;

• external documentation.

Characteristics of a well-designed algorithm


1. Precision – the steps are precisely stated (defined).
2. Uniqueness – the results of each step are uniquely defined and depend only on the input and the results of the preceding steps.
3. Finiteness – the algorithm stops after a finite number of instructions are executed.
4. Input – the algorithm receives input.
5. Output – the algorithm produces output.
6. Generality – the algorithm applies to a set of inputs.
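As a quick illustration, Euclid's algorithm for the greatest common divisor exhibits all six characteristics. A minimal Python sketch:

def gcd(a, b):
    # Each step is precisely stated and its result uniquely determined.
    while b != 0:          # finiteness: b strictly decreases, so the loop ends
        a, b = b, a % b
    return a               # output

print(gcd(48, 18))         # input: works for any pair of non-negative
                           # integers (generality); prints 6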

Representing Algorithms

There are two main ways that algorithms can be represented – pseudocode and flowcharts.

Pseudocode
Most programs are developed using programming languages. These languages have specific syntax that
must be used so that the program will run properly. Pseudocode is not a programming language; it is a simple way of describing a set of instructions that does not have to use specific syntax.
Writing in pseudocode is similar to writing in a programming language. Each step of the algorithm is
written on a line of its own in sequence. Usually, instructions are written in uppercase, variables in
lowercase and messages in sentence case.
In pseudocode, INPUT reads a value from the user, and OUTPUT prints a message on screen.
A simple program could be created to ask someone their name and age, and to make a comment based
on these. This program represented in pseudocode would look like this:

OUTPUT 'What is your name?'
INPUT user inputs their name
STORE the user's input in the name variable
OUTPUT 'Hello' + name
OUTPUT 'How old are you?'
INPUT user inputs their age
STORE the user's input in the age variable
IF age >= 70 THEN
    OUTPUT 'You are aged to perfection!'
ELSE
    OUTPUT 'You are a spring chicken!'
END IF
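For comparison, here is a minimal sketch of the same program in Python. Unlike pseudocode, it must follow the language's exact syntax:

name = input('What is your name? ')      # INPUT and STORE in one step
print('Hello ' + name)
age = int(input('How old are you? '))    # convert the text input to a number
if age >= 70:
    print('You are aged to perfection!')
else:
    print('You are a spring chicken!')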

Flowcharts
A flowchart is a diagram that represents a set of instructions. Flowcharts normally use standard
symbols to represent the different instructions. There are few real rules about the level of detail needed
in a flowchart. Sometimes flowcharts are broken down into many steps to provide a lot of detail about
exactly what is happening. Sometimes they are simplified so that several steps are combined into just one.
Flowchart symbols

[Figure: standard flowchart symbols]

A simple program could be created to ask someone their name and age, and to make a comment based on these. This program represented as a flowchart would look like this:

[Figure: flowchart of the name-and-age program]

Constructs in structured programming


1. Sequence   Execute a list of statements in order.
Example: Baking Bread

Add flour.
Add salt.
Add yeast.
Mix.
Add water.
Knead.
Let rise.
Bake.

2. Repetition   Repeat a block of statements while a condition is true.
Example: Washing Dishes

Stack dishes by sink.
Fill sink with hot soapy water.
While moreDishes
    Get dish from counter,
    Wash dish,
    Put dish in drain rack.
End While
Wipe off counter.
Rinse out sink.
3. Selection   Choose at most one action from several alternative conditions.
Example: Sorting Mail

Get mail from mailbox.
Put mail on table.
While moreMailToSort
    Get piece of mail from table.
    If pieceIsPersonal Then
       Read it.
    ElseIf pieceIsMagazine Then
       Put in magazine rack.
    ElseIf pieceIsBill Then
       Pay it.
    ElseIf pieceIsJunkMail Then
       Throw in wastebasket.
    End If
End While
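The three constructs are easy to see in a real language as well. A minimal Python sketch of the mail-sorting example (the mail items are illustrative):

mail = [('personal', 'letter from a friend'),    # sequence: set up the data
        ('bill', 'electricity'),
        ('junk', 'flyer')]

for category, item in mail:                      # repetition: one pass per piece
    if category == 'personal':                   # selection: pick one action
        print('Read:', item)
    elif category == 'bill':
        print('Pay:', item)
    else:
        print('Throw away:', item)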

Programming paradigms
A programming paradigm is a style, or “way,” of programming. Some languages make it easy to
write in some paradigms but not others.

1. Imperative Programming
Control flow in imperative programming is explicit: commands show how the computation takes
place, step by step. Each step affects the global state of the computation.

result = []
i = 0
start:
    numPeople = length(people)
    if i >= numPeople goto finished
    p = people[i]
    nameLength = length(p.name)
    if nameLength <= 5 goto nextOne
    upperName = toUpper(p.name)
    addToList(result, upperName)
nextOne:
    i = i + 1
    goto start
finished:
    return sort(result)

2. Structured Programming
Structured programming is a kind of imperative programming where control flow is defined by
nested loops, conditionals, and subroutines, rather than via gotos. Variables are generally local to
blocks (have lexical scope).

result = [];
for i = 0; i < length(people); i++ {
    p = people[i];
    if length(p.name) > 5 {
        addToList(result, toUpper(p.name));
    }
}
return sort(result);

Early languages emphasizing structured programming: Algol 60, PL/I, Algol 68, Pascal, C, Ada 83, Modula,
Modula-2. Structured programming as a discipline is sometimes thought to have been started by a famous letter by Edsger Dijkstra entitled Go To Statement Considered Harmful.

3. Object-Oriented Programming


OOP is based on the sending of messages to objects. Objects respond to messages by performing operations, generally called methods. Messages can have arguments. A society of objects, each with its own local memory and its own set of operations, has a different feel than the monolithic processor and single shared memory of non-object-oriented languages. Many popular languages that call themselves OO languages (e.g., Java, C++) really just take some elements of OOP and mix them into imperative-looking code.
The first object-oriented language was Simula-67; Smalltalk followed soon after as the first “pure”
object-oriented language. Many languages designed from the 1980s to the present have labeled
themselves object-oriented, notably C++, CLOS (object system of Common Lisp), Eiffel, Modula-3,
Ada 95, Java, C#, Ruby.
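Python is not a pure object-oriented language, but a minimal sketch of the people-filtering task used in the other paradigm examples shows the flavor: each object holds its own data and responds to messages (method calls). The class and data here are illustrative:

class Person:
    def __init__(self, name):
        self.name = name              # the object's own local memory

    def has_long_name(self):          # the object answers a query about itself
        return len(self.name) > 5

    def shout_name(self):
        return self.name.upper()

people = [Person('Ada'), Person('Grace Hopper'), Person('Alan Turing')]
result = sorted(p.shout_name() for p in people if p.has_long_name())
print(result)                         # ['ALAN TURING', 'GRACE HOPPER']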

4. Declarative Programming
Control flow in declarative programming is implicit: the programmer states only what the result
should look like, not how to obtain it.

select upper(name)
from people
where length(name) > 5
order by name

No loops, no assignments, etc. Whatever engine interprets this code is just supposed to go get the desired information, and it can use whatever approach it wants. (The logic and constraint paradigms are generally declarative as well.)

5. Functional Programming
In functional programming, control flow is expressed by combining function calls, rather than by assigning values to variables. With functional programming:

• There are no commands, only side-effect-free expressions
• Code is much shorter, less error-prone, and much easier to prove correct
• There is more inherent parallelism, so good compilers can produce faster code

Some people like to say:

• Functional, or applicative, programming is programming without assignment statements: one just applies functions to arguments. Examples: Scheme, Haskell, Miranda, ML.
• Function-level programming does away with variables; one combines functions with functionals, a.k.a. combinators. Examples: FP, FL, J.
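Python is not a functional language, but its built-in map, filter, and sorted allow a rough sketch of the same people-filtering task in this style: no loops or assignments to mutable state, just nested expressions (the data is illustrative):

names = ['Ada', 'Grace Hopper', 'Alan Turing']

result = sorted(map(str.upper, filter(lambda n: len(n) > 5, names)))
print(result)    # ['ALAN TURING', 'GRACE HOPPER']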

The Execution of High-level programming languages


Translators, compilers, interpreters and assemblers are all software programming tools that convert code into another type of code, but each term has a specific meaning. All of the above work in some way
towards getting a high-level programming language translated into machine code that the central
processing unit (CPU) can understand. Examples of CPUs include those made by Intel (e.g., x86),
AMD (e.g., Athlon APU), NXP (e.g., PowerPC), and many others. It’s important to note that all
translators, compilers, interpreters and assemblers are programs themselves.

The most general term for a software code-converting tool is “translator.” A translator, in software programming terms, is a generic term that could refer to a compiler, assembler, or interpreter: anything that converts code in a higher-level language (e.g., Basic, C++, Fortran, Java) into another high-level language or into a lower-level language that the processor can understand, such as assembly language or machine code. If you don’t know what the tool actually does other than that it accomplishes some level of code conversion to a specific target language, then you can safely call it a translator.

Compilers
Compilers convert high-level language code to machine (object) code in one session. Compilers can
take a while, because they have to translate high-level code to lower-level machine language all at once
and then save the executable object code to memory. A compiler creates machine code that runs on a
processor with a specific Instruction Set Architecture (ISA), which is processor-dependent. For
example, you cannot compile code for an x86 and run it on a MIPS architecture without a special
compiler. Compilers are also platform-dependent. That is, a compiler can convert C++, for example, to
machine code that’s targeted at a platform that is running the Linux OS. A cross-compiler, however,
can generate code for a platform other than the one it runs on itself.

A cross-compiler running on a Windows machine, for instance, could generate code that runs on a
specific Windows operating system or a Linux (operating system) platform. Source-to-source
compilers translate one program, or code, to another of a different language (e.g., from Java to C).
Choosing a compiler, then, means that first you need to know the ISA, operating system, and the
programming language that you plan to use. Compilers often come as a package with other tools, and
each processor manufacturer will have at least one compiler or a package of software development
tools (that includes a compiler). Often the software tools (including compiler) are free; after all, a CPU
is completely useless without software to run on it. Compilers will report errors after compiling has
finished.

Interpreters
Another way to get code to run on your processor is to use an interpreter, which is not the same as a compiler. An interpreter translates code like a compiler but reads the code and immediately executes it, and therefore gets a program running sooner than a compiler can. Thus, interpreters are often used in software development tools as debugging tools, as they can execute a single line of code at a time. Compilers
translate code all at once and the processor then executes upon the machine language that the compiler
produced. If changes are made to the code after compilation, the changed code will need to be
compiled and added to the compiled code (or perhaps the entire program will need to be re-compiled).
But a program run under an interpreter, although it skips the up-front compilation of the entire program, executes much more slowly than the same program that’s been completely compiled.

Interpreters, however, are useful in areas where speed doesn’t matter (e.g., debugging and training), and it is possible to take the entire interpreter and use it on another ISA, which makes it more portable than a compiler when working between hardware architectures. There are several types of interpreters: the syntax-directed interpreter (i.e., the Abstract Syntax Tree (AST) interpreter), the bytecode interpreter, the threaded interpreter (not to be confused with concurrent processing threads), the Just-in-Time compiler (a kind of hybrid interpreter/compiler), and a few others. Instructions on how to build an interpreter can be found on the web. Some examples of programming languages that use interpreters are Python, Ruby, Perl, and PHP.
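As a concrete illustration of the bytecode-interpreter approach, CPython (the standard Python implementation) first compiles source code to bytecode and then executes that bytecode one instruction at a time. The standard dis module displays the bytecode:

import dis

code = compile('x = 1 + 2', '<example>', 'exec')  # source -> bytecode
dis.dis(code)   # lists instructions such as LOAD_CONST and STORE_NAME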

Assemblers
An assembler translates a program written in assembly language into machine language and is
effectively a compiler for the assembly language, but can also be used interactively like an interpreter.
Assembly language is a low-level programming language. Low-level programming languages are less like human language in that they are more difficult to understand at a glance; you have to study assembly code carefully in order to follow the intent of execution, and in most cases assembly code needs many more lines to represent the same functions than a higher-level language does. An
assembler converts assembly language code into machine code (also known as object code), an even
lower-level language that the processor can directly understand.

Assembly language code is more often used with 8-bit processors and becomes increasingly unwieldy as the processor’s data path becomes wider (e.g., 16-bit, 32-bit, and 64-bit). It is not
impossible for people to read machine code, the strings of ones and zeros that digital devices (including
processors) use to communicate, but it’s likely only read by people in cases of computer forensics or
brute-force hacking. Assembly language is the next level up from machine code, and is quite useful in
extreme cases of debugging code to determine exactly what’s going on in a problematic execution, for
instance. Sometimes compilers will “optimize” code in unforeseen ways that affect outcomes to the
bafflement of the developer or programmer such that it’s necessary to carefully follow the step-by-step
action of the processor in assembly code, much like a hunter tracking prey or a detective following
clues.
