
Supercomputer

From Wikipedia, the free encyclopedia


IBM's Blue Gene/P supercomputer at Argonne National Laboratory runs over 250,000 processors using normal data center air conditioning, grouped in 72 racks/cabinets connected by a high-speed optical network[1]
A supercomputer is a computer with a high level of computational capacity compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). As of 2015, there are supercomputers which can perform up to quadrillions of FLOPS.[2]
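As a rough illustration of how FLOPS figures arise, a machine's theoretical peak is the product of its processor count, clock rate, and floating-point operations per cycle. The machine below is entirely hypothetical, and real benchmark scores (such as Linpack) fall well short of this theoretical peak:

```python
def peak_flops(nodes, cores_per_node, clock_hz, flops_per_cycle):
    """Theoretical peak FLOPS of a machine; an illustrative formula,
    not a measured benchmark."""
    return nodes * cores_per_node * clock_hz * flops_per_cycle

# Hypothetical machine: 10,000 nodes, 64 cores per node,
# 2.0 GHz clock, 16 double-precision FLOPs per core per cycle.
peak = peak_flops(10_000, 64, 2.0e9, 16)
print(f"{peak / 1e15:.1f} PFLOPS")  # prints "20.5 PFLOPS"
```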
Supercomputers were introduced in the 1960s, made initially, and for decades primarily, by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. While the supercomputers of the 1970s used only a few processors, in the 1990s machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm.[3][4]
As of June 2016, the fastest supercomputer in the world is the Sunway TaihuLight, with a Linpack benchmark of 93 PFLOPS, exceeding the previous record holder, Tianhe-2, by around 59 PFLOPS. It tops the rankings in the TOP500 supercomputer list. Sunway TaihuLight is also notable for its use of indigenous chips; it is the first Chinese computer to enter the TOP500 list without using hardware from the United States. As of June 2016, China, for the first time, had more computers (167) on the TOP500 list than the United States (165). However, U.S.-built computers held ten of the top 20 positions.[5][6]
Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.[7]
Systems with massive numbers of processors generally take one of two paths. In one approach (e.g., distributed computing), hundreds or thousands of discrete computers (e.g., laptops) distributed across a network (e.g., the Internet) devote some or all of their time to solving a common problem; each individual computer (client) receives and completes many small tasks, reporting the results to a central server which integrates the task results from all the clients into the overall solution.[8][9] In another approach, thousands of dedicated processors are placed in close proximity to each other (e.g., in a computer cluster); this saves considerable time moving data around and makes it possible for the processors to work together (rather than on separate tasks), for example in mesh and hypercube architectures.
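The client-server path described above can be sketched in a few lines, with thread workers standing in for networked client machines and a toy summation as the common problem (all names and the workload here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(bounds):
    # A "client" receives one small task, completes it, and reports back.
    lo, hi = bounds
    return sum(range(lo, hi))

def solve(n, n_tasks=8):
    # The "server" splits the overall problem into small tasks, farms
    # them out, and integrates the partial results into the solution.
    edges = [n * i // n_tasks for i in range(n_tasks + 1)]
    tasks = [(edges[i], edges[i + 1]) for i in range(n_tasks)]
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        return sum(pool.map(run_task, tasks))

print(solve(1_000_000) == sum(range(1_000_000)))  # prints "True"
```

In a real volunteer-computing system the workers would be remote machines communicating over the network, and the server would also handle scheduling, retries, and result validation; the split-work-integrate shape is the same.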
The use of multi-core processors combined with centralization is an emerging trend; one can think of this as a small cluster (the multicore processor in a smartphone, tablet, laptop, etc.) that both depends upon and contributes to the cloud.[10][11]