ISCA 2000
Agenda
Overview of Computing
Motivations & Enabling Technologies
Cluster Architecture & its Components
Clusters Classifications
Cluster Middleware
Single System Image
Representative Cluster Systems
Resources and Conclusions
Computing Elements
Applications
Programming Paradigms
Threads Interface
Microkernel (Operating System)
Multi-Processor Computing System
P P P P P P (Processors) - Hardware
[Figure: two eras of computing - the Sequential Era and the Parallel Era,
each evolving through Architectures, System Software, Applications, and
P.S.Es (problem-solving environments), moving from R&D through
commercialization to commodity, roughly 1940-2030.]
Computing Power and Computer Architectures
Computing Power (HPC) Drivers
Life Sciences
Aerospace
CAD/CAM
Digital Biology
E-commerce/anything
Military Applications
How to Run Applications Faster?
Computing Platforms Evolution:
Breaking Administrative Barriers
[Figure: performance grows as administrative barriers are broken and
"2100"-style commodity computers are aggregated at ever-wider levels:
Individual -> Group -> Department -> Campus -> State -> National ->
Globe -> Inter-Planet -> Universe.]
E-Commerce and PDC?
Killer Applications of Clusters
Major problems/issues in E-commerce
Social Issues
Capacity Planning
Multilevel Business Support (e.g., B2P2C)
Information Storage, Retrieval, and Update
Performance
Heterogeneity
System Scalability
System Reliability
Identification and Authentication
System Expandability
Security
Cyber Attacks Detection and Control (cyberguard)
Data Replication, Consistency, and Caching
Manageability (administration and control)
Amazon.com: Online Sales/Trading Killer E-commerce Portal
Several thousands of items
– books, publishers, suppliers
Millions of customers
– customer details, transaction details, support for transaction updates
(Millions) of partners
– keeping track of partner details, referral links to partners, and sales
and payments
Sales based on advertised price
Sales through auctions/bids
– a mechanism for participating in the bid (buyers/sellers define the
rules of the game)
Can these drive E-Commerce?
[Figure: racks of "2100" commodity computers.]
Clusters are already in use for web serving, web hosting, and a number of
other Internet applications, including E-commerce
– scalability, availability, performance, reliable high-performance
massive storage, and database support.
– Attempts to support online detection of cyber attacks (through data
mining) and control.
Hyperclusters and the GRID:
– Support for transparency in (secure) site/data replication for high
availability and quick response time (taking the site close to the user).
– Compute power from hyperclusters/the Grid can be used in data mining for
cyber attack and fraud detection and control.
– Helps to build the Compute Power Market, ASPs, and computing portals.
Science Portals - e.g., the PAPIA system
Pentiums
Myrinet
NetBSD/Linux
PM
SCore-D
MPC++
Sequential Architecture
Limitations
Computational Power Improvement
[Figure: C.P.I. vs. number of processors (1, 2, ...). A multiprocessor's
computational power keeps improving as processors are added; a
uniprocessor's stays flat.]
Human Physical Growth Analogy:
Computational Power Improvement
[Figure: growth vs. age (5 to 45). Vertical growth stops after
adolescence; beyond that, growth is horizontal - the analogy for adding
processors rather than speeding up a single processor.]
Why Parallel Processing NOW?
History of Parallel Processing
Motivating Factors
Main HPC Architectures..1a
Motivation for using Clusters
Main HPC Architectures..1b.
Parallel Processing Paradox
The Need for Alternative
Supercomputing Resources
Technology Trend
Scalable Parallel Computers
Design Space of Competing
Computer Architecture
Towards Inexpensive
Supercomputing
It is:
Cluster Computing..
The Commodity Supercomputing!
Cluster Computing - Research
Projects
Beowulf (CalTech and NASA) - USA
CCS (Computing Centre Software) - Paderborn, Germany
Condor - University of Wisconsin-Madison, USA
DQS (Distributed Queuing System) - Florida State University, USA
EASY - Argonne National Lab, USA
HPVM (High Performance Virtual Machine) - UIUC, now UCSD, USA
far - University of Liverpool, UK
Gardens - Queensland University of Technology, Australia
MOSIX - Hebrew University of Jerusalem, Israel
MPI (MPI Forum, MPICH is one of the popular implementations)
NOW (Network of Workstations) - Berkeley, USA
NIMROD - Monash University, Australia
NetSolve - University of Tennessee, USA
PBS (Portable Batch System) - NASA Ames and LLNL, USA
PVM - Oak Ridge National Lab./UTK/Emory, USA
Cluster Computing - Commercial
Software
Motivation for using Clusters
Motivation for using Clusters
Cycle Stealing
Cycle Stealing
Cycle Stealing
Rise & Fall of Computing
Technologies
Original Food Chain Picture
1984 Computer Food Chain
Mainframe
PC
Workstation
Mini Computer
Vector Supercomputer
1994 Computer Food Chain
PC
Workstation
Mainframe (future is bleak)
Computer Food Chain (Now and Future)
What is a cluster?
Why Clusters now?
(Beyond Technology and Cost)
Architectural Drivers…(cont)
...Architectural Drivers
Clustering of Computers
for Collective Computing: Trends
Basic Components
[Figure: a cluster node (Sun Ultra 170) with processor (P) and memory (M),
and a Myricom NIC on the I/O bus, attached to a 160 MB/s Myrinet network.]
Massive Cheap Storage Cluster
Basic unit: 2 PCs double-ending four SCSI chains of 8 disks each
Cluster of SMPs (CLUMPS)
Millennium PC Clumps
Inexpensive, easy-to-manage cluster
Replicated in many departments
Prototype for very large PC cluster
Adoption of the Approach
So What’s So Different?
Commodity parts?
Communications Packaging?
Incremental Scalability?
Independent Failure?
Intelligent Network Interfaces?
Complete System on every node
– virtual memory
– scheduler
– files
– ...
OPPORTUNITIES
&
CHALLENGES
Opportunity of Large-scale
Computing on NOW
Shared Pool of Computing Resources:
Processors, Memory, Disks, Interconnect
MPP/DSM:
– Compute across multiple systems: parallel.
Network RAM:
– Idle memory in other nodes. Page across other nodes' idle memory.
Software RAID:
– A file system supporting parallel I/O and reliability; mass storage.
Multi-path Communication:
– Communicate across multiple networks: Ethernet, ATM, Myrinet.
Parallel Processing
Network RAM
Software RAID: Redundant
Array of Workstation Disks
I/O Bottleneck:
– Microprocessor performance is improving by more than 50% per year.
– Disk access-time improvement is < 10% per year.
– Applications often perform I/O.
RAID cost per byte is high compared to single disks.
RAIDs are connected to host computers, which are often a performance and
availability bottleneck.
RAID in software: striping data across an array of workstation disks
provides performance, and some degree of redundancy provides availability.
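To make the striping idea concrete, here is a minimal user-level sketch
(a hedged illustration: the file names and block size are hypothetical,
and real software RAID runs in the OS driver layer, not over files):

/* Sketch: RAID-0-style striping across "disks" modelled as files. */
#include <stdio.h>
#include <string.h>

#define NDISKS 4
#define BLOCK  4096   /* stripe unit in bytes */

int main(void) {
    FILE *disk[NDISKS];
    char path[32], data[16 * BLOCK];
    memset(data, 'x', sizeof data);            /* pretend user data */
    for (int d = 0; d < NDISKS; d++) {
        snprintf(path, sizeof path, "disk%d.img", d);   /* hypothetical */
        if (!(disk[d] = fopen(path, "wb"))) { perror(path); return 1; }
    }
    /* Block i lands on disk (i % NDISKS) at offset (i / NDISKS) * BLOCK,
     * so consecutive blocks can be serviced by different disks in parallel. */
    for (long i = 0; i < 16; i++) {
        FILE *f = disk[i % NDISKS];
        fseek(f, (i / NDISKS) * BLOCK, SEEK_SET);
        fwrite(data + i * BLOCK, 1, BLOCK, f);
    }
    for (int d = 0; d < NDISKS; d++) fclose(disk[d]);
    return 0;
}

Redundancy (the "some degree" above) would add a parity block per stripe,
as in RAID-5.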
Software RAID, Parallel File
Systems, and Parallel I/O
Cluster Computer and its
Components
Clustering Today
Cluster Components...1a
Nodes
Cluster Components…3
High Performance Networks
Ethernet (10Mbps),
Fast Ethernet (100Mbps),
Gigabit Ethernet (1Gbps)
SCI (Dolphin - ~12 microsecond MPI latency)
ATM
Myrinet (1.2Gbps)
Digital Memory Channel
FDDI
Cluster Components…4
Network Interfaces
Cluster Components…5
Communication Software
Traditional OS-supported facilities (heavyweight
due to protocol processing)
– Sockets (TCP/IP), Pipes, etc.
Lightweight protocols (user level)
– Active Messages (Berkeley)
– Fast Messages (Illinois)
– U-net (Cornell)
– XTP (Virginia)
Systems can be built on top of the above protocols
(the heavyweight path is sketched below).
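For contrast with the user-level protocols, a minimal sketch of the
traditional kernel-socket path (peer address and port are hypothetical;
each call traps into the kernel and runs the full TCP/IP stack):

/* Sketch: one message over TCP sockets - the "heavyweight" path. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);        /* kernel trap 1 */
    if (fd < 0) { perror("socket"); return 1; }
    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);                   /* hypothetical port */
    inet_pton(AF_INET, "10.0.0.2", &peer.sin_addr);  /* hypothetical node */
    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) { /* trap 2 */
        perror("connect"); return 1;
    }
    const char *msg = "hello, cluster";
    write(fd, msg, strlen(msg));   /* trap 3: data copy + protocol processing */
    close(fd);
    return 0;
}

User-level protocols such as Active Messages avoid these traps and copies
by giving applications protected, direct access to the network interface.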
Cluster Components…6a
Cluster Middleware
Cluster Components…6b
Middleware Components
Hardware
– DEC Memory Channel, DSM (Alewife, DASH), SMP techniques
OS / Gluing Layers
– Solaris MC, Unixware, GLUnix
Applications and Subsystems
– System management and electronic forms
– Runtime systems (software DSM, PFS, etc.)
– Resource management and scheduling (RMS):
• CODINE, LSF, PBS, NQS, etc.
Cluster Components…7a
Programming environments
Cluster Components…7b
Development Tools ?
Compilers
– C/C++/Java
– Parallel programming with C++ (MIT Press book)
RAD (rapid application development) tools
– GUI-based tools for parallel-program modeling
Debuggers
Performance Analysis Tools
Visualization Tools
Cluster Components…8
Applications
Sequential
Parallel / Distributed (cluster-aware applications)
– Grand Challenge applications
• Weather Forecasting
• Quantum Chemistry
• Molecular Biology Modeling
• Engineering Analysis (CAD/CAM)
• ……………….
– PDBs, web servers, data mining
Key Operational Benefits of Clustering
Classification
of Cluster Computer
Clusters Classification..1
HA Cluster: Server Cluster with
"Heartbeat" Connection
Clusters Classification..2
Clusters Classification..3
Building Scalable Systems:
Cluster of SMPs (Clumps)
Clusters Classification..5
– Heterogeneous Clusters
• Nodes based on different processors and
running different OSes.
Clusters Classification..6a
Dimensions of Scalability & Levels of Clustering
[Figure: three axes of scalability - (1) Technology: CPU, memory, I/O, OS;
(2) Platform: uniprocessor, SMP, cluster, MPP; (3) Level of clustering:
workgroup, department, campus, enterprise, public metacomputing (GRID).]
Clusters Classification..6b
Levels of Clustering
Group Clusters (#nodes: 2-99)
– a set of dedicated/non-dedicated computers, mainly connected
by a SAN such as Myrinet
Departmental Clusters (#nodes: 99-999)
Organizational Clusters (#nodes: many 100s)
– e.g., using ATM networks
Internet-wide Clusters = Global Clusters (#nodes: 1000s to many
millions)
– Metacomputing
– Web-based Computing
– Agent-Based Computing
• Java plays a major role in web- and agent-based computing
Major issues in cluster design
Cluster Middleware
and
Single System Image
A typical Cluster Computing
Environment
[Figure: a layered stack with the Application on top, Hardware/OS at the
bottom, and "???" in between - the gap that cluster middleware must fill.]
CC should support
[Figure: the same stack, with middleware between Application and Hardware/OS.]
SSI Clusters--SMP services on a CC
What is Cluster Middleware ?
Middleware Design Goals
Benefits of Single System Image
Desired SSI Services
The user need not be aware of the underlying system architecture to use
these machines effectively.
Scalability Vs. Single System Image
[Figure: the trade-off between scalability and single system image, from a
uniprocessor (UP, perfect SSI) outward to more scalable platforms.]
SSI Levels/How do we implement
SSI ?
Hardware Level
SSI at Application and
Subsystem Level
Level: Kernel / OS Layer
Examples: Solaris MC, Unixware, MOSIX, Sprite, Amoeba, GLUnix
Boundary: each name space: files, processes, pipes, devices, etc.
Importance: kernel support for applications, adm subsystems
(c) In Search of Clusters
SSI Characteristics
SSI Boundaries - an application's SSI boundary
[Figure: the batch system draws the SSI boundary around the cluster nodes
it manages. (c) In Search of Clusters]
Relationship Among
Middleware Modules
SSI via OS path!
SSI Representative Systems
OS level SSI
– SCO NSC UnixWare
– Solaris-MC
– MOSIX, ….
Middleware level SSI
– PVM, TreadMarks (DSM), Glunix,
Condor, Codine, Nimrod, ….
Application level SSI
– PARMON, Parallel Oracle, ...
SCO NonStop® Cluster for UnixWare
http://www.sco.com/products/clustering/
[Figure: two nodes, each running the standard OS plus NonStop extensions
that intercept kernel calls and device access, connected to each other and
to other nodes via ServerNet™.]
How does NonStop Clusters Work?
Solaris MC Architecture
http://www.sun.com/research/solaris-mc/
Solaris MC components
[Figure: Solaris MC components layered beneath the applications.]
MOSIX for Linux at HUJI
NOW @ Berkeley
http://now.cs.berkeley.edu/
NOW Software Components
[Figure: the NOW software stack - parallel apps and large sequential apps
on top; Sockets, Split-C, MPI, HPF, vSM in the middle; Name Server, Global
Layer Unix (GLUnix), and Active Messages underneath.]
3 Paths for Applications on
NOW?
Revolutionary (MPP Style): write new programs from
scratch using MPP languages, compilers, libraries,…
Porting: port programs from mainframes, supercomputers,
MPPs, …
Evolutionary: take a sequential program & use
1) Network RAM: first use the memory of many computers
to reduce disk accesses; if not fast enough, then:
2) Parallel I/O: use many disks in parallel for accesses not
in the file cache; if not fast enough, then:
3) Parallel program: parallelize the program until it uses
enough processors that it is fast => large speedup without a
fine-grain parallel program
Comparison of 4 Cluster Systems
Cluster Programming
Environments
Shared Memory Based
– DSM
– Threads/OpenMP (enabled for clusters)
– Java threads (HKU JESSICA, IBM cJVM)
Message Passing Based
– PVM
– MPI
Parametric Computations
– Nimrod/Clustor
Automatic Parallelising Compilers
Parallel Libraries & Computational Kernels (NetSolve)
Levels of Parallelism
Code granularity, from coarse to fine (tool : code item : grain):
PVM/MPI : tasks (task i-1, task i, task i+1) : large grain
(task level - unit is the program)
Threads : functions (func1(), func2(), func3()) : medium grain
(control level - unit is the function/thread)
Compilers : loop iterations (a(0)=.., a(1)=.., a(2)=.., b(0)=.., b(1)=.., b(2)=..) : fine grain
(data level - unit is the loop)
CPU (with hardware) : instructions (+, x, load) : very fine grain
(multiple issue - unit is the instruction)
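A single toy program can show the middle two grains side by side (a
minimal sketch; compile with -lpthread):

/* Sketch: medium-grain (threads) and fine-grain (loop) parallelism. */
#include <pthread.h>
#include <stdio.h>

#define N 8
static double a[N], b[N], c[N];

/* Medium grain: a whole function becomes a thread (control level). */
static void *fill(void *arg) {
    int base = *(int *)arg;                 /* which half of the arrays */
    for (int i = base; i < base + N / 2; i++) { a[i] = i; b[i] = 2 * i; }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int lo = 0, hi = N / 2;
    pthread_create(&t1, NULL, fill, &lo);   /* two concurrent functions */
    pthread_create(&t2, NULL, fill, &hi);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Fine grain: independent iterations a parallelizing compiler can
     * split (data level); each multiply is a very-fine-grain unit the
     * CPU may issue alongside others. */
    for (int i = 0; i < N; i++) c[i] = a[i] * b[i];

    for (int i = 0; i < N; i++) printf("%g ", c[i]);
    printf("\n");
    return 0;
}

Task-level (large-grain) parallelism would, in turn, run whole copies of
such a program on different cluster nodes under PVM/MPI.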
MPI (Message Passing Interface)
http://www.mpi-forum.org/
A standard message passing interface.
– MPI 1.0 - May 1994 (started in 1992)
– C and Fortran bindings (now Java)
Portable (once coded, it can run on virtually all HPC
platforms, including clusters!)
Performance (by exploiting native hardware features)
Functionality (over 115 functions in MPI 1.0)
– environment management, point-to-point & collective
communications, process group, communication world,
derived data types, and virtual topology routines.
Availability - a variety of implementations available, both
vendor and public domain.
A Sample MPI Program...
[Figure: workers send "Hello, ..." messages to the master.]
# include <stdio.h>
# include <string.h>
# include "mpi.h"
main( int argc, char *argv[ ])
{
int my_rank; /* process rank */
int p; /* no. of processes */
int source; /* rank of sender */
int dest; /* rank of receiver */
int tag = 0; /* message tag, like "email subject" */
char message[100]; /* buffer */
MPI_Status status; /* function return status */
/* Start up MPI */
MPI_Init( &argc, &argv );
/* Find our process rank/id */
MPI_Comm_rank( MPI_COMM_WORLD, &my_rank);
/* Find out how many processes/tasks are part of this run */
MPI_Comm_size( MPI_COMM_WORLD, &p);
A Sample MPI Program
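The code on this continuation slide did not survive extraction; given the
declarations above, the classic "greetings" pattern would finish roughly
as follows (a hedged reconstruction, not necessarily the original):

/* Workers send a greeting to process 0; the master prints each one. */
if (my_rank != 0) {
    sprintf(message, "Hello from process %d!", my_rank);
    dest = 0;
    MPI_Send(message, strlen(message) + 1, MPI_CHAR,
             dest, tag, MPI_COMM_WORLD);
} else {
    for (source = 1; source < p; source++) {
        MPI_Recv(message, 100, MPI_CHAR, source, tag,
                 MPI_COMM_WORLD, &status);
        printf("%s\n", message);
    }
}
/* Shut down MPI */
MPI_Finalize();
}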
Execution
PARMON: A Cluster
Monitoring Tool
[Figure: PARMON architecture - a PARMON server (parmond) runs on each
cluster node; the PARMON client (parmon) runs on a JVM and talks to the
servers over a high-speed switch.]
http://www.buyya.com/parmon/
Resource Utilization at a
Glance
Globalised Cluster Storage
Reference: K. Hwang, H. Jin et al., "Designing SSI Clusters with
Hierarchical Checkpointing and Single I/O Space", IEEE Concurrency,
March 1999.
Clusters with & without Single I/O
Space
[Figure: two clusters serving their users, one without and one with a
single I/O space.]
Benefits of Single I/O Space
Single I/O Space Design Issues
Integrated I/O Space
[Figure: one sequential address space integrating three tiers -
local disks LD1..LDn, each holding disks Di1..Dit (RADD space);
shared RAIDs SD1..SDm, each holding blocks B11..Bmk (NASD space);
and peripherals P1..Ph (NAP space).]
Addressing and Mapping
[Figure: a user application on Node 1 issues a request for data block A;
I/O agents on the nodes resolve the address, and a block mover migrates
block A between LD1, LD2, or SDi of the NASD space.]
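A toy sketch of such an address translation (tier sizes and names are
hypothetical; the paper's actual mapping may differ):

/* Sketch: map a block number in the single I/O space to (tier, device,
 * local block) across the RADD, NASD, and NAP tiers. */
#include <stdio.h>

#define N_LOCAL  4       /* local disks, T_BLOCKS blocks each (RADD) */
#define T_BLOCKS 1000
#define M_SHARED 2       /* shared RAIDs, K_BLOCKS blocks each (NASD) */
#define K_BLOCKS 5000

typedef struct { const char *tier; long device, block; } Location;

Location map_block(long g) {
    long radd = (long)N_LOCAL * T_BLOCKS;
    long nasd = (long)M_SHARED * K_BLOCKS;
    if (g < radd)                    /* sequential addresses hit RADD first */
        return (Location){"RADD", g / T_BLOCKS, g % T_BLOCKS};
    g -= radd;
    if (g < nasd)                    /* then the shared-RAID space */
        return (Location){"NASD", g / K_BLOCKS, g % K_BLOCKS};
    return (Location){"NAP", 0, g - nasd};   /* finally peripherals */
}

int main(void) {
    long addrs[] = {42, 3999, 4100, 14100};
    for (int i = 0; i < 4; i++) {
        Location l = map_block(addrs[i]);
        printf("block %ld -> %s device %ld, block %ld\n",
               addrs[i], l.tier, l.device, l.block);
    }
    return 0;
}

An I/O agent holding such a map can forward a request to the owning node
or device without the application knowing where block A physically lives.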
What Next ??
Clusters of Clusters (HyperClusters)
[Figure: three clusters (Cluster 1, 2, and 3) connected over a LAN/WAN.
Each cluster has a scheduler with a master daemon, execution daemons on
its nodes, and graphical submit/control clients.]
Towards Grid Computing….
[Figure: a world map of resources; for illustration, resources are placed
arbitrarily on the GUSTO test-bed.]
What is a Grid?
An infrastructure that couples
– Computers (PCs, workstations, clusters, traditional supercomputers,
and even laptops, notebooks, mobile computers, PDAs, and so on)
– Software (e.g., renting expensive special-purpose applications on
demand)
– Databases (e.g., transparent access to the human genome database)
– Special instruments (e.g., a radio telescope - SETI@Home searching for
life in the galaxy, Astrophysics@Swinburne for pulsars)
– People (maybe even animals, who knows?)
across local/wide-area networks (enterprise, organisations, or the
Internet) and presents them as a unified integrated (single) resource.
Conceptual view of the Grid
http://www.sun.com/hpc/
Grid Application-Drivers
Grid Components
Many GRID Projects and Initiatives
http://www.gridcomputing.com/
NetSolve
Client/Server/Agent-Based Computing
An easy-to-use tool providing efficient and uniform access to a
variety of scientific packages on UNIX platforms
• Client-server design
• Network-enabled solvers
• Seamless access to resources
• Non-hierarchical system
• Load balancing
• Fault tolerance
• Interfaces to Fortran, C, Java, Matlab, and more
[Figure: a client's request goes to an agent, which chooses a server from
the network resources and software repository; the server computes and
replies.]
HARNESS Virtual Machine
Scalable distributed control and CCA-based daemon
[Figure: hosts B, C, and D form a virtual machine (with another VM
alongside); each host runs a component-based daemon providing process
control and distributed control. Operation within the VM uses distributed
control; users customize features and extend the HARNESS daemon by
dynamically adding plug-ins.]
http://www.epm.ornl.gov/harness/
HARNESS Core Research
Parallel Plug-ins for Heterogeneous Distributed Virtual Machines
Nimrod - A Job Management System
http://www.dgs.monash.edu.au/~davida/nimrod.html
Job processing with Nimrod
Nimrod/G Architecture
[Figure: Nimrod/G clients connect to the Nimrod engine, which works with a
schedule advisor, TM, and TS; middleware services (GE, GIS) link the engine
to RM & TS pairs on the GUSTO test bed.
RM: Local Resource Manager, TS: Trade Server.]
Compute Power Market
[Figure: an application's job-control agent works with a schedule advisor
and a grid explorer; a trade manager negotiates with a trade server
(trading algorithms, charging, accounting) for resource reservation; a
deployment agent performs resource allocation in a resource domain, with
other services alongside.]
Pointers to Literature on
Cluster Computing
Reading Resources..1a
Internet & WWW
– Computer Architecture:
• http://www.cs.wisc.edu/~arch/www/
– PFS & Parallel I/O
• http://www.cs.dartmouth.edu/pario/
– DSMs
• http://www.cs.umd.edu/~keleher/dsm.html
Reading Resources..1b
Internet & WWW
– Solaris-MC
• http://www.sunlabs.com/research/solaris-mc
– Microprocessors: Recent Advances
• http://www.microprocessor.sscc.ru
– Beowulf:
• http://www.beowulf.org
– Metacomputing
• http://www.sis.port.ac.uk/~mab/Metacomputing/
Reading Resources..2
Books
– In Search of Clusters
• by G. Pfister, Prentice Hall (2nd ed.), 1998
Reading Resources..3
Journals
Cluster Computing Infoware
http://www.csse.monash.edu.au/~rajkumar/cluster/
Cluster Computing Forum
http://www.ieeetfcc.org
TFCC Activities...
Network Technologies
OS Technologies
Parallel I/O
Programming Environments
Java Technologies
Algorithms and Applications
Analysis and Profiling
Storage Technologies
High Throughput Computing
TFCC Activities...
High Availability
Single System Image
Performance Evaluation
Software Engineering
Education
Newsletter
Industrial Wing
TFCC Regional Activities
– All of the above have their own pages; see pointers from:
– http://www.ieeetfcc.org
TFCC Activities...
Clusters Revisited
Summary
Conclusions
Computing Platforms Evolution:
Breaking Administrative Barriers
[Figure: as at the start of the talk - performance grows as administrative
barriers are broken: Individual -> Group -> Department -> Campus -> State
-> National -> Globe -> Inter-Planet -> Universe.]
Backup Slides...
SISD: A Conventional Computer
[Figure: a single processor consumes one instruction stream and one data
input stream, producing one data output stream.]
The MISD Architecture
[Figure: one data input stream flows through processors A, B, and C, each
driven by its own instruction stream (A, B, C), yielding one data output
stream.]
The SIMD Architecture
[Figure: processors A, B, and C execute the same instruction on separate
data input streams A, B, and C, producing data output streams A, B, and C;
e.g., Ci <= Ai * Bi.]
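The example Ci <= Ai * Bi written out as a data-parallel loop (plain C; on
SIMD hardware, or with a vectorizing compiler, the element multiplies
execute in lock-step across lanes):

/* Sketch: the SIMD example - one operation applied to N data streams. */
#include <stdio.h>

#define N 8

int main(void) {
    double A[N], B[N], C[N];
    for (int i = 0; i < N; i++) { A[i] = i + 1; B[i] = 2.0; }

    for (int i = 0; i < N; i++)   /* same instruction, N data elements */
        C[i] = A[i] * B[i];

    for (int i = 0; i < N; i++) printf("C[%d] = %g\n", i, C[i]);
    return 0;
}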