GRID COMPUTING

Faisal N. Abu-Khzam & Michael A. Langston

University of Tennessee

Outline
Hour 1: Introduction
Break
Hour 2: Using the Grid
Break
Hour 3: Ongoing Research
Q&A Session

Hour 1: Introduction
What is Grid Computing?
Who Needs It?
An Illustrative Example
Grid Users
Current Grids

What is Grid Computing?
Computational Grids
- Homogeneous (e.g., clusters)
- Heterogeneous (e.g., with one-of-a-kind instruments)
Cousins of Grid Computing
Methods of Grid Computing

Computational Grids
A network of geographically distributed resources, including computers, peripherals, switches, instruments, and data.
Resources may be owned by diverse organizations.
Each user should have a single login account to access all resources.

Computational Grids
Grids are typically managed by gridware. Gridware can be viewed as a special type of middleware that enables sharing and manages grid components based on user requirements and resource attributes (e.g., capacity, performance, availability, ...).

Cousins of Grid Computing
Parallel Computing
Distributed Computing
Peer-to-Peer Computing
Many others: Cluster Computing, Network Computing, Client/Server Computing, Internet Computing, etc.

Distributed Computing
People often ask: is grid computing a fancy new name for the concept of distributed computing? In general, the answer is "no." Distributed computing is most often concerned with distributing the load of a program across two or more processes.

Peer-to-Peer Computing
Sharing of computer resources and services by direct exchange between systems. Computers can act as clients or servers, depending on which role is most efficient for the network.

Methods of Grid Computing
Distributed Supercomputing
High-Throughput Computing
On-Demand Computing
Data-Intensive Computing
Collaborative Computing
Logistical Networking

Distributed Supercomputing
Combining multiple high-capacity resources on a computational grid into a single, virtual distributed supercomputer. Tackles problems that cannot be solved on a single system.

High-Throughput Computing
Uses the grid to schedule large numbers of loosely coupled or independent tasks, with the goal of putting unused processor cycles to work.

On-Demand Computing
Uses grid capabilities to meet short-term requirements for resources that are not locally accessible. Models real-time computing demands.

Data-Intensive Computing
The focus is on synthesizing new information from data maintained in geographically distributed repositories, digital libraries, and databases. Particularly useful for distributed data mining.

Collaborative Computing
Concerned primarily with enabling and enhancing human-to-human interactions. Applications are often structured in terms of a virtual shared space.

Logistical Networking
Global scheduling and optimization of data movement. Called "logistical" because of the analogy it bears to systems of warehouses, depots, and distribution channels. Contrasts with traditional networking, which does not explicitly model storage resources in the network.

Who Needs Grid Computing?
A chemist may utilize hundreds of processors to screen thousands of compounds per hour.
Teams of engineers worldwide pool resources to analyze terabytes of structural data.
Meteorologists seek to visualize and analyze petabytes of climate data with enormous computational demands.

An Illustrative Example
Tiffany Moisan, a NASA research scientist, collected microbiological samples in the tidewaters around Wallops Island, Virginia. She needed the high-performance microscope located at the National Center for Microscopy and Imaging Research (NCMIR), University of California, San Diego.

Example (continued)
She sent the samples to San Diego and used NPACI's Telescience Grid and NASA's Information Power Grid (IPG) to view and control the output of the microscope from her desk on Wallops Island. Thus, in addition to viewing the samples, she could move the platform holding them and make adjustments to the microscope.

Example (continued)
The microscope produced a huge dataset of images. This dataset was stored using a storage resource broker on NASA's IPG. Moisan was able to run algorithms on this dataset while watching the results in real time.

Grid Users
Grid developers
Tool developers
Application developers
End users
System administrators

Grid Developers
A very small group: the implementers of a grid "protocol" who provide the basic services required to construct a grid.

Tool Developers
Implement the programming models used by application developers.
Implement basic services similar to conventional computing services:
- User authentication/authorization
- Process management
- Data access and communication

Tool Developers
Also implement new (grid) services such as:
- Resource location
- Fault detection
- Security
- Electronic payment

Application Developers
Construct grid-enabled applications for end users, who should be able to use these applications without concern for the underlying grid.
Provide programming models that are appropriate for grid environments and services that programmers can rely on when developing (higher-level) applications.

System Administrators
Balance local and global concerns.
Manage grid components and infrastructure.
Some tasks are still not well delineated due to the high degree of sharing required.

Some Highly-Visible Grids
The NSF PACI/NCSA Alliance Grid
The NSF PACI/SDSC NPACI Grid
The NASA Information Power Grid (IPG)
The Distributed Terascale Facility (DTF) Project

DTF
Currently being built by NSF's Partnerships for Advanced Computational Infrastructure (PACI).
A collaboration: NCSA, SDSC, Argonne, and Caltech will work in conjunction with IBM, Intel, Myricom, Qwest Communications, Sun Microsystems, and Oracle.

DTF Expectations
A 40-billion-bits-per-second optical network (called the TeraGrid) is to link computers, visualization systems, and data at four sites.
Performs 11.6 trillion calculations per second.
Stores more than 450 trillion bytes of data.

GRID COMPUTING
BREAK

Hour 2: Using the Grid
Globus
Condor
Harness
Legion
IBP
NetSolve
Others

Globus
A collaboration of Argonne National Laboratory's Mathematics and Computer Science Division, the University of Southern California's Information Sciences Institute, and the University of Chicago's Distributed Systems Laboratory. Started in 1996 and has been gaining popularity year after year.

Globus
A project to develop the underlying technologies needed for the construction of computational grids. Focuses on execution environments for integrating widely distributed computational platforms: data resources, displays, special instruments, and so forth.

The Globus Toolkit
The Globus Resource Allocation Manager (GRAM)
- Creates, monitors, and manages services.
- Maps requests to local schedulers and computers.
The Grid Security Infrastructure (GSI)
- Provides authentication services.
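Requests to GRAM are typically phrased in the Resource Specification Language (RSL). Below is a minimal sketch using the Globus 2-era globusrun client; the gatekeeper contact string and the executable path are hypothetical, and only common RSL attributes are shown.

    globusrun -o -r gatekeeper.example.edu/jobmanager-pbs \
        '&(executable=/usr/local/bin/simulate)
          (arguments=--trials 100)
          (count=4)'

The -o flag asks globusrun to stream the job's standard output back to the submitting terminal, and count requests four instances of the executable.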

The Globus Toolkit
The Monitoring and Discovery Service (MDS)
- Provides information about system status, including server configurations, network status, and locations of replicated datasets.
Nexus and globus_io
- Provide communication services for heterogeneous environments.

The Globus Toolkit
Global Access to Secondary Storage (GASS)
- Provides data movement and access mechanisms that enable remote programs to manipulate local data.
Heartbeat Monitor (HBM)
- Used by both system administrators and ordinary users to detect failure of system components or processes.

Condor
The Condor project started in 1988 at the University of Wisconsin-Madison. The main goal is to develop tools that support high-throughput computing on large collections of distributively owned computing resources.

Condor
Runs on a cluster of workstations to glean wasted CPU cycles. A "Condor pool" consists of any number of machines, of possibly different architectures and operating systems, that are connected by a network. Condor pools can share resources through a feature of Condor called flocking.

The Condor Pool Software
Job management services:
- Enable the submission of new jobs.
- Support requests about the job queue.
- Provide information about jobs that have already finished.
- Put a job on hold.
A machine with job management installed is called a submit machine.

The Condor Pool Software
Resource management:
- Keeps track of available machines.
- Performs resource allocation and scheduling.
Machines with resource management installed are called execute machines. A machine can be a "submit" and an "execute" machine simultaneously.
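To make the job-management side concrete, here is a minimal sketch of a Condor submit description file; the executable name, arguments, and requirements expression are hypothetical.

    # hello.sub -- submit with: condor_submit hello.sub
    universe     = vanilla
    executable   = hello
    arguments    = --trials 100
    output       = hello.out
    error        = hello.err
    log          = hello.log
    requirements = (Arch == "INTEL") && (OpSys == "LINUX")
    queue 1

After condor_submit queues the job, condor_q shows its place in the queue and condor_history reports on jobs that have finished.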

Condor-G
A version of Condor that uses Globus to submit jobs to remote resources. Allows users to monitor jobs submitted through the Globus Toolkit. Can be installed on a single machine, so there is no need to have a Condor pool installed.
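A sketch of how the same submit-file machinery reaches a remote resource, assuming the "globus" universe that Condor-G used in this era; the gatekeeper contact is hypothetical.

    # hello-g.sub -- route the job through a Globus gatekeeper
    universe        = globus
    globusscheduler = gatekeeper.example.edu/jobmanager-pbs
    executable      = hello
    output          = hello.out
    log             = hello.log
    queue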

42 . to collaborate research and exchange information.Legion An object-based metasystems software project designed at the University of Virginia to support millions of hosts and trillions of objects linked together with high-speed links. Allows groups of users to construct shared virtual work spaces.

Legion
An open system designed to encourage third-party development of new or updated applications, runtime library implementations, and core components. The key feature of Legion is its object-oriented approach.

44 . Conceived as a natural successor of the PVM project. and Emory University. the University of Tennessee.Harness A Heterogeneous Adaptable Reconfigurable Networked System A collaboration between Oak Ridge National Lab.

Harness
An experimental system based on a highly customizable, distributed virtual machine (DVM) that can run on anything from a supercomputer to a PDA. Built on three key areas of research: a parallel plug-in interface, distributed peer-to-peer control, and multiple-DVM collaboration.

IBP
The Internet Backplane Protocol (IBP) is middleware for managing and using remote storage. It was devised at the University of Tennessee to support logistical networking in large-scale distributed systems and applications.

IBP
Named because it was designed to enable applications to treat the Internet as if it were a processor backplane. On a processor backplane, the user has access to memory and peripherals, and can direct communication between them with DMA.

IBP
IBP gives the user access to remote storage and standard Internet resources (e.g., content servers implemented with standard sockets) and can direct communication between them with the IBP API.

IBP
By providing a uniform, application-independent interface to storage in the network, IBP makes it possible for applications of all kinds to use logistical networking to exploit data locality and manage buffer resources more effectively.
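To make the storage-capability idea concrete, here is a self-contained toy in C that mimics the allocate/store/load pattern IBP exposes. It runs entirely in local memory and does not call the real IBP library; the actual client calls (IBP_allocate, IBP_store, IBP_load) operate on remote depots instead of a local buffer.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy model of IBP's capability-based storage pattern: an
       allocation yields opaque capabilities, and all later access
       goes through a capability rather than a filename or address.
       This is NOT the real IBP API; it only mirrors its shape. */

    typedef struct { char *buf; size_t size; size_t used; } Allocation;
    typedef struct { Allocation *a; int can_write; } Capability;

    /* "Allocate" byte storage on a (here, purely local) depot. */
    static Allocation *depot_allocate(size_t size) {
        Allocation *a = malloc(sizeof *a);
        a->buf = malloc(size);
        a->size = size;
        a->used = 0;
        return a;
    }

    /* Append data through a write capability (cf. IBP_store). */
    static int cap_store(Capability c, const void *data, size_t n) {
        if (!c.can_write || c.a->used + n > c.a->size) return -1;
        memcpy(c.a->buf + c.a->used, data, n);
        c.a->used += n;
        return 0;
    }

    /* Read data back through a read capability (cf. IBP_load). */
    static size_t cap_load(Capability c, void *out, size_t n) {
        size_t take = n < c.a->used ? n : c.a->used;
        memcpy(out, c.a->buf, take);
        return take;
    }

    int main(void) {
        Allocation *a = depot_allocate(64);
        Capability wcap = { a, 1 };   /* write capability */
        Capability rcap = { a, 0 };   /* read-only capability */

        cap_store(wcap, "hello, depot", 12);
        char out[64] = {0};
        cap_load(rcap, out, sizeof out - 1);
        printf("read back: %s\n", out);

        free(a->buf);
        free(a);
        return 0;
    }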

NetSolve
A client-server-agent model designed for solving complex scientific problems in a loosely coupled, heterogeneous environment.

The NetSolve Agent
A "resource broker" that represents the gateway to the NetSolve system. Maintains an index of the available computational resources and their characteristics, in addition to usage statistics.

The NetSolve Agent
Accepts requests for computational services from the client API and dispatches them to the best-suited server. Runs on Linux and UNIX.

The NetSolve Client
Provides access to remote resources through simple and intuitive APIs. Runs on a user's local system. Contacts the NetSolve system through the agent, which in turn returns the server that can best service the request. Runs on Linux, UNIX, and Windows.
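A sketch of what a blocking call looks like from C. The netsl() entry point is NetSolve's blocking client call; the header name, the problem name "dscal()", and the exact argument marshaling shown here are illustrative assumptions that depend on the PDFs installed on the servers.

    #include <stdio.h>
    #include "netsolve.h"   /* NetSolve client header (assumed name) */

    int main(void) {
        /* Scale a vector remotely.  The agent picks the best server,
           the server runs the routine, and results land back here. */
        double x[4] = { 1.0, 2.0, 3.0, 4.0 };
        int n = 4;
        double alpha = 10.0;

        /* Blocking call; "dscal()" must match a problem name that a
           server has advertised through its PDF. */
        int status = netsl("dscal()", n, alpha, x);
        if (status < 0) {
            fprintf(stderr, "NetSolve call failed (%d)\n", status);
            return 1;
        }
        printf("x[0] = %g (expect 10)\n", x[0]);
        return 0;
    }

The interface also offered a non-blocking variant (netslnb) so the client can overlap local work with the remote computation.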

The NetSolve Server
The computational backbone of the system. A daemon process that awaits client requests. Runs on different platforms: a single workstation, a cluster of workstations, symmetric multiprocessors (SMPs), or massively parallel processors (MPPs).

The NetSolve Server
A key component of the server is the Problem Description File (PDF). With the PDF, routines local to a given server are made available to clients throughout the NetSolve system.

The PDF Template
    PROBLEM <program name>
    ...
    LIB <supporting library information>
    ...
    INPUT <specifications>
    ...
    OUTPUT <specifications>
    ...
    CODE
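For concreteness, here is a sketch of how the template might be filled in for the vector-scaling service used in the client example above. The keyword layout follows the template on this slide; the problem name, library path, and specification syntax are illustrative assumptions, and cblas_dscal is the standard BLAS scaling routine.

    PROBLEM dscal
    LIB /usr/local/lib/libcblas.a
    INPUT
        int n            (vector length)
        double alpha     (scale factor)
        double x[n]      (vector to scale in place)
    OUTPUT
        double x[n]      (the scaled vector)
    CODE
        cblas_dscal(n, alpha, x, 1);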

Network Weather Service
Supports grid technologies. Uses sensor processes to monitor CPU loads and network traffic. Uses statistical models on the collected data to generate a forecast of future behavior. NetSolve is currently integrating NWS into its agent.

Gridware Collaborations
NetSolve is using Globus' Heartbeat Monitor to detect failed servers. A NetSolve client that allows access to Globus is now in testing. Legion has adopted NetSolve's client-user interface to leverage its metacomputing resources. The NetSolve client uses Legion's data-flow graphs to keep track of data dependencies.

Gridware Collaborations
NetSolve can access Condor pools among its computational resources. This improves fault tolerance. IBP-enabled clients and servers allow NetSolve to allocate and schedule storage resources as part of its resource brokering.

GRID COMPUTING
BREAK

Hour 3: Ongoing Research
Motivation
Special Projects
- Ongoing work at Tennessee
General Issues
- Open questions of interest to the entire research community

Motivation
Computer speed doubles every 18 months.
Network speed doubles every 9 months.
[Graph from Scientific American (Jan. 2001) by Cleo Vilett; source: Vinod Khosla, Kleiner Perkins Caufield & Byers.]

Special Projects
The SInRG Project
- Grid Service Clusters (GSCs)
- Data Switches
Incorporating Hardware Acceleration
Unbridled Parallelism
- SETI@home and Folding@home
- The Vertex Cover Solver
Security

The SInRG Project

The Grid Service Cluster
The basic grid building block. Each GSC will use the same software infrastructure as is now being deployed on the national grid, but tuned to take advantage of the highly structured and controlled design of the cluster. Some GSCs are general-purpose and some are special-purpose.

The Grid Service Cluster

An Advanced Data Switch
The components that make up a GSC must be able to access each other at very high speeds and with guaranteed Quality of Service (QoS). Links of at least 1 Gbps assure QoS in many circumstances simply by overprovisioning.

Computational Ecology GSC
Collaboration between computer science and mathematical ecology. An 8-processor symmetric multiprocessor (SMP). Initial in-core memory (RAM) is approximately 4 gigabytes. The out-of-core data storage unit provides a minimum of 450 gigabytes.

Medical Imaging GSC
Collaboration between computer science and the medical school. High-end graphics workstations. Distinguished by the need to have these workstations attached as directly as possible to the switch, to facilitate interactive manipulation of the reconstructed images.

Molecular Design GSC
Collaboration between computer science and chemical engineering.
Data visualization laboratory.
32 dual processors.
High-performance switch.

Machine Design GSC
Collaboration between computer science and electrical engineering.
12 Unix-based CAD workstations.
8 Linux boxes with Pilchard boards.
Investigating the potential of reconfigurable computing in grid environments.

Machine Design GSC

Types of Hardware
General-purpose hardware: can implement any function.
ASICs: hardware that can implement only a specific application.
FPGAs: reconfigurable hardware that can implement any function.

The FPGA
FPGAs offer reprogrammability, allowing an optimal logic design for each function to be implemented. Hardware implementations offer acceleration over software implementations, which run on general-purpose processors.

The Pilchard Environment
Developed at the Chinese University of Hong Kong. Plugs into a 133 MHz RAM DIMM slot and is an example of "programmable active memory." Pilchard is accessed through memory read/write operations. Higher bandwidth and lower latency than other environments.
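Since Pilchard sits in the memory space, user code can talk to it by mapping the device and issuing ordinary loads and stores. A minimal sketch on Linux; the /dev/pilchard device node, the mapping size, and the register layout are hypothetical.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/pilchard", O_RDWR);   /* hypothetical node */
        if (fd < 0) { perror("open"); return 1; }

        /* Map the board's address range, then treat it as memory. */
        volatile uint64_t *fpga = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (fpga == MAP_FAILED) { perror("mmap"); return 1; }

        fpga[0] = 42;                   /* store an operand to the FPGA */
        uint64_t result = fpga[1];      /* load the computed result     */
        printf("result = %llu\n", (unsigned long long)result);

        munmap((void *)fpga, 4096);
        close(fd);
        return 0;
    }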

Objectives
Evaluate the utility of NetSolve gridware.
Provide an interface for the remote use of FPGAs.
Determine the effectiveness of hardware acceleration in this environment.
Allow users to experiment and gauge whether a given problem would benefit from hardware acceleration.

Sample Implementations
Fast Fourier Transform (FFT)
Data Encryption Standard algorithm (DES)
Image backprojection algorithm
A variety of combinatorial algorithms

Implementation Techniques
Two types of functions are implemented:
- Software version: runs on the PC's processor.
- Hardware version: runs in the FPGA.
To implement the hardware version of a function, VHDL code is needed, as in the sketch below.
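As a flavor of what that VHDL looks like, here is a minimal combinational block that XORs a 64-bit data word with a key word, in the spirit of a single step of a cipher such as DES; the entity name and widths are illustrative.

    library ieee;
    use ieee.std_logic_1164.all;

    -- Toy hardware function: XOR a 64-bit data word with a key.
    entity xor_stage is
      port (
        data_in  : in  std_logic_vector(63 downto 0);
        key      : in  std_logic_vector(63 downto 0);
        data_out : out std_logic_vector(63 downto 0)
      );
    end xor_stage;

    architecture rtl of xor_stage is
    begin
      data_out <= data_in xor key;
    end rtl;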

The Hardware Function
Implemented in VHDL or some other hardware description language. The VHDL code is then mapped onto the FPGA (synthesis). CAD tools help make mapping decisions based on constraints such as chip area, partitioning, resource usage minimization, I/O pin counts, and routing resources and topologies.

The Hardware Function
The result of synthesis is a configuration file (bitstream). This file defines how the FPGA is to be reprogrammed in order to implement the new desired functionality. To run, a copy of the configuration file must be loaded on the FPGA.

Behind the Scenes
[Diagram: a VHDL programmer writes VHDL code, which synthesis turns into a configuration file; a software programmer supplies the software and hardware functions; a server administrator installs the PDFs and libraries on the NetSolve server; the client sends a request and receives results.]

Conclusions
Hardware acceleration is offered to both local and remote users. A development environment is provided for devising and testing a wide variety of software, hardware, and hybrid solutions. Resources are available through an efficient and easy-to-use interface.

Unbridled Parallelism
Sometimes the overhead of gridware is unneeded. Well-known examples include SETI@home and Folding@home. We're currently building a Vertex Cover solver with multiple levels of acceleration.

A Naked SSH Approach
A bit of blasphemy: the anti-gridware paradigm. Our work raises several questions:
- When does it make sense?
- How much efficiency are we gaining?
- What are its limitations?
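For concreteness, a minimal sketch of the pattern in C: fan a fixed set of work units out over plain ssh and wait for the results, with no gridware in the loop. The hostnames and the vc_solver worker command are hypothetical, and passwordless ssh keys are assumed.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        const char *hosts[] = { "node01", "node02", "node03" };
        const int nhosts = 3;
        char cmd[1024] = "";

        /* Build one shell command that launches every remote work
           unit in the background, then waits for all of them. */
        for (int i = 0; i < nhosts; i++) {
            char part[160];
            snprintf(part, sizeof part,
                     "ssh %s './vc_solver --chunk %d' > out.%d & ",
                     hosts[i], i, i);
            strcat(cmd, part);
        }
        strcat(cmd, "wait");
        return system(cmd);
    }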

Grid Security
Algorithm complexity theory
- Verifiability
- Concealment
Cryptography and checkpointing
- Corroboration
- Scalability
Voting and spot-checking
- Fault tolerance
- Reliability

Some General Issues
Grid architecture
Resource management
Performance monitoring
QoS mechanisms
Fault tolerance

References
URL to these slides: http://www.cs.utk.edu/~abukhzam/gridtutorial.htm
Condor: http://www.cs.wisc.edu/condor
Globus: http://www.globus.org

References
NetSolve: http://icl.cs.utk.edu/netsolve
Harness:
NWS: http://nws.cs.ucsb.edu/
SInRG: http://icl.cs.utk.edu/sinrg

GRID COMPUTING
END
