Douglas Thain, University of Notre Dame
GeoClouds Workshop, 17 September 2009
The Cooperative Computing Lab
We collaborate with people who have large-scale computing problems. We build new software and systems to help them achieve meaningful goals. We run a production computing system used by people at ND and elsewhere. We conduct computer science research, informed by real-world experience, with an impact upon problems that matter.
Clouds in the Hype Cycle
Gartner Hype Cycle Report, 2009
What is cloud computing?
A cloud provides rapid, metered access to a virtually unlimited set of resources. This has two significant impacts on users:
– End users must have an economic model for the work that they want to accomplish.
– Apps must be flexible enough to work with an arbitrary number and kind of resources.
Example: Amazon EC2, Sep 2009 (simplified slightly for discussion)
Small: 1 core, 1.5GB RAM, 160GB disk – 10 cents/hour
Large: 2 cores, 7.5GB RAM, 850GB disk – 40 cents/hour
Extra Large: 4 cores, 15GB RAM, 1690GB disk – 80 cents/hour
And the Simple Storage Service:
– 15 cents per GB-month stored
– 17 cents per GB transferred (outside of EC2)
– 1 cent per 1000 write operations
– 1 cent per 10000 read operations
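To make the economic model concrete, here is a minimal cost sketch in Python. The rates are the ones quoted above; the workload numbers are purely illustrative.

    # Hedged sketch: estimate the cost of a hypothetical workload at the
    # Sep 2009 rates above. All workload numbers are invented for illustration.
    SMALL_RATE = 0.10   # dollars per small-instance hour
    STORE_RATE = 0.15   # dollars per GB-month stored in S3
    XFER_RATE  = 0.17   # dollars per GB transferred out of EC2

    cpu_hours = 1000    # hypothetical: total small-instance hours needed
    gb_stored = 50      # hypothetical: GB held in S3 for one month
    gb_out    = 10      # hypothetical: GB downloaded at the end

    cost = cpu_hours * SMALL_RATE + gb_stored * STORE_RATE + gb_out * XFER_RATE
    print("estimated cost: $%.2f" % cost)   # -> estimated cost: $109.20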
Is Cloud Computing New?
Not entirely, but a combination of the old ideas of utility computing and distributed computing:
– 1960 – MULTICS
– 1980 – The Cambridge Ring
– 1987 – Condor Distributed Batch System
– 1989 – Seti@Home
– 1990s – Clusters, Beowulf, NOW
– 1995 – Globus, MPI, Grid Computing
– 2001 – TeraGrid
– 2004 – Sun Rents CPUs at $1/hour
– 2006 – Amazon EC2 and S3
Clouds Trade CapEx for OpEx
(Chart: cost over time, comparing the capital expense of ownership, 2x the OpEx of ownership, and the OpEx of cloud computing.)
What about grid computing?
A vision much like clouds:
– A worldwide framework that would make massive scale computing as easy to use as an electrical socket.
The more modest realization:
– A means for accessing remote computing facilities in their native form, usually for CPU-intensive tasks.
The social context:
– Large collaborative efforts between computer scientists and computer-savvy fields, particularly physics and astronomy.
Clouds vs Grids
Grids provide a job execution interface:
– Run program P on input A, return the output.
– Allows the system to maximize utilization and hide failures, but provides few performance guarantees and inaccurate metering.
Clouds provide resource allocation:
– Create a VM with 2GB of RAM for 7 days.
– Can be used to build interactive services.
– Gives predictable performance and accurate metering, but exposes problems to the user.
– How do I run 1M jobs on 100 servers?
(Diagram: the user submits 1M jobs to a grid computing layer, which provides job execution, dispatches jobs, and manages load; that layer in turn allocates 100 CPUs from a cloud computing layer, which provides resource allocation.)
(Diagram: allocate 100 cores from the cloud to create a Condor pool with 100 nodes, then run 1M jobs on the pool.)
Clouds Solve Some Grid Problems
1. Application compatibility is simplified.
– You provide a VM for Linux 2.4.
2. Performance is reasonably predictable.
– 10% variations rather than orders of magnitude.
3. Fewer administrative headaches for the lone user.
– A credit card swipe instead of a certificate.
But, Problems New and Old
Unfortunately, location still matters.
How do I reliably execute 1M jobs?
Can I share resources and data with others in the cloud?
How do I authenticate others in the cloud?
Can we make applications efficiently span multiple cloud providers?
Can we join existing centers with clouds?
(These are all problems contemplated by grid.)
More Open Questions
Can I afford to move my data into the cloud? Can I afford to get it out?
Do I trust the cloud to secure my data?
How do I go about constructing an economic model for my research?
Are there social/technical dangers in putting too many eggs in one basket?
Is pay-as-you-go the proper model for research?
Should universities get out of the data center business?
Clusters, clouds, and grids give us access to unlimited CPUs.
How do we write programs that can run effectively in large systems?
MapReduce( S, M, R )
(Diagram: the map function M converts each item of the input set S into (key, value) pairs; the pairs are grouped by key (Key0, Key1, ... KeyN); the reduce function R collapses each key's values into a single output O0, O1, O2, ...)
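As a point of reference, the semantics fit in a few lines of sequential Python; this toy is not the distributed implementation, just the contract it satisfies.

    # Toy sequential MapReduce: M turns each item into (key, value) pairs,
    # pairs are grouped by key, and R reduces each group to one output.
    from collections import defaultdict

    def mapreduce(S, M, R):
        groups = defaultdict(list)
        for item in S:
            for key, value in M(item):
                groups[key].append(value)
        return {key: R(key, values) for key, values in groups.items()}

    # Classic example: word count.
    docs = ["the cat", "the dog"]
    counts = mapreduce(docs,
                       M=lambda doc: [(w, 1) for w in doc.split()],
                       R=lambda key, values: sum(values))
    print(counts)   # {'the': 2, 'cat': 1, 'dog': 1}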
Of course, not all science fits into the Map-Reduce model!
Example: Biometrics Research
Goal: Design a robust face comparison function.
(Diagram: F applied to two images of the same person returns 0.97; applied to images of different people, 0.05.)
Similarity Matrix Construction
(Diagram: an all-to-all matrix of pairwise similarity scores, 1.0 along the diagonal.)
Challenge Workload:
60,000 iris images, 1MB each
0.02s per F
833 CPU-days
600 TB of I/O
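The CPU figure follows directly from the workload size; a quick back-of-the-envelope check (the I/O total depends on how comparisons are blocked, so it is not derived here):

    # Back-of-the-envelope check of the workload numbers above.
    images = 60000
    pairs = images * images              # 3.6e9 invocations of F
    cpu_days = pairs * 0.02 / 86400      # 0.02s per F, 86400s per day
    print("%.0f CPU-days" % cpu_days)    # -> 833 CPU-days
    # Naively shipping both 1MB inputs for every pair would move ~7,200 TB;
    # the quoted 600 TB implies substantial data reuse by the engine.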
I have 60,000 iris images acquired in my research lab. I want to reduce each one to a feature space, and then compare all of them to each other.
I want to spend my time doing science, not struggling with computers.
I have a laptop. I own a few machines. I can buy time from Amazon or TeraGrid.
Now what?
Non-Expert User Using 500 CPUs
Try 1: Each F is a batch job. Failure: Dispatch latency >> F runtime.
Try 2: Each row is a batch job. Failure: Too many small ops on FS.
Try 3: Bundle all files into one package. Failure: Everyone loads 1GB at once.
Try 4: User gives up and attempts to solve an easier or smaller problem.
Observation
In a given field of study, many people repeat the same pattern of work many times, making slight changes to the data and algorithms.
If the user knows in advance what patterns are allowed, then they have a better idea of how to construct their workloads.
If the system knows the overall pattern in advance, then it can do a better job of executing it reliably and efficiently.
Abstractions for Distributed Computing
Abstraction: a declarative specification of the computation and data of a workload.
A restricted pattern, not meant to be a general-purpose programming language.
Uses data structures instead of files.
Regular structure makes it tractable to model and predict performance.
Provide users with a bright path.
Working with Abstractions
AllPairs( A, B, F )
(Diagram: the sets A and B and the function F form a compact data structure; a custom workflow engine maps the abstraction onto a cloud or grid.)
All-Pairs Abstraction
AllPairs( set A, set B, function F )
returns matrix M where M[i][j] = F( A[i], B[j] ) for all i,j
Command-line invocation: allpairs A B F.exe
(Diagram: the matrix M formed by applying F to every pairing of elements of A and B.)
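The semantics are easy to state as sequential Python; the point of the abstraction is that the engine may execute this same contract in any order, on any number of machines.

    # Reference semantics of AllPairs: the full cross product of F over A and B.
    # A distributed engine is free to block, reorder, and replicate this work.
    def allpairs(A, B, F):
        return [[F(a, b) for b in B] for a in A]

    # Hypothetical stand-in for a comparison function like the iris matcher:
    def F(a, b):
        return 1.0 if a == b else 0.0

    M = allpairs([1, 2, 3], [1, 2, 3], F)
    print(M)   # [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]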
How Does the Abstraction Help?
The custom workflow engine:
– Chooses the right data transfer strategy.
– Chooses blocking of functions into jobs.
– Chooses the right number of resources.
– Recovers from a larger number of failures.
– Predicts overall runtime accurately.
All of these tasks are nearly impossible for arbitrary workloads, but are tractable (not trivial) to solve for a specific abstraction.
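One of these choices, blocking functions into jobs, is easy to illustrate with numbers from elsewhere in this talk (about 30 seconds of dispatch latency, 0.02s per F). A job must bundle thousands of calls before dispatch cost is amortized; the 10x amortization target below is an assumed tunable, not a figure from the talk.

    # Sketch: pick a block size large enough to amortize dispatch latency.
    def block_size(dispatch_latency, task_time, amortize=10):
        # run enough tasks per job that latency is a small fraction of runtime
        return int(amortize * dispatch_latency / task_time)

    print(block_size(30.0, 0.02))   # -> 15000 calls of F per batch job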
Choose the Right # of CPUs
Resources Consumed
All-Pairs in Production
Our All-Pairs implementation has provided over 57 CPU-years of computation to the ND biometrics research group over the last year.
Largest run so far: 58,396 irises from the Face Recognition Grand Challenge. The largest experiment ever run on publicly available data.
Competing biometric research relies on samples of 100-1000 images, which can miss important population effects.
Reduced computation time from 833 days to 10 days, making it feasible to repeat multiple times for a graduate thesis. (We can go faster yet.)
Are there other abstractions?
Wavefront Abstraction
Wavefront( matrix M, function F(x,y,d) )
returns matrix M such that M[i,j] = F( M[i-1,j], M[i,j-1], M[i-1,j-1] )
(Diagram: given the first row and column, the computation sweeps the matrix as a diagonal wavefront; cells on the same anti-diagonal are independent.)
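A sequential reference implementation makes the dependency structure explicit; since each cell needs only its three already-computed neighbors, an engine can run every cell of an anti-diagonal in parallel.

    # Reference semantics of Wavefront: fill an n x n matrix given its
    # first row and column; each cell depends on three neighbors.
    def wavefront(n, first_row, first_col, F):
        M = [[None] * n for _ in range(n)]
        M[0] = list(first_row)
        for i in range(n):
            M[i][0] = first_col[i]
        for i in range(1, n):
            for j in range(1, n):
                M[i][j] = F(M[i-1][j], M[i][j-1], M[i-1][j-1])
        return M

    # Toy F: each cell exceeds the max of its neighbors by one, so the
    # value reports which diagonal "wave" computed it.
    M = wavefront(4, [0, 1, 2, 3], [0, 1, 2, 3], lambda x, y, d: max(x, y) + 1)
    print(M[3][3])   # 6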
Applications of Wavefront
Bioinformatics:
– Compute the alignment of two large DNA strings in order to find similarities between species. Existing tools do not scale up to complete DNA strings.
Economics:
– Simulate the interaction between two competing firms, each of which has an effect on resource consumption and market price. E.g., when will we run out of oil?
Applies to any kind of optimization problem solvable with dynamic programming.
Problem: Dispatch Latency
Even with an infinite number of CPUs, dispatch latency controls the total execution time: O(n) in the best case. However, job dispatch latency in an unloaded grid is about 30 seconds, which may outweigh the runtime of F. Things get worse when queues are long! (For a 500x500 Wavefront, the critical path is 999 diagonal steps; at 30 seconds each, that is over 8 hours of pure latency even with unlimited CPUs.)
Solution: Build a lightweight task dispatch system. (Idea from Falkon@UC)
(Diagram: a wavefront engine queues tasks to a work queue master, which dispatches them to 1000s of workers in the cloud. Detail of a single worker: put F.exe; put in.txt; exec F.exe <in.txt >out.txt; get out.txt.)
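The worker is deliberately simple: it receives files, runs commands, and returns outputs. A minimal sketch of that loop follows; it illustrates the four-verb idea only, and is not the actual Work Queue wire protocol (the line-oriented framing here is invented).

    # Illustrative worker: accept "put"/"exec"/"get" commands over a socket.
    # NOT the real Work Queue protocol; the framing is invented for the sketch.
    import socket, subprocess

    def worker(port=9123):
        srv = socket.socket()
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        f = conn.makefile("rwb")
        while True:
            line = f.readline()
            if not line:
                break
            parts = line.decode().split()
            if parts[0] == "put":        # put <name> <size>, then raw bytes
                with open(parts[1], "wb") as out:
                    out.write(f.read(int(parts[2])))
            elif parts[0] == "exec":     # exec <shell command>
                subprocess.call(" ".join(parts[1:]), shell=True)
            elif parts[0] == "get":      # get <name>: reply <size>, raw bytes
                data = open(parts[1], "rb").read()
                f.write(str(len(data)).encode() + b"\n" + data)
                f.flush()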
Problem: Performance Variation
Tasks can be delayed for many reasons:
– Heterogeneous hardware.
– Interference with disk/network.
– Policy-based suspension.
Any delayed task in Wavefront has a cascading effect on the rest of the workload.
Solution – Fast Abort: Keep statistics on task runtimes, and abort those that lie significantly outside the mean. Prefer to assign jobs to machines with a fast history.
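A sketch of the Fast Abort bookkeeping, assuming the master records completed runtimes and each running task's start time. The 3x-mean threshold stands in for "significantly outside the mean" and is an assumed tunable, not the production policy.

    # Sketch of Fast Abort: abort (for resubmission elsewhere) any running
    # task far slower than the average of completed tasks.
    import time

    class FastAbort:
        def __init__(self, factor=3.0):
            self.factor = factor       # illustrative threshold multiplier
            self.runtimes = []

        def task_done(self, runtime):
            self.runtimes.append(runtime)

        def should_abort(self, start_time):
            if len(self.runtimes) < 10:       # wait for some history first
                return False
            mean = sum(self.runtimes) / len(self.runtimes)
            return time.time() - start_time > self.factor * mean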
500x500 Wavefront on ~200 CPUs
Wavefront on a 200-CPU Cluster
Wavefront on a 32-Core CPU
The Genome Assembly Problem
(Diagram: chemical sequencing breaks a complete sequence such as AGTCGATCGATCGATAATCGATCCTAGCTAGCTACGA into millions of overlapping "reads" 100s of bytes long, e.g. AGTCGATCGATCGAT, TCGATAATCGATCCTAGCTA, AGCTAGCTACGA; computational assembly reconstructs the complete sequence from the reads.)
Sample Genomes

Genome                 Reads   Data    Pairs   Sequential Time
A. gambiae scaffold    101K    80MB    738K    12 hours
A. gambiae complete    180K    1.7GB   12M     6 days
S. Bicolor simulated   7.9M    5.4GB   84M     30 days
Some-Pairs Abstraction
SomePairs( set A, list L of pairs (i,j), function F(x,y) )
returns list of F( A[i], A[j] )
(Diagram: F is applied only to the listed pairs of A's elements, not to the full cross product.)
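Sequential reference semantics, as before; the overlap function here is a hypothetical stand-in for a real aligner.

    # Reference semantics of SomePairs: apply F only to the listed index pairs.
    def somepairs(A, L, F):
        return [F(A[i], A[j]) for (i, j) in L]

    # Hypothetical stand-in for alignment: length of the longest overlap
    # between the end of read x and the start of read y.
    def F(x, y):
        return max(k for k in range(min(len(x), len(y)) + 1)
                   if x.endswith(y[:k]))

    # 0-based indices; the pair (0, 1) compares the two reads below.
    print(somepairs(["GATTAC", "TACGT"], [(0, 1)], F))   # [3]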
Distributed Genome Assembly
(Diagram: a somepairs master queues candidate pairs as tasks to a work queue, which dispatches them to 100s of workers at Notre Dame, Purdue, and Wisconsin. Detail of a single worker: put align.exe; put in.txt; exec align.exe <in.txt >out.txt; get out.txt.)
Small Genome (101K reads)
Medium Genome (180K reads)
Large Genome (7.9M reads)
What's the Upshot?
We can do full-scale assemblies as a routine matter on existing conventional machines.
Our solution is faster (wall-clock time) than the next fastest assembler run on 1024x BG/L.
You could almost certainly do better with a dedicated cluster and a fast interconnect, but such systems are not universally available.
Our solution opens up research in assembly to labs with "NASCAR" instead of "Formula-One" hardware.
What if your application doesn't fit a regular pattern?
Makeflow
part1 part2 part3: input.data split.py
    ./split.py input.data
out1: part1 mysim.exe
    ./mysim.exe part1 >out1
out2: part2 mysim.exe
    ./mysim.exe part2 >out2
out3: part3 mysim.exe
    ./mysim.exe part3 >out3
result: out1 out2 out3 join.py
    ./join.py out1 out2 out3 > result
Makeflow Implementation
Example rule: bfile: afile prog
    prog afile >bfile
(Diagram: a makeflow master queues tasks to a work queue, which dispatches them to 100s of workers in the cloud. Detail of a single worker: put prog; put afile; exec prog afile > bfile; get bfile.)
Two optimizations: Cache inputs and outputs. Dispatch tasks to nodes with data.
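Conceptually the master's core loop is small: parse the rules into (targets, sources, command) triples, then repeatedly dispatch any rule whose sources exist. A local, sequential sketch of that loop, using the rules from the example above (real Makeflow sends the commands to batch systems or workers, and adds the caching and data-aware placement just mentioned):

    # Local sketch of a makeflow-style master: run any rule whose source
    # files all exist, until every rule has run.
    import os, subprocess

    rules = [
        (["part1", "part2", "part3"], ["input.data", "split.py"],
         "./split.py input.data"),
        (["out1"], ["part1", "mysim.exe"], "./mysim.exe part1 >out1"),
        (["out2"], ["part2", "mysim.exe"], "./mysim.exe part2 >out2"),
        (["out3"], ["part3", "mysim.exe"], "./mysim.exe part3 >out3"),
        (["result"], ["out1", "out2", "out3", "join.py"],
         "./join.py out1 out2 out3 > result"),
    ]

    pending = list(rules)
    while pending:
        ready = [r for r in pending if all(os.path.exists(s) for s in r[1])]
        if not ready:
            raise RuntimeError("no runnable rule; an input is missing")
        for rule in ready:            # independent rules could run in parallel
            subprocess.check_call(rule[2], shell=True)
            pending.remove(rule)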
Experience with Makeflow
Still in initial deployment, so no big results to show just yet.
Graduate students in bioinformatics: running codes at production speeds on hundreds of nodes in less than a week.
Easy to test and debug on a desktop machine or a multicore server.
The workload says nothing about the distributed system. (This is good.)
Abstractions as a Social Tool
Collaboration with outside groups is how we encounter the most interesting, challenging, and important problems in computer science.
However, often neither side understands which details are essential or non-essential:
– Can you deal with files that have upper case letters?
– Oh, by the way, we have 10TB of input, is that ok?
– (A little bit of an exaggeration.)
An abstraction is an excellent chalkboard tool:
– Accessible to anyone with a little bit of mathematics.
– Forces out essential details: data size, execution time.
– Makes it easy to see what must be plugged in.
Conclusion
Grids, clouds, and clusters provide enormous computing power, but are very challenging to use effectively.
An abstraction provides a robust, scalable solution to a narrow category of problems; each requires different kinds of optimizations.
Limiting expressive power results in systems that are usable, predictable, and reliable.
Is there a menu of abstractions that would satisfy many consumers of clouds?
Acknowledgments
Cooperative Computing Lab – http://www.cse.nd.edu/~ccl
Faculty:
– Patrick Flynn
– Nitesh Chawla
– Kenneth Judd
– Scott Emrich
Grad Students:
– Chris Moretti
– Hoang Bui
– Li Yu
– Mike Olson
– Michael Albrecht
Undergrads:
– Mike Kelly
– Rory Carmichael
– Mark Pasquier
– Christopher Lyon
– Jared Bulosan
NSF Grants CCF-0621434, CNS-0643229