MapReduce: High-impact Strategies - What You Need to Know: Definitions, Adoptions, Impact, Benefits, Maturity, Vendors


Published by Emereo Publishing
The Knowledge Solution. Stop Searching, Stand Out and Pay Off. The #1 ALL ENCOMPASSING Guide to MapReduce.

An Important Message for ANYONE who wants to learn about MapReduce Quickly and Easily...

"Here's Your Chance To Skip The Struggle and Master MapReduce, With the Least Amount of Effort, In 2 Days Or Less..."

MapReduce is a software framework introduced by Google in 2004 to support distributed computing on large data sets on clusters of computers. Parts of the framework are patented in some countries.

The framework is inspired by the map and reduce functions commonly used in functional programming, although their purpose in the MapReduce framework is not the same as their original forms.

MapReduce libraries have been written in C++, C#, Erlang, Java, OCaml, Perl, Python, PHP, Ruby, F#, R and other programming languages.

Get the edge, learn EVERYTHING you need to know about MapReduce, and ace any discussion, proposal and implementation with the ultimate book – guaranteed to give you the education that you need, faster than you ever dreamed possible!

The information in this book can show you how to be an expert in the field of MapReduce.

Are you looking to learn more about MapReduce? You're about to discover the most spectacular gold mine of MapReduce materials ever created; this book is a unique collection to help you become a master of MapReduce.

This book is your ultimate resource for MapReduce. Here you will find the most up-to-date information, analysis, background and everything you need to know.

In easy to read chapters, with extensive references and links to get you to know all there is to know about MapReduce right away. A quick look inside: MapReduce, Aggregate Level Simulation Protocol, Amazon Relational Database Service, Amazon SimpleDB, Amoeba distributed operating system, Art of War Central, Autonomic Computing, Citrusleaf database, Client–server model, Code mobility, Connection broker, CouchDB, Data Diffusion Machine, Database-centric architecture, Distributed application, Distributed data flow, Distributed database, Distributed design patterns, Distributed Interactive Simulation, Distributed lock manager, Distributed memory, Distributed object, Distributed shared memory, Distributed social network, Dryad (programming), Dynamic infrastructure, Edge computing, Explicit multi-threading, Fabric computing, Fallacies of Distributed Computing, Fragmented object, Gemstone (database), HyperText Computer, High level architecture (simulation), IBZL, Kayou, Live distributed object, Master/slave (technology), Membase, Message consumer, Message passing, Messaging pattern, Mobile agent, MongoDB, Multi-master replication, Multitier architecture, Network cloaking, Opaak, Open architecture computing environment, Open Computer Forensics Architecture, OrientDB, Overlay network, Paradiseo, Parasitic computing, PlanetSim, Portable object (computing), Redis (data store), Remote Component Environment, Request Based Distributed Computing, RM-ODP, Semantic Web Data Space, Service-oriented distributed applications, Shared memory, Smart variables, Stub (distributed computing), Supercomputer, Terrastore, Transparency (human-computer interaction), TreadMarks, Tuple space, Utility computing, Virtual Machine Interface, Virtual Object System, Volunteer computing...and Much, Much More!

This book explains in-depth the real drivers and workings of MapReduce. It reduces the risk of your technology, time and resource investment decisions by enabling you to compare your understanding of MapReduce with the objectivity of experienced professionals. Grab your copy now, while you still can.


Published by: Emereo Publishing on Sep 09, 2011
Copyright: Traditional Copyright: All rights reserved
List Price: $39.95



Sections

  • Aggregate Level Simulation Protocol
  • Amazon Relational Database Service
  • Amazon SimpleDB
  • Amoeba distributed operating system
  • Art of War Central
  • Autonomic Computing
  • Citrusleaf database
  • Client–server model
  • Code mobility
  • Connection broker
  • CouchDB
  • Data Diffusion Machine
  • Database-centric architecture
  • Distributed application
  • Distributed data flow
  • Distributed database
  • Distributed design patterns
  • Distributed Interactive Simulation
  • Distributed lock manager
  • Distributed memory
  • Distributed object
  • Distributed shared memory
  • Distributed social network
  • Dryad (programming)
  • Dynamic infrastructure
  • Edge computing
  • Explicit multi-threading
  • Fabric computing
  • Fallacies of Distributed Computing
  • Fragmented object
  • Gemstone (database)
  • HyperText Computer
  • High level architecture (simulation)
  • IBZL
  • Kayou
  • Live distributed object
  • Master/slave (technology)
  • Membase
  • Message consumer
  • Message passing
  • Messaging pattern
  • Mobile agent
  • MongoDB
  • Multi-master replication
  • Multitier architecture
  • Network cloaking
  • Opaak
  • Open architecture computing environment
  • Open Computer Forensics Architecture
  • OrientDB
  • Overlay network
  • Paradiseo
  • Parasitic computing
  • PlanetSim
  • Portable object (computing)
  • Redis (data store)
  • Remote Component Environment
  • Request Based Distributed Computing
  • RM-ODP
  • Semantic Web Data Space
  • Service-oriented distributed applications
  • Shared memory
  • Smart variables
  • Stub (distributed computing)
  • Supercomputer
  • Terrastore
  • Transparency (human-computer interaction)
  • TreadMarks
  • Tuple space
  • Utility computing
  • Virtual Machine Interface

MapReduce

IN-DEPTH: THE REAL DRIVERS AND
WORKINGS

Kevin Roebuck

REDUCES THE RISK OF YOUR TECHNOLOGY, TIME AND RESOURCES
INVESTMENT DECISIONS

ENABLING YOU TO COMPARE YOUR
UNDERSTANDING WITH THE OBJECTIVITY OF EXPERIENCED PROFESSIONALS

High-impact Strategies - What You Need to Know: Definitions, Adoptions, Impact, Benefits, Maturity, Vendors

Topic relevant selected content from the highest rated entries, typeset, printed and shipped. Combine the advantages of up-to-date and in-depth knowledge with the convenience of printed books. A portion of the proceeds of each book will be donated to the Wikimedia Foundation to support their mission: to empower and engage people around the world to collect and develop educational content under a free license or in the public domain, and to disseminate it effectively and globally. The content within this book was generated collaboratively by volunteers. Please be advised that nothing found here has necessarily been reviewed by people with the expertise required to provide you with complete, accurate or reliable information. Some information in this book may be misleading or simply wrong. The publisher does not guarantee the validity of the information found here. If you need specific advice (for example, medical, legal, financial, or risk management) please seek a professional who is licensed or knowledgeable in that area. Sources, licenses and contributors of the articles and images are listed in the section entitled “References”. Parts of the books may be licensed under the GNU Free Documentation License. A copy of this license is included in the section entitled “GNU Free Documentation License”. All used third-party trademarks belong to their respective owners.

Contents

Articles
MapReduce, Aggregate Level Simulation Protocol, Amazon Relational Database Service, Amazon SimpleDB, Amoeba distributed operating system, Art of War Central, Autonomic Computing, Citrusleaf database, Client–server model, Code mobility, Connection broker, CouchDB, Data Diffusion Machine, Database-centric architecture, Distributed application, Distributed data flow, Distributed database, Distributed design patterns, Distributed Interactive Simulation, Distributed lock manager, Distributed memory, Distributed object, Distributed shared memory, Distributed social network, Dryad (programming), Dynamic infrastructure, Edge computing, Explicit multi-threading, Fabric computing, Fallacies of Distributed Computing, Fragmented object, Gemstone (database), HyperText Computer, High level architecture (simulation), IBZL, Kayou, Live distributed object, Master/slave (technology), Membase, Message consumer, Message passing, Messaging pattern, Mobile agent, MongoDB, Multi-master replication, Multitier architecture, Network cloaking, Opaak, Open architecture computing environment, Open Computer Forensics Architecture, OrientDB, Overlay network, Paradiseo, Parasitic computing, PlanetSim, Portable object (computing), Redis (data store), Remote Component Environment, Request Based Distributed Computing, RM-ODP, Semantic Web Data Space, Service-oriented distributed applications, Shared memory, Smart variables, Stub (distributed computing), Supercomputer, Terrastore, Transparency (human-computer interaction), TreadMarks, Tuple space, Utility computing, Virtual Machine Interface, Virtual Object System, Volunteer computing

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License

MapReduce

MapReduce is a software framework introduced by Google in 2004 to support distributed computing on large data sets on clusters of computers.[1] Parts of the framework are patented in some countries.[2]

The framework is inspired by the map and reduce functions commonly used in functional programming,[3] although their purpose in the MapReduce framework is not the same as their original forms.[4] MapReduce libraries have been written in C++, C#, Erlang, Java, OCaml, Perl, Python, PHP, Ruby, F#, R and other programming languages.

Overview

MapReduce is a framework for processing huge datasets on certain kinds of distributable problems using a large number of computers (nodes), collectively referred to as a cluster (if all nodes use the same hardware) or as a grid (if the nodes use different hardware). Computational processing can occur on data stored either in a filesystem (unstructured) or within a database (structured).

"Map" step: The master node takes the input, partitions it into smaller sub-problems, and distributes those to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes the smaller problem and passes the answer back to its master node.

"Reduce" step: The master node then takes the answers to all the sub-problems and combines them in some way to get the output – the answer to the problem it was originally trying to solve.

MapReduce allows for distributed processing of the map and reduction operations. Provided each mapping operation is independent of the others, all maps can be performed in parallel – though in practice this is limited by the data source and/or the number of CPUs near that data. Similarly, a set of 'reducers' can perform the reduction phase, provided all outputs of the map operation that share the same key are presented to the same reducer at the same time. While this process can often appear inefficient compared to algorithms that are more sequential, MapReduce can be applied to significantly larger datasets than "commodity" servers can handle – a large server farm can use MapReduce to sort a petabyte of data in only a few hours. The parallelism also offers some possibility of recovering from partial failure of servers or storage during the operation: if one mapper or reducer fails, the work can be rescheduled – assuming the input data is still available.

Logical view

The Map and Reduce functions of MapReduce are both defined with respect to data structured in (key, value) pairs. Map takes one pair of data with a type in one data domain, and returns a list of pairs in a different domain:

Map(k1, v1) → list(k2, v2)

The Map function is applied in parallel to every item in the input dataset. This produces a list of (k2, v2) pairs for each call. After that, the MapReduce framework collects all pairs with the same key from all lists and groups them together, thus creating one group for each of the different generated keys.

The Reduce function is then applied in parallel to each group, which in turn produces a collection of values in the same domain:

Reduce(k2, list(v2)) → list(v3)

Each Reduce call typically produces either one value v3 or an empty return, though one call is allowed to return more than one value. The returns of all calls are collected as the desired result list.

Thus the MapReduce framework transforms a list of (key, value) pairs into a list of values. This behavior is different from the typical functional programming map and reduce combination, which accepts a list of arbitrary values and returns one single value that combines all the values returned by map.
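To make the grouping step concrete, here is a minimal single-process sketch of the logical view described above. It is purely illustrative: the names (run_mapreduce, map_fn, reduce_fn) are invented for this example and do not belong to any particular MapReduce library.

    from collections import defaultdict

    def run_mapreduce(map_fn, reduce_fn, inputs):
        """Single-process sketch of the MapReduce logical view:
        map -> group by key -> reduce. Illustrative only."""
        # "Map" step: apply map_fn to every (k1, v1) input pair.
        intermediate = []
        for k1, v1 in inputs:
            intermediate.extend(map_fn(k1, v1))    # each call yields (k2, v2) pairs

        # Shuffle step: group all values that share the same intermediate key.
        groups = defaultdict(list)
        for k2, v2 in intermediate:
            groups[k2].append(v2)

        # "Reduce" step: apply reduce_fn to each (k2, list(v2)) group.
        results = []
        for k2, values in groups.items():
            results.extend(reduce_fn(k2, values))  # each call yields zero or more values
        return results

In a real framework the three stages run on different machines and the grouping is performed by a distributed sort and shuffle, but the contract between the user's two functions and the framework is the same.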

It is necessary but not sufficient to have implementations of the map and reduce abstractions in order to implement MapReduce. Distributed implementations of MapReduce require a means of connecting the processes performing the Map and Reduce phases. This may be a distributed file system. Other options are possible, such as direct streaming from mappers to reducers, or for the mapping processors to serve up their results to reducers that query them.

Example

The canonical example application of MapReduce is a process to count the appearances of each different word in a set of documents:

    void map(String name, String document):
        // name: document name
        // document: document contents
        for each word w in document:
            EmitIntermediate(w, "1");

    void reduce(String word, Iterator partialCounts):
        // word: a word
        // partialCounts: a list of aggregated partial counts
        int sum = 0;
        for each pc in partialCounts:
            sum += ParseInt(pc);
        Emit(word, AsString(sum));

Here, each document is split into words, and each word is counted initially with a "1" value by the Map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to Reduce, so this function just needs to sum all of its input values to find the total appearances of that word.
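Assuming the illustrative run_mapreduce driver sketched earlier, the word-count pseudocode above might look like this in Python; the names wc_map and wc_reduce are again invented for the example.

    def wc_map(name, document):
        """Map: emit (word, 1) for every word in the document."""
        return [(word, 1) for word in document.split()]

    def wc_reduce(word, partial_counts):
        """Reduce: sum the partial counts for one word."""
        return [(word, sum(partial_counts))]

    docs = [("doc1", "the quick brown fox"), ("doc2", "the lazy dog")]
    print(run_mapreduce(wc_map, wc_reduce, docs))
    # e.g. [('the', 2), ('quick', 1), ('brown', 1), ('fox', 1), ('lazy', 1), ('dog', 1)]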

Dataflow

The frozen part of the MapReduce framework is a large distributed sort. The hot spots, which the application defines, are:
• an input reader
• a Map function
• a partition function
• a compare function
• a Reduce function
• an output writer

Input reader
The input reader divides the input into appropriate size 'splits' (in practice typically 16 MB to 128 MB) and the framework assigns one split to each Map function. The input reader reads data from stable storage (typically a distributed file system) and generates key/value pairs. A common example will read a directory full of text files and return each line as a record.

Map function
Each Map function takes a series of key/value pairs, processes each, and generates zero or more output key/value pairs. The input and output types of the map can be (and often are) different from each other. If the application is doing a word count, the map function would break the line into words and output a key/value pair for each word. Each output pair would contain the word as the key and "1" as the value.

Partition function
Each Map function output is allocated to a particular reducer by the application's partition function for sharding purposes. The partition function is given the key and the number of reducers and returns the index of the desired reducer. A typical default is to hash the key and take the result modulo the number of reducers. It is important to pick a partition function that gives an approximately uniform distribution of data per shard for load-balancing purposes; otherwise the MapReduce operation can be held up waiting for slow reducers to finish. Between the map and reduce stages, the data is shuffled (parallel-sorted / exchanged between nodes) in order to move the data from the map node that produced it to the shard in which it will be reduced. The shuffle can sometimes take longer than the computation time, depending on network bandwidth, CPU speeds, data produced and the time taken by map and reduce computations.

Comparison function
The input for each Reduce is pulled from the machine where the Map ran and sorted using the application's comparison function.

Reduce function
The framework calls the application's Reduce function once for each unique key in the sorted order. The Reduce can iterate through the values that are associated with that key and output zero or more values. In the word count example, the Reduce function takes the input values, sums them and generates a single output of the word and the final sum.

Output writer
The Output Writer writes the output of the Reduce to stable storage, usually a distributed file system.

Distribution and reliability
MapReduce achieves reliability by parceling out a number of operations on the set of data to each node in the network. Each node is expected to report back periodically with completed work and status updates. If a node falls silent for longer than that interval, the master node (similar to the master server in the Google File System) records the node as dead and sends out the node's assigned work to other nodes. Individual operations use atomic operations for naming file outputs as a check to ensure that there are not parallel conflicting threads running. When files are renamed, it is possible to also copy them to another name in addition to the name of the task (allowing for side-effects).

The reduce operations operate much the same way. Because of their inferior properties with regard to parallel operations, the master node attempts to schedule reduce operations on the same node, or in the same rack as the node holding the data being operated on. This property is desirable as it conserves bandwidth across the backbone network of the datacenter.

Implementations are not necessarily highly reliable. For example, in Hadoop the NameNode is a single point of failure for the distributed filesystem. MapReduce's stable inputs and outputs are usually stored in a distributed file system; the transient data is usually stored on local disk and fetched remotely by the reducers.
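Returning to the partition function described earlier in this section, the default "hash the key and take it modulo the number of reducers" scheme can be sketched as follows. The function name and the use of Python's built-in hash are illustrative assumptions; real frameworks typically use a stable hash of their own.

    def default_partition(key, num_reducers):
        """Assign a map output key to one of num_reducers shards.
        A roughly uniform hash keeps the shards balanced for load-balancing."""
        return hash(key) % num_reducers

    # Every (k2, v2) pair with the same key lands in the same shard,
    # so a single reducer sees all values for that key.
    shard = default_partition("fox", num_reducers=4)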

Uses

MapReduce is useful in a wide range of applications, including: distributed grep, distributed sort, web link-graph reversal, term-vector per host, web access log stats, inverted index construction, document clustering, machine learning,[5] and statistical machine translation. Moreover, the MapReduce model has been adapted to several computing environments like multi-core and many-core systems,[6] [7] desktop grids,[8] volunteer computing environments,[9] dynamic cloud environments,[10] and mobile environments.[11]

At Google, MapReduce was used to completely regenerate Google's index of the World Wide Web. It replaced the old ad hoc programs that updated the index and ran the various analyses.[12]

Criticism

David DeWitt and Michael Stonebraker, experts in parallel databases and shared-nothing architectures, have been critical of the breadth of problems that MapReduce can be used for.[13] They called its interface too low-level and questioned whether it really represents the paradigm shift its proponents have claimed it is.[14] They challenged the MapReduce proponents' claims of novelty, citing Teradata as an example of prior art that has existed for over two decades. They also compared MapReduce programmers to Codasyl programmers, noting both are "writing in a low-level language performing low-level record manipulation."[14] MapReduce's use of input files and lack of schema support prevents the performance improvements enabled by common database system features such as B-trees and hash partitioning, though projects such as Pig (or PigLatin) and Sawzall are starting to address these problems.

Another article, by Greg Jorgensen, rejects these views.[15] Jorgensen asserts that DeWitt and Stonebraker's entire analysis is groundless, as MapReduce was never designed nor intended to be used as a database.

DeWitt and Stonebraker have subsequently published a detailed benchmark study comparing performance of MapReduce and RDBMS approaches on several specific problems.[16] They concluded that databases offer real advantages for many kinds of data use, especially on complex processing or where the data is used across an enterprise, but that MapReduce may be easier for users to adopt for simple or one-time processing tasks. They have published the data and code used in their study to allow other researchers to do comparable studies.

Google has been granted a patent on MapReduce. However, there have been claims that this patent should not have been granted because MapReduce is too similar to existing products. For example, map and reduce functionality can be very easily implemented in Oracle's PL/SQL database-oriented language.[17]

Conferences and users groups
• The First International Workshop on MapReduce and its Applications (MAPREDUCE'10) [18] was held with the HPDC conference and OGF'29 meeting in Chicago, IL.
• MapReduce Users Groups [19] around the world.

References

Specific references:
[1] "Google spotlights data center inner workings", Tech news blog, CNET News.com (http://news.cnet.com/8301-10784_3-9955184-7.html)
[2] US Patent 7,650,331: "System and method for efficient large-scale data processing" (http://patft.uspto.gov)
[3] "Our abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages." "MapReduce: Simplified Data Processing on Large Clusters", by Jeffrey Dean and Sanjay Ghemawat, from Google Labs.
[4] "Google's MapReduce Programming Model - Revisited", paper by Ralf Lämmel, from Microsoft (http://citeseerx.ist.psu.edu).
[5] Cheng-Tao Chu, Sang Kyun Kim, Yi-An Lin, YuanYuan Yu, Gary Bradski, Andrew Ng, and Kunle Olukotun. "Map-Reduce for Machine Learning on Multicore" (http://www.willowgarage.com/map-reduce-machine-learning-multicore). NIPS 2006.
[6] Colby Ranger, Ramanan Raghuraman, Arun Penmetsa, Gary Bradski, and Christos Kozyrakis. "Evaluating MapReduce for Multi-core and Multiprocessor Systems" (http://www.willowgarage.com/evaluating-mapreduce-multi-core-and-multiprocessor-systems). HPCA 2007, Best Paper.
[7] Bingsheng He, Wenbin Fang, Qiong Luo, Naga K. Govindaraju, and Tuyong Wang. "Mars: a MapReduce framework on graphics processors" (http://portal.acm.org/citation.cfm?id=1454152). PACT'08.
[8] Bing Tang, Moca, M., Chevalier, S., Haiwu He, and Fedak, G. "Towards MapReduce for Desktop Grid Computing" (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5662789). 3PGCIC'10.
[9] Heshan Lin, Xiaosong Ma, Jeremy Archuleta, Wu-chun Feng, Mark Gardner, and Zhe Zhang. "MOON: MapReduce On Opportunistic eNvironments" (http://portal.acm.org/citation.cfm?id=1851489). HPDC'10.
[10] Fabrizio Marozzo, Domenico Talia, and Paolo Trunfio. "A Peer-to-Peer Framework for Supporting MapReduce Applications in Dynamic Cloud Environments" (http://www.springerlink.com/content/h17r882710314147/). In: Cloud Computing: Principles, Systems and Applications, N. Antonopoulos and L. Gillam (Editors), Springer, 2010, ISBN 978-1-84996-240-7, chapt. 7, pp. 113–125.
[11] Adam Dou, Vana Kalogeraki, Dimitrios Gunopulos, Taneli Mielikainen, and Ville H. Tuulos. "Misco: a MapReduce framework for mobile systems" (http://portal.acm.org/citation.cfm?id=1839332). HPDC'10.
[12] "How Google Works" (http://www.baselinemag.com/article2/0,1540,1985048,00.asp). baselinemag.com. "As of October, Google was running about 3,000 computing jobs per day through MapReduce, representing thousands of machine-days, according to a presentation by Dean. Among other things, these batch routines analyze the latest Web pages and update Google's indexes."
[13] "Database Experts Jump the MapReduce Shark" (http://typicalprogrammer.com/?p=16).
[14] David DeWitt and Michael Stonebraker. "MapReduce: A major step backwards" (http://databasecolumn.vertica.com/database-innovation/mapreduce-a-major-step-backwards/). databasecolumn.com. Retrieved 2008-08-27.
[15] Greg Jorgensen. "Relational Database Experts Jump The MapReduce Shark" (http://typicalprogrammer.com/?p=16). typicalprogrammer.com. Retrieved 2009-11-11.
[16] Andrew Pavlo, E. Paulson, A. Rasin, D.J. Abadi, D.J. DeWitt, S. Madden, and M. Stonebraker. "A Comparison of Approaches to Large-Scale Data Analysis" (http://database.cs.brown.edu/projects/mapreduce-vs-dbms/). Brown University. Retrieved 2010-01-11.
[17] Curt Monash. "More patent nonsense - Google MapReduce" (http://www.dbms2.com/2010/02/11/google-mapreduce-patent/). dbms2.com. Retrieved 2010-03-07.
[18] http://graal.ens-lyon.fr/mapreduce/
[19] http://mapreduce.meetup.com/

General references:
• Dean, Jeffrey and Ghemawat, Sanjay (2004). "MapReduce: Simplified Data Processing on Large Clusters" (http://labs.google.com/papers/mapreduce.html). Retrieved Apr. 6, 2010.
• Matt Williams (2009). "Understanding Map-Reduce" (http://wordflows.com/matt/2009/01/18/understanding-mapreduce/). Retrieved Apr. 13, 2011.


Aggregate Level Simulation Protocol

The Aggregate Level Simulation Protocol (ALSP) is a protocol and supporting software that enables simulations to interoperate with one another. Replaced by the High Level Architecture (simulation) (HLA), it was used by the US military to link analytic and training simulations.

ALSP consists of:
1. A reusable ALSP Interface consisting of generic data exchange message protocols,
2. ALSP Infrastructure Software (AIS) that provides distributed runtime simulation support and management, and
3. Participating simulations adapted for use with ALSP.

History

In 1990, the Defense Advanced Research Projects Agency (DARPA) employed The MITRE Corporation to study the application of distributed interactive simulation principles employed in SIMNET to aggregate-level constructive training simulations. Based on prototype efforts, a community-based experiment was conducted in 1991 to extend SIMNET to link the US Army's Corps Battle Simulation (CBS) [1] and the US Air Force's Air Warfare Simulation (AWSIM) [2]. The success of the prototype and users' recognition of the value of this technology to the training community led to development of production software. The first ALSP confederation, providing air-ground interactions between CBS and AWSIM, supported three major exercises in 1992.

ALSP had transitioned to a multi-Service program with simulations representing the US Army (CBS), the US Air Force (AWSIM), the US Navy (RESA), the US Marine Corps (MTWS [3]), electronic warfare (JECEWSI), logistics (CSSTSS), and intelligence (TACSIM [4]). The program had also transitioned from DARPA's research and development emphasis to mainstream management by the US Army's Program Executive Office for Simulation, Training, and Instrumentation (PEO STRI [5]).

Motivation

In 1989, the Warrior Preparation Center (WPC) in Einsiedlerhof, Germany hosted the computerized military exercise ACE-89. The Defense Advanced Research Projects Agency (DARPA) used ACE-89 as a technology insertion opportunity by funding deployment of the Defense Simulation Internet (DSI). Its packetized video teleconferencing brought general officers of NATO nations face-to-face during a military exercise for the first time. But the software application of DSI, distribution of the Ground Warfare Simulation (GRWSIM), was less successful. The GRWSIM simulation was unreliable and its distributed database was inconsistent, degrading the effectiveness of the exercise.

DARPA was also funding development of a distributed tank trainer system called SIMNET, in which individual, computerized tank-crew trainers were connected over local area networks and the DSI to cooperate in a single, virtual battlefield. The success of SIMNET, the disappointment of ACE-89, and the desire to combine existing combat simulations prompted DARPA to initiate the research that led to ALSP.

Contributions

ALSP developed and demonstrated key aspects of distributed simulation, many of which were applied in the development of HLA:
• Time management, so that the times for all simulations appear the same to users and so that event causality is maintained – events should occur in the same sequence in all simulations.
• Data management, which permits all simulations to share information in a commonly understood manner even though each had its own representation of data.
• An architecture that permits simulations to continue to use their existing architectures while participating in an ALSP confederation.
• No central node, so that simulations can join and depart from the confederation at will.
• Geographic distribution, where simulators can be distributed to different geographic locations yet exercise in the same simulated environment.
• Object ownership, so each simulation controls its own resources, fires its own weapons and determines appropriate damage to its systems when fired upon. This includes multiple simulations controlling attributes of the same object.
• A message-based protocol for distributing information from one simulation to all other simulations.

Basic Tenets

DARPA sponsored the design of a general interface between large, existing, aggregate-level combat simulations. Aggregate-level combat simulations use Lanchestrian models of combat rather than individual physical weapon models and are typically used for high-level training. Several principles of SIMNET applied to aggregate-level simulations:
• Dynamic configurability. Simulations may join and depart an exercise without restriction.
• Autonomous entities. Each simulation controls its own resources, fires its own weapons and, when one of its objects is hit, conducts damage assessment locally.
• Communication by message passing. A simulation uses a message-passing protocol to distribute information to all other simulations.
• Architecture independence. Architectural characteristics (implementation language, user interface, and time flow mechanism) of existing simulations differed. The architecture implied by ALSP must be unobtrusive to existing architectures.
• Geographic distribution. Simulations can reside in different geographic locations yet exercise over the same logical terrain.

The ALSP challenge had requirements beyond those of SIMNET:
• Simulation time management. In stand-alone simulations, simulation time is independent of wall-clock time. For the results of a distributed simulation to be "correct," time must be consistent across all simulations.[6]
• Data management. The schemes for internal state representation differ among existing simulations, necessitating a common representational system and concomitant mapping and control mechanisms.

To design a mechanism that permits existing simulations to interact, two strategies are possible: (1) define an infrastructure that translates between the representations in each simulation, or (2) define a common representational scheme and require all simulations to map to that scheme. The first strategy requires few perturbations to existing simulations; however, it does not scale well. Because of an underlying requirement for scalability, the ALSP design adopted the second strategy.

Conceptual Framework

A conceptual framework is an organizing structure of concepts that facilitates simulation model development.[7] Common conceptual frameworks include event scheduling, activity scanning and process interaction. The ALSP conceptual framework is object-based: a model is composed of objects that are characterized by attributes to which values are assigned. Object classes are organized hierarchically in much the same manner as with object-oriented programming languages. ALSP supports a confederation of simulations that coordinate using a common model, and prescribes that each simulation maps between the representational scheme of the confederation and its own representational scheme. This mapping represents one of the three ways in which a simulation must be altered to participate in an ALSP confederation. The remaining modifications are:
• Recognizing that interaction with objects the simulation does not own is facilitated entirely through the interconnection infrastructure.
• Modifying the simulation's internal time advance mechanism so that it works cooperatively with the other simulations within the confederation.

Typically, objects come into (and go out of) existence with the passage of simulation time, and the disposition of these objects is solely the purview of the simulation. When acting within a confederation, the simulation-object relationship is more complicated.

The simulation-object ownership property is dynamic: during its lifetime an object may be owned by more than one simulation, and several simulations may own different attributes of a given object. By convention, a simulation owns an object if it owns the "identifying" attribute of the object. Owning an object's attribute means that a simulation is responsible for calculating and reporting changes to the value of the attribute.

Objects not owned by a particular simulation but within the area of perception for the simulation are known as ghosts. Ghosts are local copies of objects owned by other simulations. When a simulation creates an object, it reports this fact to the confederation to let other simulations create ghosts; when a simulation deletes an object, it reports this fact to enable ghost deletion. Whenever a simulation takes an action between one of its objects and a ghost, this is, in the parlance of ALSP, an interaction. The term confederation model describes the object hierarchy, attributes and interactions supported by a confederation. These fundamental concepts provide the basis for the remainder of the presentation.

ALSP Infrastructure Software (AIS)

The object-based conceptual framework adopted by ALSP defines classes of information that must be distributed. The ALSP Infrastructure Software (AIS) provides data distribution and process coordination. Principal components of AIS are the ALSP Common Module (ACM) and the ALSP Broadcast Emulator (ABE).

ALSP Common Module (ACM)

The ALSP Common Module (ACM) provides a common interface for all simulations and contains the essential functionality for ALSP. One ACM instance exists for each simulation in a confederation. ACM services require time management and object management; they include:
• Coordinate simulations joining and departing from a confederation.
• Coordinate simulation local time with confederation time.
• Coordinate ownership of object attributes.
• Enforce attribute ownership so that simulations report values only for attributes they own, and permit ownership migration.
• Filter incoming messages, so that simulations receive only messages of interest.

Time management

Joining and departing a confederation is an integral part of the time management process. When a simulation joins a confederation, all other ACMs in the confederation create input message queues for the new simulation; when a simulation departs a confederation, the other ACMs delete the input message queues for that simulation.

ALSP time management facilities support discrete event simulation using either asynchronous (next-event) or synchronous (time-stepped) time advance mechanisms.[8]

The mechanism to support next-event simulations is:
1. A simulation sends an event-request message to its ACM with a time parameter corresponding to simulation time T, the time of its next local event.
2. If the ACM has messages for its simulation with timestamps older than or the same as T, the ACM sends the oldest one to the simulation. If all messages have timestamps newer than T, the ACM sends a grant-advance to the simulation, giving it permission to process its local event at time T.
3. The simulation sends any messages resulting from the event to its ACM.
4. The simulation repeats from step (1).

A minimal code sketch of this request/grant handshake follows the time-stepped variant below.

The mechanism to support time-stepped simulation is:
1. The simulation processes all events for some time interval [T, T+ΔT].
2. The simulation sends any messages for the interval to the ACM.
3. The simulation sends an advance request to its ACM for time T+ΔT.
4. The ACM sends all messages with time stamps on the interval, followed by a grant-advance to T+ΔT, to the simulation.
5. The simulation repeats from step (1).
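The next-event handshake described above is essentially a request/deliver/grant loop between a simulation and its ACM. The following single-process sketch illustrates only that loop; the class and method names are invented for this example and are not part of the ALSP software.

    import heapq
    import itertools

    class ToyACM:
        """Illustrative stand-in for an ACM: holds time-stamped messages
        destined for one simulation."""
        def __init__(self):
            self._seq = itertools.count()      # tie-breaker for equal timestamps
            self.inbox = []                    # heap of (timestamp, seq, message)

        def deliver(self, timestamp, message):
            heapq.heappush(self.inbox, (timestamp, next(self._seq), message))

        def event_request(self, t):
            """If a message with timestamp <= t is queued, deliver the oldest one;
            otherwise grant permission to advance to (and process the event at) time t."""
            if self.inbox and self.inbox[0][0] <= t:
                timestamp, _, message = heapq.heappop(self.inbox)
                return ("message", timestamp, message)
            return ("grant-advance", t)

A simulation would call event_request with the time of its next local event, process whatever comes back, report any resulting messages, and repeat, exactly as in the numbered steps above.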

AIS includes a deadlock avoidance mechanism using null messages. The mechanism requires that the processes have exploitable lookahead characteristics.

Object management

The ACM administers attribute database and filter information. The attribute database maintains objects known to the simulation, either owned or ghosted, and attributes of those objects that the simulation currently owns. For any object class, attributes may be members of:
• Create set. Attributes minimally required to represent an object.
• Interest set. Useful, but not mandatory, information.
• Update set. Object attribute values reported by a simulation to the confederation.

Information flow across the network can be further restricted through filters. Filtering provides discrimination by (1) object class, (2) attribute value or range, and (3) geographic location. Filters also define the interactions relevant to a simulation. When the ACM receives an update from the confederation, it applies the following logic:

    If (an update passes all filter criteria)
    |  If (the object is known to the simulation)
    |  |  Send new attribute values to the simulation
    |  Else (object is unknown)
    |  |  If (enough information is present to create a ghost)
    |  |  |  Send a create message to the simulation
    |  |  Else (not enough information is known)
    |  |  |  Store the information provided
    |  |  |  Send a request to the confederation for the missing data
    Else (the update fails filter criteria)
    |  If (the object is known to the simulation)
    |  |  Send a delete message to the simulation
    |  Else
    |  |  Discard the update data

The ownership and filtering information maintained by the ACM provide the information necessary to coordinate the transfer of attribute ownership between simulations.

ALSP Broadcast Emulator (ABE)

An ALSP Broadcast Emulator (ABE) facilitates the distribution of ALSP information. It receives a message on one of its communications paths and retransmits the message on all of its remaining communications paths. This permits configurations where all ALSP components are local to one another (on the same computer or on a local area network). It also permits configurations where sets of ACMs communicate with their own local ABE, with inter-ABE communication over wide area networks.

Communication Scheme

The ALSP communication scheme consists of (1) an inter-component communications model that defines the transport layer interface that connects ALSP components, (2) a layered protocol for simulation-to-simulation communication, (3) a message filtering scheme to define the information of interest to a simulation, and (4) a mechanism for intelligent message distribution.

Inter-component Communications Model

AIS employs a persistent connection communications model[9] to provide the inter-component communications. The transport layer interface used was dictated by simulation requirements and the transport layer interfaces on AIS-supporting operating systems: local VMS platforms used shared mailboxes, non-local VMS platforms used either Transparent DECnet or TCP/IP, and UNIX-like platforms use TCP/IP.

ALSP Protocol

The ALSP protocol is based on a set of orthogonal issues that comprise ALSP's problem space: simulation-to-simulation communication, object management, and time management. These issues are addressed by a layered protocol that has at the top a simulation protocol, with underlying simulation/ACM, object management, time management, and event distribution protocols.

Simulation Protocol

The simulation protocol is the main level of the ALSP protocol. Objects in ALSP are defined by a unique id number, a class, and a set of attributes associated with a class. Interactions between objects are identified by kind, just as objects are described by attributes; interaction kinds are described by parameters. The simulation protocol is text-based and is defined by an LALR(1) context-free grammar. The semantics of the protocol are confederation-dependent, where the set of classes, class attributes, and interaction parameters are variable. Therefore, the syntactical representation of the simulation protocol may be defined without a priori knowledge of the semantics of the objects and interactions of any particular confederation. It consists of four message types:
• Update. As a simulation changes the state of its objects, it sends update messages to the ACM that provide initial or changed attribute values. The ACM then distributes the information via AIS to other simulations that have indicated interest.
• Delete. When a simulation causes one of its objects to cease to exist, the simulation sends a delete message to inform other simulations.
• Interaction. When a simulation's object engages either another simulation's object or a geographic area, the simulation sends an interaction message to the ACM for further dissemination to other interested simulations.
• Refresh request. A simulation can request an update of a set of attribute values for any object or class of objects by sending a refresh request message to the confederation.

Two services control distribution of simulation protocol messages: events and dispatches. Event messages are time-stamped and delivered in a temporally-consistent order. Dispatch messages are delivered as soon as possible, without regard for simulation time.

Simulation/ACM Connection Protocol

The simulation/ACM connection protocol provides services for managing the connection between a simulation and its ACM and a method of information exchange between a simulation and its ACM. Additional protocol messages provide connection state, object resource control, confederation save control, filter registration, attribute lock control, and time control services.

Object Management Protocol

The object management protocol is a peer-level protocol that sits below the simulation protocol and provides object management services. ACMs solely use it for object attribute creation, request, release, acquisition, and verification (of the consistency of the distributed object database). The coordination of status, release, acquisition, and request of object attributes between ACMs uses the object management protocol.

Distributed object ownership presumes that no single simulation must own all objects in a confederation, but many simulations require knowledge of some objects. A simulation uses simulation protocol update messages to discover objects owned by other simulations. If this simulation is interested in the objects, it can ghost them (track their locations and state) and model interactions to them from owned objects.

The object manager in the ACM manages the objects and object attributes of the owned and ghosted objects known to the ACM. From the ACM's perspective, objects come into existence through the registration process performed by its simulation or through the discovery of objects registered by other simulations. Locks implement attribute ownership: a simulation "owns" an attribute if it has that attribute locked, and it "owns" an object if it has the object's id attribute locked. A primary function of the object management protocol is to ensure that a simulation only updates attributes for which it has acquired a lock. Any simulation asking for control is granted control. Services provided by the simulation/ACM protocol are used by the simulations to interact with the ACM's attribute locking mechanism.

Each attribute of each object known to a given ACM has a status that assumes one of three values:
• Locked. A simulation controls the attribute and may update the attribute value.
• Unlocked. No simulation currently controls the attribute.
• Gone. The state of control is held elsewhere in the confederation.

The initial state of attribute locks for registered objects and discovered objects is as follows:
• Object Registration places each object-attribute pair in the locked state. The simulation may optionally specify attributes to be in the unlocked state.
• Object Discovery adds an object to the object database as a ghosted object. All of the attributes for this object are marked with a status of gone.

Message Filtering

The ACM uses simulation message filtering to evaluate the content of a message received from the confederation. The ACM filters two types of messages: update messages and interaction messages. The ACM delivers to its simulation the messages that are of interest and pass filtering criteria, and discards those that are not of interest.

Update messages. The ACM evaluates update messages based on the update message filtering criteria that the simulation provides. From the ACM's perspective, when an ACM receives an update message there are four possible outcomes: (1) the ACM discards the message, (2) the ACM sends the simulation a create message, (3) the ACM sends the simulation the update message, or (4) the ACM sends the simulation a delete message.

Interaction messages. An ACM may discard interaction messages because of the kind parameter. The kind parameter has a hierarchical structure similar to the object class structure. The simulation informs its ACM of the interaction kinds that should pass or fail the interaction filter.

Time Management Protocol

The time management protocol is also a peer-level protocol, used between ACMs, that sits below the simulation protocol. It provides time management services for synchronizing simulation time among ACMs, time progression, and confederation saves. The protocol also provides services for the distributed coordination of a simulation's entrance into the confederation. The join/resign services and time synchronization mechanisms are described earlier. The save mechanism provides fault tolerance; coordination is required to produce a consistent snapshot of all ACMs, translators and simulations for a particular value of simulation time.
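As a compact illustration of the three attribute lock states and the initial states assigned at registration versus discovery, described under the object management protocol above, consider the following sketch; the type and function names are invented for this example.

    from enum import Enum

    class LockState(Enum):
        LOCKED = "locked"      # this simulation controls the attribute and may update it
        UNLOCKED = "unlocked"  # no simulation currently controls the attribute
        GONE = "gone"          # control is held elsewhere in the confederation

    def initial_lock_states(attributes, registered):
        """Registered objects start with every attribute locked by the owner
        (unless explicitly declared unlocked); discovered (ghosted) objects
        start with every attribute marked gone."""
        state = LockState.LOCKED if registered else LockState.GONE
        return {attr: state for attr in attributes}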

Aggregate Level Simulation Protocol

Message Distribution

To minimize message traffic between components in an ALSP confederation, AIS employs a form of intelligent message routing that uses the Event Distribution Protocol (EDP).[10] The EDP allows ACMs to inform the other AIS components about the update and interaction filters registered by their simulations. Distribution of this information allows ACMs to distribute data only on classes (and attributes of classes) that are of interest to the confederation. The ABE also uses this information to send only information that is of interest to the components it serves. In the case of update messages, distribution is driven by the registered update filters; for interaction messages, the process is similar, except that the kind parameter in the interaction message determines where the message is sent.

References
[1] http://www.peostri.army.mil/products/cbs/
[2] https://afmsrr.afams.af.mil/index.cfm?RID=SMN_AF_1000000
[3] http://www.peostri.army.mil/products/tacsim
[4] http://www.29palms.usmc.mil/dirs/ont/mands/mwts.asp
[5] http://www.peostri.army.mil
[6] Lamport, L. (1978). "Time, Clocks, and the Ordering of Events in a Distributed System." Communications of the ACM, 21(7), pp. 558-565.
[7] Balci, O., Nance, R.E., Derrick, E.J., Page, E.H., and Bishop, J.L. (1990). "Model Generation Issues in a Simulation Support Environment." In: Proceedings of the 1990 Winter Simulation Conference, New Orleans, LA, 9-12 December, pp. 257-263.
[8] Nance, R.E. (1971). "On Time Flow Mechanisms for Discrete Event Simulations." Management Science, 18(1), pp. 59-93.
[9] Boggs, D.R., Shoch, J.F., Taft, E.A., and Metcalfe, R.M. (1979). "PUP: An Internetwork Architecture." Report CSL-79-10, XEROX Palo Alto Research Center, Palo Alto, CA, July.
[10] Weatherly, R.M., Wilson, A.L., and Griffin, S.P. (1993). "ALSP - Theory, Experience, and Future Directions." In: Proceedings of the 1993 Winter Simulation Conference, Los Angeles, CA, 12-15 December, pp. 1068-1072.

Amazon Relational Database Service

Amazon Relational Database Service[1] or Amazon RDS is a distributed relational database service by Amazon.com. It is a web service running "in the cloud" and provides users a relational database for use in their applications. Amazon RDS makes it easy to set up, operate, and scale a relational database[2]. Complex administration processes like patching the database software, backing up your database and enabling point-in-time recovery are managed automatically[3]. Scaling storage and compute resources can be performed by a single API call. Amazon RDS was first released on 22 October 2009[4] [5]. In June 2011, Oracle database support was added.[6] Amazon RDS supports MySQL and Oracle database engines.

Features

Amazon RDS is simple to use. A new DB instance can be launched from the AWS Management Console [7] or using the Amazon RDS APIs [8]. Monitoring the compute and storage resource utilization of your DB Instance is easy; these performance metrics are available using the AWS Management Console or Amazon CloudWatch APIs [9]. Amazon RDS offers many different features to support different use cases. Some of the major features are:

Multi-AZ deployment
Multi-Availability Zone deployments are targeted for production environments [10] and provide enhanced availability and data durability for MySQL instances. When you create or modify your DB Instance to run as a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous "standby" replica in a different Availability Zone [11] (independent infrastructure in a physically separate location). In the event of planned database maintenance or unplanned service disruption, Amazon RDS automatically fails over to the up-to-date standby, ensuring that database operations resume quickly without administrative intervention.

Read Replicas
Read Replicas make it easy to take advantage of MySQL's native, asynchronous replication functionality. Read Replicas help in scaling out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. They can also be used for serving read traffic when the primary database is unavailable.

Reserved Instances
Amazon RDS DB instances come in two packages: On-Demand DB Instances and Reserved DB Instances [12]. On-Demand instances are billed [13] at an ongoing hourly usage rate. Reserved DB Instances require a low, one-time, up-front fee and in turn provide a significant discount on the hourly usage charge for that instance. The two instance types are exactly the same except for their billing. Thus Reserved DB Instances enable you to take advantage of the rich functionality of Amazon RDS at lower cost and can provide substantial savings over owning database assets or running only On-Demand DB instances.
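As an illustration of the "single API call" point above, the sketch below uses the modern boto3 SDK for Python, which post-dates this text, to create a MySQL DB instance and later scale its storage and instance class. The instance identifier, sizes and credentials are invented values, not anything prescribed by Amazon RDS.

import boto3  # AWS SDK for Python; assumes credentials are already configured

rds = boto3.client("rds", region_name="us-east-1")

# Launch a new DB instance (hypothetical identifier and sizes).
rds.create_db_instance(
    DBInstanceIdentifier="example-db",
    Engine="mysql",
    DBInstanceClass="db.m1.small",
    AllocatedStorage=5,            # GB
    MasterUsername="admin",
    MasterUserPassword="change-me",
    MultiAZ=True,                  # synchronous standby in another Availability Zone
)

# Scaling storage and compute later is a single API call.
rds.modify_db_instance(
    DBInstanceIdentifier="example-db",
    DBInstanceClass="db.m1.large",
    AllocatedStorage=50,
    ApplyImmediately=True,
)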

Amazon Relational Database Service

Database Instance Types

Amazon RDS currently supports six DB Instance Classes, to support different types of workloads [14]:
• Small DB Instance: 1.7 GB memory, 1 ECU (1 virtual core with 1 ECU), 64-bit platform, Moderate I/O Capacity
• Large DB Instance: 7.5 GB memory, 4 ECUs (2 virtual cores with 2 ECUs each), 64-bit platform, High I/O Capacity
• Extra Large DB Instance: 15 GB of memory, 8 ECUs (4 virtual cores with 2 ECUs each), 64-bit platform, High I/O Capacity (MySQL DB Engine Only)
• High-Memory Extra Large Instance: 17.1 GB memory, 6.5 ECUs (2 virtual cores with 3.25 ECUs each), 64-bit platform, High I/O Capacity
• High-Memory Double Extra Large DB Instance: 34 GB of memory, 13 ECUs (4 virtual cores with 3.25 ECUs each), 64-bit platform, High I/O Capacity
• High-Memory Quadruple Extra Large DB Instance: 68 GB of memory, 26 ECUs (8 virtual cores with 3.25 ECUs each), 64-bit platform, High I/O Capacity

References
[1] http://aws.amazon.com/rds/
[2] http://nerds.airbnb.com/mysql-in-the-cloud-at-airbnb
[3] http://aws.amazon.com/rds/amazon-rds-introduced/
[4] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2942&categoryID=291
[5] http://www.allthingsdistributed.com/2009/10/amazon_relational_database_service.html
[6] http://cloudcomputing.internet.com/applications/article.php/426926
[7] https://console.aws.amazon.com/
[8] http://docs.amazonwebservices.com/AmazonRDS/latest/APIReference/
[9] http://aws.amazon.com/developertools/2534
[10] http://en.oreilly.com/mysql2011/public/schedule/detail/19732
[11] http://aws.amazon.com/rds/faqs/#41
[12] http://aws.typepad.com/aws/2010/08/by-popular-demand-amazon-rds-reserved-db-instances.html
[13] http://aws.amazon.com/rds/pricing/
[14] http://aws.amazon.com/rds/#features

Amazon SimpleDB

Amazon SimpleDB is a distributed database written in Erlang[1] by Amazon.com. It is used as a web service in concert with Amazon Elastic Compute Cloud (EC2) and Amazon S3 and is part of Amazon Web Services. It was announced on December 13, 2007.[2] As with EC2 and S3, Amazon charges fees for SimpleDB storage, transfer, and throughput over the Internet. Transfer to other Amazon Web Services is free of charge. On December 1, 2008, Amazon introduced a new pricing with a free tier[3] for 1 GB of data & 25 machine hours.[4]

Limitations

Published limitations[5]:

Store limitations (maximum):
• domains: 250 active domains per account; more can be requested by filling a form
• size of domains: 10 GB
• attributes per domain: 1,000,000,000
• attributes per item: 256 attributes
• size per attribute: 1024 bytes

Query limitations (maximum):
• items returned in a query response: 2500 items
• seconds a query may run: 5 seconds
• attribute names per query predicate: 1 attribute name
• comparisons per predicate: 20 operators
• predicates per query expression: 5 predicates

References
[1] What You Need To Know About Amazon SimpleDB (http://www.satine.org/archives/2007/12/13/amazon-simpledb/)
[2] Amazon SimpleDB - Limited Beta (http://www.amazon.com/SimpleDB-AWS-Service-Pricing/b?node=342335011&no=553872011&me=A36L942TSJ2AJA)
[3] SimpleDB - Free Tier - A shift in AWS pricing (http://blog.sdbexplorer.com/2008/12/simpledb-2000000-free-requests-for-next-six-months/)
[4] Amazon SimpleDB official home page (http://www.amazon.com/b?node=342335011)
[5] SimpleDB Limits, Amazon SimpleDB Developer Guide (API Latest version) (http://docs.amazonwebservices.com/AmazonSimpleDB/latest/DeveloperGuide/index.html?SDBLimits.html)

Amazon SimpleDB

External links
• Amazon SimpleDB official home page (http://aws.amazon.com/simpledb/)
• NSimpleDB - Open source C# implementation of the SimpleDB data model for the desktop; can also be used as a proxy for SimpleDB (http://code.google.com/p/nsimpledb/)
• M/DB - a Free Open Source API-compatible alternative to SimpleDB that can be used as a local or cloud database (http://www.mgateway.com/mdb.html)
• typica - A Java client for SimpleDB and other Amazon Web Services (http://code.google.com/p/typica/)
• SimpleJPA - a Java Persistence API (JPA) implementation for Amazon's SimpleDB (http://code.google.com/p/simplejpa/)
• Simol - Open-source .NET object-persistence framework for Amazon SimpleDB written in C# (http://simol.codeplex.com/)
• SDB Explorer - Tool to explore Amazon SimpleDB service (http://www.sdbexplorer.com/)
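For readers who want to see the domain/item/attribute model in code, the sketch below uses the boto library's SimpleDB bindings from roughly this era. The domain name, item name and attribute values are invented for the illustration, and the snippet assumes AWS credentials are available in the environment.

import boto  # boto 2.x; provides boto.connect_sdb() for SimpleDB

conn = boto.connect_sdb()                 # credentials taken from the environment
domain = conn.create_domain("example")    # a 'domain' is SimpleDB's unit of storage

# Items are schema-less bags of attribute name/value pairs (all stored as strings).
domain.put_attributes("item1", {"colour": "blue", "size": "1024"})
print(domain.get_attributes("item1"))

# Queries use SimpleDB's SQL-like Select expression.
for item in domain.select("select * from `example` where colour = 'blue'"):
    print(item.name, dict(item))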

Amoeba distributed operating system

Amoeba
Company / developer: Andrew S. Tanenbaum
Available language(s): English
Official website: [1]

Amoeba is an open source microkernel-based distributed operating system developed by Andrew S. Tanenbaum and others at the Vrije Universiteit. The aim of the Amoeba project is to build a timesharing system that makes an entire network of computers appear to the user as a single machine. Development at the Vrije Universiteit was stopped: the files in the latest version (5.3) were last modified on 12 February 2001. Recent development is carried forward by Dr. Stefan Bosse at BSS Lab.

Amoeba runs on several platforms, including SPARC, i386, i486, 68030, Sun 3/50 and Sun 3/60. The system uses FLIP as a network protocol. The Python programming language was originally developed for this platform.[2]

References
[1] http://www.cs.vu.nl/pub/amoeba/
[2] "Why was Python created in the first place?" (http://www.python.org/doc/faq/general/#why-was-python-created-in-the-first-place). Python FAQ. Retrieved 2008-02-11.

External links
• Amoeba home page (http://www.cs.vu.nl/pub/amoeba/)
• FSD-Amoeba page at Sourceforge (http://fsd-amoeba.sourceforge.net)
• Recent development by Dr. Stefan Bosse at BSS Lab (http://www.bsslab.de/english/projects_software.html):
• Overview (http://www.bsslab.de/english/index.html)
• VAM (http://www.bsslab.de/english/vam.html): The Virtual Amoeba Machine: distributed operating system based on Amoeba with virtual machine concepts and functional programming
• VAMNET (http://www.bsslab.de/english/vamnet.html): The Virtual Amoeba Machine Network: a new hybrid distributed operating system environment
• AMUNIX (http://www.bsslab.de/english/amunix.html): Amoeba on top of UNIX: Amoeba extension for UNIX-like operating systems
• AMCROSS (http://www.bsslab.de/english/amcross.html): Amoeba cross-compiling environment for UNIX
• VX-Kernel (http://www.bsslab.de/english/vxkernel.html): the new VX-Amoeba kernel

Art of War Central

Art of War Central is a game server company that provides game server hosting to game player clans for a variety of PC on-line multi-player games.[1] Games hosted include Battlefield 2, Battlefield 2142, Bad Company 2, Crysis, Crysis 2, Frontlines: Fuel of War, Homefront, Medal of Honor, Quake Wars, and World in Conflict. While their primary business is directed at the on-line gaming community, they also offer virtual servers, voice servers, dedicated servers and web hosting services for non-gaming users.

History

Initially started in the basement of company founder and current Vice President Mr. Dallas Behling, the original intent was to provide a dedicated server for private team play; it was the first such game server on the internet. The site was registered on March 28, 2001[2] and offered game servers for Tribes 1 and Tribes 2.[3] The company began offering additional games when it introduced a beta version of Counter Strike in 2002 and has since expanded its portfolio to over 100 online games as of October 2010.[4] In 2008 international operations were launched in London, Amsterdam and Frankfurt, Germany.[5] Current ownership is listed as North American Game Technology, LLC, founded in September 2006 with Mr. Steve Phallen as President.[6]

Acquisitions

In November 2009 Art of War Central acquired two competitors in the game server and dedicated server marketplace. Their takeover of Wolf Servers and VSK Game Servers was announced in a press release November 26, 2009.[7] WolfServers.com maintained dedicated game servers in the following markets: Atlanta, Chicago, Dallas, Texas, Los Angeles, New York, San Jose, Virginia, and Southampton/London UK.[8] VSK Game Servers was an early industry leader in developing lag or latency reducing technology to improve gaming performance,[9] incorporating specific performance requirements into the hardware of their in-house servers and partnering with Internap to improve routing performance.[10]

Accreditations

Art of War Central is an approved ranked server provider for America's Army Honor.[11] North American Game Technology LLC is an accredited member of the Columbus, Ohio Better Business Bureau with a rating of A- as of April, 2011.[12]

Sponsorships and League Hosting

Art of War Central has sponsored and hosted numerous on-line gaming tournaments and leagues for professional and amateur players. Organizations such as Club Conflict Online Gaming League[13] and TeamWarfare League[14] have used Art of War Central, as have the CPL (Cyberathlete Professional League) and the CAL (Cyberathlete Amateur League); the company was also a contributing sponsor to the CPL World Tour. Art of War Central sponsored the 2004 Cyberathlete Extreme World Championships[15] and in August 2004 participated with Team Sportscast Network by providing a 50,000 slot HLTV network to broadcast "The-Rush", a 64 team double elimination Counter-Strike competition.[16] Art of War Central has co-sponsored a number of on-line game events with Superstar Gamers.[17]

com”) http:/ / www. bbb. to overcome the rapidly growing complexity of computing systems management. a variety of architectural frameworks based on “self-regulating” autonomic components has been recently proposed. com/ cs/ story/ 22604/ [17] http:/ / www. effectors (for self-adjustment). teamwarfare.vskgamingservers. asp?forumid=662& threadid=449592 [15] http:/ / www.com/) Autonomic Computing Autonomic Computing refers to the self-managing characteristics of distributed computing resources. it will constantly check and optimize its status and automatically adapt itself to changing conditions. org/ centralohio/ business-reviews/ internet-gaming/ north-american-game-technology-in-worthington-oh-70041777 [7] http:/ / www. this initiative's ultimate aim is to develop computer systems capable of self-management. An AC can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring). i-newswire. vskgamingservers.php) • VSK Gaming Servers (http://www.artofwarcentral.and environment awareness. and to reduce the barrier that complexity poses to further growth. html [8] http:/ / www.com/) • Wolf Servers (http://www. wolfservers. com/ main. knowledge and planner/adapter for exploiting policies based on self. bbb. clubconflict. using high-level policies. org/ centralohio/ business-reviews/ internet-gaming/ north-american-game-technology-in-worthington-oh-70041777 [13] http:/ / www. adapting to unpredictable changes whilst hiding intrinsic complexity to operators and users. Driven by such vision. ant colony optimization could be studied in this paradigm. For example. com/ art-of-war-central-celebrates-10th/ 68295 [5] http:/ / nuclearwar2012. com/ art-of-war-continues-growth-with-expansion-to-frankfurt_281. org/ 10429472-gamers-are-winners-in-landmark-gamer-server-merger-art-of-war-central-merges-with-wolf-servers-and. Started by IBM in 2001. com/ [10] http:/ / www. com/ art-of-war-continues-growth-with-expansion-to-frankfurt_281. com/ art-of-war-central-celebrates-10th/ 68295 http:/ / www. com/ cs/ story/ 21732/ [16] http:/ / www. htm [12] http:/ / www. Autonomy-oriented computation is a paradigm proposed by Jiming Liu in 2001 that uses artificial systems imitating social animals' collective behaviours to solve hard computational problems. An autonomic system makes decisions on its own. gkg. prlog. com/ main/ index. gotfrag. an autonomic computing framework might be seen composed by Autonomic Components (AC) interacting with each other [1]. internap. A very similar trend has recently characterized significant research work in the area of multi-agent systems. com/ business-internet-connectivity-services/ route-optimization-miro/ [11] http:/ / nuclearwar2012. sk-gaming.Art of War Central 21 References [1] [2] [3] [4] http:/ / artofwarcentral. asp?page=dp& dis=98290 https:/ / www. However. htm [6] http:/ / www. most of these approaches are typically conceived with centralized or cluster-based server architectures in mind and mostly address the need of reducing management costs rather than the need of enabling complex software systems or providing innovative services.[2] . i-newswire. com/ forums/ showthread. php [9] http:/ / www. com/ sponsors/ [14] http:/ / www. As widely reported in literature. net/ whois/ (query “artofwarcentral. com/ content/ 9934-TsN_Three_Continents_in_Three_Weeks External links • Art of War Central (http://www.wolfservers. gotfrag.com/main/index.

This creates an enormous complexity in the overall computer network which is hard to control manually by human operators. The distributed applications running on these computer networks are diverse and deal with many different tasks. . but the demand for skilled IT personnel is already outstripping supply. IBM defined five evolutionary levels. Computing systems have brought great benefits of speed and automation but there is now an overwhelming economic need to automate their maintenance. In a self-managing Autonomic System. A general problem of modern distributed computing systems is that their complexity. Manual control is time-consuming. Large companies and institutions are employing large-scale computer networks for communication and computation. expensive.g. For this process. network and basic database parameters). for its deployment: Level 1 is the basic level that presents the current situation where systems are essentially managed manually. heart rate. she defines general policies and rules that serve as an input for the self-management process. the human operator takes on a new role: he or she does not control the system directly. • Self-Healing: Automatic discovery. IBM has defined the following four functional areas: • Self-Configuration: Automatic configuration of components. Levels 2 . and in particular the complexity of their management. and blood pressure) without any conscious intervention. operating system. It is inspired by the autonomic nervous system of the human body. They do so by using laptops. or the Autonomic deployment model [5]. ranging from internal control processes to presenting web content and to customer support. The Autonomic Computing Initiative (ACI) aims at providing the foundation for autonomic systems. In "The Vision of Autonomic Computing". design and maintain the complexity of interactions. or mobile phones with diverse forms of wireless technologies to access their companies' data. hardware. PDAs. networked computing systems to manage themselves without direct human intervention. Currently this volume and complexity is managed by highly skilled humans. The manual effort needed to control a growing networked computer-system tends to increase very quickly.[4] Kephart and Chess warn that the dream of interconnectivity of computing systems and devices could become the “nightmare of pervasive computing” in which architects are unable to anticipate.Autonomic Computing 22 The problem of growing complexity Self-management means different things in different fields. Additionally. freeing administrators from low-level task management while delivering better system behavior. Forecasts suggests that the number of computing devices in use will grow at 38% per annum and the average complexity of each device is increasing. Most 'autonomic' service providers guarantee only up to the basic plumbing layer (power. Instead. Mobile computing is pervading these networks at an increasing speed: employees need to communicate with their companies while they are not in their office. • Self-Optimization: Automatic monitoring and control of resources to ensure the optimal functioning with respect to the defined requirements.4 introduce increasingly automated management functions. Autonomic systems A possible solution could be to enable modern. is becoming a significant limiting factor in their further development. and correction of faults. • Self-Protection: Proactive identification and protection from arbitrary attacks. 
80% of such problems in infrastructure happen at the client specific application and database layer. with labour costs exceeding equipment costs [3] by a ratio of up to 18:1. and error-prone. self-managing systems. They state the essence of autonomic computing is system self-management. while level 5 represents the ultimate goal of autonomic. respiration. This nervous system controls important bodily functions (e.

The actual operation of the autonomic system is dictated by the Logic. which is responsible for making the right decisions to serve its Purpose. etc..[6] 23 Control loops A basic concept that will be applied in Autonomic Systems are closed control loops.Autonomic Computing The design complexity of Autonomic Systems can be simplified by utilizing design patterns such as the Model View Controller (MVC) to improve concern separation by helping encapsulate functional concerns.. This model highlights the fact that the operation of an autonomic system is purpose-driven. an autonomic system must be self-contained and able to start-up and operate without any manual intervention or external help. Aware . that define the basic behaviour).) without external intervention. According to IBM. This well-known concept stems from Process Control Theory.. Adaptive An autonomic system must be able to change its operation (i. Characteristics Even though the purpose and thus the behaviour of autonomic systems vary from system to system. interpretation of sensory data. hundreds or even thousands of these control loops are expected to work in a large-scale self-managing computer system. the knowledge required to bootstrap the system (Know-how) must be inherent to the system. Inherent to an autonomic system is the knowledge of the Purpose (intention) and the Know-how to operate itself (e. This includes its mission (e. and influence by the observation of the operational context (based on the sensor input). bootstrapping.. and the “survival instinct”. the policies (e. Essentially. Conceptual model A fundamental building block of an autonomic system is the sensing capability (Sensors Si).e. faults. state and functions). every autonomic system should be able to exhibit a minimum set of properties to achieve its purpose: Automatic This essentially means being able to self-control its internal functions and operations. Again. etc. which enables the system to observe its external operational context. the service it is supposed to offer).g. As such. This will allow the system to cope with temporal and spatial changes in its operational context either long term (environment customisation/optimisation) or short term (exceptional conditions such as malicious attacks. configuration knowledge. If seen as a control system this would be encoded as a feedback error function or in a heuristically assisted system as an algorithm combined with set of heuristics bounding its operational space. a closed control loop in a self-managing system monitors some resource (software or hardware component) and autonomously tries to keep its parameters within a desired range.g.).g. its configuration.
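The closed control loop described above can be sketched in a few lines of Python. The example is illustrative only: the monitored resource, the sensor, the target band and the capacity adjustments are all invented, but the structure shows the basic cycle of monitoring a parameter and autonomously steering it back into a desired range.

import time
import random

TARGET_RANGE = (40.0, 70.0)   # assumed policy: keep utilisation in this band (percent)

def sense_utilisation():
    """Sensor: in a real system this would query the managed resource."""
    return random.uniform(0.0, 100.0)

def adjust_capacity(capacity, utilisation):
    """Effector: grow or shrink capacity to steer utilisation back into range."""
    low, high = TARGET_RANGE
    if utilisation > high:
        return capacity + 1            # add a worker or server instance
    if utilisation < low and capacity > 1:
        return capacity - 1            # release a worker or server instance
    return capacity

def control_loop(iterations=10):
    capacity = 2
    for _ in range(iterations):
        utilisation = sense_utilisation()                   # monitor
        capacity = adjust_capacity(capacity, utilisation)   # analyse, plan, execute
        print(f"utilisation={utilisation:5.1f}%  capacity={capacity}")
        time.sleep(0.1)

if __name__ == "__main__":
    control_loop()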

May.com/developerworks/tivoli/autonomic/library/ 1016/1016_autonomic. Jan 2003 [5] http:/ / www.com/explorer/ #display=mechanism-{http://resex.assl. Awareness will control adaptation of its operational behaviour in response to context or state changes. in Matthias Nickles. doi.ibm.uni-stuttgart.vassev.com/autonomic-technology-platform) • Applied Autonomics provides Autonomic Web Services (http://www. Agents and Computational Autonomy: Potential.providers of autonomic computing software (http://www.whitestein.org) • CASCADAS Autonomic Tool-Kit in Open Source (http://sourceforge.in German (ftp://ftp.org/) • ASSL (Autonomic System Specification Language) : A Framework for Specification. vol.ibm.inrialpes. Berkeley University of California. “Flexible Self-Management Using the Model-View-Controller Pattern. 2008. 24 References [1] http:/ / sourceforge. informatik. Risks.com) • Explanation of Autonomic Computing and its usage for business processes (IBM).html) • Practical Autonomic Computing . Validation and Generation of Autonomic Systems (http://www. March 2002 [4] IEEE Computer Magazine. and Gerhard Weiss (editors).html) • Barcelona Supercomputing Center . Lecture Notes in Computer Science.ibmpressbooks.com) • Enigmatec Website .ustuttgart_fi/DIP-2787/DIP-2787.Autonomic Systems and eBusiness Platforms (http://www. USA.com/ autonomic/pdfs/AC_Practical_Roadmap_Whitepaper. " From Individual Based Modeling to Autonomy Oriented Computation (http:/ / www. Michael Rovatsos. 84-90.ibm. 2969. survey.org/) • JADE .appliedautonomics. 2008.com/ bookstore/product. php?group_id=225956) • ANA Project: Autonomic Network Architecture Research Project. 1109/ MS. 3.rkbexplorer. org/ 10. php?group_id=225956 [2] Xiaolong Jin and Jiming Liu.net/project/showfiles. com/ press/ us/ en/ pressrelease/ 464.asp?isbn=013144025X) • IBM Autonomic Computing Website (http://www. Curry and P. Berlin.com) • IPsoft service providers delivering Autonomic Computing (http://www. 25.Autonomic Computing An autonomic system must be able to monitor (sense) its operational context as well as its internal state in order to be able to assess if its current operation serves its purpose. [3] ‘Trends in technology’. pages 151–169.org/) • Dynamically Self Configuring Automotive Systems (http://www.net) • Handsfree Networks . Grace.A framework for developing autonomic administration software (http://sardes. ISBN 978-3-540-22477-8. funded by the European Union (http://www.” (http:/ / dx. Situation-aware Communications And Dynamically Adaptable. wss [6] E. ana-project.dyscas. pp.pdf) • Autonomic computing blog (http://www-03.pdf) • Autonomic Computing Architecture in the RKBExplorer (http://www. net/ project/ showfiles. Springer.com/id/resilience-mechanism-87d79b11}) . External links • Autonomic Computing by Richard Murch published by IBM Press (http://www. asp?genre=article& issn=0302-9743& volume=2969& spage=151)".com/developerworks/blogs/page/DaveBartlett) • Whitestein Technologies . com/ openurl.ipsoft.bsc.com) • CASCADAS Project: Component-ware for Autonomic.research. vol. springerlink.providers of autonomic computing software (http://www.fp7-socrates.rkbexplorer.provider of development and integration environment for autonomic computing software (http://www.ibm.fr/jade. ibm.com/autonomic/) • Autonomic Computing articles and tutorials (http://www.handsfreenetworks.Roadmap to Self Managing Technology (http://www-03. funded by the European Union (http://www. 2004.cascadas-project.enigmatec. 
no.de/pub/library/medoc.es/ autonomic) • SOCRATES: Self-Optimization and Self-Configuration in Wireless Networks (http://www. and Solutions. 60) IEEE Software.

Autonomic Computing (external links, continued)
• International Journal of Autonomic Computing (http://www.inderscience.com/ijac/)
• BiSNET/e: A Cognitive Sensor Networking Architecture with Evolutionary Multiobjective Optimization (http://dssg.cs.umb.edu/wiki/index.php/BiSNET/e)

Citrusleaf database

Citrusleaf
Developer(s): Citrusleaf, Inc.
Stable release: 2.0.23 / September 1, 2010
Written in: C
Operating system: Linux
Type: distributed key/value database system
License: Enterprise (Perpetual or Subscription based)
Website: http://www.citrusleaf.net/

The Citrusleaf database is an ACID-compliant, post-relational NoSQL database produced and marketed by Citrusleaf, Inc. It was originally developed for managing the mission-critical data for applications on the Real-time web. The Citrusleaf database platform is an ACID-compliant, extremely fast, scalable, fault-tolerant database engine. The system is capable of 100,000 transactions per second per node, with a response time of under one millisecond. As of 2010 Citrusleaf has been implemented in production.

History

While at Yahoo! and Aggregate Knowledge, the founders of Citrusleaf Corporation encountered a problem: the volume and performance demands of Real-time web applications caused traditional SQL databases to fail. This was due to several reasons. The first was the sheer volume of data. These applications require the ability to store 5 to 10 Kilobytes of information on hundreds of millions of web users and compare it to potential ads to display with sub-millisecond response time. Keeping track of 5 to 10 Kilobytes of information for each of hundreds of millions of people produced a database with billions of objects, and retrieving and processing this information with sub-millisecond response time was impossible with traditional database approaches. Their applications were mission-critical, so in addition to the performance requirements the solution had to be available without interruption; fault-tolerant design was an issue. Therefore in 2008 Brian Bulkowski created a key-value data store and later was joined by Srini Srinivasan in 2009. Together they created the Citrusleaf database platform. In doing so, the authors created software solutions in the areas of distributed systems, real-time prioritization, and storage management across all kinds of storage.

Design Drivers

The answer lay in making use of solid state drives (SSD). Traditional database approaches were designed with traditional rotational disk storage in mind. The average seek time of rotating disk storage is ten milliseconds and therefore a sub-millisecond response time is not possible. Citrusleaf takes advantage of the properties of Solid-state drives (SSD) to accomplish this.

Citrusleaf database

Data model

Citrusleaf organizes all data into namespaces, which are similar to a database instance in an RDBMS and control policies like replication count and storage location. Within a namespace, individual data objects are referenced by tables and primary keys, which could be strings, integers, or binary data. A key is a unique reference to a piece of data: common keys include usernames and session identifiers. Each data object is a collection of 'bins' in Citrusleaf's parlance, which are similar to column names in SQL. The system is schema-less in that different columns can be used in different data objects of the same table. Each column's value is typed. The types supported are strings, integers, blobs, and "reflection blobs", which are binary data which has been reflected by the serializer of an individual object (such as a Java blob generated by Java's serializer). The use of typed values allows different languages to inter-operate simply: a string set in Java will appear correctly through the Python client, even though Java and Python use different underlying character representations (Unicode vs UTF-8). Some high level operations (such as atomically adding integers) are supported, in the style of Redis, but the set of instructions is not very rich. Citrusleaf's data model allows it to be considered as a document store, although it is more similar to a schema-less version of the row based schema typically used in relational systems.

Scalability and Performance
• Distributed object store: Easily store and retrieve large volumes of data through the Citrusleaf client for C, C#, Java, PHP, Python and Ruby.
• Real-time performance: Low, predictable sub-millisecond latency from memory or flash storage.
• High sustained throughput of over 100,000 transactions per second per commodity node.

Replication and Failover
In addition to performance, the system supports these transaction loads in a non-stop manner during node arrivals and departures:
• Automatic failure detection and in-flight transaction rerouting for nonstop operation in the face of failure.
• Flexible replication policy: Set replication factors for individual data items.
• Randomized object replication allows smooth load balancing during failure recovery.
• Automatic Client failover: Clients track cluster membership for automatic load balancing and transaction re-try.
• Automatic cluster resizing and rebalancing: Citrusleaf cluster will automatically grow or shrink using zeroconf networking.

References

External links
• Official Citrusleaf site (http://www.citrusleaf.net)
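The Citrusleaf client API itself is not reproduced in this text, so the fragment below does not pretend to be it. It only models the concepts just described (namespace, table, primary key, typed bins) with plain Python dictionaries, to make the vocabulary concrete.

# Hypothetical in-memory model of the namespace/table/key/bins vocabulary.
store = {}   # namespace -> table -> key -> bins (a dict of bin name to typed value)

def put(namespace, table, key, bins):
    store.setdefault(namespace, {}).setdefault(table, {})[key] = dict(bins)

def get(namespace, table, key):
    return store.get(namespace, {}).get(table, {}).get(key)

# Different objects in the same table may carry different bins (schema-less).
put("users", "profile", "alice", {"age": 30, "city": "Berlin"})
put("users", "profile", "session:42", {"last_seen": 1234567890})
print(get("users", "profile", "alice"))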

That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve the account information. ftp servers. The file server on a client-server network is a high capacity. called servers. Peer-to-peer networks are typically less secure than a client-server networks because security is handled by the individual computers. mail servers. web access and database access. software applications can be installed on the single computer and shared by every computer in the network. Comparison to peer-to-peer architecture A client-server network involves multiple clients connecting to a single. built on the client–server model. not controlled and supervised on the network as a whole. Many business applications being written today use the client–server model. the collision of session may be larger than with routing via server nodes. which in turn serves it back to the web browser client displaying the results to the user. name servers. On the other hand. A server machine is a host that is running one or more server programs which share their resources with clients. are Schematic clients-server interaction. Users accessing banking services from their computer use a web browser client to send a request to a web server at a bank. Clients therefore initiate communication sessions with servers which await incoming requests. So do the Internet's main application protocols.Clientserver model 27 Client–server model The client–server model of computing is a distributed application that partitions tasks or workloads between the providers of a resource or service. In the peer to peer network. but both client and server may reside in the same system. The interaction between client and server is often described using sequence diagrams. The server component provides a function or service to one or many clients. central server. email clients.[1] Often clients and servers communicate over a computer network on separate hardware. Sequence diagrams are standardized in the Unified Modeling Language. By contrast. print servers. SMTP. called clients. the client-server model works with any size or physical layout of LAN and doesn't tend to slow down with a heavy use. database servers. application servers. but requests a server's content or service function. The advantage of peer-to-peer networking is the easier control concept not requiring any additional coordination entity and not delaying transfers by routing via server entities. However. Specific types of clients include web browsers. Description The client–server characteristic describes the relationship of cooperating programs in an application. The balance is returned to the bank database client. which initiate requests for such services. Telnet. and service requesters. CD-ROMs and printers[2] . Functions such as email exchange. A client does not share any of its resources. peer-to-peer networks involve two or more computers pooling individual resources such as disk drives. They are also cheaper to set up because most desktop operating systems have the software required for the network installed by default. Most web services are also types of servers. and terminal servers. such as HTTP. [3] . Specific types of servers include web servers. Each computer acts as both the client and the server which means all the computers on the network are equals. file servers. while each two of them communicate in a session. These shared resources are available to every computer in the network. 
The client–server model has become one of the central ideas of network computing. and online chat clients. and DNS. The resources of the computers in . that is where the term peer-to-peer comes from. high speed computer with a large hard disk capacity.

Under client–server.Clientserver model the network can become congested as they have to support not only the workstation user. This is a method of running a network with different limitations compared to fully fashioned clients. Contrast that to a P2P network. Then a single server may cause a bottleneck or constraints problem. Limitations include network load. • Using intelligent client terminals increases the maintenance and repair effort. Aspects of comparison for other architectural concepts today include cloud computing as well. pdf) [3] [Peer-to-Peer Networking and Applications] [4] Book: Computers are your future [5] Peer to Peer vs. Client-server networks with their additional capacities have a higher initial setup cost for networking than peer to peer networks. In P2P networks. and transaction recovery time. but may be taken by another server. clients’ requests cannot be fulfilled by this very entity. All processing is completed on few central computers. since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network. • Any single entity paradigm lacks the robustness of a redundant configuration. as long as required data is accessible. where its aggregated bandwidth actually increases as nodes are added. org/ imgs/ pdf/ education/ P2PNetworking. However. It is easier to configure and manage the server hardware and software compared to the distributed administering requirements with a flock of computers[4] [5] . Retrieved 2009-06-16. even if one or more nodes depart and abandon a downloading file. but also the requests from network users. . Sun Microsystem. the server can become overloaded. It is possible to set up a server on a modern desktop computer. References [1] "Distributed Application Architecture" (http:/ / java. com/ developer/ Books/ jdbc/ ch07. servers may be cloned and networked to fulfill all known capacity and performance requirements. If dynamic re-routing is established. sun. should a critical server fail. [2] Understanding peer-to-peer networking (http:/ / www. 28 Challenges Generally a server may be challenged beyond its capabilities. isafe. pdf). for example. this simple model ends with the bandwidth of the network: Then congestion comes on the network and not with the peers. Possible design decision considerations might be: • As soon as the total number of simultaneous client requests to a given server increases. but it is recommended to consider investment in enterprise-wide server facilities with standardised choice of hardware and software and with a systematic and remotely operable administering strategy. • Mainframe networks use dumb terminals. It may be difficult to provide systemwide services when the client operating system typically used in this type of network is incapable of hosting the service. resources are usually distributed among many nodes which generate as many locations to fail. network address volume. Lesser complete netbook clients allow for reduction of hardware entities that have limited life cycles. The long-term aspect of administering a client-server network with applications largely server-hosted surely saves administering effort compared to administering the application settings per each client. the remaining nodes should still have the data needed to complete the download. Client/Server Networks . However. In addition the concentration of functions in performant servers allows for lower grade performance qualification of the clients.
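To ground the request/response relationship described in this article, here is a minimal sketch of a server and a client using Python's standard socket module. The address, port and messages are arbitrary values chosen for the demonstration, not part of any real service.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050   # arbitrary local address for the demo

def server():
    """A tiny server: waits for a request and returns a response."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _addr = srv.accept()                # block until a client connects
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"echo: {request}".encode())

def client():
    """The client initiates the session and consumes the server's response."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(b"balance for account 42?")
        print(sock.recv(1024).decode())

if __name__ == "__main__":
    t = threading.Thread(target=server, daemon=True)
    t.start()
    time.sleep(0.2)   # give the server a moment to start listening (demo simplification)
    client()
    t.join()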

.1109/32. computer. ISSN 0098-5589. "Understanding Code Mobility" (http:/ / www2. instead of data . . Code mobility can be either Strong or Weak: • Strong code mobility involves moving the code. Giovanni Vigna (1998). data and the execution state from one host to another. Alfonso. This is the process of moving code across the nodes of a network as opposed to distributed computation where the data is moved. such as time-critical applications. It is common practice in distributed systems to require the movement of code or processes between parts of the [1] system. The purpose of code mobility is to support sophisticated operations.685258.Code mobility 29 Code mobility In distributed computing. USA: IEEE Press Piscataway) 24 (5): 342–361. Retrieved 29 July 2009. without the need to restart the program on the recipient's machine. org/ portal/ web/ csdl/ abs/ trans/ ts/ 1998/ 05/ e0342abs. codes or objects to be migrated (or moved) from one machine (host) to another. doi:10. This may necessitate restarting the execution of the program at the destination host. Gian Pietro Picco. • Weak code mobility involves moving the code and the data only. For example a user A can send a running program to another user B and the program continues to run as if it was still on the original machine. IEEE Transactions on Software Engineering (NJ. This is important in cases where the running application needs to maintain its state as it migrates from host to host. References [1] Fuggetta. htm). code mobility is the ability for running programs .

enabling rapid reuse of these connections by short-lived processes without the overhead of setting up a new connection each time. Connection brokers are often used in systems using N-tier architectures. a connection broker is a resource manager that manages a pool of connections to connection-based resources such as databases or remote desktops. .Connection broker 30 Connection broker In software engineering.
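The broker's core job, handing out and reclaiming already-established connections, can be sketched as a small pool. The class below is illustrative: the connect argument is a placeholder factory, not a real database driver or remote-desktop client.

import queue

class ConnectionBroker:
    """Manages a fixed pool of connections so short-lived callers can reuse them."""

    def __init__(self, connect, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())     # pay the setup cost once, up front

    def acquire(self, timeout=None):
        """Hand out an existing connection instead of opening a new one."""
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        """Return the connection so another caller can reuse it."""
        self._pool.put(conn)

# Usage with a stand-in connection factory:
if __name__ == "__main__":
    broker = ConnectionBroker(connect=lambda: object(), size=2)
    conn = broker.acquire()
    # ... use the connection ...
    broker.release(conn)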

CouchDB

Apache CouchDB
[Screenshot: CouchDB's Futon Administration Interface, showing a user database]
Original author(s): Damien Katz, Jan Lehnardt, Christopher Lenz, J. Chris Anderson, Noah Slater
Developer(s): Apache Software Foundation
Initial release: 2005
Preview release: 1.1.0 / May 30, 2011
Development status: Active
Written in: Erlang
Operating system: Cross-platform
Available in: English
Type: Document-oriented database
License: Apache License 2.0
Website: http://couchdb.apache.org/

Apache CouchDB, commonly referred to as CouchDB, is an open source document-oriented database written in the Erlang programming language. It borrows from NoSQL and is designed for local replication and to scale horizontally across a wide range of devices. CouchDB is supported by commercial enterprises Couchbase and Cloudant.

including Ubuntu. Every document in a CouchDB database has a unique id and there is no required document schema. CouchDB supports a view system using external socket servers and a JSON-based protocol. The documents in a collection need not share a schema. Damien Katz (former Lotus Notes developer at IBM. it graduated to a top-level project alongside the likes of the Apache HTTP Server. But you can also use ordered lists and associative maps. but what he did share was that it would be a "storage system for a large scale object database" and that it would be called CouchDB (Couch is an acronym for cluster of unreliable commodity hardware). He self-funded the project for almost two years and released it as an open source project under the GNU General Public License. “ Django may be built for the Web. Katz works on it full-time as the lead developer. numbers. CouchDB provides ACID semantics[9] . but CouchDB is built of the Web.CouchDB 32 History In April 2005. Details were sparse at this early stage. it became an Apache Incubator project and the license was changed to the Apache License rather [2] than the GPL.[7] Since Version 0. although queries may introduce temporary views. Views are defined with aggregate functions and filters are computed in parallel. It does this by implementing a form of Multi-Version Concurrency Control (MVCC) not unlike InnoDB or Oracle. Field values can be simple things like strings.11 CouchDB supports CommonJS' Module specification[8] . I’ve never seen software that so completely embraces the philosophies behind HTTP. but the project moved to the Erlang OTP platform for its emphasis on fault tolerance. CouchDB exposes a RESTful HTTP API and a large number of pre-written clients are available. Additionally. CTO of Couchbase) posted on his blog about a new database engine he was working on. Ruby. Django Developer [5] It is in use in many software projects and web sites[6] .[4] As a consequence. Views are generally stored in the database and their indexes updated continuously. much like MapReduce. ACID Semantics Like many relational database engines. On November 2008. PHP. You can think of a document as one or more field/value pairs expressed as JSON. Instead of storing data in rows and columns. view servers have been developed in a variety of languages. CouchDB design and philosophy borrows heavily from Web architecture and the concepts of resources. now founder. or dates. the database manages a collection of JSON documents. Tomcat and Ant. CouchDB makes Django look old-school in the same way that Django makes ASP look outdated. CouchDB was originally written in C++. a plugin architecture allows for using different computer languages as the view server such as JavaScript (default). ” —Jacob Kaplan-Moss.[1] His objectives for the database were for it to become the database of the Internet and that it would be designed from the ground up to serve web applications. It is not a relational database management system. methods and representations and can be simplified as the following. Design CouchDB is most similar to other document stores like MongoDB and Lotus Notes. Python and Erlang. where it is used to synchronize address and bookmark data.[3] Currently. In February 2008. but retain query abilities via views. That means CouchDB can . CouchDB is maintained at the Apache Software Foundation with backing from IBM. Support for other languages can be easily added. Features Document Storage CouchDB stores documents in their entirety.

0.1:5984/wiki The server replies with the following JSON message: {"db_name":"wiki".0. Delete) operations on all resources. or updated. if the database already exists: {"error":"file_exists". .1:5984/wiki CouchDB will reply with the following message. The function takes a document and transforms it into a single value which it returns.0. This provides a very powerful indexing mechanism that grants unprecedented control compared to most databases. Since computing a view over a large database can be an expensive operation. A lot of tools. All items have a unique URI that gets exposed via HTTP.1"} This is not terribly useful. REST API CouchDB treats all stored items (there are others besides documents) as a resource. but it illustrates nicely the way of interacting with CouchDB. REST uses the HTTP methods POST. That means multiple replicas can have their own copies of the same data. HTTP is widely understood. it returns a response in JSON as the following: {"couchdb":"Welcome". Distributed Architecture with Replication CouchDB was designed with bi-direction replication (or synchronization) and off-line operation in mind. scalable and proven technology. Map/Reduce Views and Indexes To provide some structure to the data stored in CouchDB."doc_count":0."doc_del_count":0. Read. the file already exists.0. Update.0. and then sync those changes at a later time. PUT or DELETE) by using the cURL lightweight command-line tool to interact with CouchDB server: curl http://127. modify it.CouchDB handle a high volume of concurrent readers and writers without conflict. GET. are available to do all sorts of things with HTTP like caching. In CouchDB. interoperable. Creating a database is simple—just issue the following command: curl -X PUT http://127."compact_running":false. POST. "purge_seq":0. removed. with a different response message. software and hardware. you can develop views that are similar to their relational database counterparts.0. proxying and load balancing."version":"1. each view is constructed by a JavaScript function (server-side JavaScript by using CommonJS and SpiderMonkey) that acts as the Map half of a MapReduce operation. 33 Examples CouchDB provides a set of RESTful HTTP methods (e."reason":"The database could not be created.1:5984/ The CouchDB server processes the HTTP request. The biggest gotcha typically associated with this level of flexibility is conflicts.g.. GET."} The command below retrieves information about the database: curl -X GET http://127. if the database does not exist: {"ok":true} or."update_seq":0."disk_size":79. PUT and DELETE for the four basic CRUD (Create. CouchDB can index views and keep those indexes updated as documents are added.0. The logic in your JavaScript functions can be arbitrarily complex.

Component / Description / License:
• SpiderMonkey: a code name for the first ever JavaScript engine, written by Brendan Eich at Netscape Communications, later released as open source and now maintained by the Mozilla Foundation. License: MPL/GPL/LGPL tri-license.
• ICU: International Components for Unicode (ICU) is an open source project of mature C/C++ and Java libraries for Unicode support, software internationalization and software globalization. ICU is widely portable to many operating systems and environments. License: MIT License.
• jQuery: a lightweight cross-browser JavaScript library that emphasizes interaction between JavaScript and HTML. License: Dual license, GPL and MIT.
• Erlang: a general-purpose concurrent programming language and runtime system. The sequential subset of Erlang is a functional language, with strict evaluation, single assignment, and dynamic typing. License: Modified MPL.
• OpenSSL: an open source implementation of the SSL and TLS protocols. The core library (written in the C programming language) implements the basic cryptographic functions and provides various utility functions. License: Apache-like, unique.
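Returning to the Map/Reduce view mechanism described earlier, a design document pairs a JavaScript map function with an optional reduce. The sketch below creates and queries such a view through the HTTP API using Python's requests library; it reuses the 'wiki' database from the curl examples above, while the design document and view names are invented.

import requests  # assumes a local CouchDB at the default port, as in the curl examples

BASE = "http://127.0.0.1:5984/wiki"

# The map half is a JavaScript function; the reduce half here is the built-in _sum.
design_doc = {
    "language": "javascript",
    "views": {
        "count_by_type": {
            "map": "function(doc) { if (doc.type) { emit(doc.type, 1); } }",
            "reduce": "_sum",
        }
    },
}

requests.put(f"{BASE}/_design/stats", json=design_doc)

# Query the view; group=true collapses the reduced results per distinct key.
print(requests.get(f"{BASE}/_design/stats/_view/count_by_type",
                   params={"group": "true"}).json())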

pp.). 300. 72.com/main/tag/couchdb) CouchDB green paper (http://manning. Noah. Joe (December 15.html) CouchDB news and articles on myNoSQL (http://nosql. pp. 2011). Bradley (April 11.com/tagged/couchdb) Scaling CouchDB (http://nosql. 76.google. Beginning CouchDB (http://www. CouchDB for Erlang Developers (http://www.com/catalog/9781449303433) (1st ed. Writing and Querying MapReduce Views in CouchDB (http://oreilly. O'Reilly Media.com/ catalog/0636920018247) (1st ed.org/relax/) CouchDB articles on NoSQLDatabases.org/) CouchDB: The Definitive Guide (http://books. 2009 by Damien Katz . 300. ISBN 0596158165 • Lennon. Chris.CouchDB 35 Bibliography • Anderson. 2011). 2009).apache. Slater.).nosqldatabases.apress. J.couchdb.). Scaling CouchDB (http://oreilly.).html) (1st ed. ISBN 1449303129 • Holt. ISBN 1430272376 • Holt.couchdb.000 feet Jan Lehnardt (http://video.mypopescu.com/1999/couchdb-php) Videos • Erlang eXchange 2008: Couch DB at 10. Bradley (March 7. CouchDB: The Definitive Guide (http:// guide.com/presentations/katz-couchdb-and-me) on Jan 31. O'Reilly Media.org/editions/1/en/index.org/couchdb/Complete_HTTP_API_Reference) • Simple PHP5 library to communicate with CouchDB (https://github.com/post/683838234/scaling-couchdb) • Complete HTTP API Reference (http://wiki.com/book/view/9781430272373) (1st ed. pp. O'Reilly Media.mypopescu. 2009). Jan (November 15. pp.com/ videoplay?docid=-3714560380544574985&hl=en#) • Jan Lehnardt is Giving the Following Talks. Apress.infoq.com/free/green_chandler. Lehnardt.apache.com/ conference/London2009/speakers/janlehnardt) • CouchDB and Me (http://www.erlang-factory. ISBN 1449303439 External links • • • • • • Official website (http://couchdb.com (http://www.

IEEE Computer. International. behavior that is heavily dictated by the contents of a database. ist. table-driven logic. acm.Data Diffusion Machine 36 Data Diffusion Machine Data Diffusion Machine is a historical virtual shared memory architecture where data is free to migrate through the machine. and S. much of which is either free or included with the operating system.D. Often this description is meant to contrast the design to an alternative approach. cfm?id=141718 [3] Henk L. i. For example. Muller.[1] [2] [3] Data Diffusion Machines were under active research in the late 1980s and early 1990s. edu/ viewdoc/ summary?doi=10. 1996. With the evolution of sophisticated DBMS software. David H.e.ac. psu. D. http:/ / citeseerx. but the research has ceased since then. 1. http:/ / portal. allows programs to be simpler and more flexible. p. The Data Diffusion Machine . 1. application developers have become increasingly reliant on standard database tools. In Proceedings of the 1988 International Conference on Fifth Generation Computer Systems. Haridi. 10th International Parallel Processing Symposium (IPPS '96). The Data Diffusion Machine (DDM) overcomes this problem by providing a virtual memory abstraction on top of a distributed memory machine. • using dynamic. A DDM appears to the user as a conventional shared memory machine but is implemented using a distributed memory architecture. A. Stallard. Hagersten. as opposed to logic embodied in previously compiled programs. org/ citation. generally relating to software architectures in which databases play a crucial role.[1] . Warren. [2] E. as opposed to greater reliance on logic running in middle-tier application servers in a multi-tier architecture. See also control tables for tables that are normally coded and embedded within programs as data structures (i.A.A Cache-only Memory Architecture. the characterization of an architecture as "database-centric" may mean any combination of the following: • using a standard. 152. Landin. as opposed to customized in-memory or file-based data structures and access methods.cs.A Scalable Shared Virtual Memory Multiprocessor. concluding that a database-centric approach has practical advantages from the standpoint of ease of development and maintainability. The use of table-driven logic. especially for the sake of rapid application development. 48. Shared memory machines are convenient for programming but do not scale beyond tens of processors.e. pp 943-952. • using stored procedures that run on database servers.uk/Research/DDM/) Database-centric architecture Database-centric architecture or data-centric architecture has several distinct meanings. Tokyo. "Implementing the Data Diffusion Machine using Crossbar Routers. 2301 External links • Data Diffusion Machine (University of Bristol) (http://www. This capability is a central feature of dynamic programming languages. general-purpose relational database management system. December 1988. DDM . Warren and Seif Haridi. not compiled statements) but could equally be read in from a flat file. Japan. September 1992. Toon Koppelaars presents a detailed analysis of alternative Oracle-based architectures that vary in the placement of business logic." Parallel Processing Symposium. database or even retrieved from a spreadsheet. For example. Paul W. The extent to which business logic should be placed at the back-end versus another tier is a subject of ongoing debate.bris. References [1] David H.

• using a shared database as the basis for communicating between parallel processes in distributed computing applications, as opposed to direct inter-process communication via message passing functions and message-oriented middleware. A potential benefit of database-centric architecture in distributed applications is that it simplifies the design by utilizing DBMS-provided transaction processing and indexing to achieve a high degree of reliability, performance, and capacity. For example, Base One describes a database-centric distributed computing architecture for grid and cluster computing, and explains how this design provides enhanced security, fault-tolerance, and scalability.[2]

References
[1] A database-centric approach to J2EE application development (http://web.inter.nl.net/users/T.Koppelaars/J2EE_DB_CENTRIC.doc)
[2] Database-Centric Grid and Cluster Computing (http://www.boic.com/dbgrid.htm)

Distributed application

Distributed Applications are applications running on two or more machines in a network, where each of these machines serves a specific purpose or task.

Introduction
Where classic software systems of the past century were mostly based on Client–server models and Client-centric application development (both ultimately running on one single computer, be it the client computer or the server), the introduction of Intelligent agents, Web APIs and Web 2.0, and the emergence of Cloud computing have led to more and more "multiple machine" approaches, where many systems on several locations can take care of Load balancing (computing) by re-distribution of specific tasks.

Examples
Distributed Applications can include:
1. Distributed systems using general purpose and specialized APIs
2. Real time systems for data-input by people – like HelpDesk software and Client Service Software taking care of appointments and updates on Client Data
3. Hardware systems like "the Internet of Things" – with independent components capable of processing specific tasks while communicating to other parts via a network
4. Render and computation farms – to render 3D images and do calculations on large datasets and process complex data in general

Distributed data flow

Distributed data flow (also abbreviated as distributed flow) refers to a set of events in a distributed application or protocol that satisfies the following informal properties:
• Asynchronous, non-blocking, and one-way. Each event represents a single instance of a non-blocking, one-way, asynchronous method invocation or other form of explicit or implicit message passing between two layers or software components. For example, each event might represent a single request to multicast a packet, issued by an application layer to an underlying multicast protocol. The requirement that events are one-way and asynchronous is important. Invocations of methods that may return results would normally be represented as two separate flows: one flow that represents the requests, and another flow that represents responses.
• Homogeneous, unidirectional, and uniform. All events in the distributed flow serve the same functional and logical purpose, and are related to one another; generally, we require that they represent method calls or message exchanges between instances of the same functional layers, or instances of the same components, but perhaps on different nodes within a computer network. Furthermore, all events must flow in the same direction (i.e., one type of a layer or component always produces, and the other always consumes the events), and carry the same type of a payload. For example, a set of events that includes all multicast requests issued by the same application layer to the same multicast protocol is a distributed flow. On the other hand, a set of events that includes multicast requests made by different applications to different multicast protocols would not be considered a distributed flow, and neither would be a set of events that represent multicast requests as well as acknowledgments and error notifications.
• Concurrent, continuous, and distributed. The flow usually includes all events that flow between the two layers of software, simultaneously at different locations, and over a finite or infinite period of time. Thus, in general, events in a distributed flow are distributed both in space (they occur at different nodes) and in time (they occur at different times). For example, the flow of multicast requests would include all such requests made by instances of the given application on different nodes; in such case, the flow would include events that occur on all nodes participating in the given multicast protocol. A flow in which all events occur at the same node would be considered degenerate.

[Figure: An illustration of the basic concepts involved in the definition of a distributed data flow.]

Formally, we represent each event in a distributed flow as a quadruple of the form (x,t,k,v), where x is the location (e.g., the network address of a physical node) at which the event occurs, t is the time at which this happens, k is a version, or a sequence number identifying the particular event, and v is a value that represents the event payload (e.g., all the arguments passed in a method call). Each distributed flow is a (possibly infinite) set of such quadruples that satisfies the following three formal properties.
• For any finite point in time t, there can be only finitely many events in the flow that occur at time t or earlier. The flow itself can be infinite; this implies that, in such a flow, one can always point to the point in time at which the flow originated, and that, at any point in time, eventually a new event will appear in the flow.

"Programming Live Distributed Objects with Distributed Data Flows". edu/ ~krzys/ krzys_debs2009.. and Sakoda. K. pdf [2] Ostrowski. http:/ / www. • Consistency. Nashville. Languages and Applications (OOPSLA 2009). edu/ ~krzys/ krzys_oopsla2009. 3rd ACM International Conference on Distributed Event-Based Systems (DEBS 2009).[3] 39 References [1] Ostrowski. 5th ACM SIGOPS Workshop on Programming Languages and Operating Systems (PLOS 2009). In addition to the above. (2009). pdf . then e_1 must carry a smaller value than e_2. MT. distributed flows are a more natural way of modeling the semantics and inner workings of certain classes of distributed systems.Distributed data flow • For any pair of events e_1 and e_2 that occur at the same location. if the two events have the same version numbers. 2009. "Distributed Data Flow Language for Multi-Party Protocols". Big Sky. Dolev.. Strongly monotonic flows are always consistent. K. Birman. 2009. Weakly monotonic flows may or may not be consistent. cornell. As such. in that they can represent state that is stored or communicated by a layer of software. flows can have a number of additional properties. http:/ / www. K.. edu/ ~krzys/ krzys_plos2009. July 6–9. Unlike variables or parameters. Systems. USA. if e_1 has a smaller version than e_2. Birman. cornell. TN. the distributed data flow abstraction has been used as a convenient way of expressing the high-level logical relationships between parts of distributed protocols [1] [2] . A distributed flow is said to be weakly monotonic if for any pair of events e_1 and e_2 that occur at the same location. cs. • For any pair of events e_1 and e_2 that occur at the same location. (2009). cs. October 11. "Implementing Reliable Event Streams in Large Systems via Distributed Data Flows and Recursive Delegation". http:/ / www. C. • Monotonicity. Consistent flows typically represent various sorts of global decisions made by the protocol or application. USA. They typically represent various sorts of irreversible decisions. even if they occur at different locations. D. Dolev.. they must also have the same values. Submitted to the International Conference on Object Oriented Programming. A distributed flow is said to be strongly monotonic (or simply monotonic) if this is true even for pairs of events e_1 and e_2 that occur at different locations. if e_1 occurs at an earlier time than e_2. which represent a unit of state that resides in a single location. (2009). D. K. distributed flows are dynamic and distributed: they simultaneously appear in multiple locations within the network at the same time. In particular. pdf [3] Ostrowski. and Dolev.. A distributed flow is said to be consistent if events with the same version always have the same value. cornell. Birman.. then the version number in e_1 must also be smaller than that of e_2. K. Distributed data flows serve a purpose analogous to variables or method parameters in programming languages such as Java. K. cs. D..

Distributed database

A distributed database is a database in which storage devices are not all attached to a common CPU. It may be stored in multiple computers located in the same physical location, or may be dispersed over a network of interconnected computers. Collections of data (e.g. in a database) can be distributed across multiple physical locations. A distributed database can reside on network servers on the Internet, on corporate intranets or extranets, or on other company networks. The replication and distribution of databases improves database performance at end-user worksites. A distributed database does not share main memory or disks.

To ensure that the distributive databases are up to date and current, there are two processes: replication and duplication.[1] Replication involves using specialized software that looks for changes in the distributive database. Once the changes have been identified, the replication process makes all the databases look the same. The replication process can be very complex and time consuming depending on the size and number of the distributive databases, and can also require a lot of time and computer resources. Duplication, on the other hand, is not as complicated. It basically identifies one database as a master and then duplicates that database. The duplication process is normally done at a set time after hours, to ensure that each distributed location has the same data. In the duplication process, changes to the master database only are allowed, so that local data will not be overwritten. Both of the processes can keep the data current in all distributive locations.[2]

Besides distributed database replication and fragmentation, there are many other distributed database design technologies, for example local autonomy, and synchronous and asynchronous distributed database technologies. These technologies' implementation can and does depend on the needs of the business and the sensitivity/confidentiality of the data to be stored in the database, and hence the price the business is willing to spend on ensuring data security, consistency and integrity.

Basic architecture
A database user accesses the distributed database through:
Local applications — applications which do not require data from other sites.
Global applications — applications which do require data from other sites.

Important considerations
Care with a distributed database must be taken to ensure the following:
• The distribution is transparent — users must be able to interact with the system as if it were one logical system. This applies to the system's performance, and methods of access among other things.
• Transactions are transparent — each transaction must maintain database integrity across multiple databases. Transactions must also be divided into subtransactions, each subtransaction affecting one database system.

Advantages of distributed databases
• Management of distributed data with different levels of transparency.
• Increased reliability and availability.
• Easier expansion.
• Reflects organizational structure — database fragments are located in the departments they relate to.
• Local autonomy — a department can control the data about them (as they are the ones familiar with it).
• Protection of valuable data — if there were ever a catastrophic event such as a fire, all of the data would not be in one place, but distributed in multiple locations.
• Improved performance — data is located near the site of greatest demand, and the database systems themselves are parallelized, allowing load on the databases to be balanced among servers. (A high load on one module of the database won't affect other modules of the database in a distributed database.)
• Economics — it costs less to create a network of smaller computers with the power of a single large computer.
• Modularity — systems can be modified, added and removed from the distributed database without affecting other modules (systems).
• Reliable transactions — due to replication of database.
• Hardware, Operating System, Network, Fragmentation, DBMS, Replication and Location Independence.
• Continuous operation.
• Distributed query processing.
• Distributed transaction management.
• Single site failure does not affect performance of the system.
• All transactions follow the A.C.I.D. properties: a-atomicity, the transaction takes place as a whole or not at all; c-consistency, maps one consistent DB state to another; i-isolation, each transaction sees a consistent DB; d-durability, the results of a transaction must survive system failures.
The Merge Replication Method is used to consolidate the data between databases.

Disadvantages of distributed databases
• Complexity — extra work must be done by the DBAs to ensure that the distributed nature of the system is transparent. Extra work must also be done to maintain multiple disparate systems, instead of one big one. Extra database design work must also be done to account for the disconnected nature of the database — for example, joins become prohibitively expensive when performed across multiple systems.
• Economics — increased complexity and a more extensive infrastructure means extra labour costs.
• Security — remote database fragments must be secured, and because they are not centralized the remote sites must be secured as well. The infrastructure must also be secured (e.g., by encrypting the network links between remote sites).
• Difficult to maintain integrity — in a distributed database, enforcing integrity over a network may require too much of the network's resources to be feasible.
• Inexperience — distributed databases are difficult to work with, and as a young field there is not much readily available experience on proper practice.
• Lack of standards — there are no tools or methodologies yet to help users convert a centralized DBMS into a distributed DBMS.
• Database design is more complex — besides the normal difficulties, the design of a distributed database has to consider fragmentation of data, allocation of fragments to specific sites, and data replication.
• Additional software is required.
• The operating system should support a distributed environment.
• Concurrency control is a major issue. It is solved by locking and timestamping.

References
[1] O'Brien, J. & Marakas, G.M. (2008). Management Information Systems (pp. 185-189). New York, NY: McGraw-Hill Irwin
[2] O'Brien, J. & Marakas, G.M. (2008). Management Information Systems (pp. 185-189). New York, NY: McGraw-Hill Irwin
• M.T. Ozsu and P. Valduriez. Principles of Distributed Databases (2nd edition). Prentice-Hall. ISBN 0-13-659707-6
• Elmasri and Navathe. Fundamentals of database systems (3rd edition). Addison-Wesley Longman. ISBN 0-201-54263-3
• This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm).

Distributed design patterns

In software engineering, a distributed design pattern is a design pattern focused on distributed computing problems.

Classification
Distributed design patterns can be divided into several groups:
• Distributed communication patterns
• Security and reliability patterns
• Event driven patterns

Examples
• MapReduce
• Bulk synchronous parallel
A minimal single-process sketch of the MapReduce pattern follows below.
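The sketch below illustrates the MapReduce pattern named above in plain, single-process Python. The function and variable names are illustrative only; a real framework would run the map and reduce calls on many machines and shuffle the intermediate pairs over the network.

# Minimal single-process sketch of the MapReduce pattern (word count).
from collections import defaultdict

def map_phase(document):
    """Emit (key, value) pairs: one ('word', 1) per word."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group intermediate values by key, as the framework's shuffle phase would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Combine all values emitted for one key."""
    return key, sum(values)

documents = ["map reduce splits work", "reduce combines the work"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, vs) for k, vs in shuffle(intermediate).items())
print(counts["work"], counts["reduce"])  # 2 2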

Distributed Interactive Simulation

Distributed Interactive Simulation (DIS) is an open standard for conducting real-time platform-level wargaming across multiple host computers and is used worldwide, especially by military organizations but also by other agencies such as those involved in space exploration and medicine.

History
The standard was developed over a series of "DIS Workshops" at the Interactive Networked Simulation for Training symposium, held by the University of Central Florida's Institute for Simulation and Training (IST). The standard itself is very closely patterned after the original SIMNET distributed interactive simulation protocol, developed by Bolt, Beranek and Newman (BBN) for the Defense Advanced Research Project Agency (DARPA) in the early through late 1980s. BBN introduced the concept of dead reckoning to efficiently transmit the state of battle field entities. In the early 1990s, IST was contracted by the United States Defense Advanced Research Project Agency to undertake research in support of the US Army Simulator Network (SimNet) program.

Funding and research interest for DIS standards development decreased following the proposal and promulgation of its successor, the High Level Architecture (simulation), in 1996. HLA was produced by the merger of the DIS protocol with the Aggregate Level Simulation Protocol (ALSP) designed by MITRE.

There was a NATO standardisation agreement (STANAG 4482, Standardised Information Technology Protocols for Distributed Interactive Simulation (DIS), adopted in 1995) on DIS for modelling and simulation interoperability. This was retired in favour of HLA in 1998 and officially cancelled in 2010 by the NATO Standardisation Agency (NSA).

The DIS family of standards
DIS is defined under IEEE Standard 1278:
• IEEE 1278-1993 - Standard for Distributed Interactive Simulation - Application protocols
• IEEE 1278.1-1995 - Standard for Distributed Interactive Simulation - Application protocols[1] (Errata issued May 1998)
• IEEE 1278.1A-1998 - Standard for Distributed Interactive Simulation - Application protocols
• IEEE 1278.2-1995 - Standard for Distributed Interactive Simulation - Communication Services and Profiles
• IEEE 1278.3-1996 - Recommended Practice for Distributed Interactive Simulation - Exercise Management and Feedback
• IEEE 1278.4-1997 - Recommended Practice for Distributed Interactive Simulation - Verification Validation & Accreditation
• IEEE 1278.5-XXXX - Standard for Distributed Interactive Simulation - Fidelity Description Requirements (never published)

In addition to the IEEE standards, the Simulation Interoperability Standards Organization (SISO) maintains and publishes an "enumerations and bit encoded fields" document yearly. This document is referenced by the IEEE standards and used by DIS, TENA and HLA federations. Both PDF and XML versions are available.

Current status
SISO, a sponsor committee of the IEEE, promulgates improvements in DIS. Major changes are already in the DIS 7 draft update to IEEE 1278.1[1] to make DIS more extensible, more efficient, and able to support the simulation of more real world capabilities. It provides extensive clarification and more details of requirements. (See External Link - DIS Product Development Group.)[2]

Application protocol
Simulation state information is encoded in formatted messages, known as protocol data units (PDUs), and exchanged between hosts using existing transport layer protocols, including multicast, though broadcast User Datagram Protocol is also supported. There are several versions of the DIS application protocol, not only including the formal standards, but also drafts submitted during the standards balloting process:
• Version 1 - Version 1.0 Draft (1992)
• Version 2 - Version 2.0 Third Draft (May 1993)
• Version 3 - Version 2.0 Fourth Draft (March 1994)
• Version 4 - IEEE 1278-1993
• Version 5 - IEEE 1278.1-1995
• Version 6 - IEEE 1278.1a-1998 (amendment to IEEE 1278.1-1995)
• Version 7 - IEEE 1278.1-2010 (in preparation - scheduled for completion and IEEE balloting in the Spring of 2010)[2]
Version 7 is also called DIS 7. This is a major upgrade to DIS to enhance extensibility and flexibility, and it adds some higher-fidelity mission capabilities.

Protocol data units
The current version (DIS 6) defines 67 different PDU[3] types, arranged into 12 families. Frequently used PDU types are listed below for each family; PDU and family names shown in italics are included in the present draft of DIS 7.
• Entity information/interaction family - Entity State, Collision, Collision-Elastic, Entity State Update, Attribute
• Warfare family - Fire, Detonation, Directed Energy Fire, Entity Damage Status
• Logistics family - Service Request, Resupply Offer, Resupply Received, Resupply Cancel, Repair Complete, Repair Response
• Simulation management family - Start/Resume, Stop/Freeze, Acknowledge
• Distributed emission regeneration family - Designator, Electromagnetic Emission, IFF/ATC/NAVAIDS, Underwater Acoustic, Supplemental Emission/Entity State (SEES)
• Radio communications family - Transmitter, Signal, Receiver, Intercom Signal, Intercom Control
• Entity management family
• Minefield family
• Synthetic environment family
• Simulation management with reliability family
• Live entity family
• Non-real time family
• Information Operations family - Information Operations Action, Information Operations Report
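To make the PDU encoding described above concrete, the sketch below packs a simplified 12-byte PDU header with Python's struct module. The field layout (protocol version, exercise ID, PDU type, protocol family, timestamp, length, padding) follows the common 1278.1-style header, but the example values, including the type code used for "Entity State", are illustrative assumptions rather than a normative encoding.

# Rough sketch of packing a DIS-style PDU header (illustrative only; the real
# IEEE 1278.1 PDUs carry many more fields after this common header).
import struct

PDU_HEADER = struct.Struct("!BBBBIHH")  # network byte order, 12 bytes

def pack_header(protocol_version, exercise_id, pdu_type, protocol_family,
                timestamp, length, padding=0):
    """Pack the common header fields into their wire representation."""
    return PDU_HEADER.pack(protocol_version, exercise_id, pdu_type,
                           protocol_family, timestamp, length, padding)

def unpack_header(data):
    """Unpack the first 12 bytes of a received PDU back into a tuple."""
    return PDU_HEADER.unpack(data[:PDU_HEADER.size])

# Example: a header for a hypothetical Entity State PDU in exercise 3.
raw = pack_header(protocol_version=6, exercise_id=3, pdu_type=1,
                  protocol_family=1, timestamp=0, length=PDU_HEADER.size)
print(unpack_header(raw))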

References
[1] "Corrections to Standard for Distributed Interactive Simulation - Application protocols" (http://standards.ieee.org/reading/ieee/updates/errata/1278.1-1995.pdf). IEEE. Retrieved 2010-05-17.
[2] DIS 7 Overview, SISO PSG File Library (http://www.sisostds.org/DigitalLibrary.aspx?EntryId=29288)
[3] "1278.1-1995 - IEEE Standard for Distributed Interactive Simulation - Application Protocols" (http://ieeexplore.ieee.org/servlet/opac?punumber=5896). IEEE. Retrieved 2010-05-17.

External links
• SISO DIS Product Support Group (http://www.sisostds.org/StandardsActivities/SupportGroups/DISPSGDistributedInteractiveSimulation.aspx)

Distributed lock manager

A distributed lock manager (DLM) provides distributed software applications with a means to synchronize their accesses to shared resources.

DLMs have been used as the foundation for several successful clustered file systems, in which the machines in a cluster can use each other's storage via a unified file system, with significant advantages for performance and availability. The main performance benefit comes from solving the problem of disk cache coherency between participating computers. The DLM is used not only for file locking but also for coordination of all disk access. VMScluster, the first clustering system to come into widespread use, relies on the OpenVMS DLM in just this way.

VMS implementation
VMS was the first widely-available operating system to implement a DLM. This became available in Version 4, although the user interface was the same as the single-processor lock manager that was first implemented in Version 3.

Resources
The DLM uses a generalised concept of a resource, which is some entity to which shared access must be controlled. This can relate to a file, a record, an area of shared memory, or anything else that the application designer chooses. A hierarchy of resources may be defined, so that a number of levels of locking can be implemented. For instance, a hypothetical database might define a resource hierarchy as follows:
• Database
• Table
• Record
• Field
A process can then acquire locks on the database as a whole, and then on particular parts of the database. A lock must be obtained on a parent resource before a subordinate resource can be locked.

Lock modes
A process running within a VMSCluster may obtain a lock on a resource. There are six lock modes that can be granted, and these determine the level of exclusivity of access to the resource.
• Null Lock (NL). Indicates interest in the resource, but does not prevent other processes from locking it. It has the advantage that the resource and its lock value block are preserved, even when no processes are locking it.
• Concurrent Read (CR). Indicates a desire to read (but not update) the resource. It allows other processes to read or update the resource, but prevents others from having exclusive access to it. This is usually employed on high-level resources, in order that more restrictive locks can be obtained on subordinate resources.
• Concurrent Write (CW). Indicates a desire to read and update the resource. It also allows other processes to read or update the resource, but prevents others from having exclusive access to it. This is also usually employed on high-level resources, in order that more restrictive locks can be obtained on subordinate resources.
• Protected Read (PR). This is the traditional share lock, which indicates a desire to read the resource but prevents others from updating it. Others can however also read the resource.
• Protected Write (PW). This is the traditional update lock, which indicates a desire to read and update the resource and prevents others from updating it. Others with Concurrent Read access can however read the resource.
• Exclusive (EX). This is the traditional exclusive lock which allows read and update access to the resource, and prevents others from having any access to it.

The following truth table shows the compatibility of each lock mode with the others:

Mode  NL   CR   CW   PR   PW   EX
NL    Yes  Yes  Yes  Yes  Yes  Yes
CR    Yes  Yes  Yes  Yes  Yes  No
CW    Yes  Yes  Yes  No   No   No
PR    Yes  Yes  No   Yes  No   No
PW    Yes  Yes  No   No   No   No
EX    Yes  No   No   No   No   No

Obtaining a lock
A process can obtain a lock on a resource by enqueueing a lock request. This is similar to the QIO technique that is used to perform I/O. The enqueue lock request can either complete synchronously, in which case the process waits until the lock is granted, or asynchronously, in which case an AST occurs when the lock has been obtained. Once a lock has been granted, it is possible to convert the lock to a higher or lower level of lock mode.

It is also possible to establish a blocking AST, which is triggered when a process has obtained a lock that is preventing access to the resource by another process. The original process can then optionally take action to allow the other access (e.g. by demoting or releasing the lock).
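The compatibility table above can be expressed directly as a small lookup structure. The following sketch uses hypothetical helper names (it is not the VMS $ENQ interface) to decide whether a requested mode can coexist with the modes already granted on a resource.

# Minimal sketch (not VMS code) of the lock-mode compatibility table above.
COMPATIBLE = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

def can_grant(requested, held_modes):
    """True if `requested` is compatible with every mode already granted."""
    return all(requested in COMPATIBLE[held] for held in held_modes)

print(can_grant("PR", ["CR", "PR"]))  # True: shared readers coexist
print(can_grant("EX", ["CR"]))        # False: EX conflicts with anything but NL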

Lock value block
A lock value block is associated with each resource. This can be read by any process that has obtained a lock on the resource (other than a null lock) and can be updated by a process that has obtained a protected update or exclusive lock on it.

It can be used to hold any information about the resource that the application designer chooses. A typical use is to hold a version number of the resource. Each time the associated entity (e.g. a database record) is updated, the holder of the lock increments the lock value block. When another process wishes to read the resource, it obtains the appropriate lock and compares the current lock value with the value it had last time the process locked the resource. If the value is the same, the process knows that the associated entity has not been updated since last time it read it, and therefore it is unnecessary to read it again. Hence, this technique can be used to implement various types of cache in a database or similar application.

Deadlock detection
When one or more processes have obtained locks on resources, it is possible to produce a situation where each is preventing another from obtaining a lock, and none of them can proceed. This is known as a deadly embrace or deadlock. A simple example is when Process 1 has obtained an exclusive lock on Resource A, and Process 2 has obtained an exclusive lock on Resource B. If Process 1 then tries to lock Resource B, it will have to wait for Process 2 to release it. But if Process 2 then tries to lock Resource A, both processes will wait forever for each other.

The OpenVMS DLM periodically checks for deadlock situations. In the example above, the second lock enqueue request of one of the processes would return with a deadlock status. It would then be up to this process to take action to resolve the deadlock — in this case by releasing the first lock it obtained.

Linux clustering
Both Red Hat and Oracle have developed clustering software for Linux. OCFS2, the Oracle Cluster File System, was added[1] to the official Linux kernel with version 2.6.16, in January 2006. The alpha-quality code warning on OCFS2 was removed in 2.6.19. Red Hat's cluster software, including their DLM and Global File System, was officially added to the Linux kernel[2] with version 2.6.19, in November 2006.

Both systems use a DLM modeled on the venerable VMS DLM.[3] Oracle's DLM has a simpler API (the core function, dlmlock(), has eight parameters, whereas the VMS SYS$ENQ service and Red Hat's dlm_lock both have 11).

Google's Chubby lock service
Google has developed Chubby, a lock service for loosely-coupled distributed systems.[4] It is designed for coarse-grained locking and also provides a limited but reliable distributed file system. Key parts of Google's infrastructure, including Google File System, BigTable, and MapReduce, use Chubby to synchronize accesses to shared resources. Though Chubby was designed as a lock service, it is now heavily used inside Google as a name server, supplanting DNS.[4]
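Returning to the lock value block technique described above, the following is a minimal sketch of the cache-validation pattern it enables. The class and function names are illustrative assumptions, not a real DLM API: the "lock value block" is modelled as a plain version counter, and readers only reload the record when that version has changed.

# Minimal sketch (hypothetical names, not a real DLM API) of version-based
# cache validation using a lock value block.
class Resource:
    def __init__(self):
        self.lock_value_block = 0   # version number shared via the DLM
        self.record = None          # the protected entity, e.g. a DB record

def update(resource, new_record):
    """Writer path: holds PW/EX, updates the record and bumps the version."""
    resource.record = new_record
    resource.lock_value_block += 1

class CachingReader:
    def __init__(self):
        self.cached_version = None
        self.cached_record = None

    def read(self, resource):
        """Reader path: holds PR, reloads only if the version changed."""
        if resource.lock_value_block != self.cached_version:
            self.cached_record = resource.record      # expensive re-read
            self.cached_version = resource.lock_value_block
        return self.cached_record

r = Resource()
reader = CachingReader()
update(r, "row v1")
print(reader.read(r))   # loads "row v1"
print(reader.read(r))   # version unchanged, served from cache
update(r, "row v2")
print(reader.read(r))   # version changed, reloads "row v2"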

SSI systems
A DLM is also a key component of more ambitious single system image projects such as OpenSSI.

References
[1] http://www.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=29552b1462799afbe02af035b243e97579d63350
[2] http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=1c1afa3c053d4ccdf44e5a4e159005cdfd48bfc6
[3] http://lwn.net/Articles/137278/
[4] http://labs.google.com/papers/chubby.html

• HP OpenVMS Systems Services Reference Manual – $ENQ (http://h71000.www7.hp.com/doc/82FINAL/4527/4527pro_044.html#jun_227)
• ARCS - A Web Service used as a Distributed Lock Manager (http://www.arcs.us)

Distributed memory

In computer science, distributed memory refers to a multiple-processor computer system in which each processor has its own private memory. Computational tasks can only operate on local data, and if remote data is required, the computational task must communicate with one or more remote processors. In contrast, a shared memory multiprocessor offers a single memory space used by all processors. Processors do not have to be aware where data resides, except that there may be performance penalties, and that race conditions are to be avoided.

[Figure: An illustration of a distributed memory system of three computers]

Architecture
In a distributed memory system there is typically a processor, a memory, and some form of interconnection that allows programs on each processor to interact with each other. The interconnect can be organised with point to point links, or separate hardware can provide a switching network. The network topology is a key factor in determining how the multi-processor machine scales. The links between nodes can be implemented using some standard network protocol (for example Ethernet), using bespoke network links (used in, for example, the Transputer), or using dual-ported memories.

Programming distributed memory machines
The key issue in programming distributed memory systems is how to distribute the data over the memories. Depending on the problem solved, the data can be distributed statically, or it can be moved through the nodes. Data can be moved on demand, or data can be pushed to the new nodes in advance.

As an example, if a problem can be described as a pipeline where data X is processed subsequently through functions F, G, H, etc. (the result is H(G(F(X)))), then this can be expressed as a distributed memory problem where the data is transmitted first to the node that performs F, which passes the result onto the second node that computes G, and finally to the third node that computes H. This is also known as systolic computation.

Data can be kept statically in nodes if most computations happen locally, and only changes on edges have to be reported to other nodes. An example of this is simulation where data is modeled using a grid, and each node simulates a small part of the larger grid. On every iteration, nodes inform all neighboring nodes of the new edge data.

Distributed shared memory
Similarly, in distributed shared memory each node of a cluster has access to a large shared memory in addition to each node's limited non-shared private memory.

Shared memory versus distributed memory versus distributed shared memory
The advantage of (distributed) shared memory is that it offers a unified address space in which all data can be found.
The advantage of distributed memory is that it excludes race conditions, and that it forces the programmer to think about data distribution.
The advantage of distributed (shared) memory is that it is easier to design a machine that scales with the algorithm.
Distributed shared memory hides the mechanism of communication; it does not hide the latency of communication.
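The H(G(F(X))) pipeline described above can be sketched in a few lines. In the illustrative example below, multiprocessing pipes stand in for the network links between nodes, and each stage runs in its own process; the function names F, G and H are arbitrary placeholders.

# Minimal sketch of the H(G(F(X))) pipeline: one process per stage, values
# passed over explicit channels (pipes standing in for inter-node links).
from multiprocessing import Process, Pipe

def F(x): return x + 1
def G(x): return x * 2
def H(x): return x - 3

def stage(func, inbound, outbound):
    """One 'node': receive a value, apply its function, forward the result."""
    value = inbound.recv()
    outbound.send(func(value))

if __name__ == "__main__":
    a_out, a_in = Pipe()      # source -> F
    b_out, b_in = Pipe()      # F -> G
    c_out, c_in = Pipe()      # G -> H
    d_out, d_in = Pipe()      # H -> sink

    workers = [Process(target=stage, args=(F, a_out, b_in)),
               Process(target=stage, args=(G, b_out, c_in)),
               Process(target=stage, args=(H, c_out, d_in))]
    for w in workers:
        w.start()

    a_in.send(10)             # inject X = 10
    print(d_out.recv())       # H(G(F(10))) = (10 + 1) * 2 - 3 = 19
    for w in workers:
        w.join()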

Distributed object

The term distributed objects usually refers to software modules that are designed to work together, but reside either in multiple computers connected via a network or in different processes inside the same computer. One object sends a message to another object in a remote machine or process to perform some task. The results are sent back to the calling object.

[Image: communication between distributed objects residing in different machines.]

The term may also generally refer to one of the extensions of the basic object concept used in the context of distributed computing, such as replicated objects or live distributed objects.
• Replicated objects are groups of software components (replicas) that run a distributed multi-party protocol to achieve a high degree of consistency between their internal states, and that respond to requests in a coordinated manner. Referring to the group of replicas jointly as an object reflects the fact that interacting with any of them exposes the same externally visible state and behavior.
• Live distributed objects (or simply live objects)[1] generalize the replicated object concept to groups of replicas that might internally use any distributed protocol, perhaps resulting in only a weak consistency between their local states. Live distributed objects can also be defined as running instances of distributed multi-party protocols, viewed from the object-oriented perspective as entities that have distinct identity, and that can encapsulate distributed state and behavior.

Local vs Distributed Objects
Local and distributed objects differ in many respects.[2] Here are some of them:
1. Life cycle: creation, migration and deletion of distributed objects is different from local objects
2. Reference: remote references to distributed objects are more complex than simple pointers to memory addresses
3. Request latency: a distributed object request is orders of magnitude slower than local method invocation
4. Object activation: distributed objects may not always be available to serve an object request at any point in time
5. Parallelism: distributed objects may be executed in parallel
6. Communication: there are different communication primitives available for distributed object requests
7. Failure: distributed objects have far more points of failure than typical local objects
8. Security: distribution makes them vulnerable to attack

Examples
Distributed objects are implemented in Objective-C using the Cocoa API with the NSConnection class and supporting objects.
Distributed objects are used in Java RMI.
CORBA lets one build distributed mixed object systems.
DCOM is a framework for distributed objects on the Microsoft platform.
DDObjects is a framework for distributed objects using Borland Delphi.
Jt is a framework for distributed components using a messaging paradigm.
JavaSpaces is a Sun specification for a distributed, shared memory (spaces based).

Pyro is a framework for distributed objects using the Python programming language.
Distributed Ruby (DRb) is a framework for distributed objects using the Ruby programming language.

References
[1] Ostrowski, K., Birman, K., Dolev, D., and Ahnn, J. (2008). "Programming with Live Distributed Objects". Proceedings of the 22nd European Conference on Object-Oriented Programming, Paphos, Cyprus, July 07 - 11, 2008, J. Vitek, Ed., Lecture Notes In Computer Science, vol. 5142, Springer-Verlag, Berlin, Heidelberg, 463-489. http://portal.acm.org/citation.cfm?id=1428508.1428536
[2] W. Emmerich (2000). Engineering distributed objects. John Wiley & Sons Ltd.

Distributed shared memory

Distributed Shared Memory (DSM), in computer architecture, is a form of memory architecture where the (physically separate) memories can be addressed as one (logically shared) address space. Here, the term shared does not mean that there is a single centralized memory; shared essentially means that the address space is shared (the same physical address on two processors refers to the same location in memory).[1] Alternatively, in computer science it is known as (DGAS), a concept that refers to a wide class of software and hardware implementations, in which each node of a cluster has access to shared memory in addition to each node's non-shared private memory.

Software DSM systems can be implemented in an operating system, or as a programming library. Software DSM systems implemented in the operating system can be thought of as extensions of the underlying virtual memory architecture. Such systems are transparent to the developer, which means that the underlying distributed memory is completely hidden from the users. In contrast, software DSM systems implemented at the library or language level are not transparent and developers usually have to program differently. However, these systems offer a more portable approach to DSM system implementation.

Software DSM systems also have the flexibility to organize the shared memory region in different ways. The page based approach organizes shared memory into pages of fixed size. In contrast, the object based approach organizes the shared memory region as an abstract space for storing shareable objects of variable sizes. Another commonly seen implementation uses a tuple space, in which the unit of sharing is a tuple.

Shared memory architecture may involve separating memory into shared parts distributed amongst nodes and main memory, or distributing all memory between nodes. A coherence protocol, chosen in accordance with a consistency model, maintains memory coherence.

Examples of such systems include:
• Kerrighed
• OpenSSI
• MOSIX
• Terracotta
• TreadMarks
• DIPC

 201. Fourth Edition. OpenSocial widget APIs.com) • Memory coherence in shared virtual memory systems (http://portal.org/citation. OpenID authentication. Application framework. The software of the projects is generally free and open source. ISBN 0123704901. p. the Portable Contacts protocol. and John L. private messaging server [3] PHP MIT HTTP + REST. not yet demo [5] alpha 5 total Ampify Trust-based search. It contrasts with social network aggregation services. Comparison of projects The protocols of these projects are generally open and free.Distributed shared memory 52 References [1] Patterson. A few social networking service providers have used the term more broadly to describe provider-specific services that are distributable across different websites. which are used to manage accounts and activities across multiple discrete social networks. web-hook style sensor network development . addressbook. Public Domain HTTPS. Through the add-ons. Open standards such as OAuth authorization. typically through added widgets or plug-ins. Nov. and Atom web feeds—increasingly referred to together as the Open Stack—are often cited as enabling [1] technologies for distributed social networking. OStatus federation. the social network functionality is implemented on users' websites.cfm?id=75105&am) by Kai Li. microformats [4] Addressbook to send posts to either individuals or groups. microformats like XFN and hCard. External links • Distributed Shared Cache (http://www. Computer architecture : a quantitative approach. the Wave Federation Protocol. interoperability and federation capability. 1989 Distributed social network A distributed social network is an Internet social network service that is decentralized and distributed across different providers. Ampify Messaging Protocol Provides fine grained privacy control through object capability security and transport layer encryption. Morgan Kaufmann Publishers. The emphasis of the distribution is on portabilitya[›]. Project Name Features Software Programming Language 6d License Protocols Privacy Support Federation (with other applications or services) Instances Version/Maturity [2] Blog. XRD metadata discovery.acm. media library. Volume 7 Issue 4.sharedcache. Hennessy (2007). Paul Hudak published in ACM Transactions on Computer Systems. themeable. David A.

0 Friend2Friend [35] Strong encryption. hCard. Forum. photos. OStatus (next release).0 changing Salmon [17] . Channel Protocol [14] . WebOfTrust. email. but pre 1. 'aspects' . XML for all data exchange. ChoiceSocial. collaborative drawing. ChoiceSocial (web interface) Distributed Social Networking Protocol (DSNP) ? ? Friends in Feed [31] . profiles. XOXO). OStatus in testingdue in next release beta. opendd. scrobbling. photo/video sharing server client [12] [13] . IRC Excellent. messaging. Yes hosted on every users computer stable.Distributed social network [6] Photos. profile. XMPP chat. in use Approximately 120 [11] buddycloud [10] Location. content XMPP. DistribSocial. others easily added (plugin architecture) Appleseed total [9] beta. global darknet DHT on restricted routes (FOAF) or Opennet (anonymizing DHT). Activity Streams. in use Duuit! Search. OAuth push/pull. games. Privacy controls.0 XMPP. XMPP. video chat. pseudonymity. Location Query Diaspora Microblogging. OAuth. files. third party plugins p2p Java GPL [34] UDP. Java Apache 2. groups. privacy controls.friend management Diaspora X 2 [20] Yes in development [15] server [16] Ruby AGPL 3. feed reader. Atom. OpenSocial. document creation and editing. avatar. webpages.net [33] [29] GPLv2 FOAF. OStatus. anonymous DVCS. granular. ? alpha . buddycloud channels Activity Streams ? ? [22] Diaspora X 2 [24] . OpenID. Data is digitally signed LGPL Connect to known individuals. Messaging. Groups. photo sharing. Journals. updating bookmarks. Blog. PubSubHubbub. Status Updates. videos. RSS/Atom. buddycloud for federation DiSo Project [23] ? ? [25] WordPress plugins [26] microformats (XFN. XMPP [27] DSNP [28] DSNPd (server daemon). blogs. Newsfeeds 53 [7] PHP GPLv2 QuickSocial Appleseed server [8] Friend circles used to categorize friends and restrict/allow access Internally. microblogging. email. [18] ? ? Diaspora Alpha Wiki [19] pre-alpha 24 listed on Diaspora client using [21] XMPP. acl. customizable interface Freenet Censorship resistant publishing. anonymity. JavaScript.net [28] [30] [32] . mood.

Apache Wave (generates . wave extensions (gadgets. Kopal Feed microformat Kune [50] demo [51] real-time colaborative edition. videos). DFRN demo . Facebook. integrates Java-based GWT AJAX) AGPLv3 XMPP.0 services via XFN and FOAF. automatically updated address book from remote data sources.ca/Status. Kopal Connect protocol ? ? alpha [49] . Fans and one-way relationships. federation server. Twitter. photo albums. Communications encryption. contact import from Web 2.Distributed social network [36] Rich profiles. personal SPARQL API W3C OpenID. consolidated profile with RDF/FOAF export. profiles. multiple profiles w/assignment to specific friends. location. richtext status (not specifically length limited). networking groups. email.in [44] (based on SatusNet) Jappix [45] XMPP client + Microblogging server. GNU-social. maps. tasks. blogs/feeds/Diaspora/Google (via RSS/ATOM). lists. Friendika server components [38] [40] stable/production [39] [41] Server [42] AGPLv3 OStatus [43] ? Yes daisycha. community/group/celebrity pages. XMPP chat. like/dislike. more in development 54 [37] PHP BSD OStatus OpenID. multiple profiles Server [48] MIT OpenID. youtube share. galleries (photos. Local and global directory services. FOAF ? ? alpha Kopal [47] OpenID Core. identi. Apache Wave inbox (modern email). GNU Social extensive Friendika.Net. public webpages. Ability to restrict connection endpoints. web client AGPL XMPP Excellent: based on presence authorizations ? demo [46] production Knowee OpenID Signup. Wave Federation Protocol Total federation/interoperability with other Kune Excellent installations and Apache Wave accounts. single sign on to post directly to friend's profiles on co-operating systems. XMPP chat interoperable with other XMPP-compliant [52] alpha groups. robots). blogs. Activity Stream import and export.

Portable Contacts. id.0 .. groups. OpenMicroBlogger User-toggleable "apps" to add/remove functionality. Feed Aggregation.1. tasks. Address Spaces (ODS) Profile Management. (partial) OStatus (PubSubHubbub) Yes Yes alpha AGPLv3 WebDAV. PubSubHubbub. OAuth.0 ? PHP AGPLv3 XMPP Excellent not yet Yes ? not yet Yes demo development [57] [58] [59] [60] OpenID. OpenSocial. ownCloud Cloudstorage and plugins for Photos. rdf+sparql (10% development) Movim XMPP client + Microblogging Mr. security. features being added. HTTP. group mailing lists. streams. Wikis. File Servers (WebDAV based Briefcase). Social Graph API.1. flexible hosting. Fully Restful design. (partial) Twitter API support. application platform OneSocialWeb NoseRub protocol / WebID SimPL 2. RSS and more MIT Open Microblogging 0. Flickr integration. Activity Streams. Semantic Pingback. XMPP/psyc (50% development). RSSCloud and partial OStatus (PubSubHubbub) federation as well as Open Microblogging 0. IMAP sample server ObjectCloud customization. subgroups. Particle Yes ? 2 Alpha. XMPP extensions [63] Active developer Yes Yes community. active development [61] Microblogging Openfire plugin. Open Collaboration Services Yes ? ver.. user interface consumes Rest API.Distributed social network [53] Profiles. RSSCloud. GPL OpenID. WebID. Local follow/unfollow. SPARQL. Calendars. Privacy NoseRub server and webclient SMTP.myopenlink. Webfinger. Facebook. tagclouds 55 [54] . [62] OpenLink Data [64] Blogs.net among others [65] Active use Books. Atom Publishing. Discussion Forums (includes NNTP support). Media. Working on: OStatus ? project's group Lorea Elgg [56] production plugins [55] [54] (60% production). microblogging. PubSubHubbub. plugins. calendar. 1. Twitter. clients Java Apache 2 XMPP. more. WebID and others Yes (Comercial OpenID. Dual and GPL for Open Source Edition) WebID.

SMTP. Webfinger. WAP. OAuth 2. comments. modular apps (messages. OpenID.ca Army [74] . SSH XMPP. themes. Salmon StatusNet and Cliqset.org profiles. enables internet content sharing Socknet. Privacy Controls ? Yes Alpha Yes OpenID. cart. OpenID No Yes Beta StatusNet microblogging Server. XDI. and other open protocols psyced profiles. microblogging GPLv2 MIT PSYC. web client OSMP (Open Social Message Protocol) Socknet ProviderFoolishMortal. media). messaging. Twiter. 3rd party integration (Facebook. Activity Streams ? ? 3 production Alpha [69] friends. PubSubHubbub. Portable Contacts.0. chat. including communication untraceability ? demo [67] beta [68] SMOB Social-Igniter microblogging FOAF server GPL Webfinger. likely Eclipse or Apache OStatus. places. XMPP. will add support for OAuth SocialZE [72] server. IRC. POP development alpha planned Yes Planned Nov. OpenMicroBlogging (deprecated) Available for sites. Clients [73] PHP AGPLv3 OStatus. status. groups Safebook RSSN ? ? ? ? Yes TBD.9 (Active use) Thimbl Weestit microblogging Finger. ? TELNET.Distributed social network Project Danube 1) Sharing personal data with companies/organizations 2) Sharing personal data with "friends" 3) Use of personal data for "personal applications" Project Nori OStatus. mobile themes. Yes Yes SocialRiver [70] GPL AGPL OStatus [71] . PubSubHubbub. blog. hCard. Private Messaging. editable widgets. OAuth. planned for accounts and posts ? Planned for future Yes Identi. TWiT [75] 0. SMTP. OStatus. ? Webfinger. FOAF.20 2010 . OpenID. XRI. HTTP. RSS RSSN private messaging. OAuth. Portable Contacts. Applet. among others 56 development early alpha concept [66] GPL Extensive. YouTube). HTTP.

google. slideshare. appleseedproject. com/ group/ diaspora-dev/ browse_thread/ thread/ 4bfb9cd07722dfc0 [18] (http:/ / groups. org/ http:/ / complang. 27. friendika. org/ [12] https:/ / github. org/ wiki/ Main_Page#Components http:/ / diso-project. Retrieved 5 January 2009. com/ [20] http:/ / diaspora-x. com/ manifesto) [5] http:/ / demo6d. org/ quicksocial/ [9] http:/ / appleseedproject. org/ wiki/ Channel_Protocol [15] http:/ / open. com/ diaspora/ diaspora [17] http:/ / groups. appleseedproject. com/ [21] https:/ / github. . com/ [6] http:/ / opensource. com/ freenet [35] (http:/ / Friend2Friend. org/ dsnp/ [30] http:/ / complang. pp. com/ group/ salmon-protocol/ browse_thread/ thread/ efab99ca7311d4ae) [19] https:/ / joindiaspora. com/ [41] (http:/ / gnu. com/ [11] http:/ / buddycloud. net/ [33] https:/ / distribsocial. buddycloud. com/ [16] https:/ / github. org/ dsnp/ http:/ / complang. com/ [24] http:/ / diaspora-x. com/ node/ 7) [39] (http:/ / dfrn. com/ ) [37] http:/ / portal.Distributed social network 57 Notes ^ a: See DataPortability article. org/ projects/ social/ faq/ . org/ login/ [10] http:/ / buddycloud. [2] (http:/ / get6d. com/ buddycloud/ channel-server [13] https:/ / github. appleseedproject. org/ dsnp/ spec/ dsnp-spec. ""Blowing Up" Social Networks by Going Open" (http:/ / www. org [7] http:/ / opensource. org/ dfrn2. David (2008-10-09). com/ [32] https:/ / choicesocial. com/ download [38] (http:/ / portal. net/ [34] https:/ / github. net/ ) [36] (http:/ / friendika. org/ software/ social) [42] http:/ / gitorious. com/ #login [26] [27] [28] [29] [25] http:/ / diso-project. pdf) [40] http:/ / demo. com/ ) [3] https:/ / github. com/ cms/ content/ diaspora-x-now-running-buddycloud-channels-and-xmpp [23] http:/ / diaspora-x. org/ http:/ / diso-project. friendika. google. friendika. pdf [31] https:/ / friendsinfeed. org/ download/ [8] http:/ / opensource. net/ daveman692/ blowing-up-social-networks-by-going-open-presentation/ ). com/ bnolan/ diaspora-x2 [22] http:/ / buddycloud. com/ ijoey/ 6d [4] (http:/ / get6d. com/ buddycloud [14] http:/ / buddycloud. External links • • • • Wiki of Federated Social Web W3C Incubator Group [76] Federated Social Web Conference 2011 [77] Comparison of protocol/software projects for distributed social networking [78] Diploma Thesis from the University of Applied Sciences Dresden(HTW) about XMPP-based Federated Social Networks like buddycloud [79](CC-BY) References [1] Recordon. org/ + socialites/ statusnet/ gnu-socia [43] http:/ / foocorp.

com/ wiki/ ODS/ [65] http:/ / id. w3. org [52] http:/ / kune. google. twit. iepala. ca [75] http:/ / army. com/ p/ kopal/ ) [48] http:/ / code. org/ developers-protocol. in/ [45] http:/ / project. ourproject. eu/ [67] http:/ / www. org [73] http:/ / status. org/ 2005/ Incubator/ federatedsocialweb/ wiki/ Main_Page [77] http:/ / d-cent. beta. html [63] (http:/ / onesocialweb. net/ download [74] http:/ / identi. html) [64] http:/ / ods. google. tv/ [76] http:/ / www. es/ ws/ [53] http:/ / lorea. com/ cms/ sites/ default/ files/ thesis. com/ [58] http:/ / noserub. com/ [47] (http:/ / code. safebook. cc/ pg/ groups/ 7826/ lorea/ [57] http:/ / noserub. org/ fsw2011/ [78] http:/ / gitorious. org/ index. pdf 58 . openlinksw. safebook. com/ download/ [59] http:/ / noserub. php?content=demo [68] http:/ / www. org/ social/ pages/ ProjectComparison [79] http:/ / buddycloud. php?content=prototype [69] http:/ / social-igniter. safebook. org/ faq/ [72] http:/ / socialze. com/ quick-facts/ [60] http:/ / identoo. org/ developers-downloads. org/ rhizomatik [56] https:/ / n-1. com/ [46] http:/ / jappix. com/ p/ kopal/ wiki/ Kopal_Connect [50] http:/ / code. google. com/ [61] http:/ / onesocialweb. html. net/ ods/ [66] http:/ / www. en [54] http:/ / lorea. myopenlink. com/ p/ kopal/ wiki/ Getting_Started?tm=2 [49] http:/ / code. com/ [70] http:/ / socialriver. com/ p/ kopal/ wiki/ Kopal_Feed [51] http:/ / kune. us/ home. google. jappix. org/ [71] http:/ / socialriver. eu/ home. org/ [62] http:/ / onesocialweb.Distributed social network [44] http:/ / daisycha. org/ join [55] https:/ / bitbucket.

Dryad (programming)

Dryad is an ongoing research project at Microsoft Research for a general purpose runtime for execution of data parallel applications.

An application written for Dryad is modeled as a directed acyclic graph (DAG). The DAG defines the dataflow of the application, and the vertices of the graph define the operations that are to be performed on the data. The "computational vertices" are written using sequential constructs, devoid of any concurrency or mutual exclusion semantics. The Dryad runtime parallelizes the dataflow graph by distributing the computational vertices across various execution engines (which can be multiple processor cores on the same computer or different physical computers connected by a network, as in a cluster). Scheduling of the computational vertices on the available hardware is handled by the Dryad runtime, without any explicit intervention by the developer of the application or administrator of the network.

The flow of data between one computational vertex and another is implemented by using communication "channels" between the vertices, which in physical implementation are realized by TCP/IP streams, shared memory or temporary files. A stream is used at runtime to transport a finite number of structured Items.

Dryad defines a domain-specific language, implemented via a C++ library, that is used to create and model a Dryad execution graph. Computational vertices are written using standard C++ constructs. To make them accessible to the Dryad runtime, they must be encapsulated in a class that inherits from the GraphNode base class. The graph is defined by adding edges; edges are added by using a composition operator (defined by Dryad) that connects two graphs (or two nodes of a graph) with an edge. Managed code wrappers for the Dryad API can also be written.

There exist several high-level language compilers which use Dryad as a runtime; examples include PSQL, Microsoft Scope and DryadLINQ.

References
• "DryadLINQ: A System for General-Purpose Distributed Data-Parallel Computing Using a High-Level Language" [1]. Microsoft Research. Retrieved 2009-01-21.
• "Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks" [2]. Microsoft Research. Retrieved 2007-12-04.
• "SCOPE: Easy and Efficient Parallel Processing of Massive Data Sets" [3]. Microsoft Research. Retrieved 2009-01-21.

External links
• Dryad: Programming the Data Center [4]
• Dryad Home [5]
• Video of Michael Isard explaining Dryad at Google [6]

References
[1] http://research.microsoft.com/en-us/projects/dryadlinq/dryadlinq.pdf
[2] http://research.microsoft.com/en-us/projects/dryadlinq/eurosys07.pdf
[3] http://research.microsoft.com/en-us/um/people/jrzhou/pub/Scope.pdf
[4] http://blogs.zdnet.com/microsoft/?p=18
[5] http://research.microsoft.com/research/sv/dryad/
[6] http://www.youtube.com/watch?v=WPhE5JCP2Ak
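As a rough, language-neutral illustration of the execution model described above (this is not Dryad's actual C++ API, and all class and vertex names are invented for the example), the sketch below builds a tiny graph of sequential vertices connected by edges and "runs" it by evaluating each vertex once the outputs of its upstream vertices are available.

# Toy illustration of a DAG of sequential vertices joined by channels
# (not Dryad's actual API; a real runtime would place vertices on machines).
from collections import defaultdict

class Graph:
    def __init__(self):
        self.vertices = {}                 # name -> sequential function
        self.edges = defaultdict(list)     # name -> list of upstream names

    def add_vertex(self, name, func):
        self.vertices[name] = func

    def add_edge(self, src, dst):
        self.edges[dst].append(src)        # dst consumes src's output

    def run(self, inputs):
        results = dict(inputs)
        remaining = [v for v in self.vertices if v not in results]
        while remaining:                   # naive topological execution
            for v in list(remaining):
                deps = self.edges[v]
                if all(d in results for d in deps):
                    results[v] = self.vertices[v](*[results[d] for d in deps])
                    remaining.remove(v)
        return results

g = Graph()
g.add_vertex("tokenize", lambda text: text.split())
g.add_vertex("count", lambda words: len(words))
g.add_edge("source", "tokenize")
g.add_edge("tokenize", "count")
print(g.run({"source": "dryad models a job as a dag"})["count"])  # 7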

Dynamic infrastructure

Dynamic Infrastructure is an information technology paradigm concerning the design of data centers so that the underlying hardware and software can respond dynamically to changing levels of demand in more fundamental and efficient ways than before. The paradigm is also known as Infrastructure 2.0 and Next Generation Data Center.

The basic premise of Dynamic Infrastructures is to leverage pooled IT resources to provide flexible IT capacity,[10] enabling the seamless, real-time allocation of IT resources in line with demand from business processes. This is achieved by using server virtualization technology to pool computing resources wherever possible, and allocating these resources on-demand using automated tools. This allows for load balancing and is a more efficient approach than keeping massive computing resources in reserve to run tasks that take place, for example, once a month, but are otherwise under-utilized. Dynamic Infrastructures may also be used to provide security and data protection when workloads are moved during migrations, provisioning, enhancing performance or building co-location facilities.[9]

Potential benefits of Dynamic Infrastructures include enhancing performance, scalability,[11] system availability and uptime, increasing server utilization, and the ability to perform routine maintenance on either physical or virtual systems, all while minimizing interruption to business operations and reducing cost for IT. Dynamic Infrastructures also provide the fundamental business continuity and high availability requirements to facilitate cloud or grid computing.

Enterprises switching to Dynamic Infrastructures can also reduce costs, improve quality-of-service and make more efficient use of energy through reducing the number of standby or under-utilized machines in their data centers. By reducing redundant capacity, organizations are enabled to make more efficient use of their IT budgets and devote greater proportions of their budget to physical and virtual production servers. Instead of the hot spare principle of keeping second servers on standby to replace all production machines in contingencies for hardware- and software-related failures, Dynamic Infrastructures provide for failover from a smaller pool of spare machines.

Early examples of server-level Dynamic Infrastructures are the FlexFrame for SAP and FlexFrame for Oracle solutions introduced by Fujitsu Siemens Computers (now Fujitsu) in 2003.[8] The FlexFrame approach is to dynamically assign servers to applications on demand, leveling peaks and enabling organizations to maximize the benefit from their IT investments.

Top tier vendors promoting dynamic infrastructures include IBM,[1] [2] Microsoft,[3] Sun,[4] Fujitsu,[5] HP [6] and Dell.[7]

Fujitsu's definition: "Dynamic Infrastructures enable customers to assign IT resources dynamically to services as required and to choose sourcing models which best fit their businesses. This brings IT flexibility and efficiency to the next level."[12]

IBM's definition: "A dynamic infrastructure integrates business and IT assets and aligns them with the overall goals of the business while taking a smarter, new and more streamlined approach to helping improve service, reduce cost, and manage risk."[13]

For networking companies, Infrastructure 2.0 refers to the ability of networks to keep up with the movement and scale requirements of new enterprise IT initiatives, especially virtualization and cloud computing. According to companies like Cisco, F5 Networks and Infoblox, network automation and connectivity intelligence between networks, systems, applications and endpoints will be required to reap the full benefits of virtualization and many types of cloud computing, enabling higher levels of dynamic control and connectivity between networks, systems and endpoints. This will require network management and infrastructure to be consolidated.

Need for a holistic approach

Even in the face of global uncertainty, it is the infrastructure that continues to enable commerce and communications – the roads, buildings, utilities, power plants, and technologies connecting and differentiating organizations. Until now, many organizations have thought of physical infrastructure and IT infrastructure as separate. This meant, for example, that airports, roadways, buildings and oil wells were managed in one way, while datacenters, PCs, cell phones, routers and broadband devices were managed quite differently. Now, the infrastructure of atoms and the infrastructure of bits are merging into an intelligent, global, interconnected, dynamic infrastructure. The range of this approach is broader than ever before – throughout an organization's entire facilities as well as between one organization and another, its competitors and customers – and its effect on organizations is equally far-reaching.

To succeed in today's world of instrumented, interconnected, and intelligent assets, a new approach is needed. The need, therefore, is for a new type of infrastructure that:
• Enables visibility, control and automation across all business and IT assets
• Is highly optimized to achieve more with less
• Addresses the information challenge
• Leverages flexible sourcing like clouds
• Manages and mitigates risks
Organizations need an infrastructure that can propel them forward — not hold them back.

Global organizations already have the foundation for a dynamic infrastructure that will bring together the business and IT infrastructure to create new possibilities. For example:
• Transportation companies can optimize their vehicles' routes leveraging GPS and traffic information.
• Communications companies can better monitor usage by location, user or function, and optimize routing to enhance user experience.
• Utility companies can reduce energy usage with a "smart grid."
• Facilities organizations can secure access to locations and track the movement of assets by leveraging RFID technology.
• Production environments can monitor and manage presses, valves and assembly equipment through embedded electronics.
• Technology systems can be optimized for energy efficiency, managing spikes in demand, and ensuring disaster recovery readiness.
This convergence of business and IT assets requires an infrastructure that can measure and manage the lifecycle of assets that exist beyond the data center.

Benefits of having dynamic infrastructures

Dynamic infrastructures take advantage of intelligence gained across the network. By design, every dynamic infrastructure is service-oriented and focused on supporting and enabling the end users in a highly responsive way. It can utilize alternative sourcing approaches, like cloud computing, to deliver new services with agility and speed.

While green issues are a primary driver in 10% of current data center outsourcing and hosting initiatives, cost reduction initiatives are a driver 47% of the time and are now aligned well with green goals. Combining the two means that at least 57% of data center outsourcing and hosting initiatives are driven by green.
– Source: Gartner – "Green IT Services as a Catalyst for Cost Optimization" / Kurt Potter / 4 December 2008

"Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model."
– Source: Gartner – "TCO of Traditional Software Distribution vs. Application Virtualization" / Michael A Silver, Terrence Cosgrove, Mark A Margevicious, Brian Gammage / 16 April 2008

html) [13] Dynamic Infrastructure: Delivering superior business and IT services with agility and speed (ftp:/ / ftp.nec.com/ global/corporate-ad/images/it_infrastructure. [4] Dynamic Infrastructure at Sun (http:/ / www. fujitsu. PDF) External links • IBM Dynamic Infrastructure IBM Dynamic Infrastructure (http://www-03.fujitsu. freepatentsonline. VMware at Future in Review Conference May 2009 (http://vimeo. Donna Scott. theregister. service delivery and acquisition models that optimize the infrastructure for efficiency and flexibility while transforming management to an automated service delivery and management model. fujitsu.com/ article/111346-network-industry-needs-a-new-vision-infrastructure-2-0) • National Infrastructure Simulation and Analysis Center (http://www.com/ 4891610) • IDC 4th Annual Dynamic Infrastructure Conference (Event) (http://www.html?jumpid=reg_R1002_USEN/) • Fujitsu Dynamic Infrastructures (http://ts. / Roberta J Witty. com/ p/ articles/ mi_m0BRZ/ is_2007_Spring/ ai_n19493357/ pg_2). com/ it_trends/ dynamic_infrastructures/ index.com) • Microsoft Realizing the potential for dynamic infrastructure (http://technet.technorati. com/ community/ node/ 27354). com/ ci) [8] IDC White Paper Building the Dynamic DataCenter: FlexFrame for SAP (http:/ / docs. html) [7] Dell Converged Infrastructure (http:/ / www.idc. com/ common/ ssi/ sa/ wh/ n/ oiw03021usen/ OIW03021USEN.Dynamic infrastructure "By 2013. fujitsu. com/ dynamicinfrastructures) [6] Dynamic Infrastructure and Blades at HP (http:/ / h18000. [12] Fujitsu's Dynamic Infrastructures main page (http:/ / ts.dell.pdf) • Technorati Dynamic Infrastructure. com/ features/ 26054149.com/service/dynamicinfrastructure/index. www1.com/getdoc.jsp) • NEC Dynamic It Takes a Dynamic Infrastructure to sustain growth while staying green (http://www.0 Panel with Cisco. John P Morency. Cost and Outsourcing Risk"). com/ products/ blades/ components/ matrix/ big_picture. F5. Rober Desisto / 28 January 2009 The key to a business and IT infrastructure that is "dynamic" is leveraging technologies. ibm. com/ y2007/ 0294736.com/videos/tag/dynamic+ infrastructure) .com/products/solutions/ converged/main. hp.sun.com/systems/ dynamicinfrastructure/) • HP Converged Infrastructure HP Converged Infrastructure (http://h18004.www1. uk/ 2009/ 04/ 29/ ibm_storage_apr09/ ) [3] Microsoft's view of The Dynamic Datacenter coverered by networkworld (http:/ / www. aspx?id=140d1393-d5ff-4c3b-924d-0c7183ebee65) [9] Computation on Demand: The Promise of Dynamic Provisioning (http:/ / www.infra20. com/ service/ dynamicinfrastructure/ index. com/ dl.0 (http://seekingalpha. amazon.html) • Sun Dynamic Infrastructure Suite (http://www.0 blog (http://www. more than 50% of midsize organizations and more than 75% of large enterprises will implement layered recovery architectures. Dave Russell.ibm. dell. html) [2] IBM's dynamic infrastructure taking shape at TheRegister (http:/ / www.com/en-us/ infrastructure/bb736006.com/ci) • Infrastructure 2. co. Dynamic Infrastructure (http://www. html) [10] An overview of continuous data protection (http:/ / findarticles. ts. jsp?containerId=IDC_P15254) • Infrastructure 2. jsp) [5] Fujitsu Dynamic Infrastructures (http:/ / ts.gov/nisac/diisa.hp. on-demandenterprise." – Source: Gartner – "Predicts 2009: Business Continuity Management Juggles Standardization. sun.com/it_trends/dynamic_infrastructures/index. networkworld.sandia. com/ ec2/ ).html) • Dell Converged Infrastructure (http://www.microsoft. 
[11] Amazon Elastic Compute Cloud (http:/ / aws. software. 62 References [1] IBM patent: Method For Dynamic Information Technology Infrastructure Provisioning (http:/ / www.aspx) • Seeking Alpha The Network Industry Needs a New Vision — Infrastructure 2.

• Springer, Lou (September 2007). Dynamic Infrastructure, Joyent, SAAS, SOA and the IBM PC (http://blog.louspringer.com/2007/09/27/dynamic-infrastructure-joyent-saas-soa-and-the-ibm-pc). Retrieved 2008-10-31.
• Ernst, Ann. Dynamic Infrastructures: Taking Business Continuity to the Next Level (http://www.vmworld.com/static/sessions/2008/PO2596.pdf). vmworld.com, October 2008. Retrieved 2008-10-31.
• Herndon, Bruce. The Datacenter of the Future -- It's Already Here! virtual-strategy.com, October 2005. Retrieved 2008-10-31.
• Virtual Iron: Dynamic Infrastructure for the Data Center (http://www.virtual-strategy.com/Migration/Virtual-Iron-Dynamic-Infrastructure-for-the-Data-Center.html)
• Reuters: NEC and Promark Deliver The Dynamic Infrastructure (http://www.reuters.com/article/pressRelease/idUS118407+17-Mar-2008+BW20080317) (March 17, 2008)
• Bizvoicemagazine.com (September 2008) (http://www.bizvoicemagazine.com/archives/08sepoct/PassItOn-Infrastructures.pdf)
• Carolan, Jason (PDF). OpenDI Vision and High Level Design Overview (http://kenai.com/downloads/opendi/opendiR1-vision-high-level-design_v16.pdf) (September 2008). Retrieved 2010-08-23.
• Sun Dynamic Infrastructures Wiki (http://wikis.sun.com/display/DI/DI+Home)

Edge computing

Edge computing provides application processing load balancing capacity to corporate and other large-scale web servers. It is like an application cache, where the cache is in the Internet itself. Static web-sites being cached on mirror sites is not a new concept, but mirroring transactional and interactive systems is a much more complex endeavor. Edge computing is also referred to as grid computing, SAAS, peer-to-peer computing, autonomic (self-healing) computing, and by other names implying non-centralized, nodeless availability. Previously available only to very large corporate and government organizations, technology advancement and cost reduction for large-scale implementations have made the technology available to small and medium-sized business. The target end-user is any Internet client making use of commercial Internet application services.

Overview

As the name implies, Edge computing pushes applications, data and computing power (services) away from centralized points to the logical extremes of a network. Edge computing replicates fragments of information across distributed networks of web servers, which may be vast and include many networks. To ensure acceptable performance of widely-dispersed distributed services, large organizations typically implement Edge computing by deploying Web server farms with clustering.

As a topological paradigm, Edge computing has many advantages:
1. Edge application services significantly decrease the data volume that must be moved, the consequent traffic, and the distance the data must go, thereby reducing transmission costs, shrinking latency, and improving quality of service (QoS).
2. Edge computing eliminates, or at least de-emphasizes, the core computing environment, limiting or removing a major bottleneck and a potential point of failure.
3. Security is also improved as encrypted data moves further in, toward the network core. As it approaches the enterprise, the data is checked as it passes through protected firewalls and other security points, where viruses, compromised data, and active hackers can be caught early on.
4. Finally, the ability to "virtualize" (i.e. logically group CPU capabilities on an as-needed, real-time basis) extends scalability.

Edge computing imposes certain limitations on the choices of technology platforms, applications or services, all of which need to be specifically developed or configured for edge computing.
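As a rough illustration of the caching idea behind edge computing, the following C++ sketch shows an edge node that serves replicated content locally and falls back to the origin only on a miss. The EdgeNode class and fetch_from_origin function are hypothetical stand-ins, not the API of any edge computing product.

// Illustrative sketch only: a toy edge-node lookup that serves a cached fragment
// when possible and falls back to the origin otherwise.
#include <iostream>
#include <string>
#include <unordered_map>

std::string fetch_from_origin(const std::string& key) {
    return "origin-content-for-" + key;          // placeholder for a slow, remote fetch
}

class EdgeNode {
    std::unordered_map<std::string, std::string> cache_;  // replicated fragments of information
public:
    // Serve from the edge when the fragment is replicated here; otherwise pull it in,
    // so later requests avoid the trip to the core and its latency.
    const std::string& get(const std::string& key) {
        auto it = cache_.find(key);
        if (it == cache_.end())
            it = cache_.emplace(key, fetch_from_origin(key)).first;
        return it->second;
    }
};

int main() {
    EdgeNode node;
    std::cout << node.get("/index.html") << '\n';   // first request: fetched from origin
    std::cout << node.get("/index.html") << '\n';   // second request: served at the edge
    return 0;
}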

The Edge computing market is generally based on a "charge for network services" model, and it could be argued that typical customers for Edge services are organizations desiring linear scale of business application performance to the growth of, e.g., a subscriber base.

Grid computing

Edge computing and Grid computing are related. Whereas Grid computing would be hardcoded into a specific application to distribute its complex and resource-intensive computational needs across a global grid of cheap networked machines, Edge computing provides a generic template facility for any type of application to spread its execution across a dedicated grid of prepared expensive machines.

Companies providing edge computing services
• Akamai Technologies
• EdgeCast Networks
• Exinda
• Limelight Networks
• Mirror Image Internet

External links
• Akamai [1]
• Exinda - Edge Cache implementation press release [2]
• GeoElastic - Adhoc Geo-Targeted Computing Alliance [3]
• GeoStratus.com - Geo-Targeted Private Content Delivery Network Platform (pCDN) [4]

References
[1] http://www.akamai.com/en/html/technology/edgecomputing_howitworks.html
[2] http://www.exinda.com/cms__Main?name=exinda-introduces-the-exinda-edge-cache
[3] http://www.geoelastic.com
[4] http://www.geostratus.com

Explicit multi-threading

Explicit Multi-Threading (XMT) is a computer science paradigm for building and programming parallel computers designed around the Parallel Random Access Machine (PRAM) parallel computational model. The XMT paradigm was introduced by Uzi Vishkin.

Explicit Multi-Threading (XMT) is a computing paradigm for building and programming multi-core computers with tens, hundreds or thousands of processor cores. Multi-core computers are built around two or more processor cores integrated on a single integrated circuit die; they are widely used across many application domains including general-purpose computing.

The main levels of abstraction of XMT

The Explicit Multi-Threading (XMT) computing paradigm integrates several levels of abstraction.

The random access machine (RAM) is an abstract machine model used in computer science to study algorithms and complexity for standard serial computing. The PRAM computational model is an abstract parallel machine model that had been introduced to similarly study parallel algorithms and complexity for parallel computing, when parallel computers were yet to be built. Researchers have developed a large body of knowledge of parallel algorithms for the PRAM model. These parallel algorithms are also known for being simple, by standards of other approaches to parallel algorithms. This large body of parallel algorithms knowledge for the PRAM model, and their relative simplicity, motivated building computers whose programming can be guided by these parallel algorithms. Since productivity of parallel programmers has long been considered crucial for the success of a parallel computer, simplicity of algorithms is important.

A more direct explanation of XMT starts with the rudimentary abstraction that made serial computing simple: that any single instruction available for execution in a serial program executes immediately. A consequence of this abstraction is a step-by-step (inductive) explication of the instruction available next for execution. The rudimentary parallel abstraction behind XMT, dubbed Immediate Concurrent Execution (ICE) in Vishkin (2011), is that indefinitely many instructions available for concurrent execution execute immediately. A consequence of ICE is a step-by-step (inductive) explication of the instructions available next for concurrent execution. Moving beyond the serial von Neumann computer (the only successful general-purpose platform to date), the aspiration of XMT is that computer science will again be able to augment mathematical induction with a simple one-line computing abstraction.

The work-time (WT) (sometimes called work-depth) framework, introduced by Shiloach & Vishkin (1982), provides a simple way for conceptualizing and describing parallel algorithms. In the WT framework, a parallel algorithm is first described in terms of parallel rounds. For each round, the operations to be performed are characterized, but several issues can be suppressed. For example, the number of operations at each round need not be clear, processors need not be mentioned, and any information that may help with the assignment of processors to jobs need not be accounted for. The WT framework is useful since, while it can greatly simplify the initial description of a parallel algorithm, inserting the details suppressed by that initial description is often not very difficult. For example, the WT framework was adopted as the basic presentation framework in the parallel algorithms books (for the PRAM model) JaJa (1992) and Keller, Kessler & Traeff (2001), as well as in the class notes Vishkin (2009). The inclusion of the suppressed information is, in fact, guided by the proof of a scheduling theorem due to Brent (1974). Vishkin (2011) explains the simple connection between the WT framework and the more rudimentary ICE abstraction noted above.

The XMT paradigm can be programmed using XMTC, a parallel multi-threaded programming language which is a small extension of the programming language C. The XMT paradigm includes a programmer's workflow that starts with casting an algorithm in the WT framework and proceeds to programming it in XMTC.
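To illustrate the Work-Time style of describing a parallel algorithm, the sketch below writes parallel summation as a sequence of rounds in plain C++; this is not XMTC, and the inner loop only marks which operations are logically concurrent in each round, leaving processor assignment unspecified, as the WT framework allows.

// Illustrative sketch only: a Work-Time style description of parallel summation,
// written in plain sequential C++ (this is not XMTC).
#include <iostream>
#include <vector>

int main() {
    std::vector<int> a = {3, 1, 4, 1, 5, 9, 2, 6};   // size assumed to be a power of two

    // Round after round: for all i "in parallel", a[i] += a[i + stride].
    // The number of operations halves each round; total work is O(n),
    // while the number of rounds ("time") is O(log n).
    for (std::size_t stride = a.size() / 2; stride >= 1; stride /= 2) {
        for (std::size_t i = 0; i < stride; ++i) {    // logically concurrent operations
            a[i] += a[i + stride];
        }
    }

    std::cout << "sum = " << a[0] << '\n';            // prints 31
    return 0;
}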

(1998) and Naishlos et al. • Torbert. Jorg. "Explicit Multi-Threading (XMT) bridging models for instruction parallelism" [5]. • Vishkin. Nuzman. doi:10. Ron. Vishkin. that demonstrates the overall concept was completed. Dascal. Kessler. Vishkin. Ellison. One of them [1] generalizes the program counter concept. 10. Since making parallel programming easy is one of the biggest challenges facing computer science today. (2001). pp. Journal of Algorithms 3: 128–146. "The parallel evaluation of general arithmetic expressions".1866757 Using simple abstraction to reinvent computing for parallelism]. (2003) and the XMT 64-processor computer in Wen & Vishkin (2008). "An O(n2 log n) parallel max-flow algorithm". . Xingzhi. Nuzman. An Introduction to Parallel Algorithms. Proc. Practical PRAM Programming. Vishkin. the demonstration also sought to include teaching the basics of PRAM algorithms and XMTC programming to students ranging from high-school Torbert et al. March 10-13. on Parallel Algorithms and Architecture) 36: 551–552. Proc.".1145/1366230.1145/1866739. Chau-Wen.1366240. Tzur. Uzi (1982). pp. Wiley-Interscience. [10. The XMT concept was presented in Vishkin et al. Jesper L. 2010. Yossi. Joseph. "FPGA-based prototype of a PRAM-on-chip processor" [7]. ISBN 0-471-35351-5 • Naishlos. "Is teaching parallel algorithmic thinking to high-school student possible? One teacher’s experience. Efraim. 55–66. (1974). Uzi (2003). • JaJa. Traeff. Vishkin. • Vishkin. a 64-processor computer [2] named Paraleap [3] . 2008 ACM Conference on Computing Frontiers (Ischia. Tel Aviv University and the Technion • Wen. Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques. Addison-Wesley. Dorit. Berkovich. WI. Cristoph W. which is central to the von Neumann architecture to multi-core hardware. David (2010). Journal of the ACM 21: 201–208. Milwaukee.1866757. Theory of Computer Systems (Special Issue of 2001 ACM Symp. (2010) to graduate school. Uzi. doi:10. Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland. 1998 ACM Symposium on Parallel Algorithms and Architectures (SPAA). Uzi (2008). College Park. Italy).1145/1866739. Communications of the ACM 54: 75–85. January 2011". Uzi. Joseph (1998). 66 XMT prototyping and links to more information In January 2007. • Vishkin. Shlomit.. 140–151. Shane. References • Brent.1145/1866739.1866757. • Shiloach. ACM Technical Symposium on Computer Science Education (SIG CSE). Richard P. 104 pages [6]. Uzi (2011). Tseng. to appear. "Communications of the ACM. ISBN 0-201-54856-9 • Keller. Uzi (2009). Volume 54 Issue 1. "Towards a First Vertical Prototyping of an Extremely Fine-Grained Parallel Programming Approach" [4].Explicit multi-threading The XMT multi-core computer systems provides run-time load-balancing of multi-threaded programs incorporating several patents. Joseph (1992).

Notes
[1] Vishkin, Uzi. Spawn-join instruction set architecture for providing explicit multithreading. U.S. Patent 6,463,527. See also Vishkin et al. (1998).
[2] University of Maryland, press release, June 26, 2007: "Maryland Professor Creates Desktop Supercomputer" (http://www.newsdesk.umd.edu/scitech/release.cfm?ArticleID=1459).
[3] University of Maryland, A. James Clark School of Engineering, press release, November 28, 2007: "Next Big "Leap" in Computing Technology Gets a Name" (http://www.eng.umd.edu/media/pressreleases/pr112707_superwinner.shtml).

External links
• Home page of the XMT project, with links to a software release, on-line tutorial and material for teaching parallelism (http://www.umiacs.umd.edu/~vishkin/XMT/index.shtml)

Fabric computing

Fabric computing or unified computing involves the creation of a computing fabric consisting of interconnected nodes that look like a "weave" or a "fabric" when viewed collectively from a distance.[1]

Usually this refers to a consolidated high-performance computing system consisting of loosely coupled storage, memory, networking and parallel processing functions linked by high-bandwidth interconnects (such as 10 Gigabit Ethernet and InfiniBand),[2] but the term has also been used to describe platforms like the Azure Services Platform and grid computing in general (where the common theme is interconnected nodes that appear as a single logical unit).[3] The fundamental components of fabrics are "nodes" (processor(s), memory, and/or peripherals) and "links" (functional connections between nodes).[2] While the term "fabric" has also been used in association with storage area networks and switched fabric networking, the introduction of compute resources provides a complete "unified" computing system. Other terms used to describe such fabrics include "unified fabric",[4] "data center fabric" and "unified data center fabric".

Brocade, Cisco, HP and Egenera currently manufacture computing fabric equipment.[5] According to Ian Foster, director of the Computation Institute at the Argonne National Laboratory and University of Chicago, "grid computing 'fabrics' are now poised to become the underpinning for next-generation enterprise IT architectures and be used by a much greater part of many organizations."[6] [7]

History

While the term has been in use since the mid to late 1990s,[2] the growth of cloud computing and Cisco's evangelism of unified data center fabrics,[8] followed by unified computing (an evolutionary data center architecture whereby blade servers are integrated or unified with supporting network and storage infrastructure[9]) starting March 2009, has renewed interest in the technology. There have been mixed reactions to Cisco's architecture, particularly from rivals who claim that these proprietary systems will lock out other vendors. Analysts claim that this "ambitious new direction" is "a big risk", as companies like IBM and HP, who have previously partnered with Cisco on data center projects (accounting for $2-3bn of Cisco's annual revenue), are now competing with them.[9] [10] Other companies offering unified or fabric computing systems include Liquid Computing Corporation and Egenera.

Key characteristics

The main advantages of fabrics are that massive concurrent processing combined with a huge, tightly-coupled address space makes it possible to solve huge computing problems (such as those presented by delivery of cloud computing services), and that they are both scalable and able to be dynamically reconfigured.[2] Challenges include a non-linearly degrading performance curve, whereby adding resources does not linearly increase performance (a common problem with parallel computing), and maintaining security.[2]

References
[1] What Is: The Azure Fabric and the Development Fabric (http://azure.snagy.name/blog/?p=84)
[2] Massively distributed computing using computing fabrics (http://www.dominopower.com/issuesprint/issue199810/fabric.html)
[3] Grid computing: The term may fade, but features will live on (http://www.techworld.com/opsys/features/index.cfm?featureid=3614)
[4] Unified Fabric: Benefits and Architecture of Virtual I/O (http://www.cisco.com/en/US/prod/collateral/ps6418/ps6423/ps6429/prod_white_paper0900aecd80337bb8.html)
[5] Intel: Data Center Fabric (http://communities.intel.com/openport/blogs/server/2008/02/13/data-center-fabric)
[6] Toolbox for IT: Data Center Fabric (http://it.toolbox.com/wiki/index.php/Data_Center_Fabric)
[7] Switch maker introduces a 'Data Center Fabric' architecture (http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9043698)
[8] Cisco: Unified Data Center Fabric: Reduce Costs and Improve Flexibility (http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-462181.html)
[9] "Cisco launches Unified Computing push with new blade server" (http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9129718&intsrc=news_ts_head). ComputerWorld. 2009-03-16. Retrieved 2009-03-17.
[10] "Cisco to sell servers aimed at data centers" (http://www.reuters.com/article/technologyNews/idUSTRE52F68W20090316). Reuters. 2009-03-16. Retrieved 2009-03-17.

External links
• Cisco Unified Computing and Servers (http://www.cisco.com/en/US/products/ps10265/index.html/)
• HP Converged Infrastructure (http://h18004.www1.hp.com/products/solutions/converged/main.html?jumpid=reg_R1002_USEN/)

Fallacies of Distributed Computing

Peter Deutsch asserted that programmers new to distributed applications invariably make a set of assumptions known as the Fallacies of Distributed Computing, and that all of these assumptions ultimately prove false, resulting either in the failure of the system, a substantial reduction in system scope, or in large unplanned expenses required to redesign the system to meet its original goals.

The fallacies

The fallacies are summarized as follows:[1]
1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn't change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.

History

The list of fallacies generally came about at Sun Microsystems. Peter Deutsch, one of the original Sun "Fellows," is credited with penning the first seven fallacies in 1994. Around 1997, James Gosling, another Sun Fellow and the inventor of Java, added the eighth fallacy.[2] Bill Joy and Tom Lyon had, however, already identified the first four as "The Fallacies of Networked Computing"[3] (the article claims "Dave Lyon", but this is considered a mistake).

Effects of the Fallacies

1. Ignorance of network latency, and of the packet loss it can cause, induces application- and transport-layer developers to allow unbounded traffic, greatly increasing dropped packets and wasting bandwidth.
2. Ignorance of bandwidth limits on the part of traffic senders can result in bottlenecks over frequency-multiplexed mediums.
3. Complacency regarding network security results in being blindsided by malicious users and programs that continually adapt to security measures.
4. Multiple administrators, as with subnets for rival companies, may institute conflicting policies of which senders of network traffic must be aware in order to complete their desired paths.
5. The "hidden" costs of building and maintaining a network or subnet are non-negligible and must consequently be noted in budgets to avoid vast shortfalls.

References
[1] "The Eight Fallacies of Distributed Computing" (http://blogs.sun.com/jag/resource/Fallacies.html).
[2] "Malware Defensive Techniques Will Evolve as Security Arms Race Continues" (http://www.eweek.com/c/a/Security/Malware-Defensive-Techniques-Will-Evolve-as-Security-Arms-Race-Continues-331833/).
[3] "Deutsch's Fallacies, 10 Years After" (http://java.sys-con.com/read/38665.htm).
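As a small illustration of designing against the first fallacy, the following C++ sketch bounds and retries a remote call instead of assuming the network is reliable. The send_request function is a hypothetical stand-in for a real network call.

// Illustrative sketch only: avoid assuming "the network is reliable" by retrying
// a remote call a bounded number of times and handling ultimate failure explicitly.
#include <chrono>
#include <iostream>
#include <optional>
#include <string>
#include <thread>

// Hypothetical network call: returns a response, or nothing on failure/timeout.
std::optional<std::string> send_request(const std::string& payload) {
    static int attempts = 0;
    if (++attempts < 3) return std::nullopt;     // simulate two transient failures
    return "ok:" + payload;
}

// Retry with a capped number of attempts and a growing backoff delay.
std::optional<std::string> send_with_retry(const std::string& payload, int max_attempts) {
    for (int attempt = 1; attempt <= max_attempts; ++attempt) {
        if (auto reply = send_request(payload)) return reply;
        std::this_thread::sleep_for(std::chrono::milliseconds(100 * attempt));
    }
    return std::nullopt;                          // caller must handle failure
}

int main() {
    if (auto reply = send_with_retry("ping", 5))
        std::cout << *reply << '\n';
    else
        std::cout << "request failed after retries\n";
    return 0;
}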

External links
• The Eight Fallacies of Distributed Computing (http://blogs.sun.com/jag/resource/Fallacies.html)
• Fallacies of Distributed Computing Explained (http://www.rgoarchitects.com/Files/fallacies.pdf) by Arnon Rotem-Gal-Oz

Fragmented object

Fragmented objects are truly distributed objects. The fragmented object is a novel design principle that extends the traditional concept of stub-based distribution. In contrast to ordinary distributed objects, fragmented objects are physically distributed and encapsulate the distribution inside the object itself. Parts of the object - named fragments - may exist on different nodes and provide the object's interface. Each client accessing a fragmented object by its unique object identity presumes a local fragment; clients therefore cannot distinguish between access to a local object, a local stub or a local fragment. Thus, downward compatibility with stub-based distribution is ensured. Full transparency is gained by the following characteristics of fragmented objects.

Arbitrary internal structure
The internal structure of a fragmented object is arranged by the object developer/deployer. It may be client–server, hierarchical, peer-to-peer or any other arrangement.

Arbitrary internal configuration
As both the distribution of state and the distribution of functionality are hidden behind the object interface, their respective distribution over the fragments is also arbitrary. The object developer can migrate state and functionality across the fragments by providing different fragment implementations; these dynamically change the inside of the fragmented object. This procedure can be triggered either by a user who changes object properties or by the fragmented object itself (that is, the collectivity of its fragments), for example when some fragment is considered to have failed. In addition, an application using a fragmented object can tolerate a change in distribution, which is achieved by exchanging the fragment at one or multiple hosts. Of course, an exchange request may trigger one or more other internal changes. A flexible internal partitioning is thus achieved, providing transparent fault-tolerant replication as well.

Arbitrary internal communication
Arbitrary protocols may be chosen for the internal communication between the fragments. For instance, this makes it possible to hide real-time protocols (e.g. RTP for media streaming) behind a standard CORBA interface. Fragmented objects may also act like an RPC-based infrastructure or a (caching) smart proxy.
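The following C++ sketch illustrates, under invented names, how a client can be bound to an interface whose fragment implementation is interchangeable; it is a schematic illustration of the fragmented-object idea rather than any real fragmented-object middleware.

// Hypothetical sketch: one interface, interchangeable fragment implementations that
// hide where state and functionality actually live. All names are invented.
#include <iostream>
#include <memory>

// The object's interface, provided locally by whatever fragment is installed.
struct CounterFragment {
    virtual ~CounterFragment() = default;
    virtual void increment() = 0;
    virtual int value() const = 0;
};

// A fragment that holds the state locally.
class LocalFragment : public CounterFragment {
    int count_ = 0;
public:
    void increment() override { ++count_; }
    int value() const override { return count_; }
};

// A fragment that would forward to a remote fragment; the transport is only mocked here.
class ForwardingFragment : public CounterFragment {
    int cached_ = 0;
public:
    void increment() override { ++cached_; /* would send an update over the wire */ }
    int value() const override { return cached_; /* would query the remote fragment */ }
};

int main() {
    // The client binds to the object by identity and simply receives "a local fragment";
    // it cannot tell whether that fragment is the whole object, a stub, or a replica.
    std::unique_ptr<CounterFragment> fragment = std::make_unique<LocalFragment>();
    fragment->increment();

    // The deployer can exchange the fragment implementation without changing the client.
    fragment = std::make_unique<ForwardingFragment>();
    fragment->increment();
    std::cout << fragment->value() << '\n';
    return 0;
}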

Projects
• Aspectix [1] - The Aspectix group works on several projects that focus on middleware architecture, aspect-oriented programming, fault tolerance, adaptive and quality-of-service-aware applications, and automated source-code transformation.
• FORMI [2] - FORMI is an extension of Java RMI.
• Globe [3] - In this research we are looking at a powerful unifying paradigm for the construction of large-scale wide area distributed systems: distributed shared objects.
• SOS [4] - The SOMIW object-oriented Operating System.

References
• Structure and Encapsulation in Distributed Systems: the Proxy Principle [5]
• Fragmented objects for distributed abstractions [6]
• Globe: A Wide-Area Distributed System [7]
• Integrating fragmented objects into a CORBA environment [8]
• FORMI: An RMI Extension for Adaptive Applications [9]
• FORMI: Integrating Adaptive Fragmented Objects into Java RMI [10]

References
[1] http://aspectix.org
[2] http://aspectix.org/formi
[3] http://www.cs.vu.nl/globe/
[4] http://www-sor.inria.fr/projects/sos/
[5] http://citeseer.ist.psu.edu/shapiro86structure.html
[6] http://citeseer.ist.psu.edu/makpangou92fragmented.html
[7] http://www.cs.vu.nl/~ast/publications/ieeeconc-1999.pdf
[8] http://www4.informatik.uni-erlangen.de/Publications/pdf/Reiser-Hauck-Kapitza-Schmied-Fragments.pdf
[9] http://middleware05.objectweb.org/WSProceedings/ARM05/a2-kapitza.pdf
[10] http://dsonline.computer.org/portal/site/dsonline/menuitem.9ed3d9924aeb0dcd82ccc6716bbe36ec/index.jsp?&pName=dso_level1&path=dsonline/2006/10&file=o10001.xml&xsl=article.xsl

Gemstone (database)

GemStone Database Management System
Paradigm(s): Application framework
Appeared in: 1991
Influenced by: Smalltalk, Object-oriented programming
Influenced: Java EE

GemStone is a proprietary application framework that was first available for Smalltalk as an object database. GemStone Systems was founded in 1982 as Servio Logic, and then became GemStone Systems, Inc. in 1995. GemStone developed its first prototype in 1982 and shipped its first product in 1986. Three of the original co-founding engineers, Bob Bretl, Allen Otis and Monty Williams, have been with the company since its inception. The engineering group resides in Beaverton, Oregon.

GemStone's owners pioneered implementing distributed computing in business systems. Many information system features now associated with Java EE were implemented earlier in GemStone. Gemstone builds on the Smalltalk programming language. GemStone and VisualWave were an early web application server platform (VisualWave and VisualWorks are now owned by Cincom). GemStone played an important sponsorship role in the Smalltalk Industry Council at the time when IBM was backing VisualAge Smalltalk (VA is now at Instantiations [1]). In the area of web application frameworks, JBoss and BEA Weblogic are somewhat analogous to GemStone.

After a major transition, GemStone for Smalltalk continues as "GemStone/S", alongside various C++ and Java products for scalable, multi-tier distributed systems. Although Gemstone isn't often mentioned in print, GemStone systems serve as mission-critical applications,[2] even though many computing industry business publications focus attention on other ecosystems and languages, such as Java or C# for Microsoft .NET, for new development. Systems based on object databases are not as common as those based on ORM (object-relational mapping) frameworks such as TopLink or Hibernate. A recent revival of interest in Smalltalk has occurred as a result of its use to generate Javascript for e-commerce web pages or in web application frameworks such as the Seaside web framework. GemStone frameworks still see some interest for web services and service-oriented architectures.

On May 6, 2010, SpringSource, a division of VMware, announced it had entered into a definitive agreement to acquire GemStone.[3] GemStone Systems, Inc. now develops and markets GemFire, which is notable for CEP (complex event processing), Event Stream Processing, data virtualization and distributed caching.

References
[1] http://www.instantiations.com
[2] Slovenian national gas operator has its billing system running on Smalltalk for 10 years (http://groups.google.com/group/comp.lang.smalltalk/msg/9560a50c14522f13)
[3] SpringSource acquires Gemstone Systems (http://www.gemstone.com/news/2010/05/06/springsource-acquires-gemstone-systems/) (Retrieved May 23, 2011)

External links
• Official website (http://www.gemstone.com/)
• GemStone FAQ (v. 1.0) (http://www.faqs.org/faqs/databases/GemStone-FAQ/)

HyperText Computer

The HyperText Computer (HTC) has been proposed as a model computer. Built on the Hypertext Transfer Protocol (HTTP), the HTC is a general-purpose computer: in its basic instruction set, every operator is implemented by an HTTP request and every operand is a URL referring to a document.

The HTC is a model of a computer built from the ground up containing no implicit information about locality or technology. Locally available processing capacity and storage is presented in the same way as remote processing and storage — that is, as the ability to fulfill HTTP requests. Computers with just enough processing power to run an instance of a user agent can access the same applications as those with additional processing power and storage available. In this case, unplugging the local computing resources does not impact the user's or the programmer's view in any way. The HTC is a foundational model for distributed computing.

The HTC is a redesign of the computer. The transition from computers being connected by networks to the network as a computer has been anticipated for some time. As noted by Cisco's Giancarlo [1], IP networking is rivaling computer backplane speeds, leading him to observe that "It's time to move the backplane on to the network and redesign the computer". Technologies like Ajax at the presentation level and iSCSI at the transport level are so undermining the Fallacies of Distributed Computing that inter- and intra-computer communications not carried over IP are looking like special case optimizations. In this case, other issues such as intellectual property will dominate decisions as to where and how processing is done.

External links
• HyperText Computer Blog [2]

References
[1] http://blogs.zdnet.com/BTL/?p=1945
[2] http://www.davidpratten.com/category/hypertext-computer/
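A hypothetical sketch of the "operators are HTTP requests, operands are URLs" idea follows; the URLs, the add operator and the http_request stub are all invented for illustration and do not come from an actual HTC implementation.

// Hypothetical illustration of the HTC idea that every operator is an HTTP request
// and every operand is a URL. http_request is a stand-in for a real HTTP client.
#include <iostream>
#include <map>
#include <string>

// Stand-in for an HTTP GET: a small in-memory "web" plays the role of remote storage.
std::string http_request(const std::string& url) {
    static const std::map<std::string, std::string> web = {
        {"http://example.org/operands/x", "2"},
        {"http://example.org/operands/y", "40"},
    };
    auto it = web.find(url);
    return it != web.end() ? it->second : "";
}

// An "add" operator: its operands are URLs, and it could itself be exposed as a
// resource that fulfills HTTP requests, local or remote alike.
int add(const std::string& operand_url_a, const std::string& operand_url_b) {
    return std::stoi(http_request(operand_url_a)) + std::stoi(http_request(operand_url_b));
}

int main() {
    std::cout << add("http://example.org/operands/x",
                     "http://example.org/operands/y") << '\n';   // prints 42
    return 0;
}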

High level architecture (simulation)

The High Level Architecture (HLA) is a general purpose architecture for distributed computer simulation systems. Using HLA, computer simulations can interact (that is, communicate data and synchronize actions) with other computer simulations regardless of the computing platforms. The interaction between simulations is managed by a Run-Time Infrastructure (RTI).

Technical overview

A High Level Architecture consists of the following components:
• Interface Specification, that defines how HLA compliant simulators interact with the Run-Time Infrastructure (RTI). The RTI provides a programming library and an application programming interface (API) compliant to the interface specification.
• Object Model Template (OMT), that specifies what information is communicated between simulations, and how it is documented.
• Rules, that simulations must obey in order to be compliant to the standard.

Common HLA terminology
• Federate: an HLA compliant simulation entity.
• Federation: multiple simulation entities connected via the RTI using a common OMT.
• Object: a collection of related data sent between simulations.
• Attribute: data field of an object.
• Interaction: event sent between simulation entities.
• Parameter: data field of an interaction.

Interface specification

The interface specification is object oriented. Many RTIs provide APIs in the C++ and Java programming languages. The interface specification is divided into service groups:
• Federation Management
• Declaration Management
• Object Management
• Ownership Management
• Time Management
• Data Distribution Management
• Support Services

Object model template

The object model template (OMT) provides a common framework for the communication between HLA simulations. The OMT consists of the following documents:
• Federation Object Model (FOM). The FOM describes the shared objects, attributes and interactions for the whole federation.
• Simulation Object Model (SOM). A SOM describes the shared objects, attributes and interactions used for a single federate.
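The following C++ sketch walks a federate through the service groups listed above (federation, declaration, object and time management). The Rti class and its method names are invented for illustration and are not the IEEE 1516 or HLA 1.3 RTI API.

// Hypothetical sketch of a federate's life cycle, following the HLA service groups.
// The Rti class below is a stand-in, not a real RTI ambassador.
#include <iostream>
#include <string>

struct Rti {
    void createFederationExecution(const std::string& federation, const std::string& fom) {
        std::cout << "create " << federation << " using " << fom << '\n';
    }
    void joinFederationExecution(const std::string& federate, const std::string& federation) {
        std::cout << federate << " joins " << federation << '\n';
    }
    // Declaration management: announce what the federate will send and receive.
    void publishObjectClass(const std::string& objectClass) { std::cout << "publish " << objectClass << '\n'; }
    void subscribeObjectClass(const std::string& objectClass) { std::cout << "subscribe " << objectClass << '\n'; }
    // Object management: exchange attribute updates described by the FOM.
    void updateAttributeValues(const std::string& object, const std::string& attribute, double value) {
        std::cout << object << "." << attribute << " = " << value << '\n';
    }
    // Time management: coordinate logical time with the rest of the federation.
    void timeAdvanceRequest(double t) { std::cout << "advance to t=" << t << '\n'; }
    void resignFederationExecution() { std::cout << "resign\n"; }
};

int main() {
    Rti rti;
    rti.createFederationExecution("TrainingFederation", "training.fed");  // Federation management
    rti.joinFederationExecution("TankSimulator", "TrainingFederation");
    rti.publishObjectClass("Vehicle");                                    // Declaration management
    rti.subscribeObjectClass("Vehicle");
    rti.updateAttributeValues("Vehicle.1", "Position", 12.5);             // Object management
    rti.timeAdvanceRequest(1.0);                                          // Time management
    rti.resignFederationExecution();
    return 0;
}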

HLA rules

The HLA rules describe the responsibilities of federations and the federates that join.[1]
1. Federations shall have an HLA Federation Object Model (FOM), documented in accordance with the HLA Object Model Template (OMT).
2. In a federation, all representation of objects in the FOM shall be in the federates, not in the run-time infrastructure (RTI).
3. During a federation execution, all exchange of FOM data among federates shall occur via the RTI.
4. During a federation execution, federates shall interact with the run-time infrastructure (RTI) in accordance with the HLA interface specification.
5. During a federation execution, an attribute of an instance of an object shall be owned by only one federate at any given time.
6. Federates shall have an HLA Simulation Object Model (SOM), documented in accordance with the HLA Object Model Template (OMT).
7. Federates shall be able to update and/or reflect any attributes of objects in their SOM and send and/or receive SOM object interactions externally, as specified in their SOM.
8. Federates shall be able to transfer and/or accept ownership of an attribute dynamically during a federation execution, as specified in their SOM.
9. Federates shall be able to vary the conditions under which they provide updates of attributes of objects, as specified in their SOM.
10. Federates shall be able to manage local time in a way that will allow them to coordinate data exchange with other members of a federation.

Federation Development and Execution Process (FEDEP)

FEDEP, IEEE 1516.3-2003, is a standardized and recommended process for developing interoperable HLA based federations. FEDEP is an overall framework overlay that can be used together with many other, commonly used development methodologies, and is highly relevant for HLA developers.

Distributed Simulation Engineering and Execution Process (DSEEP)

In spring 2007 SISO started revising the FEDEP. It has been renamed to the Distributed Simulation Engineering and Execution Process (DSEEP) and is now an active standard, IEEE 1730-2010 (instead of IEEE 1516.3).

Base Object Model

The Base Object Model (BOM) is a new concept created by SISO [2] to provide better reuse and composability for HLA simulations. More information can be found at Boms.info [3].

Standards

HLA is defined under IEEE Standard 1516:
• IEEE 1516-2010 - Standard for Modeling and Simulation High Level Architecture - Framework and Rules
• IEEE 1516.1-2010 - Standard for Modeling and Simulation High Level Architecture - Federate Interface Specification
• IEEE 1516.2-2010 - Standard for Modeling and Simulation High Level Architecture - Object Model Template (OMT) Specification

• IEEE 1516.3-2003 - Recommended Practice for High Level Architecture Federation Development and Execution Process (FEDEP)
• IEEE 1516.4-2007 - Recommended Practice for Verification, Validation, and Accreditation of a Federation, an Overlay to the High Level Architecture Federation Development and Execution Process
See also: Department of Defense (DoD) Interpretations of the IEEE 1516-2000 series of standards, Release 2 (2003-jul-01) [8].

Previous version:
• IEEE 1516-2000 - Standard for Modeling and Simulation High Level Architecture - Framework and Rules
• IEEE 1516.1-2000 - Standard for Modeling and Simulation High Level Architecture - Federate Interface Specification
• IEEE 1516.1-2000 Errata (2003-oct-16) [7]
• IEEE 1516.2-2000 - Standard for Modeling and Simulation High Level Architecture - Object Model Template (OMT) Specification
• SISO-STD-004-2004 - Dynamic Link Compatible HLA API Standard for the HLA Interface Specification Version 1.3 [9]
• SISO-STD-004.1-2004 - Dynamic Link Compatible HLA API Standard for the HLA Interface Specification (IEEE 1516.1 Version) [10]

Prior to publication of IEEE 1516, the HLA standards development was sponsored by the US Defense Modeling and Simulation Office. The first complete version of the standard, published 1998, was known as HLA 1.3.

Machine-readable parts of the standard, such as XML Schemas, C++, Java and WSDL APIs, as well as FOM/SOM samples, can be downloaded from the IEEE 1516 download area of the IEEE web site [4]. The full standards texts are available at no extra cost to SISO [5] members or can be purchased from the IEEE shop [6].

STANAG 4603

HLA (in both the current IEEE 1516 version and its ancestor "1.3" version) is the subject of the NATO standardization agreement for modeling and simulation, STANAG 4603: Modeling And Simulation Architecture Standards For Technical Interoperability: High Level Architecture (HLA).

HLA Evolved

The IEEE 1516 standard has been revised under the SISO HLA-Evolved Product Development Group and was approved 25-Mar-2010 by the IEEE Standards Activities Board. The revised IEEE 1516-2010 standard includes current DoD standard interpretations and the EDLC API, an extended version of the SISO DLC API, informally known as the Evolved DLC (EDLC) API. Other major improvements include:
• Extended XML support for FOM/SOM, such as Schemas and extensibility
• Fault tolerance support services
• Web Services (WSDL) support/API
• Modular FOMs
• Update rate reduction
• Encoding helpers
• Extended support for additional transportation (such as QoS, IPv6, ...)
• Standardized time representations

DLC API

SISO has developed a complementary HLA API specification known as the Dynamic Link Compatible (DLC) API. The DLC API addresses a limitation of the IEEE 1516 and 1.3 API specifications, whereby federate recompilation was necessary for each different RTI implementation. Note that this API has since been superseded by the HLA Evolved APIs.

Books
• Creating Computer Simulation Systems: An Introduction to the High Level Architecture [11]

References
[1] U.S. Department of Defense, Defense Modeling and Simulation Office (2001). RTI 1.3-Next Generation Programmer's Guide Version 4.
[2] http://www.sisostds.org
[3] http://www.boms.info
[4] http://standards.ieee.org/downloads/1516/
[5] http://www.sisostds.org
[6] http://shop.ieee.org
[7] http://standards.ieee.org/reading/ieee/updates/errata/1516.1-2000.pdf
[8] https://www.dmso.mil/public/library/projects/hla/rti/DoD_interps_1516_Release_2.doc
[9] http://www.sisostds.org/index.php?tg=fileman&idx=get&id=5&gr=Y&path=SISO+Products%2FSISO+Standards&file=SISO-STD-004-2004-Final.pdf
[10] http://www.sisostds.org/index.php?tg=fileman&idx=get&id=5&gr=Y&path=SISO+Products%2FSISO+Standards&file=SIS-STD-004.1-2004.zip
[11] http://www.amazon.com/Creating-Computer-Simulation-Systems-Introduction/dp/0130225118

External links
• proto-x (http://code.google.com/p/proto-x/): a cross-platform, open source C++ library for developing HLA compliant simulations.
• Portico (http://www.porticoproject.org): an open source, cross-platform HLA RTI implementation.

IBZL

IBZL - infinite bandwidth zero latency - is a thought experiment that asks: what will happen when bandwidth (for connecting to the Internet, for example) is so great, and latency so small, that it no longer matters? What will be the applications and services that become widespread? The IBZL programme[1] was started by the Open University and Manchester Digital in the UK.

Background

Next Generation Access (NGA) broadband is promoted strongly by policy makers as underpinning future economic growth. There is, however, a lack of examples of the ways that NGA will be used, or of the sort of innovations that may come about as a result of widespread access to NGA. A parallel can be drawn with the advent of first generation broadband, which arguably created the conditions for the success of innovations such as Wikipedia, Youtube and Facebook, but the most innovative aspects of these - open source knowledge, video sharing and always-on social networking - were not foreseen. The IBZL programme has used a process (Imagine/Triple Task Method) to explore the potentially novel applications of NGA and provide some ideas as to the key components of the future inter-networked landscape.

Next Generation Access (NGA)

While there is no universally agreed definition of what qualifies a network to be considered 'next generation', three elements are usually considered essential:
• NGA will provide a significant increase in the transmission speeds available to the domestic or small-business end-user. The speeds cited vary widely, from 25 Mbps (e.g. What is Digital Region? 2009) to over 200 Mbps. The 'Digital Britain' report[2] refers to 'next generation service up to' 40 Mbps, and more recently UK ministers have referred to 50 Mbps and faster[3]. To put this in context, Google (Google, 2010) announced, in early 2010, a plan for experimental community networks operating at 100 Gbps.
• In contrast with currently widespread ADSL technologies, it is generally assumed that NGA will offer a step-change in upload as well as download speeds, reflecting the demands of increasingly user-generated content. For some, NGA bandwidth should be symmetrical, though others have a more relaxed view (e.g. OFCOM 2009[4]).
• NGA is widely taken to offer improved 'quality of service' (QoS)[5]. QoS here is taken to mean not only service reliability and availability, but also indicators of network performance including latency (the time taken for data packets to travel from source to destination), jitter (the variation in latency among data packets) and data loss (the loss of data packets due to network congestion). Latency, jitter and data loss are important aspects of the usability of applications such as internet telephony or video.

IBZL as a way to develop NGA

The Infinite Bandwidth, Zero Latency (IBZL[6]) initiative was designed as a contribution to innovation by identifying new applications that will be made possible by NGA as it evolves, and that may contribute to the continuing development of innovative digital industries. 'Infinite bandwidth' and 'zero latency' are not meant literally; they are a shorthand for networks where bandwidth and latency cease to be limiting factors. IBZL addresses a gap in policy and strategic thought, where relatively little attention has been given to what kinds of novel application are made feasible by networks which are relatively free of speed and latency capacity constraints.

The IBZL process is intended as a means to explore and speculate on potential future technologies, to imagine a digital future. To facilitate the process, the Imagine methodology was adapted and applied as a form of future workshop for deep reflection on possible scenarios (numerous examples of this kind of work exist, but see for example: List 2006[7]). There have been two IBZL workshops held in Manchester, UK, in May and October 2010. They brought together invited public sector, private sector and academic participants, and were organized jointly by the Open University Faculty of Mathematics, Computing and Technology [8] and Manchester Digital, a trade association of creative and digital companies in Manchester and the North West of England.

IBZL outcomes

The workshops produced ideas that will be further developed. Five of these are briefly summarised below.

'Always on social space' - virtual spaces in which the connection is always on/perpetual, between people living and working remotely, in order to synthesize high quality informational and other connections, supporting the kind of occasional, informal, spontaneous, real-time social encounters ('collisions') that happen when people are co-located. This would not only allow a new level of remote working and collaboration; the sense of living in proximity with friends and relations could also transform the lives of older people who need to stay longer in their homes as the population ages.

'Real artisans in a virtual world' - the networked production of artefacts by artisans in multiple locations. Next generation technology could support real-time collaborative generation of product ideas followed by the process of design, development and distributed fabrication.

'Intelligent matchmaking' - bringing suppliers and consumers together optimally for business, social and educational interactions, between organizations, products and people. Behind this would be a thorough analysis of organizations, network infrastructure and the network of relationships between service providers.

This could turn the conventional trading pattern on its head, with artisans in the developing world crafting products for "3D printing" in the developed world, effectively re-engineering (or at least challenging) current craft value chains.

'Latency mapping' - the evolution of next generation networks will be uneven, resulting in a 'geography of latency' and the disruption of 'simultaneous time'. The kinds of networked application that are feasible between two network locations will be a function of a range of factors, including spatial distribution and technical/geographic factors. Latency maps would be an enabling tool to identify the kinds of applications possible within and between locations.

'Peer-to-peer processor time-sharing' - projects like SETI@home use the spare processor capacity of millions of personal computers to process batches of number-crunching tasks, co-ordinated among volunteers by a central 'master' application. Next generation networks could allow real-time peer-to-peer sharing, so that when an application needs additional capacity for processor-heavy tasks like video rendering it could have access to effectively limitless extra computing power.

References
[1] Infinite Bandwidth, Zero Latency (IBZL) project website (http://www.ibzl.net)
[2] Department for Business, Innovation and Skills (2009). Digital Britain: Final Report. Department for Business Innovation and Skills and the Department for Culture, Media and Sport, London. Page 54
[3] INCA (2010). INCA Policy Briefing No. 1: Broadband Delivery UK - Industry Day (http://www.inca.coop/policy/inca-policy-briefing-no1). Independent Networks Cooperative Association
[4] OFCOM (2009). Delivering Super-Fast Broadband in the UK: Promoting investment and competition. OFCOM
[5] OFCOM (2009). Delivering Super-Fast Broadband in the UK: Promoting investment and competition. OFCOM
[6] Infinite Bandwidth, Zero Latency (IBZL) project website (http://www.ibzl.net)
[7] List, D. (2006). "Action Research Cycles for Multiple Futures Perspectives." Futures 38: 673-684.
[8] http://mct.open.ac.uk/

External links
• IBZL project website (http://www.ibzl.net)

Kayou

kayou is a distributed operating system project developed on top of the kaneton microkernel in the vein of Amoeba. kayou provides a powerful distribution-oriented interface which enables applications to take advantage of the resources of networked computers. Note that the kayou project is part of the Opaak educational trilogy along with kastor and kaneton. kayou is still in its design phase, hence not much information is actually available about its design or its implementation.

External links
• kayou official website [1]

References
[1] http://kayou.opaak.org

Live distributed object

Definitions

The term live distributed object (also abbreviated as live object) refers to a running instance of a distributed multi-party (or peer-to-peer) protocol, viewed from the object-oriented perspective as an entity that has a distinct identity, may encapsulate internal state and threads of execution, and that exhibits a well-defined externally visible behavior. The key programming language concepts, as applied to live distributed objects, are defined as follows.

(Figure: an illustration of the basic concepts involved in the definition of a live distributed object.)

• Identity. The identity of a live distributed object is determined by the same factors that differentiate between instances of the same distributed protocol. The object consists of a group of software components physically executing on some set of physical machines and engaged in mutual communication, each executing the distributed protocol code with the same set of essential parameters, such as the name of a multicast group, the identifier of a publish-subscribe topic, the identity of a membership service, etc. Thus, for example, publish-subscribe channels and multicast groups are examples of live distributed objects: for each channel or group, there exists a single instance of a distributed protocol running among all computers sending, forwarding, or receiving the data published in the channel or multicast within the group. In this case, the object's identity is determined by the identifier of the channel or group, qualified with the identity of the distributed system that provides, controls, and manages the given channel or group. In the case of multicast, the identity of the system might be determined, for example, by the address of the membership service (the entity that manages the membership of the multicast group).

• Proxies (replicas). The proxy or a replica of a live object is one of the software component instances involved in executing the live object's distributed protocol. The object can thus be alternatively defined as a group of proxies engaged in communication, running on different machines distributed across the network, jointly maintaining some distributed state, and coordinating their operations. The term proxy stresses the fact that a single software component does not in itself constitute an object; rather, it serves as a gateway through which an application can gain access to a certain functionality or behavior that spans across a set of computers. In this sense, the concept of a live distributed object proxy generalizes the notion of a RPC, RMI, or .NET remoting client-side proxy stub.

• Behavior. The behavior of a live distributed object is characterized by the set of possible patterns of external interactions that its proxies can engage in with their local runtime environments. These interactions are modeled as exchanges of explicit events (messages).

• State. The state of a live distributed object is defined as the sum of all internal, local states of its proxies. By definition, it is distributed and replicated. The different replicas of the object's state may be strongly or only weakly consistent, depending on the protocol semantics: an instance of a consensus protocol will have the state of its replicas strongly consistent, whereas an instance of a leader election protocol will have a weakly consistent state. In this sense, the term live distributed object generalizes the concept of a replicated object; the latter is a specific type of live distributed object that uses a protocol such as Paxos, virtual synchrony, or state machine replication to achieve strong consistency between the internal states of its replicas. The state of a live distributed object should be understood as a dynamic notion: as a point (or consistent cut) in a stream of values, rather than as a particular value located in a given place at a given time. For example, the externally visible state of a leader election object would be defined as the identity of the currently elected leader. The identity is not stored at any particular location; rather, it materializes as a stream of messages of the form elected(x) concurrently produced by the proxies involved in executing this protocol, and concurrently consumed by instances of the application using this protocol.

• Interfaces (endpoints). The interface of a live distributed object is defined by the types of interfaces exposed by its proxies; these may include event channels and various types of graphical user interfaces. Interfaces exposed by the proxies are referred to as the live distributed object's endpoints. The term endpoint instance refers to a single specific event channel or user interface exposed by a single specific proxy. To say that a live object exposes a certain endpoint means that each of its proxies exposes an instance of this endpoint to its local environment, and each of the endpoint instances carries events of the same types (or binds to the same type of a graphical display).

• References. The reference to a live object is a complete set of serialized, portable instructions for constructing its proxy. To dereference a reference means to locally parse and follow these instructions on a particular computer, to produce a running proxy of the live object. Defined this way, a live object reference plays the same role as a Java reference, a C/C++ pointer, or a web service's WSDL description; it contains complete information sufficient to locate the given object and interact with it. Since live distributed objects may not reside in any particular place (but rather span across a dynamically changing set of computers), the information contained in a live distributed object's reference cannot be limited to just an address. If the object is identified by some sort of a globally unique identifier (as might be the case for publish-subscribe topics or multicast groups), the reference must specify how this identifier is resolved, by recursively embedding a reference to the appropriate name resolution object.

• Types. The type of a live distributed object determines the patterns of external interactions with the object; it is determined by the types of endpoints and graphical user interfaces exposed by the object's proxies, and the patterns of events that may occur at the endpoints. The constraints that the object's type places on event patterns may span across the network. For example, type atomic multicast might specify that if an event of the form deliver(x) is generated by one proxy, a similar event must be eventually generated by all non-faulty proxies (proxies that run on computers that never crash and that never cease to execute or are excluded from the protocol; the precise definition might vary). Much as it is the case for types in Java-like languages, there might exist many very different implementations of the same type: for example, behavior characteristic to atomic multicast might be exhibited by instances of distributed protocols such as virtual synchrony or Paxos.
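The definitions above are abstract, so a toy sketch may help to read them. The following Python fragment is not part of the original formulation; it simply illustrates the vocabulary: several proxies share the identity of one "leader election" live object, each proxy exposes an event-channel endpoint to its local application, and the object's externally visible state materializes only as the stream of elected(x) events delivered at those endpoints. All names are invented for the example, and the "distributed protocol" is reduced to a single in-process coordinator.

# Toy illustration of the proxy/endpoint vocabulary above -- purely local.
class LeaderElectionObject:
    """Stands in for one instance of a distributed leader-election protocol."""
    def __init__(self, identity):
        self.identity = identity          # e.g. a group name qualifying the object
        self.proxies = []

    def join(self, on_elected):
        proxy = Proxy(self, on_elected)   # each proxy exposes one endpoint instance
        self.proxies.append(proxy)
        return proxy

    def elect(self, leader):
        # The state is not stored in any one place; it shows up as a stream of
        # elected(x) events concurrently delivered at every proxy's endpoint.
        for proxy in self.proxies:
            proxy.deliver(leader)

class Proxy:
    def __init__(self, obj, on_elected):
        self.obj = obj
        self.on_elected = on_elected      # the local endpoint: an event callback

    def deliver(self, leader):
        self.on_elected(leader)

group = LeaderElectionObject("group:42")
a = group.join(lambda x: print("proxy A sees leader", x))
b = group.join(lambda x: print("proxy B sees leader", x))
group.elect("node-7")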

The semantics and behavior of live distributed objects can be characterized in terms of distributed data flows; the set of messages or events that appear on the instances of a live object's endpoint forms a distributed data flow.[1] [2]

History
Early ideas underlying the concept of a live distributed object have been influenced by a rich body of research on object-oriented environments, programming language embeddings, and protocol composition frameworks, dating back at least to the actor model developed in the early 1970s; a comprehensive discussion of the relevant prior work can be found in Krzysztof Ostrowski's Ph.D. dissertation.[3]

The term live distributed object was first used informally in a series of presentations given in the fall of 2006 at an ICWS conference [4], at an STC [5] conference [6], and at the MSR labs in Redmond, WA [7], and then formally defined in 2007, in an IEEE Internet Computing article.[8] Originally, the term was used to refer to the types of dynamic, interactive Web content that is not hosted on servers in data centers and is internally powered by instances of reliable multicast protocols. The word live expressed the fact that the displayed information is dynamic, interactive, and represents current, fresh, live content that reflects recent updates made by the users (as opposed to static, read-only, and archival content that has been pre-assembled). The word distributed expressed the fact that the information is not hosted, stored at a server in a data center, but rather, it is replicated among the end-user computers, and updated in a peer-to-peer fashion through a stream of multicast messages that may be produced directly by the end-users consuming the content; a more comprehensive discussion of the live object concept in the context of Web development can be found in Krzysztof Ostrowski's [9] Ph.D. dissertation.[3]

The more general definition presented above was first proposed in 2008, in a paper published at the ECOOP conference.[10] The extension of the term has been motivated by the need to model live objects as compositions of other objects; in this sense, the concept has been inspired by Smalltalk, which pioneered the uniform perspective that everything is an object, and Jini, which pioneered the idea that services are objects. When applied to live distributed objects, the perspective dictates that their constituent parts, which includes instances of distributed multi-party protocols used internally to replicate state, should also be modeled as live distributed objects. The need for uniformity implies that the definition of a live distributed object must unify concepts such as live Web content, message streams, and instances of distributed multi-party protocols.

The first implementation of the live distributed object concept, as defined in the ECOOP paper [10], was the Live Distributed Objects [11] platform developed by Krzysztof Ostrowski [9] at Cornell University. The platform provided a set of visual, drag and drop tools for composing hierarchical documents resembling web pages, and containing XML-serialized live object references. Visual content such as chat windows, shared desktops, and various sorts of mashups could be composed by dragging and dropping components representing user interfaces and protocol instances onto a design form, and connecting them together. Since the moment of its creation, a number of extensions have been developed to embed live distributed objects in Microsoft Office documents [12], and to support various types of hosted content such as Google Maps.[13] [14] [15] [16] [17] [18] [19] [20] [21] As of March 2009, the platform is being actively developed by its creators.

'Extensible Web Services Architecture for Notification in Large-Scale Systems'. X.. 11(6):72-78. [4] Ostrowski. Companion '08. edu/ ~krzys/ krzys_ladis2009.. "ALGE (A Live Google Earth)". jsp?arnumber=4032049. and Birman. S.. 2008.. R. "Using live distributed objects for office automation". http:/ / portal. cornell. [13] http:/ / liveobjects. Lecture Notes In Computer Science. September 2006. cs. handle. K. edu/ community/ 4/ index. K. cornell. edu [12] Ahnn. "Cornell Yahoo! Live Objects". cornell. cornell. edu/ community/ index. (2008). http:/ / www. H. Birman. Birman. Microsoft Research. 1462743. cornell. December 01 .. K. and Subramaniyan. edu/ community/ 1/ index. 5142. cornell.. (2008). (2008). html [16] Kashyap. Cornell University. org/ prog/ displayevent.. K. html [20] Mahajan. and Nagarajappa. edu/ community/ 6/ index. K. New York. Vitek. edu/ community/ 2/ index.. cs. Submitted to the International Conference on Object Oriented Programming. http:/ / liveobjects.D.Live distributed object 83 References [1] Ostrowski. html [19] Gupta. aspx?rID=7870& fID=2276. pdf [2] Ostrowski. and Sakoda. cornell. K. IEEE Internet Computing. "Implementing Reliable Event Streams in Large Systems via Distributed Data Flows and Recursive Delegation". 2008. Berlin. IL. edu/ ~krzys/ krzys_debs2009. cfm?id=1428508. http:/ / liveobjects. cornell..05.. [7] Ostrowski. cs. "Live Maps". cs. org/ citation. cornell. Dolev. [5] http:/ / www. Z. and Wakankar. edu/ ~krzys [10] Ostrowski. 3rd ACM International Conference on Distributed Event-Based Systems (DEBS 2009). Cyprus. "Programming with Live Distributed Objects". 2009. 3rd ACM SIGOPS International Workshop on Large Scale Distributed Systems and Middleware (LADIS 2009). cornell. WA. cs. html [18] Prateek. and van Renesse. Proceedings of the 22nd European Conference on Object-Oriented Programming. html [17] Dong. ieee. Paphos. cs. (2008). [9] http:/ / www.. and Dolev. [8] Ostrowski.. R. K. K. July 6–9. NY. (2009). "Programming Live Distributed Objects with Distributed Data Flows". Birman.. org/ citation. USA. Dolev. cornell. (2008). November–December 2007. http:/ / liveobjects. K. K. http:/ / liveobjects. 2009. Redmond. cs. http:/ / portal. (2009). First ACM Workshop on Scalable Trusted Computing (ACM STC 2006). and Birman. H. "Live Google Earth UI". K. U. D. "Live Distributed Objects: Enabling the Active Web". R. org/ citation.. Heidelberg. pdf [3] Ostrowski. html [21] Wadhwa. Ed. http:/ / hdl. (2008). edu/ ~shxu/ stc06/ [6] Ostrowski. edu/ community/ 7/ index. acm. Ostrowski. Languages and Applications (OOPSLA 2009). (2008). 463-489. http:/ / ieeexplore. jsp?isnumber=4376216& arnumber=4376231. "Live Distributed Objects". and Birman. Chicago. net/ 1813/ 10881. Big Sky. acm. cornell. (2008). http:/ / ieeexplore.. A. July 07 . and Ahnn. J. S.. Systems. "Storing and Accessing Live Mashup Content in the Cloud". [11] http:/ / liveobjects. K. (2009). "Distributed Google Earth". html [14] Ostrowski. http:/ / www. vol. (2008). VA.11. http:/ / liveobjects. edu/ community/ 3/ index. cornell. http:/ / www. http:/ / portal. K. http:/ / www. Leuven. "Goole Earth Live Object". November 2006. cfm?id=1179477.. "Integrate Live Objects with Flickr Web Service". Dolev. 'Scalable Group Communication System for Scalable Trust'... K. org/ xpl/ freeabs_all. November 2006. and Zhang. http:/ / liveobjects. J.. J... researchchannel. K. (2008). October 11. D.. org/ xpls/ abs_all. C. Nashville.. K. S. cs. (2007). pdf [15] Akdogan. ACM. edu/ community/ 5/ index. cs. acm. D. 
Springer-Verlag. ieee. TN. 1428536. cs. K. cs. A. edu/ ~krzys/ krzys_oopsla2009. S. MT. Belgium. http:/ / liveobjects. cs. D. utsa. Ph. Birman. 30-35. IEEE International Conference on Web Services (ICWS 2006). K. and Polepalli. Dissertation. D. USA.. QuickSilver Scalable Multicast.. Fairfax. cs. and Vora. Proceedings of the ACM/IFIP/USENIX Middleware '08 Conference Companion. Sankar. cfm?id=1462735. (2006).. cs. Birman. html .

Master/slave (technology)

Master/slave is a model of communication where one device or process has unidirectional control over one or more other devices. In some systems a master is elected from a group of eligible devices, with the other devices acting in the role of slaves.[1] [2] [3]

Examples
• In database replication, the master database is regarded as the authoritative source, and the slave databases are synchronized to it.
• Peripherals connected to a bus in a computer system.
• Duplication is often done with several cassette tape or compact disc recorders linked together, so that recording is done in parallel. Operating the controls on the master triggers the same commands on the slaves.
• In parallel ATA hard drive arrangements, the terms master and slave are used but neither drive has control over the other. "Master" is merely another term for device 0 and "slave" indicates device 1. The terms also do not indicate precedence of one drive over the other in most situations.
• On the Macintosh platform, Target Disk Mode allows a computer to operate as an external FireWire hard disk, essentially a disk slave mode. Some older pre-FireWire Macintoshes had a similar controversial "SCSI Disk Mode".
• Railway locomotives operating in multiple (for example: to pull loads too heavy for a single locomotive) can be referred to as a master/slave configuration, with the operation of all locomotives in the train slaved to the controls of the first locomotive. See Multiple-unit train control.

Controversy
Sometimes the terms master and slave are deemed offensive.[5] [6] On November 2003, the County of Los Angeles sent an e-mail to its suppliers asking them not to use these terms:

Subject: IDENTIFICATION OF EQUIPMENT SOLD TO LA COUNTY
Date: Tue, 18 Nov 2003 14:21:16 -0800
From: "Los Angeles County"

The County of Los Angeles actively promotes and is committed to ensure a work environment that is free from any discriminatory influence be it actual or perceived. As such, it is the County's expectation that our manufacturers, suppliers and contractors make a concentrated effort to ensure that any equipment, supplies or services that are provided to County departments do not possess or portray an image that may be construed as offensive or defamatory in nature.
One such recent example included the manufacturer's labeling of equipment where the words "Master/Slave" appeared to identify the primary and secondary sources. Based on the cultural diversity and sensitivity of Los Angeles County, this is not an acceptable identification label.
We would request that each manufacturer, supplier and contractor review, identify and remove/change any identification or labeling of equipment or components thereof that could be interpreted as discriminatory or offensive in nature before such equipment is sold or otherwise provided to any County department.
Thank you in advance for your cooperation and assistance.
Joe Sandoval, Division Manager
Purchasing and Contract Services [4]

Internal Services Department
County of Los Angeles

Many in the Information Technology field rebuff this claim of discrimination and offence as ridiculous, noting that the master/slave terminology accurately reflects what is going on inside the device and that this was not intended in any way to be a reference to slavery as it existed in the United States. There were rumors of a major push to change the way hardware manufacturers refer to these devices, but it has not had much effect on most of the products being produced. (See also political correctness.)
The designation of hard drives as master/slave may decline in a few years, with SATA replacing older IDE (PATA) drives. This standard allows only one drive per connection, and does not require the use of master/slave terms.

References
[1] master/slave – a searchNetworking definition (http://searchnetworking.techtarget.com/sDefinition/0,,sid7_gci783492,00.html)
[2] Description of the Microsoft Computer Browser Service from Microsoft KnowledgeBase (http://support.microsoft.com/kb/188001)
[3] Information on Browser Operation from Microsoft KnowledgeBase (http://support.microsoft.com/default.aspx?scid=KB;en-us;102878)
[4] Urban Legends Reference Pages: Inboxer Rebellion (Master/Slave) from www.snopes.com (http://www.snopes.com/inboxer/outrage/master.asp)
[5] L.A. County Bans Use Of "Master/Slave" Term from Slashdot (http://slashdot.org/article.pl?sid=03/11/25/0014257&mode=thread&tid=103&tid=133&tid=186&tid=99)
[6] 'Master' and 'slave' computer labels unacceptable, officials say (http://www.cnn.com/2003/TECH/ptech/11/26/master.term.reut/index.html) (Wednesday, November 26, 2003, CNN)

Membase

Developer(s): Couchbase, Inc. (merged from NorthScale), Zynga, NHN
Stable release: 1.7.1 / July 26, 2011
Written in: C++, Erlang
Operating system: Cross-platform
Type: distributed key/value database system
License: Apache License
Website: http://membase.org/

Membase (pronunciation: mem-base) is an Open Source (Apache 2.0 license) distributed, key-value database management system optimized for storing data behind interactive web applications. These applications must service many concurrent users – creating, storing, retrieving, aggregating, manipulating and presenting data. In support of these kinds of application needs, membase is designed to provide simple, fast, easy to scale key-value data operations with low latency and high sustained throughput. It is designed to be clustered from a single machine to very large scale deployments. Membase has wide language and application framework support due to its on-the-wire protocol compatibility with memcached.[1]

History
Membase was developed by several leaders of the memcached project, who had founded a company, NorthScale, expressly to meet the need for a key-value database that enjoyed all the simplicity, speed, and scalability of memcached, but also provided the storage, persistence and querying capabilities of a database. The original membase source code was contributed by NorthScale and project co-sponsors Zynga and NHN to a new project on membase.org [2] in June 2010. As of February 8, 2011, the Membase project founders and Membase, Inc. announced a merger with CouchOne (a company with many of the principal players behind CouchDB) with an associated project merger. The merged project will be known as Couchbase.[3]

Design drivers
According to the Membase site and presentations, Membase design decisions are weighed against three non-negotiable requirements: membase is intended to be simple, fast, and elastic.[4]
Membase intends to be extremely easy to manage. Every node is alike in a membase cluster – clone a node, join it to the cluster and press the rebalance button to automatically rebalance data to it. For those familiar with memcached, membase provides on-the-wire client protocol compatibility; in fact, membase directly incorporates memcached “front end” source code, leveraging the memcached engine interface, guaranteeing compatibility today and into the future. In the parlance of Eric Brewer’s CAP theorem, membase is a CP type system. By design, membase behaves like memcached but is designed to add disk persistence (with hierarchical storage management), data replication, live cluster reconfiguration, rebalancing and multi-tenancy with data partitioning.
Membase distributes data and data operation I/O across commodity servers (or VMs), replicates data for high-availability, transparently caches data in main memory, and persists the data with a design for multi-tier storage management model (planned to support Solid-state drive and Hard disk drive media).

Membase claims to scale with linear cost. Employing commodity servers, virtual machines or cloud machine instances, data management resources can be dynamically matched to the needs of an application with little effort. Servers can be added to, or removed from, a running cluster with no application downtime. It is a consistently low-latency and high-throughput processor of data operations; when operating out of memory, most operations occur in far less than 1 ms (assuming gigabit Ethernet). It is multi-threaded, with low lock contention; it automatically de-duplicates writes and is internally asynchronous everywhere possible.

Data model

Key Features (persistence, replication/failover, scalability/performance)

Persistence
• Asynchronously writes data to disk after acknowledging write to client. In version 1.7 and later, applications can ensure data is synced to more than one server, while disk writes are still asynchronous.[5]
• Supports working set greater than a memory quota per "node" or "bucket"
• Tunables to define item ages that affect when data is persisted.
• Tunables to affect how max memory and migration from main-memory to disk is handled.[6]
• Configurable “tap” interface: External systems can subscribe to filtered data streams – supporting, for example, full text search indexing, data analytics or archiving.[7]

Replication and failover
• Multi-model replication support: Peer-to-peer replication support with underlying architecture supporting master-slave replication.
• Configurable replication count: Balance resource utilization with availability requirements
• High-speed failover: Fast failover to replicated items based upon request

Scalability and performance
• Distributed object store: Easily store and retrieve large volumes of data from any application, using any language or application framework
• Dynamic cluster resizing and rebalancing: Effortlessly grow or shrink a membase cluster, adapting to changing data management requirements of an application
• Guaranteed data consistency: Never grapple with consistency issues in your application – no quorum reads required
• High sustained throughput
• Low, predictable latency
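Because membase is on-the-wire compatible with memcached (see the protocol-compatibility discussion above), an ordinary memcached client library can be pointed at a membase node. The following is a minimal, illustrative sketch using the python-memcached library; the host, port and key names are placeholders for this example, not values taken from the Membase documentation.

# Illustrative only: membase speaks the memcached text protocol on its data port,
# so a stock memcached client can read and write keys. Host/port are assumptions.
import memcache

client = memcache.Client(["127.0.0.1:11211"])   # address of one cluster node

# Store a value (membase persists it to disk asynchronously, per the text above)
client.set("user:1000:name", "Alice")

# Read it back; a cache/database miss returns None
print(client.get("user:1000:name"))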

Prominent users
• Zynga – membase is the key-value database behind FarmVille[8]
• NHN[9]

References
[1] http://code.google.com/p/memcached/wiki/NewProtocols
[2] http://www.membase.org
[3] Couchbase Website (http://www.couchbase.com/)
[4] membase.org: Does the world really need another NoSQL Database? (http://www.membase.org/whatsdifferent.html)
[5] membase.org wiki: membase Background Flush (http://wiki.membase.org/bin/view/Main/FlushingItems)
[6] membase.org wiki: Disk > Memory (http://wiki.membase.org/bin/view/Main/DiskGtMemory)
[7] Want to know what your memcached servers are doing? Tap them. (http://blog.northscale.com/northscale-blog/2010/03/want-to-know-what-your-memcached-servers-are-doing-tap-them.html)
[8] NorthScale Releases High-Performance NoSQL Database (http://www.northscale.com/pr/NorthScale-Membase-Server-beta.html)
[9] NorthScale Releases High-Performance NoSQL Database (http://www.northscale.com/pr/NorthScale-Membase-Server-beta.html)

Commercially supported distributions
• Couchbase Membase Server (http://www.couchbase.com/products-and-services/membase-server)

External links
• Official membase site (http://www.membase.org)
• membase wiki (http://wiki.membase.org)
• membase mailing list (http://groups.google.com/group/membase)

Message consumer

A message consumer is a Java interface for distributed systems. It is used to receive messages from a destination. The communication may be synchronous or asynchronous. To create a message consumer, a destination object is passed to a message-consumer creation method that is supplied by the session of this object. Created with a selector, it is possible to send a message to particular message consumer objects.
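The description above refers to the Java (JMS-style) interface. As a language-neutral illustration of the same pattern – a consumer created from a session for a destination, optionally with a selector, receiving synchronously – here is a small Python sketch. The class and method names are invented for the example and do not correspond to the Java API; non-matching messages are simply skipped, which a real broker would not do.

# Toy message-consumer pattern: a consumer is created from a session for a
# destination, with an optional selector that filters which messages it accepts.
import queue

class Destination:
    def __init__(self, name):
        self.name = name
        self.messages = queue.Queue()

    def send(self, message):
        self.messages.put(message)

class MessageConsumer:
    def __init__(self, destination, selector=None):
        self.destination = destination
        self.selector = selector              # predicate deciding which messages to accept

    def receive(self):
        """Synchronous receive: blocks until a matching message arrives."""
        while True:
            message = self.destination.messages.get()
            if self.selector is None or self.selector(message):
                return message

class Session:
    def create_consumer(self, destination, selector=None):
        # Mirrors "a destination object is passed to a message-consumer
        # creation method that is supplied by the session".
        return MessageConsumer(destination, selector)

orders = Destination("orders")
consumer = Session().create_consumer(orders, selector=lambda m: m.get("priority") == "high")
orders.send({"priority": "high", "body": "ship now"})
print(consumer.receive())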

Message passing

Message passing in computer science is a form of communication used in parallel computing, object-oriented programming, and interprocess communication. In this model, processes or objects can send and receive messages (comprising zero or more bytes, complex data structures, or even segments of code) to other processes. By waiting for messages, processes can also synchronize.

Overview
Message passing is the paradigm of communication where messages are sent from a sender to one or more recipients. Forms of messages include (remote) method invocation, signals, and data packets. When designing a message passing system several choices are made:
• Whether messages are transferred reliably
• Whether messages are guaranteed to be delivered in order
• Whether messages are passed one-to-one, one-to-many (unicasting or multicast), or many-to-one (client–server)
• Whether communication is synchronous or asynchronous
Prominent theoretical foundations of concurrent computation, such as the Actor model and the process calculi, are based on message passing. Implementations of concurrent systems that use message passing can either have message passing as an integral part of the language, or as a series of library calls from the language. Examples of the former include many distributed object systems; examples of the latter include microkernel operating systems, which pass messages between one kernel and one or more server blocks, and the Message Passing Interface used in high-performance computing.

Message passing systems
Message passing systems have been called "shared nothing" systems because the message passing abstraction hides underlying state changes that may be used in the implementation of sending messages. Distributed object and remote method invocation systems like ONC RPC, Corba, Java RMI, DCOM, SOAP, .NET Remoting, CTOS, QNX Neutrino RTOS, OpenBinder, D-Bus and similar are message passing systems.
Message passing model based programming languages typically define messaging as the (usually asynchronous) sending (usually by copy) of a data item to a communication endpoint (Actor, process, thread, socket, etc.). Such messaging is used in Web Services by SOAP. This concept is the higher-level version of a datagram except that messages can be larger than a packet and can optionally be made reliable, durable, secure, and/or transacted.
Messages are also commonly used in the same sense as a means of interprocess communication; the other common technique being streams or pipes, in which data are sent as a sequence of elementary data items instead (the higher-level version of a virtual circuit).

Synchronous versus asynchronous message passing
Synchronous message passing systems require the sender and receiver to wait for each other to transfer the message. That is, the sender will not continue until the receiver has received the message. Synchronous communication has two advantages. The first advantage is that reasoning about the program can be simplified in that there is a synchronisation point between sender and receiver on message transfer. The second advantage is that no buffering is required: the message can always be stored on the receiving side, because the sender will not continue until the receiver is ready.
Asynchronous message passing systems deliver a message from sender to receiver, without waiting for the receiver to be ready. The advantage of asynchronous communication is that the sender and receiver can overlap their computation because they do not wait for each other. Synchronous communication can be built on top of asynchronous communication by ensuring that the sender always waits for an acknowledgement message from the receiver before continuing.
The buffer required in asynchronous communication can cause problems when it is full. A decision has to be made whether to block the sender or whether to discard future messages. If the sender is blocked, it may lead to an unexpected deadlock. If messages are dropped, then communication is no longer reliable. A subroutine call or method invocation will not exit until the invoked computation has terminated; asynchronous message passing, by contrast, can result in a response arriving a significant time after the request message was sent.
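As a concrete illustration of the distinction above, the following self-contained Python sketch sends messages between two threads over a queue: the asynchronous send returns immediately, while the "synchronous" variant blocks until the receiver acknowledges the message (exactly the acknowledgement construction described above). This is only a toy model of the two styles, not an implementation taken from any particular message passing system.

import queue
import threading

requests = queue.Queue()   # channel from sender to receiver
acks = queue.Queue()       # channel used to acknowledge synchronous sends

def receiver():
    while True:
        payload, want_ack = requests.get()
        if payload is None:          # sentinel: shut down
            break
        print("received:", payload)
        if want_ack:
            acks.put("ok")           # let a synchronous sender continue

def send_async(msg):
    requests.put((msg, False))       # returns immediately; message is buffered

def send_sync(msg):
    requests.put((msg, True))
    acks.get()                       # block until the receiver has consumed the message

t = threading.Thread(target=receiver)
t.start()
send_async("hello")                  # sender does not wait for the receiver
send_sync("world")                   # sender waits for the acknowledgement
requests.put((None, False))
t.join()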

Message passing versus calling
Message passing should be contrasted with the alternative communication method for passing information between programs – the call. In a traditional call, arguments are passed to the "callee" (the receiver) typically by one or more general purpose registers or in a parameter list containing the addresses of each of the arguments. This form of communication differs from message passing in at least three crucial areas:
• total memory usage
• transfer time
• locality
In message passing, each of the arguments has to have sufficient available extra memory for copying the existing argument into a portion of the new message. This applies irrespective of the size of the original arguments – so if one of the arguments is (say) an HTML string of 31,000 octets describing a web page (similar to the size of this article), it has to be copied in its entirety (and perhaps even transmitted) to the receiving program (if not a local program). By contrast, for the call method, only an address of say 4 or 8 bytes needs to be passed for each argument and may even be passed in a general purpose register, requiring zero additional storage and zero "transfer time". This of course is not possible for distributed systems, since an (absolute) address – in the caller's address space – is normally meaningless to the remote program (however, a relative address might in fact be usable if the callee had an exact copy of at least some of the caller's memory in advance).

Message passing and locks
Message passing can be used as a way of controlling access to resources in a concurrent or asynchronous system. One of the main alternatives is mutual exclusion or locking. Examples of resources include shared memory, a disk file or region thereof, and a database table or set of rows.
In locking, a resource is essentially shared, and processes wishing to access it (or a sector of it) must first obtain a lock. Once the lock is acquired, other processes are blocked out, ensuring that corruption from simultaneous writes does not occur. After the process with the lock is finished with the resource, the lock is then released.
With the message-passing solution, it is assumed that the resource is not exposed; rather, all changes to it are made by an associated process, so that the resource is encapsulated.
Processes wishing to access the resource send a request message to the handler. If the resource (or subsection) is available, the handler makes the requested change as an atomic event, that is, conflicting requests are not acted on until the first request has been completed.

If the resource is not available, the request is generally queued. The sending programme may or may not wait until the request has been completed. A message handler will, in general, process messages from more than one sender. This means its state can change for reasons unrelated to the behaviour of a single sender or client process. This is in contrast to the typical behaviour of an object upon which methods are being invoked: the latter is expected to remain in the same state between method invocations (in other words, the message handler behaves analogously to a volatile object).
Web browsers and web servers are examples of processes that communicate by message passing. A URL is an example of a way of referencing resources that does not depend on exposing the internals of a process.
Message passing enables extreme late binding in systems. In pure object-oriented programming, message passing is performed exclusively through a dynamic dispatch strategy. Objects can send messages to other objects from within their method bodies. Sending the same message to an object twice will usually result in the object applying the method twice. In the terminology of some object-oriented programming languages, a message is the single means to pass control to an object. If the object "responds" to the message, it has a method for that message. Some languages support the forwarding or delegation of method invocations from one object to another if the former has no method to handle the message, but "knows" another object that may have one. See also Inversion of Control. Alan Kay has argued[3] that message passing is more important than objects in OOP, and that objects themselves are often over-emphasized.

Mathematical models
The prominent mathematical models of message passing are the Actor model[1] and Pi calculus[2].

Examples
• Actor model implementation
• Amorphous computing
• Flow-based programming
• SOAP (protocol)

References
[1] Actor Model of Computation: Scalable Robust Information Systems (http://www.robust11.org)
[2] Elements of interaction: Turing award lecture (https://dl.acm.org/citation.cfm?id=151240)
[3] http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html

External links
• Future of Concurrent Programming (http://bartoszmilewski.wordpress.com/2010/08/02/beyond-locks-and-messages-the-future-of-concurrent-programming/)

Further reading
• Ramachandran, U., M. Solomon, M. Vernon (1987). "Hardware support for interprocess communication" (http://portal.acm.org/citation.cfm?id=30371&coll=&dl=ACM&CFID=15151515&CFTOKEN=6184618). Proceedings of the 14th annual international symposium on Computer architecture. ACM Press.
• McQuillan, John M., David C. Walden (1975). "Some considerations for a high performance message-based interprocess communication system" (http://portal.acm.org/citation.cfm?id=810905&coll=&dl=ACM&CFID=15151515&CFTOKEN=6184618). Proceedings of the 1975 ACM SIGCOMM/SIGOPS workshop on Interprocess communications. ACM Press.
• Shimizu, Toshiyuki, Takeshi Horie, Hiroaki Ishihata (1992). "Low-latency message communication support for the AP1000" (http://portal.acm.org/citation.cfm?id=140385&coll=&dl=ACM&CFID=15151515&CFTOKEN=6184618). Proceedings of the 19th annual international symposium on Computer architecture. ACM Press.

Messaging pattern

In software architecture, a messaging pattern is a network-oriented architectural pattern which describes how two different parts of a message passing system connect and communicate with each other.
In telecommunications, a message exchange pattern (MEP) describes the pattern of messages required by a communications protocol to establish or use a communication channel. There are two major message exchange patterns — a request-response pattern and a one-way pattern. For example, the TCP is a request-response pattern protocol, and the UDP has a one-way pattern.

SOAP
The term "Message Exchange Pattern" has a specific meaning within the SOAP protocol.[1] [2] SOAP MEP types include:
1. In-Only: This is equivalent to one-way. A standard one-way messaging exchange where the consumer sends a message to the provider that provides only a status response.
2. Robust In-Only: This pattern is for reliable one-way message exchanges. The consumer initiates with a message to which the provider responds with status. If the response is a status, the exchange is complete, but if the response is a fault, the consumer must respond with a status.
3. In-Out: This is equivalent to request-response. A standard two-way message exchange where the consumer initiates with a message, the provider responds with a message or fault and the consumer responds with a status.
4. In Optional-Out: A standard two-way message exchange where the provider's response is optional.
5. Out-Only
6. Robust Out-Only
7. Out-In
8. Out-Optional-In

ØMQ
The ØMQ message queueing library provides so-called sockets (a kind of generalization over the traditional IP and Unix sockets) which require indicating a messaging pattern to be used, and are particularly optimized for that kind of pattern. The basic ØMQ patterns are:[3]
• Request-reply connects a set of clients to a set of services. This is a remote procedure call and task distribution pattern.
• Publish-subscribe connects a set of publishers to a set of subscribers. This is a data distribution pattern.
• Push-pull connects nodes in a fan-out / fan-in pattern that can have multiple steps, and loops. This is a parallel task distribution and collection pattern.
• Exclusive pair connects two sockets in an exclusive pair. This is a low-level pattern for specific, advanced use cases.
Each pattern defines a particular network topology. Request-reply defines so-called "service bus", publish-subscribe defines "data distribution tree", push-pull defines "parallelised pipeline". All the patterns are deliberately designed in such a way as to be infinitely scalable and thus usable on Internet scale.[4]
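For instance, the request-reply pattern above maps directly onto a pair of ØMQ sockets. The sketch below uses the Python binding (pyzmq); the TCP port is an arbitrary choice for the example, and the two sides would normally run in separate processes rather than side by side as here.

# Minimal ØMQ request-reply sketch using pyzmq: one REQ client, one REP service.
import zmq

context = zmq.Context()

server = context.socket(zmq.REP)        # service side
server.bind("tcp://127.0.0.1:5555")

client = context.socket(zmq.REQ)        # client side
client.connect("tcp://127.0.0.1:5555")

client.send(b"ping")                    # REQ sockets strictly alternate send/recv
print(server.recv())                    # service receives the request...
server.send(b"pong")                    # ...and must reply before the next request
print(client.recv())                    # client gets the reply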

org/ TR/ wsdl20-additional-meps/ ) ØMQ User Guide (http:/ / www. Mobile agents decide when and where to move. and resumes execution from the saved state. is a type of software agent. transports this saved state to the new host. with the feature of autonomy. More specifically. social ability. in contrast to the Remote evaluation and Code on demand programming paradigms.Messaging pattern 93 References [1] [2] [3] [4] http:/ / www. and be capable of performing appropriately in the new environment. An open multi-agent systems (MAS) is a system in which agents.html) Mobile agent In computer science. w3. Source of trust information • • • • Direct experience Witness information Role-based rules Third-party references 2. w3. learning.Pattern Catalog (http://www. This makes them a powerful tool for implementing distributed applications in a computer network. Just as a user directs an Internet browser to "visit" a website (the browser merely downloads a copy of the site or one version of it in the case of dynamic web sites). com/ hits) External links • Messaging Patterns in Service-Oriented Architecture (http://msdn. a mobile agent is a process that can transport its state from one environment to another. zeromq.microsoft. Definition and overview A Mobile Agent. and most importantly. mobile agents are active in that they can choose to migrate between computers at any time during their execution. Reputation and Trust The following are general concerns about Trust and Reputation in Mobile Agent research: 1. A mobile agent is a specific form of mobile code.com/toc.com/en-us/library/aa480027. a mobile agent is a composition of computer software and data which is able to migrate (move) from one computer to another autonomously and continue its execution on the destination computer.2 Web Services Description Language (WSDL) Version 2. continuously enter and leave the system. a mobile agent accomplishes a move through data duplication. mobility. org/ TR/ soap12-part1/ #soapmep SOAP MEPs in SOAP W3C Recommendation v1. aspx) • Enterprise Integration Patterns . that are owned by a variety of stakeholders. it saves its own state.0: Additional MEPs (http:/ / www. 250bpm. with its data intact. similarly. namely.eaipatterns. Overall trust value What are the differences between trust and reputation systems? . However. org/ docs:user-guide) Scalability Layer Hits the Internet Stack (http:/ / www. Movement is often evolved from RPC methods. How trust value is calculated 3. When a mobile agent decides to move.

converts computational client/server round trips to relocatable data bundles. a standards body which defines an interface for agent based interactions. gov/ mobileagents/ projects. org/ [8] http:/ / semoa.to change an agent's actions. mobilec. • National Institute for Standards and Technology [3]. moe-lange. html . fipa. Inventor of Automatic Thread Migration (ATM). an OSS mobile agent framework written in JAVA. whereas reputation systems produce an entity’s (public) reputation score as seen by the whole community. • AgentLink III [4] • Mobile-C [5]. net/ about/ about. html http:/ / www. agentlink. agentos. sourceforge. nist. hosts a center for investigating security of mobile agents. org http:/ / www. org http:/ / jade. com/ danny/ docs/ 7reasons. pdf [3] [4] [5] [6] [2] http:/ / www.able to operate without an active connection between client and server • Flexible maintenance . External links • Seven Good Reasons for Mobile Agents [1] • Mobile Agent Technologies [2]. net/ http:/ / csrc. a multi-agent platform for mobile C/C++ agents. More: • Compare Reputation and Trust 94 Advantages Some advantages which mobile agents have over conventional agents: • Computation bundles . a project to develop a secure mobile agent server (last release 2007). com [7] http:/ / www. • The Foundation for Intelligent Physical Agents [7]. only the source (rather than the computation hosts) must be updated One particular advantage for remote deployment of software includes increased portability thereby making system requirements less influential.Mobile agent Trust systems produce a score that reflects the relying party’s subjective view of an entity’s trustworthiness. tilab. • JADE [6].actions are dependent on the state of the host environment • Tolerant to network faults . • Parallel processing -asynchronous execution on multiple heterogeneous network hosts • Dynamic adaptation . developer of AgentOS agent based operating system. reducing network load. References [1] http:/ / www. • Secure Mobile Agents Project [8].

MongoDB

Developer(s): 10gen
Initial release: 2009
Stable release: 1.8.2 / June 18, 2011
Development status: Active
Written in: C++
Operating system: Cross-platform
Available in: English
Type: Document-oriented database
License: GNU AGPL v3.0 (drivers: Apache license)
Website: http://www.mongodb.org/

MongoDB (from "humongous") is an open source, high-performance, schema-free, document-oriented database written in the C++ programming language.[1] The database is document-oriented so it manages collections of JSON-like documents. Many applications can thus model data in a more natural way, as data can be nested in complex hierarchies and still be query-able and indexable.

History
Development of MongoDB began in October 2007 by 10gen. The first public release was in February 2009.[2]

Features
Among the features are:
• Consistent UTF-8 encoding. Non-UTF-8 data can be saved, queried, and retrieved with a special binary data type.
• Cross-platform support: binaries are available for Windows, Linux, OS X, and Solaris. MongoDB can be compiled on almost any little-endian system.
• Type-rich: supports dates, regular expressions, code, binary data, and more (all BSON types)
• Cursors for query results
More features:

Ad hoc queries
In MongoDB, any field can be queried at any time. MongoDB supports range queries, regular expression searches, and other special types of queries in addition to exactly matching fields. Queries can also include user-defined JavaScript functions (if the function returns true, the document matches). Queries can return specific fields of documents (instead of the entire document), as well as sorting, skipping, and limiting results.
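The query examples in the following sections use MongoDB's JavaScript shell. The same kind of ad hoc query can be issued from any of the drivers; the sketch below uses the Python driver (pymongo) and assumes a mongod running on the default host and port — the database name, collection name and document are placeholders chosen for the example.

# Minimal pymongo sketch of the ad hoc query interface described above.
# Assumes `pip install pymongo` and a mongod listening on localhost:27017.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
db = client["test"]                      # database name is a placeholder

# insert_one is the call in recent pymongo versions; older drivers used insert()
db.users.insert_one({"username": "bob",
                     "address": {"street": "123 Main Street",
                                 "city": "Springfield",
                                 "state": "NY"}})

# Equality query on a nested field, with field selection and a result limit,
# mirroring the shell queries shown later in the article.
for doc in db.users.find({"address.state": "NY"}, {"username": 1}).limit(10):
    print(doc)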

find({"address. "pear"]}) > db.insert({"fruit" : ["peach".foo. File storage The software implements a protocol called GridFS[5] that is used to store and retrieve files from the database.[7] Server-side JavaScript execution JavaScript is the lingua franca of MongoDB and can be used in queries. and sent directly to the database to be executed.state" : "NY"}) Array elements can also be queried: > db. This file storage mechanism has been used in plugins for NGINX[6] and lighttpd. Nested fields (as described above in the ad hoc query section) can also be indexed and indexing an array type will index each element of the array. "state" : "NY" } } We can query for this document (and all documents with an address in New York) with: > db. including single-key. "plum". If the following object is inserted into the users collection: { "username" : "bob".eval(function(name) { return "Hello. the database supports a couple of tools for aggregation. Indexes can be created or removed at any time.MongoDB 96 Querying nested fields Queries can "reach into" embedded objects and arrays. Example of JavaScript in a query: > db. including MapReduce[4] and a group function similar to SQL's GROUP BY. }}) Example of code sent to the database to be executed: > db. periodically resampling. . }. compound. non-unique.users. aggregation functions (such as MapReduce).find({"fruit" : "pear"}) Indexing The software supports secondary indexes.find({$where : function() { return this.x == this.food. "city" : "Springfield". Developers can see the index being used with the `explain` function and choose a different index with the `hint` function. MongoDB's query optimizer will try a number of different query plans when a query is run and select the fastest. unique. "address" : { "street" : "123 Main Street".y. Joe". Aggregation In addition to ad hoc queries. and geospatial[3] indexes.food. "+name. ["Joe"]) This returns "Hello.

[28] Delphi.[30] [31] Factor.[36] Lua. Groovy [35] ." 97 Capped collections MongoDB supports fixed-size collections called capped collections. etc. Capped collections are the only type of collection that maintains insertion order: once the specified size has been reached.[8] A capped collection is created with a set size and.[34] JVM languages (Clojure.[18] ColdFusion. but it is more commonly installed from a binary package.[29] Erlang. although most of the drivers work on both little-endian and big-endian systems. including functions and objects. returning new results as they are inserted into the capped collection.NET.[13] It can also be acquired through the official website.).[9] can be used with capped collections. for C# and . Many Linux package management systems now include a MongoDB package.[37] node.[15] The MongoDB server can only be used on little-endian systems. a capped collection behaves like a circular queue.[11] Gentoo[12] and Arch Linux. Scala. and does not close when it finishes returning results but continues to wait for more to be returned.[40] Racket.[42] .js. Language support MongoDB has official drivers for: • C[16] • • • • • • • • • • • C++[17] C#[18] Haskell[19] Java[20] JavaScript[21] Lisp[22] Perl[23] PHP[24] Python[25] Ruby[26] Scala[27] There are also a large number of unofficial drivers. optionally.[14] MongoDB uses memory-mapped files. This cursor was named after the `tail -f` command. A special type of cursor.[41] and Smalltalk. Any legal JavaScript type.[32] Fantom. number of elements. called a tailable cursor.[10] Debian and Ubuntu. including CentOS and Fedora.MongoDB JavaScript variables can also be stored in the database and used by any other JavaScript as a global variable.[33] Go. Deployment MongoDB can be built and installed from source. limiting data size to 2GB on 32-bit machines (64-bit systems have a much larger data size).[39] Ruby. can be stored in MongoDB so that JavaScript can be used to write "stored procedures.[38] HTTP REST.

which determines how the data in a collection will be distributed. mongosniff sniffs network traffic going to and from MongoDB. mongo. insert. mongo is built on SpiderMonkey. execute JavaScript. as well as what percentage of the time the database was locked and how much memory it is using. This `mongos` process knows what data is on each shard and routes the client's requests appropriately. remove. queries. and more. By default. A slave copies data from the master and can only be used for reads or backup (not writes). A master can perform reads and writes. Sharding MongoDB scales horizontally using a system called sharding[43] which is very similar to the BigTable and PNUTS scaling model. mongostat is a command-line tool that displays a simple list of stats about the last second: how many inserts. .MongoDB 98 Replication MongoDB supports master-slave replication. Administrative information can also be accessed through the admin interface: a simple html webpage that serves information about the current server status. Any number of `mongos` processes can be run: usually one per application server is recommended. updates. removes. MongoDB allows developers to guarantee that an operation has been replicated to at least N servers on a per-operation basis. For example.) The developer's application must know that it is talking to a sharded cluster when performing some operations. but they incorporate the ability for the slaves to elect a new master if the current one goes down./mongod --slave --port 10001 --dbpath ~/dbs/slave --source localhost:10000 Replica sets Replica sets are similar to master-slave. so it is a full JavaScript shell as well as being able to connect to MongoDB servers. the slave will replicate any changes to the data. The shell lets developers view. Example: starting a master/slave pair locally: $ mkdir -p ~/dbs/master ~/dbs/slave $ . The data is split into ranges (based on the shard key) and distributed across multiple shards. this interface is 1000 ports above the database port (http:/ / localhost:28017) and it can be turned off with the --norest option. The application talks to a special routing process called `mongos` that looks identical to a single MongoDB server. and commands were performed. as well as get replication information. a "findAndModify" query must contain the shard key if the queried collection is sharded[44] . shut down servers. and update data in their databases. The developer chooses a shard key. (A shard is a master with one or more slaves. All requests flow through this process: it not only forwards requests and responses but also performs any necessary final data merges or sorts./mongod --master --port 10000 --dbpath ~/dbs/master $ . setting up sharding. Master-slave As operations are performed on the master. Management and graphical frontends Official tools The most powerful and useful management tool is the database shell.

Fang of Mongo [50] Futon4Mongo – a clone of the CouchDB Futon web interface for MongoDB. Database Master [54] Windows based MongoDB Management Studio. MongoHub[52] – a native OS X application for managing MongoDB.ly[63] The New York Times[64] SourceForge[65] Business Insider[66] Etsy[67] CERN LHC[68] Thumbtack[69] AppScale[70] Uber[71] .[55] Prominent users • • • • • • • • • • • • • • • • MTV Networks[56] craigslist[57] Disney Interactive Media Group[58] Wordnik[59] diaspora[60] Shutterfly[61] foursquare[62] bit. The language drivers are available under an Apache License. Some popular ones are: • • • • • • [49] – a web-based UI built with Django and jQuery.MongoDB 99 Monitoring There are monitoring plugins available for MongoDB: • • • • munin[45] ganglia[46] scout[47] cacti[48] GUIs Several GUIs have been created to help developers visualize their data. Mongo3[51] – a Ruby-based interface. Licensing and support MongoDB is available for free under the GNU Affero General Public License. supports also RDBMS. Opricot[53] – a browser-based MongoDB shell written in PHP.

JS) [39] REST interface (http:/ / github. org/ display. org/ display/ DOCS/ CentOS+ and+ Fedora+ Packages) [11] Debian and Ubuntu (http:/ / www. . com/ Fiedzia/ Fang-of-Mongo) [50] Futon4Mongo (http:/ / github. com/ mongodb/ mongo-php-driver) Python driver (http:/ / github. com/ MongoTalk. mongodb. org/ post/ 137788967/ 32-bit-limitations) [16] C driver (http:/ / github. html) [43] sharding (http:/ / www. com/ virtix/ cfmongodb) Delphi (http:/ / code. mongodb. racket-lang. com/ erh/ mongo-munin) [46] Ganglia plugin (http:/ / github. org/ ) MongoDB Blog .March 2010 (http:/ / blog. org/ display/ DOCS/ MapReduce) [5] GridFS (http:/ / www. mongodb. mongodb. 2009 (http:/ / blog. com/ mongodb/ mongo) [18] C# driver (https:/ / github. com/ mongodb/ mongo-csharp-driver) [19] Haskell driver (http:/ / hackage. com/ ) [52] MongoHub (http:/ / www. html) [53] Opricot (http:/ / www. gentoo. com/ fons/ cl-mongo) Perl driver (http:/ / github. Retrieved 2011-07-06. google. com/ mongodb/ mongo-python-driver) Ruby driver (http:/ / github. org/ display/ DOCS/ JVM+ Languages) LuaMongo (http:/ / code. org/ packages. org/ display/ DOCS/ Tailable+ Cursors) [10] CentOS and Fedora (http:/ / www. squeaksource. com/ mongodb/ mongo-ruby-driver) Casbah. mongodb. paulopoiati. 2011-05-10. org/ bwmcadams/ lighttpd-gridfs/ src/ ) [8] capped collections (http:/ / www. com/ kchodorow/ sleepy. com/ slavapestov/ factor/ tree/ master/ extra/ mongodb/ ) Fantom driver (http:/ / bitbucket.MongoDB 100 References [1] [2] [3] [4] MongoDB website (http:/ / www. com/ tmm1/ rmongo) [41] (http:/ / planet. com/ 2010/ 06/ 20/ gmongo-0-5-released/ ) JVM language center (http:/ / www. mongodb. org/ display/ DOCS/ Capped+ Collections) [9] (http:/ / www. icmfinland. org/ display/ DOCS/ Javascript+ Language+ Center) (https:/ / github. org/ display/ DOCS/ findAndModify+ Command#) [45] Munin plugin (http:/ / github. mongodb. mongodb. com/ quiiver/ mongodb-ganglia) [47] Scout slow-query plugin (http:/ / scoutapp. org/ package/ mongoDB) [20] [21] [22] [23] [24] [25] [26] [27] [28] [29] [30] [31] [32] [33] [34] [35] [36] [37] [38] Java driver (http:/ / github. org/ post/ 434865639/ state-of-mongodb-march-2010) Geospatial indexes (http:/ / www. google. com/ mongodb/ mongo-c-driver) [17] C++ driver (http:/ / github. ss?package=mongodb. mongodb. mongodb. org/ display/ DOCS/ Ubuntu+ and+ Debian+ packages).js driver (http:/ / www. mongodb. the officially supported Scala Driver for MongoDB (https:/ / github. org/ post/ 5360007734/ mongodb-powering-mtvs-web-properties). fi/ oss/ opricot/ ) [54] http:/ / www. . com/ mikejs/ gomongo) GMongo (http:/ / blog. org/ display/ DOCS/ Sharding) [44] (http:/ / www. [12] Gentoo (http:/ / packages. org/ display/ DOCS/ GridFS) [6] NGINX (http:/ / github. mongoose) [40] rmongo (http:/ / github. org/ display/ DOCS/ Geospatial+ Indexing) MapReduce (http:/ / www. mongodb. org/ liamstask/ fantomongo/ wiki/ Home) gomongo Go driver (http:/ / github. php?ID=27971) [14] official website (http:/ / www. mongodb. com/ mongodb/ mongo-perl-driver) PHP driver (http:/ / github. mongodb. com/ downloads/ macosx/ development_tools/ mongohub. com/ plugin_urls/ 291-mongodb-slow-queries) [48] Cacti plugin (http:/ / tag1consulting. org/ rumataestor/ emongo) Erlmongo Erlang driver (http:/ / github. com/ mdirolf/ nginx-gridfs) [7] lighttpd (http:/ / bitbucket. com/ p/ pebongo/ ) Emongo Erlang driver (http:/ / bitbucket. mongodb.MongoDB Blog: May 5. com/ sbellity/ futon4mongo) [51] Mongo3 (http:/ / mongo3. 

Multi-master replication

Multi-master replication is a method of database replication which allows data to be stored by a group of computers, and updated by any member of the group. The multi-master replication system is responsible for propagating the data modifications made by each member to the rest of the group, and for resolving any conflicts that might arise between concurrent changes made by different members.

Multi-master replication can be contrasted with master-slave replication, in which a single member of the group is designated as the "master" for a given piece of data and is the only node allowed to modify that data item. Other members wishing to modify the data item must first contact the master node. Allowing only a single master makes it easier to achieve consistency among the members of the group, but is less flexible than multi-master replication.

Advantages
• If one master fails, other masters continue to update the database.
• Masters can be located in several physical sites, i.e. distributed across the network.

Disadvantages
• Most multi-master replication systems are only loosely consistent, i.e. lazy and asynchronous, violating ACID properties.
• Eager replication systems are complex and increase communication latency.
• Issues such as conflict resolution can become intractable as the number of nodes involved rises and latency increases.

Methods

Log-Based
A database transaction log is referenced to capture changes made to the database. For log-based transaction capturing, database changes can only be distributed asynchronously.

Trigger-Based
Triggers at the subscriber capture changes made to the database and submit them to the publisher. With trigger-based transaction capturing, database changes can be distributed either synchronously or asynchronously.

Implementations

Many directory servers based on LDAP implement multi-master replication.

Active Directory
One of the more prevalent multi-master replication implementations in directory servers is Microsoft's Active Directory. Within Active Directory, objects that are updated on one domain controller are then replicated to other domain controllers through multi-master replication. It is not required for all domain controllers to replicate with each other domain controller, as this would cause excessive network traffic in large Active Directory deployments. Instead, domain controllers have a complex update pattern that ensures that all servers are updated in a timely fashion without excessive replication traffic. Some Active Directory needs are, however, better served by Flexible Single Master Operation.
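To make the conflict problem concrete, here is a minimal sketch of two loosely consistent masters that both accept writes and exchange change logs asynchronously, resolving concurrent updates with a last-writer-wins rule. The Master class, the timestamp-based merge policy and the in-memory "log shipping" are assumptions made purely for illustration; they are not taken from any of the products discussed in this article.

```python
import time

class Master:
    """A toy multi-master replica: accepts local writes and merges remote change logs."""

    def __init__(self, name):
        self.name = name
        self.data = {}   # key -> (value, timestamp, origin)
        self.log = []    # outgoing change log to ship to peers

    def write(self, key, value):
        # Local write: record it and append it to the change log.
        ts = time.time()
        self.data[key] = (value, ts, self.name)
        self.log.append((key, value, ts, self.name))

    def replicate_to(self, peer):
        # Asynchronous propagation: ship the whole log; a real system would ship deltas.
        for key, value, ts, origin in self.log:
            peer.apply_remote(key, value, ts, origin)

    def apply_remote(self, key, value, ts, origin):
        # Last-writer-wins conflict resolution, with the origin name as a tie-breaker.
        current = self.data.get(key)
        if current is None or (ts, origin) > (current[1], current[2]):
            self.data[key] = (value, ts, origin)

# Both masters accept writes to the same key; after exchanging logs they converge.
a, b = Master("A"), Master("B")
a.write("color", "red")
b.write("color", "blue")
a.replicate_to(b)
b.replicate_to(a)
print(a.data["color"], b.data["color"])  # both replicas now hold the later write
```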

CA Directory
CA Directory supports multi-master replication.

OpenLDAP
The widely used open source LDAP server implements multi-master replication since its version 2.4 (October 2007) [1].

OpenDS
OpenDS implements multi-master replication since its version 1.0. The OpenDS multi-master replication is asynchronous: it uses a log with a publish-subscribe mechanism that allows scaling to a large number of nodes. OpenDS replication does conflict resolution at the entry and attribute level. OpenDS replication can be used over a Wide Area Network.

Oracle
Oracle database clusters implement multi-master replication using one of two methods. Asynchronous multi-master replication commits data changes to a deferred transaction queue which is periodically processed on all databases in the cluster. Synchronous multi-master replication uses Oracle's two-phase commit functionality to ensure that all databases in the cluster have a consistent dataset.

MySQL
MariaDB and MySQL ship with replication support. It is possible to achieve a multi-master replication scheme beginning with MySQL version 3.23. MySQL Cluster supports conflict detection and resolution between multiple masters since version 6.3.

PostgreSQL
PostgreSQL offers multiple solutions for multi-master replication, including solutions based on two-phase commit. There is Bucardo [2], rubyrep [3], PgPool and PgPool-II [4], PgCluster [5] and Sequoia [6], as well as some proprietary solutions. Another promising approach, implementing eager (synchronous) replication, is Postgres-R [7]; however, it is still in development. Yet another project implementing synchronous replication is Postgres-XC [8]; Postgres-XC is also still under development.

Ingres
Within Ingres Replicator, objects that are updated on one Ingres server can then be replicated to other servers, whether local or remote, through multi-master replication. It is not required for all Ingres servers in an environment to replicate with each other, as this could cause excessive network traffic in large implementations. Instead, Ingres Replicator provides an elegant and sophisticated design that allows the appropriate data to be replicated to the appropriate servers without excessive replication traffic. This means that some servers in the environment can serve as failover candidates while other servers can meet other requirements, such as managing a subset of columns or tables for a departmental solution, a subset of rows for a geographical region, or one-way replication for a reporting server. If one server fails, client connections can be re-directed to another server. In the event of a source, target, or network failure, data integrity is enforced through a two-phase commit protocol by ensuring that either the whole transaction is replicated, or none of it is. In addition, Ingres Replicator can operate over RDBMSs from multiple vendors to connect them.
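The two Oracle modes described above can be pictured as, respectively, a deferred transaction queue that is drained lazily and an all-or-nothing commit across every replica. The sketch below models both with in-memory stand-ins; the Replica class and the simplified prepare/commit handshake are invented for the example and gloss over the failure handling that a real two-phase commit must address.

```python
class Replica:
    def __init__(self):
        self.data = {}
        self.pending = None

    # --- two-phase commit participant (synchronous path) ---
    def prepare(self, txn):
        # Vote yes only if the transaction can be staged for commit.
        self.pending = dict(txn)
        return True

    def commit(self):
        self.data.update(self.pending or {})
        self.pending = None

    def abort(self):
        self.pending = None

def synchronous_commit(txn, replicas):
    """Apply txn to every replica, or to none of them (all-or-nothing)."""
    if all(r.prepare(txn) for r in replicas):
        for r in replicas:
            r.commit()
        return True
    for r in replicas:
        r.abort()
    return False

def drain_deferred_queue(queue, replicas):
    """Asynchronous path: periodically replay queued transactions on every replica."""
    while queue:
        txn = queue.pop(0)
        for r in replicas:
            r.data.update(txn)

replicas = [Replica(), Replica()]
synchronous_commit({"balance": 100}, replicas)       # replicated atomically
deferred = [{"last_login": "2011-07-06"}]
drain_deferred_queue(deferred, replicas)             # applied later, lazily
print(replicas[0].data == replicas[1].data)          # True: both replicas converge
```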

References
[1] http://www.openldap.org/software/roadmap.html
[2] http://bucardo.org/wiki/Bucardo
[3] http://www.rubyrep.org
[4] http://pgpool.projects.postgresql.org/
[5] http://pgcluster.projects.postgresql.org/
[6] http://www.continuent.com/community/lab-projects/sequoia
[7] http://www.postgres-r.org
[8] http://sourceforge.net/projects/postgres-xc/

External links
• Challenges Involved in Multimaster Replication (http://www.dbspecialists.com/presentations/mm_replication.html)
• Terms and Definitions for Database Replication (http://www.postgres-r.org/documentation/terms)
• Active Directory Replication Model (http://www.microsoft.com/resources/documentation/Windows/2000/server/reskit/en-us/Default.asp?url=/resources/documentation/Windows/2000/server/reskit/en-us/distrib/dsbh_rep_fgtk.asp)
• SymmetricDS (http://symmetricds.org) is web-enabled, database-independent, data synchronization/replication software. It uses web and database technologies to replicate tables between relational databases in near real time. The software was designed to scale for a large number of databases, work across low-bandwidth connections, and withstand periods of network outage. By using database triggers, SymmetricDS guarantees that data changes are captured and atomicity is preserved. Support for database vendors is provided through a Database Dialect layer, with implementations for MySQL, Oracle, SQL Server, DB2, PostgreSQL, HSQLDB, H2, Firebird, and Apache Derby included. Licensed under the LGPL open source license.
• Daffodil Replicator (http://opensource.daffodilsw.com/) is a Java tool for data synchronization, data migration, and data backup between various database servers. Daffodil Replicator works over a standard JDBC driver and supports replication across heterogeneous databases. At present it supports the following databases: Microsoft SQL Server, Oracle, MySQL, PostgreSQL, DB2, Apache Derby, and Daffodil database. Daffodil Replicator is available in both enterprise (commercial) and open source (GPL-licensed) versions.
• DBReplicator Project Page (http://dbreplicator.org)
• DMOZ Open Directory Project - Database Replication Page (http://www.dmoz.org/Computers/Software/Databases/Replication/)

Multitier architecture

In software engineering, multi-tier architecture (often referred to as n-tier architecture) is a client–server architecture in which the presentation, the application processing, and the data management are logically separate processes. For example, an application that uses middleware to service data requests between a user and a database employs multi-tier architecture. The most widespread use of multi-tier architecture is the three-tier architecture.

N-tier application architecture provides a model for developers to create a flexible and reusable application. By breaking up an application into tiers, developers only have to modify or add a specific layer, rather than having to rewrite the entire application. There should be a presentation tier, a business or data access tier, and a data tier.

The concepts of layer and tier are often used interchangeably. However, one fairly common point of view is that there is indeed a difference, and that a layer is a logical structuring mechanism for the elements that make up the software solution, while a tier is a physical structuring mechanism for the system infrastructure.[1] [2]

Three-tier architecture

Three-tier[3] is a client–server architecture in which the user interface, functional process logic ("business rules"), computer data storage and data access are developed and maintained as independent modules, most often on separate platforms. It was developed by John J. Donovan in Open Environment Corporation (OEC), a tools company he founded in Cambridge, Massachusetts.

The three-tier model is a software architecture and a software design pattern. Apart from the usual advantages of modular software with well-defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently as requirements or technology change. For example, a change of operating system in the presentation tier would only affect the user interface code. (Figure: Visual overview of a three-tiered application.)

Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface, functional process logic may consist of one or more separate modules running on a workstation or application server, and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may be multi-tiered itself (in which case the overall architecture is called an "n-tier architecture").

Three-tier architecture has the following three tiers:

Presentation tier
This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with other tiers by outputting results to the browser/client tier and all other tiers in the network.

Application tier (business logic, logic tier, data access tier, or middle tier)
The logic tier is pulled out from the presentation tier and, as its own layer, it controls an application's functionality by performing detailed processing.

Data tier
This tier consists of database servers. Here information is stored and retrieved. This tier keeps data neutral and independent from application servers or business logic. Giving data its own tier also improves scalability and performance.

Comparison with the MVC architecture

At first glance, the three tiers may seem similar to the model-view-controller (MVC) concept; however, topologically they are different. A fundamental rule in a three-tier architecture is that the client tier never communicates directly with the data tier; in a three-tier model all communication must pass through the middle tier. Conceptually the three-tier architecture is linear. The MVC architecture, however, is triangular: the view sends updates to the controller, the controller updates the model, and the view gets updated directly from the model.

From a historical perspective, the three-tier architecture concept emerged in the 1990s from observations of distributed systems (e.g. web applications) where the client, middleware and data tiers ran on physically separate platforms, whereas MVC comes from the previous decade (by work at Xerox PARC in the late 1970s and early 1980s) and is based on observations of applications that ran on a single graphical workstation. MVC was applied to distributed applications later in its history (see Model 2).

Web development usage

In the web development field, three-tier is often used to refer to websites, commonly electronic commerce websites, which are built using three tiers:
1. A front-end web server serving static content, and potentially some cached dynamic content. In a web-based application, the front end is the content rendered by the browser. The content may be static or generated dynamically.
2. A middle dynamic content processing and generation level application server, for example Java EE, ASP.NET, PHP, or the ColdFusion platform.
3. A back-end database, comprising both data sets and the database management system or RDBMS software that manages and provides access to the data.

Other considerations

Data transfer between tiers is part of the architecture. Protocols involved may include one or more of SNMP, CORBA, Java RMI, .NET Remoting, Windows Communication Foundation, sockets, UDP, web services or other standard or proprietary protocols. Often middleware is used to connect the separate tiers. Separate tiers often (but not necessarily) run on separate physical servers, and each tier may itself run on a cluster.

Traceability

The end-to-end traceability of data flows through n-tier systems is a challenging task which becomes more important when systems increase in complexity. The Application Response Measurement defines concepts and APIs for measuring performance and correlating transactions between tiers.
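As a minimal illustration of the three tiers just described, the sketch below keeps data access, business rules and presentation in separate functions, with the presentation code never touching the database directly. All three tiers live in one process here purely for brevity; in a real deployment they would typically run on separate platforms, and the table layout is an invented example.

```python
import sqlite3

# Data tier: storage and retrieval only; knows nothing about business rules or display.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (name TEXT, price REAL)")
db.executemany("INSERT INTO items VALUES (?, ?)", [("book", 12.0), ("pen", 2.5)])

# Logic tier: business rules; the only layer allowed to talk to the data tier.
def priced_items(max_price):
    rows = db.execute("SELECT name, price FROM items WHERE price <= ?", (max_price,))
    return [{"name": n, "price": p} for n, p in rows]

# Presentation tier: formats results for the user and never queries the database itself.
def render_catalogue(max_price):
    lines = [f"{item['name']}: ${item['price']:.2f}" for item in priced_items(max_price)]
    return "\n".join(lines)

print(render_catalogue(10.0))
```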

Comments

Generally, the term tiers is used to describe the physical distribution of components of a system on separate servers, computers, or networks (processing nodes). A three-tier architecture then will have three processing nodes. Layers refer to a logical grouping of components which may or may not be physically located on one processing node.

External links
• Linux Journal, Three Tier Architecture [4]
• Microsoft Application Architecture Guide [5]

References
[1] Deployment Patterns (Microsoft Enterprise Architecture, Patterns, and Practices) (http://msdn.microsoft.com/en-us/library/ms998478.aspx)
[2] Fowler, Martin. "Patterns of Enterprise Application Architecture" (2002). Addison Wesley.
[3] Eckerson, Wayne W. "Three Tier Client/Server Architecture: Achieving Scalability, Performance, and Efficiency in Client Server Applications." Open Information Systems 10, 1 (January 1995): 3(20).
[4] http://www.linuxjournal.com/article/3508
[5] http://msdn.microsoft.com/en-us/library/ee658109.aspx

This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.

Network cloaking

Network cloaking is a technology that makes a protected network invisible to malicious external traffic, while allowing complete and uninterrupted access for legitimate users. Network cloaking is accomplished via a promiscuous bridge with firewall functionality, located in front of the internet firewall. All non-encrypted Internet traffic entering a network is inspected for malicious code, malformed packets, prohibited behaviors, and hack attempts. The network cloaking function immediately drops all packets from an offending IP address, including the initial request packets and responses from the protected network. To the perpetrator, the protected network simply appears to be unused, or invisible.
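A toy model of the cloaking behaviour is sketched below: once a source address is flagged as offending, every packet from it, and every reply addressed to it, is silently dropped, so a probe sees no response at all. The packet dictionary format and the single "prohibited behaviour" rule are assumptions for illustration; real products inspect traffic far more thoroughly.

```python
# Once a source is flagged, the bridge drops everything related to it without any reply.
offenders = set()

def inspect(packet):
    # Flag sources that exhibit a prohibited behaviour, e.g. probing a non-public port.
    if packet.get("flags") == "SYN" and packet["dst_port"] not in {80, 443}:
        offenders.add(packet["src"])

def bridge(packet):
    inspect(packet)
    if packet["src"] in offenders or packet.get("dst") in offenders:
        return None          # drop silently: no RST, no ICMP, nothing for the probe to see
    return packet            # legitimate traffic passes through untouched

print(bridge({"src": "203.0.113.7", "dst_port": 22, "flags": "SYN"}))    # None (dropped)
print(bridge({"src": "198.51.100.2", "dst_port": 443, "flags": "SYN"}))  # forwarded
```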

Opaak

The Opaak educational trilogy aims at providing material for the teaching and self-teaching of operating system concepts, ranging from low-level programming to kernel internals, operating system principles and distributed system paradigms.

History

The Opaak educational trilogy's projects have been used for teaching operating systems at EPITA since 2004, the date at which the kastor project was created. The Opaak trilogy was introduced by Julien Quintard in 2007, following the relative success of the kastor and kaneton projects in the EPITA curriculum. In 2006, the kaneton educational project competed[1] in the Alternative OS Contest run by the specialized website OSNews.

Projects

Opaak is composed of the three following projects:

kastor
kastor, originally named k, is an introductory project targeting low-level programming. Indeed, the kastor monolithic kernel is provided with an ELF binary at boot time which represents an arcade game to be run. The kernel extracts this game from a special and minimalistic file system, loads it into memory and finally executes it. The objective for students is to develop an emulator for arcade games such as Pong, Arcanoid etc. The project lasts several weeks and allows students to understand the microprocessor's role in an operating system, though many modern functionalities, such as virtual memory and scheduling, are not discussed in this project.

kaneton
kaneton represents the core of the Opaak trilogy, as it aims at making students develop parts of a microkernel. The project is composed of several stages, each one targeting a kernel functionality such as the booting phase, the memory management, the multitasking and the interrupts processing. This project is taught following the kastor project and lasts for a few months. It focuses on making students fully understand the kernel internals of a microkernel-based operating system by addressing advanced concepts such as multiprocessing, security etc.

kayou
kayou is an operating system built over the kaneton microkernel. kayou's originality resides in its fully distributed architecture. Indeed, in an environment composed of multiple kayou instances, all the computers of the network share their resources with each other, including memory, storage, processor, devices etc.

References
[1] The kaneton Microkernel Project (http://www.osnews.com/story/15018/The-kaneton-Microkernel-Project/) at the Alternative OS Contest

External links
• The Opaak educational trilogy official website (http://www.opaak.org)

Open architecture computing environment

Open Architecture Computing Environment (OACE) is a specification that aims to provide a standards-based computing environment in order to decouple the computing environment from software applications. In this way it enables reusable software applications and components.

Open Computer Forensics Architecture

Developer(s): Korps landelijke politiediensten
Stable release: 2.2.0pl4
Development status: Active
Operating system: Linux
Available in: English
Type: Computer forensics
Website: [1]

The Open Computer Forensics Architecture (OCFA) is a distributed open source computer forensics framework used to analyze digital media within a digital forensics laboratory environment. The framework was built by the Dutch national police.

Architecture

OCFA consists of a back end for the Linux platform; it uses a PostgreSQL database for data storage, a custom content-addressable storage or CarvFS-based data repository and a Lucene index. The front end for OCFA has not been made publicly available due to licencing issues. OCFA is extensible in C++ or Java. The framework integrates with other open source forensic tools and includes modules for The Sleuth Kit, Scalpel, PhotoRec, libmagic, GNU Privacy Guard, 7-zip, gzip, bzip2, rar, zip, tar, antiword, exiftags, objdump, qemu-img and mbx2mbox.

References
[1] http://sourceforge.net/apps/trac/ocfa/wiki

OrientDB

Developer(s): Luca Garulli
Initial release: 2010
Written in: Java
Operating system: Cross-platform
Type: Graph database
License: Apache 2 License
Website: [1]

OrientDB is an open source NoSQL database management system written in Java. It supports schema-less, schema-full and schema-mixed modes. It has a strong security profiling system based on users and roles, and supports SQL among its query languages. Even though it is a document-based database, the relationships are managed as in graph databases, with direct connections among records.

OrientDB uses a new indexing algorithm called MVRB-Tree, derived from the Red-Black Tree and from the B+Tree, with the benefits of both: fast insertion and ultra-fast lookup. Thanks to the SQL layer, OrientDB is straightforward to use for people skilled in the relational world.

Features
• Transactional: supports ACID transactions [2]; on crash it recovers the pending documents
• GraphDB: native management of graphs, 100% compliant with the TinkerPop Blueprints [3] standard for graph databases
• SQL: supports the SQL language [4] with extensions to handle relationships without SQL joins and to manage trees and graphs of connected documents
• Web ready: natively supports HTTP, the RESTful protocol and JSON without using third-party libraries and components
• Run everywhere: the engine is 100% pure Java and runs on Linux, Windows and any system that supports the Java technology
• Embeddable: local mode to use the database bypassing the server, perfect for scenarios where the database is embedded
• Apache 2 License: always free for any usage; no fees or royalties are requested to use it
• Light: about 1 MB for the full server, with no dependencies on other software and no extra libraries needed
• Commercial support available
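Since OrientDB exposes an HTTP/RESTful/JSON interface, a client in any language can issue SQL over plain HTTP. The sketch below shows the general shape of such a call from Python; the port, the /query/&lt;db&gt;/sql/&lt;text&gt; endpoint layout and the credentials are assumptions that should be checked against the documentation of the server version actually in use.

```python
# Hypothetical client-side sketch of the HTTP/JSON interface described above.
import base64
import json
import urllib.parse
import urllib.request

def run_sql(db, sql, user="admin", password="admin", host="localhost", port=2480):
    # Assumed endpoint layout: /query/<database>/sql/<url-encoded query>
    url = f"http://{host}:{port}/query/{db}/sql/{urllib.parse.quote(sql)}"
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)       # results arrive as plain JSON documents

# Example (requires a running server; class and query text are illustrative only):
# print(run_sql("demo", "select from OUser limit 3"))
```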

External links
• Official OrientDB website [5]
• Code base on Google Code [6]
• Public technical group [7]

References
[1] http://www.orientechnologies.com
[2] http://code.google.com/p/orient/wiki/Transactions
[3] http://blueprints.tinkerpop.com
[4] http://code.google.com/p/orient/wiki/SQL
[5] http://www.orientechnologies.com/
[6] http://code.google.com/p/orient/
[7] https://groups.google.com/forum/#!forum/orient-database

Overlay network

An overlay network is a computer network which is built on top of another network. Nodes in the overlay can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. For example, distributed systems such as cloud computing, peer-to-peer networks, and client-server applications are overlay networks because their nodes run on top of the Internet. The Internet itself was built as an overlay upon the telephone network.[1] (Figure 1: A sample overlay network. Figure 2: Overlay network broken up into logical layers.)

Uses of overlay networks

In telecommunication
Overlay networks are used in telecommunication because of the availability of digital circuit switching equipment and optical fiber.[2] Telecommunication transport networks and IP networks (which combined make up the broader Internet) are all overlaid with at least an optical layer, a transport layer and an IP or circuit layer (in the case of the PSTN). Enterprise private networks were first overlaid on telecommunication networks such as frame relay and Asynchronous Transfer Mode packet switching infrastructures, but migration from these (now legacy) infrastructures to IP-based MPLS networks and virtual private networks started around 2001-2002. From a physical standpoint, overlay networks are quite complex (see Figure 1), as they combine various logical layers that are operated and built by various entities (businesses, universities, government, competitive telecom operators etc.), but they allow a separation of concerns (and healthy business competition) that over time permitted the build-up of a broad set of services that could not have been proposed by a single telecommunication operator otherwise (ranging from broadband Internet access to voice over IP or IPTV).[3]

Over the Internet

Nowadays the Internet is the basis for more overlaid networks that can be constructed in order to permit routing of messages to destinations not specified by an IP address. For example, distributed hash tables can be used to route messages to a node having a specific logical address, whose IP address is not known in advance.

Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance, largely because they require modification of all routers in the network. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from ISPs. The overlay has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes a message traverses before reaching its destination.

For example, Akamai Technologies manages an overlay network which provides reliable, efficient content delivery (a kind of multicast). Academic research includes End System Multicast [4] and Overcast for multicast, RON (Resilient Overlay Network) for resilient routing, and OverQoS for quality of service guarantees, among others. Virtela Technology Services [5] provides an overlay network in 90+ countries on top of 500+ different underlying telecom providers.

List of overlay network protocols based on TCP/IP

Overlay network protocols based on TCP/IP include:
• Distributed hash tables (DHTs), such as KAD and other protocols based on the Kademlia algorithm.
• JXTA
• Many peer-to-peer protocols including Gnutella, Gnutella2, Freenet and I2P. (Examples: Limewire, utorrent, Shareaza, etc.)
• PUCC
• Solipsis: a France Télécom system for a massively shared virtual world

References
[1] D. Andersen, H. Balakrishnan, M. Kaashoek, and R. Morris. Resilient Overlay Networks (http://nms.csail.mit.edu/ron/). In Proc. ACM SOSP, Oct. 2001.
[2] AT&T history of Network transmission (http://www.corp.att.com/history/nethistory/transmission.html)
[3] Fransman, Martin. Telecoms in the Internet Age: From Boom to Bust to ...?. Oxford University Press.
[4] http://esm.cs.cmu.edu/
[5] Virtela Technology Services (http://www.virtela.net)

External links
• List of overlay network implementations, July 2003 (http://himalia.it.jyu.fi/ffdoc/storm/pegboard/available_overlays--hemppah/peg.html)
• Resilient Overlay Networks (http://nms.csail.mit.edu/ron/)
• Overcast: reliable multicasting with an overlay network (http://www.cs.brown.edu/~jj/papers/overcast-osdi00.pdf)
• OverQoS: An overlay based architecture for enhancing Internet QoS (http://nms.lcs.mit.edu/papers/overqos-nsdi04.html)
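The DHT idea mentioned above, routing a message by a logical key rather than by a known IP address, can be illustrated with a small consistent-hashing ring: every key is owned by the first node whose hashed identifier follows the key's hash. The node names and the choice of SHA-1 over a 32-bit space are illustrative assumptions, not details of any particular DHT protocol.

```python
import hashlib
from bisect import bisect_right

def h(value):
    # Map any string onto a fixed circular identifier space (here, 32 bits).
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2 ** 32)

class OverlayRing:
    """A toy DHT-style overlay: each key is owned by the first node clockwise from its hash."""

    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def route(self, key):
        ids = [node_id for node_id, _ in self.ring]
        index = bisect_right(ids, h(key)) % len(self.ring)   # wrap around the ring
        return self.ring[index][1]

overlay = OverlayRing(["node-a", "node-b", "node-c", "node-d"])
print(overlay.route("movie.mkv"))   # the logical key alone decides the responsible node
print(overlay.route("thesis.pdf"))  # no IP address is needed to express the destination
```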

Paradiseo

Developer(s): DOLPHIN project-team of INRIA
Stable release: 1.0 / October 12, 2007 [1]
Operating system: Cross-platform
Type: Technical computing
License: CeCill license
Website: [2]

ParadisEO is a white-box object-oriented framework dedicated to the flexible design of metaheuristics. It provides a broad range of features including evolutionary algorithms, local searches, particle swarm optimization, hybrid metaheuristics, the most common parallel and distributed models and hybridization mechanisms, etc. This high content and utility encourages its use at an international level. ParadisEO is distributed under the CeCill license and can be used under several environments.

Overview

ParadisEO is a white-box object-oriented framework dedicated to the reusable design of parallel and distributed metaheuristics. This template-based, ANSI-C++ compliant computation library is portable across both Windows systems and sequential platforms (Unix, Mac OS X, Linux, etc.). ParadisEO is based on a clear conceptual separation of the solution methods from the problems they are intended to solve. This separation confers a maximum of code and design reuse to the user. Furthermore, the fine-grained nature of the classes provided by the framework allows a higher flexibility compared to other frameworks.

ParadisEO is one of the rare frameworks that provide the most common parallel and distributed models. Their implementation is portable on distributed-memory machines as well as on shared-memory multiprocessors, as it uses standard libraries such as MPI, PVM and PThreads. The models can be exploited in a transparent way: one has just to instantiate their associated provided classes. Their experimentation on the radio network design real-world application demonstrates their efficiency.

Modules

Paradiseo-EO
Paradiseo-EO deals with population-based metaheuristics. It is a template-based, ANSI-C++ compliant evolutionary computation library (evolutionary algorithms, particle swarm optimization, etc.). It is component-based, so that if you don't find the class you need in it, it is very easy to subclass existing abstract or concrete classes. It contains classes for almost any kind of evolutionary computation you might come up with, at least for the ones we could think of.

Paradiseo-MO
Paradiseo-MO deals with single-solution based metaheuristics; it provides tools for the development of single solution-based metaheuristics: hill climbing, tabu search, simulated annealing, iterated local search (ILS), incremental evaluation, partial neighbourhood, etc.

Paradiseo-MOEO
Paradiseo-MOEO provides a broad range of tools for the design of multiobjective optimization metaheuristics: fitness assignment schemes (achievement functions, ranking, indicator-based, etc.), diversity preservation mechanisms (sharing, crowding), elitism, performance metrics (contribution, entropy, etc.), statistical tools and some easy-to-use state-of-the-art multi-objective evolutionary algorithms (NSGA, NSGA-II, IBEA, etc.).

Paradiseo-PEO
Paradiseo-PEO provides tools for the design of parallel and distributed metaheuristics: parallel evaluation, parallel evaluation function, island model, cellular model. Paradiseo-PEO also introduces tools for the design of distributed, hybrid and cooperative models.

Team
• Jean-Charles Boisson
• Clive Canape [3]
• Thomas Legrand
• Arnaud Liefooghe
• Alexandru-Adrian Tantar

External links
• Official site [2], at the Paradiseo website
• Team [1], at the DOLPHIN project-team website

References
• "Solving the Protein Folding Problem with a Bicriterion Genetic Algorithm on the Grid" [4]
• Protein Sequencing with an Adaptive Genetic Algorithm from Tandem Mass Spectrometry. CEC 2006, July 16-21 2006, Vancouver, Canada, pp 1412–1419, 0-7803-9489-5.
• "ParadisEO-MOEO: A Framework for Evolutionary Multi-objective Optimization" [5] (broken link?)
• A Multi-Objective Approach to the Design of Conducting Polymer Composites for Electromagnetic Shielding. EMO 2007, Matsushima, Japan.
• A hybrid metaheuristic for knowledge discovery in microarray experiments. In Handbook of Bioinspired Algorithms and Applications, edited by S. Olariu and A.Y. Zomaya.
• Grid computing for parallel bioinspired algorithms [6]
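The single-solution metaheuristics gathered in Paradiseo-MO all share the same skeleton: evaluate a solution, explore a neighbourhood, accept or reject the move. The sketch below shows that skeleton as a plain hill climber on an invented toy objective; ParadisEO itself is a templated C++ library, so the Python here only illustrates the concept.

```python
import random

def hill_climbing(evaluate, neighbours, start, max_iterations=1000):
    """Generic single-solution local search: keep moving to a better neighbour."""
    current, current_fit = start, evaluate(start)
    for _ in range(max_iterations):
        candidate = random.choice(neighbours(current))
        candidate_fit = evaluate(candidate)
        if candidate_fit < current_fit:          # minimisation
            current, current_fit = candidate, candidate_fit
    return current, current_fit

# Toy problem: minimise f(x) = (x - 7)^2 over the integers, with +/-1 moves.
best, fitness = hill_climbing(
    evaluate=lambda x: (x - 7) ** 2,
    neighbours=lambda x: [x - 1, x + 1],
    start=random.randint(-50, 50),
)
print(best, fitness)
```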

References
[1] http://www.inria.fr/recherche/equipes/dolphin.en.html
[2] http://paradiseo.gforge.inria.fr
[3] http://researchers.lille.inria.fr/~canape
[4] http://doi.ieeecomputersociety.org/10.1109/CCGRID.2006.172
[5] http://www2.lifl.fr/~jourdan/publi/jourdan_EMO07_A.pdf
[6] http://top25.sciencedirect.com/index.php?cat_id=9&subject_area_id=7&journal_id=07437315
[7] http://www.springerlink.com/content/up02m74726v1526u/
[8] http://dx.doi.org/10.1016/j.comcom.2006.08.017

Parasitic computing

Parasitic computing is a programming technique where a program, in normal authorized interactions with another program, manages to get the other program to perform computations of a complex nature. It is, in a sense, a security exploit in that the program implementing the parasitic computing has no authority to consume resources made available to the other program.

The example given by the original paper was two computers communicating over the Internet, under disguise of a standard communications session. The first computer is attempting to solve a large and extremely difficult 3-SAT problem; it has decomposed the original 3-SAT problem into a considerable number of smaller problems. Each of these smaller problems is then encoded as a relation between a checksum and a packet, such that whether the checksum is accurate or not is also the answer to that smaller problem. The packet/checksum is then sent to another computer. This computer will, as part of receiving the packet and deciding whether it is valid and well-formed, create a checksum of the packet and see whether it is identical to the provided checksum. If the checksum is invalid, it will then request a new packet from the original computer. The original computer now knows the answer to that smaller problem based on the second computer's response, and can transmit a fresh packet embodying a different sub-problem. Eventually, all the sub-problems will be answered and the final answer easily calculated.

So in the end, the target computer(s) is unaware that it has performed computation for the benefit of the other computer, or even done anything besides have a normal TCP/IP session.

The proof-of-concept is obviously extremely inefficient, as the amount of computation necessary merely to send the packets in the first place easily exceeds the computations leached from the other program, and the 3-SAT problem would be solved much more quickly if just analyzed locally. In addition, in practice packets would probably have to be retransmitted occasionally when real checksum errors and network problems occur. However, parasitic computing on the level of checksums is a demonstration of the concept. The authors suggest that as one moves up the application stack, there might come a point where there is a net computational gain to the parasite; perhaps one could break down interesting problems into queries of complex cryptographic protocols using public keys. If there was a net gain, one could in theory use a number of control nodes for which many hosts on the Internet form a distributed computing network completely unawares.
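The checksum trick can be made concrete in a few lines. In the sketch below the "parasite" attaches a checksum that verifies only for packets encoding a satisfying assignment, so the victim's routine checksum verification silently answers the sub-problem. The clause set, the one-byte-per-variable encoding and, in particular, the shortcut of evaluating the formula while building the packet are invented simplifications; the original paper arranges the packet bytes so that the ones'-complement arithmetic itself performs the evaluation.

```python
def internet_checksum(data):
    """Standard ones'-complement sum over 16-bit words, as used by TCP/IP."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

CLAUSES = [(1, 2, -3), (-1, 3, 2)]   # toy 3-SAT instance; the sign marks negation

def satisfies(assignment):
    return all(any(assignment[abs(v) - 1] == (v > 0) for v in clause) for clause in CLAUSES)

def build_packet(assignment):
    payload = bytes(int(b) for b in assignment)
    # Shortcut for illustration: only satisfying assignments get the matching checksum.
    check = internet_checksum(payload) if satisfies(assignment) else 0xDEAD
    return payload, check

def victim_accepts(payload, check):
    # The victim does nothing unusual: it simply verifies the checksum.
    return internet_checksum(payload) == check

for bits in [(True, False, True), (False, False, True)]:
    payload, check = build_packet(bits)
    print(bits, "accepted" if victim_accepts(payload, check) else "rejected")
```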

References
1. Parasitic computing, Barabasi et al., Nature, 412: 894-897 (2001).

External links
• http://www.nd.edu/~parasite
• http://www.szene.ch/parasit/

PlanetSim

PlanetSim is an object oriented simulation framework for overlay networks and services. This framework presents a layered and modular architecture with well defined hotspots documented using classical design patterns. (Figure: PlanetSim logo.)

PlanetSim has been developed in the Java language to reduce complexity and smooth the learning curve of the framework. We have however profiled and optimised the code to enable scalable simulations in reasonable time.

In PlanetSim, developers can work at two main levels: creating and testing new overlay algorithms like Chord or Pastry, or creating and testing new services (DHT, CAST, DOLR, etc.) on top of existing overlays. Because of this, distributed services in the simulator use the Common API for Structured Overlays.

PlanetSim also aims to enable a smooth transition from simulation code to experimentation code running in the Internet. Because of this, we provide wrapper code that takes care of network communication and permits us to run the same code in network testbeds such as PlanetLab. This enables complete transparency to services running either against the simulator or the network. To validate the utility of our approach, we have implemented two overlays (Chord and Symphony) and a variety of services like CAST, DHT, and object middleware. We have proved that PlanetSim reproduces the measures of these environments and is also efficient in its network implementation.

PlanetSim Architecture

PlanetSim's architecture comprises three main extension layers constructed one atop another. The Simulator dictates the overall life cycle of the framework by calling the appropriate methods in the overlay's Node and obtaining routing information to dispatch messages through the Network. Moreover, the overlay layer obtains proximity information about other nodes by asking the Network layer. Besides, applications are built in the upper layer using the standard Common API façade. This façade is built on the routing services offered by the underlying overlay layer. (Figure: PlanetSim layered architecture.)
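The division of labour described above (a simulator driving the life cycle, overlay nodes deciding where messages go next, and a network layer delivering them) can be caricatured in a few lines. The class names and the follow-the-successor routing rule below are invented for illustration; PlanetSim itself is a Java framework with a far richer Common API.

```python
from collections import deque

class Network:
    """Bottom layer: represents delivery of a message to a target node."""
    def send(self, target, message):
        return (target, message)

class Node:
    """Overlay layer: a node with an id, a successor pointer and a message handler."""
    def __init__(self, node_id, successor=None):
        self.node_id, self.successor = node_id, successor

    def handle(self, message, network):
        # Forward along successor pointers until the responsible node is reached
        # (the toy assumes the key equals some node id in the ring).
        if message["key"] == self.node_id or self.successor is None:
            print(f"node {self.node_id}: delivered {message['payload']!r}")
            return []
        return [network.send(self.successor, message)]

class Simulator:
    """Top of the life cycle: drains the event queue, calling into Node and Network."""
    def __init__(self, nodes):
        self.network = Network()
        self.nodes = {n.node_id: n for n in nodes}
        self.queue = deque()

    def post(self, target, message):
        self.queue.append((target, message))

    def run(self):
        while self.queue:
            target, message = self.queue.popleft()
            for event in self.nodes[target].handle(message, self.network):
                self.queue.append(event)

ring = [Node(0, 16), Node(16, 32), Node(32, 0)]
sim = Simulator(ring)
sim.post(0, {"key": 32, "payload": "lookup"})
sim.run()
```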

Publications

2005
• Pedro García, Carles Pairot, Rubén Mondéjar, Jordi Pujol, Helio Tejedor, and Robert Rallo. PlanetSim: A New Overlay Network Simulation Framework [1]. Software Engineering and Middleware, SEM 2004, Revised Selected Papers. Lecture Notes in Computer Science (LNCS), Volume 3437, pp. 123-137, ISSN 0302-9743, ISBN 3-540-25328-9, March 2005.

2004
• Pedro García, Carles Pairot, Rubén Mondéjar, Jordi Pujol, Helio Tejedor, and Robert Rallo. PlanetSim: A New Overlay Network Simulation Framework [2]. Proceedings of the 19th IEEE International Conference on Automated Software Engineering (ASE 2004), Workshop on Software Engineering and Middleware (SEM 2004), Linz, Austria, September 2004. ISBN 3-902457-02-3. Acceptance rate: 34%.

Graphical Results

Currently PlanetSim can show the network topology as GML or Pajek output. See these examples:

Chord
A Chord network with 1000 nodes, whose node Ids are randomly built. This output is obtained by loading the output file into the yEd graph editor, included in the current PlanetSim distribution. (Figure: Random 1000-node Chord network.)

Symphony
A Symphony network with 1000 nodes, whose node Ids are randomly built. This output is obtained by loading the output file into the Pajek graph editor (Windows version only), not included in the current PlanetSim distribution. (Figure: Random 1000-node Symphony network.)

External links
• PlanetSim official website [3]
• PlanetSim at SourceForge.net [4]

References
[1] http://www.springerlink.com/index/10.1007/11407386_10
[2] http://planet.urv.es/planetsim/planetsim.pdf
[3] http://planet.urv.es/planetsim/
[4] http://sourceforge.net/projects/planetsim/

Portable object (computing)

In distributed programming, a portable object is an object which can be accessed through a normal method call while possibly residing in memory on another computer. It is portable in the sense that it moves from machine to machine, irrespective of operating system or computer architecture. This mobility is the end goal of many remote procedure call systems.

The advantage of portable objects is that they are easy to use and very expressive, allowing programmers to be completely unaware that objects reside in other locations. Detractors cite this as a fault, as naïve programmers will not expect network-related errors or the unbounded nondeterminism associated with large networks.
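A portable or remote object is usually reached through a local proxy (stub) that forwards method calls over some transport, which is what lets callers be unaware that the object resides elsewhere. The sketch below shows such a proxy; the pickle-based wire format and the loopback transport standing in for a real socket are assumptions for illustration, and the network errors the article warns about would surface inside the forwarded call.

```python
import pickle

class RemoteProxy:
    """Local stand-in for an object living elsewhere: method calls are serialized,
    sent over a transport and the result is returned, so callers use plain syntax."""

    def __init__(self, transport, object_id):
        self._transport = transport
        self._object_id = object_id

    def __getattr__(self, method_name):
        def call(*args, **kwargs):
            request = pickle.dumps((self._object_id, method_name, args, kwargs))
            return pickle.loads(self._transport(request))   # may raise network errors
        return call

# A loopback transport standing in for a socket: it "hosts" the real object locally.
class Counter:
    def __init__(self):
        self.value = 0
    def add(self, n):
        self.value += n
        return self.value

objects = {"counter-1": Counter()}

def loopback_transport(raw_request):
    object_id, method, args, kwargs = pickle.loads(raw_request)
    result = getattr(objects[object_id], method)(*args, **kwargs)
    return pickle.dumps(result)

counter = RemoteProxy(loopback_transport, "counter-1")
print(counter.add(5))   # looks like a normal method call; the object may live anywhere
print(counter.add(2))
```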

Redis (data store)

Developer(s): Salvatore Sanfilippo
Initial release: 2009
Stable release: 2.2.12 / June 12, 2011
Development status: Active
Written in: ANSI C
Operating system: Cross-platform
Available in: English
Type: Document-oriented database
License: BSD
Website: http://redis.io/

Redis is an open-source, networked, in-memory, persistent, journaled, key-value data store. It is written in ANSI C. As of 15 March 2010, development of Redis is sponsored by VMware[1] [2]. Supported languages or language bindings include C, C++, C#, Clojure, Common Lisp, Erlang, Go, Haskell, Java, JavaScript (both client- and server-side), Lua, Objective-C, Perl, PHP, Python, R, Ruby, Scala, and Tcl.

Data model

In its outer layer, the Redis data model is a dictionary where keys are mapped to values. One of the main differences between Redis and other structured storage systems is that values are not limited to strings. In addition to strings, the following abstract data types are supported:
• Lists of strings
• Sets of strings (collections of non-repeating unsorted elements)
• Sorted sets of strings (collections of non-repeating elements ordered by a floating-point number called score)
• Hashes where keys are strings and values are either strings or integers

The type of a value determines what operations (called commands) are available for the value itself. Redis supports high-level atomic server-side operations like intersection, union, and difference between sets, and sorting of lists, sets and sorted sets.

Persistence

Redis typically holds the whole dataset in RAM. Versions up to 2.4 could be configured to use virtual memory[3], but this is now deprecated. Persistence is reached in two different ways. One is called snapshotting, and is a semi-persistent durability mode where the dataset is asynchronously transferred from memory to disk from time to time. Since version 1.1 the safer alternative is an append-only file (a journal) that is written as operations modifying the dataset in memory are processed. Redis is able to rewrite the append-only file in the background in order to avoid an indefinite growth of the journal.
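A short session with the third-party redis-py client illustrates the value types and the atomic server-side set operations described above. It assumes a Redis server listening on the default local port, and the mapping-style zadd and hset calls shown follow the newer redis-py conventions, which have changed across client versions.

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

r.set("greeting", "hello")                               # plain string value
r.rpush("tasks", "write", "test", "ship")                 # list of strings
r.sadd("tags:post1", "db", "nosql")                       # sets
r.sadd("tags:post2", "db", "cache")
r.zadd("scores", {"alice": 12.5, "bob": 7.0})             # sorted set, ordered by score
r.hset("user:1", mapping={"name": "Ada", "visits": 3})    # hash of field -> value

print(r.get("greeting"))
print(r.lrange("tasks", 0, -1))
print(r.sinter("tags:post1", "tags:post2"))               # atomic server-side intersection
print(r.zrange("scores", 0, -1, withscores=True))
print(r.hgetall("user:1"))
```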

Replication

Redis supports master-slave replication. Data from any Redis server can replicate to any number of slaves. A slave may be a master to another slave. This allows Redis to implement a single-rooted replication tree. Redis slaves are writable, permitting intentional and unintentional inconsistency between instances. Replication is useful for read (but not write) scalability or data redundancy.[4] The Publish/Subscribe feature is fully implemented, so a client of a slave may SUBSCRIBE to a channel and receive a full feed of messages PUBLISHed to the master, anywhere up the replication tree.

Performance

The in-memory nature of Redis allows it to perform extremely well compared to database systems that write every change to disk before considering a transaction committed.[5] There is no notable speed difference between write and read operations.[6]

References
• Jeremy Zawodny, Redis: Lightweight key/value Store That Goes the Extra Mile [7], Linux Magazine, August 31, 2009.
• Isabel Drost and Jan Lehnard (29 October 2009), Happenings: NoSQL Conference, Berlin [8], The H. Slides [9] for the Redis presentation. Summary [10].
• Billy Newport (IBM): "Evolving the Key/Value Programming Model to a Higher Level" [11], QCon Conference 2009, San Francisco.

[1] VMware: the new Redis home (http://antirez.com/post/vmware-the-new-redis-home.html)
[2] VMware: The Console: VMware hires key developer for Redis (http://blogs.vmware.com/console/2010/03/vmware-hires-key-developer-for-redis.html)
[3] Redis documentation, "Virtual Memory" (http://redis.io/topics/virtual-memory)
[4] http://code.google.com/p/redis/wiki/ReplicationHowto
[5] "FAQ" (http://redis.io/topics/faq). Accessed January 18, 2011.
[6] A. Charnock: "Redis Benchmarking on Amazon EC2, Flexiscale, and Slicehost" (http://porteightyeight.com/2009/11/09/redis-benchmarking-on-amazon-ec2-flexiscale-and-slicehost/)
[7] http://www.linux-mag.com/cache/7496/1.html
[8] http://www.h-online.com/open/features/Happenings-NoSQL-Conference-Berlin-843597.html
[9] http://nosqlberlin.de/slides/NoSQLBerlin-Redis.pdf
[10] http://www.paperplanes.de/2009/10/27/theres_something_about_redis.html
[11] http://www.infoq.com/presentations/newport-evolving-key-value-programming-model

External links
• Official Redis project page (http://redis.io/)
• Audio interview with Salvatore Sanfilippo on The Changelog podcast (http://thechangelog.com/post/2801342864/episode-0-4-5-redis-with-salvatore-sanfilippo/)
• Extensive Redis tutorial with real use-cases by Simon Willison (http://simonwillison.net/static/2010/redis-tutorial/)

Remote Component Environment

Remote Component Environment (RCE) (was: Reconfigurable Computing Environment)
Stable release: 1.7.0 / July 20, 2010
Written in: Java and Python
Operating system: Cross-platform
Type: Integration platform, multi-purpose problem solving environment
License: Eclipse Public License
Website: http://www.rcenvironment.org/

The Remote Component Environment (RCE) is an all-purpose, distributed platform for the integration of applications. It is a plug-in based system for application integration written in Java on top of the Eclipse framework. RCE enables the developers of integrated applications to concentrate on application-specific logic and to let the different applications interact by embedding them into one unified environment. RCE provides integrated applications access to general-purpose software components like a workflow engine, a privilege management, or an interface to external compute and storage resources (Grid, clusters). It supports and integrates well-known middleware solutions like the Globus Toolkit and UNICORE, and abstraction layers like Hibernate (Java).

Development of the RCE platform took place in the SESIS [1] project. Previously the platform was known as Reconfigurable Computing Environment; since it has been open-sourced, the name was changed to Remote Component Environment[2].

References
[1] http://www.sesis.de
[2] http://www.nohuddleoffense.de/2009/09/19/remote-component-environment/

External links
• Official RCE website (http://www.rcenvironment.de)
• DLR RCE product site (in German) (http://www.dlr.de/sc/produkte/rce)

Request Based Distributed Computing

Request Based Distributed Computing (RBDC) is a term that refers to the distributed computing paradigm underlying the HyperText Computer.

External links
• HyperText Computer Blog [2]
• Request Based Distributed Computing Blog [1]

References
[1] http://www.davidpratten.com/2008/01/07/request-based-distributed-computing-a-rough-sketch/

RM-ODP

Reference Model of Open Distributed Processing (RM-ODP) is a reference model in computer science, which provides a co-ordinating framework for the standardization of open distributed processing (ODP). It supports distribution, interworking, platform and technology independence, and portability, together with an enterprise architecture framework for the specification of ODP systems.

RM-ODP, also named ITU-T Rec. X.901-X.904 and ISO/IEC 10746, is a joint effort by the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Telecommunication Standardization Sector (ITU-T).[1] The RM-ODP family of recommendations and international standards defines a system of interrelated essential concepts necessary to specify open distributed processing systems, and provides a well-developed enterprise architecture framework for structuring the specifications for any large-scale system, including software systems.

Overview

The RM-ODP is a reference model based on precise concepts derived from current distributed processing developments and, as far as possible, on the use of formal description techniques for specification of the architecture. Many RM-ODP concepts, possibly under different names, have been around for a long time and have been rigorously described and explained in exact philosophy (for example, in the works of Mario Bunge) and in systems thinking (for example, in the works of Friedrich Hayek). Some of these concepts, such as abstraction, composition, and emergence, have recently been provided with a solid mathematical foundation in category theory. (Figure: The RM-ODP view model, which provides five generic and complementary viewpoints on the system and its environment.)

RM-ODP has four fundamental elements:
• an object modelling approach to system specification;
• the specification of a system in terms of separate but interrelated viewpoint specifications;
• the definition of a system infrastructure providing distribution transparencies for system applications; and
• a framework for assessing system conformance.

4. Parts 1 and 4 were adopted in 1998. is to provide separate viewpoints into the specification of a given complex system. Viewpoints modeling and the RM-ODP framework Most complex system specifications are so extensive that no single individual can fully comprehend all aspects of the specifications.RM-ODP The RM-ODP family of recommendations and international standards defines a system of interrelated essential concepts necessary to specify open distributed processing systems and provides a well-developed enterprise architecture framework for structuring the specifications for any large-scale systems including software systems. [7] 2. In only 18 pages. and involved a number of major computing and telecommunication companies. divide the design activity into several areas of concerns. and an outline of the ODP architecture. the viewpoints are not completely independent. It introduces the principles of conformance to ODP standards and the way in which they are applied. These are the constraints to which ODP standards must conform. This recommendation also defines RM-ODP viewpoints. Foundations : Contains the definition of the concepts and analytical framework for normalized description of (arbitrary) distributed processing systems. of course. Architectural Semantics[9] : Contains a formalization of the ODP modeling concepts by interpreting many concepts in terms of the constructs of the different standardized formal description techniques. who may include standard writers and architects of ODP systems. Parts 2 and 3 of the RM-ODP were eventually adopted as ISO standards in 1996. Overview[6] : Contains a motivational overview of ODP. established to bring together those particular pieces of information relevant to some particular area of concern. established to bring together those particular pieces of information relevant to some particular area of concern during the analysis or design of the system. Viewpoint modeling has become an effective approach for dealing with the inherent complexity of large distributed systems. we all have different interests in a given system and different reasons for examining the system's specifications. Current software architectural practices. this standard sets the basics of the whole model in a clear. justification and explanation of key concepts. the Zachman Framework. The concept of RM-ODP viewpoints framework. key items in each are identified as related to items in the other viewpoints. each one focusing on a specific aspect of the system. 3. each viewpoint substantially uses the same foundational concepts . therefore. RM-ODP Topics RM-ODP standards RM-ODP consists of four basic ITU-T Recommendations and ISO/IEC International Standards:[2] [3] [4] [5] 1. A viewpoint is a subdivision of the specification of a complete system. Although separately specified. giving scoping. TOGAF. precise and concise way. Associated with each viewpoint is a viewpoint language that optimizes the vocabulary and presentation for the audience of that viewpoint. These viewpoints each satisfy an audience with interest in a particular set of aspects of the system. as described in IEEE 1471. This ran from 1984 until 1998 under the leadership of Andrew Herbert (now MD of Microsoft Research in Cambridge). RM-ODP. It contains explanatory material on how the RM-ODP is to be interpreted and applied by its users. DoDAF and. Furthermore. A business executive will ask different questions of a system make-up than would a system implementer. 
124 History Much of the preparatory work that led into the adoption of RM-ODP as an ISO standard was carried out by the Advanced Networked Systems Architecture (ANSA) project. Architecture[8] : Contains the specification of the required characteristics that qualify distributed processing as open. Moreover. subdivisions of the specification of a whole system. Examples include the "4+1" view model.

RM-ODP (defined in Part 2 of RM-ODP). for expressing the specifications of open distributed systems in terms of the viewpoint specifications defined by the RM-ODP. The viewpoint languages defined in the reference model are abstract languages in the sense that they define what concepts should be used. hampers communication between system developers and makes it difficult to relate or merge system specifications where there is a need to integrate IT systems. This lack of precise notations for expressing the different models involved in a multi-viewpoint specification of a system is a common feature for most enterprise architectural approaches. • The information viewpoint. document (usually referred to as UML4ODP ISO/IEC 19505). the development of industrial tools for modeling the viewpoint specifications.Use of UML for ODP system specifications". However. there is no widely agreed approach to the structuring of such specifications. These approaches were consciously defined in a notation. and an approach for structuring them according to the RM-ODP principles. including the Zachman Framework. not how they should be represented. which focuses on the semantics of the information and the information processing performed. this makes more difficult. Although the ODP reference model provides abstract languages for the relevant concepts. The purpose of "UML4ODP" to allow ODP modelers to use the UML notation for expressing their ODP specifications in a standard graphical way. • The engineering viewpoint. More specifically. This adds to the cost of adopting the use of UML for system specification. the "4+1" model. • The technology viewpoint. In order to address these issues. one for each viewpoint language and one to express the correspondences between viewpoints. and to allow UML tools to be used to process viewpoint specifications. This [10] ) defines use of the Unified Modeling Language 2 (UML 2.and representation-neutral manner to increase their use and flexibility.Open distributed processing . However. to allow UML modelers to use the RM-ODP concepts and mechanisms to structure their large UML system specifications according to a mature and standard proposal. and the use of a common object model provides the glue that binds them all together. among other things. which focuses on the mechanisms and functions required to support distributed interactions between objects in the system. scope and policies for the system. which enables distribution through functional decomposition on the system into objects which interact at interfaces.906|ISO/IEC 19793: Information technology . It describes the distribution of processing performed by the system to manage the information and provide the functionality. which focuses on the choice of technology of the system. The mutual consistency among the viewpoints is ensured by the architecture defined by RM-ODP. the viewpoints are sufficiently independent to simplify reasoning about the complete specification. the formal analysis of the specifications produced. ISO/IEC and the ITU-T started a joint project in 2004: "ITU-T Rec. . 125 RM-ODP and UML Currently there is growing interest in the use of UML for system modelling. It describes the functionality provided by the system and its functional decomposition. X. It defines a set of UML Profiles. It describes the technologies chosen to provide the processing. thus facilitating the software design process and the enterprise architecture specification of large software systems. 
the RM-ODP framework provides five generic and complementary viewpoints on the system and its environment: • The enterprise viewpoint. which focuses on the purpose. It describes the business requirements and how to meet them. It describes the information managed by the system and the structure and content type of the supporting data. However. • The computational viewpoint. it does not prescribe particular notations to be used in the individual viewpoints. functionality and presentation of information. or the RM-ODP. and the possible derivation of implementations from the system specifications.

RM-ODP

Applications

In addition, ITU-T Rec. X.906 | ISO/IEC 19793 enables the seamless integration of the RM-ODP enterprise architecture framework with the Model-Driven Architecture (MDA) initiative from the OMG, and with the service-oriented architecture (SOA).

In addition, there are several projects that have used or currently use RM-ODP for effectively structuring their systems specifications:
• The Reference Architecture for Space Data Systems (RASDS),[12] from the Consultative Committee for Space Data Systems
• Interoperability Technology Association for Information Processing (INTAP), Japan[13]
• The Synapses European project[14]
• The COMBINE project[11]

Notes and references
[1] A complete and updated list of references to publications related to RM-ODP (books, journal articles, conference papers, etc.) is available at the RM-ODP resource site (http://www.rm-odp.net).
[2] In the same series as the RM-ODP are a number of other standards and recommendations for the specification and development of open and distributed systems, for which RM-ODP provides a standardization framework: ITU-T Rec. X.910 | ISO/IEC 14771:1999, Naming framework; ITU-T Rec. X.920 | ISO/IEC 14750:1999, Interface Definition Language; ITU-T Rec. X.930 | ISO/IEC 14753:1999, Interface references and binding; ITU-T Rec. X.931 | ISO/IEC 14752:2000, Protocol support for computational interactions; ITU-T Rec. X.950 | ISO/IEC 13235-1:1998, Trading function: Specification; ITU-T Rec. X.952 | ISO/IEC 13235-3:1998, Provision of Trading Function using OSI directory service; ITU-T Rec. X.960 | ISO/IEC 14769:2001, Type repository function; and ISO/IEC 19500-2:2003, General Inter-ORB Protocol (GIOP)/Internet Inter-ORB Protocol (IIOP).
[3] Copies of the RM-ODP family of standards can be obtained either from ISO (http://www.iso.ch) or from ITU-T (http://www.itu.int/). Parts 1 to 4 of the RM-ODP are available for free download from ISO (http://isotc.iso.ch/livelink/livelink/fetch/2000/2489/Ittf_Home/PubliclyAvailableStandards.htm). All ODP-related ITU-T Recommendations, including the X.9xx series, are freely available from the ITU-T (http://www.itu.int/rec/T-REC-X/en).
[4] There is also a very useful hyperlinked version (http://www.joaquin.net/ODP) of Parts 2 and 3 of the RM-ODP, made available in keeping with a resolution of the ISO council, together with an index to the Reference Model. The Table of Contents and Index were prepared by Lovelace Computing and are being made available by Lovelace Computing as a service to the standards community.
[5] Some resources related to the current version of ITU-T Rec. X.906 | ISO/IEC 19793 "Use of UML for ODP systems specifications" (http://www.rm-odp.net/files/resources/LON-040_UML4ODP_IS/LON-040_UML4ODP_IS.pdf) are also available from the RM-ODP resource site (http://www.rm-odp.net). They include the UML Profiles of the five ODP viewpoints, the viewpoint metamodels, the GIF files for the ODP-specific icons, etc.
[6] ISO/IEC 10746-1 | ITU-T Rec. X.901 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/c020696_ISO_IEC_10746-1_1998(E).zip)
[7] ISO/IEC 10746-2 | ITU-T Rec. X.902 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/s018836_ISO_IEC_10746-2_1996(E).zip)
[8] ISO/IEC 10746-3 | ITU-T Rec. X.903 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/s020697_ISO_IEC_10746-3_1996(E).zip)
[9] ISO/IEC 10746-4 | ITU-T Rec. X.904 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/c020698_ISO_IEC_10746-4_1998(E).zip)
[10] ITU-T Rec. X.911 | ISO/IEC 15414:2002, Reference model - Enterprise language (http://www.joaquin.net/ODP/DIS_15414_X.911.pdf)
[11] COMBINE (http://www.opengroup.org/combine/overview.htm)
[12] Reference Architecture for Space Data Systems (RASDS) (http://public.ccsds.org/review/default.aspx)
[13] Interoperability Technology Association for Information Processing (INTAP) (http://www.net.intap.or.jp/e)
[14] The Synapses Project: a three-year project funded under the EU 4th Framework Health Telematics Programme (http://www.cs.tcd.ie/synapses/public/)

RM-ODP

External links
• RM-ODP Resource site (http://www.rm-odp.net/)
• Open Distributed Processing - Reference Model (http://www.joaquin.net/ODP/)
• RM-ODP information at LAMS (http://lamswww.epfl.ch/reference/rm-odp), Swiss Federal Institute of Technology, Lausanne (EPFL), Switzerland
• ILR (http://www.infres.enst.fr/recherche/ILR/rapport.html), Networks and Computer Science Department of ENST, Paris, France
• Systèmes Répartis et Coopératifs (http://www-src.lip6.fr/), UMPC, Paris, France
• FORMOSA (http://www.cs.stir.ac.uk/~kjt/research/formosa.html) (Formalisation of ODP Systems Architecture), University of Stirling, UK
• Computing Laboratory (http://www.cs.ukc.ac.uk/), University of Kent, Canterbury, UK
• Distributed Systems Technology Center (http://archive.dstc.edu.au/AU/research_news/), Australia
• Official Record of the ANSA project (http://www.ansa.co.uk/)

Semantic Web Data Space

A Semantic Web Data Space is a container for domain specific portable data, which is provided in human and/or machine friendly structures. Data in a Data Space can be referenced by an identifier, and is linked with other data across spaces and domains.

The underlying paradigm is quite new; however, it brings together ideas and technologies from various sources:
• The Semantic Web, Linked Data, and the Linked Data Project
• Object Oriented Databases
• Data Portability
• Web 2.0 and Content Management Systems
• Ontologies and Categorization

The approach can be applied to both Web based systems and Desktop based systems.

Semantic Web Data Spaces, Linked Data, and Data Portability

Data in Data Spaces are linked across spaces and domains to enhance the meaning of internal data; this supports the work of the Linked Data project, which is part of the Semantic Web effort. This has the benefit of being a useful point for querying about information across domains, and assists the development of a Web of Data.

A Data Space should be fully supportive of data portability such as that advocated by the DataPortability project. This means that an object in a data space should be movable and should also have the ability to be referenced using an identifier such as a Uniform Resource Identifier, and thus can be viewed in an Object Oriented fashion.

Exemplary Semantic Web Data Space Implementation
• OpenLink Data Spaces, a distributed collaborative data space system implemented as a Social networking service and Content Management System. It is built on top of the OpenLink Software Virtuoso Universal Server.

Related web technologies
• Uniform Resource Identifiers for object identifiers
• Resource Description Framework for object and data space descriptions
• SPARQL for querying about objects across domains

References
• H. Zhuge, The Web Resource Space Model, Springer, 2008.
• H. Zhuge, Resource Space Model, its design method and applications, Journal of Systems and Software, 72(1)(2004)71-81.
• H. Zhuge, Y. Xing and P. Shi, Resource Space Model, OWL and Database: Mapping and Integration, ACM Transactions on Internet Technology, 8/4, 2008.

Service-oriented distributed applications

A RESTful programming architecture that allows some services to be run on the client and some on the server. For example, a product can first be released as a browser application and then have functionality moved, module by module, to the client application.

External links
• Novell excerpt on Web Services Frameworks [1]

References
[1] http://developer.novell.com/wiki/index.php?title=MonoWebFrameworks&redirect=no

Shared memory

In computing, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Depending on context, programs may run on a single processor or on multiple separate processors. Using memory for communication inside a single program, for example among its multiple threads, is generally not referred to as shared memory.

In hardware

In computer hardware, shared memory refers to a (typically) large block of random access memory that can be accessed by several different central processing units (CPUs) in a multiple-processor computer system. A shared memory system is relatively easy to program since all processors share a single view of data and the communication between processors can be as fast as memory accesses to a same location. The issue with shared memory systems is that many CPUs need fast access to memory and will likely cache memory, which has two complications:
• CPU-to-memory connection becomes a bottleneck. Shared memory computers cannot scale very well. Most of them have ten or fewer processors.
• Cache coherence: Whenever one cache is updated with information that may be used by other processors, the change needs to be reflected to the other processors, otherwise the different processors will be working with incoherent data (see cache coherence and memory coherence). Such coherence protocols can, when they work well, provide extremely high-performance access to shared information between multiple processors. On the other hand they can sometimes become overloaded and become a bottleneck to performance.
The alternatives to shared memory are distributed memory and distributed shared memory, each having a similar set of issues. See also Non-Uniform Memory Access.

In software

In computer software, shared memory is either
• a method of inter-process communication (IPC), i.e. a way of exchanging data between programs running at the same time. One process will create an area in RAM which other processes can access, or
• a method of conserving memory space by directing accesses to what would ordinarily be copies of a piece of data to a single instance instead, by using virtual memory mappings or with explicit support of the program in question. This is most often used for shared libraries and for XIP.
Since both processes can access the shared memory area like regular working memory, this is a very fast way of communication (as opposed to other mechanisms of IPC such as named pipes, Unix domain sockets or CORBA). On the other hand, it is less powerful, as for example the communicating processes must be running on the same machine (whereas other IPC methods can use a computer network), and care must be taken to avoid issues if processes sharing memory are running on separate CPUs and the underlying architecture is not cache coherent.

IPC by shared memory is used for example to transfer images between the application and the X server on Unix systems, or inside the IStream object returned by CoMarshalInterThreadInterfaceInStream in the COM libraries under Windows. Dynamic libraries are generally held in memory once and mapped to multiple processes, and only pages that had to be customized for the individual process (because a symbol resolved differently there) are duplicated, usually with a mechanism that transparently copies the page when a write is attempted, and then lets the write succeed on the private copy.
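To make the IPC use of shared memory concrete, here is a minimal sketch in C++ of a writer process that places a string into a named shared segment, using the POSIX shm_open/mmap API covered under Specific implementations below. The segment name "/demo_shm" and the fixed 4096-byte size are arbitrary choices for the illustration; on some systems the program must be linked with -lrt.

    // writer: minimal sketch of POSIX shared-memory IPC (assumes a POSIX system)
    #include <cstddef>
    #include <cstring>
    #include <fcntl.h>      // O_CREAT, O_RDWR
    #include <sys/mman.h>   // shm_open, mmap, munmap
    #include <unistd.h>     // ftruncate, close

    int main() {
        const char* name = "/demo_shm";          // hypothetical segment name
        const std::size_t size = 4096;

        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);   // create or open the segment
        if (fd == -1) return 1;
        if (ftruncate(fd, size) == -1) return 1;            // set the segment size

        // Map the segment into this process's address space.
        void* addr = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (addr == MAP_FAILED) return 1;

        std::strcpy(static_cast<char*>(addr), "hello from the writer");

        munmap(addr, size);
        close(fd);
        // A separate reader process would shm_open the same name, mmap it with
        // PROT_READ, and finally shm_unlink(name) when the segment is no longer needed.
        return 0;
    }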

Specific implementations

POSIX provides a standardized API for using shared memory, POSIX Shared Memory. This uses the function shm_open from sys/mman.h.[1] POSIX interprocess communication (part of the POSIX:XSI Extension) includes the shared-memory functions shmat, shmctl, shmdt and shmget. Unix System V provides an API for shared memory as well; this uses shmget from sys/shm.h. BSD systems provide "anonymous mapped memory" which can be used by several processes.

Recent 2.6 Linux kernel builds have started to offer /dev/shm as shared memory in the form of a RAM disk, more specifically as a world-writable directory that is stored in memory. Both the Fedora and Ubuntu distributions include it by default. /dev/shm support is completely optional within the kernel configuration file.

External links
• Shared Memory Interface [2]
• Shared Memory Library FAQ [3] by Márcio Serolli Pinho
• Article "IPC:Shared Memory" [4] by Dave Marshall
• shared memory facility [5] from the Single UNIX Specification
• shm_open - POSIX [6]
• shmop - documentation from SunOS 5.9 [7]
• CreateSharedMemory function [8] from Win32-SDK
• Functions in PHP-API [9]
• Paper "A C++ Pooled, Shared Memory Allocator For The Standard Template Library" [10] by Marc Ronell
• Citations from CiteSeer [11]
• Boost.Interprocess C++ Library [12]

References
[1] Robbins, Kay A.; Robbins, Steven (2003). UNIX systems programming: communication, concurrency, and threads (http://books.google.com/books?id=tdsZHyH9bQEC) (2 ed.). Prentice Hall PTR. p. 512. ISBN 9780130424112. "The POSIX interprocess communication (IPC) is part of the POSIX:XSI Extension and has its origin in UNIX System V interprocess communication." Retrieved 2011-05-13.
[2] http://www.lfbs.rwth-aachen.de/content/smi
[3] http://www.inf.pucrs.br/~pinho/shared_memory_library.htm
[4] http://www.cs.cf.ac.uk/Dave/C/node27.html
[5] http://www.opengroup.org/onlinepubs/007908799/xsh/sysshm.htm
[6] http://www.opengroup.org/onlinepubs/007908799/xsh/shm_open.html
[7] http://docs.sun.com/app/docs/doc/817-0691/6mgfmmdt3?a=view
[8] http://msdn2.microsoft.com/en-us/library/aa374778.aspx
[9] http://www.php.net/manual/en/ref.shmop.php
[10] http://allocator.sourceforge.net/rtlinux2003.pdf
[11] http://citeseer.csail.mit.edu/cs?q=shared+memory+library
[12] http://www.boost.org/doc/libs/1_36_0/doc/html/interprocess.html

Smart variables

SmartVariables is a term introduced in 1998 referring to a design pattern that merges networking and distributed object technology with the goal of reducing complexity by transparently sharing information at the working program variable level.[1] SmartVariables style programming interfaces emulate simple "network shared memory." The design emphasis is API simplicity for systems needing to exchange information. Sharing and update behaviors do not need to be explicitly programmed. SmartVariables attach an email-like "name" to each container or list; when the variable changes value, it automatically propagates change events across the network into other running processes working with that data.[2] The concept has some similarities to that of stored procedures and triggers in database systems, where a change to one item can set off other changes in the database. SmartVariables propagate themselves into process-level code automatically. Applications do not poll for content changes, as events get processed asynchronously; working program variables simply receive new content. However, "callbacks" can be attached that execute when a "named" object's content changes.

Programming Basics

This C++ example is from the GPL open-source SmartVariables implementation at SmartVariables.com. Imagine an environment with three networked computers named Alice, Bob and Charlie. To begin, our program running on "Alice" will function to continuously print out the contents of a remote container object named "greeting@Charlie". Here is the code for Alice:

    Var greeting;
    greeting.Name( "greeting@Charlie" );   // attach to and subscribe to the remote object
    while (1) {
        cout << "greeting=" << greeting << endl;
        // note that 'greeting' can change values here
    }

Note that Alice's display code is in a tight loop, and there is no code that explicitly connects to machine "Charlie" to retrieve the "greeting" object or any changes made to it. Modifications to the "greeting@Charlie" object become automatically reflected by Alice's program output.

Next, we run another program on machine "Bob" that simply changes the value of the remote "greeting@Charlie" object to be the string "Hello, World!". Here is the code for Bob:

    Var greeting = "Hello, World!";
    greeting.Name( "greeting@Charlie" );   // modify all copies, everywhere

Now, when the above program on machine Bob gets executed, it transparently connects to Charlie and modifies the "greeting" object to have its new value: "Hello, World!". Because SmartVariables containers "know" who have copies of their data, the environment transparently propagates the change to Alice. This means that the program still looping on Alice will now begin printing its new value of "Hello, World!". The code on Alice appears to be a "tight loop", with no opportunity for the object to be modified; however, it does change.

References
[1] Foote, Brian; Joseph Yoder (1998). "Metadata and Active Object-Models" (http://jerry.cs.uiuc.edu/~plop/plop98/final_submissions). Pattern Languages of Programs Conference. — Introduced the concept of "smart variables".
[2] Hounshell, Lee (March 2006). Simplifying Web Infrastructure with SmartVariables (http://www.smartvariables.com/doc/DistributedProgramming.pdf) (pdf). SmartVariables.com. — Refined and extended the concept, using "smart variables" to simplify Grid computing and implement web services, directory services, and distributed neural networks.

External links
• Open source commercial implementation (beta) in C++ (http://smartvariables.com)

Stub (distributed computing)

A stub in distributed computing is a piece of code used for converting parameters passed during a Remote Procedure Call (RPC). The main idea of an RPC is to allow a local computer (client) to remotely call procedures on a remote computer (server). The client and server use different address spaces, so conversion of parameters used in a function call has to be performed; otherwise the values of those parameters could not be used, because pointers to the computer's memory point to different data on each machine. The client and server may also use different data representations, even for simple parameters (e.g., big-endian versus little-endian for integers). Stubs are used to perform the conversion of the parameters, so a remote function call looks like a local function call for the remote computer.

A client stub is responsible for conversion of parameters used in a function call and deconversion of results passed from the server after execution of the function. A server stub is responsible for deconversion of parameters passed by the client and conversion of the results after the execution of the function.

Stubs can be generated in one of two ways:
1. Manually: In this method, the RPC implementer provides a set of translation functions from which a user can construct his or her own stubs. This method is simple to implement and can handle very complex parameter types.
2. Automatically: This is the more commonly used method for stub generation. It uses an interface description language (IDL) for defining the interface between client and server. For example, an interface definition has information to indicate whether each argument is input, output or both — only input arguments need to be copied from client to server and only output arguments need to be copied from server to client.

Stub libraries must be installed on the client and server side.
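As a rough illustration of what a manually written pair of stubs does, the hypothetical C++ sketch below marshals the arguments of a remote add(int, int) call into network byte order, hands them to a server stub, and unmarshals the reply. The function names, the four-byte wire format, and the fact that the "transport" is just a local call are all assumptions made for the example; a real RPC system or IDL compiler would generate equivalent code and send the bytes over a network.

    #include <arpa/inet.h>   // htonl/ntohl for a machine-independent byte order
    #include <cstdint>
    #include <cstring>
    #include <iostream>
    #include <vector>

    // The actual procedure, living on the "server".
    static int32_t add(int32_t a, int32_t b) { return a + b; }

    // Server stub: deconverts (unmarshals) the arguments, calls the procedure,
    // and converts the result back into the wire format.
    static std::vector<uint8_t> server_stub(const std::vector<uint8_t>& req) {
        uint32_t na, nb;
        std::memcpy(&na, req.data(), 4);
        std::memcpy(&nb, req.data() + 4, 4);
        int32_t result = add(static_cast<int32_t>(ntohl(na)),
                             static_cast<int32_t>(ntohl(nb)));
        std::vector<uint8_t> reply(4);
        uint32_t nres = htonl(static_cast<uint32_t>(result));
        std::memcpy(reply.data(), &nres, 4);
        return reply;
    }

    // Client stub: marshals the inputs, "sends" the request, unmarshals the reply.
    // The caller just sees an ordinary local function call.
    int32_t remote_add(int32_t a, int32_t b) {
        std::vector<uint8_t> req(8);
        uint32_t na = htonl(static_cast<uint32_t>(a));
        uint32_t nb = htonl(static_cast<uint32_t>(b));
        std::memcpy(req.data(), &na, 4);
        std::memcpy(req.data() + 4, &nb, 4);
        std::vector<uint8_t> reply = server_stub(req);   // real code would cross the network here
        uint32_t nres;
        std::memcpy(&nres, reply.data(), 4);
        return static_cast<int32_t>(ntohl(nres));
    }

    int main() { std::cout << remote_add(2, 40) << std::endl; }   // prints 42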

Supercomputer

A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation.

Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by traditional companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience. Currently, Japan's K computer, built by Fujitsu in Kobe, Japan, is the fastest in the world.[1] It is three times faster than the previous holder of that title, the Tianhe-1A supercomputer located in China.

The term supercomputer itself is rather fluid, and the speed of today's supercomputers tends to become typical of tomorrow's ordinary computers. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard. Typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massive parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, and FPGAs. Most modern supercomputers are now highly-tuned computer clusters using commodity processors combined with custom interconnects.

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion).

Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.

History

The history of supercomputing goes back to the 1960s when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[2] The CDC 6600, released in 1964, is generally considered the first supercomputer.[3] [4] Cray left CDC in 1972 to form his own company.[5] Four years after leaving CDC, Cray delivered the 80 MHz Cray 1 in 1976, and it became one of the most successful supercomputers in history.[6] [7] The Cray-2, released in 1985, was an 8 processor liquid cooled computer; Fluorinert was pumped through it as it operated. It performed at 1.9 gigaflops and was the world's fastest until 1990.[8]

(Image: A Cray-1 supercomputer preserved at the Deutsches Museum.)

While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaflops per processor.[9] [10] The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three dimensional crossbar network.[11] [12] [13] The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high speed two dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.[14] For more historical data see History of supercomputing.

Current research using supercomputers

The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.[15] Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[16] In 2011 the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.[17]

This is a recent list of the computers which appeared at the top of the Top500 list,[18] with the "Peak speed" given as the "Rmax" rating:

Year | Supercomputer | Peak speed (Rmax) | Location
2008 | IBM Roadrunner | 1.026 PFLOPS | DoE-Los Alamos National Laboratory, New Mexico, USA
2009 | Cray Jaguar | 1.759 PFLOPS | DoE-Oak Ridge National Laboratory, Tennessee, USA
2010 | Tianhe-IA | 2.566 PFLOPS | National Supercomputing Center, Tianjin, China
2011 | Fujitsu K computer | 8.162 PFLOPS | RIKEN, Kobe, Japan

Hardware and software design

(Image: A Blue Gene/L cabinet showing the stacked blades, each holding many processors.)

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing. As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization and using hardware to address the remaining bottlenecks.

Energy consumption and heat management

A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 Megawatts of electricity.[19] The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/KWh is $400 an hour or about $3.5 million per year.

Heat management is a major issue in complex electronic devices, and affects powerful computer systems in various ways.[20] The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies.[21] [22] [23] The packing of thousands of processors together inevitably generates significant amounts of heat density that need to be dealt with. The Cray 2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure.[24] However, the submerged liquid cooling approach was not practical for the multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.[25] In the Blue Gene system IBM deliberately used low power processors to deal with heat density.[26] On the other hand, the IBM Power 775, released in 2011, has closely packed elements that require water cooling.[27]

The energy efficiency of computer systems is generally measured in terms of "FLOPS per Watt". In 2008 IBM's Roadrunner operated at 376 MFLOPS/Watt.[28] [29] In November 2010, the Blue Gene/Q reached 1684 MFLOPS/Watt.[30] [31] In June 2011 the top 2 spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2097 MFLOPS/W), with the DEGIMA cluster in Nagasaki placing third with 1375 MFLOPS/W.[32] The supercomputing awards for green computing reflect this issue.

Supercomputer challenges, technologies

Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.

Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.

(Image: An IBM HS20 blade server.)

Technologies developed for supercomputers include:
• Vector processing
• Liquid cooling
• Non-Uniform Memory Access (NUMA)
• Striped disks (the first instance of what was later called RAID)
• Parallel filesystems

Processing techniques

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing instructions for general-purpose computers. Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers.

In particular, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU). The applications to which this power can be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, some graphics cards have the computing power of several TeraFLOPS. The current Top500 list (from May 2010) has 3 supercomputers based on GPGPUs. Indeed, the number 3 supercomputer,[33] Nebulae, built by Dawning in China, is based on GPGPUs.
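As a small, concrete illustration of the SIMD idea described above, the sketch below adds four pairs of floats with a single vector instruction. It assumes an x86 CPU with SSE support; compilers expose these operations through the immintrin.h intrinsics header.

    #include <immintrin.h>   // SSE intrinsics (assumes an x86 CPU with SSE support)
    #include <iostream>

    int main() {
        alignas(16) float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        alignas(16) float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
        alignas(16) float c[4];

        __m128 va = _mm_load_ps(a);       // load four floats into one 128-bit register
        __m128 vb = _mm_load_ps(b);
        __m128 vc = _mm_add_ps(va, vb);   // one instruction adds all four lanes at once
        _mm_store_ps(c, vc);

        for (float x : c) std::cout << x << " ";   // prints: 11 22 33 44
        std::cout << std::endl;
        return 0;
    }

The same single-instruction-on-multiple-data pattern, scaled up to wider registers and many cores, is what GPGPU programming models exploit.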

Operating systems

Supercomputers today most often use variants of the Linux operating system; more than 90% of today's supercomputers run some variant of Linux.[34]

Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. In a similar manner, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of computer systems such as Cray's Unicos, or Linux.

Software tools

Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source-based software solutions such as Beowulf, WareWulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community that often creates disruptive technology. An easy programming language for supercomputers remains an open research topic in computer science.

Programming

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA and OpenCL.
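To illustrate the message-passing style mentioned above, here is a minimal MPI sketch in C++ in which every process contributes its rank to a global sum. It assumes an installed MPI implementation; such a program is typically compiled with a wrapper like mpicxx and launched across nodes with mpirun.

    #include <mpi.h>
    #include <iostream>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);                    // start the MPI runtime

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);      // this process's id
        MPI_Comm_size(MPI_COMM_WORLD, &size);      // total number of processes

        int local = rank, total = 0;
        // Combine the per-process values; only rank 0 receives the result.
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::cout << "sum of ranks over " << size
                      << " processes = " << total << std::endl;

        MPI_Finalize();
        return 0;
    }

Each process runs the same program on its own node with its own memory; all sharing of data happens through explicit library calls such as MPI_Reduce, which is exactly the optimization surface the text describes.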

Modern supercomputer architecture

(Image: The CPU Architecture Share of Top500 Rankings between 1993 and 2009.)

Supercomputers today often have a similar top-level architecture consisting of a cluster of MIMD multiprocessors, each processor of which is SIMD, and with each multiprocessor controlling multiple co-processors. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, the number of simultaneous instructions per SIMD processor, and the type and number of co-processors. The ratio of coprocessors to general-purpose processors varies dramatically. Within this hierarchy we have:
• A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an Operating System (OS).
• A multiprocessing computer is a computer, operating under a single instance of an OS and using more than one CPU core. The cores share tasks using Symmetric multiprocessing (SMP) and Non-Uniform Memory Access (NUMA). The cores may all be in from one to thousands of multicore processor devices.
• A SIMD core executes the same instruction on more than one set of data at the same time. The core may be a general purpose commodity core or a special-purpose vector processor. It may be in a high-performance processor or a low power processor. As of 2007, each core executes several SIMD instructions per nanosecond.
• A co-processor is incapable of executing "standard" code, but with specialized programming can exceed the performance of the multiprocessor by several orders of magnitude for certain applications. Co-processors are often GPGPUs.
The benchmark used for measuring TOP500 performance disregards the contribution of co-processors. Furthermore, application-level software is indifferent to the number of CPU cores and the type and number of co-processors.

As of October 2010 the fastest supercomputer in the world is the Tianhe-1A system at National University of Defense Technology with more than 21000 processors; it boasts a speed of 2.507 petaflops, over 30% faster than the world's next fastest computer, the Cray XT5 "Jaguar".

(Image: IBM Roadrunner, LANL.)

In February 2009, IBM also announced work on "Sequoia," which appears to be a 20 petaflops supercomputer. This will be equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It will be housed in 96 refrigerators spanning roughly 3000 square feet (280 m2).[35] The Sequoia will be powered by 1.6 million cores (specific 45-nanometer chips in development) and 1.6 petabytes of memory. It is slated for deployment in late 2011.[36]

Moore's Law and economies of scale are the dominant factors in supercomputer design. The design concepts that allowed past supercomputers to out-perform desktop machines of the time tended to be gradually incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production.
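Within a single multiprocessing node of the hierarchy above, shared-memory parallelism is commonly expressed with OpenMP (mentioned under Programming earlier). The sketch below, which assumes a compiler with OpenMP support (e.g. the -fopenmp flag), divides a loop among the cores of one node.

    #include <omp.h>
    #include <iostream>
    #include <vector>

    int main() {
        const int n = 1000000;
        std::vector<double> a(n, 1.0), b(n, 2.0);
        double dot = 0.0;

        // Iterations are divided among the node's cores; the reduction clause
        // safely combines the per-thread partial sums into one result.
        #pragma omp parallel for reduction(+:dot)
        for (int i = 0; i < n; ++i)
            dot += a[i] * b[i];

        std::cout << "dot = " << dot << " using up to "
                  << omp_get_max_threads() << " threads" << std::endl;
        return 0;
    }

In a full supercomputer application, code like this would typically run inside each MPI process, combining message passing across nodes with shared-memory threading within a node.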

Special-purpose supercomputers

A special-purpose supercomputer is a high-performance computing device with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer, by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular special set of problems.

Examples of special-purpose supercomputers:
• Belle,[37] Deep Blue,[38] and Hydra,[39] for playing chess
• Reconfigurable computing machines or parts of machines
• GRAPE,[40] for astrophysics and molecular dynamics
• Deep Crack,[41] for breaking the DES cipher
• MDGRAPE-3,[42] for protein structure computation
• D. E. Shaw Research Anton,[43] for simulating molecular dynamics

The fastest supercomputers today

Measuring supercomputer speed

In general, the speed of a supercomputer is measured in "FLOPS" (FLoating Point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). "Petascale" supercomputers can process one quadrillion (10^15) (1000 trillion) FLOPS. An exaflop is one quintillion (10^18) FLOPS (one million teraflops); exascale is computing performance in the exaflops range.

This measurement is based on a particular benchmark, which does LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.

In addition, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, in particular, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design. A current model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion dollar Cray C90 supercomputer used in the early 1990s; most workloads requiring such a supercomputer in the 1990s can be done on workstations costing less than 4,000 US dollars as of 2010. Supercomputing is taking a step of increasing density, allowing for desktop supercomputers to become available, offering the computing power that in 1998 required a large room in less than a desktop footprint.

14 countries account for the vast majority of the world's 500 fastest supercomputers, with over half being located in the United States.
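Since a FLOPS figure is simply floating-point operations divided by elapsed time, the hedged sketch below times a naive matrix multiplication (roughly 2·n³ operations for an n×n problem) and reports the achieved rate. This only illustrates the unit of measurement; it is not the LINPACK benchmark itself, and the problem size n = 512 is an arbitrary choice.

    #include <chrono>
    #include <iostream>
    #include <vector>

    int main() {
        const int n = 512;                                   // small problem, for illustration only
        std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);

        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < n; ++i)                          // naive O(n^3) multiply
            for (int k = 0; k < n; ++k)
                for (int j = 0; j < n; ++j)
                    C[i * n + j] += A[i * n + k] * B[k * n + j];
        auto t1 = std::chrono::steady_clock::now();

        double seconds = std::chrono::duration<double>(t1 - t0).count();
        double flops = 2.0 * n * n * n;                      // one multiply + one add per inner step
        std::cout << (flops / seconds) / 1e9
                  << " GFLOPS (naive, single core)" << std::endl;
        return 0;
    }

A single desktop core measured this way reaches a few GFLOPS at best, which puts the petaflops figures quoted for the machines in this section into perspective.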

The TOP500 list

Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.

Current fastest supercomputer system

The K computer is ranked on the TOP500 list as the fastest supercomputer at 8.16 petaFLOPS.[44] It consists of 68,544 SPARC64 VIIIfx CPUs, using the Tofu interconnect. It does not use any GPUs or other accelerators, and is one of the most energy-efficient systems on the list.

Opportunistic Supercomputing

Opportunistic Supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamic simulations.

(Image: Example architecture of a grid computing system connecting many personal computers over the internet.)

Examples of Opportunistic Supercomputing Systems

The fastest grid computing system is the distributed computing project Folding@home. Folding@home reported 8.8 petaflops of processing power as of May 2011. Of this, 7.1 petaflops are contributed by clients running on various GPUs, 1.8 petaflops come from PlayStation 3 systems, and the rest from various computer systems.[45]

The BOINC platform hosts a number of distributed computing projects. As of May 2011, BOINC recorded a processing power of over 5.5 petaflops through over 480,000 active computers on the network.[46] The most active project (measured by computational power), MilkyWay@home, reports processing power of over 700 teraflops through over 33,000 active computers.[47]

As of May 2011, GIMPS's distributed Mersenne Prime search achieves about 60 teraflops through over 25,000 registered computers.[48] The Internet PrimeNet Server [49] has supported GIMPS's grid computing approach, one of the earliest and most successful grid computing projects, since 1997.

Quasi-opportunistic Supercomputing

Quasi-opportunistic Supercomputing is a form of distributed computing whereby the "super virtual computer" of a large number of networked, geographically disperse computers performs huge processing power demanding computing tasks.[50] Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and the use of intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault tolerant message passing libraries and data pre-conditioning.

Examples of Quasi-opportunistic Supercomputing Systems

Other notable computer clusters are the flash mob cluster, the Qoscos Grid and the Beowulf cluster. The flash mob cluster allows the use of any computer in the network, while the Beowulf cluster still requires uniform architecture.

(Image: The PlayStation 3 Gravity Grid.)

The PlayStation 3 Gravity Grid[51] uses a network of 16 machines and exploits the Cell processor for the intended application, which is performing astrophysical simulations of large supermassive black holes capturing smaller compact objects. The Cell processor has a main CPU and 6 floating-point vector processors, giving the machine a net of 16 general-purpose machines and 96 vector processors. This cluster was built in 2007 by Dr. Gaurav Khanna, a professor in the Physics Department of the University of Massachusetts Dartmouth, with support from Sony Computer Entertainment, and is the first PS3 cluster that generated numerical results that were published in scientific research literature.

Also a "quasi-supercomputer" is Google's search engine system, with estimated total processing power of between 126 and 316 teraflops as of April 2004.[52] In June 2006 the New York Times estimated that the Googleplex and its server farms contain 450,000 servers.[53] According to 2008 estimates, the processing power of Google's cluster might reach from 20 to 100 petaflops.[54]

Research and development

IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip". Other PFLOPS projects include one by Narendra Karmarkar in India,[55] a C-DAC effort targeted for 2010,[56] and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be completed by 2011).[57]

In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 petaflops computer, Pleiades, in 2009, scaling up to 10 PFLOPs by 2012.[58] Meanwhile, IBM is constructing a 20 PFLOPs supercomputer, named Sequoia, at Lawrence Livermore National Laboratory, based on the Blue Gene architecture and scheduled to go online in 2011.

(Image: Fastest supercomputers: log speed vs. time.)

Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10^18, one quintillion FLOPS) in 2019.[59] Using the Intel MIC (many integrated cores) architecture, which is Intel's response to GPU systems, SGI plans to achieve a 500 times increase in performance by 2018 to achieve an exaflop.[60] Samples of MIC chips with 32 cores, which combine vector processing units with a standard CPU, have become available.[60] Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21, one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two week time span accurately.[61] Such systems might be built around 2030.[62]

Applications of supercomputers

Accessed 20 June 2011 [2] Hardware software co-design of a multimedia SOC platform by Sao-Jie Chen. Volume 1 Issue 7. com/ books?id=J46GinHakmkC& pg=PA172& dq=history+ of+ supercomputer+ cdc+ 6600& hl=en& ei=PeAcTv_eI8uf-wb3y9jvCA& sa=X& oi=book_result& ct=result& resnum=7& ved=0CEYQ6AEwBjgK#v=onepage& q=history of supercomputer cdc 6600& f=false) [4] The American Midwest: an interpretive encyclopedia by Richard Sisson. 2011 (http:/ / www. au/ News/ 65619. itnews. Yasuda. Mohammad Alamgir Hossain 2003 ISBN 9781852335991 pages 201-202 [9] TOP500 Annual Report 1994. O. iTnews Australia. washingtonpost.J. [17] Washington Post August 8. Y. Y. Yu-Hen Hu 2009 ISBN pages 70-72 [3] History of computing in education by John Impagliazzo. org/ sublist). Gurindar Sohi 1999 ISBN 9781558605398 page 41-48 [7] Milestones in computer science and information technology by Edwin D. com/ books?id=n3Xn7jMx1RYC& pg=PA1489& dq=history+ of+ supercomputer+ cdc+ 6600& hl=en& ei=nt8cTo-RFc2r-gaDiPHLCA& sa=X& oi=book_result& ct=result& resnum=6& ved=0CEkQ6AEwBQ#v=onepage& q=history of supercomputer cdc 6600& f=false) [5] Wisconsin Biographical Dictionary by Caryn Hannan 2008 ISBN 1878592637 pages 83-84 (http:/ / books. Hirose and M.1997. nationalgeographic. Retrieved 2011-07-08. Overview of recent supercomputers. Proceedings of 11th International Parallel Processing Symposium. Volume 60. . 19 June 2011.11/91. 2011). Balandin in IEEE Spectrum.1109/HPC. Top500. van der Steen. html). Lee 2004 ISBN 1402081359 page 172 (http:/ / books. Guang-Huei Lin. com/ business/ technology/ petaflop-computer-flap-ibm-unplugs-itself-from-supercomputer-project-at-univ-of-illinois/ 2011/ 08/ 08/ gIQAuiFG3I_story. green500. [12] Y. Norman Paul Jouppi. do?easyirid=A0D622CE9F579F09& version=live& prid=678988& releasejsp=release_157). aerodynamic research (Cray-1). Tokhi. lanl. 10-01-2003 doi 10. O. Fujii. Akashi. ieee. IEEE Computer Society. Kashiyama. January 1998. google. October 2009 (http:/ / spectrum. "Directory page for Top500 lists. Proceedings of HPC-Asia '97. com/ 2011/ 06/ 20/ technology/ 20computer. January 1997. [13] A. org). Koga. (http:/ / www.1145/957717. google. Pages 246-254. [19] Nvidia (29 October 2010). html).Proceedings Supplements. Reilly 2003 ISBN 1573565210 page 65 [8] Parallel computing for real-time signal processing and control by M. . April 1997. 2010s [68] Molecular Dynamics Simulation (Tianhe-1A) Notes [1] (http:/ / www. Probabilistic analysis. 2010-10-28. M.592130. Inagami. [22] "Green 500 list ranks supercomputers" (http:/ / www. Fukuda (1997). [23] Wu-chun Feng. pdf) [24] Parallel computing for real-time signal processing and control by M. Result for each list since June 1993" (http:/ / www. [16] "Faster Supercomputers Aiding Weather Forecasts" (http:/ / news. com/ news/ 2005/ 08/ 0829_050829_supercomputer. [66] Brute force code breaking (EFF DES cracker). Press release. N. Architecture and performance of the Hitachi SR2201 massively parallel processor system. Christian K. gov/ pubs/ 031001-acmq. . . com/ easyir/ customrel. Mohammad Alamgir Hossain 2003 ISBN 9781852335991 pages 201-202 . Retrieved 2010-10-31. "Numerical Wind Tunnel (NWT) and CFD Research at National Aerospace Laboratory". H. M. [64] [65] radiation shielding modeling (CDC Cyber). The CP-PACS project. nytimes. Issues 1-2. Reed 2003 ISBN 9780262681421 page 182 [15] Kaku. T.957772 (http:/ / sss. Iwasaki. Tokhi. nvidia. Stichting Nationale Computer Faciliteiten. Sumimoto. O. 
"NVIDIA Tesla GPUs Power World's Fastest Supercomputer" (http:/ / pressroom. Publication of the NCF. com. H. top500. Nuclear Physics B . doi:10. 65. Michio.green-500-list-ranks-supercomputers. New York Times. html) [10] N. John A. News. . html) [18] Intel brochure . [11] H. [20] Better Computing Through CPU Cooling by Alexander A. [67] 3D nuclear test simulations as a substitute for banned atmospheric nuclear testing (ASCI Q). [14] Scalable input/output: achieving system balance by Daniel A. org/ semiconductors/ materials/ better-computing-through-cpu-cooling/ 0) [21] "The Green 500" (http:/ / www. Pao-Ann Hsiung. the Netherlands.org.com. aspx). Ishihara.Supercomputer 142 Decade 1970s 1980s 1990s Uses and computer involved [63] Weather forecasting.nationalgeographic. Pages 233-241. Zacher 2006 ISBN 0253348862 page 1489 (http:/ / books. 2003 Making a Case for Efficient Supercomputing in ACM Queue Magazine. Physics of the Future (New York: Doubleday. netlib. google. org/ benchmark/ top500/ reports/ report94/ main. Wada. com/ books?id=V08bjkJeXkAC& pg=PA83& dq=cdc+ 6600+ 7600+ cray& hl=en& ei=7LMZTozDIInX8gP0xIkM& sa=X& oi=book_result& ct=result& resnum=1& ved=0CCgQ6AEwAA#v=onepage& q=cdc 6600 7600 cray& f=false) [6] Readings in computer architecture by Mark Donald Hill.

Rajeshwari Adappa (30 October 2006). . edu/ ps3. co. hot topic paper (2007)" (http:/ / citeseer. . May 20. php?pr=milkyway). . php?pr=bo). 2009-02-03. Hensell. html). [27] The Register: IBM 'Blue Waters' super node washes ashore in August (http:/ / www. networkworld. . The Economic Times. Cracking DES . Taiji. Retrieved 2011-05-28. "Nebulae #2 Supercomputer built with NVIDIA Tesla GPGPUs" (http:/ / www. "Hiding in Plain Sight. [56] C-DAC's Param programme sets to touch 10 teraflops by late 2007 and a petaflops by 2010. of 14th International Conference on Field-Programmable Logic and Applications (FPL). php/ 3913536/ Top500-Supercomputing-List-Reveals-Computing-Trends. [47] BOINCstats: MilkyWay@home (http:/ / boincstats. Antwerp – Belgium. not those on the date last accessed. cms).uk. com/ communications/ 2008/ 05/ google-surpasses-supercomputer-community-unnoticed.ibm. com/ 2008/ TECH/ 06/ 09/ fastest.680 Mflops/watt. com/ content/ hp9la9pwq0a1cmrp/ ) Proc. Saul (June 14. stanford. ist. BOINC. ibm. GIMPS. Lorenz. htm) 143 . Retrieved 2010-10-31. Schuster. Stanford University.B.. Retrieved June 6. Associate Professor. University of Massachusetts Dartmouth. IEEE. BOINC. . boincstats. The Chess Monster Hydra. py?qtype=osstats). and K. (http:/ / www. co.curpg-2. springerlink. uk/ 2011/ 07/ 15/ power_775_super_pricing/ ) [28] "Government unveils world's fastest computer" (http:/ / web. J. U. com/ topic/ processors/ IBM_Roadrunner_Takes_the_Gold_in_the_Petaflop_Race. Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. Completion of a one-petaflops computer system for simulation of molecular dynamics (http:/ / www.R. The Register. Valentin. serverwatch. Benny. Retrieved 2010-11-25. [48] "Internet PrimeNet Server Distributed Computing Technology for the Great Internet Mersenne Prime Search" (http:/ / www. 927 – 932 [40] J Makino and M. Deshawresearch. .E.Thompson. 1982. . ap/ index. phy. co. org/ primenet/ [50] Kravtsov. mersenne. cnn. org/ overtime/ list/ 32/ os). . David. New York Times. Assaf. htm). John. html?page=1) By Tom Jowitt . com/ 2008/ TECH/ 06/ 09/ fastest. Feng-hsiung (2002). Carmeli. Archived from the original (http:/ / www. Retrieved 2008-03-16. com/ ).H. CNN. Gaurav Khanna. org/ lists/ 2011/ 06/ top/ list. Yoshpa. Pergamon Press. (http:/ / www. mersenne. hpcwire. IEEE International Symposium on High Performance Distributed Computing. [51] "PS3 Gravity Grid" (http:/ / gravity.com. . html). flonnet.Secrets of Encryption Research. indiatimes. edu/ viewdoc/ summary?doi=10. html). Timothy (2010-05-31).Supercomputer [25] Computational science -. edu/ cgi-bin/ main. Retrieved 2008-03-16. Note these link will give current statistics. Unnoticed? (http:/ / blogs.co. 1998. org/ cracking-des/ cracking-des. datacenterknowledge. com/ hreviews/ article. Werner. Top500. setting a record in power efficiency with a value of 1. [38] Hsu. Retrieved 4 August 2011. . pp. umassd. html) on 2008-06-10. [41] Electronic Frontier Foundation (1998).M. [35] IBM to build new monster supercomputer (http:/ / www. "IBM. tnl. top500." [31] "IBM Research A Clear Winner in Green 500" (http:/ / www. [54] Google Surpasses Supercomputer Community. Retrieved 2010-10-31. BlueGene/Q system .United States" (http:/ / www-03. 2008. 8993). com/ stories/ 20070518003711400. com/ stats/ project_graph. . Ariel. html) [43] "D.. org/ primenet). "Tatas get Karmakar to make super comp" (http:/ / economictimes. [55] Athley. 2006). com/ stats/ project_graph. . Gouri Agtey. 
org/ lists/ 2011/ 06/ press-release). Wiley. 1. [32] Green 500 list (http:/ / www. 2010-11-22. ISBN 0-691-09065-3 [39] C. .Clarke). more than twice that of the next best system. com/ 2006/ 06/ 14/ technology/ 14search.org. ap/ index. Retrieved 2011-05-28 [46] BOINCstats: BOINC Combined (http:/ / www. Google Seeks More Power" (http:/ / www. College of Engineering. Retrieved 2010-10-31. jp/ engn/ r-world/ info/ release/ press/ 2006/ 060619/ index. php) [33] Prickett.ICCS 2005: 5th international conference edited by Vaidy S. not those on the date last accessed. Retrieved 2010-10-31. [37] Condon. LNCS 3203. wss). html). . theregister. uk/ 2010/ 05/ 31/ top_500_supers_jun2010/ ). html). 2004. 135. com/ articleshow/ msid-225517. [52] How many Google machines (http:/ / www. In Advances in Computer Chess 3 (ed. Top500. org/ web/ 20080610155646/ http:/ / www. Orda. uk/ 2010/ 11/ 22/ ibm_blue_gene_q_super/ ). Theregister. [34] "Top500 OS chart" (http:/ / www. April 30. . top500. Princeton University Press. . cnn. com/ news/ 2009/ 020409-ibm-to-build-new-monster. archive. nytimes. Oreilly & Associates Inc. 03. 27.com. theregister. computer. Retrieved 2011-05-28. Scientific Simulations with Special Purpose Computers: The GRAPE Systems. theregister. htm). psu. deshawresearch. 2004 [53] Markoff. "Quasi-opportunistic supercomputing in grids. riken. . 2011 [49] http:/ / www.. . "performing 376 million calculations for every watt of electricity used. [45] Folding@home: OS Statistics (http:/ / fah-web. Shaw Research Anton" (http:/ / www. TechWorld . Donninger. "Belle Chess Hardware". net/ blog/ 2004/ 04/ 30/ how-many-google-machines/ ). ISBN 1-56592-520-3. Wiretap Politics & Chip Design (http:/ / cryptome. Sunderam 2005 ISBN 3540260439 pages 60-67 [26] "IBM uncloaks 20 petaflops BlueGene/Q super" (http:/ / www. Note these link will give current statistics. [44] "Japan Reclaims Top Ranking on Latest TOP500 List of World’s Supercomputers" (http:/ / www. com/ archives/ 2010/ 11/ 18/ ibm-system-clear-winner-in-green-500/ ). 02/04/2009 [36] "Petaflop Sequoia Supercomputer . com/ press/ us/ en/ pressrelease/ 26599. 1. green500. Dubitzky. [42] RIKEN press release. computer.org. [30] "Top500 Supercomputing List Reveals Computing Trends" (http:/ / www. nmscommunications." [29] "IBM Roadrunner Takes the Gold in the Petaflop Race" (http:/ / www. .

1977. Retrieved 2008-03-16. . "IBM breaks petaflop barrier" (http:/ / www. Acronym. computerworld. acronym.org. h-online. cfm?id=1062325). 2011. Heise online.be. Inc. 2007. 2008-04-04. com/ science?_ob=ArticleURL& _udi=B6VC5-3SWXX64-8& _user=10& _rdoc=1& _fmt=& _orig=search& _sort=d& view=c& _acct=C000050221& _version=1& _urlVersion=0& _userid=10& md5=0a76921c6623fa556491f2dccdf4377e) (Subscription required). InfoWorld.kuleuven. html). Intel plan to speed supercomputers 500 times by 2018. nvidia. "Reversible logic for supercomputing" (http:/ / portal. Blogs. acm. computerhistory. com/ s/ article/ 9217763/ SGI_Intel_plan_to_speed_supercomputers_500_times_by_2018?taxonomyId=67) [61] DeBenedictis. 391–402.nvidia. esat. [59] Thibodeau. .S. html). [65] "Abstract for SAMSY . de/ english/ newsticker/ news/ 107683). pdf) (PDF). June 20. kuleuven. 144 External links • Supercomputing (http://www. [63] "The Cray-1 Computer System" (http:/ / archive. Erik P.Supercomputer [57] "National Science Board Approves Funds for Petascale Computing Systems" (http:/ / www. org/ citation. Proceedings of the 2nd conference on Computing frontiers. August 10. Retrieved 2011-07-08. Department of Mathematics and School of Biomedical Engineering. . Rajani R.com. ISBN 1595930191. html). org/ resources/ text/ Cray/ Cray.org/Computers/Supercomputing/) at the Open Directory Project . Cray Research. . fr/ abs/ html/ iaea0837. . [68] "China’s Investment in GPU Supercomputing Begins to Pay Off Big Time!" (http:/ / blogs. Retrieved May 25. [64] Joshi. com/ 2011/ 06/ chinas-investment-in-gpu-supercomputing-begins-to-pay-off-big-time/ ). cosic. com/ article/ 08/ 06/ 10/ IBM_breaks_petaflop_barrier_1. . html). Patrick (2008-06-10). Retrieved 2011-07-08. uk/ dd/ dd49/ 49doe.dmoz. [62] "IDF: Intel says Moore's Law holds until 2029" (http:/ / www.uk. pp. ComputerWorld. [67] "Disarmament Diplomacy: . heise. India. . [60] SGI. gov/ news/ news_summ. jsp?cntn_id=109850). Cray1. org. "A new heuristic algorithm for probabilistic optimization" (http:/ / www.esat. Bombay. . . 2008-05-09. . Heise Online. . (2005). U. [66] "EFF DES Cracker Source Code" (https:/ / www. 2000-08-22. sciencedirect. Retrieved May 25. National Science Foundation. nsf. com/ newsticker/ news/ item/ IDF-Intel-says-Moore-s-Law-holds-until-2029-734779.Shielding Analysis Modular System"" (http:/ / www. Retrieved 2008-07-01. nea. Indian Institute of Technology Powai. Cosic.DOE Supercomputing & Test Simulation Programme" (http:/ / www. 102638650. Retrieved 2011-07-08. 2011. [58] "NASA collaborates with Intel and SGI on forthcoming petaflops super computers" (http:/ / www. be/ des/ ). (9 June 1998). infoworld. 2011 (http:/ / www.

Terrastore

Original author(s): Sergio Bossa
Developer(s): Sergio Bossa, Sven Johansson, Giuseppe Santoro, Mats Henricson, Amir Moulavi
Initial release: 2009
Stable release: 0.8.0 / December 13, 2010
Development status: Active
Written in: Java
Operating system: Cross-platform
Available in: English
Type: Document-oriented database
License: Apache License 2.0
Website: [1]

Terrastore is a distributed, scalable and consistent document store supporting single-cluster and multi-cluster deployments. It provides advanced scalability and elasticity features without loosening consistency at the data level. Terrastore provides ubiquity by using the universally supported HTTP protocol. Data is partitioned and distributed among the nodes in the cluster(s), with automatic and transparent re-balancing when nodes join and leave. Moreover, Terrastore distributes the computational load for operations like queries and updates to the nodes that actually hold the data. In this way Terrastore provides scalability at both the data and computational layers.

Data Model

The data model is pure JSON,[3] stored in documents and buckets, which are analogous to table rows and tables, respectively, in relational databases.

Building Blocks and Architecture

A Terrastore system consists of an ensemble of clusters; in each cluster there is one Terrastore master and several Terrastore servers. Terrastore employs the Terracotta clustering software[2] as an intra-cluster group membership service and as a distributed lock manager for locking single document access during write operations. The master is responsible for managing cluster membership: it notifies when servers join or leave, changing the group view. In addition to this membership management, the master is also responsible for durably storing the whole set of documents and for replicating the data to the server nodes. It does not partition the data itself; the partitioning strategy is decided by the server nodes, using either the default consistent hashing or a user-defined scheme. Data (documents and buckets) is partitioned according to the consistent hashing schema[4] and distributed across the different Terrastore servers. Replication is a pull strategy performed by the server nodes from the master node: each server requests its own partition from the master. All writes go through the master, but only the first read request goes through the master; later requests are read from server memory.
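Terrastore's default partitioning is consistent hashing, and a minimal sketch of that idea is shown below: a hash ring kept in a std::map, with std::hash standing in for the real hash function and the server names invented for illustration. A document key is owned by the first node clockwise from its hash, so adding or removing a node only remaps the keys in that node's arc rather than reshuffling everything.

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    class HashRing {
        std::map<std::size_t, std::string> ring_;   // position on the ring -> node name
        std::hash<std::string> hash_;
    public:
        void add_node(const std::string& node)    { ring_[hash_(node)] = node; }
        void remove_node(const std::string& node) { ring_.erase(hash_(node)); }

        // A key is owned by the first node at or after its hash, wrapping around.
        const std::string& node_for(const std::string& key) const {
            auto it = ring_.lower_bound(hash_(key));
            if (it == ring_.end()) it = ring_.begin();   // wrap to the start of the ring
            return it->second;
        }
    };

    int main() {
        HashRing ring;
        ring.add_node("server-1");
        ring.add_node("server-2");
        ring.add_node("server-3");
        std::cout << "doc:42 -> " << ring.node_for("doc:42") << std::endl;
        ring.remove_node("server-2");                    // only keys owned by server-2 move
        std::cout << "doc:42 -> " << ring.node_for("doc:42") << std::endl;
        return 0;
    }

Production implementations usually place many virtual positions per server on the ring to balance the load more evenly; this sketch uses one position per server to keep the idea visible.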

The role of the ensemble is to join multiple clusters and make them work together; it also gives the whole system its partition-tolerance behaviour and improves scalability by providing multiple active masters. In the case of a partition, data remains available locally, but it cannot be seen by clusters other than the one that owns it.

External links
• Project website [1]
• Introduction to Terrastore [5]
• Terrastore, a document database for developers [6]
• Terrastore news and articles on myNoSQL [7]

References
[1] http://code.google.com/p/terrastore/
[2] "Terracotta" (http://www.terracotta.org/)
[3] "JSON" (http://www.json.org/)
[4] Karger, David; Eric Lehman; Tom Leighton; Matthew Levine; Daniel Lewin; Rina Panigrahy. "Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web". ACM Symposium on Theory of Computing.
[5] http://www.slideshare.net/svjson/introduction-to-terrastore
[6] http://www.slideshare.net/sbtourist/terrastore-a-document-database-for-developers
[7] http://nosql.mypopescu.com/tagged/terrastore

Transparency (human-computer interaction)

Any change in a computing system, such as a new feature or new component, is transparent if the system after the change adheres to the previous external interface as much as possible while changing its internal behaviour. The purpose is to shield from change all systems (or human users) on the other end of the interface. The term is used particularly often with regard to an abstraction layer that is invisible either from its upper or lower neighbouring layer.

The term was also used, temporarily, around 1969 in IBM and Honeywell programming manuals to refer to a certain programming technique: application code was transparent when it was clear of low-level detail (such as device-specific management) and contained only the logic solving the main problem. This was achieved through encapsulation, putting the code into modules that hid internal details, making them invisible to the main application.

The term transparent is widely used in computing marketing as a substitute for the term invisible, since invisible has a bad connotation (usually seen as something that the user cannot see and has no control over) while transparent has a good connotation (usually associated with not hiding anything). Confusingly, the vast majority of the time the term transparent is used in this misleading way to refer to the actual invisibility of a computing process; in that usage it refers to the overall invisibility of the component, not to the visibility of the component's internals (as in white box or open system).

Examples

For example, the Network File System is transparent because it introduces access to files stored remotely on the network in a way that is uniform with previous local access to a file system, so the user might not even notice it while using the folder hierarchy. The early File Transfer Protocol (FTP) is considerably less transparent, because it requires each user to learn how to access files through an ftp client. Similarly, some file systems allow transparent compression and decompression of data, enabling users to store more files on a medium without any special knowledge, and some file systems encrypt files transparently; neither approach requires running a compression or encryption utility manually.

In software engineering, it is also considered good practice to develop or use abstraction layers for database access, so that the same application will work with different databases; here, the abstraction layer allows other parts of the program to access the database transparently (see Data Access Object, for example). In object-oriented programming, transparency is facilitated through the use of interfaces that hide actual implementations done with different underlying classes.

Types of transparency in distributed systems

Transparency means that any form of distributed system should hide its distributed nature from its users, appearing and functioning as a normal centralized system. There are many types of transparency:

• Access transparency – Regardless of how resource access and representation has to be performed on each individual computing entity, the users of a distributed system should always access resources in a single, uniform way (a minimal sketch of this idea follows below).
• Location transparency – Users of a distributed system should not have to be aware of where a resource is physically located.
• Migration transparency – Users should not be aware of whether a resource or computing entity possesses the ability to move to a different physical or logical location.
• Relocation transparency – Should a resource move while in use, this should not be noticeable to the end user.
• Replication transparency – If a resource is replicated among several locations, it should appear to the user as a single resource.
• Concurrent transparency – While multiple users may compete for and share a single resource, this should not be apparent to any of them.
• Failure transparency – Always try to hide any failure and recovery of computing entities and resources.
• Persistence transparency – Whether a resource lies in volatile or permanent memory should make no difference to the user.
• Security transparency – Negotiation of cryptographically secure access to resources must require a minimum of user intervention, or users will circumvent the security in preference of productivity.[1]

Formal definitions of most of these concepts can be found in RM-ODP, the Open Distributed Processing Reference Model (ISO 10746).

The degree to which these properties can or should be achieved may vary widely, and not every system can or should hide everything from its users. For instance, due to the existence of a fixed and finite speed of light there will always be more latency when accessing resources distant from the user; if one expects real-time interaction with the distributed system, this may be very noticeable.
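As a minimal sketch of access and location transparency through interfaces (the names and the remote endpoint below are hypothetical, not tied to any particular framework), a caller works against a single interface while the implementation decides whether the data is local or fetched over the network:

import java.util.Optional;

// Callers depend only on this interface; they cannot tell (and need not care)
// whether a document comes from local memory or from a remote service.
interface DocumentStore {
    Optional<String> fetch(String key);
}

// Local, in-memory implementation.
class InMemoryDocumentStore implements DocumentStore {
    private final java.util.Map<String, String> data = new java.util.HashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public Optional<String> fetch(String key) { return Optional.ofNullable(data.get(key)); }
}

// Remote implementation; the network access is hidden behind the same interface.
class RemoteDocumentStore implements DocumentStore {
    private final String baseUrl;   // hypothetical HTTP endpoint, e.g. http://store.example.org
    RemoteDocumentStore(String baseUrl) { this.baseUrl = baseUrl; }
    public Optional<String> fetch(String key) {
        try {
            java.net.URL url = new java.net.URL(baseUrl + "/" + key);
            try (java.io.InputStream in = url.openStream()) {
                return Optional.of(new String(in.readAllBytes(),
                        java.nio.charset.StandardCharsets.UTF_8));
            }
        } catch (java.io.IOException e) {
            return Optional.empty();   // the network failure is not exposed to the caller
        }
    }
}

Swapping one implementation for the other does not change the calling code, which is the property the access and location transparency items above describe.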

References
• Transparent-Mode Control Procedures for Data Communication [2], a paper from 1965 whose abstract carries an early example of the term's usage in the IT field.
[1] http://www.counterpane.com/sandl.html
[2] http://delivery.acm.org/10.1145/370000/363836/p203-gorn.pdf?key1=363836&key2=6763295811&coll=&dl=ACM&CFID=15151515&CFTOKEN=6184618

TreadMarks

TreadMarks is a distributed shared memory system created at Rice University in the 1990s.

External links
• TreadMarks official site [1]

References
[1] http://www.cs.rice.edu/CS/Systems/software/treadmarks.html

Tuple space

A tuple space is an implementation of the associative memory paradigm for parallel/distributed computing. It provides a repository of tuples that can be accessed concurrently. As an illustrative example, consider a group of processors that produce pieces of data and a group of processors that use the data. Producers post their data as tuples in the space, and consumers then retrieve from the space data that matches a certain pattern. This is also known as the blackboard metaphor. A tuple space may be thought of as a form of distributed shared memory.

Tuple spaces were the theoretical underpinning of the Linda language developed by David Gelernter and Nicholas Carriero at Yale University. Implementations of tuple spaces have also been written in C++, Java (JavaSpaces), Lisp, Lua, Prolog, Python, Ruby, Smalltalk, Tcl, and for the .NET framework.

Object Spaces

Object Spaces is a paradigm for the development of distributed computing applications. It is characterized by the existence of logical entities called Object Spaces. All the participants of a distributed application share an Object Space. A provider of a service encapsulates the service as an object and puts it in the Object Space; clients of the service then access the Object Space, find out which object provides the needed service, and have the request serviced by that object.

Object Spaces, as a computing paradigm, was put forward by David Gelernter at Yale University, who developed a language called Linda to support the concept of global object coordination. An Object Space can be thought of as a virtual repository, shared amongst providers and accessors of network services, which are themselves abstracted as objects. Processes communicate among each other using these shared objects, updating the state of the objects as and when needed.

An object, when deposited into a space, needs to be registered with an Object Directory in the Object Space. Any process can then identify the object from the Object Directory using a properties lookup, where the property specifying the criteria for the lookup is the object's name or some other property that uniquely identifies it. A process may choose to wait for an object to be placed in the Object Space if the needed object is not already present.

Objects deposited in an Object Space are passive, i.e. their methods cannot be invoked while the objects are in the Object Space. Instead, the accessing process must retrieve the object from the Object Space into its local memory, use the service provided by the object, update the state of the object and place it back into the Object Space.

This paradigm inherently provides mutual exclusion: once an object is accessed, it has to be removed from the Object Space, and it is placed back only after it has been released. This means that no other process can access an object while it is being used by one process.
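The producer/consumer coordination described above can be sketched with a minimal in-process tuple space. The class below is a toy illustration of Linda-style out/read/take operations, not any particular product's API; the names are assumptions made here for the example.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Toy tuple space: producers "out" tuples; consumers "take" (remove) or
// "read" (copy) the first tuple matching a pattern, blocking until one exists.
public class ToyTupleSpace {
    private final List<Object[]> tuples = new ArrayList<>();

    public synchronized void out(Object... tuple) {
        tuples.add(tuple);
        notifyAll();                       // wake up any blocked readers/takers
    }

    public synchronized Object[] take(Predicate<Object[]> pattern) throws InterruptedException {
        Object[] match = waitForMatch(pattern);
        tuples.remove(match);              // removal gives mutual exclusion on the tuple
        return match;
    }

    public synchronized Object[] read(Predicate<Object[]> pattern) throws InterruptedException {
        return waitForMatch(pattern);      // non-destructive read
    }

    private Object[] waitForMatch(Predicate<Object[]> pattern) throws InterruptedException {
        while (true) {
            for (Object[] t : tuples) {
                if (pattern.test(t)) return t;
            }
            wait();                        // block until a producer adds a tuple
        }
    }
}

A producer might call space.out("task", 42) while a consumer blocks in space.take(t -> t.length == 2 && "task".equals(t[0])), mirroring the blackboard-style pattern matching described above.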

JavaSpaces

JavaSpaces is a service specification providing a distributed object exchange and coordination mechanism (which may or may not be persistent) for Java objects. It is used to store the distributed system state and implement distributed algorithms. In a JavaSpace, all communication partners (peers) communicate and coordinate by sharing state.

JavaSpaces can be used to achieve scalability through parallel processing. It can also be used to provide reliable storage of objects through distributed replication, although this will not survive a total power failure the way a disk does; it is regarded by many as reliable as long as the power is reliable. Distribution can also be to remote locations; however, this is rare, as JavaSpaces are usually used for low-latency, high-performance applications rather than for reliable object caching.

The most common software pattern used in JavaSpaces is the Master-Worker pattern. The Master hands out units of work to the "space", and these are read, processed and written back to the space by the workers. In a typical environment there are several "spaces", several masters and many workers; the workers are usually designed to be generic, i.e. they can take any unit of work from the space and process the task.

JavaSpaces is part of the Java Jini technology, which on its own has not been a commercial success. The announcement of Jini/JavaSpaces created quite some hype, although Sun co-founder and chief Jini architect Bill Joy put it straight that this distributed systems dream would take "a quantum leap in thinking".[1] The technology has nevertheless found and kept new users over the years, and some vendors offer JavaSpaces-based products; JavaSpaces remains a niche technology mostly used in the financial services and telco industries, where it continues to maintain a faithful following.
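A worker in the Master-Worker pattern just described is essentially a loop that takes a task entry from the space, processes it, and writes a result entry back. The sketch below uses the standard JavaSpace take/write calls also used in the example further down, but TaskEntry and ResultEntry are hypothetical application-defined entries invented for this illustration:

import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

// Hypothetical entries for this sketch; real applications define their own.
class TaskEntry implements Entry {
    public String input;                        // Entry fields must be public
    public ResultEntry compute() { return new ResultEntry("done:" + input); }
}

class ResultEntry implements Entry {
    public String output;
    public ResultEntry() { }
    public ResultEntry(String output) { this.output = output; }
}

// A generic worker: it takes the next matching TaskEntry from the space,
// computes a ResultEntry and writes it back for the master to collect.
public class Worker implements Runnable {
    private final JavaSpace space;

    public Worker(JavaSpace space) { this.space = space; }

    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // Blocking "pull" of the next unit of work; null fields act as wildcards.
                TaskEntry task = (TaskEntry) space.take(new TaskEntry(), null, Long.MAX_VALUE);
                space.write(task.compute(), null, Lease.FOREVER);
            }
        } catch (Exception e) {
            Thread.currentThread().interrupt();  // stop the loop on failure or interrupt
        }
    }
}

Because workers only match on the entry template, the same worker code can serve many masters and many kinds of task, which is why the pattern scales by simply starting more workers.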

Example usage

The following example shows an application made using JavaSpaces. First, an object to be shared in the Object Space is made; such an object is called an Entry in JavaSpace terminology. Here, the Entry is used to encapsulate a service which returns a "Hello World!" string and keeps track of how many times it was used. The server which provides this service creates an Object Space, or JavaSpace, and writes the Entry into it. The client reads the Entry from the JavaSpace and invokes its method to access the service, updating its usage count by doing so; the updated Entry is then written back to the JavaSpace.

// An Entry class
public class SpaceEntry implements Entry {
    public final String message = "Hello World!";
    public Integer count = 0;

    public String service() {
        ++count;
        return message;
    }

    public String toString() {
        return "Count: " + count;
    }
}

// Hello World! server
public class Server {
    public static void main(String[] args) throws Exception {
        SpaceEntry entry = new SpaceEntry();      // Create the Entry object
        JavaSpace space = (JavaSpace) space();    // Create an Object Space
        // Register and write the Entry into the Space
        space.write(entry, null, Lease.FOREVER);
        // Pause for 10 seconds and then retrieve the Entry and check its state.
        Thread.sleep(10 * 1000);
        SpaceEntry e = (SpaceEntry) space.read(entry, null, Long.MAX_VALUE);
        System.out.println(e);
    }
}

// Client
public class Client {
    public static void main(String[] args) throws Exception {
        JavaSpace space = (JavaSpace) space();
        SpaceEntry e = (SpaceEntry) space.take(new SpaceEntry(), null, Long.MAX_VALUE);
        System.out.println(e.service());
        space.write(e, null, Lease.FOREVER);
    }
}

Books
• Eric Freeman, Susanne Hupfer, Ken Arnold: JavaSpaces Principles, Patterns, and Practice. Addison-Wesley Professional, June 1999. ISBN 0-201-30955-6
• Phil Bishop, Nigel Warren: JavaSpaces in Practice. Addison Wesley, 2002. ISBN 0-321-11231-8
• Max K. Goff: Network Distributed Computing: Fitscapes and Fallacies. Prentice Hall PTR, 2004. ISBN 0131001523
• Sing Li, et al.: Professional Java Server Programming. Wrox Press, 1999. ISBN 1861002777
• Steven Halter: JavaSpaces Example by Example. Prentice Hall, 2002. ISBN 0-13-061916-7

Interviews
• Gelernter, David (2009). "Lord of the Cloud" [2]. Edge Foundation, Inc.
• "Computer Visions: A Conversation with David Gelernter" [3]. Sun Developer Network (SDN).
• Venners, Bill (2003). "Designing as if Programmers are People (Interview with Ken Arnold)" [4]. java.net.
• "Interview: GigaSpaces" [5].

Articles
• Brogden, William (2007). "How Web services can use JavaSpaces" [6]. SearchWebServices.com.
• Brogden, William (2007). "Grid computing and Web services (Beowulf, BOINC, Javaspaces)" [7]. SearchWebServices.com.
• White, Tom (2005). "How To Build a ComputeFarm" [8]. java.net.
• "Understanding JavaSpaces" [9]. theserverside.com.
• Angerer, Bernhard; Erlacher, Andreas (2005). "Loosely Coupled Communication and Coordination in Next-Generation Java Middleware" [10]. java.net.
• Angerer, Bernhard (2003). "Space-Based Programming" [11]. onjava.com.
• "High-impact Web tier clustering, Part 2: Building adaptive, fault-tolerant, scalable solutions with JavaSpaces" [12]. IBM developerWorks.
• Mamoud, Qusay H. (2005). "Getting Started With JavaSpaces Technology: Beyond Conventional Distributed Programming Paradigms" [13]. Sun Developer Network (SDN).
• Hupfer, Susanne (1999). "Make room for Javaspaces, Part 1 (from 5)" [14]. JavaWorld.
• Löffler, Gerald (2004). "JavaSpaces und ihr Platz im Enterprise Java Universum. Das Modell zum Objektaustausch: JavaSpaces vorgestellt" [15] (in German). Entwickler.
• Shalom, Nati (2006). "Space-Based Architecture and the End of Tier-Based Computing" [16]. GigaSpaces Technologies.
• Arango, Mauricio (2009). "Coordination in parallel event-based systems" [17].

Tuple Space Implementations

• Apache River [18]. Languages: Java. License: Apache License. Notable features: based on the Jini project that Sun contributed to Apache.
• The Blitz Project [19]. Languages: Java. License: BSD License. Notable features: single site server.
• The Fly Object Space. Languages: Java, Scala. License: Commercial. Notable features: allows free non-commercial use.
• GigaSpaces [20]. Languages: Java, C++, .Net. License: Commercial. Notable features: clustered; offers a free "community license" with a subset of features.
• Linda in a Mobile Environment (LIME) [21]. Languages: Java. License: GPL.
• LinuxTuples [22]. Languages: C, Python. License: BSD License. Notable features: clustered.
• PyLinda. Languages: Python. License: GPL.
• Rinda. Languages: Ruby. License: Ruby License.
• SemiSpace [23]. Languages: Java. License: Apache License. Notable features: clustered with Terracotta.
• SQLSpaces [24]. Languages: Java (server); Java, C#, PHP, Prolog, Ruby, C/C++ (clients). License: AGPL (server) + LGPL (clients).
• TIBCO ActiveSpaces [25]. License: Commercial. Notable features: clustered.

Inactive projects:
• SlackSpaces [26]: open source; main website down, but the project source is downloadable.
• SmallSpaces [27]: open-source implementation of the Linda/tuple-space programming model.
• TSpaces [28]: by IBM, for Java; project stalled since 2000.

References
[1] Rob Guth: "More than just another pretty name: Sun's Jini opens up a new world of distributed computer systems" (http://sunsite.uakom.sk/sunworldonline/swol-08-1998/swol-08-jini.html), SunWorld, August 1998 [15 January 2006]
[2] http://www.edge.org/3rd_culture/gelernter09/gelernter09_index.html
[3] http://java.sun.com/developer/technicalArticles/Interviews/gelernter_qa.html
[4] http://today.java.net/pub/a/today/2003/06/10/design.html
[5] http://www.informit.com/guides/content.aspx?g=java&seqNum=263
[6] http://searchwebservices.techtarget.com/tip/0,289483,sid26_gci1248166,00.html
[7] http://searchwebservices.techtarget.com/tip/0,289483,sid26_gci1251765,00.html
[8] http://today.java.net/pub/a/today/2005/04/21/farm.html
[9] http://www.theserverside.com/tt/articles/article.tss?l=UsingJavaSpaces
[10] http://today.java.net/pub/a/today/2005/06/03/loose.html
[11] http://www.onjava.com/pub/a/onjava/2003/03/19/java_spaces.html
[12] http://www-128.ibm.com/developerworks/java/library/j-cluster2/?Open&ca=daw-co-news
[13] http://java.sun.com/developer/technicalArticles/tools/JavaSpaces/
[14] http://www.javaworld.com/javaworld/jw-11-1999/jw-11-jiniology.html
[15] http://www.javamagazin.de/itr/online_artikel/psecom,id,489,nodeid,11.html
[16] http://www.gigaspaces.com/os_papers.html
[17] http://blogs.sun.com/arango/entry/coordination_in_parallel_event_based
[19] http://www.dancres.org/blitz/
[20] http://www.gigaspaces.com/
[21] http://lime.sourceforge.net/
[22] http://linuxtuples.sourceforge.net/
[23] http://www.semispace.org/
[24] http://sqlspaces.collide.info/
[25] http://www.tibco.com/products/soa/in-memory-computing/activespaces-enterprise-edition/default.jsp
[26] http://slackspaces.org/
[27] http://www.geir.fongen.no/?docname=SmallSpaces/
[28] http://www.almaden.ibm.com/cs/TSpaces/

Sources
• Gelernter, David. "Generative communication in Linda" (http://portal.acm.org/citation.cfm?doid=2363.2433). ACM Transactions on Programming Languages and Systems, volume 7, number 1, January 1985.
• Distributed Computing (First Indian reprint, 2004), M. L. Liu

External links
• "TupleSpace" (http://c2.com/cgi/wiki?TupleSpace) at c2.com
• "JavaSpace Specification" (http://www.jini.org/wiki/JavaSpaces_Specification) at jini.org

Utility computing

Utility computing is the packaging of computing resources, such as computation, storage and services, as a metered service similar to a traditional public utility (such as electricity, water, natural gas, or the telephone network). This model has the advantage of a low or no initial cost to acquire computer resources; instead, computational resources are essentially rented, turning what was previously a need to purchase products (hardware, software and network bandwidth) into a service.

This repackaging of computing services became the foundation of the shift to "On Demand" computing, Software as a Service and Cloud Computing models that further propagated the idea of computing, application and network as a service. There was some initial skepticism about such a significant shift,[1] but the new model of computing caught on and eventually became mainstream with the publication of Nick Carr's book "The Big Switch". IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications.

Utility computing can support grid computing, which has the characteristic of very large computations or sudden peaks in demand that are supported via a large number of computers. "Utility computing" has usually envisioned some form of virtualization, so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the "back end" to make this possible; these might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer. The technique of running a single calculation on multiple computers is known as distributed computing, and the term "grid computing" is often used to describe a particular form of distributed computing in which the supporting nodes are geographically distributed or cross administrative domains.

To provide utility computing services, a company can "bundle" the resources of members of the public for sale, who might be paid with a portion of the revenue from clients. One model, common among volunteer computing applications, is for a central server to dispense tasks to participating nodes at the behest of approved end-users (in the commercial case, the paying customers). Another model, sometimes called the Virtual Organization (VO), is more decentralized, with organizations buying and selling computing resources as needed or as they go idle. The definition of "utility computing" is sometimes extended to specialized tasks, such as web services.



History
Utility computing is not a new concept, but rather has quite a long history. Among the earliest references is:

If computers of the kind I have advocated become the computers of the future, then computing may someday be organized as a public utility just as the telephone system is a public utility... The computer utility could become the basis of a new and important industry.

—John McCarthy, speaking at the MIT Centennial in 1961[2]

IBM and other mainframe providers conducted this kind of business in the following two decades, often referred to as time-sharing, offering computing power and database storage to banks and other large organizations from their worldwide data centers. To facilitate this business model, mainframe operating systems evolved to include process control facilities, security, and user metering. The advent of minicomputers changed this business model by making computers affordable to almost all companies. As Intel and AMD increased the power of PC-architecture servers with each new generation of processor, data centers became filled with thousands of servers.

In the late 1990s utility computing re-surfaced. InsynQ, Inc.[3] launched on-demand applications and desktop hosting services in 1997 using HP equipment. In 1998, HP set up the Utility Computing Division in Mountain View, CA, assigning former Bell Labs computer scientists to begin work on a computing power plant, incorporating multiple utilities to form a software stack; services such as "IP billing-on-tap" were marketed. HP introduced the Utility Data Center in 2001. Sun announced the Sun Cloud service to consumers in 2000. In December 2005, Alexa launched the Alexa Web Search Platform, a Web search building tool for which the underlying power is utility computing; Alexa charges users for storage, utilization, etc.

There is space in the market for specific industries and applications as well as other niche applications powered by utility computing. For example, PolyServe Inc.[4] offers a clustered file system based on commodity server and storage hardware that creates highly available utility computing environments for mission-critical applications, including Oracle and Microsoft SQL Server databases, as well as workload-optimized solutions specifically tuned for bulk storage, high-performance computing, vertical industries such as financial services, seismic processing, and content serving. The Database Utility and File Serving Utility enable IT organizations to independently add servers or storage as needed, retask workloads to different hardware, and maintain the environment without disruption.

In spring 2006, 3tera announced its AppLogic service, and later that summer Amazon launched Amazon EC2 (Elastic Compute Cloud). These services allow the operation of general-purpose computing applications. Both are based on Xen virtualization software, and the most commonly used operating system on the virtual computers is Linux, though Windows and Solaris are supported. Common uses include web applications, SaaS, image rendering and processing, but also general-purpose business applications. Utility computing simply means "pay and use", with regard to computing power.

References
[1] On-demand computing: What are the odds? (http://www.zdnet.com/news/on-demand-computing-what-are-the-odds/296135), ZDNet, Nov 2002, retrieved Oct 2010
[2] Architects of the Information Society: Thirty-Five Years of the Laboratory for Computer Science at MIT, edited by Hal Abelson
[3] http://www.insynq.com
[4] http://www.polyserve.com/index.php

Decision support and business intelligence 8th edition page 680 ISBN 0-13-198660-0



External links
• How Utility Computing Works (http://communication.howstuffworks.com/utility-computing.htm)
• Utility computing definition (http://searchdatacenter.techtarget.com/sDefinition/0,,sid80_gci904539,00.html)

Virtual Machine Interface
Virtual Machine Interface[1] ("VMI") may refer to a communication protocol for running parallel programs on a distributed memory system. Virtual Machine Interface[2] is also the name given by VMware to a proposed open standard protocol that guest operating systems can use to communicate with the hypervisor of a virtual machine. An implementation of this standard was merged into the mainline Linux kernel in version 2.6.21, and a number of popular GNU/Linux distributions ship with VMI support enabled by default. Since newer AMD and Intel CPUs allow for more efficient virtualization, VMI is being phased out: VMI support is scheduled for removal from the Linux kernel in 2.6.37[3] and from VMware products in the 2010-2011 timeframe.[4]

References
[1] Official web site for the VMI communication protocol (http://vmi.ncsa.uiuc.edu/)
[2] Transparent Paravirtualization - VMware Inc (http://www.vmware.com/interfaces/paravirtualization.html)
[3] x86, vmi: Mark VMI deprecated and schedule it for removal (http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=d0153ca35d344d9b640dc305031b0703ba3f30f0)
[4] Support for guest OS paravirtualization using VMware VMI to be retired from new products in 2010-2011 (http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html)

External links
• The VMI virtualization interface (http://lwn.net/Articles/175706/) - article in lwn.net

Virtual Object System


Developer(s): Interreality
Stable release: 0.23.0 / April 15, 2006 (S5 UI preview released October 19, 2007)
Operating system: Linux, Windows, Mac OS X
Type: Distributed systems, Networking, 3D graphics
License: GNU Lesser General Public License
Website: interreality.org [1]

The Virtual Object System (VOS) is a computer software technology for creating distributed object systems. The sites hosting Vobjects are typically linked by a computer network, such as a local area network or the Internet. Vobjects may send messages to other Vobjects over these network links (remotely) or within the same host site (locally) to perform actions and synchronize state. In this way, VOS may also be called an object-oriented remote procedure call system. In addition, Vobjects may have a number of directed relations to other Vobjects, which allows them to form directed graph data structures. VOS is patent free, and its implementation is Free Software. The primary application focus of VOS is general purpose, multiuser, collaborative 3D virtual environments or virtual reality. The primary designer and author of VOS is Peter Amstutz.
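As a rough illustration of the idea (purely schematic Java, not the actual VOS implementation, which is separate software with its own API), a Vobject can be modeled as an object that holds named, directed relations to other Vobjects and receives messages that act on its state:

import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Schematic model of a Vobject: named directed links to other Vobjects plus a
// message table mapping message names to handlers that act on local state.
public class Vobject {
    private final String name;
    private final Map<String, Vobject> relations = new HashMap<>();
    private final Map<String, BiConsumer<Vobject, String>> handlers = new HashMap<>();
    private String state = "";

    public Vobject(String name) { this.name = name; }

    public void link(String relationName, Vobject target) {
        relations.put(relationName, target);     // directed edge in the object graph
    }

    public Vobject follow(String relationName) {
        return relations.get(relationName);
    }

    public void on(String messageName, BiConsumer<Vobject, String> handler) {
        handlers.put(messageName, handler);
    }

    // Delivering a message performs an action and/or synchronizes state; in a
    // real distributed system the same call could travel over a network link.
    public void send(String messageName, String payload) {
        BiConsumer<Vobject, String> h = handlers.get(messageName);
        if (h != null) h.accept(this, payload);
    }

    public void setState(String s) { state = s; }
    public String getState() { return state; }
    public String toString() { return name + "[" + state + "]"; }
}

The directed relations form the graph structure mentioned above, while send() stands in for the local or remote message delivery that makes VOS resemble an object-oriented remote procedure call system.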

External links
• Interreality.org official site [2]

References
[1] http://interreality.org/
[2] http://interreality.org

Volunteer computing


Volunteer computing is a type of distributed computing in which computer owners donate their computing resources (such as processing power and storage) to one or more "projects".

History
The first volunteer computing project was the Great Internet Mersenne Prime Search, which was started in January 1996.[1] It was followed in 1997 by distributed.net. In 1997 and 1998 several academic research projects developed Java-based systems for volunteer computing; examples include Bayanihan,[2] Popcorn,[3] Superweb,[4] and Charlotte.[5] A similar concept is sideband computing, which lets a user share his computing power while he is online.

The term "volunteer computing" was coined by Luis F. G. Sarmenta, the developer of Bayanihan. Volunteer computing is also appealing for global efforts on social responsibility, or Corporate Social Responsibility, as reported in a Harvard Business Review article[6] and as used in the Responsible IT forum.[7]

In 1999 the SETI@home and Folding@home projects were launched. These projects received considerable media coverage, and each one attracted several hundred thousand volunteers. Between 1998 and 2002, several companies were formed with business models involving volunteer computing; examples include Popular Power, Porivo, Entropia, and United Devices. In 2002, the Berkeley Open Infrastructure for Network Computing (BOINC) open-source project was founded, and in 2007 its software became the software running the largest public computing grid (World Community Grid).[8]

Middleware for volunteer computing
The client software of the early volunteer computing projects consisted of a single program that combined the scientific computation and the distributed computing infrastructure. This monolithic architecture was inflexible; for example, it was difficult to deploy new application versions. More recently, volunteer computing has moved to middleware systems that provide a distributed computing infrastructure independently of the scientific computation. Examples include:

• The Berkeley Open Infrastructure for Network Computing (BOINC). BOINC is the most widely used middleware system, and is currently used by the World Community Grid. It is open source (LGPL) and is developed by an NSF-funded research project located at the UC Berkeley Space Sciences Laboratory. It offers client software for Windows, Mac OS X, Linux, and other Unix variants.
• XtremWeb is used primarily as a research tool. It is developed by a group based at the University of Paris-South.
• Xgrid is developed by Apple. Its client and server components run only on Mac OS X.
• Grid MP is a commercial middleware platform developed by United Devices and has been used in volunteer computing projects including grid.org, World Community Grid, Cell Computing, and Hikari Grid.

Most of these systems have the same basic structure: a client program runs on the volunteer's computer. It periodically contacts project-operated servers over the Internet, requesting jobs and reporting the results of completed jobs. This "pull" model is necessary because many volunteer computers are behind firewalls that do not allow incoming connections. The system keeps track of each user's "credit", a numerical measure of how much work that user's computers have done for the project.
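The pull model just described can be sketched in a few lines. Everything below is hypothetical: the project URL, the /get_work and /report endpoints, and the plain-text job format are assumptions made for the example, not the protocol of BOINC or any other real middleware.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Minimal volunteer client: repeatedly pulls a job from the project server,
// computes a result, and reports it back. Using only outbound connections
// keeps it friendly to firewalls that block incoming traffic.
public class VolunteerClient {
    private static final String PROJECT = "http://project.example.org"; // hypothetical server

    public static void main(String[] args) throws Exception {
        while (true) {
            String job = httpGet(PROJECT + "/get_work");   // ask the server for a job
            if (job.isEmpty()) {
                Thread.sleep(60_000);                      // nothing to do; back off
                continue;
            }
            String result = compute(job);                  // the scientific computation
            httpPost(PROJECT + "/report", result);         // report the completed job
        }
    }

    // Placeholder for the real workload, e.g. a number-crunching kernel.
    private static String compute(String job) {
        return "result-for:" + job;
    }

    private static String httpGet(String url) throws IOException {
        try (InputStream in = new URL(url).openStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8).trim();
        }
    }

    private static void httpPost(String url, String body) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        conn.getInputStream().close();                     // complete the request
    }
}

A real middleware client would additionally run the work at reduced scheduling priority and attach the user's credit accounting, as discussed in the surrounding text.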

Volunteer computing systems must deal with several problematic aspects of the volunteered computers: their heterogeneity, their churn (that is, the arrival and departure of hosts), their sporadic availability, and the need not to interfere with their performance during regular use. In addition, volunteer computing systems must deal with several problems related to correctness:

• Volunteers are unaccountable and essentially anonymous.
• Some volunteer computers (especially those that are overclocked) occasionally malfunction and return incorrect results.
• Some volunteers intentionally return incorrect results or claim excessive credit for results.

One common approach to these problems is "replicated computing", in which each job is performed on at least two computers. The results (and the corresponding credit) are accepted only if they agree sufficiently.

Costs for volunteer computing participants

• Increased power consumption. A CPU that is idle generally has lower power consumption than when it is active. Additionally, if adequate cooling is not in place, the constant load on the volunteer's CPU can cause it to overheat. The desire to participate may also cause the volunteer to leave the PC on overnight, or to disable power-saving features like suspend. However, the increased power consumption can be remedied to some extent by setting the desired processor usage percentage, an option that is available, for example, in the BOINC client.
• Decreased performance of the PC. If the volunteer computing application attempts to run while the computer is in use, it will impact the performance of the PC. This is due to increased contention for the CPU, the CPU cache, disk I/O, and network I/O. If RAM is a limitation, increased disk cache misses and/or increased paging can result. Volunteer computing applications typically execute at a lower CPU scheduling priority, which helps to alleviate CPU contention.[9]

These effects may or may not be noticeable, and even if they are noticeable, the volunteer might choose to continue participating.

References
[1] "GIMPS History" (http://mersenne.org/various/history.php).
[2] Sarmenta, L.F.G. "Bayanihan: Web-Based Volunteer Computing Using Java". Proc. of the 2nd International Conference on World-Wide Computing and its Applications (WWCA'98), Tsukuba, Japan, March 3-4, 1998. Lecture Notes in Computer Science 1368, Springer-Verlag, pp. 444-461.
[3] Regev, O.; Nisan, N. (October 25-28, 1998). "The POPCORN market - an online market for computational resources". Proceedings of the First International Conference on Information and Computation Economies. Charleston, South Carolina, United States: ACM Press, pp. 148-157.
[4] Alexandrov, A.D.; Ibel, M.; Schauser, K.E.; Scheiman (1996). "SuperWeb: Research issues in Java-Based Global Computing". Proceedings of the Workshop on Java for High Performance Scientific and Engineering Computing, Simulation and Modelling. New York: Syracuse University.
[5] Baratloo, A.; Karaul, M.; Kedem, Z.; Wyckoff, P. (Sept 1996). "Charlotte: Metacomputing on the Web" (http://citeseer.ist.psu.edu/article/baratloo96charlotte.html). Proceedings of the 9th International Conference on Parallel and Distributed Computing Systems.
[6] Porter, Michael; Mark Kramer (December 2006). "The Link Between Competitive Advantage and Corporate Social Responsibility" (http://harvardbusinessonline.hbsp.harvard.edu/email/pdfs/Porter_Dec_2006.pdf). Harvard Business Review.
[7] "ResponsI.TK" (http://www.responsI.tk). Responsible IT forum.
[8] BOINC Migration Announcement (http://www.worldcommunitygrid.org/forums/wcg/viewthread?thread=15715)
[9] "Measuring Folding@Home's performance impact" (http://techreport.com/articles.x/4341/1). September 2009.

External links
• Wanted: Your computer's spare time (http://www.physics.org/featuredetail.asp?id=38), physics.org

JHunterJ. SkeletorUK. Mani1.64. Miami33139. MrOllie. 9 anonymous edits PlanetSim  Source: http://en. Gerry Ashton. Torqueing.php?oldid=428682338  Contributors: Bovineone. Artaxiad. Nojhan. Coffeespoon. Gsonnenf. ShaunMacPherson. Kjkolb. Materialscientist. Sink257. Teryx. Muijz. Kmerenkov. CSWarren. Eleckyt. HenryLi. Chad Vander Veen. FrenchIsAwesome. Koyaanis Qatsi. ZeroOne. SMC.org/w/index. Leszek Jańczuk. DARTH SIDIOUS 2. Richard Arthur Norton (1958. Taxman. 1 anonymous edits Stub (distributed computing)  Source: http://en. Delta759. Rrburke. KFP. Vroman. Nick Drake. L Kensington. AxelBoldt. Karim ElDeeb. Bovineone. Jasper Chua. Dthomsen8. Quietust. Mary quite contrary. Cometstyles. Michaelmas1957. J Milburn.php?oldid=438879586  Contributors: Elkman. DocendoDiscimus.php?oldid=400929191  Contributors: C777. Hft. Miracle Pen. Zodon. Lionelt. J. VictorianMutant. Loyalist Cannons. Jedonnelley. PeterBrian. Oldhamlet. Fijal. Humble Guy. Neilc. Ramu50. John. Richfife. Pbannister. Matt Crypto. Aldie. Spayrard. RossPatterson. Buster79. Bovineone. Ark. SCOnline. MER-C.php?oldid=443129974  Contributors: A5b.org/w/index. CSWarren. Javawizard. Khcw77. Epolk. Amwebb. MrOllie. Epatrocinio. Shell Kinney. Marangog. Miym. Poohneat. Agentbla. Linuxbeak. Jni. Air55. Doctorevil64. Anonymous Cow. Bloodshedder. Jasper Deng.wikipedia. Joffeloff. Rjwilmsi. MrMambo. Arakunem. Kku.

Snrjefe. Shire Reeve. SamJohnston. Guy Harris. SpigotMap.wikipedia. GraemeMcRae. Skysmith. Suyambuvel.wikipedia.org/w/index. StoneIsle. Wmahan. Dlrohrer2003. Tobias Bergemann. Pearle. ShellyT123. Softtest123. Shenme. Rwwww. THB. Tlausser. Miym. The Anome. Weregerbil. Marvinandmilo. Balrog-kun. Wojteklw. Inc ru.org/w/index. RodneyMyers. Soggyc. Paul Foxworthy. Chip Zero. UncleDouggie. Miym.org/w/index. SteveLoughran. Soumyasch. Softguyus. Thumperward. 4 anonymous edits Virtual Object System  Source: http://en. Rich Farmbrough. FatalError. CeciliaPang.php?oldid=332950032  Contributors: ArthurDenture. Licor. Mild Bill Hiccup. Ronz. Rare4. ReedHedges. Rich Farmbrough. Davepape. Posix memalign. Verbamundi. AzzAz. Salad Days. 89 anonymous edits Virtual Machine Interface  Source: http://en. Royalguard11.wikipedia. Bluemask. Ycagen.Article Sources and Contributors Roman Doroshenko. MathieuDutourSikiric.php?oldid=440045994  Contributors: Bovineone.php?oldid=434581986  Contributors: Avalon. 6 anonymous edits Volunteer computing  Source: http://en. 18 anonymous edits 162 .

wikipedia.php?title=File:IBM_HS20_blade_server.0  Contributors: Moxfyre File:Supercomputers countries share pie.svg  License: GNU Free Documentation License  Contributors: Sam Johnston Image:Fragmented object.php?title=File:Network_Overlay.svg  License: Creative Commons Attribution-Share Alike  Contributors: Ludovic.wikipedia. Licenses and Contributors Image:ALSP.png  Source: http://en.php?title=File:Overview_of_a_three-tier_application_vectorVersion.png  Source: http://en.jpeg  Source: http://en.org/w/index.org Image:Definition of a Distributed Data Flow.png  Source: http://en.0  Contributors: Damien Katz File:Couchdb screenshot. Licenses and Contributors 163 Image Sources.jpg  Source: http://en.org/w/index.wikipedia.org/w/index.wikipedia.urv.jpg  Source: http://en.urv.php?title=File:ArchitectureCloudLinksSameSite.wikipedia File:Network Overlay merged.jpg  Source: http://en.jpg  License: Public Domain  Contributors: BClemente Image:AutonomicSystemModel.org/w/index.wikipedia.org/w/index.png  Source: http://en.org/w/index.wikipedia.php?title=File:PoweredMongoDBbrown66.php?title=File:Planetsimlogo.wikipedia.php?title=File:Roadrunner_supercomputer_HiRes. PlanetSim was developed within the research project Planet (http://planet.5  Contributors: The picture is the logo of the PlanetSim simulator.php?title=File:BlueGeneL_cabinet.org/w/index.wikipedia.gif  Source: http://en.org/w/index. Yaleks Image:IBM HS20 blade server.jpg  Source: http://en.urv.png  License: Public Domain  Contributors: Gaensebluemchen at night Image:Definition of a Live Distributed Object.jpg  Source: http://en.wikipedia File:Client-server-model.5  Contributors: The picture is the results for a 1000-node Chord network. Records Management/Media Services and Operations Image:Processor families in TOP500 supercomputers.org/w/index.wikipedia.jpg  Source: http://en.es). Original uploader was Bartledan at en.org/w/index.svg  Source: http://en.png by Duesentrieb. Image:PlanetsimArchitecture.php?title=File:PlanetsimArchitecture.wikipedia.wikipedia.org/w/index.org/w/index.jpg  License: Creative Commons Attribution-Sharealike 2.svg  License: Public Domain  Contributors: Various. developed within the research project Planet (http://planet.png  Source: http://en.svg  Source: http://en.svg  Source: http://en.php?title=File:Definition_of_a_Live_Distributed_Object.org/w/index.svg: David Vignoni Gnome-fs-server.wikipedia.jpg  Source: http://en. (Original SVG was based on File:PD-icon.org/w/index.es). PlanetSim was developed within the research project Planet (http://planet.org/w/index.png  License: Public Domain  Contributors: Original uploader was Sjschmid at en.org/w/index.wikipedia.5  Contributors: The picture shows the PlanetSim layered architecture.php?title=File:Cray-1-deutsches-museum.org/w/index.0  Contributors: Robert Kloosterhuis Image:Operating systems used on top 500 supercomputers.org/w/index. based on a file by User:Foofy.jpg  License: GNU Free Documentation License  Contributors: Raul654.php?title=File:Fragmented_object.org/w/index.org/w/index.png  License: Public Domain  Contributors: Image:Fabric computing.org/w/index.wikipedia. 
License

Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/

