MapReduce: High-impact Strategies - What You Need to Know: Definitions, Adoptions, Impact, Benefits, Maturity, Vendors


Published by Emereo Publishing
The Knowledge Solution. Stop Searching, Stand Out and Pay Off. The #1 ALL ENCOMPASSING Guide to MapReduce.

An Important Message for ANYONE who wants to learn about MapReduce Quickly and Easily...

"Here's Your Chance To Skip The Struggle and Master MapReduce, With the Least Amount of Effort, In 2 Days Or Less..."

MapReduce is a software framework introduced by Google in 2004 to support distributed computing on large data sets on clusters of computers. Parts of the framework are patented in some countries.

The framework is inspired by the map and reduce functions commonly used in functional programming, although their purpose in the MapReduce framework is not the same as their original forms.

MapReduce libraries have been written in C++, C#, Erlang, Java, OCaml, Perl, Python, PHP, Ruby, F#, R and other programming languages.

Get the edge, learn EVERYTHING you need to know about MapReduce, and ace any discussion, proposal and implementation with the ultimate book – guaranteed to give you the education that you need, faster than you ever dreamed possible!

The information in this book can show you how to be an expert in the field of MapReduce.

Are you looking to learn more about MapReduce? You're about to discover the most spectacular gold mine of MapReduce materials ever created; this book is a unique collection to help you become a master of MapReduce.

This book is your ultimate resource for MapReduce. Here you will find the most up-to-date information, analysis, background and everything you need to know.

In easy-to-read chapters, with extensive references and links, it gets you up to speed on all there is to know about MapReduce right away. A quick look inside: MapReduce, Aggregate Level Simulation Protocol, Amazon Relational Database Service, Amazon SimpleDB, Amoeba distributed operating system, Art of War Central, Autonomic Computing, Citrusleaf database, Client–server model, Code mobility, Connection broker, CouchDB, Data Diffusion Machine, Database-centric architecture, Distributed application, Distributed data flow, Distributed database, Distributed design patterns, Distributed Interactive Simulation, Distributed lock manager, Distributed memory, Distributed object, Distributed shared memory, Distributed social network, Dryad (programming), Dynamic infrastructure, Edge computing, Explicit multi-threading, Fabric computing, Fallacies of Distributed Computing, Fragmented object, Gemstone (database), HyperText Computer, High level architecture (simulation), IBZL, Kayou, Live distributed object, Master/slave (technology), Membase, Message consumer, Message passing, Messaging pattern, Mobile agent, MongoDB, Multi-master replication, Multitier architecture, Network cloaking, Opaak, Open architecture computing environment, Open Computer Forensics Architecture, OrientDB, Overlay network, Paradiseo, Parasitic computing, PlanetSim, Portable object (computing), Redis (data store), Remote Component Environment, Request Based Distributed Computing, RM-ODP, Semantic Web Data Space, Service-oriented distributed applications, Shared memory, Smart variables, Stub (distributed computing), Supercomputer, Terrastore, Transparency (human-computer interaction), TreadMarks, Tuple space, Utility computing, Virtual Machine Interface, Virtual Object System, Volunteer computing...and Much, Much More!

This book explains in depth the real drivers and workings of MapReduce. It reduces the risk of your technology, time, and resource investment decisions by enabling you to compare your understanding of MapReduce with the objectivity of experienced professionals. Grab your copy now, while you still can.

More info:

Published by: Emereo Publishing on Sep 09, 2011
Copyright: Traditional Copyright (All rights reserved)
List Price: $39.95





  • Aggregate Level Simulation Protocol
  • Amazon Relational Database Service
  • Amazon SimpleDB
  • Amoeba distributed operating system
  • Art of War Central
  • Autonomic Computing
  • Citrusleaf database
  • Client–server model
  • Code mobility
  • Connection broker
  • CouchDB
  • Data Diffusion Machine
  • Database-centric architecture
  • Distributed application
  • Distributed data flow
  • Distributed database
  • Distributed design patterns
  • Distributed Interactive Simulation
  • Distributed lock manager
  • Distributed memory
  • Distributed object
  • Distributed shared memory
  • Distributed social network
  • Dryad (programming)
  • Dynamic infrastructure
  • Edge computing
  • Explicit multi-threading
  • Fabric computing
  • Fallacies of Distributed Computing
  • Fragmented object
  • Gemstone (database)
  • HyperText Computer
  • High level architecture (simulation)
  • IBZL
  • Kayou
  • Live distributed object
  • Master/slave (technology)
  • Membase
  • Message consumer
  • Message passing
  • Messaging pattern
  • Mobile agent
  • MongoDB
  • Multi-master replication
  • Multitier architecture
  • Network cloaking
  • Opaak
  • Open architecture computing environment
  • Open Computer Forensics Architecture
  • OrientDB
  • Overlay network
  • Paradiseo
  • Parasitic computing
  • PlanetSim
  • Portable object (computing)
  • Redis (data store)
  • Remote Component Environment
  • Request Based Distributed Computing
  • RM-ODP
  • Semantic Web Data Space
  • Service-oriented distributed applications
  • Shared memory
  • Smart variables
  • Stub (distributed computing)
  • Supercomputer
  • Terrastore
  • Transparency (human-computer interaction)
  • TreadMarks
  • Tuple space
  • Utility computing
  • Virtual Machine Interface



Kevin Roebuck



High-impact Strategies - What You Need to Know: Definitions, Adoptions, Impact, Benefits, Maturity, Vendors

Topic-relevant selected content from the highest rated entries, typeset, printed and shipped. Combine the advantages of up-to-date and in-depth knowledge with the convenience of printed books. A portion of the proceeds of each book will be donated to the Wikimedia Foundation to support their mission: to empower and engage people around the world to collect and develop educational content under a free license or in the public domain, and to disseminate it effectively and globally. The content within this book was generated collaboratively by volunteers. Please be advised that nothing found here has necessarily been reviewed by people with the expertise required to provide you with complete, accurate or reliable information. Some information in this book may be misleading or simply wrong. The publisher does not guarantee the validity of the information found here. If you need specific advice (for example, medical, legal, financial, or risk management) please seek a professional who is licensed or knowledgeable in that area. Sources, licenses and contributors of the articles and images are listed in the section entitled “References”. Parts of the books may be licensed under the GNU Free Documentation License. A copy of this license is included in the section entitled “GNU Free Documentation License”. All used third-party trademarks belong to their respective owners.


MapReduce

MapReduce is a software framework introduced by Google in 2004 to support distributed computing on large data sets on clusters of computers.[1] Parts of the framework are patented in some countries.[2]

The framework is inspired by the map and reduce functions commonly used in functional programming,[3] although their purpose in the MapReduce framework is not the same as their original forms.[4]

MapReduce libraries have been written in C++, C#, Erlang, Java, OCaml, Perl, Python, PHP, Ruby, F#, R and other programming languages.

Overview

MapReduce is a framework for processing huge datasets on certain kinds of distributable problems using a large number of computers (nodes), collectively referred to as a cluster (if all nodes use the same hardware) or as a grid (if the nodes use different hardware). Computational processing can occur on data stored either in a filesystem (unstructured) or within a database (structured).

"Map" step: The master node takes the input, partitions it up into smaller sub-problems, and distributes those to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes that smaller problem, and passes the answer back to its master node.

"Reduce" step: The master node then takes the answers to all the sub-problems and combines them in some way to get the output – the answer to the problem it was originally trying to solve.

MapReduce allows for distributed processing of the map and reduction operations. Provided each mapping operation is independent of the others, all maps can be performed in parallel – though in practice this is limited by the data source and/or the number of CPUs near that data. Similarly, a set of 'reducers' can perform the reduction phase, provided all outputs of the map operation that share the same key are presented to the same reducer at the same time. While this process can often appear inefficient compared to algorithms that are more sequential, MapReduce can be applied to significantly larger datasets than "commodity" servers can handle – a large server farm can use MapReduce to sort a petabyte of data in only a few hours. The parallelism also offers some possibility of recovering from partial failure of servers or storage during the operation: if one mapper or reducer fails, the work can be rescheduled – assuming the input data is still available.

Logical view

The Map and Reduce functions of MapReduce are both defined with respect to data structured in (key, value) pairs. Map takes one pair of data with a type in one data domain, and returns a list of pairs in a different domain:

Map(k1,v1) → list(k2,v2)

The Map function is applied in parallel to every item in the input dataset. This produces a list of (k2,v2) pairs for each call. After that, the MapReduce framework collects all pairs with the same key from all lists and groups them together, thus creating one group for each of the different generated keys.

The Reduce function is then applied in parallel to each group, which in turn produces a collection of values in the same domain:

Reduce(k2, list(v2)) → list(v3)

Each Reduce call typically produces either one value v3 or an empty return, though one call is allowed to return more than one value. The returns of all calls are collected as the desired result list.

Thus the MapReduce framework transforms a list of (key, value) pairs into a list of values. This behavior is different from the typical functional programming map and reduce combination, which accepts a list of arbitrary values and returns one single value that combines all the values returned by map.

It is necessary but not sufficient to have implementations of the map and reduce abstractions in order to implement MapReduce. Distributed implementations of MapReduce require a means of connecting the processes performing the Map and Reduce phases. This may be a distributed file system. Other options are possible, such as direct streaming from mappers to reducers, or for the mapping processors to serve up their results to reducers that query them.

Example

The canonical example application of MapReduce is a process to count the appearances of each different word in a set of documents:

void map(String name, String document):
  // name: document name
  // document: document contents
  for each word w in document:
    EmitIntermediate(w, "1");

void reduce(String word, Iterator partialCounts):
  // word: a word
  // partialCounts: a list of aggregated partial counts
  int sum = 0;
  for each pc in partialCounts:
    sum += ParseInt(pc);
  Emit(word, AsString(sum));

Here, each document is split into words, and each word is counted initially with a "1" value by the Map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to Reduce, thus this function just needs to sum all of its input values to find the total appearances of that word.

Dataflow

The frozen part of the MapReduce framework is a large distributed sort. The hot spots, which the application defines, are:
• an input reader
• a Map function
• a partition function
• a compare function
• a Reduce function
• an output writer
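The map/group/reduce flow described above can be condensed into a small, single-process sketch. The driver function below, `map_reduce`, is an illustrative name, not part of any real framework: it applies a user-supplied mapper to every input pair, groups the intermediate pairs by key (the framework's implicit shuffle), and applies the reducer to each group, using word count as the example.

```python
from collections import defaultdict

def map_reduce(inputs, mapper, reducer):
    # "Map" step: apply the mapper to every (k1, v1) input pair,
    # collecting the emitted (k2, v2) intermediate pairs.
    intermediate = []
    for k1, v1 in inputs:
        intermediate.extend(mapper(k1, v1))

    # Shuffle: group all intermediate values by key, creating one
    # group for each distinct k2.
    groups = defaultdict(list)
    for k2, v2 in intermediate:
        groups[k2].append(v2)

    # "Reduce" step: apply the reducer to each (k2, list(v2)) group;
    # each call yields zero or more result values.
    results = []
    for k2, values in groups.items():
        results.extend(reducer(k2, values))
    return results

# Word count, mirroring the canonical example: the mapper emits
# (word, 1) for every word; the reducer sums the partial counts.
def word_mapper(name, document):
    return [(word, 1) for word in document.split()]

def word_reducer(word, partial_counts):
    return [(word, sum(partial_counts))]

docs = [("d1", "the quick fox"), ("d2", "the lazy dog the end")]
counts = dict(map_reduce(docs, word_mapper, word_reducer))
# counts["the"] == 3; every other word appears once
```

In a real distributed implementation the intermediate list never exists in one place: mappers run on many nodes, and the grouping is done by the shuffle across the network, but the data movement is logically the same as this in-memory version.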

Input reader

The input reader divides the input into appropriate size 'splits' (in practice typically 16MB to 128MB) and the framework assigns one split to each Map function. The input reader reads data from stable storage (typically a distributed file system) and generates key/value pairs. A common example will read a directory full of text files and return each line as a record.

Map function

Each Map function takes a series of key/value pairs, processes each, and generates zero or more output key/value pairs. The input and output types of the map can be (and often are) different from each other. If the application is doing a word count, the map function would break the line into words and output a key/value pair for each word. Each output pair would contain the word as the key and "1" as the value.

Partition function

Each Map function output is allocated to a particular reducer by the application's partition function for sharding purposes. The partition function is given the key and the number of reducers and returns the index of the desired reduce. A typical default is to hash the key and modulo the number of reducers. It is important to pick a partition function that gives an approximately uniform distribution of data per shard for load balancing purposes; otherwise the MapReduce operation can be held up waiting for slow reducers to finish.

Between the map and reduce stages, the data is shuffled (parallel-sorted / exchanged between nodes) in order to move the data from the map node that produced it to the shard in which it will be reduced. The shuffle can sometimes take longer than the computation time depending on network bandwidth, CPU speeds, data produced and time taken by map and reduce computations.

Comparison function

The input for each Reduce is pulled from the machine where the Map ran and sorted using the application's comparison function.

Reduce function

The framework calls the application's Reduce function once for each unique key in the sorted order. The Reduce can iterate through the values that are associated with that key and output 0 or more values. In the word count example, the Reduce function takes the input values, sums them and generates a single output of the word and the final sum.

Output writer

The Output Writer writes the output of the Reduce to stable storage, usually a distributed file system.

Distribution and reliability

MapReduce achieves reliability by parceling out a number of operations on the set of data to each node in the network. Each node is expected to report back periodically with completed work and status updates. If a node falls silent for longer than that interval, the master node (similar to the master server in the Google File System) records the node as dead and sends out the node's assigned work to other nodes. Individual operations use atomic operations for naming file outputs as a check to ensure that there are not parallel conflicting threads running. When files are renamed, it is possible to also copy them to another name in addition to the name of the task (allowing for side-effects).

The reduce operations operate much the same way. Because of their inferior properties with regard to parallel operations, the master node attempts to schedule reduce operations on the same node, or in the same rack as, the node holding the data being operated on. This property is desirable as it conserves bandwidth across the backbone network of the datacenter.

Implementations are not necessarily highly reliable. For example, in Hadoop the NameNode is a single point of failure for the distributed filesystem.

MapReduce's stable inputs and outputs are usually stored in a distributed file system. The transient data is usually stored on local disk and fetched remotely by the reducers.

Uses

MapReduce is useful in a wide range of applications, including: distributed grep, distributed sort, web link-graph reversal, term-vector per host, web access log stats, inverted index construction, document clustering, machine learning,[5] and statistical machine translation. Moreover, the MapReduce model has been adapted to several computing environments like multi-core and many-core systems,[6] [7] desktop grids,[8] volunteer computing environments,[9] dynamic cloud environments,[10] and mobile environments.[11]

At Google, MapReduce was used to completely regenerate Google's index of the World Wide Web. It replaced the old ad hoc programs that updated the index and ran the various analyses.[12]

Criticism

David DeWitt and Michael Stonebraker, experts in parallel databases and shared-nothing architectures, have been critical of the breadth of problems that MapReduce can be used for.[13] They called its interface too low-level and questioned whether it really represents the paradigm shift its proponents have claimed it is.[14] They challenged the MapReduce proponents' claims of novelty, citing Teradata as an example of prior art that has existed for over two decades. They also compared MapReduce programmers to Codasyl programmers, noting both are "writing in a low-level language performing low-level record manipulation."[14] MapReduce's use of input files and lack of schema support prevents the performance improvements enabled by common database system features such as B-trees and hash partitioning, though projects such as Pig (or PigLatin) and Sawzall are starting to address these problems.

Another article, by Greg Jorgensen, rejects these views.[15] Jorgensen asserts that DeWitt and Stonebraker's entire analysis is groundless, as MapReduce was never designed nor intended to be used as a database.

DeWitt and Stonebraker have subsequently published a detailed benchmark study comparing performance of MapReduce and RDBMS approaches on several specific problems.[16] They concluded that databases offer real advantages for many kinds of data use, especially on complex processing or where the data is used across an enterprise, but that MapReduce may be easier for users to adopt for simple or one-time processing tasks. They have published the data and code used in their study to allow other researchers to do comparable studies.

Google has been granted a patent on MapReduce. However, there have been claims that this patent should not have been granted because MapReduce is too similar to existing products. For example, map and reduce functionality can be very easily implemented in Oracle's PL/SQL database oriented language.[17]
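The default sharding rule described under "Partition function" above (hash the key, then take it modulo the number of reducers) can be sketched as follows. The `partition` name is illustrative, not an API of any particular framework; a stable checksum (CRC32) stands in for the hash so that every node would compute the same shard for a given key, unlike Python's built-in string hash, which is randomized per process.

```python
import zlib

def partition(key, num_reducers):
    # Hash the key and take the remainder. Every occurrence of the
    # same key maps to the same reducer index, which is exactly what
    # the reduce phase requires; distinct keys spread across shards.
    return zlib.crc32(key.encode("utf-8")) % num_reducers

# Repeated keys land in one shard; all indices fall in [0, num_reducers).
keys = ["apple", "banana", "cherry", "apple"]
shards = [partition(k, 4) for k in keys]
```

A uniform hash gives an approximately even number of keys per shard, but note that load balance also depends on the value distribution: one very hot key (a "skewed" key) still sends all of its values to a single reducer, which is why the text warns about slow reducers holding up the whole operation.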

Conferences and users groups
• The First International Workshop on MapReduce and its Applications (MAPREDUCE'10) [18] was held with the HPDC conference and OGF'29 meeting in Chicago, IL.
• MapReduce Users Groups [19] around the world.

References

Specific references:
[1] Google spotlights data center inner workings | Tech news blog - CNET News.com (http://news.cnet.com/8301-10784_3-9955184-7.html)
[2] US Patent 7,650,331: "System and method for efficient large-scale data processing" (http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/PTO/srchnum.htm&r=1&f=G&l=50&s1=7,650,331.PN.&OS=PN/7,650,331&RS=PN/7,650,331)
[3] "Our abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages." — "MapReduce: Simplified Data Processing on Large Clusters", by Jeffrey Dean and Sanjay Ghemawat; from Google Labs.
[4] "Google's MapReduce Programming Model — Revisited" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.5859&rep=rep1&type=pdf) — paper by Ralf Lämmel; from Microsoft.
[5] Cheng-Tao Chu, Sang Kyun Kim, Yi-An Lin, YuanYuan Yu, Gary Bradski, Andrew Ng, and Kunle Olukotun. "Map-Reduce for Machine Learning on Multicore" (http://www.willowgarage.com/map-reduce-machine-learning-multicore). NIPS 2006.
[6] Colby Ranger, Ramanan Raghuraman, Arun Penmetsa, Gary Bradski, and Christos Kozyrakis. "Evaluating MapReduce for Multi-core and Multiprocessor Systems" (http://www.willowgarage.com/evaluating-mapreduce-multi-core-and-multiprocessor-systems). HPCA 2007, Best Paper.
[7] Bingsheng He, Wenbin Fang, Naga K. Govindaraju, Qiong Luo, and Tuyong Wang. "Mars: a MapReduce framework on graphics processors" (http://portal.acm.org/citation.cfm?id=1454152). PACT'08.
[8] Bing Tang, Moca, M., Chevalier, S., Haiwu He, and Fedak, G. "Towards MapReduce for Desktop Grid Computing" (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5662789). 3PGCIC'10.
[9] Heshan Lin, Xiaosong Ma, Jeremy Archuleta, Wu-chun Feng, Mark Gardner, and Zhe Zhang. "MOON: MapReduce On Opportunistic eNvironments" (http://portal.acm.org/citation.cfm?id=1839332). HPDC'10.
[10] Fabrizio Marozzo, Domenico Talia, and Paolo Trunfio. "A Peer-to-Peer Framework for Supporting MapReduce Applications in Dynamic Cloud Environments" (http://www.springerlink.com/content/h17r882710314147/). In: Cloud Computing: Principles, Systems and Applications, N. Antonopoulos and L. Gillam (Editors), Springer, 2010, chapt. 7, pp. 113–125. ISBN: 978-1-84996-240-7.
[11] Adam Dou, Vana Kalogeraki, Dimitrios Gunopulos, Taneli Mielikainen, and Ville H. Tuulos. "Misco: a MapReduce framework for mobile systems" (http://portal.acm.org/citation.cfm?id=1851489). HPDC'10.
[12] "How Google Works" (http://www.baselinemag.com/article2/0,1540,1985048,00.asp). baselinemag.com. "As of October, Google was running about 3,000 computing jobs per day through MapReduce, representing thousands of machine-days, according to a presentation by Dean. Among other things, these batch routines analyze the latest Web pages and update Google's indexes."
[13] "Database Experts Jump the MapReduce Shark" (http://typicalprogrammer.com/?p=16).
[14] David DeWitt and Michael Stonebraker. "MapReduce: A major step backwards" (http://databasecolumn.vertica.com/database-innovation/mapreduce-a-major-step-backwards/). databasecolumn.com. Retrieved 2008-08-27.
[15] Greg Jorgensen. "Relational Database Experts Jump The MapReduce Shark" (http://typicalprogrammer.com/?p=16). typicalprogrammer.com. Retrieved 2009-11-11.
[16] Andrew Pavlo, E. Paulson, A. Rasin, D. J. Abadi, D. J. DeWitt, S. Madden, and M. Stonebraker. "A Comparison of Approaches to Large-Scale Data Analysis" (http://database.cs.brown.edu/projects/mapreduce-vs-dbms/). Brown University. Retrieved 2010-01-11.
[17] Curt Monash. "More patent nonsense — Google MapReduce" (http://www.dbms2.com/2010/02/11/google-mapreduce-patent/). dbms2.com. Retrieved 2010-03-07.
[18] http://graal.ens-lyon.fr/mapreduce/
[19] http://mapreduce.meetup.com/

General references:
• Dean, Jeffrey & Ghemawat, Sanjay (2004). "MapReduce: Simplified Data Processing on Large Clusters" (http://labs.google.com/papers/mapreduce.html). Retrieved Apr. 6, 2011, from Google Labs.
• Matt Williams (2009). "Understanding Map-Reduce" (…/matt/2009/01/18/understanding-mapreduce/). Retrieved Apr. 13, 2011.

External links

Papers
• "A Hierarchical Framework for Cross-Domain MapReduce Execution" (http://pti.iu.edu/pubs/hierarchical-framework-cross-domain-mapreduce-execution) — paper by Yuan Luo, Zhenhua Guo, Yiming Sun, Beth Plale and Judy Qiu from Indiana University, and Wilfred Li from University of California, San Diego.
• "Evaluating MapReduce for Multi-core and Multiprocessor Systems" (http://csl.stanford.edu/~christos/publications/2007.cmp_mapreduce.hpca.pdf) — paper by Colby Ranger, Ramanan Raghuraman, Arun Penmetsa, Gary Bradski, and Christos Kozyrakis, from Stanford University.
• "Map-Reduce-Merge: Simplified Relational Data Processing on Large Clusters" (http://portal.acm.org/beta/citation.cfm?doid=1247480.1247602) — paper by Hung-Chih Yang, Ali Dasdan, Ruey-Lung Hsiao, and D. Stott Parker, from Yahoo and UCLA, published in Proc. of ACM SIGMOD, pp. 1029–1040, 2007. (This paper shows how to extend MapReduce for relational data processing.)
• "A Peer-to-Peer Framework for Supporting MapReduce Applications in Dynamic Cloud Environments" (http://www.springerlink.com/content/h17r882710314147/) — paper by Fabrizio Marozzo, Domenico Talia, and Paolo Trunfio, from University of Calabria, published in Cloud Computing: Principles, Systems and Applications, N. Antonopoulos and L. Gillam (Editors), Springer, 2010, chapt. 7, pp. 113–125. ISBN: 978-1-84996-240-7.
• "Tiled-MapReduce: Optimizing Resource Usages of Data-parallel Applications on Multicore with Tiling" (http://ppi.fudan.edu.cn/_media/publications.ostrich-pact10.pdf?id=rong_chen&cache=cache) — paper by Rong Chen, Haibo Chen, and Binyu Zang from Fudan University, published in Proc. PACT 2010. It presents the Tiled-MapReduce programming model, which optimizes resource usages of MapReduce applications on multicore environments using a tiling strategy.
• "Mars: A MapReduce Framework on Graphics Processors" (http://www.cse.ust.hk/catalac/users/saven/GPGPU/MapReduce/PACT08/171.pdf) — paper by Bingsheng He, Wenbin Fang, Naga K. Govindaraju, Qiong Luo, and Tuyong Wang, from Hong Kong University of Science and Technology, published in Proc. PACT 2008. It presents the design and implementation of MapReduce on graphics processors.
• "Why MapReduce Matters to SQL Data Warehousing" (http://www.dbms2.com/2008/08/26/why-mapreduce-matters-to-sql-data-warehousing/) — analysis related to the August 2008 introduction of MapReduce/SQL integration by Aster Data Systems and Greenplum.
• "MapReduce for the Cell B.E. Architecture" (http://pages.cs.wisc.edu/~dekruijf/docs/mapreduce-cell.pdf) — paper by Marc de Kruijf and Karthikeyan Sankaralingam, from University of Wisconsin–Madison.
• "A New Computation Model for Rack-Based Computing" (http://infolab.stanford.edu/~ullman/pub/mapred.pdf) — paper by Foto N. Afrati and Jeffrey D. Ullman, from Stanford University; not published as of Nov 2009. This paper is an attempt to develop a general model in which one can compare algorithms for computing in an environment similar to what map-reduce expects.
• "Interpreting the Data: Parallel Analysis with Sawzall" (http://labs.google.com/papers/sawzall.html) — paper by Rob Pike, Sean Dorward, Robert Griesemer, and Sean Quinlan, from Google Labs.
• FLuX: the Fault-tolerant, Load Balancing eXchange operator (http://citeseer.ist.psu.edu/546646.html, http://citeseer.ist.psu.edu/647742.html) from UC Berkeley provides an integration of partitioned parallelism with process pairs. This results in a more pipelined approach than Google's MapReduce, with instantaneous failover, but with additional implementation cost.
• FPMR: MapReduce framework on FPGA (http://portal.acm.org/citation.cfm?id=1723112.1723129) — paper by Yi Shan, Bo Wang, Jing Yan, Yu Wang, Ningyi Xu, and Huazhong Yang (2010), in FPGA '10, Proceedings of the 18th annual ACM/SIGDA international symposium on Field programmable gate arrays.
• "Scheduling divisible MapReduce computations" (http://dx.doi.org/10.1016/j.jpdc.2010.12.004) — paper by Joanna Berlińska from Adam Mickiewicz University and Maciej Drozdowski from Poznan University of Technology; Journal of Parallel and Distributed Computing 71 (2011) 450–459, doi:10.1016/j.jpdc.2010.12.004. It presents a scheduling and performance model of MapReduce.

google.eu) research project. Hueske.de/menue/home/parameter/en/) published in Proc. reading material. Kao. a community-based experiment was conducted in 1991 to extend SIMNET to link the US Army's Corps Battle Simulation (CBS) [1] and the US Air Force's Air Warfare Simulation (AWSIM) [2]. Markl. Battré.MapReduce • "Nephele/PACTs: A Programming Model and Execution Framework for Web-Scale Analytical Processing" (http://stratosphere. and D. O. Participating simulations adapted for use with ALSP.com/edu/) contains a comprehensive introduction to MapReduce including lectures. Kao.de/menue/home/parameter/en/ ) published in Proc. . S.google. Heimel. and programming assignments.2008. Ewen. Warneke from TU Berlin (http://www.googlepages.tu-berlin. ALSP consists of: 1.Comparing Data Parallel Programming Models" (http://stratosphere. F.stratosphere. F. O. Replaced by the High Level Architecture (simulation) (HLA). • "MapReduce and PACT . Alexandrov. Based on prototype efforts. part of 2008 Independent Activities Period at MIT. The first ALSP confederation. M. of BTW 2011.html) (manuscript) Educational courses • Cluster Computing and MapReduce (http://code.eu/files/NephelePACTs_10. supported three major exercises in 1992. • MapReduce in a Week (http://code. The success of the prototype and users' recognition of the value of this technology to the training community led to development of production software. V. V.com/). it was used by the US military to link analytic and training simulations. a generalization of MapReduce.google. and D. developed in the Stratosphere (http://www.eu/files/ ComparingMapReduceAndPACTs_11.pdf) -. The paper introduces the PACT programming model. Warneke from TU Berlin (http://www. 2. Books • Jimmy Lin and Chris Dyer. html) course from Google Code University (http://code. and 3. of ACM SoCC 2010. Nijkamp. 
Hueske.com/edu/) contains video lectures and related course materials from a series of lectures that was taught to Google software engineering interns during the Summer of 2007. providing air-ground interactions between CBS and AWSIM.paper by D. the Defense Advanced Research Projects Agency (DARPA) employed The MITRE Corporation to study the application of distributed interactive simulation principles employed in SIMNET to aggregate-level constructive training simulations. ALSP Infrastructure Software (AIS) that provides distributed runtime simulation support and management.umd.paper by A. Ewen.html) course from Google Code University (http://code. 7 Aggregate Level Simulation Protocol The Aggregate Level Simulation Protocol (ALSP) is a protocol and supporting software that enables simulations to interoperate with one another.google. "Data-Intensive Text Processing with MapReduce" (http://www. • MapReduce course (http://mr.com/edu/submissions/mapreduce-minilecture/listing. ALSP Logo History In 1990. taught by engineers of Google Boston.iap. S. Markl.edu/ ~jimmylin/book.com/edu/submissions/mapreduce/listing.umiacs. E.tu-berlin.pdf) -. A reusable ALSP Interface consisting of generic data exchange message protocols.

Motivation

In 1989, DARPA was funding development of a distributed tank trainer system called SIMNET, where individual tank-crew trainers were connected over local area networks and the DSI to cooperate in a single, virtual battlefield. The same year, the Warrior Preparation Center (WPC) in Einsiedlerhof, Germany hosted the computerized military exercise ACE-89. The Defense Advanced Research Projects Agency (DARPA) used ACE-89 as a technology insertion opportunity by funding deployment of the Defense Simulation Internet (DSI). Its packetized video teleconferencing brought general officers of NATO nations face-to-face during a military exercise for the first time; this was well-received. But the software application of DSI, distribution of the Ground Warfare Simulation (GRWSIM), was less successful. The GRWSIM simulation was unreliable and its distributed database was inconsistent, degrading the effectiveness of the exercise. The success of SIMNET, the disappointment of ACE-89, and the desire to combine existing combat simulations prompted DARPA to initiate the research that led to ALSP.

By 1995, ALSP had transitioned to a multi-Service program with simulations representing the US Army (CBS), the US Air Force (AWSIM), the US Navy (RESA), the US Marine Corps (MTWS [3]), electronic warfare (JECEWSI), logistics (CSSTSS), and intelligence (TACSIM [4]). The program had also transitioned from DARPA's research and development emphasis to mainstream management by the US Army's Program Executive Office for Simulation, Training, and Instrumentation (PEO STRI [5]).

Contributions

ALSP developed and demonstrated key aspects of distributed simulation, many of which were applied in the development of HLA:
• An architecture that permits simulations to continue to use their existing architectures while participating in an ALSP confederation.
• No central node, so that simulations can join and depart from the confederation at will.
• Geographic distribution, where simulators can be distributed to different geographic locations yet exercise in the same simulated environment.
• Object ownership, so that each simulation controls its own resources, fires its own weapons, and determines appropriate damage to its systems when fired upon. This includes multiple simulations controlling attributes of the same object.
• A message-based protocol for distributing information from one simulation to all other simulations.
• Time management, so that the times for all simulations appear the same to users and so that event causality is maintained – events should occur in the same sequence in all simulations.
• Data management, which permits all simulations to share information in a commonly understood manner even though each has its own representation of data.

Basic Tenets

DARPA sponsored the design of a general interface between large, existing, aggregate-level combat simulations. Aggregate-level combat simulations use Lanchestrian models of combat rather than individual physical weapon models and are typically used for high-level training.

The schemes for internal state representation differ among existing simulations. To design a mechanism that permits existing simulations to interact, two strategies are possible: (1) define an infrastructure that translates between the representations in each simulation, or (2) define a common representational scheme and require all simulations to map to that scheme. The first strategy requires few perturbations to existing simulations; however, this solution does not scale well. Because of an underlying requirement for scalability, the ALSP design adopted the second strategy. ALSP supports a confederation of simulations that coordinate using a common model, necessitating a common representational system and concomitant mapping and control mechanisms. ALSP prescribes that each simulation maps between the representational scheme of the confederation and its own representational scheme. Despite representational differences, interaction is facilitated entirely through the interconnection infrastructure.

Several principles of SIMNET applied to aggregate-level simulations:
• Dynamic configurability. Simulations may join and depart an exercise without restriction.
• Autonomous entities. Each simulation controls its own resources, fires its own weapons and, when one of its objects is hit, conducts damage assessment locally.
• Communication by message passing. A simulation uses a message-passing protocol to distribute information to all other simulations.
• Geographic distribution. Simulations can reside in different geographic locations yet exercise over the same logical terrain.
• Architecture independence. Architectural characteristics (implementation language, user interface, and time flow mechanism) of existing simulations differed; the architecture implied by ALSP must be unobtrusive to existing architectures.

The ALSP challenge had requirements beyond those of SIMNET:
• Simulation time management. In stand-alone simulations, simulation time is independent of wall-clock time. For the results of a distributed simulation to be "correct," time must be consistent across all simulations.[6]
• Data management.

The representational mapping described above is one of the three ways in which a simulation must be altered to participate in an ALSP confederation. The remaining modifications are:
• Recognizing that the simulation doesn't own all of the objects that it perceives.
• Modifying the simulation's internal time advance mechanism so that it works cooperatively with the other simulations within the confederation.

Conceptual Framework

A conceptual framework is an organizing structure of concepts that facilitates simulation model development.[7] Common conceptual frameworks include: event scheduling, activity scanning and process interaction. The ALSP conceptual framework is object-based, where a model is composed of objects that are characterized by attributes to which values are assigned. Object classes are organized hierarchically in much the same manner as with object-oriented programming languages. Typically, objects come into (and go out of) existence with the passage of simulation time, and the disposition of these objects is solely the purview of the simulation. When acting within a confederation, the simulation-object relationship is more complicated.

The simulation-object ownership property is dynamic. In fact, during its lifetime an object may be owned by more than one simulation, and several simulations may own different attributes of a given object. By convention, a simulation owns an object if it owns the "identifying" attribute of the object. Owning an object's attribute means that a simulation is responsible for calculating and reporting changes to the value of the attribute.

Objects not owned by a particular simulation but within the area of perception for the simulation are known as ghosts. Ghosts are local copies of objects owned by other simulations. When a simulation creates an object, it reports this fact to the confederation to let other simulations create ghosts. Likewise, when a simulation deletes an object, it reports this fact to enable ghost deletion. Whenever a simulation takes an action between one of its objects and a ghost, this is an interaction; the simulation must report this to the confederation.

The term confederation model describes the object hierarchy, attributes and interactions supported by a confederation. These fundamental concepts provide the basis for the remainder of the presentation.

ALSP Infrastructure Software (AIS)

The object-based conceptual framework adopted by ALSP defines classes of information that must be distributed. The ALSP Infrastructure Software (AIS) provides data distribution and process coordination. Principal components of AIS are the ALSP Common Module (ACM) and the ALSP Broadcast Emulator (ABE).

ALSP Common Module (ACM)

The ALSP Common Module (ACM) provides a common interface for all simulations and contains the essential functionality for ALSP. One ACM instance exists for each simulation in a confederation. ACM services require time management and object management; they include:
• Coordinate simulations joining and departing from a confederation.
• Coordinate simulation local time with confederation time.
• Filter incoming messages, so that simulations receive only messages of interest.
• Coordinate ownership of object attributes.
• Enforce attribute ownership so that simulations report values only for attributes they own, and permit ownership migration.

Time management

Joining and departing a confederation is an integral part of the time management process. When a simulation joins a confederation, all other ACMs in the confederation create input message queues for the new simulation. Conversely, when a simulation departs a confederation, the other ACMs delete their input message queues for that simulation.

ALSP time management facilities support discrete event simulation using either asynchronous (next-event) or synchronous (time-stepped) time advance mechanisms.[8]

The mechanism to support next-event simulations is:
1. A simulation sends an event-request message to its ACM with a time parameter corresponding to simulation time T (the time of its next local event).
2. If the ACM has messages for its simulation with timestamps older than or the same as T, the ACM sends the oldest one to the simulation.
3. If all messages have timestamps newer than T, the ACM sends a grant-advance to the simulation, giving it permission to process its local event at time T.
4. The simulation sends any messages resulting from the event to its ACM.
5. The simulation repeats from step (1).

The mechanism to support time-stepped simulation is:
1. The simulation processes all events for some time interval.
2. The simulation sends an advance request to its ACM for time T + ΔT.
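The next-event mechanism above can be sketched as a toy model of the ACM side of the exchange. This is a simplified illustration, not the actual AIS interface: the class and message names are invented.

```python
import heapq

class ToyACM:
    """Minimal model of the ACM in the next-event mechanism: on an
    event-request for time T, deliver the oldest pending message with
    timestamp <= T, otherwise grant the time advance."""

    def __init__(self):
        self._queue = []  # heap of (timestamp, message)

    def enqueue(self, timestamp, message):
        # A message arriving from the confederation for this simulation.
        heapq.heappush(self._queue, (timestamp, message))

    def event_request(self, t):
        # Step 2: a message with timestamp older than or equal to T
        # is sent to the simulation first.
        if self._queue and self._queue[0][0] <= t:
            return ("message", heapq.heappop(self._queue))
        # Step 3: otherwise the simulation may process its event at T.
        return ("grant-advance", t)
```

A simulation driving this loop would call event_request with the time of its next local event, handle any delivered messages, and repeat until it receives the grant-advance.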

3. The ACM sends all messages with time stamps on the interval up to T + ΔT to the simulation, followed by a grant-advance.
4. The simulation sends any messages for the interval to the ACM.
5. The simulation repeats from step (1).

AIS includes a deadlock avoidance mechanism using null messages. The mechanism requires that the processes have exploitable lookahead characteristics.

Object management

The ACM administers attribute database and filter information. The attribute database maintains objects known to the simulation, either owned or ghosted, and attributes of those objects that the simulation currently owns. For any object class, attributes may be members of:
• Create set. Attributes minimally required to represent an object.
• Interest set. Useful, but not mandatory, information.
• Update set. Object attribute values reported by a simulation to the confederation.

Information flow across the network can be further restricted through filters. Filtering provides discrimination by (1) object class, (2) attribute value or range, and (3) geographic location. Filters also define the interactions relevant to a simulation. When an attribute update arrives from the confederation, the ACM proceeds as follows:

If (an update passes all filter criteria)
|  If (the object is known to the simulation)
|  |  Send new attribute values to simulation
|  Else (object is unknown)
|  |  If (enough information is present to create a ghost)
|  |  |  Send a create message to the simulation
|  |  Else (not enough information is known)
|  |  |  Store information provided
|  |  |  Send a request to the confederation for missing data
Else (the update fails filter criteria)
|  If (the object is known to the simulation)
|  |  Send a delete message to the simulation
|  Else
|  |  Discard the update data

The ownership and filtering information maintained by the ACM provide the information necessary to coordinate the transfer of attribute ownership between simulations.

ALSP Broadcast Emulator (ABE)

An ALSP Broadcast Emulator (ABE) facilitates the distribution of ALSP information. It receives a message on one of its communications paths and retransmits the message on all of its remaining communications paths. This permits configurations where all ALSP components are local to one another (on the same computer or on a local area network). It also permits configurations where sets of ACMs communicate with their own local ABE, with inter-ABE communication over wide area networks.
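The ACM's update-handling rules can be transcribed directly into a small function. This is a sketch of the decision logic only; the message names and the predicate-function interface are invented for illustration.

```python
def handle_update(update, known_objects, passes_filter, can_create_ghost):
    """Decide what the ACM sends its simulation for one attribute update,
    following the filtering rules described above.

    update: (object_id, attribute_values)
    known_objects: set of object ids the simulation already knows
    passes_filter / can_create_ghost: caller-supplied predicates
    """
    obj_id, values = update
    if passes_filter(values):
        if obj_id in known_objects:
            return ("update", obj_id, values)    # forward new attribute values
        if can_create_ghost(values):
            return ("create", obj_id, values)    # enough data to ghost the object
        return ("request-missing-data", obj_id)  # store what we have, ask confederation
    if obj_id in known_objects:
        return ("delete", obj_id)                # object fell outside the filters
    return ("discard", obj_id)                   # unknown and uninteresting
```

The four return cases correspond one-to-one with the branches of the pseudocode above.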

Communication Scheme

The ALSP communication scheme consists of (1) an inter-component communications model that defines the transport layer interface that connects ALSP components, (2) a layered protocol for simulation-to-simulation communication, (3) a message filtering scheme to define the information of interest to a simulation, and (4) a mechanism for intelligent message distribution.

Inter-component Communications Model

AIS employs a persistent connection communications model[9] to provide the inter-component communications. The transport layer interface was dictated by simulation requirements and the transport layer interfaces on AIS-supporting operating systems: local VMS platforms used shared mailboxes, non-local VMS platforms used either Transparent DECnet or TCP/IP, and UNIX-like platforms use TCP/IP.

ALSP Protocol

The ALSP protocol is based on a set of orthogonal issues that comprise ALSP's problem space: simulation-to-simulation communication, object management, and time management. These issues are addressed by a layered protocol that has at the top a simulation protocol, with underlying simulation/ACM, object management, time management, and event distribution protocols.

Simulation Protocol

The simulation protocol is the main level of the ALSP protocol. The semantics of the protocol are confederation-dependent: objects in ALSP are defined by a unique id number, a class, and a set of attributes associated with a class, where the set of classes, class attributes, and interaction parameters are variable. Interactions between objects are identified by kind; interaction kinds are described by parameters, just as objects are described by attributes.

The simulation protocol is text-based and is defined by an LALR(1) context-free grammar. Therefore, the syntactical representation of the simulation protocol may be defined without a priori knowledge of the semantics of the objects and interactions of any particular confederation. It consists of four message types:
• Update. As a simulation changes the state of its objects, it sends update messages to the ACM that provide initial or changed attribute values. The ACM then distributes the information via AIS to other simulations that have indicated interest.
• Delete. When a simulation causes one of its objects to cease to exist, the simulation sends a delete message to inform other simulations.
• Interaction. When a simulation's object engages either another simulation's object or a geographic area, the simulation sends an interaction message to the ACM for further dissemination to other interested simulations.
• Refresh request. A simulation can request an update of a set of attribute values for any object or class of objects by sending a refresh request message to the confederation.

Simulation/ACM Connection Protocol

The simulation/ACM connection protocol provides services for managing the connection between a simulation and its ACM and a method of information exchange between a simulation and its ACM. Additional protocol messages provide connection state, filter registration, object resource control, attribute lock control, confederation save control, and time control services. Two services control distribution of simulation protocol messages: events and dispatches. Event messages are time-stamped and delivered in a temporally-consistent order. Dispatch messages are delivered as soon as possible, without regard for simulation time.
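Since the actual grammar is confederation-dependent, a concrete wire syntax cannot be quoted here; purely as an illustration of a text-based encoding of the four message types, a hypothetical encoder might look like this (the parenthesized, key=value syntax below is invented):

```python
def encode(msg_type, obj_id=None, **fields):
    """Encode one simulation-protocol message as text.
    msg_type must be one of the four types described above."""
    assert msg_type in ("update", "delete", "interaction", "refresh-request")
    parts = [msg_type]
    if obj_id is not None:
        parts.append("id=%s" % obj_id)
    # Sort fields so the encoding is deterministic.
    parts.extend("%s=%s" % (k, v) for k, v in sorted(fields.items()))
    return "(" + " ".join(parts) + ")"
```

For example, encode("update", obj_id=42, location="12.5") produces a single parsable line that an LALR(1) grammar could accept without knowing what "location" means, which is the point the section makes about separating syntax from confederation semantics.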

Object Management Protocol

The object management protocol is a peer-level protocol that sits below the simulation protocol and provides object management services. ACMs use it solely for the creation, acquisition, and release of object attributes. Locks implement attribute ownership: a simulation "owns" an attribute if it has that attribute locked, and "owns" an object if it has its id attribute locked. A primary function of the object management protocol is to ensure that a simulation only updates attributes for which it has acquired a lock. Services provided by the simulation/ACM protocol are used by the simulations to interact with the ACM's attribute locking mechanism. These services allow AIS to manage distributed object ownership. Distributed object ownership presumes that no single simulation must own all objects in a confederation, but many simulations require knowledge of some objects. As discussed earlier, if a simulation is interested in objects it does not own, it can ghost them (track their locations and state) and model interactions to them from owned objects.

The object manager in the ACM manages the objects and object attributes of the owned and ghosted objects known to the ACM. From the ACM's perspective, objects come into existence through the registration process performed by its simulation or through the discovery of objects registered by other simulations. A simulation uses simulation protocol update messages to discover objects owned by other simulations. The initial state of attribute locks for registered objects and discovered objects is as follows:
• Object Registration places each object-attribute pair in the locked state. The simulation may optionally specify attributes to be in the unlocked state.
• Object Discovery adds an object to the object database as a ghosted object.

Each attribute of each object known to a given ACM has a status that assumes one of three values:
• Locked. A simulation controls the attribute and may update the attribute value.
• Unlocked. No simulation currently controls the attribute; the state of control is held elsewhere in the confederation. Any simulation asking for control is granted control.
• Gone. All of the attributes for this object are marked with a status of gone.

Message Filtering

The ACM uses simulation message filtering to evaluate the content of a message received from the confederation. The ACM delivers messages to its simulation that are of interest, i.e., that pass filtering criteria, and discards those that are not. The ACM filters two types of messages:
• Update messages. The ACM evaluates update messages based on the update message filtering criteria that the simulation provides. When an ACM receives an update message there are four possible outcomes: (1) the ACM discards the message; (2) the ACM sends the simulation a create message; (3) the ACM sends the simulation the update message; or (4) the ACM sends the simulation a delete message.
• Interaction messages. An ACM may discard interaction messages because of the kind parameter. The kind parameter has a hierarchical structure similar to the object class structure. The simulation informs its ACM of the interaction kinds that should pass or fail the interaction filter.

Time Management Protocol

The time management protocol is also a peer-level protocol that sits below the simulation protocol. It provides time management services for synchronizing simulation time among ACMs, translators and simulations for a particular value of simulation time. The protocol provides services for the distributed coordination of a simulation's entrance into the confederation, time progression, and confederation saves. The join/resign services and time synchronization mechanisms are described earlier. Coordination is required to produce a consistent snapshot of all ACMs: the coordination of status, acquisition and release of object attributes, and verification (of the consistency of the distributed object database) uses the object management protocol. The save mechanism provides fault tolerance.
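The three attribute statuses described under the object management protocol can be modeled as a tiny state machine. This sketch captures only the rules stated above (updates allowed only while locked, any requester granted an unlocked attribute); the class and method names are invented.

```python
LOCKED, UNLOCKED, GONE = "locked", "unlocked", "gone"

class AttributeLock:
    """Per-attribute status: a simulation may update the value only
    while it holds the lock; a gone attribute belongs to a deleted object."""

    def __init__(self, status=UNLOCKED):
        self.status = status

    def acquire(self):
        # Any simulation asking for control of an unlocked attribute is granted it.
        if self.status == GONE:
            raise ValueError("object no longer exists")
        self.status = LOCKED

    def release(self):
        if self.status == LOCKED:
            self.status = UNLOCKED

    def mark_gone(self):
        self.status = GONE

    def may_update(self):
        return self.status == LOCKED
```

Registration would create such a lock per object-attribute pair in the locked state; discovery would create ghosted entries without acquiring the lock.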

Message Distribution

To minimize message traffic between components in an ALSP confederation, AIS employs a form of intelligent message routing that uses the Event Distribution Protocol (EDP).[10] The EDP allows ACMs to inform the other AIS components about the update and interaction filters registered by their simulations. In the case of update messages, distribution of this information allows ACMs to distribute data only on classes (and attributes of classes) that are of interest to the confederation. For interaction messages, the process is similar, except that the kind parameter in the interaction message determines where the message is sent. The ABEs also use this information to send only information that is of interest to the components they serve.

References
[1] http://www.peostri.army.mil/products/cbs/
[2] https://afmsrr.afams.af.mil/index.cfm?RID=SMN_AF_1000000
[3] http://www.29palms.usmc.mil/dirs/ont/mands/mwts.asp
[4] http://www.peostri.army.mil/products/tacsim
[5] http://www.peostri.army.mil
[6] Lamport, L. (1978). "Time, Clocks, and the Ordering of Events in a Distributed System." Communications of the ACM, 21(7), July, pp. 558-565.
[7] Balci, O., Nance, R.E., Derrick, E.J., Page, E.H., and Bishop, J.L. (1990). "Model Generation Issues in a Simulation Support Environment." In: Proceedings of the 1990 Winter Simulation Conference, New Orleans, LA, 9–12 December, pp. 257-263.
[8] Nance, R.E. (1971). "On Time Flow Mechanisms for Discrete Event Simulations." Management Science, 18(1), September, pp. 59-93.
[9] Boggs, D.R., Shoch, J.F., Taft, E.A., and Metcalfe, R.M. (1979). "PUP: An Internetwork Architecture." Report CSL-79-10, XEROX Palo Alto Research Center, July.
[10] Weatherly, R.M., Wilson, A.L., and Griffin, S.P. (1993). "ALSP - Theory, Experience, and Future Directions." In: Proceedings of the 1993 Winter Simulation Conference, Los Angeles, CA, 12–15 December, pp. 1068-1072.

Amazon Relational Database Service

Amazon Relational Database Service[1] (Amazon RDS) is a distributed relational database service by Amazon.com. It is a web service running "in the cloud" that provides users with a relational database for use in their applications. Amazon RDS makes it easy to set up, operate, and scale a relational database[2]. Complex administration processes like patching the database software, backing up the database, and enabling point-in-time recovery are managed automatically[3]. Amazon RDS was first released on 22 October 2009[4] [5]. In June 2011, Oracle database support was added.[6] Amazon RDS supports the MySQL and Oracle database engines.

Features

Amazon RDS is simple to use. A new DB instance can be launched from the AWS Management Console [7] or using the Amazon RDS APIs [8]. Scaling storage and compute resources can be performed by a single API call. Monitoring the compute and storage resource utilization of a DB Instance is also easy: the performance metrics are available using the AWS Management Console or Amazon CloudWatch APIs [9]. Amazon RDS offers many different features to support different use cases. Some of the major features are:

Multi AZ deployment
Multi-Availability Zone deployments are targeted for production environments [10] and provide enhanced availability and data durability for MySQL instances. When you create or modify your DB Instance to run as a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous "standby" replica in a different Availability Zone [11] (independent infrastructure in a physically separate location). In the event of planned database maintenance or unplanned service disruption, Amazon RDS automatically fails over to the up-to-date standby, ensuring that database operations resume quickly without administrative intervention.

Read Replicas
Read Replicas make it easy to take advantage of MySQL's native, asynchronous replication functionality, and help in scaling out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. They can also be used for serving read traffic when the primary database is unavailable.

Reserved Instances
Amazon RDS DB instances come in two packages: On-Demand DB Instances and Reserved DB Instances [12]. The two instance types are exactly the same except for their billing. On-Demand instances are billed [13] at an ongoing hourly usage rate, while Reserved DB Instances require a low, one-time, up-front fee and in turn provide a significant discount on the hourly usage charge for that instance. Reserved DB Instances thus enable you to take advantage of the rich functionality of Amazon RDS at lower cost, and can provide substantial savings over owning database assets or running only On-Demand DB instances.
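A Multi-AZ deployment is requested with a single flag on the create-instance call. The sketch below builds the parameters for such a call using the later AWS SDK for Python (boto3), which postdates this article; the identifier, class, and credential values are placeholders, not recommendations.

```python
def multi_az_params(identifier, instance_class="db.m1.small", storage_gb=5):
    """Build keyword arguments for an RDS CreateDBInstance call that
    provisions a Multi-AZ MySQL deployment (all values are placeholders)."""
    return {
        "DBInstanceIdentifier": identifier,
        "DBInstanceClass": instance_class,
        "Engine": "mysql",
        "AllocatedStorage": storage_gb,      # in GB
        "MasterUsername": "admin",           # placeholder credentials
        "MasterUserPassword": "change-me",
        "MultiAZ": True,                     # provision the synchronous standby
    }

# With AWS credentials configured, the dict would be passed to boto3, e.g.:
#   boto3.client("rds").create_db_instance(**multi_az_params("mydb"))
```

Setting MultiAZ to True is what triggers the automatic provisioning of the standby replica described above; failover to it requires no further configuration.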

Database Instance Types

Amazon RDS currently supports six DB Instance Classes, to support different types of workloads [14]:
• Small DB Instance: 1.7 GB memory, 1 ECU (1 virtual core with 1 ECU), 64-bit platform, Moderate I/O Capacity
• Large DB Instance: 7.5 GB memory, 4 ECUs (2 virtual cores with 2 ECUs each), 64-bit platform, High I/O Capacity
• Extra Large DB Instance: 15 GB of memory, 8 ECUs (4 virtual cores with 2 ECUs each), 64-bit platform, High I/O Capacity
• High-Memory Extra Large Instance: 17.1 GB memory, 6.5 ECUs (2 virtual cores with 3.25 ECUs each), 64-bit platform, High I/O Capacity
• High-Memory Double Extra Large DB Instance: 34 GB of memory, 13 ECUs (4 virtual cores with 3.25 ECUs each), 64-bit platform, High I/O Capacity
• High-Memory Quadruple Extra Large DB Instance: 68 GB of memory, 26 ECUs (8 virtual cores with 3.25 ECUs each), 64-bit platform, High I/O Capacity (MySQL DB Engine Only)

References
[1] http://aws.amazon.com/rds/
[2] http://nerds.airbnb.com/mysql-in-the-cloud-at-airbnb
[3] http://aws.amazon.com/rds/amazon-rds-introduced/
[4] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2942&categoryID=291
[5] http://www.allthingsdistributed.com/2009/10/amazon_relational_database_service.html
[6] http://cloudcomputing.internet.com/applications/article.php/426926
[7] https://console.aws.amazon.com/
[8] http://docs.amazonwebservices.com/AmazonRDS/latest/APIReference/
[9] http://aws.amazon.com/developertools/2534
[10] http://en.oreilly.com/mysql2011/public/schedule/detail/19732
[11] http://aws.amazon.com/rds/faqs/#41
[12] http://aws.typepad.com/aws/2010/08/by-popular-demand-amazon-rds-reserved-db-instances.html
[13] http://aws.amazon.com/rds/pricing/
[14] http://aws.amazon.com/rds/#features

Amazon SimpleDB

Amazon SimpleDB is a distributed database written in Erlang[1] by Amazon.com. It is used as a web service in concert with Amazon Elastic Compute Cloud (EC2) and Amazon S3 and is part of Amazon Web Services. It was announced on December 13, 2007.[2] As with EC2 and S3, Amazon charges fees for SimpleDB storage, transfer, and throughput over the Internet; transfer to other Amazon Web Services is free of charge. On December 1, 2008, Amazon introduced a new pricing with a free tier[3] for 1 GB of data and 25 machine hours.[4]

Limitations

Published limitations[5]:

Store limitations (maximum):
• Domains: 250 active domains per account (more can be requested by filling a form)
• Size of domains: 10 GB
• Attributes per domain: 1,000,000,000
• Attributes per item: 256 attributes
• Size per attribute: 1024 bytes

Query limitations (maximum):
• Items returned in a query response: 2500 items
• Seconds a query may run: 5 seconds
• Attribute names per query predicate: 1 attribute name
• Comparisons per predicate: 20 operators
• Predicates per query expression: 5 predicates

References
[1] What You Need To Know About Amazon SimpleDB (http://www.satine.org/archives/2007/12/13/amazon-simpledb/)
[2] Amazon SimpleDB - Limited Beta (http://www.amazon.com/SimpleDB-AWS-Service-Pricing/b?node=342335011&no=553872011&me=A36L942TSJ2AJA)
[3] SimpleDB - Free Tier - A shift in AWS pricing (http://blog.sdbexplorer.com/2008/12/simpledb-2000000-free-requests-for-next-six-months/)
[4] Amazon SimpleDB official home page (http://www.amazon.com/b?node=342335011)
[5] SimpleDB Limits, Amazon SimpleDB Developer Guide (API Latest version) (http://docs.amazonwebservices.com/AmazonSimpleDB/latest/DeveloperGuide/index.html?SDBLimits.html)
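The published per-item limits lend themselves to client-side validation before a write is attempted. The sketch below checks only the two per-item limits from the table above (256 attributes per item, 1024 bytes per attribute name or value); the function name is illustrative, not part of any SimpleDB SDK.

```python
MAX_ATTRS_PER_ITEM = 256
MAX_ATTR_SIZE = 1024  # bytes, applies to attribute names and values alike

def validate_item(attributes):
    """Check one item's attributes against SimpleDB's published limits.
    `attributes` maps attribute names to string values."""
    if len(attributes) > MAX_ATTRS_PER_ITEM:
        raise ValueError("more than %d attributes in one item" % MAX_ATTRS_PER_ITEM)
    for name, value in attributes.items():
        for text in (name, value):
            if len(text.encode("utf-8")) > MAX_ATTR_SIZE:
                raise ValueError("attribute over %d bytes: %r" % (MAX_ATTR_SIZE, name))
    return True
```

Rejecting oversized items locally avoids a round trip that the service would refuse anyway.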

External links
• Amazon SimpleDB official home page (http://aws.amazon.com/simpledb/)
• typica - A Java client for SimpleDB and other Amazon Web Services (http://code.google.com/p/typica/)
• SimpleJPA - a Java Persistence API (JPA) implementation for Amazon's SimpleDB (http://code.google.com/p/simplejpa/)
• SDB Explorer - Tool to explore the Amazon SimpleDB service (http://www.sdbexplorer.com/)
• NSimpleDB - Open source C# implementation of the SimpleDB data model for the desktop; can also be used as a proxy for SimpleDB (http://code.google.com/p/nsimpledb/)
• M/DB - a Free Open Source API-compatible alternative to SimpleDB that can be used as a local or cloud database (http://www.mgateway.com/mdb.html)
• Simol - Open-source .NET object-persistence framework for Amazon SimpleDB written in C# (http://simol.codeplex.com/)

Amoeba distributed operating system

Amoeba
Company / developer: Andrew S. Tanenbaum
Available language(s): English
Official website: [1]

Amoeba is an open source microkernel-based distributed operating system developed by Andrew S. Tanenbaum and others at the Vrije Universiteit. The aim of the Amoeba project is to build a timesharing system that makes an entire network of computers appear to the user as a single machine. Development at the Vrije Universiteit was stopped: the files in the latest version (5.3) were last modified on 12 February 2001. Recent development is carried forward by Dr. Stefan Bosse at BSS Lab.

Amoeba runs on several platforms, including SPARC, i386, i486, 68030, Sun 3/50 and Sun 3/60. The system uses FLIP as a network protocol. The Python programming language was originally developed for this platform.[2]

References
[1] http://www.cs.vu.nl/pub/amoeba/
[2] "Why was Python created in the first place?" (http://www.python.org/doc/faq/general/#why-was-python-created-in-the-first-place). Python FAQ. Retrieved 2008-02-11.

External links
• Amoeba home page (http://www.cs.vu.nl/pub/amoeba/)
• FSD-Amoeba page at Sourceforge (http://fsd-amoeba.sourceforge.net)
• Recent development by Dr. Stefan Bosse at BSS Lab (http://www.bsslab.de/english/index.html):
• Overview (http://www.bsslab.de/english/projects_software.html)
• VAM (http://www.bsslab.de/english/vam.html): The Virtual Amoeba Machine, a distributed operating system based on Amoeba with virtual machine concepts and functional programming
• VAMNET (http://www.bsslab.de/english/vamnet.html): The Virtual Amoeba Machine Network, a new hybrid distributed operating system environment
• AMUNIX (http://www.bsslab.de/english/amunix.html): Amoeba on top of UNIX, an Amoeba extension for UNIX-like operating systems
• AMCROSS (http://www.bsslab.de/english/amcross.html): Amoeba cross-compiling environment for UNIX
• VX-Kernel (http://www.bsslab.de/english/vxkernel.html): the new VX-Amoeba kernel

Art of War Central

Art of War Central is a game server company that provides game server hosting to game player clans for a variety of PC on-line multi-player games, including (as of April 2011) Battlefield 2142, Battlefield 2, Bad Company 2, Crysis, Crysis 2, Frontlines: Fuel of War, Homefront, Medal of Honor, Quake Wars and World in Conflict.[1] While their primary business is directed at the on-line gaming community, they also offer virtual servers, voice servers, dedicated servers and web hosting services for non-gaming users.

History
Initially started in the basement of company founder and current Vice President Mr. Dallas Behling, the original intent was to provide a dedicated server for private team play; it was the first such game server on the internet. The site was registered on March 28, 2001[2] and offered game servers for Tribes 1 and Tribes 2.[3] The company began offering additional games when it introduced a beta version of Counter Strike in 2002, and has since expanded its portfolio to over 100 online games as of October 2010.[4] In 2008 international operations were launched in London, Amsterdam and Frankfurt, Germany.[5] Current ownership is listed as North American Game Technology, LLC, founded in September 2006 with Mr. Steve Phallen as President.[6]

Acquisitions
In November 2009 Art of War Central acquired two competitors in the game server and dedicated server marketplace. Their takeover of Wolf Servers and VSK Game Servers was announced in a press release on November 26, 2009.[7] WolfServers.com maintained dedicated game servers in the following markets: Atlanta, Chicago, Dallas, Los Angeles, New York, San Jose, Virginia, and Southampton/London UK.[8] VSK Game Servers was an early industry leader in developing lag- or latency-reducing technology to improve gaming performance,[9] incorporating specific performance requirements into the hardware of their in-house servers and partnering with Internap to improve routing performance.[10]

Accreditations
Art of War Central is an approved ranked server provider for America's Army Honor.[11] North American Game Technology LLC is an accredited member of the Columbus, Ohio Better Business Bureau with a rating of A.[12]

Sponsorships and League Hosting
Art of War Central has sponsored and hosted numerous on-line gaming tournaments and leagues for professional and amateur players. Organizations such as the Club Conflict Online Gaming League[13] and TeamWarfare League[14] have used Art of War Central. Art of War Central sponsored the 2004 Cyberathlete Extreme World Championships[15] and in August 2004 participated with Team Sportscast Network by providing a 50,000 slot HLTV network to broadcast "The-Rush", a 64 team double elimination Counter-Strike competition.[16] Art of War Central has co-sponsored a number of on-line game events with Superstar Gamers, the CPL (Cyberathlete Professional League), the CAL (Cyberathlete Amateur League) and was a contributing sponsor to the CPL World Tour.[17]

References
[1] http://www.artofwarcentral.com/
[2] http://www.gkg.net/whois/ (query "artofwarcentral.com")
[3] http://nuclearwar2012.com/art-of-war-central-celebrates-10th/68295
[4] http://www.i-newswire.com/art-of-war-central-celebrates-10th/68295
[5] http://www.i-newswire.com/art-of-war-continues-growth-with-expansion-to-frankfurt_281.htm
[6] https://www.bbb.org/centralohio/business-reviews/internet-gaming/north-american-game-technology-in-worthington-oh-70041777
[7] http://www.prlog.org/10429472-gamers-are-winners-in-landmark-gamer-server-merger-art-of-war-central-merges-with-wolf-servers-and.html
[8] http://www.wolfservers.com/main/index.php
[9] http://www.vskgamingservers.com/
[10] http://www.internap.com/business-internet-connectivity-services/route-optimization-miro/
[11] http://nuclearwar2012.com/art-of-war-continues-growth-with-expansion-to-frankfurt_281.htm
[12] https://www.bbb.org/centralohio/business-reviews/internet-gaming/north-american-game-technology-in-worthington-oh-70041777
[13] http://www.clubconflict.com/main.asp?page=dp&dis=98290
[14] http://www.teamwarfare.com/forums/showthread.asp?forumid=662&threadid=449592
[15] http://www.gotfrag.com/cs/story/21732/
[16] http://www.gotfrag.com/cs/story/22604/
[17] http://www.sk-gaming.com/content/9934-TsN_Three_Continents_in_Three_Weeks

External links
• Art of War Central (http://www.artofwarcentral.com/)
• Wolf Servers (http://www.wolfservers.com/main/index.php)
• VSK Gaming Servers (http://www.vskgamingservers.com/)

Autonomic Computing

Autonomic Computing refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes whilst hiding intrinsic complexity from operators and users. Started by IBM in 2001, this initiative's ultimate aim is to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth.

An autonomic system makes decisions on its own, using high-level policies; it will constantly check and optimize its status and automatically adapt itself to changing conditions. Driven by such a vision, a variety of architectural frameworks based on "self-regulating" autonomic components has recently been proposed. For example, an autonomic computing framework might be seen as composed of Autonomic Components (ACs) interacting with each other.[1] An AC can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), and knowledge and a planner/adapter for exploiting policies based on self- and environment awareness.

A very similar trend has recently characterized significant research work in the area of multi-agent systems. However, as widely reported in the literature, most of these approaches are typically conceived with centralized or cluster-based server architectures in mind, and mostly address the need of reducing management costs rather than the need of enabling complex software systems or providing innovative services. Autonomy-oriented computation is a paradigm proposed by Jiming Liu in 2001 that uses artificial systems imitating social animals' collective behaviours to solve hard computational problems. For example, ant colony optimization could be studied in this paradigm.[2]

The problem of growing complexity
Self-management means different things in different fields. A general problem of modern distributed computing systems is that their complexity, and in particular the complexity of their management, is becoming a significant limiting factor in their further development. Large companies and institutions are employing large-scale computer networks for communication and computation. The distributed applications running on these computer networks are diverse and deal with many different tasks, ranging from internal control processes to presenting web content and to customer support.

Mobile computing is pervading these networks at an increasing speed: employees need to communicate with their companies while they are not in their office. They do so by using laptops, PDAs, or mobile phones with diverse forms of wireless technologies to access their companies' data. This creates an enormous complexity in the overall computer network which is hard to control manually by human operators. Manual control is time-consuming, expensive, and error-prone.

Computing systems have brought great benefits of speed and automation, but there is now an overwhelming economic need to automate their maintenance. The manual effort needed to control a growing networked computer system tends to increase very quickly. Forecasts suggest that the number of computing devices in use will grow at 38% per annum and the average complexity of each device is increasing. Currently this volume and complexity is managed by highly skilled humans, but the demand for skilled IT personnel is already outstripping supply, with labour costs exceeding equipment costs by a ratio of up to 18:1.[3] 80% of such problems in infrastructure happen at the client-specific application and database layer; most 'autonomic' service providers guarantee only up to the basic plumbing layer (power, hardware, operating system, network and basic database parameters).

In "The Vision of Autonomic Computing",[4] Kephart and Chess warn that the dream of interconnectivity of computing systems and devices could become the "nightmare of pervasive computing" in which architects are unable to anticipate, design and maintain the complexity of interactions. They state that the essence of autonomic computing is system self-management, freeing administrators from low-level task management while delivering better system behavior.

Autonomic systems
A possible solution could be to enable modern, networked computing systems to manage themselves without direct human intervention. The Autonomic Computing Initiative (ACI) aims at providing the foundation for autonomic systems. It is inspired by the autonomic nervous system of the human body. This nervous system controls important bodily functions (e.g. respiration, heart rate, and blood pressure) without any conscious intervention.

In a self-managing autonomic system, the human operator takes on a new role: he or she does not control the system directly. Instead, the operator defines general policies and rules that serve as an input for the self-management process. For this process, IBM has defined the following four functional areas:
• Self-Configuration: Automatic configuration of components.
• Self-Healing: Automatic discovery, and correction, of faults.
• Self-Optimization: Automatic monitoring and control of resources to ensure the optimal functioning with respect to the defined requirements.
• Self-Protection: Proactive identification of, and protection from, arbitrary attacks.

IBM also defined five evolutionary levels, or the Autonomic deployment model,[5] for its deployment: Level 1 is the basic level that presents the current situation, where systems are essentially managed manually. Levels 2-4 introduce increasingly automated management functions, while level 5 represents the ultimate goal of autonomic, self-managing systems.
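The Self-Optimization area boils down to a feedback loop that keeps a monitored parameter inside a desired range without operator involvement. A minimal sketch of one such loop; the `control_loop` helper, its thresholds, and the queue-depth scenario are all invented for illustration:

```python
# Minimal sketch of a self-optimizing feedback loop: monitor a resource
# parameter and autonomously steer it back into a desired range. The
# "resource" here is a plain counter standing in for a real component.

def control_loop(sense, act, low, high, steps=10):
    """Sense the resource repeatedly; nudge it back into [low, high]."""
    for _ in range(steps):
        value = sense()      # monitor via a sensor
        if value < low:      # analyze and plan against the policy
            act(+1)          # execute through an effector
        elif value > high:
            act(-1)
        else:
            break            # within the desired range: nothing to do

# Toy resource: a work-queue depth we want kept between 2 and 5.
state = {"depth": 9}
control_loop(lambda: state["depth"],
             lambda d: state.__setitem__("depth", state["depth"] + d),
             low=2, high=5)
print(state["depth"])  # 5
```

A real system would run many such loops concurrently, one per monitored parameter, with the human operator supplying only the policy (here, the `low`/`high` bounds).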

The design complexity of autonomic systems can be simplified by utilizing design patterns such as the Model-View-Controller (MVC) pattern to improve separation of concerns by helping encapsulate functional concerns.[6]

Control loops
A basic concept applied in autonomic systems is the closed control loop. This well-known concept stems from Process Control Theory. Essentially, a closed control loop in a self-managing system monitors some resource (a software or hardware component) and autonomously tries to keep its parameters within a desired range. According to IBM, hundreds or even thousands of these control loops are expected to work in a large-scale self-managing computer system.

Conceptual model
A fundamental building block of an autonomic system is the sensing capability (Sensors S_i), which enables the system to observe its external operational context. Inherent to an autonomic system is the knowledge of the Purpose (intention) and the Know-how to operate itself (e.g. bootstrapping, configuration knowledge, interpretation of sensory data, etc.) without external intervention. The actual operation of the autonomic system is dictated by the Logic, which is responsible for making the right decisions to serve its Purpose, and is influenced by the observation of the operational context (based on the sensor input).

This model highlights the fact that the operation of an autonomic system is purpose-driven. This includes its mission (e.g. the service it is supposed to offer), the policies (e.g. that define the basic behaviour), and the "survival instinct". If seen as a control system, this would be encoded as a feedback error function or, in a heuristically assisted system, as an algorithm combined with a set of heuristics bounding its operational space.

Characteristics
Even though the purpose, and thus the behaviour, of autonomic systems varies from system to system, every autonomic system should be able to exhibit a minimum set of properties to achieve its purpose:

Automatic
This essentially means being able to self-control its internal functions and operations. As such, an autonomic system must be self-contained and able to start up and operate without any manual intervention or external help. Again, the knowledge required to bootstrap the system (Know-how) must be inherent to the system.

Adaptive
An autonomic system must be able to change its operation (i.e., its configuration, state and functions). This will allow the system to cope with temporal and spatial changes in its operational context, either long term (environment customisation/optimisation) or short term (exceptional conditions such as malicious attacks, faults, etc.).

Aware

An autonomic system must be able to monitor (sense) its operational context as well as its internal state, in order to be able to assess if its current operation serves its purpose. Awareness will control adaptation of its operational behaviour in response to context or state changes.

References
[1] http://sourceforge.net/project/showfiles.php?group_id=225956
[2] Xiaolong Jin and Jiming Liu. "From Individual Based Modeling to Autonomy Oriented Computation" (http://www.springerlink.com/openurl.asp?genre=article&issn=0302-9743&volume=2969&spage=151), in Matthias Nickles, Michael Rovatsos, and Gerhard Weiss (editors), Agents and Computational Autonomy: Potential, Risks, and Solutions, Lecture Notes in Computer Science, vol. 2969, pages 151-169, Springer, Berlin, 2004. ISBN 978-3-540-22477-8.
[3] 'Trends in technology', survey, Berkeley University of California, USA, March 2002
[4] IEEE Computer Magazine, Jan 2003
[5] http://www.ibm.com/press/us/en/pressrelease/464.wss
[6] E. Curry and P. Grace, "Flexible Self-Management Using the Model-View-Controller Pattern" (http://dx.doi.org/10.1109/MS.2008.60), IEEE Software, vol. 25, no. 3, pp. 84-90, May 2008.

External links
• Autonomic Computing by Richard Murch, published by IBM Press (http://www.ibmpressbooks.com/bookstore/product.asp?isbn=013144025X)
• IBM Autonomic Computing Website (http://www.research.ibm.com/autonomic/)
• Explanation of Autonomic Computing and its usage for business processes (IBM) (http://www.ibm.com/developerworks/tivoli/autonomic/library/1016/1016_autonomic.html)
• Practical Autonomic Computing - Roadmap to Self Managing Technology (http://www-03.ibm.com/autonomic/pdfs/AC_Practical_Roadmap_Whitepaper.pdf)
• Autonomic computing blog (http://www-03.ibm.com/developerworks/blogs/page/DaveBartlett)
• Whitestein Technologies - provider of a development and integration environment for autonomic computing software (http://www.whitestein.com/autonomic-technology-platform)
• Enigmatec - providers of autonomic computing software (http://www.enigmatec.net)
• Handsfree Networks - providers of autonomic computing software (http://www.handsfreenetworks.com)
• IPsoft - service providers delivering Autonomic Computing (http://www.ipsoft.com)
• Applied Autonomics - provides Autonomic Web Services (http://www.appliedautonomics.com)
• CASCADAS Project: Component-ware for Autonomic, Situation-aware Communications And Dynamically Adaptable Services, funded by the European Union (http://www.cascadas-project.org)
• CASCADAS Autonomic Tool-Kit in Open Source (http://sourceforge.net/project/showfiles.php?group_id=225956)
• ANA Project: Autonomic Network Architecture Research Project, funded by the European Union (http://www.ana-project.org/)
• Dynamically Self Configuring Automotive Systems (http://www.dyscas.org/)
• JADE - A framework for developing autonomic administration software (http://sardes.inrialpes.fr/jade.html)
• ASSL (Autonomic System Specification Language) - A Framework for Specification, Validation and Generation of Autonomic Systems (http://assl.vassev.com)
• Barcelona Supercomputing Center - Autonomic Systems and eBusiness Platforms (http://www.bsc.es/autonomic)
• SOCRATES: Self-Optimization and Self-Configuration in Wireless Networks (http://www.fp7-socrates.org/)
• 'Trends in technology' - in German (ftp://ftp.informatik.uni-stuttgart.de/pub/library/medoc.ustuttgart_fi/DIP-2787/DIP-2787.pdf)
• Autonomic Computing Architecture in the RKBExplorer (http://www.rkbexplorer.com/explorer/#display=mechanism-{http://resex.rkbexplorer.com/id/resilience-mechanism-87d79b11})

• International Journal of Autonomic Computing (http://www.inderscience.com/ijac/)
• BiSNET/e: A Cognitive Sensor Networking Architecture with Evolutionary Multiobjective Optimization (http://dssg.cs.umb.edu/wiki/index.php/BiSNET/e)

Citrusleaf database

Citrusleaf
Developer(s): Citrusleaf, Inc.
Stable release: 2.0.23 / September 1, 2010
Written in: C
Operating system: Linux
Type: distributed key/value database system
License: Enterprise (Perpetual or Subscription based)
Website: http://www.citrusleaf.net

The Citrusleaf database is an ACID-compliant, post-relational NoSQL database produced and marketed by Citrusleaf, Inc. It was originally developed for managing the mission-critical data of applications on the Real-time web.

History
While at Yahoo! and Aggregate Knowledge, the founders of Citrusleaf encountered a problem: the volume and performance demands of Real-time web applications caused traditional SQL databases to fail. This was due to several reasons. The first was the sheer volume of data: keeping track of 5 to 10 kilobytes of information for each of hundreds of millions of people produced a database with billions of objects. These applications require the ability to store 5 to 10 kilobytes of information on hundreds of millions of web users and compare it to potential ads to display, all with sub-millisecond response time. Retrieving and processing this information with sub-millisecond response time was impossible with traditional database approaches. Fault-tolerant design was also an issue: their applications were mission-critical, so in addition to the performance requirements the solution had to be available without interruption.

Therefore in 2008 Brian Bulkowski created a key-value data store, and he was later joined by Srini Srinivasan in 2009. Together they created the Citrusleaf database platform. To support these transaction loads in a non-stop manner during node arrivals and departures, the authors created software solutions in the areas of distributed systems, real-time prioritization, and storage management across all kinds of storage.

The Citrusleaf database platform is an ACID-compliant, extremely fast, scalable, fault-tolerant database engine. The system is capable of 100,000 transactions per second per node, with a response time of under one millisecond. As of 2010 Citrusleaf has been implemented in production.

Design Drivers
The answer lay in making use of solid-state drives (SSDs). Traditional database approaches were designed with rotational disk storage in mind. The average seek time of rotating disk storage is ten milliseconds, and therefore a sub-millisecond response time is not possible. Citrusleaf takes advantage of the properties of solid-state drives to accomplish this.
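The seek-time argument above is simple arithmetic; the sketch below spells it out (the SSD figure is an illustrative order of magnitude, not a vendor specification):

```python
# Back-of-the-envelope version of the Design Drivers argument: a 10 ms
# average seek bounds one rotating disk to roughly 100 random reads per
# second, so a sub-millisecond response is impossible once a request must
# touch the disk. The SSD latency below is a typical illustrative figure.

disk_seek_s = 0.010    # average rotational-disk seek time (10 ms)
ssd_read_s = 0.0001    # illustrative SSD random-read latency (~0.1 ms)

print(round(1 / disk_seek_s))   # 100 random reads/second per spindle
print(disk_seek_s < 0.001)      # False: one seek alone exceeds a 1 ms budget
print(ssd_read_s < 0.001)       # True: flash leaves room inside the budget
```

This is why the text treats SSDs as the design driver: the storage medium's access latency, not CPU or network speed, is what makes or breaks a sub-millisecond service-level target.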

Data model
Citrusleaf organizes all data into namespaces; these namespaces are similar to a database instance in an RDBMS. Namespaces also control policies like replication count and storage location. Within a namespace, individual data objects are referenced by tables and primary keys, which could be strings, integers, or binary data. A key is a unique reference to a piece of data: common keys include usernames and session identifiers.

Each data object is a collection of 'bins' in Citrusleaf's parlance, which are similar to column names in SQL. The system is schema-less in that different columns can be used in different data objects of the same table. Each column's value is typed. The types supported are strings, integers, blobs, and "reflection blobs", which are binary data that has been reflected by the serializer of an individual object (such as a Java blob generated by Java's serializer). The use of typed values allows different languages to inter-operate simply: a string set in Java will appear correctly through the Python client, even though Java and Python use different underlying character representations (Unicode vs UTF-8).

Some high-level operations (such as atomically adding integers) are supported, in the style of Redis, but the set of instructions is not very rich. Citrusleaf's data model allows it to be considered as a document store, although it is more similar to a schema-less version of the row-based schema typically used in relational systems.

Scalability and Performance
• Distributed object store: Easily store and retrieve large volumes of data through the Citrusleaf clients for C, C#, Java, PHP, Python and Ruby.
• Real-time performance: Low, predictable sub-millisecond latency from memory or flash storage.
• High sustained throughput of over 100,000 transactions per second per commodity node.
• Automatic cluster resizing and rebalancing: a Citrusleaf cluster will automatically grow or shrink using zero-configuration networking.

Replication and Failover
• Automatic failure detection and in-flight transaction rerouting for nonstop operation in the face of failure.
• Flexible replication policy: set replication factors for individual data items.
• Automatic client failover: clients track cluster membership for automatic load balancing and transaction re-try.
• Randomized object replication allows smooth load balancing during failure recovery.

External links
• Official Citrusleaf site (http://www.citrusleaf.net)

Client–server model

The client–server model of computing is a distributed application that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.[1] Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server machine is a host that is running one or more server programs which share their resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await incoming requests.

Description
The client–server characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services.

(Figure: schematic client–server interaction.)

Functions such as email exchange, web access and database access are built on the client–server model. For example, users accessing banking services from their computer use a web browser client to send a request to a web server at a bank. That program may in turn forward the request to its own database client program, which sends a request to a database server at another bank computer to retrieve the account information. The balance is returned to the bank database client, which in turn serves it back to the web browser client, displaying the results to the user. The client–server model has become one of the central ideas of network computing. Many business applications being written today use the client–server model, as do the Internet's main application protocols, such as HTTP, SMTP, Telnet, and DNS.

The interaction between client and server is often described using sequence diagrams, which are standardized in the Unified Modeling Language. Specific types of clients include web browsers, email clients, and online chat clients. Specific types of servers include web servers, ftp servers, application servers, database servers, name servers, mail servers, file servers, print servers, and terminal servers. Most web services are also types of servers.

Comparison to peer-to-peer architecture
A client–server network involves multiple clients connecting to a single, central server. The file server on a client–server network is a high capacity, high speed computer with a large hard disk capacity. Software applications can be installed on the single server and shared by every computer in the network. By contrast, peer-to-peer networks involve two or more computers pooling individual resources such as disk drives, CD-ROMs and printers.[2] These shared resources are available to every computer in the network. Each computer acts as both the client and the server, which means all the computers on the network are equals; that is where the term peer-to-peer comes from.[3] Peer-to-peer networks are also cheaper to set up because most desktop operating systems have the software required for the network installed by default.

The advantage of peer-to-peer networking is the easier control concept, not requiring any additional coordination entity and not delaying transfers by routing via server entities. However, since each pair of peers communicates in its own session, the collision of sessions may be larger than with routing via server nodes. Peer-to-peer networks are typically less secure than client–server networks because security is handled by the individual computers, not controlled and supervised on the network as a whole. On the other hand, the client–server model works with any size or physical layout of LAN and doesn't tend to slow down with heavy use.
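The request/response cycle described above can be shown end-to-end with Python's standard socket modules. In this sketch the "service" is a toy that upper-cases the request; the handler class, port choice and message are all illustrative:

```python
# Minimal client-server round trip: a server program awaiting requests,
# and a client initiating a session. Runs entirely on a throwaway local
# TCP port; the upper-casing "service" is purely illustrative.
import socket
import socketserver
import threading

class UpperHandler(socketserver.StreamRequestHandler):
    def handle(self):
        request = self.rfile.readline().strip()    # read the client's request
        self.wfile.write(request.upper() + b"\n")  # send the server's response

server = socketserver.TCPServer(("127.0.0.1", 0), UpperHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: initiate the session, send a request, await the response.
with socket.create_connection(server.server_address) as sock:
    sock.sendall(b"balance for account 42\n")
    reply = sock.makefile().readline().strip()

server.shutdown()
print(reply)  # BALANCE FOR ACCOUNT 42
```

Note the asymmetry the article describes: the server is passive until a request arrives, while the client decides when a session begins; swapping the toy handler for a database lookup would give the banking example in miniature.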

The resources of the computers in a peer-to-peer network can become congested, as they have to support not only the workstation user but also the requests from other network users. Client–server networks, with their additional capacities, have a higher initial setup cost for networking than peer-to-peer networks. However, the long-term aspect of administering a client–server network with applications largely server-hosted surely saves administering effort compared to administering the application settings on each client. In addition, the concentration of functions in performant servers allows for lower-grade performance qualification of the clients. It is possible to set up a server on a modern desktop computer, but it is recommended to consider investment in enterprise-wide server facilities, with a standardised choice of hardware and software and with a systematic and remotely operable administering strategy. It is easier to configure and manage the server hardware and software compared to the distributed administering requirements of a flock of computers.[4][5] Aspects of comparison for other architectural concepts today include cloud computing as well.

Challenges
Generally a server may be challenged beyond its capabilities; then a single server may cause a bottleneck or constraints problem. Possible design decision considerations might be:
• As soon as the total number of simultaneous client requests to a given server increases, the server can become overloaded. Contrast that to a P2P network, where aggregated bandwidth actually increases as nodes are added, since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network. However, this simple model ends with the bandwidth of the network: then congestion comes on the network and not with the peers.
• Any single-entity paradigm lacks the robustness of a redundant configuration. Under client–server, should a critical server fail, clients' requests cannot be fulfilled by this very entity, but may be taken by another server, as long as the required data is accessible. If dynamic re-routing is established, servers may be cloned and networked to fulfill all known capacity and performance requirements. In P2P networks, resources are usually distributed among many nodes, which generates as many locations to fail; but even if one or more nodes depart and abandon a downloading file, for example, the remaining nodes should still have the data needed to complete the download.
• Mainframe networks use dumb terminals: all processing is completed on a few central computers. Lesser complete netbook clients allow for a reduction of hardware entities that have limited life cycles. This is a method of running a network with different limitations compared to fully fashioned clients. Limitations include network load, network address volume, and transaction recovery time. It may be difficult to provide system-wide services when the client operating system typically used in this type of network is incapable of hosting the service.
• Using intelligent client terminals increases the maintenance and repair effort.

References
[1] "Distributed Application Architecture" (http://java.sun.com/developer/Books/jdbc/ch07.pdf). Sun Microsystems. Retrieved 2009-06-16.
[2] Understanding peer-to-peer networking (http://www.isafe.org/imgs/pdf/education/P2PNetworking.pdf)
[3] Peer-to-Peer Networking and Applications
[4] Computers Are Your Future (book)
[5] Peer to Peer vs. Client/Server Networks

Code mobility

In distributed computing, code mobility is the ability for running programs, code or objects to be migrated (or moved) from one machine (host) to another. This is the process of moving code across the nodes of a network, as opposed to distributed computation, where it is the data that is moved. It is common practice in distributed systems to require the movement of code or processes between parts of the system, instead of data.[1]

The purpose of code mobility is to support sophisticated operations, such as time-critical applications. For example, a user A can send a running program to another user B, and the program continues to run as if it were still on the original machine, without the need to restart the program on the recipient's machine.

Code mobility can be either strong or weak:
• Strong code mobility involves moving the code, data and the execution state from one host to another. This is important in cases where the running application needs to maintain its state as it migrates from host to host.
• Weak code mobility involves moving only the code and the data. This may necessitate restarting the execution of the program at the destination host.

References
[1] Fuggetta, Alfonso; Gian Pietro Picco; Giovanni Vigna (1998). "Understanding Code Mobility" (http://www2.computer.org/portal/web/csdl/abs/trans/ts/1998/05/e0342abs.htm). IEEE Transactions on Software Engineering (NJ, USA: IEEE Press Piscataway) 24 (5): 342–361. doi:10.1109/32.685258. ISSN 0098-5589. Retrieved 29 July 2009.
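The weak form of mobility described above can be illustrated with a toy sketch (not any real mobility framework): the code and its data are serialized and shipped, and execution restarts from the beginning at the destination, since no execution state travels with it.

```python
import pickle

# Toy sketch of WEAK code mobility: ship code plus data; execution
# restarts at the destination (no execution state is transferred).

def make_mobile_unit():
    code = "result = sum(data)"          # the code to migrate
    data = {"data": [1, 2, 3, 4]}        # the data it operates on
    return pickle.dumps((code, data))    # what would travel over the network

def run_at_destination(payload):
    code, env = pickle.loads(payload)
    exec(code, env)                      # execution restarts from scratch here
    return env["result"]

print(run_at_destination(make_mobile_unit()))  # -> 10
```

Strong mobility would additionally have to capture and restore the program counter, stack, and heap of the running program, which is why it is considerably harder to implement.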

Connection broker

In software engineering, a connection broker is a resource manager that manages a pool of connections to connection-based resources, such as databases or remote desktops, enabling rapid reuse of these connections by short-lived processes without the overhead of setting up a new connection each time. Connection brokers are often used in systems with N-tier architectures.
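The pooling idea can be sketched in a few lines; this is a minimal illustration (the class and factory names are invented), not a production broker:

```python
import queue

# Minimal connection-broker sketch: a fixed pool of "connections" is
# created once, handed out to short-lived callers, and returned for
# reuse, avoiding the cost of opening a new connection per request.

class ConnectionBroker:
    def __init__(self, connect, pool_size=4):
        self._pool = queue.Queue()
        for _ in range(pool_size):
            self._pool.put(connect())   # pay the setup cost once per slot

    def acquire(self):
        return self._pool.get()         # blocks until a connection is free

    def release(self, conn):
        self._pool.put(conn)            # return the connection for reuse

# Usage with a stand-in "connection" factory:
broker = ConnectionBroker(connect=lambda: object(), pool_size=1)
c1 = broker.acquire()
broker.release(c1)
c2 = broker.acquire()
assert c1 is c2   # the same connection object was reused, not re-created
```

Because `queue.Queue` is thread-safe, the same pattern works when many worker threads share the broker, which is the typical N-tier deployment.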

CouchDB

Apache CouchDB

CouchDB's Futon Administration Interface.

Original author(s): Damien Katz, Jan Lehnardt, Noah Slater, Christopher Lenz, J. Chris Anderson
Developer(s): Apache Software Foundation
Initial release: 2005
Preview release: 1.1.0 / May 30, 2011
Development status: Active
Written in: Erlang
Operating system: Cross-platform
Available in: English
Type: Document-oriented database
License: Apache License 2.0
Website: http://couchdb.apache.org/

Apache CouchDB, commonly referred to as CouchDB, is an open source document-oriented database written in the Erlang programming language. It borrows from NoSQL and is designed for local replication and to scale horizontally across a wide range of devices. CouchDB is supported by the commercial enterprises Couchbase and Cloudant.

History

In April 2005, Damien Katz (a former Lotus Notes developer at IBM, now founder and CTO of Couchbase) posted on his blog about a new database engine he was working on.[1] Details were sparse at this early stage, but what he did share was that it would be a "storage system for a large scale object database" and that it would be called CouchDB (Couch is an acronym for "cluster of unreliable commodity hardware"). His objectives for the database were for it to become the database of the Internet and to be designed from the ground up to serve web applications. CouchDB was originally written in C++, but the project moved to the Erlang OTP platform for its emphasis on fault tolerance.

Katz self-funded the project for almost two years and released it as an open source project under the GNU General Public License. In February 2008, it became an Apache Incubator project and the license was changed to the Apache License rather than the GPL.[2] In November 2008, it graduated to a top-level project alongside the likes of the Apache HTTP Server, Tomcat and Ant.[3] Currently, CouchDB is maintained at the Apache Software Foundation with backing from IBM. Katz works on it full-time as the lead developer.

CouchDB is in use in many software projects and web sites[6], including Ubuntu, where it is used to synchronize address and bookmark data.[7]

Design

CouchDB is most similar to other document stores like MongoDB and Lotus Notes. It is not a relational database management system. Instead of storing data in rows and columns, the database manages a collection of JSON documents. The documents in a collection need not share a schema, but retain query abilities via views. CouchDB's design and philosophy borrow heavily from Web architecture and the concepts of resources, methods and representations, and can be summed up by the following quote:

“ Django may be built for the Web, but CouchDB is built of the Web. I've never seen software that so completely embraces the philosophies behind HTTP. CouchDB makes Django look old-school in the same way that Django makes ASP look outdated. ”
—Jacob Kaplan-Moss, Django developer[5]

CouchDB exposes a RESTful HTTP API, and a large number of pre-written clients are available. Additionally, a plugin architecture allows for using different computer languages as the view server, such as JavaScript (default), PHP, Ruby, Python and Erlang; support for other languages can be easily added. CouchDB supports a view system using external socket servers and a JSON-based protocol.[4] As a consequence, view servers have been developed in a variety of languages. Since version 0.11, CouchDB supports the CommonJS Module specification.[8] Views are defined with aggregate functions and filters, and are computed in parallel, much like MapReduce. Views are generally stored in the database and their indexes updated continuously, although queries may introduce temporary views.

Features

Document Storage

CouchDB stores documents in their entirety. You can think of a document as one or more field/value pairs expressed as JSON. Field values can be simple things like strings, numbers, or dates, but you can also use ordered lists and associative maps. Every document in a CouchDB database has a unique id, and there is no required document schema.

ACID Semantics

Like many relational database engines, CouchDB provides ACID semantics.[9] It does this by implementing a form of Multi-Version Concurrency Control (MVCC), not unlike InnoDB or Oracle. That means CouchDB can handle a high volume of concurrent readers and writers without conflict.
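The "field/value pairs expressed as JSON" shape of a CouchDB document can be sketched as follows; the field names and values here are invented for illustration, since CouchDB imposes no required schema beyond the unique id:

```python
import json

# Sketch of a CouchDB-style document: field/value pairs expressed as JSON.
# "_id" is the document's unique id; everything else is schema-free.
doc = {
    "_id": "biking",
    "title": "Biking",
    "tags": ["mountain", "road"],          # an ordered-list value
    "author": {"name": "Ann", "age": 34},  # an associative-map value
    "posted": "2011-05-30",
}

print(json.dumps(doc, indent=2))
```

Any structure that round-trips through JSON in this way is a valid document body, which is what makes the store schema-free.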

Map/Reduce Views and Indexes

To provide some structure to the data stored in CouchDB, you can develop views that are similar to their relational database counterparts. In CouchDB, each view is constructed by a JavaScript function (server-side JavaScript, using CommonJS and SpiderMonkey) that acts as the Map half of a MapReduce operation. The function takes a document and transforms it into a single value which it returns. The logic in your JavaScript functions can be arbitrarily complex. Since computing a view over a large database can be an expensive operation, CouchDB can index views and keep those indexes updated as documents are added, removed, or updated. This provides a very powerful indexing mechanism that grants unprecedented control compared to most databases.

Distributed Architecture with Replication

CouchDB was designed with bi-directional replication (or synchronization) and off-line operation in mind. That means multiple replicas can have their own copies of the same data, modify it, and then sync those changes at a later time. The biggest gotcha typically associated with this level of flexibility is conflicts.

REST API

CouchDB treats all stored items (there are others besides documents) as a resource. All items have a unique URI that gets exposed via HTTP. REST uses the HTTP methods POST, GET, PUT and DELETE for the four basic CRUD (Create, Read, Update, Delete) operations on all resources. HTTP is widely understood, interoperable, scalable and proven technology. A lot of tools, software and hardware are available to do all sorts of things with HTTP, like caching, proxying and load balancing.

Examples

CouchDB provides a set of RESTful HTTP methods (e.g. POST, GET, PUT or DELETE). The examples below use the cURL lightweight command-line tool to interact with a CouchDB server:

curl http://127.0.0.1:5984/

The CouchDB server processes the HTTP request and returns a response in JSON:

{"couchdb":"Welcome","version":"1.0.1"}

This is not terribly useful, but it illustrates nicely the way of interacting with CouchDB. Creating a database is simple: just issue the following command:

curl -X PUT http://127.0.0.1:5984/wiki

CouchDB will reply with the following message if the database does not exist:

{"ok":true}

or with a different response message if the database already exists:

{"error":"file_exists","reason":"The database could not be created, the file already exists."}

The command below retrieves information about the database:

curl -X GET http://127.0.0.1:5984/wiki

The server replies with the following JSON message:

{"db_name":"wiki","doc_count":0,"doc_del_count":0,"update_seq":0,"purge_seq":0,"compact_running":false,"disk_size":79,"instance_start_time":"1272453873691070","disk_format_version":5}
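The Map half of a view can be mimicked outside CouchDB to show the idea. Note this is only an illustrative sketch: real CouchDB view functions are written in JavaScript and call emit(key, value); here the same per-document transformation is imitated in Python over a list of dicts standing in for stored documents (all values invented).

```python
# Stand-in "documents"; fields are invented for illustration.
docs = [
    {"_id": "a", "type": "post", "tags": ["couchdb", "http"]},
    {"_id": "b", "type": "post", "tags": ["json"]},
    {"_id": "c", "type": "draft", "tags": ["couchdb"]},
]

def map_fn(doc):
    # Like a CouchDB map function: examine one document at a time and
    # emit zero or more (key, value) rows; here, one row per tag of a post.
    if doc.get("type") == "post":
        for tag in doc["tags"]:
            yield (tag, doc["_id"])

# The server would persist these rows sorted by key as the view's index.
index = sorted(row for doc in docs for row in map_fn(doc))
print(index)  # [('couchdb', 'a'), ('http', 'a'), ('json', 'b')]
```

Because the function only ever sees one document, the index can be updated incrementally as documents are added, removed, or changed, which is why view indexes stay cheap to maintain.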

The following command will remove the database and its contents:

curl -X DELETE http://127.0.0.1:5984/wiki

CouchDB will reply with the following message:

{"ok":true}

Open source components

CouchDB includes a number of other open source projects as part of its default package.

Component: SpiderMonkey
Description: SpiderMonkey is a code name for the first ever JavaScript engine, written by Brendan Eich at Netscape Communications, later released as open source and now maintained by the Mozilla Foundation.
License: MPL/GPL/LGPL tri-license

Component: jQuery
Description: jQuery is a lightweight cross-browser JavaScript library that emphasizes interaction between JavaScript and HTML.
License: Dual license: GPL and MIT License

Component: ICU
Description: International Components for Unicode (ICU) is an open source project of mature C/C++ and Java libraries for Unicode support, software internationalization and software globalization. ICU is widely portable to many operating systems and environments.
License: unique (ICU license)

Component: OpenSSL
Description: OpenSSL is an open source implementation of the SSL and TLS protocols. The core library (written in the C programming language) implements the basic cryptographic functions and provides various utility functions.
License: Apache-like

Component: Erlang
Description: Erlang is a general-purpose concurrent programming language and runtime system. The sequential subset of Erlang is a functional language, with strict evaluation, single assignment, and dynamic typing.
License: Modified MPL

References
[1] Lennon, Joe (2009-03-31). "Exploring CouchDB" (http://www.ibm.com/developerworks/opensource/library/os-couchdb/index.html). IBM. Retrieved 2009-03-31.
[2] Apache mailing list announcement (http://mail-archives.apache.org/mod_mbox/incubator-general/200802.mbox/<3d4032300802121136p361b52ceyfc0f3b0ad81a1793@mail.gmail.com>) on mail-archives.apache.org
[3] Re: Proposed Resolution: Establish CouchDB TLP (http://mail-archives.apache.org/mod_mbox/incubator-couchdb-dev/200811.mbox/<3F352A54-5FC8-4CB0-8A6B-7D3446F07462@jaguNET.com>) on mail-archives.apache.org
[4] View Server Documentation (http://wiki.apache.org/couchdb/ViewServer) on wiki.apache.org
[5] A Different Way to Model Your Data (http://books.couchdb.org/relax/intro/why-couchdb#A Different Way to Model Your Data)
[6] CouchDB in the wild (http://wiki.apache.org/couchdb/CouchDB_in_the_wild), a list of software projects and websites using CouchDB
[7] Email from Elliot Murphy (Canonical) (http://mail-archives.apache.org/mod_mbox/couchdb-dev/200910.mbox/<4AD53996.3090104@canonical.com>) to the CouchDB-Devel list
[8] http://wiki.apache.org/couchdb/CommonJS_Modules
[9] http://couchdb.apache.org/docs/overview.html, see section on ACID Properties

Bibliography

• Anderson, J. Chris; Lehnardt, Jan; Slater, Noah (November 15, 2009). CouchDB: The Definitive Guide (http://guide.couchdb.org/editions/1/en/index.html) (1st ed.). O'Reilly Media. pp. 300. ISBN 0596158165
• Lennon, Joe (December 15, 2009). Beginning CouchDB (http://www.apress.com/book/view/9781430272373) (1st ed.). Apress. pp. 300. ISBN 1430272376
• Holt, Bradley (March 7, 2011). Writing and Querying MapReduce Views in CouchDB (http://oreilly.com/catalog/9781449303433) (1st ed.). O'Reilly Media. pp. 76. ISBN 1449303129
• Holt, Bradley (April 11, 2011). Scaling CouchDB (http://oreilly.com/catalog/0636920018247) (1st ed.). O'Reilly Media. pp. 72. ISBN 1449303439

External links

• Official website (http://couchdb.apache.org/)
• CouchDB: The Definitive Guide (http://books.couchdb.org/relax/)
• CouchDB articles on NoSQLDatabases.com (http://www.nosqldatabases.com/main/tag/couchdb)
• CouchDB news and articles on myNoSQL (http://nosql.mypopescu.com/tagged/couchdb)
• CouchDB green paper (http://manning.com/free/green_chandler.html)
• Scaling CouchDB (http://nosql.mypopescu.com/post/683838234/scaling-couchdb)
• Complete HTTP API Reference (http://wiki.apache.org/couchdb/Complete_HTTP_API_Reference)
• Simple PHP5 library to communicate with CouchDB (https://github.com/1999/couchdb-php)

Videos

• Erlang eXchange 2008: Couch DB at 10,000 feet, Jan Lehnardt (http://video.google.com/videoplay?docid=-3714560380544574985&hl=en#)
• Jan Lehnardt is Giving the Following Talks: CouchDB for Erlang Developers (http://www.erlang-factory.com/conference/London2009/speakers/janlehnardt)
• CouchDB and Me (http://www.infoq.com/presentations/katz-couchdb-and-me) on Jan 31, 2009 by Damien Katz

Data Diffusion Machine

Data Diffusion Machine (DDM) is a historical virtual shared memory architecture where data is free to migrate through the machine. Shared memory machines are convenient for programming but do not scale beyond tens of processors. The Data Diffusion Machine overcomes this problem by providing a virtual memory abstraction on top of a distributed memory machine: a DDM appears to the user as a conventional shared memory machine but is implemented using a distributed memory architecture.[1] [2] [3] Data Diffusion Machines were under active research in the late 1980s and early 1990s, but the research has since ceased.

References
[1] David H. D. Warren and Seif Haridi. The Data Diffusion Machine - A Scalable Shared Virtual Memory Multiprocessor. In Proceedings of the 1988 International Conference on Fifth Generation Computer Systems, pp 943-952, Tokyo, Japan, December 1988. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.2301
[2] E. Hagersten, A. Landin, and S. Haridi. DDM - A Cache-only Memory Architecture. IEEE Computer, September 1992. http://portal.acm.org/citation.cfm?id=141718
[3] Henk L. Muller, Paul W. A. Stallard, David H. D. Warren. "Implementing the Data Diffusion Machine using Crossbar Routers." 10th International Parallel Processing Symposium (IPPS '96), 1996, p. 152.

External links
• Data Diffusion Machine (University of Bristol) (http://www.cs.bris.ac.uk/Research/DDM/)

Database-centric architecture

Database-centric architecture or data-centric architecture has several distinct meanings, generally relating to software architectures in which databases play a crucial role. Often this description is meant to contrast the design to an alternative approach. For example, the characterization of an architecture as "database-centric" may mean any combination of the following:

• using a standard, general-purpose relational database management system, as opposed to customized in-memory or file-based data structures and access methods. With the evolution of sophisticated DBMS software, much of which is either free or included with the operating system, application developers have become increasingly reliant on standard database tools, especially for the sake of rapid application development.
• using dynamic, table-driven logic, i.e. behavior that is heavily dictated by the contents of a database. The use of table-driven logic allows programs to be simpler and more flexible; this capability is a central feature of dynamic programming languages. See also control tables for tables that are normally coded and embedded within programs as data structures (i.e. not compiled statements) but could equally be read in from a flat file, database, or even retrieved from a spreadsheet.
• using stored procedures that run on database servers, as opposed to greater reliance on logic running in middle-tier application servers in a multi-tier architecture. The extent to which business logic should be placed at the back-end versus another tier is a subject of ongoing debate. For example, Toon Koppelaars presents a detailed analysis of alternative Oracle-based architectures that vary in the placement of business logic, concluding that a database-centric approach has practical advantages from the standpoint of ease of development and maintainability.[1]
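The table-driven-logic idea can be sketched briefly: behavior lives in a table of rows (here a hypothetical shipping table, invented for illustration) that could just as well be loaded from a database, flat file, or spreadsheet instead of being compiled into the program.

```python
# A toy control table: each row is (max_weight_kg, carrier).
# Changing behavior means editing the rows, not the code.
shipping_rules = [
    (1.0, "letter post"),
    (20.0, "parcel service"),
    (float("inf"), "freight"),
]

def pick_carrier(weight_kg):
    # Generic interpreter of the table: first matching row wins.
    for max_weight, carrier in shipping_rules:
        if weight_kg <= max_weight:
            return carrier

print(pick_carrier(0.3))   # letter post
print(pick_carrier(150))   # freight
```

Swapping the hard-coded list for rows fetched from a DBMS is what turns this from an ordinary control table into the database-centric variant described above.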

• using a shared database as the basis for communicating between parallel processes in distributed computing applications, as opposed to direct inter-process communication via message passing functions and message-oriented middleware. A potential benefit of database-centric architecture in distributed applications is that it simplifies the design by utilizing DBMS-provided transaction processing and indexing to achieve a high degree of reliability, performance, and capacity. For example, Base One describes a database-centric distributed computing architecture for grid and cluster computing, and explains how this design provides enhanced security, fault-tolerance, and scalability.[2]

References
[1] A database-centric approach to J2EE application development (http://web.inter.nl.net/users/T.Koppelaars/J2EE_DB_CENTRIC.doc)
[2] Database-Centric Grid and Cluster Computing (http://www.boic.com/dbgrid.htm)

Distributed application

Distributed Applications are applications running on two or more machines in a network.

Introduction

Classic software systems of the past century were mostly based on client–server models and client-centric application development, both ultimately running on one single computer, be it the client computer or the server. With the introduction of intelligent agents, Web APIs, Web 2.0 and the emergence of cloud computing, more and more "multiple machine" approaches emerge, in which many systems in several locations can take care of load balancing by re-distributing specific tasks, or in which each machine serves a specific purpose or task.

Examples

Distributed Applications can include:
1. Distributed systems using general purpose and specialized APIs
2. Real-time systems for data input by people, like HelpDesk software and client service software taking care of appointments and updates on client data
3. Hardware systems like "the Internet of Things", with independent components capable of processing specific tasks while communicating with other parts via a network
4. Render and computation farms, to render 3D images, do calculations on large data sets and process complex data in general

Distributed data flow

Distributed data flow (also abbreviated as distributed flow) refers to a set of events in a distributed application or protocol that satisfies the following informal properties:

• Asynchronous, non-blocking, and one-way. Each event represents a single instance of a non-blocking, one-way, asynchronous method invocation or other form of explicit or implicit message passing between two layers or software components. For example, each event might represent a single request to multicast a packet, issued by an application layer to an underlying multicast protocol. The requirement that events are one-way and asynchronous is important: invocations of methods that may return results would normally be represented as two separate flows, one flow that represents the requests and another flow that represents the responses.

• Homogeneous, continuous, and uniform. The flow usually includes all events that flow between the two given layers of software, over a finite or infinite period of time. Thus, the flow of multicast requests would include all such requests made by instances of the given application on different nodes. All events in a distributed flow serve the same functional and logical purpose, and are related to one another; generally, we require that they represent method calls or message exchanges between instances of the same functional layers, or instances of the same components, but perhaps on different nodes within a computer network. Furthermore, all events must flow in the same direction (i.e., one type of layer or component always produces the events, and the other always consumes them), and carry the same type of payload. For example, a set of events that includes all multicast requests issued by the same application layer to the same multicast protocol is a distributed flow. On the other hand, a set of events that includes multicast requests made by different applications to different multicast protocols would not be considered a distributed flow, and neither would a set of events that represents multicast requests as well as acknowledgments and error notifications.

• Concurrent and distributed. Normally, events in a distributed flow are distributed both in space (they occur at different nodes) and in time (they occur at different times). A flow in which all events occur at the same node would be considered degenerate. The flow itself can be infinite: one can always point to the point in time at which the flow originated, but at any point in time, eventually a new event will appear in the flow.

An illustration of the basic concepts involved in the definition of a distributed data flow.

Formally, we represent each event in a distributed flow as a quadruple of the form (x,t,k,v), where x is the location (e.g., the network address of a physical node) at which the event occurs, t is the time at which this happens, k is a version, or a sequence number identifying the particular event, and v is a value that represents the event payload (e.g., all the arguments passed in a method call). Each distributed flow is a (possibly infinite) set of such quadruples that satisfies the following three formal properties:

• For any finite point in time t, there can be only finitely many events in the flow that occur at time t or earlier.

• For any pair of events e_1 and e_2 that occur at the same location, if e_1 occurs at an earlier time than e_2, then the version number in e_1 must also be smaller than that of e_2.
• For any pair of events e_1 and e_2 that occur at the same location, if the two events have the same version numbers, they must also have the same values.

In addition to the above, flows can have a number of additional properties:

• Consistency. A distributed flow is said to be consistent if events with the same version always have the same value, even if they occur at different locations. Consistent flows typically represent various sorts of global decisions made by the protocol or application.
• Monotonicity. A distributed flow is said to be weakly monotonic if, for any pair of events e_1 and e_2 that occur at the same location, if e_1 has a smaller version than e_2, then e_1 must carry a smaller value than e_2. A distributed flow is said to be strongly monotonic (or simply monotonic) if this is true even for pairs of events e_1 and e_2 that occur at different locations. Strongly monotonic flows are always consistent; they typically represent various sorts of irreversible decisions. Weakly monotonic flows may or may not be consistent.

Distributed data flows serve a purpose analogous to variables or method parameters in programming languages such as Java, in that they can represent state that is stored or communicated by a layer of software. Unlike variables or parameters, which represent a unit of state that resides in a single location, distributed flows are dynamic and distributed: they simultaneously appear in multiple locations within the network at the same time. As such, distributed flows are a more natural way of modeling the semantics and inner workings of certain classes of distributed systems. In particular, the distributed data flow abstraction has been used as a convenient way of expressing the high-level logical relationships between parts of distributed protocols.[1] [2] [3]

References
[1] Ostrowski, K., Birman, K., Dolev, D., and Sakoda, C. (2009). "Implementing Reliable Event Streams in Large Systems via Distributed Data Flows and Recursive Delegation". 3rd ACM International Conference on Distributed Event-Based Systems (DEBS 2009), Nashville, TN, USA, July 6–9, 2009. http://www.cs.cornell.edu/~krzys/krzys_debs2009.pdf
[2] Ostrowski, K., Birman, K., and Dolev, D. (2009). "Distributed Data Flow Language for Multi-Party Protocols". 5th ACM SIGOPS Workshop on Programming Languages and Operating Systems (PLOS 2009), Big Sky, MT, USA, October 11, 2009. http://www.cs.cornell.edu/~krzys/krzys_plos2009.pdf
[3] Ostrowski, K., Birman, K., and Dolev, D. (2009). "Programming Live Distributed Objects with Distributed Data Flows". Submitted to the International Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA 2009). http://www.cs.cornell.edu/~krzys/krzys_oopsla2009.pdf
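The consistency and strong-monotonicity properties can be checked mechanically over a small hand-made flow of (x, t, k, v) quadruples; the locations, times, and values below are invented for illustration.

```python
# A tiny flow of (location, time, version, value) quadruples.
flow = [
    ("node1", 1, 1, 10), ("node1", 2, 2, 20),
    ("node2", 1, 1, 10), ("node2", 3, 2, 20),
]

def is_consistent(events):
    # Same version k must imply same value v, even across locations.
    return all(e1[3] == e2[3]
               for e1 in events for e2 in events if e1[2] == e2[2])

def is_strongly_monotonic(events):
    # Smaller version must imply smaller value, regardless of location.
    return all(e1[3] < e2[3]
               for e1 in events for e2 in events if e1[2] < e2[2])

print(is_consistent(flow))          # True: version 1 -> 10, version 2 -> 20
print(is_strongly_monotonic(flow))  # True: versions 1 < 2 carry values 10 < 20
```

Adding an event such as ("node3", 4, 2, 99) would break consistency, since version 2 would then appear with two different values at different locations.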

Distributed database

A distributed database is a database in which storage devices are not all attached to a common CPU. It may be stored in multiple computers located in the same physical location, or may be dispersed over a network of interconnected computers. Collections of data (e.g. in a database) can be distributed across multiple physical locations. A distributed database can reside on network servers on the Internet, on corporate intranets or extranets, or on other company networks. The replication and distribution of databases improves database performance at end-user worksites.[1] A distributed database does not share main memory or disks.

To ensure that the distributed databases are up to date and current, there are two processes: replication and duplication. Replication involves using specialized software that looks for changes in the distributed database. Once the changes have been identified, the replication process makes all the databases look the same. This is to ensure that each distributed location has the same data. The replication process can be very complex and time consuming, depending on the size and number of the distributed databases, and can also require a lot of time and computer resources. Duplication, on the other hand, is not as complicated. It basically identifies one database as a master and then duplicates that database, normally at a set time after hours. In the duplication process, changes to the master database only are allowed; this is to ensure that local data will not be overwritten. Both of the processes can keep the data current in all distributed locations.[2]

Besides distributed database replication and fragmentation, there are many other distributed database design technologies, for example local autonomy, and synchronous and asynchronous distributed database technologies. These technologies' implementation can and does depend on the needs of the business and the sensitivity/confidentiality of the data to be stored in the database, and hence the price the business is willing to spend on ensuring data security, consistency and integrity.

Basic architecture

A database user accesses the distributed database through:
• Local applications — applications which do not require data from other sites.
• Global applications — applications which do require data from other sites.

Important considerations

Care with a distributed database must be taken to ensure the following:
• The distribution is transparent — users must be able to interact with the system as if it were one logical system. This applies to the system's performance and methods of access, among other things.
• Transactions are transparent — each transaction must maintain database integrity across multiple databases. Transactions must also be divided into subtransactions, each subtransaction affecting one database system.

Advantages of distributed databases

• Management of distributed data with different levels of transparency.
• Increased reliability and availability.
• Easier expansion.
• Reflects organizational structure — database fragments are located in the departments they relate to.
• Local autonomy — a department can control the data about them (as they are the ones familiar with it).
• Protection of valuable data — if there were ever a catastrophic event such as a fire, all of the data would not be in one place, but distributed in multiple locations.
• Improved performance — data is located near the site of greatest demand, and the database systems themselves are parallelized, allowing load on the databases to be balanced among servers. (A high load on one module of the database won't affect other modules of the database in a distributed database.)
• Economics — it costs less to create a network of smaller computers with the power of a single large computer.
• Modularity — systems can be modified, added and removed from the distributed database without affecting other modules (systems).
• Reliable transactions — due to replication of the database. The Merge Replication Method can be used to consolidate the data between databases.
• Hardware, Operating System, Network, Fragmentation, DBMS, Replication and Location Independence.
• Continuous operation.
• Distributed query processing.
• Distributed transaction management.
• Single-site failure does not affect performance of the system.
• All transactions follow the A.C.I.D. property: a-atomicity, the transaction takes place as a whole or not at all; c-consistency, maps one consistent DB state to another; i-isolation, each transaction sees a consistent DB; d-durability, the results of a transaction must survive system failures.

Disadvantages of distributed databases

• Complexity — extra work must be done by the DBAs to ensure that the distributed nature of the system is transparent. Extra work must also be done to maintain multiple disparate systems, instead of one big one. Extra database design work must also be done to account for the disconnected nature of the database — for example, joins become prohibitively expensive when performed across multiple systems.
• Economics — increased complexity and a more extensive infrastructure means extra labour costs.
• Security — remote database fragments must be secured, and as they are not centralized, the remote sites must be secured as well. The infrastructure must also be secured (e.g., by encrypting the network links between remote sites).
• Difficult to maintain integrity — in a distributed database, enforcing integrity over a network may require too much of the network's resources to be feasible.
• Inexperience — distributed databases are difficult to work with, and as a young field there is not much readily available experience on proper practice.
• Lack of standards — there are no tools or methodologies yet to help users convert a centralized DBMS into a distributed DBMS.
• Database design more complex — besides the normal difficulties, the design of a distributed database has to consider fragmentation of data, allocation of fragments to specific sites, and data replication.
• Additional software is required.
• The operating system should support a distributed environment.
• Concurrency control is a major issue. It is solved by locking and timestamping.

References
[1] O'Brien, J. & Marakas, G. (2008). Management Information Systems (pp. 185-189). New York, NY: McGraw-Hill Irwin
[2] O'Brien, J. & Marakas, G. (2008). Management Information Systems (pp. 185-189). New York, NY: McGraw-Hill Irwin
• M. T. Ozsu and P. Valduriez, Principles of Distributed Databases (2nd edition), Prentice-Hall, ISBN 0-13-659707-6
• Elmasri and Navathe, Fundamentals of database systems (3rd edition), Addison-Wesley Longman, ISBN 0-201-54263-3
•  This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm).

Distributed design patterns

In software engineering, a distributed design pattern is a design pattern focused on distributed computing problems.

Classification
Distributed design patterns can be divided into several groups:
• Distributed communication patterns
• Security and reliability patterns
• Event driven patterns

Examples
• MapReduce
• Bulk synchronous parallel
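The first of these example patterns is the subject of this book; its shape can be sketched in plain Python. This is a single-process model of the MapReduce pattern for illustration, not the API of any particular framework:

```python
from collections import defaultdict

def map_reduce(inputs, mapper, reducer):
    """Minimal single-process model of the MapReduce pattern."""
    # Map phase: each input item yields (key, value) pairs.
    intermediate = defaultdict(list)
    for item in inputs:
        for key, value in mapper(item):
            intermediate[key].append(value)   # "shuffle": group by key
    # Reduce phase: each key's values are combined independently,
    # which is what lets a real framework spread the work over nodes.
    return {key: reducer(key, values)
            for key, values in intermediate.items()}

# Classic word count.
def mapper(line):
    for word in line.split():
        yield word, 1

def reducer(word, counts):
    return sum(counts)

result = map_reduce(["to be or not", "to be"], mapper, reducer)
# result == {"to": 2, "be": 2, "or": 1, "not": 1}
```

A distributed implementation keeps exactly this structure but partitions the map and reduce work across machines, using the grouping step to route all values for one key to one reducer.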

Distributed Interactive Simulation

Distributed Interactive Simulation (DIS) is an open standard for conducting real-time platform-level wargaming across multiple host computers and is used worldwide, especially by military organizations but also by other agencies such as those involved in space exploration and medicine.

History
The standard was developed over a series of "DIS Workshops" at the Interactive Networked Simulation for Training symposium, held by the University of Central Florida's Institute for Simulation and Training (IST). IST was contracted by the United States Defense Advanced Research Project Agency to undertake research in support of the US Army Simulator Network (SimNet) program. The standard itself is very closely patterned after the original SIMNET distributed interactive simulation protocol, developed by Bolt, Beranek and Newman (BBN) for the Defense Advanced Research Project Agency (DARPA) in the early through late 1980s. In the early 1990s, BBN introduced the concept of dead reckoning to efficiently transmit the state of battlefield entities.

Funding and research interest for DIS standards development decreased following the proposal and promulgation of its successor, the High Level Architecture (simulation) (HLA), in 1996. HLA was produced by the merger of the DIS protocol with the Aggregate Level Simulation Protocol (ALSP) designed by MITRE. There was a NATO standardisation agreement (STANAG 4482, Standardised Information Technology Protocols for Distributed Interactive Simulation (DIS), adopted in 1995) on DIS for modelling and simulation interoperability. This was retired in favour of HLA in 1998 and officially cancelled in 2010 by the NATO Standardisation Agency (NSA).

The DIS family of standards
DIS is defined under IEEE Standard 1278:
• IEEE 1278-1993 - Standard for Distributed Interactive Simulation - Application protocols
• IEEE 1278.1-1995 - Standard for Distributed Interactive Simulation - Application protocols[1] (Errata, May 1998)
• IEEE 1278.1A-1998 - Standard for Distributed Interactive Simulation - Application protocols
• IEEE 1278.2-1995 - Standard for Distributed Interactive Simulation - Communication Services and Profiles
• IEEE 1278.3-1996 - Recommended Practice for Distributed Interactive Simulation - Exercise Management and Feedback
• IEEE 1278.4-1997 - Recommended Practice for Distributed Interactive Simulation - Verification Validation & Accreditation
• IEEE 1278.5-XXXX - Fidelity Description Requirements (never published)

In addition to the IEEE standards, the Simulation Interoperability Standards Organization (SISO) maintains and publishes an "enumerations and bit encoded fields" document yearly. This document is referenced by the IEEE standards and used by DIS, TENA and HLA federations. Both PDF and XML versions are available.

Current status
SISO, a sponsor committee of the IEEE, promulgates improvements in DIS, not only including the formal standards, but also drafts submitted during the standards balloting process. Major changes are already in the DIS 7 draft update to IEEE 1278.1[1] to make DIS more extensible, efficient, and able to support the simulation of more real world capabilities.[2]

Application protocol
Simulation state information is encoded in formatted messages, known as protocol data units (PDUs), and exchanged between hosts using existing transport layer protocols, including multicast, though broadcast User Datagram Protocol is also supported.

There are several versions of the DIS application protocol:
• Version 1 - Standard for Distributed Interactive Simulation - Application Protocols, Version 1.0 Draft (1992)
• Version 2 - Standard for Distributed Interactive Simulation - Application Protocols, Version 2.0 Third Draft (May 1993)
• Version 3 - IEEE 1278-1993
• Version 4 - Standard for Distributed Interactive Simulation - Application Protocols, Version 2.0 Fourth Draft (March 1994)
• Version 5 - IEEE 1278.1-1995
• Version 6 - IEEE 1278.1a-1998 (amendment to IEEE 1278.1-1995)
• Version 7 - IEEE 1278.1-2010 (in preparation - scheduled for completion and IEEE balloting in the Spring of 2010)[2]

Version 7 is also called DIS 7. This is a major upgrade to DIS to enhance extensibility and flexibility. It provides extensive clarification and more details of requirements, and adds some higher-fidelity mission capabilities. See External Link - DIS Product Development Group.

Protocol data units
The current version (DIS 6) defines 67 different PDU types,[3] arranged into 12 families. Frequently used PDU types are listed below for each family. PDU and family names shown in italics are included in the present draft of DIS 7.
• Entity information/interaction family - Entity State, Collision, Collision-Elastic, Entity State Update, Attribute
• Warfare family - Fire, Detonation, Directed Energy Fire, Entity Damage Status
• Logistics family - Service Request, Resupply Offer, Resupply Received, Resupply Cancel, Repair Complete, Repair Response
• Simulation management family - Start/Resume, Stop/Freeze, Acknowledge
• Distributed emission regeneration family - Designator, Electromagnetic Emission, IFF/ATC/NAVAIDS, Underwater Acoustic, Supplemental Emission/Entity State (SEES)
• Radio communications family - Transmitter, Signal, Receiver, Intercom Signal, Intercom Control
• Entity management family
• Minefield family
• Synthetic environment family
• Simulation management with reliability family
• Live entity family
• Non-real time family
• Information Operations family - Information Operations Action, Information Operations Report
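To make "formatted messages" concrete, the sketch below packs and unpacks a PDU-header-like structure with Python's struct module. The 12-byte layout shown (one byte each for protocol version, exercise ID, PDU type and protocol family, then a 32-bit timestamp, a 16-bit length and padding) follows the general shape of the DIS PDU header, but the normative encoding is defined by IEEE 1278.1, so treat this layout as an illustrative assumption:

```python
import struct

# Big-endian layout assumed for illustration:
# version, exercise id, pdu type, protocol family (1 byte each),
# timestamp (4 bytes), length (2 bytes), padding (2 bytes).
HEADER_FMT = ">BBBBIHH"

def pack_header(version, exercise, pdu_type, family, timestamp, length):
    return struct.pack(HEADER_FMT, version, exercise, pdu_type,
                       family, timestamp, length, 0)

def unpack_header(data):
    v, ex, pt, fam, ts, ln, _pad = struct.unpack(HEADER_FMT, data)
    return {"version": v, "exercise": ex, "pdu_type": pt,
            "family": fam, "timestamp": ts, "length": ln}

# Example: a version-6 header for PDU type 1 in exercise 1.
raw = pack_header(6, 1, 1, 1, 123456, 144)
assert len(raw) == 12   # fixed-size header
```

A fixed, self-describing header like this is what lets hosts on a multicast network decode any incoming PDU far enough to dispatch it to the right family-specific handler.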

References
[1] "Corrections to Standard for Distributed Interactive Simulation - Application protocols" (http://standards.ieee.org/reading/ieee/updates/errata/1278.1-1995.pdf). IEEE. Retrieved 2010-05-17.
[2] DIS 7 Overview. SISO PSG File Library (http://www.sisostds.org/DigitalLibrary.aspx?EntryId=29288)
[3] "1278.1a-1998 IEEE Standard for Distributed Interactive Simulation - Application Protocols" (http://ieeexplore.ieee.org/servlet/opac?punumber=5896). IEEE. Retrieved 2010-05-17.

External links
• SISO DIS Product Support Group (http://www.sisostds.org/StandardsActivities/SupportGroups/DISPSGDistributedInteractiveSimulation.aspx)

Distributed lock manager

A distributed lock manager (DLM) provides distributed software applications with a means to synchronize their accesses to shared resources.

DLMs have been used as the foundation for several successful clustered file systems, in which the machines in a cluster can use each other's storage via a unified file system, with significant advantages for performance and availability. The main performance benefit comes from solving the problem of disk cache coherency between participating computers. The DLM is used not only for file locking but also for coordination of all disk access.

VMS implementation
VMS was the first widely-available operating system to implement a DLM. This became available in Version 4, although the user interface was the same as the single-processor lock manager that was first implemented in Version 3. VMScluster, the first clustering system to come into widespread use, relies on the OpenVMS DLM in just this way.

Resources
The DLM uses a generalised concept of a resource, which is some entity to which shared access must be controlled. This can relate to a file, a record, an area of shared memory, or anything else that the application designer chooses. A hierarchy of resources may be defined, so that a number of levels of locking can be implemented. For instance, a hypothetical database might define a resource hierarchy as follows:
• Database
• Table
• Record
• Field
A process can then acquire locks on the database as a whole, and then on particular parts of the database. A lock must be obtained on a parent resource before a subordinate resource can be locked.
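The parent-before-child rule can be sketched with a small helper (invented for this illustration, not a VMS interface): a lock on a subordinate resource such as db.accounts.rec42 is granted only if every ancestor is already locked.

```python
class LockHierarchy:
    """Sketch of hierarchical resource locking: parents come first."""
    def __init__(self):
        self.locked = set()

    def lock(self, resource):
        # "db.accounts.rec42" requires "db" and "db.accounts" first.
        parts = resource.split(".")
        for depth in range(1, len(parts)):
            parent = ".".join(parts[:depth])
            if parent not in self.locked:
                raise RuntimeError("parent %r not locked" % parent)
        self.locked.add(resource)

    def unlock(self, resource):
        self.locked.discard(resource)

h = LockHierarchy()
h.lock("db")                 # whole database
h.lock("db.accounts")        # one table
h.lock("db.accounts.rec42")  # one record
```

Walking down the hierarchy this way lets coarse locks on high-level resources coexist with fine-grained locks underneath them.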

Lock modes
A process running within a VMSCluster may obtain a lock on a resource. There are six lock modes that can be granted, and these determine the level of exclusivity of access to the resource. Once a lock has been granted, it is possible to convert the lock to a higher or lower level of lock mode. When all processes have unlocked a resource, the system's information about the resource is destroyed.

• Null Lock (NL). Indicates interest in the resource, but does not prevent other processes from locking it. It has the advantage that the resource and its lock value block are preserved, even when no processes are locking it.
• Concurrent Read (CR). Indicates a desire to read (but not update) the resource. It allows other processes to read or update the resource, but prevents others from having exclusive access to it. This is usually employed on high-level resources, in order that more restrictive locks can be obtained on subordinate resources.
• Concurrent Write (CW). Indicates a desire to read and update the resource. It also allows other processes to read or update the resource, but prevents others from having exclusive access to it. This is also usually employed on high-level resources, in order that more restrictive locks can be obtained on subordinate resources.
• Protected Read (PR). This is the traditional share lock, which indicates a desire to read the resource but prevents others from updating it. Others can however also read the resource.
• Protected Write (PW). This is the traditional update lock, which indicates a desire to read and update the resource and prevents others from updating it. Others with Concurrent Read access can however read the resource.
• Exclusive (EX). This is the traditional exclusive lock which allows read and update access to the resource, and prevents others from having any access to it.

The following truth table shows the compatibility of each lock mode with the others:

Mode  NL   CR   CW   PR   PW   EX
NL    Yes  Yes  Yes  Yes  Yes  Yes
CR    Yes  Yes  Yes  Yes  Yes  No
CW    Yes  Yes  Yes  No   No   No
PR    Yes  Yes  No   Yes  No   No
PW    Yes  Yes  No   No   No   No
EX    Yes  No   No   No   No   No

Obtaining a lock
A process can obtain a lock on a resource by enqueueing a lock request. This is similar to the QIO technique that is used to perform I/O. The enqueue lock request can either complete synchronously, in which case the process waits until the lock is granted, or asynchronously, in which case an AST occurs when the lock has been obtained.

It is also possible to establish a blocking AST, which is triggered when a process has obtained a lock that is preventing access to the resource by another process. The original process can then optionally take action to allow the other access (e.g. by demoting or releasing the lock).
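The truth table above translates directly into a lookup that a lock manager can consult when a request arrives; a minimal sketch (the function name is ours, not a VMS service):

```python
# Compatibility of a requested mode with an already-granted mode,
# transcribed from the truth table above (1 = compatible).
MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]
COMPAT = {
    "NL": {"NL": 1, "CR": 1, "CW": 1, "PR": 1, "PW": 1, "EX": 1},
    "CR": {"NL": 1, "CR": 1, "CW": 1, "PR": 1, "PW": 1, "EX": 0},
    "CW": {"NL": 1, "CR": 1, "CW": 1, "PR": 0, "PW": 0, "EX": 0},
    "PR": {"NL": 1, "CR": 1, "CW": 0, "PR": 1, "PW": 0, "EX": 0},
    "PW": {"NL": 1, "CR": 1, "CW": 0, "PR": 0, "PW": 0, "EX": 0},
    "EX": {"NL": 1, "CR": 0, "CW": 0, "PR": 0, "PW": 0, "EX": 0},
}

def can_grant(requested, granted_modes):
    """A request is grantable iff compatible with every granted lock."""
    return all(COMPAT[requested][g] for g in granted_modes)

# Two protected-read locks coexist; an exclusive request must wait.
assert can_grant("PR", ["PR", "CR"])
assert not can_grant("EX", ["PR"])
```

A request that fails this check is queued (or triggers blocking ASTs on the current holders) rather than being granted.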

Lock value block
A lock value block is associated with each resource. This can be read by any process that has obtained a lock on the resource (other than a null lock) and can be updated by a process that has obtained a protected update or exclusive lock on it.

It can be used to hold any information about the resource that the application designer chooses. A typical use is to hold a version number of the resource. Each time the associated entity (e.g. a database record) is updated, the holder of the lock increments the lock value block. When another process wishes to read the resource, it obtains the appropriate lock and compares the current lock value with the value it had last time the process locked the resource. If the value is the same, the process knows that the associated entity has not been updated since last time it read it, and therefore it is unnecessary to read it again. Hence, this technique can be used to implement various types of cache in a database or similar application.

Deadlock detection
When one or more processes have obtained locks on resources, it is possible to produce a situation where each is preventing another from obtaining a lock, and none of them can proceed. This is known as a deadly embrace or deadlock.

A simple example is when Process 1 has obtained an exclusive lock on Resource A, and Process 2 has obtained an exclusive lock on Resource B. If Process 1 then tries to lock Resource B, it will have to wait for Process 2 to release it. But if Process 2 then tries to lock Resource A, both processes will wait forever for each other.

The OpenVMS DLM periodically checks for deadlock situations. In the example above, the second lock enqueue request of one of the processes would return with a deadlock status. It would then be up to this process to take action to resolve the deadlock — in this case by releasing the first lock it obtained.

Linux clustering
Both Red Hat and Oracle have developed clustering software for Linux. OCFS2, the Oracle Cluster File System, was added[1] to the official Linux kernel with version 2.6.16, in January 2006. The alpha-quality code warning on OCFS2 was removed in 2.6.19. Red Hat's cluster software, including their DLM and Global File System, was officially added to the Linux kernel[2] with version 2.6.19, in November 2006. Both systems use a DLM modeled on the venerable VMS DLM.[3] Oracle's DLM has a simpler API (the core function, dlmlock(), has eight parameters, whereas the VMS SYS$ENQ service and Red Hat's dlm_lock both have 11).

Google's Chubby lock service
Google has developed Chubby, a lock service for loosely-coupled distributed systems.[4] It is designed for coarse-grained locking and also provides a limited but reliable distributed file system. Key parts of Google's infrastructure, including Google File System, BigTable, and MapReduce, use Chubby to synchronize accesses to shared resources. Though Chubby was designed as a lock service, it is now heavily used inside Google as a name server, supplanting DNS.[4]
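Deadlock checks of this kind are commonly described as cycle detection in a "wait-for" graph, where each process points at the process it is waiting on. The sketch below illustrates the idea; it is not the actual OpenVMS algorithm.

```python
def find_deadlock(wait_for):
    """Return a cycle of processes in the wait-for graph, or None.

    wait_for maps each process to the process it is waiting on (if any).
    """
    for start in wait_for:
        seen = []
        node = start
        while node in wait_for:
            if node in seen:
                return seen[seen.index(node):]  # the cycle itself
            seen.append(node)
            node = wait_for[node]
    return None

# The deadly embrace from the text: P1 waits on P2, P2 waits on P1.
assert find_deadlock({"P1": "P2", "P2": "P1"}) == ["P1", "P2"]
assert find_deadlock({"P1": "P2"}) is None
```

When a cycle is found, the lock manager picks one member as the victim and fails its enqueue request with a deadlock status, exactly as described above.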

SSI Systems
A DLM is also a key component of more ambitious single system image projects such as OpenSSI.

References
[1] http://www.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=29552b1462799afbe02af035b243e97579d63350
[2] http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=1c1afa3c053d4ccdf44e5a4e159005cdfd48bfc6
[3] http://lwn.net/Articles/137278/
[4] http://labs.google.com/papers/chubby.pdf

External links
• HP OpenVMS Systems Services Reference Manual – $ENQ (http://h71000.www7.hp.com/doc/82FINAL/4527/4527pro_044.html#jun_227)
• ARCS - A Web Service used as a Distributed Lock Manager (http://www.arcs.us)

Distributed memory

In computer science, distributed memory refers to a multiple-processor computer system in which each processor has its own private memory. Computational tasks can only operate on local data, and if remote data is required, the computational task must communicate with one or more remote processors. In contrast, a shared memory multiprocessor offers a single memory space used by all processors. Processors do not have to be aware where data resides, except that there may be performance penalties, and that race conditions are to be avoided.

[Illustration: a distributed memory system of three computers.]

Architecture
In a distributed memory system there is typically a processor, a memory, and some form of interconnection that allows programs on each processor to interact with each other. The interconnect can be organised with point-to-point links, or separate hardware can provide a switching network. The network topology is a key factor in determining how the multi-processor machine scales. The links between nodes can be implemented using some standard network protocol (for example Ethernet), using bespoke network links (used in, for example, the Transputer), or using dual-ported memories.

Programming distributed memory machines
The key issue in programming distributed memory systems is how to distribute the data over the memories. Depending on the problem solved, the data can be distributed statically, or it can be moved through the nodes; data can be moved on demand, or pushed to the new nodes in advance.

As an example, if a problem can be described as a pipeline where data X is processed subsequently through functions F, G, H, etc. (the result is H(G(F(X)))), then this can be expressed as a distributed memory problem where the data is transmitted first to the node that performs F, which passes the result onto the second node that computes G, and finally to the third node that computes H. This is also known as systolic computation.

Data can be kept statically in nodes if most computations happen locally, and only changes on edges have to be reported to other nodes. An example of this is simulation, where data is modeled using a grid and each node simulates a small part of the larger grid. On every iteration, nodes inform all neighboring nodes of the new edge data.

Distributed shared memory
Similarly, in distributed shared memory each node of a cluster has access to a large shared memory in addition to each node's limited non-shared private memory.

Shared memory versus distributed memory versus distributed shared memory
The advantage of (distributed) shared memory is that it offers a unified address space in which all data can be found. The advantage of distributed memory is that it excludes race conditions, and that it forces the programmer to think about data distribution. The advantage of distributed (shared) memory is that it is easier to design a machine that scales with the algorithm. Distributed shared memory hides the mechanism of communication - it does not hide the latency of communication.
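The pipeline example can be sketched with worker threads connected by message queues, each stage owning its data locally and forwarding results to the next node. Threads stand in for separate machines here; a real distributed-memory system would send the same messages over the interconnect.

```python
import threading
import queue

def stage(func, inbox, outbox):
    """One 'node': receive a message, compute on local data, forward it."""
    outbox.put(func(inbox.get()))

def f(x): return x + 1
def g(x): return x * 2
def h(x): return x - 3

# One queue per link: feed -> F -> G -> H -> result.
links = [queue.Queue() for _ in range(4)]
workers = [threading.Thread(target=stage, args=(fn, links[i], links[i + 1]))
           for i, fn in enumerate([f, g, h])]
for w in workers:
    w.start()

links[0].put(10)            # X enters the pipeline
result = links[3].get()     # H(G(F(10))) = (10 + 1) * 2 - 3 = 19
for w in workers:
    w.join()
```

Note that no stage ever reads another stage's variables; all sharing happens through explicit messages, which is exactly the property that rules out race conditions on the data itself.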

Distributed object

The term distributed objects usually refers to software modules that are designed to work together, but reside either in multiple computers connected via a network or in different processes inside the same computer. One object sends a message to another object in a remote machine or process to perform some task. The results are sent back to the calling object.

[Image: communication between distributed objects residing in different machines.]

The term may also generally refer to one of the extensions of the basic object concept used in the context of distributed computing:
• Replicated objects are groups of software components (replicas) that run a distributed multi-party protocol to achieve a high degree of consistency between their internal states, and that respond to requests in a coordinated manner. Referring to the group of replicas jointly as an object reflects the fact that interacting with any of them exposes the same externally visible state and behavior.
• Live distributed objects (or simply live objects)[1] generalize the replicated object concept to groups of replicas that might internally use any distributed protocol, perhaps resulting in only a weak consistency between their local states. Live distributed objects can also be defined as running instances of distributed multi-party protocols, viewed from the object-oriented perspective as entities that have distinct identity, and that can encapsulate distributed state and behavior.

Examples
Distributed objects are implemented in Objective-C using the Cocoa API with the NSConnection class and supporting objects. Distributed objects are used in Java RMI. CORBA lets one build distributed mixed object systems. DCOM is a framework for distributed objects on the Microsoft platform. DDObjects is a framework for distributed objects using Borland Delphi. Jt is a framework for distributed components using a messaging paradigm. JavaSpaces is a Sun specification for a distributed, shared memory (spaces based).

Local vs Distributed Objects
Local and distributed objects differ in many respects.[2] Here are some of them:
1. Life cycle: Creation, migration and deletion of distributed objects is different from local objects.
2. Reference: Remote references to distributed objects are more complex than simple pointers to memory addresses.
3. Request latency: A distributed object request is orders of magnitude slower than local method invocation.
4. Object activation: Distributed objects may not always be available to serve an object request at any point in time.
5. Parallelism: Distributed objects may be executed in parallel.
6. Communication: There are different communication primitives available for distributed object requests.
7. Failure: Distributed objects have far more points of failure than typical local objects.
8. Security: Distribution makes them vulnerable to attack.
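Several of these differences (remote references, request latency, marshalling of results) can be seen in miniature with Python's standard xmlrpc module, which is generic RPC rather than any of the frameworks listed:

```python
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

# A "remote" object exposing one method.
class Greeter:
    def greet(self, name):
        return "hello " + name

# Serve it in another thread (standing in for another machine).
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_instance(Greeter())
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The caller holds only a remote reference (a URL), not a pointer;
# the call is marshalled into a message and the result sent back.
proxy = ServerProxy("http://127.0.0.1:%d" % port)
reply = proxy.greet("world")
server.shutdown()
```

The call site looks like a local method invocation, but every invocation is a network round trip that can fail independently of the caller, which is exactly why the list above treats latency, activation and failure as first-class concerns.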

Pyro is a framework for distributed objects using the Python programming language. Distributed Ruby (DRb) is a framework for distributed objects using the Ruby programming language.

References
[1] Ostrowski, K., Birman, K., Dolev, D., and Ahnn, J. (2008). "Programming with Live Distributed Objects", Proceedings of the 22nd European Conference on Object-Oriented Programming, Paphos, Cyprus, July 07-11, 2008, Ed. J. Vitek, Lecture Notes In Computer Science, vol. 5142, Springer-Verlag, Berlin, Heidelberg, 463-489. http://portal.acm.org/citation.cfm?id=1428508.1428536
[2] W. Emmerich (2000) Engineering distributed objects, John Wiley & Sons Ltd.

Distributed shared memory

Distributed Shared Memory (DSM), in computer architecture, is a form of memory architecture where the (physically separate) memories can be addressed as one (logically shared) address space. Here, the term shared does not mean that there is a single centralized memory; shared essentially means that the address space is shared (the same physical address on two processors refers to the same location in memory).[1] Alternatively in computer science it is known as distributed global address space (DGAS), a concept that refers to a wide class of software and hardware implementations, in which each node of a cluster has access to shared memory in addition to each node's non-shared private memory.

Software DSM systems can be implemented in an operating system, or as a programming library. Software DSM systems implemented in the operating system can be thought of as extensions of the underlying virtual memory architecture. Such systems are transparent to the developer, which means that the underlying distributed memory is completely hidden from the users. In contrast, software DSM systems implemented at the library or language level are not transparent, and developers usually have to program them differently. However, these systems offer a more portable approach to DSM system implementation. A coherence protocol, chosen in accordance with a consistency model, maintains memory coherence.

Software DSM systems also have the flexibility to organize the shared memory region in different ways. The page based approach organizes shared memory into pages of fixed size. In contrast, the object based approach organizes the shared memory region as an abstract space for storing shareable objects of variable sizes. Another commonly seen implementation uses a tuple space, in which the unit of sharing is a tuple.

Shared memory architecture may involve separating memory into shared parts distributed amongst nodes and main memory, or distributing all memory between nodes.

Examples of such systems include:
• Kerrighed
• OpenSSI
• MOSIX
• Terracotta
• TreadMarks
• DIPC
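The tuple space organization mentioned above can be sketched in a few lines: processes communicate only by depositing tuples into, and withdrawing matching tuples from, an associative store. This is a toy single-process model of the idea (with None acting as a wildcard in patterns), not the JavaSpaces or Linda API:

```python
import threading

class TupleSpace:
    """Toy tuple space: 'out' adds a tuple, 'in_' removes a match."""
    def __init__(self):
        self.tuples = []
        self.cond = threading.Condition()

    def out(self, tup):
        with self.cond:
            self.tuples.append(tup)
            self.cond.notify_all()

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def in_(self, pattern):
        """Block until a tuple matching the pattern exists, remove it."""
        with self.cond:
            while True:
                for tup in self.tuples:
                    if self._match(pattern, tup):
                        self.tuples.remove(tup)
                        return tup
                self.cond.wait()

space = TupleSpace()
space.out(("result", 42))
value = space.in_(("result", None))   # matches any ("result", x) tuple
```

Because withdrawal is atomic, a tuple doubles as both a datum and a lock: whichever worker takes it owns it until it is put back.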

References
[1] Patterson, David A. and John L. Hennessy (2007). Computer architecture: a quantitative approach, Fourth Edition. Morgan Kaufmann Publishers, p. 201. ISBN 0123704901.

External links
• Distributed Shared Cache (http://www.sharedcache.com)
• Memory coherence in shared virtual memory systems (http://portal.acm.org/citation.cfm?id=75105&am) by Kai Li and Paul Hudak, published in ACM Transactions on Computer Systems, Volume 7 Issue 4, Nov. 1989

Distributed social network

A distributed social network is an Internet social network service that is decentralized and distributed across different providers. The emphasis of the distribution is on portabilitya[›], interoperability and federation capability. It contrasts with social network aggregation services, which are used to manage accounts and activities across multiple discrete social networks.

A few social networking service providers have used the term more broadly to describe provider-specific services that are distributable across different websites, typically through added widgets or plug-ins. Through the add-ons, the social network functionality is implemented on users' websites.

Open standards such as OAuth authorization, OpenID authentication, XRD metadata discovery, the Portable Contacts protocol, the Wave Federation Protocol, OStatus federation, OpenSocial widget APIs, microformats like XFN and hCard, and Atom web feeds — increasingly referred to together as the Open Stack — are often cited as enabling technologies for distributed social networking.[1]

Comparison of projects
The protocols of these projects are generally open and free, and the software of the projects is generally free and open source. For each project, the comparison records, where known: features, software, programming language, license, protocols, privacy, federation support (with other applications or services), instances, and version/maturity.

• 6d[2]: Blog, photos, media library, addressbook, private messaging. Software: server.[3] Language: PHP. License: MIT. Protocols: HTTP + REST, microformats.[4] Privacy: addressbook used to send posts to either individuals or groups. Federation: not yet. Instances: demo.[5] Maturity: alpha; 5 total.
• Ampify: Trust-based search, themeable, web-hook style sensor network development, application framework. License: Public Domain. Protocols: HTTPS, Ampify Messaging Protocol. Privacy: fine grained privacy control through object capability security and transport layer encryption. Maturity: not yet released.
• Appleseed[6]: Photos, friend circles, journals, status updates, groups, newsfeeds.[7] Software: Appleseed server.[8] Language: PHP. License: GPLv2. Protocols: QuickSocial. Privacy: friend circles used to categorize friends and restrict/allow access. Federation: internally; others easily added (plugin architecture).[9] Instances: approximately 120. Maturity: beta.
• buddycloud[10][11]: Location, mood, XMPP chat, photo/video sharing, buddycloud channels. Software: channel server.[12][13] Language: Java. License: Apache 2.0. Protocols: XMPP, Location Query, Channel Protocol,[14] Activity Streams. Federation: yes, in development.[15] Maturity: beta, in use.
• Diaspora: Microblogging, photo sharing. Software: server.[16] Language: Ruby. License: AGPL 3.0. Protocols: Salmon,[17] XMPP (in testing), OStatus (due in next release).[18] Privacy: 'aspects' used for friend management. Federation: yes, in development. Instances: 24 listed on the Diaspora Alpha Wiki.[19] Maturity: pre-alpha, but pre-1.0 and changing.
• Diaspora X 2[20]: Client using XMPP,[21] with buddycloud for federation.[22] Instances: Diaspora X 2.[24]
• DiSo Project[23]: WordPress plugins.[25][26] Protocols: microformats (XFN, XOXO), OpenID, OAuth.
• DSNP (Distributed Social Networking Protocol)[27]: Software: DSNPd (server daemon), ChoiceSocial (web interface).[28] License: GPLv2. Protocols: DSNP.[29][30] Privacy: connect to known individuals; data is digitally signed. Instances: Friends in Feed,[31] ChoiceSocial,[32] DistribSocial,[33] opendd.net. Maturity: in use.
• Duuit!: Search, blog, forum, messaging, email, IRC, video chat, games. Privacy: excellent. Maturity: in use.
• Freenet: Censorship resistant publishing; anonymity and pseudonymity; customizable interface. License: GPL. Protocols: global darknet DHT on restricted routes (FOAF) or Opennet (anonymizing DHT); WebOfTrust; FOAF. Privacy: privacy controls, anonymous DVCS. Maturity: stable.
• Friend2Friend[35]: Strong encryption; XML for all data exchange; data is digitally signed. Language: Java. License: GPL.[34] Protocols: UDP; third party plugins. Architecture: p2p, hosted on every user's computer. Maturity: stable.

• Friendika[36][37]: Rich profiles; richtext status (not specifically length limited); multiple profiles with assignment to specific friends; community/group/celebrity pages; photo albums; like/dislike; location and maps; automatically updated address book from remote data sources; local and global directory services; single sign-on to post directly to friends' profiles on co-operating systems; contact import from Web 2.0 services via XFN and FOAF; ability to restrict connection endpoints; communications encryption; more in development. Software: server components.[38] Language: PHP. Protocols: OStatus, OpenID, DFRN.[39] Federation: extensive (Friendika, Facebook, Twitter, identi.ca/Status.Net, blogs/feeds/Diaspora/Google via RSS/Atom, email). Instances: demo.[40] Maturity: stable/production.
• GNU social: Microblogging, federation server.[41] Software: server.[42] License: AGPLv3. Protocols: OStatus.[43] Federation: yes. Instances: daisycha.in[44] (based on StatusNet). Maturity: in development.
• Jappix[45]: XMPP client + microblogging. Software: server, web client. License: AGPL. Protocols: XMPP. Privacy: excellent, based on presence authorizations. Instances: demo.[46] Maturity: production.
• Knowee: OpenID signup; consolidated profile with RDF/FOAF export; personal SPARQL API. Standards: W3C OpenID, FOAF. Maturity: alpha.
• Kopal[47]: Kopal Connect, Kopal Feed. Software: server.[48] License: MIT. Protocols: OpenID Core, Kopal Connect protocol, Kopal Feed microformat. Maturity: alpha.[49]
• Kune[50]: Real-time collaborative edition; groups; XMPP chat interoperable with other XMPP-compliant services; Apache Wave inbox (modern email); wave extensions (gadgets, robots); galleries (photos, videos); maps; public webpages. Software: server, web client (Java-based, GWT, AJAX). License: AGPLv3. Protocols: XMPP, Wave Federation Protocol. Federation: excellent; total federation/interoperability with other Kune installations and Apache Wave accounts. Instances: demo.[51] Maturity: alpha.[52]

• Lorea[53]: Profiles, microblogging, groups, subgroups, wikis, calendar, tasks, galleries, blogs, streams, tag clouds. Software: Elgg plugins.[54][55] License: GPL. Protocols: OStatus (60% production), OpenID, RSS, Semantic Pingback, Webfinger, OAuth, PubSubHubbub, XMPP/psyc (50% development), rdf+sparql (10% development), (partial) Twitter API support, Social Graph API. Federation: active development; features being added. Instances: production.[56] Maturity: production.
• Movim: XMPP client + microblogging. Language: PHP. License: AGPLv3. Protocols: XMPP. Privacy: excellent. Maturity: in development.[57][58]
• Mr. Privacy: Social networking over email. Protocols: SMTP, IMAP. Software: sample server. Maturity: in development.[59][60]
• NoseRub: NoseRub protocol / WebID. License: SimPL 2.0. Federation: yes. Instances: demo. Maturity: in development.
• ObjectCloud: Customization, security, flexible hosting; editable widgets; user interface consumes REST API; fully RESTful design. Protocols: WebDAV, HTTP, OpenID, WebID and others. Federation: yes. Maturity: alpha.
• OneSocialWeb[61]: Microblogging. Software: Openfire plugin, clients. Language: Java. License: Apache 2. Protocols: XMPP, XMPP extensions.[63] Federation: yes. Maturity: alpha; active developer community.
• OpenLink Data Spaces (ODS)[62][64]: Blogs, wikis, file servers (WebDAV based Briefcase), calendars, discussion forums (includes NNTP support), address spaces, bookmarks, feed aggregation, profile management, Flickr integration. Protocols: OpenID, WebID, OAuth, PubSubHubbub, SPARQL, Atom Publishing and more. License: dual — commercial, and GPL for the Open Source Edition. Federation: yes. Instances: id.myopenlink.net among others.[65] Maturity: active use.
• OpenMicroBlogger: User-toggleable "apps" to add/remove functionality. License: MIT. Protocols: Open Microblogging 0.1, OpenID, OAuth, Portable Contacts, Activity Streams, PubSubHubbub, RSSCloud; partial OStatus (PubSubHubbub) federation as well as Open Microblogging 0.1. Federation: yes. Maturity: in use.
• ownCloud: Cloud storage and plugins for photos, media, calendar, tasks, books. License: AGPLv3. Protocols: WebDAV, OpenID, Open Collaboration Services. Federation: yes. Maturity: ver. 1.1; working on OStatus.

editable widgets. including communication untraceability ? demo [67] beta [68] SMOB Social-Igniter microblogging FOAF server GPL Webfinger. ? TELNET.Distributed social network Project Danube 1) Sharing personal data with companies/organizations 2) Sharing personal data with "friends" 3) Use of personal data for "personal applications" Project Nori OStatus. modular apps (messages. HTTP. Privacy Controls ? Yes Alpha Yes OpenID. status. OpenID No Yes Beta StatusNet microblogging Server. Salmon StatusNet and Cliqset. media). XMPP. RSS RSSN private messaging. Clients [73] PHP AGPLv3 OStatus. chat. OpenID.9 (Active use) Thimbl Weestit microblogging Finger. Yes Yes SocialRiver [70] GPL AGPL OStatus [71] . web client OSMP (Open Social Message Protocol) Socknet ProviderFoolishMortal. OpenMicroBlogging (deprecated) Available for sites. will add support for OAuth SocialZE [72] server. likely Eclipse or Apache OStatus. planned for accounts and posts ? Planned for future Yes Identi. Portable Contacts. SMTP. YouTube). IRC. Applet. PubSubHubbub.0. XRI. Twiter. hCard. places. Webfinger. comments. Private Messaging. Portable Contacts. XDI. OAuth.20 2010 . mobile themes. 3rd party integration (Facebook. WAP. PubSubHubbub. TWiT [75] 0.org profiles. and other open protocols psyced profiles. OStatus. OAuth. groups Safebook RSSN ? ? ? ? Yes TBD. cart.ca Army [74] . FOAF. SSH XMPP. OAuth 2. POP development alpha planned Yes Planned Nov. OpenID. themes. messaging. enables internet content sharing Socknet. SMTP. HTTP. microblogging GPLv2 MIT PSYC. among others 56 development early alpha concept [66] GPL Extensive. Activity Streams ? ? 3 production Alpha [69] friends. ? Webfinger. blog.

Notes

^ a: See DataPortability article.

References

[1] Recordon, David (2008-10-09). ""Blowing Up" Social Networks by Going Open" (http://www.slideshare.net/daveman692/blowing-up-social-networks-by-going-open-presentation/). Retrieved 5 January 2009.

External links

• Wiki of the Federated Social Web W3C Incubator Group (http://www.w3.org/2005/Incubator/federatedsocialweb/wiki/Main_Page)
• Federated Social Web Conference 2011 (http://d-cent.org/fsw2011/)
• Comparison of protocol/software projects for distributed social networking (http://gitorious.org/social/pages/ProjectComparison)
• Diploma thesis from the University of Applied Sciences Dresden (HTW) about XMPP-based federated social networks like buddycloud (http://buddycloud.com/cms/sites/default/files/thesis.pdf) (CC-BY)

Dryad (programming)

Dryad is an ongoing research project at Microsoft Research for a general-purpose runtime for execution of data-parallel applications.

An application written for Dryad is modeled as a directed acyclic graph (DAG). The DAG defines the dataflow of the application, and the vertices of the graph define the operations that are to be performed on the data. The "computational vertices" are written using sequential constructs, devoid of any concurrency or mutual exclusion semantics. The Dryad runtime parallelizes the dataflow graph by distributing the computational vertices across various execution engines (which can be multiple processor cores on the same computer or different physical computers connected by a network, as in a cluster). Scheduling of the computational vertices on the available hardware is handled by the Dryad runtime, without any explicit intervention by the developer of the application or the administrator of the network.

The flow of data between one computational vertex and another is implemented by using communication "channels" between the vertices, which in physical implementation are realized by TCP/IP streams, shared memory or temporary files. A stream is used at runtime to transport a finite number of structured items.

Dryad defines a domain-specific language, which is implemented via a C++ library, that is used to create and model a Dryad execution graph. Computational vertices are written using standard C++ constructs. To make them accessible to the Dryad runtime, they must be encapsulated in a class that inherits from the GraphNode base class. The graph is defined by adding edges; edges are added by using a composition operator (defined by Dryad) that connects two graphs (or two nodes of a graph) with an edge. Managed code wrappers for the Dryad API can also be written.

There exist several high-level language compilers which use Dryad as a runtime; examples include PSQL, Microsoft SCOPE and DryadLINQ.

References

• "DryadLINQ: A System for General-Purpose Distributed Data-Parallel Computing Using a High-Level Language" [1]. Microsoft Research. Retrieved 2009-01-21.
• "Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks" [2]. Microsoft Research. Retrieved 2007-12-04.
• "SCOPE: Easy and Efficient Parallel Processing of Massive Data Sets" [3]. Microsoft Research. Retrieved 2009-01-21.

External links

• Dryad: Programming the Data Center [4]
• Dryad Home [5]
• Video of Michael Isard explaining Dryad at Google [6]

[1] http://research.microsoft.com/en-us/projects/dryadlinq/dryadlinq.pdf
[2] http://research.microsoft.com/en-us/projects/dryadlinq/eurosys07.pdf
[3] http://research.microsoft.com/en-us/um/people/jrzhou/pub/Scope.pdf
[4] http://blogs.zdnet.com/microsoft/?p=18
[5] http://research.microsoft.com/research/sv/dryad/
[6] http://www.youtube.com/watch?v=WPhE5JCP2Ak
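The DAG-composition and dataflow-scheduling ideas described above can be sketched in a few lines. This is an illustrative toy, not the real Dryad C++ API: the class names, the use of ">>" as the composition operator and the serial "runtime" are all assumptions made for the example.

```python
# Toy sketch of a Dryad-style execution graph (hypothetical API, not Dryad's).
# Vertices hold sequential functions; a composition operator connects the
# sinks of one graph to the sources of another; a tiny "runtime" then runs
# each vertex once all of its input channels have delivered data.
from collections import defaultdict, deque

class Graph:
    def __init__(self, vertices=None, edges=None):
        self.vertices = dict(vertices or {})  # name -> sequential function
        self.edges = list(edges or [])        # (src, dst) data channels

    @classmethod
    def node(cls, name, fn):
        return cls({name: fn})

    def __rshift__(self, other):
        """Composition operator: edge from every sink of self to every source of other."""
        sinks = [v for v in self.vertices if not any(s == v for s, _ in self.edges)]
        sources = [v for v in other.vertices if not any(d == v for _, d in other.edges)]
        return Graph({**self.vertices, **other.vertices},
                     self.edges + other.edges + [(a, b) for a in sinks for b in sources])

    def run(self, seed):
        """Schedule vertices in topological (dataflow) order, serially."""
        indeg = {v: 0 for v in self.vertices}
        succ = defaultdict(list)
        for s, d in self.edges:
            indeg[d] += 1
            succ[s].append(d)
        inbox = defaultdict(list)
        ready = deque(v for v, n in indeg.items() if n == 0)
        results = {}
        while ready:
            v = ready.popleft()
            results[v] = self.vertices[v](inbox[v] or [seed])
            for d in succ[v]:
                inbox[d].append(results[v])
                indeg[d] -= 1
                if indeg[d] == 0:
                    ready.append(d)
        return results

dag = (Graph.node("read", lambda xs: xs[0])
       >> Graph.node("square", lambda xs: [x * x for x in xs[0]])
       >> Graph.node("sum", lambda xs: sum(xs[0])))
# dag.run([1, 2, 3])["sum"] evaluates 1 + 4 + 9
```

In real Dryad the channels would be TCP/IP streams, shared memory or temporary files and the vertices would run on different cores or machines; here the scheduler simply processes whichever vertex has all of its inputs available, which is the same dependency rule.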

Dynamic infrastructure

Dynamic Infrastructure is an information technology paradigm concerning the design of data centers so that the underlying hardware and software can respond dynamically to changing levels of demand in more fundamental and efficient ways than before. The paradigm is also known as Infrastructure 2.0 and Next Generation Data Center. Top-tier vendors promoting dynamic infrastructures include IBM,[1] [2] Microsoft,[3] Sun,[4] Fujitsu,[5] HP[6] and Dell.[7]

Fujitsu's definition: "Dynamic Infrastructures enable customers to assign IT resources dynamically to services as required and to choose sourcing models which best fit their businesses. This brings IT flexibility and efficiency to the next level."[12]

IBM's definition: "A dynamic infrastructure integrates business and IT assets and aligns them with the overall goals of the business while taking a smarter, new and more streamlined approach to helping improve service, reduce cost, and manage risk."[13]

Enterprises switching to Dynamic Infrastructures can reduce costs,[9] improve quality of service and make more efficient use of energy by reducing the number of standby or under-utilized machines in their data centers.[8] Potential benefits also include enhancing performance, scalability, system availability and uptime, and increasing server utilization, along with the ability to perform routine maintenance on either physical or virtual systems, all while minimizing interruption to business operations and reducing cost for IT. Dynamic Infrastructures provide the fundamental business continuity and high availability requirements to facilitate cloud or grid computing,[11] and may also be used to provide security and data protection when workloads are moved during migrations, provisioning, enhancing performance or building co-location facilities.[10]

Early examples of server-level Dynamic Infrastructures are the FlexFrame for SAP and FlexFrame for Oracle solutions introduced by Fujitsu Siemens Computers (now Fujitsu) in 2003. The FlexFrame approach is to dynamically assign servers to applications on demand, leveling peaks and enabling organizations to maximize the benefit from their IT investments. Instead of the hot-spare principle of keeping second servers on standby to replace all production machines in contingencies for hardware- and software-related failures, Dynamic Infrastructures provide for failover from a smaller pool of spare machines. By reducing redundant capacity, organizations can make more efficient use of their IT budgets and devote greater proportions of their budget to physical and virtual production servers.

For networking companies, Infrastructure 2.0 refers to the ability of networks to keep up with the movement and scale requirements of new enterprise IT initiatives, especially virtualization and cloud computing. According to companies like Cisco, F5 Networks and Infoblox, network automation and connectivity intelligence between networks, systems, applications and endpoints will be required to reap the full benefits of virtualization and many types of cloud computing. This will require network management and infrastructure to be consolidated, enabling higher levels of dynamic control and connectivity between networks, applications and endpoints.

The basic premise of Dynamic Infrastructures is to leverage pooled IT resources to provide flexible IT capacity, enabling the seamless, real-time allocation of IT resources in line with demand from business processes. This is achieved by using server virtualization technology to pool computing resources wherever possible, and by allocating these resources on demand using automated tools. This allows for load balancing and is a more efficient approach than keeping massive computing resources in reserve to run tasks that take place, for example, once a month but are otherwise under-utilized.
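The pooled, on-demand assignment just described can be sketched as a small scheduler. This is a minimal illustration of the idea, not any vendor's API; the class and method names are invented for the example.

```python
# Illustrative sketch of a pooled-resource scheduler (hypothetical names):
# servers are assigned to applications on demand and returned to a shared
# pool, and failover draws on that same shared pool of spares instead of
# a dedicated hot spare per production machine.
class ResourcePool:
    def __init__(self, servers):
        self.free = set(servers)
        self.assigned = {}              # server -> application

    def acquire(self, app):
        """Assign a free server to an application on demand."""
        if not self.free:
            raise RuntimeError("pool exhausted")
        server = self.free.pop()
        self.assigned[server] = app
        return server

    def release(self, server):
        """Return a server to the shared pool when demand subsides."""
        self.assigned.pop(server, None)
        self.free.add(server)

    def fail_over(self, failed):
        """Replace a failed server with one drawn from the shared spare pool."""
        app = self.assigned.pop(failed)
        return self.acquire(app)

pool = ResourcePool(["s1", "s2", "s3"])
web = pool.acquire("webshop")       # demand spike: webshop gets a server
spare = pool.fail_over(web)         # that server fails; a pooled spare takes over
```

The point of the sketch is the ratio: three pooled servers can back any number of applications against single failures, whereas the hot-spare principle would dedicate one standby machine per production machine.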

Dynamic Infrastructures are also used for managing spikes in demand and ensuring disaster recovery readiness.

"Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model." – Source: Gartner, "TCO of Traditional Software Distribution vs. Application Virtualization" / Michael A Silver, Terrence Cosgrove, Mark A Margevicious, Brian Gammage / 16 April 2008

"While green issues are a primary driver in 10% of current data center outsourcing and hosting initiatives, cost reduction initiatives are a driver 47% of the time and are now aligned well with green goals. Combining the two means that at least 57% of data center outsourcing and hosting initiatives are driven by green." – Source: Gartner, "Green IT Services as a Catalyst for Cost Optimization" / Kurt Potter / 4 December 2008

Need for a holistic approach

Even in the face of global uncertainty, it is the infrastructure that continues to enable commerce and communications – the roads, networks, and technologies connecting and differentiating organizations, competitors and customers. Until now, many organizations have thought of physical infrastructure and IT infrastructure as separate. This meant, for example, that airports, utilities, roadways, buildings, power plants, and oil wells were managed in one way, while datacenters, PCs, cell phones, routers, and broadband devices were managed quite differently. Now the infrastructure of atoms and the infrastructure of bits are merging into an intelligent, global, dynamic infrastructure, and its effect on organizations is equally far-reaching.

To succeed in today's world of instrumented, interconnected, and intelligent assets, a new approach is needed. The need, therefore, is for a new type of infrastructure that:
• Enables visibility, control and automation across all business and IT assets
• Is highly optimized to achieve more with less
• Addresses the information challenge
• Leverages flexible sourcing like clouds
• Manages and mitigates risks

Organizations need an infrastructure that can propel them forward – not hold them back. By design, every dynamic infrastructure is service-oriented and focused on supporting and enabling the end users in a highly responsive way. It can utilize alternative sourcing approaches, like cloud computing, to deliver new services with agility and speed. Global organizations already have the foundation for a dynamic infrastructure that will bring together the business and IT infrastructure to create new possibilities.

Benefits of having dynamic infrastructures

Dynamic infrastructures take advantage of intelligence gained across the network – throughout an organization's entire facilities as well as between one organization and another. The range of this approach is broader than ever before. For example:
• Transportation companies can optimize their vehicles' routes leveraging GPS and traffic information.
• Utility companies can reduce energy usage with a "smart grid."
• Communications companies can better monitor usage by location, user or function, and optimize routing to enhance user experience.
• Facilities organizations can secure access to locations and track the movement of assets by leveraging RFID technology.
• Production environments can monitor and manage presses, valves and assembly equipment through embedded electronics.
• Technology systems can be optimized for energy efficiency.

This convergence of business and IT assets requires an infrastructure that can measure and manage the lifecycle of assets that exist beyond the data center.

"By 2013, more than 50% of midsize organizations and more than 75% of large enterprises will implement layered recovery architectures." – Source: Gartner, "Predicts 2009: Business Continuity Management Juggles Standardization, Cost and Outsourcing Risk" / Roberta J Witty, John P Morency, Dave Russell, Donna Scott, Robert Desisto / 28 January 2009

The key to a business and IT infrastructure that is "dynamic" is leveraging technologies, software, service delivery and acquisition models that optimize the infrastructure for efficiency and flexibility while transforming management to an automated service delivery and management model.

References

[1] IBM patent: Method For Dynamic Information Technology Infrastructure Provisioning (http://www.freepatentsonline.com/y2007/0294736.html)
[2] IBM's dynamic infrastructure taking shape at TheRegister (http://www.theregister.co.uk/2009/04/29/ibm_storage_apr09/)
[3] Microsoft's view of The Dynamic Datacenter covered by networkworld (http://www.networkworld.com/community/node/27354)
[4] Dynamic Infrastructure at Sun (http://www.sun.com)
[5] Fujitsu Dynamic Infrastructures (http://ts.fujitsu.com/dynamicinfrastructures)
[6] Dynamic Infrastructure and Blades at HP (http://h18000.www1.hp.com/products/blades/components/matrix/big_picture.html)
[7] Dell Converged Infrastructure (http://www.dell.com/ci)
[8] IDC White Paper: Building the Dynamic DataCenter: FlexFrame for SAP (http://docs.ts.fujitsu.com/dl.aspx?id=140d1393-d5ff-4c3b-924d-0c7183ebee65)
[9] Computation on Demand: The Promise of Dynamic Provisioning (http://www.on-demandenterprise.com/features/26054149.html)
[10] An overview of continuous data protection (http://findarticles.com/p/articles/mi_m0BRZ/is_2007_Spring/ai_n19493357/pg_2)
[11] Amazon Elastic Compute Cloud (http://aws.amazon.com/ec2/)
[12] Fujitsu's Dynamic Infrastructures main page (http://ts.fujitsu.com/it_trends/dynamic_infrastructures/index.html)
[13] Dynamic Infrastructure: Delivering superior business and IT services with agility and speed (ftp://ftp.ibm.com/common/ssi/sa/wh/n/oiw03021usen/OIW03021USEN.PDF)

External links

• IBM Dynamic Infrastructure (http://www-03.ibm.com/systems/dynamicinfrastructure/)
• HP Converged Infrastructure (http://h18004.www1.hp.com/products/solutions/converged/main.html?jumpid=reg_R1002_USEN/)
• Fujitsu Dynamic Infrastructures (http://ts.fujitsu.com/it_trends/dynamic_infrastructures/index.html)
• Dell Converged Infrastructure (http://www.dell.com/ci)
• NEC: It Takes a Dynamic Infrastructure to sustain growth while staying green (http://www.nec.com/global/corporate-ad/images/it_infrastructure.pdf)
• Microsoft: Realizing the potential for dynamic infrastructure (http://technet.microsoft.com/en-us/infrastructure/bb736006.aspx)
• Seeking Alpha: The Network Industry Needs a New Vision – Infrastructure 2.0 (http://seekingalpha.com/article/111346-network-industry-needs-a-new-vision-infrastructure-2-0)
• National Infrastructure Simulation and Analysis Center (http://www.sandia.gov/nisac/diisa.html)
• Sun Dynamic Infrastructure Suite (http://www.sun.com/service/dynamicinfrastructure/index.jsp)
• IDC 4th Annual Dynamic Infrastructure Conference (http://www.idc.com/getdoc.jsp?containerId=IDC_P15254)
• Infrastructure 2.0 blog (http://www.infra20.com)
• Infrastructure 2.0 Panel with Cisco, F5, VMware at Future in Review Conference May 2009 (http://vimeo.com/4891610)
• Technorati: Dynamic Infrastructure (http://www.technorati.com/videos/tag/dynamic+infrastructure)

• Springer, Lou (September 2007). Joyent, SAAS, SOA and the IBM PC (http://blog.louspringer.com/2007/09/27/dynamic-infrastructure-joyent-saas-soa-and-the-ibm-pc). Retrieved 2008-10-31.
• Herndon, Bruce (September 2008). Dynamic Infrastructures: Taking Business Continuity to the Next Level (http://www.vmworld.com/static/sessions/2008/PO2596.pdf) (PDF).
• Carolan, Jason. OpenDI Vision and High Level Design Overview (http://kenai.com/downloads/opendi/opendiR1-vision-high-level-design_v16.pdf) (PDF).
• Ernst, Ann (September 2008). Virtual Iron: Dynamic Infrastructure for the Data Center (http://www.virtual-strategy.com/Migration/Virtual-Iron-Dynamic-Infrastructure-for-the-Data-Center.html). virtual-strategy.com. Retrieved 2008-10-31.
• Reuters (March 17, 2008). NEC and Promark Deliver The Dynamic Infrastructure (http://www.reuters.com/article/pressRelease/idUS118407+17-Mar-2008+BW20080317).
• Bizvoicemagazine.com (October 2008). The Datacenter of the Future – It's Already Here! (http://www.bizvoicemagazine.com/archives/08sepoct/PassItOn-Infrastructures.pdf). Retrieved 2010-08-23.
• Sun Dynamic Infrastructures Wiki (http://wikis.sun.com/display/DI/DI+Home)

Edge computing

Edge computing provides application processing load balancing capacity to corporate and other large-scale web servers. It is like an application cache, where the cache is in the Internet itself. Static web sites being cached on mirror sites is not a new concept, but mirroring transactional and interactive systems is a much more complex endeavor. Previously available only to very large corporate and government organizations, technology advancement and cost reduction for large-scale implementations have made the technology available to small and medium-sized businesses. The target end-user is any Internet client making use of commercial Internet application services.

Overview

As the name implies, Edge computing pushes applications, data and computing power (services) away from centralized points to the logical extremes of a network. Edge computing replicates fragments of information across distributed networks of web servers, which may be vast and include many networks. As a topological paradigm, Edge computing is also referred to as mesh computing, peer-to-peer computing, autonomic (self-healing) computing, grid computing, and other names implying non-centralized, nodeless availability.

Edge computing has many advantages:
1. Edge application services significantly decrease the data volume that must be moved, the consequent traffic, and the distance the data must go, thereby reducing transmission costs, shrinking latency, and improving quality of service (QoS).
2. Edge computing eliminates, or at least de-emphasizes, the core computing environment, limiting or removing a major bottleneck and a potential point of failure.
3. Security is also improved as encrypted data moves further in, toward the network core. As it approaches the enterprise, the data is checked as it passes through protected firewalls and other security points, where viruses, compromised data, and active hackers can be caught early on.

Edge computing imposes certain limitations on the choices of technology platforms, applications or services, all of which need to be specifically developed or configured for edge computing. To ensure acceptable performance of widely dispersed distributed services, large organizations typically implement Edge computing by deploying Web server farms with clustering.
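The "application cache in the Internet itself" idea above can be sketched with a few lines. This is a minimal illustration under assumed names, not a real CDN API: an edge node answers requests from a local replica and only contacts the distant origin server on a miss, which is exactly how the traffic and latency reductions arise.

```python
# Sketch of an edge node (illustrative names): serve from a small local
# LRU cache of replicated fragments; go to the origin (the network core)
# only on a cache miss.
from collections import OrderedDict

class EdgeNode:
    def __init__(self, origin_fetch, capacity=2):
        self.origin_fetch = origin_fetch    # callable: key -> content
        self.cache = OrderedDict()          # LRU cache of replicated fragments
        self.capacity = capacity
        self.origin_hits = 0                # round-trips to the core

    def get(self, key):
        if key in self.cache:               # served entirely at the edge
            self.cache.move_to_end(key)
            return self.cache[key]
        self.origin_hits += 1               # must cross the network to the origin
        content = self.origin_fetch(key)
        self.cache[key] = content
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used fragment
        return content

edge = EdgeNode(lambda k: f"<page {k}>")
edge.get("/home")
edge.get("/home")
edge.get("/news")
# three requests, but only two origin round-trips
```

Mirroring a static page this way is straightforward; the hard part noted above, mirroring transactional and interactive systems, is hard precisely because cached state must then be kept consistent with the origin.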

4. The ability to "virtualize" (i.e., logically group CPU capabilities on an as-needed, real-time basis) extends scalability.

Grid computing

Edge computing and Grid computing are related. Whereas Grid computing would be hardcoded into a specific application to distribute its complex and resource-intensive computational needs across a global grid of cheap networked machines, Edge computing provides a generic template facility for any type of application to spread its execution across a dedicated grid of prepared, expensive machines. The Edge computing market is generally based on a "charge for network services" model, and it could be argued that typical customers for Edge services are organizations desiring linear scale of business application performance to the growth of, e.g., a subscriber base.

Companies providing edge computing services

• Akamai Technologies
• EdgeCast Networks
• Exinda
• Limelight Networks
• Mirror Image Internet

External links

• Akamai [1]
• Exinda - Edge Cache implementation press release [2]
• GeoElastic - Adhoc Geo-Targeted Computing Alliance [3]
• GeoStratus.com - Geo-Targeted Private Content Delivery Network Platform (pCDN) [4]

References

[1] http://www.akamai.com
[2] http://www.exinda.com/cms__Main?name=exinda-introduces-the-exinda-edge-cache
[3] http://www.geoelastic.com
[4] http://www.geostratus.com

Explicit multi-threading

Explicit Multi-Threading (XMT) is a computer science paradigm for building and programming parallel computers designed around the Parallel Random Access Machine (PRAM) parallel computational model. The XMT paradigm was introduced by Uzi Vishkin.

Explicit Multi-Threading is a computing paradigm for building and programming multi-core computers with tens, hundreds or thousands of processor cores. Multi-core computers are built around two or more processor cores integrated on a single integrated circuit die. They are widely used across many application domains, including general-purpose computing.

The main levels of abstraction of XMT

The Explicit Multi-Threading (XMT) computing paradigm integrates several levels of abstraction.

The random access machine (RAM) is an abstract machine model used in computer science to study algorithms and complexity for standard serial computing. The PRAM computational model is an abstract parallel machine model that had been introduced to similarly study parallel algorithms and complexity for parallel computing, when they were yet to be built. Researchers have developed a large body of knowledge of parallel algorithms for the PRAM model. These parallel algorithms are also known for being simple, by standards of other approaches to parallel algorithms. This large body of parallel algorithms knowledge for the PRAM model and their relative simplicity motivated building computers whose programming can be guided by these parallel algorithms. Since productivity of parallel programmers has long been considered crucial for the success of a parallel computer, simplicity of algorithms is important. Moving beyond the serial von Neumann computer (the only successful general-purpose platform to date), the aspiration of XMT is that computer science will again be able to augment mathematical induction with a simple one-line computing abstraction.

A more direct explanation of XMT starts with the rudimentary abstraction that made serial computing simple: that any single instruction available for execution in a serial program executes immediately. A consequence of this abstraction is a step-by-step (inductive) explication of the instruction available next for execution. The rudimentary parallel abstraction behind XMT, dubbed Immediate Concurrent Execution (ICE) in Vishkin (2011), is that indefinitely many instructions available for concurrent execution execute immediately. A consequence of ICE is a step-by-step (inductive) explication of the instructions available next for concurrent execution.

The work-time (WT) (sometimes called work-depth) framework, introduced by Shiloach & Vishkin (1982), provides a simple way for conceptualizing and describing parallel algorithms. In the WT framework, a parallel algorithm is first described in terms of parallel rounds. For each round, the operations to be performed are characterized, but several issues can be suppressed. For example, the number of operations at each round need not be clear, processors need not be mentioned, and any information that may help with the assignment of processors to jobs need not be accounted for. The WT framework is useful since, while it can greatly simplify the initial description of a parallel algorithm, inserting the details suppressed by that initial description is often not very difficult. For example, the inclusion of the suppressed information is, in fact, guided by the proof of a scheduling theorem due to Brent (1974). The WT framework was adopted as the basic presentation framework in the parallel algorithms books (for the PRAM model) JaJa (1992) and Keller, Kessler & Traeff (2001), as well as in the class notes Vishkin (2009). Vishkin (2011) explains the simple connection between the WT framework and the more rudimentary ICE abstraction noted above.

The XMT paradigm can be programmed using XMTC, a parallel multi-threaded programming language which is a small extension of the programming language C. The XMT paradigm includes a programmer's workflow that starts with casting an algorithm in the WT framework and proceeds to programming it in XMTC.
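The work-time framework's rounds can be made concrete with a small worked example. The sketch below (plain Python, serially simulating the parallel rounds; not XMTC) describes a parallel summation the WT way: each round, all available pair-additions execute "concurrently"; the number of rounds is the depth, and the total number of additions is the work.

```python
# WT-style description of parallel summation, simulated serially.
# Each while-iteration is one parallel round: every pair is added
# "at the same time", without saying which processor does which pair,
# exactly the information the WT framework lets one suppress.
def wt_sum(values):
    a = list(values)
    work = 0       # total operations across all rounds
    depth = 0      # number of parallel rounds
    while len(a) > 1:
        depth += 1
        nxt = []
        for i in range(0, len(a) - 1, 2):   # all pairs, conceptually concurrent
            nxt.append(a[i] + a[i + 1])
            work += 1
        if len(a) % 2:                      # odd element carries to the next round
            nxt.append(a[-1])
        a = nxt
    return a[0], work, depth

total, work, depth = wt_sum(range(8))
# 8 inputs: total 28, work 7 (= n - 1 additions), depth 3 (= log2 8 rounds)
```

Brent's scheduling theorem then supplies the suppressed processor assignment: with p processors the round-based description above runs in at most work/p + depth steps, which is how a WT description is later refined into an explicit program.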

The XMT multi-core computer systems provide run-time load-balancing of multi-threaded programs, incorporating several patents. One of them[1] generalizes the program counter concept, which is central to the von Neumann architecture, to multi-core hardware.

XMT prototyping and links to more information

In January 2007, a 64-processor computer[2] named Paraleap,[3] which demonstrates the overall concept, was completed. The XMT concept was presented in Vishkin et al. (1998) and Naishlos et al. (2003), and the XMT 64-processor computer in Wen & Vishkin (2008). Since making parallel programming easy is one of the biggest challenges facing computer science today, the demonstration also sought to include teaching the basics of PRAM algorithms and XMTC programming to students ranging from high school (Torbert et al. (2010)) to graduate school.

References

• Brent, Richard P. (1974). "The parallel evaluation of general arithmetic expressions". Journal of the ACM 21: 201–208.
• JaJa, Joseph (1992). An Introduction to Parallel Algorithms. Addison-Wesley. ISBN 0-201-54856-9.
• Keller, Jorg; Kessler, Cristoph W.; Traeff, Jesper L. (2001). Practical PRAM Programming. Wiley-Interscience. ISBN 0-471-35351-5.
• Naishlos, Dorit; Nuzman, Joseph; Tseng, Chau-Wen; Vishkin, Uzi (2003). "Towards a First Vertical Prototyping of an Extremely Fine-Grained Parallel Programming Approach" [4]. Theory of Computing Systems (Special Issue of 2001 ACM Symp. on Parallel Algorithms and Architectures) 36: 551–552.
• Shiloach, Yossi; Vishkin, Uzi (1982). "An O(n2 log n) parallel max-flow algorithm". Journal of Algorithms 3: 128–146.
• Torbert, Shane; Vishkin, Uzi; Tzur, Ron; Ellison, David (2010). "Is teaching parallel algorithmic thinking to high-school students possible? One teacher's experience". Proc. ACM Technical Symposium on Computer Science Education (SIGCSE), Milwaukee, WI, March 10–13, 2010.
• Vishkin, Uzi; Dascal, Shlomit; Berkovich, Efraim; Nuzman, Joseph (1998). "Explicit Multi-Threading (XMT) bridging models for instruction parallelism" [5]. Proc. 1998 ACM Symposium on Parallel Algorithms and Architectures (SPAA), pp. 140–151.
• Vishkin, Uzi (2009). Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages [6]. Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion.
• Vishkin, Uzi (2011). "Using simple abstraction to reinvent computing for parallelism". Communications of the ACM 54 (1): 75–85. doi:10.1145/1866739.1866757.
• Wen, Xingzhi; Vishkin, Uzi (2008). "FPGA-based prototype of a PRAM-on-chip processor" [7]. Proc. 2008 ACM Conference on Computing Frontiers (Ischia, Italy), pp. 55–66. doi:10.1145/1366230.1366240.

Notes
[1] Vishkin, Uzi. Spawn-join instruction set architecture for providing explicit multithreading. U.S. Patent 6,463,527. See also Vishkin et al. (1998).
[2] University of Maryland, press release, June 26, 2007: "Next Big "Leap" in Computing Technology Gets a Name" (http://www.newsdesk.umd.edu/scitech/release.cfm?ArticleID=1459).
[3] University of Maryland, James Clark School of Engineering, press release, November 28, 2007: "Maryland Professor Creates Desktop Supercomputer" (http://www.eng.umd.edu/media/pressreleases/pr112707_superwinner.shtml).
[4] http://www.umiacs.umd.edu/users/vishkin/XMT/spaa01-j-03.pdf
[5] http://www.umiacs.umd.edu/users/vishkin/XMT/spaa98.ps
[6] http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf
[7] http://www.umiacs.umd.edu/users/vishkin/XMT/CompFrontiers08.pdf

External links
• Home page of the XMT project, with links to a software release, on-line tutorial and material for teaching parallelism (http://www.umiacs.umd.edu/~vishkin/XMT/index.html)

Fabric computing

Fabric computing or unified computing involves the creation of a computing fabric consisting of interconnected nodes that look like a 'weave' or a 'fabric' when viewed collectively from a distance.[1] Usually this refers to a consolidated high-performance computing system consisting of loosely coupled storage, networking and parallel processing functions linked by high-bandwidth interconnects (such as 10 Gigabit Ethernet and InfiniBand),[2] but the term has also been used to describe platforms like the Azure Services Platform and grid computing in general (where the common theme is interconnected nodes that appear as a single logical unit).[2]

The fundamental components of fabrics are "nodes" (processor(s), memory, and/or peripherals) and "links" (functional connections between nodes).[3] While the term "fabric" has also been used in association with storage area networks and switched fabric networking, the introduction of compute resources provides a complete "unified" computing system.[3] Other terms used to describe such fabrics include "unified fabric",[4] "data center fabric" and "unified data center fabric".[5] According to Ian Foster, director of the Computation Institute at the Argonne National Laboratory and University of Chicago, "grid computing 'fabrics' are now poised to become the underpinning for next-generation enterprise IT architectures and be used by a much greater part of many organizations."[6] [7] Brocade, Cisco, HP and Egenera currently manufacture computing fabric equipment.

History

While the term has been in use since the mid to late 1990s,[2] the growth of cloud computing and Cisco's evangelism of unified data center fabrics,[8] followed by unified computing (an evolutionary data center architecture whereby blade servers are integrated or unified with supporting network and storage infrastructure[9]) starting March 2009, has renewed interest in the technology. There have been mixed reactions to Cisco's architecture, particularly from rivals who claim that these proprietary systems will lock out other vendors. Analysts claim that this "ambitious new direction" is "a big risk", as companies like IBM and HP, who have previously partnered with Cisco on data center projects (accounting for $2-3bn of Cisco's annual revenue), are now competing with them.[9] [10] Other companies offering unified or fabric computing systems include Liquid Computing Corporation and Egenera.[2]

Key characteristics

The main advantages of fabrics are that massive concurrent processing combined with a huge, tightly coupled address space makes it possible to solve huge computing problems (such as those presented by delivery of cloud computing services), and that they are both scalable and able to be dynamically reconfigured.[2] Challenges include a non-linearly degrading performance curve, whereby adding resources does not linearly increase performance (a common problem with parallel computing), and maintaining security, as with subnets for rival companies.[2]

References
[1] What Is: The Azure Fabric and the Development Fabric (http://azure.snagy.name/blog/?p=84)
[2] Massively distributed computing using computing fabrics (http://www.techworld.com/opsys/features/index.cfm?featureid=3614)
[3] Switch maker introduces a 'Data Center Fabric' architecture (http://www.dominopower.com/issuesprint/issue199810/fabric.html)
[4] Unified Fabric: Benefits and Architecture of Virtual I/O (http://www.cisco.com/en/US/prod/collateral/ps6418/ps6423/ps6429/prod_white_paper0900aecd80337bb8.html)
[5] Intel: Data Center Fabric (http://communities.intel.com/openport/blogs/server/2008/02/13/data-center-fabric)
[6] Toolbox for IT: Data Center Fabric (http://it.toolbox.com/wiki/index.php/Data_Center_Fabric)
[7] Grid computing: The term may fade, but features will live on (http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9043698)
[8] Cisco: Unified Data Center Fabric: Reduce Costs and Improve Flexibility (http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-462181.html)
[9] "Cisco launches Unified Computing push with new blade server" (http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9129718&intsrc=news_ts_head). ComputerWorld. 2009-03-16. Retrieved 2009-03-17.
[10] "Cisco to sell servers aimed at data centers" (http://www.reuters.com/article/technologyNews/idUSTRE52F68W20090316). Reuters. 2009-03-16. Retrieved 2009-03-17.

External links
• Cisco Unified Computing and Servers (http://www.cisco.com/en/US/products/ps10265/index.html)
• HP Converged Infrastructure (http://h18004.www1.hp.com/products/solutions/converged/main.html)
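The non-linearly degrading performance curve mentioned under Key characteristics is conventionally modelled by Amdahl's law. The following sketch is an illustration of that general point only (the article itself does not prescribe a model, and the 5% serial fraction is an arbitrary assumption):

```python
# Sketch: Amdahl's law, a standard model of why adding nodes to a fabric
# does not increase performance linearly. 'serial' is the assumed fraction
# of the workload that cannot be parallelized.
def speedup(nodes, serial=0.05):
    return 1.0 / (serial + (1.0 - serial) / nodes)

# Doubling from 64 to 128 nodes yields far less than a 2x gain:
gain = speedup(128) / speedup(64)
```

Even this small serial fraction caps the benefit of doubling the node count at roughly a 13% improvement, which is the "non-linear" behaviour the article describes.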

Fallacies of Distributed Computing

Peter Deutsch asserted that programmers new to distributed applications invariably make a set of assumptions known as the Fallacies of Distributed Computing, and that all of these assumptions ultimately prove false, resulting either in the failure of the system, a substantial reduction in system scope, or in large unplanned expenses required to redesign the system to meet its original goals.

The fallacies

The fallacies are summarized as follows:[1]
1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn't change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.

History

The list of fallacies generally came about at Sun Microsystems. Peter Deutsch, one of the original Sun "Fellows," is credited with penning the first seven fallacies in 1994; however, Bill Joy and Tom Lyon had already identified the first four as "The Fallacies of Networked Computing"[3] (the article claims "Dave Lyon", but this is considered a mistake). Around 1997, James Gosling, another Sun Fellow and the inventor of Java, added the eighth fallacy.[2]

Effects of the Fallacies

(Numbered to match the corresponding fallacies above.)
2. Ignorance of network latency, and of the packet loss it can cause, induces application- and transport-layer developers to allow unbounded traffic, greatly increasing dropped packets and wasting bandwidth.
3. Ignorance of bandwidth limits on the part of traffic senders can result in bottlenecks over frequency-multiplexed media.
4. Complacency regarding network security results in being blindsided by malicious users and programs that continually adapt to security measures.[2]
6. Multiple administrators, as with subnets for rival companies, may institute conflicting policies of which senders of network traffic must be aware in order to complete their desired paths.
7. The "hidden" costs of building and maintaining a network or subnet are non-negligible and must consequently be noted in budgets to avoid vast shortfalls.

References
[1] "The Eight Fallacies of Distributed Computing" (http://blogs.sun.com/jag/resource/Fallacies.html).
[2] "Malware Defensive Techniques Will Evolve as Security Arms Race Continues" (http://www.eweek.com/c/a/Security/Malware-Defensive-Techniques-Will-Evolve-as-Security-Arms-Race-Continues-331833/).
[3] "Deutsch's Fallacies, 10 Years After" (http://java.sys-con.com/read/38665.htm).
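A client written as though the first three fallacies were true would simply issue a request and wait forever. A minimal defensive sketch (illustrative only; the function names are invented, and `send` stands in for any real transport) counters them with an explicit timeout, bounded retries and exponential backoff:

```python
# Sketch: countering fallacies 1-3 ("the network is reliable", "latency is
# zero", "bandwidth is infinite") with explicit timeouts, bounded retries
# and exponential backoff. 'send' is a stand-in for any network call.
import time

def call_with_retries(send, request, timeout=2.0, retries=3, backoff=0.1):
    last_error = None
    for attempt in range(retries):
        try:
            return send(request, timeout=timeout)  # never wait forever
        except OSError as exc:                     # network failure is normal
            last_error = exc
            time.sleep(backoff * (2 ** attempt))   # back off, don't flood
    raise last_error                               # surface failure to caller

# Example with a flaky stand-in transport that fails twice, then succeeds:
attempts = []
def flaky_send(request, timeout):
    attempts.append(request)
    if len(attempts) < 3:
        raise OSError("connection reset")
    return "ok"
```

Calling `call_with_retries(flaky_send, "GET /")` succeeds on the third attempt; the point is that failure handling is written in from the start rather than assumed away.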

External links
• The Eight Fallacies of Distributed Computing (http://blogs.sun.com/jag/resource/Fallacies.html)
• Fallacies of Distributed Computing Explained (http://www.rgoarchitects.com/Files/fallacies.pdf) by Arnon Rotem-Gal-Oz

Fragmented object

Fragmented objects are truly distributed objects. In contrast to distributed objects, they are physically distributed and encapsulate the distribution in the object itself. Parts of the object, named fragments, may exist on different nodes and provide the object's interface. Each client accessing a fragmented object by its unique object identity presumes a local fragment. Therefore clients cannot distinguish between access to a local object, a local stub or a local fragment. This is a novel design principle extending the traditional concept of stub-based distribution: fragmented objects may act like an RPC-based infrastructure or a (caching) smart proxy as well, so downward compatibility with stub-based distribution is ensured. Full transparency is gained by the following characteristics of fragmented objects.

Arbitrary internal structure

The internal structure of a fragmented object is arranged by the object developer/deployer. It may be client–server, hierarchical, peer-to-peer or otherwise.

Arbitrary internal communication

Arbitrary protocols may be chosen for the internal communication between the fragments. For instance, this allows real-time protocols (e.g. RTP for media streaming) to be hidden behind a standard CORBA interface.

Arbitrary internal configuration

As both the distribution of state and functionality are hidden behind the object interface, their respective distribution over the fragments is also arbitrary. The object developer can migrate state and functionality over the fragments by providing different fragment implementations, which dynamically change the inside of the fragmented object. In addition, an application using a fragmented object can tolerate a change in distribution, which is achieved by exchanging the fragment at one or multiple hosts. This procedure can be triggered either by a user who changes object properties or by the fragmented object itself (that is, the collectivity of its fragments), e.g. when some fragment is considered to have failed. Of course, an exchange request may trigger one or more other internal changes. A flexible internal partitioning is thus achieved, providing transparent fault-tolerant replication as well.
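The transparency property (a client cannot tell whether it holds the whole object, a stub or a fragment) can be sketched as follows. All class names here are hypothetical illustrations; real systems such as FORMI or Globe define their own interfaces:

```python
# Sketch: a fragmented counter object. Clients program against Counter and
# cannot tell whether their local fragment holds the state itself or
# forwards to a fragment on another node. Names are illustrative only.
class Counter:
    def increment(self):
        raise NotImplementedError
    def value(self):
        raise NotImplementedError

class StateFragment(Counter):
    """Fragment that actually holds the object state."""
    def __init__(self):
        self._count = 0
    def increment(self):
        self._count += 1
    def value(self):
        return self._count

class ForwardingFragment(Counter):
    """Fragment on another node; internally forwards over any protocol."""
    def __init__(self, peer):
        self._peer = peer
    def increment(self):
        self._peer.increment()
    def value(self):
        return self._peer.value()

# Two "nodes" share one fragmented object; the client code is identical
# against either fragment, which is the transparency the article describes.
state = StateFragment()
local_view = ForwardingFragment(state)
local_view.increment()
state.increment()
```

Exchanging `ForwardingFragment` for a caching or replicating implementation would change the internal configuration without affecting clients, matching the "arbitrary internal configuration" property.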

Projects
• Aspectix [1] - The Aspectix group works on several projects that focus on middleware architecture, aspect-oriented programming, fault tolerance, adaptive and quality-of-service-aware applications, and automated source-code transformation.
• FORMI [2] - FORMI is an extension of Java RMI.
• Globe [3] - In this research we are looking at a powerful unifying paradigm for the construction of large-scale wide-area distributed systems: distributed shared objects.
• SOS [4] - The SOMIW object-oriented Operating System.

References
• Structure and Encapsulation in Distributed Systems: the Proxy Principle [5]
• Fragmented objects for distributed abstractions [6]
• Globe: A Wide-Area Distributed System [7]
• Integrating fragmented objects into a CORBA environment [8]
• FORMI: An RMI Extension for Adaptive Applications [9]
• FORMI: Integrating Adaptive Fragmented Objects into Java RMI [10]

[1] http://aspectix.org
[2] http://aspectix.org/formi
[3] http://www.cs.vu.nl/globe/
[4] http://www-sor.inria.fr/projects/sos/
[5] http://citeseer.ist.psu.edu/shapiro86structure.html
[6] http://citeseer.ist.psu.edu/makpangou92fragmented.html
[7] http://www.cs.vu.nl/~ast/publications/ieeeconc-1999.pdf
[8] http://www4.informatik.uni-erlangen.de/Publications/pdf/Reiser-Hauck-Kapitza-Schmied-Fragments.pdf
[9] http://middleware05.objectweb.org/WSProceedings/ARM05/a2-kapitza.pdf
[10] http://dsonline.computer.org/portal/site/dsonline/menuitem.9ed3d9924aeb0dcd82ccc6716bbe36ec/index.jsp?&pName=dso_level1&path=dsonline/2006/10&file=o10001.xml&xsl=article.xsl&jsessionid=HT0pf1n2TGvnRGN2vhBQBX8xQvdBF1tzts4hTfslFZQjyr2nqhzK!-648338668

Gemstone (database)

GemStone Database Management System
Paradigm(s): Application framework
Appeared in: 1991
Influenced by: Smalltalk, Object-oriented programming
Influenced: Java EE

GemStone is a proprietary application framework that was first available for Smalltalk as an object database. Gemstone builds on the Smalltalk programming language. Systems based on object databases are not as common as those based on object-relational mapping (ORM) frameworks such as TopLink or Hibernate.

GemStone Systems was founded in 1982 as Servio Logic, and became GemStone Systems, Inc. in 1995. GemStone developed its first prototype in 1982 and shipped its first product in 1986. Three of the original co-founding engineers, Bob Bretl, Allen Otis and Monty Williams, have been with the company since its inception. The engineering group resides in Beaverton, Oregon.

GemStone's owners pioneered implementing distributed computing in business systems, and many information system features now associated with Java EE were implemented earlier in GemStone. GemStone and VisualWave were an early web application server platform (VisualWave and VisualWorks are now owned by Cincom). GemStone played an important sponsorship role in the Smalltalk Industry Council at the time when IBM was backing VisualAge Smalltalk (VisualAge is now at Instantiations [1]). Although Gemstone isn't often mentioned in print, GemStone systems serve as mission-critical applications,[2] even though many computing industry business publications focus attention on other ecosystems and languages, such as Java or C# for Microsoft .NET, for new development.

After a major transition, GemStone for Smalltalk continues as "GemStone/S", alongside various C++ and Java products for scalable, multi-tier distributed systems. SpringSource, a division of VMware, now develops and markets GemFire, which is notable for CEP (complex event processing), Event Stream Processing, data virtualization and distributed caching. In the area of web application frameworks, JBoss and BEA WebLogic are somewhat analogous to GemStone.

A recent revival of interest in Smalltalk has occurred as a result of its use to generate Javascript for e-commerce web pages and in web application frameworks such as the Seaside web framework. GemStone frameworks still see some interest for web services and service-oriented architectures. On May 6, 2010, SpringSource announced it had entered into a definitive agreement to acquire GemStone.[3]

References
[1] http://www.instantiations.com
[2] Slovenian national gas operator has its billing system running on Smalltalk for 10 years (http://groups.google.com/group/comp.lang.smalltalk/msg/9560a50c14522f13)
[3] SpringSource acquires Gemstone Systems (http://www.gemstone.com/news/2010/05/06/springsource-acquires-gemstone-systems/) (Retrieved May 23, 2011)

External links
• Official website (http://www.gemstone.com/)
• GemStone FAQ (v.1.0) (http://www.faqs.org/faqs/databases/GemStone-FAQ/)

HyperText Computer

The HyperText Computer (HTC) has been proposed as a model computer built from the ground up containing no implicit information about locality or technology. Built on the Hypertext Transfer Protocol (HTTP), the HTC is a general-purpose computer: in its basic instruction set, every operator is implemented by an HTTP request and every operand is a URL referring to a document.

Locally available processing capacity and storage is presented in the same way as remote processing and storage — that is, as the ability to fulfill HTTP requests. Computers with just enough processing power to run an instance of a user agent can access the same applications as those with additional processing power and storage available. In this case, unplugging the local computing resources does not impact the user's or the programmer's view in any way. However, other issues such as intellectual property will dominate decisions as to where and how processing is done.

The HTC is a redesign of the computer, and a foundational model for distributed computing. The transition from computers being connected by networks to the network as a computer has been anticipated for some time. As noted by Cisco's Giancarlo [1], IP networking is rivaling computer backplane speeds, leading him to observe that "It's time to move the backplane on to the network and redesign the computer". Technologies like Ajax at the presentation level and iSCSI at the transport level are so undermining the Fallacies of Distributed Computing that inter- and intra-computer communications not carried over IP are looking like special-case optimizations.

External links
• HyperText Computer Blog [2]

References
[1] http://blogs.zdnet.com/BTL/?p=1945
[2] http://www.davidpratten.com/category/hypertext-computer/
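The idea that every operator is an HTTP request and every operand a URL can be mocked up with a toy in-memory model. This is an illustration only, not part of the HTC proposal itself; the document store and helper names are invented:

```python
# Toy sketch of HTC-style execution: operands are URLs naming documents,
# operators are HTTP-like requests (GET/PUT). The in-memory 'store' stands
# in for the web; nothing here is prescribed by the actual HTC proposal.
store = {"/a": 2, "/b": 3}

def GET(url):
    """Fetch the document named by a URL."""
    return store[url]

def PUT(url, body):
    """Store a document under a URL."""
    store[url] = body

def add(result_url, left_url, right_url):
    """An 'add' operator: three requests, three URLs, no local state."""
    PUT(result_url, GET(left_url) + GET(right_url))

add("/sum", "/a", "/b")
```

Because the operator touches only URLs, it is indifferent to whether `/a` and `/b` resolve locally or across the network, which is the locality-free property the HTC is meant to capture.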

High level architecture (simulation)

The High Level Architecture (HLA) is a general-purpose architecture for distributed computer simulation systems. Using HLA, computer simulations can interact (that is, communicate data and synchronize actions) with other computer simulations regardless of the computing platforms. The interaction between simulations is managed by a Run-Time Infrastructure (RTI).

Technical overview

A High Level Architecture consists of the following components:
• Interface Specification, which defines how HLA compliant simulators interact with the Run-Time Infrastructure (RTI). The RTI provides a programming library and an application programming interface (API) compliant to the interface specification. Many RTIs provide APIs in the C++ and Java programming languages.
• Object Model Template (OMT), which specifies what information is communicated between simulations, and how it is documented.
• Rules, which simulations must obey in order to be compliant to the standard.

Common HLA terminology
• Federate: an HLA compliant simulation entity.
• Federation: multiple simulation entities connected via the RTI using a common OMT.
• Object: a collection of related data sent between simulations.
• Attribute: data field of an object.
• Interaction: event sent between simulation entities.
• Parameter: data field of an interaction.

Interface specification

The interface specification is object oriented, and is divided into service groups:
• Federation Management
• Declaration Management
• Object Management
• Ownership Management
• Time Management
• Data Distribution Management
• Support Services

Object model template

The object model template (OMT) provides a common framework for the communication between HLA simulations. The OMT consists of the following documents:
• Federation Object Model (FOM). The FOM describes the shared objects, attributes and interactions for the whole federation.
• Simulation Object Model (SOM). A SOM describes the shared objects, attributes and interactions used for a single federate.
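The terminology above can be made concrete with a toy model of the federate/RTI relationship. This is illustrative Python only; the class and method names are invented, and real RTIs expose the standardized C++/Java APIs defined by IEEE 1516.1:

```python
# Toy sketch of the federate/RTI relationship: federates never talk to each
# other directly; attribute updates are routed through the RTI to all
# subscribed federates. Not the IEEE 1516.1 API; names are invented.
class RTI:
    def __init__(self):
        self.subscribers = {}          # object class name -> list of federates

    def subscribe(self, object_class, federate):
        self.subscribers.setdefault(object_class, []).append(federate)

    def update_attributes(self, sender, object_class, attributes):
        for federate in self.subscribers.get(object_class, []):
            if federate is not sender:
                federate.reflect(object_class, attributes)

class Federate:
    def __init__(self, name):
        self.name, self.reflected = name, []

    def reflect(self, object_class, attributes):
        """Receive an attribute update routed via the RTI."""
        self.reflected.append((object_class, attributes))

rti = RTI()
a, b = Federate("A"), Federate("B")
rti.subscribe("Aircraft", a)
rti.subscribe("Aircraft", b)
rti.update_attributes(a, "Aircraft", {"position": (10, 20)})
```

Federate B reflects the update that federate A produced, without either federate knowing about the other, which is the decoupling the RTI exists to provide.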

HLA rules

The HLA rules describe the responsibilities of federations and the federates that join them.[1]
1. Federations shall have an HLA Federation Object Model (FOM), documented in accordance with the HLA Object Model Template (OMT).
2. In a federation, all representation of objects in the FOM shall be in the federates, not in the run-time infrastructure (RTI).
3. During a federation execution, all exchange of FOM data among federates shall occur via the RTI.
4. During a federation execution, federates shall interact with the run-time infrastructure (RTI) in accordance with the HLA interface specification.
5. During a federation execution, an attribute of an instance of an object shall be owned by only one federate at any given time.
6. Federates shall have an HLA Simulation Object Model (SOM), documented in accordance with the HLA Object Model Template (OMT).
7. Federates shall be able to update and/or reflect any attributes of objects in their SOM and send and/or receive SOM object interactions externally, as specified in their SOM.
8. Federates shall be able to transfer and/or accept ownership of an attribute dynamically during a federation execution, as specified in their SOM.
9. Federates shall be able to vary the conditions under which they provide updates of attributes of objects, as specified in their SOM.
10. Federates shall be able to manage local time in a way that will allow them to coordinate data exchange with other members of a federation.

Federation Development and Execution Process (FEDEP)

FEDEP, IEEE 1516.3-2003, is a standardized and recommended process for developing interoperable HLA based federations. FEDEP is an overall framework overlay that can be used together with many other, commonly used development methodologies.

Distributed Simulation Engineering and Execution Process (DSEEP)

In spring 2007 SISO started revising the FEDEP. It has been renamed the Distributed Simulation Engineering and Execution Process (DSEEP) and is now an active standard, IEEE 1730-2010 (instead of IEEE 1516.3).

Base Object Model

The Base Object Model (BOM) is a new concept created by SISO [2] to provide better reuse and composability for HLA simulations, and is highly relevant for HLA developers. More information can be found at Boms.info [3].

Standards

HLA is defined under IEEE Standard 1516:
• IEEE 1516-2010 - Standard for Modeling and Simulation High Level Architecture - Framework and Rules
• IEEE 1516.1-2010 - Standard for Modeling and Simulation High Level Architecture - Federate Interface Specification
• IEEE 1516.2-2010 - Standard for Modeling and Simulation High Level Architecture - Object Model Template (OMT) Specification
• IEEE 1516.3-2003 - Recommended Practice for High Level Architecture Federation Development and Execution Process (FEDEP)
• IEEE 1516.4-2007 - Recommended Practice for Verification, Validation, and Accreditation of a Federation - an Overlay to the High Level Architecture Federation Development and Execution Process
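HLA rules 5 and 8 (each attribute instance owned by at most one federate at a time, with dynamic transfer of ownership) can be sketched as follows. The names are invented for illustration; the actual ownership management services are defined in the HLA interface specification:

```python
# Sketch of HLA ownership semantics (rules 5 and 8): one owner per
# attribute instance at any time, with explicit transfer. Illustrative
# names only; real ownership services are part of IEEE 1516.1.
class AttributeOwnership:
    def __init__(self, owner):
        self.owner = owner             # exactly one owner at any time

    def update(self, federate, value):
        if federate != self.owner:
            raise PermissionError(f"{federate} does not own this attribute")
        self.value = value

    def transfer(self, from_federate, to_federate):
        if from_federate != self.owner:
            raise PermissionError("only the owner may divest ownership")
        self.owner = to_federate       # dynamic transfer during execution

position = AttributeOwnership(owner="FederateA")
position.update("FederateA", (1, 2))
position.transfer("FederateA", "FederateB")
```

After the transfer, only FederateB may update the attribute; an update attempt by FederateA is rejected, enforcing the single-owner invariant of rule 5.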

DLC API

SISO has developed a complementary HLA API specification known as the Dynamic Link Compatible (DLC) API. The DLC API addresses a limitation of the IEEE 1516 and HLA 1.3 API specifications, whereby federate recompilation was necessary for each different RTI implementation. Note that this API has since been superseded by the HLA Evolved APIs.
• SISO-STD-004-2004 - Dynamic Link Compatible HLA API Standard for the HLA Interface Specification Version 1.3 [9]
• SISO-STD-004.1-2004 - Dynamic Link Compatible HLA API Standard for the HLA Interface Specification (IEEE 1516.1 Version) [10]

Previous version:
• IEEE 1516-2000 - Standard for Modeling and Simulation High Level Architecture - Framework and Rules
• IEEE 1516.1-2000 - Standard for Modeling and Simulation High Level Architecture - Federate Interface Specification
• IEEE 1516.1-2000 Errata (2003-oct-16) [7]
• IEEE 1516.2-2000 - Standard for Modeling and Simulation High Level Architecture - Object Model Template (OMT) Specification

See also:
• Department of Defense (DoD) Interpretations of the IEEE 1516-2000 series of standards, Release 2 (2003-jul-01) [8]

Prior to publication of IEEE 1516, the HLA standards development was sponsored by the US Defense Modeling and Simulation Office. The first complete version of the standard, published 1998, was known as HLA 1.3.

Machine-readable parts of the standard, such as XML Schemas, C++, Java and WSDL APIs as well as FOM/SOM samples, can be downloaded from the IEEE 1516 download area of the IEEE web site [4]. The full standards texts are available at no extra cost to SISO [5] members, or can be purchased from the IEEE shop [6].

STANAG 4603

HLA (in both the current IEEE 1516 version and its ancestor "1.3" version) is the subject of the NATO standardization agreement (STANAG 4603) for modeling and simulation: Modeling And Simulation Architecture Standards For Technical Interoperability: High Level Architecture (HLA).

HLA Evolved

The IEEE 1516 standard has been revised under the SISO HLA-Evolved Product Development Group and was approved 25-Mar-2010 by the IEEE Standards Activities Board. The revised IEEE 1516-2010 standard includes current DoD standard interpretations and the EDLC API, an extended version of the SISO DLC API, informally known as the Evolved DLC (EDLC) API. Other major improvements include:
• Extended XML support for FOM/SOM, such as Schemas and extensibility
• Fault tolerance support services
• Web Services (WSDL) support/API
• Modular FOMs
• Update rate reduction
• Encoding helpers
• Extended support for additional transportation (such as QoS, IPv6, ...)

• Standardized time representations

Books
• Creating Computer Simulation Systems: An Introduction to the High Level Architecture [11]

References
[1] U.S. Department of Defense, Defense Modeling and Simulation Office (2001). RTI 1.3-Next Generation Programmer's Guide Version 4.
[2] http://www.sisostds.org
[3] http://www.boms.info
[4] http://standards.ieee.org/downloads/1516/
[5] http://www.sisostds.org/
[6] http://shop.ieee.org/
[7] http://standards.ieee.org/reading/ieee/updates/errata/1516.1-2000.doc
[8] https://www.dmso.mil/public/library/projects/hla/rti/DoD_interps_1516_Release_2.pdf
[9] http://www.sisostds.org/index.php?tg=fileman&idx=get&id=5&gr=Y&path=SISO+Products%2FSISO+Standards&file=SISO-STD-004-2004-Final.pdf
[10] http://www.sisostds.org/index.php?tg=fileman&idx=get&id=5&gr=Y&path=SISO+Products%2FSISO+Standards&file=SIS-STD-004.1-2004.zip
[11] http://www.amazon.com/Creating-Computer-Simulation-Systems-Introduction/dp/0130225118

External Links
• proto-x (http://code.google.com/p/proto-x/): a cross-platform,
open source C++ library for developing HLA compliant simulations, tools, and utilities.
• Portico (http://www.porticoproject.org): an open source, cross-platform HLA RTI implementation.

IBZL

IBZL - infinite bandwidth zero latency - is a thought experiment that asks: what will happen when bandwidth (for connecting to the Internet, for example) is so great, and latency so small, that it no longer matters? What will be the applications and services that become widespread? The IBZL programme[1] was started by the Open University and Manchester Digital in the UK.

Background

Next Generation Access (NGA) broadband is promoted strongly by policy makers as underpinning future economic growth. There is however a lack of examples of the ways that NGA will be used, or of the sort of innovations that may come about as a result of widespread access to NGA. A parallel can be drawn with the advent of first generation broadband, which arguably created the conditions for the success of innovations such as Wikipedia, Youtube and Facebook; but the most innovative aspects of these - user-generated content, open source knowledge, video sharing and always-on social networking - were not foreseen. The IBZL programme has used a process (the Imagine/Triple Task Method) to explore the potentially novel applications of NGA and provide some ideas as to the key components of the future inter-networked landscape.

Next Generation Access (NGA)

While there is no universally agreed definition of what qualifies a network to be considered 'next generation', three elements are usually considered essential:
• NGA will provide a significant increase in the transmission speeds available to the domestic or small-business end-user. The speeds cited vary widely, from 25 Mbps (e.g. What is Digital Region? 2009) to over 200 Mbps. The 'Digital Britain' report[2] refers to 'next generation service up to' 40 Mbps, and more recently UK ministers have referred to 50 Mbps and faster.[3] To put this in context, Google announced, in early 2010, a plan for experimental community networks operating at 100 Gbps (Google, 2010).
• In contrast with currently widespread ADSL technologies, it is generally assumed that NGA will offer a step-change in upload as well as download speeds. For some, NGA bandwidth should be symmetrical, reflecting the demands of increasingly user-generated content, though others have a more relaxed view (e.g. OFCOM 2009[4]).
• NGA is widely taken to offer improved 'quality of service' (QoS).[5] QoS here is taken to mean not only service reliability and availability but also, in addition to 'raw' bandwidth, indicators of network performance including latency (the time taken for data packets to travel from source to destination), jitter (the variation in latency among data packets) and data loss (the loss of data packets due to network congestion). Latency, jitter and data loss are important aspects of the usability of applications such as internet telephony or video.

IBZL as a way to develop NGA

The Infinite Bandwidth, Zero Latency (IBZL[6]) initiative was designed as a contribution to innovation by identifying new applications that will be made possible by NGA as it evolves and that may contribute to the continuing development of innovative digital industries. 'Infinite bandwidth' and 'zero latency' are not meant literally; they are a shorthand for networks where bandwidth and latency cease to be limiting factors. IBZL addresses a gap in policy and strategic thought, where relatively little attention has been given to the kinds of novel application made feasible by networks which are relatively free of speed and latency capacity constraints. The IBZL process is intended as a means to explore and speculate on potential future technologies. To facilitate the process, the Imagine methodology was adapted and applied as a form of future workshop for deep reflection on possible scenarios (numerous examples of this kind of work exist; see for example List 2006[7]).

There have been two IBZL workshops, held in Manchester, UK in May and October 2010. They brought together invited public sector, private sector and academic participants to imagine a digital future. They were organized jointly by the Open University Faculty of Mathematics, Computing and Technology [8] and Manchester Digital, a trade association of creative and digital companies in Manchester and the North West of England.

IBZL outcomes

The workshops produced ideas that will be further developed. Five of these are briefly summarised below.

'Real artisans in a virtual world' - the networked production of artefacts by artisans in multiple locations. Next generation technology could support real-time collaborative generation of product ideas followed by the process of design, development and distributed fabrication. This could turn the conventional trading pattern on its head, with artisans in the developing world crafting products for "3D printing" in the developed world, effectively re-engineering (or at least challenging) current craft value chains.

'Always on social space' - virtual spaces in which the connection is always on/perpetual, between people living and working remotely, supporting the kind of occasional, informal, spontaneous, real-time social encounters ('collisions') that happen when people are co-located. This would not only allow a new level of remote working and collaboration; the sense of living in proximity with friends and relations could also transform the lives of older people who need to stay longer in their homes as the population ages.

'Intelligent matchmaking' - bringing suppliers and consumers together optimally for business, social and educational interactions, made possible by next generation networks. Behind this would be a thorough analysis of organizations, products and people, in order to synthesize high quality informational and other connections.

Independent Networks Cooperative Association [4] OFCOM (2009). ibzl. 1: Broadband Delivery UK . London. net) [7] List. INCA Policy Briefing No. ibzl. Latency mapping ." Futures 38: 673 . Peer-to-peer processor time-sharing . Zero Latency (IBZL) project website (http:/ / www.the evolution of next generation networks will be uneven. net) [2] Department_for_Business_Innovation_and_Skills (2009). D. [8] http:/ / mct. Page 54 [3] INCA (2010). Delivering Super-Fast Broadband in the UK: Promoting investment and competition OFCOM [5] OFCOM (2009)Delivering Super-Fast Broadband in the UK: Promoting investment and competition OFCOM [6] Infinite Bandwidth. effectively re-engineering (or at least. uk/ External links • IBZL project website (http://www. Next generation networks could allow real time peer-to-peer sharing so that when an application needs additional capacity for processor-heavy tasks like video rendering it could have access to effectively limitless extra computing power. Department for Business Innovation and Skills and the Department for Culture. 79 References [1] Infinite Bandwidth. co-ordinated among volunteers by a central ‘master’ application. Zero Latency (IBZL) project website (http:/ / www. open. This could turn the conventional trading pattern on its head with artisans in the developing world crafting products for “3D printing” in the developed world.IBZL design.684. challenging) current craft value chains.projects like SETI@home use the spare processor capacity of millions of personal computers to process batches of number-crunching tasks. network infrastructure and the network of relationships between service providers.Industry Day (http:/ / www. (2006). resulting in a ‘geography of latency’ and the disruption of ‘simultaneous time’. technical/geographic. Media and Sport. Latency maps would be an enabling tool to identify the kinds of applications possible within/between.ibzl. Digital Britain: Final Report. inca. 
"Action Research Cycles for Multiple Futures Perspectives. coop/ policy/ inca-policy-briefing-no1). ac. The kinds of networked application that are feasible between two network locations will be a function of a range of factors including spatial distribution.net) . or commercial spaces. development and distributed fabrication.

Kayou

kayou is a distributed operating system project developed on top of the kaneton microkernel, in the vein of Amoeba. kayou provides a powerful distribution-oriented interface which enables applications to take advantage of the resources of networked computers. kayou is still in its design phase, hence not much information is actually available about its design or its implementation. Note that the kayou project is part of the Opaak educational trilogy along with kastor and kaneton.

External links
• kayou official website (http://kayou.opaak.org)

Live distributed object

Definitions

The term live distributed object (also abbreviated as live object) refers to a running instance of a distributed multi-party (or peer-to-peer) protocol, viewed from the object-oriented perspective as an entity that has a distinct identity, may encapsulate internal state and threads of execution, and that exhibits a well-defined externally visible behavior. The key programming language concepts, as applied to live distributed objects, are defined as follows.

(Figure: an illustration of the basic concepts involved in the definition of a live distributed object.)

• Identity. The identity of a live distributed object is determined by the same factors that differentiate between instances of the same distributed protocol. The object consists of a group of software components physically executing on some set of physical machines and engaged in mutual communication, each executing the distributed protocol code with the same set of essential parameters, such as the name of a multicast group, the identifier of a publish-subscribe topic, the identity of a membership service, etc. Thus, for example, publish-subscribe channels and multicast groups are examples of live distributed objects: for each channel or group, there exists a single instance of a distributed protocol running among all computers sending, forwarding, or receiving the data published in the channel or multicast within the group. In this case, the object's identity is determined by the identifier of the channel or group, qualified with the identity of the distributed system that provides, controls, and manages the given channel or group. The identity of the system might be determined, for example, by the address of the membership service (the entity that manages the membership of the multicast group).

• Proxies (replicas). The proxy or replica of a live object is one of the software component instances involved in executing the live object's distributed protocol. The object can thus be alternatively defined as a group of proxies engaged in communication, jointly maintaining some distributed state, and coordinating their operations. The term proxy stresses the fact that a single software component does not in itself constitute an object; rather, it serves as a gateway through which an application can gain access to a certain functionality or behavior that spans across a set of computers. In this sense, the concept of a live distributed object proxy generalizes the notion of an RPC, RMI, or .NET remoting client-side proxy stub.

• Behavior. The behavior of a live distributed object is characterized by the set of possible patterns of external interactions that its proxies can engage in with their local runtime environments. These interactions are modeled as exchanges of explicit events (messages).

• State. The state of a live distributed object is defined as the sum of all internal, local states of its proxies. By definition, it is distributed and replicated. The different replicas of the object's state may be strongly or only weakly consistent, depending on the protocol semantics: an instance of a consensus protocol will have the state of its replicas strongly consistent, whereas an instance of a leader election protocol will have a weakly consistent state. In this sense, the term live distributed object generalizes the concept of a replicated object; the latter is a specific type of live distributed object that uses a protocol such as Paxos, virtual synchrony, or state machine replication to achieve strong consistency between the internal states of its replicas. The state of a live distributed object should be understood as a dynamic notion: as a point (or consistent cut) in a stream of values, rather than as a particular value located in a given place at a given time. For example, the externally visible state of a leader election object would be defined as the identity of the currently elected leader. The identity is not stored at any particular location; rather, it materializes as a stream of messages of the form elected(x) concurrently produced by the proxies involved in executing this protocol, and concurrently consumed by instances of the application using this protocol.

• Interfaces (endpoints). The interface of a live distributed object is defined by the types of interfaces exposed by its proxies; these may include event channels and various types of graphical user interfaces. Interfaces exposed by the proxies are referred to as the live distributed object's endpoints. To say that a live object exposes a certain endpoint means that each of its proxies exposes an instance of this endpoint to its local environment, and each of the endpoint instances carries events of the same types (or binds to the same type of a graphical display). The term endpoint instance refers to a single specific event channel or user interface exposed by a single specific proxy.

• References. The reference to a live object is a complete set of serialized, portable instructions for constructing its proxy. To dereference a reference means to locally parse and follow these instructions on a particular computer, to produce a running proxy of the live object. Defined this way, a live object reference plays the same role as a Java reference, a C/C++ pointer, or a web service's WSDL description. Since live distributed objects may not reside in any particular place (but rather span across a dynamically changing set of computers), the information contained in a live distributed object's reference cannot be limited to just an address; rather, it contains complete information sufficient to locate the given object and interact with it. If the object is identified by some sort of a globally unique identifier (as might be the case for publish-subscribe topics or multicast groups), the reference must specify how this identifier is resolved, for example by recursively embedding a reference to the appropriate name resolution object.

• Types. The type of a live distributed object determines the patterns of external interactions with the object; it is determined by the types of endpoints and graphical user interfaces exposed by the object's proxies, and the patterns of events that may occur at the endpoints. The constraints that the object's type places on event patterns may span across the network. For example, the type atomic multicast might specify that if an event of the form deliver(x) is generated by one proxy, a similar event must be eventually generated by all non-faulty proxies (proxies that run on computers that never crash, and that never cease to execute or are excluded from the protocol; the precise definition might vary). Much as is the case for types in Java-like languages, there might exist many very different implementations of the same type. Thus, for example, behavior characteristic of atomic multicast might be exhibited by instances of distributed protocols such as virtual synchrony or Paxos.

The semantics and behavior of live distributed objects can be characterized in terms of distributed data flows; the set of messages or events that appear on the instances of a live object's endpoint forms a distributed data flow.[1][2]

History

Early ideas underlying the concept of a live distributed object have been influenced by a rich body of research on object-oriented environments, programming language embeddings, and protocol composition frameworks, dating back at least to the actor model developed in the early 1970s; a comprehensive discussion of the relevant prior work can be found in Krzysztof Ostrowski's Ph.D. dissertation.[3]

The term live distributed object was first used informally in a series of presentations given in the fall of 2006 at an ICWS conference[4], an STC[5] conference[6], and at the MSR labs in Redmond, WA[7], and then formally defined in 2007, in an IEEE Internet Computing article.[8] Originally, the term was used to refer to the types of dynamic, interactive Web content that is not hosted on servers in data centers, and internally powered by instances of reliable multicast protocols. The word live expressed the fact that the displayed information is dynamic, interactive, and represents current, fresh, live content that reflects recent updates made by the users (as opposed to static, read-only, and archival content that has been pre-assembled). The word distributed expressed the fact that the information is not hosted, stored at a server in a data center, but rather is replicated among the end-user computers, and updated in a peer-to-peer fashion through a stream of multicast messages that may be produced directly by the end-users consuming the content; a more comprehensive discussion of the live object concept in the context of Web development can be found in Krzysztof Ostrowski's Ph.D. dissertation.[3]

The more general definition presented above was first proposed in 2008, in a paper published at the ECOOP conference.[10] The extension of the term has been motivated by the need to model live objects as compositions of other objects; in this sense, the concept has been inspired by Smalltalk, which pioneered the uniform perspective that everything is an object, and Jini, which pioneered the idea that services are objects. When applied to live distributed objects, the perspective dictates that their constituent parts, which includes instances of distributed multi-party protocols used internally to replicate state, should also be modeled as live distributed objects. The need for uniformity implies that the definition of a live distributed object must unify concepts such as live Web content, message streams, and instances of distributed multi-party protocols.

The first implementation of the live distributed object concept, as defined in the ECOOP paper[10], was the Live Distributed Objects[11] platform developed by Krzysztof Ostrowski[9] at Cornell University. The platform provided a set of visual, drag and drop tools for composing hierarchical documents resembling web pages, and containing XML-serialized live object references. Visual content such as chat windows, shared desktops, and various sorts of mashups could be composed by dragging and dropping components representing user interfaces and protocol instances onto a design form, and connecting them together. Since the moment of its creation, a number of extensions have been developed to embed live distributed objects in Microsoft Office documents[12], and to support various types of hosted content such as Google Maps.[13][14][15][16][17][18][19][20][21] As of March 2009, the platform is being actively developed by its creators.
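The "state as a stream" idea above — a leader election object whose externally visible state exists only as elected(x) events concurrently produced by its proxies — can be illustrated with a toy sketch. The following Python fragment is purely illustrative (it is not code from the Live Distributed Objects platform, and the class and rule used here are invented for the example): three proxies of one live object each run the same deterministic rule over the same membership and emit elected(x) events to their local endpoints.

```python
# Toy sketch: proxies of a single leader-election "live object".
# Each proxy applies the same deterministic rule to the same membership
# information and emits an ("elected", x) event to its local endpoint;
# no proxy stores "the" leader as authoritative state.

class LeaderElectionProxy:
    def __init__(self, local_id, endpoint):
        self.local_id = local_id
        self.endpoint = endpoint      # local event channel: any callable
        self.members = set()

    def on_membership(self, members):
        # all proxies observe the same membership of candidate processes
        self.members = set(members)
        leader = min(self.members)    # deterministic rule: lowest id wins
        self.endpoint(("elected", leader))

# Three proxies of the *same* object, standing in for three machines:
events = {pid: [] for pid in ("a", "b", "c")}
proxies = [LeaderElectionProxy(pid, events[pid].append) for pid in ("a", "b", "c")]

for proxy in proxies:
    proxy.on_membership({"b", "c"})   # candidates are "b" and "c"

# Each proxy independently produced the same elected(x) event stream:
assert all(stream == [("elected", "b")] for stream in events.values())
```

The point of the sketch is that the "current leader" is never read from any single location; it is recovered by observing the concurrently produced event streams at the endpoints, which is how the article characterizes the object's externally visible state.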

References
[1] Ostrowski, K., Birman, K., Dolev, D., and Sakoda, C. (2009). "Implementing Reliable Event Streams in Large Systems via Distributed Data Flows and Recursive Delegation". 3rd ACM International Conference on Distributed Event-Based Systems (DEBS 2009), Nashville, TN, USA, July 6–9, 2009. http://www.cs.cornell.edu/~krzys/krzys_debs2009.pdf
[2] Ostrowski, K., Birman, K., and Dolev, D. (2009). "Programming Live Distributed Objects with Distributed Data Flows". Submitted to the International Conference on Object Oriented Programming, Systems, Languages and Applications (OOPSLA 2009). http://www.cs.cornell.edu/~krzys/krzys_oopsla2009.pdf
[3] Ostrowski, K. (2008). 'Extensible Web Services Architecture for Notification in Large-Scale Systems'. Ph.D. Dissertation, Cornell University. http://hdl.handle.net/1813/10881
[4] Ostrowski, K., and Birman, K. (2006). 'Extensible Web Services Architecture for Notification in Large-Scale Systems'. IEEE International Conference on Web Services (ICWS 2006), Chicago, IL, USA, September 2006. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4032049
[5] http://www.cs.utsa.edu/~shxu/stc06/
[6] Ostrowski, K., and Birman, K. (2006). 'Scalable Group Communication System for Scalable Trust'. First ACM Workshop on Scalable Trusted Computing (ACM STC 2006), Fairfax, VA, USA, November 2006. http://portal.acm.org/citation.cfm?id=1179477
[7] Ostrowski, K. (2006). "QuickSilver Scalable Multicast". Microsoft Research, Redmond, WA, November 2006. http://www.researchchannel.org/prog/displayevent.aspx?rID=7870&fID=2276
[8] Ostrowski, K., Birman, K., and Dolev, D. (2007). "Live Distributed Objects: Enabling the Active Web". IEEE Internet Computing, November–December 2007, 11(6):72-78. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?isnumber=4376216&arnumber=4376231
[9] http://www.cs.cornell.edu/~krzys
[10] Ostrowski, K., Birman, K., Dolev, D., and Ahnn, J. (2008). "Programming with Live Distributed Objects". Proceedings of the 22nd European Conference on Object-Oriented Programming, Paphos, Cyprus, July 07–11, 2008. Vitek, J., Ed. Lecture Notes In Computer Science, vol. 5142, Springer-Verlag, Berlin, Heidelberg, 463-489. http://portal.acm.org/citation.cfm?id=1428508.1428536
[11] http://liveobjects.cs.cornell.edu
[12] Ahnn, J., Birman, K., Ostrowski, K., and van Renesse, R. (2008). "Using live distributed objects for office automation". Proceedings of the ACM/IFIP/USENIX Middleware '08 Conference Companion (Companion '08), Leuven, Belgium, December 01–05, 2008. ACM, New York, NY. http://portal.acm.org/citation.cfm?id=1462735.1462743
[13] http://liveobjects.cs.cornell.edu/community/index.html
[14] Ostrowski, K., and Birman, K. (2009). "Storing and Accessing Live Mashup Content in the Cloud". 3rd ACM SIGOPS International Workshop on Large Scale Distributed Systems and Middleware (LADIS 2009), Big Sky, MT, USA, October 11, 2009. http://www.cs.cornell.edu/~krzys/krzys_ladis2009.pdf
[15] Akdogan, A., and Polepalli, B. (2008). "Live Maps". http://liveobjects.cs.cornell.edu/community/1/index.html
[16] Kashyap, S., and Nagarajappa, D. (2008). "Cornell Yahoo! Live Objects". http://liveobjects.cs.cornell.edu/community/2/index.html
[17] Dong, X., and Zhang, Z. (2008). "Integrate Live Objects with Flickr Web Service". http://liveobjects.cs.cornell.edu/community/3/index.html
[18] Prateek, S., Sankar, S., and Subramaniyan, R. (2008). "Live Google Earth UI". http://liveobjects.cs.cornell.edu/community/4/index.html
[19] Gupta, S., and Vora, H. (2008). "Distributed Google Earth". http://liveobjects.cs.cornell.edu/community/5/index.html
[20] Mahajan, R., and Wakankar, S. (2008). "Goole Earth Live Object". http://liveobjects.cs.cornell.edu/community/6/index.html
[21] Wadhwa, A. (2008). "ALGE (A Live Google Earth)". http://liveobjects.cs.cornell.edu/community/7/index.html

Master/slave (technology)

Master/slave is a model of communication where one device or process has unidirectional control over one or more other devices. In some systems a master is elected from a group of eligible devices, with the other devices acting in the role of slaves.[1][2][3]

Examples
• In database replication, the master database is regarded as the authoritative source, and the slave databases are synchronized to it.
• Peripherals connected to a bus in a computer system.
• In parallel ATA hard drive arrangements, the terms master and slave are used, but neither drive has control over the other. "Master" is merely another term for device 0 and "slave" indicates device 1. The terms also do not indicate precedence of one drive over the other in most situations.
• Railway locomotives operating in multiple (for example: to pull loads too heavy for a single locomotive) can be referred to as a master/slave configuration, with the operation of all locomotives in the train slaved to the controls of the first locomotive. See Multiple-unit train control.
• Duplication is often done with several cassette tape or compact disc recorders linked together. Operating the controls on the master triggers the same commands on the slaves, so that recording is done in parallel.
• On the Macintosh platform, Target Disk Mode allows a computer to operate as an external FireWire hard disk. Some older pre-FireWire Macintoshes had a similar controversial "SCSI Disk Mode", essentially a disk slave mode.

Controversy

Sometimes the terms master and slave are deemed offensive.[5][6] In November 2003, the County of Los Angeles sent an e-mail to its suppliers asking them not to use these terms:

Subject: IDENTIFICATION OF EQUIPMENT SOLD TO LA COUNTY
Date: Tue, 18 Nov 2003 14:21:16 -0800
From: "Los Angeles County"

The County of Los Angeles actively promotes and is committed to ensure a work environment that is free from any discriminatory influence be it actual or perceived. As such, it is the County's expectation that our manufacturers, suppliers and contractors make a concentrated effort to ensure that any equipment, supplies or services that are provided to County departments do not possess or portray an image that may be construed as offensive or defamatory in nature. One such recent example included the manufacturer's labeling of equipment where the words "Master/Slave" appeared to identify the primary and secondary sources. Based on the cultural diversity and sensitivity of Los Angeles County, this is not an acceptable identification label. We would request that each manufacturer, supplier and contractor review, identify and remove/change any identification or labeling of equipment or components thereof that could be interpreted as discriminatory or offensive in nature before such equipment is sold or otherwise provided to any County department. Thank you in advance for your cooperation and assistance.

Joe Sandoval, Division Manager
Purchasing and Contract Services
Internal Services Department
County of Los Angeles[4]

Many in the Information Technology field rebuff this claim of discrimination and offence as ridiculous, noting that the master/slave terminology accurately reflects what is going on inside the device, and that this was not intended in any way to be a reference to slavery as it existed in the United States. (See also political correctness.) There were rumors of a major push to change the way hardware manufacturers refer to these devices, but it has not had much effect on most of the products being produced. The designation of hard drives as master/slave may decline in a few years, with SATA replacing older IDE (PATA) drives; this standard allows only one drive per connection, and does not require the use of master/slave terms.

References
[1] master/slave - a searchNetworking definition (http://searchnetworking.techtarget.com/sDefinition/0,,sid7_gci783492,00.html)
[2] Description of the Microsoft Computer Browser Service from Microsoft KnowledgeBase (http://support.microsoft.com/kb/188001)
[3] Information on Browser Operation from Microsoft KnowledgeBase (http://support.microsoft.com/default.aspx?scid=KB;en-us;102878)
[4] Urban Legends Reference Pages: Inboxer Rebellion (Master/Slave) from www.snopes.com (http://www.snopes.com/inboxer/outrage/master.asp)
[5] L.A. County Bans Use Of "Master/Slave" Term from Slashdot (http://slashdot.org/article.pl?sid=03/11/25/0014257&mode=thread&tid=103&tid=133&tid=186&tid=99)
[6] 'Master' and 'slave' computer labels unacceptable, officials say (http://www.cnn.com/2003/TECH/ptech/11/26/master.term.reut/index.html) (Wednesday, November 26, 2003, CNN)

Membase

Developer(s): Couchbase (merged from NorthScale), Zynga, NHN
Stable release: 1.7.1 / July 26, 2011
Written in: C++, Erlang
Operating system: Cross-platform
Type: distributed key/value database system
License: Apache License
Website: http://membase.org/

Membase (pronunciation: mem-base) is an Open Source (Apache 2.0 license) distributed, key-value database management system optimized for storing data behind interactive web applications. These applications must service many concurrent users: creating, storing, retrieving, aggregating, manipulating and presenting data. In support of these kinds of application needs, membase is designed to provide simple, fast, easy-to-scale key-value data operations with low latency and high sustained throughput, and is designed to be clustered from a single machine to very large scale deployments. In the parlance of Eric Brewer's CAP theorem, membase is a CP type system.

Membase has wide language and application framework support due to its on-the-wire protocol compatibility with memcached.[1] For those familiar with memcached, membase directly incorporates memcached "front end" source code, leveraging the memcached engine interface and guaranteeing compatibility today and into the future.

History

Membase was developed by several leaders of the memcached project, who had founded a company, NorthScale, expressly to meet the need for a key-value database that enjoyed all the simplicity, speed, and scalability of memcached, but also provided the storage, persistence and querying capabilities of a database. The original membase source code was contributed by NorthScale and project co-sponsors Zynga and NHN to a new project on membase.org[2] in June 2010. As of February 8, 2011, the Membase project founders and Membase, Inc. announced a merger with CouchOne (a company with many of the principal players behind CouchDB) with an associated project merger. The merged project will be known as Couchbase.[3]

Design drivers

According to the Membase site and presentations, Membase design decisions are weighed against three non-negotiable requirements: membase is simple, fast, and elastic.[4] Membase intends to be extremely easy to manage and simple to develop against. By design, membase provides on-the-wire client protocol compatibility with memcached, transparently caches data in main memory, replicates data for high availability, and persists the data with a design for a multi-tier storage management model (planned to support Solid-state drive and Hard disk drive media). It supports data replication, live cluster reconfiguration, rebalancing and multi-tenancy with data partitioning.

Membase distributes data and data operation I/O across commodity servers (or VMs). Every node is alike in a membase cluster – clone a node, join it to the cluster and press the rebalance button to automatically rebalance data to it.
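Since the article stresses Membase's on-the-wire compatibility with memcached, a toy illustration may help make "protocol compatibility" concrete. The sketch below is a hypothetical in-memory parser (it is not Membase's actual implementation) that handles the `set`, `get` and `delete` commands of the memcached text protocol — the surface an existing memcached client library would speak to a membase node.

```python
# Illustrative sketch of the memcached text protocol surface.
# A real server speaks this over TCP; here we just parse request strings.

class TextProtocolStore:
    def __init__(self):
        self._data = {}

    def handle(self, request: str) -> str:
        """Handle one memcached-style text command and return the response."""
        lines = request.rstrip("\r\n").split("\r\n")
        parts = lines[0].split()
        cmd = parts[0]
        if cmd == "set":
            # "set <key> <flags> <exptime> <bytes>\r\n<data>\r\n"
            # (flags and expiry are accepted but ignored in this sketch)
            self._data[parts[1]] = lines[1]
            return "STORED\r\n"
        if cmd == "get":
            key = parts[1]
            if key in self._data:
                value = self._data[key]
                return f"VALUE {key} 0 {len(value)}\r\n{value}\r\nEND\r\n"
            return "END\r\n"
        if cmd == "delete":
            hit = self._data.pop(parts[1], None) is not None
            return "DELETED\r\n" if hit else "NOT_FOUND\r\n"
        return "ERROR\r\n"

store = TextProtocolStore()
assert store.handle("set greeting 0 0 5\r\nhello\r\n") == "STORED\r\n"
assert store.handle("get greeting\r\n") == "VALUE greeting 0 5\r\nhello\r\nEND\r\n"
```

Because any store that answers these commands looks like memcached to a client, a system that keeps this wire format — as membase does — can add persistence and replication behind it without changing application code.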

Membase claims to scale with linear cost: employing commodity servers, virtual machines or cloud machine instances, data management resources can be dynamically matched to the needs of an application with little effort. Servers can be added to, or removed from, a running cluster with no application downtime. It is a consistently low-latency and high-throughput processor of data operations: when operating out of memory, most operations occur in far less than 1 ms (assuming gigabit Ethernet). It is multi-threaded, with low lock contention; it automatically de-duplicates writes and is internally asynchronous everywhere possible.

Key features (persistence, replication/failover, scalability/performance)

Persistence
• Asynchronously writes data to disk after acknowledging the write to the client. In version 1.7 and later, applications can ensure data is synced to more than one server, while disk writes are still asynchronous.[5]
• Supports a working set greater than the memory quota per "node" or "bucket".
• Tunables to affect how max memory and migration from main memory to disk is handled.[6]
• Tunables to define item ages that affect when data is persisted.
• Configurable "tap" interface: external systems can subscribe to filtered data streams – supporting, for example, full text search indexing, data analytics or archiving.[7]

Replication and failover
• Multi-model replication support: peer-to-peer replication support, with underlying architecture supporting master-slave replication.
• Configurable replication count: balance resource utilization with availability requirements.
• High-speed failover: fast failover to replicated items based upon request.

Scalability and performance
• Distributed object store: easily store and retrieve large volumes of data from any application, using any language or application framework.
• Dynamic cluster resizing and rebalancing: effortlessly grow or shrink a membase cluster, adapting to changing data management requirements of an application.
• Guaranteed data consistency: never grapple with consistency issues in your application – no quorum reads required.
• High sustained throughput.
• Low, predictable latency.

Prominent users
• Zynga – membase is the key-value database behind FarmVille[8]
• NHN[9]

References
[1] http://code.google.com/p/memcached/wiki/NewProtocols
[2] http://www.membase.org
[3] Couchbase Website (http://www.couchbase.com/)
[4] membase.org: Does the world really need another NoSQL Database? (http://www.membase.org/whatsdifferent.html)
[5] membase.org wiki: membase Background Flush (http://wiki.membase.org/bin/view/Main/FlushingItems)
[6] membase.org wiki: Disk > Memory (http://wiki.membase.org/bin/view/Main/DiskGtMemory)
[7] Want to know what your memcached servers are doing? Tap them. (http://blog.northscale.com/northscale-blog/2010/03/want-to-know-what-your-memcached-servers-are-doing-tap-them.html)
[8] NorthScale Releases High-Performance NoSQL Database (http://www.northscale.com/pr/NorthScale-Membase-Server-beta.html)
[9] NorthScale Releases High-Performance NoSQL Database (http://www.northscale.com/pr/NorthScale-Membase-Server-beta.html)

Commercially supported distributions
• Couchbase Membase Server (http://www.couchbase.com/products-and-services/membase-server)

External links
• Official membase site (http://www.membase.org)
• membase wiki (http://wiki.membase.org)
• membase mailing list (http://groups.google.com/group/membase)

Message consumer

A message consumer is a Java interface for distributed systems, used to receive messages from a destination. To create a message consumer, a destination object is passed to a message-consumer creation method supplied by the session of this object. Created with a selector, it is possible to send a message to particular message consumer objects. The communication may be synchronous or asynchronous.
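The pattern just described — a session creates a consumer bound to a destination, optionally with a selector, and the consumer blocks on receive — can be sketched language-neutrally. The fragment below is a Python analogy for illustration only: the names loosely mirror the Java API (`create_consumer` stands in for the Java creation method), and they are not a real messaging library.

```python
# Illustrative analogy of the session/destination/consumer relationship.
import queue

class Destination:
    """A named queue or topic holding delivered messages."""
    def __init__(self, name):
        self.name = name
        self.messages = queue.Queue()

class MessageConsumer:
    """Receives messages from the destination it was created for."""
    def __init__(self, destination, selector=None):
        self.destination = destination
        self.selector = selector          # optional filter predicate

    def receive(self):
        """Synchronous receive: block until a (matching) message arrives."""
        while True:
            msg = self.destination.messages.get()
            # simplified: a real selector filters without consuming mismatches
            if self.selector is None or self.selector(msg):
                return msg

class Session:
    def create_consumer(self, destination, selector=None):
        # the destination object is passed to the creation method
        # supplied by the session, as described above
        return MessageConsumer(destination, selector)

orders = Destination("orders")
consumer = Session().create_consumer(orders)
orders.messages.put({"id": 1})
assert consumer.receive() == {"id": 1}
```

Asynchronous receipt would instead register a listener callback on the consumer rather than blocking in `receive()`.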

Message passing

Message passing in computer science is a form of communication used in parallel computing, object-oriented programming, and interprocess communication. In this model, processes or objects can send and receive messages (comprising zero or more bytes, complex data structures, or even segments of code) to other processes. By waiting for messages, processes can also synchronize. Prominent theoretical foundations of concurrent computation, such as the Actor model and the process calculi, are based on message passing. Forms of messages include (remote) method invocation, signals, and data packets.

Overview

Message passing is the paradigm of communication where messages are sent from a sender to one or more recipients. This concept is the higher-level version of a datagram, except that messages can be larger than a packet and can optionally be made reliable, durable, secure, and/or transacted. Messages are also commonly used in the same sense as a means of interprocess communication, the other common technique being streams or pipes, in which data are sent as a sequence of elementary data items instead (the higher-level version of a virtual circuit).

When designing a message passing system several choices are made:
• Whether messages are transferred reliably
• Whether messages are guaranteed to be delivered in order
• Whether messages are passed one-to-one, one-to-many (unicasting or multicast), or many-to-one (client–server)
• Whether communication is synchronous or asynchronous

Message passing systems

Distributed object and remote method invocation systems like ONC RPC, Corba, Java RMI, DCOM, SOAP, .NET Remoting, CTOS, QNX Neutrino RTOS, OpenBinder, D-Bus and similar are message passing systems. Message passing systems have been called "shared nothing" systems because the message passing abstraction hides underlying state changes that may be used in the implementation of sending messages. Message passing model based programming languages typically define messaging as the (usually asynchronous) sending (usually by copy) of a data item to a communication endpoint (Actor, process, thread, socket, etc.). Such messaging is used in Web Services by SOAP.

Implementations of concurrent systems that use message passing can either have message passing as an integral part of the language, or as a series of library calls from the language. Examples of the former include many distributed object systems. Examples of the latter include microkernel operating systems, which pass messages between one kernel and one or more server blocks, and the Message Passing Interface used in high-performance computing.

Synchronous versus asynchronous message passing

Synchronous message passing systems require the sender and receiver to wait for each other to transfer the message. That is, the sender will not continue until the receiver has received the message. Synchronous communication has two advantages. The first advantage is that reasoning about the program can be simplified, in that there is a synchronisation point between sender and receiver on message transfer. The second advantage is that no buffering is required: the message can always be stored on the receiving side, because the sender will not continue until the receiver is ready.

Asynchronous message passing systems deliver a message from sender to receiver without waiting for the receiver to be ready. The advantage of asynchronous communication is that the sender and receiver can overlap their computation because they do not wait for each other.
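The two styles can be sketched with Python threads and queues. The fragment below is illustrative only (it is not tied to any particular message passing library): asynchronous send returns immediately, while synchronous behaviour is layered on top by waiting for an acknowledgement from the receiver — the same construction described later for building synchronous communication over an asynchronous channel.

```python
# Minimal sketch of asynchronous vs. synchronous message passing.
import queue
import threading

inbox = queue.Queue()   # the receiver's buffer (mailbox)
acks = queue.Queue()    # acknowledgement channel used to emulate synchrony

def receiver():
    while True:
        msg = inbox.get()
        if msg is None:               # sentinel: shut down
            break
        kind, payload = msg
        if kind == "sync":            # acknowledge only synchronous sends
            acks.put(("ack", payload))

def send_async(payload):
    inbox.put(("async", payload))     # returns at once; sender keeps computing

def send_sync(payload):
    inbox.put(("sync", payload))
    return acks.get()                 # block until the receiver took the message

t = threading.Thread(target=receiver)
t.start()
send_async("hello")                   # fire-and-forget
result = send_sync("world")           # blocks until acknowledged
inbox.put(None)
t.join()
print(result)                         # ('ack', 'world')
```

Note that `inbox` is exactly the buffer whose finiteness causes the problems discussed below: a bounded queue would force a choice between blocking the sender and dropping messages.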

Synchronous communication can be built on top of asynchronous communication by ensuring that the sender always waits for an acknowledgement message from the receiver before continuing. The buffer required in asynchronous communication can cause problems when it is full: a decision has to be made whether to block the sender or whether to discard future messages. If the sender is blocked, it may lead to an unexpected deadlock. If messages are dropped, then communication is no longer reliable.

Message passing versus calling
Message passing should be contrasted with the alternative communication method for passing information between programs – the call. In a traditional call, arguments are passed to the "callee" (the receiver), typically by one or more general purpose registers or in a parameter list containing the addresses of each of the arguments. This form of communication differs from message passing in at least three crucial areas:
• total memory usage
• transfer time
• locality
In message passing, each of the arguments has to have sufficient available extra memory for copying the existing argument into a portion of the new message. This applies irrespective of the size of the original arguments – so if one of the arguments is (say) an HTML string of 31,000 octets describing a web page (similar to the size of this article), it has to be copied in its entirety (and perhaps even transmitted) to the receiving program (if not a local program).
By contrast, for the call method, only an address of say 4 or 8 bytes needs to be passed for each argument, and may even be passed in a general purpose register, requiring zero additional storage and zero "transfer time". This of course is not possible for distributed systems, since an (absolute) address – in the caller's address space – is normally meaningless to the remote program (however, a relative address might in fact be usable if the callee had an exact copy of at least some of the caller's memory in advance).
A subroutine call or method invocation will not exit until the invoked computation has terminated. Asynchronous message passing, by contrast, can result in a response arriving a significant time after the request message was sent.
A message handler will, in general, process messages from more than one sender. This means its state can change for reasons unrelated to the behaviour of a single sender or client process. This is in contrast to the typical behaviour of an object upon which methods are being invoked: the latter is expected to remain in the same state between method invocations. (In other words, the message handler behaves analogously to a volatile object.)

Message passing and locks
Message passing can be used as a way of controlling access to resources in a concurrent or asynchronous system. One of the main alternatives is mutual exclusion or locking. Examples of resources include shared memory, a disk file or region thereof, and a database table or set of rows.
In locking, a resource is essentially shared, and processes wishing to access it (or a sector of it) must first obtain a lock. Once the lock is acquired, other processes are blocked out, ensuring that corruption from simultaneous writes does not occur. After the process with the lock is finished with the resource, the lock is then released.
With the message-passing solution, it is assumed that the resource is not exposed, and all changes to it are made by an associated process, so that the resource is encapsulated. Processes wishing to access the resource send a request message to the handler. If the resource (or subsection) is available, the handler makes the requested change as an atomic event; that is, conflicting requests are not acted on until the first request has been completed. If the resource is not available, the request is generally queued. The sending program may or may not wait until the request has been completed.
Web browsers and web servers are examples of processes that communicate by message passing. A URL is an example of a way of referencing resources that does not depend on exposing the internals of a process.
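The encapsulated-resource alternative to locking described above can be sketched as a handler thread that owns the resource; the message format and names here are illustrative, not from any particular system:

```python
import queue
import threading

def handler(mailbox, resource):
    # The handler "owns" the resource; every change arrives as a message,
    # so each request is applied as an atomic event and conflicting
    # requests simply queue up behind one another.
    while True:
        request = mailbox.get()
        if request is None:                  # shutdown sentinel
            break
        key, delta, reply_box = request
        resource[key] = resource.get(key, 0) + delta
        reply_box.put(resource[key])         # tell the sender it is done

resource = {}
mailbox = queue.Queue()
worker = threading.Thread(target=handler, args=(mailbox, resource))
worker.start()

# Two "clients" send update requests instead of taking a lock.
replies = []
for delta in (5, 3):
    reply_box = queue.Queue()
    mailbox.put(("hits", delta, reply_box))
    replies.append(reply_box.get())          # optionally wait for completion

mailbox.put(None)
worker.join()
print(replies, resource)
```

Because only the handler thread ever touches `resource`, no lock is needed; the mailbox serialises the requests exactly as the text describes.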

In the terminology of some object-oriented programming languages, a message is the single means to pass control to an object. If the object "responds" to the message, it has a method for that message. In pure object-oriented programming, message passing is performed exclusively through a dynamic dispatch strategy.
Objects can send messages to other objects from within their method bodies. Message passing enables extreme late binding in systems. Sending the same message to an object twice will usually result in the object applying the method twice. Two messages are considered to be the same message type if the name and the arguments of the message are identical. Some languages support the forwarding or delegation of method invocations from one object to another if the former has no method to handle the message but "knows" another object that may have one. See also Inversion of Control.
Alan Kay has argued[3] that message passing is more important than objects in OOP, and that objects themselves are often over-emphasized. The live distributed objects programming model builds upon this observation; it uses the concept of a distributed data flow to characterize the behavior of a complex distributed system in terms of message patterns, using high-level, functional-style specifications.

Mathematical models
The prominent mathematical models of message passing are the Actor model[1] and Pi calculus[2].

Examples
• Actor model implementation
• Amorphous computing
• Flow-based programming
• SOAP (protocol)

References
[1] Actor Model of Computation: Scalable Robust Information Systems (http://www.robust11.org)
[2] Elements of interaction: Turing award lecture (https://dl.acm.org/citation.cfm?id=151240)
[3] http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html

Further reading
• McQuillan, John M.; Walden, David C. (1975). "Some considerations for a high performance message-based interprocess communication system" (http://portal.acm.org/citation.cfm?id=810905). Proceedings of the 1975 ACM SIGCOMM/SIGOPS workshop on Interprocess communications. ACM Press.
• Ramachandran, U.; Solomon, M.; Vernon, M. (1987). "Hardware support for interprocess communication" (http://portal.acm.org/citation.cfm?id=30371). Proceedings of the 14th annual international symposium on Computer architecture. ACM Press.
• Shimizu, Toshiyuki; Horie, Takeshi; Ishihata, Hiroaki (1992). "Low-latency message communication support for the AP1000" (http://portal.acm.org/citation.cfm?id=140385). Proceedings of the 19th annual international symposium on Computer architecture. ACM Press.

External links
• Future of Concurrent Programming (http://bartoszmilewski.wordpress.com/2010/08/02/beyond-locks-and-messages-the-future-of-concurrent-programming/)

Messaging pattern

In software architecture, a messaging pattern is a network-oriented architectural pattern which describes how two different parts of a message passing system connect and communicate with each other.
In telecommunications, a message exchange pattern (MEP) describes the pattern of messages required by a communications protocol to establish or use a communication channel. There are two major message exchange patterns: a request-response pattern and a one-way pattern. For example, HTTP is a request-response pattern protocol, and UDP has a one-way pattern.

SOAP
The term "Message Exchange Pattern" has a specific meaning within the SOAP protocol.[1] [2] SOAP MEP types include:
1. In-Only: This is equivalent to one-way. A standard one-way messaging exchange where the consumer sends a message to the provider that provides only a status response.
2. Robust In-Only: This pattern is for reliable one-way message exchanges. The consumer initiates with a message to which the provider responds with status. If the response is a status, the exchange is complete, but if the response is a fault, the consumer must respond with a status.
3. In-Out: This is equivalent to request-response. A standard two-way message exchange where the consumer initiates with a message, the provider responds with a message or fault, and the consumer responds with a status.
4. In Optional-Out: A standard two-way message exchange where the provider's response is optional.
5. Out-Only
6. Robust Out-Only
7. Out-In
8. Out-Optional-In

ØMQ
The ØMQ message queueing library provides so-called sockets (a kind of generalization over the traditional IP and Unix sockets) which require the programmer to indicate a messaging pattern to be used, and are particularly optimized for that kind of pattern. The basic ØMQ patterns are:[3]
• Request-reply connects a set of clients to a set of services. This is a remote procedure call and task distribution pattern.
• Publish-subscribe connects a set of publishers to a set of subscribers. This is a data distribution pattern.
• Push-pull connects nodes in a fan-out / fan-in pattern that can have multiple steps and loops. This is a parallel task distribution and collection pattern.
• Exclusive pair connects two sockets in an exclusive pair. This is a low-level pattern for specific, advanced use cases.
Each pattern defines a particular network topology. Request-reply defines a so-called "service bus", publish-subscribe defines a "data distribution tree", and push-pull defines a "parallelised pipeline". All the patterns are deliberately designed in such a way as to be infinitely scalable and thus usable on Internet scale.[4]
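The request-reply pattern above can be sketched with plain Python threads and queues standing in for a messaging library's sockets; this is an illustration of the pattern only, not the ØMQ API:

```python
import queue
import threading

requests = queue.Queue()

def upper_service():
    # One service answering requests from any number of clients; each
    # request carries its own reply queue as the return path.
    while True:
        req = requests.get()
        if req is None:            # shutdown sentinel
            break
        payload, reply_to = req
        reply_to.put(payload.upper())

threading.Thread(target=upper_service).start()

def client_call(payload):
    reply_to = queue.Queue()
    requests.put((payload, reply_to))   # the request message
    return reply_to.get()               # block for the response

result = client_call("ping")
requests.put(None)                      # stop the service
print(result)
```

In a real messaging library the return path is provided by the socket itself; modelling it as an explicit per-request queue makes the round-trip structure of the pattern visible.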

References
[1] SOAP MEPs in SOAP W3C Recommendation v1.2 (http://www.w3.org/TR/soap12-part1/#soapmep)
[2] Web Services Description Language (WSDL) Version 2.0: Additional MEPs (http://www.w3.org/TR/wsdl20-additional-meps/)
[3] ØMQ User Guide (http://www.zeromq.org/docs:user-guide)
[4] Scalability Layer Hits the Internet Stack (http://www.250bpm.com/hits)

External links
• Messaging Patterns in Service-Oriented Architecture (http://msdn.microsoft.com/en-us/library/aa480027.aspx)
• Enterprise Integration Patterns - Pattern Catalog (http://www.eaipatterns.com/toc.html)

Mobile agent

In computer science, a mobile agent is a composition of computer software and data which is able to migrate (move) from one computer to another autonomously and continue its execution on the destination computer.

Definition and overview
A mobile agent is a type of software agent with the features of autonomy, social ability, learning, and, most importantly, mobility. More specifically, a mobile agent is a process that can transport its state from one environment to another, with its data intact, and be capable of performing appropriately in the new environment. Mobile agents decide when and where to move. Movement is often evolved from RPC methods.
Just as a user directs an Internet browser to "visit" a website (the browser merely downloads a copy of the site, or one version of it in the case of dynamic web sites), similarly, a mobile agent accomplishes a move through data duplication. When a mobile agent decides to move, it saves its own state, transports this saved state to the new host, and resumes execution from the saved state.
A mobile agent is a specific form of mobile code. However, in contrast to the Remote evaluation and Code on demand programming paradigms, mobile agents are active in that they can choose to migrate between computers at any time during their execution. This makes them a powerful tool for implementing distributed applications in a computer network.
An open multi-agent system (MAS) is a system in which agents, owned by a variety of stakeholders, continuously enter and leave the system.

Reputation and Trust
The following are general concerns about Trust and Reputation in Mobile Agent research:
1. Source of trust information
• Direct experience
• Witness information
• Role-based rules
• Third-party references
2. How trust value is calculated
3. Overall trust value
What are the differences between trust and reputation systems?

Trust systems produce a score that reflects the relying party's subjective view of an entity's trustworthiness, whereas reputation systems produce an entity's (public) reputation score as seen by the whole community.
More:
• Compare Reputation and Trust

Advantages
Some advantages which mobile agents have over conventional agents:
• Computation bundles - converts computational client/server round trips to relocatable data bundles, reducing network load
• Parallel processing - asynchronous execution on multiple heterogeneous network hosts
• Dynamic adaptation - actions are dependent on the state of the host environment
• Tolerant to network faults - able to operate without an active connection between client and server
• Flexible maintenance - to change an agent's actions, only the source (rather than the computation hosts) must be updated
One particular advantage for remote deployment of software includes increased portability, thereby making system requirements less influential.

External links
• Seven Good Reasons for Mobile Agents [1]
• Mobile Agent Technologies [2], developer of AgentOS agent based operating system. Inventor of Automatic Thread Migration (ATM).
• National Institute for Standards and Technology [3], hosts a center for investigating security of mobile agents.
• AgentLink III [4]
• Mobile-C [5], a multi-agent platform for mobile C/C++ agents.
• JADE [6], an OSS mobile agent framework written in Java.
• The Foundation for Intelligent Physical Agents [7], a standards body which defines an interface for agent based interactions.
• Secure Mobile Agents Project [8], a project to develop a secure mobile agent server (last release 2007).

References
[1] http://www.moe-lange.com/danny/docs/7reasons.pdf
[2] http://www.agentos.org
[3] http://csrc.nist.gov/mobileagents/projects.html
[4] http://www.agentlink.org
[5] http://www.mobilec.org
[6] http://jade.tilab.com
[7] http://www.fipa.org
[8] http://semoa.sourceforge.net/about/about.html

MongoDB

Developer(s): 10gen
Initial release: 2009
Stable release: 1.8.2 / June 18, 2011
Development status: Active
Written in: C++
Operating system: Cross-platform
Available in: English
Type: Document-oriented database
License: GNU AGPL v3.0 (drivers: Apache license)
Website: http://www.mongodb.org/

MongoDB (from "humongous") is an open source, high-performance, schema-free, document-oriented database written in the C++ programming language.[1] The database is document-oriented so it manages collections of JSON-like documents. Many applications can thus model data in a more natural way, as data can be nested in complex hierarchies and still be query-able and indexable.
Development of MongoDB began in October 2007 by 10gen. The first public release was in February 2009.[2]

Features
Among the features are:
• Consistent UTF-8 encoding. Non-UTF-8 data can be saved, queried, and retrieved with a special binary data type.
• Cross-platform support: binaries are available for Windows, Linux, OS X, and Solaris. MongoDB can be compiled on almost any little-endian system.
• Type-rich: supports dates, regular expressions, code, binary data, and more (all BSON types)
• Cursors for query results
More features:

Ad hoc queries
In MongoDB, any field can be queried at any time. MongoDB supports range queries, regular expression searches, and other special types of queries in addition to exactly matching fields. Queries can also include user-defined JavaScript functions (if the function returns true, the document matches). Queries can return specific fields of documents (instead of the entire document), as well as sorting, skipping, and limiting results.

Querying nested fields
Queries can "reach into" embedded objects and arrays. If the following object is inserted into the users collection:
{
  "username" : "bob",
  "address" : {
    "street" : "123 Main Street",
    "city" : "Springfield",
    "state" : "NY"
  }
}
We can query for this document (and all documents with an address in New York) with:
> db.users.find({"address.state" : "NY"})
Array elements can also be queried:
> db.food.insert({"fruit" : ["peach", "pear", "plum"]})
> db.food.find({"fruit" : "pear"})

Indexing
The software supports secondary indexes, including single-key, compound, unique, non-unique, and geospatial[3] indexes. Nested fields (as described above in the ad hoc query section) can also be indexed, and indexing an array type will index each element of the array.
Indexes can be created or removed at any time. MongoDB's query optimizer will try a number of different query plans when a query is run and select the fastest, periodically resampling. Developers can see the index being used with the `explain` function and choose a different index with the `hint` function.

Aggregation
In addition to ad hoc queries, the database supports a couple of tools for aggregation, including MapReduce[4] and a group function similar to SQL's GROUP BY.

File storage
The software implements a protocol called GridFS[5] that is used to store and retrieve files from the database. This file storage mechanism has been used in plugins for NGINX[6] and lighttpd.[7]

Server-side JavaScript execution
JavaScript is the lingua franca of MongoDB and can be used in queries, aggregation functions (such as MapReduce), and sent directly to the database to be executed.
Example of JavaScript in a query:
> db.foo.find({$where : function() { return this.x == this.y; }})
Example of code sent to the database to be executed:
> db.eval(function(name) { return "Hello, " + name; }, ["Joe"])
This returns "Hello, Joe".
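As an illustration of the dot-notation matching described above, here is a tiny in-memory matcher; it sketches the query-by-example semantics only, and is neither the MongoDB engine nor a driver API:

```python
def get_path(doc, path):
    # Walk a dotted path like "address.state" into nested dicts.
    for part in path.split("."):
        if not isinstance(doc, dict) or part not in doc:
            return None
        doc = doc[part]
    return doc

def matches(doc, query):
    for path, wanted in query.items():
        value = get_path(doc, path)
        if isinstance(value, list):
            if wanted not in value:      # array fields match per element
                return False
        elif value != wanted:
            return False
    return True

users = [{"username": "bob",
          "address": {"street": "123 Main Street",
                      "city": "Springfield", "state": "NY"}}]
food = [{"fruit": ["peach", "pear", "plum"]}]

ny = [d for d in users if matches(d, {"address.state": "NY"})]
pears = [d for d in food if matches(d, {"fruit": "pear"})]
print(len(ny), len(pears))
```

The two list comprehensions mirror the `db.users.find(...)` and `db.food.find(...)` shell queries above: a dotted path reaches into the embedded document, and matching against an array field succeeds if any element matches.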

JavaScript variables can also be stored in the database and used by any other JavaScript as a global variable. Any legal JavaScript type, including functions and objects, can be stored in MongoDB so that JavaScript can be used to write "stored procedures."

Capped collections
MongoDB supports fixed-size collections called capped collections.[8] A capped collection is created with a set size and, optionally, number of elements. Capped collections are the only type of collection that maintains insertion order: once the specified size has been reached, a capped collection behaves like a circular queue.
A special type of cursor, called a tailable cursor,[9] can be used with capped collections. This cursor was named after the `tail -f` command, and does not close when it finishes returning results but continues to wait for more to be returned, returning new results as they are inserted into the capped collection.

Language support
MongoDB has official drivers for:
• C[16]
• C++[17]
• C#[18]
• Haskell[19]
• Java[20]
• JavaScript[21]
• Lisp[22]
• Perl[23]
• PHP[24]
• Python[25]
• Ruby[26]
• Scala[27]
There are also a large number of unofficial drivers, for C# and .NET,[18] ColdFusion,[28] Delphi,[29] Erlang,[30] [31] Factor,[32] Fantom,[33] Go,[34] Groovy,[35] other JVM languages (Clojure, Scala, etc.),[36] Lua,[37] node.js,[38] HTTP REST,[39] Ruby,[40] Racket,[41] and Smalltalk.[42]

Deployment
MongoDB can be built and installed from source, but it is more commonly installed from a binary package. Many Linux package management systems now include a MongoDB package, including CentOS and Fedora,[10] Debian and Ubuntu,[11] Gentoo,[12] and Arch Linux.[13] It can also be acquired through the official website.[14]
MongoDB uses memory-mapped files, limiting data size to 2GB on 32-bit machines (64-bit systems have a much larger data size).[15] The MongoDB server can only be used on little-endian systems, although most of the drivers work on both little-endian and big-endian systems.
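The circular-queue behaviour of a capped collection described above can be modelled with Python's bounded deque; this is an analogy for the overwrite semantics only, not MongoDB code:

```python
from collections import deque

# A capped collection preserves insertion order, and once the cap is
# reached the oldest documents are overwritten by new inserts.
# collections.deque with maxlen models exactly that behaviour.
capped = deque(maxlen=3)
for i in range(5):
    capped.append({"n": i})

print([d["n"] for d in capped])   # the two oldest entries were dropped
```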

Replication
MongoDB supports master-slave replication. A master can perform reads and writes. A slave copies data from the master and can only be used for reads or backup (not writes).

Master-slave
As operations are performed on the master, the slave will replicate any changes to the data.
Example of starting a master/slave pair locally:
$ mkdir -p ~/dbs/master ~/dbs/slave
$ ./mongod --master --port 10000 --dbpath ~/dbs/master
$ ./mongod --slave --port 10001 --dbpath ~/dbs/slave --source localhost:10000

Replica sets
Replica sets are similar to master-slave, but they incorporate the ability for the slaves to elect a new master if the current one goes down. In addition, MongoDB allows developers to guarantee that an operation has been replicated to at least N servers on a per-operation basis.

Sharding
MongoDB scales horizontally using a system called sharding,[43] which is very similar to the BigTable and PNUTS scaling model. The developer chooses a shard key, which determines how the data in a collection will be distributed. The data is split into ranges (based on the shard key) and distributed across multiple shards. (A shard is a master with one or more slaves.)
The application talks to a special routing process called `mongos` that looks identical to a single MongoDB server. This `mongos` process knows what data is on each shard and routes the client's requests appropriately. All requests flow through this process: it not only forwards requests and responses but also performs any necessary final data merges or sorts. Any number of `mongos` processes can be run; usually one per application server is recommended.
The developer's application must know that it is talking to a sharded cluster when performing some operations. For example, a "findAndModify" query must contain the shard key if the queried collection is sharded.[44]

Management and graphical frontends
Official tools
The most powerful and useful management tool is the database shell, mongo. The shell lets developers view, insert, remove, and update data in their databases, as well as get replication information, set up sharding, shut down servers, execute JavaScript, and more. mongo is built on SpiderMonkey, so it is a full JavaScript shell as well as being able to connect to MongoDB servers.
Administrative information can also be accessed through the admin interface: a simple HTML webpage that serves information about the current server status. By default, this interface is 1000 ports above the database port (http://localhost:28017) and it can be turned off with the --norest option.
mongostat is a command-line tool that displays a simple list of stats about the last second: how many inserts, updates, removes, queries, and commands were performed, as well as what percentage of the time the database was locked and how much memory it is using. mongosniff sniffs network traffic going to and from MongoDB.
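The range-based routing a `mongos` process performs, as described under Sharding, can be sketched as follows; the range boundaries, shard names, and keys are invented for illustration:

```python
import bisect

class Router:
    # Documents are partitioned into shard-key ranges; the router sends
    # each request to the shard that owns the range its key falls into.
    def __init__(self, boundaries, shards):
        self.boundaries = boundaries      # sorted upper bounds of ranges
        self.shards = shards              # one shard per range

    def route(self, shard_key):
        i = bisect.bisect_right(self.boundaries, shard_key)
        return self.shards[min(i, len(self.shards) - 1)]

# Three shards splitting a string shard key at "g" and "p".
router = Router(boundaries=["g", "p"],
                shards=["shard0", "shard1", "shard2"])
print(router.route("alice"))
print(router.route("mike"))
print(router.route("zoe"))
```

The routing table is why every `mongos` can stay stateless about the data itself: a binary search over range boundaries is enough to pick the target shard for any keyed request.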

Monitoring
There are monitoring plugins available for MongoDB:
• munin[45]
• ganglia[46]
• scout[47]
• cacti[48]

GUIs
Several GUIs have been created to help developers visualize their data. Some popular ones are:
• Fang of Mongo[49] – a web-based UI built with Django and jQuery.
• Futon4Mongo[50] – a clone of the CouchDB Futon web interface for MongoDB.
• Mongo3[51] – a Ruby-based interface.
• MongoHub[52] – a native OS X application for managing MongoDB.
• Opricot[53] – a browser-based MongoDB shell written in PHP.
• Database Master[54] – Windows based MongoDB Management Studio, supports also RDBMS.

Licensing and support
MongoDB is available for free under the GNU Affero General Public License.[55] The language drivers are available under an Apache License.

Prominent users
• MTV Networks[56]
• craigslist[57]
• Disney Interactive Media Group[58]
• Wordnik[59]
• diaspora[60]
• Shutterfly[61]
• foursquare[62]
• bit.ly[63]
• The New York Times[64]
• SourceForge[65]
• Business Insider[66]
• Etsy[67]
• CERN LHC[68]
• Thumbtack[69]
• AppScale[70]
• Uber[71]

References
[1] MongoDB website (http://www.mongodb.org/)
[2] State of MongoDB - March 2010 (http://blog.mongodb.org/post/434865639/state-of-mongodb-march-2010)
[3] Geospatial indexes (http://www.mongodb.org/display/DOCS/Geospatial+Indexing)
[4] MapReduce (http://www.mongodb.org/display/DOCS/MapReduce)
[5] GridFS (http://www.mongodb.org/display/DOCS/GridFS)
[6] NGINX (http://github.com/mdirolf/nginx-gridfs)
[7] lighttpd (http://bitbucket.org/bwmcadams/lighttpd-gridfs/src/)
[8] Capped collections (http://www.mongodb.org/display/DOCS/Capped+Collections)
[9] Tailable cursors (http://www.mongodb.org/display/DOCS/Tailable+Cursors)
[10] CentOS and Fedora packages (http://www.mongodb.org/display/DOCS/CentOS+and+Fedora+Packages)
[11] Ubuntu and Debian packages (http://www.mongodb.org/display/DOCS/Ubuntu+and+Debian+packages)
[12] Gentoo (http://packages.gentoo.org/package/dev-db/mongodb)
[13] Arch Linux (http://aur.archlinux.org/packages.php?ID=27971)
[14] Official website downloads (http://www.mongodb.org/display/DOCS/Downloads)
[15] 32-bit limitations (http://blog.mongodb.org/post/137788967/32-bit-limitations)
[16] C driver (http://github.com/mongodb/mongo-c-driver)
[17] C++ driver (http://github.com/mongodb/mongo)
[18] C# driver (https://github.com/mongodb/mongo-csharp-driver)
[19] Haskell driver (http://hackage.haskell.org/package/mongoDB)
[20] Java driver (http://github.com/mongodb/mongo-java-driver)
[21] JavaScript driver (http://www.mongodb.org/display/DOCS/Javascript+Language+Center)
[22] Lisp driver (https://github.com/fons/cl-mongo)
[23] Perl driver (http://github.com/mongodb/mongo-perl-driver)
[24] PHP driver (http://github.com/mongodb/mongo-php-driver)
[25] Python driver (http://github.com/mongodb/mongo-python-driver)
[26] Ruby driver (http://github.com/mongodb/mongo-ruby-driver)
[27] Casbah, the officially supported Scala Driver for MongoDB (https://github.com/mongodb/casbah)
[28] ColdFusion driver (http://github.com/virtix/cfmongodb)
[29] Delphi (http://code.google.com/p/pebongo/)
[30] Emongo Erlang driver (http://bitbucket.org/rumataestor/emongo)
[31] Erlmongo Erlang driver (http://github.com/wpntv/erlmongo)
[32] Factor driver (http://github.com/slavapestov/factor/tree/master/extra/mongodb/)
[33] Fantom driver (http://bitbucket.org/liamstask/fantomongo/wiki/Home)
[34] gomongo Go driver (http://github.com/mikejs/gomongo)
[35] GMongo (http://blog.paulopoiati.com/2010/06/20/gmongo-0-5-released/)
[36] JVM language center (http://www.mongodb.org/display/DOCS/JVM+Languages)
[37] LuaMongo (http://code.google.com/p/luamongo/)
[38] node.js driver (http://www.mongodb.org/display/DOCS/node.JS)
[39] REST interface (http://github.com/kchodorow/sleepy.mongoose)
[40] rmongo (http://github.com/tmm1/rmongo)
[41] Racket driver (http://planet.racket-lang.org/display.ss?package=mongodb.plt&owner=jaymccarthy)
[42] Smalltalk driver (http://www.squeaksource.com/MongoTalk.html)
[43] Sharding (http://www.mongodb.org/display/DOCS/Sharding)
[44] findAndModify command (http://www.mongodb.org/display/DOCS/findAndModify+Command#)
[45] Munin plugin (http://github.com/erh/mongo-munin)
[46] Ganglia plugin (http://github.com/quiiver/mongodb-ganglia)
[47] Scout slow-query plugin (http://scoutapp.com/plugin_urls/291-mongodb-slow-queries)
[48] Cacti plugin (http://tag1consulting.com/blog/mongodb-cacti-graphs)
[49] Fang of Mongo (http://github.com/Fiedzia/Fang-of-Mongo)
[50] Futon4Mongo (http://github.com/sbellity/futon4mongo)
[51] Mongo3 (http://mongo3.com/)
[52] MongoHub (http://www.apple.com/downloads/macosx/development_tools/mongohub.html)
[53] Opricot (http://www.icmfinland.fi/oss/opricot/)
[54] http://www.nucleonsoftware.com
[55] The AGPL - MongoDB Blog: May 5, 2009 (http://blog.mongodb.org/post/103832439/the-agpl)
[56] "MongoDB Powering MTV's Web Properties" (http://blog.mongodb.org/post/5360007734/mongodb-powering-mtvs-web-properties). 2011-05-10. Retrieved 2011-07-06.

[57] "MongoDB live at craigslist" (http://blog.mongodb.org/post/5545198613/mongodb-live-at-craigslist). 2011-05-16. Retrieved 2011-05-24.
[58] "Disney Central Services Storage: Leveraging Knowledge and skillsets" (http://www.10gen.com/presentation/mongosf2011/disney). 2011-05-24. Retrieved 2011-07-06.
[59] "12 Months with MongoDB" (http://blog.wordnik.com/12-months-with-mongodb). Wordnik. Retrieved 2011-07-06.
[60] "MongoDB" (http://www.diasporatest.com/index.php/MongoDB). diasporatest.com. 2010-12-23. Retrieved 2010-12-23.
[61] "Implementing MongoDB at Shutterfly - Presentation at MongoSF" (http://www.10gen.com/event_mongosf_10apr30#shutterfly). 2010-04-30. Retrieved 2010-06-28.
[62] "MongoDB at foursquare - Presentation at MongoNYC" (http://blip.tv/file/3704098). 2010-05-21. Retrieved 2010-06-28.
[63] "bit.ly user history, auto-sharded - Presentation at MongoNYC" (http://blip.tv/file/3704043). 2010-05-21. Retrieved 2010-06-28.
[64] Maher, Jacqueline (2010-05-25). "Building a Better Submission Form" (http://open.blogs.nytimes.com/2010/05/25/building-a-better-submission-form/). NYTimes Open Blog. Retrieved 2010-06-28.
[65] "How Python, TurboGears, and MongoDB are Transforming SourceForge.net" (http://us.pycon.org/2010/conference/schedule/event/110/). PyCon 2010. 2010-02-20. Retrieved 2010-06-28.
[66] "How This Web Site Uses MongoDB" (http://www.businessinsider.com/how-we-use-mongodb-2009-11). Business Insider. 2010-11-06. Retrieved 2010-06-28.
[67] "MongoDB at Etsy" (http://codeascraft.etsy.com/2010/05/19/mongodb-at-etsy/). Code as Craft: Etsy Developer Blog. 2010-05-19. Retrieved 2010-06-28.
[68] "Holy Large Hadron Collider, Batman!" (http://blog.mongodb.org/post/660037122/holy-large-hadron-collider-batman). The MongoDB NoSQL Database Blog. 2010-06-03. Retrieved 2010-08-03.
[69] "Building Our Own Tracking Engine With MongoDB" (http://engineering.thumbtack.com/2011/05/03/building-our-own-tracking-engine-with-mongodb/). Thumbtack Blog. 2011-05-03. Retrieved 2011-05-15.
[70] http://appscale.cs.ucsb.edu/datastores.html#mongodb
[71] "Node.js Meetup: Distributed Web Architectures – Curtis Chambers, Uber" (http://www.joyentcloud.com/resources/videos/node-js-office-hours-curtis-chambers-uber/). JoyentCloud. Retrieved 12 August 2011.

Bibliography
• Banker, Kyle (March 28, 2011). MongoDB in Action (1st ed.). Manning. pp. 375. ISBN 9781935182870
• Chodorow, Kristina; Dirolf, Michael (September 23, 2010). MongoDB: The Definitive Guide (1st ed.). O'Reilly Media. pp. 216. ISBN 9781449381561
• Pirtle, Mitch (March 3, 2011). MongoDB for Web Development (1st ed.). Addison-Wesley Professional. pp. 360. ISBN 9780321705334
• Hawkins, Tim; Plugge, Eelco; Membrey, Peter (September 26, 2010). The Definitive Guide to MongoDB: The NoSQL Database for Cloud and Desktop Computing (1st ed.). Apress. pp. 350. ISBN 9781430230519

External links
• Official MongoDB Project Website (http://www.mongodb.org/)
• FAQs about MongoDB (http://www.markus-gattol.name/ws/mongodb.html#faqs)
• MongoDB articles on NoSQLDatabases.com (http://www.nosqldatabases.com/main/tag/mongodb)
• June 2009 San Francisco NOSQL Meetup Page (http://nosql.eventbrite.com/)
• Designing for the Cloud (http://www.technologyreview.com/video/?vid=356) at MIT Technology Review
• EuroPython Conference Presentation (http://www.slideshare.net/mdirolf/mongodb-europython-2009)
• Non-relational data persistence in Java using MongoDB - Software Engineer at MongoDB (http://www.youtube.com/watch?v=dOP3w-9Q6lU) on YouTube
• Interview with Mike Dirolf on The Changelog about MongoDB background and design decisions (http://thechangelog.com/post/287597162/episode-0-0-7-mike-dirolf-from-10gen-and-mongodb)
• MongoMvc - A MongoDB Demo App with ASP.NET MVC (http://mongomvc.codeplex.com)
• mongoDB User Group (http://www.linkedin.com/groups?gid=3265391) on LinkedIn
• MongoDB news and articles on myNoSQL (http://nosql.mypopescu.com/tagged/mongodb)
• Eric Lai (2009, July 1). No to SQL? Anti-database movement gains steam (http://www.computerworld.com/s/article/9135086/No_to_SQL_Anti_database_movement_gains_steam_)

Multi-master replication

Multi-master replication is a method of database replication which allows data to be stored by a group of computers, and updated by any member of the group. The multi-master replication system is responsible for propagating the data modifications made by each member to the rest of the group, and resolving any conflicts that might arise between concurrent changes made by different members.

Multi-master replication can be contrasted with master-slave replication, in which a single member of the group is designated as the "master" for a given piece of data and is the only node allowed to modify that data item. Other members wishing to modify the data item must first contact the master node. Allowing only a single master makes it easier to achieve consistency among the members of the group, but is less flexible than multi-master replication.

Advantages
• If one master fails, other masters continue to update the database.
• Masters can be located in several physical sites, i.e. distributed across the network.

Disadvantages
• Most multi-master replication systems are only loosely consistent, i.e. lazy and asynchronous, violating ACID properties.
• Eager replication systems are complex and increase communication latency.
• Issues such as conflict resolution can become intractable as the number of nodes involved rises and latency increases.

Methods

Log-Based
A database transaction log is referenced to capture changes made to the database. For log-based transaction capturing, database changes can be distributed either synchronously or asynchronously.

Trigger-Based
Triggers at the subscriber capture changes made to the database and submit them to the publisher. With trigger-based transaction capturing, database changes can only be distributed asynchronously.

Implementations
Many directory servers based on LDAP implement multi-master replication.

Active Directory
One of the more prevalent multi-master replication implementations in directory servers is Microsoft's Active Directory. Within Active Directory, objects that are updated on one Domain Controller are then replicated to other domain controllers through multi-master replication. It is not required for all domain controllers to replicate with each other domain controller, as this would cause excessive network traffic in large Active Directory deployments. Instead, domain controllers have a complex update pattern that ensures that all servers are updated in a timely fashion without excessive replication traffic. Some Active Directory needs are however better served by Flexible single master operation.
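The trigger-based capture described under Methods can be illustrated with a small simulation. This is a hedged sketch with hypothetical names, not any vendor's implementation: an update "trigger" appends each change to a queue, and the queued changes are later drained asynchronously to the peer master.

```python
# Minimal sketch of trigger-based change capture (illustrative only).
# A "trigger" fires on every local write and records the change; the
# captured changes are submitted to the other master asynchronously.

class TriggeredTable:
    def __init__(self):
        self.rows = {}          # primary key -> value
        self.change_queue = []  # captured changes awaiting replication

    def update(self, key, value):
        self.rows[key] = value
        # trigger: capture the change for later submission
        self.change_queue.append((key, value))

    def drain_to(self, other):
        # asynchronous step: apply captured changes at the peer
        while self.change_queue:
            key, value = self.change_queue.pop(0)
            other.rows[key] = value

a, b = TriggeredTable(), TriggeredTable()
a.update("user:1", "alice")
b.update("user:2", "bob")
a.drain_to(b)   # replicate A's captured changes to B
b.drain_to(a)   # and vice versa
```

Because capture and apply are decoupled, the two replicas are only eventually consistent; concurrent updates to the same key still need a conflict-resolution rule, which is exactly the disadvantage listed above.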

CA Directory
CA Directory supports multi-master replication.

OpenDS
OpenDS implements multi-master replication since its version 1.0. The OpenDS multi-master replication is asynchronous: it uses a log with a publish-subscribe mechanism that allows scaling to a large number of nodes. OpenDS replication does conflict resolution at the entry and attribute level. OpenDS replication can be used over a Wide Area Network.

OpenLDAP
The widely used open source LDAP server implements multi-master replication since its version 2.4 (October 2007) [1].

Ingres
Within Ingres Replicator, objects that are updated on one Ingres server can then be replicated to other servers, whether local or remote, through multi-master replication. It is not required for all Ingres servers in an environment to replicate with each other, as this could cause excessive network traffic in large implementations. Instead, Ingres Replicator provides an elegant and sophisticated design that allows the appropriate data to be replicated to the appropriate servers without excessive replication traffic. This means that some servers in the environment can serve as failover candidates while other servers can meet other requirements, such as managing a subset of columns or tables for a departmental solution, a subset of rows for a geographical region, or one-way replication for a reporting server. In the event of a source, target, or network failure, data integrity is enforced through a two-phase commit protocol by ensuring that either the whole transaction is replicated, or none of it is. If one server fails, client connections can be re-directed to another server. In addition, Ingres Replicator can operate over RDBMS's from multiple vendors to connect them.

MySQL
MariaDB and MySQL ship with replication support. It is possible to achieve a multi-master replication scheme beginning with MySQL version 3.23. MySQL Cluster supports conflict detection and resolution between multiple masters since version 6.3.

Oracle
Oracle database clusters implement multi-master replication using one of two methods. Asynchronous multi-master replication commits data changes to a deferred transaction queue which is periodically processed on all databases in the cluster. Synchronous multi-master replication uses Oracle's two-phase commit functionality to ensure that all databases in the cluster have a consistent dataset.

PostgreSQL
PostgreSQL offers multiple solutions for multi-master replication, including solutions based on two-phase commit. There is Bucardo [2], rubyrep [3], PgPool and PgPool-II [4], PgCluster [5] and Sequoia [6], as well as some proprietary solutions. Another promising approach, implementing eager (synchronous) replication, is Postgres-R [7]; however, it is still in development. Yet another project implementing synchronous replication is Postgres-XC [8]; Postgres-XC is also still under development.
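Both the Ingres and Oracle entries above lean on two-phase commit for their "all or nothing" guarantee. A minimal coordinator/participant sketch of that idea (hypothetical classes, not any vendor's protocol):

```python
# Toy two-phase commit: every replica must vote "yes" in phase 1
# ("prepare") before any replica applies the change in phase 2
# ("commit"); a single "no" vote aborts the transaction everywhere.

class Replica:
    def __init__(self, healthy=True):
        self.healthy = healthy
        self.data = {}
        self.staged = None

    def prepare(self, change):
        self.staged = change
        return self.healthy        # the replica's vote

    def commit(self):
        key, value = self.staged
        self.data[key] = value
        self.staged = None

    def abort(self):
        self.staged = None

def two_phase_commit(replicas, change):
    if all(r.prepare(change) for r in replicas):   # phase 1: prepare
        for r in replicas:
            r.commit()                             # phase 2: commit
        return True
    for r in replicas:
        r.abort()                                  # roll back everywhere
    return False

cluster = [Replica(), Replica(), Replica()]
two_phase_commit(cluster, ("balance", 100))        # applied on all three
```

Either the whole transaction is replicated on every node or none of it is, which is the data-integrity property the Ingres description states; the price, as the Disadvantages section notes for eager replication, is extra round trips on every write.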

References
[1] http://www.openldap.org/software/roadmap.html
[2] http://bucardo.org/wiki/Bucardo
[3] http://www.rubyrep.org
[4] http://pgpool.projects.postgresql.org/
[5] http://pgcluster.projects.postgresql.org/
[6] http://www.continuent.com/community/lab-projects/sequoia
[7] http://www.postgres-r.org
[8] http://sourceforge.net/projects/postgres-xc/

External links
• Challenges Involved in Multimaster Replication (http://www.dbspecialists.com/presentations/mm_replication.html)
• Active Directory Replication Model (http://www.microsoft.com/resources/documentation/Windows/2000/server/reskit/en-us/Default.asp?url=/resources/documentation/Windows/2000/server/reskit/en-us/distrib/dsbh_rep_fgtk.asp)
• Terms and Definitions for Database Replication (http://www. … org/documentation/terms)
• SymmetricDS (http://symmetricds.org/) is web-enabled, database independent, data synchronization/replication software. It uses web and database technologies to replicate tables between relational databases in near real time. The software was designed to scale for a large number of databases, work across low-bandwidth connections, and withstand periods of network outage. By using database triggers, SymmetricDS guarantees that data changes are captured and atomicity is preserved. Support for database vendors is provided through a Database Dialect layer, with implementations for MySQL, Oracle, SQL Server, PostgreSQL, DB2, Firebird, HSQLDB, H2, and Apache Derby included. Licensed under the LGPL open source license.
• DBReplicator Project Page (http://dbreplicator.org/)
• Daffodil Replicator (http://opensource.daffodilsw.com/) is a Java tool for data synchronization, data migration, and data backup between various database servers. Daffodil Replicator works over a standard JDBC driver and supports replication across heterogeneous databases. At present, it supports the following databases: Microsoft SQL Server, Oracle, DB2, PostgreSQL, MySQL, Daffodil database, and Apache Derby. Daffodil Replicator is available in both enterprise (commercial) and open source (GPL-licensed) versions.
• DMOZ Open Directory Project - Database Replication Page (http://www.dmoz.org/Computers/Software/Databases/Replication/)

Multitier architecture

In software engineering, multi-tier architecture (often referred to as n-tier architecture) is a client–server architecture in which the presentation, the application processing, and the data management are logically separate processes. For example, an application that uses middleware to service data requests between a user and a database employs multi-tier architecture. The most widespread use of multi-tier architecture is the three-tier architecture.

N-tier application architecture provides a model for developers to create a flexible and reusable application. By breaking up an application into tiers, developers only have to modify or add a specific layer, rather than having to rewrite the entire application. There should be a presentation tier, a business or data access tier, and a data tier.

The concepts of layer and tier are often used interchangeably. However, one fairly common point of view is that there is indeed a difference: a layer is a logical structuring mechanism for the elements that make up the software solution, while a tier is a physical structuring mechanism for the system infrastructure.[1] [2]

Three-tier architecture

Three-tier[3] is a client–server architecture in which the user interface, functional process logic ("business rules"), computer data storage and data access are developed and maintained as independent modules, most often on separate platforms. The three-tier model is a software architecture and a software design pattern. It was developed by John J. Donovan in Open Environment Corporation (OEC), a tools company he founded in Cambridge, Massachusetts.

Visual overview of a three-tiered application

Apart from the usual advantages of modular software with well-defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently as requirements or technology change. For example, a change of operating system in the presentation tier would only affect the user interface code.

Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface, functional process logic may consist of one or more separate modules running on a workstation or application server, and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may be multi-tiered itself (in which case the overall architecture is called an "n-tier architecture").

Three-tier architecture has the following three tiers:

Presentation tier
This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with other tiers by outputting results to the browser/client tier and all other tiers in the network.

Application tier (business logic, logic tier, data access tier, or middle tier)
The logic tier is pulled out from the presentation tier and, as its own layer, it controls an application's functionality by performing detailed processing.

Data tier
This tier consists of database servers. Here information is stored and retrieved. This tier keeps data neutral and independent from application servers or business logic. Giving data its own tier also improves scalability and performance.

Comparison with the MVC architecture
At first glance, the three tiers may seem similar to the model-view-controller (MVC) concept; however, topologically they are different. A fundamental rule in a three-tier architecture is that the client tier never communicates directly with the data tier; in a three-tier model all communication must pass through the middle tier. Conceptually the three-tier architecture is linear. However, the MVC architecture is triangular: the view sends updates to the controller, the controller updates the model, and the view gets updated directly from the model.

From a historical perspective the three-tier architecture concept emerged in the 1990s from observations of distributed systems (e.g., web applications) where the client, middleware and data tiers ran on physically separate platforms. MVC comes from the previous decade (by work at Xerox PARC in the late 1970s and early 1980s) and is based on observations of applications that ran on a single graphical workstation; MVC was applied to distributed applications later in its history (see Model 2).

Web development usage
In the web development field, three-tier is often used to refer to websites, commonly electronic commerce websites, which are built using three tiers:
1. A front-end web server serving static content, and potentially some cached dynamic content. In web-based applications, the front end is the content rendered by the browser. The content may be static or generated dynamically.
2. A middle dynamic content processing and generation level application server, for example the Java EE, ASP.NET, PHP or ColdFusion platform.
3. A back-end database, comprising both data sets and the database management system or RDBMS software that manages and provides access to the data.

Other considerations
Data transfer between tiers is part of the architecture. Protocols involved may include one or more of SNMP, CORBA, Java RMI, .NET Remoting, Windows Communication Foundation, sockets, UDP, web services or other standard or proprietary protocols. Often middleware is used to connect the separate tiers. Separate tiers often (but not necessarily) run on separate physical servers, and each tier may itself run on a cluster.

Traceability
The end-to-end traceability of data flows through n-tier systems is a challenging task which becomes more important when systems increase in complexity. The Application Response Measurement defines concepts and APIs for measuring performance and correlating transactions between tiers.
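The tier separation and the "client never talks to the data tier" rule can be sketched with three hypothetical classes (illustrative names only, not a real framework):

```python
# Three-tier sketch: presentation -> logic -> data. The presentation
# tier holds no business rules and never touches storage directly.

class DataTier:
    def __init__(self):
        self._orders = {}      # order id -> item list

    def save(self, order_id, items):
        self._orders[order_id] = items

    def load(self, order_id):
        return self._orders[order_id]

class LogicTier:
    """Business rules live here, between presentation and data."""
    def __init__(self, data):
        self.data = data

    def place_order(self, order_id, items):
        if not items:
            raise ValueError("an order needs at least one item")
        self.data.save(order_id, items)
        return f"order {order_id}: {len(items)} item(s)"

class PresentationTier:
    def __init__(self, logic):
        self.logic = logic     # all calls go through the middle tier

    def checkout(self, order_id, cart):
        return self.logic.place_order(order_id, cart)

ui = PresentationTier(LogicTier(DataTier()))
receipt = ui.checkout(7, ["book", "pen"])
```

Swapping the presentation tier (web page vs. desktop GUI) or the data tier (another RDBMS) then only touches one class, which is the upgrade-in-isolation property the section describes.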

Comments
Generally, the term tiers is used to describe physical distribution of components of a system on separate servers, computers, or networks (processing nodes). A three-tier architecture then will have three processing nodes. Layers refer to a logical grouping of components which may or may not be physically located on one processing node.

External links
• Linux Journal, Three Tier Architecture [4]
• Microsoft Application Architecture Guide [5]

References
[1] Deployment Patterns (Microsoft Enterprise Architecture, Patterns, and Practices) (http://msdn.microsoft.com/en-us/library/ms998478.aspx)
[2] Fowler, Martin. "Patterns of Enterprise Application Architecture" (2002). Addison Wesley.
[3] Eckerson, Wayne W. "Three Tier Client/Server Architecture: Achieving Scalability, Performance, and Efficiency in Client Server Applications." Open Information Systems 10, 1 (January 1995): 3(20)
[4] http://www.linuxjournal.com/article/3508
[5] http://msdn.microsoft.com/en-us/library/ee658109.aspx

This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.

Network cloaking

Network cloaking is a technology that makes a protected network invisible to malicious external traffic, while allowing complete and uninterrupted access for legitimate users. To the perpetrator, the protected network simply appears to be unused.

Network cloaking is accomplished via a promiscuous bridge with firewall functionality, located in front of the internet firewall. All non-encrypted Internet traffic entering a network is inspected for malicious code, prohibited behaviors, malformed packets, and hack attempts. The network cloaking function immediately drops all packets from an offending IP address, including the initial request packets, and responses from the protected network.
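The drop-everything-from-offenders behaviour described for network cloaking can be sketched as a toy filter. This is an illustration of the policy only, not a real bridge or firewall:

```python
# Toy cloaking filter: once a source IP is flagged as malicious, every
# packet from it -- including the very first request -- is silently
# dropped, so no response leaks back and the network looks unused.

class CloakingFilter:
    def __init__(self, is_malicious):
        self.is_malicious = is_malicious   # inspection callback
        self.blocked = set()

    def handle(self, src_ip, payload):
        if src_ip in self.blocked:
            return None                    # drop: no response at all
        if self.is_malicious(payload):
            self.blocked.add(src_ip)       # flag the offender
            return None                    # drop even the first packet
        return f"forwarded: {payload}"     # legitimate traffic passes

fw = CloakingFilter(lambda p: "exploit" in p)
ok = fw.handle("203.0.113.9", "GET /index.html")
bad = fw.handle("198.51.100.7", "exploit attempt")
later = fw.handle("198.51.100.7", "GET /index.html")  # still dropped
```

Returning nothing (rather than an ICMP error or TCP reset) is the essential point: to the scanner there is no evidence that anything is listening.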

Opaak

The Opaak educational trilogy aims at providing material for the teaching and self-teaching of operating system concepts ranging from low-level programming, to kernel internals, to operating system principles and distributed system paradigms.

History
The Opaak educational trilogy's projects have been used for teaching operating systems at EPITA since 2004, date at which the kastor project was created. In 2006, the kaneton educational project competed[1] in the Alternative OS Contest run by the specialized website OSNews. The Opaak trilogy has been introduced by Julien Quintard in 2007 following the relative success of the kastor and kaneton projects in the EPITA curriculum.

Projects
Opaak is composed of the three following projects:

kastor
kastor, originally named k, is an introductory project targeting low-level programming. Indeed, the kastor monolithic kernel is provided with an ELF binary at boot time which represents an arcade game to be run. The kernel extracts this game from a special and minimalistic file system, loads it into memory and finally executes it. The objective for students is to develop an emulator for arcade games such as Pong, Arcanoid etc. The project lasts several weeks and allows students to understand the microprocessor's role in an operating system, though many modern functionalities, such as virtual memory and scheduling, are not discussed in this project.

kaneton
kaneton represents the core of the Opaak trilogy as it aims at making students develop parts of a microkernel. The project is composed of several stages, each one targeting a kernel functionality such as the booting phase, the memory management, the interrupts processing and the multitasking. This project is taught following the kastor project and lasts for a few months.

kayou
kayou is an operating system built over the kaneton microkernel. This project focuses on making students fully understand the kernel internals of a microkernel-based operating system by addressing advanced concepts such as multiprocessing, security etc. The kayou's originality resides in its fully distributed architecture. Indeed, in an environment composed of multiple kayou instances, all the computers of the network share their resources with each other, including memory, processor, storage, devices etc.

References
[1] The kaneton Microkernel Project at the Alternative OS Contest (http://www.osnews.com/story/15018/The-kaneton-Microkernel-Project/)

External links
• The Opaak educational trilogy official website (http://www.opaak.org)

Open architecture computing environment

Open Architecture Computing Environment (OACE) is a specification that aims to provide a standards-based computing environment in order to decouple the computing environment from software applications. This way it enables reusable software applications and components.

Open Computer Forensics Architecture

Developer(s): Korps landelijke politiediensten
Stable release: 2.2.0pl4
Development status: Active
Operating system: Linux
Available in: English
Type: Computer forensics
Website: [1]

The Open Computer Forensics Architecture (OCFA) is a distributed open source computer forensics framework used to analyze digital media within a digital forensics laboratory environment. The framework was built by the Dutch national police.

Architecture
OCFA consists of a back end for the Linux platform; it uses a PostgreSQL database for data storage, a custom content-addressable storage or CarvFS based data repository, and a Lucene index. The framework integrates with other open source forensic tools and includes modules for The Sleuth Kit, Scalpel, Photorec, 7-zip, zip, rar, tar, gzip, bzip2, GNU Privacy Guard, libmagic, antiword, exiftags, objdump, qemu-img and mbx2mbox. OCFA is extensible in C++ or Java.

The front end for OCFA has not been made publicly available due to licensing issues.

References
[1] http://sourceforge.net/apps/trac/ocfa/wiki
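OCFA's repository is described as content-addressable: a blob is stored and retrieved under the digest of its own contents, so identical evidence files deduplicate automatically and corruption is detectable. A minimal sketch of that idea (illustrative, not OCFA's actual code):

```python
# Content-addressable store sketch: the key IS the SHA-256 of the data,
# so the same bytes are only ever stored once, and any corruption is
# detectable by re-hashing on retrieval.

import hashlib

class ContentStore:
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self._blobs[digest] = data       # idempotent for identical data
        return digest

    def get(self, digest: str) -> bytes:
        data = self._blobs[digest]
        # integrity check: the content must still match its address
        assert hashlib.sha256(data).hexdigest() == digest
        return data

store = ContentStore()
key = store.put(b"disk image sector 42")
same = store.put(b"disk image sector 42")   # deduplicated: same key
```

For a forensics workload this also gives a stable, tamper-evident identifier for every artifact, independent of file names or paths.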

OrientDB

Developer(s): Luca Garulli
Initial release: 2010
Written in: Java
Operating system: Cross-platform
Type: Graph database
License: Apache 2 License
Website: [1]

OrientDB is an open source NoSQL database management system written in Java. It supports schema-less, schema-full and schema-mixed modes. It has a strong security profiling system based on users and roles, and supports SQL among its query languages. Even though it is a document-based database, relationships are managed as in graph databases, with direct connections among records. Thanks to the SQL layer, OrientDB is straightforward to use for people skilled in the relational world.

OrientDB uses a new indexing algorithm called MVRB-Tree, derived from the Red-Black Tree and from the B+Tree, with the benefits of both: fast insertion and ultra fast lookup.

Features
• Transactional: supports ACID transactions [2]. On crash it recovers the pending documents
• GraphDB: native management of graphs. 100% compliant with the TinkerPop Blueprints [3] standard for graph databases
• SQL: supports the SQL language [4] with extensions to handle relationships without SQL joins, and to manage trees and graphs of connected documents
• Web ready: natively supports HTTP, the RESTful protocol and JSON without using 3rd party libraries and components
• Run everywhere: the engine is 100% pure Java: it runs on Linux, Windows and any system that supports the Java technology
• Embeddable: local mode to use the database bypassing the server. Perfect for scenarios where the database is embedded
• Apache 2 License: always FREE for any usage. No fees or royalties are requested to use it
• Light: about 1Mb for the full server. No dependencies on other software. No libraries needed
• Commercial support available
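The "relationships as direct connections among records, not joins" point can be illustrated with plain dictionaries. This is a conceptual sketch of the data model, not OrientDB's API:

```python
# Document records that hold direct references to related records:
# traversing a relationship is pointer-chasing, not a join over keys.

city = {"@class": "City", "name": "Rome"}
alice = {"@class": "Person", "name": "Alice", "lives_in": city}
bob = {"@class": "Person", "name": "Bob", "lives_in": city}

# Graph-style traversal: follow the stored link directly.
home = alice["lives_in"]["name"]

# The inverse direction is another stored link set, not a table scan:
city["inhabitants"] = [alice, bob]
names = [p["name"] for p in city["inhabitants"]]
```

In a relational schema the same query would join Person.city_id against City.id at query time; with materialised links the traversal cost does not grow with the size of the tables.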

External links
• Official OrientDB website [5]
• Code base on Google Code [6]
• Public technical group [7]

References
[1] http://www.orientechnologies.com
[2] http://code.google.com/p/orient/wiki/Transactions
[3] http://blueprints.tinkerpop.com
[4] http://code.google.com/p/orient/wiki/SQL
[5] http://www.orientechnologies.com
[6] http://code.google.com/p/orient/
[7] https://groups.google.com/forum/#!forum/orient-database

Overlay network

An overlay network is a computer network which is built on top of another network. Nodes in the overlay can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. For example, distributed systems such as cloud computing, peer-to-peer networks, and client-server applications are overlay networks because their nodes run on top of the Internet. The Internet was built as an overlay upon the telephone network.[1]

Figure 1: A sample overlay network
Figure 2: Overlay network broken-up into logical layers

Uses of overlay networks

In telecommunication
Overlay networks are used in telecommunication because of the availability of digital circuit switching equipment and optical fiber.[2] Telecommunication transport networks and IP networks (that combined make up the broader Internet) are all overlaid with at least an optical layer, a transport layer and an IP or circuit layer (in the case of the PSTN). Enterprise private networks were first overlaid on telecommunication networks such as frame relay and Asynchronous Transfer Mode packet switching infrastructures, but migration from these (now legacy) infrastructures to IP based MPLS networks and virtual private networks started (2001~2002).

From a physical standpoint overlay networks are quite complex (see Figure 1) as they combine various logical layers that are operated and built by various entities (businesses, universities, government etc.), but they allow separation of concerns (and healthy business competition) that over time permitted the build-up of a broad set of services that could not have been proposed by a single telecommunication operator otherwise (ranging from broadband Internet access, voice over IP or IPTV, to competitive telecom operators etc.).[3]

Over the Internet
Nowadays the Internet is the basis for more overlaid networks that can be constructed in order to permit routing of messages to destinations not specified by an IP address. For example, distributed hash tables can be used to route messages to a node having a specific logical address, whose IP address is not known in advance.

Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP multicast have not seen wide acceptance, largely because they require modification of all routers in the network. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from ISPs. The overlay has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes a message traverses before reaching its destination.

For example, Akamai Technologies manages an overlay network which provides reliable, efficient content delivery (a kind of multicast). Academic research includes End System Multicast [4] and Overcast for multicast, RON (Resilient Overlay Network) for resilient routing, and OverQoS for quality of service guarantees, among others. Virtela Technology Services [5] provides an overlay network in 90+ countries on top of 500+ different underlying telecom providers.

List of overlay network protocols based on TCP/IP
Overlay network protocols based on TCP/IP include:
• Distributed hash tables (DHTs), such as KAD and other protocols based on the Kademlia algorithm
• JXTA
• Many peer-to-peer protocols including Gnutella, Gnutella2, Freenet and I2P (examples: Limewire, Shareaza, utorrent, etc.)
• PUCC
• Solipsis: a France Télécom system for massively shared virtual world

References
[1] D. Andersen, H. Balakrishnan, M. Kaashoek, and R. Morris. Resilient Overlay Networks (http://nms.csail.mit.edu/ron/). In Proc. ACM SOSP, Oct. 2001.
[2] AT&T history of Network transmission (http://www.corp.att.com/history/nethistory/transmission.html)
[3] Fransman, Martin. Telecoms in the Internet Age: From Boom to Bust to ...?. Oxford University Press.
[4] http://esm.cs.cmu.edu/
[5] Virtela Technology Services (http://www.virtela.net)

External links
• List of overlay network implementations, July 2003 (http://himalia.it.jyu.fi/ffdoc/storm/pegboard/available_overlays--hemppah/peg.gen.html)
• Resilient Overlay Networks (http://nms.csail.mit.edu/ron/)
• Overcast: reliable multicasting with an overlay network (http://www.cs.brown.edu/~jj/papers/overcast-osdi00.pdf)
• OverQoS: An overlay based architecture for enhancing Internet QoS (http://nms.lcs.mit.edu/papers/overqos-nsdi04.html)
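The logical addressing mentioned above — a DHT routing a message to the node responsible for a key, Kademlia-style — can be sketched with XOR distance. This toy collapses the whole overlay into one process (no real networking), which is the essence of the lookup rule only:

```python
# Kademlia-style lookup sketch: the overlay routes by logical node ID,
# ignoring the underlying IP topology. Here the "network" is one dict.

import hashlib

def node_id(name: str) -> int:
    # 32-bit logical address derived deterministically from a name
    return int(hashlib.sha256(name.encode()).hexdigest()[:8], 16)

nodes = {name: node_id(name) for name in ["n1", "n2", "n3", "n4"]}

def route(key: str) -> str:
    target = node_id(key)
    # deliver to the node whose ID is XOR-closest to the key's ID
    return min(nodes, key=lambda name: nodes[name] ^ target)

owner = route("some-file.txt")   # node responsible for this key
```

Any node can run the same rule locally, so the message converges on the responsible node without anyone knowing that node's IP address in advance — in a real deployment each hop only moves closer in XOR space rather than jumping straight to the minimum.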

Paradiseo

Developer(s): DOLPHIN project-team of INRIA
Stable release: 1.0 / October 12, 2007 [1]
Operating system: Cross-platform
Type: Technical computing
License: CeCill license
Website: [2]

ParadisEO is a white-box object-oriented framework dedicated to the flexible design of metaheuristics. ParadisEO provides a broad range of features including evolutionary algorithms, local searches, particle swarm optimization, hybrid metaheuristics, parallel and distributed metaheuristics, the most common parallel and distributed models and hybridization mechanisms, etc. This high content and utility encourages its use at international level. ParadisEO is distributed under the CeCill license and can be used under several environments.

Overview
ParadisEO is a white-box object-oriented framework dedicated to the reusable design of metaheuristics. ParadisEO is based on a clear conceptual separation of the solution methods from the problems they are intended to solve. This separation confers to the user a maximum code and design reuse. Furthermore, the fine-grained nature of the classes provided by the framework allows a higher flexibility compared to other frameworks. ParadisEO is one of the rare frameworks that provide the most common parallel and distributed models. Their implementation is portable on distributed-memory machines as well as on shared-memory multiprocessors, as it uses standard libraries such as MPI, PVM and PThreads. The models can be exploited in a transparent way: one has just to instantiate their associated provided classes. Their experimentation on the radio network design real-world application demonstrates their efficiency. This template-based, ANSI-C++ compliant computation library is portable across both Windows and Unix-like systems (Linux, Mac OS X, etc.).

Modules

Paradiseo-EO
Paradiseo-EO deals with population-based metaheuristics. It is a template-based, ANSI-C++ compliant evolutionary computation library (evolutionary algorithms, particle swarm optimization, etc.). It contains classes for almost any kind of evolutionary computation you might come up to - at least for the ones we could think of. It is component-based, so that if you don't find the class you need in it, it is very easy to subclass existing abstract or concrete classes.

Paradiseo-MO
Paradiseo-MO deals with single-solution based metaheuristics: it provides tools for the development of single solution-based metaheuristics: hill climbing, tabu search, iterative local search (ILS), simulated annealing, incremental evaluation, partial neighbourhood, etc.

Paradiseo-MOEO
Paradiseo-MOEO provides a broad range of tools for the design of multiobjective optimization metaheuristics: fitness assignment schemes (achievement functions, ranking, indicator-based, etc.), diversity preservation mechanisms (sharing, crowding, etc.), elitism, performance metrics (contribution, entropy, etc.), statistical tools and some easy-to-use state-of-the-art multi-objective evolutionary algorithms (NSGA, NSGA-II, IBEA, etc.).

Paradiseo-PEO
Paradiseo-PEO provides tools for the design of parallel and distributed metaheuristics: parallel evaluation, parallel evaluation function, island model, cellular model. Paradiseo-PEO also introduces tools for the design of distributed, hybrid and cooperative models.
diversity preservation mechanisms (sharing. Iterative Local Search (ILS). ANSI-C++ compliant evolutionary computation library (evolutionary algorithms. IBEA.. particle swarm optimization. Vancouver.Y.at least for the ones we could think of. at DOLPHIN project-team website References • "Solving the Protein Folding Problem with a Bicriterion Genetic Algorithm on the Grid" [4] • Protein Sequencing with an Adaptive Genetic Algorithm from Tandem Mass Spectrometry. Edited by S. hybrid and cooperative models. so that if you don't find the class you need in it. pp 1412–1419.Paradiseo 115 Modules Paradiseo-EO Paradiseo-EO deals with population based metaheuristics..). CEC 2006. Paradiseo-PEO Paradiseo-PEO provides tools for the design of parallel and distributed metaheuristics: parallel evaluation. July 16-21 2006. Simulated annealing. Canada • "ParadisEO-MOEO: A Framework for Evolutionary Multi-objective Optimization" [5] (broken link?) • A Multi-Objective Approach to the Design of Conducting Polymer Composites for Electromagnetic Shielding. Zomaya • Grid computing for parallel bioinspired algorithms [6] . entropy.. cellular model. it is very easy to subclass existing abstract or concrete classes. at Paradiseo website • Team [1].. partial neighbourhood... indicator-based. Tabu search.. Paradiseo-PEO also introduces tools for the design of distributed. it provides tools for the development of single solution-based metaheuristics: Hill climbing.). Matsushima.. NSGA-II. parallel evaluation function.). ranking. EMO 2007. statistical tools and some easy-to-use state-of-the-art multi-objective evolutionary algorithms (NSGA. 0-7803-9489-5. It is component-based. In Handbook of Bioinspired Algorithms and Applications. performance metrics (contribution.. Japan • A hybrid metaheuristic for knowledge discovery in microarray experiments.). elitism. 
Team
• Jean-Charles Boisson
• Clive Canape [3]
• Thomas Legrand
• Arnaud Liefooghe
• Alexandru-Adrian Tantar

External links
• Official site [2], at Paradiseo website
• Team [1], at DOLPHIN project-team website

References
• "Solving the Protein Folding Problem with a Bicriterion Genetic Algorithm on the Grid" [4]
• Protein Sequencing with an Adaptive Genetic Algorithm from Tandem Mass Spectrometry. CEC 2006, ISBN 0-7803-9489-5, July 16-21 2006, Vancouver, Canada, pp 1412–1419.
• "ParadisEO-MOEO: A Framework for Evolutionary Multi-objective Optimization" [5] (broken link?)
• A Multi-Objective Approach to the Design of Conducting Polymer Composites for Electromagnetic Shielding. EMO 2007, Matsushima, Japan.
• A hybrid metaheuristic for knowledge discovery in microarray experiments. In Handbook of Bioinspired Algorithms and Applications. Edited by S. Olariu and A. Zomaya.
• Grid computing for parallel bioinspired algorithms [6]

• A Framework for the Reusable Design of Parallel and Distributed Metaheuristics [7] (broken link?)
• Designing cellular networks using a parallel hybrid metaheuristic [8]

References
[1] http://www.inria.fr/recherche/equipes/dolphin.en.html
[2] http://paradiseo.gforge.inria.fr
[3] http://researchers.lille.inria.fr/~canape
[4] http://doi.ieeecomputersociety.org/10.1109/CCGRID.2006.172
[5] http://www2.lifl.fr/~jourdan/publi/jourdan_EMO07_A.pdf
[6] http://top25.sciencedirect.com/index.php?cat_id=9&subject_area_id=7&journal_id=07437315
[7] http://www.springerlink.com/content/up02m74726v1526u/
[8] http://dx.doi.org/10.1016/j.comcom.2006.08.017

Parasitic computing

Parasitic computing is a programming technique where a program, in normal authorized interactions with another program, manages to get the other program to perform computations of a complex nature. It is, in a sense, a security exploit in that the program implementing the parasitic computing has no authority to consume resources made available to the other program.

The example given by the original paper was two computers communicating over the Internet, under disguise of a standard communications session. The first computer is attempting to solve a large and extremely difficult 3-SAT problem; it has decomposed the original 3-SAT problem into a considerable number of smaller problems. Each of these smaller problems is then encoded as a relation between a checksum and a packet such that whether the checksum is accurate or not is also the answer to that smaller problem. The packet/checksum is then sent to another computer. This computer will, as part of receiving the packet and deciding whether it is valid and well-formed, create a checksum of the packet and see whether it is identical to the provided checksum. If the checksum is invalid, it will then request a new packet from the original computer. The original computer now knows the answer to that smaller problem based on the second computer's response, and can transmit a fresh packet embodying a different sub-problem. Eventually, all the sub-problems will be answered and the final answer easily calculated.

So in the end, the target computer(s) is unaware that it has performed computation for the benefit of the other computer, or even done anything besides have a normal TCP/IP session.

The proof-of-concept is obviously extremely inefficient: the amount of computation necessary to merely send the packets in the first place easily exceeds the computations leached from the other program, and the 3-SAT problem would be solved much more quickly if just analyzed locally. In addition, in practice packets would probably have to be retransmitted occasionally when real checksum errors and network problems occur. However, parasitic computing on the level of checksums is a demonstration of the concept. The authors suggest that as one moves up the application stack, there might come a point where there is a net computational gain to the parasite: perhaps one could break down interesting problems into queries of complex cryptographic protocols using public keys. If there was a net gain, one could in theory use a number of control nodes for which many hosts on the Internet form a distributed computing network completely unawares.
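The message flow of the scheme can be sketched in a few lines of Python. This is a toy model only: the names are invented, and where the original paper arranges the packet's 16-bit words so that the target's checksum arithmetic itself evaluates the clauses, this sketch simply corrupts the declared checksum for non-satisfying candidates, so that the purely generic accept/drop behaviour of the verifier encodes the answers.

```python
import itertools
import struct

def checksum16(data: bytes) -> int:
    """One's-complement sum of 16-bit words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

# Hypothetical 3-SAT instance: a clause is a list of (variable, negated) literals.
FORMULA = [[(0, False), (1, True)], [(0, True), (2, False)], [(1, False), (2, True)]]

def satisfies(assign, formula):
    return all(any(assign[v] != neg for v, neg in clause) for clause in formula)

def craft_packet(assign):
    # Parasite side: one packet per candidate assignment. Toy shortcut: the
    # declared checksum is made valid exactly when the candidate satisfies
    # the formula. (The actual paper instead constructs the packet words so
    # that the target's checksum computation evaluates the clauses.)
    payload = bytes(assign)
    good = checksum16(payload)
    return payload, good if satisfies(assign, FORMULA) else good ^ 1

def target_accepts(payload, declared):
    # Target side: nothing but ordinary checksum verification.
    return checksum16(payload) == declared

# The parasite reads the answers off the target's accept/drop behaviour.
accepted = [a for a in itertools.product((0, 1), repeat=3)
            if target_accepts(*craft_packet(list(a)))]
assert all(satisfies(list(a), FORMULA) for a in accepted)
```

As the article notes, the checksum work the parasite does to build each packet already exceeds the work extracted from the target; the sketch only illustrates how a yes/no sub-problem rides on a normal validity check.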

References
1. Barabasi et al., Parasitic computing, Nature, 412: 894-897 (2001).

External links
• http://www.nd.edu/~parasite
• http://www.szene.ch/parasit/

PlanetSim

PlanetSim is an object oriented simulation framework for overlay networks and services. This framework presents a layered and modular architecture with well defined hotspots documented using classical design patterns. PlanetSim has been developed in the Java language to reduce complexity and smooth the learning curve in our framework.

In PlanetSim, developers can work at two main levels: creating and testing new overlay algorithms like Chord or Pastry, or creating and testing new services (DHT, CAST, DOLR, etc.) on top of existing overlays. To validate the utility of our approach, we have implemented two overlays (Chord and Symphony) and a variety of services like CAST, DHT, and object middleware.

PlanetSim also aims to enable a smooth transition from simulation code to experimentation code running in the Internet. Because of this, we provide wrapper code that takes care of network communication and permits us to run the same code in network testbeds such as PlanetLab. This enables complete transparency to services running either against the simulator or the network. We have proved that PlanetSim reproduces the measures of these environments and is also efficient in its network implementation. We have, however, profiled and optimised the code to enable scalable simulations in reasonable time.

PlanetSim Architecture
PlanetSim's architecture comprises three main extension layers constructed one atop another. The Simulator dictates the overall life cycle of the framework by calling the appropriate methods in the overlay's Node and obtaining routing information to dispatch messages through the Network. The overlay layer obtains proximity information to other nodes by asking the Network layer. Applications are built in the upper layer using the standard Common API façade; this façade is built on the routing services offered by the underlying overlay layer. Besides, distributed services in the simulator use the Common API for Structured Overlays.
(Figure: PlanetSim layered architecture)
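To make the overlay level concrete, here is a minimal, hypothetical sketch (plain Python, not PlanetSim's Java API) of the kind of structured-overlay rule such simulators exercise: Chord-style key ownership, where a key hashed onto an identifier circle belongs to its successor node. The identifier size `M` and the node names are illustrative assumptions.

```python
import bisect
import hashlib

M = 16  # identifier space: 2**M points on the ring (illustrative value)

def node_id(name: str) -> int:
    """Hash a name onto the identifier circle, as structured overlays do."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** M)

class Ring:
    """Chord-style rule: a key is owned by its successor, i.e. the first
    node id clockwise from the key, wrapping around the circle."""
    def __init__(self, names):
        self.ids = sorted(node_id(n) for n in names)

    def successor(self, key: int) -> int:
        i = bisect.bisect_left(self.ids, key % (2 ** M))
        return self.ids[i % len(self.ids)]   # wrap past the largest id

ring = Ring([f"node{i}" for i in range(8)])
key = node_id("some-object")
owner = ring.successor(key)
assert owner in ring.ids
# ownership wraps: a key beyond the largest id belongs to the smallest
assert ring.successor(max(ring.ids) + 1) == min(ring.ids)
```

A simulator like the one described above would layer message dispatch and proximity queries around exactly this kind of routing decision.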

Publications

2005
• Pedro García, Carles Pairot, Rubén Mondéjar, Jordi Pujol, Helio Tejedor, and Robert Rallo. PlanetSim: A New Overlay Network Simulation Framework [1]. Software Engineering and Middleware, SEM 2004, Linz, Austria, September 2004, Revised Selected Papers. Lecture Notes in Computer Science (LNCS), Volume 3437, pp. 123-137, March 2005. ISSN 0302-9743, ISBN 3-540-25328-9.

2004
• Pedro García, Carles Pairot, Rubén Mondéjar, Jordi Pujol, Helio Tejedor, and Robert Rallo. PlanetSim: A New Overlay Network Simulation Framework [2]. Workshop on Software Engineering and Middleware (SEM 2004), Proceedings of the 19th IEEE International Conference on Automated Software Engineering (ASE 2004), Linz, Austria. ISBN 3-902457-02-3. Acceptance Rate: 34%.

Graphical Results
Currently PlanetSim can show the network topology as GML or Pajek outputs. See these examples:
• Chord: a random 1000-node Chord network, whose node Ids are randomly built. This output is obtained by loading the output file into the yEd graph editor (included in the current PlanetSim distribution).
• Symphony: a random 1000-node Symphony network, whose node Ids are randomly built. This output is obtained by loading the output file into the Pajek graph editor (only Windows version; not included in the current PlanetSim distribution).

External links
• PlanetSim official website [3]
• PlanetSim at SourceForge.net [4]. This site holds the latest release and collaborations.

References
[1] http://www.springerlink.com/index/10.1007/11407386_10
[2] http://planet.urv.es/planetsim/planetsim.pdf
[3] http://planet.urv.es/planetsim/
[4] http://sourceforge.net/projects/planetsim/

Portable object (computing)

In distributed programming, a portable object is an object which can be accessed through a normal method call while possibly residing in memory on another computer. It is portable in the sense that it moves from machine to machine, irrespective of operating system or computer architecture. This mobility is the end goal of many remote procedure call systems.

The advantage of portable objects is that they are easy to use and very expressive, allowing programmers to be completely unaware that objects reside in other locations. Detractors cite this as a fault, as naïve programmers will not expect network-related errors or the unbounded nondeterminism associated with large networks.
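The idea can be illustrated with a minimal, hypothetical stub in Python: a proxy forwards ordinary method calls to an object held elsewhere (here a stand-in registry rather than a real network), so calling code looks identical for local and remote objects, and remote failures surface as exceptions the caller may not anticipate. All class and method names below are invented for illustration.

```python
class RemoteError(Exception):
    pass

class Registry:
    """Stand-in for a remote process: a table of objects reachable by name."""
    def __init__(self):
        self.objects = {}

    def invoke(self, name, method, *args):
        if name not in self.objects:           # models a network/lookup failure
            raise RemoteError(f"no object {name!r}")
        return getattr(self.objects[name], method)(*args)

class Proxy:
    """Portable-object stub: a normal-looking method call is forwarded."""
    def __init__(self, registry, name):
        self._registry = registry
        self._name = name

    def __getattr__(self, method):
        return lambda *args: self._registry.invoke(self._name, method, *args)

class Counter:
    def __init__(self):
        self.n = 0

    def add(self, k):
        self.n += k
        return self.n

reg = Registry()
reg.objects["counter"] = Counter()
c = Proxy(reg, "counter")      # looks exactly like a local Counter
assert c.add(3) == 3
assert c.add(4) == 7
```

The detractors' point is visible here too: `Proxy(reg, "missing").ping()` raises a `RemoteError` even though the call site looks like plain local code.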

Redis (data store)

Redis
Developer(s): Salvatore Sanfilippo
Initial release: 2009
Stable release: 2.2.12 / June 12, 2011
Development status: Active
Written in: ANSI C
Operating system: Cross-platform
Available in: English
Type: Document-oriented database
License: BSD
Website: http://redis.io/

Redis is an open-source, networked, in-memory, persistent, journaled, key-value data store. It is written in ANSI C. As of 15 March 2010, development of Redis is sponsored by VMware[1] [2].

Supported languages or language bindings include C, C++, C#, Clojure, Common Lisp, Erlang, Go, Haskell, Java, JavaScript (both client and server-side), Lua, Objective-C, Perl, PHP, Python, R, Ruby, Scala, and Tcl.

Data model
In its outer layer, the Redis data model is a dictionary where keys are mapped to values. One of the main differences between Redis and other structured storage systems is that values are not limited to strings. In addition to strings, the following abstract data types are supported:
• Lists of strings
• Sets of strings (collections of non-repeating unsorted elements)
• Sorted sets of strings (collections of non-repeating elements ordered by a floating-point number called score)
• Hashes where keys are strings and values are either strings or integers

The type of a value determines what operations (called commands) are available for the value itself. Redis supports high-level atomic server-side operations like intersection, union, and difference between sets and sorting of lists, sets and sorted sets.

Persistence
Redis typically holds the whole dataset in RAM. Persistence is reached in two different ways. One is called snapshotting, and is a semi-persistent durability mode where the dataset is asynchronously transferred from memory to disk from time to time. Since version 1.1 the safer alternative is an append-only file (a journal) that is written as operations modifying the dataset in memory are processed. Redis is able to rewrite the append-only file in the background in order to avoid an indefinite growth of the journal. Versions up to 2.4 could be configured to use virtual memory[3] but this is now deprecated.
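The value types and set commands described above can be illustrated with plain Python structures. This is a model of the semantics only, not a Redis client; the key names are invented.

```python
# A toy picture of the outer dictionary: keys map to typed values.
store = {
    "tags:post1": {"redis", "database", "nosql"},            # a SET
    "tags:post2": {"nosql", "keyvalue"},                     # a SET
    "leaders": {"alice": 3.5, "bob": 1.0, "carol": 2.0},     # a sorted set:
}                                                            # member -> score

# SINTER / SDIFF semantics: server-side set algebra between two SET values.
assert store["tags:post1"] & store["tags:post2"] == {"nosql"}
assert store["tags:post1"] - store["tags:post2"] == {"redis", "database"}

# ZRANGE semantics: sorted-set members come back ordered by their
# floating-point score, not by insertion order.
by_score = sorted(store["leaders"], key=store["leaders"].get)
assert by_score == ["bob", "carol", "alice"]
```

In real Redis the same effects come from commands such as SINTER, SDIFF and ZRANGE, executed atomically on the server.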

Replication
Redis supports master-slave replication. Data from any Redis server can replicate to any number of slaves. A slave may be a master to another slave. This allows Redis to implement a single-rooted replication tree. Redis slaves are writable, permitting intentional and unintentional inconsistency between instances. Replication is useful for read (but not write) scalability or data redundancy.[4] The Publish/Subscribe feature is fully implemented, so a client of a slave may SUBSCRIBE to a channel and receive a full feed of messages PUBLISHed to the master, anywhere up the replication tree.

Performance
The in-memory nature of Redis allows it to perform extremely well compared to database systems that write every change to disk before considering a transaction committed.[5] There is no notable speed difference between write and read operations.[6]

References
• Jeremy Zawodny. Redis: Lightweight key/value Store That Goes the Extra Mile [7]. Linux Magazine, August 31, 2009. [8] [9]
• Isabel Drost and Jan Lehnard (29 October 2009). Happenings: NoSQL Conference, Berlin. The H. Slides [10] for the Redis presentation.
• Billy Newport (IBM): "Evolving the Key/Value Programming Model to a Higher Level" [11]. QCon Conference 2009, San Francisco.

[1] VMware: the new Redis home (http://antirez.com/post/vmware-the-new-redis-home.html)
[2] VMware: The Console: VMware hires key developer for Redis (http://blogs.vmware.com/console/2010/03/vmware-hires-key-developer-for-redis.html)
[3] Redis documentation "Virtual Memory" (http://redis.io/topics/virtual-memory)
[4] http://code.google.com/p/redis/wiki/ReplicationHowto
[5] "FAQ" (http://redis.io/topics/faq), redis.io, accessed January 18, 2011.
[6] A. Charnock: "Redis Benchmarking on Amazon EC2, Flexiscale, and Slicehost" (http://porteightyeight.com/2009/11/09/redis-benchmarking-on-amazon-ec2-flexiscale-and-slicehost/)
[7] http://www.linux-mag.com/cache/7496/1.html
[8] http://www.paperplanes.de/2009/10/27/theres_something_about_redis.html
[9] http://www.h-online.com/open/features/Happenings-NoSQL-Conference-Berlin-843597.html
[10] http://nosqlberlin.de/slides/NoSQLBerlin-Redis.pdf
[11] http://www.infoq.com/presentations/newport-evolving-key-value-programming-model

External links
• Official Redis project page (http://redis.io/)
• Audio Interview with Salvatore Sanfilippo on The Changelog podcast (http://thechangelog.com/post/2801342864/episode-0-4-5-redis-with-salvatore-sanfilippo/)
• Extensive Redis tutorial with real use-cases by Simon Willison (http://simonwillison.net/static/2010/redis-tutorial/)
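The single-rooted replication tree and the Publish/Subscribe feed described in this article can be modelled in a few lines. The sketch below is hypothetical Python, not Redis code; `Node`, `slaveof`, `set` and `publish` merely mimic the corresponding Redis commands (SLAVEOF, SET, PUBLISH) with no networking involved.

```python
class Node:
    """Toy replication-tree node: it applies a command locally and
    forwards the same command stream to each of its slaves."""
    def __init__(self):
        self.data = {}
        self.slaves = []
        self.subscribers = []

    def slaveof(self, master):
        master.slaves.append(self)

    def set(self, key, value):
        self.data[key] = value
        for s in self.slaves:
            s.set(key, value)            # replication fans out down the tree

    def publish(self, channel, message):
        for fn in self.subscribers:
            fn(channel, message)
        for s in self.slaves:
            s.publish(channel, message)  # the feed travels down the tree

master, mid, leaf = Node(), Node(), Node()
mid.slaveof(master)
leaf.slaveof(mid)                        # master -> mid -> leaf

master.set("k", "v")
assert leaf.data["k"] == "v"             # a write at the root reaches a leaf

seen = []
leaf.subscribers.append(lambda ch, m: seen.append((ch, m)))
master.publish("news", "hello")          # SUBSCRIBE on a slave still sees
assert seen == [("news", "hello")]       # messages PUBLISHed at the master
```

Because each node simply re-emits the stream it receives, a slave can serve as master to further slaves, which is exactly what makes the tree single-rooted.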

Remote Component Environment

Remote Component Environment (RCE) (was: Reconfigurable Computing Environment)
Stable release: 1.7.0 / July 20, 2010
Written in: Java and Python
Operating system: Cross-platform
Type: Integration platform, Multi-purpose Problem Solving Environment
License: Eclipse Public License
Website: http://www.rcenvironment.org/

The Remote Component Environment (RCE) is an all-purpose, distributed platform for the integration of applications. It is a plug-in based system for application integration written in Java on top of the Eclipse framework. RCE enables the developers of integrated applications to concentrate on application-specific logic and to let the different applications interact by embedding them into one unified environment. RCE provides integrated applications access to general-purpose software components like a workflow engine, a privilege management, or an interface to external compute and storage resources (Grids, clusters). It supports and integrates well-known middleware solutions like the Globus Toolkit and UNICORE, and abstraction layers like Hibernate (Java).

Development of the RCE platform took place in the SESIS [1] project. Previously the platform was known as Reconfigurable Computing Environment. Since it has been open sourced, the name changed to Remote Component Environment[2].

References
[1] http://www.sesis.de
[2] http://www.nohuddleoffense.de/2009/09/19/remote-component-environment/

External links
• Official RCE website (http://www.rcenvironment.de)
• DLR RCE product site (in German) (http://www.dlr.de/sc/produkte/rce)

Request Based Distributed Computing

Request Based Distributed Computing (RBDC) is a term that refers to the distributed computing paradigm underlying the HyperText Computer.

External links
• HyperText Computer Blog [2]
• Request Based Distributed Computing Blog [1]

References
[1] http://www.davidpratten.com/2008/01/07/request-based-distributed-computing-a-rough-sketch/

RM-ODP

Reference Model of Open Distributed Processing (RM-ODP) is a reference model in computer science, which provides a co-ordinating framework for the standardization of open distributed processing (ODP). It supports distribution, interworking, platform and technology independence, and portability, together with an enterprise architecture framework for the specification of ODP systems.[1]

RM-ODP, also named ITU-T Rec. X.901-X.904 and ISO/IEC 10746, is a joint effort by the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the Telecommunication Standardization Sector (ITU-T).

(Figure: The RM-ODP view model, which provides five generic and complementary viewpoints on the system and its environment.)

Overview
The RM-ODP is a reference model based on precise concepts derived from current distributed processing developments and, as far as possible, on the use of formal description techniques for specification of the architecture. Many RM-ODP concepts, possibly under different names, have been around for a long time and have been rigorously described and explained in exact philosophy (for example, in the works of Mario Bunge) and in systems thinking (for example, in the works of Friedrich Hayek). Some of these concepts, such as abstraction, composition, and emergence, have recently been provided with a solid mathematical foundation in category theory.

RM-ODP has four fundamental elements:
• an object modelling approach to system specification;
• the specification of a system in terms of separate but interrelated viewpoint specifications;
• the definition of a system infrastructure providing distribution transparencies for system applications; and
• a framework for assessing system conformance.

The RM-ODP family of recommendations and international standards defines a system of interrelated essential concepts necessary to specify open distributed processing systems and provides a well-developed enterprise architecture framework for structuring the specifications for any large-scale systems, including software systems.

History
Much of the preparatory work that led into the adoption of RM-ODP as an ISO standard was carried out by the Advanced Networked Systems Architecture (ANSA) project. This ran from 1984 until 1998 under the leadership of Andrew Herbert (now MD of Microsoft Research in Cambridge), and involved a number of major computing and telecommunication companies. Parts 2 and 3 of the RM-ODP were eventually adopted as ISO standards in 1996; Parts 1 and 4 were adopted in 1998.

RM-ODP Topics

RM-ODP standards
RM-ODP consists of four basic ITU-T Recommendations and ISO/IEC International Standards:[2] [3] [4] [5]
1. Overview[6]: Contains a motivational overview of ODP, giving scoping, justification and explanation of key concepts, and an outline of the ODP architecture. It contains explanatory material on how the RM-ODP is to be interpreted and applied by its users, who may include standard writers and architects of ODP systems. It introduces the principles of conformance to ODP standards and the way in which they are applied. This recommendation also defines the RM-ODP viewpoints: subdivisions of the specification of a whole system, established to bring together those particular pieces of information relevant to some particular area of concern.
2. Foundations[7]: Contains the definition of the concepts and analytical framework for normalized description of (arbitrary) distributed processing systems. In only 18 pages, this standard sets the basics of the whole model in a clear, precise and concise way.
3. Architecture[8]: Contains the specification of the required characteristics that qualify distributed processing as open. These are the constraints to which ODP standards must conform.
4. Architectural Semantics[9]: Contains a formalization of the ODP modeling concepts by interpreting many concepts in terms of the constructs of the different standardized formal description techniques.

Viewpoints modeling and the RM-ODP framework
Most complex system specifications are so extensive that no single individual can fully comprehend all aspects of the specifications. Furthermore, we all have different interests in a given system and different reasons for examining the system's specifications. A business executive will ask different questions of a system make-up than would a system implementer. The concept of the RM-ODP viewpoints framework, therefore, is to provide separate viewpoints into the specification of a given complex system. These viewpoints each satisfy an audience with interest in a particular set of aspects of the system. Associated with each viewpoint is a viewpoint language that optimizes the vocabulary and presentation for the audience of that viewpoint.

Viewpoint modeling has become an effective approach for dealing with the inherent complexity of large distributed systems. Current software architectural practices, as described in IEEE 1471, divide the design activity into several areas of concerns, each one focusing on a specific aspect of the system. Examples include the "4+1" view model, the Zachman Framework, TOGAF, DoDAF and, of course, RM-ODP.

A viewpoint is a subdivision of the specification of a complete system, established to bring together those particular pieces of information relevant to some particular area of concern during the analysis or design of the system. Although separately specified, the viewpoints are not completely independent: key items in each are identified as related to items in the other viewpoints. Moreover, each viewpoint substantially uses the same foundational concepts.

These foundational concepts are defined in Part 2 of RM-ODP. However, the viewpoints are sufficiently independent to simplify reasoning about the complete specification. The mutual consistency among the viewpoints is ensured by the architecture defined by RM-ODP, and the use of a common object model provides the glue that binds them all together.

More specifically, the RM-ODP framework provides five generic and complementary viewpoints on the system and its environment:
• The enterprise viewpoint, which focuses on the purpose, scope and policies for the system. It describes the business requirements and how to meet them.
• The information viewpoint, which focuses on the semantics of the information and the information processing performed. It describes the information managed by the system and the structure and content type of the supporting data.
• The computational viewpoint, which enables distribution through functional decomposition of the system into objects which interact at interfaces. It describes the functionality provided by the system and its functional decomposition.
• The engineering viewpoint, which focuses on the mechanisms and functions required to support distributed interactions between objects in the system. It describes the distribution of processing performed by the system to manage the information and provide the functionality.
• The technology viewpoint, which focuses on the choice of technology of the system. It describes the technologies chosen to provide the processing, functionality and presentation of information.

RM-ODP and UML
Currently there is growing interest in the use of UML for system modelling. However, there is no widely agreed approach to the structuring of such specifications. This lack of precise notations for expressing the different models involved in a multi-viewpoint specification of a system is a common feature for most enterprise architectural approaches, including the Zachman Framework, the "4+1" model, or the RM-ODP. These approaches were consciously defined in a notation- and representation-neutral manner to increase their use and flexibility. The viewpoint languages defined in the reference model are abstract languages in the sense that they define what concepts should be used, not how they should be represented; although the ODP reference model provides abstract languages for the relevant concepts, it does not prescribe particular notations to be used in the individual viewpoints. However, this makes more difficult, among other things, the development of industrial tools for modeling the viewpoint specifications, the formal analysis of the specifications produced, and the possible derivation of implementations from the system specifications. It also adds to the cost of adopting the use of UML for system specification, hampers communication between system developers, and makes it difficult to relate or merge system specifications where there is a need to integrate IT systems.

In order to address these issues, ISO/IEC and the ITU-T started a joint project in 2004: "ITU-T Rec. X.906 | ISO/IEC 19793: Information technology - Open distributed processing - Use of UML for ODP system specifications". This document (usually referred to as UML4ODP)[10] defines use of the Unified Modeling Language 2 (UML 2, ISO/IEC 19505) for expressing the specifications of open distributed systems in terms of the viewpoint specifications defined by the RM-ODP. It defines a set of UML Profiles, one for each viewpoint language and one to express the correspondences between viewpoints, and an approach for structuring them according to the RM-ODP principles.

The purpose of UML4ODP is to allow ODP modelers to use the UML notation for expressing their ODP specifications in a standard graphical way; to allow UML modelers to use the RM-ODP concepts and mechanisms to structure their large UML system specifications according to a mature and standard proposal; and to allow UML tools to be used to process viewpoint specifications, thus facilitating the software design process and the enterprise architecture specification of large software systems.

In addition, ITU-T Rec. X.906 | ISO/IEC 19793 enables the seamless integration of the RM-ODP enterprise architecture framework with the Model-Driven Architecture (MDA) initiative from the OMG, and with the service-oriented architecture (SOA).

Applications
In addition, there are several projects that have used or currently use RM-ODP for effectively structuring their systems specifications:
• The Reference Architecture for Space Data Systems (RASDS)[12] from the Consultative Committee for Space Data Systems.
• Interoperability Technology Association for Information Processing (INTAP), Japan.[13]
• The Synapses European project.[14]
• The COMBINE project.[11]

Notes and references
[1] A complete and updated list of references to publications related to RM-ODP (books, journal articles, conference papers, etc.) is available at the RM-ODP resource site (http://www.rm-odp.net/publications.htm).
[2] In the same series as the RM-ODP are a number of other standards and recommendations for the specification and development of open and distributed systems, for which RM-ODP provides a standardization framework: ITU-T Rec. X.910 | ISO/IEC 14771:1999, Naming framework. ITU-T Rec. X.920 | ISO/IEC 14750:1999, Interface Definition Language. ITU-T Rec. X.930 | ISO/IEC 14753:1999, Interface references and binding. ITU-T Rec. X.931 | ISO/IEC 14752:2000, Protocol support for computational interactions. ITU-T Rec. X.950 | ISO/IEC 13235-1:1998, Trading function: Specification. ITU-T Rec. X.952 | ISO/IEC 13235-3:1998, Provision of Trading Function using OSI directory service. ITU-T Rec. X.960 | ISO/IEC 14769:2001, Type repository function. ISO/IEC 19500-2:2003, General Inter-ORB Protocol (GIOP)/Internet Inter-ORB Protocol (IIOP). Etc.
[3] Copies of the RM-ODP family of standards can be obtained either from ISO (http://www.iso.ch) or from ITU-T (http://www.itu.int). All ODP-related ITU-T Recommendations, including the X.9xx series, are freely available from the ITU-T (http://www.itu.int/rec/T-REC-X/en). Parts 1 to 4 of the RM-ODP are available for free download from ISO (http://isotc.iso.ch/livelink/livelink/fetch/2000/2489/Ittf_Home/PubliclyAvailableStandards.htm), made available in keeping with a resolution of the ISO council.
[4] There is also a very useful hyperlinked version (http://www.joaquin.net/ODP) of Parts 2 and 3 of the RM-ODP, together with an index to the Reference Model. The Table of Contents and Index were prepared by Lovelace Computing and are being made available by Lovelace Computing as a service to the standards community.
[5] Some resources related to the current version of ITU-T Rec. X.911 | ISO/IEC 15414:2002, Reference model - Enterprise language (http://www.joaquin.net/ODP/DIS_15414_X.911.pdf) are also available from the RM-ODP resource site (http://www.rm-odp.net).
[6] ISO/IEC 10746-1 | ITU-T Rec. X.901 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/c020696_ISO_IEC_10746-1_1998(E).zip)
[7] ISO/IEC 10746-2 | ITU-T Rec. X.902 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/s018836_ISO_IEC_10746-2_1996(E).zip)
[8] ISO/IEC 10746-3 | ITU-T Rec. X.903 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/s020697_ISO_IEC_10746-3_1996(E).zip)
[9] ISO/IEC 10746-4 | ITU-T Rec. X.904 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/c020698_ISO_IEC_10746-4_1998(E).zip)
[10] ITU-T Rec. X.906 | ISO/IEC 19793 "Use of UML for ODP systems specifications" (http://www.rm-odp.net/files/resources/LON-040_UML4ODP_IS/LON-040_UML4ODP_IS.pdf)
[11] COMBINE (http://www.opengroup.org/combine/overview.htm)
[12] Reference Architecture for Space Data Systems (RASDS) (http://public.ccsds.org/review/default.aspx)
[13] Interoperability Technology Association for Information Processing (INTAP) (http://www.net.intap.or.jp/e)
[14] The Synapses Project: a three-year project funded under the EU 4th Framework Health Telematics Programme (http://www.cs.tcd.ie/synapses/public/)

External links
• RM-ODP Resource site (http://www.rm-odp.net/)
• Open Distributed Processing - Reference Model (http://www.joaquin.net/ODP/)
• RM-ODP information at LAMS (http://lamswww.epfl.ch/reference/rm-odp), Swiss Federal Institute of Technology, Lausanne (EPFL), Switzerland.
• Official Record of the ANSA project (http://www.ansa.co.uk/)
• Computing Laboratory (http://www.cs.ukc.ac.uk/), University of Kent, Canterbury, UK.
• FORMOSA (Formalisation of ODP Systems Architecture) (http://www.cs.stir.ac.uk/~kjt/research/formosa.html), University of Stirling, UK.
• ILR (http://www.infres.enst.fr/recherche/ILR/rapport.html), Networks and Computer Science Department of ENST, Paris, France.
• Systèmes Répartis et Coopératifs (http://www-src.lip6.fr/), UPMC, Paris, France.
• Distributed Systems Technology Center (http://archive.dstc.edu.au/AU/research_news/), Australia.

Semantic Web Data Space

A Semantic Web Data Space is a container for domain specific portable data, which is provided in human and/or machine friendly structures, and is linked with other data across spaces and domains. The underlying paradigm is quite new; however, it brings together ideas and technologies from various sources:
• The Semantic Web, Linked Data, and the Linked Data Project
• Object Oriented Databases
• Data Portability
• Web 2.0 and Content Management Systems
• Ontologies and Categorization

The approach can be applied to both Web based systems and Desktop based systems.

Semantic Web Data Spaces, Linked Data, and Data Portability
Data in Data Spaces are linked across spaces and domains to enhance the meaning of internal data. This supports the work of the Linked Data project, which is part of the Semantic Web effort, and assists the development of a Web of Data. Data in a Data Space can be referenced by an identifier, and thus can be viewed in an Object Oriented fashion. This means that an object in a data space should be movable and should also have the ability to be referenced using an identifier such as a Uniform Resource Identifier. This has the benefit of being a useful point for querying about information across domains. A Data Space should be fully supportive of data portability such as that advocated by the DataPortability project.
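The identifier-plus-link idea can be sketched in a few lines of Python. The spaces, URIs and the `dereference` helper below are all invented for illustration; a real implementation would resolve URIs over HTTP and describe objects in RDF.

```python
# Two hypothetical data spaces; every object is addressed by a URI and may
# link to objects living in another space or domain.
spaces = {
    "http://alice.example/space/": {
        "http://alice.example/space/post1": {
            "title": "On Linked Data",
            "author": "http://bob.example/space/bob",   # cross-space link
        },
    },
    "http://bob.example/space/": {
        "http://bob.example/space/bob": {"name": "Bob"},
    },
}

def dereference(uri):
    """Resolve a URI to its object, whichever data space holds it."""
    for base, objects in spaces.items():
        if uri.startswith(base):
            return objects[uri]
    raise KeyError(uri)

post = dereference("http://alice.example/space/post1")
author = dereference(post["author"])     # follow the link across domains
assert author["name"] == "Bob"
```

Because every object is reachable through the same dereference step regardless of which space holds it, queries can span domains, which is the "useful point for querying about information across domains" noted above.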

Exemplary Semantic Web Data Space Implementation
• OpenLink Data Spaces, a distributed collaborative data space system implemented as a Social networking service and Content Management System. It is built on top of the OpenLink Software Virtuoso Universal Server.

Related web technologies
• Uniform Resource Identifiers for object identifiers
• Resource Description Framework for object and data space descriptions
• SPARQL for querying about objects across domains

References
• H. Zhuge, The Web Resource Space Model, Springer, 2008.
• H. Zhuge, Resource Space Model, its design method and applications. Journal of Systems and Software, 72(1)(2004)71-81.
• H. Zhuge, Y. Xing and P. Shi, Resource Space Model, OWL and Database: Mapping and Integration. ACM Transactions on Internet Technology, 8/4, 2008.

Service-oriented distributed applications

A RESTful programming architecture that allows some services to be run on the client and some on the server. For example, a product can first be released as a browser application and then functionality moved module by module to the client application.

External links
• Novell excerpt on Web Services Frameworks [1]

References
[1] http://developer.novell.com/wiki/index.php?title=MonoWebFrameworks&redirect=no

Shared memory

In computing, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Depending on context, programs may run on a single processor or on multiple separate processors.

In hardware

In computer hardware, shared memory refers to a (typically) large block of random access memory that can be accessed by several different central processing units (CPUs) in a multiple-processor computer system.

A shared memory system is relatively easy to program since all processors share a single view of data, and communication between processors can be as fast as memory accesses to the same location. The issue with shared memory systems is that many CPUs need fast access to memory and will likely cache memory, which has two complications:

• CPU-to-memory connection becomes a bottleneck. Shared memory computers cannot scale very well. Most of them have ten or fewer processors.
• Cache coherence: whenever one cache is updated with information that may be used by other processors, the change needs to be reflected to the other processors; otherwise the different processors will be working with incoherent data (see cache coherence and memory coherence).

Such coherence protocols can, when they work well, provide extremely high-performance access to shared information between multiple processors. On the other hand, they can sometimes become overloaded and become a bottleneck to performance.

The alternatives to shared memory are distributed memory and distributed shared memory, each having a similar set of issues. See also Non-Uniform Memory Access.

In software

In computer software, shared memory is either

• a method of inter-process communication (IPC), i.e. a way of exchanging data between programs running at the same time; one process will create an area in RAM which other processes can access; or
• a method of conserving memory space by directing accesses to what would ordinarily be copies of a piece of data to a single instance instead, by using virtual memory mappings or with explicit support of the program in question. This is most often used for shared libraries and for XIP.

Since both processes can access the shared memory area like regular working memory, this is a very fast way of communication (as opposed to other mechanisms of IPC such as named pipes, Unix domain sockets or CORBA). On the other hand, it is less powerful: for example, the communicating processes must be running on the same machine (whereas other IPC methods can use a computer network), and care must be taken to avoid issues if processes sharing memory are running on separate CPUs and the underlying architecture is not cache coherent.

IPC by shared memory is used for example to transfer images between the application and the X server on Unix systems, or inside the IStream object returned by CoMarshalInterThreadInterfaceInStream in the COM libraries under Windows.

Dynamic libraries are generally held in memory once and mapped to multiple processes; only pages that had to be customized for the individual process (because a symbol resolved differently there) are duplicated, usually with a mechanism that transparently copies the page when a write is attempted and then lets the write succeed on the private copy.

Using memory for communication inside a single program, for example among its multiple threads, is generally not referred to as shared memory.
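The IPC flavor of shared memory described above can be sketched in a few lines of C++ on a POSIX system. This is a minimal illustration (not taken from the article): a parent process creates an anonymous shared mapping, forks a child that writes into it, and then reads the child's value through the same physical memory.

```cpp
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

// Parent and child share one int through an anonymous MAP_SHARED mapping;
// returns the value the parent reads after the child has written it.
int shared_counter_demo() {
    void* mem = mmap(nullptr, sizeof(int), PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return -1;
    int* shared = static_cast<int*>(mem);
    *shared = 0;

    pid_t pid = fork();
    if (pid == 0) {            // child: write into the shared region
        *shared = 42;
        _exit(0);
    }
    waitpid(pid, nullptr, 0);  // parent: wait for the child to finish
    int result = *shared;      // 42, visible because the mapping is shared
    munmap(mem, sizeof(int));
    return result;
}
```

Because the mapping is created with MAP_SHARED before fork, both processes address the same page; with MAP_PRIVATE the child's write would instead land on a copy-on-write page invisible to the parent.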

Specific implementations

POSIX provides a standardized API for using shared memory, POSIX Shared Memory. This uses the function shm_open from sys/mman.h.[1] POSIX interprocess communication (part of the POSIX:XSI Extension) includes the shared-memory functions shmat, shmctl, shmdt and shmget. Unix System V provides an API for shared memory as well. This uses shmget from sys/shm.h.

Recent 2.6 Linux kernel builds have started to offer /dev/shm as shared memory in the form of a RAM disk, more specifically as a world-writable directory that is stored in memory. Both the Fedora and Ubuntu distributions include it by default. /dev/shm support is completely optional within the kernel configuration file.

BSD systems provide "anonymous mapped memory" which can be used by several processes.

External links

• Shared Memory Interface [2]
• Shared Memory Library FAQ [3] by Márcio Serolli Pinho
• Article "IPC:Shared Memory" [4] by Dave Marshall
• shared memory facility [5] from the Single UNIX Specification
• shm_open [6] - POSIX
• shmop [7] - documentation from SunOS 5.9
• CreateSharedMemory function [8] from Win32-SDK
• Functions in PHP-API [9]
• Paper "A C++ Pooled, Shared Memory Allocator For The Standard Template Library" [10] by Marc Ronell
• Citations from CiteSeer [11]
• Boost.Interprocess C++ Library [12]

References

[1] Robbins, Kay A.; Robbins, Steven (2003). UNIX systems programming: communication, concurrency, and threads (http://books.google.com/books?id=tdsZHyH9bQEC) (2 ed.). Prentice Hall PTR. p. 512. ISBN 9780130424112. Retrieved 2011-05-13. "The POSIX interprocess communication (IPC) is part of the POSIX:XSI Extension and has its origin in UNIX System V interprocess communication."
[2] http://www.lfbs.rwth-aachen.de/content/smi
[3] http://www.inf.pucrs.br/~pinho/shared_memory_library.htm
[4] http://www.cs.cf.ac.uk/Dave/C/node27.html
[5] http://www.opengroup.org/onlinepubs/007908799/xsh/sysshm.html
[6] http://www.opengroup.org/onlinepubs/007908799/xsh/shm_open.html
[7] http://docs.sun.com/app/docs/doc/817-0691/6mgfmmdt3?a=view
[8] http://msdn2.microsoft.com/en-us/library/aa374778.aspx
[9] http://www.php.net/manual/en/ref.shmop.php
[10] http://allocator.sourceforge.net/rtlinux2003.pdf
[11] http://citeseer.csail.mit.edu/cs?q=shared+memory+library
[12] http://www.boost.org/doc/libs/1_36_0/doc/html/interprocess.html
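For illustration, the System V calls named above (shmget, shmat, shmdt, shmctl) can be combined into a minimal round trip. This is a sketch, assuming a Unix-like system, with the segment created, written, read back and removed inside a single process; in real use the segment key would be shared between separate processes.

```cpp
#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstring>
#include <string>

// Create a private System V segment, attach it, write and read a message,
// then detach and remove the segment. Returns the string read back.
std::string sysv_shm_demo() {
    // IPC_PRIVATE: a fresh segment, usable by this process and its children.
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (id < 0) return "";
    void* mem = shmat(id, nullptr, 0);
    if (mem == reinterpret_cast<void*>(-1)) return "";
    char* buf = static_cast<char*>(mem);
    std::strcpy(buf, "hello via shmget");  // writer side
    std::string result(buf);               // reader side (same mapping here)
    shmdt(mem);                            // detach the address mapping
    shmctl(id, IPC_RMID, nullptr);         // mark the segment for removal
    return result;
}
```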

Smart variables

SmartVariables is a term introduced in 1998 referring to a design pattern that merges networking and distributed object technology with the goal of reducing complexity by transparently sharing information at the working program variable level.[1] The concept has some similarities to that of stored procedures and triggers in database systems, where a change to one item can set off other changes in the database.[2]

SmartVariables style programming interfaces emulate simple "network shared memory." The design emphasis is API simplicity for systems needing to exchange information. SmartVariables attach an email-like "name" to each container or list. Sharing and update behaviors do not need to be explicitly programmed: because SmartVariables containers "know" who has copies of their data, the environment automatically propagates change events across the network into other running processes working with that data. Applications do not poll for content changes; events get processed asynchronously and working program variables simply receive new content. However, "callbacks" can be attached that execute when a "named" object's content changes.

Programming Basics

This C++ example is from the GPL open-source SmartVariables implementation at SmartVariables.com. Imagine an environment with three networked computers named Alice, Bob and Charlie. To begin, our program running on "Alice" will continuously print out the contents of a remote container object named "greeting@Charlie."

Here is the code for Alice:

    Var greeting;
    greeting.Name( "greeting@Charlie" );  // attach to and subscribe to the remote object
    while (1) {
        cout << "greeting=" << greeting << endl;
        // note that 'greeting' can change values here
    }

Note that Alice's display code is in a tight loop, and there is no code that explicitly connects to machine "Charlie" to retrieve the "greeting" object or any changes made to it. The code on Alice appears to be a "tight loop" with no opportunity for the object to be modified. However, it does change: SmartVariables propagate themselves into process-level code automatically.

Next, we run another program on machine "Bob" that simply changes the value of the remote "greeting@Charlie" object to be the string "Hello, World!"

Here is the code for Bob:

    Var greeting;
    greeting.Name( "greeting@Charlie" );
    greeting = "Hello, World!";  // modify all copies, everywhere

Now, when the above program on machine Bob gets executed, it transparently connects to Charlie and modifies the "greeting" object to have its new value: "Hello, World!" Because SmartVariables containers "know" who has copies of their data, the environment transparently propagates the change to Alice. This means that the program still looping on Alice will now begin printing the new value of "Hello, World!" Modifications to the "greeting@Charlie" object become automatically reflected by Alice's program output.
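The propagation behavior described above can be sketched, within a single process and in ordinary C++, as a name-to-subscribers registry. This is a hypothetical illustration of the pattern only (the class and function names here are invented), not the actual SmartVariables implementation, which performs the same propagation transparently across a network.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// A registry maps an email-like name to subscriber callbacks; assigning
// through the registry pushes the new value to every subscriber, so
// readers never poll for changes.
class VarRegistry {
    std::map<std::string,
             std::vector<std::function<void(const std::string&)>>> subs_;
public:
    void subscribe(const std::string& name,
                   std::function<void(const std::string&)> cb) {
        subs_[name].push_back(std::move(cb));
    }
    void assign(const std::string& name, const std::string& value) {
        for (auto& cb : subs_[name]) cb(value);  // propagate the change
    }
};
```

In this sketch, "Alice" subscribes a callback that updates her local variable, and "Bob" calls assign; Alice's variable then holds the new string without any explicit retrieval code on her side.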

References

[1] Foote, Brian; Joseph Yoder (1998). "Metadata and Active Object-Models" (http://jerry.cs.uiuc.edu/~plop/plop98/final_submissions). Pattern Languages of Programs Conference. — Introduced the concept of "smart variables".
[2] Hounshell, Lee (March 2006) (pdf). Simplifying Web Infrastructure with SmartVariables (http://www.smartvariables.com/doc/DistributedProgramming.pdf). — Refined and extended the concept, using "smart variables" to simplify Grid computing and to implement web services and distributed neural networks.

External links

• Open source commercial implementation (beta) in C++ (http://smartvariables.com)

Stub (distributed computing)

A stub in distributed computing is a piece of code used for converting parameters passed during a Remote Procedure Call (RPC). The main idea of an RPC is to allow a local computer (client) to remotely call procedures on a remote computer (server). The client and server use different address spaces, so parameters used in a function call have to be converted; otherwise the values of those parameters could not be used, because pointers into the computer's memory point to different data on each machine. The client and server may also use different data representations even for simple parameters (e.g., big-endian versus little-endian for integers). Stubs are used to perform the conversion of the parameters, so a remote function call looks like a local function call for the remote computer.

A client stub is responsible for conversion of parameters used in a function call and deconversion of results passed from the server after execution of the function. A server stub is responsible for deconversion of parameters passed by the client and conversion of the results after the execution of the function.

Stub libraries must be installed on the client and server side. A stub can be generated in one of two ways:

1. Manually: the RPC implementer provides a set of translation functions from which a user can construct his or her own stubs. This method is simple to implement and can handle very complex parameter types.
2. Automatically: this is the more commonly used method for stub generation. It uses an interface description language (IDL) for defining the interface between client and server. For example, an interface definition has information to indicate whether each argument is input, output or both: only input arguments need to be copied from client to server, and only output elements need to be copied from server to client.
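The representation problem a stub solves can be illustrated with a hand-written marshalling pair for one parameter type. This is a sketch, not taken from any particular RPC framework: integers are put on the wire in big-endian order regardless of the host's native byte order.

```cpp
#include <array>
#include <cstdint>

// Client-stub side: marshal a 32-bit int into big-endian "network" bytes.
std::array<uint8_t, 4> marshal_u32(uint32_t v) {
    return { static_cast<uint8_t>(v >> 24), static_cast<uint8_t>(v >> 16),
             static_cast<uint8_t>(v >> 8),  static_cast<uint8_t>(v) };
}

// Server-stub side: unmarshal back into the host's native representation.
uint32_t unmarshal_u32(const std::array<uint8_t, 4>& b) {
    return (uint32_t(b[0]) << 24) | (uint32_t(b[1]) << 16) |
           (uint32_t(b[2]) << 8)  |  uint32_t(b[3]);
}
```

Because both sides shift by explicit bit positions rather than copying raw memory, a little-endian client and a big-endian server reconstruct the same value; this is exactly the conversion an IDL-generated stub automates for every parameter.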

Supercomputer

A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. The term supercomputer itself is rather fluid, and the speed of today's supercomputers tends to become typical of tomorrow's ordinary computers.

(Image: Japan's K computer, built by Fujitsu in Kobe, Japan, is currently the fastest in the world.[1] It is three times faster than the previous holder of that title, the Tianhe-1A supercomputer located in China.)

Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by traditional companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience.

CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard; typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massive parallel processing systems with thousands of "ordinary" CPUs, some being off the shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, and FPGAs. Most modern supercomputers are now highly-tuned computer clusters using commodity processors combined with custom interconnects.

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion).

Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.

History

The history of supercomputing goes back to the 1960s, when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[2] The CDC 6600, released in 1964, is generally considered the first supercomputer.[3] [4]

Cray left CDC in 1972 to form his own company.[5] Four years after leaving CDC, Cray delivered the 80 MHz Cray 1 in 1976, and it became one of the most successful supercomputers in history.[6] [7] The Cray-2, released in 1985, was an 8 processor liquid cooled computer; Fluorinert was pumped through it as it operated. It performed at 1.9 gigaflops and was the world's fastest until 1990.[8]

(Image: A Cray-1 supercomputer preserved at the Deutsches Museum)

While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records. The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high speed two dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.[11] [12] [13] Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994, with a peak speed of 1.7 gigaflops per processor.[9] [10] The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three dimensional crossbar network.[14]

Current research using supercomputers

The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.[15]

Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[16]

In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.[17]

Below is a recent list of the computers which appeared at the top of the Top500 list;[18] the "Peak speed" is given as the "Rmax" rating. For more historical data see History of supercomputing.

Year  Supercomputer       Peak speed (Rmax)                   Location
2008  IBM Roadrunner      1.026 PFLOPS (later 1.105 PFLOPS)   DoE-Los Alamos National Laboratory, New Mexico, USA
2009  Cray Jaguar         1.759 PFLOPS                        DoE-Oak Ridge National Laboratory, Tennessee, USA
2010  Tianhe-IA           2.566 PFLOPS                        National Supercomputing Center, Tianjin, China
2011  Fujitsu K computer  8.162 PFLOPS                        RIKEN, Kobe, Japan[19]

Hardware and software design

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization and using hardware to address the remaining bottlenecks.

Energy consumption and heat management

(Image: A Blue Gene/L cabinet showing the stacked blades, each holding many processors)

A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts of electricity.[20] The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour or about $3.5 million per year.

Heat management is a major issue in complex electronic devices, and affects powerful computer systems in various ways.[21] [22] [23] The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.[24]

The packing of thousands of processors together inevitably generates significant amounts of heat density that need to be dealt with. The Cray 2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure.[25] However, the submerged liquid cooling approach was not practical for the multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.[26]

In the Blue Gene system, IBM deliberately used low power processors to deal with heat density.[27] On the other hand, the IBM Power 775, released in 2011, has closely packed elements that require water cooling.[28] [29]

The energy efficiency of computer systems is generally measured in terms of "FLOPS per Watt". In 2008, IBM's Roadrunner operated at 376 MFLOPS/Watt.[30] [31] In November 2010, the Blue Gene/Q reached 1684 MFLOPS/Watt. In June 2011, the top 2 spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2097 MFLOPS/W), with the DEGIMA cluster in Nagasaki placing third with 1375 MFLOPS/W.[32]

Supercomputer challenges, technologies

• Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.
• Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.

(Image: An IBM HS20 blade server)

Technologies developed for supercomputers include:

• Vector processing
• Liquid cooling
• Non-Uniform Memory Access (NUMA)
• Striped disks (the first instance of what was later called RAID)
• Parallel filesystems

Processing techniques

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing instructions for general-purpose computers. Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several TeraFLOPS.

The applications to which this power could be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU). The current Top500 list (from May 2010) has 3 supercomputers based on GPGPUs; Nebulae, built by Dawning in China, the number 3 supercomputer, is based on GPGPUs.[33]
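Amdahl's law, cited above as the constraint that drives the elimination of software serialization, can be stated in a few lines. This is a small sketch of the formula itself, not code from the article:

```cpp
// Amdahl's law: with serial fraction s of a workload and N processors,
// the best achievable speedup is 1 / (s + (1 - s) / N).
double amdahl_speedup(double serial_fraction, double n_processors) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors);
}
```

Even with 10,000 processors, a 1% serial fraction caps the speedup just above 99, which is why supercomputer designs work so hard to remove every serialized step.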

Operating systems

Supercomputers today most often use variants of the Linux operating system, as shown by the graph to the right. More than 90% of today's supercomputers run some variant of Linux.[34]

Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers to this time (unlike high-end mainframes) had vastly different operating systems; the Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. In a similar manner, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of operating systems such as Cray's Unicos, or Linux.[34]

Programming

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA and OpenCL. An easy programming language for supercomputers remains an open research topic in computer science.

Software tools

Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source-based software solutions such as Beowulf, WareWulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community, which often creates disruptive technology.
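The divide-the-data style of parallel programming that OpenMP automates on shared memory machines can be sketched with plain C++ threads. This illustrates the model only; it is not actual OpenMP or MPI code: each worker sums a disjoint slice of the data into its own slot, and the partial sums are combined after all workers join.

```cpp
#include <algorithm>
#include <numeric>
#include <thread>
#include <vector>

// Each thread sums a disjoint chunk of the input; per-thread partial
// results avoid contention, and a final reduction combines them.
long parallel_sum(const std::vector<long>& data, unsigned nthreads) {
    std::vector<long> partial(nthreads, 0);
    std::vector<std::thread> workers;
    size_t chunk = (data.size() + nthreads - 1) / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            size_t begin = t * chunk;
            size_t end = std::min(data.size(), begin + chunk);
            for (size_t i = begin; i < end; ++i) partial[t] += data[i];
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0L);
}
```

On a cluster the same decomposition would use message passing (e.g. MPI) instead of threads, with each node holding its slice in local memory and the reduction performed over the interconnect.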

Modern supercomputer architecture

(Image: IBM Roadrunner - LANL)

Supercomputers today often have a similar top-level architecture consisting of a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, the number of simultaneous instructions per SIMD processor, and the type and number of co-processors. Within this hierarchy we have:

• A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an Operating System (OS).
• A multiprocessing computer is a computer, operating under a single instance of an OS and using more than one CPU core, wherein the application-level software is indifferent to the number of CPU cores. The cores share tasks using Symmetric multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
• A SIMD core executes the same instruction on more than one set of data at the same time. The core may be a general purpose commodity core or a special-purpose vector processor. It may be in a high-performance processor or a low power processor. As of 2007, each core executes several SIMD instructions per nanosecond.
• A co-processor is incapable of executing "standard" code, but with specialized programming can exceed the performance of the multiprocessor by several orders of magnitude for certain applications. Co-processors are often GPGPUs. The ratio of co-processors to general-purpose processors varies dramatically. The benchmark used for measuring TOP500 performance disregards the contribution of co-processors.

The cores may all be in from one to thousands of multicore processor devices.

As of October 2010, the fastest supercomputer in the world is the Tianhe-1A system at National University of Defense Technology, with more than 21000 processors. It boasts a speed of 2.507 petaflops, over 30% faster than the world's next fastest computer, the Cray XT5 "Jaguar".

In February 2009, IBM also announced work on "Sequoia," which appears to be a 20 petaflops supercomputer. It will be powered by 1.6 million cores (specific 45-nanometer chips in development) and 1.6 petabytes of memory. This will be equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It will be housed in 96 refrigerators spanning roughly 3000 square feet (280 m2). It is slated for deployment in late 2011.[35] [36]

(Chart: The CPU Architecture Share of Top500 Rankings between 1993 and 2009)

Moore's Law and economies of scale are the dominant factors in supercomputer design. The design concepts that allowed past supercomputers to out-perform desktop machines of the time tended to be gradually incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production.

Supercomputing is taking a step of increasing density, allowing desktop supercomputers to become available, offering the computing power that in 1998 required a large room in less than a desktop footprint.

In addition, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, in particular, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design, which can be programmed to act as one large computer.

A current model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion dollar Cray C90 supercomputer used in the early 1990s; most workloads requiring such a supercomputer in the 1990s can be done on workstations costing less than 4,000 US dollars as of 2010. 14 countries account for the vast majority of the world's 500 fastest supercomputers, with over half being located in the United States.

Special-purpose supercomputers

A special-purpose supercomputer is a high-performance computing device with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically, a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer, by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular special set of problems.

Examples of special-purpose supercomputers:

• Belle,[37] Deep Blue,[38] and Hydra,[39] for playing chess
• Reconfigurable computing machines or parts of machines
• GRAPE,[40] for astrophysics and molecular dynamics
• Deep Crack,[41] for breaking the DES cipher
• MDGRAPE-3,[42] for protein structure computation
• D. E. Shaw Research Anton,[43] for simulating molecular dynamics

The fastest supercomputers today

Measuring supercomputer speed

In general, the speed of a supercomputer is measured in "FLOPS" (FLoating Point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). "Petascale" supercomputers can process one quadrillion (10^15) (1000 trillion) FLOPS. Exascale is computing performance in the exaflops range; an exaflop is one quintillion (10^18) FLOPS (one million teraflops).

This measurement is based on a particular benchmark, which does LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.
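The relationship between the LU-decomposition benchmark and a reported FLOPS figure can be sketched with the textbook operation count for LU factorization, about 2n^3/3 floating point operations for an n-by-n matrix. This is an illustrative approximation, not the exact accounting used by the benchmark:

```cpp
// LU decomposition of an n-by-n matrix costs roughly 2n^3/3 floating
// point operations; a benchmark's reported rate is operations divided
// by elapsed time.
double lu_flop_count(double n) { return 2.0 * n * n * n / 3.0; }

double flops(double n, double seconds) { return lu_flop_count(n) / seconds; }
```

For example, factoring a matrix of order 10^6 costs roughly 6.7 x 10^17 operations, so a machine sustaining 1 PFLOPS (10^15 operations per second) would need on the order of 11 minutes for the run.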

The TOP500 list

Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.

Current fastest supercomputer system

The K computer is ranked on the TOP500 list as the fastest supercomputer, at 8.16 petaFLOPS. It consists of 68,544 SPARC64 VIIIfx CPUs, using the Tofu interconnect. It does not use any GPUs or other accelerators, and is one of the most energy-efficient systems on the list.[44]

Opportunistic Supercomputing

Opportunistic Supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamic simulations.

Examples of Opportunistic Supercomputing Systems

(Image: Example architecture of a grid computing system connecting many personal computers over the internet)

The fastest grid computing system is the distributed computing project Folding@home, which reported 8.8 petaflops of processing power as of May 2011. Of this, 7.1 petaflops are contributed by clients running on various GPUs, 1.8 petaflops come from PlayStation 3 systems, and the rest from various computer systems.[45]

The BOINC platform hosts a number of distributed computing projects. As of May 2011, BOINC recorded a processing power of over 5.5 petaflops through over 480,000 active computers on the network.[46] The most active project (measured by computational power), MilkyWay@home, reports processing power of over 700 teraflops through over 33,000 active computers.[47]

As of May 2011, GIMPS's distributed Mersenne Prime search achieves about 60 teraflops through over 25,000 registered computers.[48] The Internet PrimeNet Server supports GIMPS's grid computing approach, one of the earliest and most successful grid computing projects, since 1997.[49]

Quasi-opportunistic Supercomputing

Quasi-opportunistic Supercomputing is a form of distributed computing whereby the "super virtual computer" of a large number of networked, geographically disperse computers performs huge processing power demanding computing tasks.[50] Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and the use of intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault tolerant message passing libraries and data pre-conditioning.

Examples of Quasi-opportunistic Supercomputing Systems

The PlayStation 3 Gravity Grid [51] uses a network of 16 machines and exploits the Cell processor for the intended application, which is performing astrophysical simulations of large supermassive black holes capturing smaller compact objects. The Cell processor has a main CPU and 6 floating-point vector processors, giving the machine a net of 16 general-purpose machines and 96 vector processors. This cluster was built in 2007 by Dr. Gaurav Khanna, a professor in the Physics Department of the University of Massachusetts Dartmouth, with support from Sony Computer Entertainment, and is the first PS3 cluster that generated numerical results that were published in scientific research literature.

Also a "quasi-supercomputer" is Google's search engine system, with estimated total processing power of between 126 and 316 teraflops as of April 2004.[54] In June 2006 the New York Times estimated that the Googleplex and its server farms contain 450,000 servers.[52] According to 2008 estimates, the processing power of Google's cluster might reach from 20 to 100 petaflops.[53]

Other notable computer clusters are the flash mob cluster, the Qoscos Grid and the Beowulf cluster. The flash mob cluster allows the use of any computer in the network, while the Beowulf cluster still requires uniform architecture.

Research and development

IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip".

IBM is constructing a 20 PFLOPS supercomputer at Lawrence Livermore National Laboratory, named Sequoia, based on the Blue Gene architecture, which is scheduled to go online in 2011. Other PFLOPS projects include one by Narendra Karmarkar in India,[55] a C-DAC effort targeted for 2010 scaling up to 10 PFLOPS by 2012,[56] and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be completed by 2011).[57] In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 petaflops computer, Pleiades.[58][59] Meanwhile, using the Intel MIC (many integrated cores) architecture, which is Intel's response to GPU systems, SGI plans to achieve a 500-times increase in performance by 2018 to achieve an exaflop.[60] Samples of MIC chips with 32 cores, which combine vector processing units with standard CPUs, have become available.

[Figure: fastest supercomputers, log speed vs. time]

Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10^18, one quintillion FLOPS) in 2019. Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21, one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[61] Such systems might be built around 2030.[62]

Applications of supercomputers

Decade: Uses and computer involved
1970s: Weather forecasting, aerodynamic research (Cray-1)[63]
1980s: Probabilistic analysis,[64] radiation shielding modeling (CDC Cyber)[65]
1990s: Brute force code breaking (EFF DES cracker)[66]
2000s: 3D nuclear test simulations as a substitute for banned atmospheric nuclear testing (ASCI Q)[67]
2010s: Molecular dynamics simulation (Tianhe-1A)[68]

Notes

[1] "Directory page for Top500 lists. Result for each list since June 1993" (http://www.top500.org/). Top500.org. Accessed 20 June 2011.
[2] Hardware software co-design of a multimedia SOC platform by Sao-Jie Chen, Guang-Huei Lin, Pao-Ann Hsiung, Yu-Hen Hu, 2009, pages 70-72.
[3] History of computing in education by John Impagliazzo, John A. N. Lee, 2004, ISBN 1402081359, page 172 (http://books.google.com/books?id=J46GinHakmkC&pg=PA172).
[4] The American Midwest: an interpretive encyclopedia by Richard Sisson, Christian K. Zacher, 2006, ISBN 0253348862, page 1489 (http://books.google.com/books?id=n3Xn7jMx1RYC&pg=PA1489).
[5] Wisconsin Biographical Dictionary by Caryn Hannan, 2008, ISBN 1878592637, pages 83-84 (http://books.google.com/books?id=V08bjkJeXkAC&pg=PA83).
[6] Readings in computer architecture by Mark Donald Hill, Norman Paul Jouppi, Gurindar Sohi, 1999, ISBN 9781558605398, pages 41-48.
[7] Milestones in computer science and information technology by Edwin D. Reilly, 2003, ISBN 1573565210, page 65.
[8] Parallel computing for real-time signal processing and control by M. O. Tokhi, Mohammad Alamgir Hossain, 2003, ISBN 9781852335991, pages 201-202.
[9] TOP500 Annual Report 1994 (http://www.netlib.org/benchmark/top500/reports/report94/main.html).
[10] N. Hirose and M. Fukuda (1997). "Numerical Wind Tunnel (NWT) and CFD Research at National Aerospace Laboratory". Proceedings of HPC-Asia '97. IEEE Computer Society.
[11] H. Fujii, Y. Yasuda, H. Akashi, Y. Inagami, M. Koga, O. Ishihara, K. Kashiyama, H. Wada, T. Sumimoto, "Architecture and performance of the Hitachi SR2201 massively parallel processor system", Proceedings of the 11th International Parallel Processing Symposium, April 1997, pages 233-241.
[12] Y. Iwasaki, "The CP-PACS project", Nuclear Physics B - Proceedings Supplements, Volume 60, Issues 1-2, January 1998, pages 246-254.
[13] A. J. van der Steen, Overview of recent supercomputers, Publication of the NCF, Stichting Nationale Computer Faciliteiten, the Netherlands, January 1997.
[14] Scalable input/output: achieving system balance by Daniel A. Reed, 2003, ISBN 9780262681421, page 182.
[15] Kaku, Michio. Physics of the Future (New York: Doubleday, 2011).
[16] "Faster Supercomputers Aiding Weather Forecasts" (http://news.nationalgeographic.com/news/2005/08/0829_050829_supercomputer.html). News.nationalgeographic.com.
[17] Washington Post, August 8, 2011 (http://www.washingtonpost.com/business/technology/petaflop-computer-flap-ibm-unplugs-itself-from-supercomputer-project-at-univ-of-illinois/2011/08/08/gIQAuiFG3I_story.html).
[18] Intel brochure, 11/91.
[19] Nvidia (29 October 2010). "NVIDIA Tesla GPUs Power World's Fastest Supercomputer" (http://pressroom.nvidia.com/easyir/customrel.do?easyirid=A0D622CE9F579F09&version=live&prid=678988&releasejsp=release_157). Press release.
[20] Better Computing Through CPU Cooling by Alexander A. Balandin, IEEE Spectrum, October 2009 (http://spectrum.ieee.org/semiconductors/materials/better-computing-through-cpu-cooling/0).
[21] "The Green 500" (http://www.green500.org/). Green500.org. Retrieved 2010-10-31.
[22] "Green 500 list ranks supercomputers" (http://www.itnews.com.au/News/65619,green-500-list-ranks-supercomputers.aspx). iTnews Australia.
[23] Wu-chun Feng, 2003. "Making a Case for Efficient Supercomputing", ACM Queue Magazine, Volume 1 Issue 7, 10-01-2003, doi:10.1145/957717.957772 (http://sss.lanl.gov/pubs/031001-acmq.pdf).
[24] Parallel computing for real-time signal processing and control by M. O. Tokhi, Mohammad Alamgir Hossain, 2003, ISBN 9781852335991, pages 201-202.

[25] Computational science - ICCS 2005: 5th international conference, edited by Vaidy S. Sunderam, 2005, ISBN 3540260439, pages 60-67.
[26] "IBM uncloaks 20 petaflops BlueGene/Q super" (http://www.theregister.co.uk/2010/11/22/ibm_blue_gene_q_super/). The Register. 2010-11-22. Retrieved 2010-11-25.
[27] The Register: IBM 'Blue Waters' super node washes ashore in August (http://www.theregister.co.uk/2011/07/15/power_775_super_pricing/).
[28] "Government unveils world's fastest computer" (http://web.archive.org/web/20080610155646/http://www.cnn.com/2008/TECH/06/09/fastest.computer.ap/index.html). CNN. Archived from the original (http://www.cnn.com/2008/TECH/06/09/fastest.computer.ap/index.html) on 2008-06-10. "performing 376 million calculations for every watt of electricity used."
[29] "IBM Roadrunner Takes the Gold in the Petaflop Race" (http://www.hpcwire.com/topic/processors/IBM_Roadrunner_Takes_the_Gold_in_the_Petaflop_Race.html).
[30] "Top500 Supercomputing List Reveals Computing Trends" (http://www.serverwatch.com/hreviews/article.php/3913536/Top500-Supercomputing-List-Reveals-Computing-Trends.htm).
[31] "IBM Research A Clear Winner in Green 500" (http://www.datacenterknowledge.com/archives/2010/11/18/ibm-system-clear-winner-in-green-500/). On the BlueGene/Q system "setting a record in power efficiency with a value of 1,680 Mflops/watt, more than twice that of the next best system."
[32] Green 500 list (http://www.green500.org/sublist).
[33] Prickett, Timothy (2010-05-31). The Register (http://www.theregister.co.uk/2010/05/31/top_500_supers_jun2010/).
[34] "Top500 OS chart" (http://www.top500.org/overtime/list/32/os). Top500.org. Retrieved 2010-10-31.
[35] IBM to build new monster supercomputer (http://www.networkworld.com/news/2009/020409-ibm-to-build-new-monster.html) by Tom Jowitt, TechWorld, 02/04/2009.
[36] "Petaflop Sequoia Supercomputer - United States" (http://www-03.ibm.com/press/us/en/pressrelease/26599.wss). IBM. 2009-02-03.
[37] Condon, J.H. and K. Thompson, "Belle Chess Hardware", in Advances in Computer Chess 3 (ed. M.R.B. Clarke), Pergamon Press, 1982.
[38] Hsu, Feng-hsiung (2002). Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. Princeton University Press. ISBN 0-691-09065-3.
[39] C. Donninger, U. Lorenz. The Chess Monster Hydra. Proceedings of the 14th International Conference on Field-Programmable Logic and Applications (FPL), 2004, Antwerp, Belgium, LNCS 3203, pp. 927-932 (http://www.springerlink.com/content/hp9la9pwq0a1cmrp/).
[40] J. Makino and M. Taiji, Scientific Simulations with Special Purpose Computers: The GRAPE Systems, Wiley, 1998.
[41] Electronic Frontier Foundation (1998). Cracking DES - Secrets of Encryption Research, Wiretap Politics & Chip Design (http://cryptome.org/cracking-des/cracking-des.htm). O'Reilly & Associates Inc. ISBN 1-56592-520-3.
[42] RIKEN press release, Completion of a one-petaflops computer system for simulation of molecular dynamics (http://www.riken.jp/engn/r-world/info/release/press/2006/060619/index.html).
[43] "D.E. Shaw Research Anton" (http://www.deshawresearch.com/). Deshawresearch.com.
[44] "Japan Reclaims Top Ranking on Latest TOP500 List of World's Supercomputers" (http://www.top500.org/lists/2011/06/press-release). Top500.org. 20 June 2011.
[45] Folding@home: OS Statistics (http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats). Stanford University. Note: these links give current statistics, not those on the date last accessed.
[46] BOINCstats: BOINC Combined (http://www.boincstats.com/stats/project_graph.php?pr=bo). BOINC. Retrieved 2011-05-28. Note: these links give current statistics, not those on the date last accessed.
[47] BOINCstats: MilkyWay@home (http://boincstats.com/stats/project_graph.php?pr=milkyway). BOINC. Retrieved 2011-05-28.
[48] "Internet PrimeNet Server Distributed Computing Technology for the Great Internet Mersenne Prime Search" (http://www.mersenne.org/primenet). GIMPS. Retrieved June 6, 2011.
[49] http://www.mersenne.org/primenet/
[50] Kravtsov, Valentin; Carmeli, David; Dubitzky, Werner; Orda, Ariel; Schuster, Assaf; Yoshpa, Benny. "Quasi-opportunistic supercomputing in grids, hot topic paper (2007)" (http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.135.8993). IEEE International Symposium on High Performance Distributed Computing. IEEE. Retrieved 4 August 2011.
[51] "PS3 Gravity Grid" (http://gravity.phy.umassd.edu/ps3.html). Gaurav Khanna, Associate Professor, College of Engineering, University of Massachusetts Dartmouth.
[52] How many Google machines (http://www.tnl.net/blog/2004/04/30/how-many-google-machines/), April 30, 2004.
[53] Markoff, John; Hensell, Saul (June 14, 2006). "Hiding in Plain Sight, Google Seeks More Power" (http://www.nytimes.com/2006/06/14/technology/14search.html). New York Times. Retrieved 2008-03-16.
[54] Google Surpasses Supercomputer Community, Unnoticed?, May 20, 2008 (http://blogs.[...]com/communications/2008/05/google-surpasses-supercomputer-community-unnoticed.html).
[55] Athley, Gouri Agtey; Rajeshwari Adappa (30 October 2006). "Tatas get Karmakar to make super comp" (http://economictimes.indiatimes.com/articleshow/msid-225517,curpg-2.cms). The Economic Times. Retrieved 2008-03-16.
[56] C-DAC's Param programme sets to touch 10 teraflops by late 2007 and a petaflops by 2010 (http://www.flonnet.com/stories/20070518003711400.htm).

[57] "National Science Board Approves Funds for Petascale Computing Systems" (http://www.nsf.gov/news/news_summ.jsp?cntn_id=109850). U.S. National Science Foundation. August 10, 2007. Retrieved 2011-07-08.
[58] "NASA collaborates with Intel and SGI on forthcoming petaflops super computers" (http://www.heise.de/english/newsticker/news/107683). Heise online. 2008-05-09. Retrieved 2008-07-01.
[59] Thibodeau, Patrick (2008-06-10). "IBM breaks petaflop barrier" (http://www.infoworld.com/article/08/06/10/IBM_breaks_petaflop_barrier_1.html). InfoWorld.
[60] SGI, Intel plan to speed supercomputers 500 times by 2018, ComputerWorld, June 20, 2011 (http://www.computerworld.com/s/article/9217763/SGI_Intel_plan_to_speed_supercomputers_500_times_by_2018?taxonomyId=67).
[61] DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing" (http://portal.acm.org/citation.cfm?id=1062325). Proceedings of the 2nd conference on Computing frontiers. pp. 391-402. ISBN 1595930191.
[62] "IDF: Intel says Moore's Law holds until 2029" (http://www.h-online.com/newsticker/news/item/IDF-Intel-says-Moore-s-Law-holds-until-2029-734779.html). Heise Online. 2008-04-04.
[63] "The Cray-1 Computer System" (http://archive.computerhistory.org/resources/text/Cray/Cray.Cray1.1977.102638650.pdf) (PDF). Cray Research, Inc. 1977. Retrieved May 25, 2011.
[64] Joshi, Rajani R. (9 June 1998). "A new heuristic algorithm for probabilistic optimization" (http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VC5-3SWXX64-8) (subscription required). Department of Mathematics and School of Biomedical Engineering, Indian Institute of Technology Powai, Bombay, India. Retrieved May 25, 2011.
[65] "Abstract for SAMSY - Shielding Analysis Modular System" (http://www.nea.fr/abs/html/iaea0837.html).
[66] "EFF DES Cracker Source Code" (https://www.cosic.esat.kuleuven.be/des/). Cosic.esat.kuleuven.be. Retrieved 2011-07-08.
[67] "Disarmament Diplomacy: DOE Supercomputing & Test Simulation Programme" (http://www.acronym.org.uk/dd/dd49/49doe.htm). Acronym.org.uk. 2000-08-22. Retrieved 2011-07-08.
[68] "China's Investment in GPU Supercomputing Begins to Pay Off Big Time!" (http://blogs.nvidia.com/2011/06/chinas-investment-in-gpu-supercomputing-begins-to-pay-off-big-time/). Blogs.nvidia.com. Retrieved 2011-07-08.

External links
• Supercomputing (http://www.dmoz.org/Computers/Supercomputing/) at the Open Directory Project

Terrastore

Original author(s): Sergio Bossa
Developer(s): Sergio Bossa, Giuseppe Santoro, Sven Johansson, Mats Henricson, Mathhew Levine, Amir Moulavi
Initial release: 2009
Stable release: 0.8.0 / December 13, 2010
Development status: Active
Written in: Java
Operating system: Cross-platform
Available in: English
Type: Document-oriented database
License: Apache License 2.0
Website: [1]

Terrastore is a distributed, scalable and consistent document store supporting single-cluster and multi-cluster deployments. It provides advanced scalability support and elasticity features without loosening consistency at the data level. Terrastore provides ubiquity by using the universally supported HTTP protocol.

Building Blocks and Architecture

A Terrastore system consists of an ensemble of clusters; in each cluster there is one Terrastore master and several Terrastore servers. The master is responsible for managing cluster membership: it notifies the servers when nodes join or leave, changing the group view. The master is also responsible for durably storing the whole set of documents and for replicating the data to server nodes, but it does not partition the data itself; the partitioning strategy is decided by the server nodes and is either the default consistent hashing or a user-defined one. Hence each server requests its own partition from the master, and replication is a pull strategy performed by the server nodes from the master node. All writes go through the master, but only the first read request does; later requests are served from server memory. Data (documents and buckets) is partitioned and distributed among the nodes in the cluster(s), with automatic and transparent re-balancing when nodes join and leave. In addition to membership management, the master distributes the computational load for operations like queries and updates to the nodes that actually hold the data. In this way Terrastore provides scalability at both the data and the computational layers.

Terracotta [2] is used as a distributed lock manager for locking single-document access during write operations, as an intra-cluster group membership service, and for durable document storage (and replication).

Data Model

The data model is pure JSON,[3] stored in documents and buckets, which are analogous to table rows and tables, respectively, in relational DBs. Data (documents and buckets) is partitioned according to the consistent hashing schema [4] and is distributed over the different Terrastore servers.
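The consistent hashing partitioning scheme mentioned above can be sketched as follows. This is an illustrative stand-alone example (the class and method names are hypothetical), not Terrastore's actual implementation:

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hashing sketch: document keys are mapped to the first
// server clockwise from the key's position on a hash ring. When a server
// joins or leaves, only the keys in its arc of the ring move.
public class ConsistentHashRing {
    private final SortedMap<Integer, String> ring = new TreeMap<>();
    private final int replicas; // virtual nodes per server, smooths the load

    public ConsistentHashRing(int replicas) { this.replicas = replicas; }

    private int hash(String key) {
        // Spread the hashCode bits; a real system would use MD5 or Murmur.
        int h = key.hashCode();
        h ^= (h >>> 16);
        return h & 0x7fffffff; // keep the position non-negative
    }

    public void addServer(String server) {
        for (int i = 0; i < replicas; i++) ring.put(hash(server + "#" + i), server);
    }

    public void removeServer(String server) {
        for (int i = 0; i < replicas; i++) ring.remove(hash(server + "#" + i));
    }

    // The server owning a document key: first ring entry at or after its hash,
    // wrapping around to the first entry overall if none follows.
    public String serverFor(String documentKey) {
        if (ring.isEmpty()) throw new IllegalStateException("no servers");
        SortedMap<Integer, String> tail = ring.tailMap(hash(documentKey));
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }
}
```

The key property, which the re-balancing behavior described above relies on, is that removing one server leaves the mapping of all documents owned by the other servers unchanged.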

Each server owns a partition to which a number of documents are mapped, and each document is owned by only one server node. If a request is sent to a server that does not own the document, the request is routed to the corresponding server. All write requests go to both the server that owns the document and the master node.

The role of the ensemble is to join multiple clusters and make them work together. It provides better scalability by providing multiple active masters, and it also facilitates partition-tolerant behavior for the whole system: in the case of a partition, the data remains available locally, but it cannot be seen by any cluster other than the one that owns it.

External links
• Project website [1]
• Introduction to Terrastore [5]
• Terrastore, a document database for developers [6]
• Terrastore news and articles on myNoSQL [7]

References
[1] http://code.google.com/p/terrastore
[2] "Terracotta" (http://www.terracotta.org/).
[3] "JSON" (http://www.json.org/).
[4] Karger, David; Eric Lehman; Tom Leighton; Rina Panigrahy; Daniel Lewin. Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web. ACM Symposium on Theory of Computing.
[5] http://www.slideshare.net/svjson/introduction-to-terrastore
[6] http://www.slideshare.net/sbtourist/terrastore-a-document-database-for-developers
[7] http://nosql.mypopescu.com/tagged/terrastore

Transparency (human-computer interaction)

Any change in a computing system, such as a new feature or new component, is transparent if the system after the change adheres to its previous external interface as much as possible while changing its internal behaviour. The purpose is to shield from change all systems (or human users) on the other end of the interface. Confusingly, the term refers to the overall invisibility of the component; it does not refer to visibility of the component's internals (as in white box or open system). The term transparent is widely used in computing marketing in substitution of the term invisible, since the term invisible has a bad connotation (usually seen as something that the user can't see and has no control over) while the term transparent has a good connotation (usually associated with not hiding anything). The vast majority of the time, the term transparent is used in a misleading way to refer to the actual invisibility of a computing process.

The term is used particularly often with regard to an abstraction layer that is invisible either from its upper or lower neighbouring layer. The term was also temporarily used around 1969 in IBM and Honeywell programming manuals to refer to a certain programming technique: application code was transparent when it was clear of low-level detail (such as device-specific management) and contained only the logic solving the main problem. This was achieved through encapsulation, putting the code into modules that hid internal details, making them invisible to the main application.

Types of transparency in distributed systems

Transparency means that any form of distributed system should hide its distributed nature from its users, appearing and functioning as a normal centralized system. There are many types of transparency:

• Access transparency - Regardless of how resource access and representation has to be performed on each individual computing entity, the users of a distributed system should always access resources in a single, uniform way.
• Location transparency - Users of a distributed system should not have to be aware of where a resource is physically located.
• Migration transparency - Users should not be aware of whether a resource or computing entity possesses the ability to move to a different physical or logical location.
• Relocation transparency - Should a resource move while in use, this should not be noticeable to the end user.
• Replication transparency - If a resource is replicated among several locations, it should appear to the user as a single resource.
• Concurrent transparency - While multiple users may compete for and share a single resource, this should not be apparent to any of them.
• Failure transparency - Always try to hide any failure and recovery of computing entities and resources.
• Persistence transparency - Whether a resource lies in volatile or permanent memory should make no difference to the user.
• Security transparency - Negotiation of cryptographically secure access of resources must require a minimum of user intervention, or users will circumvent the security in preference of productivity.

The degree to which these properties can or should be achieved may vary widely, and not every system can or should hide everything from its users. For instance, due to the existence of a fixed and finite speed of light, there will always be more latency on accessing resources distant from the user; if one expects real-time interaction with the distributed system, this may be very noticeable.[1] Formal definitions of most of these concepts can be found in RM-ODP, the Open Distributed Processing Reference Model (ISO 10746).

Examples

For example, the Network File System is transparent because it introduces access to files stored remotely on the network in a way uniform with previous local access to a file system. The early File Transfer Protocol (FTP) is considerably less transparent because it requires each user to learn how to access files through an ftp client.

Similarly, some file systems allow transparent compression and decompression of data, enabling users to store more files on a medium without any special knowledge, and some file systems encrypt files transparently. This approach does not require running a compression or encryption utility manually.

In software engineering, it is also considered good practice to develop or use abstraction layers for database access, so that the same application will work with different databases; here, the abstraction layer allows other parts of the program to access the database transparently (see Data Access Object, for example). In object-oriented programming, transparency is facilitated through the use of interfaces that hide actual implementations done with different underlying classes.
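The database abstraction layer just described can be illustrated with a small sketch. The interface and class names here are hypothetical; the point is that the application code depends only on the interface, so the backing store can be swapped transparently:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The abstraction layer: application code talks to UserStore only,
// so changing the backing implementation is invisible to it.
interface UserStore {
    void save(String id, String name);
    Optional<String> find(String id);
}

// One backend: a plain in-memory map, standing in for a real database.
class InMemoryUserStore implements UserStore {
    private final Map<String, String> rows = new HashMap<>();
    public void save(String id, String name) { rows.put(id, name); }
    public Optional<String> find(String id) { return Optional.ofNullable(rows.get(id)); }
}

// Application code: written once against the interface, never against a backend.
class Application {
    static String greet(UserStore store, String id) {
        return store.find(id).map(n -> "Hello, " + n).orElse("Unknown user");
    }
}
```

A second implementation of `UserStore` backed by another database could be substituted without touching `Application`, which is exactly the transparency the section describes.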

References
• Transparent-Mode Control Procedures for Data Communication [2], a paper from 1965; its abstract carries an early example of usage of the term in the IT field.
[1] http://www.counterpane.com/sandl.html
[2] http://delivery.acm.org/10.1145/370000/363836/p203-gorn.pdf?key1=363836&key2=6763295811&coll=&dl=ACM&CFID=15151515&CFTOKEN=6184618

TreadMarks

TreadMarks is a distributed shared memory system created at Rice University in the 1990s.

External links
• TreadMarks official site [1]

References
[1] http://www.cs.rice.edu/CS/Systems/software/treadmarks.html

Tuple space

A tuple space is an implementation of the associative memory paradigm for parallel/distributed computing. It provides a repository of tuples that can be accessed concurrently. As an illustrative example, consider a group of processors that produce pieces of data and a group of processors that use the data. Producers post their data as tuples in the space, and the consumers then retrieve from the space data that match a certain pattern. This is also known as the blackboard metaphor. Tuple space may be thought of as a form of distributed shared memory.

Tuple spaces were the theoretical underpinning of the Linda language developed by David Gelernter and Nicholas Carriero at Yale University. Implementations of tuple spaces have also been developed for Java (JavaSpaces), Lisp, Lua, Prolog, Python, Ruby, Smalltalk, Tcl, and the .NET framework.

Object Spaces

Object Spaces is a paradigm for the development of distributed computing applications. It is characterized by the existence of logical entities, called Object Spaces, shared amongst providers and accessors of network services, which are themselves abstracted as objects. All the participants of the distributed application share an Object Space. A provider of a service encapsulates the service as an Object and puts it in the Object Space; clients of a service then access the Object Space, find out which object provides the needed service, and have the request serviced by the object.

Object Spaces, as a computing paradigm, was put forward by David Gelernter at Yale University. Gelernter developed a language called Linda to support the concept of global object coordination. An Object Space can be thought of as a virtual repository: processes communicate among each other using these shared objects, updating the state of the objects as and when needed.
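The producer/consumer coordination described above can be illustrated with a minimal, single-process tuple space. This is a toy sketch with hypothetical names, not Linda or JavaSpaces itself:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Minimal tuple space sketch: producers write tuples, consumers take the
// first tuple matching a pattern. take() removes the tuple (providing the
// mutual exclusion described in the text) and blocks until a match exists.
public class TinyTupleSpace {
    private final List<Object[]> tuples = new ArrayList<>();

    public synchronized void write(Object... tuple) {
        tuples.add(tuple);
        notifyAll(); // wake any consumers blocked in take()
    }

    public synchronized Object[] take(Predicate<Object[]> pattern) throws InterruptedException {
        while (true) {
            for (Object[] t : tuples) {
                if (pattern.test(t)) {
                    tuples.remove(t); // removal gives exclusive access to the tuple
                    return t;
                }
            }
            wait(); // no match yet; block until a producer writes
        }
    }
}
```

A producer might write `space.write("sum", 2, 3)` while a consumer takes any tuple whose first field is "sum" and processes it; neither side needs to know about the other, which is the blackboard metaphor in miniature.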

An object, when deposited into an Object Space, needs to be registered with an Object Directory in the Object Space. Any process can then identify the object from the Object Directory using properties lookup, where the property specifying the criteria for the lookup is its name or some other property which uniquely identifies it. A process may choose to wait for an object to be placed in the Object Space, if the needed object is not already present.

Objects, when deposited in an Object Space, are passive, i.e. their methods cannot be invoked while the objects are in the Object Space. Instead, the accessing process must retrieve an object from the Object Space into its local memory, use the service provided by the object, update the state of the object and place it back into the Object Space. This paradigm inherently provides mutual exclusion: once an object is accessed, it has to be removed from the Object Space, and it is placed back only after it has been released, so no other process can access it in the meantime.

JavaSpaces

JavaSpaces is a service specification providing a distributed object exchange and coordination mechanism (which may or may not be persistent) for Java objects. It is used to store the distributed system state and implement distributed algorithms. In a JavaSpace, all communication partners (peers) communicate and coordinate by sharing state.

JavaSpaces can be used to achieve scalability through parallel processing. It can also be used to provide reliable storage of objects through distributed replication, although this won't survive a total power failure like a disk; it is regarded by many to be reliable as long as the power is reliable. Distribution can also be to remote locations; however, this is rare, as JavaSpaces are usually used for low-latency, high performance applications rather than reliable object caching.

The most common software pattern used in JavaSpaces is the Master-Worker pattern. The Master hands out units of work to the "space", and these are read, processed and written back to the space by the workers. In a typical environment there are several "spaces", several masters and many workers; the workers are usually designed to be generic, i.e. they can take any unit of work from the space and process the task.

JavaSpaces is part of the Java Jini technology, which on its own has not been a commercial success.[1] The announcement of Jini/JavaSpaces created quite some hype, although Sun co-founder and chief Jini architect Bill Joy put it straight that this distributed systems dream will take "a quantum leap in thinking". The technology has nonetheless found and kept new users over the years, and some vendors are offering JavaSpaces-based products. JavaSpaces remains a niche technology mostly used in the financial services and telco industries, where it continues to maintain a faithful following.

Example usage

The following example shows an application made using JavaSpaces. First, an object to be shared in the Object Space is made. Such an object is called an Entry in JavaSpace terminology. Here, the Entry is used to encapsulate a service which returns a Hello World! string and keeps track of how many times it was used. The server which provides this service creates an Object Space, or JavaSpace, and writes the Entry into it. The client reads the Entry from the JavaSpace and invokes its method to access the service, updating its usage count by doing so. The updated Entry is written back to the JavaSpace.

// An Entry class
public class SpaceEntry implements Entry {
    public final String message = "Hello World!";
    public Integer count = 0;

    public String service() {
        ++count;
        return message;
    }

    public String toString() {
        return "Count: " + count;
    }
}

// Hello World! server
public class Server {
    public static void main(String[] args) throws Exception {
        SpaceEntry entry = new SpaceEntry();      // Create the Entry object
        JavaSpace space = (JavaSpace) space();    // Create an Object Space
        // Register and write the Entry into the Space
        space.write(entry, null, Lease.FOREVER);
        // Pause for 10 seconds, then retrieve the Entry and check its state.
        Thread.sleep(10 * 1000);
        SpaceEntry e = space.read(entry, null, Long.MAX_VALUE);
        System.out.println(e);
    }
}

// Client
public class Client {
    public static void main(String[] args) throws Exception {
        JavaSpace space = (JavaSpace) space();
        SpaceEntry e = space.take(new SpaceEntry(), null, Long.MAX_VALUE);
        System.out.println(e.service());
        space.write(e, null, Lease.FOREVER);
    }
}

Books
• Eric Freeman, Susanne Hupfer, Ken Arnold: JavaSpaces Principles, Patterns, and Practice. Addison-Wesley Professional, June 1999. ISBN 0-201-30955-6
• Phil Bishop, Nigel Warren: JavaSpaces in Practice. Addison Wesley, 2002. ISBN 0-321-11231-8
• Max K. Goff: Network Distributed Computing: Fitscapes and Fallacies. Prentice Hall, 2004. ISBN 0131001523
• Sing Li, et al.: Professional Java Server Programming. Wrox Press, 1999. ISBN 1861002777
• Steven Halter: JavaSpaces Example by Example. Prentice Hall PTR, 2002. ISBN 0-13-061916-7
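The Master-Worker pattern from the JavaSpaces section can also be sketched without a Jini stack. In this illustrative example (hypothetical names), a thread-safe queue stands in for the space: the master writes work units, generic workers take any unit, process it, and write the result back:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Master-Worker sketch: the "space" is approximated by two queues,
// one holding work units and one holding results.
public class MasterWorkerDemo {
    public static int[] run(int[] inputs, int workers) throws InterruptedException {
        BlockingQueue<int[]> work = new LinkedBlockingQueue<>();    // units: (index, value)
        BlockingQueue<int[]> results = new LinkedBlockingQueue<>(); // results: (index, value squared)

        // Master: hand out one unit of work per input value.
        for (int i = 0; i < inputs.length; i++) work.add(new int[] { i, inputs[i] });

        // Workers: generic, they process whatever unit they take from the space.
        for (int w = 0; w < workers; w++) {
            new Thread(() -> {
                int[] unit;
                while ((unit = work.poll()) != null) {
                    results.add(new int[] { unit[0], unit[1] * unit[1] });
                }
            }).start();
        }

        // Master: collect all results, restoring the original order by index.
        int[] out = new int[inputs.length];
        for (int i = 0; i < inputs.length; i++) {
            int[] r = results.take();
            out[r[0]] = r[1];
        }
        return out;
    }
}
```

As in a real JavaSpace deployment, the workers need no knowledge of who produced the work or how many peers exist; adding workers scales the processing without changing the master.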

Interviews
• Gelernter, David (2009). "Lord of the Cloud" [2]. Edge Foundation, Inc. Editor and Publisher: Russell Weinberger; Associate Publisher: John Brockman.
• Heiss, Janice J. "Computer Visions: A Conversation with David Gelernter" [3]. Sun Developer Network (SDN).
• Venners, Bill (2003). "Designing as if Programmers are People (Interview with Ken Arnold)" [4].
• Shalom, Nati (2006). "Interview: GigaSpaces" [5].

Articles
• Brogden, William (2007). "How Web services can use JavaSpaces" [6]. SearchWebServices.com.
• Brogden, William (2007). "Grid computing and Web services (Beowulf, BOINC, Javaspaces)" [7]. SearchWebServices.com.
• White, Tom (2005). "How To Build a ComputeFarm" [8]. java.net.
• Ottinger, Joseph (2007). "Understanding JavaSpaces" [9]. theserverside.com.
• Angerer, Bernhard; Erlacher, Andreas (2005). "Loosely Coupled Communication and Coordination in Next-Generation Java Middleware" [10]. java.net.
• Angerer, Bernhard (2003). "Space-Based Programming" [11]. onjava.com.
• Haines, Steven (2006). "High-impact Web tier clustering, Part 2: Building adaptive, fault-tolerant, scalable solutions with JavaSpaces" [12]. IBM developerWorks.
• Mamoud, Qusay H. (2005). "Getting Started With JavaSpaces Technology: Beyond Conventional Distributed Programming Paradigms" [13]. Sun Developer Network (SDN).
• Hupfer, Susanne (1999). "Make room for JavaSpaces, Part 1 (of 5)" [14]. JavaWorld.
• Löffler, Bernhard (2003). "JavaSpaces und ihr Platz im Enterprise Java Universum. Das Modell zum Objektaustausch: JavaSpaces vorgestellt" [15]. Java Magazin / Entwickler.
• "Space-Based Architecture and the End of Tier-Based Computing" [16]. GigaSpaces Technologies.
• Arango, Mauricio (2009). "Coordination in parallel event-based systems" [17]. blogs.sun.com.

Tuple space implementations
• Apache River (Java; Apache License): based on the Jini project that Sun contributed to Apache.
• The Blitz Project [18] (Java; BSD License): single-site server, open source.
• The Fly Object Space [19] (Java, Scala, Ruby; commercial, with free non-commercial use allowed).
• GigaSpaces [20] (Java, C++, .Net; commercial, with a free "community license" offering a subset of features): clustered, fault-tolerant.
• Linda in a Mobile Environment (LIME) [21] (Java).
• LinuxTuples [22] (C, Python; BSD License): clustered.
• PyLinda (Python; GPL).
• Rinda (Ruby; Ruby License).
• SemiSpace [23] (Java; Apache License): clustered with Terracotta.
• SQLSpaces [24] (server: Java; clients: Java, C#, PHP, Prolog, Ruby; AGPL for the server, LGPL for the clients).
• TIBCO ActiveSpaces [25] (Java, C/C++; commercial): clustered.

Inactive projects
• SlackSpaces [26]: open-source implementation of the Linda/tuple space programming model; project source is downloadable.
• SmallSpaces [27]: project stalled since 2000; main website down.
• TSpaces [28]: by IBM, for Java.

References
[1] Rob Guth: "More than just another pretty name: Sun's Jini opens up a new world of distributed computer systems" (http://sunsite.uakom.sk/sunworldonline/swol-08-1998/swol-08-jini.html). SunWorld, August 1998 [15 January 2006]
[2] http://www.edge.org/3rd_culture/gelernter09/gelernter09_index.html
[3] http://java.sun.com/developer/technicalArticles/Interviews/gelernter_qa.html
[4] http://today.java.net/pub/a/today/2003/06/10/design.html
[5] http://www.informit.com/guides/content.aspx?g=java&seqNum=263
[6] http://searchwebservices.techtarget.com/tip/0,289483,sid26_gci1248166,00.html
[7] http://searchwebservices.techtarget.com/tip/0,289483,sid26_gci1251765,00.html
[8] http://today.java.net/pub/a/today/2005/04/21/farm.html
[9] http://www.theserverside.com/tt/articles/article.tss?l=UsingJavaSpaces
[10] http://today.java.net/pub/a/today/2005/06/03/loose.html
[11] http://www.onjava.com/pub/a/onjava/2003/03/19/java_spaces.html
[12] http://www-128.ibm.com/developerworks/java/library/j-cluster2/?Open&ca=daw-co-news
[13] http://java.sun.com/developer/technicalArticles/tools/JavaSpaces/
[14] http://www.javaworld.com/javaworld/jw-11-1999/jw-11-jiniology.html
[15] http://www.javamagazin.de/itr/online_artikel/psecom.id. ...
[16] http://www.gigaspaces.com/os_papers.html
[17] http://blogs.sun.com/arango/entry/coordination_in_parallel_event_based
[18] http://www.dancres.org/blitz/
[19] http://www.flyobjectspace.com/
[20] http://www.gigaspaces.com/
[21] http://lime.sourceforge.net/
[22] http://linuxtuples.sourceforge.net/
[23] http://www.semispace.org/
[24] http://sqlspaces.collide.info/
[25] http://www.tibco.com/products/soa/in-memory-computing/activespaces-enterprise-edition/default.jsp
[26] http://slackspaces.org/
[27] http://www.geir.fongen.no/?docname=SmallSpaces/
[28] http://www.almaden.ibm.com/cs/TSpaces/

Sources
• Gelernter, David. "Generative communication in Linda" (http://portal.acm.org/citation.cfm?doid=2363.2433). ACM Transactions on Programming Languages and Systems, volume 7, number 1, January 1985.
• M. L. Liu: Distributed Computing (First Indian reprint, 2004).

External links
• "TupleSpace" (http://c2.com/cgi/wiki?TupleSpace) at c2.com
• "JavaSpace Specification" (http://www.jini.org/wiki/JavaSpaces_Specification) at jini.org

Utility computing

Utility computing is the packaging of computing resources, such as computation, storage and services, as a metered service similar to a traditional public utility (such as electricity, water, natural gas, or telephone network). This model has the advantage of a low or no initial cost to acquire computer resources; instead, computational resources are essentially rented, turning what was previously a need to purchase products (hardware, software and network bandwidth) into a service. This repackaging of computing services became the foundation of the shift to "On Demand" computing, Software as a Service and Cloud Computing models that further propagated the idea of computing, application and network as a service.

There was some initial skepticism about such a significant shift.[1] However, the new model of computing caught on and eventually became mainstream with the publication of Nick Carr's book "The Big Switch". IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications.

Utility computing can support grid computing, which has the characteristic of very large computations or sudden peaks in demand that are supported via a large number of computers.

"Utility computing" has usually envisioned some form of virtualization so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the "back end" to make this possible. These might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer. The technique of running a single calculation on multiple computers is known as distributed computing. The term "grid computing" is often used to describe a particular form of distributed computing, where the supporting nodes are geographically distributed or cross administrative domains.

To provide utility computing services, a company can "bundle" the resources of members of the public for sale, who might be paid with a portion of the revenue from clients. One model, common among volunteer computing applications, is for a central server to dispense tasks to participating nodes, at the behest of approved end-users (in the commercial case, the paying customers). Another model, sometimes called the Virtual Organization (VO), is more decentralized, with organizations buying and selling computing resources as needed or as they go idle.

The definition of "utility computing" is sometimes extended to specialized tasks, such as web services.
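The metering idea at the core of the utility model can be illustrated with a toy example: usage is recorded per resource and charged at a unit rate, like an electricity meter. This is a sketch only; the resource names and prices below are invented, and real providers meter many more dimensions (requests, bandwidth, reserved capacity):

```java
import java.util.HashMap;
import java.util.Map;

// Toy usage meter for the pay-per-use model described above.
class UsageMeter {
    private final Map<String, Double> rates = new HashMap<>();  // price per unit
    private final Map<String, Double> usage = new HashMap<>();  // accumulated units

    public void setRate(String resource, double pricePerUnit) {
        rates.put(resource, pricePerUnit);
    }

    // Accumulate metered usage for a resource.
    public void record(String resource, double units) {
        usage.merge(resource, units, Double::sum);
    }

    // Total charge = sum over resources of (units used x unit price).
    public double bill() {
        double total = 0.0;
        for (Map.Entry<String, Double> e : usage.entrySet()) {
            total += e.getValue() * rates.getOrDefault(e.getKey(), 0.0);
        }
        return total;
    }
}
```

The customer pays only for what was consumed; an idle month produces an empty `usage` map and a zero bill, which is exactly the low-initial-cost property described above.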



Utility computing is not a new concept, but rather has quite a long history. Among the earliest references is:

If computers of the kind I have advocated become the computers of the future, then computing may someday be organized as a public utility just as the telephone system is a public utility... The computer utility could become the basis of a new and important industry.

—John McCarthy, speaking at the MIT Centennial in 1961[2]

IBM and other mainframe providers conducted this kind of business in the following two decades, often referred to as time-sharing, offering computing power and database storage to banks and other large organizations from their worldwide data centers. To facilitate this business model, mainframe operating systems evolved to include process control facilities, security, and user metering. The advent of minicomputers changed this business model by making computers affordable to almost all companies. As Intel and AMD increased the power of PC-architecture servers with each new generation of processor, data centers became filled with thousands of servers.

In the late 1990s utility computing resurfaced. InsynQ, Inc.[3] launched on-demand applications and desktop hosting services in 1997 using HP equipment. In 1998, HP set up the Utility Computing Division in Mountain View, CA, assigning former Bell Labs computer scientists to begin work on a computing power plant, incorporating multiple utilities to form a software stack. Services such as "IP billing-on-tap" were marketed. HP introduced the Utility Data Center in 2001. Sun announced the Sun Cloud service to consumers in 2000. In December 2005, Alexa launched the Alexa Web Search Platform, a Web search building tool for which the underlying power is utility computing; Alexa charges users for storage, utilization, etc.

There is space in the market for specific industries and applications, as well as other niche applications, powered by utility computing.[4] For example, PolyServe Inc. offers a clustered file system based on commodity server and storage hardware that creates highly available utility computing environments for mission-critical applications, including Oracle and Microsoft SQL Server databases, as well as workload-optimized solutions specifically tuned for bulk storage, high-performance computing, and vertical industries such as financial services, seismic processing, and content serving. Its Database Utility and File Serving Utility enable IT organizations to independently add servers or storage as needed, retask workloads to different hardware, and maintain the environment without disruption.

In spring 2006, 3tera announced its AppLogic service, and later that summer Amazon launched Amazon EC2 (Elastic Compute Cloud). These services allow the operation of general-purpose computing applications. Both are based on Xen virtualization software, and the most commonly used operating system on the virtual computers is Linux, though Windows and Solaris are supported. Common uses include web applications, SaaS, image rendering and processing, and also general-purpose business applications. Utility computing simply means "pay and use", with regard to computing power.

[1] "On-demand computing: What are the odds?" (http://www.zdnet.com/news/on-demand-computing-what-are-the-odds/296135), ZDNet, Nov 2002; retrieved Oct 2010
[2] Architects of the Information Society: Thirty-Five Years of the Laboratory for Computer Science at MIT, edited by Hal Abelson
[3] http://www.insynq.com
[4] http://www.polyserve.com/index.php

Decision Support and Business Intelligence, 8th edition, page 680. ISBN 0-13-198660-0



External links
• How Utility Computing Works (http://communication.howstuffworks.com/utility-computing.htm) • Utility computing definition (http://searchdatacenter.techtarget.com/sDefinition/0,,sid80_gci904539,00.html)

Virtual Machine Interface
Virtual Machine Interface[1] ("VMI") may refer to a communication protocol for running parallel programs on a distributed-memory system. Virtual Machine Interface[2] is also the name given by VMware to a proposed open-standard protocol that guest operating systems can use to communicate with the hypervisor of a virtual machine. An implementation of this standard was merged into the mainline Linux kernel in version 2.6.21, and a number of popular GNU/Linux distributions now ship with VMI support enabled by default. Since newer AMD and Intel CPUs allow for more efficient virtualization, VMI is being phased out: support is scheduled for removal from the Linux kernel in version 2.6.37[3] and from VMware products in the 2010-2011 timeframe.[4]

[1] Official web site for the VMI communication protocol (http://vmi.ncsa.uiuc.edu/)
[2] Transparent Paravirtualisation - VMware Inc (http://www.vmware.com/interfaces/paravirtualization.html)
[3] x86, vmi: Mark VMI deprecated and schedule it for removal (http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=d0153ca35d344d9b640dc305031b0703ba3f30f0)
[4] Support for guest OS paravirtualization using VMware VMI to be retired from new products in 2010-2011 (http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html)

External links
• The VMI virtualization interface (http://lwn.net/Articles/175706/) - article in lwn.net

Virtual Object System


Developer(s): Interreality
Stable release: 0.23.0 / April 15, 2006 (S5 UI preview released October 19, 2007)
Operating system: Linux, Windows, Mac OS X
Type: Distributed systems, Networking, 3D graphics
License: GNU Lesser General Public License
Website: interreality.org [1]

The Virtual Object System (VOS) is a computer software technology for creating distributed object systems. The sites hosting Vobjects are typically linked by a computer network, such as a local area network or the Internet. Vobjects may send messages to other Vobjects over these network links (remotely) or within the same host site (locally) to perform actions and synchronize state. In this way, VOS may also be called an object-oriented remote procedure call system. In addition, Vobjects may have a number of directed relations to other Vobjects, which allows them to form directed graph data structures. VOS is patent free, and its implementation is Free Software. The primary application focus of VOS is general purpose, multiuser, collaborative 3D virtual environments or virtual reality. The primary designer and author of VOS is Peter Amstutz.
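The ideas above (objects that react to messages and hold named, directed relations to other objects, forming a directed graph) can be roughly illustrated with a short sketch. This is not the actual VOS API; the class and method names are invented, and a real system would dispatch messages over the network:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a distributed-object node in the VOS style.
class Vobject {
    final String name;
    // Named, directed relations to other objects; linked objects form a
    // directed graph that can be traversed.
    final Map<String, Vobject> relations = new HashMap<>();

    Vobject(String name) {
        this.name = name;
    }

    // Add a directed edge from this object to a target object.
    void linkTo(String relation, Vobject target) {
        relations.put(relation, target);
    }

    // A trivial local message handler standing in for a remote invocation.
    String receive(String message) {
        return name + " got: " + message;
    }
}
```

For example, a "world" object can hold a "contains" relation to an "avatar" object, and any holder of the relation can send the avatar a message, which is the local analogue of the remote procedure calls described above.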

External links
• Interreality.org official site [2]

[1] http://interreality.org/
[2] http://interreality.org

Volunteer computing


Volunteer computing is a type of distributed computing in which computer owners donate their computing resources (such as processing power and storage) to one or more "projects".

The first volunteer computing project was the Great Internet Mersenne Prime Search, which was started in January 1996.[1] It was followed in 1997 by distributed.net. In 1997 and 1998, several academic research projects developed Java-based systems for volunteer computing; examples include Bayanihan,[2] Popcorn,[3] Superweb,[4] and Charlotte.[5] A similar concept is sideband computing, which lets users share their computing power while they are online. The term "volunteer computing" was coined by Luis F. G. Sarmenta, the developer of Bayanihan. Volunteer computing is also appealing for global efforts on social responsibility, or Corporate Social Responsibility, as reported in a Harvard Business Review article[6] and used in the Responsible IT forum.[7]

In 1999 the SETI@home and Folding@home projects were launched. These projects received considerable media coverage, and each one attracted several hundred thousand volunteers. Between 1998 and 2002, several companies were formed with business models involving volunteer computing; examples include Popular Power, Porivo, Entropia, and United Devices. In 2002, the Berkeley Open Infrastructure for Network Computing (BOINC) open-source project was founded, and became the software running the largest public computing grid (World Community Grid) in 2007.[8]

Middleware for volunteer computing
The client software of the early volunteer computing projects consisted of a single program that combined the scientific computation and the distributed computing infrastructure. This monolithic architecture was inflexible; for example, it was difficult to deploy new application versions. More recently, volunteer computing has moved to middleware systems that provide a distributed computing infrastructure independently of the scientific computation. Examples include:
• The Berkeley Open Infrastructure for Network Computing (BOINC). BOINC is the most widely used middleware system, and is currently used by the World Community Grid. It is open source (LGPL) and is developed by an NSF-funded research project located at the UC Berkeley Space Sciences Laboratory. It offers client software for Windows, Mac OS X, Linux, and other Unix variants.
• XtremWeb is used primarily as a research tool. It is developed by a group based at the University of Paris-South.
• Xgrid is developed by Apple. Its client and server components run only on Mac OS X.
• Grid MP is a commercial middleware platform developed by United Devices. It has been used in volunteer computing projects including grid.org, World Community Grid, Cell Computing, and Hikari Grid.
Most of these systems have the same basic structure: a client program runs on the volunteer's computer. It periodically contacts project-operated servers over the Internet, requesting jobs and reporting the results of completed jobs. This "pull" model is necessary because many volunteer computers are behind firewalls that do not allow incoming connections. The system keeps track of each user's "credit", a numerical measure of how much work that user's computers have done for the project.
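The pull-model structure just described can be sketched in a few lines. This is an illustrative simulation only: the server is an in-process object rather than a real project server, the "scientific computation" is a stand-in (squaring a number), and credit is simply one unit per completed job. In a real deployment the client would initiate every contact over HTTP, precisely because firewalls block incoming connections:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Simulated project server: hands out jobs on request and tracks credit.
class ProjectServer {
    private final Queue<Integer> jobs = new ArrayDeque<>();
    private long credit = 0;

    ProjectServer(int... workUnits) {
        for (int w : workUnits) {
            jobs.add(w);
        }
    }

    // Client-initiated request for work; null means no work available.
    Integer requestJob() {
        return jobs.poll();
    }

    // Client-initiated result report; a real server would validate the
    // result before granting credit.
    void reportResult(int job, long result) {
        credit += 1;
    }

    long credit() {
        return credit;
    }
}

class VolunteerClient {
    // The pull loop: request a job, compute, report, repeat.
    static void run(ProjectServer server) {
        Integer job;
        while ((job = server.requestJob()) != null) {
            long result = (long) job * job; // stand-in for the real computation
            server.reportResult(job, result);
        }
    }
}
```

Because the client drives the loop, the server never needs to open a connection into the volunteer's network.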
Volunteer computing systems must deal with several problematic aspects of the volunteered computers: their heterogeneity, their churn (that is, the arrival and departure of hosts), their sporadic availability, and the need to avoid interfering with their performance during regular use. In addition, volunteer computing systems must deal with several problems related to correctness:

• Volunteers are unaccountable and essentially anonymous.
• Some volunteer computers (especially those that are overclocked) occasionally malfunction and return incorrect results.
• Some volunteers intentionally return incorrect results or claim excessive credit for results.

One common approach to these problems is "replicated computing", in which each job is performed on at least two computers. The results (and the corresponding credit) are accepted only if they agree sufficiently.

Costs for volunteer computing participants
• Increased power consumption. A CPU that is idle generally has lower power consumption than when it is active. Additionally, the constant load on the volunteer's CPU can cause it to overheat if adequate cooling is not in place. The desire to participate may also cause the volunteer to leave the PC on overnight, or to disable power-saving features like suspend. The increased power consumption can be remedied to some extent by setting the desired processor usage percentage, an option available e.g. in the BOINC client, which also helps to alleviate CPU contention.
• Decreased performance of the PC. If the volunteer computing application runs while the computer is in use, it will impact the performance of the PC, due to increased CPU contention, CPU cache contention, disk I/O contention, and network I/O contention. If RAM is a limitation, increased disk cache misses and/or increased paging can result. Volunteer computing applications typically execute at a lower CPU scheduling priority, which helps to alleviate CPU contention.[9] These effects may or may not be noticeable, and even if they are noticeable, the volunteer might choose to continue participating.

References
[1] "GIMPS History" (http://mersenne.org/various/history.php)
[2] Sarmenta, L.F.G. "Bayanihan: Web-Based Volunteer Computing Using Java". Proc. of the 2nd International Conference on World-Wide Computing and its Applications (WWCA'98), Tsukuba, Japan, 1998. Lecture Notes in Computer Science 1368, Springer-Verlag, pp. 444-461.
[3] Regev, O.; Nisan, N. (October 25-28, 1998). "The POPCORN market - an online market for computational resources". Proceedings of the First International Conference on Information and Computation Economies. Charleston, South Carolina, United States: ACM Press, pp. 148-157.
[4] Alexandrov, A.D.; Ibel, M.; Schauser, K.E.; Scheiman, C.J. (Sept 1996). "SuperWeb: Research issues in Java-Based Global Computing". Proceedings of the Workshop on Java for High Performance Scientific and Engineering Computing, Simulation and Modelling. New York: Syracuse University.
[5] Baratloo, A.; Karaul, M.; Kedem, Z.; Wyckoff, P. (1996). "Charlotte: Metacomputing on the Web" (http://citeseer.ist.psu.edu/article/baratloo96charlotte.html). Proceedings of the 9th International Conference on Parallel and Distributed Computing Systems.
[6] Porter, Michael; Kramer, Mark. "The Link Between Competitive Advantage and Corporate Social Responsibility" (http://harvardbusinessonline.hbsp.harvard.edu/email/pdfs/Porter_Dec_2006.pdf). Harvard Business Review.
[7] "ResponsI.TK" (http://www.responsI.tk). Responsible IT forum.
[8] BOINC Migration Announcement (http://www.worldcommunitygrid.org/forums/wcg/viewthread?thread=15715)
[9] "Measuring Folding@Home's performance impact" (http://techreport.com/articles.x/4341/1). September 2009.

External links
• "Wanted: Your computer's spare time" (http://www.physics.org/featuredetail.asp?id=38), physics.org
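The replicated-computing check described in this section (accept a result, and grant credit, only when enough replicas agree) can be sketched as follows. This is a simplification: real systems such as BOINC typically use application-specific fuzzy comparison of results rather than the exact equality used here:

```java
import java.util.List;

// Sketch of result validation for replicated computing: each job runs on at
// least two volunteer hosts, and a result is accepted only if at least
// `quorum` of the submitted results agree.
class ReplicaValidator {
    // Returns the agreed-upon result, or null if no value reaches the quorum
    // (in which case the job would be re-issued to more hosts).
    static Long validate(List<Long> results, int quorum) {
        for (Long candidate : results) {
            int agree = 0;
            for (Long r : results) {
                if (r.equals(candidate)) {
                    agree++;
                }
            }
            if (agree >= quorum) {
                return candidate;
            }
        }
        return null;
    }
}
```

With replicas {42, 42, 7} and a quorum of 2, the value 42 is accepted and the host that returned 7 earns no credit; with replicas {1, 2} nothing is accepted and the job must be re-run.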

Thumperward.brennan. Hazzik. LilHelpa. EagleOne. McSly. 3 anonymous edits Open architecture computing environment  Source: http://en. Oicumayberight. Lateg. Richard Slater. Nickptar. Lismoreboy. Dainis. Johnny99. Phatom87. JohnCatlin. Shepard. 10 anonymous edits HyperText Computer  Source: http://en. 43 anonymous edits IBZL  Source: http://en. 1 anonymous edits Message passing  Source: http://en. Samw. 5 anonymous edits Dynamic infrastructure  Source: http://en. Youngtwig. Tobias Conradi.php?oldid=428534732  Contributors: Agileball. Georgewilliamherbert.g. KSEltar. Ttonyb1.wikipedia. Dstainer. Dto. Meandtheshell. Antonielly. CloudComputing.wikipedia. Deon Steyn. Ff1959. Sjc.org/w/index. Akerans. Gogo Dodo. Hairhorn.wikipedia. AvicAWB. Iridescent.wikipedia.org/w/index. Darren uk. Beaddy1238. Hadrianheugh. Magog the Ogre. JonHarder. Chester Markel.php?oldid=429891944  Contributors: Buddy23Lee. Happyinmaine. Michael Hardy. 4th-otaku. Miym. Thepaul0. AlistairMcMillan. Tinucherian. Bearcat. Mcsee. Frap. Zondor. Poison Oak. Minghong. Tomrbj. Fabrictramp. Gravthuth. Emmanuel.php?oldid=424717286  Contributors: Acdx. Mac. Bearcat. Sciurinæ. ABCD.org/w/index. Miym. Megaltoid. Mboverload. 16 anonymous edits Explicit multi-threading  Source: http://en. Coldacid. Steve walkerou. Kadakas.php?oldid=430599102  Contributors: Alvin Seville. Opticalgirl. Hu12. Miym. Stephan Leeds. Karada. BrennanNovak. SvenGodo. Anrie Nord. Mdwh. Davetrainer. Yunyz. Belovedfreak. Szopen. Neumeier.org/w/index. CaptTofu. Daarklord. Rwwww. SamJohnston. Saifalisabri. Gemstone Staffing. 1 anonymous edits Open Computer Forensics Architecture  Source: http://en. MarktMan.org/w/index.php?oldid=443494567  Contributors: 16@r. Bostonvaulter. Miym. Samutoko. Avgjoey2k. Frap. Plaes. 30 anonymous edits Membase  Source: http://en.wikipedia. Frap. Technobadger. Ewlyahoocom. MarkusSchiltknecht.php?oldid=446408740  Contributors: Cybercobra. Pjoef. SiddhartaPranha. Prickus. Joonga. M Almarshad. Rofrol. Smjg. 
Rich Farmbrough. Eschuck. Robomanx.wikipedia.org/w/index. William Avery. Av pete. SimonP. Brennels. Romanc19s. Srjskam. Gurch. Agne27.wikipedia. John Nowak. Vdzhuvinov.php?oldid=447414552  Contributors: Afrab null. Wavemaster447. Miym. ClaesWallin. MorganCribbs. Smartse. Dinarphatak. Elibarzilay. Kennyluck. Hervegirod. 用 心 阁 . Valio bg. Twimoki. Hairy Dude. R'n'B. Rich Farmbrough. Dismas. YourEyesOnly. Peterdjones. Aottley. Fahdshariff. Kocarol.php?oldid=446363990  Contributors: ENeville. Rhopkins8. Mernen. Stolenglances.org/w/index. Orderud. Neilc. Venustas 12. 3 anonymous edits High level architecture (simulation)  Source: http://en.wikipedia. Cynehelm. Woohookitty. Hgrosser. Tuxcantfly. Kraftlos. Shijucv. Armadillo-eleven. Nasa-verve. FreplySpang. Henk. Netlad. UncleDouggie. Quercus basaseachicensis. Balabiot. Arkroll. AvicAWB. Radiojon. Alex. LobStoR. Miym. Σ. ZS. Momo54. Louspringer.php?oldid=427403475  Contributors: John of Reading. Jdzarlino. Adrianwn. Bovineone. AlainV. Cander0000. Kiwibird. Pengwynn. Marudubshinki. Reedy. Cntras. Lyricmac.php?oldid=444066741  Contributors: Atownballer. DoctorElmo. OwenBlacker. RHaworth.org/w/index. BadenW. Dancter. The Thing That Should Not Be. Mechanical digger. Blaisorblade. Chris Capoccia.wikipedia. SamJohnston. Hodsondd. ErrantX. Yunyz. NeilK. Rwwww. History2007. Ruud Koot. Jakub Vrána. WilliamAquarius.php?oldid=430841676  Contributors: David. Pibara.org/w/index.php?oldid=447612200  Contributors: 61cc.pratten. R'n'B. Jay.Article Sources and Contributors Pion. Suli123.wikipedia. Moheed. Rodrigob. Space89. Mwazzap. Svick.wikipedia. Fgiorgi. Kocio. Toni Stoev.wikipedia.wikipedia. Lexor. Æåm Fætsøn. Wilbysuffolk. Manasgarg. Jamelan. Ettrig. Djmackenzie. Shanes. Miym.org/w/index. Jsmethers.org/w/index.php?oldid=430500136  Contributors: Aboutmovies. Yellowgoat. Rick. JaGa. Miborovsky.wikipedia. Iridescent. Gaensebluemchen at night. Hoist2k.wikipedia. Mahanga. Xamian.php?oldid=420149708  Contributors: Miym. Vittyvk. Zyx. 
Lucaas. Alexteclo. Nicolas Barbier. SamyPesse. Nixdorf. The Anome.org/w/index. Kbrhouse. Aervanath. Teknobo. Brighterorange. Ceefour. Robklpd OrientDB  Source: http://en. Postcard Cathy. 5 anonymous edits Kayou  Source: http://en. Shuitu. Cactus26. RadManCF. JCLately. Haikupoet.wikipedia. 39 anonymous edits Multitier architecture  Source: http://en. RainbowCrane.wikipedia. Malcolma. Q Chris. Gamer007. Jimmyzimms. Thurston51. Darp-a-parp. Cesium 133. ArneBab.wikipedia. ErnstRohlicek. FrankTobia.org/w/index. PigFlu Oink Gemstone (database)  Source: http://en.wikipedia. Michael Hardy. Stephen B Streater. PullUpYourSocks. AlisonW.wikipedia. Robert K S. Woohookitty. Signalhead. Bovineone. MarSch. BClemente. IanOsgood.org/w/index.php?oldid=446515973  Contributors: Airplaneman.hc. 24 anonymous edits Fragmented object  Source: http://en. JCLately. Extols. Seerinteractive.Wiggin. Zachlipton. Urhixidur. Shaunfensom. Thomas Willerich. Discospinster. Siyamed. Khalid hassani.org/w/index. Night of the Big Wind Turbo. Jonathan Williams. Mechanical digger. Davidofithaca.wikipedia. Rajgopalv. CarlHewitt. Rayc. Beland. Abune.org/w/index. Bunnyhop11. Skomorokh. Edward321. Munahaf. 15 anonymous edits Message consumer  Source: http://en. Coldacid. Nabla. Tommy2010. Zondor. Omicronpersei8. Tevildo. Dkf11. JLRedperson. Wrboyce. JVersteeg. Mange01. Bweck. OmidPLuS. Yami Vizzini. Supa Z. Joriki. Catatoniatoday.org/w/index. Jncraton. Difu Wu. Sn0wflake. Ladybirdintheuk.org/w/index. Tide rolls. Foofy. Frap. Fæ. Bartledan. JLaTondre. Krzys ostrowski. Cander0000. Bearcat. Stephan Leeds. Abdull. My76Strat.wikipedia. Bpfurtado. Asafdapper. Lackett. Scoutchen. Thumperward. SBunce. 8oogers. Mortense. JubalHarshaw.org/w/index.php?oldid=444281694  Contributors: 1manfern. Metrax. ScottEdwinBailey. Phillow318. Patrick.php?oldid=430566863  Contributors: Alvin Seville. Mdd. Outlanderssc. Ideogram. Conversion script. Grshiplett. Pegship. Snezzy. Shaunfensom. Jpbowen. SAE1962. Mat813. Ninja987. Lelek. 
Vrenator. Kozuch.org/w/index. Abhinavkin. Anon126. Heelmijnlevenlang. A000040. 220 anonymous edits Network cloaking  Source: http://en. Sbowers3. Dm. William Avery. Entonian. Heelmijnlevenlang. Oleg Alexandrov. Akuckartz. Cander0000. Jonmmorgan. Eustress.wikipedia. Miym. Arichnad. Samer. Mark Renier. Raul654. Twirligig. Ewlyahoocom. Jackollie. W Nowicki. 5 anonymous edits Live distributed object  Source: http://en. Masterhomer. One-dimensional Tangent. Zombie1986. Dispenser. Stypex. Chris the speller. 52 anonymous edits Messaging pattern  Source: http://en. Tagishsimon. MarkWahl. Bblfish. Rettetast. BMF81. Al3xpopescu. Ramin zeinali. Jinlye. Rich Farmbrough. GoldKanga. Nrgiii. Tonyony83. SteveLoughran. Eleassar. Peridon.org/w/index.php?oldid=444950039  Contributors: 1000Faces. Mu Mind. Evileye73. Krzys ostrowski. Galoubet. Phatom87.wikipedia. Neilc. Etenil. Remuel. Carmeld1. GreenReaper. Waldhorn Opaak  Source: http://en. Chowbok. Robertvan1. Raywil. Bearcat. Joeyguerra. Animum.php?oldid=402986017  Contributors: Andreas Kaufmann. Closedmouth. ESkog. Oliphaunt. Kusma. 126 anonymous edits Multi-master replication  Source: http://en.php?oldid=435295809  Contributors: Archimedius. Skizzik. Friendlydata. JLaTondre.php?oldid=387166889  Contributors: Dawynn. Ingenthr.revah. Ilammy. Philippe Nicolai-Dashwood.wikipedia. Dubwai. GrahamN. Sidna.org/w/index. Styfle. Ochbad. Mdirolf. YUL89YYZ. SiarFisher. Jweston. Nesjo. Najeeb1010. David-Sarah Hopwood. 6 anonymous edits Fallacies of Distributed Computing  Source: http://en. Everyking.php?oldid=446828398  Contributors: Adm. Josephgrossberg. DavidBourguignon. CKlunck. Senthryl. Chzz. Wikante. Voteformike. 12 anonymous edits Edge computing  Source: http://en. Retired user 0001. Daf. FatalError. Percy Snoodle. DOSGuy.php?oldid=400012160  Contributors: Frap. Sander. Starwiz. Nforbes. Magioladitis. Miym. R'n'B. Reyk. Nad. Spl. Orso della campagna. Nomeata. Sriehl. Mike2782. Saintrain. Yadavjpr. Chmod007. Riadlem. Jamierlawson. 
Gribeco. JerryLerman.

Jeff G. Phr. Mihaigalos. Rgamble. Andyzweb. Esap. Hashproduct. Bosniak. Some fool. Geoff97. Discospinster. Bsadowski1. Emmess2006. 40 anonymous edits Paradiseo  Source: http://en. Michaelmas1957. Southen. Vivek prakash81. Nakon. Pedro. Balabiot.wikipedia. Bloodshedder. The 888th Avatar.php?oldid=443706886  Contributors: Amir. HaakonHjortland. Nikai. ZotovBST. Philipp Weis. Alphachimp. TreyGeek. Greg Lindahl.php?oldid=438879586  Contributors: Elkman. N328KF. JimParkerRogers.org/w/index. Husky.org/w/index. Roadrunner. Roy da Vinci. BMF81. Ahmedabadprince. Junkblocker. Asparagus. Eleckyt. Heelmijnlevenlang. Edward321. Olegos. Bijee. Codetiger. Burschik. Antandrus. Alansohn. Bonadea. DocendoDiscimus. Editor4567. Gp5588. Glane23. GiM. Ignatzmice. RxS. Тиверополник. Mwaisberg.org/w/index. RichardVeryard. Henriok. Jamesontai. Texture. Racklever. Chowbok. Femto. Miym. Chuunen Baka. Metageek. RJASE1. Ms. Miym. Nivix. Brighterorange. Bentogoa. Jawz44. Spf2. Michael Devore. Aumakua. Niroht. Simesa. Wikineer. Oleg Alexandrov. Amwebb. Kozuch. Hellis. Tonywalton Redis (data store)  Source: http://en. Anr.php?oldid=443982412  Contributors: Befreax. DARTH SIDIOUS 2. Fireaxe888. Lord British. AdjustShift. Mary quite contrary. JonHarder. Tiddly Tom.wikipedia. Danbert8. 9 anonymous edits PlanetSim  Source: http://en. Alexwcovington. Bryan Derksen. Grimey109. Zoeb. SimonP. CesarB. Erxnmedia. 161 . Hgrobe. Waldir. JLaTondre. Sam Hocevar. Andymrhodes.us. Earle Martin. Ahoerstemeier. Jorfer.xxx. Rilak. LokiiT.org/w/index. ShaunMacPherson. Billymac00. John Reaves. TexasAndroid. Doctorevil64. Jpbowen. Miym. Rookkey. Autopilots. Pion. Giftlite. Zachary. Drilnoth. Epolk. Ehn. Miracle Pen. Eivind. Mchu amd. Roaming. RTC. Ventura. Isilanes. Drphilharmonic. Torqueing. Omicronpersei8. Hft. UltraMuffin. Ekashp.davies.cz. Nakakapagpabagabag. RexNL. Jay. Neelix. ESkog. Pinethicket. Henry Robinson. Supercomputtergeek. Aldie. Jmurali. T. Balcer. Fæ. Długosz. Cp111.wikipedia. Chrisch. Cowman109. 
Sink257. Dgies. Demonkoryu. El Baby. EdMcMahon. Charles Matthews. Cretog8. MrOllie. Hadal. 5 anonymous edits Service-oriented distributed applications  Source: http://en. Afskymonkey. Bubba73. MC10. 15 anonymous edits Request Based Distributed Computing  Source: http://en.org/w/index. Moxfyre. IvanLanin. Miym. Allan McInnes. Sbtourist. Magnus Manske. Da monster under your bed. Cooperised. Alexkon. 2 anonymous edits Shared memory  Source: http://en. MJSkia1. Jpbowen. Donreed.NETLover. Hankwang.ferre. JForget. Dannyc77. Hydrox. Maurice Carbonaro. Vinceouca. Megacat. Seegoon. Mifam.org/w/index. Peyna. Jmundo. Akadruid. Xalfor. Guanaco. Poohneat. Swillison. Catin20. Kubanczyk. No1lakersfan. Dyl. Dontrustme.org/w/index. Peyre. PeterBrian. Szopen. Yworo. Andre Engels. Cwolfsheep. ZeroOne. Gamester17. Trcunning. Page Up. Dipskinny. Jnmoyne. Pearle. LinaMishima. Torswin. BobM. Wiki alf.253. Kbdank71. Linuxbeak. Mion. Pbannister. Edward. Bobrayner. Tokek. Maury Markowitz. Mipadi. Quest for Truth. Pczajkowski. Piotrus. Pcap. Balderdash707. IanOsgood. Barrylb. Stesch. Gsonnenf. Wernher. Wavelength. Raul654. Bramp. Liquiddatallc.org/w/index. Ultimatewisdom. Davidweiner23. VictorianMutant. Jeffshantz. Paxsimius. Ark. Chad Vander Veen. Chuck Marean. JamesBondMI6. Anonymous Cow. Roger Davies. Johntex. Nneonneo. Harman malhotra. Michael Rogers. Buster79. Ancheta Wis. Colonies Chris. Myanw. Rrburke. Husond.php?oldid=447344005  Contributors: 12 Noon. Cyrius. Devgus. Jder. Iamfscked. Miym. Aomsyz. Mjr162006. Natishalom. Miym. Haeleth. Nuno Tavares. Rjwilmsi. NickW557. RichardVeryard. Alchemist Jack. Stoakron97.org/w/index. Beland. Shoeofdeath. Taxman. CSWarren. OrgasGirl. Derek Ross. Guy Harris. Statsone. MainBody. Koffieyahoo. CES1596. Evil Twin Skippy. Lightmouse. Jonkerz. PeterStJohn.wikipedia. Radagast83. Caltas. Mr Stephen. Jaqiefox. IanBrock. Ruud Koot. Rosiestep. Rjwilmsi. Beland. Meteshjj. Closedmouth. Hogne. Champlax. EoGuy. Smitty. Tomtzigt. Tim1357. JHunterJ. AlienZen. 
Hibernian. FrummerThanThou. Thecheesykid. Xeworlebi. Stbalbach. Gfoley4. JteB. Kjkolb. Jeh. TwoOneTwo. clown will eat me.php?oldid=394780368  Contributors: Andreas Kaufmann. 4 anonymous edits Portable object (computing)  Source: http://en. JamesBWatson. Pohl. Wikicojamc. Ojw.122. SteveSims. LogicDictates. Rwwww. Jasper Deng. SCOnline. SJP. Unknown. Frap. ViveCulture. Namazu-tron. Elvarg. 50 anonymous edits Utility computing  Source: http://en. Joffeloff. Giraffedata. Svjson. Humble Guy. Nickg. Grzegorz Dubicki. SkeletorUK. Scovetta. Hervegirod. Marj Tiefert. Miym. Gererd+. Runtime. Bd84. Szopen. Muéro. Boul22435. RedWolf. Editor at Large. Vaceituno. Anastasios. Emmess2005. LilHelpa. Rainald62. Raryel.wikipedia. Ryanaxp. Vssun. Mgreenbe. Sorenriise. John. Heath. Tangotango. Kristiewells. Lsb34. Kbtarc. Philip Trueman. Tpbradbury.94.wikipedia. Cdamama.org/w/index. Oldhamlet. Truaxd. Shandris. Bovineone. Arthur a stevens. Arto B. Paradiseo. Slarson. DanielSHaischt. Doc Daneeka. Robertvan1. TheCoffee. Duckbill. Another-anomaly. Kuru. Toddst1. Mani1. FuFoFuEd. Miym. LittleOldMe. Gatemansgc. Kant66. T0ny. Dialectric. Merope. CarlHewitt. Devilrose. お む こ さ ん 志 望 . Jbaxt7. Dyl. Orderud. Searchme. Phr. Metapsyche. Inkling. Nethgirb.t. Tovojolo. ST47. Martin451. Kompere. Miym. Mhdrateln. Jedonnelley. 13 anonymous edits Semantic Web Data Space  Source: http://en. Wolph. Shaw SANAR. Iridescent. Muijz. Gioto. Arancaytar. Soveran. D. Zphelj. Rama's Arrow. Evil Monkey. 5950FX. Shinkansen Fan. Kavehmb. Strait. Komap. Squideshi. Miym. Piet Delport. SMC.org/w/index. Bryan Derksen. Woohookitty. VB. Eugene-elgato. Philippe Nicolai-Dashwood. Akata. Agentbla.. Ali.org/w/index. Cswierkowski. Plest. Finchsnows. Matt Deres. Remag Kee. Icey. 45 anonymous edits Remote Component Environment  Source: http://en. T-bonham. RJFJR. Heron. Mdd.php?oldid=440634049  Contributors: 667NotB. Myscrnnm. Tempshill. Jehochman. Mortense. Stephenb. Brevity. Katieh5584. Russell. Proofreader. Rhobite. 
FrenchIsAwesome. Sdornan. Wknight94. Kgfleischmann. Anwar saadat. RJaguar3. DerHexer. Harley peters. Artlondon. Ww. Leszek Jańczuk. Nirvana888. Thadius856. SymlynX. Stevertigo. JoeBruno. Sanket ar. Zahid Abdassabur. Chowbok.wikipedia. Jebba. Arvindn. Scm83x. Robert Merkel. Koyaanis Qatsi. Elcombe2000. Arch dude.wikipedia. Koavf. Ongar the World-Weary. TakuyaMurata. 29 anonymous edits TreadMarks  Source: http://en. Er Komandante. Yaos. Ericoides. Kku. Fuhghettaboutit. Pgk.wikipedia. A. Chillum. Phil Sandifer. Miym. Der Falke. Torla42. Qwertyus. Arkanosis. 1137 anonymous edits Terrastore  Source: http://en. Cec. Applicationit. Vanished user 39948282. Rich Farmbrough. History2007. CSWarren. Chuunen Baka. Erik the Appreciator. TheRanger. Leadmelord.wikipedia. Winterst. Topbanana. Monedula. Gunter. Chocmah. Rich Farmbrough. JH-man. CredoFromStart. Slathering. Jni. Scott McNay. DavidCary. Vincentwilliamse. Kleinheero. Mojska. J. Billbrixton. Wang. RoyBoy. Circeus. Yakudza. Headsrfun. Jschwa1. Michael Hardy.php?oldid=400145051  Contributors: Lismoreboy. Wikipelli.wikipedia. MER-C. Jessvj. Gadomski. Calliopejen1. Colorvision. Gilliam. Artaxiad.php?oldid=368256529  Contributors: Andreas Kaufmann. TAnthony. Suruena. Jjmerelo. AxelBoldt. Funandtrvl. Sharon08tam. HenryLi.wikipedia. WikiTome. Romanm. 6 anonymous edits Tuple space  Source: http://en. Shainer. Qwerty Binary. Jncraton. Tuqui. Miym. SriMesh. Kevin Saff. Johnlogic.org/w/index. Dbroadwell. Wwoods. Damian Yerrick. Miym. Capricorn42. GregRobson. Bjh21. Corvus cornix. Canens. RichardVeryard.php?oldid=438526958  Contributors: Abdull. C. Ehn. 2 anonymous edits RM-ODP  Source: http://en. Samuel Curtis. 16 anonymous edits Supercomputer  Source: http://en.org/w/index. Karim ElDeeb. Bevo. Jaybuffington. MER-C. Epatrocinio. Nutiketaiel. Khcw77. Mdd. Materialscientist. Melsaran.wikipedia. Frap. Miami33139. Bongwarrior. La Pianista. Nic Doye. AdSR. Duffman. Wikibofh. Richfife. Bergin. Jj05y. Robert Brockway. Sleske. 
Gino chariguan. Nojhan.php?oldid=385961828  Contributors: BD2412. Mannjc. Tnm8.org/w/index. Christopher.wikipedia. Dthomsen8. Cpiral. JCLately. Tempodivalse. Elsajoy. Ali azad bakhsh. Ronz. Tide rolls.64. Bk0. Sonicology. Kubigula. Conversion script. Autarchprinceps. List of marijuana slang terms. Hu12. Miym. Air55. Antonielly. Dmuth.wikipedia.php?oldid=447465905  Contributors: Aaaidan. Kvng. Rainer Wasserfuhr. 1 anonymous edits Stub (distributed computing)  Source: http://en. Hitman012. Gioto. Ævar Arnfjörð Bjarmason.org/w/index. Ciphergoth. Zrs 12. Teryx. Hello Control. KarlKarlson. Vaibhavahlawat1913. Poccil. SchfiftyThree. SheckyLuvr101. Bovineone. Pretzelpaws. JonHarder. Quuxplusone.org/w/index. Rjwilmsi.php?oldid=443129974  Contributors: A5b. Emperorbma. Alcachi. History2007. Kozuch. RedWolf. Maddiekate. Margana. Sfraza. Thegreenj. Jpahullo. Mmason56. MrMambo. Adam850. Hu12. Grutness. Gershwinrb. Miym.wikipedia. Dto. Etxrge. Nanobri. Gronau. PrologFan. Calaka. Jmarcelo95. Ludovic. J Milburn. Sorenriise. Soumya92. Bender235. Mwtoews. Fiftyquid. CanisRufus. Keraunos. Arakunem. Dgies. Sukiari. Maxim. Barmijo. み れ で ぃ ー . Mmernex. Miym. Violetriga. Matt Crypto. Diannaa. Karmiq. Randhirreddy. Samtheboy. Khazadum. Gz33. Sean D Martin. Ripper234. Nagle.moulavi. Frap. Voidxor. KFP. Nick125. Adam M. KathrynLybarger. Raistolo.org/w/index. Pearle. Delirium. Mikeroodeus. Marangog. Andy M. Shell Kinney.wikipedia. Davedx.org/w/index. Sannse. Rich Farmbrough. Frieda. Krtek2125. GreatTurtle. Yarnalgo. Ttiotsw. Eastmain. Mskfisher. MK8. Marcoacostareyes. Hede2000.xxx. ClementSeveillac. Jonah Stein. CharlesGillingham. Quietust. Newone. Jeff3000.wikipedia. Old Death. Imroy. Bodhran. Jasper Chua. Leehounshell.Article Sources and Contributors Overlay network  Source: http://en. LONGSHOT. Yurivict. Vsm01.php?oldid=400929191  Contributors: C777. Kubanczyk. EEPROM Eagle. Cwolfsheep. Tony Fox. Agentbla.php?oldid=429896483  Contributors: Bpeel. Laug. Fijal. Adamd1008. Dale Arnett. 
Modify. Stuartyeates. MonoAV. Josh Parris. Chych. Avi4now. Javawizard. Edal. AnonGuy. Fuzheado. Methedras. Doc Daneeka. L Kensington. Calmer Waters. Wowiamgood123. ThinkEnemies. Maycrow. Rror. Trainor. MichaelsProgramming. Ryan Roos. Simetrical. CosineKitty. K25125. Liao.php?oldid=400489047  Contributors: Arleyl. Equendil. Ryansca. Zodon. Modulatum. Neilc. Heron. 48 anonymous edits Smart variables  Source: http://en. SamJohnston. 32 anonymous edits Parasitic computing  Source: http://en. Spayrard. Dck7777. Intgr. Manop.php?oldid=396957096  Contributors: AntonioVallecillo. Gwern. DragonflySixtyseven. Miym. Peturingi. Delta759. Bkkeim2000.php?oldid=428682338  Contributors: Bovineone. Harmil. Ixfd64. Thumperward. Emre D. Samrawlins.. Darkstar1st. RedWolf. Tim@.php?oldid=432008745  Contributors: 4twenty42o. Monobi. Mange01. Ramu50. Coffeespoon. Quadell. Shieldforyoureyes. Onorem. Datacenterguy. Khalid hassani. DancingPenguin. Propaniac. Krallja. Fishnet37222. RandomStringOfCharacters. Jsbillings. Edward. Efa. Johndelorean.wikipedia. Ludraman. DarlingMarlin. Lulzfish. Everyking. Thatdumisdum.org/w/index. Splash. Cowpriest2. James086.php?oldid=431188269  Contributors: BahramH. MrOllie. Dekisugi. Toussaint. Tawker. Roche-Kerr. I dream of horses. Jahiegel. Lowellian. Herbee. Angwill. The Anome. New guestentry. D6. Liao. Platyk.wikipedia. Can't sleep. Vroman. Lionelt. Тиверополник. Nono64. Joseph Solis in Australia. Mean as custard. MikhailGusarov. Slark. Patstuart. Jvs. Abce2. RossPatterson. Nick Drake. OrbitalAnalyst. HazeNZ. 9 anonymous edits Transparency (human-computer interaction)  Source: http://en. 130. Loyalist Cannons. Jayen466. Miym. Gbleem. Cometstyles.borders. MIT Trekkie. Somatrix. Damian Yerrick. Jerryobject. TNLNYC. Seuakei. Beetstra.).delanoy. Nurg. RadiantRay. Richard Arthur Norton (1958. Qrex123. Iluvcapra. Infrangible. Gerry Ashton. Belovedfreak. Chanakyathegreat. Igottalisp. Aufidius. Owenozier. Jharrell.booth. Bovineone. Epbr123.wiki. X42bn6. 
62.php?oldid=446536073  Contributors: An0n. Fredrik. Kmerenkov. Th1rt3en.

Royalguard11. GraemeMcRae. Wojteklw. Skysmith. Softtest123. Marvinandmilo. Shire Reeve. Ronz.php?oldid=440045994  Contributors: Bovineone. FatalError. Paul Foxworthy. Miym. Inc ru. Soggyc. THB. Rare4. ReedHedges. SteveLoughran. ShellyT123. Salad Days. Suyambuvel. CeciliaPang. Snrjefe. Softguyus. 4 anonymous edits Virtual Object System  Source: http://en. Licor. SamJohnston. Chip Zero. Bluemask.org/w/index. Guy Harris. Ycagen.wikipedia. Pearle.wikipedia. Mild Bill Hiccup. Thumperward. Weregerbil.org/w/index. RodneyMyers. 89 anonymous edits Virtual Machine Interface  Source: http://en. Verbamundi. Rich Farmbrough. Dlrohrer2003. 18 anonymous edits 162 .wikipedia. Soumyasch.Article Sources and Contributors Roman Doroshenko. MathieuDutourSikiric. Rwwww. The Anome.php?oldid=332950032  Contributors: ArthurDenture. Davepape.org/w/index. UncleDouggie. 6 anonymous edits Volunteer computing  Source: http://en. Rich Farmbrough. Shenme.php?oldid=434581986  Contributors: Avalon. AzzAz. Wmahan. Tlausser. SpigotMap. StoneIsle. Tobias Bergemann. Balrog-kun. Miym. Posix memalign.

Image Sources, Licenses and Contributors

[For each image reproduced in this book, this appendix lists the source file page (of the form http://en.wikipedia.org/w/index.php?title=File:...), its license, and its contributors.]

org/ licenses/ by-sa/ 3.License 164 License Creative Commons Attribution-Share Alike 3. 0/ .0 Unported http:/ / creativecommons.

The Knowledge Solution. Stop Searching, Stand Out and Pay Off. The #1 ALL ENCOMPASSING Guide to MapReduce.

An Important Message for ANYONE who wants to learn about MapReduce Quickly and Easily...

"Here's Your Chance To Skip The Struggle and Master MapReduce, With the Least Amount of Effort, In 2 Days Or Less..."

MapReduce is a software framework introduced by Google in 2004 to support distributed computing on large data sets on clusters of computers. Parts of the framework are patented in some countries.

The framework is inspired by the map and reduce functions commonly used in functional programming, although their purpose in the MapReduce framework is not the same as their original forms.

MapReduce libraries have been written in C++, C#, Erlang, Java, OCaml, Perl, Python, PHP, Ruby, F#, R and other programming languages.

Get the edge, learn EVERYTHING you need to know about MapReduce, and ace any discussion, proposal and implementation with the ultimate book – guaranteed to give you the education that you need, faster than you ever dreamed possible!

The information in this book can show you how to be an expert in the field of MapReduce.

Are you looking to learn more about MapReduce? You're about to discover the most spectacular gold mine of MapReduce materials ever created; this book is a unique collection to help you become a master of MapReduce.

This book is your ultimate resource for MapReduce. Here you will find the most up-to-date information, analysis, background and everything you need to know.

In easy to read chapters, with extensive references and links, you will get to know all there is to know about MapReduce right away. A quick look inside: MapReduce, Aggregate Level Simulation Protocol, Amazon Relational Database Service, Amazon SimpleDB, Amoeba distributed operating system, Art of War Central, Autonomic Computing, Citrusleaf database, Client–server model, Code mobility, Connection broker, CouchDB, Data Diffusion Machine, Database-centric architecture, Distributed application, Distributed data flow, Distributed database, Distributed design patterns, Distributed Interactive Simulation, Distributed lock manager, Distributed memory, Distributed object, Distributed shared memory, Distributed social network, Dryad (programming), Dynamic infrastructure, Edge computing, Explicit multi-threading, Fabric computing, Fallacies of Distributed Computing, Fragmented object, Gemstone (database), High level architecture (simulation), HyperText Computer, IBZL, Kayou, Live distributed object, Master/slave (technology), Membase, Message consumer, Message passing, Messaging pattern, Mobile agent, MongoDB, Multi-master replication, Multitier architecture, Network cloaking, Opaak, Open architecture computing environment, Open Computer Forensics Architecture, OrientDB, Overlay network, Paradiseo, Parasitic computing, PlanetSim, Portable object (computing), Redis (data store), Remote Component Environment, Request Based Distributed Computing, RM-ODP, Semantic Web Data Space, Service-oriented distributed applications, Shared memory, Smart variables, Stub (distributed computing), Supercomputer, Terrastore, Transparency (human-computer interaction), TreadMarks, Tuple space, Utility computing, Virtual Machine Interface, Virtual Object System, Volunteer computing...and Much, Much More!

This book explains in-depth the real drivers and workings of MapReduce. It reduces the risk of your technology, time and resources investment decisions by enabling you to compare your understanding of MapReduce with the objectivity of experienced professionals.

Grab your copy now, while you still can.
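MapReduce's data flow — a user-supplied map function that emits intermediate key/value pairs, a shuffle step that groups those pairs by key, and a reduce function applied once per key — can be sketched in a few lines of Python. This is only a toy, single-machine illustration of the programming model (the `map_reduce`, `mapper`, and `reducer` names are this sketch's own), not Google's distributed framework:

```python
from collections import defaultdict

def map_reduce(inputs, mapper, reducer):
    """Minimal in-memory MapReduce: run the mapper over every input,
    group the emitted (key, value) pairs by key (the 'shuffle' step),
    then apply the reducer once per key."""
    groups = defaultdict(list)
    for item in inputs:
        for key, value in mapper(item):        # map phase
            groups[key].append(value)          # shuffle / group-by-key
    return {k: reducer(k, vs) for k, vs in groups.items()}  # reduce phase

# The canonical example: word count.
def mapper(line):
    for word in line.split():
        yield word.lower(), 1                  # emit (word, 1) per occurrence

def reducer(word, counts):
    return sum(counts)                         # total occurrences of the word

lines = ["the quick brown fox", "the lazy dog", "the fox"]
print(map_reduce(lines, mapper, reducer))
# → {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

The real framework runs the map and reduce phases in parallel across a cluster and handles input partitioning, disk-based shuffling, and fault tolerance; the data flow, however, is the same as in this sketch.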
