MapReduce: High-impact Strategies - What You Need to Know: Definitions, Adoptions, Impact, Benefits, Maturity, Vendors

Published by Emereo Publishing
The Knowledge Solution. Stop Searching, Stand Out and Pay Off. The #1 ALL ENCOMPASSING Guide to MapReduce.

An Important Message for ANYONE who wants to learn about MapReduce Quickly and Easily...

"Here's Your Chance To Skip The Struggle and Master MapReduce, With the Least Amount of Effort, In 2 Days Or Less..."

MapReduce is a software framework introduced by Google in 2004 to support distributed computing on large data sets on clusters of computers. Parts of the framework are patented in some countries.

The framework is inspired by the map and reduce functions commonly used in functional programming, although their purpose in the MapReduce framework is not the same as their original forms.

MapReduce libraries have been written in C++, C#, Erlang, Java, OCaml, Perl, Python, PHP, Ruby, F#, R and other programming languages.

Get the edge, learn EVERYTHING you need to know about MapReduce, and ace any discussion, proposal and implementation with the ultimate book – guaranteed to give you the education that you need, faster than you ever dreamed possible!

The information in this book can show you how to be an expert in the field of MapReduce.

Are you looking to learn more about MapReduce? You're about to discover the most spectacular gold mine of MapReduce materials ever created; this book is a unique collection to help you become a master of MapReduce.

This book is your ultimate resource for MapReduce. Here you will find the most up-to-date information, analysis, background and everything you need to know.

In easy to read chapters, with extensive references and links to get you to know all there is to know about MapReduce right away. A quick look inside: MapReduce, Aggregate Level Simulation Protocol, Amazon Relational Database Service, Amazon SimpleDB, Amoeba distributed operating system, Art of War Central, Autonomic Computing, Citrusleaf database, Client–server model, Code mobility, Connection broker, CouchDB, Data Diffusion Machine, Database-centric architecture, Distributed application, Distributed data flow, Distributed database, Distributed design patterns, Distributed Interactive Simulation, Distributed lock manager, Distributed memory, Distributed object, Distributed shared memory, Distributed social network, Dryad (programming), Dynamic infrastructure, Edge computing, Explicit multi-threading, Fabric computing, Fallacies of Distributed Computing, Fragmented object, Gemstone (database), HyperText Computer, High level architecture (simulation), IBZL, Kayou, Live distributed object, Master/slave (technology), Membase, Message consumer, Message passing, Messaging pattern, Mobile agent, MongoDB, Multi-master replication, Multitier architecture, Network cloaking, Opaak, Open architecture computing environment, Open Computer Forensics Architecture, OrientDB, Overlay network, Paradiseo, Parasitic computing, PlanetSim, Portable object (computing), Redis (data store), Remote Component Environment, Request Based Distributed Computing, RM-ODP, Semantic Web Data Space, Service-oriented distributed applications, Shared memory, Smart variables, Stub (distributed computing), Supercomputer, Terrastore, Transparency (human-computer interaction), TreadMarks, Tuple space, Utility computing, Virtual Machine Interface, Virtual Object System, Volunteer computing...and Much, Much More!

This book explains in-depth the real drivers and workings of MapReduce. It reduces the risk of your technology, time and resources investment decisions by enabling you to compare your understanding of MapReduce with the objectivity of experienced professionals - Grab your copy now, while you still can.

More info:

Published by: Emereo Publishing on Sep 09, 2011
Copyright: Traditional Copyright: All rights reserved
List Price: $39.95



Sections

  • Aggregate Level Simulation Protocol
  • Amazon Relational Database Service
  • Amazon SimpleDB
  • Amoeba distributed operating system
  • Art of War Central
  • Autonomic Computing
  • Citrusleaf database
  • Client–server model
  • Code mobility
  • Connection broker
  • CouchDB
  • Data Diffusion Machine
  • Database-centric architecture
  • Distributed application
  • Distributed data flow
  • Distributed database
  • Distributed design patterns
  • Distributed Interactive Simulation
  • Distributed lock manager
  • Distributed memory
  • Distributed object
  • Distributed shared memory
  • Distributed social network
  • Dryad (programming)
  • Dynamic infrastructure
  • Edge computing
  • Explicit multi-threading
  • Fabric computing
  • Fallacies of Distributed Computing
  • Fragmented object
  • Gemstone (database)
  • HyperText Computer
  • High level architecture (simulation)
  • IBZL
  • Kayou
  • Live distributed object
  • Master/slave (technology)
  • Membase
  • Message consumer
  • Message passing
  • Messaging pattern
  • Mobile agent
  • MongoDB
  • Multi-master replication
  • Multitier architecture
  • Network cloaking
  • Opaak
  • Open architecture computing environment
  • Open Computer Forensics Architecture
  • OrientDB
  • Overlay network
  • Paradiseo
  • Parasitic computing
  • PlanetSim
  • Portable object (computing)
  • Redis (data store)
  • Remote Component Environment
  • Request Based Distributed Computing
  • RM-ODP
  • Semantic Web Data Space
  • Service-oriented distributed applications
  • Shared memory
  • Smart variables
  • Stub (distributed computing)
  • Supercomputer
  • Terrastore
  • Transparency (human-computer interaction)
  • TreadMarks
  • Tuple space
  • Utility computing
  • Virtual Machine Interface

MapReduce

IN-DEPTH: THE REAL DRIVERS AND WORKINGS

Kevin Roebuck

REDUCES THE RISK OF YOUR TECHNOLOGY, TIME AND RESOURCES INVESTMENT DECISIONS

ENABLING YOU TO COMPARE YOUR UNDERSTANDING WITH THE OBJECTIVITY OF EXPERIENCED PROFESSIONALS

High-impact Strategies - What You Need to Know: Definitions, Adoptions, Impact, Benefits, Maturity, Vendors

Topic relevant selected content from the highest rated entries, typeset, printed and shipped. Combine the advantages of up-to-date and in-depth knowledge with the convenience of printed books. A portion of the proceeds of each book will be donated to the Wikimedia Foundation to support their mission: to empower and engage people around the world to collect and develop educational content under a free license or in the public domain, and to disseminate it effectively and globally. The content within this book was generated collaboratively by volunteers. Please be advised that nothing found here has necessarily been reviewed by people with the expertise required to provide you with complete, accurate or reliable information. Some information in this book may be misleading or simply wrong. The publisher does not guarantee the validity of the information found here. If you need specific advice (for example, medical, legal, financial, or risk management) please seek a professional who is licensed or knowledgeable in that area. Sources, licenses and contributors of the articles and images are listed in the section entitled “References”. Parts of the books may be licensed under the GNU Free Documentation License. A copy of this license is included in the section entitled “GNU Free Documentation License”. All used third-party trademarks belong to their respective owners.

Contents

Articles: MapReduce through Volunteer computing (the full list of articles is given in the Sections list above).

References: Article Sources and Contributors; Image Sources, Licenses and Contributors.

Article Licenses: License.

MapReduce

MapReduce is a software framework introduced by Google in 2004 to support distributed computing on large data sets on clusters of computers.[1] Parts of the framework are patented in some countries.[2]

The framework is inspired by the map and reduce functions commonly used in functional programming,[3] although their purpose in the MapReduce framework is not the same as their original forms.[4] MapReduce libraries have been written in C++, C#, Erlang, Java, OCaml, Perl, Python, PHP, Ruby, F#, R and other programming languages.

Overview

MapReduce is a framework for processing huge datasets on certain kinds of distributable problems using a large number of computers (nodes), collectively referred to as a cluster (if all nodes use the same hardware) or as a grid (if the nodes use different hardware). Computational processing can occur on data stored either in a filesystem (unstructured) or within a database (structured).

"Map" step: The master node takes the input, partitions it up into smaller sub-problems, and distributes those to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes that smaller problem, and passes the answer back to its master node.

"Reduce" step: The master node then takes the answers to all the sub-problems and combines them in some way to get the output – the answer to the problem it was originally trying to solve.

MapReduce allows for distributed processing of the map and reduction operations. Provided each mapping operation is independent of the others, all maps can be performed in parallel – though in practice it is limited by the data source and/or the number of CPUs near that data. Similarly, a set of 'reducers' can perform the reduction phase, provided all outputs of the map operation that share the same key are presented to the same reducer at the same time. While this process can often appear inefficient compared to algorithms that are more sequential, MapReduce can be applied to significantly larger datasets than "commodity" servers can handle – a large server farm can use MapReduce to sort a petabyte of data in only a few hours. The parallelism also offers some possibility of recovering from partial failure of servers or storage during the operation: if one mapper or reducer fails, the work can be rescheduled – assuming the input data is still available.

Logical view

The Map and Reduce functions of MapReduce are both defined with respect to data structured in (key, value) pairs. Map takes one pair of data with a type in one data domain, and returns a list of pairs in a different domain:

Map(k1,v1) → list(k2,v2)

The Map function is applied in parallel to every item in the input dataset. This produces a list of (k2,v2) pairs for each call. After that, the MapReduce framework collects all pairs with the same key from all lists and groups them together, thus creating one group for each one of the different generated keys.

The Reduce function is then applied in parallel to each group, which in turn produces a collection of values in the same domain:

Reduce(k2, list(v2)) → list(v3)

Each Reduce call typically produces either one value v3 or an empty return, though one call is allowed to return more than one value. The returns of all calls are collected as the desired result list.

Thus the MapReduce framework transforms a list of (key, value) pairs into a list of values. This behavior is different from the typical functional programming map and reduce combination, which accepts a list of arbitrary values and returns one single value that combines all the values returned by map.
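To make the logical view concrete, what follows is a small illustrative sketch in Python, not part of the original article; the names run_mapreduce, mapper and reducer are invented for the illustration and do not correspond to the API of any particular MapReduce library.

    from collections import defaultdict

    def run_mapreduce(inputs, mapper, reducer):
        # "Map" step: apply the mapper to every (k1, v1) input pair,
        # collecting the emitted (k2, v2) pairs.
        intermediate = defaultdict(list)
        for k1, v1 in inputs:
            for k2, v2 in mapper(k1, v1):
                intermediate[k2].append(v2)
        # Grouping step: all values that share a key k2 end up in one
        # group, one group per distinct generated key.
        # "Reduce" step: apply the reducer once per group and collect
        # the returned values as the result list.
        results = []
        for k2, values in intermediate.items():
            results.extend(reducer(k2, values))
        return results

In a real framework the map calls, the grouping and the reduce calls are distributed over many nodes; this sketch only mirrors the data flow Map(k1,v1) → list(k2,v2) followed by Reduce(k2, list(v2)) → list(v3) on a single machine.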

Example

The canonical example application of MapReduce is a process to count the appearances of each different word in a set of documents:

    void map(String name, String document):
      // name: document name
      // document: document contents
      for each word w in document:
        EmitIntermediate(w, "1");

    void reduce(String word, Iterator partialCounts):
      // word: a word
      // partialCounts: a list of aggregated partial counts
      int sum = 0;
      for each pc in partialCounts:
        sum += ParseInt(pc);
      Emit(word, AsString(sum));

Here, each document is split into words, and each word is counted initially with a "1" value by the Map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to Reduce, thus this function just needs to sum all of its input values to find the total appearances of that word.

Dataflow

The frozen part of the MapReduce framework is a large distributed sort. It is necessary but not sufficient to have implementations of the map and reduce abstractions in order to implement MapReduce. Distributed implementations of MapReduce require a means of connecting the processes performing the Map and Reduce phases. This may be a distributed file system. Other options are possible, such as direct streaming from mappers to reducers, or for the mapping processors to serve up their results to reducers that query them.

The hot spots, which the application defines, are:
  • an input reader
  • a Map function
  • a partition function
  • a compare function
  • a Reduce function
  • an output writer
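For comparison with the pseudocode above, here is a runnable word count in Python written against the illustrative run_mapreduce helper sketched earlier (again an illustration, not any framework's actual runtime):

    def wc_mapper(name, document):
        # Emit (word, 1) for every word in the document.
        for word in document.split():
            yield word, 1

    def wc_reducer(word, partial_counts):
        # All counts for one word arrive together; sum them.
        yield word, sum(partial_counts)

    docs = [("doc1", "the quick brown fox"), ("doc2", "the lazy dog")]
    print(run_mapreduce(docs, wc_mapper, wc_reducer))
    # e.g. [('the', 2), ('quick', 1), ('brown', 1), ('fox', 1), ('lazy', 1), ('dog', 1)]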

Input reader

The input reader divides the input into appropriate size 'splits' (in practice typically 16MB to 128MB) and the framework assigns one split to each Map function. The input reader reads data from stable storage (typically a distributed file system) and generates key/value pairs. A common example will read a directory full of text files and return each line as a record.

Map function

Each Map function takes a series of key/value pairs, processes each, and generates zero or more output key/value pairs. The input and output types of the map can be (and often are) different from each other. If the application is doing a word count, the map function would break the line into words and output a key/value pair for each word. Each output pair would contain the word as the key and "1" as the value.

Partition function

Each Map function output is allocated to a particular reducer by the application's partition function for sharding purposes. The partition function is given the key and the number of reducers and returns the index of the desired reduce. A typical default is to hash the key and modulo the number of reducers. It is important to pick a partition function that gives an approximately uniform distribution of data per shard for load balancing purposes, otherwise the MapReduce operation can be held up waiting for slow reducers to finish.

Between the map and reduce stages, the data is shuffled (parallel-sorted / exchanged between nodes) in order to move the data from the map node that produced it to the shard in which it will be reduced. The shuffle can sometimes take longer than the computation time depending on network bandwidth, CPU speeds, data produced and time taken by map and reduce computations.

Comparison function

The input for each Reduce is pulled from the machine where the Map ran and sorted using the application's comparison function.

Reduce function

The framework calls the application's Reduce function once for each unique key in the sorted order. The Reduce can iterate through the values that are associated with that key and output 0 or more values. In the word count example, the Reduce function takes the input values, sums them and generates a single output of the word and the final sum.

Output writer

The Output Writer writes the output of the Reduce to stable storage, usually a distributed file system.

Distribution and reliability

MapReduce achieves reliability by parceling out a number of operations on the set of data to each node in the network. Each node is expected to report back periodically with completed work and status updates. If a node falls silent for longer than that interval, the master node (similar to the master server in the Google File System) records the node as dead and sends out the node's assigned work to other nodes. Individual operations use atomic operations for naming file outputs as a check to ensure that there are not parallel conflicting threads running. When files are renamed, it is possible to also copy them to another name in addition to the name of the task (allowing for side-effects).
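Returning to the partition function described above, the following is a minimal sketch of the typical "hash the key and modulo the number of reducers" default in Python; the name default_partition is invented for the illustration and is not the API of any particular framework:

    def default_partition(key, num_reducers):
        # Map a key to one of num_reducers shards; equal keys always
        # land in the same shard, which is what guarantees that all
        # values for a key reach the same Reduce call.
        return hash(key) % num_reducers

    # e.g. route the word "fox" to one of 4 reducers
    shard = default_partition("fox", 4)

A skewed partition function (one that sends most keys to a few shards) leaves some reducers idle while others become stragglers, which is why an approximately uniform distribution per shard matters for load balancing.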

The reduce operations operate much the same way. Because of their inferior properties with regard to parallel operations, the master node attempts to schedule reduce operations on the same node, or in the same rack as the node holding the data being operated on. This property is desirable as it conserves bandwidth across the backbone network of the datacenter.

Implementations are not necessarily highly-reliable. For example, in Hadoop the NameNode is a single point of failure for the distributed filesystem. MapReduce's stable inputs and outputs are usually stored in a distributed file system. The transient data is usually stored on local disk and fetched remotely by the reducers.

Uses

MapReduce is useful in a wide range of applications including: distributed grep, distributed sort, web link-graph reversal, term-vector per host, web access log stats, inverted index construction, document clustering, machine learning,[5] and statistical machine translation. Moreover, the MapReduce model has been adapted to several computing environments like multi-core and many-core systems,[6][7] desktop grids,[8] volunteer computing environments,[9] dynamic cloud environments,[10] and mobile environments.[11]

At Google, MapReduce was used to completely regenerate Google's index of the World Wide Web. It replaced the old ad hoc programs that updated the index and ran the various analyses.[12]

Criticism

David DeWitt and Michael Stonebraker, experts in parallel databases and shared-nothing architectures, have been critical of the breadth of problems that MapReduce can be used for.[13] They called its interface too low-level and questioned whether it really represents the paradigm shift its proponents have claimed it is.[14] They challenged the MapReduce proponents' claims of novelty, citing Teradata as an example of prior art that has existed for over two decades. They also compared MapReduce programmers to Codasyl programmers, noting both are "writing in a low-level language performing low-level record manipulation."[14] MapReduce's use of input files and lack of schema support prevents the performance improvements enabled by common database system features such as B-trees and hash partitioning, though projects such as Pig (or PigLatin) and Sawzall are starting to address these problems.

Another article, by Greg Jorgensen, rejects these views.[15] Jorgensen asserts that DeWitt and Stonebraker's entire analysis is groundless as MapReduce was never designed nor intended to be used as a database.

DeWitt and Stonebraker have subsequently published a detailed benchmark study comparing performance of MapReduce and RDBMS approaches on several specific problems.[16] They concluded that databases offer real advantages for many kinds of data use, especially on complex processing or where the data is used across an enterprise, but that MapReduce may be easier for users to adopt for simple or one-time processing tasks. They have published the data and code used in their study to allow other researchers to do comparable studies.

Google has been granted a patent on MapReduce. However, there have been claims that this patent should not have been granted because MapReduce is too similar to existing products. For example, map and reduce functionality can be very easily implemented in Oracle's PL/SQL database oriented language.[17]
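As a further illustration of the applications listed under Uses above, distributed grep fits the same mold as the word-count sketch shown earlier: the map function emits a line only if it matches a pattern and the reduce function is simply the identity. A hedged sketch in Python, not tied to any specific framework:

    import re

    PATTERN = re.compile(r"error")   # illustrative search pattern

    def grep_mapper(filename, line):
        # Emit the line (keyed by file name) only if it matches.
        if PATTERN.search(line):
            yield filename, line

    def grep_reducer(filename, lines):
        # Identity reduce: pass the matching lines through unchanged.
        for line in lines:
            yield filename, line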

"More patent nonsense — Google MapReduce" (http:/ / www. acm. "MapReduce: A major step backwards" (http:/ / databasecolumn. google.com. pp. • Matt WIlliams (2009). Gary Bradski. uspto. 104. and Kunle Olukotun. [18] http:/ / graal. Among other things. HPDC'10. . brown. com/ 2010/ 02/ 11/ google-mapreduce-patent/ ).331: "System and method for efficient large-scale data processing " (http:/ / patft. D. 7. Retrieved 2010-01-11. [14] David DeWitt. References Specific references: [1] Google spotlights data center inner workings | Tech news blog . these batch routines analyze the latest Web pages and update Google's indexes. Google was running about 3. Ramanan Raghuraman.650. D."MapReduce: Simplified Data Processing on Large Clusters" (http:/ / labs. gov/ netacgi/ nph-Parser?Sect1=PTO1& Sect2=HITOFF& d=PALL& p=1& u=/ netahtml/ PTO/ srchnum. Qiong Luo. Dewitt. "Map-Reduce for Machine Learning on Multicore" (http:/ / www. Michael Stonebraker. Antonopoulos. 6. com/ map-reduce-machine-learning-multicore). baselinemag. and M. Naga K.000 computing jobs per day through MapReduce. Brown University. "Towards MapReduce for Desktop Grid Computing" (http:/ / ieeexplore.. [17] Curt Monash. "Mars: a MapReduce framework on graphics processors" (http:/ / portal. com/ General references: • Dean. Retrieved Apr. 2011. ens-lyon. 113–125. [6] Colby Ranger. acm. fr/ mapreduce/ [19] http:/ / mapreduce. from Microsoft [5] Cheng-Tao Chu. "As of October. G. edu/ projects/ mapreduce-vs-dbms/ ). Retrieved Apr. Paulson.com. and Christos Kozyrakis. representing thousands of machine-days. typicalprogrammer.331. [12] "How Google Works" (http:/ / www. org/ xpl/ freeabs_all. 13." [13] "Database Experts Jump the MapReduce Shark" (http:/ / typicalprogrammer. com/ content/ h17r882710314147/ ). "Evaluating MapReduce for Multi-core and Multiprocessor Systems" (http:/ / www. . [16] Andrew Pavlo. chapt.com/matt/2009/01/18/ understanding-mapreduce/). . "Misco: a MapReduce framework for mobile systems" (http:/ / portal. . .331) [3] "Our abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages. 2010. ist. Dimitrios Gunopulos.com. [11] Adam Dou. com/ evaluating-mapreduce-multi-core-and-multiprocessor-systems). Gary Bradski. Jeremy Archuleta. . Yi-An Lin. • MapReduce Users Groups [19] around the world. NIPS 2006. "Relational Database Experts Jump The MapReduce Shark" (http:/ / typicalprogrammer. Govindaraju. Springer. cfm?id=1839332). "A Comparison of Approaches to Large-Scale Data Analysis" (http:/ / database. Abadi. databasecolumn. by Jeffrey Dean and Sanjay Ghemawat.CNET News. S. Retrieved 2009-11-11. 5859& rep=rep1& type=pdf) — paper by Ralf Lämmel.MapReduce 5 Conferences and users groups • The First International Workshop on MapReduce and its Applications (MAPREDUCE'10) [18] was held with the HPDC conference and OGF'29 meeting in Chicago. 1.google. [9] Heshan Lin. Andrew Ng. Arun Penmetsa. 3PGCIC'10. . Systems and Applications.650. Domenico Talia. [10] Fabrizio Marozzo.00. E. Paolo Trunfio. HPDC'10.com (http:/ / news. dbms2. vertica. ISBN: 978-1-84996-240-7. htm& r=1& f=G& l=50& s1=7.html). Jeffrey & Ghemawat. Retrieved 2008-08-27. html) [2] US Patent 7. Madden. Stonebraker. Haiwu He and Fedak." . In: Cloud Computing: Principles. Wu-chun Feng. Sanjay (2004). ieee. willowgarage. springerlink. Mark Gardner. psu.Revisited" (http:/ / citeseerx. jsp?arnumber=5662789).. cfm?id=1851489). edu/ viewdoc/ download?doi=10. . dbms2. Xiaosong Ma. 
"A Peer-to-Peer Framework for Supporting MapReduce Applications in Dynamic Cloud Environments" (http:/ / www. Wenbin Fang. cfm?id=1454152). . baselinemag. L. html). YuanYuan Yu. from Google Labs [4] "Google's MapReduce Programming Model -. 2005. . PN. M. Sang Kyun Kim. . Retrieved 2010-03-07. A. [7] Bingsheng He. Chevalier.331& RS=PN/ 7. com/ ?p=16). org/ citation. com/ article2/ 0. acm. "MapReduce: Simplified Data Processing on Large Clusters" (http:// labs. Gillam (Editors). S. [15] Greg Jorgensen.650. Best Paper. 1. meetup. & OS=PN/ 7. J. com/ database-innovation/ mapreduce-a-major-step-backwards/ ). HPCA 2007. com/ 8301-10784_3-9955184-7. . Zhe Zhang. "Understanding Map-Reduce" (http://wordflows. Tuyong Wang.1985048. according to a presentation by Dean. "MOON: MapReduce On Opportunistic eNvironments" (http:/ / portal. cnet. Vana Kalogeraki.com.com/papers/mapreduce. PACT'08. org/ citation. com/ papers/ mapreduce. com/ ?p=16). org/ citation. . asp). J. Tuulos. N. IL.. willowgarage.1540. [8] Bing Tang.650. Taneli Mielikainen and Ville H. cs. Moca. Rasin. .

• "Tiled-MapReduce: Optimizing Resource Usages of Data-parallel Applications on Multicore with Tiling" (http:// ppi. from Stanford University.fudan. 2007.1723129) -- • • • • • paper by Yi Shan.jpdc. (This paper shows how to extend MapReduce for relational data processing. Ningyi Xu.2010. Beth Plale. and D. "A New Computation Model for Rack-Based Computing" (http://infolab. It presents the design and implementation of MapReduce on graphics processors. N.1247602) — paper by Hung-Chih Yang. Afrati. Tuyong Wang.com/2008/08/26/ why-mapreduce-matters-to-sql-data-warehousing/) — analysis related to the August.pdf) — paper by Colby Ranger.jpdc. Govindaraju. Journal of Parallel and Distributed Computing 71 (2011) 450-459. from University of Calabria. • "Scheduling divisible MapReduce computations " (http://dx. Sean Dorward. Arun Penmetsa. 1029–1040.hpca. and Christos Kozyrakis. Huazhong Yang (2010). Not published as of Nov 2009.springerlink. published in Cloud Computing: Principles. from Hong Kong University of Science and Technology. Wenbin Fang.cmp_mapreduce. 7. from Indiana University and Wilfred Li.edu/~christos/ publications/2007.ust. doi:10. but with additional implementation cost.12. "A Peer-to-Peer Framework for Supporting MapReduce Applications in Dynamic Cloud Environments" (http:// www. .stanford.acm. "Map-Reduce-Merge: Simplified Relational Data Processing on Large Clusters" (http://portal.1016/j.cn/_media/publications.1016/j. Qiong Luo.pdf) — paper by Marc de Kruijf and Karthikeyan Sankaralingam. Ullman. Proceedings of the 18th annual ACM/SIGDA international symposium on Field programmable gate arrays. Stott Parker.pdf?id=rong_chen&cache=cache) -.google.edu/~dekruijf/docs/mapreduce-cell. from Stanford University • "Why MapReduce Matters to SQL Data Warehousing" (http://www. PACT 2010. published in Proc. Gary Bradski. FPMR: MapReduce framework on FPGA (http://portal. Zhenhua Guo. Jing Yan. San Diego • "Interpreting the Data: Parallel Analysis with Sawzall" (http://labs.edu. It presents the Tiled-MapReduce programming model which optimizes resource usages of MapReduce applications on multicore environment using tiling strategy. Antonopoulos.004. Ali Dasdan.cse.hk/catalac/users/saven/ GPGPU/MapReduce/PACT08/171. Yiming Sun.MapReduce 6 External links Papers • "A Hierarchical Framework for Cross-Domain MapReduce Execution" (http://pti. L. Jeffrey D. Bo Wang. ISBN: 978-1-84996-240-7. pdf) — paper by Foto N.html) — paper by Rob Pike. from University of California. Robert Griesemer.paper by Rong Chen. Systems and Applications.stanford.org/beta/citation. Architecture" (http://pages.doi. Judy Qiu.edu/647742. Gillam (Editors).12. cfm?doid=1247480. Paolo Trunfio. of ACM SIGMOD. 2010. edu/546646.edu/~ullman/pub/mapred. Yu Wang. Haibo Chen and Binyu Zang from Fudan University.org/citation. Domenico Talia.004) -.iu.E. Springer. 113–125. This paper is an attempt to develop a general model in which one can compare algorithms for computing in an environment similar to what map-reduce expects. published in Proc.paper by Joanna Berlińska from Adam Mickiewicz University and Maciej Drozdowski from Poznan University of Technology. PACT 2008. It presents scheduling and performance model of MapReduce.ostrich-pact10.ist. Load Balancing (http://citeseer. from University of Wisconsin–Madison • "Mars: A MapReduce Framework on Graphics Processors" (http://www. published in Proc.com/papers/sawzall.cfm?id=1723112.wisc.2010. Ramanan Raghuraman. pp. 
2008 introduction of MapReduce/SQL integration by Aster Data Systems and Greenplum • "MapReduce for the Cell B.) FLuX: the Fault-tolerant (http://citeseer.psu. from Google Labs • "Evaluating MapReduce for Multi-core and Multiprocessor Systems" (http://csl.org/10. from Yahoo and UCLA.html) eXchange operator from UC Berkeley provides an integration of partitioned parallelism with process pairs.edu/pubs/ hierarchical-framework-cross-domain-mapreduce-execution) — paper by Yuan Luo.html).ist.acm.dbms2. Naga K. pp. Ruey-Lung Hsiao. This results in a more pipelined approach than Google's MapReduce with instantaneous failover. chapt.pdf) — paper by Bingsheng He. Sean Quinlan.cs. in FPGA '10.psu.com/content/h17r882710314147/) — paper by Fabrizio Marozzo.

  • "Nephele/PACTs: A Programming Model and Execution Framework for Web-Scale Analytical Processing" (http://stratosphere.eu/files/NephelePACTs_10.pdf) — paper by D. Battré, S. Ewen, F. Hueske, O. Kao, V. Markl, and D. Warneke from TU Berlin (http://www.tu-berlin.de/menue/home/parameter/en/), published in Proc. of ACM SoCC 2010. The paper introduces the PACT programming model, a generalization of MapReduce, developed in the Stratosphere (http://www.stratosphere.eu) research project.
  • "MapReduce and PACT - Comparing Data Parallel Programming Models" (http://stratosphere.eu/files/ComparingMapReduceAndPACTs_11.pdf) — paper by A. Alexandrov, S. Ewen, M. Heimel, F. Hueske, O. Kao, V. Markl, E. Nijkamp, and D. Warneke from TU Berlin (http://www.tu-berlin.de/menue/home/parameter/en/), published in Proc. of BTW 2011.

Educational courses

  • Cluster Computing and MapReduce (http://code.google.com/edu/submissions/mapreduce-minilecture/listing.html) course from Google Code University (http://code.google.com/edu/) contains video lectures and related course materials from a series of lectures that was taught to Google software engineering interns during the Summer of 2007.
  • MapReduce in a Week (http://code.google.com/edu/submissions/mapreduce/listing.html) course from Google Code University (http://code.google.com/edu/) contains a comprehensive introduction to MapReduce including lectures, reading material, and programming assignments.
  • MapReduce course (http://mr.iap.googlepages.com/), taught by engineers of Google Boston, part of 2008 Independent Activities Period at MIT.

Books

  • Jimmy Lin and Chris Dyer. "Data-Intensive Text Processing with MapReduce" (http://www.umiacs.umd.edu/~jimmylin/book.html) (manuscript)

Aggregate Level Simulation Protocol

The Aggregate Level Simulation Protocol (ALSP) is a protocol and supporting software that enables simulations to interoperate with one another. Replaced by the High Level Architecture (simulation) (HLA), it was used by the US military to link analytic and training simulations.

ALSP consists of:
1. ALSP Infrastructure Software (AIS) that provides distributed runtime simulation support and management.
2. A reusable ALSP Interface consisting of generic data exchange message protocols.
3. Participating simulations adapted for use with ALSP.

[ALSP Logo]

History

In 1990, the Defense Advanced Research Projects Agency (DARPA) employed The MITRE Corporation to study the application of distributed interactive simulation principles employed in SIMNET to aggregate-level constructive training simulations. Based on prototype efforts, a community-based experiment was conducted in 1991 to extend SIMNET to link the US Army's Corps Battle Simulation (CBS) [1] and the US Air Force's Air Warfare Simulation (AWSIM) [2]. The success of the prototype and users' recognition of the value of this technology to the training community led to development of production software. The first ALSP confederation, providing air-ground interactions between CBS and AWSIM, supported three major exercises in 1992.

and intelligence (TACSIM [4]). fires its own weapons and determines appropriate damage to its systems when fired upon • A message-based protocol for distributing information from one simulation to all other simulations. logistics (CSSTSS). the US Air Force (AWSIM). distribution of Ground Warfare Simulation (GRWSIM). and Instrumentation (PEO STRI [5]) 8 Contributions ALSP developed and demonstrated key aspects of distributed simulation. degrading the effectiveness of the exercise. The GRWSIM simulation was unreliable and its distributed database was inconsistent.Aggregate Level Simulation Protocol By 1995. Its packetized video teleconferencing brought general officers of NATO nations face-to-face during a military exercise for the first time. • An architecture that permits simulations to continue to use their existing architectures while participating in an ALSP confederation. the US Navy (RESA). • Data management permits all simulations to share information in a commonly understood manner even though each had its own representation of data. was less successful. But the software application of DSI. . many of which were applied in the development of HLA. the Warrior Preparation Center (WPC) in Einsiedlerhof. The program had also transitioned from DARPA’s research and development emphasis to mainstream management by the US Army’s Program Executive Office for Simulation. This includes multiple simulations controlling attributes of the same object. • No central node so that simulations can join and depart from the confederation at will • Geographic distribution where simulators can be distributed to different geographic locations yet exercise in the same simulated environment • Object ownership so each simulation controls its own resources. electronic warfare (JECEWSI). • Time management so that the times for all simulations appear the same to users and so that event causality is maintained – events should occur in the same sequence in all simulations. this was well-received. ALSP had transitioned to a multi-Service program with simulations representing the US Army (CBS). computerized. the disappointment of ACE-89. The Defense Advanced Research Projects Agency (DARPA) used ACE-89 as a technology insertion opportunity by funding deployment of the Defense Simulation Internet (DSI). and the desire to combine existing combat simulations prompted DARPA to initiate research that lead to ALSP. Germany hosted the computerized military exercise ACE-89. virtual battlefield. The success of SIMNET. DARPA was funding development of a distributed tank trainer system called SIMNET where individual. Training. the US Marine Corps (MTWS [3]). tank-crew trainers were connected over local area networks and the DSI to cooperate in a single. Motivation In 1989.

However. interaction is facilitated entirely through the interconnection infrastructure. user interface.Aggregate Level Simulation Protocol 9 Basic Tenets DARPA sponsored the design of a general interface between large. • Geographic distribution. conducts damage assessment locally. Architectural characteristics (implementation language. A simulation uses a message-passing protocol distribute information to all other simulations. ALSP supports a confederation of simulations that coordinate using a common model. this solution does not scale well. The schemes for internal state representation differ among existing simulations. Conceptual Framework A conceptual framework is an organizing structure of concepts that facilitates simulation model development. two strategies are possible: (1) define an infrastructure that translates between the representations in each simulation. the ALSP design adopted the second strategy. when one of its objects is hit. activity scanning and process interaction. For the results of a [6] distributed simulation to be "correct. aggregate-level combat simulations. the simulation-object relationship is more complicated. To design a mechanism that permits existing simulations to interact. objects come into (and go out of) existence with the passage of simulation time and the disposition of these objects is solely the purview of the simulation. several principles of SIMNET applied to aggregate-level simulations: • Dynamic configurability. • Architecture independence. . Typically. Aggregate-level combat simulations use Lanchestrian models of combat rather than individual physical weapon models and are typically used for high-level training. Simulations can reside in different geographic locations yet exercise over the same logical terrain. The remaining modifications are: • Recognizing that the simulation doesn’t own all of the objects that it perceives.[7] Common conceptual frameworks include: event scheduling. Because of an underlying requirement for scalability. This mapping represents one of the three ways in which a simulation must be altered to participate in an ALSP confederation. • Data management. necessitating a common representational system and concomitant mapping and control mechanisms. or (2) define a common representational scheme and require all simulations to map to that scheme. fires its own weapons and. When acting within a confederation. • Modifying the simulation’s internal time advance mechanism so that it works cooperatively with the other simulations within the confederation. The first strategy requires few perturbations to existing simulations. Object classes are organized hierarchically in much the same manner as with object-oriented programming languages." time must be consistent across all simulations. and time flow mechanism) of existing simulations differed. ALSP prescribes that each simulation maps between the representational scheme of the confederation and its own representational scheme. The ALSP challenge had requirements beyond those of SIMNET: • Simulation time management. The architecture implied by ALSP must be unobtrusive to existing architectures. existing. • Autonomous entities. Despite representational differences. • Communication by message passing. Simulations may join and depart an exercise without restriction. Each simulation controls its own resources. simulation time is independent of wall-clock time. 
The ALSP conceptual framework is object-based where a model is composed of objects that are characterized by attributes to which values are assigned. In stand-alone simulations.

it reports this fact to enable ghost deletion. ALSP time management facilities support discrete event simulation using either asynchronous (next-event) or [8] synchronous (time-stepped) time advance mechanisms. The simulation sends an advance request to its ACM for time . The simulation sends any messages resulting from the event to its ACM. 3. A simulation sends an event-request message to its ACM with a time parameter corresponding to simulation time T. Coordinate simulation local time with confederation time. The simulation processes all events for some time interval 2. The mechanism to support time-stepped simulation is: 1. . when a simulation departs a confederation the other ACMs delete input message queues for that simulation. The term confederation model describes the object hierarchy. they include: • • • • • Coordinate simulations joining and departing from a confederation. Owning an object’s attribute means that a simulation is responsible for calculating and reporting changes to the value of the attribute. a simulation owns an object if it owns the "identifying" attribute of the object. 10 ALSP Infrastructure Software (AIS) The object-based conceptual framework adopted by ALSP defines classes of information that must be distributed. Ghosts are local copies of objects owned by other simulations. several simulations may own different attributes of a given object. By convention. when a simulation deletes an object. 2. giving it permission to process its local event at time T. When a simulation joins a confederation. 4. Principal components of AIS are the ALSP Common Module (ACM) and the ALSP Broadcast Emulator (ABE). These fundamental concepts provide the basis for the remainder of the presentation. it reports this fact to the confederation to let other simulations create ghosts. the simulation must report this to the confederation. The mechanism to support next-event simulations is 1. Filter incoming messages. this is an interaction. When a simulation creates an object. The simulation repeats from step (1). the ACM send a grant-advance to the simulation. Whenever a simulation takes an action between one of its objects and a ghost. One ACM instance exists for each simulation in a confederation. (the time of its next local event). ALSP Common Module (ACM) The ALSP Common Module (ACM) provides a common interface for all simulations and contains the essential functionality for ALSP. i. If the ACM has messages for its simulation with timestamps older than or the same as T. attributes and interactions supported by a confederation. The ALSP Infrastructure Software (AIS) provides data distribution and process coordination.. the ACM sends the oldest one to the simulation. . and permit ownership migration. If all messages have timestamps newer than T. for any value of simulation time. In the parlance of ALSP. during its lifetime an object may be owned by more than one simulation. so that simulations receive only messages of interest. Likewise.e. Enforce attribute ownership so that simulations report values only for attributes they own. Conversely. ACM services require time management and object management. all other ACMs in the confederation create input message queues for the new simulation. Objects not owned by a particular simulation but within the area of perception for the simulation are known as ghosts. Coordinate ownership of object attributes. 
Time management Joining and departing a confederation is an integral part of time management process.Aggregate Level Simulation Protocol The simulation-object ownership property is dynamic. In fact.

It receives a message on one of its communications paths and retransmits the message on all of its remaining communications paths. If (an update passes all filter criteria) | If (the object is known to the simulation) | | Send new attribute values to simulation | Else (object is unknown) | | If (enough information is present to create a ghost) | | | Send a create message to the simulation | | Else (not enough information is know) | | | Store information provided | | | Send a request to the confederation for missing data Else (the update fails filter criteria) | If (the object is known to the simulation) | | Send a delete message to the simulation | Else | | Discard the update data The ownership and filtering information maintained by the ACM provide the information necessary to coordinate the transfer of attribute ownership between simulations. attributes may be members of • Create set. Useful. Filtering provides discrimination by (1) object class. either owned or ghosted. and (3) geographic location. The simulation repeats from step (1). Filters also define the interactions relevant to a simulation. It also permits configurations where sets of ACMs communicate with their own local ABE with inter-ABE communication over wide area networks. and attributes of those objects that the simulation currently owns. but not mandatory. to the simulation. information • Update set. For any object class. followed by a to the ACM. ALSP Broadcast Emulator (ABE) An ALSP Broadcast Emulator (ABE) facilitates the distribution of ALSP information. (2) attribute value or range. Object attribute values reported by a simulation to the confederation Information flow across the network can be further restricted through filters. This permits configurations where all ALSP components are local to one another (on the same computer or on a local area network). 11 AIS includes a deadlock avoidance mechanism using null messages.Aggregate Level Simulation Protocol 3. The simulation sends any messages for the interval 5. The attribute database maintains objects known to the simulation. 4. Attributes minimally required to represent an object • Interest set. Object management The ACM administers attribute database and filter information. The ACM sends all messages with time stamps on the interval grant-advance to T+?T. The mechanism requires that the processes have exploitable lookahead characteristics. .

the simulation sends a delete message to inform other simulations. Event messages are time-stamped and delivered in a temporally-consistent order. class attributes. (3) a message filtering scheme to define the information of interest to a simulation. and time management. just as objects are described by attributes. . The semantics of the protocol are confederation-dependent. and (4) a mechanism for intelligent message distribution. object resource control. The simulation protocol is text-based. filter registration. Inter-component Communications Model AIS employs a persistent connection communications model[9] to provide the inter-component communications. Simulation Protocol The simulation protocol is the main level of the ALSP protocol. • Delete. and time control services. • Refresh request. Additional protocol messages provide connection state. the syntactical representation of the simulation protocol may be defined without a priori knowledge of the semantics of the objects and interactions of any particular confederation. These issues are addressed by a layered protocol that has at the top a simulation protocol with underlying simulation/ACM. It consists of four message types: • Update. It is defined by an LALR( 1) context-free grammar. where the set of classes. The ACM then distributes the information via AIS to other simulations in that have indicated interest. A simulation can request an update of a set of attribute values for any object or class of objects by sending a refresh request message to the confederation. time management. interactions. Two services control distribution of simulation protocol messages: events and dispatches. it sends update messages to the ACM that provide initial or changed attribute values. Interactions between objects are identified by kind. Interaction kinds are described by parameters. The transport layer interface used to provide inter-component communications was dictated by simulation requirements and the transport layer interfaces on AIS-supporting operating systems: local VMS platforms used shared mailboxes. object management. Objects in ALSP are defined by a unique id number. object management. (2) a layered protocol for simulation-to-simulation communication. non-local VMS platforms used either Transparent DECnet or TCP/IP. and time management. a class. and interaction parameters are variable. object management. attribute lock control.Aggregate Level Simulation Protocol 12 Communication Scheme The ALSP communication scheme consists of (1) an inter-component communications model that defines the transport layer interface that connects ALSP components. ALSP Protocol The ALSP protocol is based on a set of orthogonal issues that comprise ALSP’s problem space: simulation-to-simulation communication. When a simulation causes one of its objects to cease to exist. and UNIX-like platforms use TCP/IP. Therefore. Simulation/ACM Connection Protocol The simulation/ACM connection protocol provides services for managing the connection between a simulation and its ACM and a method of information exchange between a simulation and its ACM. and event distribution protocols. Dispatch messages are delivered as soon as possible. without regard for simulation time. confederation save control. the simulation sends an interaction message to the ACM for further dissemination to other interested simulations. As a simulation changes the state its objects. • Interaction. and a set of attributes associated with a c1ass. 
When a simulation’s object engages either another simulation’s object or a geographic area.

The protocol provides services for the distributed coordination of a simulation’s entrance into the confederation. The kind parameter has a hierarchical structure similar to the object class structure. uses the object management protocol. acquisition. The ACM filters two types of messages: update messages and interaction messages. ACMs solely use it for object attribute creation. Services provided by the simulation/ACM protocol are used by the simulations to interact with the ACM’s attribute locking mechanism. Coordination is required to produce a consistent snapshot of all ACMs. or (4) the ACM sends the simulation a delete message. Each attribute of each object known to a given ACM has a status that assumes one of three values: • Locked. • Object Discovery adds an object to the object database as a ghosted object. when an ACM receives an update message there are four possible outcomes: (1) the ACM discards the message. and release of object attributes. The coordination of status. The ACM evaluates update messages based on the simulation’s update message filtering criteria that the simulation provides. If this simulation is interested in the objects. and confederation saves. it can ghost them (track their locations and state) and model interactions to them from owned objects. These services allow AIS to manage distributed object ownership. between ACMs. (2) the ACM sends the simulation a create message. time progression. It provides time management services for synchronizing simulation time among ACMs. The save mechanism provides fault tolerance. acquisition. (3) the ACM sends the simulation the update message. translators and simulations for a particular value of simulation time. Distributed object ownership presumes that no single simulation must own all objects in a confederation. and pass filtering criteria and discards those that are not of interest. Time Management Protocol The time management protocol is also a peer-level protocol that sits below the simulation protocol. A simulation uses simulation protocol update messages to discover objects owned by other simulations. The simulation may optionally specify attributes to be in the unlocked state. No simulation currently controls the attribute. All of the attributes for this object are marked with a status of gone. 13 Message Filtering The ACM uses simulation message filtering to evaluates the content of a message received from the confederation. As discussed in earlier. objects come into existence through the registration process performed by its simulation or through the discovery of objects registered by other simulations. Interaction messages. and verification (of the consistency of the distributed object database). A simulation controls the attribute and may update the attribute value. • Gone. A simulation "owns" the attribute if it has that attribute locked. A primary function of the object management protocol is to ensure that a simulation only updates attributes for which it has acquired a lock. A simulation "owns" the object if it has its id attribute locked. The object manager in the ACM manages the objects and object attributes of the owned and ghosted objects known to the ACM. The state of control is held elsewhere in the confederation. An ACM may discard interaction messages because of the kind parameter. From the ACM’s perspective. Any simulation asking for control is granted control. but many simulations require knowledge of some objects. request. 
The initial state attribute locks for registered objects and discovered objects is as follows: • Object Registration places each object-attribute pair in the locked state. The join/resign services and time synchronization mechanisms are described in Section earlier. Locks implement attribute ownership. The simulation informs its ACM of the interaction . Update messages. • Unlocked. The ACM delivers messages to its simulation that are of interest.Aggregate Level Simulation Protocol Object Management Protocol The object management protocol is a peer-level protocol that sits below the simulation protocol and provides object management services. release.

Message Distribution
To minimize message traffic between components in an ALSP confederation, AIS employs a form of intelligent message routing that uses the Event Distribution Protocol (EDP).[10] The EDP allows ACMs to inform the other AIS components about the update and interaction filters registered by their simulations. In the case of update messages, distribution of this information allows ACMs to only distribute data on classes (and attributes of classes) that are of interest to the confederation. For interaction messages, the process is similar, except that the kind parameter in the interaction message determines where the message is sent. The ABE also uses this information to send only information that is of interest to the components it serves.

References
[1] http://www.peostri.army.mil/index.asp
[2] http://www.peostri.army.mil/products/cbs/
[3] https://afmsrr.afams.af.mil/index.cfm?RID=SMN_AF_1000000
[4] http://www.peostri.army.mil/products/tacsim
[5] http://www.29palms.usmc.mil/dirs/ont/mands/mwts.asp
[6] Lamport, L. (1978). "Time, Clocks, and the Ordering of Events in a Distributed System." Communications of the ACM, 21(7), July, pp. 558-565.
[7] Balci, O., Nance, R.E., Derrick, E.J., Page, E.H., and Bishop, J.L. (1990). "Model Generation Issues in a Simulation Support Environment." In: Proceedings of the 1990 Winter Simulation Conference, New Orleans, LA, 9-12 December, pp. 257-263.
[8] Nance, R.E. (1971). "On Time Flow Mechanisms for Discrete Event Simulations." Management Science, 18(1), September, pp. 59-93.
[9] Boggs, D.R., Shoch, J.F., Taft, E.A., and Metcalfe, R.M. (1979). "PUP: An Internetwork Architecture." Report CSL-79-10, XEROX Palo Alto Research Center.
[10] Weatherly, R.M., Wilson, A.L., and Griffin, S.P. (1993). "ALSP - Theory, Experience, and Future Directions." In: Proceedings of the 1993 Winter Simulation Conference, Los Angeles, CA, 12-15 December, pp. 1068-1072.

Amazon Relational Database Service

Amazon Relational Database Service[1] or Amazon RDS is a distributed relational database service by Amazon.com. It is a web service running "in the cloud" and provides users a relational database for use in their applications. Amazon RDS makes it easy to set up, operate, and scale a relational database[2]. Amazon RDS was first released on 22 October 2009[4] [5]. Amazon RDS supports the MySQL and Oracle database engines; Oracle database support was added in June 2011.[6]

Features
Amazon RDS is simple to use. A new DB instance can be launched from the AWS Management Console [7] or using the Amazon RDS APIs [8]. Complex administration processes like patching the database software, backing up your database and enabling point in time recovery are managed automatically[3]. Scaling storage and compute resources can be performed by a single API call. Monitoring the compute and storage resource utilization of your DB Instance is easy; these performance metrics are available using the AWS Management Console or Amazon CloudWatch APIs [9]. Amazon RDS offers many different features to support different use cases. Some of the major features are:

Multi AZ deployment
Multi-Availability Zone deployments are targeted for production environments [10]. Multi-AZ deployments provide enhanced availability and data durability for MySQL instances. When you create or modify your DB Instance to run as a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous "standby" replica in a different Availability Zone [11] (independent infrastructure in a physically separate location). In the event of planned database maintenance or unplanned service disruption, Amazon RDS automatically fails over to the up-to-date standby, ensuring that database operations resume quickly without administrative intervention.

Read Replicas
Read Replicas make it easy to take advantage of MySQL's native, asynchronous replication functionality. Read Replicas help in scaling out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. They can also be used for serving read traffic when the primary database is unavailable.

Reserved Instances
Amazon RDS DB instances come in two packages: On-Demand DB Instances and Reserved DB Instances [12]. The two instance types are exactly the same except for their billing. On-Demand instances are billed [13] at an ongoing hourly usage rate. Reserved DB Instances require a low, one-time, up-front fee and in turn provide a significant discount on the hourly usage charge for that instance. Thus Reserved DB Instances enable you to take advantage of the rich functionality of Amazon RDS at lower cost and can provide substantial savings over owning database assets or running only On-Demand DB instances.
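The single-call scaling and replication operations mentioned above can be illustrated with the AWS SDK for Python (boto3). This SDK postdates the APIs this article references, so the snippet is an illustrative sketch rather than the interface the text describes; identifiers such as "mydb" are placeholders.

import boto3

rds = boto3.client("rds")

# Scale compute and storage for an existing DB instance with a single API call.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",        # placeholder instance name
    DBInstanceClass="db.m1.large",
    AllocatedStorage=100,               # GB
    MultiAZ=True,                       # provision a synchronous standby in another AZ
    ApplyImmediately=True,
)

# Create a read replica to scale out read-heavy workloads.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica",
    SourceDBInstanceIdentifier="mydb",
)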

Database Instance Types
Amazon RDS currently supports six DB Instance Classes, to support different types of workloads [14]:
• Small DB Instance: 1.7 GB memory, 1 ECU (1 virtual core with 1 ECU), 64-bit platform, Moderate I/O Capacity
• Large DB Instance: 7.5 GB memory, 4 ECUs (2 virtual cores with 2 ECUs each), 64-bit platform, High I/O Capacity
• Extra Large DB Instance: 15 GB of memory, 8 ECUs (4 virtual cores with 2 ECUs each), 64-bit platform, High I/O Capacity
• High-Memory Extra Large Instance: 17.1 GB memory, 6.5 ECUs (2 virtual cores with 3.25 ECUs each), 64-bit platform, High I/O Capacity
• High-Memory Double Extra Large DB Instance: 34 GB of memory, 13 ECUs (4 virtual cores with 3.25 ECUs each), 64-bit platform, High I/O Capacity
• High-Memory Quadruple Extra Large DB Instance: 68 GB of memory, 26 ECUs (8 virtual cores with 3.25 ECUs each), 64-bit platform, High I/O Capacity (MySQL DB Engine Only)

References
[1] http://aws.amazon.com/rds/
[2] http://nerds.airbnb.com/mysql-in-the-cloud-at-airbnb
[3] http://aws.amazon.com/rds/amazon-rds-introduced/
[4] http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2942&categoryID=291
[5] http://www.allthingsdistributed.com/2009/10/amazon_relational_database_service.html
[6] http://cloudcomputing.internet.com/applications/article.php/426926
[7] https://console.aws.amazon.com/
[8] http://docs.amazonwebservices.com/AmazonRDS/latest/APIReference/
[9] http://aws.amazon.com/developertools/2534
[10] http://en.oreilly.com/mysql2011/public/schedule/detail/19732
[11] http://aws.amazon.com/rds/faqs/#41
[12] http://aws.typepad.com/aws/2010/08/by-popular-demand-amazon-rds-reserved-db-instances.html
[13] http://aws.amazon.com/rds/pricing/
[14] http://aws.amazon.com/rds/#features

Amazon SimpleDB

Amazon SimpleDB is a distributed database written in Erlang[1] by Amazon.com. It is used as a web service in concert with Amazon Elastic Compute Cloud (EC2) and Amazon S3 and is part of Amazon Web Services. It was announced on December 13, 2007.[2]

As with EC2 and S3, Amazon charges fees for SimpleDB storage, transfer, and throughput over the Internet. Transfer to other Amazon Web Services is free of charge. On December 1, 2008, Amazon introduced a new pricing with free tier[3] for 1 GB of data & 25 machine hours.[4]

Limitations
Published limitations[5]:
Store limitations (attribute: maximum)
• Domains: 250 active domains per account. More can be requested by filling a form.
• Size of domains: 10 GB
• Attributes per domain: 1,000,000,000
• Attributes per item: 256 attributes
• Size per attribute: 1024 bytes
Query limitations (attribute: maximum)
• Items returned in a query response: 2500 items
• Seconds a query may run: 5 seconds
• Attribute names per query predicate: 1 attribute name
• Comparisons per predicate: 20 operators
• Predicates per query expression: 5 predicates

References
[1] What You Need To Know About Amazon SimpleDB (http://www.satine.org/archives/2007/12/13/amazon-simpledb/)
[2] Amazon SimpleDB - Limited Beta (http://www.amazon.com/b?node=342335011)
[3] SimpleDB - Free Tier - A shift in AWS pricing (http://blog.sdbexplorer.com/2008/12/simpledb-2000000-free-requests-for-next-six-months/)
[4] Amazon SimpleDB official home page (http://www.amazon.com/SimpleDB-AWS-Service-Pricing/b?node=342335011&no=553872011&me=A36L942TSJ2AJA)
[5] SimpleDB Limits, Amazon SimpleDB Developer Guide (API Latest version) (http://docs.amazonwebservices.com/AmazonSimpleDB/latest/DeveloperGuide/index.html?SDBLimits.html)
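As an illustration of the domain/item/attribute model implied by the limits above, the following sketch uses the SimpleDB bindings of the older boto Python library. The library is not mentioned in this article, and the domain, item and attribute names are placeholders.

import boto

# Credentials are read from the environment or the boto configuration.
conn = boto.connect_sdb()

# A domain is roughly analogous to a table; each item holds up to 256 attributes.
domain = conn.create_domain("users")

item = domain.new_item("user-1")
item["name"] = "Alice"       # attribute values are limited to 1024 bytes
item["city"] = "Seattle"
item.save()

# Query with a SQL-like select expression (subject to the query limits above).
for result in domain.select('select * from `users` where city = "Seattle"'):
    print(result.name, dict(result))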

External links
• Amazon SimpleDB official home page (http://aws.amazon.com/simpledb/)
• NSimpleDB - Open source C# implementation of the SimpleDB data model for the desktop (http://code.google.com/p/nsimpledb/)
• M/DB - a Free Open Source API-compatible alternative to SimpleDB that can be used as a local or cloud database; can also be used as a proxy for SimpleDB (http://www.mgateway.com/mdb.html)
• typica - A Java client for SimpleDB and other Amazon Web Services (http://code.google.com/p/typica/)
• SimpleJPA - a Java Persistence API (JPA) implementation for Amazon's SimpleDB (http://code.google.com/p/simplejpa/)
• SDB Explorer - Tool to explore Amazon SimpleDB service (http://www.sdbexplorer.com/)
• Simol - Open-source .NET object-persistence framework for Amazon SimpleDB written in C# (http://simol.codeplex.com/)

.de/english/vxkernel.bsslab.de/english/amcross.bsslab.sourceforge.html) • VAM (http://www.net) Recent development by Dr. The Virtual Amoeba Machine Network: a new hybrid distribute operating system environment .vu.bsslab.Amoeba distributed operating system 19 Amoeba distributed operating system Amoeba Company / developer Andrew S.html). Amoeba on the top of UNIX: Amoeba extension for UNIX-like opertaing systems • AMCROSS (http://www. External links • Amoeba home page (http://www. Stefan Bosse at BSS Lab (http://www. including SPARC.bsslab. Python FAQ.de/english/projects_software. i386. python.de/english/vamnet. nl/ pub/ amoeba/ [2] "Why was Python created in the first place?" (http:/ / www.html): the new VX-Amoeba Kernel • VAMNET (http://www. Sun 3/50 and Sun 3/60.html): • Overview (http://www. Stefan Bosse at BSS Lab. 68030. The Python programming language was originally developed for this platform. i486.html): Amoeba crosscompiling environment for UNIX • VX-Kernel (http://www.3) were last modified on 12 February 2001. Recent development is carried forward by Dr. cs. The Virtual Amoeba Machine: distributed operating system based on Amoeba with virtual machine concepts and functional programming • AMUNIX (http://www.cs. [2] References [1] http:/ / www.bsslab.de/english/vam. Tanenbaum Available language(s) English Official website [1] Amoeba is an open source microkernel-based distributed operating system developed by Andrew S. Development at Vrije Universiteit was stopped: the files in the latest version (5. The aim of the Amoeba project is to build a timesharing system that makes an entire network of computers appear to the user as a single machine.bsslab. Retrieved 2008-02-11.nl/pub/amoeba/) • FSD-Amoeba page at Sourceforge (http://fsd-amoeba. vu.de/english/amunix.bsslab.html).html). Amoeba runs on several platforms. org/ doc/ faq/ general/ #why-was-python-created-in-the-first-place).de/english/index. The system uses FLIP as a network protocol. Tanenbaum and others at the Vrije Universiteit.

Art of War Central

Art of War Central is a game server company that provides game server hosting to game player clans for a variety of PC on-line multi-player games, including Battlefield 2, Battlefield 2142, Bad Company 2, Crysis, Crysis 2, Frontlines: Fuel of War, Homefront, Medal of Honor, Quake Wars and World in Conflict.[1] While their primary business is directed at the on-line gaming community, they also offer virtual servers, voice servers, dedicated servers and web hosting services for non-gaming users.

History
Initially started in the basement of company founder and current Vice President Mr. Dallas Behling, the original intent was to provide a dedicated server for private team play; it was the first such game server on the internet. The site was registered on March 28, 2001 [2] and offered game servers for Tribes 1 and Tribes 2.[3] The company began offering additional games when it introduced a beta version of Counter Strike in 2002 and has since expanded its portfolio to over 100 online games as of October 2010.[4] In 2008 international operations were launched in London, Amsterdam and Frankfurt, Germany.[5] Current ownership is listed as North American Game Technology, LLC, founded in September 2006 with Mr. Steve Phallen as President.[6]

Acquisitions
In November 2009 Art of War Central acquired two competitors in the game server and dedicated server marketplace. Their takeover of Wolf Servers and VSK Game Servers was announced in a press release November 26, 2009.[7] WolfServers.com maintained dedicated game servers in the following markets: Atlanta, Chicago, Dallas, New York, San Jose, Los Angeles and Southampton/London UK.[8] VSK Game Servers was an early industry leader in developing lag or latency reducing technology to improve gaming performance, incorporating specific performance requirements into the hardware of their in-house servers and partnering with Internap to improve routing performance.[9] [10]

Accreditations
Art of War Central is an approved ranked server provider for America's Army Honor, the CAL (Cyberathlete Amateur League) and the CPL (Cyberathlete Professional League), and was a contributing sponsor to the CPL World Tour.[11] North American Game Technology LLC is an accredited member of the Columbus, Ohio Better Business Bureau with a rating of A- as of April, 2011.[12]

Sponsorships and League Hosting
Art of War Central has sponsored and hosted numerous on-line gaming tournaments and leagues for professional and amateur players. Organizations such as Club Conflict Online Gaming League[13] and TeamWarfare League[14] have used Art of War Central. Art of War Central sponsored the 2004 Cyberathlete Extreme World Championships[15] and in August 2004 participated with Team Sportscast Network by providing a 50,000 slot HLTV network to broadcast "The-Rush", a 64 team double elimination Counter-Strike competition.[16] Art of War Central has co-sponsored a number of on-line game events with Superstar Gamers.[17]

References
[1] http://artofwarcentral.com/
[2] http://www.gkg.net/whois/ (query "artofwarcentral.com")
[3] http://www.i-newswire.com/art-of-war-central-celebrates-10th/68295
[4] http://www.i-newswire.com/art-of-war-continues-growth-with-expansion-to-frankfurt_281.htm
[5] http://nuclearwar2012.com/art-of-war-central-celebrates-10th/68295
[6] http://www.bbb.org/centralohio/business-reviews/internet-gaming/north-american-game-technology-in-worthington-oh-70041777
[7] http://www.i-newswire.com/art-of-war-continues-growth-with-expansion-to-frankfurt_281.htm
[8] http://www.prlog.org/10429472-gamers-are-winners-in-landmark-gamer-server-merger-art-of-war-central-merges-with-wolf-servers-and.html
[9] http://www.wolfservers.com/
[10] http://www.internap.com/business-internet-connectivity-services/route-optimization-miro/
[11] http://nuclearwar2012.com/sponsors/
[12] http://www.bbb.org/centralohio/business-reviews/internet-gaming/north-american-game-technology-in-worthington-oh-70041777
[13] http://www.clubconflict.com/main.asp?page=dp&dis=98290
[14] http://www.teamwarfare.com/forums/showthread.asp?forumid=662&threadid=449592
[15] http://www.gotfrag.com/cs/story/21732/
[16] http://www.gotfrag.com/cs/story/22604/
[17] http://www.sk-gaming.com/content/9934-TsN_Three_Continents_in_Three_Weeks

External links
• Art of War Central (http://www.artofwarcentral.com/)
• Wolf Servers (http://www.wolfservers.com/)
• VSK Gaming Servers (http://www.vskgamingservers.com/)

Autonomic Computing

Autonomic Computing refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes whilst hiding intrinsic complexity to operators and users. Started by IBM in 2001, this initiative's ultimate aim is to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth. An autonomic system makes decisions on its own, using high-level policies; it will constantly check and optimize its status and automatically adapt itself to changing conditions. As widely reported in literature, an autonomic computing framework might be seen as composed by Autonomic Components (AC) interacting with each other [1]. An AC can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), and knowledge and planner/adapter for exploiting policies based on self- and environment awareness.

Driven by such vision, a variety of architectural frameworks based on "self-regulating" autonomic components has been recently proposed. A very similar trend has recently characterized significant research work in the area of multi-agent systems. However, most of these approaches are typically conceived with centralized or cluster-based server architectures in mind and mostly address the need of reducing management costs rather than the need of enabling complex software systems or providing innovative services. Autonomy-oriented computation is a paradigm proposed by Jiming Liu in 2001 that uses artificial systems imitating social animals' collective behaviours to solve hard computational problems. For example, ant colony optimization could be studied in this paradigm.[2]

The problem of growing complexity
Forecasts suggest that the number of computing devices in use will grow at 38% per annum and the average complexity of each device is increasing. Currently this volume and complexity is managed by highly skilled humans, but the demand for skilled IT personnel is already outstripping supply, with labour costs exceeding equipment costs [3] by a ratio of up to 18:1. Computing systems have brought great benefits of speed and automation but there is now an overwhelming economic need to automate their maintenance. 80% of such problems in infrastructure happen at the client specific application and database layer; most 'autonomic' service providers guarantee only up to the basic plumbing layer (power, hardware, operating system, network and basic database parameters). In "The Vision of Autonomic Computing" [4], Kephart and Chess warn that the dream of interconnectivity of computing systems and devices could become the "nightmare of pervasive computing" in which architects are unable to anticipate, design and maintain the complexity of interactions. They state the essence of autonomic computing is system self-management, freeing administrators from low-level task management while delivering better system behavior.

Self-management means different things in different fields. Large companies and institutions are employing large-scale computer networks for communication and computation. The distributed applications running on these computer networks are diverse and deal with many different tasks, ranging from internal control processes to presenting web content and to customer support. Additionally, mobile computing is pervading these networks at an increasing speed: employees need to communicate with their companies while they are not in their office. They do so by using laptops, PDAs, or mobile phones with diverse forms of wireless technologies to access their companies' data. This creates an enormous complexity in the overall computer network which is hard to control manually by human operators. Manual control is time-consuming, expensive, and error-prone. The manual effort needed to control a growing networked computer-system tends to increase very quickly.

Autonomic systems
A possible solution could be to enable modern, networked computing systems to manage themselves without direct human intervention. The Autonomic Computing Initiative (ACI) aims at providing the foundation for autonomic systems. It is inspired by the autonomic nervous system of the human body. This nervous system controls important bodily functions (e.g. respiration, heart rate, and blood pressure) without any conscious intervention. In a self-managing Autonomic System, the human operator takes on a new role: he or she does not control the system directly. Instead, she defines general policies and rules that serve as an input for the self-management process. For this process, IBM has defined the following four functional areas:
• Self-Configuration: Automatic configuration of components.
• Self-Healing: Automatic discovery, and correction of faults.
• Self-Optimization: Automatic monitoring and control of resources to ensure the optimal functioning with respect to the defined requirements.
• Self-Protection: Proactive identification and protection from arbitrary attacks.
IBM defined five evolutionary levels, or the Autonomic deployment model [5], for its deployment: Level 1 is the basic level that presents the current situation where systems are essentially managed manually. Levels 2-4 introduce increasingly automated management functions, while level 5 represents the ultimate goal of autonomic, self-managing systems.
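As a purely illustrative sketch of what "high-level policies as input to the self-management process" might look like, the following Python fragment declares simple policies for the four functional areas and derives actions from a reported system state. Every name here is invented for the example and is not part of any IBM specification.

# Hypothetical high-level policies supplied by the human operator.
policies = {
    "self_configuration": {"required_components": ["web", "db", "cache"]},
    "self_healing":       {"max_failed_health_checks": 3},
    "self_optimization":  {"target_cpu_utilization": 0.70},
    "self_protection":    {"max_login_failures_per_minute": 20},
}

def evaluate(state):
    """Return the self-management actions implied by the policies for this state."""
    actions = []
    for component in policies["self_configuration"]["required_components"]:
        if component not in state["running_components"]:
            actions.append("configure:" + component)
    if state["failed_health_checks"] >= policies["self_healing"]["max_failed_health_checks"]:
        actions.append("restart:unhealthy-node")
    if state["cpu_utilization"] > policies["self_optimization"]["target_cpu_utilization"]:
        actions.append("scale-out")
    if state["login_failures_per_minute"] > policies["self_protection"]["max_login_failures_per_minute"]:
        actions.append("block:suspicious-source")
    return actions

print(evaluate({
    "running_components": ["web", "db"],
    "failed_health_checks": 4,
    "cpu_utilization": 0.85,
    "login_failures_per_minute": 5,
}))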

The design complexity of Autonomic Systems can be simplified by utilizing design patterns such as the Model View Controller (MVC) to improve concern separation by helping encapsulate functional concerns.[6]

Control loops
A basic concept that will be applied in Autonomic Systems is closed control loops. This well-known concept stems from Process Control Theory. Essentially, a closed control loop in a self-managing system monitors some resource (software or hardware component) and autonomously tries to keep its parameters within a desired range. According to IBM, hundreds or even thousands of these control loops are expected to work in a large-scale self-managing computer system.

Conceptual model
A fundamental building block of an autonomic system is the sensing capability (Sensors Si), which enables the system to observe its external operational context. Inherent to an autonomic system is the knowledge of the Purpose (intention) and the Know-how to operate itself (e.g. bootstrapping, configuration knowledge, interpretation of sensory data, etc.) without external intervention. The actual operation of the autonomic system is dictated by the Logic, which is responsible for making the right decisions to serve its Purpose, and influenced by the observation of the operational context (based on the sensor input). This model highlights the fact that the operation of an autonomic system is purpose-driven. This includes its mission (e.g. the service it is supposed to offer), the policies (e.g. that define the basic behaviour), and the "survival instinct". If seen as a control system this would be encoded as a feedback error function or, in a heuristically assisted system, as an algorithm combined with a set of heuristics bounding its operational space.

Characteristics
Even though the purpose and thus the behaviour of autonomic systems vary from system to system, every autonomic system should be able to exhibit a minimum set of properties to achieve its purpose:
Automatic
This essentially means being able to self-control its internal functions and operations. As such, an autonomic system must be self-contained and able to start-up and operate without any manual intervention or external help. Again, the knowledge required to bootstrap the system (Know-how) must be inherent to the system.
Adaptive
An autonomic system must be able to change its operation (i.e. its configuration, state and functions) without external intervention. This will allow the system to cope with temporal and spatial changes in its operational context either long term (environment customisation/optimisation) or short term (exceptional conditions such as malicious attacks, faults, etc.).
Aware
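A minimal sketch of such a closed control loop, in Python, is shown below. The names monitor_cpu, add_capacity and remove_capacity are stand-ins for whatever sensor and effectors the managed resource actually exposes; nothing here comes from a specific autonomic computing product.

import time
import random

def monitor_cpu():
    # Stand-in sensor: a real system would query the managed resource.
    return random.uniform(0.0, 1.0)

def add_capacity():
    print("effector: adding capacity")

def remove_capacity():
    print("effector: removing capacity")

def control_loop(low=0.30, high=0.80, iterations=5):
    """Keep the monitored parameter within [low, high] by invoking effectors."""
    for _ in range(iterations):
        utilization = monitor_cpu()     # monitor
        if utilization > high:          # analyze / plan
            add_capacity()              # execute
        elif utilization < low:
            remove_capacity()
        time.sleep(0.1)

control_loop()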

An autonomic system must be able to monitor (sense) its operational context as well as its internal state in order to be able to assess if its current operation serves its purpose. Awareness will control adaptation of its operational behaviour in response to context or state changes.

References
[1] http://sourceforge.net/project/showfiles.php?group_id=225956
[2] Xiaolong Jin and Jiming Liu, "From Individual Based Modeling to Autonomy Oriented Computation" (http://www.springerlink.com/openurl.asp?genre=article&issn=0302-9743&volume=2969&spage=151), in Matthias Nickles, Michael Rovatsos, and Gerhard Weiss (editors), Agents and Computational Autonomy: Potential, Risks, and Solutions, Lecture Notes in Computer Science, vol. 2969, pages 151-169, Springer, Berlin, 2004. ISBN 978-3-540-22477-8.
[3] 'Trends in technology', survey, Berkeley University of California, USA, March 2002
[4] IEEE Computer Magazine, Jan 2003
[5] http://www.ibm.com/press/us/en/pressrelease/464.wss
[6] E. Curry and P. Grace, "Flexible Self-Management Using the Model-View-Controller Pattern" (http://dx.doi.org/10.1109/MS.2008.60), IEEE Software, vol. 25, no. 3, pp. 84-90, May 2008.

External links
• Autonomic Computing by Richard Murch published by IBM Press (http://www.ibmpressbooks.com/bookstore/product.asp?isbn=013144025X)
• IBM Autonomic Computing Website (http://www.research.ibm.com/autonomic/)
• Autonomic Computing articles and tutorials (http://www.ibm.com/developerworks/tivoli/autonomic/library/1016/1016_autonomic.html)
• Practical Autonomic Computing - Roadmap to Self Managing Technology (http://www-03.ibm.com/autonomic/pdfs/AC_Practical_Roadmap_Whitepaper.pdf)
• Autonomic computing blog (http://www-03.ibm.com/developerworks/blogs/page/DaveBartlett)
• Autonomic Computing Architecture in the RKBExplorer (http://www.rkbexplorer.com/explorer/#display=mechanism-{http://resex.rkbexplorer.com/id/resilience-mechanism-87d79b11})
• Whitestein Technologies - Autonomic Systems and eBusiness Platforms (http://www.whitestein.com/autonomic-technology-platform)
• Handsfree Networks - providers of autonomic computing software (http://www.handsfreenetworks.com)
• Enigmatec Website - providers of autonomic computing software (http://www.enigmatec.net)
• IPsoft service providers delivering Autonomic Computing (http://www.ipsoft.com)
• Applied Autonomics provides Autonomic Web Services (http://www.appliedautonomics.com)
• Explanation of Autonomic Computing and its usage for business processes (IBM) - in German (ftp://ftp.informatik.uni-stuttgart.de/pub/library/medoc.ustuttgart_fi/DIP-2787/DIP-2787.pdf)
• CASCADAS Project: Component-ware for Autonomic, Situation-aware Communications And Dynamically Adaptable Services, funded by the European Union (http://www.cascadas-project.org)
• CASCADAS Autonomic Tool-Kit in Open Source (http://sourceforge.net/project/showfiles.php?group_id=225956)
• ANA Project: Autonomic Network Architecture Research Project, funded by the European Union (http://www.ana-project.org/)
• Dynamically Self Configuring Automotive Systems (http://www.dyscas.org)
• JADE - A framework for developing autonomic administration software (http://sardes.inrialpes.fr/jade.html)
• ASSL (Autonomic System Specification Language): A Framework for Specification, Validation and Generation of Autonomic Systems (http://www.assl.vassev.com)
• Barcelona Supercomputing Center (http://www.bsc.es/autonomic)
• SOCRATES: Self-Optimization and Self-Configuration in Wireless Networks (http://www.fp7-socrates.org/)
• International Journal of Autonomic Computing (http://www.inderscience.com/ijac/)
• BiSNET/e: A Cognitive Sensor Networking Architecture with Evolutionary Multiobjective Optimization (http://dssg.cs.umb.edu/wiki/index.php/BiSNET/e)

Citrusleaf database

Citrusleaf
Developer(s): Citrusleaf, Inc.
Stable release: 2.0.23 / September 1, 2010
Written in: C
Operating system: Linux
Type: distributed key/value database system
License: Enterprise (Perpetual or Subscription based)
Website: http://www.citrusleaf.net/

The Citrusleaf database is an ACID-compliant, post-relational NoSQL database produced and marketed by Citrusleaf, Inc. It was originally developed for managing the mission-critical data for applications on the Real-time web.

History
While at Yahoo! and Aggregate Knowledge, the founders of Citrusleaf Corporation encountered a problem: the volume and performance demands of Real-time web applications caused traditional SQL databases to fail. This was due to several reasons. The first was the sheer volume of data. These applications require the ability to store 5 to 10 Kilobytes of information on hundreds of millions of web users and compare it to potential ads to display with sub-millisecond response time. Keeping track of 5 to 10 Kilobytes of information for each of hundreds of millions of people produced a database with billions of objects. Retrieving and processing this information with sub-millisecond response time was impossible with traditional database approaches. In addition to performance, their applications were mission-critical, so the solution had to be available without interruption; fault-tolerant design was an issue. Therefore in 2008 Brian Bulkowski created a key-value data store and later was joined by Srini Srinivasan in 2009. Together they created the Citrusleaf database platform. Previously, the authors created software solutions in the areas of distributed systems, real-time prioritization, and storage management across all kinds of storage. As of 2010 Citrusleaf has been implemented in production.

Design Drivers
The answer lay in making use of solid state drives (SSD). Traditional database approaches were designed with traditional rotational disk storage in mind. The average seek time of rotating disk storage is ten milliseconds, and therefore a sub-millisecond response time is not possible. Citrusleaf takes advantage of the properties of solid-state drives (SSD) to accomplish this. The Citrusleaf database platform is an ACID-compliant, extremely fast, scalable, fault-tolerant database engine. The system is capable of 100,000 transactions per second per commodity node, with a response time of under one millisecond. To support these transaction loads in a non-stop manner during node arrivals and departures, the platform provides the replication and failover features described below.

Data model
Citrusleaf organizes all data into namespaces. These namespaces are similar to a database instance in an RDBMS, and control policies like replication count and storage location. Within a namespace, individual data objects are referenced by tables and primary keys, which could be strings, integers, or binary data. A key is a unique reference to a piece of data: common keys include usernames and session identifiers. Each data object is a collection of 'bins' in Citrusleaf's parlance, which are similar to column names in SQL. The system is schema-less in that different columns can be used in different data objects of the same table. Each column's value is typed. The types supported are strings, integers, blobs, and "reflection blobs", which are binary data which has been reflected by the serializer of an individual object (such as a Java blob generated by Java's serializer). The use of typed values allows different languages to inter-operate simply: a string set in Java will appear correctly through the Python client, even though Java and Python use different underlying character representations (Unicode vs UTF-8). Some high level operations (such as atomically adding integers) are supported, in the style of Redis, but the set of instructions is not very rich. Citrusleaf's data model allows it to be considered as a document store, although it is more similar to a schema-less version of the row based schema typically used in relational systems.

Scalability and Performance
• Distributed object store: Easily store and retrieve large volumes of data through the Citrusleaf client for C, C#, Java, PHP, Python and Ruby.
• Real-time performance: Low, predictable sub-millisecond latency from memory or flash storage.
• High sustained throughput of over 100,000 transactions per second per commodity node.
• Automatic cluster resizing and rebalancing: A Citrusleaf cluster will automatically grow or shrink using zero-config networking.

Replication and Failover
• Automatic failure detection and in-flight transaction rerouting for nonstop operation in the face of failure.
• Automatic client failover: Clients track cluster membership for automatic load balancing and transaction re-try.
• Flexible replication policy: Set replication factors for individual data items.
• Randomized object replication allows smooth load balancing during failure recovery.

References

External links
• Official Citrusleaf site (http://www.citrusleaf.net)
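The article does not document the client API itself, so the following is a purely hypothetical Python sketch of how the namespace/table/key/bins model described above might be exercised. The class and method names are invented for illustration and do not correspond to the actual Citrusleaf client libraries.

# Hypothetical client illustrating namespaces, keys and typed bins.
class CitrusleafClientSketch:
    def __init__(self):
        self.store = {}  # {(namespace, table, key): {bin_name: value}}

    def put(self, namespace, table, key, bins):
        self.store.setdefault((namespace, table, key), {}).update(bins)

    def get(self, namespace, table, key):
        return self.store.get((namespace, table, key))

    def add(self, namespace, table, key, bin_name, amount):
        # Atomic integer add in the real system; a plain add in this sketch.
        record = self.store.setdefault((namespace, table, key), {})
        record[bin_name] = record.get(bin_name, 0) + amount

client = CitrusleafClientSketch()
# Bins are typed (string, integer, blob); different objects may use different bins.
client.put("user_profiles", "sessions", "user:1001", {"name": "Alice", "visits": 1})
client.add("user_profiles", "sessions", "user:1001", "visits", 1)
print(client.get("user_profiles", "sessions", "user:1001"))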

Client-server model

The client-server model of computing is a distributed application that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.[1] Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server machine is a host that is running one or more server programs which share their resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers which await incoming requests.

Description
The client-server characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services. Functions such as email exchange, web access and database access are built on the client-server model. Users accessing banking services from their computer use a web browser client to send a request to a web server at a bank. That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve the account information. The balance is returned to the bank database client, which in turn serves it back to the web browser client displaying the results to the user. [Figure: schematic client-server interaction.]

The client-server model has become one of the central ideas of network computing. Many business applications being written today use the client-server model. So do the Internet's main application protocols, such as HTTP, SMTP, Telnet, and DNS. The interaction between client and server is often described using sequence diagrams; sequence diagrams are standardized in the Unified Modeling Language. Specific types of clients include web browsers, email clients, and online chat clients. Specific types of servers include web servers, ftp servers, application servers, database servers, name servers, mail servers, file servers, print servers, and terminal servers. Most web services are also types of servers.

Comparison to peer-to-peer architecture
A client-server network involves multiple clients connecting to a single, central server. The file server on a client-server network is a high capacity, high speed computer with a large hard disk capacity. By contrast, peer-to-peer networks involve two or more computers pooling individual resources such as disk drives, CD-ROMs and printers[2]. These shared resources are available to every computer in the network, while each two of them communicate in a session. Each computer acts as both the client and the server, which means all the computers on the network are equals; that is where the term peer-to-peer comes from. The advantage of peer-to-peer networking is the easier control concept, not requiring any additional coordination entity and not delaying transfers by routing via server entities. However, the collision of sessions may be larger than with routing via server nodes. In the peer to peer network, software applications can be installed on the single computer and shared by every computer in the network. Peer-to-peer networks are typically less secure than client-server networks because security is handled by the individual computers, not controlled and supervised on the network as a whole. They are also cheaper to set up because most desktop operating systems have the software required for the network installed by default. On the other hand, the client-server model works with any size or physical layout of LAN and doesn't tend to slow down with heavy use. The resources of the computers in the network can become congested as they have to support not only the workstation user, but also the requests from network users.
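As a minimal, self-contained illustration of the request/response pattern described above (not taken from the article), the following Python sketch runs a tiny TCP server and a client in one process; the server shares a simple upper-casing service, and the client initiates the session.

import socket
import threading

HOST, PORT = "127.0.0.1", 5000

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)

def serve_once():
    # The server awaits an incoming request and responds to it.
    conn, _addr = srv.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(request.upper())
    srv.close()

server_thread = threading.Thread(target=serve_once)
server_thread.start()

# The client does not share resources; it requests the server's service.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from the client")
    print(cli.recv(1024))   # b'HELLO FROM THE CLIENT'

server_thread.join()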

Challenges
Generally a server may be challenged beyond its capabilities. Then a single server may cause a bottleneck or constraints problem. However, servers may be cloned and networked to fulfill all known capacity and performance requirements. Limitations include network load, network address volume, and transaction recovery time. Possible design decision considerations might be:
• As soon as the total number of simultaneous client requests to a given server increases, the server can become overloaded. Contrast that to a P2P network, where its aggregated bandwidth actually increases as nodes are added, since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network. However, this simple model ends with the bandwidth of the network: then congestion comes on the network and not with the peers.
• Any single entity paradigm lacks the robustness of a redundant configuration. Under client-server, should a critical server fail, clients' requests cannot be fulfilled by this very entity, but may be taken by another server if dynamic re-routing is established. In P2P networks, resources are usually distributed among many nodes, which generate as many locations to fail; yet even if one or more nodes depart and abandon a downloading file, the remaining nodes should still have the data needed to complete the download, as long as required data is accessible.
• Using intelligent client terminals increases the maintenance and repair effort. Lesser complete netbook clients allow for reduction of hardware entities that have limited life cycles. In addition, the concentration of functions in performant servers allows for lower grade performance qualification of the clients.
• Mainframe networks use dumb terminals. This is a method of running a network with different limitations compared to fully fashioned clients. It may be difficult to provide systemwide services when the client operating system typically used in this type of network is incapable of hosting the service.
Client-server networks with their additional capacities have a higher initial setup cost for networking than peer to peer networks. It is possible to set up a server on a modern desktop computer, but it is recommended to consider investment in enterprise-wide server facilities with a standardised choice of hardware and software and with a systematic and remotely operable administering strategy. It is easier to configure and manage the server hardware and software compared to the distributed administering requirements with a flock of computers[4] [5]. The long-term aspect of administering a client-server network with applications largely server-hosted surely saves administering effort compared to administering the application settings per each client. Aspects of comparison for other architectural concepts today include cloud computing as well.

References
[1] "Distributed Application Architecture" (http://java.sun.com/developer/Books/jdbc/ch07.pdf). Sun Microsystem. Retrieved 2009-06-16.
[2] Understanding peer-to-peer networking (http://www.isafe.org/imgs/pdf/education/P2PNetworking.pdf)
[3] [Peer-to-Peer Networking and Applications]
[4] Book: Computers are your future
[5] Peer to Peer vs. Client/Server Networks

Code mobility

In distributed computing, code mobility is the ability for running programs, codes or objects to be migrated (or moved) from one machine (host) to another computer. This is the process of moving code across the nodes of a network, as opposed to distributed computation where the data is moved. It is common practice in distributed systems to require the movement of code or processes between parts of the system, instead of data.[1] The purpose of code mobility is to support sophisticated operations, such as time-critical applications.

Code mobility can be either strong or weak:
• Strong code mobility involves moving the code, data and the execution state from one host to another. This is important in cases where the running application needs to maintain its state as it migrates from host to host. For example, a user A can send a running program to another user B and the program continues to run as if it was still on the original machine, without the need to restart the program on the recipient's machine.
• Weak code mobility involves moving the code and the data only. This may necessitate restarting the execution of the program at the destination host.

References
[1] Fuggetta, Alfonso; Gian Pietro Picco; Giovanni Vigna (1998). "Understanding Code Mobility" (http://www2.computer.org/portal/web/csdl/abs/trans/ts/1998/05/e0342abs.htm). IEEE Transactions on Software Engineering (NJ, USA: IEEE Press Piscataway) 24 (5): 342-361. doi:10.1109/32.685258. ISSN 0098-5589. Retrieved 29 July 2009.
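Weak code mobility, shipping code and data but not execution state, can be illustrated with a deliberately simplified Python sketch in which one process packages a function's source text for execution on another host, where it is restarted from scratch. Real systems would add transport, authentication and sandboxing; none of this sketch comes from the article or a specific framework.

import inspect

def task(data):
    # The 'code' being moved: a pure function plus its input data.
    return sum(data) / len(data)

def ship(func, data):
    """Package source code and data for transfer (e.g. over a socket)."""
    return {"source": inspect.getsource(func), "name": func.__name__, "data": data}

def execute_remotely(package):
    """What the destination host would do: rebuild the function and run it afresh."""
    namespace = {}
    exec(package["source"], namespace)   # weak mobility: execution restarts here
    return namespace[package["name"]](package["data"])

payload = ship(task, [3, 5, 7])
print(execute_remotely(payload))         # 5.0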

Connection broker

In software engineering, a connection broker is a resource manager that manages a pool of connections to connection-based resources such as databases or remote desktops, enabling rapid reuse of these connections by short-lived processes without the overhead of setting up a new connection each time. Connection brokers are often used in systems using N-tier architectures.
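A minimal sketch of the idea in Python is shown below. The Connection class is a stand-in for a real database or remote-desktop connection; nothing here is tied to a specific product.

import queue

class Connection:
    def __init__(self, conn_id):
        self.conn_id = conn_id      # stand-in for an expensive-to-create resource

class ConnectionBroker:
    def __init__(self, size):
        self._pool = queue.Queue()
        for i in range(size):       # pay the connection set-up cost once, up front
            self._pool.put(Connection(i))

    def acquire(self):
        return self._pool.get()     # blocks until a pooled connection is free

    def release(self, conn):
        self._pool.put(conn)        # return the connection for reuse

broker = ConnectionBroker(size=2)
conn = broker.acquire()
print("using connection", conn.conn_id)
broker.release(conn)                # a later, short-lived request can now reuse it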

CouchDB

Apache CouchDB
[Screenshot: CouchDB's Futon Administration Interface, showing a user database.]
Original author(s): Damien Katz, Noah Slater, Christopher Lenz, Jan Lehnardt, J. Chris Anderson
Developer(s): Apache Software Foundation
Initial release: 2005
Preview release: 1.1.0 / May 30, 2011
Development status: Active
Written in: Erlang
Operating system: Cross-platform
Available in: English
Type: Document-oriented database
License: Apache License 2.0
Website: http://couchdb.apache.org/

Apache CouchDB, commonly referred to as CouchDB, is an open source document-oriented database written in the Erlang programming language. It borrows from NoSQL and is designed for local replication and to scale horizontally across a wide range of devices. CouchDB is supported by commercial enterprises Couchbase and Cloudant.

ACID Semantics Like many relational database engines. methods and representations and can be simplified as the following. Ruby. Views are defined with aggregate functions and filters are computed in parallel. CouchDB exposes a RESTful HTTP API and a large number of pre-written clients are available. That means CouchDB can . CouchDB supports a view system using external socket servers and a JSON-based protocol. CouchDB is maintained at the Apache Software Foundation with backing from IBM. Katz works on it full-time as the lead developer. Additionally. Every document in a CouchDB database has a unique id and there is no required document schema. ” —Jacob Kaplan-Moss. In February 2008. where it is used to synchronize address and bookmark data.CouchDB 32 History In April 2005. Django Developer [5] It is in use in many software projects and web sites[6] . view servers have been developed in a variety of languages. Instead of storing data in rows and columns. including Ubuntu. I’ve never seen software that so completely embraces the philosophies behind HTTP. CouchDB makes Django look old-school in the same way that Django makes ASP look outdated. But you can also use ordered lists and associative maps. it became an Apache Incubator project and the license was changed to the Apache License rather [2] than the GPL.[4] As a consequence. now founder. a plugin architecture allows for using different computer languages as the view server such as JavaScript (default). Field values can be simple things like strings. much like MapReduce. but CouchDB is built of the Web. CouchDB provides ACID semantics[9] . CouchDB design and philosophy borrows heavily from Web architecture and the concepts of resources. Support for other languages can be easily added. the database manages a collection of JSON documents. “ Django may be built for the Web. Damien Katz (former Lotus Notes developer at IBM. You can think of a document as one or more field/value pairs expressed as JSON. He self-funded the project for almost two years and released it as an open source project under the GNU General Public License. CTO of Couchbase) posted on his blog about a new database engine he was working on. Views are generally stored in the database and their indexes updated continuously.11 CouchDB supports CommonJS' Module specification[8] . Design CouchDB is most similar to other document stores like MongoDB and Lotus Notes. Python and Erlang.[3] Currently. Features Document Storage CouchDB stores documents in their entirety. it graduated to a top-level project alongside the likes of the Apache HTTP Server. It is not a relational database management system.[7] Since Version 0. or dates. Details were sparse at this early stage. The documents in a collection need not share a schema. although queries may introduce temporary views. numbers. Tomcat and Ant. but the project moved to the Erlang OTP platform for its emphasis on fault tolerance. but retain query abilities via views. It does this by implementing a form of Multi-Version Concurrency Control (MVCC) not unlike InnoDB or Oracle. CouchDB was originally written in C++. On November 2008. but what he did share was that it would be a "storage system for a large scale object database" and that it would be called CouchDB (Couch is an acronym for cluster of unreliable commodity hardware).[1] His objectives for the database were for it to become the database of the Internet and that it would be designed from the ground up to serve web applications. PHP.

.1:5984/wiki CouchDB will reply with the following message.1:5984/ The CouchDB server processes the HTTP request. Distributed Architecture with Replication CouchDB was designed with bi-direction replication (or synchronization) and off-line operation in mind. PUT and DELETE for the four basic CRUD (Create. The function takes a document and transforms it into a single value which it returns. . The biggest gotcha typically associated with this level of flexibility is conflicts.0. That means multiple replicas can have their own copies of the same data. you can develop views that are similar to their relational database counterparts. Read. PUT or DELETE) by using the cURL lightweight command-line tool to interact with CouchDB server: curl http://127."disk_size":79. In CouchDB.1:5984/wiki The server replies with the following JSON message: {"db_name":"wiki". 33 Examples CouchDB provides a set of RESTful HTTP methods (e."doc_del_count":0. or updated. if the database already exists: {"error":"file_exists".CouchDB handle a high volume of concurrent readers and writers without conflict. "purge_seq":0.0."} The command below retrieves information about the database: curl -X GET http://127. Creating a database is simple—just issue the following command: curl -X PUT http://127. with a different response message.g. GET. Delete) operations on all resources. interoperable."update_seq":0."doc_count":0.0. if the database does not exist: {"ok":true} or. CouchDB can index views and keep those indexes updated as documents are added. scalable and proven technology. Update. but it illustrates nicely the way of interacting with CouchDB. and then sync those changes at a later time.1"} This is not terribly useful. Map/Reduce Views and Indexes To provide some structure to the data stored in CouchDB.0. All items have a unique URI that gets exposed via HTTP. GET."reason":"The database could not be created. POST."compact_running":false. are available to do all sorts of things with HTTP like caching. each view is constructed by a JavaScript function (server-side JavaScript by using CommonJS and SpiderMonkey) that acts as the Map half of a MapReduce operation. software and hardware. The logic in your JavaScript functions can be arbitrarily complex. REST API CouchDB treats all stored items (there are others besides documents) as a resource. it returns a response in JSON as the following: {"couchdb":"Welcome". HTTP is widely understood. A lot of tools. modify it. proxying and load balancing. Since computing a view over a large database can be an expensive operation. REST uses the HTTP methods POST.0. This provides a very powerful indexing mechanism that grants unprecedented control compared to most databases."version":"1. removed.0.0. the file already exists.

org/ docs/ overview. Joe (2009-03-31). com/ developerworks/ opensource/ library/ os-couchdb/ index. org/ couchdb/ CommonJS_Modules [9] (http:/ / couchdb. with strict evaluation.org [3] Re: Proposed Resolution: Establish CouchDB TLP (http:/ / mail-archives.org [4] View Server Documentation (http:/ / wiki.0. written by Brendan Eich at Netscape Communications. . ibm. IBM. apache.0. The sequential subset of Modified MPL Erlang is a functional language. org/ couchdb/ ViewServer) on wiki. html). and MIT International Components for Unicode (ICU) is an open source project of mature C/C++ and Java libraries for Unicode support. OpenSSL is an open source implementation of the SSL and TLS protocols. apache.apache. software internationalization and software globalization. Component Description License MPL/GPL/LGPL tri-license SpiderMonkey SpiderMonkey is a code name for the first ever JavaScript engine. see section on ACID Properties.org [5] A Different Way to Model Your Data (http:/ / books. later released as open source and now maintained by the Mozilla Foundation. apache. ICU is widely portable to many operating systems and environments. and dynamic typing. couchdb. org/ mod_mbox/ incubator-general/ 200802.1:5984/wiki CouchDB will reply with the following message: {"ok":true} 34 Open source components CouchDB includes a number of other open source projects as part of its default package. apache.CouchDB "instance_start_time":"1272453873691070". mbox/ <4AD53996. com>) to the CouchDB-Devel list [8] http:/ / wiki. gmail.apache. org/ mod_mbox/ couchdb-dev/ 200910. org/ couchdb/ CouchDB_in_the_wild) A list of software projects and websites using CouchDB [7] Email from Elliot Murphy (Canonical) (http:/ / mail-archives.apache. "Exploring CouchDB" (http:/ / www. com>) on mail-archives. apache. single assignment. apache. com>) on mail-archives. apache. MIT License OpenSSL Apache-like unique Erlang Erlang is a general-purpose concurrent programming language and runtime system. 3090104@canonical. jQuery ICU jQuery is a lightweight cross-browser JavaScript library that emphasizes interaction between JavaScript and Dual license: GPL HTML. mbox/ <3F352A54-5FC8-4CB0-8A6B-7D3446F07462@jaguNET. The core library (written in the C programming language) implements the basic cryptographic functions and provides various utility functions."disk_format_version":5} The following command will remove the database and its contents: curl -X DELETE http://127. org/ mod_mbox/ incubator-couchdb-dev/ 200811. . org/ relax/ intro/ why-couchdb#A Different Way to Model Your Data) [6] CouchDB in the wild (http:/ / wiki. References [1] Lennon. html). Retrieved 2009-03-31. [2] Apache mailing list announcement (http:/ / mail-archives. mbox/ <3d4032300802121136p361b52ceyfc0f3b0ad81a1793@mail. IBM.

com/free/green_chandler.org/editions/1/en/index. O'Reilly Media. ISBN 1449303439 External links • • • • • • Official website (http://couchdb. J.com/presentations/katz-couchdb-and-me) on Jan 31. Chris.google. Noah.com/ catalog/0636920018247) (1st ed. Joe (December 15. 300. 2009 by Damien Katz .html) CouchDB news and articles on myNoSQL (http://nosql.000 feet Jan Lehnardt (http://video. O'Reilly Media. Writing and Querying MapReduce Views in CouchDB (http://oreilly. Bradley (April 11.couchdb.apache.nosqldatabases.mypopescu. ISBN 0596158165 • Lennon.org/) CouchDB: The Definitive Guide (http://books. Slater. pp.couchdb.com (http://www. pp.com/ videoplay?docid=-3714560380544574985&hl=en#) • Jan Lehnardt is Giving the Following Talks. 300. pp.org/relax/) CouchDB articles on NoSQLDatabases.CouchDB 35 Bibliography • Anderson. 2009).com/post/683838234/scaling-couchdb) • Complete HTTP API Reference (http://wiki. 2009). Beginning CouchDB (http://www.infoq. CouchDB for Erlang Developers (http://www.apress. 76. 2011). Scaling CouchDB (http://oreilly. CouchDB: The Definitive Guide (http:// guide. O'Reilly Media. Bradley (March 7. 72. Lehnardt. 2011).erlang-factory.mypopescu.com/1999/couchdb-php) Videos • Erlang eXchange 2008: Couch DB at 10. Apress.com/main/tag/couchdb) CouchDB green paper (http://manning. ISBN 1430272376 • Holt.com/ conference/London2009/speakers/janlehnardt) • CouchDB and Me (http://www.apache.).com/tagged/couchdb) Scaling CouchDB (http://nosql.).).). Jan (November 15.com/catalog/9781449303433) (1st ed. pp. ISBN 1449303129 • Holt.com/book/view/9781430272373) (1st ed.html) (1st ed.org/couchdb/Complete_HTTP_API_Reference) • Simple PHP5 library to communicate with CouchDB (https://github.

e.bris. • using dynamic. [2] E. In Proceedings of the 1988 International Conference on Fifth Generation Computer Systems. Often this description is meant to contrast the design to an alternative approach.[1] [2] [3] Data Diffusion Machines were under active research in the late 1980s and early 1990s. • using stored procedures that run on database servers. org/ citation. 1. Japan. edu/ viewdoc/ summary?doi=10. but the research has ceased since then. 1. general-purpose relational database management system. table-driven logic. A DDM appears to the user as a conventional shared memory machine but is implemented using a distributed memory architecture. International. much of which is either free or included with the operating system. concluding that a database-centric approach has practical advantages from the standpoint of ease of development and maintainability. The extent to which business logic should be placed at the back-end versus another tier is a subject of ongoing debate. 1996. Warren. as opposed to customized in-memory or file-based data structures and access methods. database or even retrieved from a spreadsheet. Stallard. DDM . The use of table-driven logic. as opposed to greater reliance on logic running in middle-tier application servers in a multi-tier architecture. See also control tables for tables that are normally coded and embedded within programs as data structures (i. A. With the evolution of sophisticated DBMS software. behavior that is heavily dictated by the contents of a database. Hagersten. http:/ / citeseerx. This capability is a central feature of dynamic programming languages.Data Diffusion Machine 36 Data Diffusion Machine Data Diffusion Machine is a historical virtual shared memory architecture where data is free to migrate through the machine. David H.[1] . pp 943-952. and S. 10th International Parallel Processing Symposium (IPPS '96). p. application developers have become increasingly reliant on standard database tools." Parallel Processing Symposium. cfm?id=141718 [3] Henk L. Toon Koppelaars presents a detailed analysis of alternative Oracle-based architectures that vary in the placement of business logic.A Cache-only Memory Architecture. Landin. 2301 External links • Data Diffusion Machine (University of Bristol) (http://www. allows programs to be simpler and more flexible. as opposed to logic embodied in previously compiled programs. Haridi. acm. 48. the characterization of an architecture as "database-centric" may mean any combination of the following: • using a standard. psu.A Scalable Shared Virtual Memory Multiprocessor. "Implementing the Data Diffusion Machine using Crossbar Routers. not compiled statements) but could equally be read in from a flat file. For example. The Data Diffusion Machine . especially for the sake of rapid application development. i. For example.uk/Research/DDM/) Database-centric architecture Database-centric architecture or data-centric architecture has several distinct meanings.A. The Data Diffusion Machine (DDM) overcomes this problem by providing a virtual memory abstraction on top of a distributed memory machine. D. References [1] David H. Paul W. 152. December 1988.e. IEEE Computer.cs. Warren and Seif Haridi. Muller. http:/ / portal.ac.D. September 1992. Shared memory machines are convenient for programming but do not scale beyond tens of processors. Tokyo. generally relating to software architectures in which databases play a crucial role. ist.

Distributed application

Distributed Applications are applications running on two or more machines in a network.

Introduction
Classic software systems of the past century were mostly based on Client–server models and Client-centric application development; both ultimately run on one single computer, be it the client computer or the server. With the introduction of Intelligent agents, Web APIs and Web 2.0, and the emergence of Cloud computing, more and more "multiple machine" approaches emerge, where many systems on several locations can take care of Load balancing (computing) by re-distribution of specific tasks, or where each of these machines serves a specific purpose or task.

Examples
Distributed Applications can include:
1. Distributed systems using general purpose and specialized APIs
2. Real time systems for data-input by people – like HelpDesk software and Client Service Software taking care of appointments and updates on Client Data
3. Hardware systems like "the Internet of Things" – with independent components capable of processing specific tasks while communicating to other parts via a network
4. Render and computation farms – to render 3D images and do calculations on large datasets and process complex data in general

Distributed data flow

Distributed data flow (also abbreviated as distributed flow) refers to a set of events in a distributed application or protocol that satisfies the following informal properties:
• Asynchronous, non-blocking, and one-way. Each event represents a single instance of a non-blocking, asynchronous method invocation or other form of explicit or implicit message passing between two layers or software components. For example, each event might represent a single request to multicast a packet, issued by an application layer to an underlying multicast protocol. The requirement that events are one-way and asynchronous is important. Invocations of methods that may return results would normally be represented as two separate flows: one flow that represents the requests, and another flow that represents responses.
• Homogeneous, unidirectional, and uniform. All events in the distributed flow serve the same functional and logical purpose, and are related to one-another; generally, we require that they represent method calls or message exchanges between instances of the same functional layers, or instances of the same components, but perhaps on different nodes within a computer network. Furthermore, all events must flow in the same direction (i.e., one type of a layer or component always produces, and the other always consumes the events), and carry the same type of a payload. For example, a set of events that includes all multicast requests issued by the same application layer to the same multicast protocol is a distributed flow. On the other hand, a set of events that includes multicast requests made by different applications to different multicast protocols would not be considered a distributed flow, and neither would be a set of events that represent multicast requests as well as acknowledgments and error notifications.
• Concurrent, continuous, and distributed. The flow usually includes all events that flow between the two layers of software, simultaneously at different locations, and over a finite or infinite period of time. For example, such a flow would include events that occur on all nodes participating in the given multicast protocol; the flow of multicast requests would include all such requests made by instances of the given application on different nodes. Thus, in general, events in a distributed flow are distributed both in space (they occur at different nodes) and in time (they occur at different times). A flow in which all events occur at the same node would be considered degenerate.

(Figure: an illustration of the basic concepts involved in the definition of a distributed data flow.)

Formally, we represent each event in a distributed flow as a quadruple of the form (x,t,k,v), where x is the location (e.g., the network address of a physical node) at which the event occurs, t is the time at which this happens, k is a version, or a sequence number identifying the particular event, and v is a value that represents the event payload (e.g., all the arguments passed in a method call). Each distributed flow is a (possibly infinite) set of such quadruples that satisfies the following three formal properties.
• For any finite point in time t, there can be only finitely many events in the flow that occur at time t or earlier. This implies that in each flow, one can always point to the point in time at which the flow originated. The flow itself can be infinite; in such case, at any point in time, eventually a new event will appear in the flow.

• For any pair of events e_1 and e_2 that occur at the same location, if e_1 occurs at an earlier time than e_2, then the version number in e_1 must also be smaller than that of e_2.
• For any pair of events e_1 and e_2 that occur at the same location, if the two events have the same version numbers, they must also have the same values.

Distributed data flows serve a purpose analogous to variables or method parameters in programming languages such as Java, in that they can represent state that is stored or communicated by a layer of software. Unlike variables or parameters, which represent a unit of state that resides in a single location, distributed flows are dynamic and distributed: they simultaneously appear in multiple locations within the network at the same time. As such, distributed flows are a more natural way of modeling the semantics and inner workings of certain classes of distributed systems. In particular, the distributed data flow abstraction has been used as a convenient way of expressing the high-level logical relationships between parts of distributed protocols.[1] [2]

In addition to the above, flows can have a number of additional properties.
• Consistency. A distributed flow is said to be consistent if events with the same version always have the same value, even if they occur at different locations. Consistent flows typically represent various sorts of global decisions made by the protocol or application.
• Monotonicity. A distributed flow is said to be weakly monotonic if for any pair of events e_1 and e_2 that occur at the same location, if e_1 has a smaller version than e_2, then e_1 must carry a smaller value than e_2. A distributed flow is said to be strongly monotonic (or simply monotonic) if this is true even for pairs of events e_1 and e_2 that occur at different locations. Strongly monotonic flows are always consistent. They typically represent various sorts of irreversible decisions. Weakly monotonic flows may or may not be consistent.[3]

References
[1] Ostrowski, K., Birman, K., and Dolev, D. (2009). "Implementing Reliable Event Streams in Large Systems via Distributed Data Flows and Recursive Delegation". 3rd ACM International Conference on Distributed Event-Based Systems (DEBS 2009), Nashville, TN, USA, July 6–9, 2009. http://www.cs.cornell.edu/~krzys/krzys_debs2009.pdf
[2] Ostrowski, K., Birman, K., and Dolev, D. (2009). "Programming Live Distributed Objects with Distributed Data Flows". Submitted to the International Conference on Object Oriented Programming, Systems, Languages and Applications (OOPSLA 2009). http://www.cs.cornell.edu/~krzys/krzys_oopsla2009.pdf
[3] Ostrowski, K., Birman, K., Dolev, D., and Sakoda, C. (2009). "Distributed Data Flow Language for Multi-Party Protocols". 5th ACM SIGOPS Workshop on Programming Languages and Operating Systems (PLOS 2009), Big Sky, MT, USA, October 11, 2009. http://www.cs.cornell.edu/~krzys/krzys_plos2009.pdf
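The formal properties above translate directly into code. The following sketch is not taken from the cited papers; it simply models a flow as a set of (x, t, k, v) quadruples and checks the consistency and weak-monotonicity conditions as defined in this article.

# Sketch: a distributed flow as a set of (x, t, k, v) quadruples, with the
# consistency and weak-monotonicity checks defined above.
from collections import namedtuple

Event = namedtuple("Event", ["x", "t", "k", "v"])  # location, time, version, value

def is_consistent(flow):
    """Events with the same version must carry the same value, at any location."""
    seen = {}
    for e in flow:
        if e.k in seen and seen[e.k] != e.v:
            return False
        seen[e.k] = e.v
    return True

def is_weakly_monotonic(flow):
    """At each single location, a larger version must carry a larger value."""
    by_location = {}
    for e in flow:
        by_location.setdefault(e.x, []).append(e)
    for events in by_location.values():
        events.sort(key=lambda e: e.k)
        for earlier, later in zip(events, events[1:]):
            if earlier.k < later.k and not earlier.v < later.v:
                return False
    return True

flow = {Event("node-a", 1.0, 1, 10), Event("node-b", 1.2, 1, 10),
        Event("node-a", 2.0, 2, 17)}
print(is_consistent(flow), is_weakly_monotonic(flow))  # True True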

Distributed database

A distributed database is a database in which storage devices are not all attached to a common CPU. It may be stored in multiple computers located in the same physical location, or may be dispersed over a network of interconnected computers. Collections of data (e.g. in a database) can be distributed across multiple physical locations. A distributed database can reside on network servers on the Internet, on corporate intranets or extranets, or on other company networks. The replication and distribution of databases improves database performance at end-user worksites.[1]
To ensure that the distributive databases are up to date and current, there are two processes: replication and duplication. Replication involves using specialized software that looks for changes in the distributive database. Once the changes have been identified, the replication process makes all the databases look the same. The replication process can be very complex and time consuming depending on the size and number of the distributive databases. This process can also require a lot of time and computer resources. Duplication on the other hand is not as complicated. It basically identifies one database as a master and then duplicates that database. The duplication process is normally done at a set time after hours. This is to ensure that each distributed location has the same data. In the duplication process, changes to the master database only are allowed. This is to ensure that local data will not be overwritten. Both of the processes can keep the data current in all distributive locations.[2]
Besides distributed database replication and fragmentation, there are many other distributed database design technologies, for example local autonomy, and synchronous and asynchronous distributed database technologies. These technologies' implementation can and does depend on the needs of the business and the sensitivity/confidentiality of the data to be stored in the database, and hence the price the business is willing to spend on ensuring data security.

Basic architecture
A database User accesses the distributed database through:
Local applications – applications which do not require data from other sites.
Global applications – applications which do require data from other sites.
A distributed database does not share main memory or disks.

Important considerations
Care with a distributed database must be taken to ensure the following:
• The distribution is transparent – users must be able to interact with the system as if it were one logical system. This applies to the system's performance, and methods of access, among other things.
• Transactions are transparent – each transaction must maintain database integrity across multiple databases. Transactions must also be divided into subtransactions, each subtransaction affecting one database system.
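The difference between duplication and replication described earlier in this article can be sketched as follows. The dictionaries stand in for whole databases and the version counters for real change-tracking metadata; both are assumptions made for illustration only.

# Sketch: duplication copies a designated master wholesale; replication
# detects changed records and propagates only those changes.

def duplicate(master, replicas):
    """Periodically overwrite every replica with the master's content."""
    for replica in replicas:
        replica.clear()
        replica.update(master)

def replicate(source, targets, last_seen):
    """Push only records whose version changed since the last run."""
    for key, (version, value) in source.items():
        if last_seen.get(key) != version:
            for target in targets:
                target[key] = (version, value)
            last_seen[key] = version

master = {"cust:1": (3, "Alice"), "cust:2": (1, "Bob")}
site_a, site_b = {}, {}
replicate(master, [site_a, site_b], last_seen={})
print(site_a == site_b == master)  # True: all sites now look the same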

Advantages of distributed databases
• Management of distributed data with different levels of transparency.
• Increased reliability and availability.
• Easier expansion.
• Reflects organizational structure – database fragments are located in the departments they relate to.
• Local autonomy – a department can control the data about them (as they are the ones familiar with it).
• Protection of valuable data – if there were ever a catastrophic event such as a fire, all of the data would not be in one place, but distributed in multiple locations.
• Improved performance – data is located near the site of greatest demand, and the database systems themselves are parallelized, allowing load on the databases to be balanced among servers. (A high load on one module of the database won't affect other modules of the database in a distributed database.)
• Economics – it costs less to create a network of smaller computers with the power of a single large computer.
• Modularity – systems can be modified, added and removed from the distributed database without affecting other modules (systems).
• Reliable transactions – due to replication of the database.
• All transactions follow the A.C.I.D. properties: a-atomicity, the transaction takes place as a whole or not at all; c-consistency, maps one consistent DB state to another; i-isolation, each transaction sees a consistent DB; d-durability, the results of a transaction must survive system failures.
• Hardware, Operating System, Network, Fragmentation, DBMS, Replication and Location Independence.
• Continuous operation.
• Distributed Query processing.
• Distributed Transaction management.
• Single site failure does not affect performance of the system.
The Merge Replication Method is used to consolidate the data between databases.

Disadvantages of distributed databases
• Complexity – extra work must be done by the DBAs to ensure that the distributed nature of the system is transparent. Extra work must also be done to maintain multiple disparate systems, instead of one big one. Extra database design work must also be done to account for the disconnected nature of the database – for example, joins become prohibitively expensive when performed across multiple systems.
• Economics – increased complexity and a more extensive infrastructure means extra labour costs.
• Security – remote database fragments must be secured, and they are not centralized so the remote sites must be secured as well. The infrastructure must also be secured (e.g., by encrypting the network links between remote sites).
• Difficult to maintain integrity – in a distributed database, enforcing integrity over a network may require too much of the network's resources to be feasible.
• Inexperience – distributed databases are difficult to work with, and as a young field there is not much readily available experience on proper practice.
• Lack of standards – there are no tools or methodologies yet to help users convert a centralized DBMS into a distributed DBMS.
• Database design more complex – besides the normal difficulties, the design of a distributed database has to consider fragmentation of data, allocation of fragments to specific sites and data replication.
• Additional software is required.
• The Operating System should support a distributed environment.
• Concurrency control is a major issue. It is solved by locking and timestamping.

References
[1] O'Brien, J. & Marakas, G. (2008). Management Information Systems (pp. 185–189). New York, NY: McGraw-Hill Irwin
[2] O'Brien, J. & Marakas, G. (2008). Management Information Systems (pp. 185–189). New York, NY: McGraw-Hill Irwin
• M. T. Ozsu and P. Valduriez, Principles of Distributed Databases (2nd edition), Prentice-Hall, ISBN 0-13-659707-6
• Elmasri and Navathe, Fundamentals of database systems (3rd edition), Addison-Wesley Longman, ISBN 0-201-54263-3
• This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm).

Distributed design patterns

In software engineering, a distributed design pattern is a design pattern focused on distributed computing problems.

Classification
Distributed design patterns can be divided into several groups:
• Distributed communication patterns
• Security and reliability patterns
• Event driven patterns

Examples
• MapReduce
• Bulk synchronous parallel
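MapReduce, the first pattern listed above, can be illustrated with a single-process sketch of its map, shuffle and reduce phases. A real framework distributes the same three phases across many machines; the code below only demonstrates the shape of the pattern.

# Single-process sketch of the MapReduce pattern: map, shuffle, reduce.
from collections import defaultdict

def map_phase(documents):
    for doc in documents:                 # each mapper emits (key, value) pairs
        for word in doc.split():
            yield word.lower(), 1

def shuffle_phase(pairs):
    grouped = defaultdict(list)           # group all values by key
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    return {key: sum(values) for key, values in grouped.items()}

docs = ["the map step emits pairs", "the reduce step sums the pairs"]
print(reduce_phase(shuffle_phase(map_phase(docs))))
# e.g. {'the': 3, 'step': 2, 'pairs': 2, 'map': 1, ...}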

Distributed Interactive Simulation

Distributed Interactive Simulation (DIS) is an open standard for conducting real-time platform-level wargaming across multiple host computers and is used worldwide, especially by military organizations but also by other agencies such as those involved in space exploration and medicine.

History
The standard was developed over a series of "DIS Workshops" at the Interactive Networked Simulation for Training symposium, held by the University of Central Florida's Institute for Simulation and Training (IST). The standard itself is very closely patterned after the original SIMNET distributed interactive simulation protocol, developed by Bolt, Beranek and Newman (BBN) for the Defense Advanced Research Project Agency (DARPA) in the early through late 1980s. BBN introduced the concept of dead reckoning to efficiently transmit the state of battle field entities. In the early 1990s, IST was contracted by the United States Defense Advanced Research Project Agency to undertake research in support of the US Army Simulator Network (SimNet) program.
Funding and research interest for DIS standards development decreased following the proposal and promulgation of its successor, the High Level Architecture (simulation) (HLA), in 1996. HLA was produced by the merger of the DIS protocol with the Aggregate Level Simulation Protocol (ALSP) designed by MITRE.
There was a NATO standardisation agreement (STANAG 4482, Standardised Information Technology Protocols for Distributed Interactive Simulation (DIS), adopted in 1995) on DIS for modelling and simulation interoperability. This was retired in favour of HLA in 1998 and officially cancelled in 2010 by the NATO Standardisation Agency (NSA).

The DIS family of standards
DIS is defined under IEEE Standard 1278:
• IEEE 1278-1993 – Standard for Distributed Interactive Simulation – Application protocols
• IEEE 1278.1-1995 – Standard for Distributed Interactive Simulation – Application protocols[1]
• IEEE 1278.1A-1998 – Standard for Distributed Interactive Simulation – Application protocols
• Standard for Distributed Interactive Simulation – Application protocols Errata (May 1998)
• IEEE 1278.2-1995 – Standard for Distributed Interactive Simulation – Communication Services and Profiles
• IEEE 1278.3-1996 – Recommended Practice for Distributed Interactive Simulation – Exercise Management and Feedback
• IEEE 1278.4-1997 – Recommended Practice for Distributed Interactive Simulation – Verification Validation & Accreditation
• IEEE 1278.5-XXXX – Standard for Distributed Interactive Simulation – Fidelity Description Requirements (never published)
In addition to the IEEE standards, the Simulation Interoperability Standards Organization (SISO) maintains and publishes an "enumerations and bit encoded fields" document yearly. This document is referenced by the IEEE standards and used by DIS, TENA and HLA federations. Both PDF and XML versions are available.
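Dead reckoning, mentioned in the History section above, lets a sender issue entity-state updates only when a receiver's extrapolated position would drift too far from the truth, instead of sending an update every frame. The sketch below shows the idea only; the field names and the 1-metre threshold are illustrative and are not the dead reckoning algorithms defined in the standard.

# Sketch of dead reckoning: extrapolate from the last transmitted state and
# send a new update only when the extrapolation error exceeds a threshold.

def extrapolate(position, velocity, dt):
    return tuple(p + v * dt for p, v in zip(position, velocity))

def needs_update(true_position, last_sent_position, last_sent_velocity,
                 dt, threshold=1.0):
    predicted = extrapolate(last_sent_position, last_sent_velocity, dt)
    error = sum((a - b) ** 2 for a, b in zip(true_position, predicted)) ** 0.5
    return error > threshold

# Receiver-side view: keep drawing the entity from the last update received.
last_pos, last_vel = (0.0, 0.0, 0.0), (10.0, 0.0, 0.0)
print(extrapolate(last_pos, last_vel, dt=0.5))                     # (5.0, 0.0, 0.0)
print(needs_update((5.2, 0.4, 0.0), last_pos, last_vel, dt=0.5))   # False: drift is small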

Current status
SISO, a sponsor committee of the IEEE, promulgates improvements in DIS. Major changes are already in the DIS 7 draft update to IEEE 1278.1,[1] to make DIS more extensible, efficient and to support the simulation of more real world capabilities. This is a major upgrade to DIS to enhance extensibility and flexibility. It provides extensive clarification and more details of requirements, and adds some higher-fidelity mission capabilities. Version 7 is also called DIS 7 – scheduled for completion and IEEE balloting in the Spring of 2010. See External Link – DIS Product Development Group.[2]

Application protocol
There are several versions of the DIS application protocol, not only including the formal standards, but also drafts submitted during the standards balloting process:
• Version 1 – Version 1.0 Draft (1992)
• Version 2 – IEEE 1278-1993
• Version 3 – Version 2.0 Third Draft (May 1993)
• Version 4 – Version 2.0 Fourth Draft (March 1994)
• Version 5 – IEEE 1278.1-1995
• Version 6 – IEEE 1278.1a-1998 (amendment to IEEE 1278.1-1995)
• Version 7 – IEEE 1278.1-2010 (in preparation – scheduled for completion and IEEE balloting in the Spring of 2010)[2]

Protocol data units
Simulation state information is encoded in formatted messages, known as protocol data units (PDUs), and exchanged between hosts using existing transport layer protocols, including multicast, though broadcast User Datagram Protocol is also supported. The current version (DIS 6) defines 67 different PDU types,[3] arranged into 12 families. Frequently used PDU types are listed below for each family; PDU and family names shown in italics are included in the present draft DIS 7.
• Entity information/interaction family – Entity State, Collision, Collision-Elastic, Entity State Update, Attribute
• Warfare family – Fire, Detonation, Directed Energy Fire, Entity Damage Status
• Logistics family – Service Request, Resupply Offer, Resupply Received, Resupply Cancel, Repair Complete, Repair Response
• Simulation management family – Start/Resume, Stop/Freeze, Acknowledge
• Distributed emission regeneration family – Designator, Electromagnetic Emission, IFF/ATC/NAVAIDS, Underwater Acoustic, Supplemental Emission/Entity State (SEES)
• Radio communications family – Transmitter, Signal, Receiver, Intercom Signal, Intercom Control
• Entity management family
• Minefield family
• Synthetic environment family
• Simulation management with reliability family
• Live entity family
• Non-real time family
• Information Operations family – Information Operations Action, Information Operations Report

References
[1] "Corrections to Standard for Distributed Interactive Simulation – Application protocols" (http://standards.ieee.org/reading/ieee/updates/errata/1278.1-1995.pdf). IEEE. Retrieved 2010-05-17.
[2] DIS 7 Overview, SISO PSG File Library (http://www.sisostds.org/DigitalLibrary.aspx?EntryId=29288)
[3] "1278.1a-1998 IEEE Standard for Distributed Interactive Simulation – Application Protocols" (http://ieeexplore.ieee.org/servlet/opac?punumber=5896). IEEE. Retrieved 2010-05-17.

External links
• SISO DIS Product Support Group (http://www.sisostds.org/StandardsActivities/SupportGroups/DISPSGDistributedInteractiveSimulation.aspx)

Distributed lock manager

A distributed lock manager (DLM) provides distributed software applications with a means to synchronize their accesses to shared resources.
DLMs have been used as the foundation for several successful clustered file systems, in which the machines in a cluster can use each other's storage via a unified file system, with significant advantages for performance and availability. The main performance benefit comes from solving the problem of disk cache coherency between participating computers. The DLM is used not only for file locking but also for coordination of all disk access. VMScluster, the first clustering system to come into widespread use, relies on the OpenVMS DLM in just this way.

VMS implementation
VMS was the first widely-available operating system to implement a DLM. This became available in Version 4, although the user interface was the same as the single-processor lock manager that was first implemented in Version 3.

Resources
The DLM uses a generalised concept of a resource, which is some entity to which shared access must be controlled. This can relate to a file, a record, an area of shared memory, or anything else that the application designer chooses. A hierarchy of resources may be defined, so that a number of levels of locking can be implemented. For instance, a hypothetical database might define a resource hierarchy as follows:
• Database
• Table
• Record
• Field
A process can then acquire locks on the database as a whole, and then on particular parts of the database. A lock must be obtained on a parent resource before a subordinate resource can be locked.
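The parent-before-child rule for the resource hierarchy above can be made concrete with a small sketch. The resource names and the in-process lock table are assumptions made for illustration; this is not the VMS interface and it ignores lock modes entirely.

# Sketch of hierarchical locking: a lock on a resource is granted only if the
# parent in the hierarchy ("database/table/record") is already held by the
# requesting process. In-process toy only.

held = {}   # resource path -> process id

def acquire(process, path):
    parent = path.rsplit("/", 1)[0] if "/" in path else None
    if parent is not None and held.get(parent) != process:
        raise RuntimeError(f"{process} must lock parent {parent!r} first")
    if path in held and held[path] != process:
        raise RuntimeError(f"{path!r} already locked by {held[path]}")
    held[path] = process

acquire("p1", "sales_db")
acquire("p1", "sales_db/orders")
acquire("p1", "sales_db/orders/row42")      # fine: all parents are held
try:
    acquire("p2", "sales_db/orders/row43")  # rejected: p2 holds no parent locks
except RuntimeError as err:
    print(err)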

Lock modes
A process running within a VMSCluster may obtain a lock on a resource. There are six lock modes that can be granted, and these determine the level of exclusivity of access to the resource.
• Null Lock (NL). Indicates interest in the resource, but does not prevent other processes from locking it. It has the advantage that the resource and its lock value block are preserved, even when no processes are locking it.
• Concurrent Read (CR). Indicates a desire to read (but not update) the resource. It allows other processes to read or update the resource, but prevents others from having exclusive access to it. This is usually employed on high-level resources, in order that more restrictive locks can be obtained on subordinate resources.
• Concurrent Write (CW). Indicates a desire to read and update the resource. It also allows other processes to read or update the resource, but prevents others from having exclusive access to it. This is also usually employed on high-level resources, in order that more restrictive locks can be obtained on subordinate resources.
• Protected Read (PR). This is the traditional share lock, which indicates a desire to read the resource but prevents others from updating it. Others can however also read the resource.
• Protected Write (PW). This is the traditional update lock, which indicates a desire to read and update the resource and prevents others from updating it. Others with Concurrent Read access can however read the resource.
• Exclusive (EX). This is the traditional exclusive lock which allows read and update access to the resource, and prevents others from having any access to it.
The following truth table shows the compatibility of each lock mode with the others:

Mode  NL   CR   CW   PR   PW   EX
NL    Yes  Yes  Yes  Yes  Yes  Yes
CR    Yes  Yes  Yes  Yes  Yes  No
CW    Yes  Yes  Yes  No   No   No
PR    Yes  Yes  No   Yes  No   No
PW    Yes  Yes  No   No   No   No
EX    Yes  No   No   No   No   No

Obtaining a lock
A process can obtain a lock on a resource by enqueueing a lock request. This is similar to the QIO technique that is used to perform I/O. The enqueue lock request can either complete synchronously, in which case the process waits until the lock is granted, or asynchronously, in which case an AST occurs when the lock has been obtained.
It is also possible to establish a blocking AST, which is triggered when a process has obtained a lock that is preventing access to the resource by another process. The original process can then optionally take action to allow the other access (e.g. by demoting or releasing the lock).
Once a lock has been granted, it is possible to convert the lock to a higher or lower level of lock mode.
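The truth table above translates directly into a grant check: a requested mode can be granted only if it is compatible with every mode already granted on the same resource. The sketch below encodes the same table as data; it is an illustration, not the VMS $ENQ or Linux dlm_lock interface.

# Sketch: the lock-mode compatibility table as data, plus a grant check.
MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]
COMPAT = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

def can_grant(requested, already_granted):
    """Grant only if the requested mode is compatible with every held mode."""
    return all(requested in COMPAT[granted] for granted in already_granted)

print(can_grant("PR", ["CR", "PR"]))  # True: shared readers coexist
print(can_grant("EX", ["CR"]))        # False: even a reader blocks an exclusive lock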

Lock value block
A lock value block is associated with each resource. This can be read by any process that has obtained a lock on the resource (other than a null lock) and can be updated by a process that has obtained a protected update or exclusive lock on it.
It can be used to hold any information about the resource that the application designer chooses. A typical use is to hold a version number of the resource. Each time the associated entity (e.g. a database record) is updated, the holder of the lock increments the lock value block. When another process wishes to read the resource, it obtains the appropriate lock and compares the current lock value with the value it had last time the process locked the resource. If the value is the same, the process knows that the associated entity has not been updated since last time it read it, and therefore it is unnecessary to read it again. Hence, this technique can be used to implement various types of cache in a database or similar application.

Deadlock detection
When one or more processes have obtained locks on resources, it is possible to produce a situation where each is preventing another from obtaining a lock, and none of them can proceed. This is known as a deadly embrace or deadlock.
A simple example is when Process 1 has obtained an exclusive lock on Resource A, and Process 2 has obtained an exclusive lock on Resource B. If Process 1 then tries to lock Resource B, it will have to wait for Process 2 to release it. But if Process 2 then tries to lock Resource A, both processes will wait forever for each other.
The OpenVMS DLM periodically checks for deadlock situations. In the example above, the second lock enqueue request of one of the processes would return with a deadlock status. It would then be up to this process to take action to resolve the deadlock – in this case by releasing the first lock it obtained.

Linux clustering
Both Red Hat and Oracle have developed clustering software for Linux.
OCFS2, the Oracle Cluster File System, was added[1] to the official Linux kernel with version 2.6.16, in January 2006. The alpha-quality code warning on OCFS2 was removed in 2.6.19.
Red Hat's cluster software, including their DLM and Global File System, was officially added to the Linux kernel[2] with version 2.6.19, in November 2006.
Both systems use a DLM modeled on the venerable VMS DLM.[3] Oracle's DLM has a simpler API (the core function, dlmlock(), has eight parameters, whereas the VMS SYS$ENQ service and Red Hat's dlm_lock both have 11).

Google's Chubby lock service
Google has developed Chubby, a lock service for loosely-coupled distributed systems.[4] It is designed for coarse-grained locking and also provides a limited but reliable distributed file system. Key parts of Google's infrastructure, including Google File System, BigTable, and MapReduce, use Chubby to synchronize accesses to shared resources. Though Chubby was designed as a lock service, it is now heavily used inside Google as a name server, supplanting DNS.[4]
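The version-number use of the lock value block described above amounts to a cache-validity check: a record is re-read only when the value block has moved on. A sketch, with in-memory dictionaries standing in for the real lock manager and database:

# Sketch of lock-value-block caching: each writer bumps a version counter
# stored with the resource's lock; a reader re-fetches the record only when
# that version differs from the one it cached.

lock_value_block = {"cust:1": 0}   # maintained by the lock manager
database = {"cust:1": "Alice"}
cache = {}                          # reader's local cache: key -> (version, value)

def update(key, new_value):
    database[key] = new_value
    lock_value_block[key] += 1      # writer increments under a PW or EX lock

def read(key):
    current = lock_value_block[key]          # read while holding at least a PR lock
    if key in cache and cache[key][0] == current:
        return cache[key][1]                 # unchanged since last read: skip the fetch
    value = database[key]                    # stale or missing: fetch and re-cache
    cache[key] = (current, value)
    return value

print(read("cust:1"))      # fetched from the "database"
print(read("cust:1"))      # served from cache, version unchanged
update("cust:1", "Alicia")
print(read("cust:1"))      # version bumped, so the record is re-read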

SSI systems
A DLM is also a key component of more ambitious single system image (SSI) projects such as OpenSSI.

References
[1] http://www.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=1c1afa3c053d4ccdf44e5a4e159005cdfd48bfc6
[2] http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=29552b1462799afbe02af035b243e97579d63350
[3] http://lwn.net/Articles/137278/
[4] http://labs.google.com/papers/chubby.pdf

External links
• HP OpenVMS Systems Services Reference Manual – $ENQ (http://h71000.www7.hp.com/doc/82FINAL/4527/4527pro_044.html#jun_227)
• ARCS – A Web Service used as a Distributed Lock Manager (http://www.arcs.us)

Distributed memory

In computer science, distributed memory refers to a multiple-processor computer system in which each processor has its own private memory. Computational tasks can only operate on local data, and if remote data is required, the computational task must communicate with one or more remote processors. In contrast, a shared memory multi processor offers a single memory space used by all processors. Processors do not have to be aware where data resides, except that there may be performance penalties, and that race conditions are to be avoided.

(Figure: an illustration of a distributed memory system of three computers.)

Architecture
In a distributed memory system there is typically a processor, a memory, and some form of interconnection that allows programs on each processor to interact with each other. The interconnect can be organised with point to point links, or separate hardware can provide a switching network. The network topology is a key factor in determining how the multi-processor machine scales. The links between nodes can be implemented using some standard network protocol (for example Ethernet), using bespoke network links (used in for example the Transputer), or using dual ported memories.

Programming distributed memory machines
The key issue in programming distributed memory systems is how to distribute the data over the memories. Depending on the problem solved, the data can be distributed statically, or it can be moved through the nodes. Data can be moved on demand, or data can be pushed to the new nodes in advance.
As an example, if a problem can be described as a pipeline where data X is processed subsequently through functions F, G, H, etc. (the result is H(G(F(X)))), then this can be expressed as a distributed memory problem where the data is transmitted first to the node that performs F, which passes the result onto the second node that computes G, and finally to the third node that computes H. This is also known as systolic computation.
Data can be kept statically in nodes if most computations happen locally, and only changes on edges have to be reported to other nodes. An example of this is simulation where data is modeled using a grid, and each node simulates a small part of the larger grid. On every iteration, nodes inform all neighboring nodes of the new edge data.

Distributed shared memory
Similarly, in distributed shared memory each node of a cluster has access to a large shared memory in addition to each node's limited non-shared private memory.

Shared memory versus distributed memory versus distributed shared memory
The advantage of (distributed) shared memory is that it offers a unified address space in which all data can be found.
The advantage of distributed memory is that it excludes race conditions, and that it forces the programmer to think about data distribution.
The advantage of distributed (shared) memory is that it is easier to design a machine that scales with the algorithm.
Distributed shared memory hides the mechanism of communication – it does not hide the latency of communication.
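The pipeline H(G(F(X))) described above maps naturally onto processes with private memory that communicate only by messages. In the sketch below, Python's multiprocessing queues stand in for the network links between three single-purpose nodes; on a real distributed memory machine the stages would run on separate nodes and use a message-passing library such as MPI.

# Sketch: the pipeline H(G(F(x))) as three processes with private memory,
# connected only by message queues (standing in for network links).
from multiprocessing import Process, Queue

def stage(func, inbox, outbox):
    while True:
        item = inbox.get()
        if item is None:          # poison pill: shut the stage down
            outbox.put(None)
            break
        outbox.put(func(item))

def F(x): return x + 1
def G(x): return x * 2
def H(x): return x - 3

if __name__ == "__main__":
    q0, q1, q2, q3 = Queue(), Queue(), Queue(), Queue()
    workers = [Process(target=stage, args=(f, qin, qout))
               for f, qin, qout in ((F, q0, q1), (G, q1, q2), (H, q2, q3))]
    for w in workers:
        w.start()
    for x in [1, 2, 3]:
        q0.put(x)
    q0.put(None)
    result = q3.get()
    while result is not None:     # H(G(F(1))) = 1, H(G(F(2))) = 3, H(G(F(3))) = 5
        print(result)
        result = q3.get()
    for w in workers:
        w.join()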

Distributed object

The term distributed objects usually refers to software modules that are designed to work together, but reside either in multiple computers connected via a network or in different processes inside the same computer. One object sends a message to another object in a remote machine or process to perform some task. The results are sent back to the calling object.
The term may also generally refer to one of the extensions of the basic object concept used in the context of distributed computing, such as replicated objects or live distributed objects.
• Replicated objects are groups of software components (replicas) that run a distributed multi-party protocol to achieve a high degree of consistency between their internal states, and that respond to requests in a coordinated manner. Referring to the group of replicas jointly as an object reflects the fact that interacting with any of them exposes the same externally visible state and behavior.
• Live distributed objects (or simply live objects) generalize the replicated object concept to groups of replicas that might internally use any distributed protocol, perhaps resulting in only a weak consistency between their local states. Live distributed objects can also be defined as running instances of distributed multi-party protocols, viewed from the object-oriented perspective as entities that have distinct identity, and that can encapsulate distributed state and behavior.[1]

(Figure: communication between distributed objects residing in different machines.)

Local vs Distributed Objects
Local and distributed objects differ in many respects.[2] Here are some of them:
1. Life cycle: Creation, migration and deletion of distributed objects is different from local objects
2. Reference: Remote references to distributed objects are more complex than simple pointers to memory addresses
3. Request Latency: A distributed object request is orders of magnitude slower than local method invocation
4. Object Activation: Distributed objects may not always be available to serve an object request at any point in time
5. Parallelism: Distributed objects may be executed in parallel
6. Communication: There are different communication primitives available for distributed objects requests
7. Failure: Distributed objects have far more points of failure than typical local objects
8. Security: Distribution makes them vulnerable to attack

Examples
Distributed objects are implemented in Objective-C using the Cocoa API with the NSConnection class and supporting objects.
Distributed objects are used in Java RMI.
CORBA lets one build distributed mixed object systems.
DCOM is a framework for distributed objects on the Microsoft platform.
DDObjects is a framework for distributed objects using Borland Delphi.
Jt is a framework for distributed components using a messaging paradigm.
JavaSpaces is a Sun specification for a distributed, shared memory (spaces based).
Pyro is a framework for distributed objects using the Python programming language.
Distributed Ruby (DRb) is a framework for distributed objects using the Ruby programming language.
See also Internet protocol suite.

References
[1] Ostrowski, K., Birman, K., Dolev, D., and Ahnn, J. (2008). "Programming with Live Distributed Objects", Proceedings of the 22nd European Conference on Object-Oriented Programming, Paphos, Cyprus, July 07–11, 2008, J. Vitek, Ed., Lecture Notes in Computer Science, vol. 5142, Springer-Verlag, Berlin, Heidelberg, 463–489. http://portal.acm.org/citation.cfm?id=1428508.1428536
[2] W. Emmerich (2000) Engineering distributed objects, John Wiley & Sons Ltd.
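A minimal remote-object round trip can be sketched with Python's standard xmlrpc modules, chosen only because they ship with the interpreter; none of the frameworks listed above is implied. The proxy makes the call look like a local method invocation, while the latency and the extra failure points described above remain.

# Sketch: a remote object exposed over XML-RPC and a client-side proxy.
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

class Counter:                       # the "distributed object" implementation
    def __init__(self):
        self.value = 0
    def increment(self, amount):
        self.value += amount
        return self.value

# Server side: publish the object (port 8009 is an arbitrary choice).
server = SimpleXMLRPCServer(("127.0.0.1", 8009), allow_none=True, logRequests=False)
server.register_instance(Counter())
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy forwards method calls over the network.
proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8009")
print(proxy.increment(5))            # 5 - looks local, runs remotely
print(proxy.increment(2))            # 7 - state lives in the remote object

server.shutdown()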

Distributed shared memory

Distributed Shared Memory (DSM), in computer architecture, is a form of memory architecture where the (physically separate) memories can be addressed as one (logically shared) address space. Here, the term shared does not mean that there is a single centralized memory, but shared essentially means that the address space is shared (the same physical address on two processors refers to the same location in memory).[1] Alternatively, in computer science it is known as (DGAS), a concept that refers to a wide class of software and hardware implementations, in which each node of a cluster has access to shared memory in addition to each node's non-shared private memory.
Software DSM systems can be implemented in an operating system, or as a programming library. Software DSM systems implemented in the operating system can be thought of as extensions of the underlying virtual memory architecture. Such systems are transparent to the developer, which means that the underlying distributed memory is completely hidden from the users. In contrast, software DSM systems implemented at the library or language level are not transparent and developers usually have to program differently. However, these systems offer a more portable approach to DSM system implementation.
Software DSM systems also have the flexibility to organize the shared memory region in different ways. The page based approach organizes shared memory into pages of fixed size. In contrast, the object based approach organizes the shared memory region as an abstract space for storing shareable objects of variable sizes. Another commonly seen implementation uses a tuple space, in which the unit of sharing is a tuple.
Shared memory architecture may involve separating memory into shared parts distributed amongst nodes and main memory, or distributing all memory between nodes. A coherence protocol, chosen in accordance with a consistency model, maintains memory coherence.
Examples of such systems include:
• Kerrighed
• OpenSSI
• MOSIX
• Terracotta
• TreadMarks
• DIPC
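The tuple-space organisation mentioned above, in which the unit of sharing is a tuple, can be sketched in a few lines. This is a single-process toy written for illustration; it is not JavaSpaces or any of the DSM systems listed here, which are distributed and handle concurrency.

# Toy tuple space: processes coordinate by writing tuples into a shared bag
# and taking out whichever tuples match a template.
class TupleSpace:
    def __init__(self):
        self._bag = []

    def write(self, tup):
        self._bag.append(tup)

    def take(self, template):
        """Remove and return the first tuple matching the template
        (None fields in the template match anything)."""
        for i, tup in enumerate(self._bag):
            if len(tup) == len(template) and all(
                t is None or t == v for t, v in zip(template, tup)
            ):
                return self._bag.pop(i)
        return None

space = TupleSpace()
space.write(("task", 1, "render frame 1"))
space.write(("task", 2, "render frame 2"))
print(space.take(("task", None, None)))   # ('task', 1, 'render frame 1')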

Through the add-ons. and Atom web feeds—increasingly referred to together as the Open Stack—are often cited as enabling [1] technologies for distributed social networking. microformats [4] Addressbook to send posts to either individuals or groups. p. addressbook. the Wave Federation Protocol. the Portable Contacts protocol. private messaging server [3] PHP MIT HTTP + REST. XRD metadata discovery. typically through added widgets or plug-ins. Project Name Features Software Programming Language 6d License Protocols Privacy Support Federation (with other applications or services) Instances Version/Maturity [2] Blog.cfm?id=75105&am) by Kai Li. microformats like XFN and hCard. Hennessy (2007). the social network functionality is implemented on users' websites. Fourth Edition. themeable. Comparison of projects The protocols of these projects are generally open and free. Public Domain HTTPS. OStatus federation. The software of the projects is generally free and open source. It contrasts with social network aggregation services.acm. Open standards such as OAuth authorization. OpenSocial widget APIs. Computer architecture : a quantitative approach. External links • Distributed Shared Cache (http://www. The emphasis of the distribution is on portabilitya[›]. web-hook style sensor network development . ISBN 0123704901. A few social networking service providers have used the term more broadly to describe provider-specific services that are distributable across different websites. 1989 Distributed social network A distributed social network is an Internet social network service that is decentralized and distributed across different providers. Volume 7 Issue 4. interoperability and federation capability. Application framework. 201. not yet demo [5] alpha 5 total Ampify Trust-based search.com) • Memory coherence in shared virtual memory systems (http://portal.Distributed shared memory 52 References [1] Patterson. David A. media library. Ampify Messaging Protocol Provides fine grained privacy control through object capability security and transport layer encryption. which are used to manage accounts and activities across multiple discrete social networks. Morgan Kaufmann Publishers.org/citation. and John L. OpenID authentication.sharedcache. Nov. Paul Hudak published in ACM Transactions on Computer Systems.

mood. OStatus. OpenID. ChoiceSocial (web interface) Distributed Social Networking Protocol (DSNP) ? ? Friends in Feed [31] . messaging. anonymous DVCS. in use Approximately 120 [11] buddycloud [10] Location. content XMPP. ? alpha . XML for all data exchange. email.0 XMPP. DistribSocial. scrobbling. but pre 1. photo sharing. feed reader.friend management Diaspora X 2 [20] Yes in development [15] server [16] Ruby AGPL 3. IRC Excellent. Privacy controls. XMPP chat. WebOfTrust. OAuth push/pull. anonymity. games. OStatus in testingdue in next release beta. OpenSocial. Location Query Diaspora Microblogging. customizable interface Freenet Censorship resistant publishing. blogs. RSS/Atom. updating bookmarks. 'aspects' . hCard. Yes hosted on every users computer stable. acl. privacy controls. Forum. buddycloud for federation DiSo Project [23] ? ? [25] WordPress plugins [26] microformats (XFN.0 Friend2Friend [35] Strong encryption. Data is digitally signed LGPL Connect to known individuals. profiles. PubSubHubbub. video chat. Messaging. Activity Streams. pseudonymity. document creation and editing. videos. Blog. Journals. XMPP. OStatus (next release). collaborative drawing. global darknet DHT on restricted routes (FOAF) or Opennet (anonymizing DHT). groups. third party plugins p2p Java GPL [34] UDP.0 changing Salmon [17] . opendd. email. buddycloud channels Activity Streams ? ? [22] Diaspora X 2 [24] . profile. ChoiceSocial. photo/video sharing server client [12] [13] . Atom. OAuth. avatar.Distributed social network [6] Photos. Channel Protocol [14] . Java Apache 2.net [28] [30] [32] .net [33] [29] GPLv2 FOAF. in use Duuit! Search. microblogging. [18] ? ? Diaspora Alpha Wiki [19] pre-alpha 24 listed on Diaspora client using [21] XMPP. photos. XOXO). XMPP [27] DSNP [28] DSNPd (server daemon). Newsfeeds 53 [7] PHP GPLv2 QuickSocial Appleseed server [8] Friend circles used to categorize friends and restrict/allow access Internally. granular. Groups. files. JavaScript. others easily added (plugin architecture) Appleseed total [9] beta. webpages. Status Updates.

Activity Stream import and export.Net. single sign on to post directly to friend's profiles on co-operating systems. lists. photo albums. videos). maps. wave extensions (gadgets. galleries (photos. multiple profiles w/assignment to specific friends. XMPP chat interoperable with other XMPP-compliant [52] alpha groups. blogs. automatically updated address book from remote data sources. contact import from Web 2. blogs/feeds/Diaspora/Google (via RSS/ATOM).ca/Status. Facebook. robots). consolidated profile with RDF/FOAF export. Friendika server components [38] [40] stable/production [39] [41] Server [42] AGPLv3 OStatus [43] ? Yes daisycha. email.0 services via XFN and FOAF. Local and global directory services. DFRN demo .in [44] (based on SatusNet) Jappix [45] XMPP client + Microblogging server. networking groups. personal SPARQL API W3C OpenID. Twitter. Kopal Connect protocol ? ? alpha [49] . tasks. web client AGPL XMPP Excellent: based on presence authorizations ? demo [46] production Knowee OpenID Signup. Apache Wave (generates . Wave Federation Protocol Total federation/interoperability with other Kune Excellent installations and Apache Wave accounts. GNU-social. more in development 54 [37] PHP BSD OStatus OpenID. Ability to restrict connection endpoints. youtube share. Kopal Feed microformat Kune [50] demo [51] real-time colaborative edition. like/dislike. Fans and one-way relationships. Apache Wave inbox (modern email). richtext status (not specifically length limited). FOAF ? ? alpha Kopal [47] OpenID Core. community/group/celebrity pages. federation server. GNU Social extensive Friendika. integrates Java-based GWT AJAX) AGPLv3 XMPP. profiles. public webpages. location. XMPP chat. multiple profiles Server [48] MIT OpenID. Communications encryption.Distributed social network [36] Rich profiles. identi.

tasks. streams. Social Graph API.1.net among others [65] Active use Books.1. Local follow/unfollow. Feed Aggregation. OpenMicroBlogger User-toggleable "apps" to add/remove functionality. Facebook. calendar. WebID and others Yes (Comercial OpenID. Portable Contacts. Dual and GPL for Open Source Edition) WebID. tagclouds 55 [54] . security. Atom Publishing. RSSCloud. 1. XMPP extensions [63] Active developer Yes Yes community. PubSubHubbub. id. IMAP sample server ObjectCloud customization. GPL OpenID. Particle Yes ? 2 Alpha. groups. [62] OpenLink Data [64] Blogs. (partial) OStatus (PubSubHubbub) Yes Yes alpha AGPLv3 WebDAV.myopenlink. Working on: OStatus ? project's group Lorea Elgg [56] production plugins [55] [54] (60% production). WebID. Address Spaces (ODS) Profile Management. Flickr integration. PubSubHubbub. Twitter. subgroups. OAuth. File Servers (WebDAV based Briefcase). XMPP/psyc (50% development). Fully Restful design. ownCloud Cloudstorage and plugins for Photos. HTTP. more. user interface consumes Rest API.. Webfinger. Media. group mailing lists. Activity Streams. microblogging. SPARQL. Open Collaboration Services Yes ? ver. Privacy NoseRub server and webclient SMTP. Semantic Pingback. application platform OneSocialWeb NoseRub protocol / WebID SimPL 2. RSSCloud and partial OStatus (PubSubHubbub) federation as well as Open Microblogging 0. active development [61] Microblogging Openfire plugin..Distributed social network [53] Profiles. features being added. flexible hosting. OpenSocial. Calendars. clients Java Apache 2 XMPP. plugins. Discussion Forums (includes NNTP support).0 ? PHP AGPLv3 XMPP Excellent not yet Yes ? not yet Yes demo development [57] [58] [59] [60] OpenID.0 . RSS and more MIT Open Microblogging 0. Wikis. rdf+sparql (10% development) Movim XMPP client + Microblogging Mr. (partial) Twitter API support.

mobile themes. IRC. OpenID.9 (Active use) Thimbl Weestit microblogging Finger. OpenMicroBlogging (deprecated) Available for sites. WAP. ? TELNET. cart. Privacy Controls ? Yes Alpha Yes OpenID. messaging. 3rd party integration (Facebook. Clients [73] PHP AGPLv3 OStatus.0. Activity Streams ? ? 3 production Alpha [69] friends. and other open protocols psyced profiles. including communication untraceability ? demo [67] beta [68] SMOB Social-Igniter microblogging FOAF server GPL Webfinger. planned for accounts and posts ? Planned for future Yes Identi. TWiT [75] 0. OpenID No Yes Beta StatusNet microblogging Server. Portable Contacts. Private Messaging. will add support for OAuth SocialZE [72] server. microblogging GPLv2 MIT PSYC. Applet. media). YouTube). XMPP. among others 56 development early alpha concept [66] GPL Extensive. themes. Webfinger. OpenID. PubSubHubbub. editable widgets. Salmon StatusNet and Cliqset. FOAF. ? Webfinger. modular apps (messages.20 2010 . enables internet content sharing Socknet. status. XDI.ca Army [74] . places. SMTP. likely Eclipse or Apache OStatus. Twiter. PubSubHubbub. OAuth 2. chat. RSS RSSN private messaging. OStatus. Yes Yes SocialRiver [70] GPL AGPL OStatus [71] . hCard. OAuth.Distributed social network Project Danube 1) Sharing personal data with companies/organizations 2) Sharing personal data with "friends" 3) Use of personal data for "personal applications" Project Nori OStatus. SSH XMPP. Portable Contacts. OAuth. SMTP. web client OSMP (Open Social Message Protocol) Socknet ProviderFoolishMortal. blog.org profiles. comments. groups Safebook RSSN ? ? ? ? Yes TBD. HTTP. HTTP. POP development alpha planned Yes Planned Nov. XRI.

google. [2] (http:/ / get6d. net/ ) [36] (http:/ / friendika. com/ [32] https:/ / choicesocial.Distributed social network 57 Notes ^ a: See DataPortability article. org/ wiki/ Channel_Protocol [15] http:/ / open. friendika. com/ [21] https:/ / github. net/ [33] https:/ / distribsocial. friendika. friendika. pdf) [40] http:/ / demo. org/ http:/ / diso-project. appleseedproject. David (2008-10-09). com/ [6] http:/ / opensource. net/ daveman692/ blowing-up-social-networks-by-going-open-presentation/ ). org/ download/ [8] http:/ / opensource. org/ [12] https:/ / github. com/ buddycloud/ channel-server [13] https:/ / github. com/ group/ diaspora-dev/ browse_thread/ thread/ 4bfb9cd07722dfc0 [18] (http:/ / groups. com/ download [38] (http:/ / portal. . com/ [11] http:/ / buddycloud. org/ projects/ social/ faq/ . org/ dfrn2. org/ dsnp/ http:/ / complang. com/ [41] (http:/ / gnu. appleseedproject. org/ + socialites/ statusnet/ gnu-socia [43] http:/ / foocorp. External links • • • • Wiki of Federated Social Web W3C Incubator Group [76] Federated Social Web Conference 2011 [77] Comparison of protocol/software projects for distributed social networking [78] Diploma Thesis from the University of Applied Sciences Dresden(HTW) about XMPP-based Federated Social Networks like buddycloud [79](CC-BY) References [1] Recordon. org [7] http:/ / opensource. org/ login/ [10] http:/ / buddycloud. com/ ) [3] https:/ / github. net/ [34] https:/ / github. org/ dsnp/ [30] http:/ / complang. com/ [16] https:/ / github. com/ [24] http:/ / diaspora-x. google. com/ manifesto) [5] http:/ / demo6d. slideshare. com/ freenet [35] (http:/ / Friend2Friend. Retrieved 5 January 2009. pdf [31] https:/ / friendsinfeed. com/ cms/ content/ diaspora-x-now-running-buddycloud-channels-and-xmpp [23] http:/ / diaspora-x. org/ wiki/ Main_Page#Components http:/ / diso-project. com/ ijoey/ 6d [4] (http:/ / get6d. com/ group/ salmon-protocol/ browse_thread/ thread/ efab99ca7311d4ae) [19] https:/ / joindiaspora. com/ [20] http:/ / diaspora-x. 27. ""Blowing Up" Social Networks by Going Open" (http:/ / www. org/ dsnp/ spec/ dsnp-spec. pp. com/ ) [37] http:/ / portal. org/ quicksocial/ [9] http:/ / appleseedproject. com/ bnolan/ diaspora-x2 [22] http:/ / buddycloud. com/ buddycloud [14] http:/ / buddycloud. com/ node/ 7) [39] (http:/ / dfrn. buddycloud. appleseedproject. org/ software/ social) [42] http:/ / gitorious. com/ #login [26] [27] [28] [29] [25] http:/ / diso-project. org/ http:/ / complang. com/ diaspora/ diaspora [17] http:/ / groups.

google. org/ developers-downloads. tv/ [76] http:/ / www. google. safebook. org/ developers-protocol. com/ cms/ sites/ default/ files/ thesis. cc/ pg/ groups/ 7826/ lorea/ [57] http:/ / noserub. com/ download/ [59] http:/ / noserub. beta. net/ download [74] http:/ / identi. com/ p/ kopal/ wiki/ Getting_Started?tm=2 [49] http:/ / code. php?content=prototype [69] http:/ / social-igniter. google. org/ fsw2011/ [78] http:/ / gitorious. ourproject. ca [75] http:/ / army. org/ join [55] https:/ / bitbucket. eu/ home. org/ 2005/ Incubator/ federatedsocialweb/ wiki/ Main_Page [77] http:/ / d-cent. safebook. twit. in/ [45] http:/ / project. com/ wiki/ ODS/ [65] http:/ / id. org/ social/ pages/ ProjectComparison [79] http:/ / buddycloud. w3. com/ [47] (http:/ / code. es/ ws/ [53] http:/ / lorea. us/ home. com/ [58] http:/ / noserub. com/ p/ kopal/ wiki/ Kopal_Connect [50] http:/ / code. myopenlink. html [63] (http:/ / onesocialweb. org/ index. com/ [46] http:/ / jappix. org/ [71] http:/ / socialriver. safebook. en [54] http:/ / lorea. org [73] http:/ / status. html. google. org/ [62] http:/ / onesocialweb. com/ p/ kopal/ ) [48] http:/ / code. org/ faq/ [72] http:/ / socialze. html) [64] http:/ / ods. com/ [61] http:/ / onesocialweb. org/ rhizomatik [56] https:/ / n-1. net/ ods/ [66] http:/ / www. pdf 58 . com/ p/ kopal/ wiki/ Kopal_Feed [51] http:/ / kune. openlinksw. iepala. com/ [70] http:/ / socialriver. jappix. org [52] http:/ / kune.Distributed social network [44] http:/ / daisycha. eu/ [67] http:/ / www. com/ quick-facts/ [60] http:/ / identoo. php?content=demo [68] http:/ / www.

Dryad (programming)

Dryad is an ongoing research project at Microsoft Research for a general purpose runtime for execution of data parallel applications.
An application written for Dryad is modeled as a directed acyclic graph (DAG). The DAG defines the dataflow of the application, and the vertices of the graph define the operations that are to be performed on the data. The "computational vertices" are written using sequential constructs, devoid of any concurrency or mutual exclusion semantics. The Dryad runtime parallelizes the dataflow graph by distributing the computational vertices across various execution engines (which can be multiple processor cores on the same computer or different physical computers connected by a network, as in a cluster). Scheduling of the computational vertices on the available hardware is handled by the Dryad runtime, without any explicit intervention by the developer of the application or administrator of the network.
The flow of data between one computational vertex and another is implemented by using communication "channels" between the vertices, which in physical implementation are realized by TCP/IP streams, shared memory or temporary files. A stream is used at runtime to transport a finite number of structured Items.
Dryad defines a domain-specific language, which is implemented via a C++ library, that is used to create and model a Dryad execution graph. Computational vertices are written using standard C++ constructs. To make them accessible to the Dryad runtime, they must be encapsulated in a class that inherits from the GraphNode base class. The graph is defined by adding edges; edges are added by using a composition operator (defined by Dryad) that connects two graphs (or two nodes of a graph) with an edge. Managed code wrappers for the Dryad API can also be written.
There exist several high-level language compilers which use Dryad as a runtime; examples include PSQL, Microsoft Scope and DryadLINQ.

References
• "DryadLINQ: A System for General-Purpose Distributed Data-Parallel Computing Using a High-Level Language" (http://research.microsoft.com/en-us/projects/dryadlinq/dryadlinq.pdf). Microsoft Research. Retrieved 2009-01-21.
• "Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks" (http://research.microsoft.com/en-us/projects/dryadlinq/eurosys07.pdf). Microsoft Research. Retrieved 2007-12-04.
• "SCOPE: Easy and Efficient Parallel Processing of Massive Data Sets" (http://research.microsoft.com/en-us/um/people/jrzhou/pub/Scope.pdf). Microsoft Research. Retrieved 2009-01-21.

External links
• Dryad: Programming the Data Center (http://blogs.zdnet.com/microsoft/?p=18)
• Dryad Home (http://research.microsoft.com/research/sv/dryad/)
• Video of Michael Isard explaining Dryad at Google (http://www.youtube.com/watch?v=WPhE5JCP2Ak)
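The graph-building style described above, where vertices encapsulating sequential code are connected into a DAG with a composition operator, can be mimicked in a few lines. The classes and the >> operator below are hypothetical illustrations of the idea, not Dryad's actual C++ API.

# Hypothetical sketch of Dryad-style graph building: sequential vertices are
# wrapped in nodes and connected with a composition operator.
class GraphNode:
    def __init__(self, name, func):
        self.name, self.func = name, func
        self.edges = []                      # downstream vertices

    def __rshift__(self, other):             # "a >> b" adds an edge a -> b
        self.edges.append(other)
        return other

    def run(self, item):
        out = self.func(item)                # run this vertex's sequential code
        for succ in self.edges:              # push the result along each channel
            succ.run(out)

read   = GraphNode("read",   lambda text: text.split())
count  = GraphNode("count",  lambda words: len(words))
report = GraphNode("report", lambda n: print(f"{n} words"))

read >> count >> report                      # build the DAG
read.run("dryad models programs as dataflow graphs")   # prints "6 words"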

Dynamic infrastructure

Dynamic Infrastructure is an information technology paradigm concerning the design of data centers so that the underlying hardware and software can respond dynamically to changing levels of demand in more fundamental and efficient ways than before. The paradigm is also known as Infrastructure 2.0 and Next Generation Data Center.

The basic premise of Dynamic Infrastructures is to leverage pooled IT resources to provide flexible IT capacity, enabling the seamless, real-time allocation of IT resources in line with demand from business processes. This is achieved by using server virtualization technology to pool computing resources wherever possible, and allocating these resources on-demand using automated tools. This allows for load balancing and is a more efficient approach than keeping massive computing resources in reserve to run tasks that take place, for example, once a month, but are otherwise under-utilized. Top tier vendors promoting dynamic infrastructures include IBM,[1] [2] Microsoft,[3] Sun,[4] Fujitsu,[5] HP [6] and Dell.[7]

Early examples of server-level Dynamic Infrastructures are the FlexFrame for SAP and FlexFrame for Oracle solutions introduced by Fujitsu Siemens Computers (now Fujitsu) in 2003. The FlexFrame approach is to dynamically assign servers to applications on demand, leveling peaks and enabling organizations to maximize the benefit from their IT investments.

Enterprises switching to Dynamic Infrastructures can also reduce costs,[8] improve quality-of-service and make more efficient use of energy through reducing the number of standby or under-utilized machines in their data centers. Dynamic Infrastructures may also be used to provide security and data protection when workloads are moved during migrations, provisioning, enhancing performance or building co-location facilities.[9] [10] Potential benefits of Dynamic Infrastructures include enhancing performance, scalability, system availability and uptime,[11] increasing server utilization and the ability to perform routine maintenance on either physical or virtual systems, all while minimizing interruption to business operations and reducing cost for IT. Dynamic Infrastructures also provide the fundamental business continuity and high availability requirements to facilitate cloud or grid computing.

By reducing redundant capacity, organizations are enabled to make more efficient use of their IT budgets and devote greater proportions of their budget to physical and virtual production servers. Instead of the hot spare principle of keeping second servers on standby to replace all production machines in contingencies for hardware- and software-related failures, Dynamic Infrastructures provide for failover from a smaller pool of spare machines.

Fujitsu's definition: "Dynamic Infrastructures enable customers to assign IT resources dynamically to services as required and to choose sourcing models which best fit their businesses. This brings IT flexibility and efficiency to the next level."[12]

IBM's definition: "A dynamic infrastructure integrates business and IT assets and aligns them with the overall goals of the business while taking a smarter, new and more streamlined approach to helping improve service, reduce cost, and manage risk."[13]

For networking companies, Infrastructure 2.0 refers to the ability of networks to keep up with the movement and scale requirements of new enterprise IT initiatives, especially virtualization and cloud computing. According to companies like Cisco, F5 Networks and Infoblox, network automation and connectivity intelligence between networks, systems, applications and endpoints will be required to reap the full benefits of virtualization and many types of cloud computing. This will require network management and infrastructure to be consolidated, enabling higher levels of dynamic control and connectivity between networks, systems and endpoints.

Need for a holistic approach

Even in the face of global uncertainty, it is the infrastructure that continues to enable commerce and communications – the roads, buildings, power plants, utilities, networks, and technologies connecting and differentiating organizations, competitors and customers. The range of this approach is broader than ever before, and its effect on organizations is equally far-reaching.

Until now, many organizations have thought of physical infrastructure and IT infrastructure as separate. This meant, for example, that airports, roadways, and oil wells were managed in one way, while datacenters, PCs, cell phones, routers, and broadband devices were managed quite differently. Now, the infrastructure of atoms and the infrastructure of bits are merging into an intelligent, global, interconnected, dynamic infrastructure. This convergence of business and IT assets requires an infrastructure that can measure and manage the lifecycle of assets that exist beyond the data center, throughout an organization's entire facilities as well as between one organization and another.

To succeed in today's world of instrumented, interconnected, and intelligent assets, a new approach is needed. The need, therefore, is for a new type of infrastructure that:
• Enables visibility, control and automation across all business and IT assets
• Is highly optimized to achieve more with less
• Addresses the information challenge
• Leverages flexible sourcing like clouds
• Manages and mitigates risks
Organizations need an infrastructure that can propel them forward — not hold them back.

Benefits of having dynamic infrastructures

Dynamic infrastructures take advantage of intelligence gained across the network. By design, every dynamic infrastructure is service-oriented and focused on supporting and enabling the end users in a highly responsive way. It can utilize alternative sourcing approaches, like cloud computing, to deliver new services with agility and speed.

Global organizations already have the foundation for a dynamic infrastructure that will bring together the business and IT infrastructure to create new possibilities. For example:
• Transportation companies can optimize their vehicles' routes leveraging GPS and traffic information.
• Communications companies can better monitor usage by location, user or function, and optimize routing to enhance user experience.
• Utility companies can reduce energy usage with a "smart grid."
• Facilities organizations can secure access to locations and track the movement of assets by leveraging RFID technology.
• Production environments can monitor and manage presses, valves and assembly equipment through embedded electronics.
• Technology systems can be optimized for energy efficiency, managing spikes in demand, and ensuring disaster recovery readiness.

"Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model." – Source: Gartner – "TCO of Traditional Software Distribution vs. Application Virtualization" / Michael A Silver, Terrence Cosgrove, Mark A Margevicious, Brian Gammage / 16 April 2008

"While green issues are a primary driver in 10% of current data center outsourcing and hosting initiatives, cost reductions initiatives are a driver 47% of the time and are now aligned well with green goals. Combining the two means that at least 57% of data center outsourcing and hosting initiatives are driven by green." – Source: Gartner – "Green IT Services as a Catalyst for Cost Optimization" / Kurt Potter / 4 December 2008

"By 2013, more than 50% of midsize organizations and more than 75% of large enterprises will implement layered recovery architectures." – Source: Gartner – "Predicts 2009: Business Continuity Management Juggles Standardization, Cost and Outsourcing Risk" / Roberta J Witty, John P Morency, Dave Russell, Donna Scott, Rober Desisto / 28 January 2009

The key to a business and IT infrastructure that is "dynamic" is leveraging technologies, service delivery and acquisition models that optimize the infrastructure for efficiency and flexibility while transforming management to an automated service delivery and management model.

References
[1] IBM patent: Method For Dynamic Information Technology Infrastructure Provisioning (http://www.freepatentsonline.com/y2007/0294736.html)
[2] IBM's dynamic infrastructure taking shape at TheRegister (http://www.theregister.co.uk/2009/04/29/ibm_storage_apr09/)
[3] Microsoft's view of The Dynamic Datacenter covered by networkworld (http://www.networkworld.com/community/node/27354)
[4] Dynamic Infrastructure at Sun (http://www.sun.com/service/dynamicinfrastructure/index.jsp)
[5] Fujitsu Dynamic Infrastructures (http://ts.fujitsu.com/dynamicinfrastructures)
[6] Dynamic Infrastructure and Blades at HP (http://h18000.www1.hp.com/products/blades/components/matrix/big_picture.html)
[7] Dell Converged Infrastructure (http://www.dell.com/ci)
[8] IDC White Paper Building the Dynamic DataCenter: FlexFrame for SAP (http://docs.ts.fujitsu.com/dl.aspx?id=140d1393-d5ff-4c3b-924d-0c7183ebee65)
[9] Computation on Demand: The Promise of Dynamic Provisioning (http://www.on-demandenterprise.com/features/26054149.html)
[10] An overview of continuous data protection (http://findarticles.com/p/articles/mi_m0BRZ/is_2007_Spring/ai_n19493357/pg_2)
[11] Amazon Elastic Compute Cloud (http://aws.amazon.com/ec2/)
[12] Fujitsu's Dynamic Infrastructures main page (http://ts.fujitsu.com/it_trends/dynamic_infrastructures/index.html)
[13] Dynamic Infrastructure: Delivering superior business and IT services with agility and speed (ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/oiw03021usen/OIW03021USEN.PDF)

External links
• IBM Dynamic Infrastructure (http://www-03.ibm.com/systems/dynamicinfrastructure/)
• HP Converged Infrastructure (http://h18004.www1.hp.com/products/solutions/converged/main.html?jumpid=reg_R1002_USEN/)
• Fujitsu Dynamic Infrastructures (http://ts.fujitsu.com/it_trends/dynamic_infrastructures/index.html)
• Sun Dynamic Infrastructure Suite (http://www.sun.com/systems/dynamicinfrastructure/)
• Dell Converged Infrastructure (http://www.dell.com/ci)
• Microsoft Realizing the potential for dynamic infrastructure (http://technet.microsoft.com/en-us/infrastructure/bb736006.aspx)
• NEC It Takes a Dynamic Infrastructure to sustain growth while staying green (http://www.nec.com/global/corporate-ad/images/it_infrastructure.pdf)
• Seeking Alpha The Network Industry Needs a New Vision — Infrastructure 2.0 (http://seekingalpha.com/article/111346-network-industry-needs-a-new-vision-infrastructure-2-0)
• National Infrastructure Simulation and Analysis Center (http://www.sandia.gov/nisac/)
• Infrastructure 2.0 blog (http://www.infra20.com)
• Infrastructure 2.0 Panel with Cisco, F5, VMware at Future in Review Conference May 2009 (http://vimeo.com/4891610)
• IDC 4th Annual Dynamic Infrastructure Conference (Event) (http://www.idc.com/getdoc.jsp?containerId=IDC_P15254)
• Technorati Dynamic Infrastructure (http://www.technorati.com/videos/tag/dynamic+infrastructure)

• Springer, lou (September 2007). Dynamic Infrastructure, Joyent, SAAS, SOA and the IBM PC (http://blog.louspringer.com/2007/09/27/dynamic-infrastructure-joyent-saas-soa-and-the-ibm-pc). louspringer.com. Retrieved 2008-10-31.
• Reuters. NEC and Promark Deliver The Dynamic Infrastructure (http://www.reuters.com/article/pressRelease/idUS118407+17-Mar-2008+BW20080317) (March 17, 2008).
• Carolan, Jason (PDF). A Dynamic Infrastructure. sun.com (October 2005). Retrieved 2008-10-31.
• Herndon, Bruce. The Datacenter of the Future - It's Already Here! (http://www.vmworld.com/static/sessions/2008/PO2596.pdf). vmworld.com (October 2008). Retrieved 2008-10-31.
• Ernst, Ann. Dynamic Infrastructures: Taking Business Continuity to the Next Level (http://www.bizvoicemagazine.com/archives/08sepoct/PassItOn-Infrastructures.pdf). bizvoicemagazine.com (September 2008).
• OpenDI Vision and High Level Design Overview (http://kenai.com/downloads/opendi/opendiR1-vision-high-level-design_v16.pdf). Retrieved 2010-08-23.
• Virtual Iron: Dynamic Infrastructure for the Data Center (http://www.virtual-strategy.com/Migration/Virtual-Iron-Dynamic-Infrastructure-for-the-Data-Center.html). virtual-strategy.com.
• Sun Dynamic Infrastructures Wiki (http://wikis.sun.com/display/DI/DI+Home)

Edge computing

Edge computing provides application processing load balancing capacity to corporate and other large-scale web servers. It is like an application cache, where the cache is in the Internet itself. Static web sites being cached on mirror sites is not a new concept, but Edge computing replicates fragments of information across distributed networks of web servers, which may be vast and include many networks. Mirroring transactional and interactive systems is, however, a much more complex endeavor. Edge computing is also referred to as mesh computing, peer-to-peer computing, autonomic (self-healing) computing, grid computing, and other names implying non-centralized, nodeless availability.

Previously available only to very large corporate and government organizations, technology advancement and cost reduction for large-scale implementations have made the technology available to small and medium-sized business. The target end-user is any Internet client making use of commercial Internet application services. To ensure acceptable performance of widely-dispersed distributed services, large organizations typically implement Edge computing by deploying Web server farms with clustering. Edge computing imposes certain limitations on the choices of technology platforms, applications or services, all of which need to be specifically developed or configured for edge computing.

Overview
As the name implies, Edge computing pushes applications, data and computing power (services) away from centralized points to the logical extremes of a network. As a topological paradigm, Edge computing has many advantages:
1. Edge application services significantly decrease the data volume that must be moved, the consequent traffic, and the distance the data must go, thereby reducing transmission costs, shrinking latency, and improving quality of service (QoS).
2. Edge computing eliminates, or at least de-emphasizes, the core computing environment, limiting or removing a major bottleneck and a potential point of failure.
3. Security is also improved as encrypted data moves further in, toward the network core. As it approaches the enterprise, the data is checked as it passes through protected firewalls and other security points, where viruses, compromised data, and active hackers can be caught early on.

4. The ability to "virtualize" (i.e., logically group CPU capabilities on an as-needed, real-time basis) extends scalability. The Edge computing market is generally based on a "charge for network services" model, and it could be argued that typical customers for Edge services are organizations desiring linear scale of business application performance to the growth of, e.g., a subscriber base.

Grid computing
Edge computing and Grid computing are related. Whereas Grid computing would be hardcoded into a specific application to distribute its complex and resource intensive computational needs across a global grid of cheap networked machines, Edge computing provides a generic template facility for any type of application to spread its execution across a dedicated grid of prepared expensive machines.

External links
• Akamai [1]
• Exinda - Edge Cache implementation press release [2]
• GeoElastic - Adhoc Geo-Targeted Computing Alliance [3]
• GeoStratus.com - Geo-Targeted Private Content Delivery Network Platform (pCDN) [4]

Companies providing edge computing services
• Akamai Technologies
• EdgeCast Networks
• Exinda
• Limelight Networks
• Mirror Image Internet

References
[1] http://www.akamai.com/en/html/technology/edgecomputing_howitworks.html
[2] http://www.exinda.com/cms__Main?name=exinda-introduces-the-exinda-edge-cache
[3] http://www1.
[4] http://www.geostratus.com

Explicit multi-threading

Explicit Multi-Threading (XMT) is a computer science paradigm for building and programming parallel computers designed around the Parallel Random Access Machine (PRAM) parallel computational model. Explicit Multi-Threading (XMT) is a computing paradigm for building and programming multi-core computers with tens, hundreds or thousands of processor cores. Multi-core computers are built around two or more processor cores integrated on a single integrated circuit die. They are widely used across many application domains including general-purpose computing. The XMT paradigm was introduced by Uzi Vishkin.

The random access machine (RAM) is an abstract machine model used in computer science to study algorithms and complexity for standard serial computing. The PRAM computational model is an abstract parallel machine model that had been introduced to similarly study parallel algorithms and complexity for parallel computing, when they were yet to be built. Researchers have developed a large body of knowledge of parallel algorithms for the PRAM model. These parallel algorithms are also known for being simple, by standards of other approaches to parallel algorithms. This large body of parallel algorithms knowledge for the PRAM model and their relative simplicity motivated building computers whose programming can be guided by these parallel algorithms. Since productivity of parallel programmers has long been considered crucial for the success of a parallel computer, simplicity of algorithms is important.

A more direct explanation of XMT starts with the rudimentary abstraction that made serial computing simple: that any single instruction available for execution in a serial program executes immediately. A consequence of this abstraction is a step-by-step (inductive) explication of the instruction available next for execution. The rudimentary parallel abstraction behind XMT, dubbed Immediate Concurrent Execution (ICE) in Vishkin (2011), is that indefinitely many instructions available for concurrent execution execute immediately. A consequence of ICE is a step-by-step (inductive) explication of the instructions available next for concurrent execution. Moving beyond the serial von Neumann computer (the only successful general purpose platform to date), the aspiration of XMT is that computer science will again be able to augment mathematical induction with a simple one-line computing abstraction.

The main levels of abstraction of XMT
The Explicit Multi-Threading (XMT) computing paradigm integrates several levels of abstraction.

The work-time (WT) (sometimes called work-depth) framework, introduced by Shiloach & Vishkin (1982), provides a simple way for conceptualizing and describing parallel algorithms. In the WT framework, a parallel algorithm is first described in terms of parallel rounds. For each round, the operations to be performed are characterized, but several issues can be suppressed. For example, the number of operations at each round need not be clear, processors need not be mentioned and any information that may help with the assignment of processors to jobs need not be accounted for. Second, the suppressed information is provided. The inclusion of the suppressed information is, in fact, guided by the proof of a scheduling theorem due to Brent (1974). The WT framework is useful since while it can greatly simplify the initial description of a parallel algorithm, inserting the details suppressed by that initial description is often not very difficult. For example, the WT framework was adopted as the basic presentation framework in the parallel algorithms books (for the PRAM model) JaJa (1992) and Keller, Kessler & Traeff (2001), as well as in the class notes Vishkin (2009). Vishkin (2011) explains the simple connection between the WT framework and the more rudimentary ICE abstraction noted above.

The XMT paradigm includes a programmer's workflow that starts with casting an algorithm in the WT framework and proceeds to programming it in XMTC. The XMT paradigm can be programmed using XMTC, a parallel multi-threaded programming language which is a small extension of the programming language C.
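To make the work-time style concrete, the following minimal C++ sketch describes parallel array summation as a sequence of rounds and then simulates the rounds serially; it is not XMTC, and the array contents are arbitrary example values. In each round the operations are listed without saying which processor performs which operation, which is exactly the information the WT framework allows one to suppress.

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> a = {3, 1, 4, 1, 5, 9, 2, 6};   // example input; sum is 31
    std::size_t n = a.size();

    // Round for a given stride: "for all i with i + stride < n, in steps of
    // 2*stride, do a[i] += a[i + stride]" -- all such operations form one
    // parallel round. After about log2(n) rounds the total is in a[0].
    // Work = O(n), depth (number of rounds) = O(log n).
    for (std::size_t stride = 1; stride < n; stride *= 2) {
        for (std::size_t i = 0; i + stride < n; i += 2 * stride) {
            a[i] += a[i + stride];   // on a PRAM/XMT machine these iterations run concurrently
        }
    }
    std::cout << "sum = " << a[0] << '\n';   // prints 31
    return 0;
}
```

Filling in the suppressed details (how the concurrent operations of each round are assigned to processors) is the second step of the WT methodology, guided by Brent's scheduling theorem.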

The XMT multi-core computer systems provide run-time load-balancing of multi-threaded programs, incorporating several patents. One of them [1] generalizes the program counter concept, which is central to the von Neumann architecture, to multi-core hardware.

XMT prototyping and links to more information
In January 2007, a 64-processor computer [2] named Paraleap [3], that demonstrates the overall concept, was completed. The XMT concept was presented in Vishkin et al. (1998) and Naishlos et al. (2003) and the XMT 64-processor computer in Wen & Vishkin (2008). Since making parallel programming easy is one of the biggest challenges facing computer science today, the demonstration also sought to include teaching the basics of PRAM algorithms and XMTC programming to students ranging from high-school Torbert et al. (2010) to graduate school.

References
• Brent, Richard P. (1974). "The parallel evaluation of general arithmetic expressions". Journal of the ACM 21: 201–208.
• JaJa, Joseph (1992). An Introduction to Parallel Algorithms. Addison-Wesley. ISBN 0-201-54856-9.
• Keller, Jorg; Kessler, Cristoph W.; Traeff, Jesper L. (2001). Practical PRAM Programming. Wiley-Interscience. ISBN 0-471-35351-5.
• Naishlos, Dorit; Nuzman, Joseph; Tseng, Chau-Wen; Vishkin, Uzi (2003). "Towards a First Vertical Prototyping of an Extremely Fine-Grained Parallel Programming Approach" [4]. Theory of Computer Systems (Special Issue of 2001 ACM Symp. on Parallel Algorithms and Architecture) 36: 551–552.
• Shiloach, Yossi; Vishkin, Uzi (1982). "An O(n2 log n) parallel max-flow algorithm". Journal of Algorithms 3: 128–146.
• Torbert, Shane; Vishkin, Uzi; Tzur, Ron; Ellison, David (2010). "Is teaching parallel algorithmic thinking to high-school student possible? One teacher's experience." Proc. ACM Technical Symposium on Computer Science Education (SIG CSE), Milwaukee, WI, March 10-13, 2010.
• Vishkin, Uzi; Dascal, Shlomit; Berkovich, Efraim; Nuzman, Joseph (1998). "Explicit Multi-Threading (XMT) bridging models for instruction parallelism" [5]. Proc. 1998 ACM Symposium on Parallel Algorithms and Architectures (SPAA), pp. 140–151.
• Vishkin, Uzi (2009). Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques, 104 pages [6]. Class notes of courses on parallel algorithms taught since 1992 at the University of Maryland, College Park, Tel Aviv University and the Technion.
• Vishkin, Uzi (2011). "Using simple abstraction to reinvent computing for parallelism". Communications of the ACM, Volume 54 Issue 1, January 2011: 75–85. doi:10.1145/1866739.1866757.
• Wen, Xingzhi; Vishkin, Uzi (2008). "FPGA-based prototype of a PRAM-on-chip processor" [7]. Proc. 2008 ACM Conference on Computing Frontiers (Ischia, Italy), pp. 55–66. doi:10.1145/1366230.1366240.

Notes
[1] Vishkin, Uzi. Spawn-join instruction set architecture for providing explicit multithreading. U.S. Patent 6,463,527. See also Vishkin et al. (1998).
[2] University of Maryland, press release, June 26, 2007: "Next Big "Leap" in Computing Technology Gets a Name" (http://www.newsdesk.umd.edu/scitech/release.cfm?ArticleID=1459).
[3] University of Maryland, A. James Clark School of Engineering, press release, November 28, 2007: "Maryland Professor Creates Desktop Supercomputer" (http://www.eng.umd.edu/media/pressreleases/pr112707_superwinner.html).
[4] http://www.umiacs.umd.edu/users/vishkin/XMT/spaa01-j-03.pdf
[5] http://www.umiacs.umd.edu/users/vishkin/XMT/spaa98.ps
[6] http://www.umiacs.umd.edu/users/vishkin/PUBLICATIONS/classnotes.pdf
[7] http://www.umiacs.umd.edu/users/vishkin/XMT/CompFrontiers08.pdf

External links
• Home page of the XMT project, with links to a software release, on-line tutorial and to material for teaching parallelism (http://www.umiacs.umd.edu/~vishkin/XMT/index.shtml)

Fabric computing

Fabric computing or unified computing involves the creation of a computing fabric consisting of interconnected nodes that look like a 'weave' or a 'fabric' when viewed collectively from a distance.[1]

Usually this refers to a consolidated high-performance computing system consisting of loosely coupled storage, networking and parallel processing functions linked by high bandwidth interconnects (such as 10 Gigabit Ethernet and InfiniBand),[2] but the term has also been used to describe platforms like the Azure Services Platform and grid computing in general (where the common theme is interconnected nodes that appear as a single logical unit).[3]

The fundamental components of fabrics are "nodes" (processor(s), memory, and/or peripherals) and "links" (functional connection between nodes).[3] While the term "fabric" has also been used in association with storage area networks and switched fabric networking, the introduction of compute resources provides a complete "unified" computing system. Other terms used to describe such fabrics include "unified fabric",[4] "data center fabric" and "unified data center fabric".[5]

According to Ian Foster, director of the Computation Institute at the Argonne National Laboratory and University of Chicago, "grid computing 'fabrics' are now poised to become the underpinning for next-generation enterprise IT architectures and be used by a much greater part of many organizations."[6] [7]

Brocade, Cisco, HP and Egenera currently manufacture computing fabric equipment.

History
While the term has been in use since the mid to late 1990s,[2] the growth of cloud computing and Cisco's evangelism of unified data center fabrics[8] followed by unified computing (an evolutionary data center architecture whereby blade servers are integrated or unified with supporting network and storage infrastructure[9]) starting March 2009 has renewed interest in the technology. There have been mixed reactions to Cisco's architecture, particularly from rivals who claim that these proprietary systems will lock out other vendors. Analysts claim that this "ambitious new direction" is "a big risk" as companies like IBM and HP, who have previously partnered with Cisco on data center projects (accounting for $2-3bn of Cisco's annual revenue), are now competing with them.[9] [10] Other companies offering unified or fabric computing systems include Liquid Computing Corporation and Egenera.[2]

Key characteristics
The main advantages of fabrics are that a massive concurrent processing combined with a huge, tightly-coupled address space makes it possible to solve huge computing problems (such as those presented by delivery of cloud computing services) and that they are both scalable and able to be dynamically reconfigured.[2] Challenges include a non-linearly degrading performance curve, whereby adding resources does not linearly increase performance (which is a common problem with parallel computing), and maintaining security.[2]

References
[1] What Is: The Azure Fabric and the Development Fabric (http://azure.snagy.name/blog/?p=84)
[2] Massively distributed computing using computing fabrics (http://www.dominopower.com/issuesprint/issue199810/fabric.html)
[3] Grid computing: The term may fade, but features will live on (http://www.techworld.com/opsys/features/index.cfm?featureid=3614)
[4] Unified Fabric: Benefits and Architecture of Virtual I/O (http://www.cisco.com/en/US/prod/collateral/ps6418/ps6423/ps6429/prod_white_paper0900aecd80337bb8.html)
[5] Intel: Data Center Fabric (http://communities.intel.com/openport/blogs/server/2008/02/13/data-center-fabric)
[6] Toolbox for IT: Data Center Fabric (http://it.toolbox.com/wiki/index.php/Data_Center_Fabric)
[7] Switch maker introduces a 'Data Center Fabric' architecture (http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9043698)
[8] Cisco: Unified Data Center Fabric: Reduce Costs and Improve Flexibility (http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-462181.html)
[9] "Cisco launches Unified Computing push with new blade server" (http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9129718&intsrc=news_ts_head). ComputerWorld. 2009-03-16. Retrieved 2009-03-17.
[10] "Cisco to sell servers aimed at data centers" (http://www.reuters.com/article/technologyNews/idUSTRE52F68W20090316). Reuters. 2009-03-16. Retrieved 2009-03-17.

External links
• Cisco Unified Computing and Servers (http://www.cisco.com/en/US/products/ps10265/index.html/)
• HP Converged Infrastructure (http://h18004.www1.hp.com/products/solutions/converged/main.html?jumpid=reg_R1002_USEN/)

Fallacies of Distributed Computing

Peter Deutsch asserted that programmers new to distributed applications invariably make a set of assumptions known as the Fallacies of Distributed Computing, and that all of these assumptions ultimately prove false, resulting either in the failure of the system, a substantial reduction in system scope, or in large unplanned expenses required to redesign the system to meet its original goals.

The fallacies
The fallacies are summarized as follows:[1]
1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn't change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.

Effects of the Fallacies
1. Ignorance of network latency, and of the packet loss it can cause, induces application- and transport-layer developers to allow unbounded traffic, greatly increasing dropped packets and wasting bandwidth.
2. Ignorance of bandwidth limits on the part of traffic senders can result in bottlenecks over frequency-multiplexed mediums.
3. Complacency regarding network security results in being blindsided by malicious users and programs that continually adapt to security measures.[2]
4. Multiple administrators, as with subnets for rival companies, may institute conflicting policies of which senders of network traffic must be aware in order to complete their desired paths.
5. The "hidden" costs of building and maintaining a network or subnet are non-negligible and must consequently be noted in budgets to avoid vast shortfalls.

History
The list of fallacies generally came about at Sun Microsystems. Peter Deutsch, one of the original Sun "Fellows," is credited with penning the first seven fallacies in 1994. Around 1997, James Gosling, another Sun Fellow and the inventor of Java, added the eighth fallacy. Bill Joy and Tom Lyon had already identified the first four as "The Fallacies of Networked Computing"[3] (the article claims "Dave Lyon", but this is considered a mistake).

References
[1] "The Eight Fallacies of Distributed Computing" (http://blogs.sun.com/jag/resource/Fallacies.html).
[2] "Malware Defensive Techniques Will Evolve as Security Arms Race Continues" (http://www.eweek.com/c/a/Security/Malware-Defensive-Techniques-Will-Evolve-as-Security-Arms-Race-Continues-331833/).
[3] "Deutsch's Fallacies, 10 Years After" (http://java.sys-con.com/read/38665.htm).
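As a small, hedged illustration of programming against fallacy 1 (and, implicitly, fallacy 2), the sketch below wraps a placeholder network call in bounded retries with exponential backoff; send_request and the retry limits are invented for this example and stand in for any real RPC or HTTP call.

```cpp
#include <chrono>
#include <iostream>
#include <thread>

bool send_request() {
    // Placeholder: a real implementation would perform the network operation
    // and report whether it succeeded.
    return false;
}

bool send_with_retries(int max_attempts) {
    auto delay = std::chrono::milliseconds(100);
    for (int attempt = 1; attempt <= max_attempts; ++attempt) {
        if (send_request()) {
            return true;                          // success: stop retrying
        }
        std::cerr << "attempt " << attempt << " failed, backing off\n";
        std::this_thread::sleep_for(delay);       // latency is not zero either
        delay *= 2;                               // exponential backoff keeps traffic bounded
    }
    return false;                                 // surface the failure to the caller
}

int main() {
    if (!send_with_retries(4)) {
        std::cerr << "remote service unavailable; degrade gracefully\n";
    }
    return 0;
}
```

The point is not the specific policy but that the failure mode is made explicit instead of being assumed away.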

External links
• The Eight Fallacies of Distributed Computing (http://blogs.sun.com/jag/resource/Fallacies.html)
• Fallacies of Distributed Computing Explained (http://www.rgoarchitects.com/Files/fallacies.pdf) by Arnon Rotem-Gal-Oz

Fragmented object

Fragmented objects are truly distributed objects. The concept is a novel design principle extending the traditional concept of stub based distribution. In contrast to distributed objects, they are physically distributed and encapsulate the distribution in the object itself. Parts of the object — named fragments — may exist on different nodes and provide the object's interface. Each client accessing a fragmented object by its unique object identity presumes a local fragment. Therefore clients cannot distinguish between the access of a local object, a local stub or a local fragment. Thus, a downward compatibility to stub based distribution is ensured. Full transparency is gained by the following characteristics of fragmented objects.

Arbitrary internal structure
The internal structure of a fragmented object is arranged by the object developer/deployer. It may be client–server, hierarchical, peer-to-peer and others. Fragmented objects may act like an RPC-based infrastructure or a (caching) smart proxy as well.

Arbitrary internal communication
Arbitrary protocols may be chosen for the internal communication between the fragments. For instance, this allows to hide real-time protocols (e.g., RTP for media streaming) behind a standard CORBA interface.

Arbitrary internal configuration
As both the distribution of state and functionality are hidden behind the object interface, their respective distribution over the fragments is also arbitrary. The object developer can migrate the state and the functionality over the fragments by providing different fragment implementations. Those dynamically change the inside of the fragmented object. In addition, an application using a fragmented object can also tolerate a change in distribution, which is achieved by exchanging the fragment at one or multiple hosts. This procedure can either be triggered by a user who changes object properties or by the fragmented object itself (that is, the collectivity of its fragments), e.g., when some fragment is considered to have failed. Of course an exchange request may trigger one or more other internal changes. A flexible internal partitioning is achieved, providing transparent fault-tolerant replications as well.
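The idea that a client sees only an interface, while the fragment behind it may hold the state locally or forward to fragments on other nodes, can be sketched as follows. This is a hypothetical illustration; the class names are invented, and a real fragmented-object system would marshal the forwarded calls over a network protocol rather than a plain reference.

```cpp
#include <iostream>

class Counter {                         // the object's interface, as seen by every client
public:
    virtual ~Counter() = default;
    virtual void increment() = 0;
    virtual long value() const = 0;
};

class LocalFragment : public Counter {  // fragment that keeps the state locally
    long count_ = 0;
public:
    void increment() override { ++count_; }
    long value() const override { return count_; }
};

class ForwardingFragment : public Counter {  // stub-like fragment: delegates elsewhere
    Counter& peer_;                          // stand-in for a remote fragment/transport
public:
    explicit ForwardingFragment(Counter& peer) : peer_(peer) {}
    void increment() override { peer_.increment(); }   // a real fragment would marshal this call
    long value() const override { return peer_.value(); }
};

int main() {
    LocalFragment home;                     // fragment holding the state
    ForwardingFragment client_side(home);   // fragment a remote client would bind to
    client_side.increment();
    client_side.increment();
    std::cout << client_side.value() << '\n';   // prints 2; the client never sees the split
    return 0;
}
```

Exchanging ForwardingFragment for a caching or replicating implementation would change the internal configuration without affecting the client, which is the transparency property described above.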

Projects
• Aspectix [1] - The Aspectix group works on several projects that focus on middleware architecture, aspect-oriented programming, fault tolerance, and automated source-code transformation.
• FORMI [2] - FORMI is an extension of Java RMI.
• Globe [3] - In this research we are looking at a powerful unifying paradigm for the construction of large-scale wide area distributed systems: distributed shared objects.
• SOS [4] - The SOMIW object-oriented Operating System.

References
• Structure and Encapsulation in Distributed Systems: the Proxy Principle [5]
• Fragmented objects for distributed abstractions [6]
• Globe: A Wide-Area Distributed System [7]
• Integrating fragmented objects into a CORBA environment [8]
• FORMI: An RMI Extension for Adaptive Applications [9]
• FORMI: Integrating Adaptive Fragmented Objects into Java RMI [10]

References
[1] http://aspectix.org
[2] http://aspectix.org/formi
[3] http://www.cs.vu.nl/globe/
[4] http://www-sor.inria.fr/projects/sos/
[5] http://citeseer.ist.psu.edu/shapiro86structure.html
[6] http://citeseer.ist.psu.edu/makpangou92fragmented.html
[7] http://www.cs.vu.nl/~ast/publications/ieeeconc-1999.pdf
[8] http://www4.informatik.uni-erlangen.de/Publications/pdf/Reiser-Hauck-Kapitza-Schmied-Fragments.pdf
[9] http://middleware05.objectweb.org/WSProceedings/ARM05/a2-kapitza.pdf
[10] http://dsonline.computer.org/portal/site/dsonline/menuitem.9ed3d9924aeb0dcd82ccc6716bbe36ec/index.jsp?&pName=dso_level1&path=dsonline/2006/10&file=o10001.xml&xsl=article.xsl&.jsessionid=HT0pf1n2TGvnRGN2vhBQBX8xQvdBF1tzts4hTfslFZQjyr2nqhzK!-648338668

Gemstone (database)

GemStone Database Management System
Paradigm(s): Application framework
Appeared in: 1991
Influenced by: Smalltalk, Object-oriented programming
Influenced: Java EE

GemStone is a proprietary application framework that was first available for Smalltalk as an object database. GemStone builds on the Smalltalk programming language.

GemStone Systems was founded in 1982 as Servio Logic, and then became GemStone Systems, Inc in 1995. GemStone developed its first prototype in 1982, and shipped its first product in 1986. Three of the original co-founding engineers, Bob Bretl, Allen Otis and Monty Williams, have been with the company since its inception. The engineering group resides in Beaverton, Oregon.

GemStone's owners pioneered implementing distributed computing in business systems. Many information system features now associated with Java EE were implemented earlier in GemStone. GemStone and VisualWave were an early web application server platform (VisualWave and VisualWorks are now owned by Cincom). GemStone played an important sponsorship role in the Smalltalk Industry Council at the time when IBM was backing VisualAge Smalltalk (VA is now at Instantiations [1]). After a major transition, GemStone for Smalltalk continues as "GemStone/S" alongside various C++ and Java products for scalable, multi-tier distributed systems. GemStone Systems, Inc. now develops and markets GemFire, which is notable for CEP (complex event processing), Event Stream Processing, data virtualization and distributed caching.

Although Gemstone isn't often mentioned in print, GemStone systems serve as mission-critical applications[2] even though many computing industry business publications focus attention on other ecosystems and languages, such as Java or C# for Microsoft .NET, for new development. Systems based on object databases are not as common as those based on ORM or Object-relational mapping frameworks such as TopLink or Hibernate. In the area of web application frameworks, SpringSource, JBoss and BEA Weblogic are somewhat analogous to GemStone. A recent revival of interest in Smalltalk has occurred as a result of its use to generate Javascript for e-commerce web pages or in web application frameworks such as the Seaside web framework. GemStone frameworks still see some interest for web services and service-oriented architectures.

On May 6, 2010, SpringSource, a division of VMware, Inc., announced it had entered into a definitive agreement to acquire GemStone.[3]

References
[1] http://www.instantiations.com
[2] Slovenian national gas operator has its billing system running on Smalltalk for 10 years (http://groups.google.com/group/comp.lang.smalltalk/msg/9560a50c14522f13)
[3] SpringSource acquires Gemstone Systems (http://www.gemstone.com/news/2010/05/06/springsource-acquires-gemstone-systems/) (Retrieved May 23, 2011)

External links
• Official website (http://www.gemstone.com)
• GemStone FAQ (v.1.1.0) (http://www.faqs.org/faqs/databases/GemStone-FAQ/)

HyperText Computer

The HyperText Computer (HTC) has been proposed as a model computer. Built on the Hypertext Transfer Protocol (HTTP), the HTC is a general-purpose computer. In its basic instruction set, every operator is implemented by an HTTP request and every operand is a URL referring to a document.

The HTC is a model of a computer built from the ground up containing no implicit information about locality or technology. Locally available processing capacity and storage is presented in the same way as remote processing and storage — that is, as the ability to fulfill HTTP requests. Computers with just enough processing power to run an instance of a user agent can access the same applications as those with additional processing power and storage available. In this case, unplugging the local computing resources does not impact the user's or the programmer's view in any way.

The HTC is a foundational model for distributed computing. The transition from computers being connected by networks to the network as a computer has been anticipated for some time. As noted by Cisco's Giancarlo [1], IP networking is rivaling computer backplane speeds, leading him to observe that "It's time to move the backplane on to the network and redesign the computer". The HTC is a redesign of the computer. Technologies like Ajax at the presentation level and iSCSI at the transport level are so undermining the Fallacies of Distributed Computing that inter- and intra-computer communications not carried over IP are looking like special case optimizations. In this case, other issues such as intellectual property will dominate decisions as to where and how processing is done.

External links
• HyperText Computer Blog [2]

References
[1] http://blogs.zdnet.com/BTL/?p=1945
[2] http://www.davidpratten.com/category/hypertext-computer/
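Purely as an illustration of the instruction-set idea above, the sketch below formats one HTC-style instruction as an HTTP request whose operands are document URLs. The /add path, the host name and the operand encoding are invented for this example; the HTC proposal as described here does not fix a concrete wire format.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Compose the text of an HTTP request representing one HTC-style instruction:
// the operator names the request target, each operand is a document URL.
std::string htc_instruction(const std::string& op, const std::vector<std::string>& operand_urls) {
    std::string body;
    for (const auto& url : operand_urls) {
        body += "operand=" + url + "\n";          // every operand is a URL referring to a document
    }
    return "POST /" + op + " HTTP/1.1\r\n"
           "Host: htc.example\r\n"
           "Content-Length: " + std::to_string(body.size()) + "\r\n\r\n" + body;
}

int main() {
    // The result of such an operation would itself be addressable as a document URL,
    // whether it is computed locally or by a remote node.
    std::cout << htc_instruction("add", {"http://htc.example/doc/a", "http://htc.example/doc/b"});
    return 0;
}
```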

High level architecture (simulation)

The High Level Architecture (HLA) is a general purpose architecture for distributed computer simulation systems. Using HLA, computer simulations can interact (that is, communicate data and synchronize actions) with other computer simulations regardless of the computing platforms. The interaction between simulations is managed by a Run-Time Infrastructure (RTI).

Technical overview
A High Level Architecture consists of the following components:
• Interface Specification, that defines how HLA compliant simulators interact with the Run-Time Infrastructure (RTI). The RTI provides a programming library and an application programming interface (API) compliant to the interface specification. Many RTIs provide APIs in C++ and the Java programming languages.
• Object Model Template (OMT), that specifies what information is communicated between simulations, and how it is documented.
• Rules, that simulations must obey in order to be compliant to the standard.

Interface specification
The interface specification is object oriented. The interface specification is divided into service groups:
• Federation Management
• Declaration Management
• Object Management
• Ownership Management
• Time Management
• Data Distribution Management
• Support Services

Object model template
The object model template (OMT) provides a common framework for the communication between HLA simulations. OMT consists of the following documents:
• Federation Object Model (FOM). The FOM describes the shared objects, attributes and interactions for the whole federation.
• Simulation Object Model (SOM). A SOM describes the shared objects, attributes and interactions used for a single federate.

Common HLA terminology
• Federate: an HLA compliant simulation entity.
• Federation: multiple simulation entities connected via the RTI using a common OMT.
• Object: a collection of related data sent between simulations.
• Attribute: data field of an object.
• Interaction: event sent between simulation entities.
• Parameter: data field of an interaction.
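As a very rough illustration of how a federate drives these service groups, here is a hedged sketch. The demo_rti wrapper class, its method names, and the federation, object class and attribute names are all invented for this example; they are not the IEEE 1516 federate interface, which is considerably richer.

```cpp
#include <iostream>
#include <string>

namespace demo_rti {                      // hypothetical RTI wrapper, for illustration only
struct Ambassador {
    void joinFederation(const std::string& federation, const std::string& federate) {
        std::cout << federate << " joined " << federation << '\n';        // Federation Management
    }
    void publishAttribute(const std::string& objectClass, const std::string& attribute) {
        std::cout << "publishing " << objectClass << "." << attribute << '\n';  // Declaration Management
    }
    void updateAttribute(const std::string& attribute, double value) {
        std::cout << attribute << " = " << value << '\n';                 // Object Management
    }
};
}  // namespace demo_rti

int main() {
    demo_rti::Ambassador rti;
    rti.joinFederation("FuelEconomyFederation", "CarFederate");  // names are examples only
    rti.publishAttribute("Car", "FuelLevel");    // the class/attribute would be declared in the federate's SOM
    rti.updateAttribute("FuelLevel", 42.5);      // a real RTI reflects this update to subscribing federates
    return 0;
}
```

In an actual federation the RTI library, not the federate, is responsible for delivering the update to every federate that subscribed to the attribute.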

HLA rules
The HLA rules describe the responsibilities of federations and the federates that join.[1]
1. Federations shall have an HLA Federation Object Model (FOM), documented in accordance with the HLA Object Model Template (OMT).
2. In a federation, all representation of objects in the FOM shall be in the federates, not in the run-time infrastructure (RTI).
3. During a federation execution, all exchange of FOM data among federates shall occur via the RTI.
4. During a federation execution, federates shall interact with the run-time infrastructure (RTI) in accordance with the HLA interface specification.
5. During a federation execution, an attribute of an instance of an object shall be owned by only one federate at any given time.
6. Federates shall have an HLA Simulation Object Model (SOM), documented in accordance with the HLA Object Model Template (OMT).
7. Federates shall be able to update and/or reflect any attributes of objects in their SOM and send and/or receive SOM object interactions externally, as specified in their SOM.
8. Federates shall be able to transfer and/or accept ownership of an attribute dynamically during a federation execution, as specified in their SOM.
9. Federates shall be able to vary the conditions under which they provide updates of attributes of objects, as specified in their SOM.
10. Federates shall be able to manage local time in a way that will allow them to coordinate data exchange with other members of a federation.

Federation Development and Execution Process (FEDEP)
FEDEP, IEEE 1516.3-2003, is a standardized and recommended process for developing interoperable HLA based federations. FEDEP is an overall framework overlay that can be used together with many other, commonly used development methodologies.

Distributed Simulation Engineering and Execution Process (DSEEP)
In spring 2007 SISO started revising the FEDEP. It has been renamed to Distributed Simulation Engineering and Execution Process (DSEEP) and is now an active standard, IEEE 1730-2010 (instead of IEEE 1516.3).

Base Object Model
The Base Object Model (BOM) is a new concept created by SISO [2] to provide better reuse and composability for HLA simulations, and is highly relevant for HLA developers. More information can be found at Boms.info [3].

Standards
HLA is defined under IEEE Standard 1516:
• IEEE 1516-2010 - Standard for Modeling and Simulation High Level Architecture - Framework and Rules
• IEEE 1516.1-2010 - Standard for Modeling and Simulation High Level Architecture - Federate Interface Specification
• IEEE 1516.2-2010 - Standard for Modeling and Simulation High Level Architecture - Object Model Template (OMT) Specification

• IEEE 1516.3-2003 - Recommended Practice for High Level Architecture Federation Development and Execution Process (FEDEP)
• IEEE 1516.4-2007 - Recommended Practice for Verification, Validation, and Accreditation of a Federation, an Overlay to the High Level Architecture Federation Development and Execution Process

Machine-readable parts of the standard, such as XML Schemas, C++, Java and WSDL APIs, as well as FOM/SOM samples, can be downloaded from the IEEE 1516 download area of the IEEE web site [4]. The full standards texts are available at no extra cost to SISO [5] members or can be purchased from the IEEE shop [6].

Previous version:
• IEEE 1516-2000 - Standard for Modeling and Simulation High Level Architecture - Framework and Rules
• IEEE 1516.1-2000 - Standard for Modeling and Simulation High Level Architecture - Federate Interface Specification
• IEEE 1516.1-2000 Errata (2003-oct-16) [7]
• IEEE 1516.2-2000 - Standard for Modeling and Simulation High Level Architecture - Object Model Template (OMT) Specification

See also:
• Department of Defense (DoD) Interpretations of the IEEE 1516-2000 series of standards, Release 2 (2003-jul-01) [8]
• SISO-STD-004-2004 - Dynamic Link Compatible HLA API Standard for the HLA Interface Specification Version 1.3 [9]
• SISO-STD-004.1-2004 - Dynamic Link Compatible HLA API Standard for the HLA Interface Specification (IEEE 1516.1 Version) [10]

Prior to publication of IEEE 1516, the HLA standards development was sponsored by the US Defense Modeling and Simulation Office. The first complete version of the standard, published 1998, was known as HLA 1.3.

DLC API
SISO has developed a complementary HLA API specification known as the Dynamic Link Compatible (DLC) API. The DLC API addresses a limitation of the IEEE 1516 and 1.3 API specifications, whereby federate recompilation was necessary for each different RTI implementation. Note that this API has since been superseded by the HLA Evolved APIs.

STANAG 4603
HLA (in both the current IEEE 1516 version and its ancestor "1.3" version) is the subject of the NATO standardization agreement (STANAG 4603) for modeling and simulation: Modeling And Simulation Architecture Standards For Technical Interoperability: High Level Architecture (HLA).

HLA Evolved
The IEEE 1516 standard has been revised under the SISO HLA-Evolved Product Development Group and was approved 25-Mar-2010 by the IEEE Standards Activities Board. The revised IEEE 1516-2010 standard includes current DoD standard interpretations and the EDLC API, an extended version of the SISO DLC API, informally known as Evolved DLC APIs (EDLC). Other major improvements include:
• Extended XML support for FOM/SOM, such as Schemas and extensibility
• Fault tolerance support services
• Web Services (WSDL) support/API
• Modular FOMs
• Update rate reduction
• Encoding helpers
• Extended support for additional transportation (such as QoS, IPv6, ...)
• Standardized time representations

Books
• Creating Computer Simulation Systems: An Introduction to the High Level Architecture [11]

References
[1] U.S. Department of Defense, Defense Modeling and Simulation Office (2001). RTI 1.3-Next Generation Programmer's Guide Version 4.
[2] http://www.sisostds.org
[3] http://www.boms.info
[4] http://standards.ieee.org/downloads/1516/
[5] http://www.sisostds.org/
[6] http://shop.ieee.org
[7] http://standards.ieee.org/reading/ieee/updates/errata/1516.1-2000.pdf
[8] https://www.dmso.mil/public/library/projects/hla/rti/DoD_interps_1516_Release_2.doc
[9] http://www.sisostds.org/index.php?tg=fileman&idx=get&id=5&gr=Y&path=SISO+Products%2FSISO+Standards&file=SISO-STD-004-2004-Final.pdf
[10] http://www.sisostds.org/index.php?tg=fileman&idx=get&id=5&gr=Y&path=SISO+Products%2FSISO+Standards&file=SIS-STD-004.1-2004.zip
[11] http://www.amazon.com/Creating-Computer-Simulation-Systems-Introduction/dp/0130225118

External links
• proto-x (http://code.google.com/p/proto-x/): a cross-platform, open source C++ library for developing HLA compliant simulations, tools and utilities.
• Portico (http://www.porticoproject.org): an open source, cross-platform HLA RTI implementation.

IBZL

IBZL - infinite bandwidth zero latency - is a thought experiment that asks: what will happen when bandwidth (for connecting to the Internet, for example) is so great, and latency so small, that it no longer matters? What will be the applications and services that become widespread? The IBZL programme[1] was started by the Open University and Manchester Digital in the UK. The IBZL programme has used a process (Imagine/Triple Task Method) to explore the potentially novel applications of NGA and provide some ideas as to the key components of the future inter-networked landscape.

Background
Next Generation Access (NGA) broadband is promoted strongly by policy makers as underpinning future economic growth. There is however a lack of examples of the ways that NGA will be used, or of the sort of innovations that may come about as a result of widespread access to NGA. A parallel can be drawn with the advent of first generation broadband, which arguably created the conditions for the success of innovations such as Wikipedia, Youtube and Facebook - open source knowledge, video sharing and always-on social networking - but the most innovative aspects of these were not foreseen.

Next Generation Access (NGA)
While there is no universally agreed definition of what qualifies a network to be considered 'next generation', three elements are usually considered essential:
• NGA will provide a significant increase in the transmission speeds available to the domestic or small-business end-user. The speeds cited vary widely from 25 Mbps (e.g. What is Digital Region? 2009) to over 200 Mbps. The 'Digital Britain' report[2] refers to 'next generation service up to' 40 Mbps, and more recently UK ministers have referred to 50 Mbps and faster[3]. To put this in context, Google (Google, 2010) announced, in early 2010, a plan for experimental community networks operating at 100 Gbps.
• In contrast with currently widespread ADSL technologies, NGA bandwidth should be symmetrical: it is generally assumed that NGA will offer a step-change in upload as well as download speeds, reflecting the demands of increasingly user-generated content.
• NGA is widely taken to offer improved 'quality of service' (QoS)[5], in addition to 'raw' bandwidth. QoS here is taken to mean not only service reliability and availability, but also indicators of network performance including latency (the time taken for data packets to travel from source to destination), jitter (the variation in latency among data packets) and data loss (the loss of data packets due to network congestion). Latency, jitter and data loss are important aspects of the usability of applications such as internet telephony or video, though others have a more relaxed view (e.g. OFCOM 2009[4]).

'Infinite bandwidth' and 'zero latency' are not meant literally; they are a shorthand for networks where bandwidth and latency cease to be limiting factors.

IBZL as a way to develop NGA
The Infinite Bandwidth, Zero Latency (IBZL[6]) initiative was designed as a contribution to innovation by identifying new applications that will be made possible by NGA as it evolves and that may contribute to the continuing development of innovative digital industries. IBZL addresses a gap in policy and strategic thought, where relatively little attention has been given to what kinds of novel application are made feasible by networks which are relatively free of speed and latency capacity constraints. The IBZL process is intended as a means to explore and speculate on potential future technologies made possible by next generation networks, to imagine a digital future. To facilitate the process, the Imagine methodology was adapted and applied as a form of future workshop for deep reflection on possible scenarios (numerous examples of this kind of work exist, but see for example: List 2006[7]).

There have been two IBZL workshops held in Manchester, UK, in May and October 2010. They were organized jointly by the Open University Faculty of Mathematics, Computing and Technology [8] and Manchester Digital, a trade association of creative and digital companies in Manchester and the North West of England. They brought together invited public sector, private sector and academic participants.

IBZL outcomes
The workshops produced ideas that will be further developed. Five of these are briefly summarised below.
• 'Always on social space' - always-on virtual spaces connecting people living and working remotely, supporting the kind of real-time social encounters ('collisions') that happen when people are co-located. This would not only allow a new level of remote working and collaboration, but also the sense of living in proximity with friends and relations could transform the lives of older people who need to stay longer in their homes as the population ages.
• 'Intelligent matchmaking' - bringing suppliers and consumers together optimally for business, social and educational interactions. Behind this would be a thorough analysis of organizations, products and people, in order to synthesize high quality informational and other connections.
• 'Real artisans in a virtual world' - the networked production of artefacts by artisans in multiple locations. Next generation technology could support real-time collaborative generation of product ideas followed by the process of design, development and distributed fabrication.

Latency maps would be an enabling tool to identify the kinds of applications possible within/between. D. resulting in a ‘geography of latency’ and the disruption of ‘simultaneous time’. Digital Britain: Final Report.ibzl. ibzl." Futures 38: 673 . Delivering Super-Fast Broadband in the UK: Promoting investment and competition OFCOM [5] OFCOM (2009)Delivering Super-Fast Broadband in the UK: Promoting investment and competition OFCOM [6] Infinite Bandwidth.net) . coop/ policy/ inca-policy-briefing-no1). [8] http:/ / mct. inca. uk/ External links • IBZL project website (http://www. technical/geographic. Zero Latency (IBZL) project website (http:/ / www. Independent Networks Cooperative Association [4] OFCOM (2009). network infrastructure and the network of relationships between service providers. co-ordinated among volunteers by a central ‘master’ application. Department for Business Innovation and Skills and the Department for Culture. challenging) current craft value chains. or commercial spaces. ibzl. The kinds of networked application that are feasible between two network locations will be a function of a range of factors including spatial distribution.Industry Day (http:/ / www. London. Zero Latency (IBZL) project website (http:/ / www. (2006). development and distributed fabrication.684. 79 References [1] Infinite Bandwidth. Peer-to-peer processor time-sharing . Latency mapping . ac. "Action Research Cycles for Multiple Futures Perspectives. INCA Policy Briefing No. Media and Sport.projects like SETI@home use the spare processor capacity of millions of personal computers to process batches of number-crunching tasks. Next generation networks could allow real time peer-to-peer sharing so that when an application needs additional capacity for processor-heavy tasks like video rendering it could have access to effectively limitless extra computing power. 1: Broadband Delivery UK . open.the evolution of next generation networks will be uneven. net) [7] List.IBZL design. This could turn the conventional trading pattern on its head with artisans in the developing world crafting products for “3D printing” in the developed world. net) [2] Department_for_Business_Innovation_and_Skills (2009). effectively re-engineering (or at least. Page 54 [3] INCA (2010).

Kayou

kayou is a distributed operating system project developed on top of the kaneton microkernel in the vein of Amoeba. kayou provides a powerful distribution-oriented interface which enables applications to take advantage of the resources of networked computers. kayou is still in its design phase, hence not much information is actually available about its design or its implementation. Note that the kayou project is part of the Opaak educational trilogy along with kastor and kaneton.

External links
• kayou official website [1]

References
[1] http://kayou.opaak.org

Live distributed object

Definitions
The term live distributed object (also abbreviated as live object) refers to a running instance of a distributed multi-party (or peer-to-peer) protocol, viewed from the object-oriented perspective, as an entity that has a distinct identity, may encapsulate internal state and threads of execution, and that exhibits a well-defined externally visible behavior. The key programming language concepts, as applied to live distributed objects, are defined as follows.

[Figure: An illustration of the basic concepts involved in the definition of a live distributed object.]

• Identity. The identity of a live distributed object is determined by the same factors that differentiate between instances of the same distributed protocol. The object consists of a group of software components physically executing on some set of physical machines and engaged in mutual communication, each executing the distributed protocol code with the same set of essential parameters, such as the name of a multicast group, the identifier of a publish-subscribe topic, the identity of a membership service, etc. Thus, for example, publish-subscribe channels and multicast groups are examples of live distributed objects: for each channel or group, there exists a single instance of a distributed protocol running among all computers sending, forwarding, or receiving the data published in the channel or multicast within the group. In this case, the object's identity is determined by the identifier of the channel or group, qualified with the identity of the distributed system that provides, controls, and manages the given channel or group. In the case of multicast, the identity of the system might be determined, for example, by the address of the membership service (the entity that manages the membership of the multicast group).

• Proxies (replicas). The proxy or a replica of a live object is one of the software component instances involved in executing the live object's distributed protocol. The object can thus be alternatively defined as a group of proxies engaged in communication, jointly maintaining some distributed state, and coordinating their operations. The term proxy stresses the fact that a single software component does not in itself constitute an object; rather, it serves as a gateway through which an application can gain access to a certain functionality or behavior that spans across a set of computers. In this sense, the concept of a live distributed object proxy generalizes the notion of a RPC, RMI, or .NET remoting client-side proxy stub.
• Behavior. The behavior of a live distributed object is characterized by the set of possible patterns of external interactions that its proxies can engage in with their local runtime environments. These interactions are modeled as exchanges of explicit events (messages).
• State. The state of a live distributed object is defined as the sum of all internal, local states of its proxies; it is distributed and replicated. The different replicas of the object's state may be strongly or only weakly consistent, depending on the protocol semantics: an instance of a consensus protocol will have the state of its replicas strongly consistent, whereas an instance of a leader election protocol will have a weakly consistent state. The state of a live distributed object should be understood as a dynamic notion: as a point (or consistent cut) in a stream of values, rather than as a particular value located in a given place at a given time. For example, the externally visible state of a leader election object would be defined as the identity of the currently elected leader; it materializes as a stream of messages of the form elected(x) concurrently produced by the proxies involved in executing this protocol, and concurrently consumed by instances of the application using this protocol.
• Interfaces (endpoints). The interface of a live distributed object is defined by the types of interfaces exposed by its proxies; these may include event channels and various types of graphical user interfaces. Interfaces exposed by the proxies are referred to as the live distributed object's endpoints. The term endpoint instance refers to a single specific event channel or user interface exposed by a single specific proxy. To say that a live object exposes a certain endpoint means that each of its proxies exposes an instance of this endpoint to its local environment, and each of the endpoint instances carries events of the same types (or binds to the same type of a graphical display).
• References. The reference to a live object is a complete set of serialized, portable instructions for constructing its proxy. To dereference a reference means to locally parse and follow these instructions on a particular computer, to produce a running proxy of the live object. Defined this way, a live object reference plays the same role as a Java reference, a C/C++ pointer, or a web service's WSDL description; it contains a complete information sufficient to locate the given object and interact with it. Since live distributed objects may not reside in any particular place (but rather span across a dynamically changing set of computers), the information contained in a live distributed object's reference cannot be limited to just an address. If the object is identified by some sort of a globally unique identifier (as might be the case for publish-subscribe topics or multicast groups), the reference must specify how this identifier is resolved, by recursively embedding a reference to the appropriate name resolution object.
• Types. The type of a live distributed object determines the patterns of external interactions with the object; it is determined by the types of endpoints and graphical user interfaces exposed by the object's proxies, and the patterns of events that may occur at the endpoints. The constraints that the object's type places on event patterns may span across the network. For example, type atomic multicast might specify that if an event of the form deliver(x) is generated by one proxy, a similar event must be eventually generated by all non-faulty proxies (proxies that run on computers that never crash, and that never cease to execute or are excluded from the protocol; the precise definition might vary). Much as it is the case for types in Java-like languages, there might exist many very different implementations of the same type.

Defined this way, the term live distributed object generalizes the concept of a replicated object; the latter is a specific type of live distributed object that uses a protocol such as Paxos, virtual synchrony, or state machine replication to achieve strong consistency between the internal states of its replicas.

For example, behavior characteristic to atomic multicast might be exhibited by instances of distributed protocols such as virtual synchrony or Paxos.
The need for uniformity implies that the definition of a live distributed object must unify concepts such as live Web content, message streams, and instances of distributed multi-party protocols. When applied to live distributed objects, the perspective dictates that their constituent parts, which includes instances of distributed multi-party protocols used internally to replicate state, should also be modeled as live distributed objects. The semantics and behavior of live distributed objects can be characterized in terms of distributed data flows; the set of messages or events that appear on the instances of a live object's endpoint forms a distributed data flow.[1] [2]

History
Early ideas underlying the concept of a live distributed object have been influenced by a rich body of research on object-oriented environments, programming language embeddings, and protocol composition frameworks, dating back at least to the actor model developed in the early 1970s; a comprehensive discussion of the relevant prior work can be found in Krzysztof Ostrowski's Ph.D. dissertation.[3]
The term live distributed object was first used informally in a series of presentations given in the fall of 2006 at an ICWS conference,[4] an STC[5] conference,[6] and at the MSR labs in Redmond, WA,[7] and then formally defined in 2007, in an IEEE Internet Computing article.[8] Originally, the term was used to refer to the types of dynamic, interactive Web content that is not hosted on servers in data centers, but rather stored on the end-user's client computers, and internally powered by instances of reliable multicast protocols. The word live expressed the fact that the displayed information is dynamic, interactive, and represents current, fresh, live content that reflects recent updates made by the users (as opposed to static, read-only, and archival content that has been pre-assembled). The word distributed expressed the fact that the information is not hosted, stored at a server in a data center, but rather, it is replicated among the end-user computers, and updated in a peer-to-peer fashion through a stream of multicast messages that may be produced directly by the end-users consuming the content. A more comprehensive discussion of the live object concept in the context of Web development can be found in Krzysztof Ostrowski's[9] Ph.D. dissertation.[3]
The more general definition presented above has been first proposed in 2008, in a paper published at the ECOOP conference.[10] The extension of the term has been motivated by the need to model live objects as compositions of other objects; in this sense, the concept has been inspired by Smalltalk, which pioneered the uniform perspective that everything is an object, and Jini, which pioneered the idea that services are objects.
The first implementation of the live distributed object concept, as defined in the ECOOP paper,[10] was the Live Distributed Objects[11] platform developed by Krzysztof Ostrowski[9] at Cornell University. The platform provided a set of visual, drag and drop tools for composing hierarchical documents resembling web pages, and containing XML-serialized live object references. Visual content such as chat windows, shared desktops, and various sorts of mashups could be composed by dragging and dropping components representing user interfaces and protocol instances onto a design form. Since the moment of its creation, a number of extensions have been developed to embed live distributed objects in Microsoft Office documents[12] and to support various types of hosted content such as Google Maps.[13] [14] [15] [16] [17] [18] [19] [20] [21] As of March 2009, the platform is being actively developed by its creators.

Proceedings of the 22nd European Conference on Object-Oriented Programming. 3rd ACM SIGOPS International Workshop on Large Scale Distributed Systems and Middleware (LADIS 2009). acm. html [20] Mahajan. New York. cs. First ACM Workshop on Scalable Trusted Computing (ACM STC 2006). and Birman. Ph. K. pdf [3] Ostrowski. http:/ / liveobjects. vol. "Goole Earth Live Object".. S. Vitek. IL. Cornell University. IEEE Internet Computing.11.. html [19] Gupta. cs. Companion '08.. cfm?id=1179477. "Cornell Yahoo! Live Objects". K. http:/ / portal. K. (2008). July 6–9. K. 30-35. html [14] Ostrowski. December 01 . and Dolev. K. (2008).. July 07 . cornell. edu/ ~krzys [10] Ostrowski. "Live Distributed Objects: Enabling the Active Web". Z. K. (2008). Birman. Cyprus. edu/ ~shxu/ stc06/ [6] Ostrowski. [8] Ostrowski. edu [12] Ahnn. edu/ community/ 2/ index.. cs. Fairfax. http:/ / liveobjects.. and Nagarajappa. cs. "Distributed Google Earth". http:/ / www. Heidelberg. [5] http:/ / www. U. cs. http:/ / www. November–December 2007. 11(6):72-78. cs. K. (2008). K. [7] Ostrowski.. K. Lecture Notes In Computer Science.. D. (2007). NY. (2008). cs. Birman. Nashville. http:/ / portal. USA. Languages and Applications (OOPSLA 2009). VA. html [16] Kashyap. 3rd ACM International Conference on Distributed Event-Based Systems (DEBS 2009). Dissertation. 2009. Big Sky. USA. S. org/ xpl/ freeabs_all. org/ citation.. MT. cs. cornell. Redmond. K. K. ieee. C. Ostrowski. researchchannel. "Integrate Live Objects with Flickr Web Service". November 2006. D. 2008.. ACM. http:/ / liveobjects. Proceedings of the ACM/IFIP/USENIX Middleware '08 Conference Companion. November 2006. (2009). [9] http:/ / www. "Using live distributed objects for office automation". Birman. and Vora. 'Scalable Group Communication System for Scalable Trust'. D.. org/ prog/ displayevent. "Programming Live Distributed Objects with Distributed Data Flows". October 11. edu/ community/ 4/ index. cornell. pdf [2] Ostrowski. (2009).. D. and Sakoda. WA. Submitted to the International Conference on Object Oriented Programming. org/ citation. Dolev. K. edu/ community/ 7/ index. org/ xpls/ abs_all. "Live Maps". cornell. http:/ / liveobjects. http:/ / liveobjects. cornell. K. cs. and van Renesse. edu/ ~krzys/ krzys_oopsla2009. 5142. "ALGE (A Live Google Earth)". cornell. Systems. [13] http:/ / liveobjects.. Dolev. edu/ community/ index. cs. Microsoft Research. edu/ community/ 1/ index. (2008). http:/ / hdl. September 2006. jsp?isnumber=4376216& arnumber=4376231. Sankar. "Implementing Reliable Event Streams in Large Systems via Distributed Data Flows and Recursive Delegation". S.. [4] Ostrowski. A. and Polepalli. utsa. Birman. H. and Wakankar. [11] http:/ / liveobjects. cornell. cs. S. html [21] Wadhwa. ieee.Live distributed object 83 References [1] Ostrowski. Chicago. net/ 1813/ 10881. http:/ / www. "Live Google Earth UI". cornell. D. edu/ ~krzys/ krzys_debs2009.05. (2006). 2009. (2008). acm.. cs. K.. (2008). jsp?arnumber=4032049. cfm?id=1428508.. A. edu/ community/ 5/ index.. R. http:/ / ieeexplore. K. "Live Distributed Objects".. Ed. Paphos. http:/ / portal. (2008). K. J.. and Zhang. cornell. Belgium. Birman. html .. and Subramaniyan.. Springer-Verlag. cs. cfm?id=1462735. R. J. "Programming with Live Distributed Objects". http:/ / www. http:/ / liveobjects. Berlin.. cornell. edu/ community/ 3/ index. 'Extensible Web Services Architecture for Notification in Large-Scale Systems'. html [18] Prateek. Dolev. (2008). and Birman. X. cornell. H. 1462743. and Birman. TN. 
"Storing and Accessing Live Mashup Content in the Cloud". (2009).. K.. J. http:/ / ieeexplore. 1428536. 463-489. K. R. html [17] Dong. IEEE International Conference on Web Services (ICWS 2006). 2008. cornell. http:/ / liveobjects. cornell. QuickSilver Scalable Multicast. handle. edu/ ~krzys/ krzys_ladis2009. edu/ community/ 6/ index.D. and Ahnn.. org/ citation. aspx?rID=7870& fID=2276.. Leuven. acm. pdf [15] Akdogan. cs.

Master/slave (technology)

Master/slave is a model of communication where one device or process has unidirectional control over one or more other devices. In some systems a master is elected from a group of eligible devices, with the other devices acting in the role of slaves.[1] [2] [3]

Examples
• In database replication, the master database is regarded as the authoritative source, and the slave databases are synchronized to it.
• In parallel ATA hard drive arrangements, the terms master and slave are used, but neither drive has control over the other. The terms also do not indicate precedence of one drive over the other in most situations. "Master" is merely another term for device 0 and "slave" indicates device 1.
• Peripherals connected to a bus in a computer system.
• Railway locomotives operating in multiple (for example: to pull loads too heavy for a single locomotive) can be referred to as a master/slave configuration, with the operation of all locomotives in the train slaved to the controls of the first locomotive. See Multiple-unit train control.
• Duplication is often done with several cassette tape or compact disc recorders linked together. Operating the controls on the master triggers the same commands on the slaves, so that recording is done in parallel.
• On the Macintosh platform, Target Disk Mode allows a computer to operate as an external FireWire hard disk, essentially a disk slave mode. Some older pre-FireWire Macintoshes had a similar controversial "SCSI Disk Mode".

Controversy
Sometimes the terms master and slave are deemed offensive. In November 2003, the County of Los Angeles sent an e-mail to its suppliers asking them not to use these terms:[5] [6]

Subject: IDENTIFICATION OF EQUIPMENT SOLD TO LA COUNTY
Date: Tue, 18 Nov 2003 14:21:16 -0800
From: "Los Angeles County"

The County of Los Angeles actively promotes and is committed to ensure a work environment that is free from any discriminatory influence be it actual or perceived. As such, it is the County's expectation that our manufacturers, suppliers and contractors make a concentrated effort to ensure that any equipment, supplies or services that are provided to County departments do not possess or portray an image that may be construed as offensive or defamatory in nature.
One such recent example included the manufacturer's labeling of equipment where the words "Master/Slave" appeared to identify the primary and secondary sources. Based on the cultural diversity and sensitivity of Los Angeles County, this is not an acceptable identification label.
We would request that each manufacturer, supplier and contractor review, identify and remove/change any identification or labeling of equipment or components thereof that could be interpreted as discriminatory or offensive in nature before such equipment is sold or otherwise provided to any County department.
Thank you in advance for your cooperation and assistance.
Joe Sandoval, Division Manager
Purchasing and Contract Services [4]

Internal Services Department
County of Los Angeles

Many in the Information Technology field rebuff this claim of discrimination and offence as ridiculous, noting that the master/slave terminology accurately reflects what is going on inside the device and that this was not intended in any way to be a reference to slavery as it existed in the United States. (See also political correctness.) There were rumors of a major push to change the way hardware manufacturers refer to these devices. It has not had much effect on most of the products being produced.
The designation of hard drives as master/slave may decline in a few years, with SATA replacing older IDE (PATA) drives. This standard allows only one drive per connection, and does not require the use of master/slave terms.

References
[1] master/slave - a searchNetworking definition (http://searchnetworking.techtarget.com/sDefinition/0,,sid7_gci783492,00.html)
[2] Description of the Microsoft Computer Browser Service from Microsoft KnowledgeBase (http://support.microsoft.com/kb/188001)
[3] Information on Browser Operation from Microsoft KnowledgeBase (http://support.microsoft.com/default.aspx?scid=KB;en-us;102878)
[4] Urban Legends Reference Pages: Inboxer Rebellion (Master/Slave) from www.snopes.com (http://www.snopes.com/inboxer/outrage/master.asp)
[5] L.A. County Bans Use Of "Master/Slave" Term from Slashdot (http://slashdot.org/article.pl?sid=03/11/25/0014257&mode=thread&tid=103&tid=133&tid=186&tid=99)
[6] 'Master' and 'slave' computer labels unacceptable, officials say (http://www.cnn.com/2003/TECH/ptech/11/26/master.term.reut/index.html) (Wednesday, November 26, 2003, CNN)

Membase

Developer(s): Couchbase (merged from NorthScale), Zynga, NHN
Stable release: 1.7.1 / July 26, 2011
Written in: C++, Erlang
Operating system: Cross-platform
Type: distributed key/value database system
License: Apache License
Website: http://membase.org/

Membase (pronunciation: mem-base) is an Open Source (Apache 2.0 license) distributed, key-value database management system optimized for storing data behind interactive web applications. These applications must service many concurrent users: creating, storing, retrieving, aggregating, manipulating and presenting data. In support of these kinds of application needs, membase is designed to provide simple, fast, easy-to-scale key-value data operations with low latency and high sustained throughput.

History
Membase was developed by several leaders of the memcached project, who had founded a company, NorthScale, expressly to meet the need for a key-value database that enjoyed all the simplicity, speed, and scalability of memcached, but also provided the storage, persistence and querying capabilities of a database. The original membase source code was contributed by NorthScale, and project co-sponsors Zynga and NHN, to a new project on membase.org[2] in June 2010. On February 8, 2011, the Membase project founders and Membase, Inc. announced a merger with CouchOne (a company with many of the principal players behind CouchDB) with an associated project merger. The merged project will be known as Couchbase.[3]

Design drivers
According to the Membase site and presentations, Membase design decisions are weighed against three non-negotiable requirements: membase is to be simple, fast, and elastic.[4]
Membase intends to be extremely easy to manage, and simple to develop against. Membase has wide language and application framework support due to its on-the-wire protocol compatibility with memcached.[1] For those familiar with memcached: membase provides on-the-wire client protocol compatibility, but is designed to add disk persistence (with hierarchical storage management), data replication, live cluster reconfiguration, rebalancing and multi-tenancy with data partitioning. In fact, membase directly incorporates memcached "front end" source code, leveraging the memcached engine interface and guaranteeing compatibility today and into the future.
It is designed to be clustered from a single machine to very large scale deployments. Every node is alike in a membase cluster – clone a node, join it to the cluster and press the rebalance button to automatically rebalance data to it. By design, Membase distributes data and data operation I/O across commodity servers (or VMs), transparently caches data in main memory, replicates data for high-availability, and persists data with a design for multi-tier storage (planned to support solid-state drive and hard disk drive media). In the parlance of Eric Brewer's CAP theorem, membase is a CP type system.

(Figure: Membase management model)
Employing commodity servers, virtual machines or cloud machine instances, data management resources can be dynamically matched to the needs of an application with little effort. Servers can be added to, or removed from, a running cluster with no application downtime.

Data model
Key features (persistence, replication/failover, scalability/performance)

Persistence
• Asynchronously writes data to disk after acknowledging the write to the client. In version 1.7 and later, applications can ensure data is synced to more than one server, while disk writes are still asynchronous.
• Supports a working set greater than a memory quota per "node" or "bucket".
• Tunables to affect how max memory and migration from main memory to disk is handled.[6]
• Tunables to define item ages that affect when data is persisted.[5]

Replication and failover
• Multi-model replication support: peer-to-peer replication support, with an underlying architecture supporting master-slave replication.
• Configurable replication count: balance resource utilization with availability requirements.
• High-speed failover: fast failover to replicated items based upon request.
• Configurable "tap" interface: external systems can subscribe to filtered data streams – supporting, for example, full text search indexing, data analytics or archiving.[7]

Scalability and performance
• Distributed object store: easily store and retrieve large volumes of data from any application, using any language or application framework.
• Dynamic cluster resizing and rebalancing: effortlessly grow or shrink a membase cluster, adapting to changing data management requirements of an application.
• Guaranteed data consistency: never grapple with consistency issues in your application – no quorum reads required.
• High sustained throughput.
• Low, predictable latency. When operating out of memory, most operations occur in far less than 1 ms (assuming gigabit Ethernet).
Membase claims to scale with linear cost. It is a consistently low-latency and high-throughput processor of data operations: it is multi-threaded, with low lock contention, it automatically de-duplicates writes, and it is internally asynchronous everywhere possible.
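Because membase speaks the memcached wire protocol, an existing memcached client library can be pointed at a membase node without code changes. A minimal sketch using the spymemcached Java client (the host, port and key names here are illustrative assumptions, not taken from the Membase documentation):

    import java.net.InetSocketAddress;
    import java.util.Arrays;
    import net.spy.memcached.MemcachedClient;

    public class MembaseHello {
        public static void main(String[] args) throws Exception {
            // Connect to a membase node exactly as if it were a plain memcached server.
            MemcachedClient client = new MemcachedClient(
                    Arrays.asList(new InetSocketAddress("127.0.0.1", 11211)));

            client.set("greeting", 0, "hello from membase").get(); // wait for the write to complete
            System.out.println(client.get("greeting"));            // prints: hello from membase

            client.shutdown();
        }
    }

The same code would run unchanged against a stock memcached server, which is the point of the on-the-wire compatibility described above.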

Prominent users
• Zynga – membase is the key-value database behind FarmVille[8]
• NHN[9]

References
[1] http://code.google.com/p/memcached/wiki/NewProtocols
[2] http://www.membase.org
[3] Couchbase Website (http://www.couchbase.com/)
[4] membase.org: Does the world really need another NoSQL Database? (http://www.membase.org/whatsdifferent.html)
[5] membase.org wiki: membase Background Flush (http://wiki.membase.org/bin/view/Main/FlushingItems)
[6] membase.org wiki: Disk > Memory (http://wiki.membase.org/bin/view/Main/DiskGtMemory)
[7] Want to know what your memcached servers are doing? Tap them. (http://blog.northscale.com/northscale-blog/2010/03/want-to-know-what-your-memcached-servers-are-doing-tap-them.html)
[8] NorthScale Releases High-Performance NoSQL Database (http://www.northscale.com/pr/NorthScale-Membase-Server-beta.html)
[9] NorthScale Releases High-Performance NoSQL Database (http://www.northscale.com/pr/NorthScale-Membase-Server-beta.html)

Commercially supported distributions
• Couchbase Membase Server (http://www.couchbase.com/products-and-services/membase-server)

External links
• Official membase site (http://www.membase.org)
• membase wiki (http://wiki.membase.org)
• membase mailing list (http://groups.google.com/group/membase)

Message consumer

A message consumer is a Java interface for distributed systems. It is used to receive messages from a destination. To create a message consumer, a destination object is passed to a message-consumer creation method that is supplied by the session object. By creating the consumer with a selector, it is possible to direct messages to particular message consumer objects. The communication may be synchronous or asynchronous.
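As a minimal sketch of this pattern using the Java Message Service (JMS) API: the JNDI names ("ConnectionFactory", "queue/orders") and the message selector are assumptions that depend on how a particular provider is configured, but the consumer-creation and receive calls are standard JMS.

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class ConsumerExample {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
            Destination destination = (Destination) jndi.lookup("queue/orders");

            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // The destination is passed to the creation method supplied by the session.
            MessageConsumer consumer = session.createConsumer(destination);
            // Alternatively, with a selector so only matching messages are delivered:
            // MessageConsumer filtered = session.createConsumer(destination, "type = 'order'");
            connection.start();

            // Synchronous: block until a message arrives (here, for at most one second).
            Message first = consumer.receive(1000);
            System.out.println("received synchronously: " + first);

            // Asynchronous: register a listener that is invoked for each later message.
            consumer.setMessageListener(new MessageListener() {
                public void onMessage(Message message) {
                    System.out.println("received asynchronously: " + message);
                }
            });
        }
    }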

Message passing

Message passing in computer science is a form of communication used in parallel computing, object-oriented programming, and interprocess communication. In this model, processes or objects can send and receive messages (comprising zero or more bytes, complex data structures, or even segments of code) to other processes. By waiting for messages, processes can also synchronize.

Overview
Message passing is the paradigm of communication where messages are sent from a sender to one or more recipients. Forms of messages include (remote) method invocation, signals, and data packets. When designing a message passing system several choices are made:
• Whether messages are transferred reliably
• Whether messages are guaranteed to be delivered in order
• Whether messages are passed one-to-one, one-to-many (unicasting or multicast), or many-to-one (client–server)
• Whether communication is synchronous or asynchronous
Prominent theoretical foundations of concurrent computation, such as the Actor model and the process calculi, are based on message passing. Implementations of concurrent systems that use message passing can either have message passing as an integral part of the language, or as a series of library calls from the language. Examples of the former include many distributed object systems. Examples of the latter include Microkernel operating systems, which pass messages between one kernel and one or more server blocks, and the Message Passing Interface used in high-performance computing.

Message passing systems
Message passing systems have been called "shared nothing" systems because the message passing abstraction hides underlying state changes that may be used in the implementation of sending messages. Message passing model based programming languages typically define messaging as the (usually asynchronous) sending (usually by copy) of a data item to a communication endpoint (Actor, process, thread, socket, etc.). This concept is the higher-level version of a datagram, except that messages can be larger than a packet and can optionally be made reliable, durable, secure, and/or transacted.
Messages are also commonly used in the same sense as a means of interprocess communication; the other common technique being streams or pipes, in which data are sent as a sequence of elementary data items instead (the higher-level version of a virtual circuit).
Distributed object and remote method invocation systems like ONC RPC, Corba, Java RMI, DCOM, SOAP, .NET Remoting, CTOS, QNX Neutrino RTOS, OpenBinder, D-Bus and similar are message passing systems. Such messaging is used in Web Services by SOAP.

Synchronous versus asynchronous message passing
Synchronous message passing systems require the sender and receiver to wait for each other to transfer the message. That is, the sender will not continue until the receiver has received the message. Synchronous communication has two advantages. The first advantage is that reasoning about the program can be simplified in that there is a synchronisation point between sender and receiver on message transfer. The second advantage is that no buffering is required. The message can always be stored on the receiving side, because the sender will not continue until the receiver is ready.
Asynchronous message passing systems deliver a message from sender to receiver, without waiting for the receiver to be ready. The advantage of asynchronous communication is that the sender and receiver can overlap their computation.
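As a rough single-process illustration of the difference (written for this section, not taken from any particular message-passing framework), the sketch below uses java.util.concurrent queues: a SynchronousQueue forces the sender to rendezvous with the receiver, while a LinkedBlockingQueue buffers messages so the sender can continue immediately.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.SynchronousQueue;

    public class SyncVsAsync {
        public static void main(String[] args) throws InterruptedException {
            // Synchronous: put() blocks until another thread takes the message.
            BlockingQueue<String> rendezvous = new SynchronousQueue<>();
            Thread receiver = new Thread(() -> {
                try {
                    System.out.println("received: " + rendezvous.take());
                } catch (InterruptedException ignored) { }
            });
            receiver.start();
            rendezvous.put("hello");          // returns only once the receiver has the message
            receiver.join();

            // Asynchronous: put() stores the message in a buffer and returns at once,
            // so sender and receiver can overlap their work.
            BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
            mailbox.put("hello");             // does not wait for a receiver
            System.out.println("sender continues immediately");
            System.out.println("received later: " + mailbox.take());
        }
    }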

A subroutine call or method invocation will not exit until the invoked computation has terminated. Asynchronous message passing, by contrast, can result in a response arriving a significant time after the request message was sent.
The buffer required in asynchronous communication can cause problems when it is full. A decision has to be made whether to block the sender or whether to discard future messages. If the sender is blocked, it may lead to an unexpected deadlock. If messages are dropped, then communication is no longer reliable. Synchronous communication can be built on top of asynchronous communication by ensuring that the sender always waits for an acknowledgement message from the receiver before continuing.

Message passing versus calling
Message passing should be contrasted with the alternative communication method for passing information between programs – the Call. In a traditional Call, arguments are passed to the "callee" (the receiver), typically by one or more general purpose registers or in a parameter list containing the addresses of each of the arguments. This form of communication differs from message passing in at least three crucial areas:
• total memory usage
• transfer time
• locality
In message passing, each of the arguments has to have sufficient available extra memory for copying the existing argument into a portion of the new message. This applies irrespective of the size of the original arguments – so if one of the arguments is (say) an HTML string of 31,000 octets describing a web page (similar to the size of this article), it has to be copied in its entirety (and perhaps even transmitted) to the receiving program (if not a local program). By contrast, for the call method, only an address of say 4 or 8 bytes needs to be passed for each argument and may even be passed in a general purpose register, requiring zero additional storage and zero "transfer time". This of course is not possible for distributed systems, since an (absolute) address – in the caller's address space – is normally meaningless to the remote program (however, a relative address might in fact be usable if the callee had an exact copy of, at least some of, the caller's memory in advance).

Message passing and locks
Message passing can be used as a way of controlling access to resources in a concurrent or asynchronous system. One of the main alternatives is mutual exclusion or locking. In locking, a resource is essentially shared, and processes wishing to access it (or a sector of it) must first obtain a lock. Once the lock is acquired, other processes are blocked out, ensuring that corruption from simultaneous writes does not occur. After the process with the lock is finished with the resource, the lock is then released. Examples of resources include shared memory, a disk file or region thereof, a database table or set of rows.
In a message-passing solution, by contrast, it is assumed that the resource is not exposed, so that the resource is encapsulated, and all changes to it are made by an associated process. Processes wishing to access the resource send a request message to the handler. If the resource (or subsection) is available, the handler makes the requested change as an atomic event, that is, conflicting requests are not acted on until the first request has been completed. If the resource is not available, the request is generally queued. The sending programme may or may not wait until the request has been completed.
A message handler will, in general, process messages from more than one sender. This means its state can change for reasons unrelated to the behaviour of a single sender or client process (in other words, the message handler behaves analogously to a volatile object). This is in contrast to the typical behaviour of an object upon which methods are being invoked: the latter is expected to remain in the same state between method invocations.
Web browsers and web servers are examples of processes that communicate by message passing. A URL is an example of a way of referencing resources that does not depend on exposing the internals of a process.
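A rough single-process sketch of this handler idea (an illustration for this section, not taken from any particular system): a dedicated thread owns a counter and applies change requests one at a time from its mailbox, so no lock on the counter itself is needed.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class ResourceHandler {
        public static void main(String[] args) throws InterruptedException {
            // Request messages: each asks the handler to add a delta to the resource.
            BlockingQueue<Integer> mailbox = new LinkedBlockingQueue<>();

            Thread handler = new Thread(() -> {
                int counter = 0;                       // the encapsulated resource
                try {
                    while (true) {
                        int delta = mailbox.take();    // requests are applied one at a time,
                        counter += delta;              // so each change is an atomic event
                        System.out.println("counter = " + counter);
                    }
                } catch (InterruptedException done) { /* shut down */ }
            });
            handler.start();

            // Senders post request messages instead of locking the counter themselves.
            mailbox.put(5);
            mailbox.put(-2);
            Thread.sleep(100);
            handler.interrupt();
        }
    }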

Mathematical models
The prominent mathematical models of message passing are the Actor model[1] and Pi calculus.[2]
In pure object-oriented programming, message passing is performed exclusively through a dynamic dispatch strategy. Objects can send messages to other objects from within their method bodies. Message passing enables extreme late binding in systems. Sending the same message to an object twice will usually result in the object applying the method twice. Two messages are considered to be the same message type if the name and the arguments of the message are identical. In the terminology of some object-oriented programming languages, a message is the single means to pass control to an object. If the object "responds" to the message, it has a method for that message. Some languages support the forwarding or delegation of method invocations from one object to another if the former has no method to handle the message, but "knows" another object that may have one. See also Inversion of Control.
Alan Kay has argued[3] that message passing is more important than objects in OOP, and that objects themselves are often over-emphasized. The live distributed objects programming model builds upon this observation; it uses the concept of a distributed data flow to characterize the behavior of a complex distributed system in terms of message patterns, using high-level, functional-style specifications.

Examples
• Actor model implementation
• Amorphous computing
• Flow-based programming
• SOAP (protocol)

References
[1] Actor Model of Computation: Scalable Robust Information Systems (http://www.robust11.org)
[2] Elements of interaction: Turing award lecture (https://dl.acm.org/citation.cfm?id=151240)
[3] http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html

External links
• Future of Concurrent Programming (http://bartoszmilewski.wordpress.com/2010/08/02/beyond-locks-and-messages-the-future-of-concurrent-programming/)

Further reading
• Ramachandran, U., M. Solomon, M. Vernon (1987). "Hardware support for interprocess communication" (http://portal.acm.org/citation.cfm?id=30371&coll=&dl=ACM&CFID=15151515&CFTOKEN=6184618). Proceedings of the 14th annual international symposium on Computer architecture. ACM Press.
• McQuillan, John M., David C. Walden (1975). "Some considerations for a high performance message-based interprocess communication system" (http://portal.acm.org/citation.cfm?id=810905&coll=&dl=ACM&CFID=15151515&CFTOKEN=6184618). Proceedings of the 1975 ACM SIGCOMM/SIGOPS workshop on Interprocess communications. ACM Press.
• Shimizu, Toshiyuki, Takeshi Horie, Hiroaki Ishihata (1992). "Low-latency message communication support for the AP1000" (http://portal.acm.org/citation.cfm?id=140385&coll=&dl=ACM&CFID=15151515&). Proceedings of the 19th annual international symposium on Computer architecture. ACM Press.

Messaging pattern

In software architecture, a messaging pattern is a network-oriented architectural pattern which describes how two different parts of a message passing system connect and communicate with each other.
In telecommunications, a message exchange pattern (MEP) describes the pattern of messages required by a communications protocol to establish or use a communication channel. There are two major message exchange patterns — a request-response pattern and a one-way pattern. For example, TCP is a request-response pattern protocol, and UDP has a one-way pattern.

SOAP
The term "Message Exchange Pattern" has a specific meaning within the SOAP protocol.[1] [2] SOAP MEP types include:
1. In-Only: This is equivalent to one-way. A standard one-way messaging exchange where the consumer sends a message to the provider that provides only a status response.
2. Robust In-Only: This pattern is for reliable one-way message exchanges. The consumer initiates with a message to which the provider responds with status. If the response is a status, the exchange is complete, but if the response is a fault, the consumer must respond with a status.
3. In-Out: This is equivalent to request-response. A standard two-way message exchange where the consumer initiates with a message, the provider responds with a message or fault and the consumer responds with a status.
4. In Optional-Out: A standard two-way message exchange where the provider's response is optional.
5. Out-Only
6. Robust Out-Only
7. Out-In
8. Out-Optional-In

ØMQ
The ØMQ message queueing library provides so-called sockets (a kind of generalization over the traditional IP and Unix sockets) which require indicating a messaging pattern to be used, and are particularly optimized for that kind of pattern. The basic ØMQ patterns are:[3]
• Request-reply connects a set of clients to a set of services. This is a remote procedure call and task distribution pattern.
• Publish-subscribe connects a set of publishers to a set of subscribers. This is a data distribution pattern.
• Push-pull connects nodes in a fan-out / fan-in pattern that can have multiple steps, and loops. This is a parallel task distribution and collection pattern.
• Exclusive pair connects two sockets in an exclusive pair. This is a low-level pattern for specific, advanced use cases.
Each pattern defines a particular network topology. Request-reply defines a so-called "service bus", publish-subscribe defines a "data distribution tree", and push-pull defines a "parallelised pipeline". All the patterns are deliberately designed in such a way as to be infinitely scalable and thus usable on Internet scale.[4]
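To make the request-reply pattern concrete, the following sketch implements it directly over a TCP socket in Java. This is a generic illustration written for this article (it uses neither SOAP nor ØMQ); the port number is arbitrary. The client sends one request and blocks until the service replies, which is the defining behaviour of the pattern.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class RequestReply {
        public static void main(String[] args) throws Exception {
            ServerSocket server = new ServerSocket(5555);   // the "service" side

            Thread service = new Thread(() -> {
                try (Socket peer = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(peer.getInputStream()));
                     PrintWriter out = new PrintWriter(peer.getOutputStream(), true)) {
                    String request = in.readLine();          // wait for one request
                    out.println("reply to: " + request);     // send exactly one reply
                } catch (Exception e) { e.printStackTrace(); }
            });
            service.start();

            // Client side: one request out, then block until the reply comes back.
            try (Socket socket = new Socket("localhost", 5555);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                out.println("hello");
                System.out.println(in.readLine());
            }
            service.join();
            server.close();
        }
    }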

References
[1] SOAP MEPs in SOAP W3C Recommendation v1.2 (http://www.w3.org/TR/soap12-part1/#soapmep)
[2] Web Services Description Language (WSDL) Version 2.0: Additional MEPs (http://www.w3.org/TR/wsdl20-additional-meps/)
[3] ØMQ User Guide (http://www.zeromq.org/docs:user-guide)
[4] Scalability Layer Hits the Internet Stack (http://www.250bpm.com/hits)

External links
• Messaging Patterns in Service-Oriented Architecture (http://msdn.microsoft.com/en-us/library/aa480027.aspx)
• Enterprise Integration Patterns - Pattern Catalog (http://www.eaipatterns.com/toc.html)

Mobile agent

In computer science, a mobile agent is a composition of computer software and data which is able to migrate (move) from one computer to another autonomously and continue its execution on the destination computer.

Definition and overview
A mobile agent, namely, is a type of software agent, with the features of autonomy, social ability, learning, and most importantly, mobility. More specifically, a mobile agent is a process that can transport its state from one environment to another, with its data intact, and be capable of performing appropriately in the new environment. Mobile agents decide when and where to move. Movement is often evolved from RPC methods. Just as a user directs an Internet browser to "visit" a website (the browser merely downloads a copy of the site or one version of it in the case of dynamic web sites), similarly, a mobile agent accomplishes a move through data duplication. When a mobile agent decides to move, it saves its own state, transports this saved state to the new host, and resumes execution from the saved state.
A mobile agent is a specific form of mobile code, in contrast to the Remote evaluation and Code on demand programming paradigms. Mobile agents are active in that they can choose to migrate between computers at any time during their execution. This makes them a powerful tool for implementing distributed applications in a computer network.

Reputation and Trust
An open multi-agent system (MAS) is a system in which agents, that are owned by a variety of stakeholders, continuously enter and leave the system. The following are general concerns about Trust and Reputation in Mobile Agent research:
1. Source of trust information
• Direct experience
• Witness information
• Role-based rules
• Third-party references
2. How trust value is calculated
3. Overall trust value
What are the differences between trust and reputation systems?

Trust systems produce a score that reflects the relying party's subjective view of an entity's trustworthiness, whereas reputation systems produce an entity's (public) reputation score as seen by the whole community. More: • Compare Reputation and Trust

Advantages
Some advantages which mobile agents have over conventional agents:
• Computation bundles – converts computational client/server round trips to relocatable data bundles, reducing network load
• Parallel processing – asynchronous execution on multiple heterogeneous network hosts
• Dynamic adaptation – actions are dependent on the state of the host environment
• Tolerant to network faults – able to operate without an active connection between client and server
• Flexible maintenance – to change an agent's actions, only the source (rather than the computation hosts) must be updated
One particular advantage for remote deployment of software is increased portability, thereby making system requirements less influential.

External links
• Seven Good Reasons for Mobile Agents [1]
• Mobile Agent Technologies [2], developer of the AgentOS agent-based operating system and inventor of Automatic Thread Migration (ATM)
• National Institute for Standards and Technology [3], hosts a center for investigating security of mobile agents
• AgentLink III [4]
• Mobile-C [5], a multi-agent platform for mobile C/C++ agents
• JADE [6], an OSS mobile agent framework written in JAVA
• The Foundation for Intelligent Physical Agents [7], a standards body which defines an interface for agent based interactions
• Secure Mobile Agents Project [8], a project to develop a secure mobile agent server (last release 2007)

References
[1] http://www.moe-lange.com/danny/docs/7reasons.pdf
[2] http://www.agentos.org
[3] http://csrc.nist.gov/mobileagents/projects.html
[4] http://www.agentlink.org
[5] http://www.mobilec.org
[6] http://jade.tilab.com
[7] http://www.fipa.org
[8] http://semoa.sourceforge.net

MongoDB

Developer(s): 10gen
Initial release: 2009
Stable release: 1.8.2 / June 18, 2011
Development status: Active
Written in: C++
Operating system: Cross-platform
Available in: English
Type: Document-oriented database
License: GNU AGPL v3.0 (drivers: Apache license)
Website: http://www.mongodb.org/

MongoDB (from "humongous") is an open source, high-performance, schema-free, document-oriented database written in the C++ programming language.[1] The database is document-oriented so it manages collections of JSON-like documents. Many applications can thus model data in a more natural way, as data can be nested in complex hierarchies and still be query-able and indexable.
Development of MongoDB began in October 2007 by 10gen. The first public release was in February 2009.[2]

Features
Among the features are:
• Consistent UTF-8 encoding. Non-UTF-8 data can be saved, queried, and retrieved with a special binary data type.
• Cross-platform support: binaries are available for Windows, Linux, OS X, and Solaris. MongoDB can be compiled on almost any little-endian system.
• Type-rich: supports dates, regular expressions, code, binary data, and more (all BSON types)
• Cursors for query results
More features:

Ad hoc queries
In MongoDB, any field can be queried at any time. MongoDB supports range queries, regular expression searches, and other special types of queries in addition to exactly matching fields. Queries can also include user-defined JavaScript functions (if the function returns true, the document matches). Queries can return specific fields of documents (instead of the entire document), as well as sorting, skipping, and limiting results.

Querying nested fields
Queries can "reach into" embedded objects and arrays. If the following object is inserted into the users collection:

{
    "username" : "bob",
    "address" : {
        "street" : "123 Main Street",
        "city" : "Springfield",
        "state" : "NY"
    }
}

We can query for this document (and all documents with an address in New York) with:

> db.users.find({"address.state" : "NY"})

Array elements can also be queried:

> db.food.insert({"fruit" : ["peach", "plum", "pear"]})
> db.food.find({"fruit" : "pear"})

Indexing
The software supports secondary indexes, including single-key, compound, unique, non-unique, and geospatial[3] indexes. Nested fields (as described above in the ad hoc query section) can also be indexed, and indexing an array type will index each element of the array. Indexes can be created or removed at any time. MongoDB's query optimizer will try a number of different query plans when a query is run and select the fastest. Developers can see the index being used with the `explain` function and choose a different index with the `hint` function.

Aggregation
In addition to ad hoc queries, the database supports a couple of tools for aggregation, including MapReduce[4] and a group function similar to SQL's GROUP BY.

File storage
The software implements a protocol called GridFS[5] that is used to store and retrieve files from the database. This file storage mechanism has been used in plugins for NGINX[6] and lighttpd.[7]

Server-side JavaScript execution
JavaScript is the lingua franca of MongoDB and can be used in queries, aggregation functions (such as MapReduce), and sent directly to the database to be executed.
Example of JavaScript in a query:

> db.foo.find({$where : function() { return this.x == this.y; }})

Example of code sent to the database to be executed:

> db.eval(function(name) {
      return "Hello, " + name;
  }, ["Joe"])

This returns "Hello, Joe".
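The same kind of index management and querying is available through the official drivers listed under Language support below. For example, a rough sketch with the legacy (2.x-era) Java driver, using the same database, collection and field names as the shell examples above:

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.DBCursor;
    import com.mongodb.Mongo;

    public class NestedFieldQuery {
        public static void main(String[] args) throws Exception {
            Mongo mongo = new Mongo("localhost", 27017);
            DB db = mongo.getDB("test");
            DBCollection users = db.getCollection("users");

            // Secondary index on a nested field, as described in the Indexing section.
            users.ensureIndex(new BasicDBObject("address.state", 1));

            // Equivalent of: db.users.find({"address.state" : "NY"})
            DBCursor cursor = users.find(new BasicDBObject("address.state", "NY"));
            while (cursor.hasNext()) {
                System.out.println(cursor.next());
            }
            mongo.close();
        }
    }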

JavaScript variables can also be stored in the database and used by any other JavaScript as a global variable. Any legal JavaScript type, including functions and objects, can be stored in MongoDB so that JavaScript can be used to write "stored procedures."

Capped collections
MongoDB supports fixed-size collections called capped collections.[8] A capped collection is created with a set size and, optionally, number of elements. Capped collections are the only type of collection that maintains insertion order: once the specified size has been reached, a capped collection behaves like a circular queue. A special type of cursor, called a tailable cursor,[9] can be used with capped collections. This cursor was named after the `tail -f` command, and does not close when it finishes returning results but continues to wait for more to be returned, returning new results as they are inserted into the capped collection.

Deployment
MongoDB can be built and installed from source, but it is more commonly installed from a binary package. Many Linux package management systems now include a MongoDB package, including CentOS and Fedora,[10] Debian and Ubuntu,[11] Gentoo[12] and Arch Linux.[13] It can also be acquired through the official website.[14]
MongoDB uses memory-mapped files, limiting data size to 2GB on 32-bit machines (64-bit systems have a much larger data size).[15] The MongoDB server can only be used on little-endian systems, although most of the drivers work on both little-endian and big-endian systems.

Language support
MongoDB has official drivers for:
• C[16]
• C++[17]
• C#[18]
• Haskell[19]
• Java[20]
• JavaScript[21]
• Lisp[22]
• Perl[23]
• PHP[24]
• Python[25]
• Ruby[26]
• Scala[27]
There are also a large number of unofficial drivers, for C# and .NET,[18] ColdFusion,[28] Delphi,[29] Erlang,[30] [31] Factor,[32] Fantom,[33] Go,[34] JVM languages (Clojure, Scala, Groovy,[35] etc.),[36] Lua,[37] node.js,[38] HTTP REST,[39] Ruby,[40] Racket,[41] and Smalltalk.[42]
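A capped collection can be created from any of these drivers as well as from the shell. A sketch with the Java driver (the collection name and size are arbitrary examples):

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.DBCollection;
    import com.mongodb.Mongo;

    public class CappedCollectionExample {
        public static void main(String[] args) throws Exception {
            Mongo mongo = new Mongo("localhost", 27017);
            DB db = mongo.getDB("test");

            // Equivalent of the shell command:
            //   db.createCollection("log", {capped: true, size: 100000})
            DBCollection log = db.createCollection("log",
                    new BasicDBObject("capped", true).append("size", 100000));

            log.insert(new BasicDBObject("msg", "first entry"));   // oldest entries are
            log.insert(new BasicDBObject("msg", "second entry"));  // overwritten once the
                                                                   // size limit is reached
            System.out.println(log.find().toArray());
            mongo.close();
        }
    }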

Replication
MongoDB supports master-slave replication. A master can perform reads and writes. A slave copies data from the master and can only be used for reads or backup (not writes). MongoDB allows developers to guarantee that an operation has been replicated to at least N servers on a per-operation basis.

Master-slave
As operations are performed on the master, the slave will replicate any changes to the data.
Example: starting a master/slave pair locally:

$ mkdir -p ~/dbs/master ~/dbs/slave
$ ./mongod --master --port 10000 --dbpath ~/dbs/master
$ ./mongod --slave --port 10001 --dbpath ~/dbs/slave --source localhost:10000

Replica sets
Replica sets are similar to master-slave, but they incorporate the ability for the slaves to elect a new master if the current one goes down.

Sharding
MongoDB scales horizontally using a system called sharding,[43] which is very similar to the BigTable and PNUTS scaling model. The developer chooses a shard key, which determines how the data in a collection will be distributed. The data is split into ranges (based on the shard key) and distributed across multiple shards. (A shard is a master with one or more slaves.)
The application talks to a special routing process called `mongos` that looks identical to a single MongoDB server. This `mongos` process knows what data is on each shard and routes the client's requests appropriately. All requests flow through this process: it not only forwards requests and responses but also performs any necessary final data merges or sorts. Any number of `mongos` processes can be run: usually one per application server is recommended.
The developer's application must know that it is talking to a sharded cluster when performing some operations. For example, a "findAndModify" query must contain the shard key if the queried collection is sharded.[44]

Management and graphical frontends
Official tools
The most powerful and useful management tool is the database shell, mongo. mongo is built on SpiderMonkey, so it is a full JavaScript shell as well as being able to connect to MongoDB servers. The shell lets developers view, insert, remove, and update data in their databases, as well as get replication information, set up sharding, shut down servers, execute JavaScript, and more.
mongostat is a command-line tool that displays a simple list of stats about the last second: how many inserts, updates, removes, queries, and commands were performed, as well as what percentage of the time the database was locked and how much memory it is using.
mongosniff sniffs network traffic going to and from MongoDB.
Administrative information can also be accessed through the admin interface: a simple html webpage that serves information about the current server status. By default, this interface is 1000 ports above the database port (http://localhost:28017) and it can be turned off with the --norest option.
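As a rough sketch of turning sharding on from a client, the admin database of a `mongos` accepts sharding commands. The example below uses the legacy Java driver; the host, database and collection names are illustrative, and the lowercase command spellings (enablesharding, shardcollection) reflect the commands of this era and should be checked against the MongoDB documentation for the version in use.

    import com.mongodb.BasicDBObject;
    import com.mongodb.DB;
    import com.mongodb.Mongo;

    public class ShardingSetup {
        public static void main(String[] args) throws Exception {
            // Connect to a mongos router, not directly to a shard.
            Mongo mongos = new Mongo("mongos-host", 27017);
            DB admin = mongos.getDB("admin");

            // Allow the "mydb" database to be sharded.
            System.out.println(admin.command(new BasicDBObject("enablesharding", "mydb")));

            // Shard the users collection on the chosen shard key.
            BasicDBObject cmd = new BasicDBObject("shardcollection", "mydb.users")
                    .append("key", new BasicDBObject("username", 1));
            System.out.println(admin.command(cmd));
        }
    }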

Licensing and support
MongoDB is available for free under the GNU Affero General Public License.[55] The language drivers are available under an Apache License.

Monitoring
There are monitoring plugins available for MongoDB:
• munin[45]
• ganglia[46]
• scout[47]
• cacti[48]

GUIs
Several GUIs have been created to help developers visualize their data. Some popular ones are:
• Fang of Mongo[49] – a web-based UI built with Django and jQuery.
• Futon4Mongo[50] – a clone of the CouchDB Futon web interface for MongoDB.
• Mongo3[51] – a Ruby-based interface.
• MongoHub[52] – a native OS X application for managing MongoDB.
• Opricot[53] – a browser-based MongoDB shell written in PHP.
• Database Master[54] – Windows based MongoDB Management Studio, supports also RDBMS.

Prominent users
• MTV Networks[56]
• craigslist[57]
• Disney Interactive Media Group[58]
• Wordnik[59]
• diaspora[60]
• Shutterfly[61]
• foursquare[62]
• bit.ly[63]
• The New York Times[64]
• SourceForge[65]
• Business Insider[66]
• Etsy[67]
• CERN LHC[68]
• Thumbtack[69]
• AppScale[70]
• Uber[71]

. 2011-05-10. mongodb. mongodb. com/ ) [52] MongoHub (http:/ / www. org/ liamstask/ fantomongo/ wiki/ Home) gomongo Go driver (http:/ / github. com/ mongodb/ mongo-php-driver) Python driver (http:/ / github. mongoose) [40] rmongo (http:/ / github. mongodb. com/ mongodb/ mongo-python-driver) Ruby driver (http:/ / github. com/ p/ luamongo/ ) node. com [55] The AGPL . com/ blog/ mongodb-cacti-graphs) [49] Fang of Mongo (http:/ / github. com/ downloads/ macosx/ development_tools/ mongohub. mongodb. org/ display/ DOCS/ CentOS+ and+ Fedora+ Packages) [11] Debian and Ubuntu (http:/ / www. org/ display/ DOCS/ Sharding) [44] (http:/ / www. org/ ) MongoDB Blog . org/ display/ DOCS/ GridFS) [6] NGINX (http:/ / github. [12] Gentoo (http:/ / packages. org/ display/ DOCS/ Geospatial+ Indexing) MapReduce (http:/ / www.MongoDB 100 References [1] [2] [3] [4] MongoDB website (http:/ / www. org/ display/ DOCS/ findAndModify+ Command#) [45] Munin plugin (http:/ / github. archlinux. mongodb. com/ plugin_urls/ 291-mongodb-slow-queries) [48] Cacti plugin (http:/ / tag1consulting. com/ mongodb/ casbah) ColdFusion driver (http:/ / github. org/ display/ DOCS/ Ubuntu+ and+ Debian+ packages). html) [43] sharding (http:/ / www. squeaksource. org/ package/ mongoDB) [20] [21] [22] [23] [24] [25] [26] [27] [28] [29] [30] [31] [32] [33] [34] [35] [36] [37] [38] Java driver (http:/ / github. com/ mongodb/ mongo-ruby-driver) Casbah. php?ID=27971) [14] official website (http:/ / www. Retrieved 2011-07-06. com/ mongodb/ mongo-csharp-driver) [19] Haskell driver (http:/ / hackage. com/ wpntv/ erlmongo) Factor driver (http:/ / github. org/ display. com/ Fiedzia/ Fang-of-Mongo) [50] Futon4Mongo (http:/ / github. org/ rumataestor/ emongo) Erlmongo Erlang driver (http:/ / github. org/ display/ DOCS/ Javascript+ Language+ Center) (https:/ / github. mongodb. racket-lang. org/ display/ DOCS/ node. org/ post/ 434865639/ state-of-mongodb-march-2010) Geospatial indexes (http:/ / www. com/ erh/ mongo-munin) [46] Ganglia plugin (http:/ / github. mongodb. com/ mongodb/ mongo) [18] C# driver (https:/ / github. JS) [39] REST interface (http:/ / github. com/ p/ pebongo/ ) Emongo Erlang driver (http:/ / bitbucket. nucleonsoftware. org/ display/ DOCS/ MapReduce) [5] GridFS (http:/ / www. mongodb. mongodb. google. . mongodb. html) [53] Opricot (http:/ / www. org/ display/ DOCS/ Capped+ Collections) [9] (http:/ / www. com/ MongoTalk. com/ virtix/ cfmongodb) Delphi (http:/ / code. mongodb.MongoDB Blog: May 5. com/ mongodb/ mongo-c-driver) [17] C++ driver (http:/ / github. mongodb. com/ mdirolf/ nginx-gridfs) [7] lighttpd (http:/ / bitbucket. com/ 2010/ 06/ 20/ gmongo-0-5-released/ ) JVM language center (http:/ / www. org/ package/ dev-db/ mongodb) [13] Arch Linux (http:/ / aur. com/ mongodb/ mongo-perl-driver) PHP driver (http:/ / github. com/ fons/ cl-mongo) Perl driver (http:/ / github. org/ post/ 137788967/ 32-bit-limitations) [16] C driver (http:/ / github.js driver (http:/ / www. ss?package=mongodb. org/ display/ DOCS/ Tailable+ Cursors) [10] CentOS and Fedora (http:/ / www. fi/ oss/ opricot/ ) [54] http:/ / www. mongodb. com/ sbellity/ futon4mongo) [51] Mongo3 (http:/ / mongo3. plt& owner=jaymccarthy) [42] Smalltalk driver (http:/ / www. 2009 (http:/ / blog. org/ display/ DOCS/ Downloads) [15] (http:/ / blog. google. gentoo. paulopoiati. haskell. mongodb. the officially supported Scala Driver for MongoDB (https:/ / github. com/ mikejs/ gomongo) GMongo (http:/ / blog. mongodb. 
com/ slavapestov/ factor/ tree/ master/ extra/ mongodb/ ) Fantom driver (http:/ / bitbucket. com/ quiiver/ mongodb-ganglia) [47] Scout slow-query plugin (http:/ / scoutapp. com/ tmm1/ rmongo) [41] (http:/ / planet. apple. org/ bwmcadams/ lighttpd-gridfs/ src/ ) [8] capped collections (http:/ / www. org/ display/ DOCS/ JVM+ Languages) LuaMongo (http:/ / code. mongodb. icmfinland. mongodb. org/ post/ 5360007734/ mongodb-powering-mtvs-web-properties). com/ mongodb/ mongo-java-driver) JavaScript driver (http:/ / www.March 2010 (http:/ / blog. org/ packages. com/ kchodorow/ sleepy. mongodb. org/ post/ 103832439/ the-agpl) [56] "MongoDB Powering MTV's Web Properties" (http:/ / blog.


Multi-master replication

Multi-master replication is a method of database replication which allows data to be stored by a group of computers, and updated by any member of the group. The multi-master replication system is responsible for propagating the data modifications made by each member to the rest of the group, and for resolving any conflicts that might arise between concurrent changes made by different members.

Multi-master replication can be contrasted with master-slave replication, in which a single member of the group is designated as the "master" for a given piece of data and is the only node allowed to modify that data item. Other members wishing to modify the data item must first contact the master node. Allowing only a single master makes it easier to achieve consistency among the members of the group, but is less flexible than multi-master replication.

Advantages
• If one master fails, other masters continue to update the database.
• Masters can be located in several physical sites, i.e. distributed across the network.

Disadvantages
• Most multi-master replication systems are only loosely consistent, i.e. lazy and asynchronous, violating ACID properties.
• Eager replication systems are complex and increase communication latency.
• Issues such as conflict resolution can become intractable as the number of nodes involved rises and latency increases.

Methods

Log-based: a database transaction log is referenced to capture changes made to the database. For log-based transaction capturing, database changes can only be distributed asynchronously.

Trigger-based: triggers at the subscriber capture changes made to the database and submit them to the publisher. With trigger-based transaction capturing, database changes can be distributed either synchronously or asynchronously.

Implementations

Many directory servers based on LDAP implement multi-master replication.

Active Directory

One of the more prevalent multi-master replication implementations in directory servers is Microsoft's Active Directory. Within Active Directory, objects that are updated on one domain controller are then replicated to other domain controllers through multi-master replication. It is not required for all domain controllers to replicate with each other domain controller, as this would cause excessive network traffic in large Active Directory deployments. Instead, domain controllers have a complex update pattern that ensures that all servers are updated in a timely fashion without excessive replication traffic. Some Active Directory needs are however better served by flexible single master operation.
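The conflict-resolution duty mentioned above is the crux of multi-master designs: two masters may update the same item concurrently, and the replication system must pick a winner deterministically on every node. The Python sketch below is a minimal, illustrative last-writer-wins merge; it is not the algorithm of any particular product, and the record layout (value, timestamp, node id) is an assumption made for the example.

# Minimal last-writer-wins merge of two multi-master replicas (illustrative only).
# Each replica maps key -> (value, timestamp, node_id); timestamp ties are broken
# by node_id so every master resolves the same conflict to the same winner.
def merge_replicas(a, b):
    merged = {}
    for key in set(a) | set(b):
        candidates = [r[key] for r in (a, b) if key in r]
        # Highest (timestamp, node_id) pair wins deterministically everywhere.
        merged[key] = max(candidates, key=lambda rec: (rec[1], rec[2]))
    return merged

if __name__ == "__main__":
    master1 = {"price": (10, 1001, "node-a")}
    master2 = {"price": (12, 1002, "node-b"), "stock": (5, 990, "node-b")}
    print(merge_replicas(master1, master2))
    # {'price': (12, 1002, 'node-b'), 'stock': (5, 990, 'node-b')}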

CA Directory

CA Directory supports multi-master replication.

OpenDS

OpenDS implements multi-master replication since its version 1.0. The OpenDS multi-master replication is asynchronous: it uses a log with a publish-subscribe mechanism that allows scaling to a large number of nodes. OpenDS replication does conflict resolution at the entry and attribute level. OpenDS replication can be used over a wide area network.

OpenLDAP

The widely used open source LDAP server implements multi-master replication since its version 2.4 (October 2007) [1].

Ingres

Within Ingres Replicator, objects that are updated on one Ingres server can then be replicated to other servers, whether local or remote, through multi-master replication. If one server fails, client connections can be re-directed to another server. It is not required for all Ingres servers in an environment to replicate with each other, as this could cause excessive network traffic in large implementations. Instead, Ingres Replicator provides an elegant and sophisticated design that allows the appropriate data to be replicated to the appropriate servers without excessive replication traffic. This means that some servers in the environment can serve as failover candidates while other servers can meet other requirements, such as managing a subset of columns or tables for a departmental solution, a subset of rows for a geographical region, or one-way replication for a reporting server. In the event of a source, target, or network failure, data integrity is enforced through a two-phase commit protocol by ensuring that either the whole transaction is replicated, or none of it is. In addition, Ingres Replicator can operate over RDBMSs from multiple vendors to connect them.

MySQL

MariaDB and MySQL ship with replication support. It is possible to achieve a multi-master replication scheme beginning with MySQL version 3.23. MySQL Cluster supports conflict detection and resolution between multiple masters since version 6.3.

Oracle

Oracle database clusters implement multi-master replication using one of two methods. Asynchronous multi-master replication commits data changes to a deferred transaction queue which is periodically processed on all databases in the cluster. Synchronous multi-master replication uses Oracle's two-phase commit functionality to ensure that all databases in the cluster have a consistent dataset.

PostgreSQL

PostgreSQL offers multiple solutions for multi-master replication, including solutions based on two-phase commit. There are Bucardo [2], rubyrep [3], PgPool and PgPool-II [4], PgCluster [5] and Sequoia [6], as well as some proprietary solutions. Another promising approach, implementing eager (synchronous) replication, is Postgres-R [7]; however, it is still in development. Yet another project implementing synchronous replication is Postgres-XC [8]; Postgres-XC is also still under development.
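The asynchronous style described for Oracle above — commit locally, enqueue the change, and apply it on the other masters later — can be sketched in a few lines of Python. This is a conceptual illustration only; it does not use Oracle's actual deferred-queue interfaces, and the function names are invented for the example.

import queue

# Each master commits locally and appends the change to a deferred queue;
# a later pass replays the queue against the peer databases.
deferred = queue.Queue()

def commit_local(db, key, value):
    db[key] = value                       # local commit succeeds immediately
    deferred.put((key, value))            # change is deferred for the peers

def propagate(peers):
    while not deferred.empty():
        key, value = deferred.get()
        for peer in peers:                # applied later, hence "asynchronous"
            peer[key] = value

if __name__ == "__main__":
    master_a, master_b = {}, {}
    commit_local(master_a, "balance", 100)
    propagate([master_b])                 # peers converge once the queue drains
    print(master_a == master_b)           # True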

References
[1] http://www.openldap.org/software/roadmap.html
[2] http://bucardo.org/wiki/Bucardo
[3] http://www.rubyrep.org
[4] http://pgpool.projects.postgresql.org/
[5] http://pgcluster.projects.postgresql.org/
[6] http://www.continuent.com/community/lab-projects/sequoia
[7] http://www.postgres-r.org
[8] http://sourceforge.net/projects/postgres-xc/

External links
• Challenges Involved in Multimaster Replication (http://www.dbspecialists.com/presentations/mm_replication.html)
• Active Directory Replication Model (http://www.microsoft.com/resources/documentation/Windows/2000/server/reskit/en-us/Default.asp?url=/resources/documentation/Windows/2000/server/reskit/en-us/distrib/dsbh_rep_fgtk.asp)
• Terms and Definitions for Database Replication (http://www.postgres-r.org/documentation/terms)
• SymmetricDS (http://symmetricds.org) is web-enabled, database-independent data synchronization/replication software. It uses web and database technologies to replicate tables between relational databases in near real time. The software was designed to scale to a large number of databases, work across low-bandwidth connections, and withstand periods of network outage. By using database triggers, SymmetricDS guarantees that data changes are captured and atomicity is preserved. Support for database vendors is provided through a Database Dialect layer, with implementations for MySQL, Oracle, SQL Server, PostgreSQL, DB2, Firebird, H2, HSQLDB, and Apache Derby included. Licensed under the LGPL open source license.
• Daffodil Replicator (http://opensource.replicator.daffodilsw.com/) is a Java tool for data synchronization, data migration, and data backup between various database servers. It works over a standard JDBC driver and supports replication across heterogeneous databases. At present it supports the following databases: Microsoft SQL Server, Oracle, MySQL, PostgreSQL, DB2, Apache Derby, and Daffodil database. Daffodil Replicator is available in both enterprise (commercial) and open source (GPL-licensed) versions.
• DBReplicator Project Page (http://dbreplicator.org/)
• DMOZ Open Directory Project - Database Replication Page (http://www.dmoz.org/Computers/Software/Databases/Replication/)
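Both the trigger-based capture method described under Methods and tools such as SymmetricDS rely on database triggers to record every change into a log that a replicator can later ship to other masters. The idea can be demonstrated with SQLite, whose triggers are available from Python's standard library; the table and column names below are invented for the example.

import sqlite3

# A trigger copies every change into a change-log table; a replication
# process would later read change_log and forward the rows to peer databases.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE change_log (id INTEGER, new_balance INTEGER,
                         captured_at TEXT DEFAULT CURRENT_TIMESTAMP);
CREATE TRIGGER capture_update AFTER UPDATE ON accounts
BEGIN
    INSERT INTO change_log (id, new_balance) VALUES (NEW.id, NEW.balance);
END;
""")
con.execute("INSERT INTO accounts VALUES (1, 100)")
con.execute("UPDATE accounts SET balance = 150 WHERE id = 1")
print(con.execute("SELECT id, new_balance FROM change_log").fetchall())
# [(1, 150)]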

Multitier architecture

In software engineering, multi-tier architecture (often referred to as n-tier architecture) is a client–server architecture in which the presentation, the application processing, and the data management are logically separate processes. For example, an application that uses middleware to service data requests between a user and a database employs multi-tier architecture. The most widespread use of multi-tier architecture is the three-tier architecture.

N-tier application architecture provides a model for developers to create a flexible and reusable application. By breaking up an application into tiers, developers only have to modify or add a specific layer, rather than having to rewrite the entire application. There should be a presentation tier, a business or data access tier, and a data tier.

The concepts of layer and tier are often used interchangeably. However, one fairly common point of view is that there is indeed a difference, and that a layer is a logical structuring mechanism for the elements that make up the software solution, while a tier is a physical structuring mechanism for the system infrastructure.[1] [2]

Three-tier architecture

Three-tier[3] is a client–server architecture in which the user interface, functional process logic ("business rules"), computer data storage and data access are developed and maintained as independent modules, most often on separate platforms. It was developed by John J. Donovan in Open Environment Corporation (OEC), a tools company he founded in Cambridge, Massachusetts.

The three-tier model is a software architecture and a software design pattern. Apart from the usual advantages of modular software with well-defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently as requirements or technology change. For example, a change of operating system in the presentation tier would only affect the user interface code. (Figure: Visual overview of a three-tiered application.)

Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface, functional process logic may consist of one or more separate modules running on a workstation or application server, and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may itself be multi-tiered (in which case the overall architecture is called an "n-tier architecture").

Three-tier architecture has the following three tiers:

Presentation tier
This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with the other tiers by outputting results to the browser/client tier and all other tiers in the network.

Application tier (business logic, logic tier, or middle tier)
The logic tier is pulled out from the presentation tier and, as its own layer, it controls an application's functionality by performing detailed processing.

Data tier
This tier consists of database servers. Here information is stored and retrieved. This tier keeps data neutral and independent from application servers or business logic. Giving data its own tier also improves scalability and performance.

Web development usage
In the web development field, three-tier is often used to refer to websites, commonly electronic commerce websites, which are built using three tiers:
1. A front-end web server serving static content, and potentially some cached dynamic content. In web based applications, the front end is the content rendered by the browser. The content may be static or generated dynamically.
2. A middle dynamic content processing and generation level application server, for example Java EE, ASP.NET, PHP, or the ColdFusion platform.
3. A back-end database, comprising both data sets and the database management system or RDBMS software that manages and provides access to the data.

Comparison with the MVC architecture
At first glance, the three tiers may seem similar to the model-view-controller (MVC) concept; however, topologically they are different. A fundamental rule in a three-tier architecture is that the client tier never communicates directly with the data tier; in a three-tier model all communication must pass through the middle tier. Conceptually the three-tier architecture is linear. However, the MVC architecture is triangular: the view sends updates to the controller, the controller updates the model, and the view gets updated directly from the model.

From a historical perspective, the three-tier architecture concept emerged in the 1990s from observations of distributed systems (e.g., web applications) where the client, middleware and data tiers ran on physically separate platforms. MVC comes from the previous decade (by work at Xerox PARC in the late 1970s and early 1980s) and is based on observations of applications that ran on a single graphical workstation; MVC was applied to distributed applications later in its history (see Model 2).

Other considerations
Data transfer between tiers is part of the architecture. Protocols involved may include one or more of SNMP, CORBA, Java RMI, .NET Remoting, Windows Communication Foundation, sockets, UDP, web services or other standard or proprietary protocols. Often middleware is used to connect the separate tiers. Separate tiers often (but not necessarily) run on separate physical servers, and each tier may itself run on a cluster.

Traceability
The end-to-end traceability of data flows through n-tier systems is a challenging task which becomes more important as systems increase in complexity. The Application Response Measurement defines concepts and APIs for measuring performance and correlating transactions between tiers.
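To make the separation concrete, here is a deliberately small Python sketch of the three tiers as independent modules, with the presentation tier talking only to the logic tier and the logic tier alone touching the data tier (the rule noted above). Class and method names are invented for illustration; a real deployment would place each tier on its own server behind middleware.

# Data tier: owns storage and nothing else.
class DataTier:
    def __init__(self):
        self._rows = {"sku-1": {"name": "Widget", "price": 9.99}}
    def fetch(self, sku):
        return self._rows.get(sku)

# Logic tier ("business rules"): the only layer allowed to call the data tier.
class LogicTier:
    def __init__(self, data):
        self._data = data
    def quote(self, sku, quantity):
        row = self._data.fetch(sku)
        if row is None:
            raise KeyError(sku)
        return {"item": row["name"], "total": round(row["price"] * quantity, 2)}

# Presentation tier: formats results for the client and never sees the data tier.
class PresentationTier:
    def __init__(self, logic):
        self._logic = logic
    def show_quote(self, sku, quantity):
        q = self._logic.quote(sku, quantity)
        return f"{quantity} x {q['item']}: ${q['total']}"

if __name__ == "__main__":
    app = PresentationTier(LogicTier(DataTier()))
    print(app.show_quote("sku-1", 3))   # 3 x Widget: $29.97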

Comments

Generally, the term tiers is used to describe the physical distribution of components of a system on separate servers, computers, or networks (processing nodes). A three-tier architecture then will have three processing nodes. Layers refer to a logical grouping of components which may or may not be physically located on one processing node.

External links
• Linux journal, Three Tier Architecture [4]
• Microsoft Application Architecture Guide [5]

References
[1] Deployment Patterns (Microsoft Enterprise Architecture, Patterns, and Practices) (http://msdn.microsoft.com/en-us/library/ms998478.aspx)
[2] Fowler, Martin. "Patterns of Enterprise Application Architecture" (2002). Addison Wesley.
[3] Eckerson, Wayne W. "Three Tier Client/Server Architecture: Achieving Scalability, Performance, and Efficiency in Client Server Applications." Open Information Systems 10, 1 (January 1995): 3(20)
[4] http://www.linuxjournal.com/article/3508
[5] http://msdn.microsoft.com/en-us/library/ee658109.aspx

This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.

Network cloaking

Network cloaking is a technology that makes a protected network invisible to malicious external traffic, while allowing complete and uninterrupted access for legitimate users. To the perpetrator, the protected network simply appears to be unused, or invisible.

Network cloaking is accomplished via a promiscuous bridge with firewall functionality, located in front of the internet firewall. All non-encrypted Internet traffic entering a network is inspected for malicious code, prohibited behaviors, malformed packets, and hack attempts. The network cloaking function immediately drops all packets from an offending IP address, including the initial request packets and responses from the protected network.
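A toy illustration of the drop-everything behaviour described above: once an address is flagged as offending, every packet from it is silently discarded and nothing is sent back, so the sender cannot even tell that the network exists. The Python sketch below only models the decision logic; the inspection rules are stand-ins, and real cloaking happens in a bridge inspecting live traffic.

offending = {"203.0.113.7"}          # addresses already flagged by inspection

def inspect(packet):
    # Placeholder rules standing in for malware/behaviour/malformed-packet checks.
    return packet.get("malformed") or packet.get("payload") == b"exploit"

def handle(packet):
    src = packet["src"]
    if src in offending:
        return None                   # silent drop: no response leaks back
    if inspect(packet):
        offending.add(src)            # flag the source and drop this packet too
        return None
    return {"status": "forwarded"}    # legitimate traffic passes untouched

if __name__ == "__main__":
    print(handle({"src": "198.51.100.2", "payload": b"GET /"}))   # forwarded
    print(handle({"src": "203.0.113.7", "payload": b"GET /"}))    # None (cloaked)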

the interrupts processing. security etc. This project focuses on making students fully understand the kernel internals of a microkernel-based operating system by addressing advanced concepts such as multiprocessing. kaneton kaneton represents the core of the Opaak trilogy as it aims at making students develop parts of a microkernel. Indeed. each one targeting a kernel functionality such as the booting phase. Indeed. the memory management and the multitasking. The project is composed of several stages. to kernel internals to operating system principles and distributed system paradigms. This project is taught following the kastor project and lasts for a few months. The objective for students is to develop an emulator for arcade games such as Pong. The project lasts several weeks and allows students to understand what is the microprocessor's role in an operating system though many modern functionalities are not discussed in this project such as virtual memory and scheduling. Arcanoid etc. The kayou's originality resides in its fully distributed architecture. originally named k. kayou kayou is an operating system built over the kaneton microkernel. loads it into memory and finally executes it. all the computers of the network share their resources with each other including memory. in an environment composed of multiple kayou instances. The kernel extracts this game from a special and minimalistic file system. Projects Opaak is composed of the three following projects kastor kastor. . In 2006. processor. The Opaak trilogy has been introduced by Julien Quintard in 2007 following the relative success of the kastor and kaneton projects in the EPITA curriculum. storage. the kaneton educational project competed[1] in the Alternative OS Contest run by the specialized website OSNews. date at which the kastor project was created. is an introductory project targeting low-level programming. History The Opaak educational trilogy's projects have been used for teaching operating systems at EPITA since 2004. devices etc.Opaak 108 Opaak The Opaak educational trilogy aims at providing material for the teaching and self-teaching of operating system concepts ranging from low-level programming. the kastor monolithic kernel is provided with an ELF binary at the boot time which represents an arcade game to be run.

This way it enables reusable software applications and components.Opaak 109 References [1] The kaneton Microkernel Project (http:/ / www. com/ story/ 15018/ The-kaneton-Microkernel-Project/ ) at the Alternative OS Contest External links • The Opaak educational trilogy official website (http://www.org) Open architecture computing environment Open Architecture Computing Environment (OACE) is a specification that aims to provide a standards-based computing environment in order to decouple computing environment from software applications. . osnews.opaak.

libmagic. bzip2. objdump. OCFA is extensible in C++ or Java. GNU Privacy Guard. it uses a PostgreSQL database for data storage. Photorec. net/ apps/ trac/ ocfa/ wiki . rar. exiftags. Architecture OCFA consists of a back end for the Linux platform. The framework was built by the Dutch national police. gzip. 7-zip. qemu-img and mbx2mbox. a custom Content-addressable storage or CarvFS based data repository and a Lucene index. antiword. zip. The framework integrates with other open source forensic tools and includes modules for The Sleuth Kit. Scalpel. References [1] http:/ / sourceforge.0pl4 Development status Active Operating system Available in Type Website Linux English Computer forensics [1] [1] The Open Computer Forensics Architecture (OCFA) is an distributed open source computer forensics framework used to analyze digital media within a digital forensics laboratory environment.Open Computer Forensics Architecture 110 Open Computer Forensics Architecture Open Computer Forensics Architecture Developer(s) Stable release Korps landelijke politiediensten 2. tar.2. The front end for OCFA has not been made publicly available due to licencing issues.

OrientDB

Developer(s): Luca Garulli
Initial release: 2010
Written in: Java
Operating system: Cross-platform
Type: Graph database
License: Apache 2 License
Website: [1]

OrientDB is an open source NoSQL database management system written in Java. It supports schema-less, schema-full and schema-mixed modes. Even though it is a document-based database, relationships are managed as in graph databases, with direct connections among records. It has a strong security profiling system based on users and roles, and supports SQL among its query languages. Thanks to the SQL layer, OrientDB is straightforward to use for people skilled in the relational world.

OrientDB uses a new indexing algorithm called MVRB-Tree, derived from the red-black tree and from the B+tree, with the benefits of both: fast insertion and very fast lookup.

Features
• Transactional: supports ACID transactions [2]. On crash it recovers the pending documents.
• GraphDB: native management of graphs; manages trees and graphs of connected documents; 100% compliant with the TinkerPop Blueprints [3] standard for graph databases.
• SQL: supports the SQL language [4] with extensions to handle relationships without SQL joins (see the sketch below).
• Web ready: natively supports HTTP, a RESTful protocol and JSON without third-party libraries and components.
• Run everywhere: the engine is 100% pure Java and runs on Linux, Windows and any system that supports Java technology.
• Embeddable: local mode to use the database bypassing the server. Perfect for scenarios where the database is embedded.
• Light: about 1 MB for the full server. No dependencies on other software. No libraries needed.
• Apache 2 License: always free for any usage. No fees or royalties are required to use it.
• Commercial support available.
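The point above about relationships being direct connections among records (rather than values resolved through SQL joins) can be illustrated with a tiny in-memory document store in Python. This is not the OrientDB API; the record identifiers and field names are invented for the example, and only the idea — a link is a pointer you follow, not a join you compute — is being shown.

# Documents hold direct links (record ids) to other documents, so traversal
# follows pointers instead of computing a join.
db = {
    "#12:0": {"@class": "City",   "name": "Rome"},
    "#13:0": {"@class": "Person", "name": "Luca", "city": "#12:0"},
    "#13:1": {"@class": "Person", "name": "Anna", "city": "#12:0"},
}

def traverse(rid, path):
    """Follow a chain of link fields, e.g. person -> city -> name."""
    record = db[rid]
    for field in path:
        value = record[field]
        record = db[value] if isinstance(value, str) and value.startswith("#") else value
        if not isinstance(record, dict):
            return record
    return record

if __name__ == "__main__":
    print(traverse("#13:0", ["city", "name"]))   # Rome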

com/ forum/ #!forum/ orient-database Overlay network An overlay network is a computer network which is built on top of another network. google. Enterprise private networks were first overlaid on telecommunication networks such as frame relay and Asynchronous Transfer Mode packet switching infrastructures but migration from these (now legacy) infrastructures to IP based MPLS networks and virtual private networks started (2001~2002). com/ p/ orient/ [7] https:/ / groups. com/ [6] http:/ / code. and client-server applications are overlay networks because their nodes run on top of the Internet. a transport layer and an IP or circuit layers (in the case of the PSTN). in the underlying network. From a physical standpoint overlay networks are quite complex (see Figure 1) as they combine various logical layers that are operated and built by Figure 2: Overlay network broken-up into logical layers Figure 1: A sample overlay network . com/ p/ orient/ wiki/ SQL [5] http:/ / www. com [4] http:/ / code. google.OrientDB 112 External links • Official OrientDB website [5] • Code base on Google Code [6] • Public technical group [7] References [1] http:/ / www. google. Nodes in the overlay can be thought of as being connected by virtual or logical links. com/ p/ orient/ wiki/ Transactions [3] http:/ / blueprints. orientechnologies. perhaps through many physical links. com [2] http:/ / code. Uses of overlay networks In telecommunication Overlay networks are used in telecommunication because of the availability of digital circuit switching equipments and optical fiber. tinkerpop. each of which corresponds to a path. peer-to-peer networks. distributed systems such as cloud computing. For example.[2] Telecommunication transport networks and IP networks (that combined make up the broader Internet) are all overlaid with at least an optical layer. The [1] Internet was built as an overlay upon the telephone network. google. orientechnologies.

Kaashoek. whose IP address is not known in advance. an overlay network can be incrementally deployed on end-hosts running the overlay protocol software. Morris. RON (Resilient Overlay Network) for resilient routing. for example. net) External links • List of overlay network implementations. (Examples: Limewire. [4] http:/ / esm..) • PUCC • Solipsis: a France Télécom system for massively shared virtual world References [1] D.brown. without cooperation from ISPs.mit. Previous proposals such as IntServ. Overlay networks have also been proposed as a way to improve Internet routing. and IP multicast have not seen wide acceptance largely because they require modification of all routers in the network. Freenet and I2P. For example. universities.[3] 113 Over the Internet Nowadays the Internet is the basis for more overlaid networks that can be constructed in order to permit routing of messages to destinations not specified by an IP address. Oxford University Press. Andersen. Virtela Technology Services underlying telecom providers. Gnutella2.cs.edu/ron/) • Overcast: reliable multicasting with an overlay network (http://www.jyu. utorrent. Akamai Technologies manages an overlay network which provides reliable. such as KAD and other protocols based on the Kademlia algorithm. DiffServ. such as through quality of service guarantees to achieve higher-quality streaming media. For example.fi/ffdoc/storm/pegboard/ available_overlays--hemppah/peg. cs. Resilient Overlay Networks (http:/ / nms. government etc) but they allow separation of concerns (and healthy business competition) that over time permitted the build up of a broad set of services that could not have been proposed by a single telecommunication operator overwise (ranging from broadband Internet access. edu/ [5] Virtela Technology Services (http:/ / www. and OverQoS for quality of service guarantees. voice over IP or IPTV. Balakrishnan. For example. efficient content delivery (a kind of multicast). The overlay has no control over how packets are routed in the underlying network between two overlay nodes. csail. 2001. On the other hand. competitive telecom operators etc).edu/~jj/papers/ overcast-osdi00. • JXTA • Many peer-to-peer protocols including Gnutella. virtela. distributed hash tables can be used to route messages to a node having a specific logical address. In Proc.gen. M. the sequence of overlay nodes a message traverses before reaching its destination. com/ history/ nethistory/ transmission. among others. corp. ACM SOSP.it. Martin. Academic research includes End System Multicast [4] and Overcast for multicast. [2] AT&T history of Network transmission (http:/ / www. Telecoms in the Internet Age: From Boom to Bust to . html) [3] Fransman. H.csail. cmu. July 2003 (http://himalia. Oct. etc. and R. but it can control.?.. Shareaza.html) • Resilient Overlay Networks (http://nms. edu/ ron/ ). [5] provides an overlay network in 90+ countries on top of 500+ different List of overlay network protocols based on TCP/IP Overlay network protocols based on TCP/IP include: • Distributed hash tables (DHTs). for example.Overlay network various entities (businesses. att. mit.pdf) .
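As a concrete illustration of the routing-by-logical-address idea behind the distributed hash tables mentioned above (Kademlia, KAD and similar protocols), the sketch below implements the simplest form of consistent-hash lookup: each node owns a position on a ring of hash values, and a key is routed to the first node clockwise from the key's hash. This is a generic Python illustration, not the code of any of the systems listed above.

import hashlib
from bisect import bisect_left

M = 2 ** 16                       # small identifier space for the example

def h(value):
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % M

class Ring:
    def __init__(self, node_names):
        # Each node owns the arc of the ring ending at its own identifier.
        self.points = sorted((h(n), n) for n in node_names)
    def lookup(self, key):
        ids = [p for p, _ in self.points]
        i = bisect_left(ids, h(key)) % len(self.points)   # first node clockwise
        return self.points[i][1]

if __name__ == "__main__":
    ring = Ring(["node-a", "node-b", "node-c"])
    for key in ["alpha", "beta", "gamma"]:
        print(key, "->", ring.lookup(key))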

as it uses standard libraries such as MPI. the fine-grained nature of the classes provided by the framework allow a higher flexibility compared to other frameworks. local searches. ParadisEO is of the rare frameworks that provide the most common parallel and distributed models. The models can be exploited in a transparent way. ParadisEO provides a broad range of features including evolutionary algorithms.mit. Their implementation is portable on distributed-memory machines as well as on shared-memory multiprocessors. and parallel and distributed metaheuristics. Their experimentation on the radio network design real-world application demonstrate their efficiency. ANSI-C++ compliant computation library is portable across both Windows system and sequential platforms (Unix. the most common parallel and distributed models and hybridization mechanisms. etc.html) 114 Paradiseo Paradiseo Developer(s) Stable release DOLPHIN project-team 1. This template-based. Particle swarm optimization.0 / October 12. This high content and utility encourages its use at International level. etc.lcs. Mac OS X. Furthermore. one has just to instantiate their associated provided classes.edu/papers/ overqos-nsdi04. Linux.Overlay network • OverQoS: An overlay based architecture for enhancing Internet QoS (http://nms. This separation confers to the user a maximum code and design reuse. PVM and PThreads. ParadisEO is distributed under the CeCill license and can be used under several environments. hybrid metaheuristics. . Overview ParadisEO is a white-box object-oriented framework dedicated to the reusable design of metaheuristics. ParadisEO is based on a clear conceptual separation of the solution methods from the problems they are intended to solve.). 2007 [1] of INRIA Operating system Cross-platform Type License Website Technical computing CeCill license [2] ParadisEO is a white-box object-oriented framework dedicated to the flexible design of metaheuristics.
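To ground what a "local search" metaheuristic of the kind ParadisEO packages actually does, here is a minimal hill-climbing loop in Python for a toy bit-string problem. It is a generic illustration only: ParadisEO itself is a C++ framework, and none of the names below come from its API.

import random

def onemax(bits):
    """Toy objective: number of 1-bits (to be maximised)."""
    return sum(bits)

def hill_climb(n_bits=32, max_iters=1000, seed=0):
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(max_iters):
        # Neighbourhood move: flip one randomly chosen bit.
        i = rng.randrange(n_bits)
        neighbour = current[:]
        neighbour[i] ^= 1
        if onemax(neighbour) >= onemax(current):   # accept non-worsening moves
            current = neighbour
    return current, onemax(current)

if __name__ == "__main__":
    solution, score = hill_climb()
    print(score, "ones out of 32")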

cellular model.. it is very easy to subclass existing abstract or concrete classes.. Paradiseo-MOEO Paradiseo-MOEO provides a broad range of tools for the design of multiobjective optimization metaheuristics: fitness assignment shemes (achievement functions. Team • • • • • Jean-Charles Boisson Clive Canape [3] Thomas Legrand Arnaud Liefooghe Alexandru-Adrian Tantar External links • Official site [2]. EMO 2007. it provides tools for the development of single solution-based metaheuristics: Hill climbing.Paradiseo 115 Modules Paradiseo-EO Paradiseo-EO deals with population based metaheuristics. CEC 2006. 0-7803-9489-5. It is component-based..). particle swarm optimization. In Handbook of Bioinspired Algorithms and Applications. It contains classes for almost any kind of evolutionary computation you might come up to .. Edited by S. crowding). Matsushima. performance metrics (contribution. July 16-21 2006. at Paradiseo website • Team [1]. ranking.at least for the ones we could think of.. statistical tools and some easy-to-use state-of-the-art multi-objective evolutionary algorithms (NSGA. indicator-based. Tabu search. incremental evaluation.. Iterative Local Search (ILS). Paradiseo-MO Paradiseo-MO deals with single-solution based metaheuristics. so that if you don't find the class you need in it.Y. Canada • "ParadisEO-MOEO: A Framework for Evolutionary Multi-objective Optimization" [5] (broken link?) • A Multi-Objective Approach to the Design of Conducting Polymer Composites for Electromagnetic Shielding.. pp 1412–1419. Paradiseo-PEO also introduces tools for the design of distributed.. diversity preservation mechanisms (sharing.. Vancouver. Olariu and A... Paradiseo-PEO Paradiseo-PEO provides tools for the design of parallel and distributed metaheuristics: parallel evaluation. ANSI-C++ compliant evolutionary computation library (evolutionary algorithms. at DOLPHIN project-team website References • "Solving the Protein Folding Problem with a Bicriterion Genetic Algorithm on the Grid" [4] • Protein Sequencing with an Adaptive Genetic Algorithm from Tandem Mass Spectrometry. Simulated annealing. Zomaya • Grid computing for parallel bioinspired algorithms [6] . entropy.). Japan • A hybrid metaheuristic for knowledge discovery in microarray experiments. NSGA-II. it is a templates-based.. partial neighbourhood. island model. elitism. parallel evaluation function. IBEA. hybrid and cooperative models.).).

inria. . The authors suggest that as one moves up the application stack. 172 [5] http:/ / www2. a security exploit in that the program implementing the parasitic computing has no authority to consume resources made available to the other program. gforge. lifl. pdf| [6] http:/ / top25. org/ 10. parasitic computing on the level of checksums is a demonstration of the concept. It is. one could in theory use a number of control nodes for which many hosts on the Internet form a distributed computing network completely unawares. or even done anything besides have a normal TCP/IP session. all the sub-problems will be answered and the final answer easily calculated. In addition. Eventually. en. as part of receiving the packet and deciding whether it is valid and well-formed. it will then request a new packet from the original computer. and can transmit a fresh packet embodying a different sub-problem. comcom. com/ content/ up02m74726v1526u/ |ParadisEO: [8] http:/ / dx. and the 3-SAT problem would be solved much more quickly if just analyzed locally. inria. fr/ ~jourdan/ publi/ jourdan_EMO07_A.perhaps one could break down interesting problems into queries of complex cryptographic protocols using public keys. in a sense. there might come a point where there is a net computational gain to the parasite . lille. This computer will. If the checksum is invalid. The first computer is attempting to solve a large and extremely difficult 3-SAT problem. 2006. If there was a net gain. sciencedirect. 2006. com/ index. springerlink. fr/ recherche/ equipes/ dolphin. doi. php?cat_id=9& subject_area_id=7& journal_id=07437315 [7] http:/ / www. 1016/ j. html [2] http:/ / paradiseo. 017 Parasitic computing Parasitic computing is programming technique where a program in normal authorized interactions with another program manages to get the other program to perform computations of a complex nature.Paradiseo • A Framework for the Reusable Design of Parallel and Distributed Metaheuristics [7] (broken link?) • Designing cellular networks using a parallel hybrid metaheuristic [8] 116 References [1] http:/ / www. The original computer now knows the answer to that smaller problem based on the second computer's response. The proof-of-concept is obviously extremely inefficient as the amount of computation necessary to merely send the packets in the first place easily exceeds the computations leached from the other program. it has decomposed the original 3-SAT problem in a considerable number of smaller problems. 08. Each of these smaller problems is then encoded as a relation between a checksum and a packet such that whether the checksum is accurate or not is also the answer to that smaller problem. However. 1109/ CCGRID. So in the end. The packet/checksum is then sent to another computer. The example given by the original paper was two computers communicating over the Internet. fr/ ~canape [4] http:/ / doi. ieeecomputersociety. create a checksum of the packet and see whether it is identical to the provided checksum. org/ 10. the target computer(s) is unaware that it has performed computation for the benefit of the other computer. inria. fr [3] http:/ / researchers. under disguise of a standard communications session. in practice packets would probably have to be retransmitted occasionally when real checksum errors and network problems occur.
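The checksum mechanism at the heart of the scheme can be sketched briefly. The Python code below implements the 16-bit ones'-complement checksum used by TCP/IP and shows the accept/drop behaviour the parasite exploits: only packets whose data matches the checksum the parasite forced into the header provoke a reply. In the original paper the target checksum is derived from the structure of the Boolean formula itself; here it is simply precomputed for demonstration, so this is an illustration of the mechanics rather than a faithful reproduction of the paper's encoding.

def inet_checksum(words):
    """16-bit ones'-complement checksum, as used by TCP/IP headers."""
    total = sum(w & 0xFFFF for w in words)
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return (~total) & 0xFFFF

def receiver_accepts(data_words, declared_checksum):
    # The target verifies the checksum as part of normal packet handling;
    # packets that fail the check are silently dropped, valid ones get a reply.
    return inet_checksum(data_words) == declared_checksum

if __name__ == "__main__":
    # The parasite forces the checksum field to the value a correct candidate
    # would carry, then sends one packet per candidate assignment.
    TARGET = inet_checksum([1, 0, 1])        # precomputed here for the demo only
    for candidate in ([0, 0, 1], [1, 0, 1], [1, 1, 1]):
        verdict = "reply" if receiver_accepts(candidate, TARGET) else "dropped"
        print(candidate, verdict)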

References
1. Barabasi et al., Parasitic computing. Nature, 412: 894-897 (2001).

External links
• http://www.nd.edu/~parasite
• http://www.szene.ch/parasit/

PlanetSim

PlanetSim is an object oriented simulation framework for overlay networks and services. This framework presents a layered and modular architecture with well defined hotspots documented using classical design patterns. (Figure: PlanetSim logo.)

PlanetSim also aims to enable a smooth transition from simulation code to experimentation code running in the Internet. Because of this, we provide wrapper code that takes care of network communication and permits us to run the same code in network testbeds such as PlanetLab. This enables complete transparency to services running either against the simulator or the network.

In PlanetSim, developers can work at two main levels: creating and testing new overlay algorithms like Chord or Pastry, or creating and testing new services (DHT, CAST, DOLR, etc.) on top of existing overlays. To validate the utility of our approach, we have implemented two overlays (Chord and Symphony) and a variety of services like CAST, DHT, and object middleware.

PlanetSim has been developed in the Java language to reduce complexity and smooth the learning curve of the framework. We have however profiled and optimised the code to enable scalable simulations in reasonable time, and we have shown that PlanetSim reproduces the measures of these environments and is also efficient in its network implementation.

PlanetSim Architecture

PlanetSim's architecture comprises three main extension layers constructed one atop another. Applications are built in the upper layer using the standard Common API façade; distributed services in the simulator use the Common API for Structured Overlays. This façade is built on the routing services offered by the underlying overlay layer. Moreover, the overlay layer obtains proximity information about other nodes by asking the Network layer. The Simulator dictates the overall life cycle of the framework by calling the appropriate methods in the overlay's Node and obtaining routing information to dispatch messages through the Network. (Figure: PlanetSim layered architecture.)
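The life cycle just described — a simulator driving Node objects and dispatching their messages through a Network layer — can be sketched in a few lines. The Python sketch below only illustrates the layering idea; PlanetSim itself is written in Java and its actual class names and interfaces differ.

# A toy round-based simulator: the simulator asks each node for outgoing
# messages and the network delivers them, mirroring the Node/Network split.
class Network:
    def __init__(self):
        self.nodes = {}
    def register(self, node):
        self.nodes[node.node_id] = node
    def dispatch(self, dest, message):
        self.nodes[dest].receive(message)

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.outbox, self.inbox = [], []
    def send(self, dest, payload):
        self.outbox.append((dest, payload))
    def receive(self, message):
        self.inbox.append(message)

def run_simulation(network, rounds):
    for _ in range(rounds):
        for node in network.nodes.values():
            while node.outbox:
                dest, payload = node.outbox.pop(0)
                network.dispatch(dest, (node.node_id, payload))

if __name__ == "__main__":
    net = Network()
    a, b = Node("a"), Node("b")
    net.register(a)
    net.register(b)
    a.send("b", "hello")
    run_simulation(net, rounds=1)
    print(b.inbox)    # [('a', 'hello')]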

This site holds the latest release and collaborations. Graphical Results Currently the PlanetSim can show the network topology as a GML or Pajek outputs. whose node Ids are randomly built. included into the current PlanetSim distribution. Workshop on Software Engineering and Middleware (SEM 2004). not included into the current PlanetSim distribution. and Robert Rallo. Acceptance Rate: 34%. Lecture Notes in Computer Science (LNCS). ISBN 3-540-25328-9. Austria. pp. Carles Pairot. Rubén Mondéjar. This output is obtained loading the output file into the yEd graph editor. PlanetSim: A New Overlay Network Simulation Framework. Random 1000-node Chord network Symphony A Symphony network with 1000 nodes. Volume 3437.PlanetSim 118 Publications 2005 • Pedro García. whose node Ids are randomly built. Jordi Pujol. External links • PlanetSim official website [3] • PlanetSim at SourceForge. Helio Tejedor. March 2005. Jordi Pujol. Linz. Rubén Mondéjar. ISBN 3-902457-02-3. Carles Pairot. September 2004. SEM 2004. Proceedings of the 19th IEEE International Conference on Automated Software Engineering (ASE 2004). PlanetSim: A New [1] Overlay Network Simulation Framework . Revised Selected Papers. Software Engineering and Middleware. Austria. Helio Tejedor. See these examples: Chord A Chord network with 1000 nodes.net [4]. and Robert Rallo. Linz. 123-137. This output is obtained loading the output file into the Pajek graph editor (only Windows version). ISSN 0302-9743. Random 1000-node Symphony network . pdf [2] 2004 • Pedro García.

. 1007/ 11407386_10 http:/ / planet. springerlink. com/ index/ 10. urv. urv. The advantage of portable objects is that they are easy to use and very expressive.PlanetSim 119 References [1] [2] [3] [4] http:/ / www. es/ planetsim/ planetsim. pdf http:/ / planet. Detractors cite this as a fault. irrespective of operating system or computer architecture. net/ projects/ planetsim/ Portable object (computing) In distributed programming. es/ planetsim/ http:/ / sourceforge. a portable object is an object which can be accessed through a normal method call while possibly residing in memory on another computer. as naïve programmers will not expect network-related errors or the unbounded nondeterminism associated with large networks. allowing programmers to be completely unaware that objects reside in other locations. It is portable in the sense that it moves from machine to machine. This mobility is the end goal of many remote procedure call systems.

Versions up to 2. the Redis data model is a dictionary where keys are mapped to values. Scala. key-value data store. Erlang. development of Redis is sponsored by VMware[1] [2] . R. . union. io/ Redis is an open-source. Data model In its outer layer. 2011 Development status Active Written in Operating system Available in Type License Website ANSI C Cross-platform English Document-oriented database BSD http:/ / redis.12 / June 12. C#. Persistence is reached in two different ways: One is called snapshotting. It is written in ANSI C. JavaScript (both client and serverside). Java. Common Lisp.1 the safer alternative is an append-only file (a journal) that is written as operations modifying the dataset in memory are processed. As of 15 March 2010. persistent. Persistence Redis typically holds the whole dataset in RAM. Go. One of the main differences between Redis and other structured storage systems is that values are not limited to strings. Clojure. Redis supports high level atomic server side operations like intersection. the following abstract data types are supported: • • • • Lists of strings Sets of strings (collections of non-repeating unsorted elements) Sorted sets of strings (collections of non-repeating elements ordered by a floating-point number called score) Hashes where keys are strings and values are either strings or integers The type of a value determines what operations (called commands) are available for the value itself. Python. Supported languages or language bindings include C. and difference between sets and sorting of lists. Objective-C.Redis (data store) 120 Redis (data store) Redis Developer(s) Initial release Stable release Salvatore Sanfilippo 2009 2. Ruby.2. and Tcl. Lua. sets and sorted sets. Perl. Haskell. Since version 1. Redis is able to rewrite the append-only file in the background in order to avoid an indefinite growth of the journal. networked. and is a semi-persistent durability mode where the dataset is asynchronously transferred from memory to disk from time to time. in-memory. PHP. journaled. In addition to strings.4 could be configured to use virtual memory[3] but this is now deprecated. C++.
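A brief usage sketch of the data types listed above, assuming a Redis server on localhost and the commonly used redis-py client (the client library and connection details are assumptions; the operations mirror Redis's SET/GET, RPUSH/LRANGE and SADD/SINTER commands).

import redis  # redis-py client; a local server on the default port is assumed

r = redis.Redis(host="localhost", port=6379, db=0)

# Plain string key/value.
r.set("greeting", "hello")
print(r.get("greeting"))                 # b'hello'

# List of strings.
r.rpush("tasks", "parse", "index", "serve")
print(r.lrange("tasks", 0, -1))          # [b'parse', b'index', b'serve']

# Sets with a server-side intersection, one of the atomic operations
# mentioned above.
r.sadd("editors", "alice", "bob")
r.sadd("admins", "bob", "carol")
print(r.sinter("editors", "admins"))     # {b'bob'}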

paperplanes. html [8] http:/ / www. and Slicehost (http:/ / porteightyeight. html) [3] Redis documentation "Virtual Memory" (http:/ / redis. Replication is useful for read (but not write) scalability or data redundancy. com/ post/ vmware-the-new-redis-home. com/ p/ redis/ wiki/ ReplicationHowto [5] "FAQ" (http:/ / redis. io/ topics/ faq). Flexiscale. [5] There is no notable speed difference between write and read operations.Redis (data store) 121 Replication Redis supports master-slave replication. [6] A. vmware. html [9] http:/ / nosqlberlin.[6] References • Jeremy Zawodny. This allows Redis to implement a single-rooted replication tree. html) [2] VMWare: The Console: VMware hires key developer for Redis (http:/ / blogs. Slides [10] for the Redis presentation. August 31. Linux Magazine. com/ cache/ 7496/ 1. so a client of a slave may SUBSCRIBE to a channel and receive a full feed of messages PUBLISHed to the master. permitting intentional and unintentional inconsistency between instances. 2011. Data from any Redis server can replicate to any number of slaves. redis.io/) • Audio Interview with Salvatore Sanfillipo on The Changelog podcast (http://thechangelog. de/ slides/ NoSQLBerlin-Redis. accessed January 18. h-online. pdf [10] http:/ / www. . Happenings: NoSQL Conference. de/ 2009/ 10/ 27/ theres_something_about_redis. linux-mag. The H. com/ presentations/ newport-evolving-key-value-programming-model External links • Official Redis project page (http://redis. • Billy Newport (IBM): "Evolving the Key/Value Programming Model to a Higher Level [11]" Qcon Conference 2009 San Francisco. io/ topics/ virtual-memory). html [11] http:/ / www. [1] VMware: the new Redis home (http:/ / antirez. A slave may be a master to another slave. com/ 2009/ 11/ 09/ redis-benchmarking-on-amazon-ec2-flexiscale-and-slicehost/ )" [7] http:/ / www.io.com/post/ 2801342864/episode-0-4-5-redis-with-salvatore-sanfilippo/) • Extensive Redis tutorial with real use-cases by Simon WIllison (http://simonwillison. com/ open/ features/ Happenings-NoSQL-Conference-Berlin-843597. Summary . Redis slaves are writable. infoq. Redis: Lightweight key/value Store That Goes the Extra Mile [7]. google. 2009 [8] [9] • Isabel Drost and Jan Lehnard (29 October 2009). anywhere up the replication tree. The Publish/Subscribe feature is fully implemented.net/static/2010/ redis-tutorial/) . [4] http:/ / code.[4] Performance The in-memory nature of Redis allows it to perform extremely well compared to database systems that write every change to disk before considering a transaction committed. Charnock: " Redis Benchmarking on Amazon EC2. com/ console/ 2010/ 03/ vmware-hires-key-developer-for-redis. Berlin .

rcenvironment. org/ The Remote Component Environment (RCE) is an all-purpose.7. nohuddleoffense. a privilege management. de [2] http:/ / www. References [1] http:/ / www. It supports and integrates well known middleware solutions like the GlobusToolkit toolkit and UNICORE and abstractions layers like Hibernate_(Java). Since it has been open sourced the name changed to Remote Component Environment[2] . de/ 2009/ 09/ 19/ remote-component-environment/ External links • Official RCE website (http://www.Remote Component Environment 122 Remote Component Environment Remote Component Environment (RCE) (Was: Reconfigurable Computing Environment) Stable release Written in 1.rcenvironment. Multi-purpose Problem Solving Environment Eclipse Public License http:/ / www. 2010 Java and Python Operating system Cross-platform Type License Website Integration platform.de) • DLR RCE product site (in German) (http://www. Is is a plug-in based system for application integration written in Java on top of the Eclipse framework. RCE enables the developers of integrated applications to concentrate on application-specific logic and to let the different applications interact by embedding them into one unified environment. clusters). sesis.0 / July 20. distributed platform for the integration of applications.de/sc/produkte/rce) . Previously the platform was known by Reconfigurable Computing Environment. RCE provides integrated applications access to general-purpose software components like a workflow engine. or an interface to external compute and storage resources (Grid.dlr. Development of the RCE platform took place in the SESIS [1] project.

RM-ODP has four fundamental elements: • • • • an object modelling approach to system specification.have recently been provided with a solid mathematical foundation in category theory. also named ITU-T Rec. platform and technology independence. X. possibly under different names. davidpratten. which provides five generic and complementary viewpoints current distributed processing on the system and its environment. in the works of Friedrich Hayek). as far as possible. interworking. the definition of a system infrastructure providing distribution transparencies for system applications. RM-ODP. Some of these concepts -such as abstraction.901-X. External links • HyperText Computer Blog [2] • Request Based Distributed Computing Blog [1] References [1] http:/ / www.[1] Overview The RM-ODP is a reference model based on precise concepts derived from The RM-ODP view model. It supports distribution. the International Electrotechnical Commission (IEC) and the Telecommunication Standardization Sector (ITU-T) . and emergence -. Many RM-ODP concepts. have been around for a long time and have been rigorously described and explained in exact philosophy (for example. together with an enterprise architecture framework for the specification of ODP systems.Request Based Distributed Computing 123 Request Based Distributed Computing Request Based Distributed Computing (RBDC) is a term that refers to the distributed computing paradigm underlying the HyperText Computer. com/ 2008/ 01/ 07/ request-based-distributed-computing-a-rough-sketch/ RM-ODP Reference Model of Open Distributed Processing (RM-ODP) is a reference model in computer science. is a joint effort by the International Organization for Standardization (ISO).904 and ISO/IEC 10746. composition. and a framework for assessing system conformance. which provides a co-ordinating framework for the standardization of open distributed processing (ODP). . the specification of a system in terms of separate but interrelated viewpoint specifications. developments and. on the use of formal description techniques for specification of the architecture. in the works of Mario Bunge) and in systems thinking (for example. and portability.

The concept of RM-ODP viewpoints framework. DoDAF and. In only 18 pages. Foundations : Contains the definition of the concepts and analytical framework for normalized description of (arbitrary) distributed processing systems. subdivisions of the specification of a whole system. Examples include the "4+1" view model. Viewpoints modeling and the RM-ODP framework Most complex system specifications are so extensive that no single individual can fully comprehend all aspects of the specifications. 124 History Much of the preparatory work that led into the adoption of RM-ODP as an ISO standard was carried out by the Advanced Networked Systems Architecture (ANSA) project. It contains explanatory material on how the RM-ODP is to be interpreted and applied by its users. Parts 1 and 4 were adopted in 1998. who may include standard writers and architects of ODP systems. divide the design activity into several areas of concerns. Parts 2 and 3 of the RM-ODP were eventually adopted as ISO standards in 1996. each viewpoint substantially uses the same foundational concepts . established to bring together those particular pieces of information relevant to some particular area of concern during the analysis or design of the system. This recommendation also defines RM-ODP viewpoints. 4. precise and concise way. Architectural Semantics[9] : Contains a formalization of the ODP modeling concepts by interpreting many concepts in terms of the constructs of the different standardized formal description techniques. [7] 2. These are the constraints to which ODP standards must conform. RM-ODP Topics RM-ODP standards RM-ODP consists of four basic ITU-T Recommendations and ISO/IEC International Standards:[2] [3] [4] [5] 1. This ran from 1984 until 1998 under the leadership of Andrew Herbert (now MD of Microsoft Research in Cambridge). giving scoping. is to provide separate viewpoints into the specification of a given complex system. Overview[6] : Contains a motivational overview of ODP. this standard sets the basics of the whole model in a clear. key items in each are identified as related to items in the other viewpoints. Viewpoint modeling has become an effective approach for dealing with the inherent complexity of large distributed systems. Architecture[8] : Contains the specification of the required characteristics that qualify distributed processing as open. established to bring together those particular pieces of information relevant to some particular area of concern. RM-ODP. the Zachman Framework. These viewpoints each satisfy an audience with interest in a particular set of aspects of the system. Furthermore. justification and explanation of key concepts. the viewpoints are not completely independent. Moreover.RM-ODP The RM-ODP family of recommendations and international standards defines a system of interrelated essential concepts necessary to specify open distributed processing systems and provides a well-developed enterprise architecture framework for structuring the specifications for any large-scale systems including software systems. 3. and an outline of the ODP architecture. A viewpoint is a subdivision of the specification of a complete system. of course. as described in IEEE 1471. Associated with each viewpoint is a viewpoint language that optimizes the vocabulary and presentation for the audience of that viewpoint. TOGAF. A business executive will ask different questions of a system make-up than would a system implementer. Although separately specified. 
It introduces the principles of conformance to ODP standards and the way in which they are applied. we all have different interests in a given system and different reasons for examining the system's specifications. Current software architectural practices. each one focusing on a specific aspect of the system. and involved a number of major computing and telecommunication companies. therefore.

one for each viewpoint language and one to express the correspondences between viewpoints. which focuses on the semantics of the information and the information processing performed. to allow UML modelers to use the RM-ODP concepts and mechanisms to structure their large UML system specifications according to a mature and standard proposal. • The computational viewpoint. These approaches were consciously defined in a notation. which focuses on the choice of technology of the system. This [10] ) defines use of the Unified Modeling Language 2 (UML 2. It describes the functionality provided by the system and its functional decomposition. This lack of precise notations for expressing the different models involved in a multi-viewpoint specification of a system is a common feature for most enterprise architectural approaches. which focuses on the purpose. . This adds to the cost of adopting the use of UML for system specification. It defines a set of UML Profiles. ISO/IEC and the ITU-T started a joint project in 2004: "ITU-T Rec. this makes more difficult. The viewpoint languages defined in the reference model are abstract languages in the sense that they define what concepts should be used. functionality and presentation of information. hampers communication between system developers and makes it difficult to relate or merge system specifications where there is a need to integrate IT systems. the "4+1" model. It describes the distribution of processing performed by the system to manage the information and provide the functionality. • The information viewpoint. the formal analysis of the specifications produced.906|ISO/IEC 19793: Information technology . the viewpoints are sufficiently independent to simplify reasoning about the complete specification. It describes the business requirements and how to meet them. X. and an approach for structuring them according to the RM-ODP principles. The mutual consistency among the viewpoints is ensured by the architecture defined by RM-ODP. and the use of a common object model provides the glue that binds them all together.Open distributed processing . Although the ODP reference model provides abstract languages for the relevant concepts. or the RM-ODP. including the Zachman Framework.RM-ODP (defined in Part 2 of RM-ODP). However. The purpose of "UML4ODP" to allow ODP modelers to use the UML notation for expressing their ODP specifications in a standard graphical way. However. the development of industrial tools for modeling the viewpoint specifications. the RM-ODP framework provides five generic and complementary viewpoints on the system and its environment: • The enterprise viewpoint. In order to address these issues. scope and policies for the system. for expressing the specifications of open distributed systems in terms of the viewpoint specifications defined by the RM-ODP. • The technology viewpoint. document (usually referred to as UML4ODP ISO/IEC 19505). More specifically. • The engineering viewpoint.and representation-neutral manner to increase their use and flexibility. However. It describes the information managed by the system and the structure and content type of the supporting data. which focuses on the mechanisms and functions required to support distributed interactions between objects in the system. and the possible derivation of implementations from the system specifications. and to allow UML tools to be used to process viewpoint specifications. it does not prescribe particular notations to be used in the individual viewpoints. 
which enables distribution through functional decomposition on the system into objects which interact at interfaces. there is no widely agreed approach to the structuring of such specifications. thus facilitating the software design process and the enterprise architecture specification of large software systems. not how they should be represented. 125 RM-ODP and UML Currently there is growing interest in the use of UML for system modelling.Use of UML for ODP system specifications". It describes the technologies chosen to provide the processing. among other things.

ITU-T Rec. X.906 | ISO/IEC 19793 also enables the seamless integration of the RM-ODP enterprise architecture framework with the Model-Driven Architecture (MDA) initiative from the OMG, and with service-oriented architecture (SOA).

Applications

In addition, there are several projects that have used or currently use RM-ODP for effectively structuring their systems specifications:
• The Reference Architecture for Space Data Systems (RASDS),[12] from the Consultative Committee for Space Data Systems.
• The Interoperability Technology Association for Information Processing (INTAP), Japan.[13]
• The Synapses European project.[14]
• The COMBINE project.[11]

Notes and references

[1] A complete and updated list of references to publications related to RM-ODP (books, journal articles, conference papers, etc.) is available at the RM-ODP resource site (http://www.rm-odp.net).
[2] In the same series as the RM-ODP are a number of other standards and recommendations for the specification and development of open and distributed systems, for which RM-ODP provides a standardization framework: ITU-T Rec. X.910 | ISO/IEC 14771:1999, Naming framework; ITU-T Rec. X.911 | ISO/IEC 15414:2002, Reference model - Enterprise language; ITU-T Rec. X.920 | ISO/IEC 14750:1999, Interface Definition Language; ITU-T Rec. X.930 | ISO/IEC 14753:1999, Interface references and binding; ITU-T Rec. X.931 | ISO/IEC 14752:2000, Protocol support for computational interactions; ITU-T Rec. X.950 | ISO/IEC 13235-1:1998, Trading function: Specification; ITU-T Rec. X.952 | ISO/IEC 13235-3:1998, Provision of Trading Function using OSI directory service; ITU-T Rec. X.960 | ISO/IEC 14769:2001, Type repository function; ISO/IEC 19500-2:2003, General Inter-ORB Protocol (GIOP)/Internet Inter-ORB Protocol (IIOP); etc.
[3] Copies of the RM-ODP family of standards can be obtained either from ISO (http://www.iso.ch) or from ITU-T (http://www.itu.int). Parts 1 to 4 of the RM-ODP are available for free download from ISO (http://isotc.iso.ch/livelink/livelink/fetch/2000/2489/Ittf_Home/PubliclyAvailableStandards.htm), made available in keeping with a resolution of the ISO council. All ODP-related ITU-T Recommendations, including the X.9xx series, are freely available from the ITU-T (http://www.itu.int/rec/T-REC-X/en).
[4] There is also a very useful hyperlinked version (http://www.joaquin.net/ODP) of Parts 2 and 3 of the RM-ODP, together with an index to the Reference Model. The Table of Contents and Index were prepared by Lovelace Computing and are being made available by Lovelace Computing as a service to the standards community.
[5] Some resources related to the current version of ITU-T Rec. X.906 | ISO/IEC 19793 "Use of UML for ODP systems specifications" (http://www.rm-odp.net/files/resources/LON-040_UML4ODP_IS/LON-040_UML4ODP_IS.pdf) are also available from the RM-ODP resource site (http://www.rm-odp.net). They include the UML Profiles of the five ODP viewpoints, the viewpoint metamodels, the GIF files for the ODP-specific icons, etc.
[6] ISO/IEC 10746-1 | ITU-T Rec. X.901 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/c020696_ISO_IEC_10746-1_1998(E).zip)
[7] ISO/IEC 10746-2 | ITU-T Rec. X.902 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/s018836_ISO_IEC_10746-2_1996(E).zip)
[8] ISO/IEC 10746-3 | ITU-T Rec. X.903 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/s020697_ISO_IEC_10746-3_1996(E).zip)
[9] ISO/IEC 10746-4 | ITU-T Rec. X.904 (http://www.iso.org/iso/en/ittf/PubliclyAvailableStandards/c020698_ISO_IEC_10746-4_1998(E).zip)
[10] http://www.joaquin.net/ODP/DIS_15414_X.911.pdf
[11] COMBINE (http://www.opengroup.org/combine/overview.htm)
[12] Reference Architecture for Space Data Systems (RASDS) (http://public.ccsds.org/review/default.aspx)
[13] Interoperability Technology Association for Information Processing (INTAP) (http://www.net.intap.or.jp/e)
[14] The Synapses Project: a three-year project funded under the EU 4th Framework Health Telematics Programme (http://www.cs.tcd.ie/synapses/public/)

External links

• RM-ODP Resource site (http://www.rm-odp.net/)
• Open Distributed Processing - Reference Model (http://www.joaquin.net/ODP/)
• RM-ODP information at LAMS (http://lamswww.epfl.ch/reference/rm-odp), Swiss Federal Institute of Technology, Lausanne (EPFL), Switzerland.
• Computing Laboratory (http://www.cs.ukc.ac.uk/), University of Kent, Canterbury, UK.
• FORMOSA (http://www.cs.stir.ac.uk/~kjt/research/formosa.html) (Formalisation of ODP Systems Architecture), University of Stirling, UK.
• ILR (http://www.infres.enst.fr/recherche/ILR/rapport.html), Networks and Computer Science Department of ENST, Paris, France.
• Systèmes Répartis et Coopératifs (http://www-src.lip6.fr/), UPMC, Paris, France.
• Distributed Systems Technology Center (http://archive.dstc.edu.au/AU/research_news/), Australia.
• Official Record of the ANSA project (http://www.ansa.co.uk/)

Semantic Web Data Space

A Semantic Web Data Space is a container for domain-specific portable data, which is provided in human and/or machine friendly structures and is linked with other data across spaces and domains. This has the benefit of being a useful point for querying about information across domains, and assists the development of a Web of Data; this supports the work of the Linked Data project, which is part of the Semantic Web effort.

The underlying paradigm is quite new; however, it brings together ideas and technologies from various sources:
• The Semantic Web, Linked Data, and the Linked Data project
• Object-oriented databases
• Data portability
• Web 2.0, Semantic Web Data Spaces, and content management systems
• Ontologies and categorization

The approach can be applied to both Web-based systems and desktop-based systems. Data in a Data Space can be referenced by an identifier, and thus can be viewed in an object-oriented fashion. This means that an object in a data space should be movable and should also have the ability to be referenced using an identifier such as a Uniform Resource Identifier. A Data Space should be fully supportive of data portability such as that advocated by the DataPortability project. Data in Data Spaces are linked across spaces and domains to enhance the meaning of internal data.

Exemplary Semantic Web Data Space implementation

• OpenLink Data Spaces, a distributed collaborative data space system implemented as a social networking service and content management system. It is built on top of the OpenLink Software Virtuoso Universal Server.

Related web technologies

• Uniform Resource Identifiers for object identifiers
• Resource Description Framework for object and data space descriptions
• SPARQL for querying about objects across domains

References

• H. Zhuge, The Web Resource Space Model, Springer, 2008.
• H. Zhuge, Y. Xing and P. Shi, Resource Space Model, OWL and Database: Mapping and Integration, ACM Transactions on Internet Technology, 8/4, 2008.
• H. Zhuge, Resource space model, its design method and applications, Journal of Systems and Software, 72(1)(2004)71-81.

Service-oriented distributed applications

A RESTful programming architecture that allows some services to be run on the client and some on the server. For example, a product can first be released as a browser application, with functionality then moved module by module to the client application.

External links

• Novell excerpt on Web Services Frameworks [1]

References

[1] http://developer.novell.com/wiki/index.php?title=MonoWebFrameworks&redirect=no

Shared memory

In computing, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Depending on context, programs may run on a single processor or on multiple separate processors. Using memory for communication inside a single program, for example among its multiple threads, is generally not referred to as shared memory.

In hardware

In computer hardware, shared memory refers to a (typically) large block of random access memory that can be accessed by several different central processing units (CPUs) in a multiple-processor computer system. A shared memory system is relatively easy to program since all processors share a single view of data and the communication between processors can be as fast as memory accesses to the same location. The issue with shared memory systems is that many CPUs need fast access to memory and will likely cache memory, which has two complications:

• CPU-to-memory connection becomes a bottleneck. Shared memory computers cannot scale very well; most of them have ten or fewer processors.
• Cache coherence: whenever one cache is updated with information that may be used by other processors, the change needs to be reflected to the other processors, otherwise the different processors will be working with incoherent data (see cache coherence and memory coherence).

Such coherence protocols can, when they work well, provide extremely high-performance access to shared information between multiple processors. On the other hand, they can sometimes become overloaded and become a bottleneck to performance. The alternatives to shared memory are distributed memory and distributed shared memory, each having a similar set of issues. See also Non-Uniform Memory Access.

In software

In computer software, shared memory is either
• a method of inter-process communication (IPC), i.e. a way of exchanging data between programs running at the same time, where one process creates an area in RAM which other processes can access; or
• a method of conserving memory space by directing accesses to what would ordinarily be copies of a piece of data to a single instance instead, by using virtual memory mappings or with explicit support of the program in question. This is most often used for shared libraries and for XIP.

Since both processes can access the shared memory area like regular working memory, this is a very fast way of communication (as opposed to other mechanisms of IPC such as named pipes, Unix domain sockets or CORBA). On the other hand, it is less powerful, as for example the communicating processes must be running on the same machine (whereas other IPC methods can use a computer network), and care must be taken to avoid issues if processes sharing memory are running on separate CPUs and the underlying architecture is not cache coherent.

IPC by shared memory is used for example to transfer images between the application and the X server on Unix systems, or inside the IStream object returned by CoMarshalInterThreadInterfaceInStream in the COM libraries under Windows. Dynamic libraries are generally held in memory once and mapped to multiple processes, and only pages that had to be customized for the individual process (because a symbol resolved differently there) are duplicated, usually with a mechanism that transparently copies the page when a write is attempted, and then lets the write succeed on the private copy.
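As a concrete illustration of IPC through shared memory (using the POSIX shm_open/mmap interface discussed under "Specific implementations" below), the following minimal C++ sketch creates a named region in one process and writes a string into it; a second process mapping the same name would see the data directly. The region name and message are invented for the example, and error handling is abbreviated.

    // Minimal sketch of POSIX shared-memory IPC (writer side).
    // Assumes a POSIX system; link with -lrt on some platforms.
    #include <fcntl.h>      // O_CREAT, O_RDWR
    #include <sys/mman.h>   // shm_open, mmap, munmap
    #include <unistd.h>     // ftruncate, close
    #include <cstring>
    #include <cstdio>

    int main() {
        const char* name = "/demo_region";          // illustrative name
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd == -1) { perror("shm_open"); return 1; }
        ftruncate(fd, 4096);                        // size the region
        void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        std::strcpy(static_cast<char*>(p), "hello from writer");
        // A reader process would shm_open(name, O_RDWR, 0600), mmap the same
        // region and see the string without copying it through the kernel.
        munmap(p, 4096);
        close(fd);
        // shm_unlink(name) removes the region once it is no longer needed.
        return 0;
    }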

Specific implementations

POSIX provides a standardized API for using shared memory, POSIX Shared Memory. This uses the function shm_open from sys/mman.h.[1] POSIX interprocess communication (part of the POSIX:XSI Extension) includes the shared-memory functions shmat, shmctl, shmdt and shmget. Unix System V provides an API for shared memory as well; this uses shmget from sys/shm.h. BSD systems provide "anonymous mapped memory" which can be used by several processes.

Recent 2.6 Linux kernel builds have started to offer /dev/shm as shared memory in the form of a RAM disk, more specifically as a world-writable directory that is stored in memory. Both the Fedora and Ubuntu distributions include it by default. /dev/shm support is completely optional within the kernel configuration file.

External links

• Shared Memory Interface [2]
• Shared Memory Library FAQ [3] by Márcio Serolli Pinho
• Article "IPC: Shared Memory" [4] by Dave Marshall
• shared memory facility [5] from the Single UNIX Specification
• shm_open [6] - POSIX
• shmop [7] - documentation from SunOS 5.9
• CreateSharedMemory function [8] from Win32-SDK
• Functions in PHP-API [9]
• Paper "A C++ Pooled, Shared Memory Allocator For The Standard Template Library" [10] by Marc Ronell
• Citations from CiteSeer [11]
• Boost.Interprocess C++ Library [12]

References

[1] Robbins, Kay A.; Robbins, Steven (2003). UNIX systems programming: communication, concurrency, and threads (http://books.google.com/books?id=tdsZHyH9bQEC) (2 ed.). Prentice Hall PTR. p. 512. ISBN 9780130424112. Retrieved 2011-05-13. "The POSIX interprocess communication (IPC) is part of the POSIX:XSI Extension and has its origin in UNIX System V interprocess communication."
[2] http://www.lfbs.rwth-aachen.de/content/smi
[3] http://www.inf.pucrs.br/~pinho/shared_memory_library.htm
[4] http://www.cs.cf.ac.uk/Dave/C/node27.html
[5] http://www.opengroup.org/onlinepubs/007908799/xsh/sysshm.html
[6] http://www.opengroup.org/onlinepubs/007908799/xsh/shm_open.html
[7] http://docs.sun.com/app/docs/doc/817-0691/6mgfmmdt3?a=view
[8] http://msdn2.microsoft.com/en-us/library/aa374778.aspx
[9] http://www.php.net/manual/en/ref.shmop.php
[10] http://allocator.sourceforge.net/rtlinux2003.pdf
[11] http://citeseer.ist.psu.edu/cs?q=shared+memory+library
[12] http://www.boost.org/doc/libs/1_36_0/doc/html/interprocess.html

Smart variables

SmartVariables is a term introduced in 1998 referring to a design pattern that merges networking and distributed object technology with the goal of reducing complexity by transparently sharing information at the working program variable level. SmartVariables-style programming interfaces emulate simple "network shared memory." The design emphasis is API simplicity for systems needing to exchange information. Sharing and update behaviors do not need to be explicitly programmed: because SmartVariables containers "know" who have copies of their data, a change automatically propagates change events across the network into other running processes working with that data.[2] Applications do not poll for content changes, as events get processed asynchronously and working program variables simply receive new content; however, "callbacks" can be attached that execute when a "named" object's content changes.[1] The concept has some similarities to that of stored procedures and triggers in database systems, where a change to one item can set off other changes in the database.

Programming basics

This C++ example is from the GPL open-source SmartVariables implementation at SmartVariables.com. SmartVariables attach an email-like "name" to each container or list. Imagine an environment with three networked computers named Alice, Bob and Charlie. To begin, our program running on Alice will function to continuously print out the contents of a remote container object named "greeting@Charlie". Here is the code for Alice:

    Var greeting;
    greeting.Name( "greeting@Charlie" );   // attach to and subscribe to the remote object
    while (1) {
        cout << "greeting=" << greeting << endl;
        // note that 'greeting' can change values here
    }

The code on Alice appears to be a "tight loop," with no opportunity for the object to be modified. It does change, however, because SmartVariables propagate themselves into process-level code automatically when the variable changes value. Next, we run another program on machine Bob that simply changes the value of the remote "greeting@Charlie" object to be the string "Hello, World!". Here is the code for Bob:

    Var greeting = "Hello, World!";
    greeting.Name( "greeting@Charlie" );   // modify all copies, everywhere

When the above program on machine Bob gets executed, it transparently connects to Charlie and modifies the "greeting" object to have its new value: "Hello, World!". The environment transparently propagates the change to Alice, which means the program still looping on Alice will now begin printing its new value of "Hello, World!". Note that Alice's display code is in a tight loop, and there is no code that explicitly connects to machine "Charlie" to retrieve the "greeting" object or any changes made to it; modifications to the "greeting@Charlie" object become automatically reflected by Alice's program output.

References

[1] Foote, Brian; Joseph Yoder (1998). "Metadata and Active Object-Models" (http://jerry.cs.uiuc.edu/~plop/plop98/final_submissions). Pattern Languages of Programs Conference. — Introduced the concept of "smart variables".
[2] Hounshell, Lee (March 2006) (pdf). Simplifying Web Infrastructure with SmartVariables (http://www.smartvariables.com/doc/DistributedProgramming.pdf). SmartVariables.com. — Refined and extended the concept, using "smart variables" to simplify Grid computing and implement web services, directory, and distributed neural networks.

External links

• Open source commercial implementation (beta) in C++ (http://smartvariables.com)

Stub (distributed computing)

A stub in distributed computing is a piece of code used for converting parameters passed during a Remote Procedure Call (RPC). The main idea of an RPC is to allow a local computer (client) to remotely call procedures on a remote computer (server). The client and server use different address spaces, so parameters used in a function call have to be converted; otherwise the values of those parameters could not be used, because pointers into one computer's memory point to different data on the other machine. The client and server may also use different data representations, even for simple parameters (e.g. big-endian versus little-endian for integers). Stubs are used to perform the conversion of the parameters, so a remote function call looks like a local function call to the remote computer.

A client stub is responsible for conversion (marshalling) of parameters used in a function call and deconversion of results passed from the server after execution of the function. A server stub is responsible for deconversion of parameters passed by the client and conversion of the results after the execution of the function.

A stub can be generated in one of two ways:
1. Manually: The RPC implementer provides a set of translation functions from which a user can construct his or her own stubs. This method is simple to implement and can handle very complex parameter types.
2. Automatically: This is the more commonly used method for stub generation. It uses an interface description language (IDL) to define the interface between client and server. For example, an interface definition has information to indicate whether each argument is input, output or both; only input arguments need to be copied from client to server, and only output elements need to be copied from server to client.

Stub libraries must be installed on the client and server side. A minimal hand-written client stub is sketched below.
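As an illustration of the manual approach above, this sketch shows a hand-written client stub for a hypothetical remote add(int, int) call. The wire format (a procedure number plus two big-endian 32-bit inputs over an already-connected socket) and the helper names are assumptions for the example, not part of any particular RPC system; the matching server stub is not shown.

    // Hand-written client stub for a hypothetical remote int add(int, int).
    // It marshals both input arguments, sends them to the server, and
    // unmarshals the single output value; the caller just sees add().
    #include <arpa/inet.h>   // htonl, ntohl
    #include <sys/socket.h>  // send, recv
    #include <sys/types.h>
    #include <cstdint>
    #include <cstddef>
    #include <stdexcept>

    // Assumed helpers: loop until exactly n bytes have been sent/received.
    static void send_all(int fd, const void* buf, std::size_t n) {
        const char* p = static_cast<const char*>(buf);
        while (n > 0) {
            ssize_t k = ::send(fd, p, n, 0);
            if (k <= 0) throw std::runtime_error("send failed");
            p += k; n -= static_cast<std::size_t>(k);
        }
    }
    static void recv_all(int fd, void* buf, std::size_t n) {
        char* p = static_cast<char*>(buf);
        while (n > 0) {
            ssize_t k = ::recv(fd, p, n, 0);
            if (k <= 0) throw std::runtime_error("recv failed");
            p += k; n -= static_cast<std::size_t>(k);
        }
    }

    int add(int fd, int32_t a, int32_t b) {          // the client stub
        uint32_t msg[3];
        msg[0] = htonl(1);                           // procedure number for "add" (assumed)
        msg[1] = htonl(static_cast<uint32_t>(a));    // marshal input arguments big-endian
        msg[2] = htonl(static_cast<uint32_t>(b));
        send_all(fd, msg, sizeof msg);

        uint32_t reply;
        recv_all(fd, &reply, sizeof reply);          // unmarshal the output value
        return static_cast<int32_t>(ntohl(reply));
    }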

Supercomputer

A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. Supercomputers are used for highly calculation-intensive tasks such as problems in quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion).

Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985-1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by traditional companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience. Currently, Japan's K computer, built by Fujitsu in Kobe, Japan, is the fastest in the world.[1] It is three times faster than the previous holder of that title, the Tianhe-1A supercomputer located in China.

The term supercomputer itself is rather fluid, and the speed of today's supercomputers tends to become typical of tomorrow's ordinary computers. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard; typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massive parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, and FPGAs. Most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.

Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time; often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.

History

The history of supercomputing goes back to the 1960s, when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[2] The CDC 6600, released in 1964, is generally considered the first supercomputer.[3][4]

Cray left CDC in 1972 to form his own company.[5] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became one of the most successful supercomputers in history.[6][7] The Cray-2, released in 1985, was an 8-processor liquid-cooled computer; Fluorinert was pumped through it as it operated. It performed at 1.9 gigaflops and was the world's fastest until 1990.[8]

While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994, with a peak speed of 1.7 gigaflops per processor.[9][10] The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[11][12][13] The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.[14]

[Image: A Cray-1 supercomputer preserved at the Deutsches Museum]

For more historical data see History of supercomputing.

Current research using supercomputers

The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.[15]

Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[16]

In 2011 the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.[17]

This is a recent list of the computers which appeared at the top of the TOP500 list,[18] with the "Peak speed" given as the "Rmax" rating.

Year   Supercomputer        Peak speed (Rmax)   Location
2008   IBM Roadrunner       1.026 PFLOPS        DoE-Los Alamos National Laboratory, New Mexico, USA
2009   Cray Jaguar          1.759 PFLOPS        DoE-Oak Ridge National Laboratory, Tennessee, USA
2010   Tianhe-IA            2.566 PFLOPS        National Supercomputing Center, Tianjin, China
2011   Fujitsu K computer   8.162 PFLOPS        RIKEN, Kobe, Japan

Hardware and software design

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing. As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization and using hardware to address the remaining bottlenecks.

Energy consumption and heat management

[Image: A Blue Gene/L cabinet showing the stacked blades, each holding many processors]

A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts of electricity.[19] The cost to power and cool the system can be significant: e.g. 4 MW at $0.10/kWh is $400 an hour, or about $3.5 million per year.[20] The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. Heat management is a major issue in complex electronic devices, and affects powerful computer systems in various ways.[21][22][23] The packing of thousands of processors together inevitably generates significant amounts of heat density that need to be dealt with. The Cray-2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure.[24] However, the submerged liquid cooling approach was not practical for the multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.[25] In the Blue Gene system IBM deliberately used low power processors to deal with heat density.[26] On the other hand, the IBM Power 775, released in 2011, has closely packed elements that require water cooling.[27]

The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008 IBM's Roadrunner operated at 376 MFLOPS/Watt.[28][29] In November 2010, the Blue Gene/Q reached 1684 MFLOPS/Watt.[30][31] In June 2011 the top 2 spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2097 MFLOPS/W), with the DEGIMA cluster in Nagasaki placing third with 1375 MFLOPS/W.[32] The supercomputing awards for green computing reflect this issue.

Supercomputer challenges, technologies

[Image: An IBM HS20 blade server]

Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1-5 microseconds to send a message between CPUs are typical.

Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.

Technologies developed for supercomputers include:
• Vector processing
• Liquid cooling
• Non-Uniform Memory Access (NUMA)
• Striped disks (the first instance of what was later called RAID)
• Parallel filesystems

Processing techniques

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing instructions for general-purpose computers. Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers; indeed, some graphics cards have the computing power of several teraFLOPS. The applications to which this power can be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU). The current TOP500 list (from May 2010) has 3 supercomputers based on GPGPUs; Nebulae, built by Dawning in China, the number 3 supercomputer,[33] is based on GPGPUs.
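To make the SIMD idea above concrete, here is a minimal sketch (not taken from the article) of the classic SAXPY kernel written with x86 AVX intrinsics; each vector instruction applies the same operation to eight single-precision floats at once. The kernel and its name are purely illustrative, and the code assumes a CPU with AVX support.

    // SAXPY (y = a*x + y) using AVX SIMD intrinsics: one instruction
    // operates on 8 floats at a time. Compile with e.g. -mavx.
    #include <immintrin.h>
    #include <cstddef>

    void saxpy(float a, const float* x, float* y, std::size_t n) {
        __m256 va = _mm256_set1_ps(a);               // broadcast a into all 8 lanes
        std::size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 vx = _mm256_loadu_ps(x + i);      // load 8 elements of x
            __m256 vy = _mm256_loadu_ps(y + i);      // load 8 elements of y
            vy = _mm256_add_ps(_mm256_mul_ps(va, vx), vy);
            _mm256_storeu_ps(y + i, vy);             // store 8 results
        }
        for (; i < n; ++i)                           // scalar tail for leftovers
            y[i] = a * x[i] + y[i];
    }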

Operating systems

Supercomputers today most often use variants of the Linux operating system, as shown by the graph to the right.[34] Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers to this time (unlike high-end mainframes) had vastly different operating systems; the Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. In a similar manner, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of computer systems such as Cray's Unicos, or Linux.

[Image: More than 90% of today's supercomputers run some variant of Linux.[34]]

Programming

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA and OpenCL. An easy programming language for supercomputers remains an open research topic in computer science.

Software tools

Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open-source software solutions such as Beowulf, WareWulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community, which often creates disruptive technology.
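As a small, self-contained illustration of the message-passing style (MPI) mentioned under Programming above, the following sketch has every process compute a partial sum over its own slice of the data and then combines the results on rank 0 with a reduction. It assumes an MPI implementation such as Open MPI or MPICH and is an illustrative example, not code from any particular supercomputer.

    // Minimal MPI example: each process sums a disjoint slice of 1..n and
    // rank 0 gathers the global result with a reduction.
    // Compile with mpicxx, run with e.g. "mpirun -np 4 ./a.out".
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // which process am I?
        MPI_Comm_size(MPI_COMM_WORLD, &size);   // how many processes in total?

        const long n = 1000000;
        long local = 0;
        for (long i = rank + 1; i <= n; i += size)   // every size-th value
            local += i;

        long global = 0;
        MPI_Reduce(&local, &global, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("sum of 1..%ld = %ld (computed by %d processes)\n",
                        n, global, size);

        MPI_Finalize();
        return 0;
    }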

Modern supercomputer architecture

[Image: The CPU architecture share of TOP500 rankings between 1993 and 2009]

Supercomputers today often have a similar top-level architecture consisting of a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, the number of simultaneous instructions per SIMD processor, and the type and number of co-processors. Within this hierarchy we have:

• A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an operating system (OS).
• A multiprocessing computer is a computer, operating under a single instance of an OS and using more than one CPU core, wherein the application-level software is indifferent to the number of CPU cores. The cores share tasks using Symmetric multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
• A SIMD core executes the same instruction on more than one set of data at the same time. The core may be a general-purpose commodity core or a special-purpose vector processor; it may be in a high-performance processor or a low power processor. As of 2007, each core executes several SIMD instructions per nanosecond.
• A co-processor is incapable of executing "standard" code, but with specialized programming can exceed the performance of the multiprocessor by several orders of magnitude for certain applications. Co-processors are often GPGPUs. The ratio of co-processors to general-purpose processors varies dramatically, with each multiprocessor controlling multiple co-processors.

The cores may all be in from one to thousands of multicore processor devices. As of October 2010 the fastest supercomputer in the world is the Tianhe-1A system at the National University of Defense Technology, with more than 21000 processors; it boasts a speed of 2.507 petaflops, over 30% faster than the world's next fastest computer, the Cray XT5 "Jaguar". The benchmark used for measuring TOP500 performance disregards the contribution of co-processors.

[Image: IBM Roadrunner - LANL]

In February 2009, IBM also announced work on "Sequoia," which appears to be a 20 petaflops supercomputer. It will be powered by 1.6 million cores (specific 45-nanometer chips in development) and 1.6 petabytes of memory; this will be equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It will be housed in 96 refrigerators spanning roughly 3000 square feet (280 m2), and is slated for deployment in late 2011.[35][36]

Moore's Law and economies of scale are the dominant factors in supercomputer design. The design concepts that allowed past supercomputers to out-perform desktop machines of the time tended to be gradually incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom

chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production. In addition, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, in particular, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design, which can be programmed to act as one large computer. Moreover, most workloads requiring such a supercomputer in the 1990s can be done on workstations costing less than 4,000 US dollars as of 2010; a current model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion dollar Cray C90 supercomputer used in the early 1990s. Supercomputing is taking a step of increasing density, allowing for desktop supercomputers to become available, offering the computer power that in 1998 required a large room to require less than a desktop footprint.

Special-purpose supercomputers

A special-purpose supercomputer is a high-performance computing device with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer, by some measure; for example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular special set of problems.

Examples of special-purpose supercomputers:
• Belle,[37] Deep Blue,[38] and Hydra,[39] for playing chess
• Reconfigurable computing machines or parts of machines
• GRAPE,[40] for astrophysics and molecular dynamics
• Deep Crack,[41] for breaking the DES cipher
• MDGRAPE-3,[42] for protein structure computation
• D. E. Shaw Research Anton, for simulating molecular dynamics[43]

The fastest supercomputers today

Measuring supercomputer speed

In general, the speed of a supercomputer is measured in "FLOPS" (FLoating point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). "Petascale" supercomputers can process one quadrillion (10^15, or 1000 trillion) FLOPS. Exascale is computing performance in the exaflops range; an exaflop is one quintillion (10^18) FLOPS (one million teraflops).

This measurement is based on a particular benchmark, which does LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.

14 countries account for the vast majority of the world's 500 fastest supercomputers, with over half being located in the United States.
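To see what a FLOPS figure means in practice, the deliberately simple sketch below times a loop with a known number of floating-point operations on an ordinary machine and reports the rate. It is only an illustration of the unit; it is not the LINPACK benchmark used for the TOP500 rankings, and the iteration count and constants are arbitrary choices.

    // Toy FLOPS measurement: divide a known number of floating-point
    // operations by the elapsed time. Real rankings use LINPACK
    // (LU decomposition of a large matrix), not a loop like this.
    #include <chrono>
    #include <cstdio>

    int main() {
        const long n = 200000000;          // 2e8 iterations, 2 flops each
        volatile double x = 1.000000001, s = 0.0;
        auto t0 = std::chrono::steady_clock::now();
        for (long i = 0; i < n; ++i)
            s = s + x * 1.0000001;         // one multiply + one add = 2 flops
        auto t1 = std::chrono::steady_clock::now();
        double secs = std::chrono::duration<double>(t1 - t0).count();
        double flops = 2.0 * n / secs;
        std::printf("~%.2f GFLOPS (1 GFLOPS = 1e9 FLOPS), s=%f\n",
                    flops / 1e9, s);
        return 0;
    }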

The TOP500 list

Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.

Current fastest supercomputer system

The K computer is ranked on the TOP500 list as the fastest supercomputer, at 8.16 petaFLOPS. It consists of 68,544 SPARC64 VIIIfx CPUs, using the Tofu interconnect. It does not use any GPUs or other accelerators, and is one of the most energy-efficient systems on the list.[44]

Opportunistic supercomputing

Opportunistic supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamic simulations.

Examples of opportunistic supercomputing systems

[Image: Example architecture of a grid computing system connecting many personal computers over the internet]

The fastest grid computing system is the distributed computing project Folding@home, which reported 8.8 petaflops of processing power as of May 2011. Of this, 7.1 petaflops are contributed by clients running on various GPUs, 1.8 petaflops come from PlayStation 3 systems, and the rest from various computer systems.[45]

The BOINC platform hosts a number of distributed computing projects. As of May 2011, BOINC recorded a processing power of over 5.5 petaflops through over 480,000 active computers on the network.[46] The most active project (measured by computational power), MilkyWay@home, reports processing power of over 700 teraflops through over 33,000 active computers.[47]

As of May 2011, GIMPS's distributed Mersenne Prime search achieves about 60 teraflops through over 25,000 registered computers.[48] The Internet PrimeNet Server [49] supports GIMPS's grid computing approach, one of the earliest and most successful grid computing projects, since 1997.

Quasi-opportunistic supercomputing

Quasi-opportunistic supercomputing is a form of distributed computing whereby the "super virtual computer" of a large number of networked, geographically disperse computers performs huge processing-power-demanding computing tasks.[50] Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and by using intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault-tolerant message passing libraries and data pre-conditioning.

Examples of quasi-opportunistic supercomputing systems

[Image: The PlayStation 3 Gravity Grid]

The PlayStation 3 Gravity Grid uses a network of 16 machines and exploits the Cell processor for the intended application, which is performing astrophysical simulations of large supermassive black holes capturing smaller compact objects.[51] The Cell processor has a main CPU and 6 floating-point vector processors, giving the machine a net of 16 general-purpose machines and 96 vector processors. This cluster was built in 2007 by Dr. Gaurav Khanna, a professor in the Physics Department of the University of Massachusetts Dartmouth, with support from Sony Computer Entertainment, and is the first PS3 cluster that generated numerical results that were published in scientific research literature.

Other notable computer clusters are the flash mob cluster, the Qoscos Grid and the Beowulf cluster. The flash mob cluster allows the use of any computer in the network, while the Beowulf cluster still requires uniform architecture.

Also a "quasi-supercomputer" is Google's search engine system, with estimated total processing power of between 126 and 316 teraflops as of April 2004.[52] In June 2006 the New York Times estimated that the Googleplex and its server farms contain 450,000 servers.[53] According to 2008 estimates, the processing power of Google's cluster might reach from 20 to 100 petaflops.[54]

Research and development

IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip". Meanwhile, IBM is constructing a 20 PFLOPS supercomputer at Lawrence Livermore National Laboratory, named Sequoia, based on the Blue Gene architecture, which is scheduled to go online in 2011. Other PFLOPS projects include one by Narendra Karmarkar in India,[55] a C-DAC effort targeted for 2010,[56] and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be completed by 2011).[57]

In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 petaflops computer, Pleiades, in 2009, scaling up to 10 PFLOPS by 2012.[58][59] Using the Intel MIC (many integrated cores) architecture, which is Intel's response to GPU systems, SGI plans to achieve a 500 times increase in performance by 2018, to achieve an exaflop.[60] Samples of MIC chips with 32 cores, which combine vector processing units with a standard CPU, have become available.[60]

[Image: Fastest supercomputers: log speed vs. time]

Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10^18, one quintillion FLOPS) in 2019.[61] Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21, one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[61] Such systems might be built around 2030.[62]

Applications of supercomputers

Reilly 2003 ISBN 1573565210 page 65 [8] Parallel computing for real-time signal processing and control by M. Top500. Ishihara. News. com/ books?id=V08bjkJeXkAC& pg=PA83& dq=cdc+ 6600+ 7600+ cray& hl=en& ei=7LMZTozDIInX8gP0xIkM& sa=X& oi=book_result& ct=result& resnum=1& ved=0CCgQ6AEwAA#v=onepage& q=cdc 6600 7600 cray& f=false) [6] Readings in computer architecture by Mark Donald Hill. M. pdf) [24] Parallel computing for real-time signal processing and control by M. netlib. M. O. . washingtonpost. com/ business/ technology/ petaflop-computer-flap-ibm-unplugs-itself-from-supercomputer-project-at-univ-of-illinois/ 2011/ 08/ 08/ gIQAuiFG3I_story. iTnews Australia. org/ benchmark/ top500/ reports/ report94/ main. [19] Nvidia (29 October 2010). Retrieved 2011-07-08. . [66] Brute force code breaking (EFF DES cracker). Pages 233-241. van der Steen. com/ easyir/ customrel. [17] Washington Post August 8. Michio. John A.Supercomputer 142 Decade 1970s 1980s 1990s Uses and computer involved [63] Weather forecasting.11/91. html) [10] N. T. au/ News/ 65619. Architecture and performance of the Hitachi SR2201 massively parallel processor system. Pages 246-254. Result for each list since June 1993" (http:/ / www. January 1997. nvidia. "Directory page for Top500 lists. Yu-Hen Hu 2009 ISBN pages 70-72 [3] History of computing in education by John Impagliazzo. Nuclear Physics B . google. 10-01-2003 doi 10.957772 (http:/ / sss. itnews. com/ books?id=n3Xn7jMx1RYC& pg=PA1489& dq=history+ of+ supercomputer+ cdc+ 6600& hl=en& ei=nt8cTo-RFc2r-gaDiPHLCA& sa=X& oi=book_result& ct=result& resnum=6& ved=0CEkQ6AEwBQ#v=onepage& q=history of supercomputer cdc 6600& f=false) [5] Wisconsin Biographical Dictionary by Caryn Hannan 2008 ISBN 1878592637 pages 83-84 (http:/ / books. aerodynamic research (Cray-1). H. Yasuda. [11] H. Probabilistic analysis. April 1997. Tokhi. Press release. 2011 (http:/ / www. Y. 2011). . the Netherlands.org. 65. Hirose and M.J. Inagami. Iwasaki. Issues 1-2.com. [20] Better Computing Through CPU Cooling by Alexander A. Overview of recent supercomputers. Physics of the Future (New York: Doubleday.592130. html). html). 2003 Making a Case for Efficient Supercomputing in ACM Queue Magazine. O. Volume 1 Issue 7. Kashiyama. Stichting Nationale Computer Faciliteiten. com. nationalgeographic. top500. google. Balandin in IEEE Spectrum. google. "NVIDIA Tesla GPUs Power World's Fastest Supercomputer" (http:/ / pressroom. Publication of the NCF. Fukuda (1997). . org). Retrieved 2010-10-31. com/ books?id=J46GinHakmkC& pg=PA172& dq=history+ of+ supercomputer+ cdc+ 6600& hl=en& ei=PeAcTv_eI8uf-wb3y9jvCA& sa=X& oi=book_result& ct=result& resnum=7& ved=0CEYQ6AEwBjgK#v=onepage& q=history of supercomputer cdc 6600& f=false) [4] The American Midwest: an interpretive encyclopedia by Richard Sisson. [23] Wu-chun Feng. com/ news/ 2005/ 08/ 0829_050829_supercomputer. O. nytimes. Koga. 2010-10-28.1997.1145/957717. Volume 60. Lee 2004 ISBN 1402081359 page 172 (http:/ / books.green-500-list-ranks-supercomputers. [13] A. [64] [65] radiation shielding modeling (CDC Cyber). Mohammad Alamgir Hossain 2003 ISBN 9781852335991 pages 201-202 [9] TOP500 Annual Report 1994. green500. Norman Paul Jouppi. Fujii. Wada. (http:/ / www. gov/ pubs/ 031001-acmq. do?easyirid=A0D622CE9F579F09& version=live& prid=678988& releasejsp=release_157). [67] 3D nuclear test simulations as a substitute for banned atmospheric nuclear testing (ASCI Q). ieee. Tokhi. Pao-Ann Hsiung. Y. [22] "Green 500 list ranks supercomputers" (http:/ / www. 
html) [18] Intel brochure . N. "Numerical Wind Tunnel (NWT) and CFD Research at National Aerospace Laboratory". New York Times. IEEE Computer Society. org/ semiconductors/ materials/ better-computing-through-cpu-cooling/ 0) [21] "The Green 500" (http:/ / www. Guang-Huei Lin. The CP-PACS project. [12] Y.1109/HPC. Accessed 20 June 2011 [2] Hardware software co-design of a multimedia SOC platform by Sao-Jie Chen. Mohammad Alamgir Hossain 2003 ISBN 9781852335991 pages 201-202 . . Zacher 2006 ISBN 0253348862 page 1489 (http:/ / books. Gurindar Sohi 1999 ISBN 9781558605398 page 41-48 [7] Milestones in computer science and information technology by Edwin D. [14] Scalable input/output: achieving system balance by Daniel A. H. [16] "Faster Supercomputers Aiding Weather Forecasts" (http:/ / news.Proceedings Supplements. October 2009 (http:/ / spectrum. lanl. org/ sublist). Proceedings of HPC-Asia '97. 19 June 2011. Sumimoto. Proceedings of 11th International Parallel Processing Symposium. Christian K. doi:10. aspx).nationalgeographic. com/ 2011/ 06/ 20/ technology/ 20computer. Reed 2003 ISBN 9780262681421 page 182 [15] Kaku. 2010s [68] Molecular Dynamics Simulation (Tianhe-1A) Notes [1] (http:/ / www. Akashi. January 1998.

org/ overtime/ list/ 32/ os). Top500. org/ primenet). htm). "Belle Chess Hardware". David. LNCS 3203. . 135. not those on the date last accessed. Saul (June 14. Stanford University. Dubitzky. pp. flonnet. Retrieved 2011-05-28 [46] BOINCstats: BOINC Combined (http:/ / www. . . Taiji. Completion of a one-petaflops computer system for simulation of molecular dynamics (http:/ / www. html). serverwatch. BlueGene/Q system . Retrieved 2010-10-31. [48] "Internet PrimeNet Server Distributed Computing Technology for the Great Internet Mersenne Prime Search" (http:/ / www. com/ ).curpg-2.E. com/ articleshow/ msid-225517. .org. "Nebulae #2 Supercomputer built with NVIDIA Tesla GPGPUs" (http:/ / www. net/ blog/ 2004/ 04/ 30/ how-many-google-machines/ ). Orda." [29] "IBM Roadrunner Takes the Gold in the Petaflop Race" (http:/ / www. html). Google Seeks More Power" (http:/ / www. phy. Ariel. com/ stats/ project_graph. riken. hot topic paper (2007)" (http:/ / citeseer. Retrieved 2008-03-16. php/ 3913536/ Top500-Supercomputing-List-Reveals-Computing-Trends. ibm.com. Theregister. May 20. cnn. computer. edu/ ps3. Retrieved 2010-11-25. College of Engineering. uk/ 2010/ 11/ 22/ ibm_blue_gene_q_super/ ). [37] Condon. Retrieved 2010-10-31. htm). com/ 2008/ TECH/ 06/ 09/ fastest. uk/ 2010/ 05/ 31/ top_500_supers_jun2010/ ). Schuster.. CNN. [27] The Register: IBM 'Blue Waters' super node washes ashore in August (http:/ / www. com/ press/ us/ en/ pressrelease/ 26599. indiatimes. htm) 143 . green500. cnn. Gaurav Khanna. [38] Hsu. ist. Retrieved 2008-03-16. 2006). theregister. April 30. [47] BOINCstats: MilkyWay@home (http:/ / boincstats. University of Massachusetts Dartmouth. Werner. and K. Benny. [32] Green 500 list (http:/ / www.com. ISBN 0-691-09065-3 [39] C. [45] Folding@home: OS Statistics (http:/ / fah-web. top500. org/ web/ 20080610155646/ http:/ / www. (http:/ / www. more than twice that of the next best system. html). 1. . Archived from the original (http:/ / www. .Clarke). Shaw Research Anton" (http:/ / www. . Assaf. U. archive.M. . Rajeshwari Adappa (30 October 2006). 2008. 927 – 932 [40] J Makino and M. . . . psu. com/ topic/ processors/ IBM_Roadrunner_Takes_the_Gold_in_the_Petaflop_Race. Cracking DES . Sunderam 2005 ISBN 3540260439 pages 60-67 [26] "IBM uncloaks 20 petaflops BlueGene/Q super" (http:/ / www. Retrieved 4 August 2011. [44] "Japan Reclaims Top Ranking on Latest TOP500 List of World’s Supercomputers" (http:/ / www. Lorenz. theregister. tnl. The Economic Times. The Register. Retrieved 2011-05-28. The Chess Monster Hydra. com/ communications/ 2008/ 05/ google-surpasses-supercomputer-community-unnoticed. org/ lists/ 2011/ 06/ press-release). ap/ index. TechWorld . . com/ stats/ project_graph. theregister. [30] "Top500 Supercomputing List Reveals Computing Trends" (http:/ / www. Unnoticed? (http:/ / blogs. co. 02/04/2009 [36] "Petaflop Sequoia Supercomputer . [42] RIKEN press release.org. mersenne. . com/ hreviews/ article. co. Princeton University Press. "Quasi-opportunistic supercomputing in grids. hpcwire. Wiley. stanford. Scientific Simulations with Special Purpose Computers: The GRAPE Systems. 03. BOINC. "Hiding in Plain Sight. 27. com/ 2006/ 06/ 14/ technology/ 14search. Associate Professor.Supercomputer [25] Computational science -. 2004 [53] Markoff. [55] Athley. [54] Google Surpasses Supercomputer Community. Note these link will give current statistics. Antwerp – Belgium. IEEE International Symposium on High Performance Distributed Computing. 
uk/ 2011/ 07/ 15/ power_775_super_pricing/ ) [28] "Government unveils world's fastest computer" (http:/ / web. cms). umassd. of 14th International Conference on Field-Programmable Logic and Applications (FPL). [52] How many Google machines (http:/ / www. Pergamon Press. wss). boincstats. py?qtype=osstats). IEEE. edu/ cgi-bin/ main. [41] Electronic Frontier Foundation (1998). Wiretap Politics & Chip Design (http:/ / cryptome. com/ 2008/ TECH/ 06/ 09/ fastest.. ISBN 1-56592-520-3. deshawresearch. "performing 376 million calculations for every watt of electricity used. com/ content/ hp9la9pwq0a1cmrp/ ) Proc. html?page=1) By Tom Jowitt . "IBM. html). "Tatas get Karmakar to make super comp" (http:/ / economictimes. co. php?pr=milkyway).H. Deshawresearch.Thompson. Timothy (2010-05-31). [34] "Top500 OS chart" (http:/ / www. top500. Oreilly & Associates Inc. php) [33] Prickett. . Yoshpa. Hensell. Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. 2004.ICCS 2005: 5th international conference edited by Vaidy S.. [35] IBM to build new monster supercomputer (http:/ / www. 1. Feng-hsiung (2002). org/ primenet/ [50] Kravtsov. 1998. Donninger. nmscommunications.680 Mflops/watt. 1982. com/ stories/ 20070518003711400. com/ archives/ 2010/ 11/ 18/ ibm-system-clear-winner-in-green-500/ ). html) on 2008-06-10. Carmeli. BOINC.United States" (http:/ / www-03. . edu/ viewdoc/ summary?doi=10.Secrets of Encryption Research. GIMPS. Valentin. In Advances in Computer Chess 3 (ed. datacenterknowledge. 2009-02-03. Retrieved 2010-10-31.co. Retrieved 2011-05-28. html). . nytimes. 2011 [49] http:/ / www. ap/ index. (http:/ / www. J. 8993). [51] "PS3 Gravity Grid" (http:/ / gravity. Note these link will give current statistics. org/ lists/ 2011/ 06/ top/ list.B. . . . Top500. php?pr=bo). mersenne. Gouri Agtey.ibm. [56] C-DAC's Param programme sets to touch 10 teraflops by late 2007 and a petaflops by 2010. computer. setting a record in power efficiency with a value of 1. John. not those on the date last accessed. Retrieved June 6. html) [43] "D. jp/ engn/ r-world/ info/ release/ press/ 2006/ 060619/ index. org/ cracking-des/ cracking-des. New York Times. springerlink.R. Retrieved 2010-10-31. 2010-11-22. networkworld." [31] "IBM Research A Clear Winner in Green 500" (http:/ / www.uk. com/ news/ 2009/ 020409-ibm-to-build-new-monster.

Supercomputer [57] "National Science Board Approves Funds for Petascale Computing Systems" (http:/ / www. Retrieved 2011-07-08. Indian Institute of Technology Powai. pdf) (PDF). [68] "China’s Investment in GPU Supercomputing Begins to Pay Off Big Time!" (http:/ / blogs. kuleuven. be/ des/ ). acronym. com/ science?_ob=ArticleURL& _udi=B6VC5-3SWXX64-8& _user=10& _rdoc=1& _fmt=& _orig=search& _sort=d& view=c& _acct=C000050221& _version=1& _urlVersion=0& _userid=10& md5=0a76921c6623fa556491f2dccdf4377e) (Subscription required). 144 External links • Supercomputing (http://www. 391–402.kuleuven. Cosic. . gov/ news/ news_summ. InfoWorld. nvidia. [67] "Disarmament Diplomacy: . Heise online. pp. "A new heuristic algorithm for probabilistic optimization" (http:/ / www. html). sciencedirect. html). org/ citation. .S.com. ComputerWorld. Retrieved May 25. 102638650. 2011.DOE Supercomputing & Test Simulation Programme" (http:/ / www. cosic.org. . cfm?id=1062325). [63] "The Cray-1 Computer System" (http:/ / archive. 2011 (http:/ / www. org/ resources/ text/ Cray/ Cray.uk. h-online.be. Patrick (2008-06-10). Retrieved 2011-07-08. com/ article/ 08/ 06/ 10/ IBM_breaks_petaflop_barrier_1. . 2007. . fr/ abs/ html/ iaea0837. 2008-05-09. Inc.dmoz. [60] SGI. computerworld. 2000-08-22. infoworld. "Reversible logic for supercomputing" (http:/ / portal. Retrieved 2011-07-08. Intel plan to speed supercomputers 500 times by 2018. August 10. com/ s/ article/ 9217763/ SGI_Intel_plan_to_speed_supercomputers_500_times_by_2018?taxonomyId=67) [61] DeBenedictis. heise. Cray1. computerhistory. . com/ newsticker/ news/ item/ IDF-Intel-says-Moore-s-Law-holds-until-2029-734779. 2008-04-04. . [65] "Abstract for SAMSY . India. [62] "IDF: Intel says Moore's Law holds until 2029" (http:/ / www. Blogs. 2011. nea. [59] Thibodeau.org/Computers/Supercomputing/) at the Open Directory Project . Retrieved May 25.nvidia. National Science Foundation. . June 20. Department of Mathematics and School of Biomedical Engineering.Shielding Analysis Modular System"" (http:/ / www. 1977. html). [66] "EFF DES Cracker Source Code" (https:/ / www. com/ 2011/ 06/ chinas-investment-in-gpu-supercomputing-begins-to-pay-off-big-time/ ). de/ english/ newsticker/ news/ 107683). "IBM breaks petaflop barrier" (http:/ / www. (9 June 1998). org. Rajani R. acm. U. Acronym. [58] "NASA collaborates with Intel and SGI on forthcoming petaflops super computers" (http:/ / www. (2005). [64] Joshi. Erik P. uk/ dd/ dd49/ 49doe. Bombay. html). . . . Proceedings of the 2nd conference on Computing frontiers. esat. Retrieved 2008-03-16. Cray Research.esat. Retrieved 2008-07-01. ISBN 1595930191. Heise Online. nsf. jsp?cntn_id=109850).

Terrastore

Terrastore
Original author(s): Sergio Bossa
Developer(s): Sergio Bossa, Giuseppe Santoro, Mats Henricson, Sven Johansson, Amir Moulavi
Initial release: 2009
Stable release: 0.8.0 / December 13, 2010
Development status: Active
Written in: Java
Operating system: Cross-platform
Available in: English
Type: Document-oriented database
License: Apache License 2.0
Website: [1]

Terrastore is a distributed, scalable and consistent document store supporting single-cluster and multi-cluster deployments. It provides advanced scalability support and elasticity without loosening consistency at the data level. Terrastore provides ubiquity by using the universally supported HTTP protocol. Data is partitioned and distributed among the nodes in the cluster(s), with automatic and transparent re-balancing when nodes join and leave. Moreover, it distributes the computational load for operations like queries and updates to the nodes that actually hold the data. In this way Terrastore facilitates scalability at both the data and computational layers.

Data model

The data model is pure JSON,[3] stored in documents and buckets, which are analogous to a table row and a table, respectively, in relational databases.

Building blocks and architecture

A Terrastore system consists of an ensemble of clusters; in each cluster there is one Terrastore master and several Terrastore servers. Terrastore employs the Terracotta clustering software[2] as an intra-cluster group membership service and for durable document storage (and replication). The master is responsible for managing the cluster membership: it is notified when servers join or leave, changing the group view. In addition to membership management, the master is also responsible for durably storing all documents and for replicating data to the server nodes, but it does not partition the data itself; the partitioning strategy is decided by the server nodes, which use either the default consistent hashing or a user-defined scheme. Replication is a pull strategy performed by the server nodes from the master node: each server requests its own partition from the master. All writes go through the master, but only the first read request goes through the master; later requests are read from the server memory. Data (documents and buckets) is partitioned according to a consistent hashing scheme[4] and distributed over the different Terrastore servers. Terracotta is used as a distributed lock manager for locking single-document access during write operations.
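Because partitioning defaults to consistent hashing, it may help to see the general technique in miniature. The sketch below (an illustration of consistent hashing in general, not Terrastore's actual Java implementation) maps document keys onto a ring of server nodes so that adding or removing a node only remaps the keys near that node's positions; the node names and hash choice are assumptions for the example.

    // Minimal consistent-hashing ring: each node is placed at several points
    // on a hash ring, and a key is owned by the first node clockwise from the
    // key's hash. Adding/removing a node only remaps keys near its points.
    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    class HashRing {
        std::map<std::size_t, std::string> ring_;   // hash point -> node name
        std::hash<std::string> hash_;
        static const int kVirtualNodes = 64;        // smooths the distribution
    public:
        void addNode(const std::string& node) {
            for (int i = 0; i < kVirtualNodes; ++i)
                ring_[hash_(node + "#" + std::to_string(i))] = node;
        }
        void removeNode(const std::string& node) {
            for (int i = 0; i < kVirtualNodes; ++i)
                ring_.erase(hash_(node + "#" + std::to_string(i)));
        }
        std::string ownerOf(const std::string& key) const {
            auto it = ring_.lower_bound(hash_(key));   // first point >= hash(key)
            if (it == ring_.end()) it = ring_.begin(); // wrap around the ring
            return it->second;
        }
    };

    int main() {
        HashRing ring;
        ring.addNode("server-a");    // illustrative node names
        ring.addNode("server-b");
        ring.addNode("server-c");
        std::cout << "doc:42 -> " << ring.ownerOf("doc:42") << "\n";
        ring.removeNode("server-b"); // only keys owned by server-b move
        std::cout << "doc:42 -> " << ring.ownerOf("doc:42") << "\n";
        return 0;
    }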

Each server owns a partition to which a number of documents are mapped, and each document is owned by exactly one server node. If a request is sent to a server that does not own the requested document, the request is routed to the corresponding server. All write requests go to both the server that owns the document and the master node.

The role of the ensemble is to join multiple clusters and make them work together. It provides better scalability by allowing multiple active masters, and it also gives the whole system partition-tolerant behaviour: in the case of a partition the data remains available locally, but it cannot be seen by clusters other than the one that owns it.

External links
• Project website [1]
• Introduction to Terrastore [5]
• Terrastore, a document database for developers [6]
• Terrastore news and articles on myNoSQL [7]

References
[1] http://code.google.com/p/terrastore/
[2] "Terracotta" (http://www.terracotta.org/).
[3] "JSON" (http://www.json.org/).
[4] David Karger, Eric Lehman, Tom Leighton, Matthew Levine, Daniel Lewin, Rina Panigrahy. Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web. ACM Symposium on Theory of Computing.
[5] http://www.slideshare.net/svjson/introduction-to-terrastore
[6] http://www.slideshare.net/sbtourist/terrastore-a-document-database-for-developers
[7] http://nosql.mypopescu.com/tagged/terrastore

Transparency (human-computer interaction)
Any change in a computing system, such as a new feature or new component, is transparent if the system after the change adheres to its previous external interface as much as possible while changing its internal behaviour. The purpose is to shield from change all systems (or human users) on the other end of the interface. Confusingly, the term refers to the overall invisibility of the component; it does not refer to visibility of a component's internals (as in a white box or open system). The term is used particularly often with regard to an abstraction layer that is invisible either from its upper or lower neighbouring layer.

The term was also used temporarily, around 1969, in IBM and Honeywell programming manuals, where it referred to a certain programming technique: application code was transparent when it was free of low-level detail (such as device-specific management) and contained only the logic solving the main problem. This was achieved through encapsulation, putting the code into modules that hid internal details and made them invisible to the main application.

The term transparent is widely used in computing marketing as a substitute for the term invisible, since invisible has a bad connotation (usually something the user cannot see and has no control over) while transparent has a good connotation (usually associated with not hiding anything). The vast majority of the time, however, the term transparent is used in this misleading way to refer to the actual invisibility of a computing process.

Examples
For example, the Network File System is transparent, because it introduces access to files stored remotely on the network in a way that is uniform with previous local access to a file system, so the user might not even notice it while using the folder hierarchy. The early File Transfer Protocol (FTP) is considerably less transparent, because it requires each user to learn how to access files through an ftp client.

Similarly, some file systems allow transparent compression and decompression of data, enabling users to store more files on a medium without any special knowledge, and some file systems encrypt files transparently. This approach does not require running a compression or encryption utility manually.

In software engineering, it is also considered good practice to develop or use abstraction layers for database access, so that the same application will work with different databases; here, the abstraction layer allows other parts of the program to access the database transparently (see Data Access Object, for example). In object-oriented programming, transparency is facilitated through the use of interfaces that hide actual implementations done with different underlying classes.

Types of transparency in distributed systems
Transparency means that any form of distributed system should hide its distributed nature from its users, appearing and functioning as a normal centralized system. There are many types of transparency:
• Access transparency - Regardless of how resource access and representation has to be performed on each individual computing entity, the users of a distributed system should always access resources in a single, uniform way.[1]
• Location transparency - Users of a distributed system should not have to be aware of where a resource is physically located.
• Migration transparency - Users should not be aware of whether a resource or computing entity possesses the ability to move to a different physical or logical location.
• Relocation transparency - Should a resource move while in use, this should not be noticeable to the end user.
• Replication transparency - If a resource is replicated among several locations, it should appear to the user as a single resource.
• Concurrent transparency - While multiple users may compete for and share a single resource, this should not be apparent to any of them.
• Failure transparency - Always try to hide any failure and recovery of computing entities and resources.
• Persistence transparency - Whether a resource lies in volatile or permanent memory should make no difference to the user.
• Security transparency - Negotiation of cryptographically secure access to resources must require a minimum of user intervention, or users will circumvent the security in preference of productivity.

Formal definitions of most of these concepts can be found in RM-ODP, the Open Distributed Processing Reference Model (ISO 10746).

The degree to which these properties can or should be achieved may vary widely; not every system can or should hide everything from its users. For instance, due to the existence of a fixed and finite speed of light there will always be more latency when accessing resources distant from the user, and if one expects real-time interaction with the distributed system, this may be very noticeable.
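The abstraction-layer idea above can be shown with a small sketch: application code depends only on an interface, so a local implementation can later be replaced by a remote one without the caller noticing. The DocumentStore interface and the classes below are invented for illustration and do not belong to any particular library.

import java.util.HashMap;
import java.util.Map;

// Callers program against this interface; they cannot tell (and need not know)
// whether documents live in local memory or on a remote server.
interface DocumentStore {
    void put(String key, String document);
    String get(String key);
}

// One possible implementation: a plain in-memory map.
class InMemoryStore implements DocumentStore {
    private final Map<String, String> data = new HashMap<>();
    public void put(String key, String document) { data.put(key, document); }
    public String get(String key) { return data.get(key); }
}

// Another implementation could forward each call over the network; the stub below
// only marks where remote calls would go, to keep the example self-contained.
class RemoteStore implements DocumentStore {
    public void put(String key, String document) {
        // send the document to a remote server here
    }
    public String get(String key) {
        // fetch the document from a remote server here
        return null;
    }
}

public class TransparencyExample {
    // Application logic sees only the interface, so the storage location is transparent.
    static String describe(DocumentStore store, String key) {
        String doc = store.get(key);
        return doc == null ? "missing" : doc;
    }

    public static void main(String[] args) {
        DocumentStore store = new InMemoryStore();   // could equally be new RemoteStore()
        store.put("greeting", "hello");
        System.out.println(describe(store, "greeting"));
    }
}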

References
• Transparent-Mode Control Procedures for Data Communication [2], a paper from 1965; its abstract carries an early example of usage of the term in the IT field.
[1] http://www.counterpane.com/sandl.html
[2] http://delivery.acm.org/10.1145/370000/363836/p203-gorn.pdf?key1=363836&key2=6763295811&coll=&dl=ACM&CFID=15151515&CFTOKEN=6184618

TreadMarks
TreadMarks is a distributed shared memory system created at Rice University in the 1990s.

External links
• TreadMarks official site [1]

References
[1] http://www.cs.rice.edu/CS/Systems/software/treadmarks.html

Tuple space
A tuple space is an implementation of the associative memory paradigm for parallel/distributed computing. It provides a repository of tuples that can be accessed concurrently. As an illustrative example, consider a group of processors that produce pieces of data and a group of processors that use the data. Producers post their data as tuples in the space, and consumers then retrieve from the space data that matches a certain pattern. This is also known as the blackboard metaphor. Tuple space may be thought of as a form of distributed shared memory.

Tuple spaces were the theoretical underpinning of the Linda language developed by David Gelernter and Nicholas Carriero at Yale University. Implementations of tuple spaces have also been developed for Java (JavaSpaces), Lisp, Lua, Prolog, Python, Ruby, Smalltalk, Tcl, and the .NET framework.
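Before looking at concrete systems, the basic operations can be illustrated with a toy, single-process tuple space. The class below is only a sketch of the associative-matching idea (write, read and take with null fields acting as wildcards); it is not the API of Linda, JavaSpaces or any implementation listed later in this article.

import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

// A toy tuple space: tuples are arrays of objects, and a template matches a tuple
// when it has the same length and every non-null field is equal.
public class TinyTupleSpace {
    private final List<Object[]> tuples = new LinkedList<>();

    // "out" in Linda terms: publish a tuple into the space.
    public synchronized void write(Object... tuple) {
        tuples.add(tuple);
        notifyAll();
    }

    // "rd": return a matching tuple without removing it, waiting until one exists.
    public synchronized Object[] read(Object... template) throws InterruptedException {
        Object[] match;
        while ((match = find(template, false)) == null) wait();
        return match;
    }

    // "in": remove and return a matching tuple, waiting until one exists.
    public synchronized Object[] take(Object... template) throws InterruptedException {
        Object[] match;
        while ((match = find(template, true)) == null) wait();
        return match;
    }

    private Object[] find(Object[] template, boolean remove) {
        for (Iterator<Object[]> it = tuples.iterator(); it.hasNext(); ) {
            Object[] t = it.next();
            if (matches(template, t)) {
                if (remove) it.remove();
                return t;
            }
        }
        return null;
    }

    private static boolean matches(Object[] template, Object[] tuple) {
        if (template.length != tuple.length) return false;
        for (int i = 0; i < template.length; i++) {
            if (template[i] != null && !template[i].equals(tuple[i])) return false;
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        TinyTupleSpace space = new TinyTupleSpace();
        // A producer thread posts a result tuple; the consumer blocks until it appears.
        new Thread(() -> space.write("result", "job-1", 42)).start();
        Object[] tuple = space.take("result", "job-1", null);   // null acts as a wildcard
        System.out.println(Arrays.toString(tuple));
    }
}

The producer and consumer never refer to each other directly; they are decoupled in space and time by the shared repository, which is the essential property of the paradigm.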

In a typical environment there are several "spaces". It is used to store the distributed system state and implement distributed algorithms. use the service provided by the object. A process may choose to wait for an object to be placed in the Object Space. an object to be shared in the Object Space is made. The Entry is then written into the JavaSpace. Distribution can also be to remote locations. their methods cannot be invoked while the objects are in the Object Space. and keeps track of how many times it was used. it has to be removed from the Object Space. Here. The updated Entry is written back to the JavaSpace. the Entry is used to encapsulate a service which returns a Hello World! string. several masters and many workers. it can also be used to provide reliable storage of objects through distributed replication. which on its own has not been a commercial success. using properties lookup. Any processes can then identify the object from the Object Directory. The Master hands out units of work to the "space". thereby ensuring mutual exclusion. all communication partners (peers) communicate and coordinate by sharing state. and these are read. when deposited in an Object Space are passive. This means that no other process can access an object while it is being used by one process. they can take any unit of work from the space and process the task. First. // An Entry class public class SpaceEntry implements Entry { public final String message = "Hello World!". Such an object is called an Entry in JavaSpace terminology. 149 JavaSpaces JavaSpaces is a service specification providing a distributed object exchange and coordination mechanism (which may or may not be persistent) for Java objects. updating its usage count by doing so.Tuple space An object. Because once an object is accessed. . i. The client reads the entry from the JavaSpace and invokes its method to access the service. when deposited into a space. and is placed back only after it has been released. public String service() { ++count. high performance applications rather than reliable object caching. however. Instead. The announcement of Jini/JavaSpaces created quite some hype although Sun co-founder and chief [1] Jini architect Bill Joy put it straight that this distributed systems dream will take "a quantum leap in thinking". although this won't survive a total power failure like a disk. the accessing process must retrieve it from the Object Space into its local memory. Example usage The following example shows an application made using JavaSpaces. or JavaSpace. i. where the property specifying the criteria for the lookup of the object is its name or some other property which uniquely identifies it.. JavaSpaces remains a niche technology mostly used in the financial services and telco industries where it continues to maintain a faithful following. public Integer count = 0. Objects. The technology has found and kept new users over the years and some vendors are offering JavaSpaces-based products. processed and written back to the space by the workers. needs to be registered with an Object Directory in the Object Space. JavaSpaces can be used to achieve scalability through parallel processing. it is regarded by many to be reliable as long as the power is reliable.e. This paradigm inherently provides mutual exclusion. The most common software pattern used in JavaSpaces is the Master-Worker pattern. the workers are usually designed to be generic. The server which provides this service will create an Object Space. 
this is rare as JavaSpaces are usually used to low-latency. if the needed object is not already present. update the state of the object and place it back into the Object Space. In a JavaSpace. JavaSpaces is part of the Java Jini technology.e.
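The Master-Worker pattern can be sketched directly on the JavaSpace interface. The fragment below is a minimal illustration, not production code: the TaskEntry and ResultEntry classes are invented for the example, and obtaining the JavaSpace reference (for instance via a Jini lookup) is left out, just as in the "Example usage" section that follows.

import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

// A unit of work posted by the master.
class TaskEntry implements Entry {
    public Integer taskId;
    public Integer input;
    public TaskEntry() {}                       // entries need a public no-arg constructor
    public TaskEntry(Integer taskId, Integer input) { this.taskId = taskId; this.input = input; }
}

// A result written back by a worker.
class ResultEntry implements Entry {
    public Integer taskId;
    public Integer output;
    public ResultEntry() {}
    public ResultEntry(Integer taskId, Integer output) { this.taskId = taskId; this.output = output; }
}

public class MasterWorkerSketch {
    // Master: write one entry per task, then collect one result per task.
    static void master(JavaSpace space, int tasks) throws Exception {
        for (int i = 0; i < tasks; i++) {
            space.write(new TaskEntry(i, i * 10), null, Lease.FOREVER);
        }
        for (int i = 0; i < tasks; i++) {
            ResultEntry r = (ResultEntry) space.take(new ResultEntry(), null, Long.MAX_VALUE);
            System.out.println("task " + r.taskId + " -> " + r.output);
        }
    }

    // Worker: repeatedly take any task (null fields act as wildcards in the template),
    // process it, and write the result back into the space.
    static void worker(JavaSpace space) throws Exception {
        while (true) {
            TaskEntry task = (TaskEntry) space.take(new TaskEntry(), null, Long.MAX_VALUE);
            int output = task.input + 1;        // stands in for the real computation
            space.write(new ResultEntry(task.taskId, output), null, Lease.FOREVER);
        }
    }
}

Because take removes an entry atomically, each task is handed to exactly one worker even when many workers poll the same space, which is what makes the workers generic and freely scalable.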

Example usage
The following example shows an application made using JavaSpaces. First, an object to be shared in the Object Space is made; such an object is called an Entry in JavaSpace terminology. Here, the Entry is used to encapsulate a service which returns a "Hello World!" string and keeps track of how many times it was used. The server which provides this service creates an Object Space, or JavaSpace, and writes the Entry into it. The client reads the Entry from the JavaSpace and invokes its method to access the service, updating its usage count by doing so. The updated Entry is then written back to the JavaSpace. (The space() call below stands for whatever code locates the JavaSpace service, for example a Jini lookup.)

import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

// An Entry class
public class SpaceEntry implements Entry {
    public final String message = "Hello World!";
    public Integer count = 0;

    public String service() {
        ++count;
        return message;
    }

    public String toString() {
        return "Count: " + count;
    }
}

// Hello World! server
public class Server {
    public static void main(String[] args) throws Exception {
        SpaceEntry entry = new SpaceEntry();          // Create the Entry object
        JavaSpace space = (JavaSpace) space();        // Create (locate) an Object Space
        // Register and write the Entry into the Space
        space.write(entry, null, Lease.FOREVER);
        // Pause for 10 seconds and then retrieve the Entry and check its state.
        Thread.sleep(10 * 1000);
        SpaceEntry e = (SpaceEntry) space.read(entry, null, Long.MAX_VALUE);
        System.out.println(e);
    }
}

// Client
public class Client {
    public static void main(String[] args) throws Exception {
        JavaSpace space = (JavaSpace) space();
        SpaceEntry e = (SpaceEntry) space.take(new SpaceEntry(), null, Long.MAX_VALUE);
        System.out.println(e.service());
        space.write(e, null, Lease.FOREVER);
    }
}

Books
• Eric Freeman, Susanne Hupfer, Ken Arnold: JavaSpaces Principles, Patterns, and Practice. Addison-Wesley Professional, June 1999. ISBN 0-201-30955-6
• Phil Bishop, Nigel Warren: JavaSpaces in Practice. Addison Wesley, 2002. ISBN 0-321-11231-8
• Max K. Goff: Network Distributed Computing: Fitscapes and Fallacies. Prentice Hall, 2004. ISBN 0131001523
• Sing Li, et al.: Professional Java Server Programming. Wrox Press, 1999. ISBN 1861002777
• Steven Halter: JavaSpaces Example by Example. Prentice Hall PTR, 2002. ISBN 0-13-061916-7

Interviews
• Gelernter, David (2009). "Lord of the Cloud" [2]. Edge Foundation, Inc. (John Brockman, Editor and Publisher; Russell Weinberger, Associate Publisher).
• "Computer Visions: A Conversation with David Gelernter" [3]. Sun Developer Network (SDN).
• "Designing as if Programmers are People (Interview with Ken Arnold)" [4]. InformIT.
• Shalom, Nati (2006). "Interview: GigaSpaces" [5]. theserverside.net.

Articles
• Brogden, William (2007). "How Web services can use JavaSpaces" [6]. SearchWebServices.com.
• Brogden, William (2007). "Grid computing and Web services (Beowulf, BOINC, Javaspaces)" [7]. SearchWebServices.com.
• White, Tom (2005). "How To Build a ComputeFarm" [8]. java.net.
• Ottinger, Joseph (2007). "Understanding JavaSpaces" [9]. theserverside.com.
• Angerer, Bernhard (2005). "Loosely Coupled Communication and Coordination in Next-Generation Java Middleware" [10]. java.net.
• Angerer, Bernhard (2003). "Space-Based Programming" [11]. onjava.com. Retrieved 2003-03-19.
• Haines, Steven (2006). "High-impact Web tier clustering, Part 2: Building adaptive, fault-tolerant, scalable solutions with JavaSpaces" [12]. IBM developerWorks.
• Mahmoud, Qusay H. (2005). "Getting Started With JavaSpaces Technology: Beyond Conventional Distributed Programming Paradigms" [13]. Sun Developer Network (SDN).
• Hupfer, Susanne (1999). "Make room for Javaspaces, Part 1 (from 5)" [14]. JavaWorld.
• Löffler; Erlacher, Andreas (2005). "JavaSpaces und ihr Platz im Enterprise Java Universum. Das Modell zum Objektaustausch: JavaSpaces vorgestellt" [15]. Entwickler.
• Arango, Mauricio (2009). "Coordination in parallel event-based systems" [17]. blogs.sun.com.

GigaSpaces
• Shalom, Nati (2006). "Space-Based Architecture and the End of Tier-Based Computing" [16]. GigaSpaces Technologies.

Tuple Space Implementations
• Apache River [18]. Supported languages: Java. License: Apache License. Notable features: based on the Jini project that Sun contributed to Apache.
• Blitz (The Blitz Project) [19]. Supported languages: Java. License: BSD License. Notable features: single site server.
• The Fly Object Space. Supported languages: Java, Ruby, Scala. License: Commercial; allows free non-commercial use. See page.
• GigaSpaces [20]. Supported languages: Java, C++. License: Commercial; offers a free "community license" with a subset of features. Notable features: clustered. See page.
• Linda in a Mobile Environment (LIME) [21]. Supported languages: Java.
• LinuxTuples [22]. Supported languages: C, Python. License: BSD License. Notable features: clustered.
• PyLinda. Supported languages: Python. License: GPL.
• Rinda. Supported languages: Ruby. License: Ruby License.
• SemiSpace [23]. Supported languages: Java. License: Apache License. Notable features: clustered with Terracotta.
• SQLSpaces [24]. Supported languages: Server: Java; Clients: Java, C#, PHP, Prolog, Ruby. License: AGPL (server) + LGPL (clients). Notable features: cluster.
• TIBCO ActiveSpaces [25]. Supported languages: Java, C/C++. License: Commercial. Notable features: clustered, fault-tolerant.

Inactive projects:
• SlackSpaces [26]: open-source implementation of the Linda/Tuplespace programming model; main website down, project source is downloadable.
• SmallSpaces [27]: project stalled since 2000.
• TSpaces [28]: by IBM, for Java.

References
[1] Rob Guth: "More than just another pretty name: Sun's Jini opens up a new world of distributed computer systems" (http://sunsite.uakom.sk/sunworldonline/swol-08-1998/swol-08-jini.html), SunWorld, August 1998 [15 January 2006]
[2] http://www.edge.org/3rd_culture/gelernter09/gelernter09_index.html
[3] http://java.sun.com/developer/technicalArticles/Interviews/gelernter_qa.html
[4] http://www.informit.com/guides/content.aspx?g=java&seqNum=263
[6] http://searchwebservices.techtarget.com/tip/0,289483,sid26_gci1251765,00.html
[7] http://searchwebservices.techtarget.com/tip/0,289483,sid26_gci1248166,00.html
[8] http://today.java.net/pub/a/today/2005/04/21/farm.html
[9] http://www.theserverside.com/tt/articles/article.tss?l=UsingJavaSpaces
[10] http://today.java.net/pub/a/today/2005/06/03/loose.html
[11] http://www.onjava.com/pub/a/onjava/2003/03/19/java_spaces.html
[12] http://www-128.ibm.com/developerworks/java/library/j-cluster2/?Open&ca=daw-co-news
[13] http://java.sun.com/developer/technicalArticles/tools/JavaSpaces/
[14] http://www.javaworld.com/javaworld/jw-11-1999/jw-11-jiniology.html
[16] http://www.gigaspaces.com/os_papers.html
[17] http://blogs.sun.com/arango/entry/coordination_in_parallel_event_based
[19] http://www.dancres.org/blitz/
[20] http://www.gigaspaces.com/
[21] http://lime.sourceforge.net/
[22] http://linuxtuples.sourceforge.net/
[23] http://www.semispace.org/
[24] http://sqlspaces.collide.info/
[25] http://www.tibco.com/products/soa/in-memory-computing/activespaces-enterprise-edition/default.jsp
[27] http://www.fongen.no/?docname=SmallSpaces/
[28] http://www.almaden.ibm.com/cs/TSpaces/

Sources
• Gelernter, David. "Generative communication in Linda" (http://portal.acm.org/citation.cfm?doid=2363.2433). ACM Transactions on Programming Languages and Systems, volume 7, number 1, January 1985.
• Distributed Computing (First Indian reprint, 2004), M. L. Liu

External links
• "TupleSpace" (http://c2.com/cgi/wiki?TupleSpace) at c2.com
• "JavaSpace Specification" (http://www.jini.org/wiki/JavaSpaces_Specification) at jini.org

Utility computing
Utility computing is the packaging of computing resources, such as computation, storage and services, as a metered service similar to a traditional public utility (such as electricity, water, natural gas, or the telephone network). This model has the advantage of a low or no initial cost to acquire computer resources; instead, computational resources are essentially rented, turning what was previously a need to purchase products (hardware, software and network bandwidth) into a service. This repackaging of computing services became the foundation of the shift to "On Demand" computing, Software as a Service and Cloud Computing models that further propagated the idea of computing, application and network as a service.

There was some initial skepticism about such a significant shift.[1] However, the new model of computing caught on and eventually became mainstream with the publication of Nick Carr's book "The Big Switch". IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications.

Utility computing can support grid computing, which has the characteristic of very large computations or sudden peaks in demand that are supported via a large number of computers. "Utility computing" has usually envisioned some form of virtualization, so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the "back end" to make this possible; these might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer. The technique of running a single calculation on multiple computers is known as distributed computing, and the term "grid computing" is often used to describe a particular form of distributed computing where the supporting nodes are geographically distributed or cross administrative domains. To provide utility computing services, a company can "bundle" the resources of members of the public for sale, and those members might be paid with a portion of the revenue from clients.

One model, common among volunteer computing applications, is for a central server to dispense tasks to participating nodes at the behest of approved end-users (in the commercial case, the paying customers). Another model, sometimes called the Virtual Organization (VO), is more decentralized, with organizations buying and selling computing resources as needed or as they go idle.

The definition of "utility computing" is sometimes extended to specialized tasks, such as web services.


History
Utility computing is not a new concept, but rather has quite a long history. Among the earliest references is:

If computers of the kind I have advocated become the computers of the future, then computing may someday be organized as a public utility just as the telephone system is a public utility... The computer utility could become the basis of a new and important industry.

—John McCarthy, speaking at the MIT Centennial in 1961 [2]

IBM and other mainframe providers conducted this kind of business in the following two decades, often referred to as time-sharing, offering computing power and database storage to banks and other large organizations from their worldwide data centers. To facilitate this business model, mainframe operating systems evolved to include process control facilities, security, and user metering. The advent of minicomputers changed this business model by making computers affordable to almost all companies. As Intel and AMD increased the power of PC-architecture servers with each new generation of processor, data centers became filled with thousands of servers.

In the late 1990s utility computing re-surfaced. InsynQ, Inc. [3] launched on-demand applications and desktop hosting services in 1997 using HP equipment. In 1998, HP set up the Utility Computing Division in Mountain View, CA, assigning former Bell Labs computer scientists to begin work on a computing power plant, incorporating multiple utilities to form a software stack; services such as "IP billing-on-tap" were marketed. HP introduced the Utility Data Center in 2001. Sun announced the Sun Cloud service to consumers in 2000. In December 2005, Alexa launched the Alexa Web Search Platform, a Web search building tool whose underlying power is utility computing; Alexa charges users for storage, utilization, and so on.

There is space in the market for specific industries and applications as well as other niche applications powered by utility computing. For example, PolyServe Inc. [4] offers a clustered file system based on commodity server and storage hardware that creates highly available utility computing environments for mission-critical applications, including Oracle and Microsoft SQL Server databases, as well as workload-optimized solutions specifically tuned for bulk storage, high-performance computing, vertical industries such as financial services, seismic processing, and content serving. The Database Utility and File Serving Utility enable IT organizations to independently add servers or storage as needed, retask workloads to different hardware, and maintain the environment without disruption.

In spring 2006 3tera announced its AppLogic service, and later that summer Amazon launched Amazon EC2 (Elastic Compute Cloud). These services allow the operation of general-purpose computing applications. Both are based on Xen virtualization software, and the most commonly used operating system on the virtual computers is Linux, though Windows and Solaris are supported. Common uses include web applications, SaaS, image rendering and processing, but also general-purpose business applications.

Utility computing simply means "pay and use", with regard to computing power.

References
[1] On-demand computing: What are the odds? (http://www.zdnet.com/news/on-demand-computing-what-are-the-odds/296135), ZDNet, Nov 2002, retrieved Oct 2010
[2] Architects of the Information Society: Thirty-Five Years of the Laboratory for Computer Science at MIT, edited by Hal Abelson
[3] http://www.insynq.com
[4] http://www.polyserve.com/index.php

• Decision Support and Business Intelligence, 8th edition, page 680. ISBN 0-13-198660-0


External links
• How Utility Computing Works (http://communication.howstuffworks.com/utility-computing.htm) • Utility computing definition (http://searchdatacenter.techtarget.com/sDefinition/0,,sid80_gci904539,00.html)

Virtual Machine Interface
Virtual Machine Interface[1] ("VMI") may refer to a communication protocol for running parallel programs on a distributed memory system. Virtual Machine Interface[2] is also the name given by VMware to the proposed open standard protocol that guest operating systems can use to communicate with the hypervisor of a virtual machine. An implementation of this standard was merged in the main Linux kernel version 2.6.21. A number of popular GNU/Linux distributions now ship with VMI support enabled by default. Since newer AMD and Intel CPUs allow for more efficient virtualization, VMI is being obsoleted and VMI support will be removed from Linux kernel in 2.6.37[3] and from VMware products in 2010-2011 timeframe [4] .

References
[1] Official web site for the VMI communication protocol (http://vmi.ncsa.uiuc.edu/)
[2] Transparent Paravirtualisation - VMware Inc (http://www.vmware.com/interfaces/paravirtualization.html)
[3] x86, vmi: Mark VMI deprecated and schedule it for removal (http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=d0153ca35d344d9b640dc305031b0703ba3f30f0)
[4] Support for guest OS paravirtualization using VMware VMI to be retired from new products in 2010-2011 (http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html)

External links
• The VMI virtualization interface (http://lwn.net/Articles/175706/) - article in lwn.net


Virtual Object System
Developer(s): Interreality
Stable release: 0.23.0 / April 15, 2006 (S5 UI preview released October 19, 2007)
Operating system: Linux, Windows, Mac OS X
Type: Distributed systems, Networking, 3D graphics
License: GNU Lesser General Public License
Website: interreality.org [1]

The Virtual Object System (VOS) is a computer software technology for creating distributed object systems. The sites hosting Vobjects are typically linked by a computer network, such as a local area network or the Internet. Vobjects may send messages to other Vobjects over these network links (remotely) or within the same host site (locally) to perform actions and synchronize state. In this way, VOS may also be called an object-oriented remote procedure call system. In addition, Vobjects may have a number of directed relations to other Vobjects, which allows them to form directed graph data structures. VOS is patent free, and its implementation is Free Software. The primary application focus of VOS is general purpose, multiuser, collaborative 3D virtual environments or virtual reality. The primary designer and author of VOS is Peter Amstutz.
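The directed relations and message passing described above can be pictured with a small single-process sketch. It is not VOS code and does not use the VOS API; the GraphObject class and its methods are invented purely to illustrate the structure.

import java.util.HashMap;
import java.util.Map;

// A toy "Vobject-like" node: it holds named, directed links to other nodes and can
// receive simple messages. Real VOS objects are distributed across network sites;
// this sketch keeps everything in one process to show only the structure.
public class GraphObject {
    private final String name;
    private final Map<String, GraphObject> relations = new HashMap<>();

    public GraphObject(String name) { this.name = name; }

    // Directed relation: this object points at 'target' under the label 'relation'.
    public void link(String relation, GraphObject target) {
        relations.put(relation, target);
    }

    // Deliver a message locally; a distributed version would send it over the network.
    public void receive(String message, GraphObject sender) {
        System.out.println(name + " received \"" + message + "\" from " + sender.name);
    }

    // Send a message to the object reached through a named relation, if any.
    public void send(String relation, String message) {
        GraphObject target = relations.get(relation);
        if (target != null) target.receive(message, this);
    }

    public static void main(String[] args) {
        GraphObject room = new GraphObject("room");
        GraphObject avatar = new GraphObject("avatar");
        avatar.link("location", room);     // directed edge: avatar -> room
        avatar.send("location", "hello");  // the message travels along the relation
    }
}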

External links
• Interreality.org official site [2]

References
[1] http://interreality.org/
[2] http://interreality.org


Volunteer computing
Volunteer computing is a type of distributed computing in which computer owners donate their computing resources (such as processing power and storage) to one or more "projects".

History
The first volunteer computing project was the Great Internet Mersenne Prime Search, which was started in January 1996.[1] It was followed in 1997 by distributed.net. In 1997 and 1998 several academic research projects developed Java-based systems for volunteer computing; examples include Bayanihan,[2] Popcorn,[3] Superweb,[4] and Charlotte.[5] A similar concept is sideband computing, which lets users share their computing power while they are online.

The term "volunteer computing" was coined by Luis F. G. Sarmenta, the developer of Bayanihan. Volunteer computing is also appealing for global efforts on social responsibility, or Corporate Social Responsibility, as reported in a Harvard Business Review article [6] and as used in the Responsible IT forum.[7]

In 1999 the SETI@home and Folding@home projects were launched. These projects received considerable media coverage, and each one attracted several hundred thousand volunteers. Between 1998 and 2002, several companies were formed with business models involving volunteer computing; examples include Popular Power, Porivo, Entropia, and United Devices. In 2002, the Berkeley Open Infrastructure for Network Computing (BOINC) open-source project was founded, and it became the software running the largest public computing grid (World Community Grid) in 2007.[8]

Middleware for volunteer computing
The client software of the early volunteer computing projects consisted of a single program that combined the scientific computation and the distributed computing infrastructure. This monolithic architecture was inflexible; for example, it was difficult to deploy new application versions. More recently, volunteer computing has moved to middleware systems that provide a distributed computing infrastructure independently of the scientific computation. Examples include:
• The Berkeley Open Infrastructure for Network Computing (BOINC). BOINC is the most widely used middleware system and is currently used by the World Community Grid. It is open source (LGPL) and is developed by an NSF-funded research project located at the UC Berkeley Space Sciences Laboratory. It offers client software for Windows, Mac OS X, Linux, and other Unix variants.
• XtremWeb is used primarily as a research tool. It is developed by a group based at the University of Paris-South.
• Xgrid is developed by Apple. Its client and server components run only on Mac OS X.
• Grid MP is a commercial middleware platform developed by United Devices; it has been used in volunteer computing projects including grid.org, World Community Grid, Cell Computing, and Hikari Grid.
Most of these systems have the same basic structure: a client program runs on the volunteer's computer. It periodically contacts project-operated servers over the Internet, requesting jobs and reporting the results of completed jobs. This "pull" model is necessary because many volunteer computers are behind firewalls that do not allow incoming connections. The system keeps track of each user's "credit", a numerical measure of how much work that user's computers have done for the project.
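The pull model can be illustrated with a minimal client loop. The sketch below is generic and assumption-laden: the server URL, the request and response formats, and the runJob computation are all invented placeholders, and real middleware such as BOINC uses its own, much richer protocol.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// A generic "pull"-style volunteer computing client: it repeatedly asks the project
// server for a job, computes a result locally, and reports the result back. All
// requests are outbound HTTP, so the scheme works behind firewalls that block
// incoming connections.
public class VolunteerClient {
    private static final String SERVER = "http://example.org/project";   // placeholder

    public static void main(String[] args) throws Exception {
        while (true) {
            String job = httpPost(SERVER + "/request_job", "host=demo");
            if (job.isEmpty()) {
                Thread.sleep(60_000);          // no work available: back off and retry
                continue;
            }
            String result = runJob(job);       // stands in for the scientific computation
            httpPost(SERVER + "/report_result", "job=" + job + "&result=" + result);
        }
    }

    // Placeholder for the real computation attached to a job description.
    private static String runJob(String job) {
        return Integer.toHexString(job.hashCode());
    }

    // Minimal HTTP POST helper returning the response body as a string.
    private static String httpPost(String url, String body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        StringBuilder response = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) response.append(line);
        }
        return response.toString();
    }
}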

Volunteer computing systems must deal with several problematic aspects of the volunteered computers: their heterogeneity, their churn (that is, the arrival and departure of hosts), their sporadic availability, and the need to not interfere with their performance during regular use. In addition, volunteer computing systems must deal with several problems related to correctness:
• Volunteers are unaccountable and essentially anonymous.
• Some volunteer computers (especially those that are overclocked) occasionally malfunction and return incorrect results.
• Some volunteers intentionally return incorrect results or claim excessive credit for results.
One common approach to these problems is "replicated computing", in which each job is performed on at least two computers. The results (and the corresponding credit) are accepted only if they agree sufficiently.

Costs for volunteer computing participants
• Increased power consumption: A CPU that is idle generally has lower power consumption than when it is active. Additionally, the desire to participate may cause the volunteer to leave the PC on overnight, or to disable power-saving features like suspend. Furthermore, if adequate cooling is not in place, this constant load on the volunteer's CPU can cause it to overheat. The increased power consumption can, however, be remedied to some extent by setting an option for the desired processor usage percentage, which is available e.g. in the BOINC client.
• Decreased performance of the PC: If the volunteer computing application attempts to run while the computer is in use, it will impact the performance of the PC. This is due to increased CPU contention, CPU cache contention, disk I/O contention, and network I/O contention. If RAM is a limitation, increased disk cache misses and/or increased paging can result. Volunteer computing applications typically execute at a lower CPU scheduling priority, which helps to alleviate CPU contention.[9]
These effects may or may not be noticeable, and even if they are noticeable, the volunteer might choose to continue participating.

References
[1] "GIMPS History" (http://mersenne.org/various/history.php).
[2] Sarmenta, L.F.G. (1998). "Bayanihan: Web-Based Volunteer Computing Using Java". Proc. of the 2nd International Conference on World-Wide Computing and its Applications (WWCA'98), Tsukuba, Japan, March 3-4, 1998. Lecture Notes in Computer Science 1368, Springer-Verlag, pp. 444-461.
[3] Regev, O.; Nisan, N. (October 25-28, 1998). "The POPCORN market - an online market for computational resources". Proceedings of the First International Conference on Information and Computation Economies. Charleston, South Carolina, United States: ACM Press. pp. 148-157.
[4] Alexandrov, A.D.; Ibel, M.; Schauser, K.E.; Scheiman (1996). "SuperWeb: Research issues in Java-Based Global Computing". Proceedings of the Workshop on Java for High Performance Scientific and Engineering Computing, Simulation and Modelling. New York: Syracuse University.
[5] Baratloo, A.; Karaul, M.; Kedem, Z.; Wyckoff, P. (Sept 1996). "Charlotte: Metacomputing on the Web" (http://citeseer.ist.psu.edu/article/baratloo96charlotte.html). Proceedings of the 9th International Conference on Parallel and Distributed Computing Systems.
[6] Porter, Michael; Kramer, Mark. "The Link Between Competitive Advantage and Corporate Social Responsibility" (http://harvardbusinessonline.hbsp.harvard.edu/email/pdfs/Porter_Dec_2006.pdf). Harvard Business Review.
[7] "ResponsI.TK" (http://www.responsI.tk). Responsible IT forum.
[8] BOINC Migration Announcement (http://www.worldcommunitygrid.org/forums/wcg/viewthread?thread=15715)
[9] "Measuring Folding@Home's performance impact" (http://techreport.com/articles.x/4341/1).

External links
• Wanted: Your computer's spare time (http://www.physics.org/featuredetail.asp?id=38), physics.org, September 2009

Toddst1. TakuyaMurata. Fredrik. Karmiq. Mary quite contrary. Spf2. KarlKarlson. Discospinster. Another-anomaly. Alansohn. J Milburn. Dontrustme. Miym. Anwar saadat. Trainor. Da monster under your bed. Gino chariguan. Duckbill. Bjh21. Roaming. DocendoDiscimus. Gfoley4. IvanLanin. Fishnet37222. Matt Deres. Allan McInnes. Frap. Jmarcelo95. TwoOneTwo. Epatrocinio. Epolk. Tangotango. Swillison. RedWolf. El Baby. Squideshi. Martin451. Henriok.php?oldid=447344005  Contributors: 12 Noon.64. Arto B. Rjwilmsi. Nikai. J. Anr. OrbitalAnalyst. SymlynX. K25125. Fireaxe888. Chowbok. Samuel Curtis. Iamfscked. Długosz. Dgies. Jasper Deng. Ericoides. VB. Hede2000.php?oldid=438879586  Contributors: Elkman. Oleg Alexandrov. Ehn.org/w/index. Kuru. Southen. Peyre. Colonies Chris. Beetstra. Wernher. List of marijuana slang terms. Plest. SimonP. 161 . Wolph. Moxfyre. Arkanosis. Mifam. AlienZen. KathrynLybarger. Metapsyche. Heron. Ojw. JteB. Ixfd64. Marangog. お む こ さ ん 志 望 . Mortense. Sbtourist. Xalfor. Pearle. Iridescent. Scott McNay. Szopen. JamesBondMI6. Romanm. Barrylb. Arakunem. Belovedfreak. Pearle. Antonielly. D6. Svjson. Pcap. Gbleem. Ehn. Bk0.94. Vsm01. Simetrical. Miym. Platyk. Hello Control. Racklever. Miym. Kbdank71. ST47. Jbaxt7. 32 anonymous edits Parasitic computing  Source: http://en. Frieda. Johndelorean. Tnm8. Dekisugi. Quietust. Devgus. Bodhran. 16 anonymous edits Supercomputer  Source: http://en.wikipedia. James086. Rgamble. Frap. Kjkolb. Aufidius. Kbtarc. Nick Drake. RxS. RichardVeryard. MC10. Boul22435.wikipedia. Rjwilmsi.org/w/index. Bryan Derksen. GiM. Gatemansgc. Ttiotsw. Rainald62. Philipp Weis. Rainer Wasserfuhr. Mchu amd. Тиверополник. Torla42. Ryansca. Wikibofh.org/w/index. Nojhan. Miym. Alcachi. Rhobite.wikipedia. ClementSeveillac. Vroman. み れ で ぃ ー . Jaybuffington. Methedras. Elcombe2000. Gz33. Aumakua. Jonah Stein. Arvindn. Rama's Arrow. Radagast83. SteveSims. Ludraman. CSWarren. Mojska. Torqueing. Corvus cornix. Myscrnnm. Voidxor. Dipskinny. Rosiestep. Nakon. Sharon08tam. T. Rich Farmbrough. UltraMuffin. Stephenb. Igottalisp. Stoakron97. Mjr162006. Roy da Vinci. Nuno Tavares. Doctorevil64. IanBrock. Hervegirod. BobM. Shell Kinney. LokiiT. Tim1357. Cooperised. Bergin. Тиверополник. Hibernian. Cdamama. Aldie. Patstuart. Edward. Everyking. Calaka. Nneonneo.php?oldid=368256529  Contributors: Andreas Kaufmann.php?oldid=429896483  Contributors: Bpeel. Somatrix.php?oldid=438526958  Contributors: Abdull.moulavi. Sonicology. Donreed. Koavf. Ww. Abce2. Vinceouca.us. Cometstyles.org/w/index. Ancheta Wis. Old Death. Samrawlins. Miym. Jasper Chua. Randhirreddy. Poccil. Linuxbeak. Chanakyathegreat. Ms. Heron. MrOllie. Meteshjj.org/w/index. RJaguar3. Chuunen Baka. New guestentry. FrummerThanThou. Mihaigalos. Fuhghettaboutit.wikipedia. Arthur a stevens. Suruena. RexNL.wikipedia. Mwtoews. Lightmouse. RJASE1. Miym. Jorfer. Ultimatewisdom. Infrangible. Muijz. Vanished user 39948282. Wang. Nono64. Thatdumisdum. Shaw SANAR. Finchsnows. Bender235. Joffeloff.wikipedia. MrOllie.org/w/index. Maycrow. AnonGuy. Dck7777. Ahoerstemeier. LinaMishima. LilHelpa. 2 anonymous edits Shared memory  Source: http://en. Ali. Jaqiefox. Conversion script.. Keraunos. Propaniac. Modify. Yakudza. Szopen. Laug. EdMcMahon. Harman malhotra. Guanaco. Tim@. 15 anonymous edits Request Based Distributed Computing  Source: http://en. Rich Farmbrough. X42bn6. Ryanaxp.253. Philip Trueman. DanielSHaischt. Rwwww. Jvs. PeterBrian. Th1rt3en. Owenozier. 2 anonymous edits RM-ODP  Source: http://en. Jschwa1. Editor4567. Elvarg. Imroy. 
Marj Tiefert. Hu12. Shandris. Violetriga. D.delanoy. Tawker. Soumya92. Mmernex. Zachary. Eleckyt. Kristiewells. Er Komandante. Cyrius. Esap. Jayen466.php?oldid=447465905  Contributors: Aaaidan. Rjwilmsi. Sorenriise. Maddiekate. Thadius856. Wavelength. Topbanana. Nickg.org/w/index. Maxim. RadiantRay. Henry Robinson. Damian Yerrick.org/w/index. Oldhamlet. Chuck Marean. Capricorn42. Jpbowen. Seegoon. Sleske. Avi4now. JonHarder. Jeffshantz. TexasAndroid. Andymrhodes. Hashproduct. Qrex123. Jehochman. JimParkerRogers. Pohl. Vivek prakash81.php?oldid=385961828  Contributors: BD2412. Danbert8. Mannjc. Geoff97. Vaceituno. Sean D Martin. Peyna. Qwertyus. 50 anonymous edits Utility computing  Source: http://en. Kleinheero. Khalid hassani. JH-man. JHunterJ. Jpahullo. Dmuth. Akadruid. Sukiari. History2007. DancingPenguin. Pion. Raryel. MJSkia1. MER-C. CredoFromStart. Pretzelpaws. Newone. Manop. Dyl. Miami33139.php?oldid=396957096  Contributors: AntonioVallecillo. TheCoffee. Grzegorz Dubicki. Grimey109. 1137 anonymous edits Terrastore  Source: http://en.org/w/index. Heath. Katieh5584. Alchemist Jack. Afskymonkey. SCOnline. Poohneat. RichardVeryard. Lulzfish. Nutiketaiel. VictorianMutant. Kubanczyk. MIT Trekkie. IanOsgood. Davidweiner23. Jedonnelley. DARTH SIDIOUS 2. Nirvana888. Ekashp. 40 anonymous edits Paradiseo  Source: http://en. Intgr. Bonadea. Mani1. T-bonham. Beland. Thumperward. Krtek2125.

php?oldid=440045994  Contributors: Bovineone. Ronz. Rich Farmbrough. SpigotMap. Posix memalign. Miym. ShellyT123. 18 anonymous edits 162 . The Anome.wikipedia. Skysmith. Softguyus.php?oldid=332950032  Contributors: ArthurDenture. FatalError. Soumyasch. Shenme. Davepape. Dlrohrer2003. AzzAz. Royalguard11.wikipedia.Article Sources and Contributors Roman Doroshenko. CeciliaPang. Miym. Balrog-kun. ReedHedges. Wmahan. Snrjefe. Pearle.php?oldid=434581986  Contributors: Avalon. Chip Zero. Verbamundi. Guy Harris. Soggyc. Suyambuvel. Rich Farmbrough. SamJohnston.org/w/index. Mild Bill Hiccup. 89 anonymous edits Virtual Machine Interface  Source: http://en. Thumperward.org/w/index. Shire Reeve. MathieuDutourSikiric. StoneIsle. Wojteklw. UncleDouggie. Marvinandmilo. Paul Foxworthy. Rare4. Ycagen. Tobias Bergemann.wikipedia. 6 anonymous edits Volunteer computing  Source: http://en. Weregerbil. Rwwww. GraemeMcRae. SteveLoughran. Inc ru. THB. 4 anonymous edits Virtual Object System  Source: http://en. Tlausser.org/w/index. Bluemask. Salad Days. Softtest123. RodneyMyers. Licor.

Yaleks Image:IBM HS20 blade server.png  Source: http://en.5  Contributors: The picture shows the PlanetSim layered architecture.wikipedia.png  Source: http://en.svg  License: Creative Commons Attribution-Share Alike  Contributors: Ludovic.php?title=File:BlueGeneL_cabinet. See log.png  License: Public Domain  Contributors: Image:Fabric computing.gif  License: Creative Commons Attribution 3.org/w/index.org/w/index.gif  Source: http://en.php?title=File:Client-server-model.org/w/index.svg  License: GNU Free Documentation License  Contributors: Sam Johnston Image:Fragmented object.org/w/index.wikipedia.svg  Source: http://en.org/w/index.svg  License: GNU Lesser General Public License  Contributors: Gnome-fs-client.org/w/index.wikipedia.png  License: Public Domain  Contributors: Original uploader was Sjschmid at en.php?title=File:PoweredMongoDBbrown66. Image:Symphony 1000 random.wikipedia.wikipedia. PlanetSim was developed within the research project Planet (http://planet.org/w/index.wikipedia.php?title=File:RM-ODP_viewpoints.org/w/index.jpg  License: Creative Commons Attribution-Sharealike 3.php?title=File:Network_Overlay_merged.php?title=File:Chord_1000_random. Image:PlanetsimArchitecture.php?title=File:Definition_of_a_Live_Distributed_Object.php?title=File:Distributed_Memory.svg  License: Public Domain  Contributors: Various.php?title=File:Processor_families_in_TOP500_supercomputers.wikipedia.Image Sources.) Image:Distributed Memory.jpg  License: GNU Free Documentation License  Contributors: Raul654.wikipedia.org/w/index.org/w/index.wikipedia.org/w/index.wikipedia.png  Source: http://en.org/w/index.svg  License: Creative Commons Attribution-Share Alike  Contributors: Ludovic.org/w/index.php?title=File:Planetsimlogo.png  Source: http://en.gif  Source: http://en.php?title=File:Distributed_object_communication.wikipedia.org/w/index.org/w/index.wikipedia File:Network Overlay merged.php?title=File:Supercomputing-rmax-graph. Licenses and Contributors Image:ALSP.php?title=File:Definition_of_a_Distributed_Data_Flow.wikipedia.org Image:Definition of a Distributed Data Flow.0  Contributors: Damien Katz File:Couchdb screenshot.php?title=File:IBM_HS20_blade_server.PNG  License: Creative Commons Attribution 3.org/w/index.php?title=File:Operating_systems_used_on_top_500_supercomputers.org/w/index. Records Management/Media Services and Operations Image:Processor families in TOP500 supercomputers.org/w/index.php?title=File:ALSP.wikipedia.wikipedia. Original uploader was Bartledan at en.gif  License: Creative Commons Attribution 3.svg  Source: http://en.php?title=File:Network_Overlay.0  Contributors: LokiiT File:ArchitectureCloudLinksSameSite.php?title=File:Cray-1-deutsches-museum.php?title=File:Couchdb-logo.jpg  License: Public Domain  Contributors: BClemente Image:AutonomicSystemModel.jpg  Source: http://en.svg  Source: http://en.5  Contributors: Clemens PFEIFFER Image:BlueGeneL cabinet.wikipedia File:Client-server-model.es).org/w/index.svg: David Vignoni derivative work: Calimo (talk) Image:Couchdb-logo.png by Duesentrieb.wikipedia. which was based on Image:Red copyright. PlanetSim was developed within the research project Planet (http://planet.5  Contributors: The picture is the results for a 1000-node Symphony network.png  Source: http://en.0  Contributors: Krzys ostrowski File:PoweredMongoDBbrown66.urv. 
License

Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/

