
Computer cluster

Not to be confused with data cluster or computer lab.
“Cluster computing” redirects here. For the journal, see
Cluster Computing (journal).
A computer cluster consists of a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software.[1]

The components of a cluster are usually connected to each other through fast local area networks ("LAN"), with each node (computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware[2] and the same operating system, although in some setups (e.g. using Open Source Cluster Application Resources (OSCAR)), different operating systems can be used on each computer, and/or different hardware.[3]

Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.[4]

Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing. They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as IBM's Sequoia.[5]

1 Basic concepts

[Image: Technicians working on a large Linux cluster at the Chemnitz University of Technology, Germany]

[Image: Sun Microsystems Solaris Cluster]


[Image: A simple, home-built Beowulf cluster]

The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial
off-the-shelf computers has given rise to a variety of architectures and configurations.
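As a first concrete taste of what "working together as a single system" means in practice, consider a cluster programmed with the Message Passing Interface (MPI, discussed later in this article). The following minimal C program is an illustrative sketch, not taken from this article; mpicc and mpirun are the conventional build and launch commands, though details vary by MPI implementation. Every node runs the same binary, and the MPI runtime assigns each process a rank, which is what lets many machines be treated as one computing unit:

/* hello_mpi.c - build: mpicc hello_mpi.c -o hello_mpi
 *               run:   mpirun -np 4 ./hello_mpi           */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* join the parallel job */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* processes in the job */
    MPI_Get_processor_name(name, &name_len); /* node we are running on */

    printf("rank %d of %d on node %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}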

The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast local area network.[6] The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.[6]

Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer-to-peer or grid computing, which also use many nodes but with a far more distributed nature.[6]

A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster, which may be built with a few personal computers to produce a cost-effective alternative to traditional high performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer.[7] The developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a relatively low cost.[8]

Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. The TOP500 organization's semiannual list of the 500 fastest supercomputers often includes many clusters; e.g. the world's fastest machine in 2011 was the K computer, which has a distributed memory, cluster architecture.[9][10]

2 History

Main article: History of computer clusters
See also: History of supercomputing

[Image: A VAX 11/780, c. 1977]

Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup.[11] Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law.

The history of early computer clusters is more or less directly tied into the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster.

The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation.

The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977 and using ARCnet as the cluster interface. Clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VAX/VMS operating system (now named OpenVMS). The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem Himalayan (a circa 1994 high-availability product) and the IBM S/390 Parallel Sysplex (also circa 1994, primarily for business use).

Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use it within the same computer. Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976 and introduced internal parallelism via vector processing.[12] While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) relied on cluster architectures.[13]
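Amdahl's Law, cited above, is worth stating explicitly, since it bounds what any cluster can achieve. The formula is the standard statement of the law, supplied here for convenience; the article itself only names it. If a fraction p of a program's work can be parallelized and the remainder must run serially, then speeding the parallel portion up by a factor s (for example, by spreading it across cluster nodes) gives an overall speedup of

\[
S(s) = \frac{1}{(1-p) + \frac{p}{s}},
\qquad\text{so}\qquad
\lim_{s \to \infty} S(s) = \frac{1}{1-p}.
\]

For instance, with p = 0.95, even an arbitrarily large cluster yields at most a 20x speedup; the serial 5% dominates.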

3 Attributes of clusters

[Image: A load balancing cluster with two servers and N user stations (labels in Galician)]

Computer clusters may be configured for different purposes ranging from general purpose business needs such as web-service support, to computation-intensive scientific calculations. In either case, the cluster may use a high-availability approach. Note that the attributes described below are not exclusive, and a "computer cluster" may also use a high-availability approach, etc.

"Load-balancing" clusters are configurations in which cluster nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized.[14] However, approaches to load balancing may significantly differ among applications; e.g. a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster, which may just use a simple round-robin method by assigning each new request to a different node.[14]

Computer clusters are used for computation-intensive purposes, rather than handling IO-oriented operations such as web service or databases. For instance, a computer cluster might support computational simulations of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "supercomputing".

"High-availability clusters" (also known as failover clusters, or HA clusters) improve the availability of the cluster approach. They operate by having redundant nodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure. There are commercial implementations of high-availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.

4 Benefits

Clusters are primarily designed with performance in mind, but installations are based on many other factors. Fault tolerance (the ability for a system to continue working with a malfunctioning node) allows for scalability, and in high performance situations, low frequency of maintenance routines, resource consolidation, and centralized management. Other advantages include enabling data recovery in the event of a disaster and providing parallel data processing and high processing capacity.[15][16]

5 Design and configuration

[Image: A typical Beowulf configuration]

One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. The other extreme is where a computer job uses one or few nodes, and needs little or no inter-node communication, approaching grid computing.

In a Beowulf system, the application programs never see the computational nodes (also called slave computers) but only interact with the "Master", which is a specific computer handling the scheduling and management of the slaves.[14] In a typical implementation the Master has two network interfaces, one that communicates with the private Beowulf network for the slaves, the other for the general purpose network of the organization. The slave computers typically have their own version of the same operating system, and local memory and disk space. However, the private slave network may also have a large and shared file server that stores global persistent data, accessed by the slaves as needed.[14]

By contrast, the special purpose 144-node DEGIMA cluster is tuned to running astrophysical N-body simulations using the Multiple-Walk parallel treecode, rather than general purpose scientific computations.[17]
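The round-robin policy mentioned in the load-balancing discussion above is simple enough to sketch in full. The following toy C program is illustrative only (node names and the request count are invented; a real dispatcher would also track node health and load):

/* round_robin.c - toy dispatcher illustrating round-robin assignment */
#include <stdio.h>

#define NUM_NODES 4

int main(void)
{
    const char *nodes[NUM_NODES] = { "web1", "web2", "web3", "web4" };
    int next = 0;                        /* node that gets the next request */

    for (int request = 0; request < 10; request++) {
        printf("request %d -> %s\n", request, nodes[next]);
        next = (next + 1) % NUM_NODES;   /* advance cyclically */
    }
    return 0;
}

Each new request simply goes to the next node in the cycle, which is why the method needs no knowledge of current node load.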

Due to the increasing computing power of each generation of game consoles, a novel use has emerged where they are repurposed into high-performance computing (HPC) clusters. Some examples of game console clusters are Sony PlayStation clusters and Microsoft Xbox clusters. Another example of a consumer game product is the Nvidia Tesla Personal Supercomputer workstation, which uses multiple graphics accelerator processor chips. Besides game consoles, high-end graphics cards can also be used. The use of graphics cards (or rather their GPUs) to do calculations for grid computing is vastly more economical than using CPUs, despite being less precise. However, when using double-precision values, they become as precise to work with as CPUs while still being much less costly in purchase price.[18]

Computer clusters have historically run on separate physical computers with the same operating system. With the advent of virtualization, the cluster nodes may run on separate physical computers with different operating systems, which are painted above with a virtual layer to look similar.[19] The cluster may also be virtualized on various configurations as maintenance takes place; an example implementation is Xen as the virtualization manager with Linux-HA.[20]

6 Data sharing and communication

6.1 Data sharing

[Image: A NEC Nehalem cluster]

As computer clusters were appearing during the 1980s, so were supercomputers. One of the elements that distinguished the three classes at that time was that the early supercomputers relied on shared memory. To date clusters do not typically use physically shared memory, while many supercomputer architectures have also abandoned it.

However, the use of a clustered file system is essential in modern computer clusters. Examples include the IBM General Parallel File System, Microsoft's Cluster Shared Volumes and the Oracle Cluster File System.

6.2 Message passing and communication

Main article: Message passing in computer clusters

Two widely used approaches for communication between cluster nodes are MPI, the Message Passing Interface, and PVM, the Parallel Virtual Machine.[21]

PVM was developed at the Oak Ridge National Laboratory around 1989, before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that paint the node as a "parallel virtual machine". PVM provides a run-time environment for message-passing, task and resource management, and fault notification. It can be used by user programs written in C, C++, or Fortran, etc.[21][22]

MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported by ARPA and the National Science Foundation. Rather than starting anew, the design of MPI drew on various features available in commercial systems of the time. The MPI specifications then gave rise to specific implementations, which typically use TCP/IP and socket connections.[22] MPI is now a widely available communications model that enables parallel programs to be written in languages such as C, Fortran, Python, etc.[21] Thus, unlike PVM, which provides a concrete implementation, MPI is a specification which has been implemented in systems such as MPICH and Open MPI.[21][23]
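To make the MPI model above concrete, here is a minimal point-to-point example in C; it is an illustrative sketch rather than anything from this article, and compiles against implementations such as MPICH or Open MPI (conventionally built with mpicc and launched with mpirun). Rank 0 sends one integer to rank 1:

/* ping.c - build: mpicc ping.c -o ping    run: mpirun -np 2 ./ping */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);                    /* join the parallel job */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* which process am I? */

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* to rank 1 */
        printf("rank 0 sent %d\n", value);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                          /* from rank 0 */
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

PVM expresses the same pairwise pattern through its own pack/send/receive library calls; the difference, as noted above, is that PVM is a concrete library installed on every node, while MPI is a specification with multiple implementations.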

7 Cluster management

[Image: Low-cost and low-energy tiny cluster of Cubieboards, using Apache Hadoop on Lubuntu]

One of the challenges in the use of a computer cluster is the cost of administrating it, which can at times be as high as the cost of administrating N independent machines if the cluster has N nodes.[24] In some cases this provides an advantage to shared memory architectures with lower administration costs.[24] This has also made virtual machines popular, due to the ease of administration.[24]

7.1 Task scheduling

When a large multi-user cluster needs to access very large amounts of data, task scheduling becomes a challenge. In a heterogeneous CPU-GPU cluster with a complex application environment, the performance of each job depends on the characteristics of the underlying cluster; therefore, mapping tasks onto CPU cores and GPU devices provides significant challenges.[25] This is an area of ongoing research; algorithms that combine and extend MapReduce and Hadoop have been proposed and studied.[25]

7.2 Node failure management

When a node in a cluster fails, strategies such as "fencing" may be employed to keep the rest of the system operational.[26][27] Fencing is the process of isolating a node or protecting shared resources when a node appears to be malfunctioning. There are two classes of fencing methods: one disables the node itself, and the other disallows access to resources such as shared disks.[26]

The STONITH method stands for "Shoot The Other Node In The Head", meaning that the suspected node is disabled or powered off. For instance, power fencing uses a power controller to turn off an inoperable node.[26]

The resources fencing approach disallows access to resources without powering off the node. This may include persistent reservation fencing via SCSI-3, fibre channel fencing to disable the fibre channel port, or global network block device (GNBD) fencing to disable access to the GNBD server.[26]

8 Software development and administration

8.1 Parallel programming

Load balancing clusters such as web servers use cluster architectures to support a large number of users, and typically each user request is routed to a specific node, achieving task parallelism without multi-node cooperation, given that the main goal of the system is providing rapid user access to shared data. However, "computer clusters" which perform complex computations for a small number of users need to take advantage of the parallel processing capabilities of the cluster and partition "the same computation" among several nodes.[28]

Automatic parallelization of programs continues to remain a technical challenge, but parallel programming models can be used to effectuate a higher degree of parallelism via the simultaneous execution of separate portions of a program on different processors.[28][29]

8.2 Debugging and monitoring

The development and debugging of parallel programs on a cluster requires parallel language primitives as well as suitable tools such as those discussed by the High Performance Debugging Forum (HPDF), which resulted in the HPD specifications.[22][30] Tools such as TotalView were then developed to debug parallel implementations on computer clusters which use MPI or PVM for message passing.

The Berkeley NOW (Network of Workstations) system gathers cluster data and stores them in a database, while a system such as PARMON, developed in India, allows for the visual observation and management of large clusters.[31]

Application checkpointing can be used to restore a given state of the system when a node fails during a long multi-node computation.[31] This is essential in large clusters, given that as the number of nodes increases, so does the likelihood of node failure under heavy computational loads. Checkpointing can restore the system to a stable state so that processing can resume without having to recompute results.
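A minimal sketch of the checkpointing idea in C follows. It is illustrative only: the file name, state layout and interval are invented, and production systems coordinate checkpoints across all nodes rather than on one machine:

/* checkpoint_sketch.c - resume a long computation from its last checkpoint */
#include <stdio.h>

struct state { long iteration; double partial_sum; };

static int load_checkpoint(struct state *s)
{
    FILE *f = fopen("app.ckpt", "rb");
    if (!f) return 0;                       /* no checkpoint: start fresh */
    int ok = fread(s, sizeof *s, 1, f) == 1;
    fclose(f);
    return ok;
}

static void save_checkpoint(const struct state *s)
{
    FILE *f = fopen("app.ckpt.tmp", "wb");
    if (!f) return;
    fwrite(s, sizeof *s, 1, f);
    fclose(f);
    rename("app.ckpt.tmp", "app.ckpt");     /* write-then-rename for safety */
}

int main(void)
{
    struct state s = { 0, 0.0 };
    if (load_checkpoint(&s))
        printf("resuming at iteration %ld\n", s.iteration);

    for (; s.iteration < 1000000; s.iteration++) {
        if (s.iteration % 100000 == 0)
            save_checkpoint(&s);            /* state before this iteration */
        s.partial_sum += 1.0 / (s.iteration + 1);  /* stand-in for real work */
    }
    printf("done: partial_sum = %f\n", s.partial_sum);
    return 0;
}

After a failure, only the work since the last checkpoint is recomputed, which is the trade-off the text describes: more frequent checkpoints cost more I/O but bound the loss.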

9 Some implementations

The GNU/Linux world supports various cluster software; for application clustering, there is distcc and MPICH. Linux Virtual Server and Linux-HA are director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes. MOSIX, LinuxPMI, Kerrighed and OpenSSI are full-blown clusters integrated into the kernel that provide for automatic process migration among homogeneous nodes. OpenSSI, openMosix and Kerrighed are single-system image implementations.

Microsoft Windows Compute Cluster Server 2003, based on the Windows Server platform, provides pieces for high performance computing like the Job Scheduler, MSMPI library and management tools. Slurm is also used to schedule and manage some of the largest supercomputer clusters (see the TOP500 list).

gLite is a set of middleware technologies created by the Enabling Grids for E-sciencE (EGEE) project.

10 Other approaches

Although most computer clusters are permanent fixtures, attempts at flash mob computing have been made to build short-lived clusters for specific computations. However, larger-scale volunteer computing systems such as BOINC-based systems have had more followers.

11 See also

12 References

[1] Grid vs cluster computing
[2] Cluster vs grid computing
[3] Hardware of computer clusters does not always need to be the same; it probably depends on the software used.
[4] Bader, David; Pennington, Robert (June 1996). "Cluster Computing: Applications". Georgia Tech College of Computing. Retrieved 2007-07-13.
[5] "Nuclear weapons supercomputer reclaims world speed record for US". The Telegraph. 18 Jun 2012. Retrieved 18 Jun 2012.
[6] Network-Based Information Systems: First International Conference, NBIS 2007. ISBN 3-540-74572-6. page 375.
[7] William W. Hargrove, Forrest M. Hoffman and Thomas Sterling (August 16, 2001). "The Do-It-Yourself Supercomputer". Scientific American 265 (2). pp. 72–79. Retrieved October 18, 2011.
[8] William W. Hargrove and Forrest M. Hoffman (1999). "Cluster Computing: Linux Taken to the Extreme". Linux Magazine. Retrieved October 18, 2011.
[9] TOP500 list. To view all clusters on the TOP500, select "cluster" as architecture from the sublist menu.
[10] M. Yokokawa et al. The K Computer, in "International Symposium on Low Power Electronics and Design" (ISLPED), 1-3 Aug. 2011, pages 371–372.
[11] Pfister, Gregory (1998). In Search of Clusters (2nd ed.). Upper Saddle River, NJ: Prentice Hall PTR. p. 36. ISBN 0-13-899709-8.
[12] Readings in Computer Architecture by Mark Donald Hill, Norman Paul Jouppi, Gurindar Sohi. 1999. ISBN 978-1-55860-539-8. pages 41–48.
[13] High Performance Linux Clusters by Joseph D. Sloan. 2004. ISBN 0-596-00570-9.
[14] High Performance Computing for Computational Science - VECPAR 2004 by Michel Daydé, Jack Dongarra. 2005. ISBN 3-540-25424-2. pages 120–121.
[15] "IBM Cluster System: Benefits". IBM. Retrieved 8 September 2014.
[16] "Evaluating the Benefits of Clustering". Microsoft. 28 March 2003. Retrieved 8 September 2014.
[17] Hamada, T. et al. (2009). "A novel multiple-walk parallel algorithm for the Barnes–Hut treecode on GPUs – towards cost effective, high performance N-body simulation". Computer Science - Research and Development 24:21–31. doi:10.1007/s00450-009-0089-1.
[18] GPU options
[19] Using Xen
[20] Maurer, Ryan: Xen Virtualization and Linux Clustering. Linux Magazine.
[21] Distributed Services with OpenAFS: for Enterprise and Education by Franco Milicchio, Wolfgang Alexander Gehrke. 2007. pages 339–341.
[22] Grid and Cluster Computing by Prabhu. 2008. ISBN 8120334280. pages 109–112.
[23] Gropp, William; Lusk, Ewing; Skjellum, Anthony (1996). "A High-Performance, Portable Implementation of the MPI Message Passing Interface". Parallel Computing. CiteSeerX: 10.1.1.102.9485.
[24] Computer Organization and Design by David A. Patterson and John L. Hennessy. 2011. ISBN 0-12-374750-3. pages 641–642.
[25] K. Shirahata et al. "Hybrid Map Task Scheduling for GPU-Based Heterogeneous Clusters", in: Cloud Computing Technology and Science (CloudCom), Nov. 30 2010 - Dec. 3 2010, pages 733–740. ISBN 978-1-4244-9405-7.
[26] Alan Robertson. Resource fencing using STONITH. IBM Linux Research Center, 2010.
[27] Sun Cluster Environment: Sun Cluster 2.2 by Enrique Vargas, Joseph Bianco, David Deeths. 2001. page 58.
[28] Computer Science: The Hardware, Software and Heart of It by Alfred V. Aho, Edward K. Blum. 2011. ISBN 1-4614-1167-X. pages 156–166.
[29] Parallel Programming: For Multicore and Cluster Systems by Thomas Rauber, Gudula Rünger. 2010. ISBN 3-642-04817-X. pages 94–95.
[30] "A debugging standard for high-performance computing" by Joan M. Francioni and Cherri M. Pancake. Journal of Scientific Programming, Volume 8 Issue 2, April 2000.
[31] Computational Science - ICCS 2003: International Conference, edited by Peter Sloot. 2003. ISBN 3-540-40195-4. pages 291–292.

13 Further reading

• Mark Baker et al., Cluster Computing White Paper, 11 Jan 2001.
• Evan Marcus, Hal Stern: Blueprints for High Availability: Designing Resilient Distributed Systems. John Wiley & Sons. ISBN 0-471-35601-8.
• Greg Pfister: In Search of Clusters. Prentice Hall. ISBN 0-13-899709-8.
• Rajkumar Buyya (editor): High Performance Cluster Computing: Architectures and Systems, Volume 1, ISBN 0-13-013784-7, and Volume 2, ISBN 0-13-013785-5. Prentice Hall, NJ, USA, 1999.

14 External links

• IEEE Technical Committee on Scalable Computing (TCSC)
• Reliable Scalable Cluster Technology, IBM
• Tivoli System Automation Wiki
• Large-scale cluster management at Google with Borg, April 2015, by Abhishek Verma, Luis Pedrosa, Madhukar Korupolu, David Oppenheimer, Eric Tune and John Wilkes
