Organizations that cannot or do not want to move their IT to public Cloud infrastructures can still benefit from Cloud computing by deploying the same technologies in-house. This has given birth to the private Cloud, as opposed to the public Cloud, as a model of Cloud IT service delivery. In 2010, for example, the U.S. federal government, one of the world's largest consumers of IT (spending around $76 billion on more than 10,000 systems), started a Cloud computing initiative aimed at making better use of its computing facilities. The adoption of private Clouds is also driven by the need to keep confidential information within the organization's premises; institutions with such requirements prefer to build and use their own private, or enterprise, Clouds. Whenever private Cloud resources are unable to meet users' quality-of-service requirements, such as a deadline, hybrid computing systems are created: these are partially composed of public Cloud resources and partially of privately owned infrastructures, and they serve the organization's needs. They are often referred to as hybrid Clouds and are becoming a common way for many stakeholders to start exploring the possibilities offered by Cloud computing.

1.1.4 Cloud-Computing Reference Model

A fundamental characteristic of Cloud computing is the capability of delivering, on demand, a variety of IT services that are quite diverse from each other. This variety creates different perceptions of what Cloud computing is among users. Despite this, it is possible to classify Cloud computing service offerings into three major categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). These categories are related to each other as described in Fig. 1.5, which provides an organic view of Cloud computing. We refer to this diagram as the "Cloud-Computing Reference Model", and we will use it throughout the book to explain the technologies and introduce the relevant research on this phenomenon. The model organizes the wide range of Cloud computing services into a layered view that walks the computing stack from bottom to top.

Fig. 1.5. Cloud-Computing Reference Model. (Software-as-a-Service, exposed through Web 2.0 interfaces: end-user applications such as scientific applications, office automation, photo editing, CRM, and social networking; examples: Google Documents, Facebook, Flickr, Salesforce. Platform-as-a-Service: runtime environments for applications, development and data-processing platforms; examples: Windows Azure, Hadoop, Google AppEngine, Aneka. Infrastructure-as-a-Service: virtualized servers, storage, and networking; examples: Amazon EC2, S3, RightScale, vCloud.)

At the base of the stack, Infrastructure-as-a-Service solutions deliver infrastructure on demand in the form of virtual hardware, storage, and networking. Virtual hardware is utilized to provide compute on demand in the form of virtual machine instances. These are created on the user's request on the provider's infrastructure, and the user is given tools and interfaces to configure the software stack installed in the virtual machine. The pricing model is usually defined in terms of dollars per hour, where the hourly cost is influenced by the characteristics of the virtual hardware. Virtual storage is delivered in the form of raw disk space or object store: the former complements a virtual hardware offering that requires persistent storage, while the latter is a higher-level abstraction for storing entities rather than files. Virtual networking identifies the collection of services that manage the networking among virtual instances and their connectivity towards the Internet or private networks.
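As a concrete illustration of the pay-per-hour IaaS pricing model just described, the following sketch estimates the monthly cost of a small deployment. It is a minimal example with hypothetical instance types and hourly rates, not the pricing of any real provider.

```python
# Minimal sketch of IaaS pay-per-hour billing.
# The instance types and hourly rates below are hypothetical examples,
# not real provider prices.

HOURLY_RATES = {
    "small": 0.05,   # dollars per hour (assumed)
    "medium": 0.10,
    "large": 0.40,
}

def monthly_cost(instances, hours_per_month=730):
    """Estimate the monthly bill for a dict {instance_type: count}."""
    total = 0.0
    for instance_type, count in instances.items():
        total += HOURLY_RATES[instance_type] * count * hours_per_month
    return total

if __name__ == "__main__":
    deployment = {"small": 4, "large": 1}
    print(f"Estimated monthly cost: ${monthly_cost(deployment):.2f}")
```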
Platform-as-a-Service solutions are the next step in the stack. They deliver scalable and elastic runtime environments on demand that host the execution of applications. These services are backed by a core middleware platform that is responsible for creating the abstract environment where applications are deployed and executed. It is the responsibility of the service provider to provide scalability and to manage fault tolerance, while users are requested to focus on the logic of the application, developed by leveraging the provider's APIs and libraries. This approach increases the level of abstraction at which Cloud computing is leveraged, but it also constrains the user within a more controlled environment.

At the top of the stack, Software-as-a-Service solutions provide applications and services on demand. Most of the common functionalities of desktop applications—such as office automation, document management, photo editing, and customer relationship management (CRM) software—are replicated on the provider's infrastructure, made more scalable, and made accessible through a browser on demand. These applications are shared across multiple users, whose interactions are isolated from one another. The SaaS layer is also the home of social networking Web sites, which leverage Cloud-based infrastructures to sustain the load generated by their popularity.

Each layer provides a different service to users. IaaS solutions are sought by users who want to leverage Cloud computing for building dynamically scalable computing systems that require a specific software stack; IaaS services are therefore used to develop scalable Web sites or for background processing. PaaS solutions provide scalable programming platforms for developing applications and are more appropriate when new systems have to be developed. SaaS solutions target mostly end users who want to benefit from the elastic scalability of the Cloud without doing any software development, installation, configuration, or maintenance. This solution is appropriate when there are existing SaaS services that fit the user's needs (e.g., email, document management, CRM) and only a minimum level of customization is needed.

1.1.5 Characteristics and Benefits

Cloud computing has some interesting characteristics that bring benefits to both Cloud Service Consumers (CSCs) and Cloud Service Providers (CSPs). They are:

* no upfront commitments;
* on-demand access;
* nice pricing;
* simplified application acceleration and scalability;
* efficient resource allocation;
* energy efficiency; and
* seamless creation and use of third-party services.

The most evident benefit of the use of Cloud computing systems and technologies is the increased economic return due to the reduced maintenance and operational costs related to IT software and infrastructure. This is mainly because IT assets, namely software and infrastructure, are turned into utility costs, which are paid for as long as they are used rather than upfront. Capital costs are costs associated with assets that need to be paid for in advance to start a business activity. Before Cloud computing, IT infrastructure and software generated capital costs, since they were paid upfront to afford a computing infrastructure enabling the business activities of an organization. The revenue of the business is then utilized to compensate over time for these costs. Organizations always try to minimize capital costs, since they are often associated with depreciable values.
This is the case for hardware: a server bought today for 1,000 dollars will have a market value lower than its original price when it is eventually replaced by new hardware. To make a profit, organizations also have to compensate for this depreciation over time.

Cloud computing also helps service providers. It is no longer necessary to belong to only one of the two categories of provider and consumer: a Cloud provider might itself become a consumer of a competitor's services in order to fulfill its promises to customers. These are all possibilities introduced by the establishment of a global Cloud marketplace and by the definition of effective standards for the unified representation of Cloud services and for the interaction among different Cloud technologies. The existence of a global platform also makes services visible, and therefore potentially increases the trade between service consumers and providers. A considerable shift towards Cloud computing has already been registered, and its rapid adoption facilitates consolidation: by concentrating the core capabilities of Cloud computing into large datacenters, it is possible to reduce or remove the need for any technical infrastructure on the service consumer's side. This approach creates opportunities for optimizing datacenter facilities and fully utilizing their capabilities to serve multiple users. This consolidation model reduces the waste of energy and carbon emissions, thus contributing to greener IT on one end, and increases revenue on the other end.

1.1.2 Defining a Cloud

Cloud computing has become a popular buzzword, and it has been widely used to refer to different technologies, services, and concepts. It is often associated with virtualized infrastructure or hardware on demand, utility computing, IT outsourcing, platform and software as a service, and many other things that are now the focus of the IT industry. Figure 1.2 depicts the plethora of different notions one comes across when defining Cloud computing.

Fig. 1.2. Cloud-Computing Technologies, Concepts, and Ideas.

The term "Cloud" has historically been used in the telecommunication industry as an abstraction of the network in system diagrams. It then became the symbol of the most popular computer network: the Internet. This meaning also applies to Cloud computing, which refers to an Internet-centric way of doing computing. The Internet plays a fundamental role in Cloud computing, since it represents either the medium or the platform through which many Cloud computing services are delivered and made accessible. This aspect is also reflected in the definition given by Armbrust et al. [2]:

"Cloud computing refers to both the applications delivered as services over the Internet, and the hardware and system software in the datacenters that provide those services."

This definition describes Cloud computing as a phenomenon touching the entire stack: from the underlying hardware to the high-level software services and applications. It introduces the concept of everything as a service, mostly referred to as XaaS², where the different components of a system can be delivered, measured, and consequently priced as a service: IT infrastructure, development platforms, databases, and so on. This new approach significantly influences not only the way in which we build software, but also the way in which we deploy it, make it accessible, and design our IT infrastructure, and even the way in which companies allocate the costs for IT needs.
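To connect the depreciation example above with the pay-per-use model that XaaS implies, the following sketch contrasts an upfront hardware purchase (a capital cost depreciated over time) with renting equivalent capacity by the hour. All figures are hypothetical and only illustrate the shape of the comparison.

```python
# Hypothetical comparison of capital cost vs. pay-per-use (utility) cost.
# Numbers are illustrative only; real prices vary by provider and hardware.

SERVER_PRICE = 1000.0        # upfront purchase price (capital cost)
LIFETIME_YEARS = 3           # period over which the server is depreciated
HOURLY_RENTAL = 0.05         # assumed cost of renting a comparable VM per hour

def capital_cost_per_month():
    """Straight-line depreciation of the purchased server."""
    return SERVER_PRICE / (LIFETIME_YEARS * 12)

def utility_cost_per_month(hours_used):
    """Pay only for the hours the rented instance actually runs."""
    return HOURLY_RENTAL * hours_used

if __name__ == "__main__":
    for hours in (100, 400, 730):  # light, moderate, and always-on usage
        print(f"{hours:4d} h/month -> owned: ${capital_cost_per_month():.2f}, "
              f"rented: ${utility_cost_per_month(hours):.2f}")
```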
The approach fostered by Cloud computing is global: it covers both the needs of a single user hosting documents in the Cloud and those of a CIO deciding to deploy part of, or the entire, IT infrastructure in the public Cloud. This notion of multiple parties using a shared Cloud computing environment is highlighted in the definition proposed by the American National Institute of Standards and Technology (NIST):

"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."

Another important aspect of Cloud computing is its utility-oriented approach. More than any other trend in distributed computing, Cloud computing focuses on delivering services with a given pricing model, in most cases a "pay-per-use" strategy. It makes it possible to access online storage, rent virtual hardware, or use development platforms and pay only for their effective usage, with no or minimal upfront costs. All these operations can be performed and billed simply by entering one's credit card details and accessing the exposed services through a Web browser. This helps us provide a different and more practical characterization of Cloud computing. According to Reese [29], we can define three criteria to discriminate whether a service is delivered in the Cloud computing style:

* The service is accessible via a Web browser (non-proprietary) or a Web services API.
* Zero capital expenditure is necessary to get started.
* You pay only for what you use as you use it.

Even though many Cloud computing services are freely available to single users, enterprise-class services are delivered according to a specific pricing scheme. In this case, users subscribe to the service and establish with the service provider a Service Level Agreement (SLA) defining the quality-of-service parameters under which the service is delivered. The utility-oriented nature of Cloud computing is clearly expressed by Buyya et al. [30]:

"A Cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers."

² XaaS is an acronym standing for X-as-a-Service, where the letter X can be replaced by almost anything: S for software, P for platform, I for infrastructure, H for hardware, D for database, and so on.

1.2.1 Distributed Systems

Clouds are essentially large distributed computing facilities that make their services available to third parties on demand. As a reference, we consider the characterization of a distributed system proposed by Tanenbaum et al. [1]:

"A distributed system is a collection of independent computers that appears to its users as a single coherent system."

This is a general definition that includes a variety of computer systems, but it evidences two very important elements characterizing a distributed system: the fact that it is composed of multiple independent components, and the fact that these components are perceived as a single entity by users. This is particularly important in the case of Cloud computing,
where Clouds hide the complex architecture they rely on and provide a single interface to users. The primary purpose of distributed systems is to share resources and utilize them better. This is true in the case of Cloud computing, where the concept is taken to the extreme: resources (infrastructure, runtime environments, and services) are rented to users. In fact, one of the driving factors of Cloud computing has been the availability of the large computing facilities of IT giants (Amazon, Google, etc.), which found that offering their computing capabilities as a service provided an opportunity to better utilize their infrastructure. Distributed systems often exhibit other properties such as heterogeneity, openness, scalability, transparency, concurrency, continuous availability, and independent failures. To some extent, these also characterize Clouds, especially in the context of scalability, concurrency, and continuous availability.

Three major milestones have led to Cloud computing: mainframe computing, cluster computing, and Grid computing.

(a) Mainframes. These were the first examples of large computational facilities leveraging multiple processing units. Mainframes were powerful, highly reliable computers specialized for large data movement and massive I/O operations. They were mostly used by large organizations for bulk data processing, such as online transactions, enterprise resource planning, and other operations involving the processing of significant amounts of data. Even though mainframes cannot be considered distributed systems, they offered large computational power by using multiple processors, which were presented as a single entity to users. One of the most attractive features of mainframes was their ability to be highly reliable computers that were "always on" and capable of tolerating failures transparently. No system shutdown was required to replace failed components, and the system could work without interruption. Batch processing was the main application of mainframes. Today their popularity and deployments have reduced, but evolved versions of such systems are still in use for transaction processing (e.g., online banking, airline ticket booking, supermarket and telco operations, and government services).

(b) Clusters. Cluster computing [3][4] started as a low-cost alternative to the use of mainframes and supercomputers. The technology advancement that created faster and more powerful mainframes and supercomputers eventually generated an increased availability of cheap commodity machines as a side effect. These machines could then be connected by a high-bandwidth network and controlled by specific software tools that manage them as a single system. Starting from the 1980s, clusters became the standard technology for parallel and high-performance computing. Being built from commodity machines, they were cheaper than mainframes and made high-performance computing available to a large number of groups, including universities and small research labs. Cluster technology contributed considerably to the evolution of tools and frameworks for distributed computing, some of which are Condor [5], Parallel Virtual Machine (PVM) [6], and Message Passing Interface (MPI) [7]. MPI, in particular, provides developers with a set of routines for point-to-point and collective communication among the processes of a parallel program; it has been designed for high performance and remains one of the dominant paradigms for developing parallel applications.
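As a small illustration of the message-passing model that MPI standardizes, the following sketch uses the third-party mpi4py package (assumed to be installed together with an MPI runtime) to exchange a message between two processes. It is a minimal example added for illustration, not part of the original text.

```python
# Minimal message-passing sketch using mpi4py (assumes an MPI runtime and the
# mpi4py package are installed). Run with: mpiexec -n 2 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # identifier of this process
size = comm.Get_size()   # total number of processes

if rank == 0:
    # Process 0 sends a small Python object to process 1 (point-to-point).
    comm.send({"greeting": "hello from rank 0"}, dest=1, tag=11)
    print(f"rank 0 of {size}: message sent")
elif rank == 1:
    data = comm.recv(source=0, tag=11)
    print(f"rank 1 of {size}: received {data}")
```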
(c) Grids. Grid computing [8] appeared in the early 1990s as an evolution of cluster computing. In an analogy with the power grid, Grid computing proposed a new approach to accessing large computational power, huge storage facilities, and a variety of services: users can "consume" resources in the same way as they use other utilities such as power, gas, and water. Grids were initially developed as aggregations of geographically dispersed clusters connected through the Internet. These clusters belonged to different organizations, and arrangements were made among them to share the computational power. Different from a "large cluster", a computing grid was a dynamic aggregation of heterogeneous computing nodes, and its scale was nationwide or even worldwide. Several reasons made the diffusion of computing grids possible: (i) clusters had become quite common resources; (ii) they were often under-utilized; (iii) new problems required computational power going beyond the capability of single clusters; and (iv) the improvements in networking and the diffusion of the Internet made long-distance, high-bandwidth connectivity possible. All these elements led to the development of grids, which now serve a multitude of users across the world.

Cloud computing is often considered the successor of Grid computing. In reality, it embodies aspects of all three of these major technologies. Computing Clouds are deployed on large datacenters hosted by a single organization that provides services to others. Clouds are characterized by virtually infinite capacity, tolerance to failures, and being always on, as in the case of mainframes. In many cases, the computing nodes that form the infrastructure of computing Clouds are commodity machines, as in the case of clusters. The services made available by a Cloud vendor are consumed on a pay-per-use basis, and Clouds fully implement the utility vision introduced by Grid computing.

1.2.2 Virtualization

The computing era started with developments in hardware architectures, which in turn enabled the creation of system software—particularly in the area of compilers and operating systems—which supported the management of such systems and the development of applications. The development of applications and systems is the major element of interest, and it reaches consolidation when problem-solving environments are designed and introduced to facilitate and empower engineers. This is when the paradigm characterizing a computing era achieves maturity and becomes mainstream. Moreover, every aspect of such an era undergoes a three-phase process: research and development (R&D), commercialization, and commoditization.

2.2 PARALLEL VS. DISTRIBUTED COMPUTING

The terms "parallel" computing and "distributed" computing are often used interchangeably, even though they mean slightly different things. The term "parallel" implies a tightly coupled system, whereas "distributed" refers to a wider class of systems, including those that are tightly coupled.

More precisely, the term "parallel computing" refers to a model in which the computation is divided among several processors sharing the same memory. The architecture of a parallel computing system is often characterized by the homogeneity of its components: each processor is of the same type and has the same capability as the others. The shared memory has a single address space, which is accessible to all the processors. Parallel programs are then broken down into several units of execution that can be allocated to different processors and that can communicate with each other by means of the shared memory.
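To make the shared-memory model concrete, here is a minimal Python sketch in which several worker processes write partial results into a single shared array. It is only an illustration of units of execution communicating through shared memory, not an excerpt from the text.

```python
# Minimal shared-memory sketch: worker processes store partial sums in a
# shared array, and the parent combines them.
from multiprocessing import Process, Array

def partial_sum(shared, slot, data):
    # Each worker writes its result into its own slot of the shared memory.
    shared[slot] = sum(data)

if __name__ == "__main__":
    numbers = list(range(1, 101))
    chunks = [numbers[0:25], numbers[25:50], numbers[50:75], numbers[75:100]]

    shared = Array('d', len(chunks))          # shared memory visible to all workers
    workers = [Process(target=partial_sum, args=(shared, i, chunk))
               for i, chunk in enumerate(chunks)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

    print("total =", sum(shared[:]))          # 5050
```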
Originally, only those architectures that featured multiple processors sharing the same physical memory were considered parallel systems, and such a system was regarded as a single computer. Over time, these restrictions have been relaxed, and parallel systems now include all architectures based on the concept of shared memory, whether this memory is physically present or created with the support of libraries, specific hardware, and a highly efficient networking infrastructure. For example, a cluster whose nodes are connected through an InfiniBand network and configured with a distributed shared memory system can be considered a parallel system.

The term "distributed computing" encompasses any architecture or system that allows the computation to be broken down into units and executed concurrently on different computing elements, whether these are processors on different nodes, processors on the same computer, or cores within the same processor. Therefore, it includes a wider range of systems and applications than parallel computing and is often considered the more general term. Even though it is not a rule, the term "distributed" often implies that the locations of the computing elements are not the same, and that such elements might be heterogeneous in terms of hardware and software features. Classic examples of distributed computing systems are computing Grids and Internet computing systems, which combine the biggest variety of architectures, systems, and applications in the world.

2.3 ELEMENTS OF PARALLEL COMPUTING

It is now clear that silicon-based processor chips are reaching their physical limits. Processing speed is constrained by the speed of light, and the density of transistors packaged in a processor is constrained by thermodynamic limitations. A viable solution to overcome this limitation is to connect multiple processors working in coordination with each other to solve "grand challenge" problems. The first steps in this direction led to the development of parallel computing, which encompasses techniques, architectures, and systems for performing multiple activities in parallel. As already discussed, the term "parallel computing" has blurred its edges with the term "distributed computing", and it is often used in place of the latter. In this section, we refer to its proper characterization, which involves the introduction of parallelism within a single computer by coordinating the activity of multiple processors together.

2.3.1 What is Parallel Processing?

Processing multiple tasks simultaneously on multiple processors is called parallel processing. A parallel program consists of multiple active processes (tasks) simultaneously solving a given problem. A given task is divided into multiple subtasks using a divide-and-conquer technique, and each subtask is processed on a different CPU. Programming on a multi-processor system using the divide-and-conquer technique is called parallel programming.

Many applications today require more computing power than a traditional sequential computer can offer. Parallel processing provides a cost-effective solution to this problem by increasing the number of CPUs in a computer and by adding an efficient communication system between them. The workload can then be shared between different processors. This results in higher computing power and performance than a single-processor system.
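The divide-and-conquer pattern just described can be sketched with Python's standard multiprocessing pool: the task is split into subtasks, each subtask runs in a different worker process, and the partial results are combined. This is an illustrative sketch, not text from the book.

```python
# Divide-and-conquer parallel processing sketch: split the input, process
# each chunk in a separate worker process, then combine the partial results.
from multiprocessing import Pool

def subtask(chunk):
    """Work performed on one subtask (here, squaring and summing a chunk)."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1, 1001))
    chunk_size = 250
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(processes=4) as pool:          # one worker per subtask
        partial_results = pool.map(subtask, chunks)

    print("sum of squares =", sum(partial_results))
```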
The development of parallel processing is being influenced by many factors. The prominent ones include the following:

* Computational requirements are ever increasing, both in the area of scientific computing and in business computing. The technical computing problems that require high-speed computational power are related to life sciences, aerospace, geographical information systems, mechanical design and analysis, and so on.
* Sequential architectures are reaching physical limitations, as they are constrained by the speed of light and the laws of thermodynamics. The speed at which sequential CPUs can operate is reaching a saturation point (no more vertical growth), and hence an alternative way to obtain high computational speed is to connect multiple CPUs (an opportunity for horizontal growth).
* Hardware improvements such as pipelining and superscalar execution are non-scalable and require sophisticated compiler technology. Developing such compiler technology is a difficult task.
* Vector processing works well for certain kinds of problems. It is suitable mostly for scientific problems (involving lots of matrix operations) and graphics processing, and it is not useful for other areas such as databases.
* The technology of parallel processing is mature and can be exploited commercially; there is already significant research and development (R&D) work on development tools and environments.
* Significant developments in networking technology are paving the way for heterogeneous computing.

2.3.2 Hardware Architectures for Parallel Processing

The core elements of parallel processing are CPUs. Based on the number of instruction and data streams that can be processed simultaneously, computing systems are classified into the following four categories:

* Single Instruction Single Data (SISD)
* Single Instruction Multiple Data (SIMD)
* Multiple Instruction Single Data (MISD)
* Multiple Instruction Multiple Data (MIMD)

1. Single Instruction Single Data (SISD)

A SISD computing system is a uniprocessor machine capable of executing a single instruction that operates on a single data stream (see Fig. 2.2). In SISD, machine instructions are processed sequentially, and hence computers adopting this model are popularly called sequential computers.

Fig. 2.2. Single Instruction Single Data (SISD) Architecture.
Most conventional computers are built using the SISD model. All the instructions and data to be processed have to be stored in primary memory. The speed of the processing element in the SISD model is limited by the rate at which the computer can transfer information internally. Dominant representative SISD systems are the IBM PC, the Macintosh, and workstations.

2. Single Instruction Multiple Data (SIMD)

A SIMD computing system is a multiprocessor machine capable of executing the same instruction on all the CPUs but operating on different data streams (see Fig. 2.3). Machines based on the SIMD model are well suited for scientific computing, since it involves lots of vector and matrix operations. For instance, a statement such as Ci = Ai * Bi can be passed to all the processing elements (PEs): the organized data elements of vectors A and B can be divided into multiple sets (N sets for an N-PE system), and each PE can process one data set. Dominant representative SIMD systems are Cray's vector processing machine and Thinking Machines' cm*.

Fig. 2.3. Single Instruction Multiple Data (SIMD) Architecture.
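The SIMD idea of applying one instruction to many data elements is what array libraries expose at the programming level. The following sketch, which assumes the third-party NumPy package is available, applies a single multiplication to whole vectors at once; it is an illustration added here, not part of the original text.

```python
# SIMD-style, data-parallel operation: one logical instruction (multiply)
# applied element-wise to whole vectors. Assumes NumPy is installed.
import numpy as np

A = np.arange(1, 9, dtype=np.float64)     # [1, 2, ..., 8]
B = np.arange(8, 0, -1, dtype=np.float64) # [8, 7, ..., 1]

# Conceptually "Ci = Ai * Bi" for all i, issued as a single vectorized operation.
C = A * B

print(C)          # element-wise products
print(C.sum())    # reduction over the result
```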
3. Multiple Instruction Single Data (MISD)

A MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, with all of them operating on the same data set (see Fig. 2.4). For instance, statements such as y = sin(x) + cos(x) + tan(x) perform different operations on the same data set. Machines built using the MISD model are not useful for most applications; a few machines have been built, but none of them is available commercially. They became more of an intellectual exercise than a practical configuration.

Fig. 2.4. Multiple Instruction Single Data (MISD) Architecture (a single data input stream feeds multiple processors, which produce a single data output stream).

4. Multiple Instruction Multiple Data (MIMD)

A MIMD computing system is a multiprocessor machine capable of executing multiple instructions on multiple data sets (see Fig. 2.5). Each PE in the MIMD model has separate instruction and data streams, and hence machines built using this model are well suited for any kind of application. Unlike SIMD and MISD machines, PEs in MIMD machines work asynchronously. MIMD machines are broadly categorized into shared-memory MIMD and distributed-memory MIMD, based on how the PEs are coupled to the main memory.

A classic example of the pipe-and-filter style is the UNIX shell, in which programs (e.g., grep and wc) are connected through pipes: the filters are the single shell programs composed together, and the connections are their input and output streams, which are chained together. Applications of this architecture can also be found in compiler design (e.g., the lex/yacc model is based on a pipeline of phases: scanning | parsing | semantic analysis | ...), in image and signal processing, and in voice and video streaming. Data-flow architectures are optimal when the system to be designed embodies a multi-stage process that can be clearly identified as a collection of separate components that need to be orchestrated together. Within this reference scenario, components have well-defined interfaces exposing input and output ports, and the connectors are represented by the data streams between these ports. The main differences between the two sub-categories are reported in Table 2.3.

Table 2.3. Comparison between Batch-Sequential and Pipe-and-Filter Styles.

Batch Sequential          | Pipe-and-Filter
Coarse grained            | Fine grained
High latency              | Reduced latency due to the incremental processing of input
External access to input  | Localized input
No concurrency            | Concurrency possible
Non-interactive           | Interactivity awkward but possible

(c) Virtual Machine Architectures. This class of architectural styles is characterized by the presence of an abstract execution environment (generally referred to as a virtual machine) that simulates features that are not available in the hardware or software.

The distinction among the message-based interaction models is not based on the components involved in the communication, but on how the dynamics of the interaction evolve: point-to-point message models are more likely to be based on a request-reply interaction, especially in the case of direct communication, while publish-subscribe models are less likely to be based on request-reply, since they rely on notifications. The models that have been presented constitute a reference for structuring the communication among components in a distributed system. It is very uncommon that one single model satisfies all the communication needs within a system; more likely, a composition of them, or their joint use to design and implement different aspects, is the common case.

2.5 TECHNOLOGIES FOR DISTRIBUTED COMPUTING

In this section, we introduce relevant technologies that provide concrete implementations of interaction models, mostly relying on message-based communication. They are: remote procedure call (RPC), distributed object frameworks, and service-oriented computing.

2.5.1 Remote Procedure Call

Remote Procedure Call (RPC) is the fundamental abstraction enabling the execution of a procedure on a client's request.
It allows extending the concept of a procedure call beyond the boundaries of a process and a single memory address space. The called procedure and the calling procedure may be on the same system, or they may be on different systems connected by a network. The concept of RPC has been discussed since 1976 and was completely formalized by Nelson [111] and Birrell [112] in the early eighties. From then on, it has not changed in its major components. Even though it is quite an old technology, RPC is still used today as a fundamental component for inter-process communication in more complex systems.

Fig. 2.14. RPC Reference Model (the client node invokes, through the RPC library, a procedure registered in the procedure registry of the server node; parameters and return values are marshaled and unmarshaled on their way across the network).

Fig. 2.14 illustrates the major components that enable an RPC system. The system is based on a client-server model. The server process maintains a registry of all the available procedures that can be remotely invoked, and it listens for requests from clients that specify which procedure to invoke, together with the values of the parameters required by the procedure. RPC maintains the synchronous pattern that is natural in in-process procedure and function calls. Therefore, the calling process thread remains blocked until the procedure on the server process has completed its execution and the result (if any) is returned to the client.

An important aspect of RPC is marshaling, which identifies the process of converting parameters and return values into a form that is more suitable to be transported over a network as a sequence of bytes. The term "unmarshaling" refers to the opposite procedure. Marshaling and unmarshaling are performed by the RPC runtime infrastructure, and the client and server user code does not necessarily have to perform these tasks. The RPC runtime, on the other hand, is responsible not only for packing and unpacking parameters but also for handling the request-reply interaction that happens between the client and the server process in a completely transparent manner. Therefore, developing a system leveraging RPC for inter-process communication consists of the following steps:

* Design and implementation of the server procedures that will be exposed for remote invocation.
* Registration of the remote procedures with the RPC server on the node where they will be made available.
* Design and implementation of the client code that invokes the remote procedures.

Each RPC implementation generally provides client and server APIs that facilitate the use of this simple and powerful abstraction. An important observation has to be made concerning the passing of parameters and return values. Since the server and the client process live in two separate address spaces, the use of parameters passed by reference or of pointers is not suitable in this scenario, because, once unmarshaled, they would refer to a memory location that is not accessible from within the server process. Secondly, in the case of user-defined parameter and return-value types, it is necessary to ensure that the RPC runtime is able to marshal them. This is generally possible, especially when user-defined types are composed of simple types, for which marshaling is naturally provided.
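The three steps above can be sketched with Python's standard-library XML-RPC modules (XML-RPC is one of the platform-independent RPC solutions mentioned just below). The procedure name and port number are arbitrary choices made for this example.

```python
# Minimal RPC sketch using Python's standard xmlrpc modules.
# Server side: implement a procedure, register it, and listen for requests.
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    """Procedure exposed for remote invocation."""
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add, "add")   # registration with the RPC runtime
print("RPC server listening on port 8000...")
server.serve_forever()
```

```python
# Client side: obtain a proxy and call the remote procedure as if it were local.
# Marshaling of parameters and return values is handled by the runtime.
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))   # prints 5; the call blocks until the reply arrives
```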
RPC has been a dominant technology for inter-process communication for quite a long time, and several programming languages and environments support this interaction pattern in the form of libraries and additional packages. For instance, RPyC is an RPC implementation for Python. There also exist platform-independent solutions such as XML-RPC and JSON-RPC, which provide RPC facilities over XML and JSON, respectively. Thrift [113] is the framework developed at Facebook for enabling a transparent cross-language RPC model. Currently, even frameworks that evolved this concept towards more powerful abstractions are considered RPC implementations, such as frameworks for distributed object programming (CORBA, DCOM, Java RMI, and .NET Remoting) and Web Services. We discuss the peculiarities of these approaches in the following sections.

2.5.2 Distributed Object Frameworks

Distributed object frameworks extend object-oriented programming systems by allowing objects to be distributed across a heterogeneous network, and they provide facilities so that these objects can coherently act as if they were in the same address space. Distributed object frameworks leverage the basic mechanism introduced with RPC and extend it to enable the remote invocation of object methods and to keep track of references to objects made available through a network connection. With respect to the RPC model, the infrastructure manages instances that are exposed through well-known interfaces instead of procedures. Therefore, the common interaction pattern is the following (a minimal sketch of this pattern is given after the list):

* The server process maintains a registry of active objects that are made available to other processes. According to the specific implementation, active objects can be published using interface definitions or class definitions.
* The client process, by using a given addressing scheme, obtains a reference to the active remote object. This reference is represented by a pointer to an instance of a shared interface or class definition.
* The client process invokes the methods on the active object by calling them through the reference previously obtained. Parameters and return values are marshaled, as happens in the case of RPC.
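The following Python sketch mimics that interaction pattern within a single process, purely for illustration: a registry of active objects stands in for the server, and a proxy forwards method calls while simulating the marshaling of arguments and return values. The class and method names are invented for the example; real frameworks (CORBA, Java RMI, .NET Remoting) generate this plumbing automatically.

```python
# Illustrative (in-process) sketch of the distributed-object interaction pattern:
# a registry of active objects, a reference lookup, and a proxy that forwards
# method calls while marshaling arguments and return values.
import pickle

class Calculator:
    """An 'active object' published by the server process."""
    def add(self, a, b):
        return a + b

# Server side: registry of active objects, keyed by a well-known name.
registry = {"calc": Calculator()}

class Proxy:
    """Client-side reference: forwards method calls to the registered object."""
    def __init__(self, name):
        self._target = registry[name]          # address resolution (simplified)

    def __getattr__(self, method_name):
        def remote_call(*args):
            payload = pickle.dumps(args)       # marshal parameters
            result = getattr(self._target, method_name)(*pickle.loads(payload))
            return pickle.loads(pickle.dumps(result))  # marshal return value
        return remote_call

if __name__ == "__main__":
    calc = Proxy("calc")       # obtain a reference to the active remote object
    print(calc.add(2, 3))      # invoked through the proxy; prints 5
```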
Oistetbatod Uhjout Prageamniay Mal, | Distributed objects framoworks introduco objocto as firal-claue ontitios for Intorprocous communteation They are the principal gatoways for Invoking romolo mothods: bul can Aluo ho pasuodl as parameles and return values. This posas an intorosting problom, slnco object Instancon are Gomploy, Instances tit encapsulate a stale and might bo roforancod by othor components, Prnalig an object an A paramolel or return value involves the duplication of tho Instance on tho othor oxocutlon contoxt, This operate Jeads to two separate objects whose stato evolves Indopondontly, Tho duplication hocoman nacosball since the instanco needs to trospass tho boundarlos of tho provons, Tile Ib ay Lnportant anpoct th tale info account when designing distributed objocts systoms bocnun TL night load to Inconsintenclos. AP alternative to this standard process, which Is callod marshalling hy valuo, ly marsballng by roferon® In this second caso, the objec! Instanco Is nol duplicntod and « proxy of It le croatad on the server sil (for parameters) or the cliont side (for rotumn valuos), Marshaling by roferonco Is.4 more complox wor Principles of Parale and Oisinbuted Computirg (MEET nique and generally puts more burden on the runtime infrastructure since remote references have to be tracked. Being more complex and resource demanding, marshaling by reference should be used only when duplication of parameters and return values lead to unexpected and inconsistent behavior of the system. 1. Objects Activation and Lifetime The management of distributed objects poses additional challenges with respect to the simple invoca- tion of procedure on a remote node. Methods live within the context of an object instance and they can alter the internal state of the object as a side effect of their execution. In Particular, the lifetime of an object instance is a crucial element in distributed object-oriented systems. Within a single memory ad- dress space scenario, objects are explicitly created by the programmer and their references are made available by passing them from one object instance to another. The memory allocated for them can be explicitly reclaimed by the programmer or automatically by the runtime system when there are no more references to that instance. A distributed scenario introduces additional issues that require a different management of the lifetime of objects exposed through remote interfaces. The first element to be considered is the object's activation, which is the creation of a remote object. There are different strategies that can be used to manage object activation and we can distinguish two major classes: server-based activation and client-based activation. In server-based activation, the ac- tive object is created in the server process and registered as an instance that can be exposed beyond process boundaries. In this case, the active object has a life of its own and occasionally executes meth- ods as a consequence of a remote method invocation. In client-based activation, the active object does not originally exist on the server side and itis created when a request for method invocation comes from a client. This scenario is generally more appropriate when the active object is meant to be stateless and should exist for the sole purpose of invoking methods from remote clients. For example, if the remote object is simply a gateway to access and modify other components hosted within the server process, client-based activation is a more efficient pattern. 
The second element to be considered is the lifetime of remote objects. In the case of server-based activation, the lifetime of an object is generally user-controlled, since the activation of the remote object is explicit and controlled by the user. In the case of client-based activation, the creation of the remote object is implicit, and therefore its lifetime is controlled by some policy of the runtime infrastructure. Different policies can be considered: the simplest one implies the creation of a new instance for each method invocation. This solution is quite demanding in terms of object instances, and it is generally integrated with some lease-management strategy that allows objects to be reused for subsequent method invocations if they occur within a specified time interval (the lease). Another policy might consider having only a single instance at a time, with the lifetime of the object then controlled by the number and frequency of method calls. Different frameworks provide different levels of control over this aspect.

Object activation and lifetime management are features that are now supported to some extent in almost all frameworks for distributed object programming, since they are essential for understanding the behavior of a distributed system. In particular, these two aspects become fundamental when designing components that are accessible from other processes and that maintain state. Understanding how many objects representing the same component are created, and for how long they last, is essential for tracking inconsistencies due to erroneous updates of an instance's internal data.

2. Examples of Distributed Object Frameworks

The support for distributed object programming has evolved over time, and nowadays it is a common feature of mainstream programming languages such as C# and Java, which provide these capabilities as part of their base class libraries. This level of integration is a sign of the maturity of this technology, which was originally designed as a separate component that could be used in several programming languages. In this section, we briefly review the most relevant approaches and technologies for distributed object programming.

Communicating Processes. In this architectural style, components are represented by independent processes that leverage inter-process communication (IPC) facilities for coordination management. This is an abstraction that is quite suitable for modeling distributed systems which, being distributed over a network of computing nodes, are necessarily composed of several concurrent processes. Each of the processes provides other processes with services and can leverage the services exposed by the other processes. The conceptual organization of these processes and the way in which the communication happens vary according to the specific model used: peer-to-peer or client-server⁹. Connectors are identified by the IPC facilities used by these processes to communicate.

Event Systems. In this architectural style, the components of the system are loosely coupled and connected. In addition to exposing operations for data and state manipulation, each component also
publishes (or announces) a collection of events that other components can register for. In general, the other components provide a callback that will be executed when the event is activated. During the activity of a component, a specific runtime condition can activate one of the exposed events, thus triggering the execution of the callbacks registered with it. Event activation may be accompanied by additional context information that can be used in the callback to handle the event; this information can be passed as an argument to the callback or retrieved from some shared repository between components. Event-based systems have become quite popular, and support for their implementation is provided either at the API level or at the programming-language level¹⁰. The main advantage of this architectural style is that it fosters the development of open systems: new modules can be added and easily integrated into the system as long as they have compliant interfaces for registering to the events. This style solves some of the limitations observed for the top-down and object-oriented styles. First, the invocation pattern is implicit, and the connection between the caller and the callee is not hard-coded; this gives a lot of flexibility, since handlers can be added to or removed from events without changes in the source code of applications. Second, the event source does not need to know the identity of the event handler in order to invoke the callback. The disadvantage of this style is that it relinquishes control over system computation: when a component triggers an event, it does not know how many event handlers will be invoked, or whether any handler is registered at all. This information is available only at runtime and, from a static design point of view, it becomes more complex to identify the connections among components and to reason about the correctness of the interactions.

⁹ The terms "client-server" and "peer-to-peer" will be further discussed in the next section.

¹⁰ The Observer pattern [106] is a fundamental element of software design, while programming languages such as C#, VB.NET, and other languages implemented for the Common Language Infrastructure [53] expose the event language construct to model implicit invocation patterns.
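The event-based (implicit invocation) style described above, and the Observer pattern mentioned in the footnote, can be sketched in a few lines of Python. The class, event, and handler names are invented for the illustration.

```python
# Minimal event-system sketch: components register callbacks for named events,
# and the event source triggers them without knowing who is listening.
class EventSource:
    def __init__(self):
        self._handlers = {}            # event name -> list of callbacks

    def register(self, event, callback):
        self._handlers.setdefault(event, []).append(callback)

    def announce(self, event, **context):
        # The source does not know how many handlers exist (possibly none).
        for callback in self._handlers.get(event, []):
            callback(**context)

def logger(**context):
    print("log:", context)

def alerter(threshold=90, **context):
    if context.get("cpu", 0) > threshold:
        print("alert: CPU usage high")

if __name__ == "__main__":
    monitor = EventSource()
    monitor.register("sample", logger)          # handlers added without
    monitor.register("sample", alerter)         # changing the event source
    monitor.announce("sample", cpu=95, host="node-1")
```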
The communication is unidirectional: the client Issues There could be a request to Ihe server, and server alter processing the request returns @ response the Taportant | components issuing requests to the server that is passively waiting for them. Hence, client-server parad i ten and response (server side). paradigm are request, accept (client side), and is! This model is Suitable in many-to-one scenario: : : : 3s, where the information and the services of interest can be Sette en bates a through a single access point: the server. In general. multiple clients are interested in Such veces and the server must be appropriately designed to server requests com ing from different c ef ficiently. This consideration has implications on both client design and server Gesign. For the client design, we identify two major models: Thin-client Model. In this model, the load of data processing and transformation is put on the server side, and the client has a light implementation that is mostly concerned with retrieving and return ing the data itis being asked for, with no considerable further processing. Fat-client Model. In this model, the client component is also responsible for processing and transforming the dal before returning it back to the user, while the server features a relatively light implementation mostly concerned with the management of access to the data. request Two Tier {Classic Model) = etl (cient) “= Fig.212, Client-Server Architectural Styles. nt-server model: presentation, application lo We can identify three major components in the clie ification, we can observe that in the thin-client and data storage. With respect to the previous classi model, the client embodies only the presentation component, while the server absorbs the other two, rot cien cnt model, the client encapsulates presentation and most of the application logic, and the server is principally responsible for the data storage and maintenance. 2} Mastering Cloud Computing Presentation, application logic, and data maintenance can be seen aS concep toes ih more appropriately called liers. The mapping between the conceptual layers an of archi al im. plementa: tion in modules and components allows differentiating among several types ‘ectures, Which go under the name of multi-tiered architectures. Two major classes exist: Two-tier Architecture, This architecture partitions the systems into two tiers, which are loca €d one in the client component and the other on the server, The client is responsible for the presentatio, iar BY Providing a user interface, while tre server concentrates the application logic and the data story into a single tier. The server component is generally deployed on a powerful machine that is capabjp of processing user is duests, accessing data, and executing the application logic to provide a client with a response. This architecture is suitable for systems of limited size and suffers from scalability jg, Sena Particular, as the number of uses increases, the performance of the server might dramaticaly ary Gecrease. Another limitation is cauees by the dimension of the data to maintain, manage, and acces, which might be prohibitive for @ single computation node or too large for serving the clients with Satisfac. tory performance, ation server encapsulates the busi- (Possibly replicated for high availability) maintains the ird party (or external) services in order to satisty the ‘amples of N-tiered architectures, (b) Peer-to-Peer. 
(b) Peer-to-Peer. This model introduces a symmetric architecture in which all the components, called peers, play the same role and incorporate both the client and the server capabilities of the previous model. More precisely, each peer acts as a server when it processes requests from other peers, and as a client when it issues requests to other peers. With respect to the client-server model, the responsibilities of the interaction are shared equally among all the components.

The most relevant examples of peer-to-peer systems [87] are file-sharing applications such as Gnutella, BitTorrent, and Kazaa. Despite the differences that each of these networks has in coordinating the nodes and sharing the information about files and their locations, all of them provide a user client that is at the same time a server (providing files to other peers) and a client (downloading files from other peers). To address an incredibly large number of peers, different architectures have been designed that diverge slightly from the peer-to-peer model. For example, in Kazaa, not all the peers have the same role, and some of them are used to group the accessibility information of a group of peers. Another interesting example of a peer-to-peer architecture is represented by the Skype network.

Fig. 2.13. Peer-to-Peer Architectural Style.

The system architectural styles presented in this section constitute a reference model that is further enhanced or diversified according to the specific needs of the application to be designed and implemented. For example, the client-server architecture, which originally included only two types of components, has been further extended and enriched by the development of multi-tier architectures as the complexity of systems increased. Currently, this model is still the predominant reference architecture for distributed systems and applications. The server and client abstractions can be used in some cases to model the macro-scale or the micro-scale of systems. For peer-to-peer systems, pure implementations are very hard to find and, as discussed in the case of Kazaa, evolutions of the model that introduce some kind of hierarchy among the nodes are common.

2.4.4 Models for Inter-Process Communication

Distributed systems are composed of a collection of concurrent processes interacting with each other by means of a network connection. Therefore, inter-process communication (IPC) is a fundamental aspect of distributed systems design and implementation. IPC is used either to exchange data and information or to coordinate the activity of processes. It is what ties together the different components of a distributed system, thus making them act as a single system. There are several different models in which processes can interact with each other, and these map to different abstractions for IPC; among the most relevant, we can mention shared memory, remote procedure call (RPC), and message passing. At a lower level, IPC is realized through the fundamental tools of network programming. Sockets are the most popular IPC primitive for implementing communication channels between distributed processes. They facilitate interaction patterns that, at the lower level, mimic the client-server abstraction and are based on a request-reply communication model. Sockets provide the basic capability of transferring a sequence of bytes, which is converted at higher levels into a more meaningful representation (such as procedure parameters, return values, or messages). Such a powerful abstraction allows system designers to concentrate on the logic of the interaction rather than on the low-level details of data transmission.
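The request-reply pattern over sockets described above can be sketched with Python's standard socket module. The port number and message format are arbitrary choices for the illustration.

```python
# Server side: listen on a socket and implement a simple request-reply protocol
# (receive a sequence of bytes, return a transformed sequence of bytes).
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(("localhost", 9000))
    server.listen()
    print("server listening on port 9000...")
    conn, addr = server.accept()
    with conn:
        request = conn.recv(1024)          # raw bytes from the client
        conn.sendall(request.upper())      # reply: the same bytes, uppercased
```

```python
# Client side: connect, send a request, and block until the reply arrives.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("localhost", 9000))
    client.sendall(b"hello, server")
    reply = client.recv(1024)
    print(reply.decode())                  # HELLO, SERVER
```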
(c) Virtual Machine Architectural Styles. This class of styles is characterized by the presence of an abstract execution environment (generally referred to as a virtual machine) that simulates features that are not available in the hardware or software. Applications and systems are implemented on top of this layer and become portable over different hardware and software environments, as long as there is an implementation of the virtual machine they interface with. The general interaction flow for systems implementing this pattern is the following: the program (or the application) defines its operations and state in an abstract format, which is interpreted by the virtual machine engine; the interpretation of the program constitutes its execution. It is quite common in this scenario that the engine maintains an internal representation of the program state. Very popular examples within this category are rule-based systems, interpreters, and command-language processors.

Rule-Based Style. This architecture is characterized by representing the abstract execution environment as an inference engine. Programs are expressed in the form of rules or predicates that hold true. The input data for applications is generally represented by a set of assertions or facts that the inference engine uses to activate rules or to apply predicates, thus transforming data. The output can either be the product of the rule activation or a set of assertions that hold true for the given input data. The set of rules or predicates identifies the knowledge base that can be queried to infer properties about the system. This approach is quite peculiar, since it allows expressing a system or a domain in terms of its behavior rather than in terms of its components. Rule-based systems are very popular in the field of artificial intelligence. Practical applications can be found in the field of process control, where rule-based systems are used to monitor the status of physical devices by being fed the sensory data collected and processed by PLCs¹, and by activating alarms when specific conditions on the sensory data apply. Another interesting use of rule-based systems can be found in the networking domain: Network Intrusion Detection Systems (NIDS) often rely on a set of rules to identify abnormal behaviors connected to possible intrusions in computing systems.

Interpreter Style. The core feature of the interpreter style is the presence of an engine that is used to interpret a pseudo-program expressed in a format acceptable for the interpreter. The interpretation of the pseudo-program constitutes the execution of the program itself. Systems modeled according to this style exhibit four main components: the interpretation engine that executes the core activity of this style, an internal memory that contains the pseudo-code to be interpreted, a representation of the current state of the engine, and a representation of the current state of the program being executed. This model is quite useful in designing virtual machines for high-level programming languages (Java, C#) and scripting languages (Awk, Perl, etc.). Within this scenario, the virtual machine closes the gap between the end-user abstractions and the software/hardware environment where such abstractions are executed.

¹ Programmable logic controllers (PLCs) are digital computers used for the automation of electromechanical processes; they manage multiple input and output lines and are expected to produce their outputs within a given time interval since the reception of the input.
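The four components of the interpreter style can be made concrete with a toy example; the stack-based instruction set below is purely illustrative and not a real virtual machine.

# Internal memory: the pseudo-code to be interpreted.
PROGRAM = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)]

def run(program):
    pc = 0          # state of the engine: which instruction is being interpreted
    stack = []      # state of the program: the data the pseudo-program operates on
    while pc < len(program):                  # the interpretation engine itself
        op, arg = program[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack[-1])
        pc += 1     # interpreting an instruction advances the execution

run(PROGRAM)        # prints 5

Executing the list of tuples is the program's execution: nothing runs natively, and the engine is free to keep whatever internal representation of the program state it finds convenient.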
Virtual machine architectural styles are characterized by an indirection layer between applications and the hosting environment. This design has the major advantage of decoupling applications from the underlying hardware and software environment, but at the same time it introduces some disadvantages, such as a slow-down in performance. Other issues might be related to the fact that, by providing a virtual execution environment, specific features of the underlying system might not be accessible.

(d) Call and Return Architectures. This category identifies all those systems that are composed of components mostly connected together by method calls. The activity of systems modeled in this way is characterized by a chain of method calls whose overall execution and composition identify the execution of one or more operations. The internal organization of components and their connections may vary. Nonetheless, it is possible to identify three major sub-categories, which differentiate by how the system is structured and how methods are invoked.

Top-Down Style. This architectural style is quite representative of systems developed with imperative programming, which leads to a divide-and-conquer approach to problem resolution. Systems developed according to this style are composed of one large main program that accomplishes its tasks by invoking sub-programs or procedures. The components in this style are procedures and sub-programs, and connections are method calls or invocations. The calling program passes information with parameters and receives data from return values or parameters. Method calls can also extend beyond the boundary of a single process by leveraging techniques for remote method invocation, such as RPC and all its descendants. The overall structure of the program execution at any point in time is characterized by a tree whose root is constituted by the main function of the principal program. This architectural style is quite intuitive from a design point of view but hard to maintain and manage in the case of large systems.

Object-Oriented Style. This architectural style encompasses the wide range of systems that are implemented by leveraging the abstractions of object-oriented programming. Systems are specified in terms of classes and implemented in terms of objects. Classes define the type of components by specifying the data that represent their state and the operations that can be done over this data. One of the main advantages over the previous style is that there is a coupling between data and the operations used to manipulate them. Object instances become responsible for hiding their internal representation and for protecting its integrity while providing operations to other components. This leads to a better decomposition process and more manageable systems. The disadvantages of this style are mainly two: every component that wants to invoke an operation on an object needs to know its identity, and shared objects need to be carefully designed in order to ensure the consistency of their state.
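A minimal sketch of the object-oriented style, with illustrative names only, shows both the coupling between data and operations and the fact that a caller must hold a reference to (i.e., know the identity of) an object in order to invoke operations on it.

class Account:
    def __init__(self, balance=0):
        self._balance = balance          # internal representation, hidden from other components

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")  # the object protects its own integrity
        self._balance += amount

    def balance(self):
        return self._balance             # state is exposed only through operations


savings = Account()
savings.deposit(100)          # the caller must know the specific object it operates on
print(savings.balance())      # 100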
Layered Style. The layered system style allows the design and the implementation of software systems in terms of layers, which provide different levels of abstraction of the system. Each layer generally operates with at most two other layers: the one that provides a lower abstraction level and the one that provides a higher abstraction level. Specific protocols and interfaces define how adjacent layers interact. It is possible to model such systems as a stack of layers, one for each level of abstraction. Therefore, the components are the layers and the connectors are the interfaces and protocols used between adjacent layers. A user or client generally interacts with the layer at the highest abstraction, which, in order to carry out its activity, interacts with and uses the services of the lower layer. This process is repeated (if necessary) until the lowest layer is reached. It is also possible to have the opposite behavior: events and callbacks from the lower layers can trigger the activity of the higher layers and propagate information up through the stack. The advantage of the layered style is that, as happens for the object-oriented style, it supports a modular design of the system and, in addition, it allows decomposing the system according to different levels of abstraction by encapsulating together all the operations that belong to a specific level. Layers can be replaced as long as they are compliant with the expected protocols and interfaces, thus making the system flexible. The main disadvantage is constituted by the lack of extensibility, since it is not possible to add an additional layer without changing the protocols and the interfaces between layers². This also makes it complex to add additional operations. Examples of layered architectures are modern operating system kernels and the ISO/OSI or the TCP/IP stack.

(e) Architectural Styles based on Independent Components. This class of architectural styles models systems in terms of independent components, having their own life cycle, which interact with each other in order to perform their activities. There are two major categories within this class, which differentiate in the way the interaction among components is managed; a rough illustration follows below.

² The only option given is to partition a layer into sub-layers so that the external interfaces remain the same but the internal architecture can be reorganized into different layers, which can define different abstraction levels. To the adjacent layers, the new re-organized layer still appears as a single block.
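As a rough, assumed illustration (not an example taken from this text) of independent components with their own life cycles, the following sketch runs two components as threads that never call each other directly and interact only by exchanging messages through queues; the worker roles and the queue-based interaction are illustrative choices.

import queue
import threading

requests, replies = queue.Queue(), queue.Queue()

def producer():
    for n in (1, 2, 3):
        requests.put(n)          # interact by sending a message, not by a method call
    requests.put(None)           # signal that this component has finished its activity

def consumer():
    while True:
        n = requests.get()
        if n is None:            # each component decides when its own life cycle ends
            replies.put(None)
            break
        replies.put(n * n)       # perform its activity and report the result back

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()

results = []
while (item := replies.get()) is not None:
    results.append(item)
for t in threads:
    t.join()
print(results)                   # [1, 4, 9]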
