
KING KHALID UNIVERSITY

Department of
Computer Science
College of
Computer Science

Parallel and Distributed Computing – 482-CCS-3

Dr. Mohammad Nadeem Ahmed

Chapter 2: Architectures & Coordination: Architectural styles, centralized
architectures, application layering, multitiered architectures, structured
peer-to-peer architectures, unstructured peer-to-peer architectures,
Lamport's logical clock algorithm, Lamport's vector clock algorithm, mutual
exclusion, edge-server systems, collaborative distributed systems,
interceptors, general approaches to adaptive software.
Architectural styles:
We discuss architectures by first considering the logical organization of a distributed system into software
components, also referred to as its software architecture.

Architectural styles have been proposed to describe the different ways these components can be arranged among computers:

1. Layered architectures
2. Object-based architectures
3. Resource-centered architectures
4. Event-based architectures

Layered Architecture: Each layer communicates with its adjacent layers by sending requests and
receiving responses. A layer cannot communicate directly with a non-adjacent layer; it can interact
only with its neighboring layers, which in turn pass information on to the next layer, and so the
process continues.
Requests flow from top to bottom (downwards) and responses flow from bottom to top (upwards). The
advantage of layered architecture is that each layer can be modified independently without affecting
the whole system. This type of architecture is used in the Open Systems Interconnection (OSI) model.

Object-Based Architecture: Components are treated as objects that convey information to
each other. An object-based architecture contains an arrangement of loosely coupled objects.
Objects interact with each other through method calls and are connected to each other
through the Remote Procedure Call (RPC) mechanism or Remote Method Invocation (RMI)
mechanism. This style provides a natural way of encapsulation.
Fig: An object-based architectural style.

Each object corresponds to what we have defined as a component, and these components are
connected through a procedure call mechanism. Object-based architectures are attractive because
they provide a natural way of encapsulating data (called an object’s state) and the operations that
can be performed on that data (which are referred to as an object’s methods) into a single entity.
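A rough sketch of this style, with a hypothetical `Account` object and a simplified stand-in for an RPC/RMI stub. A real stub would marshal the call into a network message rather than invoking the target locally; here the forwarding is local so the example stays self-contained.

```python
# Hedged sketch: objects interacting only through method calls, the way an
# RPC/RMI stub would forward an invocation. Account is a made-up example.

class Account:
    def __init__(self, balance):
        self._balance = balance          # encapsulated state
    def deposit(self, amount):           # method operating on that state
        self._balance += amount
        return self._balance

class Stub:
    """Stands in for a remote object reference; a real RMI stub would
    serialize the method name and arguments and send them over the network."""
    def __init__(self, target):
        self._target = target
    def __getattr__(self, name):
        method = getattr(self._target, name)
        def forward(*args):
            # in real RPC: marshal name+args, transmit, await the reply
            return method(*args)
        return forward

remote = Stub(Account(100))
print(remote.deposit(50))                # 150
```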

Resource-Centered Architecture:


As an increasing number of services became available over the Web and the development of
distributed systems through service composition became more important, researchers started to
rethink the architecture of mostly Web-based distributed systems. One of the problems with service
composition is that connecting various components can easily turn into an integration nightmare.
As an alternative, one can also view a distributed system as a huge collection of resources that are
individually managed by components. Resources may be added or removed by (remote)
applications, and likewise can be retrieved or modified. This approach has now been widely
adopted for the Web and is known as Representational State Transfer (REST).
There are four key characteristics of what are known as RESTful architectures:

1. Resources are identified through a single naming scheme.
2. All services offer the same interface, consisting of at most four operations, as shown in the figure below.
3. Messages sent to or from a service are fully self-described.
4. After executing an operation at a service, that component forgets everything about the caller.

Fig: RESTful architecture operations


To illustrate how RESTful can work in practice, consider a cloud storage service, such as Amazon’s
Simple Storage Service (Amazon S3). Amazon S3 supports only two resources: objects, which
are essentially the equivalent of files, and buckets, the equivalent of directories. There is no concept
of placing buckets into buckets. An object named ObjectName contained in bucket BucketName is
referred to by means of the following uniform resource identifier (URI):
http://s3.amazonaws.com/BucketName/ObjectName
To create a bucket, or an object for that matter, an application would essentially send a PUT request
with the URI of the bucket/object. In principle, the protocol that is used with the service is HTTP. In
other words, it is just another HTTP request, which will subsequently be correctly interpreted by
S3. If the bucket or object already exists, an HTTP error message is returned.
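The mapping from RESTful operations to HTTP requests can be illustrated with Python's standard library. The requests below are only constructed, never actually sent to S3, and the bucket/object names are the placeholders from the text.

```python
# Sketch: building RESTful HTTP requests against the S3-style URI scheme.
# Requests are constructed but not sent; names are the text's placeholders.
from urllib.request import Request

base = "http://s3.amazonaws.com"

def rest_request(method, bucket, obj=None, data=None):
    """Every resource is addressed by one naming scheme: /bucket[/object]."""
    path = f"/{bucket}" + (f"/{obj}" if obj else "")
    return Request(base + path, data=data, method=method)

create = rest_request("PUT", "BucketName", "ObjectName", data=b"contents")
read   = rest_request("GET", "BucketName", "ObjectName")
delete = rest_request("DELETE", "BucketName", "ObjectName")

print(create.get_method(), create.full_url)
# PUT http://s3.amazonaws.com/BucketName/ObjectName
```

Note how the same small set of verbs (PUT, GET, DELETE, POST) applies uniformly to every resource, which is characteristic 2 of RESTful architectures.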

Event-Based Architecture: In this architecture, all communication takes place through events.
When an event occurs, the system, as well as the receiver, is notified. Data, URLs, etc., are
transmitted through events.
The components of this system are loosely coupled, which makes them easy to add, remove, and
modify.
Heterogeneous components can communicate through the bus, and a significant benefit is that they
can do so using any protocol: a specific bus, such as an Enterprise Service Bus (ESB), has the ability
to handle any kind of incoming request and respond appropriately.
Examples: Publisher-Subscriber systems, Enterprise Service Bus (ESB)
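A minimal publish/subscribe bus along these lines can be sketched as follows. The event name and payload are invented, and a production ESB would add protocol translation, routing, and persistence on top of this core idea.

```python
# A minimal publish/subscribe bus: components never reference each other,
# only the bus and an event name. Event name and payload are illustrative.

class EventBus:
    def __init__(self):
        self._subscribers = {}
    def subscribe(self, event, handler):
        self._subscribers.setdefault(event, []).append(handler)
    def publish(self, event, payload):
        for handler in self._subscribers.get(event, []):
            handler(payload)                  # notify every registered receiver

bus = EventBus()
received = []
bus.subscribe("order_placed", received.append)   # loosely coupled receiver
bus.publish("order_placed", {"id": 42})
print(received)                                  # [{'id': 42}]
```

Because publisher and subscriber only share the event name, either side can be added, removed, or replaced without the other noticing, which is the loose coupling the text describes.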
Application Layering
A distributed application is commonly organized into three logical levels:
1. The application-interface level or user-interface level
2. The processing level
3. The data level
The application-interface level: The user-interface level is often implemented by clients.
Programs that let users interact with applications make up this level. The sophistication level across
application programs differs significantly. A character-based screen is the most basic user-interface
application. Typically, mainframe environments have employed this kind of interface. One hardly
ever speaks of a client-server setup in situations where the mainframe manages every aspect of
interaction, including the keyboard and monitor.

The Processing level: This is the middle part of the architecture. This is a logical part of the
system where all the processing actions are performed on the user interaction and the data level.
It processes the requests from the user interface and performs certain operations.

The Data level: The data level in the client-server model contains the programs that maintain the
actual data on which the applications operate. An important property of this level is that data are
often persistent, that is, even if no application is running, data will be stored somewhere for the
next use. In its simplest form, the data level consists of a file system, but it is more common to use
a full-fledged database. In the client-server model, the data level is typically implemented on the
server side.
Example: Decision support system for stock brokerage

Internally, such an application can be organized into the following parts:

• A front end implementing the user interface or offering a programming interface to external
applications.

• A back end for accessing a database with the financial data.

• The analysis programs between these two.


Multi-tiered Architectures
In an n-tier architecture, "N" refers to the number of tiers or layers being used, such as 2-tier, 3-tier,
or 4-tier. It is also called "Multi-Tier Architecture".
The n-tier architecture is an industry-proven software architecture model. It is suitable for supporting
enterprise-level client-server applications, as it provides solutions for scalability, security, fault
tolerance, reusability, and maintainability. It helps developers create flexible and reusable applications.

• 2 Tier Architectures – Thin Client model – Fat Client model


• 3 Tier Architectures
• Multitiered Architectures
A multi-tier system is typically depicted with the following layers:

● Presentation
● Application
● Database layers.

Examples:

● MakeMyTrip.com
● Salesforce enterprise application
● Flight ticket booking applications
● Amazon.com, etc.
Three-tier client-server architecture in a distributed system:

Presentation Tier: It is the user interface and the topmost tier in the architecture. Its purpose is to take
requests from the client and display information to the client. It communicates with the other tiers through
a web browser, as it produces its output in the browser. Web-based presentation tiers are developed
using languages such as HTML, CSS, and JavaScript.

Application Tier: It is the middle tier of the architecture, also known as the logic tier, where the
information/requests gathered through the presentation tier are processed in detail. It also interacts
with the server that stores the data. It processes the client's request, formats it, and sends it back to the
client. It is developed using languages such as Python, Java, and PHP.

Data Tier: It is the last tier of the architecture, also known as the database tier. It stores the
processed information so that it can be retrieved later when required. It consists of database servers
such as Oracle, MySQL, and DB2. Communication between the presentation tier and the data tier is
done through the middle tier, i.e., the application tier.
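The three tiers can be sketched in one Python file, with an in-memory SQLite database standing in for the data tier. The flights table, the booking logic, and the flight code are invented for illustration; in a real deployment each tier would run on a separate machine.

```python
# A compact three-tier sketch. The presentation tier formats output, the
# application tier holds the business logic, and an in-memory SQLite
# database stands in for the data tier. Table and flight code are made up.
import sqlite3

# --- Data tier -------------------------------------------------------------
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE flights (code TEXT, seats INTEGER)")
db.execute("INSERT INTO flights VALUES ('KK101', 3)")

# --- Application tier (logic tier) -----------------------------------------
def book_seat(code):
    """Business logic: decrement a seat if any are left."""
    row = db.execute("SELECT seats FROM flights WHERE code=?", (code,)).fetchone()
    if row and row[0] > 0:
        db.execute("UPDATE flights SET seats=seats-1 WHERE code=?", (code,))
        return True
    return False

# --- Presentation tier -----------------------------------------------------
def handle_booking(code):
    return f"Booked {code}" if book_seat(code) else f"{code} is full"

print(handle_booking("KK101"))               # Booked KK101
```

Note that the presentation tier never touches the database directly: every request passes through the application tier, mirroring the communication rule stated above.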
Centralized Architecture

The centralized architecture is defined as every node being connected to a central coordination system,
and whatever information they desire to exchange will be shared by that system. A centralized
architecture does not automatically require that all functions must be in a single place or circuit, but rather
that most parts are grouped together and none are repeated elsewhere as would be the case in a
distributed architecture.

It includes the following types of architecture:

● Client-server
● Application Layering
Client Server
Processes in a distributed system are split into two (potentially overlapping) groups in the basic
client-server architecture. A server is a process that provides a particular service, such as a database
service or a file system service. A client is a process that requests a service from a server by sending it
a request and subsequently waiting for the server's reply. This client-server interaction, also known as
request-reply behavior, is shown in the figure below:

When the underlying network is reasonably dependable, as it is in many local-area networks,
communication between a client and a server can be implemented using a straightforward
connectionless protocol. In these circumstances, the client simply bundles a message for the server,
specifying the service it wants along with the relevant input data, and sends it off. The server, in turn,
always awaits an incoming request, processes it, and then packages the results in a reply message that
is sent back to the client.

Efficiency is a clear benefit of using a connectionless protocol. The request/reply protocol just sketched
works as long as messages do not get lost or damaged. Unfortunately, it is not easy to make the
protocol robust against occasional transmission errors. When no reply message is received, the only
option is to let the client resubmit the request. The problem, however, is that the client cannot
determine whether the original request message was lost or whether the transmission of the reply failed.

As an alternative, many client-server systems use a reliable connection-oriented protocol. Due to its
relatively low performance, this approach is less suitable for local-area networks, but it works well in
wide-area networks, where communication is inherently unreliable.

Peer-to-peer (P2P) architecture


A peer-to-peer (P2P) network works on the concept of no central control in a distributed system. Once
it joins the network, a node can act as either a client or a server at any given time. A node that
requests something is called a client, and one that provides something is called a server. In general,
each node is called a peer.

P2P networks of today fall into three separate categories:


● Structured P2P: The nodes in structured P2P follow a predefined distributed data
structure.
● Unstructured P2P: The nodes in unstructured P2P randomly select their neighbors.
● Hybrid P2P: In a hybrid P2P, some nodes have unique functions appointed to them in an
orderly manner.
Lamport’s Logical clock Algorithm

To synchronize logical clocks, Lamport defined a relation called happens-before. The expression a → b
is read "event a happens before event b" and means that all processes agree that first event a occurs,
then afterward, event b occurs.

The happens-before relation can be observed directly in two situations:

1. If a and b are events in the same process, and a occurs before b, then a → b is true.

2. If a is the event of a message being sent by one process, and b is the event of the message being
received by another process, then a → b is also true. A message cannot be received before it is
sent, or even at the same time it is sent, since it takes a finite, nonzero amount of time to arrive.

Happens-before is a transitive relation, so if a → b and b → c, then a → c. If two events, x and y,


happen in different processes that do not exchange messages (not even indirectly via third parties),
then x → y is not true, but neither is y → x. These events are said to be concurrent, which simply
means that nothing can be said (or need be said) about when the events happened or which event
happened first.

Fig: The positioning of Lamport's logical clocks in distributed systems

Explanation of the Algorithm

The algorithm is straightforward and works as follows:

1. Each process in the system maintains its own logical clock, which is essentially a counter (initially
set to zero) that is incremented for each event it experiences.

2. When a process performs an internal event, it increments its own clock value by a certain unit
(usually 1).

3. When a process sends a message, it increments its clock (sending is itself an event) and includes
the resulting clock value with the message.

4. When a process receives a message, it updates its clock to the maximum of its own clock and
the clock value received with the message, and then increments it by 1. This ensures that the
receive event logically happens after the send event and any other events that the sender
knew about.
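The rules above can be sketched directly in Python. The two processes and the single message exchange are invented for illustration.

```python
# Lamport's logical clock rules, sketched directly. The event sequence
# (one internal event, one message from p1 to p2) is a made-up example.

class LamportProcess:
    def __init__(self):
        self.clock = 0                       # rule 1: counter starts at zero
    def internal_event(self):
        self.clock += 1                      # rule 2: tick on each local event
    def send(self):
        self.clock += 1                      # rule 3: sending is an event...
        return self.clock                    # ...and the timestamp rides along
    def receive(self, timestamp):
        self.clock = max(self.clock, timestamp) + 1   # rule 4

p1, p2 = LamportProcess(), LamportProcess()
p1.internal_event()                # p1 clock: 1
ts = p1.send()                     # p1 clock: 2, message carries 2
p2.receive(ts)                     # p2 clock: max(0, 2) + 1 = 3
print(p1.clock, p2.clock)          # 2 3
```

The receive rule is what enforces happens-before: the receive event's timestamp (3) is strictly greater than the send event's (2), so all processes agree the send came first.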
Lamport’s Vector clock Algorithm

A vector clock is an algorithm that generates a partial ordering of events and detects causality violations
in a distributed system. Vector clocks expand on scalar (Lamport) time to provide a causally consistent
view of the distributed system: they detect whether one event has causally affected another event in
the system, essentially capturing all the causal relationships. The algorithm labels every process with a
vector (a list of integers), containing one integer for the local clock of each process in the system.
So for N given processes, each vector/array has size N.

How does the vector clock algorithm work?

1. Initially, all the clocks are set to zero.

2. Every time an internal event occurs in a process, the value of the process's own logical clock in
the vector is incremented by 1.

3. Every time a process sends a message, the value of the process's own logical clock in the vector
is incremented by 1, and the vector is sent along with the message.

4. Every time a process receives a message, the value of the process's own logical clock in the
vector is incremented by 1, and moreover, each element is updated by taking the maximum of the
value in its own vector clock and the value in the vector of the received message (for every element).

Example:
Consider N processes, each maintaining a vector of size N; the set of rules above is executed by each
process's vector clock.
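The rules can be sketched for N = 2 processes as follows; the sequence of events (one internal event at P0, then a message from P0 to P1) is invented for illustration.

```python
# Vector clock rules for N processes; here N = 2 and the event order is
# a made-up example (internal event at p0, then a message p0 -> p1).

N = 2

class VectorProcess:
    def __init__(self, pid):
        self.pid = pid
        self.clock = [0] * N                 # rule 1: all entries start at 0
    def internal_event(self):
        self.clock[self.pid] += 1            # rule 2: tick own entry
    def send(self):
        self.clock[self.pid] += 1            # rule 3: tick own entry...
        return list(self.clock)              # ...and a copy travels with the message
    def receive(self, other):
        self.clock = [max(a, b) for a, b in zip(self.clock, other)]  # rule 4
        self.clock[self.pid] += 1

p0, p1 = VectorProcess(0), VectorProcess(1)
p0.internal_event()          # p0: [1, 0]
msg = p0.send()              # p0: [2, 0]
p1.receive(msg)              # p1: elementwise max([0,0],[2,0]) then tick -> [2, 1]
print(p0.clock, p1.clock)    # [2, 0] [2, 1]
```

Comparing the final vectors elementwise shows the causal relationship: [2, 0] ≤ [2, 1] in every component, so the send at P0 causally precedes the receive at P1.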
Mutual Exclusion

Mutual exclusion is a concurrency control property introduced to prevent race conditions. It is
the requirement that a process cannot enter its critical section while another concurrent process is
currently present or executing in its critical section, i.e., only one process is allowed to execute the
critical section at any given instant of time.
Requirements of Mutual exclusion Algorithm:
• No Deadlock: Two or more sites should not endlessly wait for a message that will never
arrive.
• No Starvation: Every site that wants to execute the critical section should get an opportunity to
execute it in finite time. No site should wait indefinitely to execute the critical section while other
sites repeatedly execute it.
• Fairness: Each site should get a fair chance to execute the critical section. Requests to execute
the critical section must be served in the order they are made, i.e., in the order of their arrival in
the system.
• Fault Tolerance: In case of a failure, the system should be able to recognize it by itself in order to
continue functioning without any disruption.

Solutions to distributed mutual exclusion:

1. Token-based algorithms: A unique token is shared among the sites; a site may enter its critical
section only while it holds the token.

2. Non-token-based approach: Sites exchange two or more rounds of timestamped request and reply
messages to determine which site may enter its critical section.
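As an illustration of the token-based idea, here is a hedged sketch of a token ring: a single token circulates among the sites, and only the current holder may enter its critical section. The number of sites, the pending requests, and the round count are all made up for the example.

```python
# A token-ring sketch of token-based mutual exclusion: the token circulates
# and only its holder may enter the critical section (CS). Values are made up.

NUM_SITES = 3

def token_ring_rounds(requests, rounds):
    """Pass the token around the ring; a site enters its critical section
    only when it both holds the token and has a pending request."""
    log = []
    token_at = 0
    for _ in range(rounds):
        if requests[token_at]:
            log.append(f"site {token_at} enters CS")
            requests[token_at] = False        # request satisfied
        token_at = (token_at + 1) % NUM_SITES # forward the token to the neighbor
    return log

print(token_ring_rounds([True, False, True], 4))
# ['site 0 enters CS', 'site 2 enters CS']
```

Because exactly one token exists, at most one site can be in its critical section at a time; deadlock freedom and fairness follow from the token always moving forward around the ring.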

Edge Server System


An edge server is a piece of hardware that performs data computation at the end (or "edge") of a network.
Like a regular server, an edge server can provide compute, networking, and storage functions.

The idea behind edge computing is to reduce the amount of data that needs to be sent to the cloud or a
central server for processing, thereby reducing network latency and improving overall system
performance

End users, or clients in general, connect to the Internet by means of an edge server. The edge server’s
main purpose is to serve content, possibly after applying filtering and transcoding functions. More
interesting is the fact that a collection of edge servers can be used to optimize content and application
distribution. The basic model is that, for a specific organization, one edge server acts as an origin
server from which all content originates; the organization can then use other edge servers for
replicating Web pages.

This concept of edge-server systems is now often taken a step further: taking cloud computing as
implemented in a data center as the core, additional servers at the edge of the network are used to assist
in computations and storage, essentially leading to distributed cloud systems. In the case of fog
computing, even end-user devices form part of the system and are (partly) controlled by a cloud-service
provider
Edge servers process data physically close to the end-users and on-site apps, so these devices process
requests faster than a centralized server. Instead of sending unprocessed data on a trip to and from a
data center, these devices process raw data and return content to client machines. As a result, edge
servers provide snappier performance, lower latency, and shorter loading times.

● A hybrid architecture
● An important class of distributed systems
● Deployed on the Internet, where servers are "at the edge" of the network (i.e., the first entry
to the network)
● Each client connects to the Internet by means of an edge server

There are two types of edge servers:


● Content delivery network (CDN) edge servers: A CDN edge server is a computer with cached
versions of static content from an origin server (images, JavaScript files, HTML files, etc.). A
company can deploy CDN edge servers at various points of presence (PoPs) across a content
delivery network.

● Edge compute servers: This server type provides compute resources at the network's edge.
While a CDN server only delivers static web content, an edge compute server provides
functionalities needed for IoT apps.

Edge Server Use Cases


An edge server is a good choice for most use cases that require fast real-time data processing. These
devices are also an excellent fit for use cases in which you cannot deploy standard, bulky servers. Here
are some examples of how companies put edge servers to use:

• Servers within IoT sensors on industrial equipment.


• Surveillance sensors that have a local server for real-time data analysis.
• An app that streams movies and TV shows through an edge server with cached videos.
• A banking app that uses an edge server for quick performance but isolates sensitive data on the
origin server.
• IoT devices that perform in-hospital patient monitoring.
• Edge servers that provide remote monitoring of oil and gas assets.
• Self-driving cars that collect large volumes of data and make decisions in real-time.
Collaborative distributed systems

Distributed Collaboration is a way of collaboration wherein participants, regardless of their location, work
together to reach a certain goal.
A collaborative system is one where multiple users or agents engage in a shared activity, usually from
remote locations. In the larger family of distributed applications, collaborative systems are distinguished
by the fact that the agents in the system are working together towards a common goal and have a critical
need to interact closely with each other: sharing information, exchanging requests with each other, and
checking in with each other on their status.

Examples of Collaborative distributed systems: -

BitTorrent file-sharing system: BitTorrent is a peer-to-peer file-downloading system. Its principal
working is shown in Figure 2.22. The basic idea is that when an end user is looking for a file, they
download chunks of the file from other users until the downloaded chunks can be assembled together,
yielding the complete file.
Peer production: Peer production (also referred to as mass or social collaboration) is a way of
producing goods and services that relies on self-organizing communities of individuals. In such
communities, the labor of a large number of people is coordinated towards a shared outcome.

Collaborative writing: Collaborative writing refers to projects where written works are created
collaboratively by multiple people rather than individually.

Mobile collaboration: Mobile collaboration is a technology-based process of communicating using
electronic assets and accompanying software designed for use in remote locations. Mobile collaboration
utilizes wireless, cellular, and broadband technologies, enabling effective distributed collaboration
independent of location.

Distributed collaborative learning: Collaborative learning is based on the model that knowledge can
be created within a population whose members actively interact by sharing experiences and taking on
asymmetric roles.

Interceptors
Conceptually, an interceptor is nothing but a software construct that breaks the usual flow of control
and allows other (application-specific) code to be executed. Interceptors are a primary means for
adapting middleware to the specific needs of an application. As such, they play an important role in
making middleware open.

To make matters concrete, consider interception as supported in many object-based distributed systems.
The basic idea is simple: an object A can call a method that belongs to an object B, while the latter
resides on a different machine than A. As we explain in detail later in the book, such a remote-object

invocation is carried out in three steps:

1. Object A is offered a local interface that is exactly the same as the interface offered by object B. A calls
the method available in that interface.

2. The call by A is transformed into a generic object invocation, made possible through a general
object-invocation interface offered by the middleware at the machine where A resides.

3. Finally, the generic object invocation is transformed into a message that is sent through the transport-
level network interface as offered by A’s local operating system.
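The idea of breaking the flow of control can be sketched with a small Python wrapper that intercepts every method call on a target object. The `Service` class and the logging callbacks are invented for the example; a middleware-level interceptor would operate on marshalled remote invocations rather than local calls.

```python
# Sketch of an interceptor: a wrapper that breaks the normal flow of a
# method call so extra (application-specific) code can run around it.
# Service and the trace callbacks are made-up examples.

class Interceptor:
    def __init__(self, target, before, after):
        self._target, self._before, self._after = target, before, after
    def __getattr__(self, name):
        method = getattr(self._target, name)
        def invoke(*args):
            self._before(name, args)     # e.g. log, adapt, or reroute the call
            result = method(*args)
            self._after(name, result)
            return result
        return invoke

class Service:
    def add(self, a, b):
        return a + b

trace = []
proxy = Interceptor(Service(),
                    before=lambda m, a: trace.append(f"calling {m}{a}"),
                    after=lambda m, r: trace.append(f"{m} returned {r}"))
print(proxy.add(2, 3))   # 5
print(trace)
```

The caller still sees the same interface as the original object (step 1 above), while the interceptor gets a chance to run before and after each invocation, which is exactly where middleware hooks in adaptation logic.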
General Approach to Adaptive Software
Adaptive: having the ability or tendency to adapt to different situations.

The Adaptive Software Development (ASD) approach is a method for building complex software and
systems. ASD focuses on human collaboration and self-organization. The ASD "life cycle" incorporates
three phases, namely:

1. Speculation

2. Collaboration

3. Learning

1. Speculation:
During this phase the project is initiated and planning is conducted. The project plan uses project
initiation information, such as project requirements, user needs, and the customer mission statement,
to define the set of release cycles required for the project.

2. Collaboration:
This phase requires the workers to be motivated. It combines communication and teamwork but also
emphasizes individualism, as individual creativity plays a major role in creative thinking. People working
together must trust each other to:

● criticize without animosity,

● assist without resentment,

● work as hard as possible,

● bring their skill sets to the task, and

● communicate problems to find effective solutions.

3. Learning:
Workers may overestimate their own understanding of the technology, which may not lead to the
desired result. Learning helps the workers increase their level of understanding of the project.
Learning takes place in three ways:

• Focus groups
• Technical reviews
• Project post mortem

ASD's overall emphasis on the dynamics of self-organizing teams, interpersonal collaboration, and
individual and team learning yields software project teams that have a much higher likelihood of success.
