Cloud Computing Applications
and Workflow Management
Dr. Syed Nasir Mehmood Shah
nasirsyed.utp@gmail.com
© nasirsyed.utp@gmail.com
Outline
Apache ZooKeeper
Cloud Computing
Cloud applications
Social computing and digital content
Cloud Computing
Cloud computing
• Information processing can be done more efficiently on large farms of computing and
storage systems accessible via the Internet.
• Grid computing – initiated by the National Labs in the early 1990s; targeted primarily at scientific
computing.
• Utility computing – initiated in 2005-2006 by IT companies and targeted at enterprise computing.
• The focus of utility computing is on the business model for providing computing services; it
often requires a cloud-like infrastructure.
Evolution of concepts and technologies
• The concepts and technologies for network-centric computing and content
evolved along the years.
• The web and the semantic web - expected to support composition of services. The web is
dominated by unstructured or semi-structured data, while the semantic web advocates
inclusion of semantic content in web pages.
• The Grid - initiated in the early 1990s by National Laboratories and Universities; used
primarily for applications in the area of science and engineering.
• Peer-to-peer systems.
• Cloud systems.
Origins of cloud computing
ARPANET was the network that became the basis for
the Internet.
Cloud computing
• The term “elastic computing” refers to the ability of dynamically acquiring computing
resources and supporting a variable workload.
• The resources used for these services can be metered and the users can be charged
only for the resources they used.
• The service providers can operate more efficiently due to specialization and
centralization.
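The metering idea above can be sketched as a toy pay-per-use calculation; the hourly rate and usage figures are hypothetical:

```python
RATE_PER_INSTANCE_HOUR = 0.10  # hypothetical price in USD

def metered_cost(usage_hours, rate=RATE_PER_INSTANCE_HOUR):
    """Charge only for the instance-hours actually consumed."""
    return round(usage_hours * rate, 2)

# A variable workload: idle days cost the user nothing.
daily_usage = [2.0, 0.0, 8.5, 0.0, 4.0]  # instance-hours per day
total = sum(metered_cost(h) for h in daily_usage)
```

Metering like this is what distinguishes utility-style billing from paying for fixed, always-on capacity.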
Cloud computing
• Lower costs for the cloud service provider are passed on to the cloud users.
• Data is stored:
• closer to the site where it is used.
• The data storage strategy can increase reliability, as well as security, and
can lower communication costs.
Common Qualities
• Pool of resources
• Billed consumption
• Virtualized resources
• Dynamically reconfigured
• Scalable
Cloud Resources
Motivators
• Flexibility
• Costs
• Consumerization of enterprise IT
• Business perspective
Business Drive
• Capacity Planning
• Cost Reduction
• Organizational Agility
Capacity Planning Strategies
• Lead Strategy - adding capacity to an IT resource in anticipation of
demand. (predictive)
• Lag Strategy - adding capacity when the IT resource reaches its full
capacity. (reactive)
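A minimal sketch contrasting the two strategies; the function names, headroom factor, and capacity step are illustrative assumptions:

```python
def lead_strategy(current_capacity, forecast_demand, headroom=1.2):
    """Lead: provision ahead of anticipated demand (predictive)."""
    return max(current_capacity, forecast_demand * headroom)

def lag_strategy(current_capacity, observed_load, step=10):
    """Lag: add capacity only once the resource is saturated (reactive)."""
    if observed_load >= current_capacity:
        return current_capacity + step
    return current_capacity
```

The lead strategy risks paying for unused capacity; the lag strategy risks a period of degraded service while new capacity is brought online.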
Cost Reduction
Organizational Agility
Cloud activities (1/3)
• Service management and provisioning including:
• Virtualization.
• Service provisioning.
• Call center.
• Operations management.
• Systems management.
• QoS management.
• Billing and accounting, asset management.
• SLA management.
• Technical support and backups.
Cloud activities (2/3)
• Security management including:
• ID and authentication.
• Intrusion prevention.
• Intrusion detection.
• Virus protection.
• Cryptography.
• Development.
Cloud Applications
Cloud applications
• Cloud computing is very attractive to the users:
• Economic reasons.
• They are free to design an application without being concerned with the system
where the application will run.
Cloud applications
• Convenience and performance.
• Cloud computing is also beneficial for the providers of computing cycles - it typically leads
to a higher level of resource utilization.
Cloud applications
• Ideal applications for cloud computing:
• Web services.
• Database services.
• Web applications.
• Mobile interactive applications which process large volumes of data from different types of
sensors.
• Science and engineering could greatly benefit from cloud computing as many applications
in these areas are compute-intensive and data-intensive.
1- Processing pipelines
• Data produced by applications, devices, or humans must be processed before it is consumed.
By definition, a data pipeline represents the flow of data between two or more systems. It is a set
of instructions that determine how and when to move data between these systems.
• Big data indexing - indexing large datasets created by web crawler engines.
• Data mining - searching large collections of records to locate items of interests.
• Image processing.
• Image conversion, e.g., enlarge an image or create thumbnails.
• Compress or encrypt images.
• Video transcoding from one video format to another, e.g., from AVI to MPEG.
• Document processing.
• Convert large collections of documents from one format to another, e.g., from Word to PDF.
• Encrypt documents.
• Use Optical Character Recognition (OCR) to extract text from scanned document images.
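The pipeline idea above, data flowing through an ordered set of processing instructions, can be sketched as a composition of stages (the stages shown are purely illustrative):

```python
def run_pipeline(data, stages):
    """Apply each processing stage to the output of the previous one."""
    for stage in stages:
        data = stage(data)
    return data

# Illustrative document-processing stages (any callables work).
stages = [
    str.strip,            # clean up raw input
    str.lower,            # normalize case
    lambda s: s.split(),  # tokenize
]
tokens = run_pipeline("  Convert Word TO PDF  ", stages)
```

Real pipelines move data between systems rather than between in-process functions, but the control structure is the same: each stage consumes its predecessor's output.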
2- Batch processing applications
• Batch processing is a general term used for frequently used programs that are executed
with minimum human interaction. Batch process jobs can run without any end-user
interaction or can be scheduled to start up on their own as resources permit.
• Generation of daily, weekly, monthly, and annual activity reports for retail, manufacturing,
and other economic sectors.
• Processing, aggregation, and summaries of daily transactions for financial institutions, insurance
companies, and healthcare organizations.
• Applications active only during a particular season (e.g., the holiday season), such as income
tax reporting.
Architectural styles for Cloud applications
• Based on the client-server paradigm.
• Stateless servers - view a client request as an independent transaction and respond to it;
the client is not required to first establish a connection to the server.
• Often clients and servers communicate using Remote Procedure Calls (RPCs).
• Simple Object Access Protocol (SOAP) - application protocol for web applications; message
format based on XML. Usually carried over HTTP (and thus TCP), though a SOAP-over-UDP binding also exists.
[Figure: the client-server paradigm. Users (U0, U1, ..., Un) issue requests to processes (P0, P1, ..., Pm), which operate on resources (R0, R1, ..., Rk).]
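A minimal sketch of a SOAP-style XML message, wrapping a hypothetical GetQuote operation; only the Envelope/Body structure comes from the SOAP specification, everything else is made up for illustration:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope():
    """Build a minimal SOAP envelope around a hypothetical GetQuote call."""
    env = ET.Element(ET.QName(SOAP_NS, "Envelope"))
    body = ET.SubElement(env, ET.QName(SOAP_NS, "Body"))
    request = ET.SubElement(body, "GetQuote")  # hypothetical operation
    ET.SubElement(request, "symbol").text = "ACME"
    return ET.tostring(env, encoding="unicode")
```

The namespaced Envelope and Body elements are what make the message SOAP; the payload inside the Body is application-defined.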
Workflows
• Process description - structure describing the tasks to be executed and the order of their
execution. Resembles a flowchart.
• State of a case at time t - defined in terms of tasks already completed at that time.
• The life cycle of a workflow - creation, definition, verification, and enactment; similar to
the life cycle of a traditional program (creation, compilation, and execution).
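A process description of this kind can be modeled as a task-dependency mapping, from which an execution order is derived; the task names are made up for illustration:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# A process description as a task-dependency mapping: each task lists
# the tasks that must complete before it may start.
process = {
    "fetch": set(),
    "validate": {"fetch"},
    "transform": {"validate"},
    "store": {"transform"},
}

# One valid execution order consistent with the dependencies.
order = list(TopologicalSorter(process).static_order())
```

Enactment of a case then amounts to walking such an order; the state of the case at time t is the prefix of tasks already completed.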
Safety and liveness
• Desirable properties of workflows. Safety means nothing bad ever happens (e.g., two
conflicting tasks never execute concurrently); liveness means something good eventually
happens (e.g., every enacted case eventually completes).
Tasks and their dependencies
Basic workflow patterns
• Workflow patterns - the temporal relationship among the tasks of a process
• Sequence - several tasks have to be scheduled one after the
completion of the other.
• AND split - both tasks B and C are activated when task A terminates.
• Synchronization - task C can only start after tasks A and B terminate.
Basic workflow patterns. (a) Sequence. (b) AND split. (c) Synchronization
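The sequence, AND-split, and synchronization patterns above can be sketched with threads; this is a minimal sketch and the task bodies are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

results = []  # records the order in which tasks complete

def task(name):
    # Placeholder task body: just record completion.
    results.append(name)
    return name

with ThreadPoolExecutor() as pool:
    task("A")                    # sequence: A runs first
    fb = pool.submit(task, "B")  # AND split: B and C are both
    fc = pool.submit(task, "C")  # activated when A terminates
    fb.result()
    fc.result()                  # synchronization: D starts only
    task("D")                    # after both B and C terminate
```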
Basic workflow patterns (2/4)
• Workflow patterns - the temporal relationship among the tasks of a process
• XOR split - after completion of task A, either B or C can be activated.
• XOR merge - task C is enabled when either A or B terminate.
• OR split - after completion of task A one could activate either B, C, or
both.
Basic workflow patterns. (d) XOR split. (e) XOR merge. (f) OR split.
Basic workflow patterns
• Workflow patterns - the temporal relationship among the tasks of a process
• N out of M join - barrier synchronization. Assuming that M tasks run concurrently, N (N<M) of them
have to reach the barrier before the next task is enabled. In our example, any two out of the three
tasks A, B, and C have to finish before E is enabled.
• Deferred Choice - similar to the XOR split but the choice is not made explicitly; the run-time
environment decides what branch to take.
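The N-out-of-M join can be sketched with futures: the next task is enabled as soon as any N of the M concurrent tasks complete (the delays here are artificial):

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def task(name, delay):
    time.sleep(delay)  # artificial running time
    return name

with ThreadPoolExecutor(max_workers=3) as pool:
    # M = 3 concurrent tasks A, B, C; C is deliberately slow.
    futures = [pool.submit(task, n, d)
               for n, d in [("A", 0.01), ("B", 0.05), ("C", 1.0)]]
    # N-out-of-M join with N = 2: take the first two completions.
    # Task E would be enabled at this point, without waiting for C.
    done = [f.result() for _, f in zip(range(2), as_completed(futures))]
```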
Apache ZooKeeper
Apache ZooKeeper
• Apache ZooKeeper is an open-source coordination service with a file-system-like application
program interface (API) that allows distributed processes in large systems to synchronize with
each other so that all clients making requests receive consistent data.
• ZooKeeper uses a distributed consensus protocol to determine which node in the ZooKeeper
service is the leader at any given time.
• The leader assigns a timestamp to each update to keep order. Once a majority of nodes have
acknowledged receipt of a time-stamped update, the leader can declare a quorum, which
means that any data contained in the update can be coordinated with elements of the data
store. The use of a quorum ensures that the service always returns consistent answers.
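A toy model of the leader behavior described above, with a monotonically increasing update id and a majority-quorum test; the class and method names are illustrative, not the ZooKeeper API:

```python
import itertools

class Leader:
    """Toy model: the leader stamps each update with a monotonically
    increasing id and commits it once a majority of the ensemble has
    acknowledged it."""

    def __init__(self, ensemble_size):
        self.ensemble_size = ensemble_size
        self._next_id = itertools.count(1)

    def propose(self, update):
        # The leader assigns a timestamp (here a counter) to keep order.
        return (next(self._next_id), update)

    def committed(self, acks):
        # A majority of nodes must acknowledge before declaring a quorum.
        return acks > self.ensemble_size // 2
```

With an ensemble of 5 servers, 3 acknowledgements form a quorum and 2 do not, which is why such ensembles tolerate the failure of a minority of their members.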
Apache ZooKeeper – Key Features
• An open source, high-performance coordination service for distributed
applications.
• ZooKeeper
• Distributed coordination service for large-scale distributed systems.
Coordination – ZooKeeper
• ZooKeeper
• Open-source software written in Java with bindings for Java and C.
• The servers in the pack communicate and elect a leader.
• A database is replicated on each server; consistency of the replicas is
maintained.
• A client connects to a single server, synchronizes its clock with the
server, and sends requests, receives responses, and watches events
through a TCP connection.
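The watch mechanism can be illustrated with a toy in-memory data tree with one-shot watches; this is illustrative only, and a real client would use a library such as Kazoo:

```python
class DataTree:
    """Toy in-memory ZooKeeper-like data tree with one-shot watch events."""

    def __init__(self):
        self._nodes = {}
        self._watches = {}  # path -> list of callbacks

    def set(self, path, value):
        self._nodes[path] = value
        # Watches are one-shot: fire once on the next change, then discard.
        for callback in self._watches.pop(path, []):
            callback(path, value)

    def get(self, path, watch=None):
        if watch is not None:
            self._watches.setdefault(path, []).append(watch)
        return self._nodes.get(path)
```

One-shot semantics mean a client must re-register its watch after each notification if it wants continuous updates, which is also how real ZooKeeper watches behave.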
The ZooKeeper coordination service.
(a) The service provides a single system image. Clients can connect to any server in the
pack.
• ZooKeeper Service is replicated over a set of machines
• All machines store a copy of the data (in-memory)
• A leader is elected on service startup
• Clients only connect to a single ZooKeeper server and maintain a TCP connection
The ZooKeeper coordination service.
(b) Functional model of the ZooKeeper service. The replicated database is accessed directly
by read commands; write commands involve more intricate processing based on atomic
broadcast.
(c) Processing a write command:
(1) A server receiving the command from a client forwards the command to the leader;
(2) the leader uses atomic broadcast to reach consensus among all followers.
States and Lifetime
Zookeeper communication
• The messaging layer is responsible for the election of a new leader when the
current leader fails.
Zookeeper communication
• Messaging protocols use:
• Packets - sequence of bytes sent through a FIFO channel.
• Proposals - units of agreement.
• Messages - sequence of bytes atomically broadcast to all servers.
ZooKeeper service guarantees
• Atomicity - a transaction either completes or fails.
• Sequential consistency of updates - updates are applied strictly in the order they are
received.
• Single system image for the clients - a client receives the same response regardless of
the server it connects to.
Elasticity and load distribution
• Elasticity - the ability to use as many servers as necessary to optimally respond to the cost
and timing constraints of an application.
• Arbitrarily divisible workload - the workload can be partitioned into an arbitrarily large
number of smaller workloads of equal, or very close, size.
• Many applications in physics, biology, and other areas of computational science and engineering obey the
arbitrarily divisible load sharing model.
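A minimal sketch of the arbitrarily divisible model: splitting a workload into n chunks of equal, or very close, size:

```python
def partition(workload, n):
    """Split an arbitrarily divisible workload into n chunks whose
    sizes differ by at most one element."""
    q, r = divmod(len(workload), n)
    chunks, start = [], 0
    for i in range(n):
        size = q + (1 if i < r else 0)  # the first r chunks get one extra
        chunks.append(workload[start:start + size])
        start += size
    return chunks
```

Each chunk can then be dispatched to a separate server, which is what lets such applications scale out elastically.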
Clouds for science and engineering
• The generic problems in virtually all areas of science are:
• Collection of experimental data.
• Management of very large volumes of data.
• Building and execution of models.
• Integration of data and literature.
• Documentation of the experiments.
• Sharing the data with others; data preservation for long periods of time.
• All these activities require “big” data storage and systems capable of delivering
abundant computing cycles.
• Computing clouds are able to provide such resources and support collaborative
environments.
Online Data Discovery
• Phases of data discovery in large scientific data sets:
• recognition of the information problem.
• generation of search queries using one or more search engines.
• evaluation of the search results.
• evaluation of the web documents.
• comparing information from different sources.
• Large scientific data sets:
• biomedical and genomic data from the National Center for Biotechnology
Information (NCBI).
• astrophysics data from NASA.
• atmospheric data from the National Oceanic and Atmospheric Administration
(NOAA) and the National Center for Atmospheric Research (NCAR).
High Performance Computing on a Cloud: CERN Cloud Example
The LHC Data Challenge
Solution: Cloud
LHC Computing project
• More than 140 computing centres
Tier 0 at CERN: Acquisition, First pass reconstruction,
Storage & Distribution
Source: Ian.Bird@cern.ch
NCP Cloud Node of CERN Cloud
http://www.ncp.edu.pk/
NCP Cloud
HPC Research areas @ NCP
• Computational Fluid Dynamics (CFD)
• Molecular Biology
• Biochemistry
• Physics
• Weather Forecasting
• Density Functional Theory (DFT)
• Ion channeling
• Multi-Particle Interaction
Cloud Deployment @ NCP
HPC Cluster @ NCP
Social computing and digital content
• Networks allowing researchers to share data and provide a virtual
environment supporting remote execution of workflows are domain specific:
• MyExperiment for biology.
• nanoHub for nanoscience.
Social computing and digital content
• Volunteer computing - a large population of users donate resources such as
CPU cycles and storage space for a specific project:
• Mersenne Prime Search.
• SETI@Home.
• Folding@home.
• Storage@Home.
• PlanetLab.
Challenges for cloud application development
• Performance isolation - nearly impossible to reach in a real system, especially when
the system is heavily loaded.
• Reliability - major concern; server failures expected when a large number of servers
cooperate for the computations.
• Cloud infrastructure exhibits latency and bandwidth fluctuations which affect the
application performance.
• Data logging - performance considerations limit the amount of data logging; yet frequent
logging helps identify the source of unexpected results and errors.
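The logging trade-off can be illustrated with a level threshold that suppresses routine detail while always recording errors; a minimal sketch using Python's standard logging module, with a hypothetical application logger name:

```python
import logging

records = []  # captured log messages, for illustration

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

logger = logging.getLogger("cloud-app")  # hypothetical logger name
logger.addHandler(ListHandler())
logger.setLevel(logging.WARNING)  # performance: suppress routine detail

logger.debug("per-request trace")  # dropped: too costly to log routinely
logger.error("unexpected result")  # kept: needed to trace error sources
```

Raising or lowering the threshold at run time is one way to balance logging cost against the ability to diagnose unexpected results.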
Key Challenges for Cloud Application Development
• Security
• Regulatory Compliance
• SLA
• Availability
• Migration, Integration
• Risk management
Questions / Comments