
Unit 1

High Performance Computing most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation, in order to solve large problems in science, engineering, or business.

Super Computer
A supercomputer is a computer with a high level of computational capacity compared to a general-purpose computer. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS).
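As a hedged illustration of what a FLOPS rating means, the sketch below computes a theoretical peak for an assumed machine; the node count, core count, clock speed and FLOPs-per-cycle values are made-up example figures, not the specification of any real supercomputer.

#include <iostream>

int main() {
    // Assumed example values (not the specification of any real machine):
    double nodes = 100;            // compute nodes in the system
    double cores_per_node = 64;    // CPU cores per node
    double clock_ghz = 2.0;        // clock frequency in GHz
    double flops_per_cycle = 16;   // e.g. wide SIMD units doing fused multiply-adds

    // Theoretical peak = nodes x cores x cycles per second x FLOPs per cycle.
    double peak_flops = nodes * cores_per_node * (clock_ghz * 1e9) * flops_per_cycle;

    std::cout << "Theoretical peak: " << peak_flops / 1e12 << " TFLOPS\n";
    return 0;
}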

High-throughput computing (HTC) is a computer science term to describe the use of many
computing resources over long periods of time to accomplish a computational task.

Centralized computing is computing done at a central location, using terminals that are
attached to a central computer. The computer itself may control all the peripherals directly (if
they are physically connected to the central computer), or they may be attached via a terminal
server.

Distributed computing is a field of computer science that studies distributed systems.


A distributed system is a software system in which components located on
networked computers communicate and coordinate their actions by passing messages. The
components interact with each other in order to achieve a common goal.

Parallel computing is a form of computation in which many calculations are carried out
simultaneously, operating on the principle that large problems can often be divided into
smaller ones, which are then solved at the same time.
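A minimal sketch of this principle follows; the summation problem, the array of one million values and the use of C++ std::async are illustrative choices, not part of the definition above. The large sum is divided into two smaller sums that are computed at the same time and then combined.

#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);   // the "large problem": sum one million values

    // Divide the problem into two smaller ones...
    auto mid = data.begin() + data.size() / 2;
    auto lower = std::async(std::launch::async,
                            [&] { return std::accumulate(data.begin(), mid, 0.0); });
    auto upper = std::async(std::launch::async,
                            [&] { return std::accumulate(mid, data.end(), 0.0); });

    // ...solve them at the same time, then combine the partial results.
    double total = lower.get() + upper.get();
    std::cout << "Total = " << total << "\n";
    return 0;
}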

Cloud computing means storing and accessing data and programs over the Internet
instead of your computer's hard drive.

Multithreading is the ability of a program or an operating system process to manage its use by more than one user at a time, and even to manage multiple requests by the same user, without having to run multiple copies of the program in the computer. Each user request for a program or system service (and here a user can also be another program) is kept track of as a thread with a separate identity. As programs work on behalf of the initial request for that thread and are interrupted by other requests, the status of work on behalf of that thread is tracked until the work is completed.
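A small sketch of this idea, assuming a compiler with thread support (e.g. linking with -pthread): the request names and the handle_request function are hypothetical, and a real operating system tracks far more state per thread than shown here. Each request is handled by a thread with its own identity, while only one copy of the program runs.

#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Hypothetical handler: the work done on behalf of one request.
void handle_request(int request_id, const std::string& what) {
    std::string msg = "Thread for request " + std::to_string(request_id) +
                      " (" + what + ") completed its work\n";
    std::cout << msg;   // built as one string so output lines do not interleave
}

int main() {
    std::vector<std::string> requests = {"print report", "open file", "send mail"};
    std::vector<std::thread> workers;

    // One thread per request; each thread keeps its own identity (request_id),
    // yet only a single copy of the program is running.
    for (std::size_t id = 0; id < requests.size(); ++id)
        workers.emplace_back(handle_request, static_cast<int>(id), requests[id]);

    for (auto& t : workers)
        t.join();   // the status of each request is tracked until its work is done
    return 0;
}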

Code threading breaks up a software task into subtasks called "threads," which run
concurrently and independently. You should choose a threaded programming model that suits
the parallelism inherent to the application. When there are a number of independent tasks that
run in parallel, the application is suited to functional decomposition. Explicit threading is
usually best for functional decomposition. When there is a large set of independent data that
must be processed through the same operation, the application is suited to data
decomposition. Compiler-directed methods, such as OpenMP, are designed to express data
parallelism.
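For the data-decomposition case, the following is a minimal OpenMP sketch (assuming a compiler with OpenMP support, typically enabled with a flag such as -fopenmp); the data values and the squaring operation are arbitrary examples. The compiler directive divides the independent loop iterations among threads.

#include <cstdio>
#include <vector>

int main() {
    std::vector<double> data(1000000, 2.0);   // a large set of independent data

    // Data decomposition: every element receives the same independent operation,
    // and the directive asks the compiler to share the iterations among threads.
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(data.size()); ++i)
        data[i] = data[i] * data[i];

    std::printf("data[0] = %f\n", data[0]);
    return 0;
}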
Issues in maintaining enterprise data centers

Hardware and equipment outages, hard disk corruption, data corruption, etc.
Constant need to update patches and antivirus definitions
Hardware wears out and becomes obsolete
Temptation to upgrade to new versions of software
Regular monitoring of traffic, email spam, attacks, etc.
Tools required to monitor these are expensive
Data needs to be cleaned up regularly to free space
Need to keep adding more hard disk racks to add more space
Cost of maintenance of hardware and software
People and building costs add up
Training costs for people to maintain new tools and hardware

Grid Computing
Grid computing is a distributed architecture of large numbers of computers connected to
solve a complex problem. In the grid computing model, servers or personal computers run
independent tasks and are loosely linked by the Internet or low-speed networks. Computers
may connect directly or via scheduling systems.
The grid infrastructure forms the core foundation for successful grid applications.
This infrastructure is a complex combination of a number of capabilities and resources
identified for the specific problem and environment being addressed.
In the early development stages of grid applications, numerous vertical "towers" and middleware solutions were often developed to solve Grid Computing problems. These middleware and solution approaches were developed for fairly narrow and limited problem-solving domains, such as middleware for numerical analysis, customized data access grids, and other narrow problems. Today, with the emergence and convergence of grid service-oriented technologies, including interoperable XML-based solutions, and with industry providers offering a number of reusable grid middleware solutions that address the requirement areas listed below, it is becoming simpler to deploy valuable solutions quickly.
In general, a Grid Computing infrastructure component must address several
potentially complicated areas in many stages of the implementation. These areas are:
Security
Resource management
Information services
Data management

Cloud computing service

Infrastructure as a Service (IaaS)


The infrastructure layer builds on the virtualization layer by offering the virtual
machines as a service to users. Instead of purchasing servers or even hosted services, IaaS
customers can create and remove virtual machines and network them together at will. Clients
are billed for infrastructure services based on what resources are consumed. This eliminates
the need to procure and operate physical servers, data storage systems, or networking
resources.
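As a rough illustration of being "billed for what resources are consumed", the sketch below estimates a monthly IaaS charge from assumed pay-per-use rates; the prices and resource sizes are invented for the example and do not reflect any actual provider's pricing.

#include <iostream>

int main() {
    // Assumed example figures (not real provider pricing):
    double vm_hours = 3 * 24 * 30;        // three virtual machines for a 30-day month
    double rate_per_vm_hour = 0.05;       // hypothetical price per VM-hour
    double storage_gb = 500;              // provisioned block storage
    double rate_per_gb_month = 0.02;      // hypothetical price per GB-month

    // Pay-per-use: the bill reflects only the resources actually consumed.
    double bill = vm_hours * rate_per_vm_hour + storage_gb * rate_per_gb_month;
    std::cout << "Estimated monthly bill: $" << bill << "\n";
    return 0;
}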
Platform as a Service (PaaS)
The platform layer rests on the infrastructure layer's virtual machines. At this layer customers do not manage their virtual machines; they merely create applications within an existing API or programming language. There is no need to manage an operating system, let alone the underlying hardware and virtualization layers. Clients merely create their own programs, which are hosted by the platform services they are paying for.
Software as a Service (SaaS)
Services at the software level consist of complete applications that do not require
development. Such applications can be email, customer relationship management, and other
office productivity applications. Enterprise services can be billed monthly or by usage, while software as a service offered directly to consumers, such as email, is often provided for free.
Deployment models of cloud computing

The Private Cloud


This model doesn't bring much in terms of cost efficiency: it is comparable to buying, building and managing your own infrastructure. Still, it brings tremendous value from a security point of view. During their initial adoption of the cloud, many organizations face challenges and have concerns related to data security. These concerns are taken care of by this model, in which hosting is built and maintained for a specific client. The infrastructure required for hosting can be on-premises or at a third-party location.
Security concerns are addressed through secure-access VPN or by the physical location within the client's firewall system.
Furthermore, for mission-critical applications we need to consider downtime in
terms of internet availability, quality and performance. Hence, hosting the application
with an on-premises private cloud is the suggested approach.
In addition to security reasons, this model is adopted by organizations in cases
where data or applications are required to conform to various regulatory standards such
as SOX, HIPAA, or SAS 70, which may require data to be managed for privacy and
audits that govern the corporation. For example, for the healthcare and pharmaceutical
industries, moving data to the cloud may violate the norms. Similarly, different countries have different laws and regulations for managing and handling data, which can impede the business if the cloud is hosted in a different jurisdiction.
Public Cloud
The public cloud deployment model represents true cloud hosting. In this
deployment model, services and infrastructure are provided to various clients. Google is
an example of a public cloud. This service can be provided by a vendor free of charge or
on the basis of a pay-per-user license policy.
This model is best suited for business requirements wherein it is required to
manage load spikes, host SaaS applications, utilize interim infrastructure for developing
and testing applications, and manage applications which are consumed by many users
that would otherwise require large investment in infrastructure from businesses.
This model helps to reduce capital expenditure and bring down operational IT
costs.
Hybrid Cloud
This deployment model helps businesses to take advantage of secured
applications and data hosting on a private cloud, while still enjoying cost benefits by
keeping shared data and applications on the public cloud. This model is also used for
handling cloud bursting, which refers to a scenario where the existing private cloud
infrastructure is not able to handle load spikes and requires a fallback option to support
the load. Hence, the cloud migrates workloads between public and private hosting
without any inconvenience to the users.
Many PaaS deployments expose their APIs, which can be further integrated with
internal applications or applications hosted on a private cloud, while still maintaining
the security aspects. Microsoft Azure and Force.com are two examples of this model.
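The following is a minimal sketch of the cloud-bursting decision described above, assuming a fixed private-cloud capacity; the capacity figure, the simulated load levels and the route_request function are hypothetical. Requests stay on the private cloud until a load spike exceeds its capacity, after which the overflow falls back to the public cloud.

#include <iostream>
#include <string>

// Hypothetical router: keep work on the private cloud while capacity allows,
// then fall back ("burst") to the public cloud for the overflow.
std::string route_request(int current_private_load, int private_capacity) {
    return (current_private_load < private_capacity) ? "private cloud" : "public cloud";
}

int main() {
    const int private_capacity = 100;                   // assumed concurrent-request limit
    const int simulated_loads[] = {50, 99, 100, 250};   // load levels during a spike

    for (int load : simulated_loads) {
        std::cout << "At load " << load << ", the next request goes to the "
                  << route_request(load, private_capacity) << "\n";
    }
    return 0;
}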

Community Cloud
In the community deployment model, the cloud infrastructure is shared by several
organizations with the same policy and compliance considerations. This helps to further reduce costs as compared to a private cloud, as the infrastructure is shared by a larger group.
Various state-level government departments requiring access to the same data relating to
the local population or information related to infrastructure, such as hospitals, roads,
electrical stations, etc., can utilize a community cloud to manage applications and data.
Cloud computing is not a silver-bullet technology; hence, investment in any
deployment model should be made based on business requirements, the criticality of the
application and the level of support required.

NIST reference architecture of cloud computing

The Conceptual Reference Model

Figure 1 presents an overview of the NIST cloud computing reference architecture, which identifies the major actors, their activities and functions in cloud computing. The diagram depicts a generic high-level architecture and is intended to facilitate the understanding of the requirements, uses, characteristics and standards of cloud computing.
Figure 1: The Conceptual Reference Model

As shown in Figure 1, the NIST cloud computing reference architecture defines five
major actors: cloud consumer, cloud provider, cloud carrier, cloud auditor and cloud broker.
Each actor is an entity (a person or an organization) that participates in a transaction or
process and/or performs tasks in cloud computing.

Cloud Consumer: A person or organization that maintains a business relationship with, and uses service from, Cloud Providers.
Cloud Provider: A person, organization, or entity responsible for making a service available to interested parties.
Cloud Auditor: A party that can conduct independent assessment of cloud services, information system operations, performance and security of the cloud implementation.
Cloud Broker: An entity that manages the use, performance and delivery of cloud services, and negotiates relationships between Cloud Providers and Cloud Consumers.
Cloud Carrier: An intermediary that provides connectivity and transport of cloud services from Cloud Providers to Cloud Consumers.

Pros:
1. Cloud Computing has lower software costs. With Cloud Computing a lot of
software is paid on a monthly basis which when compared to buying the software in
the beginning, software through Cloud Computing is often a fraction of the cost.
2. Eventually your company may want to migrate to a new operating system; the associated costs of migrating to a new operating system are often less than in a traditional server environment.
3. Centralized data- Another key benefit with Cloud Computing is having all the data
(which could be for multiple branch offices or project sites) in a single location "the
Cloud".
4. Access from anywhere- never leave another important document back at the office.
With Cloud computing and an Internet connection, your data are always nearby, even
if you are on the other side of the world.
5. Internet connection is required for Cloud Computing. You must have an Internet connection to access your data.
Cons
1. Internet Connection Quality & Cloud Computing
1. Low Bandwidth -If you can only get low bandwidth Internet (like dial-up) then
you should not consider using Cloud Computing. Bandwidth is commonly referred
to as "how fast a connection is" or what the "speed" of your Internet is. The
bandwidth to download data may not be the same as it is to send data.
2. Unreliable Internet connection -If you can get high speed Internet but it is
unreliable (meaning your connection drops frequently and/or can be down for long
periods at a time), depending on your business and how these outages will impact
your operations, Cloud Computing may not be for you (or you may need to look into
a more reliable and/or additional Internet connection).
2. Your company will still need a Disaster Recovery Plan, and if you have one now,
it will need to be revised to address the changes for when you are using Cloud
Computing.

Cloud Ecosystem


Cloud ecosystem is a term used to describe the complex system of interdependent
components that work together to enable cloud services.
The Cloud Ecosystem Reference Model serves as an abstract foundation for the
instantiations of architectures and business solutions of an enterprise. It defines a flexible and
agile collaborative enterprise Cloud Ecosystem. It also provides for an effective digital
customer experience for sharing business information securely regardless of its underlying
data location.
The Cloud Ecosystem Reference Model ensures consistency and applicability of
Cloud Services within a wide variety of Enterprise Architecture management
frameworks. Figure 1 describes the relationships and dependencies between the various
enterprise frameworks to manage the life cycle of Cloud Services utilizing the Architecture
Building Blocks (ABBs) identified in the Cloud Ecosystem Reference Model to deliver
enterprise business solutions. Please refer to the TOGAF standard for further explanation of
the concepts associated with Architecture Development Methods (ADMs) and management
of frameworks.

Managing Frameworks of an Enterprise Cloud Ecosystem

The Cloud Ecosystem Reference Model defines the major actors and their
relationships and a minimum set of ABBs. The model describes the architectural capabilities
to be realized and facilitated by at least one of the new or existing participants of an
enterprise Cloud Ecosystem. The model establishes a common language for the various
participants of an enterprise Cloud Ecosystem that supports the validation of Cloud Service Providers' solutions to achieve architectural integrity of the business solutions of an enterprise.

Architecture of P2P

Peer-to-Peer architecture (P2P architecture) is a commonly used computer networking architecture in which each workstation, or node, has the same capabilities and responsibilities. It is often compared and contrasted with the classic client/server architecture, in which some computers are dedicated to serving others.
P2P may also be used to refer to a single software program designed so that each
instance of the program may act as both client and server, with the same responsibilities and
status.
P2P networks have many applications, but the most common is for content
distribution. This includes software publication and distribution, content delivery networks,
streaming media and peer-casting for multicasting streams, which facilitates on-demand
content delivery. Other applications involve science, networking, search and communication
networks. Even the U.S. Department of Defense has started researching applications for P2P
networks for modern network warfare strategies.
P2P architecture is often referred to as a peer-to-peer network.
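To illustrate, without real networking, how each instance of a P2P program can act as both client and server, the sketch below defines a hypothetical Peer class that both stores content it can serve to other peers and fetches content it lacks from them, caching what it fetches so it can serve it in turn.

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical Peer: every instance acts as a server (publish/serve)
// and as a client (fetch) at the same time.
class Peer {
public:
    // Server role: hold content that other peers may request.
    void publish(const std::string& file, const std::string& data) { store_[file] = data; }
    bool serve(const std::string& file, std::string& out) const {
        auto it = store_.find(file);
        if (it == store_.end()) return false;
        out = it->second;
        return true;
    }

    // Client role: ask the other peers for content not held locally,
    // then cache it so this peer can serve it too.
    std::string fetch(const std::string& file, const std::vector<Peer*>& peers) {
        std::string data;
        if (serve(file, data)) return data;          // already stored locally
        for (Peer* p : peers)
            if (p != this && p->serve(file, data)) {
                publish(file, data);
                return data;
            }
        return "<not found>";
    }

private:
    std::map<std::string, std::string> store_;
};

int main() {
    Peer a, b;
    b.publish("song.mp3", "...audio bytes...");

    std::vector<Peer*> peers = {&a, &b};
    std::cout << "Peer A fetched: " << a.fetch("song.mp3", peers) << "\n";
    // Having cached the file, Peer A can now serve it to further peers as well.
    return 0;
}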

Grid computing is a form of distributed computing whereby a "super and virtual computer"
is composed of a cluster of networked, loosely coupled computers, acting in concert to
perform very large tasks.

Focus on architecture issues
Propose a set of core services as basic infrastructure, used to construct high-level, domain-specific solutions (diverse)

Design principles
Keep participation cost low
Enable local control
Support for adaptation
IP hourglass model
Challenges

Security and Trust
Customer SLA
Cost/Performance comparison
Dynamic VM migration
Unique Universal IP
Clouds Interoperability
Data Protection & Recovery
Standards: Security, SLA, VMs
Management Tools
Integration with Internal Infrastructure
Small, compact, economical applications
Cost/Performance prediction and measurement
Keep it Transparent and Simple

On-Demand Computing
On-demand (OD) computing is an increasingly popular enterprise model in which
computing resources are made available to the user as needed. The resources may be
maintained within the user's enterprise, or made available by a service provider.

Benefits of Cloud Computing On-Demand

Instant Access
Stand up new infrastructure in minutes
No upfront resource size, capacity or cost commitments
Self-service provisioning

Resizable Resources
Virtually unlimited resource pool
Custom builds and live reallocations of any ratio of CPU, memory and storage
Flexibility of multiple virtual data centres

Value and Assurance
Turn static capital costs into variable, pay-as-you-go operating expenses
Leverage your same VMware management tools, templates and processes
Run your applications onsite, offsite or in any combination, with the ability to cancel at any time
