
Enterprise Computing: A Retrospective

1.1 Introduction

Enterprise computing refers to business-oriented IT resources that can be customized by
users according to their requirements and delivered over the internet. Enterprise computing is
usually seen as a collection of large-scale business software solutions to common problems
such as resource management and streamlining processes.

Enterprise computing is sometimes sold to business users as an entire platform that can
be applied broadly across an organization and then further customized by users within
each area. This means the analytics, reporting, database management and other
applications are standard across the system, while the application packages being used
and the data being accessed in each area will be different. In this sense, enterprise
computing is a departure from finding single software solutions to specific business
problems, such as inventory or accounting software. Instead, enterprise computing is
intended to offer integrated solutions to these problems.

1.2 Mainframe Architecture


Centralized architecture, with high computing power and a concentration of computing
resources, and distributed computing architecture are the two main directions in which IT
technology has moved or progressed over time. Here we look at two technologies that
governed or decided the course and future of digitization: mainframes and cloud computing.

Mainframe computers, also referred to as ‘big iron’, are used by large organizations for
critical applications and bulk data processing. They support massive throughput, hot
swapping of hardware such as hard disks and memory, backward compatibility with older
hardware, extensive input/output facilities, batch-mode operation, virtualization, and so on.
Mainframe computers are characterized by their large size, high processing power, and the
multiple peripherals attached to them.
Chapter 2: Evolution of Computing
2.1 Internet Technology and Web-Enabled Applications

Internet Technology

The group of technologies that allows users to access information and communicate over the
World Wide Web (web browsers, FTP, e-mail, associated hardware, Internet service providers,
and so forth) is called Internet technology. The technical field of Internet technologies includes
the knowledge and abilities required to create web-based, cloud-based, mobile and
e-commerce applications and systems.

Studying Internet technology together with cloud-based computing systems also expands job
prospects in roles such as AWS architect, cloud infrastructure support, and cloud systems
analyst.

Web-Enabled Applications

A product or service that may be used through or in conjunction with the World Wide Web is
referred to as being "web enabled." A Web-enabled product can connect to other Web-based
applications to synchronize data or be accessible through a Web browser.

Web-enabled applications often take the place of traditional desktop versions of software.
They help organizations run their sites smoothly, which may include e-commerce or
line-of-business functions, Enterprise Resource Planning and Customer Relationship
Management. For example, web applications include online forms, shopping carts, word
processors, spreadsheets, video and photo editing, file conversion, file scanning, and email
programs such as Gmail, Yahoo and AOL. The most popular suites include Google Apps and
Microsoft 365. The main advantages of web-enabled applications are:

• Web applications can run on multiple platforms regardless of OS or device, as long
as the browser is compatible

• All users access the same version, eliminating any compatibility issues

• They are not installed on the hard drive, so they eliminate space limitations.

• They reduce software piracy in subscription-based web applications (i.e. SaaS)

• They reduce costs for both the business and end user as there is less support and
maintenance required by the business and lower requirements for the end user’s computer.
2.2 Web application servers
A web server is software and hardware that uses HTTP (Hypertext Transfer Protocol) and other
protocols to respond to client requests made over the World Wide Web. The main job of a web
server is to serve website content by storing, processing and delivering webpages to users; a
minimal sketch follows the list of common web servers below.

There are a number of common web servers available, some including:

· Apache HTTP Server.

· Microsoft Internet Information Services (IIS).

· Nginx.

· Lighttpd.

· Sun Java System Web Server.
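As a rough illustration of what any of these web servers does at its core (storing and delivering content over HTTP), the following minimal sketch uses Python's standard-library http.server module to serve files from the current directory. The port number is arbitrary; production servers such as Apache or Nginx add virtual hosts, TLS, caching, and much more.

```python
# Minimal sketch of a web server: deliver files from the current directory over HTTP.
from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == "__main__":
    httpd = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
    print("Serving http://localhost:8000/ ...")
    httpd.serve_forever()
```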

An application server is a server that hosts applications or software that delivers a business
application through a communication protocol. An application server framework is a service
layer model. It includes software components available to a software developer through an
application programming interface. An application server may have features such as clustering,
fail-over, and load-balancing. The goal is for developers to focus on the business logic.

Web application servers are used for media delivery, business logic, web server integration,
APIs, asynchronous JavaScript, mobile application serving, EC2 deployment and desktop
applications.
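To contrast the application server idea (business logic exposed through a communication protocol) with plain file serving, here is a minimal WSGI sketch using only Python's standard library. The "order total" logic and the port are invented for illustration; real application servers add clustering, fail-over and load balancing on top of this idea.

```python
# Minimal sketch of an application server: business logic exposed over HTTP via WSGI.
import json
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Business logic: compute an order total from line items (hypothetical data).
    items = [19.99, 5.50, 3.25]
    body = json.dumps({"items": items, "total": round(sum(items), 2)}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    make_server("", 8080, application).serve_forever()
```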

2.3 Overview of Computing Paradigms

➔ Grid Computing
Grid computing is a group of computers connected over a network to perform a task
that would be difficult for a single computer. The task demands high computing power, and the
computers work on the same protocol while acting as a virtual supercomputer. The
computers are connected using Ethernet or the Internet and contribute resources or processing
power.

➔ Cluster Computing
Cluster computing is a group of similar devices connected in a network, working together
to perform a task so that they appear as a single system.

➔ Distributed Computing
Distributed computing is the technique of connecting computer servers into a cluster to
share processing power and perform a task divided into smaller parts, coordinated through
message passing (a minimal sketch appears after this list).

➔ Utility Computing
Utility computing is a service-based model that makes computing resources,
infrastructure management and technical services available to customers according to their
needs.

➔ Cloud Computing
Cloud computing is the availability of IT resources over the Internet using a pay-as-you-go
model. Customers can choose whichever services they want, pay only for what they use, and
go global in minutes.
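As a rough, single-machine stand-in for the distributed computing idea mentioned above (splitting a task into smaller parts and combining partial results), the sketch below uses Python's multiprocessing module. Real distributed systems exchange these messages between separate servers over a network rather than between local worker processes.

```python
# Single-machine sketch of dividing a task into smaller parts and combining results.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker handles one slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]        # divide the task into 4 parts
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)   # results are passed back to the coordinator
    print(sum(partials) == sum(data))              # True: same answer, computed in parallel
```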

2.4 Internet of Services


Internet of Services is the marketplace of worldwide web-based services where one can have
everything that one needs to develop, run, store, and communicate web applications through the
use of the Internet.

2.5 Adopting Cloud Computing in Business


Cloud computing is booming in today's world, and in the near future almost every company
worldwide will be on the cloud, as it has many benefits for business enterprises. Business
enterprises seek profit, and cloud computing provides substantial returns without much extra effort.
The benefits of adopting cloud computing in business are stated below:
★ Pay-as-you-use
Pricing is based on the resources actually used, so no money is wasted on
unused resources.

★ No need to setup data centers


Cloud computing is delivered over the Internet, so there is no need to invest in physical
data centers or deal with the problems of managing them.
★ Flexible and Scalable
There is no need to guess how many resources we need: cloud computing is scalable,
expanding as our demand increases and shrinking as it decreases. Also, any
number of users can use our service without creating a problem.

★ Global in a minute
Cloud Computing helps the business to go global in a minute with just a few clicks and
people worldwide will be using the service.

★ Upgrades and Maintenance


Cloud computing helps in upgrading and maintaining our services without much hassle;
we don't need to think about maintenance and can just focus on our product, while the rest is
handled by the cloud provider.

Chapter 3
Enterprise Architecture: Role and Evolution
-Shivaji Pandit Chhetri
SEC075BEI006
3.1 Enterprise data and Processes

3.1.1 Enterprise data


Enterprise data is the totality of the digital information flowing through an
organization. This includes structured data, such as records in spreadsheets and
relational databases, and unstructured data, such as images and video content.
Some examples include:

● Operational data, such as customer orders and transaction records, billing


and accounting systems, or internal labor statistics
● Network alerts and logs used in managing IT infrastructure, by cybersecurity
teams, or by application developers
● Strategic data from customer relationship management (CRM) systems,
sales reporting, trend and opportunity analyses, or external sources of market
data
● Application-specific data, including GPS data for logistics or transportation
companies, sensor data for IoT businesses, weather data for news
organizations, or web content for social media applications.

Organizations today struggle to adopt, integrate, and manage the enterprise data
that moves through their systems. Data may be the currency that drives business,
but without a holistic enterprise data management strategy, businesses are unable to
harness its full value.

3.1.2 Enterprise Processes

Enterprise process management, also known as business process management,


is a method that organizes and implements all of the activities in an organization
in a structured way. This aligns them with organizational goals and maximizes
integration across different functions and processes. There are many advantages
of implementing enterprise process management, some of which are mentioned
below:
● First, it’s a great alternative to carrying out manual, paper, or
spreadsheet-based processes that are time-consuming, inefficient, and
inaccurate.
● It’s also incredibly difficult to cross-reference information and data when
working with manual systems. Therefore, team members generally fall back
on ad hoc communication that can be prone to misinterpretation and
inconsistencies. A good enterprise process management system can
eliminate these headaches by seamlessly integrating tasks across different
areas, meaning your processes fit together and flow easily.
● It enables a business to link all of its procedures back to the broader
company goals and strategy. Thus, every activity is adding value and is
contributing positively towards the organization’s objectives. Since the best
enterprise process management systems make all tasks and processes
viewable and open to scrutiny, potential issues can be picked up quickly and
rectified–be they halted processes, duplicate tasks, or redundant activities.

3.2 Enterprise Component

Enterprise components correspond to platforms such as the front end, the back end,
cloud-dependent delivery, and the underlying network. A cloud computing framework is
broadly categorized into three components: clients, distributed servers
and data centres.
For the operation of enterprise computing, the following three components
play a major role, and their responsibilities can be described
clearly as below:

a)Clients

Clients in cloud computing are, in general, similar to the clients in the operation of Local Area
Networks (LANs). They are typically the desktops sitting on users' desks, but they might also take
the form of laptops, mobiles or tablets to enhance mobility. Clients hold the responsibility of
interaction, which drives the management of data on cloud servers.

b)Datacentre
It is an array of servers that houses the subscribed applications. Progress in the IT
industry has brought the concept of virtualizing servers, where the software can
be installed through the utilization of various instances of virtual servers. This
approach streamlines the process of managing dozens of virtual servers on multiple
physical servers.

c)Distributed Servers

These are servers that are housed in other locations, so the
physical servers might not all be housed in the same place. Even though the distributed
servers and the physical servers appear to be in different locations, they perform as if
they were right next to each other.

A further component is cloud applications, which can be seen as cloud
computing expressed in the form of software architecture. Cloud applications are delivered as a
service that relies on both the hardware and the software architecture.

3.3 Application Integration and SOA

3.3.1 Application Integration :

Application integration is the process of enabling individual applications,
each designed for its own specific purpose, to work with one another. By
merging and optimizing data and workflows between multiple software
applications, organizations can achieve integrations that modernize their
infrastructures and support agile business operations.

Application integration helps bridge the gap between existing on-premises


systems and fast-evolving cloud-based enterprise applications. Through
seamlessly interconnected processes and data exchanges, application
integration allows enterprises to orchestrate a variety of functions across
their entire infrastructures, enabling businesses to operate more effectively
and efficiently.

Application integration concepts:

When an organization considers moving forward with application


integration, there are various components required to orchestrate
processes between two or more applications successfully.

● Application Programming Interface (API)

An API is a set of functions and procedures that specify how software
components should interact. They allow developers to easily and
quickly access the functionality of other software through well-defined
data structures and have, as a result, become a popular way of
integrating applications, data, and services in recent years.

● Events and actions

An event is an occurrence in your connected applications—such as a
payment being received. An event then triggers an action or series of
actions, which can include standard functionality—like creating,
retrieving, or updating datasets—and be application-specific—such
as a new case being created in Salesforce (see the sketch after this list).
● Data mapping

Data mapping specifies the information exchange that's to be used.


For example, when you complete and submit contact forms in one
application, this event can trigger actions that map those form fields
to other corresponding datasets on other applications, categorizing
the information entered into first name, last name, status, etc. This
simplifies the process of exporting data for easier grouping and
analysis.
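The following sketch ties the events/actions and data mapping concepts together. The field names, the "create CRM contact" action and the event name are all invented for illustration; a real integration would call each application's actual API instead of printing.

```python
# Hypothetical sketch: an event triggers registered actions, and data mapping
# translates source form fields into the target application's schema.
FIELD_MAP = {"fname": "first_name", "lname": "last_name", "state": "status"}

def map_fields(form_data):
    """Data mapping: rename source fields to the target schema."""
    return {FIELD_MAP.get(key, key): value for key, value in form_data.items()}

ACTIONS = []  # actions to run whenever an event fires

def register(action):
    ACTIONS.append(action)
    return action

@register
def create_crm_contact(record):
    # Placeholder for a real API call (e.g. creating a case in a CRM system).
    print("creating CRM contact:", record)

def fire(event_name, payload):
    """An event (such as 'form_submitted') triggers every registered action."""
    record = map_fields(payload)
    for action in ACTIONS:
        action(record)

fire("form_submitted", {"fname": "Ada", "lname": "Lovelace", "state": "new"})
```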

Benefits of application integration:

There are many complexities that integration can resolve, but what are the
benefits? Integration provides value both on an organizational level as well
as an operational level if you choose the right integration tool.

Organizational benefits:

Integrating your applications across various clouds is an important step


toward synchronizing your data. However, you need an integration tool that
allows deployment of integration runtimes within multiple clouds. This
allows you to deploy close to your applications, resulting in lower latency
times as processes run directly within the cloud and lower costs from not
needing to move data in and out of platforms.

Operational benefits:

The right application tool can also yield important timesaving, cost-cutting,
and performance-enhancing operational benefits:
● Access any data anywhere: With organizations diversifying their
application landscape (e.g., adopting SaaS applications, building new
solutions in the cloud) data is increasingly dispersed across multiple
environments. Integration tools that deploy across these
environments enable access from any system to any sort of data in
any format.
● Resolve ‘endpoint individuality’: Each system or application has its
own idiosyncrasies that must be accounted for in any
integration—error handling, authentication protocols, load
management, performance optimization and more. Integration tools
that handle these factors ‘out of the box’ yield tremendous gains in
productivity over coding and a higher level of enterprise-class
resiliency.
● Let integrators focus on integration: Purpose-built tooling can help
integrators focus less on the surrounding infrastructure and more on
building business logic. By addressing error recovery, fault tolerance,
log capture, performance analysis, message tracing, and
transactional update and recovery, an integration tool enables users
to create integration flows more quickly, without requiring a deep knowledge
of the various platforms and domains.

Application integration use cases:

As more and more organizations concentrate on deploying agile integration


strategies, modernizing legacy systems is a primary focus. Industry-specific
examples include the following:

● Banking: By integrating customer accounts, loan application


services, and other back-end systems with their mobile app, a bank
can provide services via a new digital channel and appeal to new
customers.
● Manufacturing: Factories use hundreds or even thousands of devices
to monitor all aspects of the production line. By connecting the
devices to other systems (e.g., parts inventories, scheduling
applications, systems that control the manufacturing environment),
manufacturers can uncover insights that help them identify production
problems and better balance quality, cost, and throughput.
● Healthcare: By integrating a hospital patient’s record with an
electronic health record (EHR) system, anyone who treats the patient
has access to the patient’s history, treatments, and records from the
primary care physician and specialists, insurance providers, and
more. As the patient moves through different areas of the hospital,
the relevant caregivers can easily access the information they need
to treat the patient most effectively.

Organizations in any industry can leverage mission-critical systems through


integration:

● ERP systems: Enterprise resource planning (ERP) systems serve as


a hub for all business activities in the organization. By integrating
ERP with supporting applications and services, organizations can
streamline and automate mission-critical business processes, such
as payment processing, supply chain functions, sales lead tracking,
and more.
● CRM platforms: When combined with other tools and services,
customer relationship management (CRM) platforms can maximize
productivity and efficiency by automating a number of sales,
marketing, customer support, and product development functions.

3.3.2 Service Oriented Architecture (SOA)


In simple words, Service-Oriented Architecture (SOA) is a solution for
making two applications communicate with each other through a collection of
services. A service is self-contained and does not depend on the language in
which the consumer application is written. The concept of
Service-Oriented Architecture revolves around this exchange: the consumer application sends
the request to the service provider and, in return, the response is
received by the consumer. The connection between the service provider
and the service consumer is made through web services, which
transfer the request and respond with the data being
requested.

This is what Service-Oriented Architecture looks like:


Figure: SOA Architecture

There are three roles in each of the Service-Oriented Architecture building


blocks: service provider; service broker, service registry, service repository;
and service requester/consumer.

The service provider works in conjunction with the service registry, debating
the whys and hows of the services being offered, such as security,
availability, what to charge, and more. This role also determines the service
category and whether any trading agreements are needed.

The service broker makes information regarding the service available to


those requesting it. The scope of the broker is determined by whoever
implements it.

The service requester locates entries in the broker registry and then binds
them to the service provider. They may or may not be able to access
multiple services; that depends on the capability of the service requester.
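To make the three roles concrete, here is a deliberately simplified, in-process sketch of a broker/registry, a provider and a requester. The service name and the fixed exchange rate are invented, and a real SOA deployment would expose the service as a web service (for example SOAP or REST) rather than a local Python function.

```python
# In-process sketch of the three SOA roles.
class ServiceBroker:
    """Service broker/registry: publishes service entries so requesters can find them."""
    def __init__(self):
        self._registry = {}

    def publish(self, name, endpoint):   # called by the service provider
        self._registry[name] = endpoint

    def find(self, name):                # called by the service requester
        return self._registry[name]

class CurrencyProvider:
    """Service provider: owns and implements the service."""
    def convert(self, amount, rate=0.92):   # hypothetical fixed exchange rate
        return round(amount * rate, 2)

broker = ServiceBroker()
broker.publish("currency-conversion", CurrencyProvider().convert)

# Service requester: locate the entry in the broker, bind to it, and call it.
convert = broker.find("currency-conversion")
print(convert(100))   # -> 92.0
```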
Characteristics Of Service-Oriented Architecture:

While the defining concepts of Service-Oriented Architecture vary from


company to company, there are six key tenets that overarch the broad
concept of Service-Oriented Architecture. These core values include:

● Business value
● Strategic goals
● Intrinsic inter-operability
● Shared services
● Flexibility
● Evolutionary refinement

Each of these core values can be seen on a continuum from older format
distributed computing to Service-Oriented Architecture to cloud computing.

Benefits of SOA:

● Reliability
● Location Independence
● Scalability
● Platform Independence
● Loosely Coupled
● Reusability
● Agility
● Easy Maintenance

How Service-Oriented Architecture and Cloud Computing Work Together:

First, it's important to note that Service-Oriented Architecture can work with
or without cloud computing, although more and more businesses are
moving file storage to the cloud so it makes sense to use cloud computing
and Service-Oriented Architecture together.

Using cloud computing allows users to easily and immediately implement


services tailored to the requirements of their clients, “without needing to
consult an IT department.”

One downfall of using Service-Oriented Architecture and cloud computing


together is that some aspects of it are not evaluated, such as security and
availability. When using cloud computing, users are often at the mercy of
the provider.

One fairly major challenge businesses face when merging cloud
computing and Service-Oriented Architecture is the integration of existing
data and systems into the cloud solution. There needs to be continuity from
beginning to end in order for there to be a seamless transition. It's also
important to keep in mind that not every IT aspect can be outsourced to the
cloud; there are some things that still need to be done manually.
The Difference Between Service-Oriented Architecture and SaaS:

We’ve talked quite a bit about what Service-Oriented Architecture is and


how it can be used to advance your business. But there’s also SaaS
(Software as a Service), which can also be used to advance your business.
You may be wondering what SaaS is and how it differs from
Service-Oriented Architecture. In brief, the resources available through
SaaS are software applications. A key component is that the SaaS
infrastructure is “available to, but hidden, from users.” An advantage of
SaaS is that users don’t have to both install and maintain software, which
eliminates any complex requirements. With SaaS, the customer also
doesn’t require any up-front licensing, which leads to lower costs because
providers are only maintaining a single application.
3.4 Enterprise technical architecture

Enterprise architecture (EA) is the practice of designing a business with a


holistic view, considering all of its parts and how they interact.

t’s a way to optimize an enterprise’s performance, using a framework that


considers business goals, technology, and the current environment.

This section will discuss what EA is, the benefits it can provide, and a
framework you can use to get started.
Definition of Enterprise Architecture:

Enterprise architecture is the process of designing and managing an


enterprise’s information systems by creating a model that shows how the
enterprise works.

It aims to create a blueprint that outlines how the enterprise should be


structured and how its information systems should work. This model shows
an enterprise’s current and future target states.

Organizations use enterprise architecture models to help them design and manage


information systems that support business processes, corporate strategies,
and other enterprise needs.

Benefits of EA:

The benefits of enterprise architecture include the following:

● A blueprint for the enterprise that can be used to align IT with


business goals
● A way to optimize enterprise performance
● A way to manage enterprise information systems
● A way to improve enterprise communication and collaboration
● A way to facilitate enterprise change management processes
● A way to enhance enterprise innovation by understanding the
enterprise as a whole system.
The Goals of Enterprise Architecture:

The EA framework has three goals:

First, the goals of enterprise architecture are to create a blueprint that


outlines how the enterprise should be structured and how its information
systems should work. The enterprise architecture model shows an
enterprise’s current and future target states.

Second, enterprise architecture uses these models to design and manage information


systems that support business processes and meet other enterprise needs.

It can also help an enterprise achieve its goals by optimizing enterprise


performance. Enterprise architects can identify and address issues in a
company before they become a problem by understanding the company as
a whole system.

Additionally, it can improve communication and collaboration, facilitate


change management processes, and improve enterprise innovation.

What is the process of enterprise architecture?

There are eight steps in the process:

● Establish: Establish the business context.


● Define: Define the enterprise architecture vision and principles.
● Collect: Collect enterprise information and identify stakeholders.
● Assess: Assess the current state of the enterprise.
● Develop: Develop the future state for the enterprise.
● Create: Create an implementation plan.
● Manage: Manage the enterprise architecture change.
● Monitor: Monitor and review enterprise architecture.
3.5 Data Center Infrastructure: Coping with Complexity

A data center is a facility that provides shared access to applications and


data using a complex network, compute, and storage infrastructure.
Industry standards exist to assist in designing, constructing, and
maintaining data center facilities and infrastructures to ensure the data is
both secure and highly available.

Types Of Data Centers

Data centers vary in size, from a small server room all the way up to groups
of geographically distributed buildings, but they all share one thing in
common: they are a critical business asset where companies often invest in
and deploy the latest advancements in data center networking, compute
and storage technologies.

The modern data center has evolved from a facility containing an


on-premises infrastructure to one that connects on-premises systems with
cloud infrastructures where networks, applications and workloads are
virtualized in multiple private and public clouds.

● Enterprise data centers are typically constructed and used by a


single organization for their own internal purposes. These are
common among tech giants.
● Colocation data centers function as a kind of rental property where
the space and resources of a data center are made available to the
people willing to rent it.
● Managed service data centers offer aspects such as data storage,
computing, and other services as a third party, serving customers
directly.
● Cloud data centers are distributed and are sometimes offered to
customers with the help of a third-party managed service provider.

Evolution of the Data Center to the Cloud:

The fact that virtual cloud DC can be provisioned or scaled-down with only
a few clicks is a major reason for shifting to the cloud. In modern data
centers, software-defined networking (SDN) manages the traffic flows via
software. Infrastructure as a Service (IaaS) offerings, hosted on private and
public clouds, spin up whole systems on-demand. When new apps are
needed, Platform as a Service (PaaS) and container technologies are
available in an instant.

More companies are moving to the cloud, but it isn’t a leap that some are
willing to take. In 2019, it was reported that enterprises paid more annually
on cloud infrastructure services than they did on physical hardware for the
first time. However, an Uptime Institute survey found that 58% of
organizations say a lack of visibility, transparency, and accountability of
public cloud services keeps most workloads in corporate data centers.

Data Center Architecture Components:

Data centers are made up of three primary types of components: compute,


storage, and network. However, these components are only the tip of the
iceberg in a modern DC. Beneath the surface, support infrastructure is
essential to meeting the service level agreements of an enterprise data
center.
Data Center Computing

Servers are the engines of the data center. On servers, the processing and
memory used to run applications may be physical, virtualized, distributed
across containers, or distributed among remote nodes in an edge
computing model. Data centers must use processors that are best suited
for the task, e.g. general purpose CPUs may not be the best choice to
solve artificial intelligence (AI) and machine learning (ML) problems.

Data Center Storage

Data centers host large quantities of sensitive information, both for their
own purposes and the needs of their customers. Decreasing costs of
storage media increase the amount of storage available for backing up the
data either locally, remotely, or both. Advancements in non-volatile storage
media lower data access times. In addition, as with anything that is
software-defined, software-defined storage technologies increase staff
efficiency for managing a storage system.

Data Center Networks

Datacenter network equipment includes cabling, switches, routers, and


firewalls that connect servers together and to the outside world. Properly
configured and structured, they can manage high volumes of traffic without
compromising performance. A typical three-tier network topology is made
up of core switches at the edge connecting the data center to the Internet
and a middle aggregate layer that connects the core layer to the access
layer where the servers reside. Advancements, such as hyperscale
network security and software-defined networking, bring cloud-level agility
and scalability to on-premises networks.
Data Center Support Infrastructure:

Data centers are a critical asset that is protected with a robust and reliable
support infrastructure made up of power subsystems, uninterruptible power
supplies (UPS), backup generators, ventilation and cooling equipment, fire
suppression systems and building security systems.

Industry standards exist from organizations like the Telecommunications


Industry Association (TIA) and the Uptime Institute to assist in the design,
construction and maintenance of data center facilities. For instance, Uptime
Institute defines these four tiers:

● Tier I: Basic capacity, must include a UPS.


● Tier II: Redundant capacity and adds redundant power and cooling.
● Tier III: Concurrently maintainable and ensures that any component
can be taken out of service without affecting production.
● Tier IV: Fault tolerant, allowing any production capacity to be
insulated from ANY type of failure.

Data Center Security:

In addition to the building security systems supporting a data center facility


discussed above, DC networks require a thorough zero trust analysis
incorporated into any DC design. Data center firewalls, data access
controls, IPS, WAF and their modern equivalent Web Application & API
Protection (WAAP) systems need to be specified properly to ensure they
scale as needed to meet the demands of data center networks. In addition,
if you’re choosing a data storage or cloud services provider, it’s important
that you understand the security measures they use for their own DC.
Invest in the highest possible level of security to keep your information
safe.
Partnering with a data center security provider is a good way to accomplish
these goals. Check Point Maestro provides hyperscale security that scales
on-demand to meet an organization’s data center security requirements.
Name: Santosh Bhat
Roll No. : SEC075BEI003

Chapter 4: Cloud Concept

1. Cloud computing (NIST model), Properties,


Characteristics, Benefits

Cloud computing
Cloud computing is the delivery of different services through the Internet, including data storage,
servers, databases, networking, and software, with payment only for the time you
use the service.

Cloud
"The cloud" refers to servers that are accessed over the Internet, and the software and databases
that run on those servers. Cloud servers are located in data centers all over the world.

Computing
Computing is the process of using computer technology to complete a given goal-oriented task.

NIST Model of cloud computing


The National Institute of Standards and Technology (NIST) describes cloud computing as “a
model for enabling ubiquitous, convenient, on-demand network access to a
shared pool of configurable computing resources that can be rapidly provisioned and released
with minimal management effort or service provider interaction.”

Essential Cloud Computing Characteristics

● On-demand self-service
● Broad network access
● Resource pooling
● Rapid elasticity
● Measured service
Service Models

● Software as a Service (SaaS)


● Platform as a Service (PaaS)
● Infrastructure as a Service (IaaS)

Deployment Models

● Private cloud
● Community cloud
● Public cloud
● Hybrid cloud
Properties/Characteristics of cloud computing

On-demand self-service: consumers can unilaterally provision computing capabilities as needed


automatically without requiring human interaction with each service provider.

Broad network access: capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms.

Resource pooling: The provider’s computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand.

Rapid elasticity: capabilities can be elastically provisioned and released to scale rapidly outward
and inward commensurate with demand.

Measured service: cloud systems automatically control and optimize resource use by leveraging
a metering capability at some level of abstraction appropriate to the type of service.
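As a tiny illustration of the measured-service characteristic (metered use billed pay-per-use), the sketch below totals a bill from usage records. The rates and usage figures are made up for the example.

```python
# Illustration of measured service: usage is metered and billed pay-per-use.
RATES = {"vm_hours": 0.046, "gb_storage_months": 0.023}    # hypothetical price per unit

usage = {"vm_hours": 720, "gb_storage_months": 50}          # metered consumption

bill = sum(RATES[resource] * quantity for resource, quantity in usage.items())
print(f"monthly charge: ${bill:.2f}")
```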

Benefits of cloud computing

1. High Speed – Quick Deployment


The ability to spin up new cloud computing instances in a matter of seconds reshaped the agility
and speed of software development.
2. Automatic Software Updates and Integration

Continuous Integration and Continuous Delivery rely on the fact that new software versions can
be easily tested and deployed in the cloud environment, which allows for higher velocity of
product innovation, releasing more and more features to the end-users on a monthly, weekly and
in some cases even daily basis.

3. Efficiency and Cost Reduction


By using cloud infrastructure, you don’t have to spend huge amounts of money on purchasing
and maintaining equipment. You do not even need large IT teams to handle your cloud data
center operations, as you can enjoy the expertise of your cloud provider’s staff.
4. Data Security
Cloud offers many advanced security features that guarantee that data is securely stored and
handled. Cloud storage providers implement baseline protections for their platforms and the data
they process, such as authentication, access control, and encryption.

5. Scalability
Using the cloud is a great solution because it enables enterprises to efficiently and quickly
scale their IT resources up or down according to business demands. You can easily increase your
cloud capacity without having to invest in physical infrastructure. This level of agility can give
businesses using cloud computing a real advantage over competitors. This scalability also minimizes
the risks associated with in-house operational issues and maintenance.

6. Unlimited Storage Capacity


Related to the scalability benefit above, the cloud has essentially unlimited capacity to store any
type of data in various cloud data storage types, depending on the availability, performance and
frequency the data has to be accessed.

7. Back-up and Restore Data


The fact that data can be stored in the cloud without capacity constraints also helps with backup
and restore purposes. As end-users data changes over time and needs to be tracked for
regulations or compliance reasons, older software versions can be stored for later stages, in cases
they would be needed for recovery or rollback.

8. Disaster Recovery
Having previous versions of software stored in the cloud, and having production instances
running on multiple cloud availability zones or regions, allows for faster recovery from disasters.

9. Mobility
Cloud computing allows mobile access to corporate data via smartphones and other devices, which is
a great way to ensure that no one is ever left out of the loop. Resources in the cloud can be easily
stored, retrieved, recovered, or processed with just a couple of clicks. Users can access
their work on the go, 24/7, via any device of their choice, in any corner of the world, as long as
they stay connected to the internet.
Sarwesh Kattel
SEC075BCT013

4.2 Cloud Types

The internet cloud is generally categorized into three types:

Public cloud — data and other info delivered over the internet that can be shared with
various people and organizations.

Private cloud — data and other info that is only accessible to users within your
organization.

Hybrid cloud — a combination of the two. This environment uses public and private
clouds.

Public Cloud

Documents, apps, data, and anything else that doesn't dwell on a physical appliance—like
your computer, a server, a hard drive, etc.—are all considered to be living in the cloud. They
can only be accessed online and reside in sizable data warehouses. Although a public cloud
is more open than other types of clouds, that does not imply that everyone can access it. This
openness is what makes it the most popular type of cloud.

Benefits of Public Cloud


● Strong cyber security — It costs money to draw in the world's top engineers. The
average company cannot afford to hire large security staff and the best security
equipment. This issue is resolved by cloud computing. You gain from having highly
qualified IT specialists entirely in charge of safeguarding your public cloud
infrastructure.
● Advanced technology creates more security innovations — Modernized technology
has produced more sophisticated security services. The cloud's security innovations
are created expressly for cloud-based systems.
● Stringent penetration testing — Public clouds are subjected to more rigorous
standards and penetration checks than on-premise or private cloud services.
Penetration testing for private clouds frequently falls short because it is believed
that internal breaches are unlikely.
● Controlled access — Most data breaches are caused by human error. Critics
contend that internal data storage enables better control, but the reality is quite
different. The likelihood of data stored in a public cloud being compromised by an
employee's error is lower: your risk decreases as direct human handling of your
information is reduced.
Private Cloud

An organization-specific cloud solution is known as the private cloud. The computing


resources are not shared with anyone else. The data center resources may be situated on
your property or off-site and under the management of a different provider. Your company
receives the computer resources via an exclusive, secure private network that is not shared
with any other clients.

Benefits of private cloud

● Exclusive, dedicated environments — The private cloud is for your use only. It
cannot be accessed by any other organizations.
● Somewhat scalable — The environment can be scaled as needed without tradeoffs.
It is highly efficient and able to meet unpredictable demands without compromising
security or performance.
● Customizable security — The private cloud complies with stringent regulations,
keeping data safe and secure through protocol runs, configurations, and measures
based on the company’s unique workload requirements.
● Highly efficient — The performance of a private cloud is reliable and efficient.
● Flexible — The private cloud is able to transform its infrastructure according to the
growing and changing needs of the organization.

Hybrid Cloud

An on-site data center, commonly referred to as a private cloud, and the strength of a
public cloud are combined to create a hybrid cloud, which is a computing environment.
This enables the two systems to exchange data and programs as necessary. A public cloud
solution, private cloud services, and on-premises infrastructure are all combined to form a
hybrid cloud, which is referred to as a mixed computing, storage, and service environment.
Amazon Web Services (AWS) and Microsoft Azure are two examples of this. Making the
most of your infrastructure budget by using this combination gives you a lot of flexibility
and control.
Benefits of hybrid cloud

Although cloud services might help you save a lot of money, their major benefit is in
enabling a digital corporate structure that is always evolving. Every technology
management team needs to concentrate on two key goals: the firm's IT demands and the
requirements for business transformation. Usually, IT works toward a budgetary objective.
The digital business transformation side, on the other hand, concentrates on fresh and
creative ways to boost revenue.

The agility of a hybrid cloud is its key advantage. Organizations need to be able to adapt
and shift course as rapidly as possible if they want to stay ahead of the competition. Every
organization is built on this fundamental idea, and a hybrid computing solution may help it
succeed. A business might want to combine on-premises resources with private and public
clouds to retain the agility needed to stay ahead in today’s world.

4.3 Service models IaaS, PaaS, SaaS

1. IAAS: Providing computing infrastructure as an on-demand service is possible with
Infrastructure as a Service (IAAS). It is one of the three essential cloud service models, supplying
the servers, storage, and networks on which workloads run. It may be used by a
customer who, rather than buying servers, software, data center space, or networking hardware, rents
those resources as a fully outsourced service. The resources are distributed as services, and
the model supports dynamic scalability. On a single piece of hardware, numerous users are typically
present.

2. PAAS: A cloud delivery approach for apps made up of services managed by a third party
is called Platform As A Service (PAAS). It offers elastic scaling for your application,
enabling developers to create online services and applications with public, private, and
hybrid deployment methods.

3. SAAS: Software as a Service (SAAS) is a model in which software is deployed as a hosted
service and accessed over the internet. It is a software
delivery model in which software and its associated data are hosted centrally and accessed
by clients, typically through a web browser, over the web. When creating and deploying
contemporary apps, SAAS services are utilized.
Difference between IAAS, PAAS and SAAS:

● Stands for: IAAS stands for Infrastructure as a Service, PAAS for Platform as a Service,
and SAAS for Software as a Service.

● Uses: IAAS is used by network architects, PAAS is used by developers, and SAAS is
used by end users.

● Access: IAAS gives access to resources like virtual machines and virtual storage; PAAS
gives access to the run-time environment and to deployment and development tools for
applications; SAAS gives access to the end user.

● Model: IAAS is a service model that provides virtualized computing resources over the
internet; PAAS is a cloud computing model that delivers tools used for the development
of applications; SAAS is a service model in cloud computing that hosts software and
makes it available to clients.

● Technical understanding: IAAS requires technical knowledge; PAAS requires some
knowledge for the basic setup; SAAS has no requirement for technical knowledge,
as the company handles everything.

● Popularity: IAAS is popular among developers and researchers; PAAS is popular among
developers who focus on the development of apps and scripts; SAAS is popular among
consumers and companies for uses such as file sharing, email and networking.

● Cloud services: Amazon Web Services, Sun and vCloud Express (IAAS); Facebook and
the Google search engine (PAAS); MS Office Web, Facebook and Google Apps (SAAS).

● Enterprise services: AWS Virtual Private Cloud (IAAS), Microsoft Azure (PAAS), and
IBM Cloud Analysis (SAAS).

● Outsourced cloud services: Salesforce (IAAS), Force.com and Gigaspaces (PAAS), and
AWS and Terremark (SAAS).

9.2 Cloud management platforms and tools

The integrated systems known as cloud management platforms provide for the
management of private, public, and hybrid cloud environments. Companies may monitor
and manage cloud environments, resources, and services with the use of these platforms.

The administrative capabilities and visibility of cloud cost management, cloud


infrastructure monitoring, cloud infrastructure automation, and other technologies are
provided by cloud management platforms. Additionally, they contain open-source software
components that offer a foundation for building and controlling both public and private
cloud infrastructure. Additionally, CMPs are made to integrate with a range of
infrastructure as a service (IaaS) solutions.
There are several cloud management solutions available right now, and more are being
released every day. These tools, whose functions are as varied as the companies who buy
them, are designed to assist enterprises in managing and monitoring all cloud applications.

As of 2021, some of the most popular cloud management products include:

● Apache Cloudstack
● BMC Helix Cloud Security
● CloudHealth by VMware
● Microsoft Azure Management Tools
● Morpheus by Morpheus Data
● Terraform Enterprise by HashiCorp
● Turbonomic

Customers have high expectations in the age of agility and digital transformation;
therefore, IT must develop and deliver services more quickly than before. By accelerating
digital innovation, lowering complexity, and maintaining and automating governance and
compliance regulations, cloud management systems must meet these objectives.

Region:

AWS clusters data centers in physical locations around the world. Each cluster of logical data
centers is referred to as an Availability Zone (AZ). Each AWS Region is made up of multiple, isolated,
and physically distinct AZs spread across a geographical area.

Availability zone:

Availability Zones are separate areas within an AWS Region that are designed to be isolated from
failures in other Availability Zones. Within the same AWS Region, they offer cheap, low-latency
network connectivity to the other Availability Zones. Each zone is fully autonomous.

Edge location:
Edge locations are AWS data centers designed to deliver services with the lowest latency
possible. Amazon has dozens of these data centers spread across the world. They're closer to
users than Regions or Availability Zones, often in major cities, so responses can be fast and
snappy.
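For reference, Regions and Availability Zones can also be listed programmatically. The sketch below uses the boto3 SDK and assumes AWS credentials are already configured locally (for example via `aws configure`); the chosen Region is arbitrary.

```python
# List AWS Regions and the Availability Zones of one Region with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Regions enabled for the account.
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print("Regions:", regions)

# Availability Zones inside the client's Region.
zones = [z["ZoneName"] for z in ec2.describe_availability_zones()["AvailabilityZones"]]
print("Availability Zones in us-east-1:", zones)
```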

Name: Anish Kumar Shah


Roll:SEC075BCT002

Amazon Ec2

Amazon Elastic Compute Cloud (EC2) is the Amazon Web Service used to create and
run virtual machines in the cloud. Using Amazon EC2 eliminates our need to invest in hardware
up front, so we can develop and deploy applications faster. We can use Amazon EC2 to deploy
as many servers as we want, and as per our needs we can control storage (increase or
decrease it), configure security and networking, and more.

Features of Amazon EC2


There are various features of EC2, as follows:
● Virtual computing environments, known as instances

● Preconfigured templates for our instances, known as Amazon Machine Images


● Various configurations of CPU, memory, storage, and networking capacity for our
instances.
● Secure login information for your instances using key pairs
● Storage volumes for temporary data that's deleted when we stop, hibernate, or terminate
our instance, known as instance store volumes
● Persistent storage volumes for our data using Amazon Elastic Block Store (Amazon
EBS), known as Amazon EBS volumes.

Benefits of Amazon Ec2:

● Elastic Web-Scale Computing. Amazon EC2 enables you to increase or decrease
capacity within minutes.
● Completely Controlled. You have complete control of your instances.
● Flexible Cloud Hosting Services.
● Designed for use with other Amazon Web Services.
● Reliable.
● Secure.
● Inexpensive.
● Easy to Start.

How to get started with Amazon EC2:

First, you need to go to the AWS Management Console, get set up to use Amazon EC2,
and follow the steps given below:

Step 1: Choose an Amazon Machine Image (AMI)


In this step, we can select an AMI provided by AWS, the user community, or the AWS
Marketplace, or we can select one of our own AMIs.

Step 2: Choose an Instance Type


Amazon EC2 provides a wide selection of instance types optimized to fit different use cases.
Instances are virtual servers that can run applications.
Step 3: Configure Instance Details
Configure the instance to suit your requirements. We can launch multiple instances from
the same AMI, request Spot instances to take advantage of the lower pricing, assign an
access management role to the instance, and more.
Step 4: Add Storage
Our instance will be launched with the following storage device settings. We can attach
additional EBS volumes and instance store volumes to our instance, or edit the settings of
the root volume.

Step 5: Add Tags


A tag consists of a case-sensitive key-value pair. Tags are commonly used for billing and
organization purposes. Tags will be applied to all instances and volumes.

Step 6: Configure Security Group


A security group is a set of firewall rules that control the traffic for our instance. On this
page, we can add rules to allow specific traffic to reach our instance.

Step 7: Review Instance Launch


This step consists of reviewing our instance launch details. We can also go back to
edit changes for each section.
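The same console steps can be performed from code. The boto3 sketch below launches one instance, roughly mirroring the steps above; the AMI, key pair and security group IDs are placeholders to be replaced with your own values.

```python
# Launch one EC2 instance with boto3 (all resource IDs are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # Step 1: Amazon Machine Image
    InstanceType="t2.micro",                     # Step 2: instance type
    KeyName="my-key-pair",                       # key pair for secure login
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{                       # Step 4: add storage
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 16, "VolumeType": "gp3"},
    }],
    TagSpecifications=[{                         # Step 5: add tags
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-server"}],
    }],
    SecurityGroupIds=["sg-0123456789abcdef0"],   # Step 6: security group
)
print("Launched:", response["Instances"][0]["InstanceId"])
```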
Name:Lekhnath Dhakal

Roll no:SEC075BCT006

Amazon Elastic Block Store (Amazon EBS) :

Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes
for use with EC2 instances. EBS volumes behave like raw, unformatted block
devices. You can mount these volumes as devices on your instances. EBS volumes
that are attached to an instance are exposed as storage volumes that persist
independently from the life of the instance. You can create a file system on top of
these volumes, or use them in any way you would use a block device (such as a
hard drive). You can dynamically change the configuration of a volume attached to
an instance. We recommend Amazon EBS for data that must be quickly accessible
and requires long-term persistence. EBS volumes are particularly well-suited for
use as the primary storage for file systems, databases, or for any applications that
require fine granular updates and access to raw, unformatted, block-level storage.
Amazon EBS is well suited to both database-style applications that rely on random
reads and writes, and to throughput-intensive applications that perform long,
continuous reads and writes.

Features of Amazon EBS

• You create an EBS volume in a specific Availability Zone, and then attach it to an
instance in that same Availability Zone. To make a volume available outside of the
Availability Zone, you can create a snapshot and restore that snapshot to a new
volume anywhere in that Region. You can copy snapshots to other Regions and
then restore them to new volumes there, making it easier to leverage multiple AWS
Regions for geographical expansion, data center migration, and disaster recovery.

• Amazon EBS provides the following volume types: General Purpose SSD,
Provisioned IOPS SSD, Throughput Optimized HDD, and Cold HDD. The
following is a summary of performance and use cases for each volume type.

• General Purpose SSD volumes (gp2 and gp3) balance price and
performance for a wide variety of transactional workloads. These volumes are ideal
for use cases such as boot volumes, medium-size single instance databases, and
development and test environments.

• Provisioned IOPS SSD volumes (io1 and io2) are designed to meet the
needs of I/O-intensive workloads that are sensitive to storage performance and
consistency. They provide a consistent IOPS rate that you specify when you create
the volume. This enables you to predictably scale to tens of thousands of IOPS per
instance. Additionally, io2 volumes provide the highest levels of volume durability.

• Throughput Optimized HDD volumes (st1) provide low-cost magnetic


storage that defines performance in terms of throughput rather than IOPS. These
volumes are ideal for large, sequential workloads such as Amazon EMR, ETL, data
warehouses, and log processing.

• Cold HDD volumes (sc1) provide low-cost magnetic storage that defines
performance in terms of throughput rather than IOPS. These volumes are ideal for
large, sequential, cold-data workloads. If you require infrequent access to your data
and are looking to save costs, these volumes provide inexpensive block storage.

• You can create your EBS volumes as encrypted volumes, in order to meet a wide
range of data-at-rest encryption requirements for regulated/audited data and
applications. When you create an encrypted EBS volume and attach it to a
supported instance type, data stored at rest on the volume, disk I/O, and snapshots
created from the volume are all encrypted. The encryption occurs on the servers
that host EC2 instances, providing encryption of data-in-transit from EC2 instances
to EBS storage.

• You can create point-in-time snapshots of EBS volumes, which are persisted to
Amazon S3. Snapshots protect data for long-term durability, and they can be used
as the starting point for new EBS volumes. The same snapshot can be used to
create as many volumes as needed. These snapshots can be copied across AWS
Regions.

• Performance metrics, such as bandwidth, throughput, latency, and average queue


length, are available through the AWS Management Console. These metrics,
provided by Amazon CloudWatch, allow you to monitor the performance of your
volumes to make sure that you are providing enough performance for your
applications without paying for resources you don't need.

Amazon EBS volumes:

An Amazon EBS volume is a durable, block-level storage device that you can
attach to your instances. After you attach a volume to an instance, you can use it as
you would use a physical hard drive. EBS volumes are flexible. For
current-generation volumes attached to current-generation instance types, you can
dynamically increase size, modify the provisioned IOPS capacity, and change
volume type on live production volumes.

You can use EBS volumes as primary storage for data that requires frequent
updates, such as the system drive for an instance or storage for a database
application. You can also use them for throughput intensive applications that
perform continuous disk scans. EBS volumes persist independently from the
running life of an EC2 instance.

You can attach multiple EBS volumes to a single instance. The volume and
instance must be in the same Availability Zone. Depending on the volume and
instance types, you can use Multi-Attach to mount a volume to multiple
instances at the same time.

Amazon EBS provides the following volume types: General Purpose SSD (gp2
and gp3), Provisioned IOPS SSD (io1 and io2), Throughput Optimized HDD (st1),
Cold HDD (sc1), and Magnetic (standard). They differ in performance
characteristics and price, allowing you to tailor your storage performance and cost
to the needs of your applications.
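As a sketch of the volume lifecycle described above, the boto3 snippet below creates an encrypted gp3 volume, attaches it to an instance in the same Availability Zone, and takes a snapshot. The instance ID and device name are placeholders, and credentials are assumed to be configured locally.

```python
# Create, attach, and snapshot an EBS volume with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 100 GiB encrypted gp3 volume in a specific Availability Zone.
volume = ec2.create_volume(AvailabilityZone="us-east-1a",
                           Size=100,
                           VolumeType="gp3",
                           Encrypted=True)
vol_id = volume["VolumeId"]

# Wait until the volume is available, then attach it to an instance in the same AZ.
ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])
ec2.attach_volume(VolumeId=vol_id,
                  InstanceId="i-0123456789abcdef0",   # placeholder instance ID
                  Device="/dev/sdf")

# Take a point-in-time snapshot, which is persisted to Amazon S3.
snapshot = ec2.create_snapshot(VolumeId=vol_id, Description="example backup")
print("Snapshot:", snapshot["SnapshotId"])
```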

Benefits of using EBS volumes:

EBS volumes provide benefits that are not provided by instance store volumes.

· Data availability:

When you create an EBS volume, it is automatically replicated within its
Availability Zone to prevent data loss due to failure of any single hardware
component. You can attach an EBS volume to any EC2 instance in the same
Availability Zone. After you attach a volume, it appears as a native block device
similar to a hard drive or other physical device. At that point, the instance can
interact with the volume just as it would with a local drive. You can connect to the
instance and format the EBS volume with a file system, such as ext3, and then
install applications. If you attach multiple volumes to an instance, you can stripe
data across the volumes for increased I/O and throughput performance. You can
attach io1 and io2 EBS volumes to up to 16 Nitro-based instances. Otherwise, you
can attach an EBS volume to a single instance. You can get monitoring data for
your EBS volumes, including root device volumes for EBS-backed instances, at no
additional charge.

· Data persistence

An EBS volume is off-instance storage that can persist independently from the life
of an instance. You continue to pay for the volume usage as long as the data
persists. EBS volumes that are attached to a running instance can automatically
detach from the instance with their data intact when the instance is terminated if
you uncheck the Delete on Termination check box when you configure EBS
volumes for your instance on the EC2 console. The volume can then be reattached
to a new instance, enabling quick recovery. If the Delete on Termination check box
is checked, the volume(s) will be deleted upon termination of the EC2 instance. If
you are using an EBS-backed instance, you can stop and restart that instance
without affecting the data stored in the attached volume. The volume remains
attached throughout the stop-start cycle. This enables you to process and store the
data on your volume indefinitely, using the processing and storage resources only
when required. The data persists on the volume until the volume is deleted
explicitly. The physical block storage used by deleted EBS volumes is overwritten
with zeroes before it is allocated to another account. If you are dealing with
sensitive data, you should consider encrypting your data manually or storing the
data on a volume protected by Amazon EBS encryption.

By default, the root EBS volume that is created and attached to an instance at
launch is deleted when that instance is terminated. You can modify this behavior by
changing the value of the DeleteOnTermination flag to false when you launch the
instance. This modified value causes the volume to persist even after the instance is
terminated, and enables you to attach the volume to another instance. By default,
additional EBS volumes that are created and attached to an instance at launch are
not deleted when that instance is terminated. You can modify this behavior by
changing the value of the DeleteOnTermination flag to true when you launch the
instance. This modified value causes the volumes to be deleted when the instance is
terminated.
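
As a hedged sketch of the flag described above (the AMI ID, device name, and instance type are placeholders, and boto3 with valid credentials is assumed), keeping the root volume after termination could be requested at launch like this:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# Launch an instance whose root volume survives instance termination.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",   # root device name depends on the AMI
            "Ebs": {"DeleteOnTermination": False},
        }
    ],
)
```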
· Data encryption

For simplified data encryption, you can create encrypted EBS volumes with the
Amazon EBS encryption feature. All EBS volume types support encryption. You
can use encrypted EBS volumes to meet a wide range of data-at-rest encryption
requirements for regulated/audited data and applications. Amazon EBS encryption
uses 256-bit Advanced Encryption Standard algorithms (AES-256) and an
Amazon-managed key infrastructure. The encryption occurs on the server that
hosts the EC2 instance, providing encryption of data-in-transit from the EC2
instance to Amazon EBS storage. Amazon EBS encryption uses AWS Key
Management Service (AWS KMS) master keys when creating encrypted volumes
and any snapshots created from your encrypted volumes. The first time you create
an encrypted EBS volume in a region, a default master key is created for you
automatically. This key is used for Amazon EBS encryption unless you select a
customer master key (CMK) that you created separately using AWS KMS.
Creating your own CMK gives you more flexibility, including the ability to create,
rotate, disable, define access controls, and audit the encryption keys used to protect
your data.
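
A minimal sketch of that workflow with boto3 and AWS KMS (the alias name, region, and Availability Zone below are made-up placeholders, and appropriate KMS permissions are assumed) might look like the following:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")  # hypothetical region
ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a customer-managed key and give it a friendly alias.
key_id = kms.create_key(Description="Demo key for EBS encryption")["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/ebs-demo-key", TargetKeyId=key_id)  # placeholder alias

# Create a volume encrypted with the customer-managed key instead of the default key.
ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=50,
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId=key_id,
)
```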

· Data security:

Amazon EBS volumes are presented to you as raw, unformatted block devices.
These devices are logical devices that are created on the EBS infrastructure and the
Amazon EBS service ensures that the devices are logically empty (that is, the raw
blocks are zeroed or they contain cryptographically pseudorandom data) prior to
any use or re-use by a customer. If you have procedures that require that all data be
erased using a specific method, either after or before use (or both), such as those
detailed in DoD 5220.22-M (National Industrial Security Program Operating
Manual) or NIST 800-88 (Guidelines for Media Sanitization), you have the ability
to do so on Amazon EBS. That block-level activity will be reflected down to the
underlying storage media within the Amazon EBS service.
· Snapshots

Amazon EBS provides the ability to create snapshots (backups) of any EBS
volume and write a copy of the data in the volume to Amazon S3, where it is stored
redundantly in multiple Availability Zones. The volume does not need to be
attached to a running instance in order to take a snapshot. As you continue to write
data to a volume, you can periodically create a snapshot of the volume to use as a
baseline for new volumes. These snapshots can be used to create multiple new EBS
volumes or move volumes across Availability Zones. Snapshots of encrypted EBS
volumes are automatically encrypted. When you create a new volume from a
snapshot, it's an exact copy of the original volume at the time the snapshot was
taken. EBS volumes that are created from encrypted snapshots are automatically
encrypted. By optionally specifying a different Availability Zone, you can use this
functionality to create a duplicate volume in that zone. The snapshots can be shared
with specific AWS accounts or made public. When you create snapshots, you incur
charges in Amazon S3 based on the volume's total size. For a successive snapshot
of the volume, you are only charged for any additional data beyond the volume's
original size. Snapshots are incremental backups, meaning that only the blocks on
the volume that have changed after your most recent snapshot are saved. If you
have a volume with 100 GiB of data, but only 5 GiB of data have changed since
your last snapshot, only the 5 GiB of modified data is written to Amazon S3. Even
though snapshots are saved incrementally, the snapshot deletion process is
designed so that you need to retain only the most recent snapshot.
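
As an illustrative sketch (volume and snapshot IDs and regions are placeholders; boto3 and credentials are assumed), taking a snapshot, restoring it into a new volume in another Availability Zone, and copying it to another Region could look like this:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Take a point-in-time snapshot of an existing volume; it is persisted to Amazon S3.
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0", Description="nightly backup")
snapshot_id = snap["SnapshotId"]
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])

# Restore the snapshot into a new volume in a different Availability Zone.
ec2.create_volume(SnapshotId=snapshot_id, AvailabilityZone="us-east-1b", VolumeType="gp3")

# Copy the snapshot to another Region (the copy call is made in the destination Region).
ec2_west = boto3.client("ec2", region_name="us-west-2")
ec2_west.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snapshot_id,
    Description="cross-Region copy for disaster recovery",
)
```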

· Flexibility

EBS volumes support live configuration changes while in production. You can
modify volume type, volume size, and IOPS capacity without service interruptions.

Amazon EBS volume types


Amazon EBS provides the following volume types, which differ in performance
characteristics and price, so that you can tailor your storage performance and cost
to the needs of your applications. The volume types fall into these categories:

• Solid state drives (SSD) — Optimized for transactional workloads involving
frequent read/write operations with small I/O size, where the dominant
performance attribute is IOPS.

• Hard disk drives (HDD) — Optimized for large streaming workloads where the
dominant performance attribute is throughput.

• Previous generation— Hard disk drives that can be used for workloads with
small datasets where data is accessed infrequently and performance is not of
primary importance. We recommend that you consider a current generation volume
type instead.

Elastic File System:


Amazon Elastic File System (Amazon EFS) provides a simple, serverless,
set-and-forget elastic file system for use with AWS Cloud services and on-premises
resources. It is built to scale on demand to petabytes without disrupting
applications, growing and shrinking automatically as you add and remove files,
eliminating the need to provision and manage capacity to accommodate growth.
Amazon EFS has a simple web services interface that allows you to create and
configure file systems quickly and easily. The service manages all the file storage
infrastructure for you, meaning that you can avoid the complexity of deploying,
patching, and maintaining complex file system configurations.
Amazon EFS supports the Network File System version 4 (NFSv4.1 and NFSv4.0)
protocol, so the applications and tools that you use today work seamlessly with
Amazon EFS. Multiple compute instances, including Amazon EC2, Amazon ECS,
and AWS Lambda, can access an Amazon EFS file system at the same time,
providing a common data source for workloads and applications running on more
than one compute instance or server.

With Amazon EFS, you pay only for the storage used by your file system and there
is no minimum fee or setup cost. Amazon EFS offers a range of storage classes
designed for different use cases. These include:

· Standard storage classes – EFS Standard and EFS Standard–Infrequent Access
(Standard–IA), which offer multi-AZ resilience and the highest levels of
durability and availability.
· One Zone storage classes – EFS One Zone and EFS One Zone–Infrequent
Access (EFS One Zone–IA), which offer customers the choice of additional
savings by choosing to save their data in a single Availability Zone.

Amazon EFS is designed to provide the throughput, IOPS, and low latency needed
for a broad range of workloads. With Amazon EFS, you can choose from two
performance modes and two throughput modes:

· The default General Purpose performance mode is ideal for latency-sensitive
use cases, like web serving environments, content management systems, home
directories, and general file serving.
· File systems in the Max I/O mode can scale to higher levels of aggregate
throughput and operations per second with a tradeoff of higher latencies for
file system operations.
· Using the default Bursting Throughput mode, throughput scales as your file
system grows.
· Using Provisioned Throughput mode, you can specify the throughput of your
file system independent of the amount of data stored.

The service is designed to be highly scalable, highly available, and highly durable.
Amazon EFS file systems using Standard storage classes store data and metadata
across multiple Availability Zones in an AWS Region. EFS file systems can grow
to petabyte scale, drive high levels of throughput, and allow massively parallel
access from compute instances to your data.

Amazon EFS provides file system access semantics, such as strong data
consistency and file locking. Amazon EFS also enables you to control access to
your file systems through Portable Operating System Interface (POSIX)
permissions.

Amazon EFS supports authentication, authorization, and encryption capabilities to
help you meet your security and compliance requirements. Amazon EFS supports
two forms of encryption for file systems, encryption in transit and encryption at
rest. You can enable encryption at rest when creating an Amazon EFS file system.
If you do, all your data and metadata is encrypted. You can enable encryption in
transit when you mount the file system. NFS client access to EFS is controlled by
both AWS Identity and Access Management (IAM) policies and network security
policies like security groups.
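
As a minimal sketch of creating such a file system with boto3 (the creation token, subnet ID, and security group ID are placeholder values), you might do something like:

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")  # hypothetical region

# Create an encrypted file system using the default performance and throughput modes.
fs = efs.create_file_system(
    CreationToken="demo-efs-001",        # placeholder idempotency token
    PerformanceMode="generalPurpose",    # or "maxIO"
    ThroughputMode="bursting",           # or "provisioned"
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "demo-efs"}],
)
fs_id = fs["FileSystemId"]

# Expose the file system inside a VPC subnet so NFS clients can mount it.
efs.create_mount_target(
    FileSystemId=fs_id,
    SubnetId="subnet-0123456789abcdef0",      # placeholder subnet
    SecurityGroups=["sg-0123456789abcdef0"],  # placeholder security group
)
```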

Authentication and access control

You must have valid credentials to make Amazon EFS API requests, such as create
a file system. In addition, you must also have permissions to create or access
resources. By default, when you use the root account credentials of your AWS
account you can create and access resources owned by that account. However, we
don't recommend using root account credentials. In addition, any AWS Identity and
Access Management (IAM) users and roles that you create in your account must be
granted permissions to create or access resources.

IAM authorization for NFS clients is an additional security option for Amazon EFS
that uses IAM to simplify access management for Network File System (NFS)
clients at scale. With IAM authorization for NFS clients, you can use IAM to
manage access to an EFS file system in an inherently scalable way. IAM
authorization for NFS clients is also optimized for cloud environments.

Data consistency in Amazon EFS

Amazon EFS provides the close-to-open consistency semantics that applications
expect from NFS.

In Amazon EFS, write operations are durably stored across Availability Zones on
file systems using Standard storage classes in these situations:

• An application performs a synchronous write operation (for example, using the
open Linux command with the O_DIRECT flag, or the fsync Linux command).

• An application closes a file.


Depending on the access pattern, Amazon EFS can provide stronger consistency
guarantees than close-to-open semantics. Applications that perform synchronous
data access and perform non-appending writes have read-after-write consistency
for data access.

Storage classes:

With Amazon EFS, you can choose from a range of storage classes that are
designed for different use cases:

• EFS Standard – A regional storage class for frequently accessed data. It offers the
highest levels of availability and durability by storing file system data redundantly
across multiple Availability Zones in an AWS Region.

• EFS Standard-Infrequent Access (Standard-IA) – A regional storage class for
infrequently accessed data. It offers the highest levels of availability and durability
by storing file system data redundantly across multiple Availability Zones in an
AWS Region.

• EFS One Zone – For frequently accessed files stored redundantly within a single
Availability Zone in an AWS Region.

• EFS One Zone-IA (One Zone-IA) – A lower-cost storage class for infrequently
accessed files stored redundantly within a single Availability Zone in an AWS
Region.

EFS Lifecycle Management

Amazon EFS Lifecycle Management automatically manages cost-effective file
storage for your file systems. When enabled, Lifecycle Management migrates files
that haven't been accessed for a set period of time to an infrequent access storage
class, Standard-IA or One Zone-IA. You define that period of time by using a
lifecycle policy.

EFS Intelligent‐Tiering

Amazon EFS Intelligent‐Tiering uses lifecycle management to monitor the access
patterns of your workload and automatically transitions files to and from your
corresponding Infrequent Access (IA) storage class. With intelligent tiering, files in
the standard storage class (EFS Standard or EFS One Zone) that are not accessed
for a period of time, for example 30 days, are transitioned to the corresponding
Infrequent Access (IA) storage class. Additionally, if access patterns change, EFS
Intelligent‐Tiering automatically moves files back to the EFS Standard or EFS One
Zone storage classes. This helps to eliminate the risk of unbounded access charges,
while providing consistent low latencies.
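
A brief sketch of enabling both behaviors on an existing file system with boto3 (the file system ID is a placeholder, and the 30-day window simply mirrors the example above):

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")

# Move files to the Infrequent Access class after 30 days without access,
# and move them back to the primary class after the first subsequent access.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # placeholder file system ID
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_30_DAYS"},
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)
```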

EFS replication:

You can create a replica of your Amazon EFS file system in the AWS Region of
your preference using Amazon EFS Replication. When enabled, Amazon EFS
Replication automatically and transparently replicates the data and metadata on
your EFS file system to a new destination EFS file system that is created in an
AWS Region that you choose. Amazon EFS automatically keeps the source and
destination file systems synchronized. Amazon EFS replication is continual and
designed to provide a recovery point objective (RPO) and a recovery time
objective (RTO) of minutes. These features should assist you in meeting your
compliance and business continuity goals.

Name:Sunil Pokharel
Rollno:SEC075BCT017
Auto Scaling:

Autoscaling is the process of automatically increasing or decreasing the computational resources
delivered to a cloud workload based on need. The primary benefit of autoscaling, when
configured and managed properly, is that your workload gets exactly the cloud computational
resources it requires (no more and no less) at any given time. You pay only for the server
resources you need, when you need them.

The major public cloud infrastructure as a service (IaaS) providers all offer autoscaling
capabilities:
● In AWS, this feature is called Auto Scaling groups
● In Google Cloud, this feature is called instance groups
● Microsoft Azure provides Virtual Machine Scale Sets
Figure: Distributing cloud workload via autoscaling clusters: before and after autoscaling

Steps for creating an Auto Scaling group (mirrored in the boto3 sketch after this list):

1) Choose a launch template or launch configuration

2) Choose instance launch options

3) Configure advanced options

4) Configure group size and scaling policies

5) Review and launch the Auto Scaling group
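
A minimal boto3 sketch of the same steps (the launch template name, subnet IDs, and sizing below are illustrative assumptions, not values from this document):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Steps 1-4: create the group from an existing launch template, across two subnets.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                       # placeholder name
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa0aaa,subnet-0bbb0bbb",  # placeholder subnets
)

# Step 4 continued: target-tracking policy keeping average CPU around 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="target-cpu-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```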


Name: Shubu Gyanwali
Roll no :14

Public Cloud Providers:


The term "public cloud" refers to computing services made available to anybody who
wishes to use or buy them via the open Internet by third-party providers. Free or
on-demand sales options are available, allowing users to pay only for the CPU cycles,
storage, or bandwidth they use.
The following are the primary advantages of the public cloud:

a) Fewer wasted resources, because users only pay for what they use.
b) Decreased need for businesses to invest in and maintain their own on-premises IT
resources.
c) Scalability to match workload and user demands.
The infrastructure required to host and install workloads in the cloud is provided by the
public cloud provider. Additionally, it provides customers with tools and services to assist
them in managing cloud applications, including data storage, security, and monitoring.
Organizations can choose between a big, all-purpose provider like AWS, Microsoft
Azure, or Google Cloud Platform (GCP), or a smaller provider. For needs involving
several uses of the cloud, general cloud providers are preferable due to their wide
availability and integration possibilities. Specialized vendors provide more customisation.

Future of Enterprise Computing:


Enterprise computing has lately seen revolutionary changes due to the rising computing
trends of social, mobile, cloud, and big data, as well as the new trend of BYOD (Bring
Your Own Device) in workplaces. This session will begin by examining each of the most
current computing developments and how they have affected enterprise computing both
individually and collectively. Although the trends first focused mostly on consumer
computing, they ultimately had a big impact on business computing. It is anticipated that
they will fundamentally and permanently alter the nature of the enterprise computing
industry by transforming how information and communication technologies are
employed in businesses. That is the anticipated future of enterprise computing.
Jeevan Tamang
Roll no:SEC075BCT005

Amazon S3

Amazon S3 (Amazon Simple Storage Service) is an object storage service that offers
industry-leading scalability, data availability, security and performance. Amazon S3 is
used to store and retrieve any amount of data from anywhere, at any time it is needed. It
is also a high-speed, web-based cloud storage service. S3 provides 99.999999999%
durability for objects stored in the service and supports multiple certifications. Data can
be transferred to S3 over the public internet via access to APIs. Amazon S3 can be used
by many companies, from small businesses to large enterprises.
Common use cases for S3 are:
Data storage
Data archiving
Software delivery
Disaster recovery
Data backup, etc.

Before AWS S3
Organizations had a difficult time finding, storing, and managing all of their data, as well
as running applications and delivering content to customers.

AWS S3 Benefits
Some of the benefits of AWS S3 are:

Durability: S3 provides 99.999999999 percent durability.


Low cost: S3 lets you store data in a range of "storage classes." The pricing is also lower in
comparison to other services.
Scalability: S3 charges you only for the resources you actually use, and there are no
other extra charges.
Availability: S3 offers 99.99 percent availability of objects
Security: S3 offers an impressive range of access management tools and encryption
features that provide great security.
Flexibility: S3 is ideal for a wide range of uses like data storage, data backup, software
delivery, data archiving, disaster recovery, website hosting, mobile applications, IoT
devices, and much more.
Simple data transfer: We don’t have to be an IT genius to execute data transfers on S3.
The service revolves around simplicity and ease of use. It is very easy to transfer our
data.

Unlike block and file storage, Amazon S3 stores each object as a file together with its
metadata. Each object is also given an ID, which applications use to access it.
Amazon S3 comes in seven storage classes:
S3 Standard
S3 Intelligent-Tiering
S3 Standard-IA
S3 One Zone-IA
S3 Glacier
S3 Glacier Deep Archive
S3 Outposts

Amazon S3 storage classes


S3 Standard is suitable for frequently accessed data that needs to be delivered with low
latency and high throughput. S3 Standard targets applications, dynamic websites, content
distribution and big data workloads.

S3 Intelligent-Tiering is most suitable for data with access needs that are either changing
or unknown. S3 Intelligent-Tiering has four different access tiers: Frequent Access,
Infrequent Access (IA), Archive and Deep Archive. Data is automatically moved to the
most inexpensive storage tier according to customer access patterns.

S3 Standard-IA offers a lower storage price for data that is needed less often but that
must be quickly accessible. This tier can be used for backups, DR and long-term data
storage.

S3 One Zone-IA is designed for data that is used infrequently but requires rapid access on
the occasions that it is needed. Use of S3 One Zone-IA is indicated for infrequently
accessed data without high resilience or availability needs, data that is able to be
recreated and backing up on-premises data.

S3 Glacier is the least expensive storage option in S3, but it is strictly designed for
archival storage because it takes longer to access the data. Glacier offers variable retrieval
rates that range from minutes to hours.

S3 Glacier Deep Archive has the lowest price option for S3 storage. S3 Glacier Deep
Archive is designed to retain data that only needs to be accessed once or twice a year.

S3 Outposts adds S3 object storage features and APIs to an on-premises AWS Outposts
environment. S3 Outposts is best used when performance needs call for data to be stored
near on-premises applications or to satisfy specific data residency requirements.

We can create only 100 buckets per AWS account by default, and the limit can be increased to
1,000 with a service limit increase. We can use the Amazon S3 API to upload objects. Due to
features like scalability, flexibility and security, it is a very preferable means of
storage over the internet.
Fig: Amazon S3 lab
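
As a minimal sketch of using the S3 API from boto3 (the bucket name and object keys are placeholders; bucket names must be globally unique), creating a bucket and uploading objects into different storage classes might look like this:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Create a bucket (outside us-east-1 you must also pass CreateBucketConfiguration).
s3.create_bucket(Bucket="example-demo-bucket-12345")   # placeholder, globally unique name

# Upload a small object into the default S3 Standard class.
s3.put_object(
    Bucket="example-demo-bucket-12345",
    Key="reports/summary.txt",
    Body=b"hello from S3",
)

# Upload an infrequently accessed object directly into S3 Standard-IA.
s3.put_object(
    Bucket="example-demo-bucket-12345",
    Key="backups/archive-2023.tar.gz",
    Body=b"placeholder payload",
    StorageClass="STANDARD_IA",
)

# Read an object back.
body = s3.get_object(Bucket="example-demo-bucket-12345", Key="reports/summary.txt")["Body"].read()
print(body)
```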

SUDARSHAN REGMI
Load Balancer
Cloud load balancing is the process of distributing workloads across computing resources
in a cloud computing environment and carefully balancing the network traffic accessing
those resources. Load balancing enables organizations to meet workload demands by
routing incoming traffic to multiple servers, networks or other resources, while improving
performance and protecting against disruptions in services. Load balancing also makes it
possible to distribute workloads across two or more geographic regions.

Cloud load balancing helps enterprises achieve high performance levels for potentially
lower costs than traditional on-premises load balancing technology. Cloud load balancing
takes advantage of the cloud's scalability and agility to meet the demands of distributed
workloads with high numbers of client connections. It also improves overall availability,
increases throughput and reduces latency.

In addition to workload and traffic distribution, cloud load balancing services typically
offer other features, such as application health checks, automatic scaling and failover and
integrated certificate management.

Amazon Web Services (AWS) Elastic Load Balancing distributes incoming client traffic
and routes it to registered targets such as EC2 instances. Elastic Load Balancing supports
four types of load balancers: Application, Network, Gateway and Classic. The load
balancers differ in the features offered, the network layers at which they operate and
supported communication protocols.
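
A condensed sketch of creating an Application Load Balancer with boto3 (the subnet, security group, VPC, and instance IDs are placeholder values):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Internet-facing Application Load Balancer across two public subnets.
alb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-0aaa0aaa", "subnet-0bbb0bbb"],   # placeholder subnets
    SecurityGroups=["sg-0123456789abcdef0"],          # placeholder security group
    Scheme="internet-facing",
    Type="application",
)
alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

# Target group with a simple HTTP health check, then register EC2 instances as targets.
tg = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",                    # placeholder VPC
    TargetType="instance",
    HealthCheckPath="/",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": "i-0123456789abcdef0"}])

# Listener that forwards incoming HTTP traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```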
Relational Database Service (RDS)
Name: Manish Mainali

Roll No:SEC075BCT007

AWS: Databases
• Amazon Relational Database Service (Amazon RDS)
• Amazon DynamoDB
• Amazon Redshift
• Amazon Aurora

Unmanaged versus managed services


There are normally two types of AWS solutions: managed and
unmanaged. Unmanaged services are often delivered in distinct amounts
in accordance with user specifications. As the service owner, you control
how the services respond to changes in load, faults, and circumstances
when resources are unavailable. Installing a relational database
management system on an Amazon EC2 instance gives you full control
over your database. This serves as an illustration of an unmanaged
service and is quite comparable to running your database in a physical
data center that you own. Imagine you've just started a web server on
an EC2 instance. Because Amazon EC2 is an unmanaged solution, the
web server will not scale to handle an increase in traffic demand or
replace unhealthy instances with healthy ones unless you configure it to
use a scaling solution such as an Auto Scaling group. Using an
unmanaged service has the advantage of giving you more precise control
over how your solution responds to changes in load, faults, and
circumstances when resources are unavailable. Managed services, on the
other hand, require only a few configurations. You could, for instance,
create an S3 bucket and then modify its permissions. We'll now examine
the difficulties of maintaining a standalone relational database that is not
managed. Following that, we'll see how Amazon RDS addresses them.

OVERVIEW
• Challenges of relational databases
• What database does Amazon RDS use?
• What are the advantages of RDS?
• Steps for creating a MySQL database using Amazon RDS and the AWS
Management Console
• Some examples of architecture using RDS
• Responsibility and access lists


Challenges of relational databases
● Server maintenance and energy footprint
● Software installation and patches
● Database backups and high availability
● Limits on scalability
● Data security
● Operating system (OS) installation and patches
What database does Amazon RDS use?
❖ Amazon RDS is a managed relational database service that
provides you with six familiar database engines to choose from,
including Amazon Aurora, MySQL, MariaDB, PostgreSQL,
Oracle, and Microsoft SQL Server.
❖ Amazon Relational Database Service (Amazon RDS) is a
collection of managed services that makes it simple to set up,
operate, and scale databases in the cloud.

Advantages
● Easy Deployment
You can build, delete, and alter your database instances using a set of
APIs via the AWS Management Console with Amazon RDS. You have
complete control over the security and access for your instances, and
managing database backups and snapshots is simple.

Instances of Amazon RDS for MySQL come pre-configured with
settings and characteristics customized for the instance type you choose.
However, you need not worry, because you have a great deal of control
over these settings thanks to easy-to-manage database parameter
groups that offer granular management and tuning choices for your
database instances.

● Fast Storage Options


Two SSD-backed storage choices are offered by Amazon RDS for your
database instances. With the General Purpose storage option, smaller or
medium-sized workloads can be stored affordably. Provisioned IOPS
storage offers consistent storage performance of up to 40,000 I/Os per
second for applications that require higher performance (such as those
with significant OLTP workloads).

You may quickly add more storage as needed, without any downtime, as
your storage needs increase.

● High Availability
In-house high availability is frequently difficult since there are so
many moving parts that must coordinate seamlessly. This does not
even take into account the requirement for numerous
geographically dispersed data centers.

Your MySQL database instances can gain improved availability and
durability using Amazon RDS Multi-AZ deployments, which makes
them a suitable fit for production database applications. For
read-intensive workloads, it is simple to elastically scale out beyond the
limitations of a single database instance by using Amazon RDS Read
Replicas.

● Backup & Recovery


"A DBA is only as good as their last backup" is a proverb that has held
true since the early days of MySQL: without the data, even the finest
DBA cannot restore production services.

Your MySQL database instances can be backed up and recovered using
Amazon RDS' automated backup services to any point in time within
your selected retention period (up to 35 days). Additionally, you can
manually start backups of your instances, and Amazon RDS will keep
all of these backups until you specifically delete them. Backing up has
never been so simple.

● Security
For your MySQL databases, Amazon RDS, as a managed
service, offers a high degree of security. This includes network
isolation via Amazon VPC (Virtual Private Cloud), encryption
at rest using keys you generate and manage with AWS Key
Management Service (KMS), and more. SSL can also be used
to encrypt data as it is being transmitted over the wire.

This is a good time to bring up the Shared Responsibility
Model, because there are still parts of your RDS configuration
that you need to protect.

● Monitoring/Metrics
All of the metrics for your RDS database instances are freely available
using the RDS monitoring functionality in Amazon CloudWatch.
CloudWatch Enhanced Monitoring gives you access to more than 50
CPU, RAM, file system, and disk I/O metrics if you need more in-depth
and thorough monitoring choices.
Key operational indicators, such as compute, memory, storage
capacity utilization, I/O activity, instance connections, and more, are
immediately viewable in the AWS Management Console. By being
aware of what is going on within your database stack, you will never
again be taken off guard.

Responsibility and access lists

Fig 1:Responsibility and access lists


# Steps for creating a MySQL database using Amazon RDS and the
AWS Management Console (an equivalent boto3 sketch follows these steps)
❖ Go to the console and select RDS

Fig: AWS Management Console

❖ Create database
Fig:resolution
❖ Select engine
Fig: Select MySQL
❖ After selecting the engine, click Next

❖ Choose use case


Fig:Use case selection
➔ Note: you can choose any use case you prefer
❖ Click on NEXT

❖ Specify db details

1. Select db.t2.micro for the DB instance class (or choose any instance class you want)

2. Select Multi-AZ deployment or not
3. Allocate storage
4. Enable/disable storage auto scaling
5. Give a DB instance identifier
6. Provide a master username
7. Provide a master password and confirm it

❖ Hit NEXT Button


➔ Further queries can be issued from the command window

❖ Configure advanced setting

1. Network and security

2. Select VPC

3. Select subnet

4. Select public accessibility (Yes/No)

5. Give the Availability Zone

6. Choose the VPC security group

7. Database options

8. Give the DB name

9. Provide port info

10. IAM DB authentication (optional)

❖ Encryption
1. Select encryption options (optional)
❖ Backup
2. Select the backup period
3. Copy / don't copy tags to snapshots
❖ Monitoring
4. Enable/Disable
❖ Log exports
5. Select the logs to export
❖ Maintenance
6. Enable/Disable
❖ Deletion protection
7. Enable/Disable
❖ Hit Create database

❖ Created database
❖ Hit View DB instance details
❖ All the details of the DB can be observed

❖ DB available
Note: Hit refresh until the DB is available
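
A boto3 sketch that roughly mirrors the console steps above (the identifier, credentials, and sizing here are illustrative assumptions, not values from this document):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a small MySQL instance, roughly matching the console walk-through.
rds.create_db_instance(
    DBInstanceIdentifier="demo-mysql",      # placeholder DB identifier
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                    # GiB
    MasterUsername="admin",
    MasterUserPassword="ChangeMe-12345",    # placeholder; use a secrets store in practice
    MultiAZ=False,
    PubliclyAccessible=False,
    BackupRetentionPeriod=7,                # days
    StorageEncrypted=True,
    DeletionProtection=False,
)

# Wait until the instance status becomes "available" (the console's refresh step).
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="demo-mysql")
endpoint = rds.describe_db_instances(DBInstanceIdentifier="demo-mysql")[
    "DBInstances"][0]["Endpoint"]["Address"]
print("Connect to:", endpoint)
```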

AWS Architecture using RDS

Fig: RDS architecture example 1


Fig: RDS architecture example 2


Name : Aarjan Rajbhandari

Roll No : SEC075BCT001

Identity and Access Management (IAM)


AWS Identity and Access Management (IAM) allows you to
control access to compute, storage, database and application
services in the AWS cloud. It allows you to manage users and
their level of access to the AWS console. It is a web service that
helps you securely control access to AWS resources.

IAM provides fine-grained access control across all of AWS.


With IAM, you can specify who can access which resources and
how these resources can be accessed. You can grant different
permissions to different people for different resources. IAM is
an AWS service that is offered at no additional charge.

IAM can be used to control who is authenticated (signed in) and
authorized (has permissions) to use resources.
Authentication is the process or action of verifying the identity
of a user or process. It refers to the methods you use to
determine that someone is who they claim to be. Authentication
occurs whenever a user attempts to access your organization's
network and downstream resources.

Authorization is the allocation or delegation of permissions to a
particular individual or type of user. It refers to the process of
giving someone permission to do or have something.
Authorization carries out the rest of an organization's identity
and access management processes once the user has been
authenticated.

Features of IAM
· Granular permissions

· Identity Federation

· Multi Factor Authentication

· Networking controls

· PCI DSS Compliance


· Free to use

Components of IAM
· IAM user

· IAM group

· IAM policy

· IAM role

IAM user is a person or application that is defined in an AWS
account. An IAM user may refer to an actual person who is a user,
or it could be an application that is a user. With IAM, you can
securely manage access to AWS services by creating an IAM
user name for each employee in your organization. Each IAM
user is associated with only one AWS account.

IAM group is a collection of IAM users. You can use IAM
groups to specify permissions for multiple users, which also
makes it possible to manage the permissions easily for those
users. A group is used when multiple users need the same
permissions, while an IAM policy is used to grant the permissions.
Managing groups is quite easy.

IAM policy is a document that defines permissions and controls
access to AWS resources. It defines which resources can be
accessed and the level of access to each resource. There are two
types of IAM policy: identity-based policies and resource-based
policies. IAM policies enable you to fine-tune the privileges that
are granted to IAM users, groups and roles.

IAM role is a set of permissions that define what actions are
allowed and denied by an entity in AWS. These permissions
are attached to the role, not to an IAM user or a group. An IAM
role is similar to an IAM user because it can be assumed by any
type of individual or AWS service. A role is not uniquely
associated with a single person; it can be used by anyone who
needs it.
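
A compact boto3 sketch tying these components together (the user, group, and role names are hypothetical, and the attached policy is simply AWS's managed read-only S3 policy):

```python
import json
import boto3

iam = boto3.client("iam")

# IAM user and group: put the user in the group and grant permissions via the group.
iam.create_user(UserName="alice")                       # placeholder user
iam.create_group(GroupName="developers")                # placeholder group
iam.add_user_to_group(GroupName="developers", UserName="alice")

# IAM policy: attach an AWS-managed, identity-based policy to the group.
iam.attach_group_policy(
    GroupName="developers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# IAM role: a set of permissions that an AWS service (here EC2) can assume.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-server-role",             # placeholder role
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(
    RoleName="app-server-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```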
SEC075BCT009

Prabesh Parajuli

Cloud security
At AWS, cloud security is given top importance. AWS incorporates security into
every aspect of our cloud infrastructure and provides supporting services to
assist businesses in meeting their specific security needs in the cloud.
As an AWS customer, you will benefit from a network and data center architecture
designed to satisfy the needs of the most security-conscious businesses. Security
in the cloud is quite similar to security in your on-premises data centers, with
the exception that there are no upkeep expenses for buildings and technology.
Maintaining client confidence and trust, as well as helping to safeguard the
security, integrity, and availability of your systems and data, are essential to
AWS.
AWS identifies threats by continuously monitoring the network activity and
account behaviour within your cloud environment.
Security scales with your AWS Cloud usage. No matter the size of your
business, the AWS infrastructure is designed to keep your data safe.
You have access to a vast array of features and tools to assist you in achieving
your security goals. In the areas of network security, configuration management,
access control, and data encryption, AWS offers security-specific tools and
services.

Shared responsibility model


The shared responsibility model outlines your responsibilities as a cloud user
and those of your cloud service provider (CSP). The CSP is in charge of the
security "of" the cloud, which includes the hardware, cables, utilities, and other
tangible components. In terms of network controls, identity and access
management, application configurations, and data, the customer is in charge
of security "in" the cloud.

However, depending on the service model you choose, this division of labor
may alter. Three main cloud service models are defined in the NIST Definition
of Cloud Computing at the most fundamental level:
1) Infrastructure as a Service (IaaS): The CSP is in charge of the physical
data centre, physical networking, and the physical servers/hosting under the
IaaS model.
2) Platform as a Service (PaaS): In a PaaS model, the CSP assumes
additional responsibility for tasks like operating system upkeep and patching
(which customers are notoriously bad at doing and is a major source of
security incidents).
3) Software as a Service (SaaS): In SaaS, the client can only alter the
configuration settings of an application; the CSP retains control over all other
aspects (think of Gmail as a basic example).
Although a shared security model is intricate and necessitates close
study and cooperation between the CSP and the client, the strategy has
numerous significant advantages for users. These consist of:

1) Efficiency: Although the Shared Responsibility Model places a large amount
of responsibility on the client, some crucial security components, such as
the security of hardware, infrastructure, and the virtualization layer, are
virtually always handled by the CSP. In a conventional on-premises
approach, the customer was in charge of handling these elements.

2) Enhanced security: Cloud service providers place a high priority on
the security of their cloud environment and frequently invest a sizable
amount of resources to guarantee that their clients are completely safe.
CSPs carry out thorough testing and monitoring as required by the
service agreement.

3) Expertise: When it comes to the developing field of cloud security,
CSPs frequently possess a higher level of knowledge and expertise.
Customers who work with cloud vendors gain access to the expertise,
resources, and assets of the partner company.
Name:Bishal Karki
Roll:SEC075bct004
Client Server Architecture
The client-server model is a distributed application structure that
partitions tasks or workloads between the providers of a resource or
service, called servers, and service requesters, called clients. In the
client-server architecture, when the client computer sends a request for
data to the server through the internet, the server accepts the request,
processes it, and delivers the requested data packets back to the client.
Clients do not share any of their resources. Examples of the client-server
model are email, the World Wide Web, etc.

How does the Client-Server Model work ?


In this section we take a closer look at the client-server model and at how
the Internet works via web browsers. This builds a solid foundation of the
web and helps in working with web technologies with ease.
● Client: When we talk about the word client, it means a person or an
organization using a particular service. Similarly, in the digital world
a client is a computer (host) that is capable of receiving information
or using a particular service from the service providers (servers).
● Servers: Similarly, when we talk about the word servers, it means a
person or medium that serves something. In the digital world a
server is a remote computer which provides information (data) or
access to particular services. So, it is basically the client requesting
something and the server serving it, as long as it is present in the
database (see the minimal socket sketch after this list).
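
To make the request/response cycle concrete, here is a minimal, self-contained sketch in Python (a toy protocol over TCP sockets, not any production web stack; host, port, and message contents are made up for the example):

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9090   # loopback address for a local demo

def run_server():
    """Server: waits for a client request and serves a response."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()                    # accept one client connection
        with conn:
            request = conn.recv(1024).decode()        # read the client's request
            conn.sendall(f"server response to: {request}".encode())

# Start the server in the background, then act as the client.
threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.5)                                       # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))                         # client initiates the connection
    cli.sendall(b"GET /data")                         # client sends a request
    print(cli.recv(1024).decode())                    # client receives the server's response
```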

Advantages of Client-Server model:

● Centralized system with all data in a single place.


● Cost efficient: requires less maintenance cost, and data
recovery is possible.
● The capacity of the clients and servers can be changed
separately.

Disadvantages of Client-Server model:

● Clients are prone to viruses, Trojans and worms if these are present in
the server or uploaded into the server.
● Servers are prone to Denial of Service (DoS) attacks.
● Data packets may be spoofed or modified during transmission.
● Phishing, or capturing login credentials or other useful information
of the user, is common, and MITM (Man in the Middle) attacks are
common.

3-Tier Architecture With TP Monitors

The 3 tiers consist of a presentation tier, an application tier, and a
database tier.

Presentation tier:

The presentation tier, or web tier, is where a static webpage will be
served that can be accessed through the internet. The web tier will
contain an internet-facing application load balancer and an auto scaling
group. The application load balancer (ALB) will allow us to directly
access our web tier from the internet. This tier will consist of two public
subnets located in different availability zones. The public subnets will
have access to the internet and allow access from the internet, through
an internet gateway.

Application tier:

The application tier will consist of the application servers and resources.
It will also contain an auto scaling group to scale up if additional servers
are needed. This tier will have two private subnets.

Database tier:

The database tier will consist of data resources. These databases will
need to be connected to the application layer, and will only be connected
to the application layer. This layer will also have two private subnets,
and we will use RDS as our database.
Sudip Neupane
SEC075BCT016

Chapter 7: Cloud Computing Economics

7.1 Introduction
Economics is the study of scarcity and its implications for the use of resources, production of
goods and services, growth of production and welfare over time. In simple terms, Cloud
computing means the delivery of computing resources like servers, storage, databases,
networking, software, analytics, and intelligence over the cloud to offer faster innovation,
flexible resources, and economies of scale. Every cloud computing service provider, such as AWS,
Azure, etc., advertises that the main advantage of cloud computing is getting better service
even at a low price. Cloud computing can actually save a lot of money compared with implementing
resources on physical servers. The emergence of cloud services is fundamentally shifting the
economics of IT. Cloud technology standardizes and pools IT resources and automates many of
the maintenance tasks done manually today. Cloud architectures facilitate elastic consumption,
self-service, and pay-as-you-go pricing.

Figure 7.1: Cloud Opportunity (Source: Microsoft)


Cloud Computing economics focuses on two main principles: economics of scale and global
reach.
Through economies of scale, cloud providers save organizations money because they purchase
computing resources in massive quantities at lower costs. When companies utilize these shared
resources, they avoid the costs of purchasing their own expensive infrastructure. And with a
pay-as-you-go pricing model, clients pay only for the resources they actively use, scaling up or
down as needed.
The global reach of cloud computing also brings substantial savings. When servers no longer
need to be housed on premises, they can be located and accessed from anywhere in the world .
So, companies can reduce costs. They no longer need to devote time to deploying and
maintaining complex hardware on site.
Beyond the tremendous efficiencies and cost savings of cloud computing, there is another
economic benefit: business agility. Companies that utilize cloud computing resources can
deploy applications faster and ramp up storage and computing power on demand. This practice
allows businesses to respond to market changes and customer demands more quickly, leading to
faster revenue growth.
Some of the practices that contribute to favorable cloud computing economics are:
● Cloud vendors charge clients on the basis of three driving factors: compute,
storage and outbound data transfer,
● Pay-as-you-go model,
● Auto scaling and load balancing,
● Global reach in a short time, etc.
Figure 7.2: Graph showing Per user cost vs Different computing infrastructure

How does the vendor earn from cloud computing?


“There is no cloud, it's just someone else's computer.” The cloud we are using is a resource of a
Cloud Vendor which is maintained physically by them at a data center. The cloud vendor has
multiple clients all over the world. They earn money by renting their hardware/resources to the
clients. Vendors bring different plans and packages to attract clients in their business. Vendors
can earn if and only its resources are rented. Some failures in data centers like power failure,
technical problems, etc and not getting a client can be a problem for a vendor.
Let's look at the following example to understand how a vendor earns:
E.g. a vendor buys 1 TB of storage for Rs. 4,000 and rents that storage to multiple clients at a rate
of Rs. 2 per 10 GB per month (affordable for clients). 1 TB is roughly 100 blocks of 10 GB, so:
Return after 1 year: 100 × Rs. 2 × 12 = Rs. 2,400
Return after 2 years: Rs. 4,800
→ In this way, cloud vendors earn, and clients can save lots of money instead of buying
their own resources.
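
A tiny Python sketch of the same arithmetic (the prices are the example's illustrative numbers, not real market rates), including the month in which the vendor recovers the hardware cost:

```python
# Illustrative numbers from the example above.
hardware_cost_rs = 4000          # vendor buys 1 TB of storage
rate_rs_per_10gb_month = 2       # rental price charged to clients
blocks_of_10gb = 1000 // 10      # 1 TB ~= 1000 GB -> 100 rentable 10 GB blocks

monthly_revenue = blocks_of_10gb * rate_rs_per_10gb_month    # Rs. 200 per month
print("Return after 1 year :", monthly_revenue * 12)         # Rs. 2400
print("Return after 2 years:", monthly_revenue * 24)         # Rs. 4800

# Break-even: first month in which cumulative revenue covers the hardware cost.
months_to_break_even = -(-hardware_cost_rs // monthly_revenue)  # ceiling division
print("Vendor breaks even after", months_to_break_even, "months")  # 20 months
```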
7.2 Economics of Private Cloud
The private cloud is defined as computing services offered either over the Internet or a private
internal network and only to select users instead of the general public. Private cloud computing
gives businesses many of the benefits of a public cloud with the additional control and
customization available from dedicated resources over a computing infrastructure hosted
on-premises. For small organizations or departments, private clouds can be prohibitively expensive
compared to public clouds; the only way for them to share in the benefits of at-scale cloud
computing is by moving to a public cloud.

Figure 7.3: Cost Benefits of Private Cloud (Source: Microsoft)


Some economics factor of private cloud computing are:
● Security: Private cloud security is enhanced since traffic to a private cloud is typically
limited to the organization’s own transactions. Public cloud providers must handle traffic
from millions of users and transactions simultaneously, thus opening a greater chance for
malicious traffic. Since private clouds consist of dedicated physical infrastructure, the
organization has better control over server, network, and application security.
● Predictable performance: Because the hardware is dedicated rather than multi-tenant,
workload performance is predictable and unaffected by other organizations sharing
infrastructure or bandwidth.
● Long-term savings: While it can be expensive to set up the infrastructure to support a
private cloud, it can pay off in the long term. If an organization already has the hardware
and network required for hosting, a private cloud can be much more cost-effective over
time compared to paying monthly fees to use someone else's servers on the public cloud.
● Predictable costs: Public cloud costs can be very unpredictable based on usage, storage
charges and data egress charges. Private cloud costs are the same each month, regardless
of the workloads an organization is running or how much data is moved.
● Regulatory governance

Name: Swopnil Sapkota


SEC075BCT018

Software productivity in cloud

At its most basic level, cloud software provides users with a mechanism to
continually collaborate from any place. Software productivity suites frequently
contain accounting or word processing programs that you can access from a web
browser.

Although there are many advantages to working in the cloud, some people are still
curious about how cloud software will affect their organizations as a whole. Once
you have the components in place and train your users on how to utilize the
applications, cloud-based software and applications have a lot of advantages.
Software for cloud productivity can significantly alter how you conduct a company
by using fewer resources and enhancing your returns.

List of Cloud-Based Productivity software


· Google office apps for business:

Google offers cloud-based productivity software under the name Google Office
Applications. Google cannot be ignored while discussing cloud computing
software. With Google Drive, you can use the approximately 15 GB of free Google
online storage (30 GB premium) that is offered.

· Microsoft Office 365:

One of Microsoft's well-known cloud-based productivity tools is Office 365. You
can stay on top of everything and keep things organized using Microsoft Office. By
using Microsoft Office, one may maintain well-written documents. Accessibility
across devices is still Office's best feature. Additionally, each Microsoft Office
application comes with free training. It is simple, clever, and incredibly original.

· Zoho Office Online Suite:

In the ranking of cloud-based productivity tools, Zoho ranks immediately after
Microsoft and Google. Due to the numerous attributes that enable it to do so, this
name has developed and attained its position in the market. Your everyday tasks
are made easier, sales records are improved, and employee productivity is raised.
Campaign, contact, and bug tracking tools are also included with it. You can
use it to view showtimes, create documents or survey forms for research, or even
fully customize your contacts.

· Apple iWork for icloud:

Apple introduced iWork, productivity software that can be accessed via an iCloud
account, following Apple's familiar "i" branding for its catalog of online office
products. Keynote for presentations, scripting, and automation features are a few of
its features. Many experts consider the iPhoto function, which is special and
distinguishes it from the competition, to be one of the best features. The main
drawback is that only users of Apple devices can profit from these suites, because
Apple developed its own ecosystem.

Benefits of using cloud based softwares

· Cloud-Based Productivity software can boost productivity

· Mobility of usage with Cloud-Based Productivity Software

· All features in place

· Auto-save option with Cloud Office Suites

· Cloud-based software is Secure and Reliable


Economics of scale: public vs private cloud

Public cloud computing is also often referred to as "utility" computing. This
is because it can lower the costs related to an application's deployment and
subsequent scaling. Economy of scale is important, not just for the
customer but also for the provider. Because of the size of its operations, the
provider is able to deliver commoditized resources at a price that is quite
reasonable. The infrastructure has become commoditized, including the
network, servers, and storage. All of these shared resources work together
to create a financially sound business model that allows for very easy
on-demand resource scaling. Because economy of scale is achieved by
standardizing as much as possible and minimizing contact, there is very
little to no customization (read: alignment of process with
business/operational goals) available. Enterprise cloud computing is more
concerned with the efficiency of resources, both technological and human,
than it is with resource scalability. Private cloud computing, in the hybrid
model, is one in which IT uses public cloud computing as an addition to its
data center and, one would hope, as part of its own enterprise cloud
computing program. It involves using economies of scale to reduce the
costs of new projects and the scalability of existing applications without
jeopardizing the economies of scale made possible by process automation
and integration efforts. The best of both worlds can be found in utility
computing resources that can be used and controlled just like business
resources.

Efficient economy of scale: private cloud. Each firm has a variety of
applications that actually fuel the demand for economies of scale, or a
public cloud computing environment. The private (hybrid) cloud computing
paradigm satisfies the very real organizational need for at least architectural
control over those resources for integration, management, and cost
minimization governance, while enabling corporate organizations to benefit
from utility computing.

Chapter 8 : Enterprise analytics and Search

Enterprise Knowledge Goals and Approaches:


Enterprises face increasingly competitive environments. As companies downsize to adapt to
these environments they may be able to cut costs. But unless they have captured the knowledge
of their employees, downsizing can result in a loss of critical information.

Enterprise Knowledge Management refers to the process of capturing the knowledge resources
held within the enterprise enabling the retrieval and reuse of knowledge by individuals who are a
part of the organization. EKM typically uses technologies that can organize the data into a format
that can be readily accessed by members of the company.

Effective EKM helps you break down knowledge silos that keep your team working in a
disconnected way. Pathways to information flow are opened up to ensure that knowledge is
passing freely through your organization, instead of being hoarded by individual employees.
Everyone can benefit from the power of collective wisdom.

Business Intelligence:
It is about using data to help enterprise users make better business decisions. It is a
technology-driven process for analyzing data and delivering actionable information that helps
executives, managers and workers make informed business decisions.

BI systems have four main parts:

1. A data warehouse stores company information from a variety of sources in a centralized
and accessible location.
2. Business analytics or data management tools mine and analyze data in the data
warehouse.
3. Business performance management (BPM) tools monitor and analyze progress towards
business goals.
4. A user interface (usually an interactive dashboard with data visualization reporting tools)
provides quick access to the information.

BI software and systems provide options suited to specific business needs. They include
comprehensive platforms, data visualization, embedded software applications, location
intelligence software and self-service software built for non-tech users.

Some examples of the latest BI software and systems:


● Business intelligence platforms:
These are comprehensive analytics tools that data analysts use to connect to data
warehouses or databases. The platforms require a certain level of coding or data
preparation knowledge. These solutions offer analysts the ability to manipulate data to
discover insights. Some options provide predictive analytics, big data analytics and the
ability to ingest unstructured data.
● Data visualization software:
Suited to track KPIs and other vital metrics, data visualization software allows users to
build dashboards to track company goals and metrics in real-time to see where to make
changes to achieve goals. Data visualization software accommodates multiple KPI
dashboards so that each team can set up their own.
● Embedded business intelligence software:
This software allows BI solutions to integrate within business process
applications or portals. Embedded BI provides capabilities such as reporting, interactive
dashboards, data analysis, predictive analytics and more.
● Location intelligence software:
This BI software allows for insights based on spatial data and maps. Just as a user can
find patterns in sales or financial data with a BI platform, analysts can use this software
to determine the ideal location to open their next retail store, warehouse or restaurant.
● Self-service business intelligence software:
Self-service business intelligence tools require no coding knowledge, so business
end-users can take advantage of them. These solutions often provide prebuilt templates
for data queries and drag-and-drop functionality to build dashboards. Users like HR
managers, sales representatives and marketers use this product to make data-driven decisions.

Text and Data Mining :

TDM (Text and Data Mining) is the automated process of selecting and analyzing large amounts
of text or data resources for purposes such as searching, finding patterns, discovering
relationships, semantic analysis and learning how content relates to ideas and needs, in a way
that can provide valuable information needed for studies and research.

Data Mining:

Data mining is the process of finding patterns in, and extracting useful information from, large
data sets; it converts raw data into useful knowledge. Data mining can be extremely useful for
improving the marketing strategies of a company: by studying structured data drawn from
different databases, an organization can generate more innovative ideas and increase its
productivity. Text mining is just one branch of data mining.

Origins of data mining:

Traditional techniques may be unsuitable due to data that is

- Large-scale
- High dimensional
- Heterogeneous
- Complex
- Distributed

The major tasks of data mining are listed below (a brief illustrative code sketch follows the list):

1) Clustering: Clustering is the task of dividing the population or data points into a number
of groups such that data points in the same group are more similar to one another than to
data points in other groups. In simple words, the aim is to segregate groups with similar
traits and assign them to clusters.
2) Association Rules: Association rule mining, at a basic level, involves the use of
machine learning models to analyze data for patterns, or co-occurrences, in a database. It
identifies frequent if-then associations, which themselves are the association rules. An
association rule has two parts: an antecedent (if) and a consequent (then).
3) Anomaly Detection: Anomaly Detection is the technique of identifying rare events or
observations which can raise suspicions by being statistically different from the rest of
the observations. Such “anomalous” behaviour typically signals some kind of problem,
such as credit card fraud, a failing server machine, a cyber attack, etc.
4) Predictive Modelling: Predictive modeling is a mathematical process used to predict
future events or outcomes by analyzing patterns in a given set of input data. It is a crucial
component of predictive analytics, a type of data analytics which uses current and
historical data to forecast activity, behavior and trends.
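As an illustration of three of these tasks, the hedged sketch below uses scikit-learn (an assumption; any comparable library would do) to run clustering, anomaly detection and a simple predictive model on toy data. Association rule mining is usually handled by dedicated implementations of Apriori or FP-Growth and is omitted here. All data values and parameters are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))             # toy two-dimensional data set
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy binary target for prediction

# 1) Clustering: group similar points into 3 clusters (k chosen arbitrarily).
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# 3) Anomaly detection: flag roughly the 5% most unusual points as outliers (label -1).
outliers = IsolationForest(contamination=0.05, random_state=0).fit_predict(X)

# 4) Predictive modelling: learn a model that forecasts the target from the inputs.
model = LogisticRegression().fit(X, y)

print("cluster sizes:", np.bincount(clusters))
print("number of anomalies:", int((outliers == -1).sum()))
print("training accuracy:", model.score(X, y))
```

On real data the same pattern applies: the data would come from a warehouse or lake rather than a random generator, and the model choices and parameters would be tuned to the problem.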

Text Mining:

Text mining is essentially an artificial intelligence technology that involves processing data
from various text documents. Many deep learning algorithms are used for effective evaluation
of the text. In text mining, the data is stored in an unstructured format, and linguistic principles
are mainly used to evaluate the text extracted from documents.

Text mining incorporates and integrates the tools of information retrieval, data mining, machine
learning, statistics, and computational linguistics, and hence, it is nothing short of a
multidisciplinary field. Text mining deals with natural language texts either stored in
semi-structured or unstructured formats.
Text mining techniques and text mining tools are rapidly penetrating the industry, right from
academia and healthcare to businesses and social media platforms. This is giving rise to a
number of text mining applications, such as the following (a brief code sketch and a comparison
with data mining follow the list):

a) Risk Management
b) Customer care service
c) Fraud and anomaly detection
d) Business Intelligence
e) Social media analysis
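As a hedged illustration of how unstructured text is typically turned into something a mining algorithm can work with, the sketch below converts a few short documents into TF-IDF vectors with scikit-learn (an assumption about tooling) and compares them by cosine similarity. The documents are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A few toy "documents" in unstructured, free-text form.
docs = [
    "The bank blocked the card after detecting fraud.",
    "Customers reported card fraud on recent transactions.",
    "The restaurant menu lists pasta and flatbread dishes.",
]

# Turn free text into a structured numeric representation (TF-IDF term weights).
tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform(docs)

# Pairwise similarity: the two card-fraud documents share terms and score closest,
# while the restaurant document is unrelated to both.
print(cosine_similarity(matrix).round(2))
```

The table below then contrasts data mining with text mining.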
S.No. | Data Mining | Text Mining
1 | Data mining is the statistical technique of processing raw data in a structured form. | Text mining is the part of data mining which involves processing of text from documents.
2 | Pre-existing databases and spreadsheets are used to gather information. | The text is used to gather high quality information.
3 | Processing of data is done directly. | Processing of data is done linguistically.
4 | Statistical techniques are used to evaluate data. | Computational linguistic principles are used to evaluate text.
5 | In data mining, data is stored in a structured format. | In text mining, data is stored in an unstructured format.
6 | Data is homogeneous and is easy to retrieve. | Data is heterogeneous and is not so easy to retrieve.
7 | It supports mining of mixed data. | Only mining of text is done.
8 | It combines artificial intelligence, machine learning and statistics and applies them to data. | It applies pattern recognition and natural language processing to unstructured data.
9 | It is used in fields like marketing, medicine, and healthcare. | It is used in fields like bioscience and customer profile analysis.

Text and Database search

A database is a searchable collection of information. Most research databases are searchable
collections of journal, magazine, and newspaper articles. Each database contains thousands of
articles published in many different journals, allowing us to find relevant articles faster than we
would by searching individual journals.
Some databases provide the full text of articles. Others provide abstracts, or summaries, only.

Text search refers to searching for some text inside extensive text data stored electronically and
returning results that contain some or all of the words from the query. In contrast, a traditional
search would return only exact matches.

While traditional databases are great for storing and retrieving general data, performing full-text
searches has been challenging. Frequently, additional tooling is required to achieve this.

Full-text search examples

Full-text search can have many different usages—for example, looking for a dish on a restaurant
menu or looking for a specific feature in the description of an item on an e-commerce website. In
addition to searching for particular keywords, you can augment a full-text search with search
features like fuzzy-text and synonyms. Therefore, the results for a word such as “pasta” would
return not only items such as “Pasta with meatballs” but could also return items like “Fettuccine
Carbonara” using a synonym, or “Bacon and pesto flatbread” using a fuzzy search.
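As a rough illustration (not tied to any particular product), the sketch below uses SQLite's FTS5 virtual table to run a full-text search over a toy restaurant menu; FTS5 availability depends on how the local SQLite library was built. Fuzzy matching and synonym expansion, as described above, normally require a dedicated search engine or extension and are not shown.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 must be compiled into the local SQLite build for this statement to work.
conn.execute("CREATE VIRTUAL TABLE menu USING fts5(name, description)")
conn.executemany(
    "INSERT INTO menu (name, description) VALUES (?, ?)",
    [
        ("Pasta with meatballs", "Fresh pasta in tomato sauce with beef meatballs"),
        ("Fettuccine Carbonara", "Egg, pecorino and guanciale over ribbon noodles"),
        ("Bacon and pesto flatbread", "Crispy flatbread with basil pesto and bacon"),
    ],
)

# Full-text search: matches any row containing the queried word in any indexed column,
# regardless of case or position, rather than requiring an exact string match.
for (name,) in conn.execute("SELECT name FROM menu WHERE menu MATCH 'pasta'"):
    print(name)  # prints "Pasta with meatballs"
```

A traditional exact-match query such as WHERE name = 'pasta' would return nothing here, which is the difference the section above describes.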

Some terminologies:
Data Lake:
It is a central repository that stores
1) Structured data, such as tables from on-premise or cloud databases.
2) Semi-structured data, such as JSON, Avro, XML and other raw files.
3) Unstructured data, such as audio, video and binary files.
Data is typically ingested from several batch or streaming sources. Data lake platforms are
robust, elastic, and easily scalable, serving rapid analytics over massive data sets to users across
a variety of use cases, including maintaining a single source of truth.
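As a small, hypothetical sketch of these three kinds of content, the code below writes structured and semi-structured data into a local folder that stands in for a lake's object store (in practice this would be S3, ADLS or similar). The folder layout, file names and use of pandas with a Parquet engine (pyarrow or fastparquet) are assumptions made for the example.

```python
import json
from pathlib import Path

import pandas as pd

# A local folder stands in for the lake's object store.
lake = Path("datalake")
(lake / "structured").mkdir(parents=True, exist_ok=True)
(lake / "semi_structured").mkdir(parents=True, exist_ok=True)

# Structured data: a table written as Parquet (requires pyarrow or fastparquet).
orders = pd.DataFrame({"order_id": [1, 2], "amount": [49.90, 15.25]})
orders.to_parquet(lake / "structured" / "orders.parquet", index=False)

# Semi-structured data: a raw JSON event kept as-is for later schema-on-read analysis.
event = {"type": "click", "user": "u42", "meta": {"page": "/pricing"}}
(lake / "semi_structured" / "events.json").write_text(json.dumps(event))

# Unstructured data (audio, video, binaries) would simply be copied in as files.
# Analytics tools later read directly from the lake:
print(pd.read_parquet(lake / "structured" / "orders.parquet"))
```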
