
Elective IV- Cloud Computing

Q.1 Explain General Security Advantages of Cloud-Based Solutions.


Answer:
Protection against DDoS. Distributed denial of service attacks are on the rise, and a top
cloud computing security solution focuses on measures to stop huge amounts of traffic aimed at
a company’s cloud servers. This entails monitoring, absorbing and dispersing DDoS attacks to
minimize risk.
Data security. In the ever-increasing era of data breaches, a top cloud computing
security solution has security protocols in place to protect sensitive information and transactions.
This prevents a third party from eavesdropping or tampering with data being transmitted.
Regulatory compliance. Top cloud computing security solutions help companies in
regulated industries by managing and maintaining enhanced infrastructures for compliance and
to protect personal and financial data.
Flexibility. A cloud computing solution provides you with the security you need whether
you’re turning up or down capacity. You have the flexibility to avoid server crashes during high
traffic periods by scaling up your cloud solution. Then when the high traffic is over, you can
scale back down to reduce costs.
High availability and support. A best-practices cloud computing security solution
offers constant support for a company’s assets. This includes live monitoring 24 hours a day, 7
days a week, and every day of the year. Redundancies are built-in to ensure your company’s
website and applications are always online.
Security: Many organizations have security concerns when it comes to adopting a cloud-
computing solution. After all, when files, programs, and other data aren't kept securely onsite,
how can you know that they are being protected? If you can remotely access your data, then
what's stopping a cybercriminal from doing the same thing? Quite a bit, actually: reputable cloud
providers invest in dedicated security teams, encryption, access controls, and round-the-clock
monitoring on a scale that most individual organizations cannot match on their own.

Q.2 Write a short note on Infrastructure as a Service.


Answer:
Infrastructure as a Service (IaaS) is the practice of delivering a full compute stack —
including servers, storage, networking and operating software — as an abstract, virtualized
construct. Like other service-based offerings (Software as a Service, Platform as a Service), IaaS
allows users to consume only what they need while offloading complex and expensive
management tasks to their provider. Infrastructure as a service (IaaS) is also known as hardware
as a service (HaaS).

IaaS grew out of the broader conversion from traditional hardware-oriented data centres
to virtualized and cloud-based infrastructure. By removing the fixed relationship between
hardware and operating software and middleware, organizations found that they could scale data
environments quickly and easily to meet workload demands.

From there, it was just a small step to begin purchasing infrastructure on a service model
to cut costs and deliver the kind of flexibility needed to accommodate the growing demand for
digital services.

While the typical consumption model for IaaS is to acquire services from a third-party
provider, many large enterprises are adapting it for their own internal, private clouds. IaaS, after
all, is built on virtual pools of resources which ideally are parcelled out on-demand and then
returned to the pool when no longer needed.

Rather than providing discrete server, storage and networking resources in this way,
internal IaaS models deliver them in an integrated fashion to avoid bottlenecks and conflicts. In
this way, the enterprise is able to streamline its actual hardware infrastructure while still
providing the needed resources to serve the business model.

Q.3 Explain the following standards: Ajax, XML, JSON.


Answer:
Ajax:

Ajax is a set of web development techniques using many web technologies on the client
side to create asynchronous web applications. With Ajax, web applications can send and retrieve
data from a server asynchronously (in the background) without interfering with the display and
behaviour of the existing page. By decoupling the data interchange layer from the presentation
layer, Ajax allows web pages and, by extension, web applications, to change content
dynamically without the need to reload the entire page.[3] In practice, modern implementations
commonly utilize JSON instead of XML.

Ajax is not a single technology, but rather a group of technologies. HTML and CSS can
be used in combination to mark up and style information. The webpage can then be modified by
JavaScript to dynamically display—and allow the user to interact with—the new information.
The built-in XMLHttpRequest object, or since 2017 the new fetch() function within
JavaScript, is commonly used to execute Ajax on webpages allowing websites to load content
onto the screen without refreshing the page. Ajax is not a new technology, or different language,
just existing technologies used in new ways.

In the early-to-mid 1990s, most Web sites were based on complete HTML pages. Each
user action required that a completely new page be loaded from the server. This process was
inefficient, as reflected by the user experience: all page content disappeared, then the new page
appeared. Each time the browser reloaded a page because of a partial change, all of the content
had to be re-sent, even though only some of the information had changed. This placed additional
load on the server and made bandwidth a limiting factor on performance.

In 1996, the iframe tag was introduced by Internet Explorer; like the object element, it can load
or fetch content asynchronously. In 1998, the Microsoft Outlook Web Access team developed
the concept behind the XMLHttpRequest scripting object. It appeared as XMLHTTP in the
second version of the MSXML library, which shipped with Internet Explorer 5.0 in March 1999.

The functionality of the XMLHTTP ActiveX control in IE 5 was later implemented by Mozilla,
Safari, Opera and other browsers as the XMLHttpRequest JavaScript object.[7] Microsoft adopted
the native XMLHttpRequest model as of Internet Explorer 7. The ActiveX version is still
supported in Internet Explorer, but not in Microsoft Edge. The utility of these background HTTP
requests and asynchronous Web technologies remained fairly obscure until it started appearing in
large-scale online applications such as Outlook Web Access (2000) and Oddpost (2002).

XML:

Extensible Markup Language (XML) is a markup language that defines a set of rules for
encoding documents in a format that is both human-readable and machine-readable. The World
Wide Web Consortium's XML 1.0 Specification of 1998 and several other related
specifications—all of them free open standards—define XML.

The design goals of XML emphasize simplicity, generality, and usability across the
Internet. It is a textual data format with strong support via Unicode for different human
languages. Although the design of XML focuses on documents, the language is widely used for
the representation of arbitrary data structures such as those used in web services. Several schema
systems exist to aid in the definition of XML-based languages, while programmers have
developed many application programming interfaces (APIs) to aid the processing of XML data.

Hundreds of document formats using XML syntax have been developed, including RSS,
Atom, SOAP, SVG, and XHTML. XML-based formats have become the default for many
office-productivity tools, including Microsoft Office (Office Open XML), OpenOffice.org and
LibreOffice (OpenDocument), and Apple's iWork. XML has also provided the base language for
communication protocols such as XMPP. Applications for the Microsoft .NET Framework use
XML files for configuration, and property lists are an implementation of configuration storage
built on XML.

Many industry data standards, such as Health Level 7, Open Travel Alliance, FpML,
MISMO, and National Information Exchange Model are based on XML and the rich features of
the XML schema specification. Many of these standards are quite complex and it is not
uncommon for a specification to comprise several thousand pages. In publishing, Darwin
Information Typing Architecture is an XML industry data standard. XML is used extensively to
underpin various publishing formats

JSON:

JavaScript Object Notation is an open standard file format, and data interchange format,
that uses human-readable text to store and transmit data objects consisting of attribute–value
pairs and array data types (or any other serializable value). It is a very common data format, with
a diverse range of applications, such as serving as replacement for XML in AJAX systems.
JSON is a language-independent data format. It was derived from JavaScript, but many modern
programming languages include code to generate and parse JSON-format data. The official
Internet media type for JSON is application/json. JSON filenames use the extension .json.

Douglas Crockford originally specified the JSON format in the early 2000s. JSON was first
standardized in 2013, as ECMA-404. RFC 8259, published in 2017, is the current version of the
Internet Standard STD 90, and it remains consistent with ECMA-404. That same year, JSON
was also standardized as ISO/IEC 21778:2017.[1] The ECMA and ISO standards describe only
the allowed syntax, whereas the RFC covers some security and interoperability considerations.

JSON was based on a subset of the JavaScript scripting language (specifically, Standard
ECMA-262 3rd Edition—December 1999) and is commonly used with JavaScript, but it is a
language-independent data format. Code for parsing and generating JSON data is readily
available in many programming languages. JSON's website lists JSON libraries by language.

Though JSON was originally advertised and believed to be a strict subset of JavaScript
and ECMAScript, it inadvertently allows some unescaped characters in strings that were illegal
in JavaScript and ECMAScript string literals. JSON is a strict subset of ECMAScript as of the
language's 2019 revision.
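As a small illustration of the format described above, the following sketch uses Python's standard json module to parse and generate JSON text; the document content is made up for the example.

import json

# A JSON document is text made of attribute-value pairs and arrays.
text = '{"name": "cloud-notes", "tags": ["iaas", "paas"], "pages": 34}'

# Parsing turns the text into native data structures (dict, list, str, int).
doc = json.loads(text)
print(doc["tags"][0])            # -> iaas

# Generating (serializing) goes the other way, producing language-independent text.
doc["pages"] += 1
print(json.dumps(doc, indent=2))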

Q.4 Explain in brief: Bigtable and Amazon Dynamo.

Answer:
Bigtable:

Bigtable is a compressed, high performance, proprietary data storage system built on Google File
System, Chubby Lock Service, SSTable (log-structured storage like LevelDB) and a few other
Google technologies. On May 6, 2015, a public version of Bigtable was made available as a
service. Bigtable also underlies Google Cloud Datastore, which is available as a part of the
Google Cloud Platform.

Bigtable development began in 2004 and is now used by a number of Google
applications, such as web indexing, MapReduce (which is often used for generating and
modifying data stored in Bigtable), Google Maps, Google Book Search, "My Search History",
Google Earth, Blogger.com, Google Code hosting, YouTube, and Gmail. Google's reasons for
developing its own database include scalability and better control of performance characteristics.
Google's Spanner RDBMS is layered on an implementation of Bigtable with a Paxos
group for two-phase commits to each table. Google F1 was built using Spanner to replace an
implementation based on MySQL.

Bigtable is one of the prototypical examples of a wide column store. It maps two arbitrary
string values (row key and column key) and timestamp (hence three-dimensional mapping) into
an associated arbitrary byte array. It is not a relational database and can be better defined as a
sparse, distributed multi-dimensional sorted map. Bigtable is designed to scale into the petabyte
range across "hundreds or thousands of machines, and to make it easy to add more machines [to]
the system and automatically start taking advantage of those resources without any
reconfiguration". For example, Google's copy of the web can be stored in a Bigtable where the
row key is a domain-reversed URL, and columns describe various properties of a web page, with
one particular column holding the page itself. The page column can have several timestamped
versions describing different copies of the web page timestamped by when they were fetched.
Each cell of a Bigtable can have zero or more timestamped versions of the data. Another
function of the timestamp is to allow for both versioning and garbage collection of expired data.
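The (row key, column key, timestamp) to byte-string mapping described above can be illustrated with a toy in-memory model. This is only a Python sketch of the abstraction, not the real Bigtable client or storage engine, and the keys and values below are invented.

# A toy model of Bigtable's data model: a sparse, sorted map from
# (row key, column key, timestamp) to an uninterpreted byte string.
table = {}

def put(row, column, value, timestamp):
    table[(row, column, timestamp)] = value

def read_latest(row, column):
    # A cell may hold several timestamped versions; return the newest one.
    versions = [(ts, v) for (r, c, ts), v in table.items() if r == row and c == column]
    return max(versions)[1] if versions else None

# Row keys for web pages are often domain-reversed URLs so related pages sort together.
put("com.example/index.html", "contents:", b"<html>v1</html>", 1000)
put("com.example/index.html", "contents:", b"<html>v2</html>", 2000)
print(read_latest("com.example/index.html", "contents:"))   # the newest version wins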

Amazon Dynamo:

Amazon DynamoDB is a fully managed proprietary NoSQL database service that supports key-
value and document data structures and is offered by Amazon.com as part of the Amazon Web
Services portfolio. DynamoDB exposes a similar data model to and derives its name from
Dynamo, but has a different underlying implementation. Dynamo had a multi-master design
requiring the client to resolve version conflicts, whereas DynamoDB uses synchronous replication
across multiple data centers for high durability and availability. DynamoDB was announced by
Amazon CTO Werner Vogels on January 18, 2012, and is presented as an evolution of the
Amazon SimpleDB solution.

DynamoDB differs from other Amazon services by allowing developers to purchase a
service based on throughput, rather than storage. If Auto Scaling is enabled, then the database
will scale automatically. Additionally, administrators can request throughput changes and
will scale automatically. Additionally, administrators can request throughput changes and
DynamoDB will spread the data and traffic over a number of servers using solid-state drives,
allowing predictable performance. It offers integration with Hadoop via Elastic MapReduce. In
September 2013, Amazon made a local development version of DynamoDB available so
developers could test DynamoDB-backed applications locally.

A DynamoDB table features items that have attributes, some of which form a primary
key. Whereas in relational systems every item carries each table attribute (or holds "null" and
"unknown" values in their absence), DynamoDB items are schema-less. The only exception:
when creating a table, a developer specifies a primary key, and the table requires a key for every
item. Primary keys must be scalar (strings, numbers, or binary) and can take one of two forms. A
single-attribute primary key is known as the table's "partition key", which determines the
partition that an item hashes to––more on partitioning below––so an ideal partition key has a
uniform distribution over its range. A primary key can also feature a second attribute, which
DynamoDB calls the table's "sort key". In this case, partition keys do not have to be unique; they
are paired with sort keys to make a unique identifier for each item. The partition key is still used
to determine which partition the item is stored in, but within each partition, items are sorted by
the sort key.
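The partition-key/sort-key scheme above can be sketched with the boto3 SDK. The table name, attribute names, and billing mode below are illustrative assumptions, and running the code requires valid AWS credentials.

import boto3

dynamodb = boto3.resource("dynamodb")

# Create a table whose primary key is (partition key, sort key).
table = dynamodb.create_table(
    TableName="Orders",
    KeySchema=[
        {"AttributeName": "CustomerId", "KeyType": "HASH"},   # partition key
        {"AttributeName": "OrderDate", "KeyType": "RANGE"},   # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "CustomerId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Items are schema-less apart from the mandatory key attributes.
table.put_item(Item={"CustomerId": "c-001", "OrderDate": "2020-01-15", "Total": 42})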

Q.5 What are CPU Virtualization and Memory Virtualization? Explain in brief.

Answer:
Virtualization uses software to create an abstraction layer over computer hardware that
allows the hardware elements of a single computer—processors, memory, storage and more—to
be divided into multiple virtual computers, commonly called virtual machines (VMs). Each VM
runs its own operating system (OS) and behaves like an independent computer, even though it is
running on just a portion of the actual underlying computer hardware.

It follows that virtualization enables more efficient utilization of physical computer
hardware and allows a greater return on an organization’s hardware investment. Today,
virtualization is a standard practice in enterprise IT architecture. It is also the technology that
drives cloud computing economics. Virtualization enables cloud providers to serve users with
their existing physical computer hardware; it enables cloud users to purchase only the computing
resources they need when they need it, and to scale those resources cost-effectively as their
workloads grow.
CPU Virtualization is a hardware feature found in all current AMD and Intel CPUs that
allows a single processor to act as if it were multiple individual CPUs. This allows an operating
system to utilize the CPU power in the computer more effectively and efficiently so that it runs
faster. This feature is also a requirement for much virtual machine software, which needs it to be
enabled in order to run properly, or at all.

CPU Virtualization goes by different names depending on the CPU manufacturer. For
Intel CPUs, this feature is called Intel Virtualization Technology, or Intel VT, and with AMD
CPUs it is called AMD-V. Regardless of what it is called, each virtualization technology
provides generally the same features and benefits to the operating system.
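As a hedged, Linux-only illustration, the presence of these CPU features can be checked from /proc/cpuinfo: the vmx flag indicates Intel VT-x and the svm flag indicates AMD-V (the feature may still need to be enabled in the BIOS/UEFI firmware).

# Minimal sketch: report whether the CPU advertises hardware virtualization support.
def virtualization_flags(path="/proc/cpuinfo"):
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"intel_vt": "vmx" in flags, "amd_v": "svm" in flags}

print(virtualization_flags())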

Memory virtualization allows networked, and therefore distributed, servers to share a
pool of memory to overcome physical memory limitations, a common bottleneck in software
performance. With this capability integrated into the network, applications can take advantage of
a very large amount of memory to improve overall performance and system utilization, increase
memory usage efficiency, and enable new use cases. Software on the memory pool nodes
(servers) allows nodes to connect to the memory pool to contribute memory, and store and
retrieve data. Management software and the technologies of memory overcommitment manage
shared memory, data insertion, eviction and provisioning policies, and data assignment to
contributing nodes, and handle requests from client nodes. The memory pool may be accessed
at the application level or operating system level. At the application level, the pool is accessed
through an API or as a networked file system to create a high-speed shared memory cache. At
the operating system level, a page cache can utilize the pool as a very large memory resource that
is much faster than local or networked storage.

Memory virtualization implementations are distinguished from shared memory systems.
Shared memory systems do not permit abstraction of memory resources, thus requiring
implementation with a single operating system instance (i.e. not within a clustered application
environment). Memory virtualization is also different from storage based on flash memory such
as solid-state drives (SSDs) - SSDs and other similar technologies replace hard-drives
(networked or otherwise), while memory virtualization replaces or complements traditional
RAM.
Q.6 Explain the steps for Configuring a Server for EC2.

Answer:

1. Create credentials that you want to assign to the EC2 instance.
2. Choose an Amazon Machine Image (AMI).
3. Choose an Instance Type.
4. Configure Instance Details such as network and storage.
5. Add labels or tags for identifying your EC2 instance.
6. Configure firewall (called security group) as appropriate.
7. Review and launch the EC2 instance.
8. Once the EC2 instance is created, you can connect to it using the chosen credentials.
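The same steps can be sketched programmatically with the boto3 SDK. The AMI ID, key pair name, security group ID, and tag values below are placeholders rather than real resources, and valid AWS credentials are assumed.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # step 2: chosen Amazon Machine Image
    InstanceType="t3.micro",                     # step 3: instance type
    KeyName="my-key-pair",                       # step 1: credentials for SSH access
    SecurityGroupIds=["sg-0123456789abcdef0"],   # step 6: firewall (security group)
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{                         # step 5: tags for identification
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-server"}],
    }],
)
print(response["Instances"][0]["InstanceId"])    # steps 7-8: instance is launching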

Q.7 Explain in detail Snapshotting an EBS Volume and Increasing Performance.

Answer:
Amazon Elastic Block Store (EBS) provides raw block-level storage that can be
attached to Amazon EC2 instances and is used by Amazon Relational Database Service (RDS).
Amazon EBS provides a range of options for storage performance and cost. These options are
divided into two major categories: SSD-backed storage for transactional workloads, such as
databases and boot volumes (performance depends primarily on IOPS), and disk-backed storage
for throughput intensive workloads, such as MapReduce and log processing (performance
depends primarily on MB/s).

In a typical use case, using EBS would include formatting the device with a filesystem
and mounting it. EBS supports advanced storage features, including snapshotting and cloning.
As of June 2014, EBS volumes can be up to 1TB in size. EBS volumes are built on replicated
back end storage, so that the failure of a single component will not cause data loss.

 Reliable and secure storage − Each EBS volume is automatically replicated within its
Availability Zone to protect from component failure.
 Secure − Amazon’s flexible access control policies allow you to specify who can access
which EBS volumes. Access control plus encryption offers a strong defense-in-depth
security strategy for data.

 Higher performance − Amazon EBS uses SSD technology to deliver data with
consistent I/O performance for applications.

 Easy data backup − Data backup can be saved by taking point-in-time snapshots of
Amazon EBS volumes.

Create Amazon EBS

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

2. From the navigation bar, select the Region in which you would like to create your volume.
This choice is important because some Amazon EC2 resources can be shared between
Regions, while others can't. For more information, see Resource Locations.

3. In the navigation pane, choose ELASTIC BLOCK STORE, Volumes.


4. Choose Create Volume.
5. For Volume Type, choose a volume type. For more information, see Amazon EBS Volume
Types.
6. For Size (GiB), type the size of the volume. For more information, see Constraints on the
Size and Configuration of an EBS Volume.
7. With a Provisioned IOPS SSD volume, for IOPS, type the maximum number of input/output
operations per second (IOPS) that the volume should support.
8. For Availability Zone, choose the Availability Zone in which to create the volume. EBS
volumes can only be attached to EC2 instances within the same Availability Zone.
9. (Optional) If the instance type supports EBS encryption and you want to encrypt the volume,
select Encrypt this volume and choose a CMK. If encryption by default is enabled in this
Region, EBS encryption is enabled and the default CMK for EBS encryption is chosen. You
can choose a different CMK from Master Key or paste the full ARN of any key that you can
access. For more information, see Amazon EBS Encryption.
10. (Optional) Choose Create additional tags to add tags to the volume. For each tag, provide a
tag key and a tag value. For more information, see Tagging Your Amazon EC2 Resources.
11. Choose Create Volume. After the volume status is Available, you can attach the volume to
an instance. For more information, see Attaching an Amazon EBS Volume to an Instance.
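For reference, a hedged boto3 sketch of the same workflow (volume type, size, Availability Zone, encryption, tags, then attach) follows; all IDs, names, and sizes are illustrative placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # must match the instance's AZ to attach later
    VolumeType="gp3",                # see Amazon EBS Volume Types
    Size=100,                        # size in GiB
    Encrypted=True,                  # uses the default CMK unless KmsKeyId is given
    TagSpecifications=[{
        "ResourceType": "volume",
        "Tags": [{"Key": "Name", "Value": "data-volume"}],
    }],
)

# Wait until the volume status is "available", then attach it to an instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(VolumeId=volume["VolumeId"],
                  InstanceId="i-0123456789abcdef0",
                  Device="/dev/sdf")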

Q.8 What is the AWS load balancing service? Explain Elastic Load Balancing
and its types with their advantages.

Answer:

Elastic Load Balancing automatically distributes incoming application traffic across
multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda
functions. It can handle the varying load of your application traffic in a single Availability Zone
or across multiple Availability Zones. Elastic Load Balancing offers three types of load balancers
that all feature the high availability, automatic scaling, and robust security necessary to make
your applications fault tolerant.

A load balancer distributes workloads across multiple compute resources, such as virtual
servers. Using a load balancer increases the availability and fault tolerance of your applications.
You can add and remove compute resources from your load balancer as your needs change,
without disrupting the overall flow of requests to your applications.

You can configure health checks, which monitor the health of the compute resources, so
that the load balancer sends requests only to the healthy ones. You can also offload the work of
encryption and decryption to your load balancer so that your compute resources can focus on
their main work.
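A minimal boto3 sketch of configuring such a health check on an Application Load Balancer target group follows; the VPC ID, instance ID, and health-check path are assumptions for illustration only.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Targets that fail the health check stop receiving traffic from the load balancer.
target_group = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=2,
)
tg_arn = target_group["TargetGroups"][0]["TargetGroupArn"]

# Register a compute resource (an EC2 instance here) as a target.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0", "Port": 80}],
)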

Types:

 Application Load Balancer

An Application Load Balancer makes routing decisions at the application layer
(HTTP/HTTPS), supports path-based routing, and can route requests to one or more ports on
each container instance in your cluster. Application Load Balancers support dynamic host port
mapping. For example, if your task's container definition specifies port 80 for an NGINX
container port, and port 0 for the host port, then the host port is dynamically chosen from the
ephemeral port range of the container instance (such as 32768 to 61000 on the latest Amazon
ECS-optimized AMI). When the task is launched, the NGINX container is registered with the
Application Load Balancer as an instance ID and port combination, and traffic is distributed to
the instance ID and port corresponding to that container. This dynamic mapping allows you to
have multiple tasks from a single service on the same container instance. For more information,
see the User Guide for Application Load Balancers.

 Network Load Balancer

A Network Load Balancer makes routing decisions at the transport layer (TCP/SSL). It
can handle millions of requests per second. After the load balancer receives a connection, it
selects a target from the target group for the default rule using a flow hash routing algorithm. It
attempts to open a TCP connection to the selected target on the port specified in the listener
configuration. It forwards the request without modifying the headers. Network Load Balancers
support dynamic host port mapping. For example, if your task's container definition specifies
port 80 for an NGINX container port, and port 0 for the host port, then the host port is
dynamically chosen from the ephemeral port range of the container instance (such as 32768 to
61000 on the latest Amazon ECS-optimized AMI). When the task is launched, the NGINX
container is registered with the Network Load Balancer as an instance ID and port combination,
and traffic is distributed to the instance ID and port corresponding to that container. This
dynamic mapping allows you to have multiple tasks from a single service on the same container
instance. For more information, see the User Guide for Network Load Balancers.

 Classic Load Balancer

A Classic Load Balancer makes routing decisions at either the transport layer (TCP/SSL)
or the application layer (HTTP/HTTPS). Classic Load Balancers currently require a fixed
relationship between the load balancer port and the container instance port. For example, it is
possible to map the load balancer port 80 to the container instance port 3030 and the load
balancer port 4040 to the container instance port 4040. However, it is not possible to map the
load balancer port 80 to port 3030 on one container instance and port 4040 on another container
instance. This static mapping requires that your cluster has at least as many container instances
as the desired count of a single service that uses a Classic Load Balancer. For more information,
see the User Guide for Classic Load Balancers.

Advantages of Elastic Load Balancing:


 Highly available: Elastic Load Balancing automatically distributes incoming traffic across
multiple targets – Amazon EC2 instances, containers, IP addresses, and Lambda functions –
in multiple Availability Zones and ensures only healthy targets receive traffic. Elastic Load
Balancing can also load balance across a Region, routing traffic to healthy targets in different
Availability Zones. The Amazon Elastic Load Balancing Service Level Agreement
commitment is 99.99% availability for a load balancer.
 Elastic: Elastic Load Balancing is capable of handling rapid changes in network traffic
patterns. Additionally, deep integration with Auto Scaling ensures sufficient application
capacity to meet varying levels of application load without requiring manual intervention.
 Robust monitoring & auditing: Elastic Load Balancing allows you to monitor your
applications and their performance in real time with Amazon CloudWatch metrics, logging,
and request tracing. This improves visibility into the behavior of your applications,
uncovering issues and identifying performance bottlenecks in your application stack at the
granularity of an individual request.

Q.9 What is the idea of Cloudlets? Explain Cloudlets. Also differentiate Cloudlets and Clouds.

Answer:
A cloudlet is a mobility-enhanced small-scale cloud datacenter that is located at the edge
of the Internet. The main purpose of the cloudlet is supporting resource-intensive and interactive
mobile applications by providing powerful computing resources to mobile devices with lower
latency. It is a new architectural element that extends today’s cloud computing infrastructure. It
represents the middle tier of a 3-tier hierarchy: mobile device - cloudlet - cloud. A cloudlet can
be viewed as a data center in a box whose goal is to bring the cloud closer. The cloudlet term
was first coined by M. Satyanarayanan, Victor Bahl, Ramón Cáceres, and Nigel Davies, and a
prototype implementation was developed by Carnegie Mellon University as a research project. The
concept of cloudlet is also known as follow me cloud, and mobile micro-cloud.

Cloudlets aim to support mobile applications that are both resource-intensive and
interactive. Augmented reality applications that use head-tracked systems require end-to-end
latencies of less than 16 ms. Cloud games with remote rendering also require low latencies and
high bandwidth. Wearable cognitive assistance systems combine devices such as Google
Glass with cloud-based processing to guide users through complex tasks. This futuristic genre of
applications is characterized as “astonishingly transformative” by the report of the 2013 NSF
Workshop on Future Directions in Wireless Networking. These applications use cloud resources
in the critical path of real-time user interaction. Consequently, they cannot tolerate end-to-end
operation latencies of more than a few tens of milliseconds. Apple Siri and Google Now which
perform compute-intensive speech recognition in the cloud, are further examples in this
emerging space.

Sr.No   Comparison Attribute    Cloud                       Cloudlet
1       Managed by              The service provider        The business itself
2       Connectivity            Over the Internet           Over LAN or Wi-Fi
3       Users                   Several users worldwide     Only local users
4       State of data           Real and consistent data    Temporarily cached data

Q.10 Explain Performance metrics for HPC/HTC Systems. Write a note on Innovative applications of IoT.
Answer:

There are many differences between high-throughput computing (HTC), high-performance
computing (HPC), and many-task computing (MTC).

HPC tasks are characterized as needing large amounts of computing power for short periods of
time, whereas HTC tasks also require large amounts of computing, but for much longer times
(months and years, rather than hours and days). HPC environments are often measured in terms
of FLOPS.
The HTC community, however, is not concerned about operations per second, but rather
operations per month or per year. Therefore, the HTC field is more interested in how many jobs
can be completed over a long period of time instead of how fast.

As an alternative definition, the European Grid Infrastructure defines HTC as “a
computing paradigm that focuses on the efficient execution of a large number of loosely-coupled
tasks”, while HPC systems tend to focus on tightly coupled parallel jobs, and as such they must
execute within a particular site with low-latency interconnects. Conversely, HTC systems run
independent, sequential jobs that can be individually scheduled on many different computing
resources across multiple administrative boundaries. HTC systems achieve this using various
grid computing technologies and techniques.

MTC aims to bridge the gap between HTC and HPC. MTC is reminiscent of HTC, but it
differs in the emphasis of using many computing resources over short periods of time to
accomplish many computational tasks (i.e. including both dependent and independent tasks),
where the primary metrics are measured in seconds (e.g. FLOPS, tasks/s, MB/s I/O rates), as
opposed to operations (e.g. jobs) per month. MTC denotes high-performance computations
comprising multiple distinct activities, coupled via file system operations.
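A back-of-the-envelope comparison of the two kinds of metrics discussed above (the numbers below are made up, purely for illustration):

# HPC is measured in sustained operations per second; HTC in work completed per month.
flops = 2.5e15                       # an HPC metric: floating-point operations per second
jobs_per_month = 1_200_000           # an HTC metric: completed jobs per month
avg_job_runtime_hours = 6

print(f"HPC view : {flops:.1e} FLOPS sustained")
print(f"HTC view : {jobs_per_month} jobs/month, "
      f"{jobs_per_month * avg_job_runtime_hours:.2e} CPU-hours of work delivered")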

Consumer applications:

A growing portion of IoT devices are created for consumer use, including connected
vehicles, home automation, wearable technology, connected health, and appliances with remote
monitoring capabilities.

Smart home:

IoT devices are a part of the larger concept of home automation, which can include
lighting, heating and air conditioning, media and security systems. Long-term benefits could
include energy savings by automatically ensuring lights and electronics are turned off. A smart
home or automated home could be based on a platform or hubs that control smart devices and
appliances. For instance, using Apple's HomeKit, manufacturers can have their home products
and accessories controlled by an application in iOS devices such as the iPhone and the Apple
Watch. This could be a dedicated app or iOS native applications such as Siri. This can be
demonstrated in the case of Lenovo's Smart Home Essentials, which is a line of smart home
devices that are controlled through Apple's Home app or Siri without the need for a Wi-Fi
bridge. There are also dedicated smart home hubs that are offered as standalone platforms to
connect different smart home products and these include the Amazon Echo, Google Home,
Apple's HomePod, and Samsung's SmartThings Hub. In addition to the commercial systems,
there are many non-proprietary, open source ecosystems; including Home Assistant, OpenHAB
and Domoticz.

Elder care:

One key application of a smart home is to provide assistance for those with disabilities
and elderly individuals. These home systems use assistive technology to accommodate an
owner's specific disabilities. Voice control can assist users with sight and mobility limitations
while alert systems can be connected directly to cochlear implants worn by hearing-impaired
users. They can also be equipped with additional safety features. These features can include
sensors that monitor for medical emergencies such as falls or seizures. Smart home technology
applied in this way can provide users with more freedom and a higher quality of life. The term
"Enterprise IoT" refers to devices used in business and corporate settings. By 2019, it is
estimated that the EIoT will account for 9.1 billion devices.

Q.11 What is Energy aware Cloud Computing? Explain in detail.

Answer:

Cloud computing, as a trending model for information technology, provides unique
features and opportunities including scalability, broad accessibility and dynamic provisioning of
computing resources with limited capital investments. Energy-aware cloud computing concerns
the criteria, assets, and models for energy-aware practices, and it envisions a market structure that
addresses the impact of the quality and price of energy supply on the quality and cost of cloud
computing services. Energy management practices for cloud providers at the macro and micro
levels aim to improve the cost and reliability of cloud services.

Cloud providers handle various technical challenges in smart grids, including optimizing energy management costs
various technical challenges in smart grids including optimizing energy management costs
through monitoring and controlling the power grid assets, providing software applications on
both producer and consumer sides to control the power flow and implementing various pricing
strategies according to the energy consumption, decreasing carbon emission by dispatching
renewable energy resources effectively, and providing unlimited storage capacity for storing
customers’ data. Adopting cloud services by power grid operators would result in more efficient
and reliable delivering of electricity. However, the reliability and security of the provided grid
services would be heavily dependent on the data and data processing capabilities of the cloud
providers and any failure in the data centers that provide the cloud computing services can lead
to considerable loss in the power grid. The 2003 Northeast blackout was caused by a software
flaw in an alarm system in a control room in Ohio that eventually led to a cascading failure. As a
result, the energy supply was cut off to 45 million people in eight states and 10 million people in
Ontario. The cloud providers implement different pricing schemes for the offered services. The
pricing strategy of each cloud provider is dependent on the strategies adopted by the competitors.
Offering higher prices for the same cloud service would result in losing the customers in the
cloud computing market. The pricing schemes for the cloud providers are divided into three
categories: static, dynamic and market dependent. In static pricing scheme, the customer pays a
fixed price for the cloud services regardless of the volume of the received services. In dynamic
pricing scheme, the prices of cloud services alter dynamically with the service characteristics as
well as customer characteristics and preferences. In market dependent pricing schemes, the price
of services is determined based on the real-time market conditions including bargaining,
auctioning, and demand behavior. Regardless of the choice of pricing scheme, the price of cloud
computing services depends on several factors including the initial cost of the cloud resources,
the quality of offered services including privacy and security, the availability of the resources
and the operation and maintenance costs.

Currently, cloud computing market is an oligopoly among the vendors such as Amazon,
Microsoft, Google, and IBM to provide similar cloud services to the customers. These cloud
providers implement inflexible pricing schemes based on the duration of the service and usage
threshold. The lack of standard application programming interfaces (APIs) for the provided
cloud services restricts the customers’ choice. An API is a set of clearly defined methods, protocols,
and tools devised for communication between various software components, including routines,
data structures, object classes, variables or remote calls. Adopting a unified interface would lead
to forming a market structure in which the cloud services are treated as commodities. SHARP,
Tycoon, Bellagio, and Shirako are some examples of research projects that propose a unified
market structure for the cloud services. At the macro level, cloud service providers, such as
Google, Amazon, and Microsoft, own and
operate geographically dispersed data centers that ensure acceptable quality of service for the
end-users across the globe. The ability to reroute applications between multiple data centers is
one of the important factors to provide secure, fast, and more available services to the end users.
Geo-distributed cloud environment that runs over distributed data centers enables the cloud
providers to foster power management techniques with heterogeneous objectives.

Q.12 Explain the concept of Autonomic Cloud Engine.


Answer:
Autonomic computing is the ability of a distributed system to manage its resources with
little or no human intervention. It involves intelligently adapting to the environment and to user
requests in such a way that the user does not even notice. It was started in 2001 by IBM to help
reduce the complexity of managing large distributed systems. Autonomic computing was brought
into cloud computing to address the challenges of cloud computing. Autonomic cloud computing
helps address challenges related to QoS by ensuring SLAs are met. QoS is maintained mostly by
scaling resources up or down automatically depending on demand from the client's business. In
addition, autonomic cloud computing helps reduce the carbon footprint of data centers and cloud
consumers by automatically scaling energy usage up or down based on cloud activity. Autonomic
monitoring is mostly implemented on specific layers of the cloud computing architecture.

Some authors implemented an autonomic management system at the PaaS layer to ensure the
SaaS layer meets SLAs, achieves energy efficiency, and maintains security. Others developed
fuzzy Q-Learning for knowledge evolution: a self-learning, self-adapting cloud controller that
automatically scales (down or up) the number of virtual machines that support the cloud. It uses
data collected at run-time and automatically continues to tune the data in order to achieve desired
goals, which is particularly favorable when there is not enough knowledge at design time.
Additionally, a QoS autonomic information delivery system was implemented for delivering
agricultural information to farmers; this was achieved at the IaaS layer using the Cuckoo
optimization algorithm and fuzzy logic to attain autonomic resource allocation.

Another work presented a decentralized autonomic architecture for managing wireless sensor
networks. It identified automatic operation, aware operation, and adaptive operation as properties
that every autonomic system must have. The automatic operation property ensures that the system
can control its functions and internal resources without human intervention. The aware operation
property allows the system to be aware of its resources and capabilities; this lets the system
monitor itself and use a feedback mechanism to adapt to its environment. The adaptive operation
property allows the system to continuously adapt to its environment, on a short-term and
long-term basis, to control its operations. The same work further asserted that a feedback loop
helps give a system the level of awareness needed to adapt to environmental and operational
changes.

While identifying autonomic computing as the best solution for open and non-deterministic
environments, other researchers lamented that there is no particular standard for developing
autonomous solutions. Hence, they extended JADE (Java Agent Development Environment) to
support autonomic computing in a Multi-Agent System (MAS). To address the development and
deployment bottleneck of context-aware systems, a service-oriented framework was proposed
that assists in developing and managing context for pervasive systems; that work focused on
collecting, modelling, processing and distributing information. Though successful, its authors
admitted to a number of limitations: the work is still centered on the feasibility of the approach,
and there are performance issues in terms of memory as well as operational issues in terms of
automatic creation of application entities and relations. Finally, a methodology was proposed for
implementing power-aware runtime systems through an algorithm to manage memory and
optimal memories; that work may be used to address the memory issues identified above.
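The controllers cited above use fuzzy Q-learning; the sketch below is a deliberately simplified threshold-based scaling loop, shown only to illustrate the idea of autonomic scale-up and scale-down without human intervention. The thresholds, limits, and utilization samples are invented.

def autoscale(current_vms, avg_cpu_utilization, min_vms=1, max_vms=20,
              scale_up_at=0.80, scale_down_at=0.30):
    # Return the VM count for the next control interval.
    if avg_cpu_utilization > scale_up_at and current_vms < max_vms:
        return current_vms + 1       # demand is high: add capacity to keep the SLA
    if avg_cpu_utilization < scale_down_at and current_vms > min_vms:
        return current_vms - 1       # demand is low: release capacity to save energy
    return current_vms               # within the comfort zone: do nothing

vms = 4
for cpu in [0.91, 0.88, 0.55, 0.20, 0.18]:   # monitored feedback over time
    vms = autoscale(vms, cpu)
    print(f"cpu={cpu:.2f} -> {vms} VMs")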

Q.13 What is Virtualization? Enlist its types with examples of each.

Answer:

Virtualization is the process of running a virtual instance of a computer system in a layer
abstracted from the actual hardware. Most commonly, it refers to running multiple operating
systems on a computer system simultaneously. To the applications running on top of the
virtualized machine, it can appear as if they are on their own dedicated machine, where the
operating system, libraries, and other programs are unique to the guest virtualized system and
unconnected to the host operating system which sits below it.

Types:

1. Hardware virtualization (e.g., hypervisors such as VMware ESXi, KVM, Microsoft Hyper-V)
2. Desktop virtualization (e.g., virtual desktop infrastructure products such as Citrix Virtual Desktops, VMware Horizon)
3. Network virtualization (e.g., VLANs and virtual switches)
4. Storage virtualization (e.g., pooling physical disks into logical volumes with LVM or a SAN)
5. Data virtualization (e.g., a single virtual data layer that integrates several underlying databases)
6. Memory virtualization (e.g., a distributed memory pool shared by networked servers)
7. Software virtualization (e.g., running applications in containers such as Docker)

Q.14 Write a short note on Ajax, XML, JSON.

Answer:

Ajax:

AJAX stands for Asynchronous JavaScript and XML. AJAX is a technique for creating
better, faster, and more interactive web applications with the help of XML, HTML, CSS, and
JavaScript.

 Ajax uses XHTML for content, CSS for presentation, along with Document Object Model and
JavaScript for dynamic content display.

 Conventional web applications transmit information to and from the server using synchronous
requests. It means you fill out a form, hit submit, and get directed to a new page with new
information from the server.

 With AJAX, when you hit submit, JavaScript will make a request to the server, interpret the
results, and update the current screen. In the purest sense, the user would never know that
anything was even transmitted to the server.

 XML is commonly used as the format for receiving server data, although any format, including
plain text, can be used.
 AJAX is a web browser technology independent of web server software.

 A user can continue to use the application while the client program requests information from
the server in the background.

 Intuitive and natural user interaction. Clicking is not required; mouse movement is a sufficient
event trigger.

 Data-driven as opposed to page-driven.

XML:

XML stands for Extensible Markup Language. It is a text-based markup language derived from
Standard Generalized Markup Language (SGML).

XML tags identify the data and are used to store and organize the data, rather than specifying
how to display it like HTML tags, which are used to display the data. XML is not going to
replace HTML in the near future, but it introduces new possibilities by adopting many
successful features of HTML.

There are three important characteristics of XML that make it useful in a variety of systems and
solutions −

 XML is extensible − XML allows you to create your own self-descriptive tags, or language,
that suits your application.

 XML carries the data, does not present it − XML allows you to store the data irrespective of
how it will be presented.

 XML is a public standard − XML was developed by an organization called the World Wide
Web Consortium (W3C) and is available as an open standard.
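A minimal sketch of these three points using Python's standard xml.etree module; the document below is invented for the example.

import xml.etree.ElementTree as ET

# The tags are self-descriptive and only carry the data; how it is displayed
# is left entirely to the consuming application.
text = """
<course>
    <title>Cloud Computing</title>
    <credits>4</credits>
    <topics>
        <topic>IaaS</topic>
        <topic>Virtualization</topic>
    </topics>
</course>
"""

root = ET.fromstring(text)
print(root.find("title").text)                 # -> Cloud Computing
print([t.text for t in root.iter("topic")])    # -> ['IaaS', 'Virtualization']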

JSON:

JSON or JavaScript Object Notation is a lightweight text-based open standard designed for
human-readable data interchange. Conventions used by JSON are known to programmers, which
include C, C++, Java, Python, Perl, etc.

 JSON stands for JavaScript Object Notation.


 The format was specified by Douglas Crockford.

 It was designed for human-readable data interchange.

 It has been extended from the JavaScript scripting language.

 The filename extension is .json.

 JSON Internet Media type is application/json.

 The Uniform Type Identifier is public.json.

Q.15 Write a note on Open Cloud Consortium.

Answer:

The Open Cloud Consortium (OCC) is a newly formed group of universities that is both trying
to improve the performance of storage and computing clouds spread across geographically
disparate data centers and to promote open frameworks that will let clouds operated by different
entities work seamlessly together.

Everyone is talking about building a cloud these days. But if the IT world is filled with
computing clouds, will each one be treated like a separate island, or will open standards allow
them all to interoperate with each other? That is one of the questions being examined by the
OCC. Cloud is certainly one of the most used buzzwords in IT today, and marketing hype from
vendors can at times obscure the real technical issues being addressed by researchers such as
those in the Open Cloud Consortium.

Q.16 Explain in brief I/O Virtualization.

Answer:

I/O virtualization (IOV), or input/output virtualization, is technology that uses software to
abstract upper-layer protocols from physical connections or physical transports. This technique
takes a single physical component and presents it to devices as multiple components. Because it
separates logical from physical resources, IOV is considered an enabling data center technology
that aggregates IT infrastructure as a shared pool, including computing, networking and storage.

Recent Peripheral Component Interconnect Express (PCIe) virtualization standards include
single root I/O virtualization (SR-IOV) and multi-root I/O virtualization (MR-IOV). SR-IOV
carves a hardware component into multiple logical partitions that can simultaneously share
access to a PCIe device. MR-IOV devices reside externally from the host and are shared across
multiple hardware domains.
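On Linux, SR-IOV capable devices expose their virtual-function counts through standard sysfs files; the following hedged sketch reads them (whether the files exist depends on the NIC and its driver).

import glob, os

# sriov_totalvfs is the number of virtual functions the device can carve out;
# sriov_numvfs is how many are currently enabled.
for path in glob.glob("/sys/class/net/*/device/sriov_totalvfs"):
    nic = path.split("/")[4]
    with open(path) as f:
        total = f.read().strip()
    with open(os.path.join(os.path.dirname(path), "sriov_numvfs")) as f:
        enabled = f.read().strip()
    print(f"{nic}: {enabled}/{total} virtual functions enabled")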

How I/O virtualization works

In I/O virtualization, a virtual device is substituted for its physical equivalent, such as a network
interface card (NIC) or host bus adapter (HBA). Aside from simplifying server configurations,
this setup has cost implications by reducing the electric power drawn by these devices.

Virtualization and blade server technologies cram dense computing power into a small form
factor. With the advent of virtualization, data centers started using commodity hardware to
support functions such as burst computing, load balancing and multi-tenant networked storage.

I/O virtualization is based on a one-to-many approach. The path between a physical server and
nearby peripherals is virtualized, allowing a single IT resource to be shared among virtual
machines (VMs). The virtualized devices interoperate with commonly used applications,
operating systems and hypervisors.

This technique can be applied to any server component, including disk-based RAID controllers,
Ethernet NICs, Fibre Channel HBAs, graphics cards and internally mounted solid-state drives
(SSDs). For example, a single physical NIC is presented as a series of multiple virtual NICs.

Q.17 Explain Amazon EBS Snapshot. Give steps to create EBS Snapshot.

Answer:
An EBS snapshot is a point-in-time copy of your Amazon EBS volume, which is lazily copied to
Amazon Simple Storage Service (Amazon S3). EBS snapshots are incremental copies of data.
This means that only unique blocks of EBS volume data that have changed since the last EBS
snapshot are stored in the next EBS snapshot. This is how incremental copies of data are created
in Amazon AWS EBS Snapshot.

Each AWS snapshot contains all the information needed to restore your data starting from the
moment of creating the EBS snapshot. EBS snapshots are chained together. By using them, you
will be able to properly restore your EBS volumes, when needed.

Deletion of an EBS snapshot is a process of removing only the data related to that specific
snapshot. Therefore, you can safely delete any old snapshots with no harm. If you delete an old
snapshot, AWS will consolidate the snapshot data: all valid data will be moved forward to the
next snapshot and all invalid data will be discarded.

Steps to create a snapshot using the console:

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. Choose Snapshots under Elastic Block Store in the navigation pane.
3. Choose Create Snapshot.
4. For Select resource type, choose Volume.
5. For Volume, select the volume.
6. (Optional) Enter a description for the snapshot.
7. (Optional) Choose Add Tag to add tags to your snapshot. For each tag, provide a tag key
and a tag value.
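The same operation as a hedged boto3 sketch; the volume ID and tag values are placeholders, and the waiter simply blocks until the background copy to Amazon S3 completes.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly backup of the data volume",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "Name", "Value": "data-volume-backup"}],
    }],
)

# Snapshots complete asynchronously; wait until this one is usable.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
print(snapshot["SnapshotId"], "is complete")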

Q.18 Enlist the services offered by Amazon.


Answer:
1. Amazon S3
2. Amazon EC2 [Elastic Compute Cloud]
3. AWS Lambda
4. Amazon Glacier
5. Amazon SNS
6. Amazon CloudFront
7. Amazon EBS [Elastic Block Store]
8. Amazon Kinesis
9. Amazon VPC
10. Amazon SQS
11. Amazon Elastic Beanstalk
12. Amazon DynamoDB
13. Amazon RDS [Relational Database Service]
14. Amazon ElastiCache
15. Amazon Redshift

Q. 19 What is RFID? Explain RFID tags and major components of RFID.


Answer:
RFID is an acronym for “radio-frequency identification” and refers to a technology whereby
digital data encoded in RFID tags or smart labels (defined below) are captured by a reader via
radio waves. RFID is similar to barcoding in that data from a tag or label are captured by a
device that stores the data in a database. RFID, however, has several advantages over systems
that use barcode asset tracking software. The most notable is that RFID tag data can be read
outside the line-of-sight, whereas barcodes must be aligned with an optical scanner. If you are
considering implementing an RFID solution, take the next step and contact the RFID experts at
AB&R® (American Barcode and RFID).
RFID tags are a type of tracking system that uses smart barcodes in order to identify items. RFID
is short for “radio frequency identification,” and as such, RFID tags utilize radio frequency
technology. These radio waves transmit data from the tag to a reader, which then transmits the
information to an RFID computer program. RFID tags are frequently used for merchandise, but
they can also be used to track vehicles, pets, and even patients with Alzheimer’s disease. An
RFID tag may also be called an RFID chip. There are two main types of RFID tags: battery-
operated and passive.
RFID Components:

 RFID reader: Depending on the frequency that is used and its performance, an RFID reader
sends radio waves of between one centimeter and 30 meters or more. If a transponder enters this
electromagnetic region, it detects the activating signal from the reader. The RFID reader decodes
the data stored in the integrated circuit of the transponder (silicon chip), and communicates them,
depending on the application, to a host system.

 RFID antenna: An RFID antenna consists of a coil with one or more windings and a matching
network. It radiates the electromagnetic waves generated by the reader, and receives the RF
signals from the transponder. An RFID system can be designed so that the electromagnetic field
is constantly generated, or activated by a sensor.

 RFID transponder (or tag): The heart of an RFID system is a data carrier, referred to as the
transponder, or simply the Tag. The designs and modes of function of the transponders also
differ depending on the frequency range, just as with the antennas.

Q.20 Explain: i. Applications of Sensor networks ii. Stages of supply chain management.
Answer:
i. Applications of Sensor networks:
1. Military Applications
2. Health Applications
3. Environmental Applications
4. Home Applications
5. Commercial Applications
6. Area monitoring
7. Health care monitoring
8. Environmental/Earth sensing
9. Air pollution monitoring
10. Forest fire detection
11. Landslide detection
12. Water quality monitoring
13. Industrial monitoring

ii. Stages of supply chain management:

1. Plan

The initial stage of the supply chain process is the planning stage. We need to develop a plan or
strategy in order to address how the products and services will satisfy the demands and
necessities of the customers. In this stage, the planning should mainly focus on designing a
strategy that yields maximum profit.

2. Develop(Source)

After planning, the next step involves developing or sourcing. In this stage, we mainly
concentrate on building a strong relationship with suppliers of the raw materials required for
production. This involves not only identifying dependable suppliers but also determining
different planning methods for shipping, delivery, and payment of the product.
3. Make

The third step in the supply chain management process is the manufacturing or making of
products that were demanded by the customer. In this stage, the products are designed,
produced, tested, packaged, and synchronized for delivery.

4. Deliver

The fourth stage is the delivery stage. Here the products are delivered to the customer at the
destined location by the supplier. This stage is basically the logistics phase, where customer
orders are accepted and delivery of the goods is planned. The delivery stage is often referred to as
logistics, where firms collaborate for the receipt of orders from customers, establish a network
of warehouses, pick carriers to deliver products to customers and set up an invoicing system to
receive payments.

5. Return

The last and final stage of supply chain management is referred to as the return. In this stage,
defective or damaged goods are returned to the supplier by the customer. Here, the companies
need to deal with customer queries and respond to their complaints etc.

Q.21 Describe: i. Location aware application ii. Intelligent fabrics.

Answer:

i. Location aware application:

A location-aware application presents online content to users based on their
geographical locations. Different technologies implement cellular phone infrastructure, wireless
access points or GPS to determine the physical location of electronic gadgets like cellphones or
laptops. The users can then opt to share this information with the location-aware applications.
The location-aware applications can then present the users with resources, for instance, an exact
location marker on a map, restaurant reviews in that specific area, a snooze alarm set for a
particular stop while using a commuter train service, updates or cautions regarding nearby
bottlenecks in traffic, etc.

ii. Intelligent fabrics:


Smart fabrics can sense different environmental conditions and intelligent textiles or e-textiles
can not only sense environmental changes, but can automatically respond to their surroundings
or stimuli, such as thermal, chemical, or mechanical changes, as well.
The foundation of smart textiles lies in their cutting-edge technology, which essentially is
embedding a variety of tiny semiconductors and sensors into fabrics that can see, hear, and
communicate. These devices take this information to deliver greater comfort, for example by
warming or cooling the wearer or capturing useful biometrics for monitoring the wearer’s health.
Smart textiles are paving the way to a new frontier of the Internet of Things (IoT), a technology
of which IBM is at the forefront. It’s easy for me to understand how a Fitbit can communicate,
but it’s a little less clear how my t-shirt can. So I dug a bit deeper into the innovation of this
technology and how it can transform the health care industry.

Q.22 Explain the future of cloud TV.


Answer:
CloudTV is a software platform that virtualizes customer-premises equipment (CPE) or set-top
box (STB) functionality, enabling pay-TV operators and other video service providers to bring
advanced user interfaces and online video experiences such as YouTube and Hulu to existing and
next-generation cable television and IPTV set-top boxes and connected consumer electronics
devices.

A product of ActiveVideo, a Silicon Valley software company, CloudTV is available on more
than 15 million devices. Announced customers include Charter Communications and Cablevision
Systems (now part of Altice) in the United States and Liberty Global in the Americas and
Europe. CloudTV-powered services also are available on Philips NetTVs and on Roku players.

By virtualizing STB functionality, CloudTV enables Web-like guides and full online video
experiences on existing and next-generation devices, including QAM STBs and “newer IP-
capable devices, such as Charter’s new Worldbox,” Internet-connected TVs and specialized
streaming boxes.

Multichannel News notes that “instead of requiring operators to write a different version of the
UI for each device, operating system and rendering engine, ActiveVideo’s approach looks to
avoid that operational nightmare by requiring that it only be written once, in HTML5, and
managed from the cloud.” ScreenPlays adds that the platform enables delivery of “protected
OTT streams as an integral part of channel offerings” without replacing existing customer
devices. The analyst firm nScreenMedia cites such advantages as: Compatibility with the widest
range of devices; the ability to update an app once and see it reflected on every device;
scalability for large and small service providers; and the ability to use the most advanced UI
techniques available to ensure high “coolness” factors. ACG Research notes that for cable
operators, CloudTV can reduce total cost of ownership by up to 83% when compared to a set-top
box replacement program.

Q.23 Explain Client/Server model for Docker.

Answer:
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which
does the heavy lifting of building, running, and distributing your Docker containers. The Docker
client and daemon can run on the same system, or you can connect a Docker client to a remote
Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX
sockets or a network interface.
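
As a minimal sketch of this interaction (assuming the Docker SDK for Python is installed, for
example via pip install docker, and that a local daemon is listening on the default UNIX socket),
the client below connects to the daemon and queries it over the REST API:

import docker

# Connect using environment defaults (DOCKER_HOST, or unix:///var/run/docker.sock)
client = docker.from_env()

# An equivalent, explicit connection over the UNIX socket would be:
# client = docker.DockerClient(base_url="unix://var/run/docker.sock")

print(client.ping())      # True if the daemon answered the API request
print(client.version())   # daemon and API version details reported by dockerd
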
The Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such
as images, containers, networks, and volumes. A daemon can also communicate with other
daemons to manage Docker services.

The Docker client

The Docker client (docker) is the primary way that many Docker users interact with Docker.
When you use commands such as docker run, the client sends these commands to dockerd,
which carries them out. The docker command uses the Docker API. The Docker client can
communicate with more than one daemon.

Docker registries

A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use,
and Docker is configured to look for images on Docker Hub by default. You can even run your
own private registry. If you use Docker Datacenter (DDC), it includes Docker Trusted Registry
(DTR).

When you use the docker pull or docker run commands, the required images are pulled from
your configured registry. When you use the docker push command, your image is pushed to your
configured registry.
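
As a hedged illustration of the pull/push flow, the Python SDK can be used as shown below; the
private registry name registry.example.com is hypothetical and used only for the example.

import docker

client = docker.from_env()

# Pull an image from the configured registry (Docker Hub by default)
image = client.images.pull("ubuntu", tag="22.04")

# Re-tag the image for a private registry and push it there
image.tag("registry.example.com/myteam/ubuntu", tag="22.04")
client.images.push("registry.example.com/myteam/ubuntu", tag="22.04")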

Docker objects

When you use Docker, you are creating and using images, containers, networks, volumes,
plugins, and other objects. This section is a brief overview of some of those objects.

IMAGES

An image is a read-only template with instructions for creating a Docker container. Often, an
image is based on another image, with some additional customization. For example, you may
build an image which is based on the ubuntu image, but installs the Apache web server and your
application, as well as the configuration details needed to make your application run.
You might create your own images or you might only use those created by others and published
in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining
the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in
the image. When you change the Dockerfile and rebuild the image, only those layers which have
changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when
compared to other virtualization technologies.
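
The sketch below builds a small image from an in-memory Dockerfile using the Python SDK; the
tag myapp:latest is made up for the example, and a real project would normally pass a build-
context directory instead of a bare Dockerfile. Each Dockerfile instruction becomes one layer of
the resulting image.

import io
import docker

client = docker.from_env()

# A minimal Dockerfile: a base image, an installed package, and a start command
dockerfile = b"""
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y apache2
CMD ["apache2ctl", "-D", "FOREGROUND"]
"""

image, build_logs = client.images.build(
    fileobj=io.BytesIO(dockerfile),   # Dockerfile supplied in-memory, no extra context
    tag="myapp:latest",
    rm=True,                          # remove intermediate containers after the build
)
for chunk in build_logs:              # stream the layer-by-layer build output
    print(chunk.get("stream", ""), end="")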

CONTAINERS

A container is a runnable instance of an image. You can create, start, stop, move, or delete a
container using the Docker API or CLI. You can connect a container to one or more networks,
attach storage to it, or even create a new image based on its current state.

By default, a container is relatively well isolated from other containers and its host machine. You
can control how isolated a container’s network, storage, or other underlying subsystems are from
other containers or from the host machine.

A container is defined by its image as well as any configuration options you provide to it when
you create or start it. When a container is removed, any changes to its state that are not stored in
persistent storage disappear.
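
A brief container-lifecycle sketch with the Python SDK (the container name demo is arbitrary):

import docker

client = docker.from_env()

# A container is a runnable instance of an image
container = client.containers.run(
    "ubuntu:22.04",
    command="sleep 300",
    name="demo",
    detach=True,          # return a Container object instead of blocking
)

print(container.short_id, container.status)

# Stop and remove the container; changes not kept in persistent storage are lost
container.stop()
container.remove()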

Q.24 Discuss in brief Docker Workflow.

Answer:

Docker follows the client-server model described in the previous answer: the docker client sends
commands to the Docker daemon (dockerd), which does the heavy lifting of building, running,
and distributing containers, while images are stored in and retrieved from a registry such as
Docker Hub or a private registry. A typical Docker workflow looks like this (a minimal
end-to-end sketch follows the list):

1. Write a Dockerfile. The Dockerfile uses a simple syntax to define, step by step, how the image
should be assembled, for example starting from a base image such as ubuntu, installing the
application and its dependencies, and adding the configuration needed to run it.

2. Build the image. The client sends the build request to the daemon, which executes each
Dockerfile instruction and creates one read-only layer per instruction. When the Dockerfile
changes and the image is rebuilt, only the layers that changed are rebuilt, which keeps images
lightweight and builds fast.

3. Push the image to a registry. Using docker push, the image is uploaded to the configured
registry (Docker Hub by default, a private registry, or Docker Trusted Registry in Docker
Datacenter) so that other hosts and team members can use it.

4. Pull and run the image as a container. On any Docker host, docker pull retrieves the image
from the registry and docker run starts a container, which is a runnable instance of the image.
The container can be started, stopped, moved, or deleted through the Docker API or CLI,
connected to one or more networks, and given attached storage.

5. Manage the container's lifecycle and data. By default a container is well isolated from other
containers and from the host. When a container is removed, any changes to its state that are not
written to persistent storage (volumes) are lost, so data that must survive is kept outside the
container.
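
A minimal end-to-end sketch of this workflow in Python, using the Docker SDK for Python; the
directory ./myapp and the repository registry.example.com/team/myapp are hypothetical names
chosen for the example.

import docker

client = docker.from_env()   # docker client talking to dockerd over the local socket

# 1. Build an image from a directory that contains a Dockerfile (hypothetical path)
image, _ = client.images.build(path="./myapp", tag="myapp:1.0")

# 2. Tag the image and push it to a registry (hypothetical repository name)
image.tag("registry.example.com/team/myapp", tag="1.0")
client.images.push("registry.example.com/team/myapp", tag="1.0")

# 3. On any Docker host: pull the image and run it as a container
client.images.pull("registry.example.com/team/myapp", tag="1.0")
container = client.containers.run("registry.example.com/team/myapp:1.0", detach=True)
print(container.short_id, container.status)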
