
• Software and hardware are decoupled in the development state, but they are not decoupled in the deployment and running states.

• As the ecological chain flourishes, the management and integration of heterogeneous hardware from multiple vendors become increasingly complex.

• The focus of enterprise informatization is shifting to software. However, computing, storage, and network hardware lack collaboration and elastic supply capabilities, which increasingly restricts software value improvement.

• Distributed maintenance and long service fault recovery periods

• Siloed development, difficult capacity expansion, and lack of flexibility

• Disk-centric computing causes huge computing bottlenecks.

• I/O bottlenecks cause latency and low CPU utilization.

• Traditional SAN engines are constrained by bottlenecks in transmission bandwidth, CPU processing capability, caches, and network latency.
• In the next 10 years of IT infrastructure evolution in the cloud computing era, we will
see an evolution from separation to convergence.

▫ The cloud operating system horizontally integrates heterogeneous computing, storage, and network resources from multiple vendors in the data center and provides open, standard IT service interfaces for external systems, implementing converged reuse of IT infrastructure.

▫ The converged infrastructure machine vertically integrates computing, storage, and network resources of a single vendor, and provides a modular, one-stop, high-performance, and cost-effective delivery mode for new infrastructure.

• No matter how the IT architecture evolves, the customer benefits and driving forces will be as follows:

▫ Lower TCO

▫ More efficient service deployment and lifecycle management

▫ Better performance and user experience


• In 2013, VMware launched its server virtualization solution vSphere.

• Amazon Web Services (AWS) is a professional cloud computing service provided by Amazon. It was launched in 2006 to provide IT infrastructure services for enterprises in the form of web services. These services have come to be called cloud computing services.

• OpenStack began as a joint project of Rackspace Hosting and NASA and was released
under the terms of the Apache license. OpenStack is a free and open-source project.

• As the name implies, a hybrid cloud is a combination of public cloud and private cloud in the target architecture. For security and control purposes, not all enterprise information is placed on the public cloud, so most enterprise users of cloud computing will adopt the hybrid model and use public and private clouds at the same time.
• Enterprises demand new technologies (such as big data and AI) to simplify innovation and enhance their competitiveness. In this process, enterprises pay more attention to data security and a smooth migration to the cloud.

• Cloud 2.0 differs from Cloud 1.0 in the capabilities, requirements, and supply modes of
the cloud platforms, and raises new requirements for the platforms.
• Simply put, the cloud is a metaphor for the Internet. It is an abstraction of the Internet
and the infrastructure underpinning the Internet. Computing refers to services provided
by a sufficiently powerful computer capable of providing a range of functionalities,
resources, and storage. Cloud computing can be understood as the delivery of on-
demand, measured computing services over the Internet.
• On-demand self-service: Users can provision processing, server time, network, and storage capabilities according to their specific requirements, without needing to communicate with each service provider.

• Ubiquitous network access: Customers can obtain various capabilities over the Internet through standard mechanisms from different clients, such as mobile phones, laptops, and PDAs.

• Resource pooling: Computing resources of the service provider are centralized so that
customers can rent services. In addition, different physical and virtual resources can be
dynamically allocated or reallocated based on user requirements. Users generally
cannot control or know the exact location of the resources. The resources include the
storage medium, processor, memory, network bandwidth, and virtual machines (VMs).
• Rapid elastic scaling: Resources can be provided to users rapidly and elastically. Users can expand or reduce resources quickly and can rent virtually unlimited resources at any time.

• Pay per use: The service is billed on a pay-per-use or advertisement-supported basis, making efficient use of resources. For example, a user can be charged per month based on the actual storage, bandwidth, and computing resources used, as the simple example below illustrates. A cloud used within one organization can also be billed across departments.
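
• The following minimal Python sketch illustrates the pay-per-use idea with made-up unit prices; the rates and resource names are assumptions for illustration, not HUAWEI CLOUD pricing.

    # Illustrative pay-per-use billing: charge only for the storage, bandwidth,
    # and compute actually consumed in the month. Rates are assumed values.
    RATES = {
        "storage_gb": 0.02,    # per GB stored for the month
        "bandwidth_gb": 0.05,  # per GB transferred
        "vcpu_hours": 0.01,    # per vCPU-hour consumed
    }

    def monthly_bill(usage: dict) -> float:
        """usage maps each metered resource to the amount consumed this month."""
        return sum(RATES[item] * amount for item, amount in usage.items())

    # Example: 500 GB stored, 200 GB transferred, 720 vCPU-hours used.
    print(monthly_bill({"storage_gb": 500, "bandwidth_gb": 200, "vcpu_hours": 720}))
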
• Private cloud
▫ A private cloud is built for a specific user or organization and can only optimize
resources within a small scope. Therefore, the private cloud does not comply with
the essence of the cloud, that is, social division of labor. A hosted private cloud
implements social division of labor to some extent, but cannot improve physical
resource utilization efficiency on a large scale.
• Public cloud
▫ The public cloud is built for the public. All registered users are called tenants. If a
tenant leaves, their resources can be immediately released to the next tenant.
The public cloud is the most thorough social division of labor and can optimize
resources on a large scale.
▫ The differences between the public cloud and private cloud:
• Hybrid cloud
▫ A hybrid cloud is a combination of public cloud, private cloud, and community cloud. It can be realized through a combination of computing, storage, or both. Currently, public cloud technology is not fully mature, and it is difficult to maintain, deploy, and dynamically scale out services on a private cloud. Therefore, the hybrid cloud is an ideal platform for a smooth transition and will see a sharp increase in market share in a short period of time.
• Community cloud
▫ The community cloud is a form between public and private clouds. Multiple small enterprises in a sensitive industry may not trust public clouds to handle their industry's policy and management restrictions and risks, so these enterprises jointly build a cloud platform.
• The IaaS layer provides basic computing, storage, and network services. Typical IaaS
cloud services include Elastic Cloud Server (ECS).

• The PaaS layer provides application running and development environments as well as application development components. Typical PaaS cloud services include database services.

• The SaaS layer provides software-related functions through web pages. Typical SaaS
cloud services include Office 365.
• In the next two to three decades, human society will evolve into an intelligent one. This
society will have three features: all things are sensing, all things are connected, and all
things are intelligent. In an intelligent society, everything can sense the physical world
and convert this sensory perception into digital signals. Multiple sensor channels
(temperature, space, touch, hearing, and vision) will help us detect things and interact
with them in an immersive way. The network connects everything online, and ensures
wide and deep connection in different fields such as cities, mountains, and space. The
application of big data and AI will make everything intelligent. Digital twins will
gradually gain popularity among individuals, families, industries, and cities for meeting
the requirements of the physical world. In addition, digital survivors will be able to live
a second life, making the spiritual world richer. These three characteristics can be
realized only with advanced ICT technologies.
• Huawei provides innovative end-to-end ICT solutions and products for carriers,
enterprises, and consumers by means of information distribution, interaction,
transmission, processing, and storage, as well as information learning and inferencing.
After releasing a full-stack all-scenario AI strategy in 2018, we released a computing
strategy in 2019. Under the auspices of intelligence, we further improved product
solutions and service capabilities to build a fully connected, intelligent world.

• In the era of converged ICT, this mode not only ensures product innovation and
collaboration, but also provides differentiated services based on the characteristics of
different customer groups.
• HUAWEI CLOUD has maintained rapid growth. By the second quarter of 2019,
HUAWEI CLOUD had launched over 200 cloud services and over 190 solutions.
According to the China Public Cloud Service Market (First Half of 2019) Tracker
released by IDC in November 2019, the overall market share of HUAWEI CLOUD
IaaS+PaaS increased by more than 350% in the second quarter of 2019. It is the fastest
growing vendor among top vendors. The market share of HUAWEI CLOUD IaaS rose to
No. 4 at 6.7%. It also rose to No. 2 in brand recognition in China. In 2019, a total of
3,500 applications were launched in HUAWEI CLOUD Marketplace, and more than
10,000 partners were developed. More than 3 million companies and developers
develop applications on HUAWEI CLOUD.

• Because of its leading technologies, full-stack offerings, localized services, and comprehensive ecosystem, HUAWEI CLOUD was named by Forrester in 2019 as a leader among China's full-stack public cloud platforms.

• In addition, multiple domains of HUAWEI CLOUD products, such as containers, DevCloud, and computer vision, are listed in the Leaders quadrant.
• Outside China, HUAWEI CLOUD launched cloud data centers in Singapore, Chile, Brazil,
Mexico, and Peru, working with partners to provide services in 45 availability zones
across 23 regions worldwide. HUAWEI CLOUD provides multinational enterprises with
global public cloud services with consistent experience and service performance. It can
provide full support for Chinese enterprises going international and non-Chinese
enterprises entering the Chinese market.

• HUAWEI CLOUD has deployed seven availability zones (AZs) in regions such as Hong
Kong, Thailand, and Singapore to provide secure and reliable cloud services for users in
the Asia Pacific region. HUAWEI CLOUD has also set up local professional service
teams in more than 10 Asia-Pacific countries and regions.

• In Latin America, HUAWEI CLOUD has been deployed in Chile, Brazil, Mexico, and
Peru, and is the cloud service provider with the most local data centers in Latin
America.

• HUAWEI CLOUD was officially launched in South Africa in 2019, making Huawei the
first cloud service provider to operate local data centers in South Africa. Currently,
HUAWEI CLOUD provides cloud services for 12 African countries, namely, Angola,
Botswana, Ghana, Kenya, Mauritius, Mozambique, Namibia, Nigeria, South Africa,
Tanzania, Zambia, and Zimbabwe. HUAWEI CLOUD has developed more than 200
partners in Africa, covering more than 30 countries in Africa.
• In the Cloud 1.0 era, cloud users were mainly Internet companies, which accounted for 60% of the entire market. Cost-effectiveness was the key competitive factor.

• In the Cloud 2.0 era, governments and traditional enterprises have accelerated their
cloud journey. By 2022, they will account for 70% of the market. To respond to this
shift, HUAWEI CLOUD will build core competitive strengths from four aspects:

1. Continuously innovate based on full-stack capabilities to build cloud services with optimal performance and experience.

2. Leverage the full-stack AI advantages of HUAWEI CLOUD to build the best enterprise AI platform for industry scenarios and enter the core production systems of enterprises.

3. Invest US$3 billion every year to build digital transformation capabilities, such as
WeLink, DevCloud, and ROMA.

4. Build the most suitable hybrid cloud for governments and enterprises based on
the online and offline collaboration of HUAWEI CLOUD Stack.
• Answer: ABCD

• In China, HUAWEI CLOUD uses a "2+7+N" interconnected node architecture to provide cloud services nationwide.

• The "2" refers to the data centers in Ulanqab and Guiyang, the level-1 centers built by
HUAWEI CLOUD. Each level-1 center has three AZs, forming a three-AZ HA
architecture. There is a distance of 30 to 50 kilometers between every two AZs.

• The "7" refers to the 7 HUAWEI CLOUD regions, including North China, East China,
South China, and Hong Kong.

• The "N" refers to the number of satellite nodes. Each node works as a government
cloud serving the local government and also as a HUAWEI CLOUD node. Currently,
there are five satellite nodes deployed in regions including Ulanqab, Xiangyang, Yuxi,
and Karamay.
• Chips are the core of the IT industry and the most difficult part of R&D. They require long-term investment.

• Huawei has over 20 years of experience in chip R&D and is continuously innovating
chips for the Cloud 2.0 era. We have launched a full series of chips for next-generation
cloud data centers.

• 1. Computing chips: The full series of AI processors are no doubt the highlights.

• 2. Network chips: Huawei's next-generation network chip Hi822 is based on a programmable NP-like architecture and supports offloading over multiple protocols. Its performance is 2.5 times better than the current best chip in the industry.

• 3. Storage chips: The fourth generation of storage chips provides 76% higher
performance, 64% higher bandwidth, and 15% lower latency by using the intelligent
multi-stream technology.

• 4. Security chips: Huawei builds security and reliability into the chips, so there is
comprehensive protection for firmware, identities, software systems, and data.
• In Cloud 1.0, the main hardware of cloud data centers was general-purpose servers, which had to meet clustering requirements. In Cloud 2.0, key enterprise applications (such as CRM and ERP) need more specialized hardware to achieve high reliability and low latency. To meet this requirement, Huawei launched KunLun, an extremely reliable server that allows for hot-swappable CPUs and memory, and supports hybrid deployment of physical and logical partitions. KunLun has been used in both cloud and on-premises environments.
• Based on these technologies, Huawei has launched a smart cloud infrastructure 2.0,
providing innovative cloud data centers for Cloud 2.0. Smart cloud infrastructure 2.0
includes data center infrastructure, data center management, chips, hardware, basic
cloud services, and application platforms.

• Huawei believes that a hybrid cloud is the best solution for enterprise digital transformation. Today, I'd like to show you some of Huawei's technical innovations related to cloud data centers. We provide customers with both public cloud services and the HUAWEI CLOUD Stack for private cloud, which together form one of the most competitive hybrid cloud solutions out there.
• The next-generation C6 series is coming soon. The C6 series will further enhance the
performance of HUAWEI CLOUD computing services. Lower storage latency, more
memory, and better compute performance will provide stronger support for enterprise
AI.
• Dedicated Host (DeH) provides dedicated physical hosts. You can create
ECSs on a DeH to enhance isolation, security, and performance. You can bring
your own license (BYOL) to the DeH to keep costs down, and you can
independently manage your ECSs.

• Cloud Container Engine (CCE) provides highly scalable, high-performance, enterprise-class Kubernetes clusters and supports Docker containers. With CCE, you can easily deploy, manage, and scale containerized applications on HUAWEI CLOUD.

• Cloud Container Instance (CCI) is a serverless container engine that allows you to run
containers without creating and managing servers or clusters.

• With the serverless architecture, you can apply for resources to quickly build
applications. You do not need to create, manage, or maintain servers. Traditionally, to
run containerized workloads using Kubernetes, you need to create a Kubernetes cluster
first.

• That is not the case with CCI. CCI allows you to directly create and use containerized
workloads using the console, kubectl, or Kubernetes APIs, and pay only for the
resources consumed by these workloads.
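
• As a rough illustration of the API-based approach, the sketch below uses the open-source Kubernetes Python client to create a containerized workload. It assumes a kubeconfig that already points at a CCI-compatible Kubernetes endpoint, and the names, image, and resource values are placeholders.

    # Minimal sketch: create a Deployment through the Kubernetes API instead of
    # managing servers or clusters yourself. Assumes kubeconfig is already set up.
    from kubernetes import client, config

    config.load_kube_config()              # load cluster endpoint and credentials
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:alpine",      # placeholder container image
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "500m", "memory": "1Gi"}),
                    )
                ]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)

  With CCI, you would pay only for the resources these containers consume, as described above.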

• A cloud phone is a virtual mobile phone that runs on the cloud. Cloud Phone provides cloud phones of different specifications for different scenarios. These cloud phones run 24/7 and are fully compatible with native Android apps. They can run large-scale mobile games and are good assistants for mobile office.

• Huawei's cloud storage services are not just pure storage media; they also provide top performance and reliability. We integrate intelligent capabilities, such as data mining and data migration, to cope with the new challenges of massive data migration and governance in the cloud era.
• Dedicated Distributed Storage Service (DSS) provides dedicated physical storage
resources. With data redundancy and cache acceleration technologies, DSS delivers
highly reliable, durable, and low-latency storage resources. By interconnecting with
compute services, such as Elastic Cloud Server (ECS), Bare Metal Server (BMS), and
Dedicated Computing Cluster (DCC), DSS offers first-class performance in scenarios
such as HPC, OLAP, or mixed workload environments.

• Dedicated Enterprise Storage Service (DESS) provides out-of-the-box dedicated storage services with the same performance and reliability as in private cloud environments. It is ideal for mission-critical applications such as Oracle RAC and SAP HANA TDI.
• Block storage chunks data into arbitrarily organized, evenly sized volumes and stores them as separate blocks of data. Each block is given a unique identifier, which allows the storage system to place the smaller pieces of data wherever is most convenient.

• In file storage, data is stored as a single piece of information inside a folder, just like
you'd organize pieces of paper inside a manila folder. When you need to access that
piece of data, your computer needs to know the file path to find it. Data stored in files
is organized and retrieved using a limited amount of metadata that tells the computer
exactly where the file itself is kept. It's like a library card catalog for data.

• Object storage, also known as object-based storage, uses a flat structure where files
are broken into pieces and spread out across different hardware. In object storage, the
data is broken into discrete units called objects and is kept in a single repository,
instead of being kept as files in folders or as blocks on servers. A universally unique
identifier (UUID) is assigned to every object in an OBS system and allows the object
storage system to differentiate objects from one another and find the data without
needing to know the exact physical drive, array, or site where the data is.
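
• A conceptual, in-memory Python sketch of the difference: file storage locates data by its path, while object storage locates it by a UUID in a flat namespace. The function names here are illustrative only.

    import uuid

    object_store = {}   # flat namespace: UUID -> (metadata, data)

    def put_object(data: bytes, metadata: dict) -> str:
        object_id = str(uuid.uuid4())            # every object gets a UUID
        object_store[object_id] = (metadata, data)
        return object_id

    def get_object(object_id: str) -> bytes:
        # No directory tree to walk: the identifier alone locates the object.
        return object_store[object_id][1]

    # File storage, by contrast, needs the full path to reach the same bytes:
    # with open("/backups/2019/q4/report.pdf", "rb") as f: data = f.read()
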
• HUAWEI CLOUD provides a wide range of network cloud services to help enterprises
build cloud-based networks, connections, and hybrid cloud networks.
• The Identity and Access Management (IAM) service provides permissions management
for secure access to your HUAWEI CLOUD services and resources.

• Cloud Eye is a multi-dimensional resource monitoring service. You can use Cloud Eye to
monitor resource usage and cloud service statuses. You can also configure alarm rules,
so the system can notify you if and when alarms are triggered, so that you can
respond quickly and ensure services continue uninterrupted.

• Cloud Trace Service (CTS) is a log audit service that allows you to collect, store, and
query resource operation records. You can use these records for security analysis,
compliance check, resource tracing, and troubleshooting.

• Log Tank Service (LTS) can collect, analyze, and store logs. You can use LTS to
efficiently perform device O&M management, service trend analysis, and security
monitoring and audit.
• The "No" in NoSQL refers to non-relational databases. When it comes to processing the ultra-large-scale, highly concurrent SNS website requests common to Web 2.0 sites, the performance of traditional relational databases lags far behind what NoSQL can offer. NoSQL databases are used to solve the challenges involved in handling multiple data types in large-scale data sets, especially from big data applications. NoSQL databases include key-value pair, wide column, document, and graph databases.
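
• The snippet below is only a conceptual Python illustration of two of these data models; real NoSQL engines add persistence, sharding, and indexing on top.

    # Key-value model: one opaque value per key (e.g., a session cache).
    session_store = {"session:8f2c": {"user_id": 42, "expires": 1735689600}}

    # Document model: semi-structured records whose fields can vary per record.
    user_document = {
        "_id": "u42",
        "name": "Alice",
        "devices": ["phone", "laptop"],       # nested arrays
        "preferences": {"language": "en"},    # nested objects
    }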

• Database tools

▫ Distributed Database Middleware (DDM) works with RDS to remove a single node's dependency on hardware, facilitate capacity expansion to address data growth challenges, and ensure fast response to query requests. DDM eliminates bottlenecks in capacity and performance and ensures concurrent access to a massive amount of data.

▫ Data Replication Service (DRS) is a stable, efficient, and easy-to-use cloud service
for database online migration and synchronization. It simplifies data migration
between databases and reduces data migration costs.

▫ Data Admin Service (DAS) enables you to manage DB instances from a web-
based console, simplifying database management and improving efficiency and
security.
• High security and reliability:

▫ The Huawei big data platform is the first in the industry to provide HA for all
components and DR for two sites with a distance of greater than 1000 km.

• High performance:

▫ VMs can access local hard disks directly, reducing virtualization overheads and
improving processing performance.

▫ The Huawei-developed CarbonData file storage format is the only top-level open-source project in China that has been accepted by the Apache community. CarbonData improves multi-dimensional cross-table query performance threefold.

• Easy to use

▫ To match the multi-layered tree structure of enterprise organizations, Huawei big data uses a tree-shaped multi-tenant structure to help enterprises improve O&M efficiency, facilitate permissions management, and simplify resource control.

• High-quality services:

▫ Huawei leverages local delivery teams worldwide as well as more than 1,000 R&D experts in eight R&D centers across the globe to provide 24/7 support.
• Answer: ABC
• Partner Types

• HCPN partners can be divided into consulting and technology partners. Partners
choose the partner type appropriate for them when applying to join HCPN.

• Consulting Partners

• Consulting partners are professional service firms that help customers of all sizes
design, architect, migrate, or build new applications or perform daily customer
service operations on HUAWEI CLOUD. Consulting partners include SIs, strategic
consultancies, agencies, MSPs, VARs, and telecom operators.

• Technology Partners

• HCPN technology partners are commercial software and/or Internet service companies that provide software solutions that are either hosted on or integrated with HUAWEI CLOUD. Technology partners include ISVs as well as SaaS, PaaS, developer tool, management, and security vendors.
• The Solution Partner Program is built based on Huawei's technical environments
and solutions. It's not just a program, but a methodology for cooperating with
solution partners. The program mobilizes various resources and processes of
Huawei. It also helps partners efficiently run complex services and deliver high-
quality products and services to end customers.

• Through Huawei's Manage Alliance Relationship (MAR) process, products jointly incubated by OpenLab and experts from different technical fields can be monetized in Huawei's exponential, multi-dimensional ecosystem.
• To join HUAWEI CLOUD, a partner needs to perform three steps:

• Step 1: Join HCPN and apply to become a consulting or technology partner. Only
partners who meet our requirements can become HCPN partners. Partners obtain
corresponding benefits upon successful registration.

• Step 2: Join one or more partner programs. For example, consulting partners can
join the Solution Partner Program to resell HUAWEI CLOUD services for
additional support and incentives. SaaS vendors can join the Software Partner
Program to obtain technical and marketing support from HUAWEI CLOUD.

• Step 3: Do what a consulting or technology partner would do. For example, consulting partners engage customers, help them purchase HUAWEI CLOUD services, and provide them with service support. At the same time, HUAWEI CLOUD offers benefits to partners based on their partner type and the programs they have joined.
• An Elastic Cloud Server (ECS) is a computing server consisting of vCPUs, memory, an image, and EVS disks that allows on-demand allocation and elastic scaling. ECSs integrate Virtual Private Cloud (VPC), virtual firewall, and multi-data-copy capabilities to construct an efficient, reliable, and secure computing environment. This ensures stable and uninterrupted operation of services. After creating an ECS, you can use it like you would use your local computer or physical server.
• Rich specifications: A variety of ECS types are available for different scenario
requirements. There are multiple customizable specifications for each type.

• Comprehensive images: Public, private, and shared images can be flexibly selected to request ECSs.

• Differentiated EVS disks: General-purpose I/O, high I/O, and ultra-high I/O EVS disks
are available for all of your service requirements.

• Flexible billing: Yearly/Monthly and pay-per-use billing modes allow you to purchase
and release resources at any time based on service fluctuation.
ECS works with other products and services to provide computing, storage, network, and
image installation functions.
• ECSs are deployed in multiple availability zones (AZs) connected with each
other through an intranet. If an AZ becomes faulty, other AZs in the same
region will not be affected.

• With the Virtual Private Cloud (VPC) service, you can build a dedicated
network, set the subnet and security group, and allow the VPC to
communicate with an external network through an EIP with bandwidth
assigned.

• With the Image Management Service (IMS), you can install images on ECSs,
or create ECSs using private images for rapid service deployment.

• EVS provides storage and Volume Backup Service (VBS) provides data
backup and recovery functions.

• Cloud Eye is a key service to help ensure ECS performance, reliability, and
availability. It enables you to easily visualize ECS metrics through graphs.

• With Cloud Backup and Recovery (CBR), you can back up data for EVS disks
and ECSs, and use snapshot backups to restore the EVS disks and ECSs.

• Kunpeng general computing-plus KC1 ECSs use Kunpeng 920 processors and 25GE
high-speed intelligent NICs to offer powerful computing and high-performance
networks, meeting the requirements of governments and Internet enterprises for cost-
effective, secure, reliable cloud services.
• Kunpeng memory-optimized KM1 ECSs use Kunpeng 920 processors and 25GE high-
speed intelligent NICs to provide up to 480 GB DDR4-based memory with high
network performance for large-memory datasets.
• Kunpeng ultra-high I/O ECSs use high-performance local NVMe SSDs to provide high
storage input/output operations per second (IOPS) and low read/write latency. You can
create such ECSs with high-performance local NVMe SSDs attached on the
management console. The capacity of a Kunpeng ultra-high I/O disk is 3.2 TB.
• General computing ECSs provide a balance of computing, memory, and network
resources and a baseline level of vCPU performance with the ability to burst above the
baseline. These ECSs are suitable for many applications, such as web servers, enterprise
R&D, and small-scale databases.
• General computing-plus ECSs provide dedicated vCPUs when compared with general
computing ECSs, and deliver high performance. In addition, the ECSs use latest-
generation network acceleration engines and Data Plane Development Kit (DPDK) to
provide higher network performance, meeting requirements in different scenarios.
• Disk Type

Disks are classified as EVS disks and DSS disks based on whether the storage
resources used by the disks are dedicated. DSS disks allow you to use dedicated
storage resources.

▫ If you have applied for a storage pool on the DSS console, click DSS and create
disks in the obtained storage pool.

▫ If not, click EVS and create EVS disks that use public storage resources.
• For Linux ECSs, the initial password is the password of user root. For Windows ECSs, it is that of user Administrator.
• Select a login method and log in to the ECS.

▫ Through the management console (VNC): The login username is Administrator.

▫ Using the RDP file provided on the management console: The login username is
Administrator, and the ECS must have an EIP bound.

▫ Using MSTSC: The login username is Administrator, and the ECS must have an
EIP bound.

▫ From a mobile terminal: The login username is Administrator, and the ECS must
have an EIP bound.

▫ From a Mac: The login username is Administrator, and the ECS must have an EIP
bound.
• To log in to a password-authenticated ECS for the first time, use one of the following
methods:

▫ Through the management console (VNC) with login username root

▫ Using an SSH password: The login username is root, and the ECS must have an
EIP bound.

▫ From a mobile terminal: The login username is root, and the ECS must have an
EIP bound.

• To log in to a key-pair-authenticated ECS for the first time, use a tool, such as PuTTY
or XShell, and the SSH key as user root. The ECS must have an EIP bound.

Note: If you want to log in to an ECS using VNC provided on the management console,
log in to the ECS using an SSH key, configure the login password, and use the password
for login.
• Procedure

1. Log in to the management console.

2. Click the map icon in the upper left corner and select the desired region and
project.

3. Under Computing, select Elastic Cloud Server.

4. Locate the row containing the target ECS. Click More in the Operation column
and select Manage Image/Disk > Reinstall OS. Before reinstalling the OS, stop the
ECS or select Automatically stop the ECSs and then reinstall OSs.

5. Configure the login mode. If the target ECS used key pair authentication, you can
replace the original key pair.

6. Click OK.
• Plug-in name: CloudResetPwdAgent and CloudResetPwdUpdateAgent

• After installing the one-click password reset plug-ins, do not delete the
CloudResetPwdAgent or CloudResetPwdUpdateAgent process. Otherwise, one-click
password reset will not be available.
• CBR backs up data for EVS disks and ECSs, and uses snapshot backups to restore the
EVS disks and ECSs. In addition, CBR supports synchronizing backup data in the offline
backup software OceanStor BCManager and VMware VMs to the cloud. In this way,
you can manage backup data on the cloud and restore data to other servers on the
cloud using the backup data, maximizing the security and accuracy of your data to
ensure service security.

• CBR facilitates data integrity and service continuity. For example, if an ECS or disk is
faulty or a misoperation causes data loss, you can use backups to quickly restore data.
Answers:

▫ B
• Consider a web application for buying train tickets running on the public cloud. This
application is rarely used during Q2 and Q3 because there aren't many travelers, but it
is frequently used during Q1 and Q4 because of increased travel during the holiday
season. In most cases, servers are added to increase the processing capability, or
applications are added to process the requests together, thereby meeting service
requirements. However, these two solutions may waste resources or struggle to meet
demand spikes. After you enable AS for an application, AS automatically adjusts the
number of servers based on requirements to reduce cost and meet demand spikes.
• Enhanced Cost Management

AS enables you to use ECS instances and bandwidth on demand by automatically adjusting resources in the system, eliminating waste of resources and reducing costs.

• Improved Availability

AS ensures proper resources for applications. Working with ELB, AS automatically associates a load balancing listener with any ECS instances newly added to the AS group and balances access traffic across all the instances of the AS group through the listener.

• High Fault Tolerance

AS monitors instance status in an AS group. After detecting an unhealthy instance, AS replaces it with a new one.
• For example, the service load changes of a live video website in different time periods
are difficult to predict. Therefore, the bandwidth needs to be dynamically adjusted
between 10 Mbit/s and 30 Mbit/s based on metrics such as outbound traffic and
inbound traffic. AS can automatically adjust the bandwidth to meet requirements. You
need to select the target EIP and create two alarm policies. One policy is to add 2
Mbit/s when the outbound traffic is greater than xxx bytes, with the limit set to 30
Mbit/s. The other policy is to reduce 2 Mbit/s when the outbound traffic is less
than xxx bytes, with the limit set to 10 Mbit/s.
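
• The two alarm policies in this example can be pictured with the following Python sketch; the traffic thresholds stand in for the unspecified "xxx bytes" values and are assumptions, not recommended settings.

    # Step the EIP bandwidth up or down by 2 Mbit/s, clamped to the 10-30 Mbit/s range.
    UPPER_LIMIT_MBITS = 30
    LOWER_LIMIT_MBITS = 10
    STEP_MBITS = 2
    HIGH_TRAFFIC_BYTES = 100 * 1024 * 1024   # placeholder for "xxx bytes"
    LOW_TRAFFIC_BYTES = 10 * 1024 * 1024     # placeholder for "xxx bytes"

    def adjust_bandwidth(current_mbits: int, outbound_bytes: int) -> int:
        """Return the new bandwidth after applying the two alarm policies."""
        if outbound_bytes > HIGH_TRAFFIC_BYTES:
            return min(current_mbits + STEP_MBITS, UPPER_LIMIT_MBITS)
        if outbound_bytes < LOW_TRAFFIC_BYTES:
            return max(current_mbits - STEP_MBITS, LOWER_LIMIT_MBITS)
        return current_mbits
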
• AS Configuration: defines the specifications of ECSs to be added to an AS group. You can select an existing AS configuration or create an AS configuration.

• Configuration Template: specifies a template for creating a new AS configuration. You can create a new specifications template or use the specifications of an existing ECS.

• Specifications: specifies the specifications of the ECSs to be added to the AS group, including vCPUs and memory.

• Image: specifies an ECS or BMS template that contains an OS or service data and may
also contain proprietary software and application software, such as database software.
Public, private, and shared images are available.

• Disk: provides storage for ECSs. System disks are mandatory. You can select the disk
I/O type and size.

• Security Group: specifies a logical group that controls access within or between security
groups. If this parameter is set to the default value, inbound traffic is controlled and all
outbound packets are permitted.

• EIP: If a load balancer has been bound to the AS group, you do not need to set this
parameter. The system automatically binds the load balancer listener to ECSs in the AS
group. These ECSs will provide services via an EIP bound to the load balancer.

• Login Mode: specifies the login mode of the ECS. Two login modes are supported, key
pair and password.

• Advanced Settings: allows you to configure File Injection, User Data Injection, and ECS
Group.
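
• Purely for illustration, the parameters above could be gathered into a structure like the following; the field names and values are assumptions, not the actual AS configuration schema.

    as_configuration = {
        "name": "web-tier-config",
        "specifications": {"vcpus": 2, "memory_gb": 4},
        "image": "my-private-image",        # public, private, or shared image
        "disks": [{"type": "system", "io": "high", "size_gb": 40}],
        "security_group": "default",
        "eip": None,                        # omit when a load balancer is bound
        "login_mode": {"type": "key_pair", "key_name": "my-keypair"},
        "advanced": {"file_injection": None, "user_data": None, "ecs_group": None},
    }
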
▫ Manually adjust the expected number of instances: After you manually change
the number of expected instances, the current number of instances is not
consistent with the expected number. As a result, a scaling action is triggered to
adjust the number of instances in the AS group to the expected number.

▫ Change the number of expected instances by scaling policies: After a scaling policy which adds two instances to an AS group is triggered, the system will add two to the expected number of instances. In this case, the system triggers a scaling action to add two instances so that the number of instances in the AS group is the same as the expected number.
• AS supports the following instance removal policies:

• Oldest instance: The oldest instance is removed from the AS group first. Use this policy if you want to replace old instances with new instances in an AS group.

• Newest instance: The latest instance is removed from the AS group first. Use this policy
if you want to test a new AS configuration and do not want to retain it.

• Oldest instance created from oldest AS configuration: The oldest instance created
based on the oldest configuration is removed from the AS group first. Use this policy if
you want to update an AS group and delete the instances created based on early AS
configurations gradually.

• Newest instance created from oldest AS configuration: The latest instance created
based on the oldest configuration is removed from the AS group first.
• Manually added ECS instances are removed last. AS does not delete these instances after removing them from the group. If multiple manually added ECSs must be removed, AS preferentially removes the earliest-added ECS.
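
• A minimal Python sketch of how these removal policies could pick an instance; the Instance fields and policy names are assumptions for illustration, not the AS implementation.

    from dataclasses import dataclass

    @dataclass
    class Instance:
        instance_id: str
        created_at: float          # instance creation time
        config_created_at: float   # creation time of its AS configuration
        manually_added: bool

    def pick_instance_to_remove(instances, policy):
        auto = [i for i in instances if not i.manually_added]
        if not auto:
            # Only manually added ECSs remain: remove the earliest-added one.
            return min(instances, key=lambda i: i.created_at)
        if policy == "oldest":
            return min(auto, key=lambda i: i.created_at)
        if policy == "newest":
            return max(auto, key=lambda i: i.created_at)
        oldest_cfg = min(i.config_created_at for i in auto)
        from_oldest_cfg = [i for i in auto if i.config_created_at == oldest_cfg]
        if policy == "oldest_from_oldest_config":
            return min(from_oldest_cfg, key=lambda i: i.created_at)
        if policy == "newest_from_oldest_config":
            return max(from_oldest_cfg, key=lambda i: i.created_at)
        raise ValueError(f"unknown policy: {policy}")
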
• AS supports the following policies:

• Alarm policy: AS automatically increases or decreases the number of instances in an AS group, or sets the number of instances to the configured value, when an alarm is generated for a configured metric, such as CPU Usage.

• Scheduled policy: AS automatically increases or decreases the number of instances in an AS group, or sets the number of instances to the configured value, at a specified time.

• Periodic policy: AS automatically increases or decreases the number of instances in an AS group, or sets the number of instances to the configured value, at a configured interval, such as daily, weekly, or monthly.
• When a traffic peak occurs, an alarm policy is triggered. In this case, AS automatically
adds an instance to the AS group to help handle the added demands. However, it
takes several minutes for the instance to start. After the instance is started, it takes a
certain period of time to receive requests from ELB. During this period, alarms may be
triggered continuously. As a result, an instance is added each time an alarm is
triggered. If you set a cooldown time, after an instance is started, AS stops adding new
instances according to the alarm policy until the specified period of time (300 seconds
by default) passes. Therefore, the newly started instance has time to start processing
application traffic. If an alarm is triggered again after the cooldown period elapses, AS
starts another instance and the cooldown period takes effect again.
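
• The effect of the cooldown period can be sketched in Python as follows; add_instance is a hypothetical callback that launches one ECS, and 300 seconds is the default cooldown mentioned above.

    import time

    COOLDOWN_SECONDS = 300       # default cooldown period
    last_scaling_time = 0.0

    def on_alarm_triggered(add_instance):
        """Scale out at most once per cooldown window, ignoring alarms in between."""
        global last_scaling_time
        now = time.time()
        if now - last_scaling_time < COOLDOWN_SECONDS:
            return  # still cooling down: give the new ECS time to start serving traffic
        add_instance()
        last_scaling_time = now
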
• AS allows you to manage AS groups by performing the following basic operations:
creation, enabling, disabling, modification, and deletion.

• If the service scenario changes, you need to change the specifications of the ECS
instances. This can be done by changing the AS configuration of the AS group.

• To improve the fault tolerance of an AS group, you can add an ELB listener to the
group. Then, this listener will evenly distribute access traffic to all ECSs in the group.

• For details, see Auto Scaling User Guide.


• You can create an AS configuration either by using specifications of an existing ECS or
by creating a new specifications template.

• The AS configuration can only be copied and deleted. To modify the AS configuration
of an AS group, you can copy this configuration, modify parameters on the copied
configuration, save the copied configuration as a new one, and then replace the
configuration of the AS group with the new configuration.
• The differences between the three methods are as follows:

• Dynamically expanding resources: You can configure an alarm policy to adjust the
number of instances in the AS group or the EIP bandwidth based on the CPU usage or
inband incoming rate.

• Expanding resources as planned: For predictable traffic needs, you can configure a
scheduled or periodic policy to adjust the instances in the AS group or the bandwidth.

• Manually expanding resources: You can manually change the expected number of
instances in the AS group to expand resources. This method cannot be used for
bandwidth scaling.

• When service demands are reduced, you can also reduce resources to control costs.
• Oldest instances created from oldest AS configuration: The oldest instance created
based on the oldest configuration is removed from the AS group first. Use this policy if
you want to update an AS group and delete the instances created based on early AS
configurations gradually.

• Newest instances created from oldest AS configuration: The latest instance created
based on the oldest configuration is removed from the AS group first.

• Oldest instances: The earliest instance is removed from the AS group first.

• Newest instances: The latest instance is removed from the AS group first.

Manually added ECSs are removed with the lowest priority. AS does not delete manually added ECSs when removing them from the group. If multiple manually added ECSs must be removed, AS preferentially removes the earliest-added ECS.

When removing instances, AS preferentially ensures that the remaining instances are evenly distributed across AZs.
• Due to limited space, this figure only shows some metrics. The metrics not displayed include the disk read rate, disk write rate, disk write requests, and number of instances.

• You can view the records of scaling actions in a table. Each scaling action of the AS
group is recorded in this table.
• When the AS group performs a scaling action and triggers the lifecycle hook, the scaling action is suspended and the ECS that is being added to or removed from the AS group is set to a waiting state. During this period, you can perform custom operations on the ECS, such as installing or configuring software on an ECS being added, or downloading log files from an ECS being removed.
• AS policy management enables you to handle diversified demands and cope
with complex scenarios.

• When a policy needs to be executed but the trigger condition is not met, you
can manually execute the AS policy.

• For details about how to manage AS policies, see Auto Scaling User Guide.
• For details, see Auto Scaling User Guide.
• AS ensures that the application system consistently has a proper resource capacity to
comply with access volume requirements. When AS works with a load balancer, the AS
group automatically adds available instances to the load balancer listener. Access
traffic is automatically distributed to all the ECSs of an AS group through the listener.
• Elastic Cloud Server (ECS): provides the ECSs that are scaled in or out by the
AS service.

• Virtual Private Cloud (VPC): provides bandwidth data for configuring a bandwidth scaling policy.

• Elastic Load Balance (ELB): works with AS to evenly distribute traffic to each
instance in the AS group, improving system availability.

• Simple Message Notification (SMN): promptly pushes AS group information to users so that they can learn about the latest status of the AS group.

• Cloud Trace Service (CTS): records operations related to auto scaling for later
query, auditing, and backtracking.

• Cloud Eye: provides alarm conditions for triggering scaling actions when an
alarm policy is configured.
Answers:

▫ Q1: ABC

Explanation

AS does not support monitoring policies. The alarm policy of an AS group uses
metrics of Cloud Eye, such as CPU usage.
• An image is a template used to create servers or disks. IMS provides image lifecycle management. You can use an existing ECS or an external image file to create a system or data disk image, or you can use an Elastic Cloud Server (ECS) or ECS backup to create a full-ECS image, complete with data disks.
• A public image is a standard, widely used image that is available to all users. It
contains an OS and a set of standard preinstalled applications. You can configure the
application environment or related software as needed.

• A private image is available only to the user who created it. It contains an OS, service
data, standard public applications, and can include additional custom software.

• A shared image is a private image shared by another user.


• Convenient: You can use an ECS or ECS backup to create a private image or use an
existing image to quickly and conveniently create ECSs.

• Secure: You can store multiple copies of private images to improve data durability.

• Flexible: You can manage images through the management console or using open APIs.

• Unified: You can use images to create identical cloud servers for application deployment and upgrades. This ensures application environment consistency and simplifies O&M.
You can:

• Use public images that contain a standard OS, such as Windows, Ubuntu Server,
CentOS, Debian, or openSUSE. For details, visit IMS Help Center.

• Create a private image from an ECS or external image file.

• Conveniently manage a large number of images. For example, you can search for images by OS type, name, or ID, and view details such as the image ID and system disk size. Features such as user data injection and disk hot swap are also included.

• Manage private images. You can modify image details, and share or replicate images.

• Quickly and conveniently create ECSs from an image.


• Migrating servers to the cloud or between clouds

• Create images (in VHD, VMDK, QCOW2, or RAW format) from existing servers and
import the images to the cloud platform to migrate services to the cloud. Migrate ECSs
between accounts and regions through image sharing and cross-region image
replication.

• Deploying a specified software environment

• Use shared images or Marketplace images to quickly build specified software environments, without having to manually configure environments or install software. This is especially useful for Internet startups.

• Batch deploying software environments

• Use an ECS with an OS, partitions, and software to create a private image, and then use the image for batch ECS creation. The new servers will all have the same configuration as the source ECS.

• Backing up server running environments

• Create an image from an ECS to back up the ECS. If the software of the ECS becomes
faulty, you can use the image to restore the ECS.
• Public images are provided by the cloud platform and are available to all users.

• Private images are created by the user and are available only to the user who created them. A private image is more customizable, so it can save time. A private image can be created from an ECS or an external image file.

• You can create a private image as needed.

▫ Creating a system disk image from a Windows ECS

▫ Creating a system disk image from a Linux ECS

▫ Creating a Windows system disk image from an external image file

▫ Creating a Linux system disk image from an external image file

▫ Creating a data disk image from an ECS data disk

▫ Creating a data disk image from an external image file

▫ Creating a full-ECS image from an ECS

▫ Creating a full-ECS image from a CSBS backup


• You are advised to install Cloudbase-Init on the ECS to be used to create a private
image, or the new ECSs created from the private image may not be configurable.
• If a Windows ECS configured with a static IP address is used to create a private image,
you will have to log in to the ECS and change the network settings to use DHCP. The
procedure is as follows:

▫ Log in to the Windows ECS. Choose Start > Control Panel > Network and Internet Connections > Network and Sharing Center > Connection with the static IP address > Properties > General.
▫ On the General tab page, select Obtain an IP address automatically and Obtain
DNS server address automatically, and click OK.
• To ensure that ECSs created from a private image are configurable, you are advised to
install Cloudbase-Init on the ECS before using it to create a private image. You need to
configure an EIP for the ECS so that you can download Cloudbase-Init from the official
website and then install it.
• You can create a Windows system disk image by using an ECS with Windows installed
on it.
• If a Linux ECS configured with a static IP address is used to create a private image, you
will have to change the network settings to use DHCP.

• The configuration method depends on the distribution.

▫ CentOS and EulerOS: Use vi to add PERSISTENT_DHCLIENT="y" to the configuration file /etc/sysconfig/network-scripts/ifcfg-ethX.

▫ SUSE: Use vi to set DHCLIENT_USE_LAST_LEASE to no in the configuration file /etc/sysconfig/network/dhcp.

▫ Ubuntu 12.04: Upgrade dhclient to ISC dhclient 4.2.4 so that the NICs can
consistently obtain an IP address from the DHCP server. For details, see Image
Management Service User Guide.
• When creating a private image based on a Linux ECS, you must first delete any existing
network rule files. Do not restart the ECS after deleting the network rule files, or the
deleted rule files will be recreated.

• Run the following command on the ECS to view the files in the network rule directory:

▫ ls -l /etc/udev/rules.d

▫ Check whether the file name in the command output contains both persistent
and net. If it does, you need to delete network rules.

▫ 70-persistent-net.rules

• Run the following commands to delete any rule files whose names include persistent
and net in the network rule directory:

▫ rm /etc/udev/rules.d/30-[net_persistent-names].rules

▫ rm /etc/udev/rules.d/70-[persistent-net.rules]
• To ensure that ECSs created from a private image are configurable, you are advised to
install Cloud-Init on the ECS that is used to create the image. You need to configure an
EIP for the ECS so that you can download Cloud-Init from the official website and then
install it.
• You must detach any EVS data disks attached to the ECS before using it to create a
private image, or new ECSs created using the private image may be unusable. To
detach the EVS data disks:
• If you have an external Windows image that meets the format and OS requirements,
you can use it to create a private Windows image.
• To download the OBS Browser, visit
http://static.huaweicloud.com/upload/files/tools/OBSBrowser.zip.
• If you have an external Linux image file that meets the format and OS requirements,
you can use it to create a private Linux image.
• A data disk image contains service data only. You can use an external image file to
create a data disk image.

• Then, you can use the data disk image to create EVS disks and migrate your service
data to the disks.
• You can use an ECS with data disks to create a full-ECS image. The created image
contains your service data and can be used to quickly create ECSs with service data.

• The ECS used to create a full-ECS image can be in the Running state.

• After a full-ECS image is deleted, the associated CSBS backup will not be deleted. To
delete the associated backup, go to the CSBS console.
• NIC multi-queue enables multiple CPUs to process NIC interrupts for load balancing.

• After the SR-IOV driver is installed for an image, the network performance of ECSs
created from the image will be greatly improved.
• You can share your private images with others. If you are a DeC user, image sharing
makes it easy to use images in multiple projects, as long as they are in the same
region.

▫ You can share images, stop sharing images, and add or delete tenants that can
use the shared images.

▫ The recipient can choose to accept or reject the shared images, or remove the
images they have accepted.
• How can I share an encrypted image or publish it in the Marketplace?

▫ You are not allowed to share an encrypted image or publish it in the Marketplace
directly. If you want to do this, you can replicate the image to generate an
unencrypted one, and share or publish the unencrypted version.

• How can I change an unencrypted image to an encrypted one?

▫ If you want to store an unencrypted image in an encrypted way, you can select
an encryption key and replicate the image to generate an encrypted image.

• Constraints

▫ An encrypted image cannot be shared with other tenants, published in the Marketplace, or replicated across regions.

▫ The key used for encrypting an image cannot be changed.

• You can create encrypted images to improve data security. The encryption mode is
KMS envelope encryption.

• You can create a KMS key on the Key Management Service (KMS) console and use it
on the IMS console.
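
• As a rough illustration of the envelope-encryption idea (not the KMS implementation itself), the sketch below uses the open-source cryptography package: a data key encrypts the image, and a master key encrypts ("wraps") the data key.

    from cryptography.fernet import Fernet

    master_key = Fernet.generate_key()       # stands in for the KMS master key
    data_key = Fernet.generate_key()         # per-image data encryption key

    encrypted_image = Fernet(data_key).encrypt(b"...image bytes...")
    wrapped_key = Fernet(master_key).encrypt(data_key)   # stored with the image

    # Decryption unwraps the data key first, then decrypts the image.
    plain_key = Fernet(master_key).decrypt(wrapped_key)
    image_bytes = Fernet(plain_key).decrypt(encrypted_image)
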
• You may need to replicate an image in the following scenarios:

▫ Creating an unencrypted version of an encrypted image

Encrypted images cannot be shared with others or published in the Marketplace. If you want to publish or share an encrypted image, you need to create an unencrypted version.

▫ Replicating an encrypted image

The key used for encrypting an image cannot be changed directly. If you want to
change the key of an encrypted image, you can replicate this image and encrypt the new
image using a different key.

▫ Creating an encrypted version of an unencrypted image

If you want to encrypt an unencrypted image, you can replicate the image and
encrypt the new image using a key.
• You can replicate an image from one region to another, using the image to clone your
ECSs. This allows you to migrate services across regions more conveniently.
• When querying predefined tags, adding tags for an image, or searching for an image
by tag, you must have the permissions necessary to access the Tag Management
Service (TMS).
• Elastic Cloud Server (ECS): You can use an image to create ECSs or use an ECS to
create an image.

• Bare Metal Server (BMS): You can use an image to create BMSs or use a BMS to create
an image.

• Object Storage Service (OBS): Images are stored in OBS buckets. External image files
to be uploaded to the system are stored in OBS buckets, and private images are
exported to OBS buckets.

• Data Encryption Workshop (DEW): Images can be encrypted through envelope encryption of DEW to ensure data security. The keys used for encrypting images are stored in DEW.

• Elastic Volume Service (EVS): You can create a data disk image using a data disk of an
ECS. The created data disk image can be used to create other EVS disks.

• Cloud Server Backup Service (CSBS): You can use a CSBS backup to create a full-ECS
image.

• Cloud Backup and Recovery (CBR): You can use a CBR backup to create a full-ECS
image.

• Tag Management Service (TMS): You can tag images to classify and search them easily.

• Cloud Trace Service (CTS): CTS records IMS operations for querying, auditing, or
backtracking.
• Answers:

• ABD
• The BMS self-service feature allows you to apply for a BMS. To apply for a BMS, you
need to specify the server type, image, required network, and other configurations. You
can then obtain the requested BMS within 30 minutes. By using the BMS service, you
can focus on your business without worrying about purchasing or maintaining servers.
• Tenants share physical resources of ECSs, but can exclusively use physical resources of
BMSs.

• BMSs can better meet your requirements for deploying critical applications and
services that require high performance (such as big data clusters, enterprise
middleware systems, supercomputing centers, and DNA sequencing), and a secure and
reliable running environment.

• Compared with physical servers, BMSs support automatic provisioning, automatic O&M,
VPC connection, and interconnection with shared storage.

• You can provision and use BMSs as easily as ECSs while enjoying the excellent
computing, storage, and network capabilities provided by BMSs.
• Specifications: The cloud platform provides different types of BMSs, including general-
purpose, disk-intensive, memory-optimized, and I/O-optimized BMSs. They have
different configurations in terms of vCPUs, memory size, storage media, storage
capacity, and number of NICs.
• Storage: BMSs have local disks with different media, interfaces, and capacities. If your
businesses require data redundancy, purchase BMSs with RAID cards. BMSs that do not
have local disks can be booted from Elastic Volume Service (EVS) disks, and can be
provisioned in minutes.
• Network
▫ Through a Virtual Private Cloud (VPC), BMSs can communicate with Elastic Cloud
Servers (ECSs), GPU-accelerated Cloud Servers (GACSs), and other cloud products.
VPC provides a 2 Gbit/s or higher bandwidth.
▫ A high-speed network is an internal network among BMSs and provides high
bandwidth (10 Gbit/s or higher) for connecting BMSs in the same AZ.
• Image: Public, private, and shared images are available.
• Security:
▫ A BMS uses the same security group policy as an ECS.
▫ The Host Security Service (HSS) improves the overall security of BMSs.
▫ The Anti-DDoS service provides defense against network-layer and application-
layer DDoS attacks and real-time alarm notifications.
▫ The Web Application Firewall (WAF) service ensures secure and stable running of
web services.
• High stability, reliability, and performance

▫ Users exclusively occupy computing resources without any virtualization performance overhead or feature loss, and can use the disk backup capability provided by the BMS service.

• AnyStack on BMS

▫ Compatible with various hypervisors such as VMware, Citrix XenServer, Xen, KVM, and Hyper-V, the BMS service helps enterprises quickly and smoothly migrate data center virtualization services to the cloud.

• Hybrid deployment and flexible networking

▫ BMSs within an AZ can communicate with each other through an internal network. VPCs can be used to connect BMSs and external resources. You can also use BMSs together with other services, such as ECS, to achieve hybrid deployment and meet flexible networking requirements of complex application scenarios.

• High throughput and low latency

▫ The BMS service provides a high-throughput and low-latency network for BMSs
in an AZ. The maximum bandwidth is 10 Gbit/s and minimum latency is 25 μs.
• High security

▫ Financial and security industries have high compliance requirements, and some
customers have strict data security requirements. The BMS service ensures
exclusive, dedicated resource use, data isolation, as well as operation monitoring
and tracking.

• High-performance computing

▫ High-performance computing, such as supercomputer centers and DNA sequencing, needs to process a large amount of data. Therefore, these scenarios have high computing performance, stability, and timeliness requirements. BMSs can meet these high-performance computing requirements with ease.

• Core databases

▫ Some critical database services cannot be deployed on VMs and must be deployed on physical servers that have dedicated resources, isolated networks, and assured performance. The BMS service provides high-performance servers dedicated for individual users, meeting isolation and performance requirements.

• Mobile apps

▫ Kunpeng-powered BMSs provide a one-stop solution for the development, testing, launch, and usage phases of mobile apps, especially mobile games, thanks to the good compatibility of Kunpeng servers with mobile terminals.
• General-purpose: This BMS flavor uses the Intel Xeon V4 CPU or next-generation Intel
Skylake V5 CPU and meets the requirements for dedicated resources, isolated networks,
and basic performance. It is ideal for databases, core ERP systems, and financial
systems.
• Disk-intensive: This flavor uses Intel Skylake V5 CPU, has large-capacity SATA disks,
and is great for big data and distributed cache scenarios, where the data volume is
large and the compute performance, stability, and real-time capability are required.
• High-performance computing: This flavor provides a large number of CPU cores, large
memory size, and high throughput, and is ideal for high-performance processor
applications. This flavor uses the V5 CPU server and InfiniBand NIC to support quick
BMS provisioning.
• GPU-accelerated: This flavor provides outstanding floating-point computing
capabilities, and is perfect for deep learning, scientific computing, CAE, 3D animation
rendering, and CAD scenarios, which require real-time, highly concurrent massive
computing.
• Memory-optimized: This flavor has DIMM memory with quick read and write speed
and high density, and is suitable for SAP HANA and in-memory databases.
• I/O-optimized: This flavor uses SSDs as both the system disk and data disks, and
applies to high-performance big data, databases, and other scenarios where high
storage I/O performance is required.
• Flagship: This flavor has 192 to 768 high-performance CPUs, 8 to 32 TB memory, and
an all-flash storage, making it ideal for SAP HANA, in-memory databases, and HPC fat
node.
• Kunpeng: This flavor uses the latest Kunpeng 920 chip and has a 960 GB SATA SSD
system disk and twelve 10 TB SATA HDD data disks, making it perfect for big data,
HPC, and native ARM scenarios.
• Images can be classified into public images, private images, and shared images based
on the image source.
▫ A public image is a standard, widely used image that is available to all users. It
contains an OS and preinstalled public applications or services.
▫ A private image is available only to the user who created it. It contains an OS,
preinstalled public applications, and a user's private applications. Using a private
image to create BMSs eliminates the need to individually configure multiple
BMSs.
▫ A shared image is a private image shared by another user.
• Public image characteristics
▫ OS types: Linux and Windows OSs that are updated and maintained periodically
by HUAWEI CLOUD.
▫ Supported software: Integrate plug-ins on which BMS storage, network, and basic
functions depend.
▫ Compatibility: Compatible with server hardware.
▫ Security: Licensed and secure.
▫ Restrictions: No usage restrictions.
• Private image characteristics
▫ Compatibility: Compatible only with BMSs of the same model as them and may
fail to provision BMSs of other models.
▫ Supported functions: You can create and delete private images and use a private
image to create BMSs or reinstall the OS of a BMS. You can perform the
following operations on private images:
▪ Share images among different accounts.
▪ Replicate images across regions.
▪ Export images to an OBS bucket.
▫ Restrictions: You can create a maximum of 50 private images.
▫ Pricing: You will be billed for storing private images.
• Common I/O: This disk type delivers a maximum of 2200 IOPS. It is ideal for
application scenarios, such as enterprise office applications and small-scale testing,
which require large capacity, a medium read/write rate, and fewer transactions.

• High I/O: This disk type delivers a maximum of 5000 IOPS and a minimum of 1 ms
read/write latency. It is designed to meet the needs of mainstream high-performance,
high-reliability application scenarios, such as enterprise applications, large-scale
development and testing, and web server logs.

• Ultra-high I/O: This disk type delivers a maximum of 33,000 IOPS and a minimum of 1
ms read/write latency. It is great for ultra-high I/O, ultra-high bandwidth, and
read/write-intensive application scenarios, such as distributed file systems in HPC
scenarios or NoSQL/RDS in I/O-intensive scenarios.
• In this figure, ToR indicates the cabling mode in the server cabinet. The access switch is
placed on top of the rack and the server is placed beneath it. HB indicates the high-
speed network. QinQ indicates the 802.1Q tunnel.

• VPC and high-speed network interfaces are generated by the system and you should
not change them. They are configured in the same NIC bond.

• ECSs and BMSs can communicate through VPCs or InfiniBand networks (if any).

• Only a VPC supports security groups, EIPs, and ELB.

• For a high-speed network and user-defined VLAN, BMSs in the same network
communicate with each other only through layer-2 connections.
• Each VPC consists of a private CIDR block, route tables, and at least one subnet.
▫ Private CIDR block: When creating a VPC, you need to specify the private CIDR
block used by the VPC. The VPC service supports the following CIDR blocks:
10.0.0.0 – 10.255.255.255, 172.16.0.0 – 172.31.255.255, and 192.168.0.0 –
192.168.255.255
▫ Subnet: Cloud resources, such as cloud servers and databases, must be deployed
in subnets. After a VPC is created, you can create more subnets in a VPC, if
required.
▫ Route table: When you create a VPC, the system automatically generates a
default route table. The route table ensures that the subnets in the VPC can
communicate with each other. If the routes in the default route table cannot
meet application requirements (for example, a cloud server without an EIP bound
needs to access the Internet), you can create a custom route table.
• Security groups and network access control lists (ACLs) are used to ensure the security
of cloud resources deployed in a VPC. A security group is similar to a virtual firewall
• HUAWEI CLOUD provides multiple VPC connectivity options to meet diverse
requirements.
▫ VPC Peering allows two VPCs in the same region to communicate with each
other using private IP addresses.
▫ Elastic IP or NAT Gateway allows cloud servers in a VPC to communicate with the
Internet.
▫ Virtual Private Network (VPN), Cloud Connect, Direct Connect, or Layer 2
Connection Gateway can connect your local data center to VPCs.
• Restrictions on using high-speed networks

▫ When creating a BMS, the network segment used by common NICs cannot
overlap with that used by high-speed NICs.

▫ A high-speed network does not support security groups, EIPs, DNS, VPNs, or
Direct Connect connections.

▫ Each high-speed NIC of a BMS must belong to a different high-speed network.

▫ After a BMS is provisioned, you can no longer configure its high-speed network.
• Through hardware and software optimizations, an enhanced high-speed network
enables BMSs in different PODs to communicate with each other.

• An enhanced high-speed network has the following advantages over a high-speed network:

▫ The bandwidth is increased to 10 Gbit/s or higher.

▫ The number of network planes can be customized and a maximum of 4000 subnets are supported.

▫ BMSs can be virtualized to access the Internet.


• Ethernet NICs that are not used by the system do not contain configuration files and
are in the down state during system startup. You can run ifconfig -a to view the NIC
names and run ifconfig eth2 up to configure the NICs. The configuration method varies
depending on the OS.

• For example, on a Linux BMS, eth0 and eth1 are automatically bonded in a VPC
network, and eth2 and eth3 are used in a user-defined VLAN. You can send packets
with any VLAN tags through eth2 and eth3. If you want to allocate a VLAN, configure
eth2 and eth3 bonding and create the target VLAN network interface on the bond
device. The method is similar to that of creating a bond device and a VLAN sub-
interface in a VPC.
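• The exact commands depend on the OS, but a minimal iproute2 sketch of the bond-plus-VLAN setup described above might look as follows (the bond name, VLAN ID 100, and IP address are assumptions for illustration only):

    # Bond eth2 and eth3, then add a VLAN sub-interface on the bond (assumed names/IDs).
    ip link add bond1 type bond mode active-backup
    ip link set eth2 down && ip link set eth2 master bond1
    ip link set eth3 down && ip link set eth3 master bond1
    ip link set bond1 up
    ip link add link bond1 name bond1.100 type vlan id 100    # VLAN sub-interface on the bond
    ip addr add 192.168.100.10/24 dev bond1.100                # example address in the user-defined VLAN
    ip link set bond1.100 up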
• An InfiniBand network uses 100 Gbit/s Mellanox InfiniBand NICs, dedicated InfiniBand
switches, and controller software UFM to ensure network communication and
management. Partition keys are used to isolate InfiniBand networks of different
tenants (similar to VLANs in the Ethernet).
• Storage space expansion: If you require additional storage space, you can either
expand the capacity of EVS disks that are attached to a BMS or attach more EVS disks
to the BMS. The additional storage space is charged based on the billing mode of the
EVS disks.

• To renew a BMS, choose More > Renew Fee in the Operation column. The Renew page
is displayed.
• Yearly/Monthly: a prepaid billing mode. The BMS will be billed based on the required
duration you specify.

• Flavor configurations, such as the CPU, memory, and local disks, cannot be changed.

• The bandwidth of different BMS flavors varies. Choose a flavor that meets your
requirements.

• Some flavors support quick BMS provisioning. If you select a flavor of this type,
parameter System Disk is displayed under Disk. The OS of this type of BMS is installed
on an EVS disk.
• When you use the VPC service for the first time, the system automatically creates a
VPC for you, including the security group and NIC. The default subnet segment is
192.168.1.0/24 and the subnet gateway is 192.168.1.1. Dynamic Host Configuration
Protocol (DHCP) is enabled for the subnet.
• You can create security groups and define access control rules to control BMS access
within a security group or between security groups. The rules you define for a security
group apply to all BMSs added to the security group.
• When creating a BMS, you can select only one security group. After a BMS is created,
you can associate it with multiple security groups.
• An EIP is a static public IP address bound to a BMS in a VPC. Using an EIP, the BMS
provides services externally.
• You can select one of the following options for EIP as needed:
▫ Automatically assign: The system automatically assigns an EIP with a dedicated
bandwidth to the BMS. The bandwidth is configurable.
▫ Use existing: An existing EIP is assigned to the BMS.
▫ Not required: The BMS cannot communicate with the Internet and can only be
used to deploy services or clusters in a private network.
• Quick-provisioning BMSs have the following advantages:

▫ BMSs are booted from EVS disks and can be provisioned within approximately 5
minutes.

▫ BMSs support CSBS backups, ensuring data security.

▫ BMSs can be rebuilt if they are faulty, facilitating quick service recovery.

▫ The image of such BMSs can be exported to apply configurations to other BMSs,
eliminating the need to individually configure BMSs.
• Flavor: OSs supported by BMSs may vary based on the flavors.
• Dedicated Computing Cluster (DCC): To physically isolate your BMS, apply for a DCC.
Then, you can deploy your BMSs in either of the following ways:
1. Choose Service List > Dedicated Cloud > Dedicated Bare Metal Server and
click Provision BMS in DeC.
2. Choose Service List > Computing > Bare Metal Server and click Provision BMS
in DeC.
• Dedicated Enterprise Storage Service (DESS): If you want exclusive storage devices with
stable latency to support critical applications, such as Oracle RAC and SAP HANA TDI,
you can enable Dedicated Computing Cluster (DCC) and apply for the DESS service
which uses Huawei's OceanStor enterprise storage devices to provide storage resources
for BMSs in the DCC.
• Dedicated Distributed Storage Service (DSS) provides you with dedicated, physical
storage resources. With data redundancy and cache acceleration technologies, DSS
delivers highly reliable, durable, low-latency, and stable storage resources. It provides
first-class performance in a wide range of scenarios, such as HPC, OLAP, and a mix of
loads.
• Cloud Server Backup Service (CSBS) offers the backup service for BMSs. It works based
on the consistent snapshot technology for EVS disks. With CSBS, you can use backup
data to restore BMS data, ensuring data security and correctness.
• Cloud Eye: After you obtain a BMS and install and configure Agent on the BMS, you
can view the monitoring data of the BMS in Cloud Eye and also monitoring graphs.
• Cloud Trace Service (CTS): You can record operations associated with BMSs for later
query, audit, and backtrack.
• Tag Management Service (TMS): You can tag BMSs to classify and search them easily.
• Answers:

▫ A

▫ A, B, C, D
• On August 9, 2006, Eric Schmidt, former CEO of Google, first proposed the concept of
cloud computing at the Search Engine Strategies Conference.

• Pain Points of Application Cloudification:

▫ Re-deployment:

▪ Inconsistency between the local environment and the cloud environment

▪ Historical data management

▫ Image: To run an application, a large image is used, and the entire system image
is oversized.

▫ Extra resource consumption and occupation due to the use of virtualization technologies

• Note: This course focuses on Linux containers.


• Cloud Foundry is the first open source PaaS platform in the industry. It was initially
developed by VMware and then became an open source platform. In 2015, the Cloud
Foundry Foundation was created.

▫ After running simple commands, developers can pack the local application (executable files and startup scripts of the application) into a compressed package and upload the package to the Cloud Foundry cloud storage. Cloud Foundry selects a VM that can run the application through the scheduler, instructs the agent to download the compressed package of the application, and then starts the application.

▫ Cloud Foundry calls namespaces and cgroups to create isolated running environments (sandboxes) for each application so that applications do not interfere with each other. The implementation of Cloud Foundry is similar to that of Docker containers.
• Docker was originally developed by dotCloud. In 2013, dotCloud decided to open source its container project, Docker. In 2017, the Docker project was renamed the Moby project.

• The name Docker is derived from dock worker, a person who loads and unloads goods
from ships.

• Docker has joined the Linux Foundation and is licensed under the Apache 2.0 license.

• Docker is a container solution for packaging, shipping, and running any application.
Docker facilitates the "Build, Ship and Run" process and unifies the development,
testing, and deployment environments and processes, thereby greatly reducing O&M
costs.

• Note: The figures are collected from the GitHub and Docker official websites.
• Docker images are used to package the OS of an application, ensuring that the local
environment is consistent with the cloud environment. This frees O&M personnel from
repetitive O&M work.
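• As a hedged illustration of how an application and its runtime environment are packaged into an image (the base image, paths, and tag below are placeholders, not part of the original material):

    # Write a minimal Dockerfile; the contents are an example only.
    cat > Dockerfile <<'EOF'
    FROM ubuntu:20.04
    COPY app/ /opt/app/
    CMD ["/opt/app/start.sh"]
    EOF
    docker build -t myapp:1.0 .   # build the image from the Dockerfile
    docker images myapp           # the same image runs identically on-premises and in the cloud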
• Kubernetes is an open source container scheduling platform across host clusters. It can
manage multiple types of underlying containers.
• Master nodes provide the cluster's control plane and make global decisions about the
cluster, such as scheduling. Generally, user containers do not run on the master node.

• Node components run on every worker node, maintaining running pods and providing
the Kubernetes runtime environment.
• In July 2015, Google and Red Hat took the lead in establishing the CNCF, which
belongs to the Linux Foundation.

• Note: The picture is from the CNCF official website.


• What is OCI?

▫ On June 22, 2015, Docker donated Libcontainer and renamed it the runC project, and Docker, CoreOS, Google, Red Hat, and other companies announced that the runC project would be managed by a neutral foundation. A set of container and image standards and specifications was then formulated based on runC.

• Libcontainer: a container runtime library released by Docker, which is the predecessor of runC.

• The OCI standards actually separate runtimes and container images from the Docker
project.

• Note: The image is from the Docker official website.


• Docker does not interact with the kernel directly. It interacts with the kernel using a
lower-layer tool.

• Libcontainer is a Docker library.


• The term Docker is often used to refer to Docker Engine.

• Docker Engine acts as a client-server application. Docker Engine consists of the following components:

▫ Server: a type of long-running program called a daemon process.

▪ The Docker daemon creates and manages Docker objects such as images, containers, networks, and volumes.

▫ Command line interface client (Docker CLI):

▪ The CLI uses the Docker REST API to control or interact with the Docker
daemon through scripting or direct CLI commands.

▫ REST API: The client uses the API to communicate with the daemon process.

• Note: The image is from the Docker official website.
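• As a quick illustration that the CLI is just one client of the daemon's REST API, you can query the API directly over the local Unix socket (a sketch assuming a default local Docker installation; may require root privileges):

    # Talk to the Docker daemon REST API directly, bypassing the docker CLI.
    curl --unix-socket /var/run/docker.sock http://localhost/version
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json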


• Docker client: Docker is an application that uses the client-server architecture. The Docker client initiates requests using Docker commands and provides the user interface through which users communicate with the Docker daemon.

• Docker daemon: The Docker daemon receives requests from the client, implements the
functions required by the client, and returns the corresponding results. The Docker
daemon is the core engine that drives all the functions of Docker. It implements its
functions by leveraging the implementation and interaction of containers, images,
storage, and other modules.

• Container:

▫ Containers are created based on images and provide a standard, isolated runtime
environment for images.

▫ Docker containers are software containers. Users can install any software application and library, and configure the container runtime as required.

• Docker image: A container provides a complete and isolated runtime environment, and an image is a static view of that runtime environment.

• Registry: a place for storing images. Docker Registry is an independent open source project, and enterprises can use the registry image to set up private image registries.

• Note: The image is from the Docker official website.


• The container ID on this page is also called full-id.

• Common options for docker run:

▫ -d: Start a container in the background.

▫ -p: Map the container port to the host port.

▫ -it: Interact with the container through the CLI client after the container is
started.

▫ -h: Specify the hostname of the container.


• Identifier of a container on the host:

▫ CONTAINER ID: container ID, which is the first 12 characters of the container
full-id and is also called short-id.

▫ NAMES: container name, which is automatically allocated by Docker. Users can also specify a container name by setting a parameter.
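• For example, the options above can be combined as follows (the nginx image, port mapping, and names are placeholders):

    docker run -d -p 8080:80 -h web01 --name web nginx   # background container, host port 8080 -> container port 80
    docker ps                                             # lists the short CONTAINER ID and the NAMES column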
• Container lifecycle management commands:

▫ docker create: Create a container.

▫ docker start: Start a container.

▫ docker run: Create and run a container.

▫ docker pause: Pause a container.

▫ docker unpause: Resume a paused container.

▫ docker restart: Restart a container.

▫ docker stop: Stop a running container.

▫ docker rm: Delete a terminated container.

▫ docker kill: Kill a container process.
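• A short walkthrough of the lifecycle commands listed above (the nginx image and the container name demo are placeholders):

    docker create --name demo nginx       # container created but not started
    docker start demo                     # running
    docker pause demo && docker unpause demo
    docker restart demo
    docker stop demo                      # stopped (exited)
    docker rm demo                        # deleted; docker rm -f would force-remove a running container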


• Answers:

▫ 1. A

▫ 2. ABCD
• In computing (specifically data transmission and data storage), a block is a
sequence of bytes or bits. A data block is the smallest unit for management
and storage in a database. Blocking data simplifies the processing of a data
stream generated by a computer program. Data is stored in blocks in hard
disks, CD-ROMs, and NAND flash memory. One data block is read at a time.

• Most file systems are built on block devices, which is a level of abstraction for the
hardware responsible for storing and retrieving specified blocks of data. Sometimes,
the block size in file systems may be a multiple of the physical block size. In a
traditional file system, a block may be only a fraction of a file. This leads to space
inefficiency due to internal fragmentation, since file lengths are often not integer
multiples of block size, and thus the last block of a file may remain partially empty.
This will create slack space. Some newer file systems attempt to solve this through
techniques like block suballocation and tail merging.

• Block storage is normally abstracted by a file system or database management system (DBMS) for use by applications and end users. The physical or logical volumes accessed via block I/O may be devices internal to a server, directly attached via SCSI or Fibre Channel, or distant devices accessed via a storage area network (SAN) using a protocol such as iSCSI or AoE. DBMSs often use their own block I/O for improved performance and recoverability as compared to layering the DBMS on top of a file system.
• Multiple specifications: EVS disks come in a range of different specifications
and can be attached to ECSs as data disks or system disks. You can select
the EVS disk specifications best suited to your service requirements and to
your budget.

• Elastic scalability: The minimum capacity and maximum capacity of a single EVS disk are 10 GB and 32 TB, respectively. The disk capacity can be expanded later in increments of at least 1 GB, and EVS capacity can be expanded without interrupting services. However, make sure the added space does not exceed the remaining quota. If the quota is insufficient to meet your requirements, apply for more quota.

• Security and reliability: Both system disks and data disks support data encryption. Data protection functions, such as backups and snapshots, can safeguard the EVS disk data, preventing data from being corrupted by application exceptions or online attacks.

• Real-time monitoring: Cloud Eye monitors the health and status of EVS disks
at all times.
• This slide introduces only two common metrics. For details about more metrics,
visit https://support.huaweicloud.com/intl/en-us/productdesc-evs/en-
us_topic_0014580744.html.

• EVS disks need to be used with servers. The application scenarios here are for
reference only.

• IOPS: Number of read/write operations performed by an EVS disk per second

• Throughput: Data read from and written into an EVS disk per second

• Read/write I/O latency: Minimum interval between two consecutive read/write operations of an EVS disk
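• These metrics can be measured on a test disk with a standard tool such as fio (a sketch only; the device /dev/vdb and the parameters are assumptions, and you should only test disks that hold no production data):

    # 4 KiB random-read test that reports IOPS, bandwidth, and latency.
    fio --name=randread --filename=/dev/vdb --direct=1 --ioengine=libaio \
        --rw=randread --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --time_based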
• For more details about the parameters, visit https://support.huaweicloud.com/intl/en-
us/qs-evs/en-us_topic_0021738346.html.

• EVS disks can be attached only to servers in the same AZ. Once a disk is created, you
cannot change the AZ of the disk.

• A shared EVS disk can be attached to up to 16 servers.


• For more information, visit https://support.huaweicloud.com/intl/en-us/productdesc-
evs/en-us_topic_0052554220.html.
• For more information about the EVS disk status, visit
https://support.huaweicloud.com/intl/en-us/usermanual-
evs/evs_01_0040.html.

• Disk creation: Creating → Available

• Disk attachment: Available → Attaching → In-use (if the attachment succeeds)

• Capacity expansion of an Available disk: Available → Expanding → Available (if the expansion succeeds)

• Capacity expansion of an In-use disk: In-use → Expanding → In-use (if the expansion succeeds)
• Attaching a Non-shared EVS Disk

▫ Independently purchased EVS disks are data disks. In the disk list, such disks are
displayed as Data disk, and their status is displayed as Available. Data disks need
to be attached to servers.

▫ System disks are automatically attached to the servers you purchased. In the EVS
disk list, such disks are displayed as System Disk, and their status is displayed as
In-use. After the system disk is detached from a server, the disk attribute changes
to Boot Disk, and the disk status changes to Available.
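• After a data disk is attached to a Linux server, it still needs to be initialized before use; a minimal sketch (the device name, file system type, and mount point are assumptions):

    mkfs.ext4 /dev/vdb          # format the new data disk (destroys any existing data on it)
    mkdir -p /mnt/data
    mount /dev/vdb /mnt/data    # mount it; add an /etc/fstab entry to make the mount persistent
    df -h /mnt/data             # verify the disk is available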
• If the disk capacity is expanded when the server is stopped, the added space for a
Windows system disk, Windows data disk, and Linux system disk may be automatically
added to the end of the disk after the server is started. If so, the added space can be
used directly. If the space is not automatically added to the end of the disk, you need
to extend the partition and file system.
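• If the added space is not automatically available, a typical Linux sequence for extending the partition and file system is sketched below (assumes an ext4 file system on partition /dev/vda1; xfs_growfs would be used for xfs):

    growpart /dev/vda 1    # grow partition 1 into the newly added space (cloud-utils-growpart package)
    resize2fs /dev/vda1    # grow the ext4 file system to fill the enlarged partition
    df -h                  # verify the new capacity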
• For details about how to create a snapshot, see
https://support.huaweicloud.com/intl/en-us/usermanual-evs/en-
us_topic_0066615262.html.
• Snapshots and backups are different in that a backup saves data as another copy in
another storage system, whereas a snapshot establishes a relationship between the
snapshot and data.

• The following example describes the snapshot creation for disk v1 at different points
in time:

▫ Create disk v1, which contains no data.

▫ Write data d1 and d2 to disk v1. Data d1 and d2 are written to new spaces.

▫ Create snapshot s1 for disk v1. Data d1 and d2 are not saved as another copy
elsewhere. Instead, a relationship is established between snapshot s1 and data d1
and d2.

▫ Write data d3 to disk v1 and change data d2 to d4. Data d3 and d4 are written
to new spaces, and data d2 is not overwritten. The relationship between snapshot
s1 and data d1 and d2 is still valid. Therefore, snapshot s1 can be used to restore
data if needed.

▫ Create snapshot s2 for disk v1. A relationship is established between s2 and data
d1, d3, and d4.
• Creating a Cloud Disk Backup (https://support.huaweicloud.com/intl/en-us/qs-
cbr/cbr_02_0007.html)

• Restoring Data Using a Cloud Disk Backup (https://support.huaweicloud.com/intl/en-us/usermanual-cbr/cbr_03_0033.html)
• With three-copy storage, the storage system automatically distributes copies
of data to three different physical disks of different servers so that the failure of
a single hardware device will not affect services.

• The storage system guarantees strict consistency between the data copies.

• For example, the storage system backs up data block P1 on physical disk A of
server A as P1'' on physical disk B of server B and P1' on physical disk C of
server C. Data blocks P1, P1', and P1'' are the three copies of the same data
block. If physical disk A where P1 resides is faulty, P1' and P1'' are still available
to ensure service continuity.
• It requires a certain amount of time to create a backup because data needs to
be transferred. Creating a snapshot or rolling back disk data takes less time
than creating a backup in the first place.
• Answer:

• 1. ABC

• 2. D
• The bucket and object are the two basic concepts in OBS.

• A bucket is a container for storing objects in OBS. Each bucket is specific to a region
and has specific storage class and access permissions. A bucket is accessible over the
Internet through its access domain name.

• An object is the basic unit of data storage in OBS. It consists of a key, metadata, and
data.

▫ A key specifies the name of an object. An object key is a UTF-8 string ranging
from 1 to 1024 characters. Each object in a bucket is uniquely identified by a key.

▫ Metadata describes an object, and is classified into system metadata and custom
metadata. The metadata is a set of key-value pairs that are assigned to the
object stored in OBS.

▪ System metadata is automatically assigned by OBS to manage the object. System metadata includes Date, Content-Length, Last-Modified, Content-MD5, and more.

▪ You can specify custom metadata to describe the object when you upload
the object to OBS.

▫ Data refers to the content that the object contains.


• An object is the basic unit of data storage in OBS. An object consists of data and
metadata that describes the object. Data uploaded to OBS is stored in buckets as
objects.

• A bucket is a container for storing objects in OBS. OBS provides flat storage in the
form of buckets and objects. Unlike the conventional multi-layer directory structure of
file systems, all objects in a bucket are stored at the same logical layer.
▫ OBS offers three storage classes: Standard, Infrequent Access, and Archive,
meeting various requirements for storage performance and costs. When creating
a bucket, you can set a storage class for it. The storage class of a bucket can be
changed as needed.
• The account provided by OBS includes the access key ID (AK) and secret access key
(SK), which are used for identity authentication. If you use a client to send a request to
OBS, the request header must contain a signature. The signature is generated based
on the SK, request time, and request type.
▫ An AK and an SK form a key pair used to access OBS. When OBS APIs are used to
access stored data, AKs and SKs are used to generate authentication information.
▫ After subscribing to OBS, you can log in to the console and create AKs and SKs on the My Credentials page. The system identifies users who access the system by their AKs and authenticates them using their SKs.
▫ An AK belongs to only one user but one user can have multiple AKs.
▫ An SK corresponds to an AK, forming a key pair for accessing OBS, which ensures
access security.
• OBS offers three storage classes: Standard, Infrequent Access, and Archive, meeting
various requirements for storage performance and costs.

▫ The Standard storage class features low access latency and high throughput. It is
therefore suitable for storing a massive number of hot files (frequently accessed
every month) or small files (less than 1 MB). The application scenarios include
big data analytics, mobile apps, hot videos, and social apps.

▫ The Infrequent Access storage class is ideal for storing data that is semi-
frequently accessed (less than 12 times a year) and requires quick response. The
application scenarios include file synchronization, file sharing, and enterprise
backup. It provides the same durability, access latency, and throughput as the
Standard storage class but at a lower cost. However, the Infrequent Access
storage class has lower availability than the Standard storage class.

▫ The Archive storage class is suitable for archiving data that is rarely-accessed (on
average once a year). The application scenarios include data archiving and long-
term data backups. The Archive storage class is secure, durable, and inexpensive,
and can be used to replace tape libraries. However, it may take hours to restore
data from the Archive storage class.
• Reliable data durability and service continuity: OBS is used by the Cloud Album for
Huawei mobile phones, and supports access for hundreds of millions of users. Cross-
region replication, cross-AZ disaster recovery, intra-AZ device and data redundancy,
slow disk and bad sector detection of storage media, and other technologies together
ensure data durability of up to 99.9999999999% and service continuity of up to
99.995%, far higher than that of conventional architecture.

• Multi-level protection and authorization management: OBS has the Trusted Cloud
Service (TRUCS) certification. Its multiple data protection mechanisms, including
versioning, server-side encryption, VPC-based network isolation, access log audit, and
fine-grained permission control, ensure persistent data security.

• Unlimited number of objects and high-level concurrency: With intelligent scheduling and response, optimized data access paths, and technologies such as event notification, transmission acceleration, and big data vertical optimization, OBS supports an unlimited number of objects and highly concurrent access.

• Ease-of-use and management: OBS supports REST APIs, multi-version SDKs, and data
migration tools, making it easier to migrate services to the cloud. You do not need to
plan storage capacity beforehand or worry about storage capacity, because storage
resources are available for linear and nearly infinite expansion.

• Various storage classes and flexible billing modes: OBS can be subscribed to through
pay-per-use and monthly/yearly billing modes. Data in each of the Standard,
Infrequent Access, and Archive storage classes are separately metered and billed,
significantly reducing storage costs.
• For product pricing details, visit the following website:
https://www.huaweicloud.com/intl/en-us/pricing/index.html?tab=detail#/obs
• OBS provides two billing modes: pay-per-use and yearly/monthly. Pay-per-use is the
default billing mode of OBS.

• The billing items include storage capacity, data traffic, requests, and data restoration.
For details about pay-per-use pricing, see the table.

• With this billing mode, your service account is only billed for the amount of time
(hours) resources are actually used for. No minimum consumption is required.
• OBS provides two billing modes: pay-per-use and yearly/monthly. Pay-per-use is the
default billing mode of OBS.

• You can also purchase a yearly/monthly resource package. OBS offers resource
packages for multi-AZ storage, common storage, downstream traffic, and pull traffic.

• Resource packages are paid for once and take effect immediately upon payment.
Currently, you cannot specify the date when a resource package takes effect.

• Yearly/Monthly subscriptions apply only to the Standard storage class.

• If your usage exceeds the package quota within the validity of the package purchased,
you will be charged on a pay-per-use basis for the excess use.

• Yearly/Monthly subscriptions can neither be automatically renewed nor cancelled. When a package has expired, it does not impact your operations and data security in OBS. The system automatically charges you according to pay-per-use rates.
• After server-side encryption is enabled, objects to be uploaded will be encrypted and
stored on the server. When you download the encrypted objects, the encrypted data
will be decrypted on the server and displayed in plaintext to users.

• Key Management Service (KMS) uses the hardware security module (HSM) to protect
key security, helping you easily create and control your encryption keys. Keys are not
displayed in plaintext outside HSMs, which prevents key disclosure. All operations on
keys are controlled and logged, and usage records of all keys can be provided to meet
regulatory compliance requirements.

• The objects to be uploaded can be encrypted from the server side using the encryption
service provided by KMS. You need to create a key using KMS or use the default key
provided by KMS. Then you can use the KMS key to perform server-side encryption
when uploading objects on OBS.

• OBS supports both server-side encryption with KMS-managed keys (SSE-KMS) and
server-side encryption with customer-provided keys (SSE-C) by invoking APIs. In SSE-C
mode, OBS uses the keys and MD5 values provided by customers for server-side
encryption.
• Lifecycle management rules involve two key elements:
▫ Policy: You can specify the prefix of object names so that objects whose names
contain this prefix are restricted by the rules. You can configure lifecycle rules for
a bucket so that all objects in the bucket are subject to the rules.
▫ Time: You can specify the number of days after which objects that have been last
updated and meet specified conditions are automatically transitioned to
Infrequent Access or Archive, or expire and are automatically deleted.
▪ Transition to Infrequent Access: You can specify the number of days after
which objects that have been last updated and meet specified conditions
are automatically transitioned to Infrequent Access.
▪ Transition to Archive: You can specify the number of days after which
objects that have been last updated and meet specified conditions are
automatically transitioned to Archive.
▪ Expiration Time: You can specify the number of days after which objects are
automatically deleted or the day after which an object that matches a rule
is deleted.
• The number of days for objects to be transitioned to Infrequent Access is at least 30. If
objects are configured to be able to transition to both Infrequent Access and Archive,
the number of days for transition to Archive must be at least 30 more than that for
transition to Infrequent Access. For example, if the number of days for transition to
Infrequent Access is 33, that for transition to Archive must be at least 63. If only
transition to Archive is enabled and transition to Infrequent Access is disabled, there is
no limit on the number of days for transition. The expiration time must be greater
than the two transition times.
• Static websites contain static web pages and scripts that can run on clients, such as
JavaScript and Flash. Unlike static websites, dynamic websites rely on servers to
process scripts, including PHP, JSP, and ASP.NET. OBS does not support scripts running
on servers.

• The configuration of static website hosting takes effect within two minutes of the
configuration. After the static website hosting is enabled on OBS, you can access the
static website using the URL provided by OBS.

• When using static website hosting, you can configure redirection to redirect specific or
all requests.

• If the structure, address, or file name extension of a website is changed, you cannot
access the website using the old address (such as the address saved in a favorites
folder), and the 404 error message is returned. In this case, you can configure
redirection for the website to redirect user access requests to the specified page
instead of returning the 404 error page.

• Typical configurations include:

▫ Redirecting all requests to another website.

▫ Redirecting specific requests based on redirection rules.


• Some websites steal links from other websites to enrich their own content, without
increasing their costs. Stealing links not only damages the interests of the original
websites but also increases workloads on the server. Therefore, URL validation is used
to resolve this problem.

• In HTTP, a website can detect the web page that accesses a target web page using the
Referer field. As the Referer field can trace sources, specific techniques can be used to
block or return specific web pages if requests are not from trusted sources. URL
validation checks whether the Referer field in requests matches the whitelist or
blacklist by setting Referers. If the field matches the whitelist, the requests are allowed.
Otherwise, the requests are blocked or specified pages are displayed.
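• You can observe Referer-based URL validation with curl by sending the same request with different Referer headers (the bucket URL and referer values are placeholders):

    # Allowed: the Referer matches the whitelist configured on the bucket.
    curl -e "https://www.example.com" -o photo.jpg https://bucket-name.obs.region.example.com/photo.jpg
    # Blocked (typically HTTP 403): the Referer is not in the whitelist.
    curl -e "https://stealer.example.net" -I https://bucket-name.obs.region.example.com/photo.jpg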
• You can configure event notifications to be objects filtered by the prefix and suffix of
an object name. For example, you can add an event notification rule to send
notifications whenever an object with the .jpg suffix is uploaded to the specified bucket.
You can also add an event notification rule to send notifications whenever an object
with the images/ prefix is uploaded to the specified bucket.
• Constraints

• Only buckets of version 3.0 and later support user-defined domain name binding. To
check the bucket version, go to Overview of the bucket on OBS Console. You can then
view the bucket version in the Basic Information area.

• A maximum of 5 user-defined domain names can be bound to each bucket.

• Currently, user domain names bound to OBS only allow access requests over HTTP.

• If you want to use a bound user domain name to access OBS over HTTPS, you need to
enable CDN to manage HTTPS certificates.

• A user-defined domain name can be bound to only one bucket domain name.
• After a cross-region replication rule is enabled, objects that meet the following
conditions are copied to the destination bucket:

▫ Newly uploaded objects (excluding objects in the Archive storage class).

▫ Updated objects. For example, the object content is updated or the ACL
information of a copied object is updated.

▫ Historical objects in a bucket (the function of synchronizing existing objects must be enabled).

• Applicable scenarios:

▫ Cross-region access: you need to access the same OBS resource in different
locations. To minimize the access latency, you can use cross-region replication to
create object copies in the nearest region.

▫ OBS data migration: you need to migrate OBS data to the data center in another
region.

▫ Data backup: To ensure data security and availability, you need to create explicit
backups for all data written to OBS in the data center of another region.
Therefore, secure backup data is available if the source data is permanently
damaged.
• With the versioning function, OBS can store multiple versions of an object. You can
quickly search for and restore different versions or restore data in the event of a
misoperation or application fault.

• By default, versioning is disabled for new buckets on OBS. Therefore, if you upload an
object to a bucket where an object with the same name exists, the new object will
overwrite the existing one.

• Enabling versioning

▫ Enabling versioning does not change the versions and contents of existing objects
in the bucket. The version ID of an object is null before versioning is enabled. If a
namesake object is uploaded after versioning is enabled, a version ID will be
assigned to the object.

▫ OBS automatically allocates a unique version ID to a newly uploaded object. Objects with the same name are stored in OBS with different version IDs.

▫ The latest objects in a bucket are returned by default after a GET Object request.

▫ Objects can be downloaded by version IDs. By default, the latest object is downloaded if no version ID is specified.

▫ You can select an object and click Delete on the right to delete the object. After
the object is deleted, OBS generates a Delete Marker with a unique version ID for
the deleted object, and the deleted object is displayed in the Deleted Objects list.
• All object versions stored in OBS are billed except those with Delete Marker.
• Suspending versioning

▫ Once the versioning function is enabled, it can be suspended but cannot be disabled. Once versioning is suspended, version IDs will no longer be allocated to newly uploaded objects. If an object with the same name exists and does not have a version ID, the object will be overwritten.

▫ Historical versions are retained in OBS. If you do not need these historical versions, manually delete them.

▫ Objects can be downloaded by version IDs. By default, the latest object is downloaded if no version ID is specified.

▫ All historical object versions stored in OBS are billed except those with Delete
Marker.

• Differences between scenarios when versioning is suspended and disabled

▫ If you delete an object when versioning is suspended, a null version with Delete
Marker is generated regardless of whether the object has historical versions. But
if versioning is disabled, the same operation will not generate a version with
Delete Marker.
• When you log in to OBS Console using your HUAWEI CLOUD account or as an IAM
user, OBS uses your account or IAM user information for authentication.

• When you access OBS using the tools (OBS Browser+ and obsutil), SDKs, or APIs,
instead of your account or IAM user information, OBS requires the access keys (AK and
SK) of your account or IAM user for authentication. Therefore, you need to obtain the
access keys (AK and SK) before you access OBS using these methods.

• obsutil is a command line tool for accessing Object Storage Service (OBS). You can use
this tool to perform common configurations in OBS, such as creating buckets,
uploading and downloading files/folders, and deleting files/folders. If you are familiar
with command line interface (CLI), obsutil is recommended as an optimal tool for
batch processing and automated tasks.

• obsutil is compatible with the Windows, Linux, and macOS operating systems (OSs). To
obtain the obsutil download links and installation methods for different OSs, see Tools
Guide > obsutil > Download and Installation in Object Storage Service.
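• A hedged sketch of typical obsutil usage (the AK/SK, endpoint, and bucket name are placeholders; check the Tools Guide above for the exact commands and flags supported by your obsutil version):

    obsutil config -i=<your-AK> -k=<your-SK> -e=<your-OBS-endpoint>   # configure access keys and endpoint
    obsutil mb obs://my-bucket                                        # create a bucket
    obsutil cp ./local-file.txt obs://my-bucket/local-file.txt        # upload a file
    obsutil cp obs://my-bucket/local-file.txt ./downloads/            # download a file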
• OBS Browser+ can keep the login information of up to 100 accounts.

• If a proxy is required to access your network environment, configure the network proxy
before login.

• OBS Browser+ does not support the query or deletion of historically authorized login
information.

• OBS Browser+ automatically deletes expired authorization codes.


• If you have a large number of tasks running on OBS Browser+ but you want to
perform other operations outside OBS Browser+, you can close OBS Browser+ while
tasks are still running in the background. Specifically, click the icon for closing OBS
Browser+ in the upper right corner, and then click Background Running on the Exit
OBS Browser+ dialog box. All your tasks will enter the background running mode. You
can double-click the OBS Browser+ icon in the tray to display the UI later as needed.
• IAM policies define the actions that can be performed on your cloud resources. In other
words, IAM policies specify what actions are allowed or denied.

• IAM policies with OBS permissions take effect on all OBS buckets and objects. To grant
an IAM user the permission to operate OBS resources, you need to assign one or more
OBS permission sets to the user group to which the user belongs.

• IAM policies are used to authorize IAM users under an account. IAM policies control:

▫ the permissions to cloud resources as a whole under an account.

▫ the permissions to all OBS buckets and objects under an account.

▫ the permissions to specified cloud resources under an account.


• Bucket policy application scenarios:

▫ If no IAM policies are used for access permission control and you want to grant
other accounts the permission to access your OBS resources, you can use bucket
policies to authorize such permissions.

▫ You can configure different bucket policies to grant IAM users different access
permissions to different buckets.

▫ You can also use bucket policies to grant other accounts the permissions to
access your buckets.
• https://support.huaweicloud.com/intl/en-us/usermanual-obs/en-
us_topic_0066088967.html

• It is recommended that you use bucket ACLs in the following scenarios:

▫ Granting the log delivery user the write access to the target bucket, so that
access logs can be delivered to the target bucket.

▫ Granting an account the read and write access to a bucket, so that bucket data
can be shared or external buckets can be mounted. For example, if account A
grants the bucket read and write permissions to account B, then account B can
access the bucket by using the API and SDK, and can add an external bucket
through OBS Browser+.

• It is recommended that you use object ACLs in the following scenarios:

▫ Object-level access control is required. A bucket policy can control access permissions for an object or a set of objects. If you want to further specify an access permission for an object in the set of objects for which a bucket policy has been configured, then the object ACL is recommended for easier access control over single objects.

▫ An object is accessed through a URL. Generally, if you want to grant anonymous users the permission to read an object through a URL, use the object ACL.
• OBS not only features high performance, high reliability, low latency, and low costs,
but also provides a tiered storage class system (Standard, Infrequent Access, and
Archive) to meet various cost and capability requirements. End-to-end solutions are
available for device management, video surveillance, and video processing.

• You can upload surveillance video files recorded by cameras to HUAWEI CLOUD over
the Internet or using DC. You can then segment the video files on the processing
platform, which consists of ECS and ELB, and store video segmentation files in OBS.
Later, you can download the video segmentation objects from OBS, and transfer them
to terminal players.
• OBS provides a low-latency and low-cost storage system for storing massive amounts
of data, and features high reliability and high-concurrent access. Working with the
MPC, Content Moderation, and CDN services, OBS helps you construct a fast, secure,
and highly available online video on demand (VOD) platform.

• OBS can also serve as the origin server of VOD services. Common users or professional
content creators can upload their video files to OBS, review the video content using the
Content Moderation service, transcode the video source files through MPC, and then
play the content on devices after CDN acceleration.
• OBS provides a low-latency and low-cost storage system for storing massive amounts
of data, and features high reliability and high-concurrent access, fulfilling archiving
requirements for applications, databases, and unstructured data.

• You can use the client synchronization function, mainstream backup software, Cloud
Storage Gateway (CSG), or DES to back up your on-premises DC data and store the
data in OBS. In addition, OBS provides a lifecycle management function for automatic
switching between different storage classes, reducing storage costs. You can restore
data from OBS to the DR host or test host on the cloud.
• By working with cloud services such as Elastic Cloud Server (ECS), Auto Scaling (AS),
Elastic Volume Service (EVS), Image Management Service (IMS), Identity and Access
Management (IAM), and Cloud Eye, OBS can provide high-performance computing
(HPC) with huge capacity, large single-stream bandwidth, and secure and reliable
solutions.

• 300 MB/s bandwidth for fast upload and download

▫ Up to 300 MB/s single-stream bandwidth: quick online import of data.

▫ Temporary authorization: secure and convenient secondary distribution of data.

• 120 TB data import within one day

▫ Up to 120 TB data can be imported to the cloud within one day through
Teleport-based DES (offline).

• Archive storage as low as ¥0.033/GB per month

▫ Source data and calculation results can be stored in the archive storage, for as
low as ¥0.033/GB per month.
• OBS works with cloud services such as ECS, ELB, RDS, and VBS to provide enterprise
cloud disks with a storage system that allows high concurrency, high reliability, low
latency, and low costs. The storage capacity automatically scales with the volume of
stored data.

• Dynamic data on devices such as mobile phones, PCs, and tablets interacts with the
enterprise cloud disk service system built on HUAWEI CLOUD. Requests for dynamic
data are sent to the service system for processing and then returned to devices. Static
data is stored in OBS. Service systems can process static data over the intranet. End
users can directly request and read the static data from OBS. OBS also provides a
lifecycle management function to allow automatic switching between different storage
classes, significantly reducing storage costs.
• Answer:

• 1. ABC
• Compared with traditional file sharing storage, SFS has the following advantages:

▫ File sharing

▪ ECSs across multiple availability zones (AZs) within a region can access the
same file system concurrently and share files.

▫ Elastic scalability

▪ Storage can be scaled up or down on demand to dynamically adapt to service changes without interrupting applications. You can complete resizing in just a few clicks.

▫ Superior performance and reliability

▪ SFS enables file system performance to increase as capacity grows, and delivers a high data durability to support rapid service growth.

▫ Seamless integration

▪ SFS supports Network File System (NFS). With this standard protocol, a
broad range of mainstream applications can read and write data in the file
system.

▫ Easy operations and low costs

▪ You can create and manage file systems with ease using an intuitive
graphical user interface (GUI). SFS also slashes costs as it is billed on a pay-
per-use basis.
• The SFS service provides two types of file systems: SFS and SFS Turbo. There are two
types of SFS Turbo: SFS Turbo Standard and SFS Turbo Performance.

• The table shown here describes the characteristics, advantages, and application
scenarios of these file system types.
• HPC
▫ SFS provides superb compute and storage capabilities, as well as high bandwidth
and low latency for HPC applications in industries such as biopharma, gene
sequencing, image processing, scientific research, and meteorology.
• Media processing
▫ TV series and various forms of new media are more likely to be deployed on
cloud platforms than in the past. Services include streaming media, archiving,
editing, transcoding, content distribution, and video on demand (VoD). In such
scenarios, a large number of workstations are involved in the whole program
production process. Different operating systems may be used by different
workstations, and these different workstations may still need to share materials.
In addition, HD/4K video is becoming more and more common in the
broadcasting industry. Take video editing as an example. HD editing often
involves 30 to 40 layers. Therefore, a single editing client may require a file
system with bandwidth of up to hundreds of megabytes per second. Usually,
producing a single TV program involves multiple editing stations, all working on
the same material at the same time. To meet such requirements, SFS provides
customers with stable, high-bandwidth, and low-latency file systems.
• File sharing
▫ For an organization with a large staff, SFS can create shared file systems that are
accessible to everyone, to make file sharing within the organization easy.
• Content management and web services
▫ SFS can be used in a wide range of content management systems to provide
shared file storage for websites, home directories, online releasing, and archiving.
• Big data and analytic applications
▫ SFS delivers an aggregate bandwidth of over 10 GB/s, enough bandwidth to handle ultra-large files, such as satellite images. SFS has excellent reliability, so service interruptions due to file system failures are extremely rare.
• High-performance websites

▫ For I/O-intensive website services, SFS Turbo can provide shared website source
code directories for multiple web servers, enabling low-latency and high-IOPS
concurrent shared access.

• Log storage

▫ SFS Turbo can provide multiple service nodes for shared log output, facilitating
log collection and management of distributed applications.

• DevOps

▫ A development directory can be shared by multiple VMs or containers, which
simplifies the configuration process and improves R&D experience.

• Office applications

▫ Office documents of enterprises or organizations can be saved in an SFS Turbo
file system for high-performance shared access.
• Network File System (NFS) is a distributed file system protocol that allows different
computers and operating systems to share data over a network.

• Common Internet File System (CIFS) is a protocol used for network file access. It is a
public or open version of the Server Message Block (SMB) protocol, which is
implemented by Microsoft. CIFS allows applications to access and request files on
computers over the Internet. Using CIFS, network files can be shared between hosts
running Windows.

• A file system provides users with shared file storage through NFS or CIFS. POSIX can be
used to access network files remotely. After you create shared directories in the
management console, the file system can be mounted to multiple ECSs and is
accessible through the standard POSIX interface.

• An availability zone (AZ) is a geographical area with an independent network and an
independent power supply. In general, an AZ is an independent physical equipment
room, which keeps it isolated from other AZs. One region has multiple AZs. If one AZ
becomes faulty, the other AZs in the same region can still provide services. AZs in
the same region can access each other over the intranet. ECSs can share the same
file system across AZs of the same region.

• A region is a geographical concept. Each region is a different geographical location.
You can select a region close to you to reduce access latency.
• Creating a file system

▫ Create a file system and mount it to ECSs. Then this file system can be shared by
the ECSs.

• Mounting a file system to ECSs

▫ After creating a file system, mount the file system to ECSs so that the ECSs can
share the file system.
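• For reference, the following is a minimal sketch (in Python, using subprocess) of how an ECS running Linux might mount an NFS file system once the NFS client software is installed. The share path and the local mount point below are placeholders, not real values; take the actual mount address from the SFS console for your file system.

    import subprocess

    # Placeholder values; replace with the mount address shown on the SFS console
    # and a local directory that already exists on the ECS.
    SHARE_PATH = "sfs-nas01.example.com:/share-xxxx"
    LOCAL_MOUNT_POINT = "/mnt/sfs"

    def mount_nfs_share(share_path: str, mount_point: str) -> None:
        """Mount an NFS share with options commonly used for SFS (assumption)."""
        subprocess.run(
            ["mount", "-t", "nfs", "-o", "vers=3,nolock", share_path, mount_point],
            check=True,
        )

    if __name__ == "__main__":
        mount_nfs_share(SHARE_PATH, LOCAL_MOUNT_POINT)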
• https://support.huaweicloud.com/intl/zh-cn/usermanual-sfs/sfs_01_0036.html

• Authorized Address/Segment

• Only one IPv4 address or address segment can be entered.

• The entered IPv4 address or address segment must be valid and cannot be an IP
address or address segment starting with 0 except 0.0.0.0/0. The value 0.0.0.0/0
indicates any IP address in the VPC. In addition, the IP address or address segment
cannot start with 127 or any number from 224 to 255, such as 127.0.0.1, 224.0.0.1, or
255.255.255.255. This is because IP addresses or address segments starting with any
number from 224 to 239 are class D addresses and they are reserved for multicast. IP
addresses or address segments starting with any number from 240 to 255 are class E
addresses and they are reserved for research purposes. If an invalid IP address or
address segment is used, the access rule may fail to be added or the added access rule
cannot take effect.

• Multiple addresses separated by commas (,), such as 10.0.1.32,10.5.5.10, are not
allowed.

• An address segment, for example, 192.168.1.0 to 192.168.1.255, needs to be written
in slash notation, for example, 192.168.1.0/24. Other formats such as 192.168.1.0-255
are not supported. The number of bits in a subnet mask must be an integer ranging
from 0 to 31, and 0 can only be used in 0.0.0.0/0.
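• The sketch below shows one way the rules above could be checked programmatically in Python. It is illustrative only; the console applies its own validation, and the function name is hypothetical.

    import ipaddress

    def is_valid_authorized_address(value: str) -> bool:
        """Check an IPv4 address or CIDR block against the rules described above (sketch)."""
        if "," in value:              # multiple comma-separated addresses are not allowed
            return False
        if value == "0.0.0.0/0":      # special case: any IP address in the VPC
            return True
        try:
            network = ipaddress.ip_network(value, strict=False)
        except ValueError:            # e.g. 192.168.1.0-255 is not slash notation
            return False
        if network.version != 4:
            return False
        if "/" in value and network.prefixlen > 31:   # mask bits must be 0 to 31
            return False
        first_octet = int(str(network.network_address).split(".")[0])
        # Addresses starting with 0 (other than 0.0.0.0/0), 127 (loopback),
        # or 224-255 (class D multicast and class E reserved) are rejected.
        if first_octet == 0 or first_octet == 127 or first_octet >= 224:
            return False
        return True

    print(is_valid_authorized_address("192.168.1.0/24"))   # True
    print(is_valid_authorized_address("224.0.0.1"))        # False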
• Restrictions: SFS Turbo file systems do not support system-defined policies. Any custom
policies created for SFS are invalid for SFS Turbo file systems.
• ABCD
• CBR consists of backups, vaults, and backup policies.

• Backups: A backup is a copy of the original data that is backed up. A backup is used to
restore the original data. CBR backups include cloud disk backups, cloud server
backups, file system backups, and hybrid cloud backups.

• Vaults: CBR uses vaults to store backups. Before creating a backup, you need to create
at least one vault and associate the server or disk to be backed up with the vault. Then
the backup of the server or disk will be stored in the associated vault. Vaults can be
classified into two types: backup vaults and replication vaults. Backup vaults store
backups, whereas replication vaults store replicas of backups. Backups of different
types of backup objects must be stored in different types of vaults accordingly.

• Backup policies: To perform automatic backups, configure a backup policy by setting
the execution time of backup tasks, backup cycle, and retention rules, and then apply
the policy to a vault.
• A backup vault is a container that stores backups of servers and disks. Backup vaults
are classified into the following types:

▫ Server backup vaults include those that only store backups of common servers
and those that store backups of database servers. You can associate a server with
a server backup vault and apply a backup or replication policy to the vault. You
can also replicate backups from a vault in one region to a replication vault in
another region. Server backups can be used to restore server data.

▫ Disk backup vaults store only disk backups. You can associate a disk with a disk
backup vault and apply a backup policy to the vault.

• Replication vaults store only replicas of backups. Such replicas cannot be replicated
again. Replication vaults for server backups also include those that store only replicas
of common backups and those that store replicas of database backups.
• Backup: CBR supports one-off backup and periodic backup. A one-off backup task is
manually created by users and takes effect only once. Periodic backup tasks are
automatically executed based on a user-defined backup policy.
• After an initial full backup, a server continues to be backed up incrementally by
default.

▫ The initial full backup covers only the used capacity of a disk. If a 100 GB disk
contains 40 GB data, the initial backup consumes 40 GB backup space.

▫ A subsequent incremental backup backs up data changed since the last backup.
If 5 GB data is changed since the last backup, only the 5 GB changed data will be
backed up.

• CBR allows you to use any backup, whether full or incremental, to restore the full
data of a resource. This means a backup can still be used for restoration even if
other backups have been deleted manually or automatically.

• In extreme cases, the size of a backup is the same as the disk size. The used capacity in
a full backup and the changed capacity in an incremental backup are calculated based
on the data block change in a disk, not by calculating the file change in the operating
system. The size of a full backup cannot be evaluated based on the file capacity in the
operating system, and the size of an incremental backup cannot be evaluated based
on the file size change.
• Full backup: A full backup is a complete copy of all your data assets at a point in time.
If a file within is changed during the period from backup start to backup completion,
the changed data is backed up in the following backup operation.

• Cumulative incremental backup: A cumulative incremental backup is a backup of all


blocks that have changed since the previous full backup. If no backup has been
performed yet, all files are backed up.

• Differential incremental backup: Unlike a cumulative incremental backup, a
differential incremental backup is a backup of all blocks that have changed since the
previous backup. If no backup has been performed yet, all files are backed up.
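• The short worked sketch below illustrates how backup space consumption differs between a full backup, differential incremental backups, and cumulative incremental backups under the block-change behavior described above. The disk sizes and daily changes are made-up numbers for illustration only.

    # Hypothetical example: a 100 GB disk with 40 GB used at the time of the
    # initial full backup, followed by daily block changes of 5 GB, 3 GB, and 2 GB.
    initial_used_gb = 40
    daily_changed_gb = [5, 3, 2]

    # Full backup: covers only the used capacity of the disk.
    full_backup_gb = initial_used_gb

    # Differential incremental: each backup stores only blocks changed since the
    # previous backup (full or incremental).
    differential_sizes = daily_changed_gb

    # Cumulative incremental: each backup stores all blocks changed since the
    # previous FULL backup, so its size grows as changes accumulate.
    cumulative_sizes = [sum(daily_changed_gb[: i + 1]) for i in range(len(daily_changed_gb))]

    print("Full backup:", full_backup_gb, "GB")                      # 40 GB
    print("Differential incremental backups:", differential_sizes)  # [5, 3, 2]
    print("Cumulative incremental backups:", cumulative_sizes)      # [5, 8, 10]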
• One-off backup: No backup policies are required. You need to manually perform a
one-off backup task. By default, the first backup is a full backup, and subsequent
backups are incremental. You are advised to execute a one-off backup before patching
or upgrading the OS or upgrading an application on a resource, so that you can use
the backup to restore the resource to the original state in case the patching or
upgrading fails.

• Periodic backup: Periodic backups are executed based on backup policies. By default,
the first backup is a full backup, and subsequent backups are incremental. You are
advised to back up resources periodically for routine maintenance so that the backups
can be used to restore data in case of unpredictable faults.

• You can also use the two backup options together if needed. For example, associate
target resources with a vault and then apply a backup policy to the vault to execute
periodic backup for them, and manually perform one-off backups for the most
important resources to further ensure data security. The figure in this slide shows the
operation process.
• For details on API-based access, see Cloud Backup and Recovery API Reference.
• Yearly/monthly is a prepaid billing mode. You are billed based on the
subscription duration that you specified. This mode provides lower prices and
is ideal when the resource use duration is predictable.

• Pay-per-use is a postpaid billing mode. You are billed based on the usage of resources.
With this mode, you can add or delete resources at any time. Fees are deducted from
your account balance.

• If you want to use a vault for a long time, you can change its billing mode from
pay-per-use to yearly/monthly to reduce costs.
• By default, new IAM users do not have permissions assigned. You need to add a user
to one or more groups, and attach permissions policies or roles to these groups. Users
inherit permissions from their groups and can perform specified operations on cloud
services.

• CBR is a project-level service deployed and accessed in specific physical regions. To
assign CBR permissions to a user group, specify the scope as region-specific projects
and select projects for the permissions to take effect. If All projects is selected, the
permissions will take effect for the user group in all region-specific projects. When
accessing CBR, the users need to switch to a region where they have been authorized
to use this service.

• A backup vault is a container that stores backups of servers and disks. Backup vaults
are classified into the following types:

▫ Server backup vaults include those that only store backups of common servers
and those that store backups of database servers. You can associate a server with
a server backup vault and apply a backup or replication policy to the vault. You
can also replicate backups from a vault in one region to a replication vault in
another region. Server backups can be used to restore server data.

▫ Disk backup vaults store only disk backups. You can associate a disk with a disk
backup vault and apply a backup policy to the vault.

▫ File system backup vaults store only backups of SFS Turbo file systems. You can
associate a file system with such a vault and apply a backup policy to the vault.

▫ Hybrid cloud backup vaults store only backups synchronized from the offline
backup software OceanStor BCManager and VMware VMs. You can replicate
backups to a replication vault of another region and restore the backup data to
other servers.

• Replication vaults store only replicas of backups. Such replicas cannot be replicated
again. Replication vaults for server backups also include those that store only replicas
of common backups and those that store replicas of database backups.
• Backup: CBR supports one-off backup and periodic backup. A one-off backup task is
manually created by users and takes effect only once. Periodic backup tasks are
automatically executed based on a user-defined backup policy.
• Querying a vault: You can set search criteria for querying desired vaults in the vault
list.

• Deleting a vault: You can delete unwanted vaults to reduce space usage and costs.
Only pay-per-use vaults can be deleted. Yearly/monthly vaults need to be
unsubscribed. All backups stored in the vault will be deleted once you delete a vault.
Therefore, exercise caution when performing this operation.

• Dissociating a resource: If an associated server or disk no longer needs to be backed
up, you can dissociate it from the vault. After a resource is dissociated from a vault,
the automatic backup and replication policies of the vault no longer take effect on the
resource. In addition, all manual and automatic backups of the resource will be
deleted. The deleted data cannot be used for data restoration. Exercise caution when
performing this operation.

• Expanding a vault: You can expand a vault if needed. Vault capacity can only be
increased, not reduced.

• Changing vault specifications: Server backup vaults and server replication vaults both
have two specifications: those for server backups/replicas and those for application-
consistent backups/replicas.

▫ Server backups/replicas are backups or backup replicas of common servers.

▫ Application-consistent backups/replicas are backups or backup replicas of servers
with databases.
• Querying a backup: When a backup task is running or completed, you can set search
criteria to filter backups from the backup list and view backup details.

• Sharing a backup: You can share a server or disk backup with other projects. Shared
backups can be used to create servers.

• Deleting a backup: You can delete unwanted backups to reduce space usage and costs.

• Using a backup to create an image: CBR allows you to create images using ECS
backups. You can use the images to provision ECSs to rapidly restore service operating
environments. With cross-region replication, CBR allows you to replicate backups to
destination regions and then create images. You can use the images to provision ECSs.

• Using a backup to create a file system: You can use a file system backup to create a
new file system. After it is created, data on the new file system is the same as that in
the backup.

• Replicating a backup across regions: CBR enables you to replicate server backups from
one region to another. You can use backup replicas in the destination region to create
images and provision servers. CBR Console provides the following methods for
replication: 1) Select a backup from the backup list and perform one-off replication
manually. 2) Select a backup vault and manually replicate it. Configure a replication
policy to periodically replicate backups that have not been replicated or failed to be
replicated to the destination region.
• Crash-consistent backup for multiple disks and application-consistent backup for
database servers are supported.
• Backups can be replicated to regions with replication capabilities. The replication
limitations are as follows:
▫ A backup can be replicated only when it meets the following conditions:
▪ The backup is an ECS backup.
▪ The backup contains system disk data.
▪ The backup is in the Available state.
• Only backups generated in the current region can be replicated. Replicas cannot be
replicated again but can be used to create images.
• A backup can be replicated to multiple destination regions but can have only one or
zero replica in each destination region. The replication rule varies with the replication
method:
▫ Manual replication: A backup can be replicated to the destination region as long
as it has no replica in the destination region; that is to say, a backup can be
replicated again if its replica has been deleted.
▫ Policy-driven replication: Once a backup has been successfully replicated to the
destination region, it cannot be replicated to that region again, even if its replica has
been deleted.
• Only regions with replication capabilities can be selected as destination regions for
replication.
1. Cloud server backup allows you to back up all disks (both system and data disks)
on a server. Cloud disk backup allows you to back up one or more specified disks
(system or data disks). Use cloud disk backup when only data disks need to be backed
up, for example, because the system disk does not contain personal data.

2. ABCD
• Compared with the classic network, VPCs are more flexible and secure. You can define
subnets, IP addresses, and routes, and implement access control through security
groups and network ACLs. VPCs are suitable for services that require high security and
isolation, elastic hybrid cloud deployments, and hosting multi-tier web applications,
and they meet the strict supervision and data security requirements of industries such
as finance, government, and enterprise.
• With a VPC, you can:

▫ Easily manage and configure private networks and change network configurations
flexibly and securely.

▫ Customize the ECS access rules within a security group or between security
groups to improve ECS security.

▫ Have full control over virtual networks, including creating subnets and
configuring the DHCP service.

▫ Create security groups as well as add inbound and outbound rules to improve
ECS security.

▫ Create network ACLs as well as add inbound and outbound rules to improve
subnet security.

▫ Assign EIPs in a VPC and use NAT gateways to connect ECSs in the VPC to the
Internet.

▫ Connect a VPC to your local data center using VPN or Direct Connect for smooth
application migration to the cloud.

▫ Connect VPCs in the same region using VPC peering connections.


• User-defined subnets -- You can customize subnets in your VPC and deploy
applications and services in the subnets.

• Configurable security policies -- You can divide ECSs in a VPC into different security
groups and then configure different access control rules for each security group. You
can create network ACLs to control traffic in and out of associated subnets, improving
subnet security.

• Binding an EIP -- You can assign an EIP and bind the EIP to or unbind it from an ECS
as required. The binding and unbinding operations take effect immediately.

• Direct Connect or VPN access -- You can use Direct Connect or VPN to connect your
on-premises data center to a VPC, forming a hybrid network for smooth application
migration to the cloud.
• You can create subnets, modify subnet information, and delete subnets.
• You can add, query, modify, and delete routes.
• You can assign and release virtual IP addresses, bind a virtual IP address to an EIP or
ECS, and access a virtual IP address through an EIP, a VPN, Direct Connect, or VPC
peering connection.

• Networking mode 1: HA

If you want to improve service availability and avoid single points of failure, you can
deploy ECSs in the active/standby mode or deploy one active ECS and multiple standby
ECSs. In this way, all the ECSs use the same virtual IP address. If the active ECS becomes
faulty, a standby ECS takes over services from the active ECS and services continue
uninterrupted.

• Networking mode 2: HA load balancing cluster

If you want to build a high-availability load balancing cluster, use Keepalived and
configure LVS nodes as direct routers.
• After you create a security group, you can create different access rules for the security
group to protect the ECSs that it contains. You can create, modify, and delete security
groups, add multiple security group rules and replicate security group rules, modify,
delete, import or export security group rules, view the security group of an ECS, modify
the security group of an ECS, and add cloud resources to or remove them from a
security group.
• Similar to security groups, network ACLs control access to subnets and add an
additional layer of defense to your subnets. Security groups only have the "allow"
rules, but network ACLs have both "allow" and "deny" rules. You can use network ACLs
together with security groups to implement access control that is both comprehensive
and fine-grained.

• You can create, view, modify, delete, enable, and disable network ACLs, associate
subnets with or disassociate them from network ACLs, and add, modify, change the
sequence of, enable, disable, and delete network ACL rules.
• When you host a large number of applications on the cloud, if each EIP uses an
independent bandwidth, a lot of bandwidths will be required, which significantly
increases bandwidth costs. If all EIPs share the same bandwidth, you can lower
bandwidth costs and easily perform system O&M.

• You can assign, modify, delete a shared bandwidth, add EIPs to a shared bandwidth,
and remove EIPs from a shared bandwidth.
• If your services experience a significant increase in traffic during a specific period (for
example, on Black Friday), you may need to temporarily increase your bandwidth. You
can purchase a bandwidth add-on package with a specific validity period. When the
add-on package expires, your bandwidth automatically reverts to its original
specifications.

• You can purchase, modify, and unsubscribe from bandwidth add-on packages.
• You can create a VPC peering connection with another VPC in your account or with a
VPC in another account. You can also view, modify, and delete VPC peering
connections.
• Answer: ABCD
• A load balancer distributes incoming traffic across multiple backend servers in one or
more AZs.
• You can add one or more listeners to a load balancer. A listener specifies a set of rules
based on which the load balancer receives requests and routes the requests to backend
servers. For an HTTP or HTTPS listener, you can add forwarding policies to forward
requests based on domain names or URLs.
• A backend server group uses the protocol and port you specify to forward requests to
one or more backend servers.
• You can configure health checks for each backend server group to check the health
status of each backend server in the group. When a backend server is unhealthy, ELB
automatically stops routing new requests to it and distributes them to healthy servers
until the unhealthy one recovers.
• Summary

▫ The round robin algorithm is suitable for short connections, while the least
connections algorithm is suitable for persistent connections.

▫ Weighted round robin and weighted least connections are often used in scenarios
where the performance of servers in a backend server group varies.
Both public and private network load balancers support the following protocols:

- TCP: load balancing at Layer 4

- UDP: load balancing at Layer 4

- HTTP: load balancing at Layer 7

- HTTPS: encrypted load balancing at Layer 7


The load balancer uses the following algorithms to distribute traffic:

• Weighted round robin: Requests are routed to different servers based on their weights,
which indicate server processing performance. Servers with the same weight process an
equal number of connections.

• Weighted least connections: In addition to the weight assigned to each server, the
number of connections processed by each backend server is also considered. Requests
are routed to the server with the lowest connections-to-weight ratio.

• Source IP hash: The source IP address of the client is input into a consistent hashing
algorithm, and the resulting hash is used to identify a server in the static fragment
table.
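• The sketch below illustrates the selection logic of two of these algorithms in Python. ELB's internal implementation is not public, so this is only an illustration under the descriptions above; the server names, weights, and connection counts are made up, and the hash-based mapping is simplified (a real implementation would use consistent hashing to limit remapping when servers are added or removed).

    import hashlib

    # Hypothetical backend servers: name -> (weight, current connections)
    servers = {"srv-a": (3, 12), "srv-b": (1, 2), "srv-c": (2, 5)}

    def weighted_least_connections(servers: dict) -> str:
        """Pick the server with the lowest connections-to-weight ratio."""
        return min(servers, key=lambda name: servers[name][1] / servers[name][0])

    def source_ip_hash(servers: dict, client_ip: str) -> str:
        """Map a client IP to a server via a hash so the same client keeps
        reaching the same backend (simplified; not true consistent hashing)."""
        names = sorted(servers)
        digest = hashlib.md5(client_ip.encode()).hexdigest()
        return names[int(digest, 16) % len(names)]

    print(weighted_least_connections(servers))      # srv-b (ratio 2/1 is the lowest)
    print(source_ip_hash(servers, "203.0.113.7"))   # always the same server for this IP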

ELB supports the following types of sticky sessions:

• Source IP address: The hash of the source IP address is used to identify a server in the
static fragment table.

• Load balancer cookie: The load balancer generates a cookie after it receives a request
from a client. All the subsequent requests with the same cookie are distributed to the
same backend server.

• Application cookie: This type of sticky sessions relies on backend applications. All
requests with the cookie generated by backend applications are routed to the same
backend server.
• Answers:

▫ A, B, and C

▫ A, B, C, and D
• By default, ECSs in a VPC cannot communicate with your data center or private
network. To enable communication between them, use a VPN.
• ABCD
• A connection is a dedicated network connection between your on-premises data center
and a Direct Connect location over a line you lease from a carrier. You can create a
standard connection by yourself or request a hosted connection from a partner. After
you are certified as a partner, you can also create an operations connection.

• A virtual gateway is a logical gateway for accessing VPCs. Each VPC can have only one
virtual gateway associated. However, a virtual gateway can be associated with multiple
connections. If you have multiple connections and require access to one VPC, you can
associate your connections with the same virtual gateway to access the same VPC.

• A virtual interface serves as an entrance for you to access VPCs through a connection.
A virtual interface links a connection with one or more virtual gateways, each of which
is associated with a VPC, so that your on-premises data center can access all these
VPCs.
• For details about the configuration, see
https://support.huaweicloud.com/intl/en-us/bestpractice-dc/dc_05_0001.html.
• BC
• 1. A VPC peering connection is a network connection between two VPCs. The two VPCs
can communicate with each other using private IP addresses, just as if they were in the
same network. In the same region, you can create a VPC peering connection between
your own VPCs or between your own VPCs and the VPCs of other accounts. VPC
peering connections cannot be created between VPCs in different regions. 2. Routing
rules are defined in the route table and are used to route traffic destined for a
specified network segment to the specified destination.
• To use HUAWEI CLOUD, you need to register an account using your mobile number.
The account owns your HUAWEI CLOUD resources and has full access permissions for
the resources. You can use the account to reset the passwords of IAM users and assign
permissions to IAM users. The account makes payments for the resources used by IAM
users. To log in to the HUAWEI CLOUD management console using an account, choose
Account Login.

• Forgotten passwords can be reset.


• An account and its IAM users share a parent-child relationship. The account
administrator owns the cloud resources and has full permissions for these resources.
The administrator creates IAM users and assigns specific permissions to them. The
administrator can modify or cancel the IAM users' permissions at any time. The bills
generated as a result of resources used by IAM users are paid by the administrator.
• Identity credentials confirm the identity of a user when the user accesses HUAWEI
CLOUD through the console or APIs. Identity credentials include the password and
access key, which can be managed in IAM.

▫ Password: A common identity credential for logging in to the HUAWEI CLOUD
management console or calling HUAWEI CLOUD APIs.

▫ Access key: Comprises an access key ID (AK) and secret access key (SK) pair that
is used when HUAWEI CLOUD is accessed using APIs. Access keys cannot be used
to log in to the console. Each access key provides a signature for cryptographic
authentication to ensure that access requests are secret, complete, and correct.
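• The snippet below is only a generic illustration of how an AK/SK pair can be used to sign a request with HMAC. It is not the exact HUAWEI CLOUD API signing algorithm (see the official API signature documentation for that), and the key values and string to sign are placeholders.

    import hashlib
    import hmac

    ACCESS_KEY_ID = "EXAMPLE_AK"        # placeholder
    SECRET_ACCESS_KEY = "EXAMPLE_SK"    # placeholder; never hard-code real keys

    def sign_request(string_to_sign: str, secret_key: str) -> str:
        """Return an HMAC-SHA256 signature of the request string (generic example)."""
        digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha256)
        return digest.hexdigest()

    signature = sign_request("GET /v3/projects 2024-01-01T00:00:00Z", SECRET_ACCESS_KEY)
    print(ACCESS_KEY_ID, signature)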
• After users log in to the management console, the users can create and delete access
keys on the My Credentials page. If an IAM user does not have permissions to log in to
the management console, the user can request the administrator to manage access
keys in IAM.

• Precautions

▫ Access keys have unlimited validity, and each user can have a maximum of two
access keys. Each access key can be downloaded only when it is created. For
security purposes, change the access keys of IAM users periodically. You can
delete an access key and create a new one, but you cannot modify access keys.
• The default user group admin has all permissions for all cloud resources. Users in this
group can perform operations on all resources, including but not limited to creating
user groups and users, assigning permissions, and managing resources.
• Authorization is the process of granting required permissions for a user to perform
operations on specific resources. After a system-defined or custom policy is assigned to
a user group, users in the group inherit the permissions defined by the policy to
manage specific resources. For example, managing Elastic Cloud Servers (ECSs).
• For more refined access control, create subprojects under a project and purchase
resources in the subprojects. IAM users can then be assigned permissions to access only
specific resources in the subprojects.
• You can delegate resource access only to HUAWEI CLOUD accounts. The accounts can
then delegate access to their IAM users.

• For example, assume that account A wants to delegate account B to manage its
resources.

▫ First, account A needs to create an agency.

▫ Second, account B can assign permissions to an IAM user to manage specific
resources of account A. This step can be ignored if account B itself will manage
account A's resources.

▪ Account B needs to create a user group, and grant it permissions required
to manage account A's resources.

▪ Account B then creates a user and adds the user to the user group.

▫ Finally, account B or the authorized IAM user can manage account A's resources.

▪ They log in to HUAWEI CLOUD and switch roles to account A.

▪ Then, they need to switch to region A and manage account A's resources in
this region.
• Refined permissions management
▫ With IAM, you can authorize IAM users to manage specific resources in your
account. For example, you can authorize Charlie to manage Virtual Private Cloud
(VPC) resources in project B and authorize James to view VPC data in this project.
• Secure access
▫ Instead of sharing your account password with others, you can create IAM users
for employees or applications in your organization and generate identity
credentials for them to securely access your resources based on assigned
permissions.
• Critical operation protection
▫ IAM provides login and critical operation protection. When IAM users created
using your account log in to the console or perform a critical operation, they
need to complete authentication by email, SMS, or virtual MFA device. This
function can keep your account and resources secure.
• Assigning permissions by user group
▫ You do not need to assign permissions to each user. Instead, you can manage
users by group and assign permissions to the group. Each user then inherits
permissions from the groups to which the user belongs. If you need to change
the permissions of a user, you only need to remove the user from the original
groups or add the user to other groups.
• Project-based resource isolation
▫ You can create subprojects in each region to isolate resources.
• Federated identity authentication
▫ This function allows enterprises with identity authentication systems to access
HUAWEI CLOUD through single sign-on (SSO). The enterprises do not need to
create users on HUAWEI CLOUD.
• If you have purchased multiple resources on HUAWEI CLOUD, such as Elastic Cloud
Server (ECS), Elastic Volume Service (EVS), and Bare Metal Server (BMS), for different
teams or applications in your enterprise, you can create IAM users for the team
members or applications and grant them permissions required to complete specific
tasks. IAM users use their own usernames and passwords to log in to HUAWEI
CLOUD and access resources under their account.

• In addition to IAM, HUAWEI CLOUD provides Enterprise Management to control access
to cloud resources. Enterprise Management supports more fine-grained permissions
management and enterprise project accounting management. You can choose either of
the two services according to your requirements.
• For example, after you create an agency for a professional O&M company, the
company can use its own account to manage the specified resources. You can modify
or cancel the delegated permissions at any time. In this figure, account A is the
delegating party, and account B is the delegated party.
• If your enterprise has an identity system, you can create an identity provider in IAM to
provide single sign-on (SSO) access to HUAWEI CLOUD for employees in your
enterprise. The identity provider establishes a trust relationship between your
enterprise and HUAWEI CLOUD, allowing the employees in your enterprise to access
HUAWEI CLOUD using their existing enterprise accounts.
• Graph Engine Service (GES)

▫ Delegate GES to access other cloud services, for example, to bind your EIP to the
primary load balancer if a failover occurs.

• Take an SFS agency as an example.

▫ Go to the SFS console.

▫ On the Create File System page, enable static data encryption.

▫ A dialog box is displayed requesting you to confirm the creation of an SFS
agency. After you click OK, the system automatically creates an SFS agency with
KMS Administrator permissions for the current project. With the agency, SFS can
obtain KMS keys for encrypting or decrypting file systems.

▫ You can view the agency in the agency list on the IAM console.
• Web SSO: Browsers are used as the communication media. This authentication type
enables common users to access HUAWEI CLOUD using browsers.

• API calling: Development tools (such as OpenStack Client and Shibboleth ECP Client)
are used as the communication media. This authentication type enables enterprise
users or common users to access HUAWEI CLOUD by calling APIs.
• To implement federated identity authentication between an IdP and HUAWEI CLOUD,
you need to complete three steps:

▫ Establish a trust relationship and create an identity provider. Exchange the
metadata files of HUAWEI CLOUD and the enterprise IdP to establish a trust
relationship.

▫ Configure identity conversion rules. Map the users, user groups, and permissions
in the identity provider to HUAWEI CLOUD.

▫ Configure a login link. Configure a login link in the enterprise management
system, allowing users to access HUAWEI CLOUD through SSO.
• Visual editor: Select a cloud service, specify actions and resources, and add request
conditions. You do not need to have knowledge of JSON syntax.

• JSON: Create JSON policies from scratch or based on an existing policy.
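• As a rough illustration, a JSON custom policy is a small document built from statements with an effect and a list of actions. The sketch below expresses such a policy as a Python dictionary; the field names and action strings are assumptions modeled on typical policy syntax, so copy the real structure from an existing system-defined policy or the IAM documentation rather than from this example.

    import json

    # Illustrative custom policy: allow listing and viewing ECS servers only.
    # Field names and action strings are assumptions; verify against a real policy.
    custom_policy = {
        "Version": "1.1",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["ecs:servers:list", "ecs:servers:get"],
            }
        ],
    }

    print(json.dumps(custom_policy, indent=2))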


• IAM Users

▫ IAM users are created using an account in IAM or Enterprise Management. They
are managed and granted permissions by the account. Bills generated by the IAM
users' use of resources are paid by the account.

▫ In an enterprise, if there are multiple employees who need to use the resources
purchased from HUAWEI CLOUD through an account, the account can be used
to create IAM users for these employees and assign permissions to the users for
using resources. The IAM users have their own passwords for accessing the
resources under the account.

• Enterprise Member Accounts

▫ Enterprise master accounts and member accounts are generated upon successful
registration with HUAWEI CLOUD using mobile numbers. Accounting
Management of Enterprise Management allows multiple HUAWEI CLOUD
accounts to be associated with each other for accounting purposes. You can
create a hierarchical organization and a master account, add member accounts
to this organization, and associate them with the master account.

• The master account can then allocate funds to the member accounts for them to
manage resources.
• Answer: A and B
• Cloud Eye provides the following functions:
▫ Automatic monitoring: Monitoring starts automatically after you have enabled
cloud services and created related resources, for example, Elastic Cloud Servers
(ECSs). Then on the Cloud Eye console, you can view the service running status
and set alarm rules for those resources.
▫ Server monitoring: After installing the Agent on an ECS or a Bare Metal Server
(BMS), you can collect ECS or BMS monitoring data at a 60-second granularity in
real time. 40 metrics, such as CPU, memory, and disk metrics, are provided.
▫ Flexible alarm rule configuration: You can create alarm rules for multiple
resources at the same time. After an alarm rule is created, you can flexibly
manage it; for example, you can modify, enable, disable, or delete it at any time.
▫ Real-time notification: Users can enable Simple Message Notification (SMN)
when creating alarm rules. When the cloud service status changes and the
monitoring data of the metric reaches the threshold specified in an alarm rule,
Cloud Eye notifies you by text messages, emails, or by sending messages to
server addresses. In this way, users can monitor the cloud resource status and
changes in real time.
▫ Monitoring panel: The panel enables you to view cross-service and cross-
dimension monitoring data. It displays key metrics centrally, providing an
overview of the service running status and allowing monitoring details to be
checked when troubleshooting.
▫ OBS dump: Raw data for each metric is kept for only two days on Cloud Eye. If
you need to retain data for longer, enable Object Storage Service (OBS), and raw
data will be automatically synchronized to OBS and saved.
• Automatic enablement: Cloud Eye is automatically enabled for all users. You can use
the Cloud Eye console or APIs to view the service running status and set alarm rules.

• Real-time and reliable monitoring: Raw data is reported to Cloud Eye in real time for
monitoring of cloud services. Cloud Eye generates alarms, and sends notifications in
real time.

• Monitoring visualization: Cloud Eye monitoring panels provide you with rich
monitoring graphs supporting automatic data refresh and multi-metric comparison.

• Multiple notification types: You can enable the SMN service when creating alarm rules.
When the metric data reaches the threshold specified in an alarm rule, Cloud Eye
notifies users by emails or text messages, allowing you to keep track of the running
status of cloud services. Cloud Eye can also send HTTP/HTTPS requests to an IP
address of your choice, helping you build smart alarm handling programs.

• Batch creation of alarm rules: Alarm templates allow you to quickly create alarm rules
for multiple cloud services.
• Advantages:

▫ Editable Monitoring Panel

▪ Provides you with key system monitoring information on panels.

▫ Alarm-Triggered Scaling

▪ Based on the configured alarm rules, Cloud Eye automatically triggers AS to
add ECSs when a service peak arrives within a short period of time.

▫ Comprehensive Server Monitoring

▪ Fine-grained monitoring of rich network metrics helps you analyze and
identify network bottlenecks.
• Advantages

▫ Alarm-Triggered Auto Scaling

▪ Based on the configured alarm rules, servers automatically scale in/out
once the thresholds are reached.

▫ Login and Security Log Monitoring

▪ Login logs are monitored in real time. Malicious login requests will be
rejected and alarms will be generated.

▫ Comprehensive Server Monitoring

▪ Fine-grained monitoring of rich network metrics helps you analyze and
identify network bottlenecks.
• Server monitoring comprises basic monitoring, OS monitoring, and process monitoring
for servers.

▫ Basic monitoring provides Agent-free monitoring for basic ECS or BMS metrics.

▫ OS monitoring provides system-wide, active, and fine-grained monitoring for
servers, and requires Agents to be installed on the servers that will be monitored.

▫ Process monitoring is used to monitor active processes on hosts.

• Functions

▫ Server monitoring provides more than 40 metrics, such as metrics for CPU,
memory, disk, and network, to meet the basic monitoring and O&M
requirements for servers.

▫ After the Agent is installed, data of Agent-related metrics is reported once a
minute.

▫ CPU usage, memory usage, and number of opened files used by active processes
give you a better understanding of the resource usages on ECSs or BMSs.
• Currently, website monitoring is free.

• The website monitoring function is available in the CN North-Beijing1 region. If you
want to use this function in other regions, the user group that you belong to must be
assigned permissions for cn-north-1 [CN North-Beijing1] through IAM.

• Advantages

▫ You can create, modify, disable, enable, or delete monitors.

▫ The configuration is simple and quick, allowing you to improve efficiency and
save resources that you would otherwise use to configure complex open source
products.

▫ You receive notifications of website exceptions in real time.


• Events are key operations on cloud service resources that are stored and monitored by
Cloud Eye. You can view events to see operations performed by specific users on
specific resources, such as delete ECS or reboot ECS.

• Event monitoring is enabled by default.

• Event monitoring provides an API for reporting custom events, which helps you collect
and report abnormal events or important change events generated by services to
Cloud Eye.

• The difference between custom event monitoring and custom monitoring lies in the
data to be monitored:

▫ Custom event monitoring monitors data for non-continuous custom events that
you report to Cloud Eye.

▫ Custom monitoring monitors periodically and continuously collected data.

• Cloud Eye uses the SMN service to send notifications to users. This requires you to
create a topic and add subscriptions to this topic on the SMN console. Then when you
create alarm rules on Cloud Eye, you can enable the alarm notification function and
select the created topic. When an exception occurs, Cloud Eye can send the alarm
information to the subscriptions in real time.

• The Alarm Rules function supports enterprise projects. If an alarm rule is associated
with an enterprise project, only users who have the permission of the enterprise project
can view and manage the alarm rule.
• The rollup period can be 5 minutes, 20 minutes, 1 hour, 4 hours, or 1 day.

• The methods Cloud Eye uses to process collected data vary depending on data type:

▫ If the data is an integer, the data will be rounded.

▫ If the data is a decimal fraction, two decimal places will be retained. Any further
decimal places will be rounded.

• For example, the instance quantity in AS is an integer. If the rollup period is 5 minutes
and the collected metric values are 1 and 4, the average value is 2 instead of 2.5.

• You can choose the proper rollup method to meet your service requirements.
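• A short sketch of the rounding behavior described above, using the AS instance-quantity example. Note that Python's built-in round() happens to reproduce the documented result (the average of 1 and 4 becomes 2), although it uses banker's rounding rather than always rounding .5 up; treat the helper as an illustration, not Cloud Eye's exact implementation.

    def rollup_average(values, is_integer_metric: bool):
        """Average metric values over a rollup period and apply the rounding rules
        described above (sketch only)."""
        avg = sum(values) / len(values)
        if is_integer_metric:
            return round(avg)       # integer metrics are rounded to a whole number
        return round(avg, 2)        # decimal metrics keep two decimal places

    # AS instance quantity is an integer metric: values 1 and 4 -> 2, not 2.5.
    print(rollup_average([1, 4], is_integer_metric=True))
    # Decimal metrics keep two decimal places.
    print(rollup_average([1.234, 2.346], is_integer_metric=False))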
• If an instance is disabled, stopped, or deleted, its metrics will be deleted one hour after
the raw data reporting of those metrics stops.

• When the instance is enabled or restarted, raw data reporting of its metrics will
resume. If the instance has been disabled or stopped for less than two days or for less
than the rollup data retention period, you can view the historical data of its metrics
generated before these metrics are deleted.
• If Avg. is selected for Statistic, Cloud Eye calculates the average value of metric data
within a rollup period.

• If Max. is selected for Statistic, Cloud Eye calculates the maximum value of metric data
within a rollup period.

• If Min. is selected for Statistic, Cloud Eye calculates the minimum value of metric data
within a rollup period.

• If Sum is selected for Statistic, Cloud Eye calculates the sum of metric data within a
rollup period.

• If Variance is selected for Statistic, Cloud Eye calculates the variance value of metric
data within a rollup period.
• Answers:

▫ ABCDE
• In traditional O&M, system faults and human errors can only be located manually.

• By examining the logs collected by CTS, you can quickly locate faults and determine
whether they are caused by system exceptions or misoperations. This makes fast fault
rectification possible.
• What makes CTS special? Compared with traditional auditing, CTS has advantages in
the following aspects:

▫ IT environment

▫ Access security

▫ IT control

▫ Data security
• You can use CTS to track operations such as creation, use, and deletion.

• CTS stays synchronized with other services and records their operations in real time,
making it a good choice for securing your systems and keeping audit logs.

• CTS can record operations on a variety of resources. Audit logs document operation
time, users or devices that initiate operations, details about target resources, and other
information as required by typical compliance obligations.

• In addition, audit log transmission is highly encrypted. Logs are also encrypted and
checked for integrity before storage.

• CTS can be interconnected with SMN to notify you of key events and with
FunctionGraph to trigger functions. It also supports log data analysis.
• Usually, a code defined in the CTS documentation in the HUAWEI CLOUD website will
be returned. However, code 302 shown in the figure is a common status code and is
not described in the documentation. The following codes will be returned for
operations about trace management. You can identify the faults based on the error
code descriptions.

▫ 200: The request is normal.

▫ 400: The query parameters are abnormal.

▫ 401: The request is rejected due to authentication failure.

▫ 403: The server understood the request but refused to authorize it.

▫ 404: The requested trace does not exist.

▫ 500: The server has received the request but encountered an internal error.

▫ 503: The requested service is unavailable. The client should not repeat the
request without modifications.

• The following codes will be returned for operations about tracker management. You
can identify the faults based on the error code descriptions.

▫ 201: The request is successful.

▫ 400: The server failed to process the request.

▫ 404: The requested OBS bucket does not exist.
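• The small sketch below simply maps the trace-management status codes listed above to their descriptions, for example for logging or basic client-side handling. It only restates the codes given here; undocumented codes (such as 302) fall through to a generic message.

    # Status codes for trace management operations, as listed above.
    TRACE_STATUS_CODES = {
        200: "The request is normal.",
        400: "The query parameters are abnormal.",
        401: "The request is rejected due to authentication failure.",
        403: "The server understood the request but refused to authorize it.",
        404: "The requested trace does not exist.",
        500: "The server received the request but encountered an internal error.",
        503: "The requested service is unavailable; do not repeat the request without modifications.",
    }

    def describe_trace_status(code: int) -> str:
        """Return the documented description for a trace-management status code."""
        return TRACE_STATUS_CODES.get(
            code, "Undocumented status code; see the CTS error code reference."
        )

    print(describe_trace_status(401))
    print(describe_trace_status(302))   # common HTTP code not covered by the list above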


• Trace files can be transferred from CTS to OBS. If the OBS bucket policies are
incorrectly configured, file transfer will fail. You will only be able to check traces
retained in CTS in the last 7 days.

• CTS can work with SMN to send notifications of specified operations performed on
major HUAWEI CLOUD services, such as Elastic Cloud Server (ECS), Image
Management Service (IMS), Bare Metal Server (BMS), Cloud Container Instance (CCI),
Cloud Container Engine (CCE), Auto Scaling (AS), FunctionGraph, and Cloud Phone.
You can configure CTS to alert you of sensitive operations, such as cloud service
creation and deletion.

• By working with DEW, CTS provides management on keys and SSH key pairs for data
encryption. Keys are protected by third-party hardware security modules (HSMs) and
managed by KMS. KMS implements access control over keys and tracks the use of all
keys to meet compliance requirements. In terms of SSH key pair management, SSH2
key pairs created on the console support only the RSA-2048 algorithm, whereas
imported SSH key pairs support the RSA-1024, -2048, and -4096 algorithms.

• In addition, integrity verification for trace files is supported. Refer to the parameter
descriptions on Page 13 during the configuration. The time, service_type, resource_type,
trace_status, and trace_type parameters are required for an integrity verification. Set
the other parameters based on the requirements of specific services.
• Traces can be transferred to and permanently stored in OBS buckets for future
backtracking.

• CTS, together with SMN, can alert you of sensitive operations.


• Answer: C
• Log Tank Service (LTS) enables you to collect, query, conduct statistics on, and process
large volumes of logs.

• It supports log collection using ICAgent, SDKs, and open-source collectors such as
Fluentd and Filebeat.

• PB-level log storage, and high-performance log collection and query are provided. For
example, search of 1 billion data records takes only seconds, and an agent can collect
logs at a rate of 100 MB/s.

• In addition, logs can be transferred to OBS and DIS for Kafka to categorize data into
hot and cold data.
• You can create log groups directly on the LTS console. In addition, when other
HUAWEI CLOUD services are interconnected with LTS, it automatically creates log
groups and log streams, to which the logs of the interconnected services will be sent.

• Data is written to and read from a log stream. You can configure logs of different
types, such as operation logs and access logs, to be written into different log streams.
The ICAgent will package and send the collected log data to LTS on a log-stream basis.
To view logs, you can go to the corresponding log stream for quick query. In short, the
use of log streams greatly reduces the number of log reads and writes and improves
efficiency.

• An agent is a log collection tool that sends logs from the host where it is installed to
LTS. When you use LTS to collect logs for the first time, you need to install the ICAgent.
Batch installation is supported if you need to collect logs from multiple hosts. The
running status of the ICAgent is displayed in real time on the LTS console.
• Real-time log collection

▫ Logs are collected in real time and displayed on the LTS console in a clear and
orderly manner for you to quickly query. Logs can be stored for a long time as
required.

• Log query and real-time analysis

▫ Collected logs can be quickly queried by keyword or fuzzy match, which enables
efficient real-time log analysis, security diagnosis and analysis, operations, and
customer services. Metrics of operations, such as cloud service visits and clicks,
can be generated through log analysis.

• Log monitoring and alarms

▫ LTS works with Application Operations Management (AOM) to collect statistics on
keywords of logs stored in LTS. It monitors the service running status in real time
based on the frequency of keyword occurrences in logs.

• Log transfer

▫ After being reported to LTS, logs of hosts and cloud services are retained for
seven days by default. You can also set the retention period to a value ranging
from 1 to 30 days. Logs older than the retention period will be automatically
deleted. For long-term storage or persistent logging, you can transfer logs to
Object Storage Service (OBS) or Data Ingestion Service (DIS).
• Log collection and analysis

▫ Without proper management, logs of hosts and cloud services are too numerous
or messy to be queried and are cleared periodically. LTS displays collected logs
on the console in a clear and orderly manner for fast query, and can be stored
for a long time if necessary. Log query supports search-by-keyword and fuzzy
match, enabling efficient real-time log analysis, security diagnosis and analysis,
operations, and customer services. Metrics of operations, such as cloud service
visits and clicks, can be generated through log analysis.

• Service performance optimization

▫ Performance and quality of website services, such as databases and networks,


play an important role in customer satisfaction. By analyzing the network
congestion logs, you can pinpoint the performance bottlenecks of your websites,
and take measures such as improving website caching policies or network
transmission policies to optimize performance. To be specific, you can

• Quick locating of network faults

▫ Network quality is the cornerstone of service stability. LTS aggregates logs from
different sources, helping you detect and locate faults in a timely manner and
enabling backtracking.
• Collected logs are sent to their corresponding log streams and groups. If there are a
large number of logs, you are advised to name log groups and log streams in an
identifiable way, so that you can quickly find your desired logs.

• A log group name must meet the following requirements:

▫ Only letters, digits, hyphens (-), underscores (_), and periods (.) are allowed.

▫ It cannot start or end with a period (.).

▫ It must be a string of 1 to 64 characters.
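• A minimal sketch of how the three naming rules above could be checked before creating a log group; illustrative only, since the LTS console performs its own validation and the function name is hypothetical.

    import re

    # Allowed characters: letters, digits, hyphens, underscores, and periods;
    # must not start or end with a period; length 1 to 64 characters.
    LOG_GROUP_NAME_PATTERN = re.compile(r"^(?!\.)[A-Za-z0-9._-]{1,64}(?<!\.)$")

    def is_valid_log_group_name(name: str) -> bool:
        """Check a log group name against the naming rules listed above (sketch)."""
        return bool(LOG_GROUP_NAME_PATTERN.fullmatch(name))

    print(is_valid_log_group_name("lts-group_web.frontend"))   # True
    print(is_valid_log_group_name(".starts-with-period"))      # False
    print(is_valid_log_group_name("a" * 65))                   # False (too long)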

• You can set the log retention duration to a value ranging from 1 to 30 days. If the
duration is not specified, logs are retained for 7 days by default. LTS provides a free
quota of 500 MB per month for log read and write, log storage, and log indexing.
When the quota is used up, you will be billed based on log usage.
• Answer: ABCD
• Structured Query Language (SQL) is an interactive, easy-to-use database language
that allows you to define and manipulate data. Database management systems should
make full use of the SQL language to improve the quality and efficiency of the
computer application systems. SQL can not only be independently used on terminals,
but also be used as a sub-language for programming design. In database applications,
SQL can work with other programming languages to provide you with comprehensive
information.
▫ MySQL was created by a Swedish company, MySQL AB, which was bought by
Sun Microsystems (now Oracle Corporation). MySQL is one of the most popular
relational database management systems. It is also one of the best types of
RDBMS software for web applications. MySQL uses dual-licensing distribution. It
is available in two editions: Community Edition and Commercial Edition. MySQL
is the best choice for small or medium-sized websites because of its small size,
fast speed, low cost, and especially because of its open source nature.
▫ PostgreSQL is an object-relational database management system (ORDBMS)
derived from POSTGRES, Version 4.2, developed at the University of California,
Berkeley. POSTGRES pioneered many concepts that only became available in
commercial databases much later.
▫ NoSQL refers to non-relational databases. With the emergence of Web 2.0
websites, traditional relational databases started lagging far behind NoSQL
databases in terms of processing ultra-large-scale and highly concurrent SNS
website requests. NoSQL databases are used to solve challenges stemming from
handling multiple data types of large-scale data collections, especially where big
data applications are concerned. NoSQL databases come in a variety of types
based on different data models. The main types are key-value pair, wide column,
document, and graph.
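• For a concrete sense of how an application talks to a relational database such as RDS for MySQL, the sketch below connects from Python and runs a simple SQL statement. The host, port, credentials, and database name are placeholders, and the pymysql driver is only one of several MySQL clients you could use.

    import pymysql  # third-party driver: pip install pymysql

    # Placeholder connection details; use the private address shown on the RDS
    # console and a database account you created for the instance.
    connection = pymysql.connect(
        host="rds-instance.example.internal",
        port=3306,
        user="app_user",
        password="example-password",
        database="orders",
    )

    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT VERSION()")   # simple sanity-check query
            print(cursor.fetchone())
    finally:
        connection.close()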
• Elastic scaling: Horizontal scaling: Read replicas (up to five for each instance) can be
created or deleted. Vertical scaling: DB instance classes can be modified and storage
space can be scaled (up to 4 TB).

• Backup and restoration: Backup: Various backup types are provided: automated,
manual, full, and incremental backups. Backup files can be added, deleted, queried, or
replicated. Restoration: Data can be restored to any point in time within the backup
retention period, or to a new or an original DB instance. The backup retention period is
up to 732 days.

• Log management: Slow query logs and error logs can be queried and downloaded.

• Parameter configuration: Database administrators (DBAs) can adjust DB engine
parameter configurations based on monitoring metrics and log information for
database tuning. DB engine parameters can be added, deleted, modified, queried,
reset, compared, and replicated through parameter template management.
• PostgreSQL is an open source object-relational database management system focused
on extensibility and standards compliance. It is known as the most advanced open
source database. RDS for PostgreSQL is designed for enterprise-oriented online
transaction processing (OLTP) scenarios and supports NoSQL (JSON, XML, or hstore)
and geographic information system (GIS) data types. It has earned a reputation for
reliability and data integrity, and is suitable for websites, location-based applications,
and complex data object processing.

• RDS for PostgreSQL supports the postgis plugin, which provides excellent spatial
performance meeting international standards. RDS for PostgreSQL offers functions
similar to Oracle databases but at a lower cost.

• RDS for PostgreSQL is a cost-effective solution for a range of different scenarios. You
can flexibly scale resources based on service requirements and you only pay for what
you use.
• PostgreSQL is an open source object-relational database management system focused
on extensibility and standards compliance. It is known as the most advanced open
source database. RDS for PostgreSQL is designed for enterprise-oriented OLTP
scenarios and supports NoSQL (JSON, XML, or hstore) and GIS data types. It has
earned a reputation for reliability and data integrity, and is suitable for websites,
location-based applications, and complex data object processing.

• RDS for PostgreSQL supports the postgis plugin, which provides excellent spatial
performance meeting international standards. RDS for PostgreSQL offers functions
similar to Oracle databases but at a lower cost.

• RDS for PostgreSQL is a cost-effective solution for a range of different scenarios. You
can flexibly scale resources based on service requirements and you only pay for what
you use.
• GaussDB Helps Huawei Consumer Cloud Implement Smart Operations
• Service requirements and challenges:
▫ Huawei Consumer Cloud Big Data platform centrally stores and manages service
data by using databases in a hybrid architecture incorporating Hadoop and MPP.
It faces the following challenges: 1. Services expand rapidly. The amount of data
being handled expands by over 30% every year. 2. The data analysis platform
must support real-time analysis to provide an intelligent user experience. 3.
Independent report development and visual analysis are required.
• Solutions:
▫ On-demand scaling-out makes it easy to quickly expand capacity.
▫ SQL on HDFS is used for real-time analysis in ad hoc exploration. Kafka flow
data is imported to databases fast to generate real-time reports.
▫ Key technologies, such as multi-tenant load management and approximate
computing, are used for efficient report development and visual analysis.
• Customer benefits:
▫ On-demand scaling-out is performed without service interruptions.
▫ After the new data analysis model is brought online, analysis results can be
obtained in real time, and marketing precision is improved by more than 50%.
▫ The query and analysis response time of typical visualized reports is reduced from
several minutes to no more than 5 seconds, and the report development period is
reduced from 2 weeks to just half an hour.
• Financial industry pain points:

▫ User traffic and generated data volumes are unpredictable. User experience is
affected during peak hours, and services may even need to be stopped in order to
expand capacity.

• Pain points of large Internet companies and traditional enterprises:

▫ Service volumes are large and database sharding solutions for open-source
databases are complex.

▫ Enterprise customers prefer commercial databases (SQL Server and Oracle), but
the licensing fees are high.
• A database is a warehouse where data is organized, stored, and managed based on its
structure.

• Common database types: MySQL, PostgreSQL, SQL Server, Oracle, and MongoDB.

• Application scenarios: SaaS applications, e-commerce, social websites, mobile apps,
gaming applications, and government websites

• MySQL is one of the world's most widely used open-source databases, especially in the
Internet industry, and delivers excellent read performance.

• PostgreSQL is a powerful open-source database. It provides more complete transaction
support, more powerful functions, and better stability than MySQL, making it the
preferred alternative to Oracle. It offers powerful location-based functions.

• SQL Server is an internationally popular commercial relational database. Compared
with other commercial databases (such as Oracle and DB2), SQL Server is more
cost-effective. It provides the same level of transaction support, but at a lower price.
• Benefits: Cloud databases improve database O&M efficiency and enable customer
database teams to devote more time to database architecture design.

• Cloud databases help you reduce the total cost of ownership (TCO) and O&M
workload, freeing you to focus on developing key services.

• Note:

▫ Database high availability: primary/standby DB instances (hot backup), Huawei-
developed high availability module, and failover within seconds

▫ Backup and restoration: 732-day backup retention period, point-in-time restore,
and snapshots

▫ Elastic scaling: scaling within minutes (CPU, memory, and storage), and
transparent read/write splitting through a database proxy
• Ready for use – quick rollout
▫ RDS helps you easily complete the entire process, from project concept to
production deployment. There is no need to install database software or deploy
database servers. A production-ready relational database is available within just
a few minutes. You only pay for the resources you actually use. In the early stage,
you do not need to invest much in infrastructure. You can start from DB
instances with low specifications and flexibly scale resources as required.
• Reliable – easy and care-free
▫ RDS runs on highly reliable infrastructure. Primary/standby DB instances can be
deployed within an AZ or across AZs. Data is automatically synchronized from a
primary DB instance to a standby DB instance. If the primary DB instance fails,
services are quickly and automatically switched over to the standby DB instance.
RDS also provides other functions to enhance database reliability, including
automated backups, manual backups, and disaster recovery capabilities.
• Easy to manage – visible and controllable
▫ With RDS, you can easily create, manage, and scale databases. O&M operations
like managing database connections, migration, backup and restoration, and
monitoring have also been simplified. The Cloud Eye console displays key
performance metrics for DB instances, including the CPU usage, memory usage,
storage space usage, I/O activity, and database connections.
• Console Home: a unified public cloud portal. After you log in to the console and
choose RDS, you are redirected to the RDS console.

• RDS console: a self-service web-based console where you can input parameters
required for DB instance management commands. For example, when creating a DB
instance, you can set the DB engine and version, specifications, storage space, and
automated backup policy on the RDS console.

• RDS management plane: RDS backend services that manage DB instance creation,
configuration, and other operations. In most cases, the RDS service connects to the
RDS instance to deliver and execute management commands. When compute
resources need to be obtained, for example, when an instance needs to be created, the
IaaS services need to be invoked to apply for resources.

• ECS/EVS/VPC: IaaS services on the public cloud that provide scalable compute, storage,
and network resources for users. RDS applies for resources from these services to build
an environment for DB instances.

• RDS instance plane: includes DB engines and required tools, such as backup tools. After
DB instances are created and initialized, applications in other ECSs can connect to the
DB instances to read and write data.

• Cloud Eye: RDS periodically reports DB instance statuses to Cloud Eye. Cloud Eye stores
and displays monitoring data and reports alarms when the monitoring data exceeds
specified thresholds.

• OBS: Object Storage Service on the public cloud, used to store RDS backups.
• ECS: Elastic Cloud Server connects to RDS DB instances over a private network to
reduce the application response time and avoid public traffic fees.

• VPC: Virtual Private Cloud isolates DB instances and controls access to RDS DB
instances.

• OBS: Object Storage Service stores automated and manual backups of RDS DB
instances.

• Cloud Eye: Monitors RDS resources in real time. It reports alarms and issues warnings
promptly to ensure that services are running properly.

• CTS: Cloud Trace Service records operations on cloud service resources for user query,
audit, and backtrack.

• DBSS: Database Security Service protects databases from attacks and ensures database
security on the cloud.

• DCS: Distributed Cache Service caches hot data to accelerate access to databases and
improve user experience.

• DDM: Distributed Database Middleware connects to multiple RDS MySQL DB instances
and allows you to access distributed databases.

• DRS: Data Replication Service smoothly migrates databases to the cloud.

• DAS: Data Admin Service provides a GUI for you to manage cloud databases,
improving the efficiency and security of database management.
• Inexpensive
▫ Immediately ready for use, elastic scaling, full compatibility, and easy O&M
• High-performance
▫ Performance optimized, SQL optimized, high-quality hardware infrastructure,
and high-speed access
• Secure
▫ Network isolation, access control, encrypted transmission and storage, data
deletion, anti-DDoS, and security assurance
• Reliable
▫ Primary/standby DB instances (hot backup), and data backup and restoration
• Automated backups: RDS keeps automated backups based on your specified retention
period. You can restore data to any point in time within the backup retention period.

• Manual backups: They are retained unless you manually delete them.
• Before logging in to the HUAWEI CLOUD management console, you need to create a
HUAWEI CLOUD account.

• Before buying an RDS DB instance, ensure that your account balance is greater than $0
USD.

• After the DB instance is created, the DB engine cannot be changed. Therefore, exercise
caution when selecting the DB engine.

• MySQL is one of the world's most popular open-source relational databases. It works
with Linux, Apache, and PHP to establish a LAMP stack, thereby providing efficient web
solutions. RDS for MySQL has significantly enhanced read/write performance, scaling
capability, backup and restoration, and fault tolerance.

• PostgreSQL is a typical open-source relational database that ensures data reliability
and integrity. It supports Internet e-commerce, location-based application systems,
financial insurance systems, complex data object processing, and other application
scenarios.

• Microsoft SQL Server is an internationally popular commercial relational database. It
integrates various Microsoft management and development tools. RDS for SQL Server
is authorized by Microsoft and supports Windows-based applications. It's reliable,
scalable, secure, cost-effective, easy to manage, and immediately ready for use, freeing
you to focus on developing applications.
• After binding an EIP to a DB instance, you can access the DB instance through the EIP.

▫ Log in to the management console.

▫ Select a region and a project.

▫ Click Service List. Under Database, click Relational Database Service to go to the
RDS console.

▫ On the Instance Management page, locate the target DB instance and click Log
In in the Operation column.

▫ On the displayed login page, enter the correct username and password and click
Log In.
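• Besides the console login shown above, an application can connect directly over the
EIP. A minimal sketch (Python with pymysql, assuming an RDS for MySQL instance
with an EIP bound and a security group rule that opens port 3306; the address and
credentials are placeholders):

    # Minimal sketch: connect to an RDS for MySQL instance through its EIP.
    # The EIP, account, and database name below are placeholders.
    import pymysql

    conn = pymysql.connect(host="121.36.x.x",      # EIP bound to the DB instance
                           port=3306, user="app_user", password="********",
                           database="appdb", connect_timeout=5)
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        print("Connected, server version:", cur.fetchone()[0])
    conn.close()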
• Currently, RDS does not allow you to change the types of instance classes. For
example, if the current instance class is a general-purpose instance with 2 vCPUs and 4
GB of memory, you can upgrade it to 4 vCPUs and 8 GB of memory, but you cannot
change it into a general-enhanced II instance.

• When you change the DB instance classes, the following message is displayed:
"Changing the instance class will cause the DB instance to reboot. To prevent service
interruption, wait until off-peak hours to perform this operation." For primary/standby
DB instances, a failover is triggered during the instance class change. The time required
for an instance class change depends on the failover duration and is irrelevant to the
DB instance reboot duration.
• You can create a maximum of 100 parameter templates by default. All RDS DB
engines share the same parameter template quota.
• Rebooting a primary DB instance will automatically cause all the read replicas
associated with it to also be rebooted.

• You can reboot a DB instance only when its status is Available. Your database may be
unavailable in some cases such as when data is being backed up or some modifications
are being made.

• The time required for rebooting a DB instance depends on the crash recovery of the
DB engine. To shorten the reboot time, you are advised to reduce database activities
during the reboot so that fewer in-flight transactions need to be rolled back.

• For primary/standby DB instances, if you reboot the primary DB instance, the standby
DB instance is also rebooted automatically.
• Constraints:

▫ DB instances that are currently being created cannot be deleted.

▫ If you delete a pay-per-use DB instance, its automated backups are also deleted
and you are no longer charged for them. Manual backups are still retained and
will incur additional costs.
• This document focuses on security issues faced by tenants and security
services provided on HUAWEI CLOUD.

• Source: Cloud Security Alliance, The Treacherous 12 - Top Threats to Cloud
Computing + Industry Insights

• A hardware security module (HSM) is a hardware device that securely
produces, stores, manages, and uses CMKs. In addition, it provides
encryption processing services.

• KMS is used for encrypting small-size data, large volumes of data, and data in
OBS, EVS, IMS, and RDS. KPS is used for logging in to a Linux ECS and
obtaining the password for logging in to a Windows ECS. Dedicated HSM
is used for encrypting data in your service system.

• SQL injection is an attack in which malicious code is inserted into strings
that are later passed to an instance of SQL Server for parsing and
execution.
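• To make the risk concrete, here is a minimal sketch (Python; sqlite3 is used only
so the example is self-contained, and the table and values are illustrative) contrasting
string concatenation with a parameterized query:

    # Minimal sketch: SQL injection vs. a parameterized query.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    attacker_input = "nobody' OR '1'='1"

    # Vulnerable: the input becomes part of the SQL text and is parsed as code.
    unsafe_sql = "SELECT * FROM users WHERE name = '" + attacker_input + "'"
    print(conn.execute(unsafe_sql).fetchall())        # returns every row

    # Safer: the value is bound as data, so the OR clause is never parsed as SQL.
    print(conn.execute("SELECT * FROM users WHERE name = ?",
                       (attacker_input,)).fetchall())  # returns no rows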
• Malicious programs refer to programs with the intention to attack or
perform remote control, for example, backdoors, Trojan horses, worms,
and viruses.

• Key files refer to the files that may affect system running, for example,
system files.

• A security policy is a security rule that must be followed when a container
is running. If a container violates a security policy, the system reports a
container exception error. Security policies are defined by users, including
processes allowed to run by containers and read-only files in containers.
• A CC attack is a type of DoS attack in which the attacker uses a proxy
server to generate and send seemingly legitimate requests to a target
server.

• Precise protection: Groups multiple common HTTP fields, such as the
URL, IP, Params, Cookie, Referer, User-Agent, and Header, together to
customize a policy. You can also block or allow the traffic based on logic
conditions.

• E-mall promotion protection prevents CC attacks by blocking massive
malicious requests to ensure website availability.
• The OSI reference model consists of the following layers: physical layer,
data link layer, network layer, transport layer, session layer, presentation
layer, and application layer. Anti-DDoS can withstand multi-layered
(layers 4 to 7) attacks.

• Border Gateway Protocol (BGP) is a standardized exterior gateway
protocol designed to exchange routing and reachability information among
autonomous systems (AS) on the Internet.
• Currently, the following database types are supported: SQL Server (from
version 2008 to 2014), MySQL (from version 5.5 to 5.7), and PostgreSQL
(from version 9.4 to 9.5).

• The primary WAF benefit is protection for custom web applications' "self-
inflicted" vulnerabilities in web application code developed by the enterprise, and
protection for vulnerabilities in off-the-shelf web application software.
• Regular engine: Defends against OWASP top 10 attacks.

• Semantic engine: Protects against XSS/SQL injection attacks.

• AI engine: Fends off APTs and zero-day vulnerabilities.

• XSS attacks are a type of injection, in which malicious scripts are injected
into otherwise benign and trusted websites. XSS attacks occur when an
attacker uses a web application to send malicious code, generally in the
form of a browser side script, to a different end user.
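• A minimal sketch (Python standard library; the comment string is illustrative) of why
such input must be escaped before it is rendered back to other users:

    # Minimal sketch: escape untrusted input before rendering it in HTML.
    import html

    user_comment = '<script>alert("stolen cookie")</script>'   # attacker-supplied

    unsafe_page = "<p>" + user_comment + "</p>"                # script would run
    safe_page = "<p>" + html.escape(user_comment) + "</p>"     # displayed as text
    print(safe_page)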
• An origin server is the server where users' services are running.

• An origin server IP address is the Internet IP address used by users to
provide services.

• A back-to-source IP address is an IP address provided by AAD for
security purposes. From the origin servers' perspective, all traffic returned to
customers is sent from the back-to-source IP address.

• Answer: A, B, C, and D.

• A and B are mainly used to defend against web attacks and vulnerabilities.
C is used to ensure website reliability and D is used to prevent website
services from DDoS attacks. Misunderstanding: Only services that are
classified into application security can ensure website security.
• When an end user accesses a website that uses HUAWEI CLOUD CDN, the
local DNS server will redirect all domain requests to CDN using the CNAME
method. Then, based on a group of preconfigured policies (including content
types, geographical locations, and network loads), CDN provides the end user
with the IP address of the CDN node that can respond the fastest, enabling the
end user to obtain the desired content faster than would have otherwise been
possible.
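• A minimal sketch (Python with the dnspython package, version 2.x; the domain name
is hypothetical) of how the CNAME redirection described above can be observed:

    # Minimal sketch: look up the CNAME record of a CDN-accelerated domain.
    # The domain below is hypothetical.
    import dns.resolver

    domain = "www.example-accelerated-site.com"
    try:
        for record in dns.resolver.resolve(domain, "CNAME"):
            print(domain, "resolves through CNAME:", record.target)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(domain, "returned no CNAME record")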
• HUAWEI CLOUD is the first CDN vendor with self-developed software and
hardware. Our CDN solution has strong compatibility and provides fast and reliable
acceleration. Its uptime is up to 99.9%. Your data is yours alone. We respect data
privacy. HUAWEI CLOUD CDN provides advanced security functions, such as HTTPS
secure transmission and hotlink protection, to protect your content from piracy. Based
on a global IP geolocation database, CDN delivers a scheduling success rate of up to
99%. An easy-to-use console is provided for you to configure domain names and
create custom configurations.
• HUAWEI CLOUD CDN works together with top carriers to ensure that there is always
plenty of bandwidth. Limited bandwidth is no longer a bottleneck. The HUAWEI
CLOUD CDN backbone runs on only the best performing nodes in tier 1 and 2 cities,
ensuring fast and stable acceleration, and HUAWEI CLOUD CDN also leverages peer-
to-peer connections with small- and medium-sized carriers in China, such as Great
Wall Broadband, Tietong, Wasu, and 21Vianet. Good relationships with carriers make
it easier to respond quickly to DNS hijacking.
• The three billing options are mainly for small- and medium-sized customers. The
validity period of a prepaid traffic package is one year. The package automatically
expires when the validity period ends, and any remaining traffic is discarded.

• Traffic-based and peak bandwidth-based billing are settled on the following day when
the usage occurred. Choose traffic-based billing if your site's traffic flow cannot be
predicted and daily bandwidth usage is smaller than 30%. Choose peak bandwidth-
based billing if your site's traffic flow is predictable and daily bandwidth usage is
greater than 30%.
Notes:
1. The Dashboard page and the Statistical Analysis page on the CDN console display the
logged traffic statistics of accelerated domain names. These statistics are logged at the
application level. However, the billable traffic (the actual network traffic) is 7% to 15%
higher than the displayed statistics on the Dashboard and Statistical Analysis pages due
to TCP/IP packet header overhead and TCP retransmissions. Therefore, according to
the industry standard, the billable traffic is 10% higher than the statistics monitored by
logs.
2. Bandwidth usage = Traffic used per day (GB)/(Peak bandwidth (Mbit/s) x 10.54). The
number 10.54 indicates that a bandwidth of 1 Mbit/s at 100% usage approximately
generates traffic of 10.54 GB per day.
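• A short worked example (the daily figures are illustrative) of applying this formula when
choosing between the two billing options:

    # Worked example of the bandwidth usage formula above; figures are illustrative.
    traffic_per_day_gb = 200      # traffic used in one day (GB)
    peak_bandwidth_mbps = 100     # peak bandwidth that day (Mbit/s)

    # 1 Mbit/s at 100% usage generates roughly 10.54 GB of traffic per day.
    usage = traffic_per_day_gb / (peak_bandwidth_mbps * 10.54)
    print(f"Bandwidth usage: {usage:.0%}")   # about 19%, below the 30% threshold,
                                             # so traffic-based billing fits better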
• Billing by 95th percentile bandwidth and average daily peak bandwidth are provided
for customers whose monthly consumption is greater than ¥100,000. These two billing
options are settled on a monthly basis.

• In 95th percentile bandwidth billing, the bandwidth is measured and recorded every 5
minutes on each valid day. At the end of the month, the records are sorted from the
highest to the lowest, and the top 5% of the recorded bandwidth values are thrown
away. Then the highest bandwidth value in the remaining records is the billable
bandwidth of the month.

• In average daily peak bandwidth billing, by the end of a calendar month, the system
calculates the average bandwidth based on the peak bandwidth of each valid day in
this month. The average value is the billable bandwidth of the month and the bill is
generated based on the contract price. For details about the two billing options,
contact Huawei solution managers.
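• A minimal sketch (randomly generated 5-minute samples standing in for real
carrier-grade records) of how the two monthly billable bandwidth values described
above are derived:

    # Minimal sketch of the two monthly billing calculations described above.
    import random

    days = 30
    samples_per_day = 24 * 60 // 5           # one bandwidth record every 5 minutes
    month = [[random.uniform(50, 300) for _ in range(samples_per_day)]
             for _ in range(days)]           # Mbit/s, stand-in data

    # 95th percentile billing: sort all records, discard the top 5%,
    # and bill the highest remaining value.
    samples = sorted(s for day in month for s in day)
    billable_95th = samples[int(len(samples) * 0.95) - 1]

    # Average daily peak billing: average the peak bandwidth of each valid day.
    billable_avg_peak = sum(max(day) for day in month) / days

    print(f"95th percentile: {billable_95th:.1f} Mbit/s, "
          f"average daily peak: {billable_avg_peak:.1f} Mbit/s")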

• Whole site acceleration is billed based on the number of requests initiated by users to
the system and the consumed bandwidth each day. The usage generated on the
current day will be billed on the following day.
• Cloud video services are streaming media services developed based on cloud
computing. These cover the entire process from video capture to video playback. They
allow customers to build a professional video system inexpensively and efficiently.

• Benefits: Cloud video services take advantage of cloud characteristics to greatly lower
barriers to entry when developing online video services. They help enterprises stay
focused on their core businesses.
• Ease-of-use and inexpensive O&M
• You can access VOD on a web-based management console or using SDKs and APIs.

• You pay only for what you use. There is no need to consider the cost of infrastructure.
• A superior viewing experience
• Fast transcoding
• VOD uses parallel transcoding to transcode a single input into multiple outputs with
different resolutions.
• Fast content distribution
• Video resources are cached on CDN nodes. When users request the content, nearby
CDN nodes serve the content directly, speeding up content distribution and improving
user experience.
• Security and reliability
• Hotlink protection prevents other websites from linking to your resources. Video
encryption and playback authentication safeguard your video assets.
• Multi-level assurance
• The extremely reliable Object Storage Service (OBS) ensures secure storage of even
massive amounts of resources. The HUAWEI CLOUD monitoring system and service
system ensure 24/7 technical support.
• Uploading a local video is used as an example.

1. Log in to the VOD console.

2. In the navigation pane, choose Audio and Video Uploads > Local Upload.

3. Click Upload File. The Upload File dialog box is displayed.

4. Click Add File to add a local media file, or directly drag a file to the file area.

5. Switch on Preventing Duplicate File Uploads to avoid wasting time and storage space.

6. Switch on Editing to allow Web Video Editor to process uploaded media files.

7. Categorize uploaded files, change the file name, or process them.

8. Click Upload.

9. View media file information on the Audio and Video Management page.
• VOD is billed on a pay-per-use basis by default. You only pay for what you use. You
can also buy economical resource packages. VOD provides traffic, transcoding, and
storage packages.
• VOD billing consists of media management, media processing, and content distribution.
In the billing items, peak bandwidth and video snapshots are settled once a day, and
other billing items are settled once an hour.
• Media Processing Center (MPC) transcodes your media files online, inexpensively,
efficiently, and at any scale. MPC combines object storage and cloud computing to
convert your media into the formats you need for playback on devices like
smartphones, PCs, and TVs. It also provides functions such as frame capture, content
analysis, video encryption, animated GIFs, and watermarking to meet a wide range of
your requirements.
• Video Encoding

• Many aspects of Huawei's HEVC/H.265 encoder ranked No. 1 in the 2018 MSU Video
Codecs Comparison (subjective quality, objective quality, and compression ratio).
Huawei has been actively engaged in the R&D and formulation of the next-generation
video encoding standard VVC/H.266.

• Low Bitrate HD

• HUAWEI CLOUD's low bitrate HD technology and H.265 encoder use a 20% to 40%
lower bitrate than standard transcoding without compromising video quality. This
significantly reduces the traffic and storage costs.

• Image Enhancement

• Image enhancement repairs old and damaged video files and upscales LD videos to
HD ones to provide users a hassle-free viewing experience.

• Fast Transcoding

• HUAWEI CLOUD provides distributed transcoding and auto scaling to achieve high-
speed and high-volume transcoding.

• Diverse Functions

• Video transcoding, image enhancement, frame capture, image watermarks, and video
encryption meet your diverse media requirements.
• Requirements and Challenges
• Scenario: Huya provides live streaming services in China, and needs to keep latency low
even when there are a huge number of concurrent viewers.
• Users are geographically distributed. User experience matters.

• Huawei Solution
• Stream pushing and content distribution to countless users while keeping latency low
• Live recording, on-demand playback, repurposing live content for on-demand viewing,
live content review, and stream banning to meet diverse needs on a variety of devices
• Multiple stream pushing and pulling methods, such as origin pull, forwarding, and
peer-to-peer (P2P), and intelligent routing for streaming to nearby CDN nodes
• Logs and metrics provided by Cloud Eye to ensure user experience.

• Customer Benefits
• Live video starts playing instantly with a playback success rate of more than 99% and
freeze rate of less than 2.5%.
• The uptime during live streaming is greater than 99.9%. Huya handles high
concurrency and a peak bandwidth of 300 Gbit/s.
• Multi-level security mechanisms such as hotlink protection, playback authentication,
and video encryption provide Huya with comprehensive protection against video
pirating.
• Answer: ABCD
• Simply put, everything in the world, from watches and keys to home appliances,
automobiles, and buildings, can become "smart" after being embedded with
smart microchips. With the help of various communications network
technologies, people can "talk" with objects, and objects can "communicate" with
each other. This is the Internet of Things.
• The survey data of 1,096 companies comes from 11 vertical industries in 17
countries.
• 280 million connections (by the end of 2019): 180 million for China Telecom, 50
million for HiLink, 7 million for PSA, 1 million for cities, 1 million for public cloud,
and the remaining for others
• Core competitive advantages:

▫ OpenCPU architecture: The MCU and communication module are integrated
into one, significantly reducing the terminal size and costs.

▫ Open-source technologies: Open-source IoT solutions are provided with
various SDKs to implement multiple functions.

▫ Security design: A secure transmission mechanism with low power consumption
supports two-way authentication, differential upgrade of FOTA firmware, and
DTLS+.

▫ OTA remote upgrade: The optimized differential combination algorithm
reduces demand on RAM resources.

▫ Lightweight and low-power framework: The minimum size is 6 KB. The
Tickless mechanism is used to reduce the power consumption of data
collection.

▫ Device-cloud synergy and plug-and-play: Multiple protocols are supported
for simplified, secure, and standard access.

• What does Global SIM Link provide?


• Data packages (data volume/card/month)
• Within China: 10 MB, 30 MB, 100 MB, 500 MB, 1 GB, 2 GB, and 10
GB
• Outside China: 10 MB, 30 MB, and 100 MB
• SIM card type: eSIM, vSIM, and traditional SIM cards
• Typical application scenarios

▫ Basic scenario: requiring wireless connections

▪ Challenges: IoT devices are deployed at different locations, such as
rooftops and aisles, making it impossible to lay cables everywhere.

▪ Typical industries: three meters (electricity meter, water meter, and
gas meter), POS machines, shared bicycles, and shared electric vehicles

▪ Solution: Global SIM Link provides 2G/3G/4G wireless cellular
connections for IoT devices.

▫ Extended scenario 1: using data outside China


▪ Challenges: Enterprises produce devices inside China and sell them
outside China. To use data outside China, you need to negotiate with
multiple carriers outside China. Some enterprises do not have channels
outside China.

▪ Typical industries: automotive enterprises (sold outside China), logistics,
transportation, and healthcare

▪ Solution: Global SIM Link provides wireless cellular data for 100+
countries/regions.
• Device Access Service supports connections of massive amounts of devices to the
cloud, bidirectional message communication between devices and the cloud,
batch device management, remote control and monitoring, OTA upgrade, and
device linkage rules. The service flexibly transfers device data to other HUAWEI
CLOUD services, helping IoT industry users quickly complete device networking
and application integration.

• Core competitive advantages:

▫ Multiple access protocols and access modes for various devices and access
scenarios

▪ Multi-protocol access: Popular native access protocols
(CoAP/MQTT/HTTP) and mainstream industry protocols (Modbus and
OPC UA) are supported (see the MQTT sketch after this list).

▪ Generic-protocol access: Industry-customized protocols allow device
access through cloud bridges or protocol plug-ins.

▫ Simplified fast access, and one-stop integration and interconnection

▪ Quick access experience: The online wizard-based guidance is
provided to complete the entire device access process in four steps.

▪ Serialized Agent SDKs: Open-source serialized AgentTiny/Lite SDKs
and demos are provided.
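• A minimal sketch of MQTT access (the open-source paho-mqtt 1.x client; the broker
address, device credentials, and topic are placeholders, since the actual platform defines
its own device authentication and topic conventions):

    # Minimal sketch: a device reporting a property over MQTT.
    # Broker address, credentials, and topic below are placeholders.
    import json
    import paho.mqtt.client as mqtt

    client = mqtt.Client(client_id="device-001")
    client.username_pw_set("device-001", "device-secret")
    client.connect("iot-access.example.com", 1883, keepalive=60)

    payload = json.dumps({"temperature": 23.5, "battery": 87})
    client.publish("devices/device-001/properties/report", payload, qos=1)
    client.disconnect()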
• Core competitive advantages

▫ Quick construction

▪ The IoT web application can be developed online in 10 minutes.

▫ Custom reports

▪ Service fields, data sources, and charts can be flexibly customized.
Hundreds of millions of data records can be pre-processed.

▫ Diversified release modes

▪ Applications developed on IoT Studio can be seamlessly embedded
into self-owned applications of enterprises.

▫ Multi-level management

▪ Cross-domain layered management is supported.

▫ Cost effectiveness

▪ Logical multi-tenant mode is supported (resources are shared
logically). The basic edition costs CNY 2,000 per year.
• In addition to cloud platform protection, Huawei attaches great importance to
IoT device protection, including the following mechanisms:
▫ Multiple device access authentication modes are supported to ensure that
devices can connect to the cloud platform after authentication.
▫ Secure transmission channels are used between devices and the cloud
platform to prevent sensitive information from being stolen or tampered
with. In addition, IoT devices powered by batteries support DTLS+ for low
power consumption, prolonging battery lifespans.
▫ Firmware over-the-air (FOTA) is supported. Currently, many IoT attacks are
on firmware. If firmware can be updated in time, hacker attacks can be
prevented.
• Security assurance is provided for devices, pipes, and the cloud. Based on
HUAWEI CLOUD's security capabilities and the security management platform,
HUAWEI CLOUD IoT cloud services provide E2E security for device access and
connection, cloud services, and user data.
• Device
▫ Remote upgrade: SOTA and FOTA
▫ Multiple authentication methods: one-machine-one-secret, one-model-one-
secret, and digital certificates
▫ Secure OS: LiteOS, an open-source, lightweight IoT OS developed by
Huawei

• Surveyed 252 enterprises in 3 months and supported the upgrade and
transformation of 76 enterprises.

• Organized 20+ professional training courses/salons and cultivated 300+
professionals.

• Worked with enterprises to formulate three IoT industry standards.

• Completed a review of the local IoT industry chain within one month.

• Attracted 30 leading IoT enterprises to join local IoT projects.

• Promoted cooperation between enterprises and achieved a transaction volume
exceeding CNY 10 million.

• Invited five enterprises to join the HUAWEI CLOUD IoT ecosystem.


• Solution

▫ Fully connected: Data is aggregated and shared by supporting multi-protocol
access of 16 device subsystems from different manufacturers, including
security and energy consumption subsystems.

▫ Visualized: An Intelligent Operation Center (IOC) developed on natural
language interaction manages building events and devices, and displays
all-dimension information in real time.

▫ Intelligent: HUAWEI CLOUD EI and IoT Edge services are used to
intelligently detect faces, intrusions, foot traffic, smoke, and fire.

• Advantages
▫ Efficient: Aggregated data is visualized and shared from end to end.

▫ Smart: A large number of connections and AI capabilities promote security
protection and bring more business value.

▫ Green: With big data analytics, energy consumption is reduced by 15%.


• Volume: model and growth rate of unstructured data

▫ 80% to 90% of the total data volume

▫ 10 to 50 times faster growth than structured data

▫ 10 to 50 times larger than the traditional data warehouses

• Variety: heterogeneity and diversity of big data

▫ Various forms of data, such as text, video, and machine data

▫ No pattern or no clear pattern

• Value: low value density

▫ In-depth analysis required to predict trends and models

• Velocity: real-time analysis and result display


• 1. Unified data storage eliminates silos and reduces redundant data.

• 2. Unified management enables resource sharing and management automation.

• 3. Batch processing and stream processing are implemented simultaneously, and
data can be queried using different computing models.
• Huawei started big data research in 2007. Before 2017, Huawei focused on
providing big data platform software for carriers and enterprise networks, and
occupied a large market share in the finance, public security, and transportation
fields. Huawei's big data evolved from open source and is optimized in terms of
the scale, security, performance, and reliability of large clusters. In addition,
Huawei has contributed its optimization results to the community, leads the Apache
top-level open-source project CarbonData, and has multiple PMC members and committers.
• Huawei one-stop big data solution: building full-lifecycle, full-stack services
throughout the data development process

• The solution helps customers build a unified big data platform for data access,
data storage, data analysis, and value mining, and interconnects with HUAWEI
CLOUD IoT, ROMA platform, DLF, and DLV to help customers easily resolve
difficulties in data channel cloudification, big data job development and
scheduling, and data display.
• Enterprise-class: enterprise-class scheduling to isolate resources between different
jobs; SLA assurance for multi-level users

• Easy O&M: no need to purchase and maintain hardware; cluster monitoring and
management performed in the enterprise cluster management system

• High security: MRS has passed the security certification test of the German company
PSA. It features role-based security control and sound audit functions based on
Kerberos authentication

• Low cost: compute and storage decoupling; on-demand cluster creation and
deletion reduces costs by 90%

• DAYU: One-Stop Data Operations Platform for Enterprise Data Governance

▫ Full-link visualization development

▫ Real-time + batch data integration

▫ Interconnection with 20+ heterogeneous data sources


• Data Warehouse Service (DWS) is an online data processing database based on a
public cloud infrastructure and platform. It provides scalable, fully managed, and
out-of-the-box analytic database services. It is a native cloud service based on the
Huawei converged data warehouse GaussDB, and is fully compatible with ANSI
SQL 99 and SQL 2003 standards, as well as the PostgreSQL and Oracle database
ecosystems. DWS provides competitive solutions for PB-level big data analytics in
various industries.
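• Because DWS is compatible with the PostgreSQL ecosystem, a standard PostgreSQL
driver can usually be used to query it. A minimal sketch (Python with psycopg2; the
cluster address, credentials, and table are placeholders):

    # Minimal sketch: run an analytic aggregation against a DWS cluster
    # through a standard PostgreSQL driver. All identifiers are placeholders.
    import psycopg2

    conn = psycopg2.connect(host="dws-cluster.example.com", port=8000,
                            dbname="analytics", user="dbadmin",
                            password="********")
    cur = conn.cursor()
    cur.execute("SELECT region, SUM(amount) AS total_sales "
                "FROM sales WHERE sale_date >= %s "
                "GROUP BY region ORDER BY total_sales DESC", ("2020-01-01",))
    for region, total in cur.fetchall():
        print(region, total)
    cur.close()
    conn.close()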

• DWS is widely used in domains such as finance, Internet of Vehicles (IoV),
government and enterprise, e-commerce, energy, and telecom. It has been listed
in the Gartner Magic Quadrant for Data Management Solutions for Analytics for
two consecutive years. Compared with conventional data warehouses, DWS is
more cost-effective and has large-scale scalability and enterprise-level reliability.

• DWS uses the Huawei-developed GaussDB 200 database kernel and is
compatible with PostgreSQL 9.2.4. GaussDB 200 has evolved from a single OLTP
database into an enterprise-grade, MPP-based, distributed OLAP database
oriented to massive data analysis.

• Compared with conventional data warehouses, DWS excels in hyper-scale data
processing and general platform management, and delivers the following
benefits:
• Customer requirements:
▫ Ad effectiveness monitoring: Monitor ad marketing in all domains, evaluate
ad visibility effects, trigger online warnings of abnormal traffic, and display
ad effects on the GUI.
▫ Conversion evaluation: Quantitatively and accurately evaluate the
conversion effects of channels based on ad monitoring, for example, self-
media visitors visiting the official website.
▫ Data management platform: Activate data assets and manage them in a
fine-grained way, and effectively integrate multiple production, supply, and
sales terminals and various data sources to eliminate data silos.
• Solution advantages:
▫ Computing is isolated from storage. Service data is stored on OBS and
metadata is externally placed on RDS. Compute and storage resources can
be used on demand, which is more cost-effective.
▫ Rolling patch upgrade can be performed without interrupting services.
▫ Professional service support and performance optimization help shorten
service rollout time by 30%.
▫ HUAWEI CLOUD big data services are more stable and efficient, improving
service performance by 3 times.
• Challenges
▫ Difficult computation: Hundreds of millions of data points are collected per
second. Traditional technologies are unable to process massive amounts of
unstructured data.
▫ Poor data quality: Repeated data extraction from various service systems
results in low data quality.
▫ Data sharing difficulty: Complex power production management, inefficient
cross-profession collaboration, and siloed service systems make data
difficult to share.
• Benefits
▫ Unified data collection: A unified platform is provided for integrating APIs,
messages, and data, enabling quick data integration.
▫ Intelligent data lake: Open architecture with decoupled storage and
compute, one copy of data, and multiple computing engines are supported.
Multi-layer storage supports automatic data dumping and optimal selection
of storage based on data access frequency.
▫ Intelligent data operations: A data asset management platform is built
based on the data map, where unified data service governance,
development, visualization, and openness capabilities are supported, and a
one-stop digital operations system is set up.
• Machine learning can be understood in multiple ways. Tom Mitchell, known as
the Father of Machine Learning, defines machine learning as follows: A computer
program is said to learn from experience E with respect to some class of tasks T
and performance measure P, if its performance at tasks in T, as measured by P,
improves with experience E. The definition is relatively simple and abstract. As we
deepen our understanding of machine learning, we will find that the connotation
and extension of machine learning keep changing over time. The concept of
"machine learning" is not easy to define simply because it involves a wide range
of fields and applications and develops and changes rapidly.

• It is generally agreed that machine learning uses its processing system and
algorithm to make predictions by finding patterns hidden in data. It is an
important subfield of artificial intelligence (AI), which intersects with more
extensive fields, such as data mining (DM) and Knowledge Discovery in
Databases (KDD).
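• A minimal sketch (Python with scikit-learn and its bundled digits dataset) instantiating
the definition: the task T is digit classification, the experience E is a set of labeled
images, and the performance measure P is accuracy, which improves as more
experience is provided:

    # Minimal sketch: performance P at task T improves with experience E.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for n in (50, 200, 800):                  # growing amounts of experience E
        model = LogisticRegression(max_iter=5000)
        model.fit(X_train[:n], y_train[:n])
        p = accuracy_score(y_test, model.predict(X_test))
        print(f"trained on {n} examples -> accuracy {p:.2f}")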
• Continuous upgrading, 60+ services, 160+ APIs, and 10+ industry pre-integration
solutions

• Automatic differentiation, automatic parallelization, and simple and efficient AI
development

• Multi-layer open custom operator development system

• Powerful computing power for AI development based on Ascend 310, Ascend
910, and GPU
• Huawei HiLens Advantages

▫ Device-cloud synergy inference: The device analyzes the collected data
locally, greatly reducing the data moved to the cloud.

▫ Skill development framework: The easy-to-use skill development
framework lightens the skill development workloads of developers on the
device side.

▫ Device model optimization: During the skill development phase on the
cloud, the device model is optimized by being split, quantized, and pruned
to protect privacy.

▫ Cross-platform design: HiSilicon chips and other mainstream chips are
supported, meeting the requirements of mainstream monitoring scenarios.

▫ Preset abundant AI skills: Skills applicable to multiple scenarios are preset in
the skill market. Users can quickly deploy skills on devices without
development.

▫ Developer community: After developing new skills, developers can share
them with other developers as templates or release them to the skill
market for users to install.
• Background

▫ OCR outperforms competitors in identification of express waybills, ID cards,
and receipts.

▫ Visual services

▪ Content Moderation is mainly used for reviewing the content
(including videos, images, text, and voice) published by Internet
companies and detecting pornography, terrorism-related content, and
sensitive political information. In image and video recognition,
Content Moderation delivers strong performance in identifying images
among images. For detection of sensitive political information,
Huawei provides the most comprehensive library of political figures,
detailed down to the county level.

▪ Face and human body recognition: This is currently used in campus
scenarios. Associated human bodies are captured based on faces to
achieve more accurate personal tracking and access control without
delays.

▪ CBS: Phonebots are mainly used for survey questionnaires and deliver
good performance in express and recruitment industries.
• ASR and TTS
▫ Automatic Speech Recognition (ASR): converts speech into text, with an
accuracy of 97%. This technology has been verified on a large scale; it
supports speech recognition for 60 seconds of speech under 4 MB.
▫ Long speech recognition: Supports speech recognition for up to 4 hours of
speech.
▫ Real-time streaming speech recognition: converts continuous audio streams
into text in real time, enabling faster speech recognition.
▫ Text To Speech (TTS): converts text into lifelike voices of various styles, such
as voices of different genders.
• NLP and CBS
▫ Natural Language Processing (NLP): uses machine learning and other
technologies to provide the abilities of natural language understanding,
analysis, and utilization.
▫ Question Answering Bot (QABot): helps enterprises quickly build, publish,
and manage intelligent Q&A bots.
▫ Task-oriented Conversational Bot (TaskBot): helps build intelligent bots that
understand and help complete tasks.
▫ Speech Analytics (SA): analyzes conversations between clients and call
centers.
▫ CBS Customization (CBSC): customizes natural language models aligned to
business interests.

• Every 15 minutes, a woman dies of cervical cancer. In China, 130,000 people
suffer from cervical cancer every year, but there is a gap of at least 90,000
pathologists in China. KingMed Diagnostics cooperates with the HUAWEI CLOUD
EI visual team to improve the sensitivity and specificity of examination to 99%
and 80% respectively using AI technologies. This is close to the level of first-class
pathologists in China and the United States.
• Intelligently reporting alarms when an elderly person or a child falls down. When
a fall is detected, the camera can be remotely locked onto the fallen person to
trace their movement.
• Automatically generating alarms to a user when a baby is crying. Specific words
such as "Daddy" and "Mommy" can be detected to generate alarms for a user.
• Timely detection and reporting of intrusions, enabled by human figure detection
and abnormal sound detection, for example, breaking glass or explosions
• Retrieving videos of a specified family member from historical videos


• Selecting images with smiling faces automatically for future entertainment
collection or editing
• Identifying specific appliances such as air conditioners and TVs; supporting
adjustment of an appropriate camera selector action for appliances
• Identifying fruits, vegetables, and meat to fulfill monitoring requirements of
home appliances and kitchens
• Answer: ABCD
