
Datacenter Virtualization Solution
2.1.0

Datacenter Virtualization Solution Product Documentation

Issue Date: 2025-06-09


Copyright © Huawei Technologies Co., Ltd. 2025. All rights reserved.


No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions

Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective holders.

Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services
and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all
statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, either
express or implied.

The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of
the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied.

Huawei Technologies Co., Ltd.


Address: Huawei Industrial Base
Bantian, Longgang
Shenzhen 518129
People's Republic of China

Website: https://www.huawei.com

Email: support@huawei.com


Contents
1 Library Information
1.1 Change History
1.2 Conventions
Symbol Conventions
General Conventions
Command Conventions
Command Use Conventions
GUI Conventions
GUI Image Conventions
Keyboard Operations
Mouse Operations
1.3 How to Obtain and Update Documentation
Obtaining Documentation
Updating Documentation
1.4 Feedback
1.5 Technical Support
2 Product Overview
2.1 Solution Overview
2.1.1 Datacenter Virtualization Solution
Definition
Features
2.1.2 Multi-Tenant Service Overview
2.2 Application Scenario
2.3 Solution Architecture
2.3.1 Functional Architecture
2.3.2 Basic Network Architecture
2.4 Software Description
2.4.1 FusionCompute
Overview
Technical Highlights
2.4.2 eDME
Overview
Application Scenarios
2.4.3 eBackup
Overview
Virtual Backup Solution
2.4.4 UltraVR
Overview
Highlights
2.4.5 HiCloud
Overview
Highlights
2.4.6 eDataInsight
Overview
Application Scenarios
2.4.7 iMaster NCE-Fabric
Overview
Highlights
2.4.8 eCampusCore
Overview
Application and Data Integration Service
2.4.9 eContainer
Overview
Application Scenarios
2.5 Hardware Description
2.5.1 Server
2.5.2 Switch
2.5.3 Storage Device

2.5.4 iMaster NCE-Fabric Appliance
2.6 System Security
Security Threats
Security Architecture
Security Value
2.7 Technical Specifications
2.8 Feature Description
2.8.1 Compute
Overview
Technical Highlights
2.8.2 Storage
Virtualized storage
Block storage service
File storage service
2.8.3 Network
2.8.3.1 Network Overlay SDN
2.8.4 DR
2.8.5 Backup
2.8.6 Multi-Tenancy
2.8.6.1 ECS
2.8.6.1.1 What Is an ECS?
Definition
Functions
2.8.6.1.2 Advantages
2.8.6.1.3 Application Scenarios
2.8.6.1.4 Related Services
2.8.6.1.5 Implementation Principles
Architecture
Workflow
2.8.6.2 BMS
2.8.6.2.1 BMS Definition
2.8.6.2.2 Benefits
2.8.6.2.3 Application Scenarios
2.8.6.2.4 Functions
2.8.6.2.5 Related Services
2.8.6.3 IMS
2.8.6.3.1 What Is Image Management Service?
Definition
Type
2.8.6.3.2 Advantages
2.8.6.3.3 Application Scenarios
2.8.6.3.4 Relationship with Other Services
2.8.6.3.5 Working Principle
Architecture
Specifications
2.8.6.4 AS
2.8.6.4.1 Introduction
Definition
Functions
2.8.6.4.2 Benefits
2.8.6.4.3 Application Scenarios
2.8.6.4.4 Usage Restrictions
2.8.6.4.5 Working Principles
Architecture
2.8.6.5 Elastic Container Engine
2.8.6.5.1 Introduction
Definition
Functions
2.8.6.5.2 Benefits
Ease of Use
High Performance
Security and Reliability
Fault and Performance Monitoring
2.8.6.5.3 Relationship with Other Services
2.8.6.5.4 Working Principles
2.8.6.5.5 Basic Concepts
K8s Cluster and Node
K8s Cluster Storage Class
K8s Cluster Namespace
2.8.6.6 SWR
2.8.6.6.1 Overview
Definition
Functions
2.8.6.6.2 Benefits
Ease of Use
Security and Reliability
2.8.6.6.3 Relationship with Other Services
2.8.6.6.4 Basic Concepts
Image
Container
Image Repository
2.8.6.7 Block Storage Service
2.8.6.7.1 What Is the Block Storage Service?
Definition
Functions
2.8.6.7.2 Advantages
2.8.6.7.3 Relationships with Other Services
2.8.6.7.4 Implementation Principles
Architecture
2.8.6.8 OBS
2.8.6.8.1 What Is the Object Storage Service?
Definition
Functions
2.8.6.8.2 Advantages
2.8.6.8.3 Related Concepts
Bucket
Object
AK/SK
Endpoint
Quota Management
Access Permission Control
2.8.6.8.4 Application Scenarios
Backup and Active Archiving
Video Storage
2.8.6.8.5 Implementation Principles
Logical Architecture
Workflow
2.8.6.8.6 User Roles and Permissions
2.8.6.8.7 Restrictions
2.8.6.8.8 How to Use the Object Storage Service
Third-Party Client
Object Service API
SDK
Mainstream Software
How to Use S3 Browser
2.8.6.9 SFS
2.8.6.9.1 What Is Scalable File Service?
Definition
Functions
2.8.6.9.2 Advantages
2.8.6.9.3 Relationship with Other Services
2.8.6.9.4 Application Scenario
Video Cloud
Media Processing
2.8.6.9.5 Constraints and Limitations
2.8.6.9.6 Implementation Principle
Architecture
Workflow
2.8.6.10 VPC Service
2.8.6.10.1 What Is Virtual Private Cloud?
Concept
Function
Benefits
2.8.6.10.2 Region Type Differences
2.8.6.10.3 Application Scenarios (Region Type II)
Secure and Isolated Network Environment
Common Web Applications
2.8.6.10.4 Application Scenarios (Region Type III)
Secure and Isolated Network Environment
2.8.6.10.5 Implementation Principles (Region Type II)
2.8.6.10.6 Constraints
2.8.6.10.7 Relationships with Other Cloud Services
2.8.6.11 EIP Service
2.8.6.11.1 What Is an EIP?
Definition
Network Solution
Functions
2.8.6.11.2 Benefits
2.8.6.11.3 Application Scenarios
Using an EIP to Enable an ECS in a VPC to Access an Extranet
Using an EIP and SNAT to Enable ECSs in a VPC to Access an Extranet
2.8.6.11.4 Relationship with Other Cloud Services
2.8.6.11.5 Constraints
2.8.6.12 Security Group Service
2.8.6.12.1 Security Group Overview
2.8.6.12.2 Constraints and Limitations
2.8.6.13 NAT Service
2.8.6.13.1 What Is the NAT Service?
2.8.6.13.2 Benefits
2.8.6.13.3 Application Scenarios
2.8.6.13.4 Constraints and Limitations
2.8.6.13.5 Relationships with Other Services
2.8.6.14 ELB
2.8.6.14.1 What Is Elastic Load Balance?
Definition
Functions
2.8.6.14.2 Benefits
2.8.6.14.3 Application Scenarios
Load Distribution
Capacity Expansion
2.8.6.14.4 Relationships with Other Cloud Services
2.8.6.14.5 Accessing and Using ELB
2.8.6.15 vFW
2.8.6.15.1 What Is Virtual Firewall?
2.8.6.15.2 Advantages
2.8.6.15.3 Application Scenarios
2.8.6.15.4 Constraints
2.8.6.15.5 Relationships with Other Cloud Services
2.8.6.15.6 Accessing and Using vFW
2.8.6.16 DNS
2.8.6.16.1 What Is Domain Name Service?
2.8.6.16.2 Advantages
2.8.6.16.3 Application Scenarios
Managing Host Names of Cloud Servers
Replacing a Cloud Server Without Service Interruption
Accessing Cloud Resources
2.8.6.16.4 Restrictions
2.8.6.16.5 Related Services
2.8.6.17 VPN
2.8.6.17.1 What Is Virtual Private Network?
Networking Solution
Key Technologies
2.8.6.17.2 Advantages
2.8.6.17.3 Application Scenarios
Deploying a VPN to Connect a VPC to a Local Data Center
Deploying a VPN to Connect a VPC to Multiple Local Data Centers
Cross-Region Interconnection Between VPCs
2.8.6.17.4 Related Services
2.8.6.17.5 Restrictions and Limitations
2.8.6.18 Public Service Network
2.8.6.18.1 Concept
2.8.6.18.2 Function
2.8.6.18.3 Benefits
2.8.6.18.4 Application Scenarios
2.8.6.18.5 Constraints
2.8.6.18.6 Procedure
2.8.6.19 CSHA
2.8.6.19.1 What Is Cloud Server High Availability?
Definition
Restrictions
2.8.6.19.2 Benefits
2.8.6.19.3 Application Scenarios
2.8.6.19.4 Implementation Principles
2.8.6.19.5 Relationships with Other Cloud Services
2.8.6.19.6 Key Indicators
2.8.6.19.7 Access and Usage
2.8.6.20 Backup Service
2.8.6.20.1 What Is the Backup Service?
Definition
2.8.6.20.2 User Roles and Permissions
2.8.6.20.3 Related Concepts
2.8.6.21 VMware Cloud Service
2.8.6.21.1 Introduction to VMware Integration Service
2.8.6.21.1.1 VMware ECS
2.8.6.21.1.2 VMware EVS Disk
2.8.6.21.2 Benefits
2.8.6.21.3 Application Scenarios
2.8.6.21.4 Functions
2.8.6.22 Application and Data Integration Service
2.8.6.22.1 Overview of the Application and Data Integration Service
2.8.6.22.1.1 Introduction to System Integration Service
2.8.6.22.1.1.1 Functions
Related Concepts
Connection Management
Connection Tools
Link Engine: DataLink
Link Engine: LinkFlow
Link Engine: MsgLink
Connection Assets: Integration Assets
I/O Asset Compatibility
Built-in Gateway Functions
2.8.6.22.1.1.2 Values and Benefits
2.8.6.22.1.1.3 Usage Scenarios
2.8.6.22.1.2 Introduction to Device Integration Service
2.8.6.22.1.2.1 Definition
Concepts
2.8.6.22.1.2.2 Functions
LinkDevice
LinkDeviceEdge
Device Connection
Device Management
Message Communication
Monitoring and O&M
Edge Management
2.8.6.22.1.2.3 Values and Benefits
2.8.6.22.1.2.4 Usage Scenarios
2.8.6.22.1.3 Introduction to the APIGW Service
2.8.6.22.1.3.1 Functions
Gateway Management
API Lifecycle Management
2.8.6.22.1.3.2 Values and Benefits
2.8.6.22.1.3.3 Application Scenarios
2.8.7 O&M Management
3 Installation and Deployment
3.1 Installation Overview
3.1.1 Deployment Solution
Overview
Separated Deployment Scenario
Hyper-Converged Deployment Scenario
Deployment Modes and Principles
3.1.2 Network Overview
Network Plane Planning (Non-SDN)
VLAN Planning Principles (Non-SDN)
(Optional) IP Address Planning Principles (Non-SDN Solution)
(Optional) Network Plane Planning (Network Overlay SDN Solution)
(Optional) VLAN Planning Principles (Network Overlay SDN Solution)
(Optional) IP Address Planning Principles (Network Overlay SDN Solution)
NVMe over RoCE Networking Planning
3.1.3 System Requirements
3.1.3.1 Local PC Requirements
3.1.3.2 Management System Resource Requirements
3.1.3.3 Storage Device Requirements
3.1.3.4 Network Requirements
3.1.3.5 Physical Networking Requirements
3.2 Installation Process
3.3 Preparing for Installation
3.3.1 Obtaining Documents, Tools, and Software Packages
Preparing Documents
Tools
Verifying Software Packages
FusionCompute Software Package
eDME Software Package
UltraVR Software Package
eBackup Software Package
OceanStor Pacific Series Deployment Tool
eDataInsight Software Package
HiCloud Software Package
SFS Software Package
eCampusCore Software Package
3.3.2 Integration Design
3.3.2.1 Planning Using LLDesigner
Scenario
Prerequisites
Operation Process
Procedure
3.3.3 Planning Communication Ports
3.3.4 Accounts and Passwords
3.3.5 Preparing Data
3.3.5.1 Preparing Data for FusionCompute
3.3.5.2 Preparing Data for eDataInsight
3.3.5.2.1 (Optional) Creating an Authentication User on OceanStor Pacific HDFS in the Decoupled Storage-Compute Scenario
3.3.5.2.1.1 Deploying OceanStor Pacific Series HDFS Storage Service
3.3.5.2.1.2 Configuring Basic HDFS Storage Services
3.3.5.2.1.3 Configuring NTP Time Synchronization
Procedure
3.3.5.2.1.4 Configuring Users on the Storage
3.3.5.2.1.4.1 Configuring Static Mapping
Procedure
3.3.5.2.1.4.2 Configuring Proxy Users on the Storage
Procedure
3.3.5.2.2 (Optional) Collecting OceanStor Pacific HDFS Domain Names and Users in the Decoupled Storage-Compute Scenario
Obtaining the DNS IP Address
3.3.5.3 Preparing Data for eCampusCore
3.3.5.3.1 Planning Data
Network Planning
Password Planning
3.3.5.3.2 Checking the FusionCompute Environment
Prerequisites
Procedure
3.3.5.3.3 Obtaining the eDME Certificate
Prerequisites
Procedure
3.3.5.3.4 Obtaining the FusionCompute Certificate
Prerequisites
Procedure
3.3.5.3.5 Creating and Configuring the OpsMon User
Context
Procedure
3.3.6 Compatibility Query
3.4 Deploying Hardware
3.4.1 Hardware Scenarios
Scenario Overview
3.4.2 Installing Devices
3.4.3 Installing Signal Cables
3.4.3.1 Separated Deployment Networking
Procedure
3.4.3.2 Hyper-Converged Deployment Networking
3.4.4 Powering On the System
Scenarios
Operation Process
Procedure
3.4.5 Configuring Hardware Devices
3.4.5.1 Configuring Servers
3.4.5.1.1 Logging In to a Server Using the BMC
Scenarios
Process
Procedure
3.4.5.1.2 Checking the Server
Scenarios
Procedure
3.4.5.1.3 Configuring RAID 1
3.4.5.1.3.1 (Recommended) Configuring RAID 1 on the BMC WebUI
Scenarios
Procedure
3.4.5.1.3.2 Logging In to a Server Using the BMC WebUI to Configure RAID 1
Scenarios
Operation Process
Procedure
3.4.5.2 Configuring Storage Devices
3.4.5.3 Configuring Switches
3.4.5.4 Configuring Hyper-Converged System Hardware Devices
3.4.5.5 (Optional) Configuring Network Devices
3.5 Deploying Software
3.5.1 Unified DCS Deployment (Separated Deployment Scenario)
3.5.1.1 Installation Process
3.5.1.2 Installation Using SmartKit
Scenarios
Prerequisites
Procedure
3.5.1.3 Initial Configuration After Installation
3.5.1.3.1 Configuring Bonding for Host Network Ports
Procedure
3.5.1.3.2 Configuring FusionCompute After Installation
3.5.1.3.2.1 Loading a FusionCompute License File
Scenarios
Prerequisites
Procedure
3.5.1.3.2.2 (Optional) Configuring MAC Address Segments
Scenarios
Prerequisites
Procedure
3.5.1.3.3 Configuring eDME After Installation
3.5.1.3.3.1 (Optional) Configuring the NTP Service
Context
Precautions
Procedure
3.5.1.3.3.2 (Optional) Loading a License File
3.5.1.3.3.3 (Optional) Configuring SSO for FusionCompute (Applicable to Virtualization Scenarios)
Prerequisites
Procedure
3.5.1.3.3.4 Expanding Partition Capacity
Procedure
3.5.1.3.3.5 Enabling Optional Components
Prerequisites
Procedure
3.5.1.3.4 Configuring HiCloud After Installation
3.5.1.3.4.1 Configuring kdump on FusionCompute
Procedure
3.5.1.3.4.2 Configuring Certificates
3.5.1.3.4.2.1 Importing the CMP HiCloud Certificate to eDME
Scenarios
Procedure
3.5.1.3.4.2.2 Obtaining Certificates to Be Imported to GDE
3.5.1.3.4.2.2.1 Obtaining the Certificate Trust Chain
Procedure
3.5.1.3.4.2.2.2 Exporting the iBMC Certificate and Root Certificate
Exporting the iBMC Certificate
Exporting the Root Certificate
3.5.1.3.4.2.2.3 Exporting vCenter Certificates
Exporting the vCenter System Certificate
Exporting the PM Certificate Managed by vCenter
3.5.1.3.4.2.2.4 Exporting the NSX-T Certificate
Procedure
3.5.1.3.4.2.2.5 Exporting the DBAPPSecurity Cloud Certificate
Procedure
3.5.1.3.4.2.2.6 Exporting the VastEM Certificate
Procedure
3.5.1.3.4.2.3 Importing Certificates to GDE
Procedure
3.5.1.3.4.2.4 (Optional) Changing the Certificate Chain Verification Mode
Prerequisites
Procedure
3.5.1.3.4.2.5 Restarting CMP HiCloud Services
Involved Services
Procedure
3.5.1.3.4.3 Configuring CAS SSO
Procedure
3.5.1.3.4.4 Importing an Adaptation Package (Either One)
3.5.1.3.4.4.1 Importing an Adaptation Package on the eDME O&M Portal
Prerequisites
Procedure
3.5.1.3.4.4.2 Importing an Adaptation Package Using SmartKit
Prerequisites
Procedure
3.5.1.3.5 Configuring CSHA After Installation
3.5.1.3.5.1 Interconnecting with eDME
Obtaining the Adaptation Package and Document
Connecting to eDME
3.5.1.3.6 Configuring eCampusCore After Installation
3.5.1.3.6.1 Interconnecting the O&M Plane with eDME
3.5.1.3.6.1.1 Interconnecting eCampusCore with eDME
Prerequisites
Procedure
3.5.1.3.6.1.2 Importing the Service Certificate to the eDME
Prerequisites
Procedure
3.5.1.3.6.1.3 Configuring the Login Mode
3.5.1.3.6.1.3.1 Configuring Multi-Session Login
Prerequisites
Procedure
3.5.1.3.6.1.3.2 Configuring SSO for the eDME O&M Portal
Procedure
3.5.1.3.6.2 Importing a VM Template on the Operation Portal
3.5.1.3.6.2.1 Importing a VM Template
Prerequisites
Procedure
3.5.1.3.6.2.2 Creating VM Specifications
Context
Procedure
3.5.1.3.6.3 Configuring the eDME Image Repository
Prerequisites
Procedure
3.5.1.4 Checking Before Service Provisioning
3.5.1.4.1 System Management
Prerequisites
Procedure
3.5.1.4.2 Site Deployment Quality Check
Prerequisites
Procedure
3.5.2 Configuring Interconnection Between iMaster NCE-Fabric and FusionCompute
3.5.3 Configuring Interconnection Between iMaster NCE-Fabric and eDME
3.5.4 Installing FabricInsight
3.5.5 (Optional) Installing FSM
Prerequisites
Procedure
3.5.6 Installing eDME (Hyper-Converged Deployment)
3.5.6.1 Network Planning
3.5.6.2 Firewall Planning
3.5.6.3 SmartKit-based Installation (Recommended)
Scenario
Prerequisites
Procedure
3.5.6.4 (Optional) Configuring Data Disk Partitions Using Commands (EulerOS)
Procedure
3.5.6.5 Post-installation Check
3.5.6.5.1 Checking the O&M Portal After Installation
Context
Prerequisites
User Login Procedure
Post-login Check
3.5.6.5.2 Checking the Operation Portal After Installation (Multi-Tenant Services)
Context
Prerequisites
User Login Procedure
Post-login Check
3.5.6.6 Initial Configuration
3.5.6.6.1 (Optional) Configuring the NTP Service
Context
Precautions
Procedure
3.5.6.6.2 (Optional) Loading a License File
3.5.6.6.3 (Optional) Configuring SSO for FusionCompute (Applicable to Virtualization Scenarios)
Prerequisites
Procedure
3.5.6.6.4 (Optional) Adding Static Routes
Procedure
3.5.6.7 Software Uninstallation
Prerequisites
Precautions
Procedure
3.6 (Optional) Installing DR and Backup Software
3.6.1 Disaster Recovery (DR)
3.6.1.1 Local HA
3.6.1.1.1 Local HA for Flash Storage
3.6.1.1.1.1 Installing and Configuring the DR System
3.6.1.1.1.1.1 Installation and Configuration Process
3.6.1.1.1.1.2 Preparing for Installation
Installation Requirements
Documents
Preparing Software Packages and Licenses
3.6.1.1.1.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.1.1.1.4 Configuring Storage
Scenarios
Procedure
3.6.1.1.1.1.5 Installing FusionCompute
Scenarios
Prerequisites
Process
Procedure
3.6.1.1.1.1.6 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.1.1.1.7 Configuring HA and Resource Scheduling Policies for a DR Cluster
Scenarios
Prerequisites
Procedure
3.6.1.1.1.2 DR Commissioning
3.6.1.1.1.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.1.1.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.1.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.1.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.2 Local HA for Scale-Out Storage
3.6.1.1.2.1 Installing and Configuring the DR System
3.6.1.1.2.1.1 Installation and Configuration Process
3.6.1.1.2.1.2 Preparing for Installation
Installation Requirements
Preparing Documents
Software Packages
3.6.1.1.2.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.1.2.1.4 Installing FusionCompute
Scenarios
Prerequisites
Operation Process
Procedure
3.6.1.1.2.1.5 Configuring Storage Devices
Scenarios
Procedure
3.6.1.1.2.1.6 Configuring HA Policies for a DR Cluster
Scenarios
Prerequisites
Procedure
3.6.1.1.2.1.7 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.1.2.1.8 Creating a Protected Group
Scenarios
Procedure
3.6.1.1.2.2 DR Commissioning
3.6.1.1.2.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.1.2.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.2.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.2.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.2.2.5 Backing Up Configuration Data
Scenarios
Prerequisites
Procedure
3.6.1.1.3 Local HA for eVol Storage
3.6.1.1.3.1 DR System Installation and Configuration
3.6.1.1.3.1.1 Installation and Configuration Process
3.6.1.1.3.1.2 Preparing for Installation
Installation Requirements
Preparing Documents
Software Packages
3.6.1.1.3.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.1.3.1.4 Installing FusionCompute
Scenarios
Procedure
3.6.1.1.3.1.5 Configuring Storage Devices
Scenarios
Procedure
3.6.1.1.3.1.6 Installing UltraVR
Scenarios
Prerequisites
Procedure
3.6.1.1.3.1.7 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.1.3.1.8 Configuring DR Policies
Scenarios
Procedure
3.6.1.1.3.2 DR Commissioning
3.6.1.1.3.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Commissioning Procedure
Expected Result
3.6.1.1.3.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.1.3.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.1.3.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.2 Metropolitan HA
3.6.1.2.1 Metropolitan HA for Flash Storage
3.6.1.2.1.1 Installing and Configuring the DR System
3.6.1.2.1.1.1 Installation and Configuration Process
3.6.1.2.1.1.2 Preparing for Installation
Installation Requirements
Documents
Preparing Software Packages and Licenses
3.6.1.2.1.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.2.1.1.4 Configuring Storage
Scenarios
Procedure
3.6.1.2.1.1.5 Installing FusionCompute
Scenarios
Prerequisites
Process
Procedure
3.6.1.2.1.1.6 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.2.1.1.7 Configuring HA and Resource Scheduling Policies for a DR Cluster
Scenarios
Prerequisites
Procedure
3.6.1.2.1.2 DR Commissioning
3.6.1.2.1.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.2.1.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.2.1.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.2.1.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.2.2 Metropolitan HA for Scale-Out Storage
3.6.1.2.2.1 Installing and Configuring the DR System
3.6.1.2.2.1.1 Installation and Configuration Process
3.6.1.2.2.1.2 Preparing for Installation
Installation Requirements
Preparing Documents
Software Packages
3.6.1.2.2.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.2.2.1.4 Installing FusionCompute
Scenarios
Prerequisites
Process
Procedure
3.6.1.2.2.1.5 Configuring Storage Devices
Scenarios
Procedure
3.6.1.2.2.1.6 Configuring HA Policies for a DR Cluster
Scenarios
Prerequisites
Procedure
3.6.1.2.2.1.7 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.2.2.1.8 Creating a Protected Group
Scenarios
Procedure
3.6.1.2.2.2 DR Commissioning
3.6.1.2.2.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Commissioning Procedure
Expected Result
3.6.1.2.2.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.2.2.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.2.2.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.2.2.2.5 Backing Up Configuration Data
Scenarios
Prerequisites
Procedure
3.6.1.2.3 Metropolitan HA for eVol Storage
3.6.1.2.3.1 DR System Installation and Configuration
3.6.1.2.3.1.1 Installation and Configuration Process
3.6.1.2.3.1.2 Preparing for Installation
Installation Requirements
Preparing Documents
Software Packages
3.6.1.2.3.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.2.3.1.4 Installing FusionCompute
Scenarios
Procedure
3.6.1.2.3.1.5 Configuring Storage Devices
Scenarios
Procedure
3.6.1.2.3.1.6 Installing UltraVR
Scenarios
Prerequisites
Procedure
3.6.1.2.3.1.7 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.2.3.1.8 Configuring DR Policies
Scenarios
Procedure
3.6.1.2.3.2 DR Commissioning
3.6.1.2.3.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Commissioning Procedure
Expected Result
3.6.1.2.3.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.2.3.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.2.3.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3 Active-Standby DR
3.6.1.3.1 Active-Standby DR Solution for Flash Storage
3.6.1.3.1.1 DR System Installation and Configuration
3.6.1.3.1.1.1 Installation and Configuration Process
3.6.1.3.1.1.2 Preparing for Installation
Installation Requirements
Documents
Software Packages
3.6.1.3.1.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.3.1.1.4 Configuring Storage Devices
Scenarios
Procedure
3.6.1.3.1.1.5 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.3.1.1.6 Configuring the Remote Replication Relationship
Scenarios
Prerequisites
Procedure
3.6.1.3.1.1.7 Configuring DR Policies
Scenarios
Procedure
3.6.1.3.1.2 DR Commissioning
3.6.1.3.1.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.3.1.2.2 Commissioning a DR Test
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.1.2.3 Commissioning Scheduled Migration
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.1.2.4 Commissioning Fault Recovery
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.1.2.5 Commissioning Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.1.2.6 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.1.2.7 Backing Up Configuration Data
Scenarios
Prerequisites
Procedure
3.6.1.3.2 Active-Standby DR Solution for Scale-Out Storage
3.6.1.3.2.1 DR System Installation and Configuration
3.6.1.3.2.1.1 Installation and Configuration Process
3.6.1.3.2.1.2 Preparing for Installation
Installation Requirements
Preparing Documents
Software Packages
3.6.1.3.2.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.3.2.1.4 Configuring Storage Devices
Scenarios
Procedure
3.6.1.3.2.1.5 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.3.2.1.6 Configuring DR Policies
Scenarios
Procedure
3.6.1.3.2.2 DR Commissioning
3.6.1.3.2.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.3.2.2.2 Commissioning a DR Test
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.2.2.3 Commissioning Scheduled Migration
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.2.2.4 Commissioning Fault Recovery
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.2.2.5 Commissioning Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.2.2.6 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.2.2.7 Backing Up Configuration Data
Scenarios
Prerequisites
Procedure
3.6.1.4 Geo-Redundant 3DC DR
3.6.1.4.1 DR System Installation and Configuration
3.6.1.4.1.1 Installation and Configuration Process
3.6.1.4.1.2 Preparing for Installation
Installation Requirements
Documents
Software Packages and License Files
3.6.1.4.1.3 Configuring Switches
Scenarios
3.6.1.4.1.4 Installing FusionCompute
Scenarios
3.6.1.4.1.5 Configuring Storage Devices
Scenarios
Procedure
3.6.1.4.1.6 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.4.1.7 Configuring HA and Resource Scheduling Policies for a DR Cluster
Scenarios
3.6.1.4.1.8 Configuring the Remote Replication Relationship (Non-Ring Networking Mode)
Scenarios
3.6.1.4.1.9 Configuring DR Policies
Scenarios
Procedure
3.6.1.4.2 DR Commissioning
3.6.1.4.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.4.2.2 Commissioning a DR Test
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.4.2.3 Commissioning Scheduled Migration
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.4.2.4 Commissioning Fault Recovery
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.4.2.5 Commissioning Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.4.2.6 Commissioning DR Switchback
Purpose
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.4.2.7 Backing Up Configuration Data
Scenarios
Prerequisites
Procedure
3.6.2 Backup
3.6.2.1 Centralized Backup Solution
3.6.2.1.1 Installing and Configuring the Backup System
3.6.2.1.1.1 Installation and Configuration Process
3.6.2.1.1.2 Preparing for Installation
3.6.2.1.1.3 Installing the eBackup Server
Scenarios
Prerequisites
Process
Procedure
3.6.2.1.1.4 Connecting the eBackup Server to FusionCompute
Scenarios
Prerequisites
Procedure
3.6.2.1.2 Backup Commissioning
3.6.2.1.2.1 Commissioning VM Backup
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.2.1.2.2 Commissioning VM Restoration
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.7 Verifying the Installation
Procedure
3.8 Initial Service Configurations
3.9 Appendixes
3.9.1 FAQ
3.9.1.1 How Do I Handle the Issue that System Installation Fails Because the Disk List Cannot Be Obtained?
Symptom
Possible Causes
Troubleshooting Guideline
Procedure
3.9.1.2 How Do I Handle the Issue that VM Creation Fails Due to Time Difference?
Symptom
Procedure
3.9.1.3 What Do I Do If the Error "kernel version in isopackage.sdf file does not match current" Is Reported During System Installation?
Symptom
Possible Causes
Procedure
3.9.1.4 How Do I Handle Common Problems During Hygon Server Installation?
3.9.1.5 How Can I Handle the Issue that a Local Virtualized Datastore Fails to Be Added Due to a GPT Partition During Tool-based Installation?
Symptom
Procedure
3.9.1.6 How Can I Handle the Issue that the Node Fails to Be Remotely Connected During the Host Configuration for Customized VRM Installation?
Symptom
Solution
3.9.1.7 How Do I Handle the Issue that the Mozilla Firefox Browser Prompts Connection Timeout During the Login to FusionCompute?
Symptom
Possible Causes
Procedure
3.9.1.8 How Do I Handle the Storage Device Detection Failure on a FusionCompute Host During VRM Installation?
Scenarios
Prerequisites
Procedure
3.9.1.9 How Do I Configure an IP SAN Initiator?
Scenarios
Prerequisites
Procedure
3.9.1.10 How Do I Configure an FC SAN Initiator?
Scenarios
Prerequisites
Procedure
3.9.1.11 How Do I Configure Time Synchronization Between the System and an NTP Server of the w32time Type?
Scenarios
Prerequisites
Procedure
3.9.1.12 How Do I Configure Time Synchronization Between the System and a Host When an External Linux Clock Source Is Used?
Scenarios
Impact on the System
Prerequisites
Procedure
3.9.1.13 How Do I Reconfigure Host Parameters?
Scenarios
Prerequisites
Procedure
3.9.1.14 How Do I Replace Huawei-related Information in FusionCompute?
Scenarios
Prerequisites
Procedure
Additional Information
3.9.1.15 How Do I Measure Disk IOPS?
Procedure
3.9.1.16 What Should I Do If a Linux VM with More Than 32 CPU Cores Cannot Be Started?
Scenarios
Prerequisites
Procedure
3.9.1.17 How Do I Query the FusionCompute SIA Version?
Scenarios
Procedure
3.9.1.18 What Should I Do If Tools Installed on Some OSs Fails to be Started?
Symptom
Possible Causes
Procedure
3.9.1.19 Expanding the Data Disk Capacity
Method of Adding Disks
Expanding the Capacity of Existing Disks
3.9.1.20 How Do I Manually Change the System Time on a Node?
Scenarios
Prerequisites
Procedure
3.9.1.21 How Do I Handle the Issue that VRM Services Become Abnormal Because the DNS Is Unavailable?
Symptom
Possible Causes
Procedure
3.9.1.22 What Can I Do If an Error Message Is Displayed Indicating That the Sales Unit HCore Is Not Supported When I Import Licenses on FusionCompute?
Symptom
Possible Causes
Fault Diagnosis
Procedure
Related Information
3.9.1.23 How Do I Determine the Network Port Name of the First CNA Node?
3.9.1.24 Troubleshooting
Problem 1: Changing the Non-default Password of gandalf for Logging In to Host 02 to the Default One
Problem 2: Host Unreachable
Problem 3: Incorrect Password of root for Logging In to Host 02
Problem 4: Duplicate Host OS Names at the Same Site
Problem 5: The Host Where the Installation Tool Is Installed Does Not Automatically Start Services After Being Restarted
Problem 6: PXE-based Host Installation Failed or Timed Out
Problem 7: Automatic Logout After Login Using a Firefox Browser Is Successful but an Error Message Indicating that the User Has Not Logged In or the Login Times Out Is Displayed When the User Clicks on the Operation Page
Problem 8: Alarm "ALM-15.1000103 VM Disk Usage Exceeds the Threshold" Is Generated During Software Installation
3.9.2 Common Operations
3.9.2.1 How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?
Scenarios
Prerequisites
Procedure
3.9.2.2 Logging In to FusionCompute
Scenarios
Prerequisites
Procedure
3.9.2.3 Installing Tools for eDME
Scenarios
Prerequisites
Procedure
Additional Information
3.9.2.4 Uninstalling the Tools from a Linux VM
Scenarios
Impact on the System
Prerequisites
Procedure
Follow-up Procedure
3.9.2.5 Checking the Status and Version of the Tools
Scenarios
Prerequisites
Procedure
3.9.2.6 Configuring the BIOS on Hygon Servers
Two Methods for Accessing the BIOS
3.9.2.7 Setting Google Chrome (Applicable to Self-Signed Certificates)
Scenarios
Prerequisites
Procedure
3.9.2.8 Setting Mozilla Firefox
Scenarios
Prerequisites
Procedure
3.9.2.9 Obtaining HiCloud Software Packages from Huawei Support Website
3.9.2.9.1 Obtaining GDE Software Packages
3.9.2.9.1.1 x86
GDE Kernel Software Packages
DSP Software Packages
IT Infra Software Packages
ADC Software Packages
3.9.2.9.1.2 Arm
GDE Kernel Software Packages
DSP Software Packages
IT Infra Software Packages
ADC Software Packages
3.9.2.9.2 Obtaining Product Software Packages
3.9.2.10 Restarting Services
Procedure
3.9.3 Physical Network Interconnection Reference
3.9.4 Introduction to Tools
Overview
Functions
Precautions
3.9.5 Verifying the Software Package
3.9.6 VM-related Concepts
Related Concepts
VM Creation Methods
Requirements for VM Creation


1 Library Information
Change History

Conventions

How to Obtain and Update Documentation

Feedback

Technical Support

1.1 Change History


Issue Date Description

04 2025-06-09 This issue is the fourth official release.

03 2025-04-07 This issue is the third official release.

02 2025-03-14 This issue is the second official release.

01 2024-12-25 This issue is the first official release.

1.2 Conventions

Symbol Conventions
The symbols that may be found in this document are defined as follows.

Symbol Description

DANGER: Indicates a hazard with a high level of risk which, if not avoided, will result in death or serious injury.

WARNING: Indicates a hazard with a medium level of risk which, if not avoided, could result in death or serious injury.

CAUTION: Indicates a hazard with a low level of risk which, if not avoided, could result in minor or moderate injury.

NOTICE: Indicates a potentially hazardous situation which, if not avoided, could result in equipment damage, data loss, performance deterioration, or unanticipated results. NOTICE is used to address practices not related to personal injury.

NOTE: Supplements the important information in the main text. NOTE is used to address information not related to personal injury, equipment damage, or environmental deterioration.

General Conventions

Format Description

Arial or Huawei Sans: Normal paragraphs are in Arial or Huawei Sans.

Boldface: Names of files, directories, folders, and users are in boldface. For example, log in as user root. File paths are in boldface, for example, C:\Program Files\Huawei.

Italic: Book titles are in italics.

Courier New: Terminal display is in Courier New. Messages input on terminals by users are displayed in boldface.

"": Double quotation marks indicate a section name in the document.


Command Conventions

Format Description

Boldface: Command keywords (the fixed part to be entered) are in boldface.

Italic: Command arguments (replaced by specific values in an actual command) are in italic.

[ ]: Items in square brackets ([ ]) are optional.

{ x | y | ... }: Optional items are grouped in braces ({ }) and separated by vertical bars (|). One item must be selected.

[ x | y | ... ]: Optional items are grouped in square brackets ([ ]) and separated by vertical bars (|). Only one item or no item can be selected.

{ x | y | ... }*: Optional items are grouped in braces ({ }) and separated by vertical bars (|). A minimum of one item or a maximum of all items can be selected.

[ x | y | ... ]*: Optional items are grouped in square brackets ([ ]) and separated by vertical bars (|). Multiple items or no item can be selected.

Command Use Conventions


The commands that may be found in this document are used for deployment and maintenance.
This document does not provide commands used during manufacturing and depot repair and some advanced commands used for engineering and
fault locating. If necessary, contact technical support for assistance.

GUI Conventions

Format Description

Boldface Buttons, menus, parameters, tabs, windows, and dialog titles are in boldface. For example, click OK.

> Multi-level menus are in boldface and separated by greater-than signs (>). For example, choose File > Create > Folder.

Italic Variable nodes in navigation panes or multi-level menus are in italic.

GUI Image Conventions


The GUI images that may be found in this document are for reference only.

Keyboard Operations

Format Description

Key Press the key. For example, press Enter, Tab, Backspace, and a.

Key 1+Key 2 Press the keys simultaneously. For example, pressing Ctrl+Alt+A means that the three keys should be pressed concurrently.

Key 1, Key 2 Press the keys in turn. For example, press Alt and F in turn.

Mouse Operations

Format Description

Click Select and release the primary mouse button without moving the pointer.

Double-click Press the primary mouse button twice continuously and quickly without moving the pointer.

Drag Press and hold the primary mouse button and move the pointer to a certain position.


1.3 How to Obtain and Update Documentation


To prevent a software package from being maliciously tampered with during transmission or storage, download the corresponding digital signature
file for integrity verification when downloading the software package. After the software package is downloaded from the Huawei Support website,
verify its PGP digital signature by referring to the OpenPGP Signature Verification Guide. If the software package fails the verification, do not use
the software package, and contact technical support for assistance.
Before a software package is used for installation or upgrade, its digital signature also needs to be verified to ensure that the software package is not
tampered with.
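
The verification step described above can also be scripted as part of a download workflow. The following is a minimal sketch, assuming GnuPG is installed on the local PC and the signing public key has already been imported into the keyring as described in the OpenPGP Signature Verification Guide; the file names are placeholders.

# Minimal sketch: verify a downloaded package against its detached OpenPGP
# signature with GnuPG before use. Assumes the signing public key is already
# in the local keyring; the file names below are placeholders.
import subprocess
import sys

PACKAGE = "software_package.zip"        # placeholder: downloaded package
SIGNATURE = PACKAGE + ".asc"            # placeholder: detached signature file

result = subprocess.run(
    ["gpg", "--verify", SIGNATURE, PACKAGE],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print("Signature verification succeeded. The package can be used.")
else:
    # Per the note above: do not use a package that fails verification.
    print("Signature verification FAILED. Do not use this package.", file=sys.stderr)
    print(result.stderr, file=sys.stderr)
    sys.exit(1)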

Obtaining Documentation
You can obtain documentation using any of the following methods:

Use the online search function provided by ICS Lite to find the documentation package you want and download it. This method is
recommended because you can directly load the required documentation to ICS Lite. For details about how to download a documentation
package, see the online help of ICS Lite.

Visit the Huawei support website to download the desired documentation package.

For enterprises: https://support.huawei.com/enterprise/en/index.html

For carriers: https://support.huawei.com/carrierindex/en/hwe/index.html

Apply for the documentation CD-ROM from your local Huawei office.

To use ICS Lite or visit the Huawei technical support website, you need a registered user account. You can apply for an account at the support website or by contacting the service manager at your local Huawei office.

Updating Documentation
You can update documentation using either of the following methods:

Enable the documentation upgrade function of ICS Lite to automatically detect the latest version for your local documentation and load it to
ICS Lite as required. This method is recommended. For details, see the online help of ICS Lite.

Download the latest documentation packages from the Huawei technical support websites.

For enterprises: https://support.huawei.com/enterprise/en/index.html

For carriers: https://support.huawei.com/carrierindex/en/hwe/index.html

To use ICS Lite or visit the Huawei technical support website, you need a registered user account. You can apply for an account at the support website or by contacting the service manager at your local Huawei office.

1.4 Feedback
Your opinions and suggestions are warmly welcomed. You can send your feedback on the product documents, online help, or release notes in the
following ways:

Call the service hotline of your local Huawei office.

Give your feedback using the information provided on the Contact Us page at the Huawei technical support websites:

For enterprises: https://support.huawei.com/enterprise/en/index.html

For carriers: https://support.huawei.com/carrierindex/en/hwe/index.html


1.5 Technical Support


Huawei Technologies Co., Ltd. provides customers with all-round technical support and services. If you encounter any problem during product use
or maintenance, obtain information that can help you locate or solve your problem through the following channels:

Interactive community: https://forum.huawei.com/enterprise/en/forums

If the problem persists, contact your local Huawei representative office or the company's headquarters.

Call the service hotline of your local Huawei office.

Give your feedback using the information provided on the Contact Us page at the Huawei technical support websites:

For enterprises: https://support.huawei.com/enterprise/en/index.html

For carriers: https://support.huawei.com/carrierindex/en/hwe/index.html

2 Product Overview
Solution Overview

Application Scenario

Solution Architecture

Software Description

Hardware Description

System Security

Technical Specifications

Feature Description

2.1 Solution Overview


Datacenter Virtualization Solution

Multi-Tenant Service Overview

2.1.1 Datacenter Virtualization Solution

Definition
The advent of new data center technologies and business demands poses tremendous challenges to traditional data centers (DCs). To rise to these challenges and follow technology trends, Huawei launches the next-generation Datacenter Virtualization Solution (DCS).

DCS uses eDME as the full-stack management software for data centers. With a unified management interface, open APIs, cloud-based AI enablement, and multi-dimensional intelligent risk prediction and optimization, eDME implements automatic management and intelligent O&M of resources throughout the lifecycle, from planning and construction to O&M and optimization, and manages multiple data centers in a unified manner, helping customers simplify management and improve data center O&M efficiency. FusionCompute is used as the cloud operating system (OS) software to consolidate resources in each physical data center. It virtualizes hardware resources to help carriers and enterprises build secure, green, and energy-saving data centers, reducing operating expense (OPEX) while ensuring system security and reliability. eBackup and UltraVR implement VM data backup and disaster recovery (DR), providing a unified DR protection solution for data centers across all regions and scenarios. HiCloud provides the security service and supports heterogeneous management and resource provisioning of VMware. eDataInsight functions as a big data platform to meet application requirements in typical big data scenarios. DCS uses software-defined networking (SDN) hardware and iMaster NCE-Fabric to implement automated network linkage configuration and control network devices, enabling automatic and fast service orchestration.

DCS is a service-driven solution that features cloud-pipe synergy and helps carriers and enterprises manage physically distributed, logically unified resources throughout their lifecycles.

Physical distribution
Physical distribution indicates that multiple data centers of an enterprise are distributed in different regions. After data center virtualization
components are deployed in physical data centers in different regions, IT resources can be consolidated to provide services in a unified manner.

Logical unification
Logical unification indicates that data center full-stack management software uniformly manages multiple data centers in different regions.

Features
Reliability
This solution enhances reliability at the single-device, data, and overall solution levels. The distributed architecture improves overall reliability and therefore lowers the reliability requirements on individual devices.

Availability
The system delivers remarkable availability by employing hardware/link redundancy deployment, high-availability clusters, and application
fault tolerance (FT) features.

Security
The solution complies with the industry security specifications to ensure data center security. It focuses on the security of networks, hosts,
virtualization, and data.

Lightweight
You can start small with just three nodes and flexibly select DCS deployment specifications according to your service scale.

Scalability
Data center resources must be flexibly adjusted to meet actual service load requirements, and the IT infrastructure is loosely coupled with
service systems. Therefore, you only need to add IT hardware devices when service systems require capacity expansion.

Openness
Various types of servers and storage devices based on the x86 or Arm hardware platform and mainstream Linux and Windows OSs are
supported in data centers for flexible selection. Open APIs are provided to flexibly interconnect with cloud management software.

2.1.2 Multi-Tenant Service Overview


The DCS provides diverse multi-tenant services.

Table 1 Compute service

Elastic Cloud Server (ECS): An ECS is a compute server that consists of vCPUs, memory, and disks. ECSs are easy to obtain, scalable, and available on demand. ECSs work with other services, including the storage service, Virtual Private Cloud (VPC), and Cloud Server Backup Service (CSBS), to build an efficient, reliable, and secure compute environment, ensuring uninterrupted and stable running of your services.

Bare Metal Server (BMS): A BMS combines the scalability of VMs with the high performance of physical servers. It provides dedicated servers on the cloud for users and enterprises, delivering the performance and security required by core databases, critical applications, high-performance computing (HPC), and big data. Tenants can apply for and use BMSs on demand.

Image Management Service (IMS): An image is an ECS template that contains software and mandatory configurations, including the operating system (OS), preinstalled public applications, and the user's private applications or service data. Images are classified into public, private, and shared images. IMS allows you to manage images easily: you can apply for ECSs using a public or private image, and you can also create a private image from an ECS or an external image file.

Auto Scaling (AS): AS automatically adjusts service resources according to AS policies configured based on user service requirements. When service demand increases, AS automatically adds ECS instances to ensure computing capability; when demand decreases, it automatically removes ECS instances to reduce costs. A conceptual sketch of such a policy follows this table.
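
The scale-out and scale-in behavior described for AS can be pictured as a simple threshold policy. The sketch below is purely conceptual: the thresholds, metric source, and instance limits are hypothetical and do not reflect any DCS or eDME API.

# Conceptual sketch of a threshold-based auto scaling policy. The thresholds,
# metric source, and limits are hypothetical, not DCS/eDME API values.
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    scale_out_cpu: float = 0.80   # add an instance above 80% average CPU usage
    scale_in_cpu: float = 0.30    # remove an instance below 30% average CPU usage
    min_instances: int = 2
    max_instances: int = 10
    step: int = 1

def desired_count(policy: ScalingPolicy, avg_cpu: float, current: int) -> int:
    """Return the target ECS instance count for the observed average CPU usage."""
    if avg_cpu > policy.scale_out_cpu and current < policy.max_instances:
        return min(current + policy.step, policy.max_instances)
    if avg_cpu < policy.scale_in_cpu and current > policy.min_instances:
        return max(current - policy.step, policy.min_instances)
    return current

# Example: at 85% average CPU with 3 running instances, the policy scales out to 4.
print(desired_count(ScalingPolicy(), avg_cpu=0.85, current=3))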

Table 2 Storage service

Block storage service: The block storage service provides block storage space for VMs. You can create Elastic Volume Service (EVS) disks online and attach them to VMs. The service offers various types of persistent storage; you can choose disk types based on your needs, and store files or build databases on EVS disks.

Object Storage Service (OBS): OBS is an object-based mass storage service. It provides massive, secure, reliable, and cost-effective data storage capabilities, including bucket creation, modification, and deletion. An illustrative access sketch follows this table.

Scalable File Service (SFS): SFS provides ECSs and BMSs in HPC scenarios with a high-performance shared file system that can be scaled on demand. It is compatible with standard file protocols (NFS, CIFS, OBS, and DPC) and scales to petabytes of capacity to meet the needs of mass data and bandwidth-intensive applications.
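
Because OBS is described elsewhere in this document in terms of buckets, objects, AK/SK credentials, endpoints, and third-party S3 clients (see the OBS sections in the table of contents), an S3-compatible SDK can usually be pointed at the OBS endpoint. The sketch below uses boto3 for illustration only; the endpoint URL, credentials, bucket, and object names are placeholders, and the exact access method should be taken from the OBS service documentation.

# Illustrative object upload/download against an S3-compatible endpoint with
# boto3. Endpoint, AK/SK, bucket, and object names are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://obs.example.internal",   # placeholder OBS endpoint
    aws_access_key_id="YOUR_AK",                   # Access Key ID (AK)
    aws_secret_access_key="YOUR_SK",               # Secret Access Key (SK)
    verify="/path/to/site-ca.pem",                 # site CA certificate for TLS verification
)

s3.create_bucket(Bucket="backup-archive")                                        # create a bucket
s3.upload_file("db_dump.tar.gz", "backup-archive", "2025/06/db_dump.tar.gz")     # upload an object
s3.download_file("backup-archive", "2025/06/db_dump.tar.gz", "restore.tar.gz")   # download it back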

Table 3 Network service

Service Description

VPC service: A VPC is a logically isolated virtual network environment that is built for ECSs and is configured and managed by users to improve the security of user resources and simplify user network deployment.

Elastic IP address (EIP) service: An EIP is a static IP address on a network outside the cloud (also called the external network). It can be accessed directly from that network and is mapped, using network address translation (NAT), to the instance bound to it.

Security group service: A security group is a logical group that provides access policies for cloud servers in the same VPC that have the same security protection requirements and are mutually trusted. After a security group is created, you can define different access rules in the security group to protect the servers that are added to it.

NAT service: The NAT service provides network address translation for cloud servers in a VPC so that the cloud servers can share an EIP to access the Internet or be accessed by an external network. The NAT service provides two functions: source network address translation (SNAT) and destination network address translation (DNAT).

Elastic Load Balance (ELB) service: ELB distributes access traffic to multiple backend cloud servers based on forwarding policies. ELB can expand the access handling capability of application systems through traffic distribution and achieve higher fault tolerance and performance. ELB also improves system availability by eliminating single points of failure (SPOFs). In addition, ELB is deployed on the internal and external networks in a unified manner and supports access from both the internal and external networks.

Virtual firewall (vFW): The vFW controls VPC access and supports blacklists and whitelists (allow and deny policies). Based on the inbound and outbound Access Control List (ACL) rules associated with a VPC, the vFW determines whether data packets are allowed to flow into or out of the VPC.

Domain Name Service (DNS): A DNS service translates frequently used domain names into IP addresses so that servers can connect to each other. You can enter a domain name in a browser to visit a website or web application.

Virtual Private Network (VPN): A VPN provides a secure, reliable encrypted communication channel that meets industry standards between remote users and their VPCs. Such a channel can seamlessly extend a user's data center (DC) to a VPC.

Public service network: The public service network is used for a server to communicate with ECSs, VIPs, or BMSs in all VPCs of a user. With the public service network, you can quickly deploy VPC shared services.

Table 4 Multi-tenant DR and backup service

Service Description

Cloud Server High Availability (CSHA): CSHA uses the HyperMetro feature of storage to implement cross-AZ VM HA. It is mainly used to provide DR protection for ECSs with a zero recovery point objective (RPO). If the production AZ becomes faulty, CSHA performs a fast DR switchover with a minute-level recovery time objective (RTO) to ensure service continuity.

Backup service: The backup service provides a unified operation portal for tenants in the multi-tenant scenario. Administrators can define backup service specifications to form a logically unified backup resource pool from multiple physically dispersed backup devices, helping tenants quickly obtain backup services, simplifying configuration, and improving resource provisioning efficiency.

Table 5 VMware cloud service

Service Description

VMware cloud server: A VMware ECS is a compute server provided by vCenter that can be obtained and elastically scaled at any time. After adding the VMware cloud service to the vCenter resource pool, you can synchronize VMware ECSs to eDME for unified management.

Table 6 Container services

Service Description


Elastic Container Engine (ECE): As an enterprise-level K8s cluster hosting service, ECE manages the cluster lifecycle, container images, and containerized applications, and provides container monitoring and O&M. It also provides highly scalable and reliable cloud-native application deployment and management solutions, making it a good choice for application modernization.

SoftWare Repository for Container (SWR): SWR provides easy, secure, and reliable management of container images throughout their lifecycle. It is compatible with the community Registry V2 protocol and allows you to manage container images through a GUI, CLI, or native APIs. SWR can be seamlessly integrated with ECE to help customers quickly deploy containerized applications and build a one-stop solution for cloud-native applications.

Table 7 Application and data integration service

Service Description

Application and data integration service: The application and data integration service is used to build an enterprise-level connection platform for connecting enterprise IT systems with OT devices. It provides multiple connection options, including API, message, data, and device access, enabling enterprises to create digital twins based on the physical world and speed up their digital transformation.

2.2 Application Scenario


DCS manages the full-stack IT infrastructure of enterprise data centers and builds compute, storage, and network resource pools, significantly improving the utilization efficiency of data infrastructure. It provides features such as resource management, resource monitoring, intelligent O&M, and system management. DCS mainly applies to converged resource pool scenarios and single-virtualization scenarios.

In the converged resource pool scenario, eDME and FusionCompute need to be deployed. eDME functions as the unified O&M platform and is responsible for resource provisioning and overall O&M management. FusionCompute virtualizes hardware resources. It uses compute, storage, and network virtualization technologies to virtualize compute, storage, and network resources, and centrally schedules and manages virtual resources over unified interfaces. This reduces OPEX and ensures system security and reliability.

The single-virtualization scenario applies where the customer has already deployed a unified O&M platform and DCS provides only resource virtualization capabilities. In this scenario, only FusionCompute needs to be deployed.
This document describes only the converged resource pool scenario. For details about the single-virtualization scenario, see the FusionCompute Product Documentation.

2.3 Solution Architecture


Functional Architecture

Basic Network Architecture

2.3.1 Functional Architecture


DCS consists of hardware infrastructure, virtual resource pools, virtualization DR and backup, virtualization security, virtualization management
platform, and professional services, as shown in Figure 1.

Figure 1 DCS functional architecture


Table 1 describes the functions of DCS.

Table 1 Functions of the virtualization solution

Function Description

Data center virtualization management platform: Provides full-stack O&M management of site resources from hardware infrastructure to virtual resources, supports unified management of resource sites in multiple data centers in different regions, and enables system administrators (including hardware administrators) to manage hardware and software on one easy-to-use, unified, and intelligent O&M portal.

Virtualization + Container dual-stack resource pool: Virtual resource pools are built upon the physical infrastructure and are classified into virtual compute, virtual storage, and virtual network resource pools. Container resource pools are containerized application resource pools constructed based on virtualization resource pools.

Virtualization DR and backup: Active-standby, active-active, and 3DC DR management is provided based on storage remote replication and active-active capabilities to ensure service continuity. Data is replicated to dump devices. If a system fault or data loss occurs, the backup data can be used to recover the system or data. This function provides a unified DR protection solution for data centers in all regions and scenarios.

Virtualization security: Full-stack security protection is implemented for data storage, network transmission, management and O&M, host systems, and VMs to ensure secure service access.

Hardware infrastructure: Hardware infrastructure includes the servers, storage devices, network devices, backup devices, and security devices required by data centers. Based on different service requirements, this layer provides multiple hardware deployment architectures.

Professional services: Professional full-lifecycle virtualization services are offered, covering consulting, planning and design, delivery implementation, migration, and training and certification.

Table 2 describes the software components of DCS.

Table 2 Software components of DCS

Component Description

FusionCompute This component is mandatory.


FusionCompute is a cloud operating system (OS) that virtualizes hardware resources and centrally manages virtual, service, and user
resources. It virtualizes compute, storage, and network resources using the virtual compute, virtual storage, and virtual network
technologies. It centrally schedules and manages virtual resources over unified interfaces. FusionCompute provides high system
security and reliability and reduces the OPEX, helping carriers and enterprises build secure, green, and energy-saving data centers.

eDME This component is mandatory.


It is a component for unified management of multiple resource sites in different regions and enables system administrators (including
hardware administrators) to manage hardware and software on an easy-to-use, unified, and intelligent O&M portal.

eBackup This component is optional.


eBackup is a Huawei-developed backup software for virtualization environments. It provides comprehensive protection for user data in
virtualization scenarios based on VM/disk snapshot and Changed Block Tracking (CBT) technologies.

UltraVR This component is optional.


The DR management software uses storage devices to protect and restore VM data.

eDataInsight This component is optional.


eDataInsight is a distributed data processing system that supports large-capacity data storage, search, and analysis.
To deploy eDataInsight, you need to deploy OceanStor Pacific HDFS (with separated storage and compute) first. OceanStor Pacific HDFS provides a high-performance HDFS storage solution with separated storage and compute.

HiCloud This component is optional.


HiCloud is an industry-leading hybrid cloud management platform that provides unified management of heterogeneous devices.

SFS This component is optional.


Scalable File Service (SFS) provides high-performance file storage that is scalable on demand. It can be shared with multiple Elastic
Cloud Servers (ECSs).


eContainer This component is optional.


eContainer provides container-related services, including Elastic Container Engine (ECE) and SoftWare Repository for Container
(SWR).
ECE is a logically isolated container platform built for cloud servers. It can be configured and managed by you, and aims to provide
you with a highly reliable and elastic cloud native foundation platform based on K8s and other cloud native components. A tenant can
select the node type, node specifications, network configuration, and storage configuration for a VPC, and create the tenant's own K8s
container clusters based on these configurations.
SWR provides easy, secure, and reliable management of container images throughout their lifecycle. It is compatible with the Registry
V2 protocol of the community and allows you to manage container images through a GUI, CLI, or native APIs. SWR can be seamlessly
integrated with ECE to help you quickly deploy containerized applications and build a one-stop solution for cloud native applications.

eCampusCore This component is optional and can be deployed only on the Region Type II network (network overlay SDN scenario).
eCampusCore is an enterprise-level platform for application and data integration. It provides connections between IT systems and OT
devices and pre-integrated assets for digital scenarios in the enterprise market.

2.3.2 Basic Network Architecture


The network planes of DCS are as follows:

Table 1 DCS network planes

For each network plane, the entries below give its description, the network communication requirement, and the recommended switch port configuration.

BMC plane
Description: This plane is used by the baseboard management controller (BMC) network port on a host. The BMC plane enables remote access to the BMC system of the server and controllers of storage devices.
Network communication requirement: The management planes of eDME and Virtual Resource Management (VRM) nodes can communicate with the BMC plane. The management plane and the BMC plane can be combined. The BMC port and the storage management port are connected to the leaf switch.
Switch port configuration recommendation: Configure the port on the switch connected to the BMC network interface card (NIC) on the server to allow VLANs of the BMC plane to pass without any tags. If the management plane and the BMC plane are deployed in a converged manner, you are advised to configure the port on the switch connected to the management plane VLAN to work in Access mode, or set the management plane VLAN to PVID VLAN in Trunk mode.

Management plane
Description: This plane is used for the management of all nodes in a unified manner, the communication between all nodes, and the monitoring, O&M, and VM management of the entire system. Management plane IP addresses include host management IP addresses and IP addresses for the management of VMs.
Network communication requirement: The eDME, VRM, and FSM nodes are deployed on the management plane and can communicate with each other.
Switch port configuration recommendation: Configure the port on the switch connected to the management plane NIC on the server to allow VLANs of the management plane to pass without any tags. You are advised to configure the port on the switch connected to the management plane VLAN to work in Access mode, or set the management plane VLAN to PVID VLAN in Trunk mode.

Storage plane (enterprise IP SAN storage service access)
Description: Hosts can communicate with the storage devices through this plane. Storage plane IP addresses include the storage IP addresses of all hosts and storage devices.
Network communication requirement: The storage plane of each host can communicate with the storage plane of storage devices.
Switch port configuration recommendation: Configure the port on the switch connected to the SAN storage plane NIC on the server to allow VLANs of the storage plane to pass with tags. You are advised to configure the port on the switch connected to the storage plane NIC on the server to work in Trunk mode and allow VLANs of the storage plane to pass. VLAN tags need to be removed when the traffic of the storage plane passes the storage port of SAN devices. Configure the port on the switch connected to the storage port on IP SAN devices to allow VLANs of the storage plane to pass without any tags. You are advised to configure the port on the switch connected to the storage port of SAN devices to work in Access mode.

Storage plane (enterprise FC SAN storage service access)
Description: Hosts can communicate with the storage devices through this plane. Host bus adapters (HBAs) on all hosts are connected to Fibre Channel (FC) switches.
Network communication requirement: The storage plane of each host can communicate with the storage plane of storage devices.
Switch port configuration recommendation: FC switch ports connected to the storage plane HBAs on the server are configured to the same zone.

Storage plane (scale-out storage service access)
Description: Hosts can communicate with scale-out storage services through this plane. The IP addresses used on this plane include storage IP addresses of all hosts and service IP addresses of scale-out storage.
Network communication requirement: The storage plane of each host can communicate with the service plane of the scale-out storage nodes.
Switch port configuration recommendation: Configure the port on the switch connected to the service plane NIC of the scale-out storage to allow VLANs of the storage plane to pass with tags. You are advised to configure the port on the switch connected to the service plane NIC of OceanStor Pacific Block to work in Trunk mode and allow VLANs of the storage plane to pass.


Back-end storage plane
Description: Back-end storage plane of scale-out storage. Storage nodes are interconnected, and all storage nodes use IP addresses of the back-end storage plane.
Network communication requirement: The scale-out storage nodes must use an independent plane to communicate with each other.
Switch port configuration recommendation: Configure the port on the switch connected to the back-end storage plane NIC of scale-out storage to allow VLANs of the back-end storage plane to pass with tags. You are advised to configure the port on the switch connected to the scale-out back-end storage plane NIC to work in Trunk mode and allow VLANs of the back-end storage plane to pass.

Service plane
Description: This plane is used by the service data of user VMs.
Network communication requirement: VMs communicate with the service plane.
Switch port configuration recommendation: VLAN tags do not need to be removed when the traffic of the service plane passes the service NIC of a host. Configure the port on the switch connected to the service plane NIC on the server to allow VLANs of the service plane to pass with tags. You are advised to configure the port on the switch connected to the service plane NIC on the server to work in Trunk mode and allow VLANs of the service plane to pass.

Storage replication plane
Description: Replication service plane between storage devices. Devices at storage sites are interconnected through IP or FC ports for storage data replication.
Network communication requirement: Devices in the storage site are interconnected through the wavelength division multiplexing (WDM) technology.
Switch port configuration recommendation: FC or IP ports on storage devices are connected across sites through WDM devices.

External network plane
Description: Used for communication between the internal network and external network of data centers. This plane is the egress for VMs to access external network services and the ingress for external networks to access the data center network. Public IP addresses need to be planned and configured by customers for border leaf switches.
Network communication requirement: Border leaf switches are connected to customers' physical endpoints (PEs) or routing devices.
Switch port configuration recommendation: Static or dynamic routes need to be configured for firewalls, load balancers, border leaf switches, and egress routers.

Public service plane
Description: The public service network consists of the client network and server network. The client network plane reuses the service plane. The server network plane is determined by the carried service. For example, the Object Storage Service (OBS) is deployed on the server, reuses the storage plane, and connects the client network to the OBS storage network through route configuration.
Network communication requirement: Border leaf switches can communicate with the server network (for example, OBS) of public services.
Switch port configuration recommendation: Border leaf switches are connected to the server network of public services through routes.

DCS typical networking includes single-DC single-core networking, single-DC single-core Layer 2 networking, single-DC single-core Layer 3
networking, and network overlay SDN networking.

Single-DC single-core networking


In single-DC single-core networking, the management core switch and service core switch share one core spine switch. The storage core switch
can be flexibly configured based on live network conditions and can use an independent core switch or share the management and service core
spine switch, as shown in Figure 1.

Figure 1 Single-DC single-core networking diagram


The functions of the nodes in the networking are described as follows:

Border leaf node: border function nodes that connect to firewalls, routers, or transmission devices to transmit external traffic to data
center networks and implement interconnection between data centers. Border leaf and spine nodes can be deployed in converged mode.

Value-added service (VAS): Firewalls and load balancers are connected to spine or border leaf nodes in bypass mode, and 10GE ports are
used for networking.

Spine: core nodes that provide high-speed IP forwarding and connect to each function leaf node through high-speed interfaces.

Leaf: access nodes, including the leaf switches connected to the compute nodes and converged nodes and the leaf switches connected to
the storage nodes. The leaf switches connected to compute nodes and converged nodes provide the capability of connecting compute and
management resources such as virtualized or non-virtualized servers to the fabric network. Leaf switches connected to storage nodes
provide the capability of connecting IP storage nodes to the fabric network.

For the interconnection between leaf and spine nodes, it is recommended that 40GE or 100GE ports be used for networking based on the
oversubscription ratio (OSR).

BMC access switch: connects to the BMC network port of converged and compute nodes or the management port of storage devices, and connects to spine switches in the uplink. If no BMC access switch is deployed, the BMC network port or the management port of storage devices needs to be directly connected to leaf access switches.

FC switches connect to FC storage nodes, compute nodes, and converged nodes.

FusionCompute and eDME system management software can be deployed on converged nodes, and remaining resources can be used to
deploy VM compute services. You can deploy either IP or FC storage nodes, depending on your network performance requirements.

Distributed virtual switch (DVS): runs on compute and converged nodes to connect VMs to fabric networks.

Compute/Converged nodes: The nodes are connected to leaf nodes and can be configured with management, storage, and service ports.
DR service ports are optional. The service NICs use 10GE networking.

Storage replication service: Storage nodes are aggregated by storage top of rack (TOR) or FC switches and then interconnected with
remote storage devices through optical transport network (OTN) or WDM devices. This enables synchronous replication, asynchronous
replication, and active-active services on storage nodes, which can be configured based on customers' networking DR requirements.

For a new data center, it is recommended that the customer configure leaf, border, and spine switches according to the preceding
networking (spine and border switches can be deployed in converged mode). If the customer has planned the network, submit network
requirements to the customer based on the number of converged nodes, compute nodes, and TOR switches, for example, the number of
10GE ports on leaf switches and the number of 40GE or 100GE uplink ports on TOR switches.

A fabric network refers to a large switching network consisting of multiple switches, which is distinguished from the network with a single switch.
A fabric network consists of a group of interconnected spine and leaf nodes. All nodes connected to the network can communicate with each other.
Multiple tenants can share one physical device and one tenant can also use multiple servers, greatly saving costs and improving resource usage.


Compute nodes support the deployment with four or six network ports. When four network ports are used, the management and service ports
are deployed in converged mode, and the storage ports are deployed independently. When six network ports are used, the management ports,
service ports, and storage ports are deployed independently and isolated by VLANs. If the DR service software is deployed on compute nodes,
the DR service ports and storage service ports are deployed in converged mode in the six- or four-port configuration scenarios. Compute nodes
can also be configured with eight network ports. In this case, the management ports, service ports, storage ports, and DR ports are deployed
independently and isolated by VLANs. The following two networking topologies are recommended based on the service bandwidth and scale:

Layer 2 networking topology: spine + leaf (converged deployment of border and spine nodes)

Layer 3 networking topology: border leaf + spine + leaf

Networking selection principles:

Single-DC single-core Layer 2 networking (spine + leaf): The network has a single region, single DC, and single physical egress, with a scale of no more than 2,000 VMs and 350 servers. The management plane and service plane do not need to be physically isolated and can share core switches. The inter-rack east-west traffic is small, and the service traffic OSR is less than 4:1. Spine, service leaf, and border leaf nodes are deployed in converged mode.

Single-DC single-core Layer 3 networking (border leaf + spine + leaf): The network has a single region, single DC, and single physical egress, with a scale of no more than 5,000 VMs and 1,000 servers. The management plane and service plane do not need to be physically isolated and can share core switches. The inter-rack east-west traffic is large, and the service traffic OSR is less than 2:1. Spine, service leaf, and border leaf nodes are deployed separately.
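The OSR targets above can be checked with a simple calculation: the OSR is the ratio of the server-facing (downlink) bandwidth to the uplink bandwidth on a leaf switch. The port counts in this sketch are illustrative, not a recommended configuration.

# Worked example of the oversubscription ratio (OSR) used in the table above.
def osr(downlink_ports: int, downlink_gbps: int,
        uplink_ports: int, uplink_gbps: int) -> float:
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# 48 x 10GE server ports with 2 x 100GE uplinks -> 480/200 = 2.4:1
print(f"{osr(48, 10, 2, 100):.1f}:1")
# Adding uplinks (4 x 100GE) brings the ratio to 1.2:1, within the 2:1 target
print(f"{osr(48, 10, 4, 100):.1f}:1")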

Single-DC single-core Layer 2 networking


Border leaf nodes and spine nodes are deployed in converged mode. Spine nodes also function as border routers. Firewalls and load balancers
are connected to spine nodes in bypass mode. The networking of four or six network ports is supported. The networking diagram is shown in
Figure 2.

Figure 2 Single-DC single-core Layer 2 network topology (six network ports)

Single-DC single-core Layer 3 networking


Border leaf nodes are independently deployed as border routers. Firewalls and load balancers are connected to border leaf nodes in bypass
mode. The networking of four or six network ports is supported. The networking diagram is shown in Figure 3.

Figure 3 Single-DC single-core Layer 3 network topology (six network ports)


Network overlay SDN networking


In the Layer 2 networking topology and iMaster NCE management networking diagram, border leaf and spine nodes are deployed in converged
mode. Spine nodes function as border routers, and firewalls and load balancers are connected to spine nodes in bypass mode. The networking
of four or six network ports is supported. The networking diagram is shown in Figure 4.

In the Layer 3 networking topology and iMaster NCE management networking diagram, border leaf nodes are independently deployed as
border routers. Firewalls and load balancers are connected to border leaf nodes in bypass mode. The networking of four or six network ports is
supported. The networking diagram is shown in Figure 5.

Figure 4 Layer 2 networking topology and iMaster NCE management networking diagram (converged deployment of spine and border nodes)

Figure 5 Layer 3 networking topology and iMaster NCE management networking diagram (separated deployment of spine and border nodes)

2.4 Software Description


FusionCompute

eDME


eBackup

UltraVR

HiCloud

eDataInsight

iMaster NCE-Fabric

eCampusCore

eContainer

2.4.1 FusionCompute

Overview
FusionCompute is a cloud OS. It virtualizes hardware resources and centrally manages virtual resources, service resources, and user resources. It
uses compute, storage, and network virtualization technologies to virtualize compute, storage, and network resources. It centrally schedules and
manages virtual resources over unified interfaces. FusionCompute provides high system security and reliability and reduces the OPEX, helping
carriers and enterprises build secure, green, and energy-saving data centers.

Technical Highlights
FusionCompute adopts virtualization management software, divides compute resources into multiple VM resources, and provides you with high-
performance, operational, and manageable VMs.

Supports on-demand allocation of VM resources.

Supports multiple OSs.

Uses QoS to ensure resource allocation and prevents users from affecting each other.

2.4.2 eDME

Overview
eDME is an intelligent O&M platform designed for DCS to centrally manage software and hardware. It also provides lightweight cloud solutions for
small- and medium-sized data centers.

Provides automatic management and intelligent O&M throughout the lifecycle of data center virtualization infrastructure, including planning,
construction, O&M, and optimization, helping customers simplify management and improve data center O&M efficiency.

Provides lightweight, elastic, agile, and efficient lightweight solutions, and provides multi-level VDCs, computing, storage, network, DR and
backup, security, database, and heterogeneous virtualization capabilities as services. This solves the challenges of difficult planning, use, and
management for customers and improves the efficiency of enterprise business departments.

Northbound and southbound APIs are opened. Northbound APIs can be used to connect to customers' existing O&M platforms and various
cloud management platforms. Southbound APIs can be used to take over third-party devices through plug-ins or standard protocols.

Application Scenarios
eDME centrally manages software and hardware in virtualization scenarios. It controls, manages, and collaboratively analyzes databases, data tools,
servers, switches, and storage devices. Based on Huawei's unified intelligent O&M management platform, eDME provides real-time O&M and


closed-loop problem solving based on alarms, a wide range of report analysis capabilities based on unified datasets, and automatic O&M capabilities
based on policies and AI. In the southbound and northbound ecosystem, eDME can connect to customers' existing O&M platforms and various
cloud management platforms through northbound protocols such as RESTful, SNMP, Telnet, and Redfish. In addition, eDME supports typical third-
party devices, including servers, switches, and storage devices, through the open standard southbound interfaces in the industry.
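A northbound RESTful integration typically follows the pattern sketched below. The base URL, resource path, response fields, and token shown here are hypothetical placeholders used only to illustrate the integration pattern; the actual endpoints and authentication method are defined in the eDME API reference.

# Hedged sketch of polling alarms over a RESTful northbound interface.
import requests

BASE_URL = "https://edme.example.internal"           # hypothetical address
HEADERS = {"X-Auth-Token": "EXAMPLE_TOKEN"}          # placeholder credential

resp = requests.get(f"{BASE_URL}/rest/alarms",       # hypothetical alarm resource
                    headers=HEADERS, verify=False, timeout=30)
resp.raise_for_status()
for alarm in resp.json().get("alarms", []):          # assumed response shape
    print(alarm.get("severity"), alarm.get("description"))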

2.4.3 eBackup

Overview
eBackup is Huawei-developed backup software for cloud and virtual environments. Employing VM/disk snapshot and Changed Block Tracking (CBT) technologies, eBackup provides comprehensive protection for user data in virtualization scenarios.
eBackup supports backup and restoration of VM data and disk data in virtualization scenarios.

Virtual Backup Solution


Based on Huawei FusionCompute virtualization platform, eBackup Virtual Backup Solution uses VM and disk snapshot and CBT technologies to
provide comprehensive protection for massive VM data. eBackup Virtual Backup Solution consists of the following:

Backup object
Indicates an object to be backed up. eBackup Virtual Backup Solution can protect data of VMs on Huawei FusionCompute.

eBackup backup software


Backs up and restores data in the virtual environment. The eBackup backup software can be deployed using a template or a software package.
The software cannot be deployed using a template and a software package at the same time in one eBackup system. Its performance can be
expanded by adding backup proxies.

Backup storage
Supports Network Attached Storage (NAS), Simple Storage Service (S3), and Storage Area Network (SAN) storage.

Backup policy
Supports the creation of protection policies for VMs and one or multiple disks. Permanent incremental backup is supported to reduce the
amount of data to be backed up.
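The role of CBT in permanent incremental backup can be sketched as follows: after the first full backup, only blocks whose content changed since the previous cycle are copied. This is a conceptual illustration, not the eBackup implementation; the block list and tracking structure are assumptions.

# Conceptual sketch of Changed Block Tracking (CBT) for incremental backup.
import hashlib

def changed_blocks(tracking: dict[int, str], disk: list[bytes]) -> dict[int, bytes]:
    """Return {block_index: data} for blocks whose digest differs from last time."""
    delta = {}
    for index, data in enumerate(disk):
        digest = hashlib.sha256(data).hexdigest()
        if tracking.get(index) != digest:
            delta[index] = data          # back up only this block
            tracking[index] = digest     # update the tracking table
    return delta

tracking: dict[int, str] = {}
disk_blocks = [b"boot", b"data-v1", b"log"]
print(len(changed_blocks(tracking, disk_blocks)))   # 3 blocks on the first (full) backup
disk_blocks[1] = b"data-v2"
print(len(changed_blocks(tracking, disk_blocks)))   # 1 changed block afterwards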

2.4.4 UltraVR

Overview
UltraVR is a piece of DR service management software for enterprise data centers. It enables you to configure DR services in a simple and efficient
manner, monitor the running status of DR services in a visualized manner, and quickly complete data recovery and tests.

Highlights
Convenience and efficiency
UltraVR supports process-based DR service configuration. In addition, it supports one-click DR tests, planned migration, fault recovery, and
reprotection.

Visualization
UltraVR enables you to easily manage the entire DR process by graphically displaying physical topologies of global DR and logical topologies
of service protection. You can easily understand the execution status of protected groups and recovery plans.

Integration
UltraVR integrates with storage resource management. It meets the DR O&M requirements in various application scenarios, such as active-
standby data centers, geo-redundancy with three data centers, and active-active data centers, reducing O&M costs and improving O&M
efficiency.

High reliability


The multi-site deployment of UltraVR improves the reliability of the DR service management and scheduling system. In addition, the
automatic backup of management data ensures quick recovery of the management system.

2.4.5 HiCloud

Overview
A cloud management platform (CMP) is a core component that implements unified management of heterogeneous resource pools in cloud data centers and converts resources into cloud services. It is what distinguishes cloud data centers from traditional data centers.
HiCloud is an industry-leading hybrid cloud management platform that provides unified management of heterogeneous resources. It supports the management of cloud services such as Huawei hybrid cloud and VMware. In addition, HiCloud supports dual stacks of x86 and Arm, and features unified service orchestration, resource scheduling, and cross-cloud deployment, which enable it to quickly adapt to customers' cloud platforms and promote service innovation.

Highlights
HiCloud builds industry-leading competitiveness based on the following aspects:

Builds a service-oriented/component-based open architecture (based on the microservice architecture).

Supports Docker image installation and fast boot based on the Docker container technologies.

Supports interconnection with third-party devices of IT O&M management systems (ITSM, ITOM, and CMDB).

Takes over multiple heterogeneous resource pools.

Takes over x86 physical servers.

Provides HA cloud management platform solutions based on the customers' service requirements and live network environment.

Implements automatic allocation, automatic billing, unified management, and unified O&M on tenant resources, improving rollout and O&M
efficiency.

2.4.6 eDataInsight

Overview
Huawei eDataInsight is a distributed data processing system, which provides large-capacity data storage, query, and analysis capabilities. It can meet
the following enterprise requirements:

Quickly consolidates and manages massive amounts of data of various types.

Performs advanced analysis of native information.

Visualizes all available data for special use.

Provides a development environment for new analysis applications.

Optimizes and schedules workloads.

Application Scenarios
The advent of competition in Internet finance poses challenges to financial enterprises and urges them to rebuild their decision-making and service systems based on big data analysis and mining to improve their competitiveness and customer satisfaction. In the big data era, banks need to focus on data instead of transactions to address the challenges of real-time processing of multidimensional, massive data and of Internet business.
Huawei eDataInsight can solve the problems of financial enterprises from different aspects and improve their competitiveness. For example:

Real-time query of historical transaction details


User transactions made in the past seven years or longer can be queried in real time. Hundred-TB-level of historical data can be queried in
milliseconds.


Real-time credit investigation


The time for credit investigation on a user is reduced from about 3 days to less than 10 minutes.

Micro- and small-loan service prediction


The prediction of the top 1,000 potential micro- and small-loan service users is over 40 times more accurate than that of conventional models.

Precise push

Distributed online banking logs can be collected more efficiently. User preference is analyzed based on the online banking logs to
achieve precise push, greatly improving online banking users' experience.

All target users can be covered with just less than 20% of original recommendation SMS messages to achieve precise push.

2.4.7 iMaster NCE-Fabric

Overview
iMaster NCE-Fabric functions as the SDN controller to manage switches in data centers and automatically deliver service configurations.

Highlights
iMaster NCE-Fabric implements network automation and automatic and fast service orchestration.

iMaster NCE-Fabric is used to take over network devices.

iMaster NCE-Fabric associates network resources with compute resources to implement automatic network configuration, reducing the
configuration workload of network administrators.

FusionCompute interconnects with iMaster NCE-Fabric to associate compute and network resources, implementing automatic provisioning of
virtual networks and automatic network configuration during VM provisioning, HA, and migration.

2.4.8 eCampusCore

Overview
As a core component of the campus digital platform, eCampusCore is dedicated to building an enterprise-level integration platform. It provides IT
system and OT device connection, heterogeneous AI algorithm integration, lightweight data processing, and application O&M capabilities for digital
scenarios in the enterprise market. In addition, it provides flexible multi-form deployment adaptation capabilities for industry solutions to support
enterprise digital transformation.

Application and Data Integration Service


The application and data integration service is used to build an enterprise-level connection platform for connecting enterprise IT systems to OT
devices. It provides multiple connection options including API, message, data, and device access to enable enterprises to create digital twins based
on the physical world and speed up their digital transformation.
On the operation portal, it provides the instance service for the system integration and device integration services, efficiently connecting IT systems
to OT devices. In addition, an API gateway is provided for service openness and supports the management of all open APIs, including the calling,
traffic, authorization, access control, monitoring, and API versions of the open APIs. On the O&M portal, it provides the PaaS instance management
service for service enabling and maintenance, including the service instance provisioning, alarm reporting, and inspection.

2.4.9 eContainer

Overview
Huawei eContainer is a cloud-native container platform that provides cloud-native infrastructure management and unified orchestration and
scheduling of K8s-based containerized applications. This platform addresses the core requirements of enterprises undergoing cloud-native digital


transformation. eContainer offers two services in DCS: Elastic Container Engine (ECE) and SoftWare Repository for Container (SWR). The primary
capabilities include:

Custom container networks


You can customize subnet segments in a VPC, and deploy ECS nodes of K8s clusters and other services in the subnets as required.

Availability of all basic functions by default


By default, DCS ECE provides you with various necessary plug-ins such as CNI, CSI, and CRI plug-ins, facilitating easy use of your container
services.

Elastic and flexible connection to external networks


You can expose microservices deployed on K8s clusters to external users through the standard ELB provided by DCS ECE, without worrying that your services will be inaccessible to external users.

Comprehensive security protection


DCS ECE implements security hardening and reliability enhancement based on K8s of the community version, reducing the possibility of
external malicious attacks on your clusters and thus protecting your services.

Comprehensive monitoring and alarm reporting functions


By default, DCS ECE provides various monitoring indicators such as the CPU, memory, GPU, and NPU, displays monitoring indicators on the
GUI, and automatically generates alarms, allowing you to monitor your services at any time.

Containerized application management


DCS ECE allows you to manage applications deployed using Helm charts or images.

Full-lifecycle management of images

You can manage container images easily without having to build and maintain a platform.

You can manage container images throughout their lifecycles, including uploading, downloading, and deleting container images.

The community Registry V2 protocol is supported. Container images can be managed through the community CLI (such as containerd, iSula, and Docker) and native APIs, as illustrated in the sketch after this list.

A container image security isolation mechanism is provided at the resource set granularity based on the multi-tenant service to secure
data access.

The image storage space can be flexibly configured based on service requirements, reducing initial resource costs.

Seamless integration with ECE


ECE automatically interconnects with SWR through secure links and internal accounts. In this way, you can directly select uploaded container
images to deploy service applications, simplifying the deployment process.
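Because SWR is compatible with the Registry V2 protocol, a standard Registry V2 client interaction can illustrate how repositories and image tags are listed. The registry address and credentials below are placeholders; only the protocol paths (/v2/_catalog and /v2/<name>/tags/list) come from the Registry V2 specification.

# Sketch of listing repositories and tags from a Registry V2-compatible repository.
import requests

REGISTRY = "https://swr.example.internal"            # hypothetical address
AUTH = ("EXAMPLE_USER", "EXAMPLE_PASSWORD")          # placeholder credentials

repos = requests.get(f"{REGISTRY}/v2/_catalog", auth=AUTH, timeout=30).json()
for name in repos.get("repositories", []):
    tags = requests.get(f"{REGISTRY}/v2/{name}/tags/list", auth=AUTH, timeout=30).json()
    print(name, tags.get("tags"))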

Application Scenarios
DCS AI full-stack
The DCS AI full-stack solution is built based on the DCS eContainer platform. It provides an underlying AI platform for training and inference in
healthcare, finance, coal mining, scientific research, and many other scenarios.

It supports large models of multiple large model vendors.

The XPU K8s cluster provided by ECE interconnects with AI development platform ModelMate, enabling AI users to provision their own AI
platforms to complete end-to-end AI training and inference as well as data processing.

SWR provides container image management capabilities for the AI full-stack solution. It supports unified management of container images
related to AI training and inference and data processing services, simplifying the process of deploying containerized applications.
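Since ECE clusters are standard K8s clusters, the community Kubernetes Python client can be used against an exported kubeconfig to inspect nodes and their accelerator capacity, as sketched below. The extended resource name is an example and depends on the device plug-in actually installed in the cluster.

# Hedged sketch: inspect ECE cluster nodes with the community K8s Python client.
from kubernetes import client, config

config.load_kube_config()                      # kubeconfig exported from the cluster
v1 = client.CoreV1Api()
for node in v1.list_node().items:
    capacity = node.status.capacity or {}
    print(node.metadata.name,
          "cpu:", capacity.get("cpu"),
          "gpu:", capacity.get("nvidia.com/gpu", "0"))   # example extended resource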

2.5 Hardware Description


Server

Switch


Storage Device

iMaster NCE-Fabric Appliance

2.5.1 Server
For details about servers supported by DCS, visit Huawei Storage Interoperability Navigator.

2.5.2 Switch
Table 1 lists the switches supported by DCS. For details about other switches, visit Huawei Storage Interoperability Navigator.

Table 1 Typical switches supported by DCS

Category Model

10GE switch CE6881-48S6CQ

CE6857F-48S6CQ

CE6863-48S6CQ (only for SDN)

CE6863E-48S6CQ (only for SDN)

CE6881-48T6CQ (only for SDN)

CE9860-4C-E1 (only for SDN)

CE8850-64CQ-E1 (only for SDN)

CE8851 (only for SDN)

CE16804 (only for SDN)

FM6865E-48S8CQ (only for SDN)

FM6865-48S8CQ-S1 (only for SDN)

FM6857E-48S6CQ-E1 (only for SDN)

FM6857E-48S6CQ

FM6857-48S6CQ-EI

FM8850-64CQ-E1 (only for SDN)

FM8861-4C-E1 (only for SDN)

25GE switch CE6860-SAN

100GE switch CE8850-SAN

GE switch FM5855E-48T4S2Q

2.5.3 Storage Device


For details about storage devices supported by DCS, visit Huawei Storage Interoperability Navigator.

2.5.4 iMaster NCE-Fabric Appliance


The appliance is a physical server preinstalled with the iMaster NCE-Fabric software. Figure 1 shows the appearance of the appliance.

Figure 1 Appearance of the FusionServer Pro 2288X V5 rack server


2.6 System Security

Security Threats
In addition to addressing security threats of traditional data centers, new data centers also face the following new security threats and challenges:

Storage-layer infrastructure

If static data is damaged and the error cannot be detected immediately, incorrect data may be returned to the host, causing service
exceptions.

Data may not be entirely cleared after the compute resource or storage space is released.

The data processing may breach laws and regulations.

Network-layer infrastructure

The distributed deployment of data center resources complicates route and domain name configuration and therefore makes the data
center more vulnerable to network attacks, such as domain name server (DNS) and distributed denial-of-service (DDoS) attacks. DDoS
attacks come not only from the external network but also from the internal network.

Logical isolation instead of physical isolation and the change of the network isolation model produce security vulnerabilities in the
original isolation of an enterprise network.

Multiple tenants share compute resources, which may result in risks in resource sharing such as user data leakage, caused by improper
isolation measures.

Host-layer infrastructure

The hypervisor runs at the highest privilege level (even higher than that of the OS). If the hypervisor is compromised, all VMs running on it are fully exposed to attack.

If VMs do not have security measures or security measures are not automatically created, keys for accessing and managing VMs may
be stolen, services (such as FTP and SSH) that are not patched in a timely manner may be attacked, accounts with weak passwords or
without passwords may be stolen, and systems that are not protected by host firewalls may be attacked.

O&M management security


Users cannot exclusively control resources and therefore have higher requirements for system access and authentication.

Security Architecture
Huawei provides a security solution to face the threats and challenges posed to virtualization. Figure 1 shows the security solution architecture.

Figure 1 Security solution architecture


Data storage security

User data on different VMs is isolated at the virtualization layer to prevent data theft and ensure data resilience.

Data access control is implemented. In FusionCompute, different access policies are configured for different volumes. Only users with
the access permission can access a volume, and different volumes are isolated from each other.

Remaining information is protected. When reclaiming resources, the system can format the physical bits of logical volumes to ensure
data resilience. After the physical disks of a data center are replaced, the system administrator of the data center needs to degauss them
or physically destroy them to prevent data leakage. Data storage uses a reliability mechanism. One or more copies of backup data are
stored so that data is not lost and services are not affected even if storage devices such as disks become faulty.

Cyber resilience

Network isolation is adopted. The network communication plane is divided into the service plane, storage plane, and management
plane, and these planes are isolated from each other. As a result, operations of the management platform do not affect service running
and end users cannot damage basic platform management.

Network transmission security must be ensured. Data transmission may be interrupted, and data may be replicated, modified, forged, intercepted, or monitored during transmission. Therefore, it is necessary to ensure the integrity, confidentiality, and validity of data during network transmission. HTTPS is used for pages that contain sensitive data, and SSL-based transmission channels are used for system administrators to access the management system. Users access VMs using HTTPS, and data transmission channels are encrypted using SSL.


Host security

VM isolation is implemented. Resources of VMs on the same physical host are isolated, preventing data theft and malicious attacks
and ensuring the independent running environment for each VM. Users can only access resources allocated to their own VMs, such as
hardware and software resources and data, ensuring secure VM isolation.

OS hardening is implemented. Compute nodes, storage nodes, and management nodes run on EulerOS Linux. Host OS security is ensured by the following security configurations (a minimal audit sketch follows this list):

Disable unnecessary services, such as Telnet and FTP.

Harden SSH services. Control access permissions for files and directories.

Restrict system access permissions.

Manage user passwords.

Record operation logs.

Detect system exceptions.

Security patches are provided. Software design defects result in system vulnerabilities. System security patches must be installed
periodically to fix these vulnerabilities and protect the system against attacks by viruses, worms, and hackers. The patches include
virtualization platform security patches and user VM security patches.

O&M management security


Web security is ensured. When users access the web service platform using HTTP, the platform automatically redirects their access requests to HTTPS links to enhance access security.

Role-based permission management assigns different permissions for different resources to users to ensure system security.

Log management is implemented. Logs record system running statuses and users' operations on the system, and can be used to query
user behaviors and locate problems. Logs are classified into operation logs and run logs. Operation logs record system security
information.
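The host hardening items listed above can be spot-checked with a small audit script of the kind sketched below. This is an illustrative example, not a product tool; the checked directives and expected values are assumptions based on common SSH hardening guidance.

# Illustrative audit of a few sshd_config hardening directives.
EXPECTED = {"PermitRootLogin": "no", "PasswordAuthentication": "no"}

def audit_sshd(path: str = "/etc/ssh/sshd_config") -> list[str]:
    findings = []
    try:
        with open(path) as f:
            entries = [line.split() for line in f
                       if line.strip() and not line.lstrip().startswith("#")]
    except FileNotFoundError:
        return [f"{path} not found"]
    actual = {parts[0]: parts[1] for parts in entries if len(parts) >= 2}
    for key, expected in EXPECTED.items():
        if actual.get(key, "").lower() != expected:
            findings.append(f"{key} should be '{expected}', found '{actual.get(key, 'unset')}'")
    return findings

print(audit_sshd())   # an empty list means the checked directives look hardened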

Security Value
Unified and comprehensive security policies
The centralized management of compute resources makes it easier to deploy boundary protection. Comprehensive security management
measures, such as security policies, unified data management, security patch management, and unexpected event management, can be taken to
manage compute resources. For users, this also means having professional security expert teams to protect user resources and data.

Low security costs


Since security measures are taken for all virtual resources shared among many users, security costs paid by each user are low.

Efficient security protection


Leveraging fast and elastic resource allocation capabilities, the virtualization system can efficiently provide security protection for filtering,
traffic shaping, encryption, and authentication processes, and dynamically allocate compute resources to improve processing efficiency.

2.7 Technical Specifications


Technical specifications include VM specifications, O&M management specifications, and operation management specifications. For details, log in
to Specifications Query.

2.8 Feature Description


Compute

Storage

Network

DR

Backup

Multi-Tenancy

O&M Management

2.8.1 Compute
FusionCompute virtualizes compute resources and uses eDME to provision VMs on a unified page.

Overview
FusionCompute uses the compute resource virtualization technology to virtualize compute resources and manage virtual resources and service
resources in a centralized manner. Multiple VMs can be deployed on one physical server so that one server can function as multiple servers.

Technical Highlights
Improves resource utilization of data center infrastructure.


Shortens the service rollout period.

Reduces the power consumption of the data center.

Leverages high availability and powerful restoration capabilities of virtualized infrastructure to provide rapid automatic fault recovery for
services, reducing data center costs and increasing system uptime.

2.8.2 Storage

Virtualized storage
Storage virtualization abstracts storage devices as datastores. VMs are stored as a group of files in their own directories in datastores. A datastore is a
logical container that is similar to a file system. It hides the features of each storage device and provides a unified model to store VM files. Storage
virtualization helps the system better manage virtual infrastructure storage resources, greatly improving storage resource utilization and flexibility
and increasing application uptime.
The following storage units can be encapsulated as datastores:
Logical unit numbers (LUNs) on storage area network (SAN) storage, including Internet Small Computer Systems Interface (iSCSI) and fibre
channel (FC) SAN storage
File systems on network attached storage (NAS) devices
Storage pools of mass block storage
Local disks of hosts
Storage pools of eVol storage

Block storage service


eDME is used to provision and manage block storage services, including provisioning and managing LUNs on storage devices and managing hosts
that use the LUNs.
Overview
The block storage service manages the process of storage resource pooling and usage. A storage device usually consists of multiple disks. The
storage space of these disks is integrated into a storage pool. LUNs created from the storage pool can be mapped to hosts as logical disks to provide
storage services for the hosts.
eDME supports LUN provisioning based on service levels. A service level is a logical collection of storage pools. It automatically adjusts
performance indicators of LUNs based on their service loads to adapt to different service scenarios.
Technical highlights

Provisions resources automatically based on service levels. Storage resources are allocated on demand based on service scenarios, maximizing
storage resource utilization.

Provides convenient task management. You can view the steps and execution results of the block storage service provisioning process in the
task center.

File storage service


eDME is used to provision and manage file storage services, including creating and managing file systems, dtrees, network file system (NFS) shares,
and common Internet file system (CIFS) shares on storage devices.
Overview
A file system is created on a storage device to centrally manage resource information related to the file systems and provide file storage and sharing
capabilities for external systems.
Technical highlights

Orchestrates the creation, deletion, and modification operations of file systems. You can customize parameters to configure related resources in
one step.

Provides convenient task management. You can view the steps and execution results of the file storage service provisioning process in the task
center.


2.8.3 Network
The distributed virtual switch (DVS) service depends on the FusionCompute virtualization suite. After hardware virtualization is complete, you can
provision and manage DVSs in eDME to enable communication among VMs and between VMs and external networks.
Definition
A DVS connects to VM NICs through port groups and to host physical NICs through uplinks, thereby connecting VMs to the external network, as
shown in Figure 1.

Figure 1 Network access of VMs

Table 1 describes the concepts of each network element (NE) in the figure.

Table 1 Concepts

NE Description

DVS: A DVS is similar to a Layer 2 physical switch. It connects to VMs through port groups and connects to physical networks through uplinks.

Port group: A port group is a virtual logical port, similar to a network attribute template, used to define the VM NIC attributes and the mode in which a VM NIC connects to the network through a DVS. When VLAN is used: No IP address is assigned to the VM NIC that uses the port group corresponding to the VLAN (you need to manually assign an IP address to the VM NIC), but the VM connects to the VLAN defined by the port group.

Uplink: An uplink connects a DVS to a physical NIC on a host for VM data transfer.
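The relationships among the NEs in Table 1 can be summarized in a small conceptual model: a DVS owns port groups and uplinks, and each port group carries the VLAN and NIC attributes applied to every VM NIC attached to it. The class and field names below are illustrative assumptions, not FusionCompute objects.

# Conceptual data model of DVS, port group, and uplink relationships.
from dataclasses import dataclass, field

@dataclass
class PortGroup:
    name: str
    vlan_id: int                 # VM NICs attached here join this VLAN

@dataclass
class Uplink:
    host_nic: str                # physical NIC on the host, e.g. "eth2"

@dataclass
class DistributedVirtualSwitch:
    name: str
    port_groups: list[PortGroup] = field(default_factory=list)
    uplinks: list[Uplink] = field(default_factory=list)

dvs = DistributedVirtualSwitch("DVS-Service")
dvs.port_groups.append(PortGroup("pg-app", vlan_id=101))
dvs.uplinks.append(Uplink("eth2"))
print(dvs)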

Benefits

Beneficiary: Customer
Benefit: The logical architecture is similar to that of traditional switches, which is easy for IT O&M personnel to understand and use. You can view the steps and execution results of the DVS provisioning process on the task center page.

Network Overlay SDN

2.8.3.1 Network Overlay SDN


Network virtualization – computing
DCS uses iMaster NCE-Fabric in the network virtualization-computing scenario to interconnect with the network overlay SDN management platform. Compute and network resources are managed separately. In addition, the solution implements automatic network configuration, interconnection with the compute resource platform, and collaborative resource allocation and scheduling, flexibly and conveniently managing resources and delivering services.
Figure 1 shows the architecture of the network virtualization – computing scenario.

Figure 1 Network virtualization – computing scenario


Service provisioning consists of the following parts:

Network service provisioning: The network administrator uses the network controller to allocate network resources to specified services
or applications. The network controller automatically delivers service configurations of overlay networks and access configurations of
VMs or PMs.

Compute service provisioning: The compute administrator creates, deletes, and migrates compute and storage resources using
FusionCompute. iMaster NCE-Fabric can automatically detect the operations performed by FusionCompute on compute resources.

In the network virtualization – computing scenario, the functions of each layer are as follows:

Service presentation layer/Network control layer

The service presentation layer is oriented to data center users. iMaster NCE-Fabric provides GUIs for network administrators,
implementing network service orchestration, policy provisioning, automated deployment, and O&M management.

The network control layer is the core of the network virtualization – computing scenario. iMaster NCE-Fabric implements network modeling and instantiation, coordinates virtual and physical networks, and provides network resource pooling and automation. In addition, as the key component that separates SDN network control from forwarding, iMaster NCE-Fabric builds a view of the entire network to uniformly control and deliver service flow tables.

The network service layer is the infrastructure of a data center network, providing high-speed channels for carrying services, including
L2-L3 basic network services and L4-L7 value-added network services. The network service layer uses the flat spine-leaf architecture. As
core nodes on the Virtual Extensible LAN (VXLAN) fabric network, spine nodes provide high-speed IP forwarding, and connect to leaf
nodes of various functions through high-speed interfaces. As access nodes on a VXLAN fabric network, leaf nodes connect various
network devices to the VXLAN network.

The compute access layer supports access from virtualized servers and physical servers.

A virtualized server is a physical server that has been virtualized into multiple VMs and vSwitches using virtualization technologies. VMs connect to the fabric network through vSwitches. iMaster NCE-Fabric is compatible with mainstream server virtualization products.

iMaster NCE-Fabric treats physical servers as logical ports, and physical servers connect to the fabric network through these logical ports.

iMaster NCE-Fabric
Definition
On the iMaster NCE-Fabric page, configure a port group and associate the specified VLAN of the port group with the logical switch. Different
port groups of the same logical switch communicate with each other at Layer 2 through the logical switch, as shown in Figure 2.


Figure 2 iMaster NCE-Fabric network access schematic diagram

Table 1 describes the concepts of each network element (NE) in the figure.

Table 1 Concepts

LogicSwitch: By configuring a port group, you can associate a specified VLAN of the port group with a logical switch.

PortGroup: A port group is a virtual logical port which is similar to a network attribute template, used to define the VM NIC attributes and the mode in which a VM NIC connects to the network through a DVS. When VLAN is used, no IP address is assigned to the VM NIC that uses the port group corresponding to the VLAN (you need to manually assign an IP address to the VM NIC), but the VM connects to the VLAN defined by the port group.

vm: A virtual machine (VM).

SDN: DCS uses iMaster NCE-Fabric to implement network automation and automatic and fast service orchestration.

Computing association: FusionCompute associates with iMaster NCE-Fabric. iMaster NCE-Fabric detects when VMs go online, go offline, or migrate, and automatically configures the VM interworking network.

FusionCompute: The solution uses the network overlay SDN solution and supports association between FusionCompute and iMaster
NCE-Fabric to implement automatic provisioning of virtual network services and automatic network configuration during VM
provisioning, HA, and migration.

Benefits

Beneficiary Benefit

Customer The overlay virtualized network based on the Virtual Extensible LAN (VXLAN) and SDN enables configuration of server virtualization
and network automation without changing the existing network.
This simplifies conventional network deployment, enables fast service rollout, improves service deployment flexibility, and meets
customers' requirements for dynamic service changes.

2.8.4 DR
DCS uses the unified management software UltraVR to provide multiple DR solutions, including the local high availability (HA) solution,
metropolitan HA solution, active-standby DR solution, and geo-redundant 3DC DR solution. It provides customers with all-region and all-scenario
DR solutions within a single data center, between data centers, and to the cloud, maximizing the service continuity.
DCS provides key capabilities such as unified DR management, DR process automation, failover, reprotection, scheduled migration, and DR drills.
The local HA solution uses the storage active-active feature (HyperMetro) and virtualization HA capability to ensure zero RPO and minute-level
RTO in a single data center in the event that storage devices or hosts are faulty, maximizing the customer service continuity.

Figure 1 Solution architecture


The metropolitan HA solution uses the storage active-active feature and VM HA capability of two data centers in the same city to ensure that the intra-city DR center can quickly take over services from the production center if the production center is faulty. The solution enables zero RPO and minute-level RTO, maximizing service continuity.

Figure 2 Homogeneous virtualization, homogeneous storage DR solution architecture

The active-standby DR solution replicates data between two data centers that are located far from each other. If the production center is faulty, the DR software can be used for failover, providing minute-level RPO and hour-level RTO DR capabilities and maximizing service continuity. In addition, the solution provides multiple automatic management functions, such as DR drills, reprotection, and failback.
The active-standby DR solution is classified into the active-standby DR solution based on storage-layer replication and the active-standby DR solution based on application-layer replication.
The active-standby DR solution based on storage-layer replication uses the Huawei storage replication capability (HyperReplication) and Huawei
DR management software UltraVR to implement synchronous or asynchronous DR protection.

Figure 3 Active-standby DR solution based on storage-layer replication

The active-standby DR solution based on application-layer replication uses Information2's byte-level replication and SQL semantic-level replication
to implement asynchronous active-standby DR protection.

Figure 4 Active-standby DR solution based on application-layer replication

The geo-redundant 3DC DR solution is a combination of the metropolitan HA solution and the active-standby DR solution. It provides DR protection by storing multiple copies of the same data across data centers. This enables service recovery when any two data centers fail, maximizing service continuity.
The network architecture of the geo-redundant 3DC DR solution is classified into the following two types:
Cascading architecture: The metropolitan active-active DR solution is deployed between the production center and intra-city DR center. In addition,
the active-standby DR solution is deployed between the intra-city DR center and remote DR center.


Figure 5 Cascading architecture

Ring architecture: The metropolitan HA solution is deployed between the production center and intra-city DR center, and the active-standby DR
solution is deployed between the production center and remote DR center. In addition, the active-standby DR solution is deployed between the intra-
city DR center and remote DR center as a backup DR solution. If the replication link between the production center and remote DR center is faulty, a
replication link is enabled between the intra-city DR center and the remote DR center.

Figure 6 Ring architecture

2.8.5 Backup
DCS provides backup solutions for VMs and applications on Huawei virtualization platforms, such as centralized backup, to effectively cope with
data damage or loss caused by human errors, viruses, or natural disasters.
The centralized backup solution combines eBackup with Huawei backup storage, general-purpose storage, or cloud storage, and uses VM snapshot and Changed Block Tracking (CBT) technologies to implement high-performance full and incremental backup of VMs. It provides VM- or disk-level restoration together with efficient deduplication and compression.

Figure 1 Centralized backup

2.8.6 Multi-Tenancy
Multi-tenant service management presents a single pool of virtual resources as multiple resource sets for multiple tenants. These tenants share the same platform, but the resources of different tenants are isolated, and each tenant can view and use only its own resources. This saves IT investment and simplifies O&M management.
ECS


BMS

IMS

AS

Elastic Container Engine

SWR

Block Storage Service

OBS

SFS

VPC Service

EIP Service

Security Group Service

NAT Service

ELB

vFW

DNS

VPN

Public Service Network

CSHA

Backup Service

VMware Cloud Service

Application and Data Integration Service

2.8.6.1 ECS
What Is an ECS?

Advantages

Application Scenarios

Related Services

Implementation Principles


2.8.6.1.1 What Is an ECS?

Definition
An Elastic Cloud Server (ECS) is a virtual compute server that consists of vCPUs, memory, disks, and other required resources. ECSs are easy to
obtain and scalable. In addition, you can use ECSs on demand. The ECS service works with storage services, Virtual Private Cloud (VPC), and
Cloud Server Backup Service (CSBS) services to build an efficient, reliable, and secure computing environment, ensuring stability and continuity of
your data and applications. The resources used by the ECS service, including vCPUs and memory, are hardware resources that are consolidated
using the virtualization technology.
When creating an ECS, you can customize the number of vCPUs, memory size, image type, and more. After an ECS is created, you can use it just as you would a local computer or physical server. ECSs provide relatively inexpensive compute and storage resources on demand, and a unified management platform simplifies management and maintenance, enabling you to focus on services.

Functions
The ECS service allows you to perform the following operations. For details about the application process and supported functions, see Table 1.

When applying for an ECS, you can configure the ECS's specifications, images, network, disks, and advanced parameters.

Manage the lifecycle of ECSs, including starting, stopping, restarting, and deleting them; clone ECSs; convert ECSs into images; create
snapshots for ECSs; modify vCPUs and memory of ECSs.

2.8.6.1.2 Advantages
Compared with traditional servers, ECSs are easy to provision and use, and have high reliability, security, and scalability.

Table 1 Comparison of ECSs with traditional servers

Item: Reliability
ECS: The ECS service can work with other cloud services, such as storage services and disaster recovery & backup, to allow specification modification, data backup, recovery using a backup, and rapid recovery from a fault.
Traditional server: Traditional servers, subject to hardware reliability issues, may easily fail. You need to manually back up their data and manually restore it, which may be complex and time-consuming.

Item: Security
ECS: The security service ensures that ECSs work in a secure environment, protects your data, hosts, and web pages, and checks whether ECSs are under brute force attacks and whether remote logins are performed, enhancing your system security and mitigating the risks of hacker intrusion.
Traditional server: You need to purchase and deploy security measures additionally. It is difficult to perform access control on multiple users across multiple servers.

Item: Scalability
ECS: You can modify the ECS specifications, including the number of vCPUs and memory size. You can expand the capacity of the system disk and data disks.
Traditional server: Configurations are fixed and can hardly meet changing needs. A hardware upgrade is required to modify the configuration, which takes a long time, and the service interruption time is uncontrollable. Service scalability and continuity cannot be guaranteed.

Item: Ease of use
ECS: A simple and easy-to-use unified management console streamlines operations and maintenance. A wide range of products are provided, including network, storage, DR, and more, which can be provisioned and deployed in a one-stop manner.
Traditional server: Without software support, users must repeat all steps when adding each new server. It is difficult to obtain all required services from one service provider.

Item: Ease of provision
ECS: After deploying an entire cloud and finishing necessary configurations, you can customize the number of vCPUs, memory size, images, and networks to apply for ECSs at any time.
Traditional server: When using traditional servers, you must buy and assemble the components and install the operating systems (OSs) yourself.

2.8.6.1.3 Application Scenarios


ECSs are cloud servers that can be rapidly provisioned and scaled to suit your changing demands. They provide you with relatively inexpensive
compute and storage resources on demand. A unified management platform simplifies management and maintenance, enabling you to focus on
services.


We provide multiple types of ECSs to meet requirements of various scenarios. ECSs are widely used for:

Simple applications or small-traffic websites


Simple applications or small-traffic websites, such as blogs and enterprise websites, have relatively low requirements on the computing and
storage performance of the server. A general-purpose ECS will meet the requirements. If you have higher requirements on CPUs, memory, data
disks, or the system disk of an ECS, you can modify the ECS specifications or expand disk capacity. If you need to increase the number of
ECSs, you can also apply for new ECSs at any time.

Multimedia making, video making, and image processing


In multimedia making, video making, or image processing scenarios, ECSs must provide good image processing capabilities. For these
scenarios, you can choose ECSs with high CPU and GPU computing performance, such as GPU graphics-accelerated or GPU-computing-
accelerated ECSs, to meet your service requirements.

Databases and other applications that require fast data exchange and processing
For high-performance relational databases, NoSQL databases, and other applications that require high I/O performance on servers, you can
choose ultra-high I/O ECSs and use high-performance local NVMe SSDs as data disks to provide better read and write performance and lower
latency, improving the file read and write rate.

Applications with apparent load peaks and troughs


For applications that have noticeable load peaks and troughs, such as video websites, school course selection systems, and game companies,
the number of visits may increase significantly within a short time. To improve resource utilization and ensure that your applications run
properly, you can use Auto Scaling (AS) to work with ECSs. You can configure AS policies so that ECSs are automatically added and removed
during traffic peaks and troughs, respectively. This helps maximize resource utilization and also meet service requirements, thereby reducing
costs.

2.8.6.1.4 Related Services


The ECS service can work with other cloud services to provide you with a stable, secure, highly-available, and easy-to-manage network experience.
The following figure shows services that may be used together with ECS. For details, see Table 1.

Table 1 Relationship between ECS and other cloud services

Service Name Description

Block storage service The block storage service provides the storage function for ECSs. Users can create EVS disks online and attach them to
ECSs.

Image Management Service You can apply for an ECS using a public, private, or shared image. You can also convert an ECS to a private image.
(IMS)

Virtual Private Cloud (VPC) VPC provides networks for ECSs. You can use the rich functions of VPC to flexibly configure a secure running
environment for ECSs.

2.8.6.1.5 Implementation Principles

Architecture
Figure 1 ECS logical architecture


Table 1 Component details

Type Description

Console ECS_UI is a console centered on the ECS service and manages relevant resources.

Composite API (ECS) Provides a backend service for ECSs. It can be seen as the server end of ECS_UI, and can call FusionCompute components. ECS requests sent from the console are forwarded by ECS_UI to Composite API and are returned to ECS_UI after being processed by Composite API.

Resource pool FusionCompute can:


Manage the lifecycle of compute instances, for example, creating instances in batches, and scheduling or stopping instances on
demand.
Provide persistent block storage for running instances. Facilitate block storage creation and management with pluggable drives.
Provide APIs for network connectivity and addressing.

Unified authentication Provides Identity and Access Management (IAM) during login.

Unified operation Composite API reports ECS quota, order, product information, and metering and charging information to the eDME operation
module.

Unified O&M Composite API reports ECS log, monitoring, and alarm information to the eDME O&M module.

Workflow
The following figure shows the workflow for creating an ECS.

Figure 2 Workflow for creating an ECS


The steps in the figure above are as follows:

1. Submit the application on the ECS page, corresponding to step 1 in the preceding figure.

2. Create storage resources, corresponding to step 3 to step 4 in the preceding figure.

a. The ECS API of Composite API calls the EVS API of Composite API.

b. EVS creates volumes in the storage pool according to storage resource application policies.

3. Create network resources, corresponding to step 5 to step 6 in the preceding figure.

a. The ECS API of Composite API calls the VPC API of Composite API.

b. Region Type II: The VPC API calls the DVN service to create an EIP, a port, and more.
Region Type III: The VPC API calls the DVN service to create a port and more.

4. Create compute resources, corresponding to steps 4, 6, and 7 in the preceding figure.

a. The ECS interface delivers the request to FusionCompute to create an ECS in the compute resource pool.
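The call sequence above can be summarized in a hedged Python sketch. This is not the eDME or Composite API client: the endpoint, URL paths, payload fields, and token handling are assumptions made only to mirror the documented order of EVS (storage), VPC (network), and FusionCompute (compute) calls.

import requests

BASE = "https://composite-api.example.com"   # hypothetical Composite API endpoint
TOKEN = {"X-Auth-Token": "<token>"}          # placeholder credential

def create_ecs(flavor: str, image_id: str, disk_gb: int, subnet_id: str) -> dict:
    """Illustrative orchestration mirroring the documented workflow steps."""
    # Step 2: the ECS API calls the EVS API to create the volumes in the storage pool.
    volume = requests.post(f"{BASE}/evs/volumes", headers=TOKEN,
                           json={"size": disk_gb, "image_id": image_id}).json()

    # Step 3: the ECS API calls the VPC API, which creates the network port
    # (plus an EIP in Region Type II deployments).
    port = requests.post(f"{BASE}/vpc/ports", headers=TOKEN,
                         json={"subnet_id": subnet_id}).json()

    # Step 4: the request is delivered to FusionCompute, which creates the ECS
    # in the compute resource pool using the prepared volume and port.
    server = requests.post(f"{BASE}/ecs/servers", headers=TOKEN,
                           json={"flavor": flavor,
                                 "volume_id": volume["id"],
                                 "port_id": port["id"]}).json()
    return server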

2.8.6.2 BMS
BMS Definition

Benefits

Application Scenarios

Functions


Related Services

2.8.6.2.1 BMS Definition


Bare Metal Servers (BMSs) feature both the high reliability of dedicated hosted servers and the scalability of cloud resources. They provide dedicated servers on the cloud, delivering the excellent computing performance and data security required by core databases, critical applications, high-performance computing (HPC), and Big Data. Tenants can apply for and use BMSs on demand.
The BMS self-service provision feature allows users to apply for a BMS by themselves. When applying for a BMS, users only need to specify the
server type, image, network, and other configurations.

2.8.6.2.2 Benefits
Compared with VMs and PMs, BMSs have no feature or performance loss. For details, see Table 1. Y indicates supported, N indicates unsupported,
and N/A indicates that the function is not involved.

Table 1 Feature comparison

Function Category Function BMS PM VM

Delivery mode Automatic provisioning Y N Y

Computing No functionality loss Y Y N

No performance loss Y Y N

No contention for resources Y Y N

Storage Local storage Y Y N

Booting from an EVS disk (system disk) Y N Y

Use of images (free from OS installation) Y N Y

Configurable RAID card Y Y N/A

Network VPC Y N Y

User-defined network Y N N

Communication between physical servers and VMs through a VPC Y N Y

Management and control Consistent remote login experience as VMs Y N Y

Monitoring and auditing of key operations Y N Y

2.8.6.2.3 Application Scenarios


BMSs are used in a wide range of scenarios, including:

Security-demanding scenario
Financial and security industries have high compliance requirements, and some customers have strict data security requirements. BMSs meet
the requirements for exclusive, dedicated resource usage, data isolation, as well as operation monitoring and tracking.

High-performance computing scenario


High-performance computing, such as supercomputer centers and genome sequencing, needs to process a large amount of data. Therefore,
these scenarios have high computing performance, stability, and timeliness requirements. BMSs can meet your high-performance computing
requirements. With BMSs, you do not need to worry about performance overheads caused by virtualization or hyperthreading.

Core database scenario


Some critical database services cannot be deployed on VMs and must be deployed on physical servers that feature dedicated resources, isolated
networks, and assured performance. BMSs are dedicated for each individual user, meeting the isolation and performance requirements.

Mobile application scenario


In the development, test, rollout, and operations of mobile apps, especially mobile phone games, Kunpeng-powered BMSs can help build a
one-stop solution, depending on the good compatibility of Kunpeng servers with terminals.


2.8.6.2.4 Functions
The following figure shows the functions of the BMS service.

Figure 1 BMS functions

Function Description

BMS lifecycle management Users can start, shut down, restart, and delete BMSs.

Application for BMS with specified flavors O&M administrators can predefine BMS flavors and associate them with BMSs. Tenants can apply for
BMSs of different flavors based on application scenarios.

BMS OS installation using an image O&M administrators can create images which are generally common standard OS images. This function
also implements OS pre-installation.

Attachment of multiple EVS disks and The following operations are supported for EVS disks: attachment, detachment, capacity expansion, and
management of EVS disks retry.

Multiple NICs and specified IP addresses for Users can configure IP addresses for each physical NIC of a BMS.
NICs

Password setting The initial password can be set.

Host name setting Users can set initial names of hosts.

BMS instance metering BMS instances can be metered based on flavors.

EIP binding Users can bind public IP addresses that have been applied for to BMSs.

BMS management Users can configure BMC IP address segments, BMC user names, and passwords to manage BMSs.

Task log viewing Users can view asynchronous task execution records and logs on the BMS O&M portal.

2.8.6.2.5 Related Services


BMSs can work with other services to provide you with a stable, secure, highly-available, and easy-to-manage network experience. The following
figure shows services that may be used together with BMSs. For details, see Related Services .

Table 1 Relationship between BMSs and other services

Cloud Service Description

Block storage service The block storage service provides storage for BMSs. You can create EVS disks online and attach them to BMSs. EVS disks of
BMSs can only use centralized storage.


Virtual Private Cloud VPC provides networks for BMSs. You can use the rich functions of VPC to flexibly configure a secure running environment
(VPC) for BMSs.

2.8.6.3 IMS
What Is Image Management Service?

Advantages

Application Scenarios

Relationship with Other Services

Working Principle

2.8.6.3.1 What Is Image Management Service?

Definition
An image is an Elastic Cloud Server (ECS) template that contains software and other necessary configurations, including the OS, preinstalled public
applications, and the user's private applications or service data. Images are classified into public, private, and shared images.
Image Management Service (IMS) provides easy-to-use, self-service image management functions. You can apply for an ECS using a public,
private, or shared image. You can also create a private image from an ECS or an external image file.

Type
Public image
A public image is a standard image provided by the cloud platform system. A public image contains the common standard OS and preinstalled
public applications. It provides easy and convenient self-service image management functions, and is visible to all users. You can conveniently
use a public image to create an ECS.

Private image
A private image is created by a user based on an ECS or an external image file. A private image is only visible to the user who has created it. A private image contains the OS, preinstalled public applications, and the user's private applications and service data.

Private images can be classified into the following types by user service:

System disk image


A system disk image is an image created using the system disk. It contains the OS, preinstalled public applications, and the user's private
applications.

Data disk image


A data disk image contains only user service data. It can be used to create an EVS disk to migrate the user service data to the cloud.

ECS image
An ECS image contains the OS, preinstalled public applications, and the user's private applications and service data.

You can use a system disk image to create ECSs so that you do not need to repeatedly configure the ECSs.
You can apply for an EVS disk using a created data disk image to quickly migrate data.
You can use an ECS image to create ECSs so that an ECS can be migrated quickly as a whole.

Shared image
A shared image is a private image that you have created and is shared with other resource sets. After the image is shared, the recipient can use
the shared image to quickly create a cloud server running the same image environment.

2.8.6.3.2 Advantages


IMS has the following advantages:

Convenience
You can create private images from ECSs or external image files, and create ECSs in batches using an image.

Security
An image file has multiple redundant copies, ensuring high data durability.

Flexibility
You can manage images in custom mode on the GUI or using the API.

Consistency
You can deploy and upgrade application systems using images so that O&M will be more efficient and the application environments will be
consistent.

2.8.6.3.3 Application Scenarios


Images are classified into public images, private images (including system disk images, data disk images, and ECS images), and shared images. You
can flexibly select different images according to your application requirements.

Create ECSs using an image.


You can create ECSs in batches using an existing image (public image, private image, or shared image).

Create a private image using an ECS.

You can create a private image using an existing ECS and use the private image to create ECSs in batches. In this way, services can be quickly
migrated or deployed in batches. The advantages of this scenario are as follows:

A private image can be created using an ECS so that services can be flexibly migrated.

Services can be quickly deployed in batches.

Images are designed for durability, and the image data will not be lost.

2.8.6.3.4 Relationship with Other Services


Refer to Table 1 to know about the relationship between IMS and other cloud services.

Table 1 Relationship with other services

Cloud Service Description

Block storage service You can apply for an EVS disk using a data disk image.

ECS You can create ECSs using an image and can convert an ECS to an image.

Auto scaling (AS) service You can create an AS configuration using an image.

2.8.6.3.5 Working Principle

Architecture
Figure 1 Logical architecture of IMS


Table 1 Component details

Component Type Description

IMS Creates and manages images and manages the image lifecycle.

Block storage service Provisions and manages block storage.

Network service Manages VPCs, security groups (SGs), and elastic IP addresses (EIPs).

Resource pool FusionCompute virtualizes compute, storage, and network resources.

Unified authentication Provides unified identity authentication during login.

Unified operation Reports ECS-related quota, order, product information, and metering information to eDME for unified operation.

Unified O&M Reports ECS-related operation logs, monitoring information, and alarms to the eDME O&M module.

Specifications
Table 2 describes the image specifications.

Table 2 Image specifications

Number of images supported by a single region: 500, including public and private images.

Size of a private image file that can be uploaded: The size of an image file uploaded in HTTPS mode is less than 6 GB. The maximum size of an image file uploaded in NFS or CIFS mode depends on the corresponding image specifications; the maximum size is 256 GB.

Size of a public image file that can be exported: The maximum size of an image file that can be exported to a local PC is 6 GB. The maximum size of an image that can be exported to a shared path depends on the corresponding image specifications; the maximum size is 256 GB.

Size of the system disk of the source ECS used to create a private image: 255 GB

Number of images that can be shared by a single resource set: 30

Number of resource sets with which a single image can be shared: 128


2.8.6.4 AS
Introduction

Benefits

Application Scenarios

Usage Restrictions

Working Principles

2.8.6.4.1 Introduction

Definition
Auto Scaling (AS) is a service that automatically adjusts resources based on your service requirements and configured AS policies. When service
demands increase, AS automatically adds elastic cloud server (ECS) instances to ensure computing capabilities. When service demands decrease, AS
automatically reduces ECS instances to reduce costs.

Functions
AS provides the following functions:

Manages the AS group life cycle, including creating, enabling, disabling, modifying, and deleting an AS group.

Automatically adds instances to or removes them from an AS group based on configured AS policies.

Configures the image, flavors, and other configuration information for implementing scaling actions based on the AS configurations.

Manages the expected, minimum, and maximum numbers of instances in an AS group and maintains the expected number of ECS instances to
ensure that services run properly.

Checks the health of ECS instances in an AS group and automatically replaces unhealthy instances.

Displays monitoring data of AS groups, facilitating resource assessment.

Works with the elastic load balance (ELB) service to automatically bind load balancers to ECS instances in an AS group.

2.8.6.4.2 Benefits
AS has the following advantages:

Enhanced cost management


AS adds resources for your application system when the access volume increases and reduces extra resources from the system when the access
volume drops, reducing your cost.

Improved availability
AS helps users ensure that the application system consistently has a proper resource capacity to comply with traffic requirements. When AS
works with ELB, an AS group automatically adds available instances to the load balancer listener, through which the incoming traffic is evenly
distributed across the instances in the AS group.

High error tolerance


AS detects the running status of instances in the application system and starts new instances to replace instances that are running improperly.

Appropriate number of ECS instances


AS ensures that an appropriate number of ECS instances handle application loads. During the creation of an AS group, you can specify the
minimum and maximum numbers of instances in the AS group. After AS policies are configured, AS increases or reduces the number of ECS
instances. The number will never be lower than the minimum value or greater than the maximum value when application requirements increase or decrease. In addition, you can set the expected number of instances in the AS group when or after creating the AS group, and AS ensures that the number of ECS instances in the AS group is always the expected value.

2.8.6.4.3 Application Scenarios


Service features

The number of service requests increases abruptly or the access volume fluctuates.

Computing and storage resources need to be dynamically adjusted based on amount of calculation. AS checks the health of ECS instances in an
AS group and automatically replaces unhealthy instances.

Common deployment

AS adds new instances to the application when necessary and stops adding instances when they are no longer needed. In this way, you do not need to prepare a large number of ECS instances for an expected marketing activity or unexpected peak hours, thereby ensuring system reliability and reducing system operating costs.

AS can work with the object storage service to send to-be-processed data back to the object storage. Additionally, AS can integrate with ELB to
use ECSs in an AS group for data processing, and perform capacity expansion or reduction based on the ECS load.

2.8.6.4.4 Usage Restrictions


The AS service has the following restrictions:

Only applications that are stateless and can be scaled out can run on ECS instances in an AS group. AS automatically releases ECS instances.
Therefore, the ECS instances in AS groups cannot save application status information (such as sessions) and related data (such as database data
and logs). If an application requires that ECS instances save status or log information, you can save required information to an independent
server.

Stateless: There is no record or reference for previous transactions of an application. Each transaction is made as if from scratch for the first time.
Stateless application instances do not locally store data that needs to be persisted.
For example, a stateless transaction can be regarded as a vending machine: one request corresponds to one response.

Stateful: Applications and processes can be repeated and occur repeatedly. Operations are performed based on previous transactions, and the current
transaction may be affected by previous transactions. Stateful application instances will locally store data that needs to be persisted.
For example, stateful transactions can be regarded as online banking or email, which are performed in the context of previous transactions.
Scale-out: An application can be deployed on multiple ECSs.

Resource requirements of AS: AS is a native service and needs to be deployed on two new ECSs. The service requires a quad-core CPU, 8 GB
memory, system disk with a minimum capacity of 55 GB, and data disk with a minimum capacity of 500 GB.

AS resources must comply with quota requirements listed in Table 1.

Table 1 AS quotas

Category Description Maximum Value

AS group Number of AS groups supported by a region 300

Number of AS groups supported by a resource set 25

AS configuration Number of AS configurations supported by a region 300

Number of AS configurations supported by a resource set 100

Number of AS configurations supported by an AS group 1

AS policy Number of AS policies supported by a region 3000

Number of AS policies supported by an AS group 10

AS instance Number of instances that can be created in an AS group 300


Number of AS instances that can be created in a scaling action 50

Concurrency Number of concurrent AS actions supported by an AS group 1

2.8.6.4.5 Working Principles

Architecture
Figure 1 Logical architecture of AS

Table 1 Component details

Type Description

Elastic cloud service (ECS) It is a service that manages the lifecycle of elastic cloud servers (ECSs).

Virtual private cloud (VPC) It provides network services for ECSs. You can use the functions provided by VPC to configure the operating environment for ECSs in a secure and flexible manner.

Elastic load balance (ELB) It distributes traffic across multiple backend servers based on the configured rules.

Elastic IP (EIP) It provides independent public IP addresses and bandwidth for Internet access.

Data monitoring It displays the CPU usage, memory usage, NIC inbound and outbound traffic, and instance change trends of instances in each AS
group.

AsService01 The two AS backend services (which exist in active-active mode) provide the lifecycle management for AS groups, AS configurations,
AsService02 and AS policies. After a tenant performs an operation on the eDME operation portal UI, the request is sent from the gateway and route
bus to AsService01 or AsService02. After processing the request, AsService01 or AsService02 will return the response to the UI.

AS-schedule AS task scheduling module, which provides scheduling by periodic or scheduled policy.

AS-monitor AS monitoring module, which provides scheduling by monitoring policy and monitoring data management.

AS-service AS core module, which provides the function of creating ECS instances after policies are triggered.
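As an illustration of how the AS-monitor and AS-service modules cooperate once a policy is triggered, the following is a minimal, hypothetical Python sketch of a threshold-based scaling decision. The metric, thresholds, and function names are assumptions for illustration, not the product implementation.

def decide_scaling(current_instances: int, cpu_usage: float,
                   min_instances: int, max_instances: int,
                   scale_out_at: float = 0.75, scale_in_at: float = 0.30) -> int:
    """Return the desired instance count for one policy evaluation cycle."""
    desired = current_instances
    if cpu_usage > scale_out_at:        # policy triggered: add capacity
        desired += 1
    elif cpu_usage < scale_in_at:       # policy triggered: release capacity
        desired -= 1
    # AS never leaves the [min, max] range configured for the AS group.
    return max(min_instances, min(max_instances, desired))

# Example evaluation: a group of 4 instances at 82% average CPU scales out to 5.
print(decide_scaling(current_instances=4, cpu_usage=0.82,
                     min_instances=2, max_instances=10))

A real deployment evaluates such policies periodically (AS-schedule) or against monitoring data (AS-monitor) and then asks AS-service to create or release instances accordingly.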

2.8.6.5 Elastic Container Engine


Introduction

Benefits

Relationship with Other Services


Working Principles

Basic Concepts

2.8.6.5.1 Introduction

Definition
As an enterprise-level K8s cluster hosting service, Elastic Container Engine (ECE) enables management for cluster lifecycle, container images, and
containerized applications, as well as container monitoring and O&M. In addition, it provides highly scalable and reliable containerized application
deployment and management solutions. Therefore, it is a good choice for you to achieve application modernization.

Functions
Container cluster management
You can: 1. create, display, delete, and configure K8s clusters, and configure cluster certificates, DNS, and NTP; 2. create, delete, and scale
container node pools, manage nodes, and configure node labels and annotations in batches; 3. delete nodes, configure labels, annotations, and
schedulability for container nodes, and drain and evict container groups; 4. add, delete, configure namespaces, and configure quotas and
resource limits; 5. view the VPC and subnet of a cluster, including the management network and service network, as well as view the container
network plug-in, network type, Service CIDR, and Pod CIDR; 6. upgrade K8s cluster versions and K8s cluster node OS versions.

Containerized application management


You can: 1. upload, export, edit, and delete a Helm chart application template; 2. track historical versions; 3. deploy, upgrade, roll back,
modify, and delete application instances, and discover application instances from K8s clusters; 4. manage K8s cluster workloads, configuration
resources, storage resources, and network resources.

Container image management


You can: 1. create, edit, and delete image repositories, including third-party image repositories. 2. view the name and capacity usage of a
content library, and edit the content library and quota; 3. upload software packages and driver packages, and view their names, OS types, CPU
architectures, versions, total size, and release time.

Container monitoring and O&M


You can: 1. monitor exceptions in K8s clusters, nodes, and containers, and set silence rules for container alarms; 2. monitor the performance of
K8s clusters, nodes, application instances, and workloads.

2.8.6.5.2 Benefits
ECE is a container service built on popular Docker and Kubernetes technologies and offers a wealth of features best suited to enterprises' demands
for running container clusters at scale. With unique advantages in system reliability, performance, and compatibility with open-source communities,
ECE can meet the diverse needs of enterprises interested in building containerized services.

Ease of Use
Create a K8s cluster in one-click mode on the web UI, manage Elastic Cloud Server (ECS) or Bare Metal Server (BMS) nodes, and implement
automatic deployment and O&M for containerized applications in a one-stop manner.

Easily add or remove cluster nodes and workloads on the web UI, and upgrade K8s clusters in one-click mode.

Utilize deeply-integrated Application Service Mesh (ASM) and Helm charts, which ensure out-of-the-box usability.

High Performance
ECE supports the iSula container engine. The engine provides high-performance container cluster services for high-concurrency and large-scale scenarios, featuring fast startup and low resource usage.

Security and Reliability



ECE offers enhanced capabilities such as high availability, domain-based tenant management, quota control, and authentication.

Fault and Performance Monitoring


ECE provides comprehensive monitoring capabilities covering K8s clusters, nodes, and applications, and supports tenant self-service monitoring, so
that you can view resource alarms and performance associated with tenant projects to accelerate fault detection and locating.

2.8.6.5.3 Relationship with Other Services


Figure 1 shows the dependencies between ECE and other services. Table 1 describes the dependencies.

Figure 1 Domain model - Container cluster service

Table 1 Dependencies between ECE and other services

Service Name Dependency

ECS Tenants can create node pools in K8s clusters based on ECSs.

BMS Tenants can create node pools in K8s clusters based on BMSs.

Scalable File Service (SFS) SFS can be used as persistent storage for a container, and the storage file is mounted to the container during Job creation.

Block storage It provides disk storage services for ECSs and BMSs, and supports elastic binding, scalability, and sharing.

Virtual Private Cloud (VPC) When creating K8s clusters, tenants can select VPCs and subnets. A VPC can contain multiple clusters.

2.8.6.5.4 Working Principles


Figure 1 shows the ECE logical architecture.

Figure 1 Logical diagram of ECE main components


Table 1 ECE component details

Type Description

Web portal Provides a unified and easy-to-use operation interface for users.

eDME unified microservice Provides unified OMS functions for microservices on the platform, including alarms and logs.

eDME framework Provides functions such as the traffic gateway, service gateway, and service center for microservices on the platform.

Installation and deployment tools Provide installation, deployment, and upgrade tools for the container platform.

K8s cluster Provides a container operating environment for users, including the service K8s cluster management plane components, CCDB, container engines (Docker and iSula), the Calico container network, the container storage CSI plug-in, and traffic ingress.

K8s cluster node OS Provides an operating environment for K8s clusters, including the OS and driver.

2.8.6.5.5 Basic Concepts

K8s Cluster and Node


A container cluster is a K8s cluster, which consists of master nodes and worker nodes that are grouped and managed through node pools.

The master nodes run K8s control plane services, including:

API service (kube-apiserver): provides declarative APIs.

Control management (kube-controller-manager): controls resources and puts resources in the expected state.

Scheduler (kube-scheduler): schedules resources to nodes based on the resource usage of nodes and the scheduling policy specified by
the user.

The worker nodes run K8s agents and service containers, including:

Node agent (kubelet): receives Pod requests and ensures that the container specified by the Pod is running properly.

Network proxy (kube-proxy): forwards service traffic.


K8s Cluster Storage Class


A storage class is a storage service level that defines a storage service template with specific service capabilities. Storage service parameters are
defined by the K8s storage plug-in (CSI). If a service container is required to store persistent service data, you can declare a PVC and mount the
volume to the container directory. The CSI plug-in accepts the PVC request, creates a persistent volume (PV) based on the parameters defined by the
storage class, and mounts the PV to the container.
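For example, a containerized application requests storage from a storage class by declaring a PVC. The following sketch uses the standard Kubernetes Python client; the namespace, PVC name, and storage class name (csi-disk) are placeholder values, not names defined by ECE.

from kubernetes import client, config

config.load_kube_config()                      # reads the cluster's kubeconfig
core = client.CoreV1Api()

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data-pvc"},          # placeholder PVC name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "csi-disk",        # placeholder storage class
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# The CSI plug-in behind the storage class provisions a PV and binds it to the PVC;
# the Pod then mounts the claim into its container directory.
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc_manifest)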

K8s Cluster Namespace


A container cluster namespace is used for domain-based management of K8s resources. Roles are used to define the visibility of resources in
namespaces. RoleBinding is used to bind users to roles to implement rights- and domain-based management of users.
The administrator can define the resource quota (ResourceQuota) of a namespace to control the CPU, memory, storage capacity, and number of
resources that can be used in the namespace.
The administrator can define the upper and lower limits (LimitRange) and default values of the resources requested by Pods and containers.
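These administrator controls map to standard Kubernetes objects. The hedged example below, again using the Kubernetes Python client, creates a ResourceQuota and a LimitRange in a namespace; the namespace, object names, and limit values are illustrative assumptions.

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
ns = "team-a"                                   # placeholder namespace

quota = {
    "apiVersion": "v1", "kind": "ResourceQuota",
    "metadata": {"name": "team-a-quota"},
    "spec": {"hard": {"requests.cpu": "8", "requests.memory": "16Gi",
                      "persistentvolumeclaims": "10", "pods": "50"}},
}
limits = {
    "apiVersion": "v1", "kind": "LimitRange",
    "metadata": {"name": "team-a-limits"},
    "spec": {"limits": [{"type": "Container",
                         "default": {"cpu": "500m", "memory": "512Mi"},
                         "defaultRequest": {"cpu": "250m", "memory": "256Mi"}}]},
}

# ResourceQuota caps total consumption in the namespace; LimitRange sets
# per-container defaults and bounds for requested resources.
core.create_namespaced_resource_quota(namespace=ns, body=quota)
core.create_namespaced_limit_range(namespace=ns, body=limits)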

2.8.6.6 SWR
Overview

Benefits

Relationship with Other Services

Basic Concepts

2.8.6.6.1 Overview

Definition
SoftWare Repository for Container (SWR) provides easy, secure, and reliable management of container images throughout their lifecycle. It is
compatible with the Registry V2 protocol of the community and allows you to manage container images through a GUI, CLI, or native APIs. SWR
can be seamlessly integrated with Elastic Container Engine (ECE) to help customers quickly deploy containerized applications and build a one-stop
solution for cloud native applications.

Functions
Container image storage configuration
O&M administrators can configure container image storage and view the storage space occupied by container images on the O&M portal.
O&M administrators can manually trigger image garbage collection on the O&M portal to release container image storage space.

Container image management


Public container images can be managed. O&M administrators can upload and delete public container images on the O&M portal. Tenants on
the operation portal can use public container images to deploy containerized applications.
Private container images can be managed. Container image repositories can be isolated based on resource sets. Tenants on the operation portal
can create container image namespaces in resource sets to which they belong, and upload and manage private container images in the image
namespaces.
Images can be uploaded and pulled through CLIs of common container engines, such as Docker, iSula, and iSula-build.
A server CA certificate can be downloaded and shortcut command query is supported.

2.8.6.6.2 Benefits

Ease of Use
You can directly push and pull container images without building a platform or performing O&M.


You can manage container images throughout their lifecycle on the SWR console.

Security and Reliability


SWR supports HTTPS to ensure secure image transmission, and provides multiple security isolation mechanisms between and inside accounts to
ensure secure data access.

2.8.6.6.3 Relationship with Other Services


SWR automatically interconnects with ECE through secure links and internal accounts. In this way, you can directly select uploaded container
images to deploy containerized applications, simplifying the deployment process.

2.8.6.6.4 Basic Concepts

Image
A container image is a template that provides a standard format for packaging containerized applications. When deploying containerized
applications, you can use images from the public image repository or your private image repository. For example, a container image can contain a
complete Ubuntu OS, and can be installed with only the required application and its dependencies. A container image is used to create a container.
Docker provides an easy way to create and update images. You can also download images created by other users.

Container
A container is a running instance created by a container image. Multiple containers can run on one node. A container is essentially a process. Unlike
a process directly executed on a host, the container process runs in its own independent namespace.
The relationship between the image and the container is similar to that between the class and the instance in the object-oriented program design. An
image provides a static definition, and a container is the entity of the image that is running. Containers can be created, started, stopped, deleted, and
suspended.

Image Repository
An image repository is used to store container images. A single image repository can correspond to a specific containerized application and host
different versions of the application.
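As a usage illustration, the following Python sketch drives the Docker CLI to tag a local image for a repository, push it, and pull it back. The registry address, image namespace, and credentials are hypothetical placeholders, and the registry's HTTPS CA certificate is assumed to be trusted by the Docker daemon.

import subprocess

REGISTRY = "swr.example.com"            # hypothetical registry address
NAMESPACE = "team-a"                    # image namespace within a resource set
IMAGE = "nginx:1.25"                    # a local image built or pulled beforehand
TARGET = f"{REGISTRY}/{NAMESPACE}/nginx:1.25"

def run(*args: str) -> None:
    subprocess.run(args, check=True)    # raise if the CLI command fails

run("docker", "login", REGISTRY, "-u", "<user>", "-p", "<password>")
run("docker", "tag", IMAGE, TARGET)     # retag the local image for the repository
run("docker", "push", TARGET)           # upload it to the image repository
run("docker", "pull", TARGET)           # any authorized node can now pull it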

2.8.6.7 Block Storage Service


What Is the Block Storage Service?

Advantages

Relationships with Other Services


Implementation Principles

2.8.6.7.1 What Is the Block Storage Service?

Definition
The block storage service provides block storage space for instances. Users can create disks online and attach them to instances.

In this document, instances refer to the Elastic Cloud Servers (ECSs) or bare metal servers (BMSs) that you apply for. Elastic Cloud Server (ECS)
disks are also referred to as disks in this document.

Figure 1 Definition of EVS disk functions

Functions
The block storage service provides various persistent storage devices. You can choose disk types based on your needs and store files and build
databases on EVS disks. The block storage service has the following features:

Elastic attaching and detaching


A block storage service disk is like a raw, unformatted, external block device that you can attach to a single instance. Disks are not affected by
the running time of instances. After attaching a disk to an instance, you can use the disk as if you were using a physical disk. You can also
detach a disk from an instance and attach it to another instance.

Various disk types


You can divide disk types of EVS disks based on storage backend types to meet different requirements of services.

Elastic scalability
You can configure storage capacity and expand the capacity on demand to deal with your service data increase.

Shared disk
Multiple instances can access (read and write) a shared disk at the same time, meeting the requirements of key enterprises that require cluster
deployment and high availability (HA).

2.8.6.7.2 Advantages
Table 1 compares the block storage service and object storage service.

Table 1 Comparison between the block storage service and object storage service

Comparison dimension: Usage mode
Block storage service: Provides persistent block storage for compute services such as instances. EVS disks feature high availability, high durability, and low latency. You can format, create file systems on, and persistently store data on EVS disks.
Object storage service: Provides RESTful APIs that are compatible with Amazon S3. You can use browsers or third-party tools to access object storage and use RESTful APIs to perform secondary development on OBS.

Comparison dimension: Data access mode
Block storage service: Data can only be accessed in the internal network of data centers.
Object storage service: Data can be accessed on the Internet.

Comparison dimension: Sharing mode
Block storage service: Supports EVS disk sharing. A shared EVS disk can be attached to a maximum of 16 ECSs in the cluster management system.
Object storage service: Supports data sharing. Anonymous access is allowed and the quantity of access users is unlimited.

Comparison dimension: Storage capacity
Block storage service: Virtualized SAN storage: a single disk supports a maximum of 64 TB. NAS storage: a single disk supports a maximum of 64 TB. Scale-out block storage: a single disk supports a maximum of 32 TB. eVOL storage: a single disk supports a maximum of 64 TB. Block storage: a single disk supports a maximum of 64 TB.
Object storage service: The capacity is unlimited. Therefore, planning is not required.

Comparison dimension: Storage backend
Block storage service: Supports virtualized SAN storage, NAS storage, Huawei scale-out block storage, eVOL storage, and block storage.
Object storage service: OceanStor Pacific

Comparison dimension: Recommended scenario
Block storage service: Scenarios such as database, enterprise office applications, and development and testing.
Object storage service: Scenarios such as big data storage, video and image storage, and backup and archiving. It can also provide storage for other private cloud services (such as IMS).

2.8.6.7.3 Relationships with Other Services


Figure 1 shows the dependencies between EVS and other services. Table 1 provides more details.

Figure 1 Relationships between EVS and other services

Table 1 Dependencies between EVS and other services

Cloud Service Name Description

ECS You can attach EVS disks to ECSs to provide scalable block storage.

BMS You can attach iSCSI-type EVS disks to BMSs to provide scalable block storage.

2.8.6.7.4 Implementation Principles

Architecture
The block storage service consists of the block storage service console, block storage service APIs, datastores, and storage devices. Figure 1 shows
the logical architecture of EVS.

Figure 1 Logical architecture of EVS


Table 1 EVS component description

Component Name Details

Block storage service console The block storage service console provides tenants with an entry to the block storage service. Tenants can apply for EVS disks on the console.

API (block storage service) The block storage service API encapsulates or combines the logic based on the native Cinder interface to implement certain block storage service functions. The block storage service API can be invoked by the EVS console or tenants.

Datastore A datastore provides persistent block storage and manages block storage resources. With it, you can create disk types, create disks
on storage devices, and attach disks to ECSs.

Infrastructure Infrastructure refers to the physical storage device that provides block storage based on physical resources. The following storage
devices can be used as the storage backend of the block storage service: virtualized SAN storage, NAS storage, Huawei scale-out
block storage, and eVOL storage.

Unified eDME operation Unified eDME operation provides quota management, order management, product management, and service detail records (SDRs) for the block storage service.

Unified eDME O&M Unified eDME O&M provides disk type management, performance monitoring, logging, and alarm reporting for the block storage service.
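Because the block storage service API wraps a native Cinder-style interface (see the table above), a volume request can be sketched as a plain REST call. The sketch below follows the OpenStack Block Storage v3 request shape; the endpoint, project ID, token, and disk type name are placeholders, not the documented eDME API.

import requests

ENDPOINT = "https://block-storage.example.com/v3"   # hypothetical API endpoint
PROJECT_ID = "<project-id>"                          # placeholder tenant/project ID
HEADERS = {"X-Auth-Token": "<token>",                # placeholder auth token
           "Content-Type": "application/json"}

# Create a 100 GB volume; the datastore chooses a backend according to the disk type.
resp = requests.post(
    f"{ENDPOINT}/{PROJECT_ID}/volumes",
    headers=HEADERS,
    json={"volume": {"name": "data-vol-01",
                     "size": 100,                          # size in GB
                     "volume_type": "virtualized-san"}},   # hypothetical disk type
)
resp.raise_for_status()
volume_id = resp.json()["volume"]["id"]
print("created volume", volume_id)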

2.8.6.8 OBS
What Is the Object Storage Service?

Advantages

Related Concepts

Application Scenarios

Implementation Principles

User Roles and Permissions

Restrictions

How to Use the Object Storage Service

2.8.6.8.1 What Is the Object Storage Service?



Definition
Object Storage Service (OBS) is a scale-out storage service that provides capabilities for mass, secure, reliable, and cost-effective data storage. With
OBS, you can easily create, modify, and delete buckets.
Object storage devices and services are becoming increasingly popular in research and in the market, providing a viable alternative to established block and file storage services. OBS is a cloud storage service that can store unstructured data such as documents, images, and audio and video files, combining the advantages of block storage (direct and fast access to disks) and file storage (distributed and shared access).
The OBS system and a single bucket place no restrictions on the total data volume or number of objects, providing users with ultra-large capacity to store files of any type. OBS can be used by common users, websites, enterprises, and developers.
As an Internet-oriented service, OBS provides web service interfaces over Hypertext Transfer Protocol (HTTP) and Hypertext Transfer Protocol
Secure (HTTPS). Users can use the OBS console or a browser to access and manage data stored in OBS on any computer connected to the Internet
anytime, anywhere. In addition, OBS supports SDK and API interfaces, which enable users to easily manage data stored in OBS and develop
various upper-layer service applications.

Functions
OBS provides the following functions:

Basic Bucket Operations


Create, view, and delete buckets in a specific region.

Access key management


Create and delete access keys.
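Because OBS exposes S3-compatible APIs, the basic bucket operations listed above can also be illustrated with a generic S3 client. The sketch below uses the third-party boto3 library with a placeholder endpoint URL and AK/SK values; it is an illustration only, not the OBS console workflow.

    # Minimal sketch of basic bucket operations against an S3-compatible endpoint.
    # The endpoint, AK, and SK below are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://obs.example-region.example.com",  # OBS access domain name
        aws_access_key_id="YOUR_AK",
        aws_secret_access_key="YOUR_SK",
    )

    s3.create_bucket(Bucket="demo-bucket")                      # create a bucket
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])    # view buckets
    s3.delete_bucket(Bucket="demo-bucket")                      # delete the bucket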

2.8.6.8.2 Advantages
Table 1 compares the block storage service and object storage service.

Table 1 Comparison between the block storage service and OBS

Usage mode
  Block storage service: Provides persistent block storage for compute services such as instances, ensuring high availability, high durability, and low latency. You can format, create file systems on, and persistently store data on EVS disks.
  Object storage service: Provides RESTful APIs that are compatible with Amazon S3. You can use browsers or third-party tools to access object storage and use RESTful APIs to perform secondary development on OBS.

Data access mode
  Block storage service: Accesses data only in the internal network of data centers.
  Object storage service: Accesses data on the Internet.

Sharing mode
  Block storage service: Supports EVS disk sharing. A shared EVS disk can be attached to a maximum of 16 ECSs for the cluster management system.
  Object storage service: Supports data sharing. Anonymous access is allowed and the quantity of access users is unlimited.

Storage capacity
  Block storage service:
    Virtualized SAN storage: A single disk supports a maximum of 64 TB.
    NAS storage: A single disk supports a maximum of 64 TB.
    Scale-out block storage: A single disk supports a maximum of 32 TB.
    eVOL storage: A single disk supports a maximum of 64 TB.
    Block storage: A single disk supports a maximum of 64 TB.
  Object storage service: The capacity is unlimited. Therefore, advance planning is not required.

Backend storage
  Block storage service: Supports virtualized SAN storage, NAS storage, Huawei scale-out block storage, eVOL storage, and block storage.
  Object storage service: OceanStor Pacific.

Recommended scenario
  Block storage service: Scenarios such as database, enterprise office applications, and development and testing.
  Object storage service: Scenarios such as big data storage, video and image storage, and backup and archiving. It can also provide storage for other private cloud services (such as IMS).

2.8.6.8.3 Related Concepts


Bucket
A bucket is a container that stores objects in OBS. OBS provides flat storage in the form of buckets and objects. Unlike the conventional multi-layer
directory structure of file systems, all objects in a bucket are stored at the same logical layer.
In OBS, each bucket name must be unique and cannot be changed. When you create a bucket, OBS creates a default access control list (ACL). You
can configure an ACL to grant users permissions (including READ, WRITE, and FULL_CONTROL) on the bucket. Only authorized users can
perform bucket operations, such as creating, deleting, viewing, and configuring the bucket ACL. A user can create a maximum of 100 buckets.
However, the number and total size of objects in a bucket are not restricted. Users do not need to worry about system scalability.
OBS is a service based on the Representational State Transfer (REST) style HTTP and HTTPS protocols. You can locate resources using Uniform
Resource Locator (URL).

HTTPS is recommended, as it is more secure than HTTP.

Figure 1 illustrates the relationship between buckets and objects in OBS.

Figure 1 Relationship between buckets and objects

Object
An object is a basic data storage unit of OBS. It consists of file data and metadata that describes the attributes. Data uploaded to OBS is stored into
buckets as objects.
An object consists of data, metadata, and a key.

A key specifies the name of an object. An object key is a string ranging from 1 to 1024 characters in UTF-8 format. Each object in a bucket
must have a unique key.

Metadata describes an object and contains system metadata and user metadata. All the metadata is uploaded to OBS as key-value pairs.

System metadata is automatically generated by OBS and is used for processing object data. It includes object attributes such as Date,
Content-length, Last-modify, and Content-MD5.

User metadata is specified by users to describe objects when they upload the objects.

Data is the content contained by an object.

Generally, objects are managed as files. However, OBS is an object-based storage service and it does not involve the file and folder concepts. For
easy data management, OBS provides a method to simulate virtual folders. By adding a slash (/) in an object name, for example, test/123.jpg, you
can simulate test as a folder and 123.jpg as the name of a file under the test folder. However, the key remains test/123.jpg.
On the OBS management console, users can directly use folders as they used to do.
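The virtual folder behavior described above can be illustrated with any S3-compatible client. In the sketch below (boto3, with placeholder bucket name, endpoint, and credentials), the object is stored under the flat key test/123.jpg, and listing with a prefix and delimiter makes test/ behave like a folder.

    # Minimal sketch: simulate a folder by using a slash in the object key.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://obs.example-region.example.com",  # placeholder endpoint
        aws_access_key_id="YOUR_AK",
        aws_secret_access_key="YOUR_SK",
    )

    # The stored key is the flat string "test/123.jpg"; "test" is not a real folder.
    s3.put_object(Bucket="demo-bucket", Key="test/123.jpg", Body=b"example data")

    # Listing with Prefix and Delimiter presents "test/" as if it were a folder.
    resp = s3.list_objects_v2(Bucket="demo-bucket", Prefix="test/", Delimiter="/")
    for obj in resp.get("Contents", []):
        print(obj["Key"])  # -> test/123.jpg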

AK/SK


An access credential of the object service includes an access key (AK) and a secret access key (SK). An AK and an SK are generated in pairs and are
character strings randomly generated by the authentication service. They are used in the authentication process of service requests.

An AK corresponds to only one tenant or user. A tenant or user can have two AKs at the same time. OBS (compatible with Amazon S3 APIs)
identifies a tenant or user accessing the system based on the AK.

A tenant or user generates authentication information based on the SK and request header. An SK corresponds to an AK.
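The sketch below shows the general HMAC pattern behind S3-style request signing: the SK signs a string built from the request headers, and the resulting signature is sent together with the AK. The string-to-sign layout here is purely illustrative and is not the exact canonical format used by OBS.

    # Conceptual sketch of SK-based request signing (illustrative string-to-sign only).
    import base64
    import hashlib
    import hmac

    def sign(secret_key, string_to_sign):
        digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha256).digest()
        return base64.b64encode(digest).decode()

    string_to_sign = "GET\n\napplication/octet-stream\nMon, 09 Jun 2025 00:00:00 GMT\n/demo-bucket/"
    signature = sign("YOUR_SK", string_to_sign)
    print(signature)  # sent in the Authorization header together with the AK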

Endpoint
Endpoint indicates the domain name used by OBS to provide services. OBS provides services for external systems in HTTP RESTful API mode.
Different domain names are required for accessing different regions. The endpoints required for accessing the same zone through the intranet and
extranet are different.

Quota Management
A quota is a resource management and control technology that allocates and manages the maximum number of resources (including resource
capacity and quantity) available to a single virtual data center (VDC), preventing resources from being overused by users in a single VDC and
affecting other VDCs. The platform allows you to set OBS quotas for VDCs at all levels.
OBS quotas include:

Total number of files (thousands)

Total space (GB)

If the number of resources in a VDC reaches the quota value, the resources cannot be requested. Delete idle resources or contact the administrator to
modify the quota. For details about how to modify quotas, see Managing Quotas .

When the object storage service of multiple storage devices is enabled, the resource types of all object storage services are displayed in the quota information,
that is, site name + total number of files or total space capacity. Change the total number of files or total space capacity of the object storage service based on
the site name.
If an account with the same name as the resource set ID exists on the storage device, the system automatically synchronizes the quota of the account and
displays the quota in the VDC or resource set quota information when the object storage service is enabled.
When a storage device is removed or the object storage service is disabled, the resource type of the object storage service at the target site in the quota
information is also removed.
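The quota check can be summarized by the small conceptual sketch below; the quota values, usage figures, and resource names are hypothetical and only show that a request exceeding the remaining VDC quota is rejected.

    # Conceptual sketch of an OBS quota check for a VDC (all values hypothetical).
    OBS_QUOTA = {"total_files_thousands": 500, "total_space_gb": 10240}  # per-VDC quota
    USAGE     = {"total_files_thousands": 498, "total_space_gb": 10200}  # current usage

    def can_allocate(resource, amount):
        """Return True only if the request stays within the remaining quota."""
        return USAGE[resource] + amount <= OBS_QUOTA[resource]

    print(can_allocate("total_space_gb", 100))  # False: request exceeds remaining quota
    print(can_allocate("total_space_gb", 30))   # True: 10230 GB is still within 10240 GB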

Access Permission Control


For details about bucket policies and bucket ACLs, see "Configuration" > "Basic Service Configuration Guide for Object" > "Functions and
Features" > "Access Permission Control" in OceanStor Pacific Series 8.2.1 Product Documentation .

2.8.6.8.4 Application Scenarios

Backup and Active Archiving


OBS is a durable, scalable, and secure solution for backing up and archiving users' key data. Its versioning function further protects data. Its high
durability and secure infrastructure aim to provide an advanced data protection and disaster recovery solution. Additionally, OBS supports third-
party backup and archiving software.
Figure 1 shows the architecture.

Figure 1 Architecture in the backup and active archiving scenario


Video Storage
OBS provides large storage capacity for video and image storage solutions and applies to massive amounts of unstructured video data, meeting the requirements for storing high-quality video data.
Figure 2 shows the architecture.

Figure 2 Architecture in the video and image storage scenario

2.8.6.8.5 Implementation Principles

Logical Architecture
Figure 1 shows the logical architecture of OBS.

Figure 1 Logical architecture of OBS


Table 1 OBS components

Unified operation of eDME
  IAM/POE: Provides identity identification and access management for OBS.

Unified O&M of eDME
  Performance management: Manages infrastructure performance metrics and analyzes performance data.
  Log management: Aggregates and queries the operation and running logs of tenants.
  Alarm management: Receives, stores, and centrally monitors and queries alarm data, helping O&M personnel quickly rectify faults based on alarm information.

Cloud service
  OBS console: Provides the OBS console.

Infrastructure
  OceanStor Pacific: As the storage backend, it provides object storage functions.

Workflow
Figure 2 shows the OBS workflow.

Figure 2 OBS workflow

1. Operation administrators create resource management tenants and resource administrators on the eDME operation portal.

2. Resource administrators apply for object storage resources on the OBS console.

3. The OBS console invokes the S3 APIs of the OceanStor Pacific OBS object and big data storage device to create a bucket.

2.8.6.8.6 User Roles and Permissions


The eDME operation portal provides role management and access control functions for cloud services. Role management refers to the management
of users and user groups. Access control refers to the management of their permissions.
For OBS, user permissions provided by the eDME operation portal are mainly used to manage access to OBS resources. Table 1 lists OBS operation
permissions. A user can be assigned one or more of the permissions.

Table 1 User roles and permissions

Role Name: OBS administrator

Role Source and Corresponding Permissions:
  VDC administrator: VDC management permission (VDC Admin); all cloud service management permission (Tenant Administrator)
  VDC operator: VDC query permission (VDC User); all cloud service management permission (Tenant Administrator)
  Customized: all cloud service management permission (Tenant Administrator); OBS management permission (OBS Admin); bucket object read and write permissions (OBS RW Only)

Permission Description: A user with these permissions can perform any operation on OBS resources.
NOTE:
  OBS RW Only supports only POE authentication and refers to the permissions to manage user keys, query buckets, and read and write bucket objects.
  In the IAM authentication scenario, the Tenant Administrator or OBS Admin permission must be granted to both organizations and resource sets. Otherwise, the user does not have the permission to operate buckets.
Table 2 lists the operations that users in different roles can perform.

Table 2 Relationship between OBS operations and resource permissions

Operation OBS Administrator

Creating buckets Yes

Modifying buckets Yes

Deleting buckets Yes

Obtaining basic bucket information Yes

Creating an access key Yes

Deleting an access key Yes

2.8.6.8.7 Restrictions
The restrictions on OBS are as follows:

OBS is compatible with Amazon S3 standard interfaces.

OBS is accessed based on domain names. Before using OBS, configure the IP address of the DNS server on the client.

A user cannot use the global domain name to access the buckets and objects in a non-default region.

When a third-party S3 client is used to access OBS, only the domain name of the default region and the global domain name can be used to create buckets. You are advised to create buckets on the OBS console.

Even though a user is assigned all permissions of another tenant's buckets, the user's permissions are still restricted by its role.

OBS permission control and quota can be configured only in first-level VDCs.

Each tenant can create a maximum of 100 buckets.

Currently, only storage devices whose authentication mode is POE support tenant quotas.

You are not advised to modify bucket configurations on the storage device already added to eDME.

OBS and SFS cannot use the same storage device.

Currently, IAM authentication does not support the metric report function.

2.8.6.8.8 How to Use the Object Storage Service


The object service provides multiple modes for using object service resources and service functions. You can select a usage mode based on the
service scenario.


Figure 1 describes the method of using the object service.

Figure 1 Method of using the object service

Usage modes: Third-Party Client, Object Service API, SDK, and Mainstream Software.

Third-Party Client

Users can use a third-party client to manage storage resources of the object service. For example, users can install S3 Browser on a local host
and perform operations such as creating buckets and uploading and downloading objects on a GUI to facilitate resource management.
The object service is compatible with multiple types of clients. For details, visit Huawei Storage Interoperability Navigator.
The section "How to Use S3 Browser" below uses S3 Browser as an example to describe how to configure and use such a client.
For details about how to install and use each client, see the official website of the client.

Object Service API

The object service provides REST APIs. You can invoke these APIs using HTTP or HTTPS requests to create buckets, upload objects, and download objects.
You can visit the OceanStor Scale-Out Storage Developer Center to learn how to invoke the object service APIs and related service APIs. We also provide a quick start guide for the object service APIs to help you quickly understand how to use them in simple scenarios.

SDK

The SDK encapsulates the REST APIs provided by the object service to simplify development. You can invoke the API functions provided by the SDK to use the service functions of the object service.
You can visit the OceanStor Scale-Out Storage Developer Center to learn how to set up an SDK development environment and to view the SDK API descriptions. We also provide program samples with source code for the object service to help you quickly get started.

Mainstream Software


The object service can be used for data archiving (financial check images, medical images, government and enterprise electronic documents,
and IoV scenarios) and backup (carrier and enterprise application backup). It is compatible with multiple third-party backup and archiving
software, such as Veritas NetBackup, Commvault Simpana, and Rubrik. You need to configure the interconnection with the object service on
the third-party backup and archiving software. For details about the compatibility, see Huawei Storage Interoperability Navigator.

How to Use S3 Browser


S3 Browser is a Windows desktop client for Amazon S3 and S3-compatible object services. It accesses the object service through Amazon S3 APIs.
If the object service is compatible with Amazon S3 APIs, you can use this tool to perform common operations, such as creating buckets, uploading objects, and downloading objects. However, the APIs and features supported by the object service differ from those supported by Amazon S3, so compatibility issues may occur in some scenarios when this tool is used.

How to obtain

Download the software from the S3 Browser official website:


For details about the supported S3 Browser versions, visit Huawei Storage Interoperability Navigator.

Some functions on S3 Browser are not supported currently.


For details, visit Huawei Storage Interoperability Navigator.

Configuring and using S3 Browser


Figure 2 describes the procedure for configuring and using S3 Browser. Table 1 describes the key parameters for configuring S3 Browser.

For details about how to manage object service resources using S3 Browser, visit the S3 Browser official website.
When using S3 Browser to list objects in a bucket, if there are a large number of objects, set S3 Browser to display the objects in multiple pages.

Figure 2 Procedure for configuring and using S3 Browser

Table 1 Key parameters for configuring S3 Browser

Key Parameter Description

Account Name User-defined user name, which must be unique.

Account Type The value is S3 Compatible Storage.

REST Endpoint IPv4 format: Access domain name or service IP address of the object service:Port number
IPv6 format: Access domain name or [service IP address] of the object service:Port number
If the HTTP protocol is used, the port number is 5080 or 80. If the HTTPS protocol is used, the port number is 443
or 5443.
NOTE:

Obtain the access domain name of the object storage service by referring to Obtaining the Object Storage Access
Address .

Access Key ID / Secret Access Key: AK and SK generated during account creation. For details, see Creating a User Access Key.

Encrypt Access Keys with a password: After selecting this parameter and setting a password, the account is protected by the password.

Use secure transfer (SSL/TLS): Select this parameter only when the HTTPS protocol is used.
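The REST Endpoint formats and default ports in the table above can be summarized by the small helper below; the host names and addresses are placeholders, and 80 (HTTP) and 5443 (HTTPS) remain valid alternative ports.

    # Small helper reflecting the REST Endpoint formats described above.
    def rest_endpoint(host, use_https=True, port=None):
        if port is None:
            port = 443 if use_https else 5080   # 5443 (HTTPS) and 80 (HTTP) also work
        if ":" in host and not host.startswith("["):
            host = "[" + host + "]"             # a bare IPv6 service address needs brackets
        return "{}:{}".format(host, port)

    print(rest_endpoint("obs.example.com"))                 # obs.example.com:443
    print(rest_endpoint("2001:db8::10", use_https=False))   # [2001:db8::10]:5080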


2.8.6.9 SFS
What Is Scalable File Service?

Advantages

Relationship with Other Services

Application Scenario

Constraints and Limitations

Implementation Principle

2.8.6.9.1 What Is Scalable File Service?

Definition
Scalable File Service (SFS) provides Elastic Cloud Servers (ECSs) and Bare Metal Servers (BMSs) in high-performance computing (HPC)
scenarios with a high-performance shared file system that can be scaled on demand. It is compatible with standard file protocols (NFS, CIFS, OBS,
and DPC) and is scalable to petabytes of capacity to meet the needs of massive amounts of data and bandwidth-intensive applications. Figure 1
describes how to use SFS.

Figure 1 SFS function definition

Functions
SFS provides the following functions:

Creating a file system


Before using SFS, you must create a file system.

Attaching a file system


After a file system is created, you need to attach it to an ECS (see the mount sketch after this list).

Managing a file system


You can manage file systems, including adjusting their capacity, and viewing, detaching, and deleting them.
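As referenced in the attachment step above, a shared file system is typically mounted on a Linux ECS over one of the supported protocols. The sketch below mounts an NFSv3 export via the standard mount command wrapped in Python; the export path and mount point are placeholders, and the NFS client utilities must already be installed on the ECS.

    # Minimal sketch: attach (mount) an NFS shared file system on a Linux ECS.
    import subprocess

    export = "192.168.10.20:/share/sfs-demo"   # placeholder export path from the SFS console
    mount_point = "/mnt/sfs-demo"              # placeholder local mount point

    subprocess.run(["mkdir", "-p", mount_point], check=True)
    subprocess.run(["mount", "-t", "nfs", "-o", "vers=3", export, mount_point], check=True)

    # Show the mounted capacity as a quick verification.
    print(subprocess.run(["df", "-h", mount_point], capture_output=True, text=True).stdout)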


2.8.6.9.2 Advantages
Ease of use
An easy-to-use operation interface is provided for you to quickly create and manage file systems without worrying about the deployment,
expansion, and optimization of file systems.

File sharing
Multiple ECSs of different types can concurrently access videos and images.

Support for mainstream file protocols


The mainstream NFS, CIFS, and DPC protocols that you are familiar with are supported in common OS environments.

On-demand capacity allocation and elastic scaling


You can configure the initial storage capacity of a file system based on service requirements, and expand or reduce the file storage capacity
based on service changes.

High performance and reliability


The total bandwidth of a file system can increase with the capacity expansion, which is suitable for high-bandwidth applications. In addition,
data durability is ensured to meet service growth requirements.

Automatic attachment
After installing the automatic attachment plug-in on a VM, you can select a shared file system on the SFS page and the file system is
automatically attached to the VM.

2.8.6.9.3 Relationship with Other Services


Figure 1 and Table 1 list the relationships between SFS and other services.

Figure 1 Relationships between SFS and other services

Table 1 Relationships between SFS and other services

Cloud Service Name Description

ECS File systems can be mounted to ECSs for data sharing.

BMS In HPC scenarios, file systems can be mounted to BMSs for data sharing.

2.8.6.9.4 Application Scenario

Video Cloud
SFS applies to the video cloud scenario to store video and image files.


Figure 1 shows the architecture of the video cloud scenario.

Video files vary with specific independent software vendors (ISVs). Generally, they are large files of 1 GB to 4 GB.

Images are classified into checkpoint images and analysis images. Generally, they are massive numbers of small images (about 2 billion images a year) with sizes ranging from 30 KB to 500 KB.

Figure 1 Architecture of the video cloud scenario

Media Processing
SFS with high bandwidth and large capacity enables shared file storage for video editing, transcoding, composition, high-definition video, and 4K
video on demand, satisfying multi-layer HD video and 4K video editing requirements.
Figure 2 shows the architecture of the media processing scenario.

Figure 2 Architecture of media processing


2.8.6.9.5 Constraints and Limitations


Table 1 lists constraints and limitations on the SFS.

Table 1 Constraints and limitations

Item Constraint and Limitation

Capacity You can adjust the capacity only when the file system is in the Available state.
adjustment
If you adjust the capacity of a newly created file system, an error may be reported. In this case, wait for 5 to 10 minutes and then
adjust the capacity again.

Supported Currently, SFS supports NFS, CIFS, DPC, and OBS protocols. OceanStor Dorado/OceanStor 6.1.x supports NFSv3, NFSv4, and
protocols NFSv4.1, whereas OceanStor Pacific supports NFSv3 and NFSv4.1.
The DPC protocol can only be used in the attachment to BMSs.

File system If you delete a newly created file system, an error may be reported. In this case, wait 5 to 10 minutes and then delete the file system
deletion again.

2.8.6.9.6 Implementation Principle

Architecture
Figure 1 shows the logical architecture of SFS.

Figure 1 Logical architecture of SFS


Table 1 SFS components

Unified operation
  IAM: Provides Identity and Access Management (IAM) for SFS.
  Order management: Manages orders submitted by users.
  Service management: Different services are defined based on the registered cloud services, and unified service management is provided.
  SDR: Provides the function of metering and charging resources.

Unified O&M
  Performance management: Monitors performance indicators of the infrastructure and analyzes monitoring data.
  Log management: Aggregates and queries the operation and running logs of tenants.
  Alarm management: Receives, stores, and centrally monitors and queries alarm data, helping O&M personnel quickly rectify faults based on alarm information.

Cloud service
  SFS console: Provides the SFS management console.
  OceanStor DJ (Manila): Functions as the SFS server to receive requests from the SFS console.

Infrastructure
  Storage device: A file storage device that provides file system storage space for SFS. The following storage devices are supported: OceanStor Dorado 6.1.x, OceanStor 6.1.x, and OceanStor Pacific series.

Workflow
Figure 2 shows the SFS workflow.

Figure 2 SFS workflow


1. A user applies for file storage resources on the SFS console.

2. The SFS console invokes the API of OceanStor DJ (Manila) to deliver the request to the storage device.

3. OceanStor DJ (Manila) invokes the storage device API to create or manage file systems.

2.8.6.10 VPC Service


What Is Virtual Private Cloud?

Region Type Differences

Application Scenarios (Region Type II)

Application Scenarios (Region Type III)

Implementation Principles (Region Type II)

Constraints

Relationships with Other Cloud Services

2.8.6.10.1 What Is Virtual Private Cloud?

Concept
The Virtual Private Cloud (VPC) service enables you to provision logically isolated, configurable, and manageable virtual networks for Elastic
Cloud Servers (ECSs), improving the security of user resources and simplifying user network deployment.
You can select IP address ranges, create subnets, and customize security groups (SGs) and NAT rules in a VPC, which enables you to manage and
configure your network conveniently and modify your network securely and rapidly. You can also customize ECS access rules within a security
group and between security groups to enhance access control over cloud servers in subnets.
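CIDR planning for the subnets mentioned above can be sketched with the standard ipaddress module; the VPC CIDR block and subnet sizes below are placeholders, and note that a subnet's CIDR block cannot be changed after creation.

    # Minimal sketch of carving a VPC CIDR block into subnets (placeholder values).
    import ipaddress

    vpc_cidr = ipaddress.ip_network("192.168.0.0/16")

    # Split the VPC range into /24 subnets and take the first three for planning.
    for subnet in list(vpc_cidr.subnets(new_prefix=24))[:3]:
        print(subnet, "first usable host:", subnet.network_address + 1)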

Function


Private network customization


You can customize the CIDR block of a subnet in a VPC and deploy ECSs and other services in the subnets as required.

Elastic and flexible connection to an extranet (supported only in Region Type II)
A VPC enables you to access an extranet flexibly and with excellent performance.

Elastic IP address (EIP): An EIP is a static extranet IP address and can be dynamically bound to or unbound from an ECS. If your VPC
contains just one or only a few ECSs, you only need to bind an EIP to each ECS for the ECS to communicate with an extranet.

Figure 1 EIP (Region Type II)

Source network address translation (SNAT): The SNAT function maps the IP addresses of a subnet in a VPC to an EIP, thereby allowing
the ECSs in the subnet to access an extranet. After the SNAT function is enabled for a subnet, all ECSs in the subnet can access an
extranet using the same EIP.

Figure 2 SNAT (Region Type II)

Destination network address translation (DNAT): If ECSs in a VPC need to provide services for an extranet, you can use the DNAT
function. The requests for accessing an EIP using a specified protocol and port are forwarded based on the mapping between IP addresses


and ports to the specified port of the target ECS. In addition, multiple ECSs can share an EIP and bandwidth to precisely control
bandwidth resources.

Figure 3 DNAT (Region Type II)

Security and protection


You can use security groups to implement resource access control at the port level in a VPC, helping you comprehensively ensure ECS security.
You can use the security group function to divide ECSs in a VPC into multiple security zones and configure different access control rules for each security zone.

DHCP
DHCP automates the assignment of IP addresses to ECSs in a subnet.
Users or network administrators can use DHCP to configure all computers in a centralized manner.

VPC peering
In the network overlay SDN scenario, you can create a VPC peering connection between two VPCs so that subnets under the VPCs can
communicate with each other.

Benefits
A VPC facilitates internal network management and configuration, and ensures secure and quick network changes.

Flexible deployment: You can customize network division to fully control private networks.

Secure and reliable network: Full logical isolation is implemented. You can configure different access rules on demand to improve network
security.

Various network connections: The VPC supports various network connections, meeting your cloud service requirements in a flexible and
efficient manner.

2.8.6.10.2 Region Type Differences


eDME supports two deployment modes: Region Type II and Region Type III. Table 1 describes the differences between the two scenarios.

Table 1 Comparison between the two scenarios

Scenario Feature: Infrastructure - network node requirements
  Region Type II: Three servers (used to deploy the SDN controller), plus network devices used with the SDN controller, for example, core/aggregation switches, access switches, and firewalls.
  Region Type III: No physical network node needs to be added.

Scenario Feature: Resource pool - network resource pool
  Region Type II: Network overlay SDN. An SDN controller oriented to data center networks provides application-specific network automation functions and VXLAN networks for cloud services.
  Region Type III: Non-SDN. In the Type III scenario, Elastic Volume Services (EVSs) are used to provide VLAN-based networks for VPCs.

Scenario Feature: Cloud service
  Table 2 describes cloud service availability in the two scenarios.

Table 2 Cloud service availability in Region Type II and Region Type III scenarios

Cloud Service Region Type II Region Type III

Virtual private cloud (VPC) Supported (mandatory) Supported (mandatory)

Security group Supported (mandatory) Supported (mandatory)

Elastic IP address (EIP) Supported (mandatory) Not supported

Source network address translation (SNAT) Supported (mandatory) Not supported

Destination network address translation (DNAT) Supported (mandatory) Not supported

Elastic load balancing (ELB) Supported (mandatory) Not supported

Virtual firewall (vFW) Supported (mandatory) Not supported

Virtual private network (VPN) Supported (mandatory) Not supported

Domain name service (DNS) Supported (mandatory) Not supported

Public service network Supported (mandatory) Not supported

2.8.6.10.3 Application Scenarios (Region Type II)

Secure and Isolated Network Environment


The VPC service enables you to deploy a network environment that is isolated from the extranet for ECSs, such as those that function as database
nodes or server nodes when you build a website.
You can place multi-tier web applications into different security zones, and configure access control policies for each security zone as required. For
example, you can create two VPCs, add web servers to one VPC, and add database servers to the other. Then, you can create security groups for the
two VPCs and configure inbound and outbound rules so that the web servers can communicate with the extranet while the database servers cannot
communicate with the extranet. The purpose is to achieve security protection on database servers, meeting high security requirements.

Common Web Applications


You can deploy basic web applications in a VPC.
You can bind EIPs or use NAT to communicate with extranets. You can use security groups to control data traffic and ensure web application
security.

2.8.6.10.4 Application Scenarios (Region Type III)

Secure and Isolated Network Environment


The VPC enables you to deploy a network environment that is isolated from the extranet for ECSs, such as those that function as database nodes or
server nodes when you build a website.

Figure 1 Secure and isolated network environment (Region Type III)


2.8.6.10.5 Implementation Principles (Region Type II)


Figure 1 shows the logical architecture of VPC and other network services in the Region Type II scenario.

Figure 1 Logical architecture (Region Type II)

Table 1 Logical architecture

Module Description

Service presentation and O&M layer Provides a user-oriented service interface.

Service collaboration layer Implements collaboration among compute, storage, and network resources.

Network control layer and resource pool (Region Implements service policy orchestration, network modeling, and network instantiation based on
Type II) hardware devices.

2.8.6.10.6 Constraints
Table 1 lists the constraints on the functions and features of the VPC service.

Table 1 Constraints

Function and Constraint


Feature

Subnet (Region Subnet: In a VPC, communication inside a subnet is at Layer 2, and different subnets communicate with each other at Layer 3. After a
Type II) subnet is created, the CIDR block cannot be changed.
Internal subnet: The ECSs in an internal subnet of a VPC can communicate with each other at Layer 2 but cannot communicate with the
ECSs in another subnet (VPC subnet or internal subnet) of the VPC. The internal subnet NIC of an ECS cannot be bound with an EIP,
and does not support the SNAT function. After a subnet is created, the CIDR block cannot be changed.
The IP addresses used for the gateway and DHCP cannot be changed.
NOTE:

The current version does not support multicast. Multicast packets sent by service VMs from the cloud platform or by extranets to the cloud
platform are processed as broadcast packets on the virtual network of the cloud platform. If there are a large number of such packets, broadcast
flooding may occur, which will affect the virtual network performance. Specifically, it will deteriorate the communication quality of other non-
multicast services. Before adding multicast to the cloud, contact technical support engineers for evaluation.


Subnet (Region Subnet: The ECSs in a subnet can communicate with each other at Layer 2, but cannot communicate with other subnets in the VPC.
Type III) After a subnet is created, the CIDR block cannot be changed.
The IP addresses used for the gateway and DHCP cannot be changed.
NOTE:

The current version does not support multicast. Multicast packets sent by service VMs from the cloud platform or by extranets to the cloud
platform are processed as broadcast packets on the virtual network of the cloud platform. If there are a large number of such packets, broadcast
flooding may occur, which will affect the virtual network performance. Specifically, it will deteriorate the communication quality of other non-
multicast services. Before adding multicast to the cloud, contact technical support engineers for evaluation.

Network type Only IPv4 networks are supported.

DHCP Allocation pools can be set only during subnet creation and cannot be modified.

VPC peering VPC peering is not transitive. For example, even if VPC B is peered with VPC A and VPC C, respectively, VPC A is not peered with
VPC C.
A VPC peering connection can be created between two VPCs in a region. The VPCs can belong to different resource sets.
Only one VPC peering connection can be created between two VPCs.

VPC peering connection route: You can add multiple routes for a VPC peering connection. To enable communication between multiple local subnets and multiple peer subnets in two VPCs, you only need to add routes without the need to add VPC peering connections.
A VPC can be peered with multiple VPCs at the same time. The route destination address of the VPC cannot overlap with the VPC's subnet. The route destination addresses of all VPC peering connections of the VPC cannot overlap with each other.

2.8.6.10.7 Relationships with Other Cloud Services


For details about the relationships between VPC and other cloud services, see Table 1.

Table 1 Relationships between VPC and other cloud services

Cloud Service Name Description

Elastic Cloud Server An ECS must be bound to (associated with) a VPC.

2.8.6.11 EIP Service


What Is an EIP?

Benefits

Application Scenarios

Relationship with Other Cloud Services

Constraints

2.8.6.11.1 What Is an EIP?

Definition
An elastic IP address (EIP) is a static IP address based on an external network (referred to as extranet. An extranet can be the Internet or the local
area network (LAN) of an enterprise). An EIP is accessible from an extranet.
All IP addresses configured for instances in a LAN are private IP addresses which cannot be used to access an extranet. When applications running
on instances need to access an extranet, you can bind an EIP so that instances in a Virtual Private Cloud (VPC) can communicate with the extranet
using a fixed external IP address.
EIPs can be flexibly bound to or unbound from resources, such as Elastic Cloud Servers (ECSs) associated with subnets in a VPC. An instance
bound with an EIP can directly use this IP address to communicate with an extranet, but this IP address cannot be viewed on the instance.

Network Solution

Hardware firewalls are used to translate between private and external IP addresses.

Functions
Elastically binding an external IP address
EIPs provide flexible, high-performance access to an extranet. You can apply for an independent external IP address, which can be bound as
needed to an ECS to allow the ECS to access an extranet. The binding and unbinding take effect immediately.

Applying for and holding EIPs separately


You do not need to apply for EIPs together with other compute or storage resources. EIPs are independent resources.

Applying for EIPs in batches


You can apply for multiple EIPs at a time.

Manually specifying an EIP or automatically allocating an EIP


When applying for an EIP, you can specify an IP address, which can be applied for successfully if it has not been allocated, or you can let the
system allocate one.

2.8.6.11.2 Benefits
An EIP is used to enable an extranet to access cloud resources. An EIP can be bound to or unbound from various service resources to meet different service requirements.
You can bind an EIP to an ECS so that the ECS can access an extranet.

2.8.6.11.3 Application Scenarios

Using an EIP to Enable an ECS in a VPC to Access an Extranet


To enable an ECS in a VPC to access an extranet, bind an EIP to it.

Using an EIP and SNAT to Enable ECSs in a VPC to Access an Extranet


To enable multiple ECSs in a VPC to access an extranet, use an EIP and SNAT.
After you configure an EIP and a subnet in an SNAT rule, the ECSs in the subnet can access an extranet using the EIP.


2.8.6.11.4 Relationship with Other Cloud Services


Refer to Table 1 to learn about the relationship between EIP and other cloud services.

Table 1 Relationship between EIP and other cloud services

ECS: An NIC of an ECS can be bound to an EIP. In this case, the ECS is associated with the EIP.

SNAT: SNAT maps the IP addresses in a network segment in a VPC to EIPs so that the ECSs in the subnet can access an extranet. After an SNAT rule is created, all ECSs in the subnet can access an extranet using the configured EIP.

DNAT: DNAT uses the mapping between IP addresses and ports to forward the requests for accessing an EIP through specified protocols and ports to the specified ports of target ECSs. In addition, multiple ECSs can share an EIP and the bandwidth to precisely control bandwidth resources.

2.8.6.11.5 Constraints
Before using EIPs, learn about the constraints described in Table 1.

Table 1 Constraints of EIPs

Item Constraint

Network type An EIP can be an external IPv4 address.


An EIP cannot be an external IPv6 address.

Binding and unbinding An instance interface can be bound to only one EIP.
An EIP can be bound to only one instance interface.
EIP binding and unbinding take effect immediately.
EIP binding and unbinding do not affect the running of instances.
Each of the active and extension NICs can be bound to an EIP.
An EIP can be bound only on a Region Type II network.

2.8.6.12 Security Group Service


Security Group Overview

Constraints and Limitations


2.8.6.12.1 Security Group Overview


A security group is a logical group that provides access control policies for cloud servers that have the same security protection requirements in a
resource set. After a security group is created, you can define different access rules in the security group to protect servers that are added to it. A
server NIC can be added to one security group, whereas a server using multiple NICs can be added to multiple security groups.

2.8.6.12.2 Constraints and Limitations


Before using a security group, familiarize yourself with the constraints and limitations listed in Table 1.

Table 1 Constraints and limitations of security groups

Item Constraint and Limitation

Security group: Only Elastic Cloud Servers (ECSs) are supported.
As a logical group, a security group works on resource sets. That is, cloud servers that have the same network security isolation requirements in the same resource set can be bound to the same security group.

2.8.6.13 NAT Service


What Is the NAT Service?

Benefits

Application Scenarios

Constraints and Limitations

Relationships with Other Services

2.8.6.13.1 What Is the NAT Service?


The Network Address Translation (NAT) service provides the NAT service for Elastic Cloud Servers (ECSs) in a Virtual Private Cloud (VPC) so that
the ECSs can share one or more elastic IP addresses (EIPs) to access the Internet or be accessed from the Internet.
The NAT service provides Source Network Address Translation (SNAT) and Destination Network Address Translation (DNAT) functions.

The SNAT function translates private IP addresses in a VPC to a public IP address by binding an EIP to SNAT rules, providing multiple ECSs
across availability zones (AZs) in a VPC with secure and efficient access to the Internet.
Figure 1 shows how SNAT works.

Figure 1 SNAT architecture


The DNAT function enables ECSs across AZs in a VPC to share an EIP to provide services for the Internet by binding an EIP to DNAT rules.
Figure 2 shows how DNAT works.

Figure 2 DNAT architecture

2.8.6.13.2 Benefits
Flexible deployment
The NAT service can be deployed across subnets and AZs. Across-AZ deployment ensures high availability (HA). Any fault in a single AZ
does not affect the service continuity of the NAT service.

Lower costs
Multiple ECSs can share an EIP. When ECSs in a VPC send data to the Internet or provide application services to the Internet, the NAT service
translates private IP addresses to a public IP address or maps a public IP address to the specified private IP address. Multiple ECSs share an
EIP. You do not need to apply for multiple public IP addresses and bandwidth resources for ECSs to access the Internet, which effectively
reduces costs.

2.8.6.13.3 Application Scenarios


Configuring SNAT rules to enable ECSs to access the Internet


If ECSs in a VPC send a large number of requests for accessing the Internet, you can use the SNAT function to enable the ECSs to share one or
more EIPs to access the Internet without exposing their private IP addresses. In a VPC, each subnet corresponds to one SNAT rule, and each
SNAT rule is configured with one EIP. Figure 1 shows the networking diagram.

Figure 1 Using SNAT to access the Internet

Configuring DNAT rules to enable ECSs to provide services accessible from the Internet
If ECSs in a VPC need to provide services for the Internet, use the DNAT function.
When the DNAT function binds an EIP to DNAT rules and the Internet accesses the EIP using a specified protocol and a specified port, the
DNAT service forwards the request to the corresponding port of the target ECS based on the mapping between IP addresses and ports. In this
way, multiple ECSs can share an EIP and bandwidth resources.
Each ECS is configured with one DNAT rule. If there are multiple ECSs, you can create a DNAT rule for each ECS to share one or more EIPs.
Figure 2 shows the networking diagram; a mapping sketch follows it.

Figure 2 Using DNAT to provide services for the Internet
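The mapping sketch below restates the DNAT behavior in code: a request arriving at an EIP on a given protocol and port is forwarded to the mapped private IP address and port of the target ECS. All addresses, ports, and rules are hypothetical.

    # Conceptual sketch of DNAT rule lookup (all rule data hypothetical).
    DNAT_RULES = {
        # (EIP, protocol, public port) -> (private IP of target ECS, private port)
        ("203.0.113.10", "tcp", 80):  ("192.168.1.11", 8080),
        ("203.0.113.10", "tcp", 443): ("192.168.1.12", 8443),
    }

    def forward(eip, protocol, port):
        target = DNAT_RULES.get((eip, protocol, port))
        if target is None:
            raise LookupError("no DNAT rule for this EIP/protocol/port")
        return target

    print(forward("203.0.113.10", "tcp", 80))   # ('192.168.1.11', 8080)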

2.8.6.13.4 Constraints and Limitations



Note the following requirements about the NAT service:

Only one SNAT rule can be created for each VPC subnet.

SNAT and DNAT should not share EIPs. SNAT and DNAT rules are configured for different services. If SNAT and DNAT rules reuse the same
EIP, resource preemption will occur.

When both the EIP and NAT services are configured for an ECS, data will be forwarded through the EIP.

Each port on an ECS can have only one DNAT rule and be mapped to only one EIP.

2.8.6.13.5 Relationships with Other Services


Table 1 shows the relationships between NAT and other services.

Table 1 Related services

ECS
  Interaction: The NAT service enables other cloud services in a VPC to access the Internet or provide services for the Internet.
  Reference: Configuring an SNAT Rule to Enable ECSs to Access the Internet; Configuring DNAT Rules to Enable ECSs to Provide Services Accessible from the Internet

VPC
  Interaction: ECSs in a VPC can be interconnected with the Internet through NAT.
  Reference: Configuring an SNAT Rule to Enable ECSs to Access the Internet

EIP
  Interaction: The NAT service enables ECSs in a VPC to share one or more EIPs to access the Internet or provide services for the Internet.
  Reference: Configuring an SNAT Rule to Enable ECSs to Access the Internet; Configuring DNAT Rules to Enable ECSs to Provide Services Accessible from the Internet

2.8.6.14 ELB
What Is Elastic Load Balance?

Benefits

Application Scenarios

Relationships with Other Cloud Services

Accessing and Using ELB

2.8.6.14.1 What Is Elastic Load Balance?

Definition
Elastic Load Balance (ELB) is a service that automatically distributes incoming traffic across multiple backend servers based on predefined
forwarding policies.

ELB can expand the access handling capability of application systems through traffic distribution and achieve a higher level of fault tolerance
and performance.

ELB helps eliminate single points of failure (SPOFs), improving availability of the whole system.

In addition, ELB is deployed on the internal and external networks in a unified manner and supports access from the internal and external
networks.

You can create a load balancer and configure servers and listening ports required for services on a web-based, unified graphic user interface (GUI)
for cloud computing management.


Functions
ELB provides a way to configure the load balancing capability. A self-service web-based console is provided for you to easily configure the service
and quickly spin up more capacity for load balancing.
ELB provides the following functions:

Linear scaling and zero SPOFs

Load balancing over TCP, UDP, HTTPS, and HTTP

Support for access from the internal and external networks.

2.8.6.14.2 Benefits
ELB has the following advantages:

High availability and security

Automatically detects and removes abnormal nodes and automatically routes the traffic to normal nodes.

Expands elastic capacity based on application loads without service interruption when traffic fluctuates.

High performance and flexibility

Concurrent connections: A large number of concurrent connections are supported, meeting users' traffic requirements.

Flexible combination of components: Various service components can be flexibly combined to meet various service and performance
requirements of customers.

Service deployment in seconds: Complex engineering deployment processes such as engineering planning and cabling are not required.
Services can be deployed and rolled out in seconds.

Low cost and easy upgrade

No fixed asset investment: Customers do not need to invest in fixed assets such as equipment rooms, power supply, construction, and
hardware materials. Services can be easily deployed and rolled out.

Seamless system update: Provides smooth and seamless rollout of all new services and fault upgrade to ensure service continuity.

Smooth performance improvement: When you need to expand deployment resources to meet service requirements, the one-stop
expansion service frees you from hardware upgrade troubles.

2.8.6.14.3 Application Scenarios

Load Distribution
For websites with heavy traffic or internal office systems of governments or enterprises, ELB helps distribute service loads to multiple backend
servers, improving service processing capabilities. ELB also performs health checks on backend servers to automatically remove malfunctioning
ones and redistribute service loads among backend server groups. A backend server group consists of multiple backend servers.

Figure 1 Load distribution
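The behavior described above, distributing requests across a backend server group while skipping servers that fail health checks, can be summarized by the conceptual sketch below. It only illustrates the idea; it is not ELB's actual scheduling algorithm, and the backend addresses and health states are hypothetical.

    # Conceptual sketch: round-robin distribution that skips unhealthy backends.
    import itertools

    BACKENDS = ["192.168.1.11:80", "192.168.1.12:80", "192.168.1.13:80"]
    HEALTHY  = {"192.168.1.11:80": True, "192.168.1.12:80": False, "192.168.1.13:80": True}

    _pool = itertools.cycle(BACKENDS)

    def pick_backend():
        """Return the next healthy backend, skipping nodes that failed the health check."""
        for _ in range(len(BACKENDS)):
            candidate = next(_pool)
            if HEALTHY.get(candidate):
                return candidate
        raise RuntimeError("no healthy backend available")

    print([pick_backend() for _ in range(4)])   # alternates between .11 and .13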


Capacity Expansion
For applications featuring unpredictable and large fluctuations in demand, for example, video or e-commerce websites, ELB can automatically scale
their capacities.

Figure 2 Capacity expansion

2.8.6.14.4 Relationships with Other Cloud Services


Figure 1 and Table 1 show the relationships between ELB and other cloud services.

Figure 1 Relationships between ELB and other cloud services

Table 1 Relationships between ELB and other cloud services


Cloud Service Name Description

Virtual Private Cloud (VPC): Requires the virtual IP address (VIP) and subnets assigned in the VPC service.

Elastic Cloud Server (ECS) / Bare Metal Server (BMS): Provides the traffic distribution control function for backend servers. The backend servers for ELB can be ECSs or BMSs.

Elastic IP Address (EIP): An EIP can be bound to a load balancer. If the subnet is an internal subnet, EIPs cannot be bound to load balancers.

Elastic Container Engine (ECE): Provides load balancing services for external systems through ELB.

2.8.6.14.5 Accessing and Using ELB


You can use either of the following methods to access ELB:

Web UI
Log in to the eDME operation portal as a tenant user. In the navigation pane on the left, click Network and select the cloud service.

API
If you want to integrate the cloud service into third-party systems for secondary development, call APIs to access ELB.

2.8.6.15 vFW
What Is Virtual Firewall?

Advantages

Application Scenarios

Constraints

Relationships with Other Cloud Services

Accessing and Using vFW

2.8.6.15.1 What Is Virtual Firewall?


Virtual firewall (vFW) is a security service for Virtual Private Cloud (VPC) and provides security protection based on Access Control List (ACL)
rules. vFWs include the edge firewall and distributed firewall.

Edge firewall
An edge firewall is deployed at a VPC border or Internet border to control traffic reaching and leaving a VPC over a public network, traffic between VPCs (through VPC peering), and traffic between a VPC and the local data center (public service BMS/VIP).

Distributed firewall
A distributed firewall can control access between ECSs in a VPC. Compared with a security group, a distributed firewall provides more
efficient and convenient security protection, with less impact on network performance.

2.8.6.15.2 Advantages
vFW provides layered and flexible network ACLs. It enables you to easily manage access rules for VPCs and ECSs, enhancing cloud server
protection.
vFW has the following advantages:

Supports traffic filtering based on the protocol number, source or destination port number, and source or destination IP address.

Allows multiple VPCs or ECSs to use the same ACL policy, improving usability.

Simplifies the customer configuration in scenarios where multiple projects are interconnected by default.


2.8.6.15.3 Application Scenarios


vFW is suitable for security-demanding service scenarios. It can work with security groups to provide multiple security protection layers for cloud
servers. It supports traffic filtering based on the protocol number, source or destination port number, and source or destination IP address, as shown
in Figure 1.

Figure 1 vFW panorama

2.8.6.15.4 Constraints
Table 1 Constraints on vFWs

Resource Constraint

Edge firewall An edge firewall can be associated with multiple VPCs, but a VPC can be associated with only one edge firewall.
By default, an edge firewall denies all traffic. You need to add custom rules to allow required traffic.
An edge firewall does not affect the mutual access between cloud servers in an associated VPC.

Distributed Layer 3 ports (such as gateway and DHCP ports) cannot be associated.
Firewall
The public service network ECS is not protected by a distributed firewall.
A distributed firewall can be associated with multiple VM NICs, but a VM NIC can be associated with only one distributed firewall.
Based on the default rule, a distributed firewall denies all inbound traffic and allows all outbound traffic. You need to add custom rules
to allow required traffic.
For persistent connection applications, both inbound and outbound rules that allow all traffic must be configured. Otherwise, persistent
connections will be interrupted due to rule changes or cloud server migration.

Firewall rule Supported protocols: TCP, UDP, ICMP, and ANY (all protocols)
Supported policy types: Allow and Deny
The firewall can control traffic by source IP address, destination IP address, source port, and destination port.
A rule ahead in sequence takes precedence. If two rules of a firewall conflict, the rule ahead in sequence takes effect.
The firewall can control the traffic on IPv4 networks.
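The first-match evaluation order described in the firewall rule constraints above can be illustrated with the conceptual sketch below; the rules, addresses, and ports are hypothetical, and the default deny-all entry mirrors the edge firewall's default behavior.

    # Conceptual sketch of ordered firewall rule evaluation (first matching rule wins).
    import ipaddress

    RULES = [
        # (protocol, source CIDR, destination port, action), in priority order
        ("tcp", "192.168.1.0/24", 22,   "allow"),
        ("tcp", "0.0.0.0/0",      22,   "deny"),
        ("any", "0.0.0.0/0",      None, "deny"),   # default: deny all remaining traffic
    ]

    def evaluate(protocol, src_ip, dst_port):
        for rule_proto, src_cidr, port, action in RULES:
            proto_ok = rule_proto in ("any", protocol)
            src_ok = ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_cidr)
            port_ok = port is None or port == dst_port
            if proto_ok and src_ok and port_ok:
                return action
        return "deny"

    print(evaluate("tcp", "192.168.1.5", 22))   # allow (first rule matches)
    print(evaluate("tcp", "10.0.0.8", 22))      # deny  (second rule matches)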

2.8.6.15.5 Relationships with Other Cloud Services


A vFW can be associated with a VPC to provide security protection for the VPC, as shown in Figure 1.

Figure 1 Relationships between vFW and other cloud services


Table 1 Relationships between vFW and other cloud services

Cloud Service Name Description

VPC An edge firewall can be associated with a VPC to provide security protection for the VPC.

ECS A distributed firewall can be associated with an ECS NIC to provide security protection for the ECS.

2.8.6.15.6 Accessing and Using vFW


You can use either of the following methods to access vFW:

Web UI
Log in to the eDME operation portal as a tenant user. In the navigation pane on the left, click Network. On the Network page, click Virtual
Firewalls.

API
If you need to integrate the cloud service into third-party systems for secondary development, call APIs to access vFW. For details, see section "Network Services" in eDME 24.0.0 Operation Portal API Reference or eDME 24.0.0 API Reference.

2.8.6.16 DNS
What Is Domain Name Service?

Advantages

Application Scenarios

Restrictions

Related Services

2.8.6.16.1 What Is Domain Name Service?


Domain Name Service (DNS) translates domain names such as www.example.com into IP addresses such as 192.168.2.2 that servers use to connect to
each other. You can enter a domain name in a browser to visit your website or web application.
DNS associates private domain names that take effect only within VPCs with private IP addresses to facilitate access to cloud services within the
VPCs. You can also directly access cloud services through private DNS servers.
Cloud servers in a VPC can access record sets of only the private zone associated with the VPC.

Figure 1 Process to resolve a private domain name


When a cloud server in a VPC requests a private domain name, the private DNS server directly returns a private IP address mapped to the
domain name.

When the cloud server requests a public domain name, the private DNS server forwards the request to a public DNS server on the Internet and
returns the public IP address obtained from the public DNS server (a conceptual sketch of this decision follows).
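A conceptual sketch of this decision follows. The private zone data is illustrative, and the system resolver stands in for forwarding to a public DNS server; this is not the private DNS server's implementation.

import socket

# Illustrative private zone records that take effect only within the VPC.
PRIVATE_ZONE = {
    "db01.example.internal": "192.168.2.2",
    "app01.example.internal": "192.168.2.10",
}

def resolve(name):
    # Private names are answered directly with the mapped private IP address.
    if name in PRIVATE_ZONE:
        return PRIVATE_ZONE[name]
    # Public names are forwarded; the system resolver stands in for a public DNS server.
    return socket.gethostbyname(name)

print(resolve("db01.example.internal"))  # 192.168.2.2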

2.8.6.16.2 Advantages
High performance: Offers a new generation of efficient and stable resolution services, enabling tens of millions of concurrent queries on a
single node.

Easy access to cloud resources: Applies for domain names for cloud resources and hosts them in DNS so that you can access your cloud
resources with domain names.

Isolation of core data: A private DNS server provides domain name resolution for cloud servers carrying core data, enabling communications
while safeguarding the core data. You do not need to bind EIPs to these cloud servers.

2.8.6.16.3 Application Scenarios


DNS is used in scenarios like Managing Host Names of Cloud Servers, Replacing a Cloud Server Without Service Interruption, and Accessing
Cloud Resources. It provides the following functions:

Enables you to flexibly customize private domain names in VPCs.

Allows one private zone to be associated with multiple VPCs for unified management.

Quickly responds to requests for accessing cloud servers in VPCs.

Managing Host Names of Cloud Servers


You can plan host names based on the locations, usages, and owners of cloud servers and map the host names to private IP addresses during
enterprise production, development, and testing. This helps you easily obtain information about the cloud servers to facilitate management.
For example, if you have deployed 20 cloud servers in an AZ, 10 used for website A and 10 for website B, you can plan their host names and private
domain names as follows:

Cloud servers for website A: weba01.region1.az1.com – weba10.region1.az1.com

Cloud servers for website B: webb01.region1.az1.com – webb10.region1.az1.com

After configuring the preceding private domain names, you will be able to quickly determine the locations and usages of cloud servers during
routine management and maintenance.
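A short sketch of such a naming plan, with illustrative private IP addresses:

# Generate the planned private domain names for websites A and B and map each
# to an illustrative private IP address.
plan = {}
for site, prefix in (("weba", "192.168.1."), ("webb", "192.168.2.")):
    for i in range(1, 11):
        plan[f"{site}{i:02d}.region1.az1.com"] = f"{prefix}{i}"

print(plan["weba01.region1.az1.com"])  # 192.168.1.1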

Replacing a Cloud Server Without Service Interruption


A website application is usually deployed on multiple servers to share the service load. When services on a faulty cloud server need to be switched to
a backup cloud server, you only need to modify the DNS record so that the domain name resolves to the backup server's IP address; the server IP
addresses referenced by your applications do not change, ensuring service continuity.


For example, multiple cloud servers are deployed in the same VPC and communicate with each other using private IP addresses. The private IP
addresses are coded into the internal APIs called among the cloud servers. If one cloud server is replaced in the system, the private IP address
changes accordingly. In this case, you also need to change that IP address in the APIs and re-publish the website, which complicates system
maintenance.
However, if you create a private zone for each cloud server in the VPCs and map domain names to private IP addresses, the cloud servers will be
able to communicate using private domain names. When you replace one of the cloud servers, you only need to change the IP address in record sets,
instead of modifying the code.

Accessing Cloud Resources


You can configure private domain names for cloud servers created in a VPC so that the cloud servers can access your cloud resources, such as SMN
and OBS, in either of the following ways:

If a public DNS server is configured for subnets of the VPC associated with a private zone, domain name requests for accessing cloud
resources from cloud servers in the VPC will be directed to the Internet. Steps 1 to 10 in the right part of Figure 1 illustrate how a domain name
is resolved when a cloud server accesses OBS and SMN within the VPC. The request is directed to the Internet, resulting in long access latency
and a poor experience.

If a private DNS server has been configured for the VPC subnets, it directly processes domain name requests for accessing cloud resources
from cloud servers in the VPC. When a cloud server accesses cloud services like OBS and SMN, the private DNS server will return private IP
addresses of these services, instead of routing the requests to the Internet, reducing latency and improving performance. Steps 1 to 4 in the left
part of Figure 1 show the process.

Figure 1 Accessing cloud resources

2.8.6.16.4 Restrictions
Table 1 describes the restrictions on DNS.

Table 1 Restrictions

Function or Feature Restriction

Domain name constraints When delivering a service domain name, use a root domain name that is different from the external service domain name of the cloud platform.

DNS Only IPv4 addresses are supported.


Only private domain name resolution within a region is supported.
A VPC cannot be associated with two domains with the same name. Otherwise, the DNS server cannot return a response while
DNS records are being queried.

Record set A maximum of 500 record sets can be added for each private zone.
By default, the system creates SOA and NS record sets for each private zone. These record sets cannot be deleted, modified, or
manually added.
You can add A, CNAME, MX, TXT, SRV, and PTR record sets for a private zone.

2.8.6.16.5 Related Services



Figure 1 and Table 1 show the relationship between DNS and other services.

Figure 1 Relationship between DNS and other services

Table 1 Relationship between DNS and other services

Service Description

Elastic Cloud Server (ECS)/Bare Metal Server (BMS) DNS provides domain name resolution for ECSs or BMSs.

Virtual Private Cloud (VPC) The VPC service provides basic service networks for DNS. After a private zone is associated with a VPC,
record sets of the private zone are accessible to the VPC.

2.8.6.17 VPN
What Is Virtual Private Network?

Advantages

Application Scenarios

Related Services

Restrictions and Limitations

2.8.6.17.1 What Is Virtual Private Network?


A virtual private network (VPN) is a secure, encrypted communication tunnel established between a remote user and a virtual private cloud (VPC).
This tunnel complies with industry standards and can seamlessly extend your data center to a VPC.
By default, an Elastic Cloud Server (ECS) or bare metal server (BMS) in a VPC cannot communicate with your data center or private network. To
enable communication between them, use a VPN. If you are a remote user and you want to access the service resources of a VPC, you can use a
VPN to connect to the VPC.

Figure 1 VPN structure


VPN gateway
A VPN gateway is an egress gateway of a VPC. You can use a VPN gateway to enable encrypted communication between a VPC and your data
center, or between a VPC in one region and a VPC in another region. A VPN gateway works together with the remote gateway in the local data
center or in the peer VPC. Each local data center must have a remote gateway, and each VPC must have a VPN gateway. A VPN gateway can
connect to one or more remote gateways, so the VPN service supports both point-to-point and point-to-multipoint connections.

Remote gateway
The public IP address of the VPN device in your data center or of a VPC in another region. This IP address is used to communicate with ECSs
or BMSs in the specified VPC.

VPN connection
A VPN connection uses Internet-based IPsec encryption to establish a confidential and secure communication tunnel between different networks.
It connects the VPN gateway to the remote gateway of the user data center through a secure and reliable encrypted tunnel. Currently, only
Internet Protocol Security (IPsec) VPN is supported.

Networking Solution
Professional network hardware devices are used to establish an encrypted communication tunnel for network connectivity.

Key Technologies

Key Technology Description

Encryption algorithm AES-128, AES-192, and AES-256

Authentication algorithm SHA2-256, SHA2-384, and SHA2-512

Transmission protocol Supported protocols: ESP, AH, and AH-ESP

Version Supported versions: V1 and V2

2.8.6.17.2 Advantages
Secure and reliable data
Professional Huawei devices are used to encrypt transmission data using Internet Key Exchange (IKE) and Internet Protocol Security (IPsec),
and provide a carrier-class reliability mechanism, ensuring stable running of the VPN service in terms of hardware, software, and links.

Seamless resource scaling up


Your local data center can be connected to a VPC, meeting the requirements for elastic scaling of applications and services.

Low-cost connection
IPsec channels are set up over the Internet. Compared with traditional connection modes, VPN connections incur lower costs.

Convenient provisioning operation


The VPN service and its configuration take effect immediately. This enables you to rapidly and efficiently deploy the VPN service.

Flexible architecture
Professional one-step services are provided through long-term cooperation and close contact with carriers.

Professional O&M capabilities


VPN can meet your requirement for either hybrid cloud access or remote DR backup.

2.8.6.17.3 Application Scenarios

Deploying a VPN to Connect a VPC to a Local Data Center


With the VPN between a VPC and your traditional data center, you can easily use the ECSs and block storage resources in the cloud. Applications
can be migrated to the cloud and additional web servers can be created to increase the computing capacity on a network. In this way, a hybrid cloud
is built, which reduces IT O&M costs and protects enterprise core data from being leaked.

Deploying a VPN to Connect a VPC to Multiple Local Data Centers


With VPNs between a VPC and multiple traditional data centers, you can easily use ECSs and block storage resources in the cloud. To connect multiple
sites, ensure that the subnet CIDR blocks of the sites involved in the VPN connections do not overlap.

Cross-Region Interconnection Between VPCs


In this scenario, a VPN tunnel is established between two VPCs in different regions to enable mutual access between the two VPCs.

2.8.6.17.4 Related Services


Figure 1 and Table 1 describe the relationship between VPN and other cloud services.

Figure 1 VPN-related services

Table 1 Relationship between VPN and other cloud services


Service Name Description

VPC VPN builds a communication tunnel between a VPC and a traditional data center; therefore, the VPC service is used together with VPN.

2.8.6.17.5 Restrictions and Limitations


Before using VPN, learn the restrictions described in Table 1.

Table 1 VPN restrictions

Item Restriction

CIDR blocks of the local subnet The CIDR blocks of the local subnet in a data center must be within the CIDR blocks of the private network (all the subnet CIDR blocks of the VPC).

CIDR blocks of the remote subnet The CIDR blocks of the remote subnet in a data center cannot overlap with the subnet CIDR blocks of the VPC (excluding the CIDR blocks of the internal subnets).

VPN gateway Each VPN gateway can be associated with only one VPC.

VPN connection A VPN gateway can connect to multiple subnets in the associated VPC.
All VPN connections under the same VPN gateway cannot overlap with each other.
All remote subnets under the same VPN gateway cannot overlap with each other.
The CIDR blocks of the local subnets of all VPN connections under the same VPN gateway cannot overlap with each other.

Network type Only Region Type II is supported.

Correct example (these overlap rules can also be checked programmatically, as sketched after the example):
VPN connection 1: CIDR block of the local subnet is 10.0.0.0/24, and CIDR blocks of the remote subnet are 192.168.0.0/24 and 192.168.1.0/24.
VPN connection 2: CIDR block of the local subnet is 10.0.1.0/24, and CIDR block of the remote subnet is 192.168.2.0/24.
VPN connection 3: CIDR block of the local subnet is 10.0.2.0/24, and CIDR block of the remote subnet is 192.168.2.0/24.
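These non-overlap rules can be checked with Python's ipaddress module. The sketch below reuses the CIDR blocks from the example and checks only the local subnets; it is illustrative and not part of the VPN service.

from ipaddress import ip_network
from itertools import combinations

# Local and remote subnets of the VPN connections in the example above.
connections = {
    "vpn-1": {"local": ["10.0.0.0/24"], "remote": ["192.168.0.0/24", "192.168.1.0/24"]},
    "vpn-2": {"local": ["10.0.1.0/24"], "remote": ["192.168.2.0/24"]},
    "vpn-3": {"local": ["10.0.2.0/24"], "remote": ["192.168.2.0/24"]},
}

def overlapping(cidrs):
    # Return every pair of CIDR blocks that overlap.
    nets = [ip_network(c) for c in cidrs]
    return [(a, b) for a, b in combinations(nets, 2) if a.overlaps(b)]

# Local subnets of all connections under one VPN gateway must not overlap.
local_blocks = [c for conn in connections.values() for c in conn["local"]]
print(overlapping(local_blocks))  # [] means the local subnets in the example do not overlap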

2.8.6.18 Public Service Network


Concept

Function

Benefits

Application Scenarios

Constraints

Procedure

2.8.6.18.1 Concept
The public service network (supported only in Region Type II) is used for communication between a server and the ECSs, virtual IP addresses
(VIPs), or BMSs in all VPCs of a user. The IP addresses of the public service network are classified into two types: server IP addresses and client IP
addresses. To ensure that the client IP address pool is large enough, the mask (prefix length) of the client network segment cannot exceed 15.
Generally, the address range is 100.64.xx.xx/10: the 100.64.0.0/10 range is reserved (RFC 6598) for carrier-grade NAT internal networks and is
normally not accessed directly, so it can be used for client IP addresses. Each ECS or BMS is automatically assigned a client IP address when it is
created. Client IP addresses are allocated in a unified manner to ensure that they do not overlap. ECSs and BMSs cannot access each other through
client IP addresses. The sizing and overlap rules can be checked as sketched below.
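A minimal sketch of these checks, assuming illustrative VPC subnets:

from ipaddress import ip_network

client_pool = ip_network("100.64.0.0/10")  # carrier-grade NAT range (RFC 6598)
vpc_subnets = [ip_network("10.0.0.0/24"), ip_network("172.16.8.0/22")]

# The client network prefix must not be longer than /15 so that client IP
# addresses can be allocated to ECSs and BMSs in all VPC subnets.
assert client_pool.prefixlen <= 15, "client pool too small"

# Client (and server) segments must not overlap with any VPC subnet.
assert not any(client_pool.overlaps(s) for s in vpc_subnets), "overlaps a VPC subnet"

print(client_pool.num_addresses)  # 4194304 client addresses available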

2.8.6.18.2 Function
Through the public service network, ECSs or BMSs in all VPCs of a user can access specified services (such as DNS and OBS services) deployed
by the user.

2.8.6.18.3 Benefits
With the public service network, you can quickly deploy services shared by all VPCs.

Flexible deployment: The public service network is deployed outside the VPC network and they are independent of each other. The public
service network can be shared by all VPCs and does not need to be configured for each VPC. Both the server and client support ECSs or
BMSs.

Easy expansion: The public service network is independently deployed outside the VPC network and exclusively uses a network segment. The
number of the public service network servers can be dynamically adjusted based on the client access volume in the VPC.

2.8.6.18.4 Application Scenarios


1. Tenant DNS and NTP of the public service zone are configured with the server IP addresses of the public service. VMs in the service zone
can access the tenant DNS and NTP through the public service address.

2. The API gateway is configured with the server IP address of the public service. VMs in the service zone can call interfaces of the API
gateway through the public service address.

3. The OBS and SFS services may be accessed by all VMs. To solve the address overlapping problem, all VMs access the OBS and SFS server
IP addresses through the public service address.

2.8.6.18.5 Constraints
The restrictions on the public service network are as follows:

1. This feature is supported only in Region Type II.

2. ECSs or BMSs in a VPC internal subnet cannot access the public service network.

3. The network segments of the server and client cannot be modified after the public service network is created.

4. The public service network communicates with the multi-tenant switch through a route.

5. There must be sufficient client network segments reserved (the IP mask cannot exceed 15) to ensure that public service client IP addresses
can be allocated to ECSs and BMSs in all VPC subnets.

6. The network segments of the public service client and server cannot be the same as those of the VPC subnets.


2.8.6.18.6 Procedure
This section describes how to create and manage a public service network.

Figure 1 Procedure

Table 1 Procedure

Procedure Description

Preparation Plan the IP network segment where the server is deployed and pre-allocate a large network segment to the client on the public service
network.
Create a Layer 3 shared egress on the iMaster NCE-Fabric controller.
For details, see section "Direct Connection Between the Border Leaf Switches and Public Service TOR Switch" in Datacenter
Virtualization Solution 2.1.0 Multi-Tenant Network Configuration Best Practices (SDN).
Determine physical networking and configure a route for a switch.
For details, see section "PSN Switch Configuration" in Datacenter Virtualization Solution 2.1.0 Multi-Tenant Network Configuration Best
Practices (SDN).

Creation Create a public service network based on the network plan.


For details, see section "Public Service Configuration on the eDME O&M Portal" in Datacenter Virtualization Solution 2.1.0 Multi-Tenant
Network Configuration Best Practices (SDN).

Management After the public service network is created, the server and client networks cannot be modified.

The public service network cannot be deleted if it contains resources. You need to delete the NIC resources of all VPCs before deleting the
network.

2.8.6.19 CSHA
What Is Cloud Server High Availability?

Benefits

Application Scenarios

Implementation Principles


Relationships with Other Cloud Services

Key Indicators

Access and Usage

2.8.6.19.1 What Is Cloud Server High Availability?

Definition
Cloud Server High Availability (CSHA) is a cross-AZ VM HA DR service implemented based on the active-active storage technology. It provides
DR protection with zero RPO for ECSs. If the production AZ is faulty, minute-level RTO DR switchover is provided to ensure service continuity.

Restrictions
Restrictions on CSHA are as follows:

Public

DR can be implemented for VMs that are created (manually or from images) or fully cloned.

Protected VM disks support only Huawei scale-out block storage and eVol storage.

DR is not supported for management VMs and linked clone VMs.

DR cannot be implemented for VMs with peripherals, such as GPUs and USB devices. If peripherals are added to a VM for which DR
has been configured, DR cannot be implemented for the peripherals.

DR cannot be implemented for VMs using non-persistent disks.

DR cannot be implemented for VMs with scalable datastores.

DR cannot be implemented for VMs without NICs and VM disks.

DR cannot be implemented for VMs that have VHD disks on datastores.

The HA solution supports DR on VMs where UltraVR is deployed.

DR cannot be implemented for VMs with SCSI transparent transmission disks and disks in SCSI transparent transmission mode.

DR cannot be implemented for VMs for which the Security VM option is enabled.


DR cannot be implemented for VMs that have shared disks.

DR cannot be implemented for VMs that use raw device mapping (RDM) shared disks.

Planned migration is not supported in HA service scenarios.

DR cannot be implemented for VMs whose boot sequence is precisely specified.

DR cannot be implemented for VMs for which Security VM Type is set to SVM.

DR cannot be implemented for VMs that are bound to hosts.

DR cannot be performed for VMs for which the high-precision timer is enabled.

DR cannot be implemented for VMs that have snapshots. If a VM snapshot is created after a DR protected group is created, the VM
snapshot is also deleted when the protected group is deleted.

Only one-to-one active-active pair DR protection is supported. Active-active replication links must be configured for the used storage
devices. The types, versions, and computing architectures of the active-active storage devices at both ends must be the same.

DR can be implemented only for VMs of the VIRTIO bus type.

All disks used by a protected VM must belong to the same Huawei storage device.


DR through mirroring is not supported.

The time of the node where UltraVR is installed must be synchronized with that of FusionCompute.
Flash storage

Supports HA service DR of the flash storage eVol storage type.

Huawei scale-out block storage

Supports HA service DR in Huawei scale-out block storage VBS scenarios.

Supports HA service DR in two scenarios: converged deployment and separated deployment of Huawei scale-out block storage.

DR volumes (volumes configured with HyperMetro pairs) do not support disk migration.

The port group access mode supports only the VLAN access mode and does not support the subnet access mode.

2.8.6.19.2 Benefits
With the UltraVR-based DR capability in virtualization scenarios, O&M administrators carry a heavy DR management workload and tenant self-service
capabilities are insufficient. This cannot meet the self-management requirements of governments and enterprises with large-scale IT infrastructure,
resulting in high centralized O&M costs and slow responses to service management requirements.
As a cross-AZ cloud server high availability service, CSHA provides self-service management capabilities for users in each sub-department of the
customer, reduces dependency on centralized management by the IT department, and improves management efficiency.

2.8.6.19.3 Application Scenarios


Customers can configure EVS disks of different service levels based on application requirements. The typical service levels are as follows:
Disaster in the production center (such as power failure and fire in the production center): In site-level fault scenarios, services are automatically
switched to the DR center to quickly start ECSs. This service level applies to scenarios where hour-level site-level HA switchover is required.
Physical device faults (such as device aging and breakdown): In VM-level fault scenarios, lightweight switchover can be performed for VMs
protected by CSHA. This service level applies to scenarios where minute-level VM-level HA switchover is required.

2.8.6.19.4 Implementation Principles


The HA solution uses the HyperMetro technology of storage devices to implement DR protection.
The data dual-write and DCL mechanisms are used to implement HyperMetro data at the storage layer.

Dual-write enables I/O requests of an application server to be synchronized to both a local LUN and a remote LUN.
Data Change Logs (DCLs) record data changes of the storage systems in the two data centers.

Write I/O process:


Dual-write and locking mechanisms are essential for active-active between two storage devices. Data changes can be synchronized using the
dual-write and DCL technologies while services are running, ensuring data consistency between two data centers.

Two HyperMetro storage systems can process hosts' I/O requests concurrently. To prevent access conflicts when different hosts access the same storage address
at the same time, a lock allocation mechanism is required. Data can be written into a storage system only when the write is allowed by the lock allocation
mechanism. If a storage system is not granted the lock, it must wait until the previous I/O is complete and the lock is released before it obtains the
write permission.

Dual-write enables application servers' I/O requests to be delivered to both local and remote caches, ensuring data consistency between
the caches.

If the storage system in one data center malfunctions, the DCL records data changes. After the storage system recovers, the data changes
are synchronized to the storage system, ensuring data consistency across data centers.

Figure 1 shows the HyperMetro write I/O process when an I/O request is delivered from an application server and causes data changes. A simplified conceptual sketch follows the numbered steps.

Figure 1 HyperMetro write I/O process


1. An application host delivers a write I/O to HyperMetro.


2. The system records logs.
3. HyperMetro concurrently writes the write I/O to both the local cache and remote cache.
4. The local cache and remote cache return the write I/O result to HyperMetro.
5. The storage system returns the write I/O result to the application host after receiving the feedback from the local cache and remote cache.

6. The storage system determines whether dual-write is successful.

If it is, the log is deleted.

If writing to either cache fails, the system converts the log into a DCL that records the differential data between the local and remote
LUNs.

If writing to either cache fails, HyperMetro is suspended and each storage system sends an arbitration request to the cloud platform quorum server. The
winning storage system continues providing services while the other stops. In the background, the storage systems use the DCL to synchronize data
between them. Once the data on the local and remote LUNs is identical, HyperMetro services are restored.
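The following is a much-simplified conceptual sketch of the dual-write and DCL behavior described above. The Cache class and the DCL list are plain Python stand-ins, not storage firmware; arbitration and background resynchronization are omitted.

class Cache:
    # Stand-in for the local or remote storage cache.
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.data = name, healthy, {}
    def write(self, address, value):
        if not self.healthy:
            return False
        self.data[address] = value
        return True

def hypermetro_write(local, remote, dcl, address, value):
    # Dual-write an I/O to both caches; on partial failure, convert the log
    # into a DCL entry that records the differential data for later resync.
    log = (address, value)                  # step 2: record a log
    ok_local = local.write(address, value)  # step 3: concurrent writes
    ok_remote = remote.write(address, value)
    if ok_local and ok_remote:
        return "success"                    # step 6: the log is deleted
    dcl.append(log)                         # differential data recorded in the DCL
    return "degraded"                       # arbitration and resync happen elsewhere

local, remote, dcl = Cache("dc-a"), Cache("dc-b", healthy=False), []
print(hypermetro_write(local, remote, dcl, 0x10, "data"), dcl)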

Read I/O process:


The data of LUNs on both storage systems is synchronized in real time. Both storage systems are accessible to hosts. If one storage system
malfunctions, the other one continues providing services for hosts.

Only Huawei UltraPath can be used in the HyperMetro solution. Huawei UltraPath has the region-based access optimization capability, reducing the number of
interactions between sites. In addition, Huawei UltraPath is optimized for active-active scenarios. It can identify geographical locations and reduce cross-site
access, thereby reducing latency and improving storage system performance. UltraPath can read data from the local or remote storage system. However, if the
local storage system is working properly, UltraPath preferentially reads data from the local storage system, preventing data read across data centers.

Figure 2 shows the HyperMetro read I/O process.

Figure 2 HyperMetro read I/O process

1. The application server applies for the read permission from HyperMetro.


If the link between the storage systems in the two data centers is down, the cloud platform quorum server determines which storage system continues providing
services for application servers.

2. HyperMetro enables the local storage system to respond to the read I/O request of the application server.

3. HyperMetro reads data from the local storage system.

If the local storage system is operating properly, it returns data to HyperMetro.

If the local storage system is working improperly, HyperMetro enables the application server to read data from the remote storage system.
The remote storage system returns data to HyperMetro.

4. The read I/O request of the application server is processed successfully.

If the data of one storage system is abnormal, HyperMetro uses data on the other storage system to repair the data, ensuring data consistency between the two
data centers.

2.8.6.19.5 Relationships with Other Cloud Services


Table 1 illustrates the relationships between CSHA and other cloud services.

Table 1 Relationships between CSHA and other cloud services

Cloud Service Relying Party Description

Elastic Cloud Server (ECS) CSHA Configures ECS DR for CSHA.

EVS block storage service ECS and CSHA Provides the DR management capability of block storage services for CSHA.

2.8.6.19.6 Key Indicators


This section describes the key indicators of CSHA.
Table 1 describes the key indicators of CSHA.

Table 1 Specifications

Item Requirement Remarks

AZ 32 The same FusionCompute resource can be registered with multiple AZs. The same storage resource must be registered with the same AZ.

Protected group: number of protected groups 2000 You are advised to configure 8 vCPUs for the DR management VM. Protected groups in service-oriented scenarios and virtualization scenarios cannot coexist.

Protected group: total number of VMs supported by the system 2000 -

Protected group: consistency group of a single storage device Inherits storage product specifications. -

Protected group: number of VMs in a protected group 1 -

2.8.6.19.7 Access and Usage


The following methods are provided:

Using the GUI


Log in to the eDME operation portal as a tenant. In the navigation pane on the left, choose DR > CSHA. The CSHA page is displayed.

2.8.6.20 Backup Service


What Is the Backup Service?

User Roles and Permissions

Related Concepts

2.8.6.20.1 What Is the Backup Service?

Definition
The backup service provides a unified operation portal for tenants in DCS multi-tenant scenarios. Administrators can define backup service
specifications to form a logically unified backup resource pool for multiple physically dispersed backup devices, helping tenants quickly obtain the
backup service, simplifying configuration, and improving resource provisioning efficiency.
Tenants focus on the backup service capabilities required by services instead of the networking and configuration of backup resources. This greatly
simplifies the use of the backup service and facilitates the configuration of backup capabilities for VMs and disks.

2.8.6.20.2 User Roles and Permissions


The eDME operation portal provides role management and access control functions for cloud services. Role management refers to the management
of users and user groups. Access control refers to the management of their permissions.
For the backup service, the user permissions provided by the eDME operation portal are mainly used to manage access to the backup service. To grant
a user the operation permissions of the backup service, assign the user one or more of the permissions listed in Table 1.

Table 1 Operation permissions and user groups

Service Operation User Group Permission Description

Backup service operation permission Virtual data center (VDC) administrator VDC management permission; management permission on all cloud services A user with this permission can perform any operation on the backup service.

Backup service operation permission VDC operator VDC operator permission; management permission on all cloud services

Backup service operation permission Customized VDC query permission or VDC management permission; backup service management permission

Table 2 lists the common operations that can be performed on the backup service by default after you have the backup service permissions.

Table 2 Relationships between backup service operations and resource permissions

Operation Name Backup Service Administrator

Applying for Capacity Yes

Viewing the Backup/Archive Capacity Yes

Expanding the Backup/Archive Capacity Yes

Switching to the Backup Console Yes

Viewing Replication Relationships Yes

2.8.6.20.3 Related Concepts


Backup service quota:
The platform allows you to set backup service quotas for VDCs at all levels and resource sets. The quota indicators include backup quota and
archive quota.
Backup resource pool:
Multiple sets of backup devices that are physically dispersed and have the same functions are grouped and managed to form a logically unified
backup resource pool.
Applying for capacity:
Before using the backup service, you need to apply for capacity. This operation enables the system to allocate a proper backup device from the
backup resource pool and set the capacity upper limit for the selected AZ and backup resource pool.
Replication relationship:
If a replication operation can be performed between backup devices in two backup resource pools, the two backup resource pools have a replication
relationship.
Remote replication configuration:
When applying for the capacity of an AZ, if replication relationships exist between the backup service resource pools in other AZs and the selected
backup resource pools, you can configure remote replication and apply for the capacity of the target AZ.
After the application is successful, you can perform backup service operations in the applied AZ or perform replication operations between AZs that
have replication relationships.
Total backup capacity:
Backup capacity applied by a user, which is used to control the upper limit of the data volume generated by the backup service.
Used backup capacity:
Actual capacity usage generated during backup service operations. Note that the capacity occupied by the replication operation is also counted in the
used capacity.
Total archive capacity:
Archive capacity applied by a user, which is used to control the upper limit of the data volume generated by the archive service.
Used archive capacity:
Actual capacity usage generated during archive service operations.
Capacity expansion:
After applying for the capacity of an AZ, you can expand the capacity to meet service requirements.
Backup console:
After applying for capacity, you can access Backup Console from the home page of the backup service to perform specific service operations.

2.8.6.21 VMware Cloud Service


Introduction to VMware Integration Service

Benefits


Application Scenarios

Functions

2.8.6.21.1 Introduction to VMware Integration Service


VMware ECS

VMware EVS Disk

2.8.6.21.1.1 VMware ECS


A VMware Elastic Cloud Server (ECS) is a scalable computing server provided by VMware vCenter. VMware ECSs can be synchronized to and
centrally managed on eDME after the VMware integration service is interconnected with VMware vCenter.

2.8.6.21.1.2 VMware EVS Disk


A VMware EVS disk is a scalable virtual block storage device that is based on the distributed architecture. VMware EVS disks can be synchronized
to and centrally managed on eDME after the VMware integration service is interconnected with VMware vCenter.

2.8.6.21.2 Benefits
High density

Low power consumption

Easy management

System optimization

2.8.6.21.3 Application Scenarios


Inventory management

Existing VMware resource pools can be synchronized and managed by different tenants through the VMware integration service.

This solution can replace the managed VMware solution based on FusionSphere OpenStack.

The following resources can be managed: clusters, physical machines, resource pools, VMs, basic networks, storage, disks, and
templates on vCenter.

New management
VMware resource pools can be managed in the VMware integration service module. The resource management operations, such as requesting
VMs and EVS disks and managing compute, network, storage, image, and snapshot resources, are controlled by order quota.

2.8.6.21.4 Functions
The following figure shows the functions of the VMware integration service.

Figure 1 VMware integration service functions


Function Description

VM lifecycle management Users can start, shut down, restart, and delete VMs.

Application for VMs with specified flavors An O&M administrator predefines different VM flavors. Tenants can apply for VMs of different flavors
based on application scenarios.

VM snapshot creation Tenants can create snapshots for their own VMs and restore the snapshots when necessary.

VM image creation Tenants can create images for their own VMs, provision VMs by using their own images, and publish
private images as public images.

VM cloning Tenants can quickly replicate VMs by cloning their own VMs.

VM flavor changing Tenants can change VM flavors as required.

EVS disk creation Tenants can create EVS disks of different specifications and attach them to VMs.

Host name setting Users can set initial names of hosts.

Login using VNC Users can log in to a VM by using VNC.

VM and EVS disk instance metering VMs and EVS disk instances can be metered based on flavors.

Support for NSX-T networks Network services include security group, load balancing, and security policy.

One-click synchronization of existing vCenter resources Existing vCenter resources, such as VMs, storage devices, images, resource pools, and clusters, can be synchronized with one click.

VM management Existing VMs can be transferred to a specified tenant.

Task log viewing Asynchronous task execution records and logs can be viewed on the operation portal.

2.8.6.22 Application and Data Integration Service


Overview of the Application and Data Integration Service

2.8.6.22.1 Overview of the Application and Data Integration Service


Application and Data Integration is used to build an enterprise-level connection platform to connect enterprise IT systems and operational
technology (OT) devices. Application and Data Integration provides multiple connection options including API, message, data, and device access to
enable enterprises to create digital twins based on the physical world and speed up their digital transformation.
On the operation portal, Application and Data Integration provides the System Integration Service and Device Integration Service instance services
to efficiently connect IT systems and OT devices. In addition, the API gateway is provided as the portal of Service Openness to manage the calling,
traffic, authorization and access control, monitoring, and versions of all OpenAPIs. On the O&M portal, Application and Data Integration provides
the PaaS Instance Management service for service provisioning and maintenance, covering service instance provisioning, alarm reporting, and
preventive maintenance.
Introduction to System Integration Service

Introduction to Device Integration Service

This section describes the definition, functions, benefits, application scenarios, and availability of the Device Integration Service.

Introduction to the APIGW Service

2.8.6.22.1.1 Introduction to System Integration Service


Functions

Values and Benefits

Usage Scenarios

2.8.6.22.1.1.1 Functions
The System Integration Service is a service that revolutionizes the connections of southbound subsystems on campus networks. It offers full-stack
integrated access channels for these subsystems, including service, data, and message access. Additionally, it allows users to build connection
management capabilities based on subsystems, integration assets, and connectors. Users can also take advantage of visualized integrated asset
development and O&M capabilities.

Related Concepts
Figure 1 Concepts

App: An app is a virtual unit for an integration task. It does not correspond to an actual external or internal system. It is an entity that manages
integration assets.

LinkFlow: provides API integration-oriented asset development capabilities, implements full lifecycle management from script-based
API design, development, test, to release, and supports API re-opening after orchestration.

MsgLink: provides asset development capabilities oriented to message integration, provides secure and standard message channels, and
supports message release, subscription, and permission management.


DataLink: provides asset development capabilities oriented to data integration, connects to external data sources based on connectors,
and implements flexible, fast, and non-intrusive data connections between multiple data sources such as texts, messages, APIs, and
structured data.
Subsystem: A subsystem is a digital description or modeling of a physical IT system connected to System Integration Service. Generally, one
subsystem corresponds to one physical IT system. You can define subsystem events, services, and connection resources on the page to build
subsystems and authorize apps to develop integrated assets.

Event: Events are defined by subsystems. Developers can invoke LinkFlow functions or MsgLink interfaces to send events defined in
subsystems.

Service: Services are defined by subsystems. When developing function APIs in LinkFlow, developers can use the functions to access
third-party services defined in subsystems.

Connection: Connections are defined by subsystems. After subsystems are associated with connector instances, LinkFlow and
DataLink are authorized as the data source of data APIs or the source and target ends of DataLink tasks.

Connection Management
Integration object management based on app center: Users can create multiple integration objects, such as APIs, connector instances, and
topics, to establish interconnection interfaces with integrated systems or apps.

Figure 2 App center

Figure 3 App center

Connection management based on subsystems: Users can build models for integrated subsystems to describe external IT systems and leverage
the lifecycle management capabilities, such as adding, deleting, modifying, and querying subsystems. This system also supports logical
integration of subsystems by apps.

Figure 4 Subsystem Management

System integration relationship view: The system provides a unified view of the number of subsystems contained in each integration app.

Fine-grained authorization management: Supports app-based sub-account authorization and isolation. Sub-accounts can manage, modify,
delete, and integrate authorized integrated apps, while apps belonging to different users (sub-accounts) are isolated from each other.


Administrators (primary accounts) can authorize apps to different users (sub-accounts) for ISV SaaS integration.

Connector management: Multiple data sources and integration objects can be created and connected through connectors, including data
connectors (MySQL, Oracle, PostgreSQL/openGauss, ClickHouse, DaMeng, Gauss100, Gauss200/DWS, SQL Server, eDataInsight Hive,
eDataInsight HBase, eDataInsight ClickHouse, eDataInsight StarRocks, HANA, and Vastbase G100), message connectors (MsgLink,
Kafka, and WebSocket), and protocol connectors (FTP, LDAP, API, SOAP, and eDataInsight HDFS).

JWT token and AK/SK authentication: Both JWT token authentication and AK/SK authentication are supported (a generic AK/SK signing sketch follows this list).

System statistics analysis: The operation overview provides one-stop monitoring of core metrics, including apps, subsystems, APIs, connectors,
and topics. It also offers O&M visualization capabilities for service access, DataLink, and message access, allowing users to view run logs of
services or tasks, and release and subscription records and running tracks of messages.

Deployment and capacity expansion of hardware in multiple forms and specifications: Lite, Std, and Pro specifications are provided to be
compatible with different scenarios. The Std edition can be upgraded to the Pro edition for capacity expansion.

Elastic capacity expansion of software licenses: Users can purchase licenses on demand. The number of key integration objects that can be
created in the software is controlled by licenses.
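As a generic illustration of AK/SK authentication (not the platform's actual signature algorithm or header names), a request can be signed with an HMAC-SHA256 over selected request fields using the secret access key:

import hashlib
import hmac

def sign_request(access_key, secret_key, method, path, timestamp):
    # Generic AK/SK signing sketch: HMAC-SHA256 over a canonical string.
    # Real gateways define their own canonicalization and header names.
    canonical = f"{method}\n{path}\n{timestamp}"
    signature = hmac.new(secret_key.encode(), canonical.encode(), hashlib.sha256).hexdigest()
    return {
        "X-Access-Key": access_key,   # hypothetical header names
        "X-Timestamp": timestamp,
        "X-Signature": signature,
    }

print(sign_request("AKIDEXAMPLE", "example-secret", "GET", "/api/v1/topics", "2025-06-09T12:00:00Z"))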

Connection Tools
Web-based integrated asset development: offers web-based tools that allow users to easily develop integrated assets for their apps. This one-
stop solution includes visualized development tools and enables seamless integration with external subsystems.

Figure 5 Integrated asset development

DataLink development: provides asset development capabilities oriented to data integration, connects to external data sources based on
connectors, and implements flexible, fast, and non-intrusive data connections between multiple data sources such as texts, messages, APIs, and
structured data.

LinkFlow development: provides API integration-oriented asset development capabilities, implements full lifecycle management from script-
based API design, development, test, to release, and supports API re-opening after orchestration.


MsgLink development: provides asset development capabilities oriented to message (MQ) integration, provides secure and standard message
channels, and supports message release, subscription, and permission management.

Integrated asset package management: provides the capabilities of importing and exporting integrated asset packages and supports on-demand
installation of integrated assets in different service scenarios.

Asset overview: displays the overall classification statistics and details of assets on which the user has permission.

Link Engine: DataLink


DataLink supports the following task scheduling capabilities:

1. Real-time scheduling

2. Periodic scheduling by minute, hour, day, week, or month, or by using a Quartz cron expression (illustrative expressions are shown below)

3. Scheduling creation in batches

4. Manual scheduling in batches

5. Scheduling stopping in batches

Figure 6 Task management
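For reference, Quartz cron expressions use a seconds-first field order (seconds, minutes, hours, day of month, month, day of week). The schedules below are illustrative examples only.

# Illustrative Quartz cron expressions for periodic DataLink scheduling.
QUARTZ_EXAMPLES = {
    "0 0/5 * * * ?": "every 5 minutes",
    "0 0 2 * * ?": "daily at 02:00",
    "0 0 2 ? * MON": "every Monday at 02:00",
    "0 30 1 1 * ?": "at 01:30 on the first day of each month",
}
for expression, meaning in QUARTZ_EXAMPLES.items():
    print(f"{expression:16} -> {meaning}")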

DataLink supports the following connectors:

Table 1 DataLink connectors

Connector Functions and Restrictions

MySQL Scheduled task collection and write of MySQL 5.7 are supported.
CDC function in Binlog mode of the MySQL database is supported.

API Data can be collected and written through scheduled tasks. The restrictions are as follows:
Only HTTP and HTTPS are supported. Other protocols such as SOAP and RPC are not supported.
Multiple authentication modes are supported, such as Basic Auth and OAuth 2.0.
Only JSON and XML messages can be parsed.
When in JSON format, only one-layer array structures are supported. Nesting is not allowed. Obtaining data in paths of
different levels is not supported.
Data can be written through real-time tasks.

Kafka Data can be collected and written through real-time tasks.


Data can be written through scheduled tasks.
Kafka 2.13-3.4.0 is supported.
Kafka services with SSL authentication enabled can be read.

MsgLink Data can be written through scheduled tasks.


Data can be collected and written through real-time tasks.

PostgreSQL Data can be collected and written through scheduled tasks.


Data can be written through real-time tasks.

openGauss Data can be collected and written through scheduled tasks.


Data can be written through real-time tasks.


SQL Server Data can be collected and written through scheduled tasks.
Data can be collected and written through real-time tasks (by creating composite tasks). Data can be incrementally
synchronized. Currently, data can be synchronized from the SQL Server database to relational databases such as MySQL,
Oracle, and SQL Server.

GaussDB 100 For GaussDB 100 Kernel 503, data can be written through scheduled and real-time tasks.

DWS For DWS, data can be collected and written through scheduled tasks.
For DWS, data can be written through real-time tasks.

FTP FTP data sources of the FTP and SFTP protocols can be created.
Scheduled tasks can be used to collect or write data. Files in CSV, TXT, XLS, and XLSX formats can be parsed and mapped.
Other types of files can only be migrated.

LDAP Data can be collected through scheduled tasks.

Oracle Data can be collected and written through scheduled tasks.


Data can be written through real-time tasks.

Dameng Data can be collected and written through scheduled tasks.

WebSocket Data collection through real-time tasks


WSS and WS are supported.
It works only in form mode but not in orchestration mode.

Hive (DCS eDataInsight) Data can be collected and written through scheduled tasks.
Data can be written through real-time tasks.

HBase (DCS eDataInsight) Data can be collected and written through scheduled tasks.
Data can be written through real-time tasks.

HDFS (DCS eDataInsight) Data can be collected and written through scheduled tasks.
Data can be written through real-time tasks.

ClickHouse (DCS eDataInsight) Data can be collected and written through scheduled tasks.
Data can be written through real-time tasks.

StarRocks (DCS eDataInsight) Data can be written through scheduled tasks.

ClickHouse Connector Data can be collected and written through scheduled tasks.
Data can be written through real-time tasks.
20.7 and later versions are supported.

Vastbase G100 Data can be collected and written through scheduled tasks.
Data can be written through real-time tasks.

SAP HANA Data can be collected and written through scheduled tasks.
Data can be written through real-time tasks.

Link Engine: LinkFlow


Database-based data API: Database query SQL statements can be converted to REST APIs (a generic illustration of this concept follows the list).

Script-based function API: JS scripts can be compiled to implement function APIs.

API design: Online API design is supported, including the design of API basic information, headers, parameters, paths, and methods.


API development:
Online API development and orchestration based on JavaScript scripts are supported.
During script orchestration of integrated apps, library functions can be used to call services and events defined on subsystems.

Figure 7 API development

API test: HTTPS requests are supported. The request header, request body, and request parameters can be customized, and a complete response
packet is returned.

API deployment: APIs can be deployed on the APIGW for external use.

API scheduled tasks: APIs can be called on demand.

Batch import and export APIs: Batch import and export in JSON or YAML format are supported.

API authorization: Authorized apps can be created and bound to specified APIs.
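The following is a generic illustration of the data-API concept referenced above: a SQL query exposed as a REST endpoint. It is not LinkFlow's implementation; Flask and an in-memory SQLite table are used only to keep the sketch self-contained, and the endpoint path and schema are hypothetical.

import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative data source standing in for the authorized connection of a data API.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE devices (id INTEGER, name TEXT, status TEXT)")
db.executemany("INSERT INTO devices VALUES (?, ?, ?)",
               [(1, "camera-01", "online"), (2, "gate-02", "offline")])

# The SQL statement that the data API wraps.
QUERY = "SELECT id, name FROM devices WHERE status = 'online'"

@app.route("/api/v1/online-devices")
def online_devices():
    rows = db.execute(QUERY).fetchall()
    return jsonify([{"id": r[0], "name": r[1]} for r in rows])

if __name__ == "__main__":
    app.run(port=8080)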

Link Engine: MsgLink


Message retry: Messages that will not be immediately consumed can be returned to the original queues for later use.

SSL link encryption: Native RocketMQ capabilities are supported. SSL encryption can be configured for instance access to ensure security.

Message query: Users can query message details by topic, publisher ID, message ID, message key, and creation time.

Figure 8 Message query


Message statistics: Message statistics can be queried at different time granularities, and the query time range can be flexibly set.

Figure 9 Message statistics

Message tracing: The system monitors the production and consumption processes of a single message, such as the production time,
consumption time, and client name.

Figure 10 Message tracing

Topic management: Users can view topic information, including basic topic information, publisher information, and subscriber information.

Figure 11 Topic management

Subscription Management: Users can view subscription configurations, number of backlogged messages, consumption rate, and consumption
details on the platform.

Dead letter queue: Messages that cannot be processed are stored in the dead letter queue for unified analysis and processing.

Interconnection between the client SDK and integrated apps: The client SDK is provided to send messages to or subscribe to topics of
integrated apps through interfaces exposed by the SDK. Only the Java SDK is provided.

Interconnection between the client SDK and subsystems: The system provides the client SDK and allows external IT systems to report events
to Link Services through interfaces exposed by the SDK.

External network mapping address access: The external network mapping addresses of service nodes can be configured to allow messages to
be published to and subscribed to using the internal and external network addresses at the same time.


Connection Assets: Integration Assets


The system is preconfigured with assets for video subsystem integration and interconnection.

1. Supports pre-integration with Huawei IVS video management and analysis subsystems.

2. Supports pre-integration with the Hikvision ISC platform.

3. Supports pre-integration with third-party video transcoding platforms (Jovision and AllCam).

The system is preconfigured with assets for integrating with IoT platforms.

1. Supports pre-integration with the Huawei Cloud IoTDA platform.

2. Supports pre-integration with third-party IoT platforms (Webuild and Ganwei).

The system is preconfigured with assets for other third-party subsystems.

1. Preconfigures over 500 assets for pre-integration with third-party subsystems, such as parking, access control, lighting, and building
facilities.

2. Supports optional installation and project instantiation of pre-integration assets.

I/O Asset Compatibility


The following table describes the compatibility between I/O assets of LinkSoft in eCampusCore and earlier versions and I/O assets of the ROMA
platform.

Table 2 Compatible I/O Assets

I/O Assets of Historical Versions Compatible

I/O assets of earlier versions Yes

I/O assets of ROMA 20.0 Yes

I/O assets of eCampusCore Yes

I/O assets of ROMA Connect/Site No

Built-in Gateway Functions


The APIs developed in the System Integration Service can be deployed to the built-in APIGW service of Link Service. The APIGW service enables
campus customers to easily maintain, monitor, and protect APIs.

2.8.6.22.1.1.2 Values and Benefits


Fast application integration: The system offers easy app integration through configuration-based development, which includes preconfigured
integration assets as a reference. The assets can be imported and used as required. This allows for low-code app integration and shorter
development and rollout time for service apps.

Easy management: Multiple integration assets, such as data integration assets, API integration assets, and message integration assets, can be
centrally managed on one console.

2.8.6.22.1.1.3 Usage Scenarios


The LinkSoft service offers a comprehensive range of access solutions for data, messages, and services, enabling enterprise campuses to achieve all-
scenario intelligence.

Efficiently interconnects with multiple southbound subsystems.


With the LinkSoft service and baseline integration assets of eCampusCore, multiple IoT platforms or southbound subsystems can be quickly
connected simultaneously, avoiding redundant data collection.

Deploys a data foundation to provide the data extraction capability


All campus data is quickly integrated based on the data access and message access components, preprocessed, and opened to different backend
services. For instance, facial data from the turnstile system, device status from the video surveillance system, and switch and device
information from the street lamp system are transmitted to backend services in real time or in asynchronous batches for analysis and linkage
management.

Provides southbound control capabilities to help enterprises obtain data and build smart campus brains
The LinkSoft service provides a channel for integrating and sharing data, messages, and services, allowing enterprises to use AI, video
analysis, and big data cloud services to build a real smart campus brain that converges IT, OT, and AI.

2.8.6.22.1.2 Introduction to Device Integration Service


This section describes the definition, functions, benefits, application scenarios, and availability of the Device Integration Service.
Definition

The Device Integration Service enables the platform to model and manage IoT devices, implementing digital twins for IoT devices.

Functions

Values and Benefits

Usage Scenarios

2.8.6.22.1.2.1 Definition
The Device Integration Service enables the platform to model and manage IoT devices, implementing digital twins for IoT devices.

Concepts
The Device Integration Service manages devices by device model category, product, and device model.

Device model: A device model is a thing model. It allows you to abstract products in the same type from different models and vendors to form
a standard model for unified management. You can define basic device information, supported commands, and event information by attribute,
command, and event.

Device model classification: You can classify device models for easier management.

Product: A product model contains abstract definitions based on a device model. The product attributes, commands, and events are derived
from the corresponding subsets of the device model.

Device instance: A device instance is a specific instance of a product and also a management object in the Device Integration Service.
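
The hierarchy above (device model, product, device instance) can be pictured with a short sketch. The following Python structure is a minimal, hypothetical illustration of these concepts; the class and field names are not the actual LinkDevice data schema.

```python
# Minimal, hypothetical sketch of the thing-model hierarchy described above.
# Class and field names are illustrative only, not the LinkDevice schema.
from dataclasses import dataclass, field

@dataclass
class DeviceModel:
    """Standard (thing) model: attributes, commands, and events."""
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. {"temperature": "float"}
    commands: list = field(default_factory=list)     # e.g. ["set_report_interval"]
    events: list = field(default_factory=list)       # e.g. ["low_battery"]

@dataclass
class Product:
    """A product derives its definitions from subsets of a device model."""
    name: str
    model: DeviceModel
    attributes: list = field(default_factory=list)   # subset of model attributes
    commands: list = field(default_factory=list)     # subset of model commands

@dataclass
class DeviceInstance:
    """A concrete, managed instance of a product."""
    device_id: str
    product: Product
    status: str = "Offline"

# Example: an abstract thermometer model, one vendor product, one device instance.
thermometer = DeviceModel("thermometer",
                          attributes={"temperature": "float", "battery": "int"},
                          commands=["set_report_interval"],
                          events=["low_battery"])
vendor_a = Product("vendor-a-thermometer", thermometer,
                   attributes=["temperature"], commands=["set_report_interval"])
device_001 = DeviceInstance("dev-001", vendor_a)
```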

2.8.6.22.1.2.2 Functions


LinkDevice
The Device Integration Service enables the platform to model and manage IoT devices, implementing digital twins for IoT devices. Additionally, it
allows the access, aggregation, model mapping, and local control of IoT devices, provides various protocol drivers to connect different types of
southbound devices while shielding the details of southbound connection technology, and offers unified device data reporting and command delivery
interfaces.

LinkDeviceEdge
LinkDeviceEdge is a remote node of LinkDevice and interacts with LinkDevice to connect distributed devices, collect data of the devices, and
support local data preprocessing. LinkDeviceEdge is launched as an independent software product and can be installed in a cloud-based
environment or on an independent server.

Device Connection
Multiple access modes

Built-in and extended protocol connectors supported by LinkDevice can be connected to the devices or device systems.

Devices or device systems can be connected to LinkDevice through the partner gateway or LinkDeviceEdge.

Devices or device systems can be connected to LinkDevice through LinkSoft. The integration apps can be developed through
LinkFlow.

Devices or device systems can be connected using the LinkDevice SDK. The LinkDevice SDK supports the C and Java languages.

Built-in protocol connectors are supported.

Devices can be connected using ModbusTCP. Transmission encryption is supported.

Devices can be connected using OPC UA. Password and certificate authentication, transmission encryption, and security policies and
modes are supported.

Devices can be connected using the standard MQTT protocol of LinkDevice. Password authentication and transmission encryption are supported (see the client-side sketch at the end of this Device Connection description).

MQTT can be used to connect to T3 IoT devices, such as MEGSKY intelligent convergent terminals.

Extended protocol connectors: They can be connected to protocols through customized device connectors (maximum: 128) and protocol plug-
ins.

Connector management: IoT protocols can be managed as connectors (including built-in and extended protocol connectors). Connector
configuration templates can be imported or exported.

Data collection management

Product-based data collection templates can be defined, allowing users to configure attribute point mapping for all devices of a product
quickly.

Channels can be configured for connection to southbound devices based on the connector configuration of each protocol.

Mapping between protocol points and device thing models can be configured. Data can be imported or exported in batches.
Southbound points can be read or written based on the point mapping configuration.
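
As a concrete illustration of the standard MQTT connector mentioned above (password authentication plus transmission encryption), the following client-side sketch uses the open-source paho-mqtt library. The broker address, port, credentials, and topic layout are placeholders; the actual LinkDevice connection parameters and topic scheme are defined by the platform.

```python
# Minimal, hypothetical sketch of a device reporting data over MQTT with
# password authentication and TLS. All addresses, credentials, and topic
# names are placeholders, not LinkDevice defaults.
import json
import ssl
import paho.mqtt.client as mqtt

BROKER = "linkdevice.example.com"   # placeholder broker address
PORT = 8883                         # MQTT over TLS

client = mqtt.Client(client_id="dev-001")   # paho-mqtt 1.x constructor; 2.x also takes a CallbackAPIVersion
client.username_pw_set("dev-001", "device-secret")   # password authentication
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)          # transmission encryption

client.connect(BROKER, PORT, keepalive=60)
client.loop_start()

# Report a thing-model attribute value (illustrative topic and payload).
payload = json.dumps({"temperature": 23.5, "battery": 87})
client.publish("devices/dev-001/attributes/report", payload, qos=1)

client.loop_stop()
client.disconnect()
```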

Device Management
Thing model management: This function allows users to classify, add, delete, query, modify, import, and export thing models, or manage thing
model attributes and commands.


Product management: This function allows users to manage products based on thing models and distinguish gateways from devices.

Product information can be added, deleted, queried, or modified.

Product changes can be reported.

Data can be imported or exported in batches.

Device lifecycle management: This function allows users to add, delete, query, or modify device instances, monitor device statuses (online or
offline), query the device list based on filter criteria, import or export device registration data in batches, or report device status changes.

Device groups: This function allows users to add, delete, query, and modify device groups, or add device instances to device groups.

Device relationships: Association and management between gateways and their subdevices, between platforms and edge nodes, and between
platforms, edge nodes, gateways, and gateway subdevices are supported.

Dynamic device discovery: Dynamic device discovery can be configured. Partner turnstiles and access control systems can be dynamically
discovered and connected.

Device shadows: This function allows users to cache real-time device data, configure and convert point mapping configuration for device thing
models, or read and report device data.

Device operations: This function allows users to read and write device attributes, query device events, or deliver device commands.

Device data storage: This function allows users to store device data locally or define the storage period.

100 TPS is supported for device data storage. If the device data reporting rate exceeds 100 TPS, some data may not be processed, resulting in timeout and
packet loss.

Device linkage

Event triggering can be scheduled. Device status changes and attributes can be reported.

The device status, attributes, and time range can be configured as judgement conditions.

The following actions are supported: attribute and command delivery, and alarm and notification reporting.
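
Conceptually, a linkage rule combines the judgement conditions listed above (device status, attribute values, time range) with one or more actions. The sketch below is a minimal, hypothetical illustration in Python; the rule structure and action names are not the platform's actual rule format.

```python
# Minimal, hypothetical sketch of a device linkage rule: conditions on device
# status, an attribute threshold, and a time window, triggering command
# delivery and alarm reporting. Names are illustrative only.
from datetime import datetime, time

def evaluate_rule(device, now=None):
    """If an online device reports temperature > 80 between 08:00 and 20:00,
    deliver a cooling command and raise an alarm."""
    now = now or datetime.now()
    in_window = time(8, 0) <= now.time() <= time(20, 0)
    if (device["status"] == "Online"
            and device["attributes"].get("temperature", 0) > 80
            and in_window):
        return [
            {"action": "deliver_command", "command": "start_cooling"},
            {"action": "report_alarm", "severity": "major",
             "message": "Temperature threshold exceeded"},
        ]
    return []

# Example usage
print(evaluate_rule({"status": "Online", "attributes": {"temperature": 85.2}}))
```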

Message Communication
Device data transfer: Device data and status information can be transferred to third-party systems, including Kafka and OpenGaussDB (a consumer-side sketch follows at the end of this Message Communication description).

Device data subscription and push: Device data can be pushed to MsgLink for apps to subscribe to and obtain the data. The following
operations are supported: product-based data source configuration, data push by consumer group, and push policy configuration.


Two-way transparent transmission: Device messages can be transparently transmitted to apps. Apps can deliver messages to devices in
asynchronous mode.

API openness: Device services can be provided through REST APIs. APIs for configuring and querying LinkDevice as well as device read and
write APIs are provided. Read and write instructions can be forwarded to devices.

Southbound MQTT

The southbound MQTT interface is provided for third-party devices to directly connect to LinkDevice.

Device attribute points and thing model data can be read or written. Device statuses and product information can be synchronized.
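
To show how a third-party system might consume device data transferred to Kafka (see the device data transfer item above), the sketch below uses the open-source kafka-python client. The bootstrap address, topic name, consumer group, and message layout are assumptions for illustration, not product defaults.

```python
# Minimal, hypothetical sketch of a downstream application consuming device
# data that has been transferred to a Kafka topic. Addresses, topic names,
# and the message layout are placeholders.
import json
from kafka import KafkaConsumer   # pip install kafka-python

consumer = KafkaConsumer(
    "device-data",                               # assumed topic name
    bootstrap_servers="kafka.example.com:9092",  # placeholder address
    group_id="campus-analytics",                 # consumer group for data push
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for record in consumer:
    msg = record.value
    # Example processing: route status changes and attribute reports separately.
    if msg.get("type") == "status":
        print(f"{msg['deviceId']} is now {msg['status']}")
    else:
        print(f"{msg['deviceId']} reported {msg.get('attributes')}")
```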

Monitoring and O&M


Overview statistics: Basic information about LinkDevice instances and statistics on various device instances can be provided.

Device status monitoring: The latest device statuses can be monitored and displayed in real time. The statuses include Online, Offline,
Unactivated, and Unactivated(Expired).

Connector status monitoring: The latest statuses of built-in and extended connectors can be monitored and displayed in real time. The statuses
include Online, Offline, Unactivated, and Unactivated(Expired).

Service logs: Service logs can be configured and viewed.

Users can configure the device statuses, attributes, commands, and event logs.

Device service data logs can be recorded and viewed.

System logs: Historical and real-time system logs can be viewed.

Message statistics: Statistics on the number of messages can be collected based on device instances and connectors.

System settings: Security settings and device data storage can be configured or queried.

Requirements for the maximum storage duration of historical data are described as follows:
If FusionCube is deployed as the foundation, the duration can be set to one year.

Edge Management
Edge node management

Edge nodes can be added, deleted, queried, and modified.

Edge node statuses can be synchronized and monitored.

Edge data processing: Reported device thing model data can be pre-processed.

Data filtering is supported to find the device data that meets the filter criteria.

Data deduplication is supported.

Data can be aggregated based on maximum, minimum, or latest values in a specified time window.

Edge data storage: When an edge device is disconnected from LinkDevice, device data can be stored for a maximum of seven days. When the
connection is restored, the device data generated during the disconnection period can be reported.

Edge message communication

MQTT is supported for upstream and downstream communication, and bidirectional communication between northbound devices and
LinkDevice.

Data of southbound devices, southbound gateways, and gateway subdevices can be read, written, and reported.

Edge connection management: IoT protocols can be managed as connectors, including built-in and third-party extended protocol connectors.
Built-in protocol connectors include ModbusTCP, MQTT, and OPC UA.

Edge data collection management


Channels can be configured for connection to southbound devices based on the connector configuration of each protocol.

Mapping between protocol points and device thing models can be configured. Data can be imported or exported in batches.

Collected point data can be pre-processed, including point scaling, threshold check, and fluctuation suppression.
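
The three pre-processing steps listed above (point scaling, threshold check, and fluctuation suppression) amount to a short transformation per collected point. The following is a minimal sketch under assumed parameter names; the actual LinkDeviceEdge configuration keys may differ.

```python
# Minimal, hypothetical sketch of edge point pre-processing: scaling,
# threshold (valid range) check, and fluctuation (deadband) suppression.
# Parameter names are illustrative, not LinkDeviceEdge configuration keys.
def preprocess_point(raw_value, last_reported, *, scale=1.0, offset=0.0,
                     min_valid=None, max_valid=None, deadband=0.0):
    """Return the value to report, or None if it should be suppressed."""
    value = raw_value * scale + offset                      # point scaling
    if min_valid is not None and value < min_valid:         # threshold check
        return None
    if max_valid is not None and value > max_valid:
        return None
    if last_reported is not None and abs(value - last_reported) < deadband:
        return None                                         # fluctuation suppression
    return value

# Example: raw register value 235 scaled by 0.1, 0.2-degree deadband.
print(preprocess_point(235, last_reported=23.4, scale=0.1, deadband=0.2))  # None (suppressed)
print(preprocess_point(239, last_reported=23.4, scale=0.1, deadband=0.2))  # ~23.9 (reported)
```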

2.8.6.22.1.2.3 Values and Benefits


LinkDevice of the digital platform provides an IoT solution and has outstanding advantages in various aspects, such as capabilities, costs, O&M,
security, and ecosystem, as shown in the following table.

Aspect Item Advantages of LinkDevice

Capabilities Flexible protocols Supports mainstream device access protocols to meet the requirements of mainstream protocol devices and
access scenarios.
Provides a plugin mechanism for custom protocol parsing.

Costs Quick access Allows partner devices to be plug-and-play: minimal manual intervention is required, and devices can be used immediately after being connected.

Quick deployment Works with LinkTool to realize fast configuration and deployment.

O&M Stable performance Supports secure and stable device connections.

Technical support Offers 24/7 professional support.

Security System security Provides digital certificates, one-device-one-key access security, and EulerOS security capabilities.

Ecosystem Third-party Integrates upstream and downstream ecosystem resources to provide value-added services.
interconnection

2.8.6.22.1.2.4 Usage Scenarios


LinkDevice is applicable to the following scenarios:

Intelligent campus, building, stadium

Intelligent city (including water affairs, water conservancy, electric power, urban management, environmental protection, and emergency
response)

Industrial campus, manufacturing campus, university campus

Intelligent warehousing

Intelligent transportation (airport, urban rail, and highway)

Intelligent healthcare

Intelligent agriculture and aquaculture

2.8.6.22.1.3 Introduction to the APIGW Service


Functions

Values and Benefits

Application Scenarios

2.8.6.22.1.3.1 Functions
The APIGW service functions as the service openness portal and allows campus customers to easily create, publish, maintain, monitor, and protect APIs, and to manage the calls, traffic, authorization and access control, monitoring, and versions of the open APIs.

Gateway Management
API management based on the application center: It supports project management based on the application dimension, API release, test, and
authorization on applications, and API permission management based on the application dimension.


Figure 1 App Management

Fine-grained authorization management: The system supports application-based sub-account authorization and isolation. Users (sub-accounts) can manage, modify, delete, and integrate the integrated applications authorized to them, while applications belonging to different users (sub-accounts) are isolated from each other. Administrators (primary accounts) can authorize applications to different users (sub-accounts) for ISV SaaS integration.

JWT token authentication, AK/SK authentication, and OAuth authentication are supported.

System analysis: One-stop monitoring of core metrics, such as the number of applications, APIs, and API visits, is available on the home page.
In addition, O&M visualization is supported, so that users can view API run logs and operator operation records.

Figure 2 Home page

API Lifecycle Management


API publishing: There are two API publishing modes. In one mode, APIs are published through services orchestrated using LinkSoft APIs. In the other mode, APIs are published through external hosting services. External hosting offers three benefits: insecure services can be converted into secure services, backend service addresses and access addresses are not exposed, and backend services are shielded from the direct impact of frontend access.

API authorization: A specified API can be authorized to one or more apps.

API statistics analysis: This function is used to analyze the visit and request details of the system or a specific API, and collect and analyze API
access logs.


Security authentication: Apps can be authenticated using app keys and secrets (an illustrative signing sketch follows this list).

API traffic control: Multi-dimensional API traffic control (IP address, app, and API) is supported.

API ACL: IP addresses can be managed using a blacklist and whitelist.

API test: Users can orchestrate and debug request headers, request bodies, and request parameters.

API routing:

Configuration of URLs, response timeout, and load balancing is supported.

Custom API backend forwarding policies are supported.

Backend service health check is supported.

Grayscale release of backend services is supported.

Backend load balancing: The round robin algorithm is supported to implement load balancing.

HTTPS cipher suite configuration:

The HTTPS encryption suite can be enabled to enhance security.

Redirection from HTTP to HTTPS is supported to enhance security.

Mock service: Mock service data can be configured.
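
To make the app key/secret authentication mentioned above more concrete, the sketch below shows a generic HMAC-based request signature. This illustrates the general pattern only; the APIGW service's actual signing algorithm, canonical request format, and header names must be taken from its interface documentation.

```python
# Generic, hypothetical illustration of app key/secret (AK/SK-style) request
# signing. The canonical string and header names below are NOT the APIGW
# service's actual signature specification; they only show the HMAC pattern.
import hashlib
import hmac
import time

def sign_request(app_key, app_secret, method, path, body=""):
    timestamp = str(int(time.time()))
    canonical = "\n".join([method.upper(), path, timestamp,
                           hashlib.sha256(body.encode()).hexdigest()])
    signature = hmac.new(app_secret.encode(), canonical.encode(),
                         hashlib.sha256).hexdigest()
    return {                      # illustrative header names only
        "X-App-Key": app_key,
        "X-Timestamp": timestamp,
        "X-Signature": signature,
    }

print(sign_request("demo-app-key", "demo-app-secret", "GET", "/v1/devices"))
```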

2.8.6.22.1.3.2 Values and Benefits


API providers can concentrate on their API service capabilities, avoid duplicating public capabilities, guarantee the security and reliability of
open APIs, and monitor and analyze the API's execution and call status.

API consumers can access APIs through the gateway without the need to know the specific background service address, making integration
development simpler.

APIGW separates and protects backend services for API users, creating a security barrier that reduces the impact and damage on backend
services. This ensures the stable operation of backend services and optimizes the integration architecture.

2.8.6.22.1.3.3 Application Scenarios


The APIGW provides secure and reliable channels for API providers and consumers to call APIs.

Internal system decoupling


Standard APIs are used to quickly decouple internal systems and separate the frontend from the backend, reusing existing capabilities and
avoiding repetitive development.

Enterprise capability openness


The APIGW opens internal service capabilities to partners in the form of standard APIs to share services and data with partners.

2.8.7 O&M Management


O&M management mainly includes common O&M operations, such as alarm management, check policy, inventory management, and performance
analysis.

Alarm management


eDME provides the alarm management capability to monitor alarms generated by each component in DCS in real time and allows you to
acknowledge and clear alarms.
Definition
To ensure the normal running of the network, the network administrator or maintenance personnel must periodically monitor and handle
alarms.
Benefits

Beneficiary Benefit

Customer Alarm management allows you to view, monitor, and handle all alarms of DCS on one UI in real time.
Alarm management provides a series of functions, such as masking, aggregation, correlation, and automatic acknowledgment, to help you
automatically identify and reduce invalid alarms and efficiently handle alarms.
The alarms can be remotely notified by short message service (SMS) or email, helping you learn about the alarm status in the system in real
time.

Check policy
With the check policy function provided by eDME, you can periodically or manually check the system capacity and performance to detect
resource health risks in advance.
Definition
A check policy is used to periodically or manually check resources in terms of performance, capacity, availability, configuration, reclamation,
and low-load resources. If the preset check policies do not meet your check requirements, you can customize check policies.
Benefits

Beneficiary Benefit

Customer You can use the preset check policies and customized check policies to identify and handle resource risks in advance, ensuring healthy and
stable operation of the data center.
The system provides manual and scheduled checks so that you can customize check policies based on scenarios.
The system supports the remote notification function, including SMS and email notifications, helping you learn about the system health
status in real time.

Inventory management
eDME allows you to manage and maintain hardware resources, such as Ethernet switches, FC switches, servers, and storage devices.
Definition
After interconnecting with hardware resources such as Ethernet switches, FC switches, servers, and storage devices, eDME can display
resource attributes, status, performance, and capacity, and supports resource configuration and maintenance.
Benefits

Beneficiary Benefit

Customer After adding an Ethernet switch, you can view the resource, performance, and status information about the Ethernet switch, configure
VLANs, and manage link information, improving the O&M efficiency of the Ethernet switch.
After adding a physical server, you can view the resource, performance, and status information about the server, and perform maintenance
operations such as turning on indicators and restarting the server, improving the O&M efficiency of the physical server.
After adding an FC switch, you can view the resource, performance, and status information about the FC switch, and manage zone
information, improving the O&M efficiency of the FC switch.
After adding a storage device, you can view the resource, performance, and status information about the storage device, and manage
storage pools, improving the O&M efficiency of the storage device.

Performance analysis
eDME provides the end-to-end (E2E) performance analysis function. Then, you can analyze performance based on collected data and quickly
locate problems.
Definition
By creating an analysis view for a resource object (device or virtualization resource), you can analyze the performance of the specified
resource and its associated resources.
Benefits

Beneficiary Benefit

Customer With the performance analysis function, you can obtain the performance, alarm, and status information about a resource and its associated
resources, helping quickly locate the root cause of a fault and improving O&M efficiency.


Topology view
eDME provides the topology view function, allowing you to view the E2E association relationships between resources and quickly analyze the
impact scope of problems.
Definition
The system can display the topology associated with the selected resource object (device or virtualization resource).
Benefits

Beneficiary Benefit

Customer In the topology view, you can view the topology of a resource and its associated resources, including resource alarms and link information,
helping you quickly analyze the impact scope of problems.

Security management
eDME provides security management functions, such as user management, user policy management, and authentication management, to help
users ensure the security of user information and the system.
Definition
Security management covers user management, user policy management, and authentication management, helping manage user rights,
authentication modes, and sessions, and set account and password policies, and login modes.
Benefits

Beneficiary Benefit

Customer The security management function assigns roles to users and manages the role rights, implementing optimal resource allocation and
permission management, and improving O&M efficiency.
This function allows you to set user account access policies, password policies, and login modes, helping you customize secure user and
login policies to improve system security.

3 Installation and Deployment


Installation Overview

Installation Process

Preparing for Installation

Deploying Hardware

Deploying Software

(Optional) Installing DR and Backup Software

Verifying the Installation

Initial Service Configurations

Appendixes

3.1 Installation Overview


Deployment Solution

Network Overview

System Requirements


3.1.1 Deployment Solution

Overview
Datacenter Virtualization Solution (DCS) provides two deployment solutions.

Separated deployment: Compute and storage nodes are deployed separately, and the storage type is flash storage or scale-out storage.

The management domain system includes virtualization resource management (VRM) deployment and eDME cluster deployment.
Management domain services are virtualized on three physical nodes.

Hyper-converged deployment: Compute and storage nodes are deployed together, and FusionCube 1000H (FusionCompute) is used for the
compute resource pool.

The management domain system includes eDME cluster deployment only. eDME is deployed on two Management Computing Node
Agent (MCNA) nodes and one Storage Computing Node Agent (SCNA) node of FusionCube 1000H.

eDME can be deployed on multiple nodes. For details about the deployment specifications, see Management System Resource Requirements . This section uses the
three-node deployment mode as an example.

Separated Deployment Scenario


Figure 1 Separated deployment solution/non-multi-tenant

Figure 2 Separated deployment solution/multi-tenant

Figure 3 Decoupled storage-compute deployment framework with eDataInsight and HiCloud


Figure 4 shows relationships between components in the separated deployment scenario.

Figure 4 (Decoupled storage-compute: OceanStor Pacific block/file+OceanStor Pacific HDFS)

Table 1 Introduction to separated deployment scenario

Name Description

CNA A compute virtualization component, which provides compute resources for FusionCompute. A host also provides storage resources
when local disks are used for storage.

VRM A management node of FusionCompute, which provides an interface for users to centrally manage virtual resources.
VRM supports the active/standby deployment. Active/standby nodes are virtualized on two hosts. The VRM management IP
address is configured to the management plane IP address.
NOTE:

In the active/standby mode, the management nodes are deployed on two VMs. If the active node is faulty, the system rapidly switches
services to the standby node to ensure service continuity. Therefore, the active/standby deployment provides higher reliability than the
single-node deployment.

eDME eDME is a Huawei-developed intelligent O&M platform that centrally manages software and hardware for virtualization scenarios.
eDME supports three-node deployment (non-multi-tenant scenario) and five-node deployment (multi-tenant scenario). The failure
of a single node does not affect the management services.

(Optional) iMaster iMaster NCE-Fabric is used only in the network overlay SDN solution.
NCE-Fabric iMaster NCE-Fabric manages switches in the data center and automatically delivers service configurations.
FusionCompute associates with iMaster NCE-Fabric. iMaster NCE-Fabric detects VM login, logout, and migration status and
automatically configures the VM interworking network.

(Optional) FSM A server that runs the FusionStorage Manager (FSM) process of OceanStor Pacific series storage. It provides operation and
maintenance (O&M) functions, such as alarm reporting, monitoring, logging, and configuration, for OceanStor Pacific series block
storage. FSM needs to be deployed in active/standby mode only in the scale-out storage deployment scenario.

OceanStor Dorado A flash storage device, which provides storage resources. The figure uses OceanStor Dorado 3000 V6 as an example. For other
3000 V6 models, see Huawei Storage Interoperability Navigator.

OceanStor Pacific A scale-out storage device, which provides storage resources. The figure uses OceanStor Pacific 9520 as an example. For other
9520 models, see Huawei Storage Interoperability Navigator.

eBackup This component is optional.

eBackup is Huawei-developed backup software for virtual environments. It provides comprehensive protection for user
data in virtualization scenarios based on VM/disk snapshot and Changed Block Tracking (CBT) technologies.

UltraVR This component is optional.


The DR management software uses storage devices to protect and restore VM data.

eDataInsight This component is optional.


eDataInsight is a distributed data processing system that supports large-capacity data storage, search, and analysis.
To deploy eDataInsight, you need to deploy OceanStor Pacific HDFS (decoupled storage-compute) first. OceanStor Pacific HDFS
provides a high-performance HDFS storage solution with decoupled storage-compute.

HiCloud This component is optional.


The HiCloud platform is an industry-leading hybrid cloud management platform that provides unified management of
heterogeneous resources. It supports management of cloud services such as Huawei hybrid cloud and VMware.

SFS This component is optional.


Scalable File Service (SFS) is used to manage file systems and file system networks. By masking the hardware differences of
different storage products, this component provides a unified and abstract file system management model, and delivers reliable,
end-to-end file system solutions that feature easy expansion and maintenance for storage services.

eCampusCore This component is optional and can be deployed only on the Region Type II network (network overlay SDN scenario).
eCampusCore is an enterprise-level platform for application and data integration. It provides connections between IT systems and
OT devices and pre-integrated assets for digital scenarios in the enterprise market.

Hyper-Converged Deployment Scenario


Hyper-converged deployment scenarios include multi-tenant and non-multi-tenant scenarios. In non-multi-tenant scenarios, only O&M nodes are
deployed. In multi-tenant scenarios, both O&M nodes and operation nodes need to be deployed.
eDME can be deployed on multiple nodes. For details about the deployment specifications, see Management System Resource Requirements . This
section uses the three-node deployment mode as an example.

Figure 5 Hyper-converged deployment solution (non-multi-tenant)

Figure 6 Hyper-converged deployment solution (multi-tenant)

Table 2 Introduction to hyper-converged deployment scenario

Name Description


FusionCube A hyper-converged deployment solution, which includes servers, storage devices, and switches. eDME is deployed on two MCNA
1000H nodes and one SCNA node of FusionCube 1000H.
MCNA: A node that provides management and compute functions. Management software, such as FusionCube Vision, VRM, and
FusionStorage Manager, is deployed on MCNA.
SCNA: A node that provides storage and compute functions.

eDME eDME is a Huawei-developed intelligent O&M platform that centrally manages software and hardware for virtualization scenarios.
eDME supports three-node deployment (non-multi-tenant scenario) and five-node deployment (multi-tenant scenario). The failure of a
single node does not affect the customer's management services.

eDataInsight This component is optional.


eDataInsight is a distributed data processing system that supports large-capacity data storage, search, and analysis. For details about
how to deploy eDataInsight, see Installation Using SmartKit .

HiCloud This component is optional.


The HiCloud platform is an industry-leading hybrid cloud management platform that provides unified management of heterogeneous
resources. It supports management of cloud services such as Huawei hybrid cloud and VMware. For details about how to deploy
HiCloud, see Installation Using SmartKit .

Deployment Modes and Principles


Table 3 lists the deployment solutions of each node in DCS.

Table 3 Deployment solution of DCS

Name | Deployment Mode | Deployment Principle

CNA Physical Multiple hosts are deployed based on customer requirements on compute resources to provide virtual compute
deployment resources. A host also provides storage resources when the local storage is used.
When VRM nodes are deployed on VMs, hosts must be specified to create the VMs.
If a small number of hosts, for example, fewer than 10 hosts, are used, you can add all the hosts to the
management cluster, enabling integrated deployment of the management cluster and the user service cluster. If a
large number of hosts are deployed, you are advised to add the hosts to one or multiple service clusters by the
services they provide to facilitate service management.
To maximize compute resource utilization for a cluster, you are advised to configure the same distributed
switches and datastores for hosts in the same cluster.

VRM Virtualization In VRM virtualization deployment, select two hosts in the management cluster and deploy the active and
deployment standby VRM VMs on these hosts.

eDME Virtualization In separated deployment, select three hosts in the management cluster for eDME and deploy three nodes on
deployment these hosts.
In hyper-converged deployment, eDME is deployed on two MCNA nodes and one SCNA node of FusionCube
1000H.

(Optional) iMaster Physical iMaster NCE-Fabric is used only in the network overlay SDN solution. The network overlay SDN solution
NCE-Fabric deployment applies only to separated deployment scenarios.
iMaster NCE-Fabric is delivered in appliance mode, facilitating deployment and improving reliability.
In the single-cluster deployment solution of iMaster NCE-Fabric, three servers are deployed as a cluster in a
data center (DC) to manage all switches in the DC. An iMaster NCE-Fabric cluster can also manage switches
in multiple DCs.
FusionCompute associates with iMaster NCE-Fabric. iMaster NCE-Fabric detects VM login, logout, and
migration status and automatically configures the VM interworking network.

(Optional) FSM Virtualization FSM must be deployed in active/standby mode on VMs created on FusionCompute only in scale-out storage
deployment deployment scenarios.

eBackup/UltraVR Virtualization Single-node deployment


deployment

eDataInsight Virtualization This component is optional.


deployment
The storage device must be a shared storage device.
Three CloudSOP VMs must be deployed on different hosts.
To deploy eDataInsight, you need to deploy OceanStor Pacific HDFS (decoupled storage-compute) first.
OceanStor Pacific HDFS provides a high-performance HDFS storage solution with decoupled storage-
compute.


HiCloud Virtualization This component is optional.


deployment The HiCloud platform is an industry-leading hybrid cloud management platform that provides unified
management of heterogeneous resources. It supports management of cloud services such as Huawei hybrid
cloud and VMware.

SFS/ECE/ Virtualization The SFS, Elastic Container Engine, or Auto Scaling service is deployed on two VMs. For each service, the two
AS service deployment VMs must be deployed at the same time, and you are advised to deploy the two VMs on different CNA nodes.

eCampusCore Virtualization At least three FusionCompute physical servers (CNA hosts) are required.
deployment
The type of the storage pool used by VMs must be Scale-Out Block Storage or Virtualized SAN Storage.
Virtualized local disks cannot be used.
The remaining resources of each physical host are checked against the VM division scheme. Examples are as follows:
The nfs-dns-1, foundation-1, and gaussv5-1 VMs are planned on the physical machine CNA01. The available CPU and memory resources of CNA01 must be greater than or equal to the total CPU and memory resources of these VMs (20C72G).
The nfs-dns-2, foundation-2, and gaussv5-2 VMs are planned on the physical machine CNA02. The available CPU and memory resources of CNA02 must be greater than or equal to the total CPU and memory resources of these VMs (20C72G).
The installer, foundation-2, ops-1, and ops-2 VMs are planned on the physical machine CNA03. The available CPU and memory resources of CNA03 must be greater than or equal to the total CPU and memory resources of these VMs (18C80G).
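
The per-host check in the example above is simple arithmetic: the CPU and memory available on each CNA host must cover the sum of the VMs planned onto it. The sketch below reproduces the CNA01 case; the per-VM sizes are assumed placeholders chosen to sum to the quoted 20C72G, not the actual eCampusCore VM specifications.

```python
# Minimal sketch of the per-host capacity check for the eCampusCore VM
# placement example. The per-VM figures are assumed placeholders that sum to
# the 20C72G quoted above; they are not the real eCampusCore VM sizes.
planned_vms_on_cna01 = {
    "nfs-dns-1":    {"vcpu": 4, "mem_gb": 16},
    "foundation-1": {"vcpu": 8, "mem_gb": 32},
    "gaussv5-1":    {"vcpu": 8, "mem_gb": 24},
}   # totals 20 vCPUs / 72 GB

def host_fits(available_vcpu, available_mem_gb, vms):
    need_vcpu = sum(vm["vcpu"] for vm in vms.values())
    need_mem = sum(vm["mem_gb"] for vm in vms.values())
    return available_vcpu >= need_vcpu and available_mem_gb >= need_mem

print(host_fits(24, 96, planned_vms_on_cna01))   # True: a 24C/96G host can carry 20C/72G
```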

3.1.2 Network Overview

Network Plane Planning (Non-SDN)


The separated deployment solution of DCS consists of the following communication planes:

Management plane: monitors the whole system, performs maintenance for system operations, including system configuration, system loading,
and alarm reporting, and manages VMs, including creating, deleting, and scheduling VMs.

Service plane: provides communication for virtual network interface cards (NICs) of VMs with external devices.

Storage plane: provides communication for the storage system and storage resources for VMs. This plane is used for storing and accessing VM
data including data in the system disk and user disk of VMs.

Backend storage plane: This plane is provided for scale-out storage only and is used for interconnection between hosts and storage units of
storage devices and processing background data between storage nodes.

Flash storage: Figure 1 and Figure 2 show the relationship between system communication planes of DCS.

Figure 1 Communication plane relationship diagram (example: flash storage, four network ports)


Figure 2 Communication plane relationship diagram (example: flash storage, six network ports)

Figure 3 Communication plane relationship diagram (example with eDataInsight included: flash storage, four network ports)

Scale-out storage: Figure 4 and Figure 5 show the relationship between system communication planes of DCS.


Figure 4 Communication plane relationship diagram (example: scale-out storage, four network ports)

Figure 5 Communication plane relationship diagram (example: scale-out storage, six network ports)


The Baseboard Management Controller (BMC) network port of each node can be assigned to the BMC plane or the management plane.
You are advised to bind network ports on different NICs to the same plane to prevent network interruption caused by the fault of a single NIC.
When binding network ports on different NICs, ensure that the models of the NICs to be bound are the same. If the models of the NICs to be bound are
different, bind the network ports on the same NIC.

The hyper-converged deployment solution includes the following communication planes:

Management plane: connects BMC ports of nodes to provide remote hardware device management for system management and maintenance.

Storage plane: enables data communication between VBS and OSD nodes or between OSD nodes.

Service plane: enables communication between compute nodes and VBS nodes through the iSCSI protocol.

Figure 6 Communication plane relationship diagram (example: hyper-converged deployment)


VLAN Planning Principles (Non-SDN)


According to different planning principles, different interfaces of access switches are assigned to different VLANs to isolate users of different
service types.
You can bond multiple physical ports on a host into an aggregation port (bond). The aggregation port can be used as the uplink of a DVS or the
logical interface of the host.
The VLAN planning principles vary depending on the DCS deployment solution. The details are as follows:

The VLAN planning principles for the separated deployment solution are as follows:

Flash storage: Table 1 shows the VLAN assignment on the system communication plane of DCS.

Scale-out storage: Table 2 shows the VLAN planning for the system communication plane of DCS.

For details about the VLAN planning principles for the hyper-converged deployment solution, see Installation and Configuration > Site
Deployment > Site Deployment (Preinstallation) > Planning Data > Network Parameters and Installation and Configuration > Site
Deployment > Site Deployment (Onsite Installation) > Planning Data > Network Parameters in FusionCube 1000H Product
Documentation (FusionCompute).

Table 1 VLAN assignment (flash storage)

Communication Plane | Device Network Port | Virtual Network Port | VLAN Planning Principle

Management Network ports eth0 and Bond1 Network ports eth0 and eth1 on each node are assigned to the management plane VLAN,
plane eth1 on hosts and the VLAN to which network ports eth0 and eth1 on the node belong becomes the
default VLAN of the management plane.
Network ports eth0 and Bond1
eth1 on the active and
standby VRM nodes

BMC network ports on - The switch port connected to the BMC network port on each node is assigned to the
VRM and hosts BMC plane VLAN, and the VLAN to which the BMC network port on the node belongs
is the default VLAN of the BMC plane.
NOTE:

The BMC network port can be assigned to an independent BMC plane or to the same
VLAN to which the management network port is assigned. The specific assignment
depends on the actual network planning.

Storage plane Storage network ports A1, - VLAN is configured as required. At least one VLAN must be assigned. For higher
A2, A3, A4, B1, B2, B3 reliability, it is recommended that more VLANs be assigned.
and B4 on SAN storage The network port eth2 can access ports A1, A2, B1, and B2 over the Layer 2 network.
devices The network port eth3 can access ports A3, A4, B3, and B4 over the Layer 2 network.
This allows compute resources to access storage resources through multiple paths.
Storage network ports eth2 Bond2 Therefore, the storage plane network reliability is ensured.
and eth3 on hosts

Service plane Service network ports eth0 Bond1 The service plane is divided into multiple VLANs to isolate VMs. All data packets from
and eth1 on hosts different VLANs are forwarded over the service network ports on the CNA node. The
data packets are marked with VLAN tags and sent to the service plane network port of
the switch at the access layer.
NOTE:

In the four-network-port scenario, the management plane and the service plane share
physical network ports and are logically isolated by VLANs.

Table 2 VLAN assignment (scale-out storage)

Communication Plane | Device Network Port | Virtual Network Port | VLAN Planning Principle

Management Network ports eth0 Bond1 Network ports eth0 and eth1 on each node are assigned to the management plane VLAN, and
plane and eth1 on hosts the VLAN to which network ports eth0 and eth1 on the node belong becomes the default
VLAN of the management plane.
Network ports eth0 -
on active and
standby VRM nodes

BMC network ports - The switch port connected to the BMC network port on each node is assigned to the BMC
on VRM and hosts plane VLAN, and the VLAN to which the BMC network port on the node belongs is the
default VLAN of the BMC plane.
NOTE:

The BMC network port can be assigned to an independent BMC plane or to the same VLAN to
which the management network port is assigned. The specific assignment depends on the actual
network planning.

NIC1-1 of the Bond1 Storage management network port


storage device

BMC of the storage -


device

Storage plane SLOT5-1 and - The storage VLAN is assigned based on the planning.
SLOT5-2

Storage network Bond2 Storage network ports eth2 and eth3 form bond 2, which forms a VLAN with storage network
ports eth2 and eth3 ports.
on hosts

Backend storage SLOT3-1 and - The backend storage VLAN is assigned based on the network planning.
plane SLOT3-2 NOTE:

This plane is available only for scale-out storage.

Service plane Service network - The service plane is divided into multiple VLANs to isolate VMs. All data packets from
ports eth4 and eth5 different VLANs are forwarded over the service network ports (eth4 and eth5) on the CNA
on hosts node. The data packets are marked with VLAN tags and sent to the service plane network port
of the switch at the access layer.
NOTE:

In the four-network-port scenario, the management plane and the service plane share physical
network ports and are logically isolated by VLANs.

(Optional) IP Address Planning Principles (Non-SDN Solution)


Table 3 lists the requirements for server management IP addresses.

Table 3 Required server management IP addresses

Category | Attribute | Number of Management IP Addresses | Number of Floating Management IP Addresses | Number of Multi-tenant IP Addresses | Remarks

eDME (non-multi- Optional 3 2 -- One management floating IP address


tenant scenario) One southbound floating IP address

eDME (multi-tenant Optional 3 2 11 Two multi-tenant management IP


scenario) addresses
One floating IP address of the operation
portal
One IP address for load balancing on
the operation portal
Two IP addresses of the ECE nodes

Two floating IP addresses of the ECE
nodes
One IP address for ECE load balancing
Two IP addresses of the AS service
nodes

VRM Mandatory 2 1 -- BMC IP addresses are not required


when VRM nodes are deployed on
VMs.

FSM Optional 2 1 -- IP address of OceanStor Pacific Block


Manager

CNA Mandatory 3 + N N/A -- The total number of management


servers is 3. N indicates the number of
service servers.

eBackup Optional 3 2 -- One management floating IP address


One southbound floating IP address

UltraVR Optional 1 N/A -- IP address of the UltraVR management


plane

eDataInsight Optional 3 1 -- One management floating IP address

HiCloud Optional 2 1 -- One management plane IP address and


one floating IP address of paas-core
One IP address of the GKit
management plane

SFS Optional 2 2 -- One floating IP address of the


GaussDB database
One floating IP address of the SFS_DJ
management nodes

eCampusCore Optional 9 N/A -- --

Physical server Mandatory 3 + N N/A -- The total number of management


servers is 3. N indicates the number of
service servers.

Table 4 lists the requirements for storage management IP addresses.

Table 4 Requirements for storage management IP addresses

Device Type | Number of Storage Servers | Number of BMC IP Addresses | Number of Management IP Addresses | Number of Management Floating IP Addresses | Remarks

IP SAN/FC 1 -- 2 -- For example, Huawei OceanStor 5500


SAN V5 Kunpeng or OceanStor 18000 V5.

If one set of IP SAN storage is used, at least two management IP addresses are required for the storage server.

Table 5 lists the requirements for management IP addresses of network devices. You can adjust the IP addresses based on the actual networking.

Table 5 Management IP addresses planned for network devices

Device Type | Number of Network Devices | Number of Management IP Addresses | Number of Management Floating IP Addresses | Remarks

Leaf switch in the compute cabinet 2 2 -- Configure two switches in


and management cabinet M-LAG mode.

Leaf switch in the management 2 2 -- Configure two switches in


cabinet M-LAG mode.

Leaf switch in the storage cabinet 2 2 -- Configure two switches in


M-LAG mode.

Spine switch 2 2 -- Configure two switches in


M-LAG mode.


Border leaf switch 2 2 -- Configure two switches in


M-LAG mode.

Firewall 2 2 1 Configure HA.

If there are two compute leaf switches, two management leaf switches, two storage leaf switches, and two spine switches, at least 13 management IP addresses are
required.

Table 6 describes the IP address planning for the storage plane. The following table lists the requirements for storage plane IP addresses when IP
SAN storage is used. When FC SAN storage is used, storage plane IP addresses are not required. When the OceanStor Pacific block storage is used,
you need to allocate an OceanStor Pacific block storage plane IP address to each host so that each host can access the OceanStor Pacific block
storage resource pool.

Table 6 IP address planning for the storage plane

Item IP Address Requirement Remarks

IP SAN storage IP Each host is assigned with two IP addresses and The IP address of the IP SAN storage interface and the IP address of the storage
address each storage port is assigned with one IP address. sub-interface on the CNA server must be configured.

OceanStor Pacific Each host is assigned with an OceanStor Pacific Generally, the backend storage plane of the OceanStor Pacific block storage is a
block storage IP block storage plane IP address so that each host VLAN plane. The IP addresses are used only for the internal storage plane of the
address can access OceanStor Pacific block storage OceanStor Pacific block storage (they can be private addresses and do not
resources. communicate with external networks).

Total number of IP addresses required by the IP SAN storage plane = Number of IP SAN storage ports + 2 × Number of CNA nodes. For example, if one dual-
controller system is used, each controller has four ports for IP SAN storage, and 10 CNA nodes are deployed, 28 (8 + 10 × 2) storage IP addresses are required.
Total number of IP addresses required by the OceanStor Pacific block storage plane = Number of storage nodes + Number of CNA nodes. For example, if 20
storage nodes form an OceanStor Pacific block storage pool and 20 CNA nodes are deployed, 40 (20 + 20) storage IP addresses are required.

Service IP address planning: Refer to the number of VMs and vNICs and reserve certain resources. Total number of IP addresses required by the
service plane = Total number of VM NICs × 120%.
Public network IP address planning: For details, see the public network mapping.
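
The formulas above can be applied directly when counting addresses. The sketch below reproduces the worked examples from this section (28 IP SAN storage IP addresses, 40 OceanStor Pacific block storage IP addresses) and applies the 120% service-plane rule to an illustrative NIC count.

```python
# Sketch applying the IP address planning formulas stated above.
import math

def ip_san_storage_ips(storage_ports, cna_nodes):
    # Number of IP SAN storage ports + 2 x number of CNA nodes
    return storage_ports + 2 * cna_nodes

def pacific_block_storage_ips(storage_nodes, cna_nodes):
    # Number of storage nodes + number of CNA nodes
    return storage_nodes + cna_nodes

def service_plane_ips(total_vm_nics):
    # Total number of VM NICs x 120% (rounded up here for planning)
    return math.ceil(total_vm_nics * 1.2)

print(ip_san_storage_ips(storage_ports=8, cna_nodes=10))          # 28
print(pacific_block_storage_ips(storage_nodes=20, cna_nodes=20))  # 40
print(service_plane_ips(total_vm_nics=100))                       # 120 (illustrative NIC count)
```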

(Optional) Network Plane Planning (Network Overlay SDN Solution)


The following figure shows the networking diagram of the network overlay SDN solution.

Figure 7 Networking diagram of the network overlay SDN solution

Table 7 Examples of network plane IP address/VLAN planning

Category | Network Plane | VLAN | Single-Core Gateway Location | VRF | Description


Management Management 3 Server leaf Manage eDME cloud platform management plane, which connects to the
plane switch FusionCompute management plane, storage management plane,
server BMC management plane, and switch management plane.
Internal management plane, which includes VRM, CNA, iMaster
NCE-Fabric interconnection, O&M management, network
configuration management, upgrade configuration management,
alarm channel, information collection, and common services.

iMaster NCE- iMaster NCE- 10 Server leaf Manage Used for northbound communication and Linux management,
Fabric Fabric_Manage switch such as FusionCompute interconnection, web access, and Linux
management login.
plane Internal communication plane of the iMaster NCE-Fabric node.

iMaster NCE- 11 Server leaf N/A Used for communication with network devices in the southbound
Fabric_Service switch direction through protocols like NETCONF, SNMP, and
OpenFlow.

Storage plane Storage_Data 16 to 30 Leaf switch N/A Used for communication between compute nodes and service
storage nodes. Gateways can be deployed on leaf nodes.

Service plane OverLay_Service 31 to 999/1000 N/A Tenant VMs carry services over network overlay tunnels.
to 1999/2000 to
2999

(Optional) VLAN Planning Principles (Network Overlay SDN Solution)


Table 8 describes the VLAN planning.

Storage and server: The interconnection ports between spine and leaf nodes adopt the Layer 3 networking mode. Therefore, the switching
VLANs of server leaf nodes and storage leaf nodes can be planned independently, and the VLANs must be unique in the Layer 2
interconnection domain of the switch.

iMaster NCE-Fabric: This VLAN is used for interconnection between the border Leaf switch and firewalls and between the border leaf switch
and LBs. The global VLAN required for VPC service provisioning must be unique.

Table 8 Overall VLAN planning

VLAN Planning Data (Example) Description

Device 2 to 30 Usage: When the underlay network is manually constructed, VLANIF interfaces are used to establish links
interconnection between some devices, including iMaster NCE-Fabric in-band management links between firewalls and
VLAN gateways, service links between F5 (third-party load balancers) and gateways, and management links
between iMaster NCE-Fabric servers and server leaf nodes.
Quantity planning suggestions:
For management links between firewalls and gateways, a group of firewalls requires one VLAN.
For service links between F5 load balancers and gateways, each VPN requiring the load balancing function
in the VPC occupies one VLAN (which can be the service VLAN described in subsequent sections).
The management network of the iMaster NCE-Fabric cluster requires one VLAN.
To sum up, plan interconnection VLAN resources in advance based on the actual service design.

Service VLAN 31 to 999/1000 to Usage: This VLAN is used by physical machines and VMs to connect to server leaf nodes on the SDN when
1999/2000 to iMaster NCE-Fabric delivers overlay network configurations.
2999 Quantity planning suggestions:
Service VLANs of different subnets can be reused. It is recommended that 3000 service VLANs be reused.
This VLAN can be dynamically adjusted for future use.

iMaster NCE-Fabric 3000 to 3499 Usage: This VLAN is used for interconnection between logical routers in tenant VPCs and tenant vSYS
interconnection firewalls when iMaster NCE-Fabric delivers overlay network configurations.
VLAN Quantity planning suggestions:
In a VPC, each service VPN requiring the vSYS firewall occupies one VLAN. This VLAN can be the
service VLAN.
This VLAN can be dynamically adjusted for future use.


Reserved VLAN for 3500 to 4000 Usage: This VLAN is required for configuring Layer 3 main interfaces on CE68 series switches when the
Layer 3 main underlay network is manually constructed. Creating a Layer 3 main interface occupies a VLAN. Therefore,
interfaces you need to set the range of reserved VLANs in advance so that the system can automatically occupy the
reserved VLANs when creating a Layer 3 main interface.
Quantity planning suggestions: The reserved VLANs can be dynamically adjusted as required. It is
recommended that six reserved VLANs be configured for the CE6855 switch and 32 reserved VLANs be
configured for the CE7855 switch.

Default reserved 4064 to 4094 Usage: These VLANs serve as internal control plane channels of switches and channels for transmitting user
VLANs service data of some features.
Quantity planning suggestions: You are advised to retain the default value. The reserved VLAN range can be
changed on a CE series switch using command lines so that the default reserved VLAN range does not
overlap with the planned or existing ones.

DCS management 2 to 15 Usage: This VLAN is used for the management communication among FusionCompute, eDME, and BMC
plane VLAN management (1 VLAN), and management communication between iMaster NCE-Fabric southbound and
northbound (2 VLANs). The value must be unique in a device.
Quantity planning suggestions: You are advised to reserve 14 VLANs. By default, 3 VLANs are used. If the
BMC network plane is independently planned, 4 VLANs are required.

DCS storage plane 16 to 30 Usage: This VLAN is used for the communication plane between compute node hosts and storage devices.
VLAN The value must be unique in a device.
Quantity planning suggestions: You are advised to reserve 15 VLANs. By default, 4 VLANs are used. You
can assign VLANs based on the storage device and service type.

Except DCS management and storage plane VLANs, other VLANs are globally reserved for iMaster NCE-Fabric to build the overlay VPC network.

(Optional) IP Address Planning Principles (Network Overlay SDN Solution)


Different IP network segments are planned based on the customer network. For details, see Table 9.

Management plane: device management IP address, SDN controller management and control protocol IP address, and management plane IP
address (eDME, FusionCompute, iMaster NCE-Fabric, and storage management IP address).

Service plane: storage service IP address, Layer 3 interconnection IP address of underlay switches, overlay service IP address, and public
network IP address.

Table 9 Overall IP address planning

IP Address Planning Data (Example) Description

Out-of-band 192.168.39.11/24 Usage: out-of-band management address of the device, which is used to remotely log in to and manage
management IP the device.
address The following network ports are involved:
BMC network port of the server (including compute node, converged node, and iMaster NCE-Fabric
node)
Management network port of the switch (Meth)
Management network port of the firewall (GigabitEthernet0/0/0)
Management network port of the F5 LB (Mgmt)
Management network port of the storage device (Mgmt)
Planning suggestions: Plan the number of IP addresses based on the number of devices on the live
network.

Loopback IP address 10.125.99.1/32 Usage: This IP address is used as the VTEP address, router ID, iMaster NCE-Fabric in-band management
address, and DFS group address when the underlay network is manually deployed.
Planning suggestions: Each switch needs to be configured with two loopback addresses.
In the full M-LAG networking scenario, the loopback configuration of each CE series switch is as
follows. The two member devices in the M-LAG have the same loopback 0 address but different
loopback 1 addresses.
Loopback 0: VTEP address
Loopback 1: router ID, iMaster NCE-Fabric in-band management address, and DFS group address


Device interconnection 10.125.97.1/29 Usage: This IP address is used as the IP address for the interconnection between spine leaf nodes and
IP address server leaf nodes and the management IP address for the interconnection between spine leaf nodes,
firewalls, and LBs when the underlay network is manually deployed.
Planning suggestions:
The Layer 3 interconnection link between a spine leaf node and a server leaf node occupies a 30-bit
network segment. If the networking scale is large, you can bundle multiple links into an Eth-Trunk and
configure the Eth-Trunk as a Layer 3 main interface to reduce the number of IP addresses to be used.
The management interconnection link between a group of firewalls and spine nodes occupies five IP
addresses (two firewall IP addresses, two physical addresses on spine nodes, and one virtual IP address of
VRRP).
The service interconnection link between a group of F5 LBs and spine nodes occupies at least seven IP
addresses (two interconnection interface IP addresses on F5 LBs, one floating IP address, two physical
addresses on spine nodes, and one virtual IP address of VRRP). The number of service VIPs depends on
the service type. Different service VPNs can use the same IP address.
Calculate the number of occupied address segments and the specific range based on the network scale.

iMaster NCE-Fabric 10.125.100.1/24 Usage: This address is used to deploy various addresses required by the iMaster NCE-Fabric cluster
cluster access IP server and configure the gateway of the cluster on server leaf switches when the underlay network is
address manually deployed. Planning suggestions:
For the dual-plane networking, two network segments need to be planned:
Each server in the cluster is configured with its own NIC bond address. Each server requires two IP
addresses (in different network segments). If the cluster has three nodes, 6 IP addresses are required. If
the cluster has five nodes, 10 IP addresses are required. The number of IP addresses required by other
nodes can be obtained in the same manner.
Server leaf nodes are deployed in the M-LAG mode, and VRRP is configured as the gateway of the
controller cluster. Each plane requires two physical IP addresses and one virtual IP address. Therefore, six
IP addresses are required for the two planes.
One southbound floating IP address is required for the controller cluster. That is, the entire cluster
requires only one IP address.
One northbound floating IP address is required for the controller cluster. That is, the entire cluster
requires only one IP address.
Four IP addresses are required for the internal communication of the controller cluster, which are in the
same network segment as the northbound floating IP address.

iMaster NCE-Fabric 10.125.97.240 to Usage: This address is used when iMaster NCE-Fabric delivers overlay network configurations. When
interconnection IP 10.125.97.255/30 tenant service traffic needs to pass through the firewall, this IP address is used for service interconnection
address between the tenant VPC and the tenant VSYS firewall through the spine node.
Planning suggestions: A pair of interconnection IP addresses with a mask having 30 consecutive leading
1-bits are required for a group of firewalls. This IP address can be dynamically adjusted for future use.

Public IP address - Usage: NAT (Network Address Translation) address pool of the SDN DC, which includes the NAT
addresses used by tenants and NAT addresses delivered by iMaster NCE-Fabric.
Planning suggestions: Set this IP address based on the actual public network services.

Interconnection IP 10.125.91.0 to Usage: This address is used when iMaster NCE-Fabric delivers overlay network configurations. If
address 10.125.91.255/30 multiple VPCs need to communicate with each other through different firewall groups or gateway groups,
iMaster NCE-Fabric needs to deliver interconnection IP addresses for interconnection. (For details, see
Configuration Guide > Traditional Mode > Commissioning > Resource Pool Management >
Configuring Global Resources in iMaster NCE-Fabric V100R024C00 Product Documentation.)
Planning suggestions: Set this IP address based on the actual networking application scenario. This IP
address is not involved in the scenario where a single gateway group and a single physical firewall group
are deployed. This IP address can be dynamically adjusted for future use.

Service IP address 10.132.1.0/24 Usage: This IP address is used to create a VBDIF Layer 3 interface as the gateway of PMs or VMs when
iMaster NCE-Fabric delivers overlay network configurations.
Planning suggestions: Plan this IP address based on the actual deployment and service scale. This IP
address can be dynamically adjusted for future use.

Storage IP address 192.168.1.11/24 Usage: This IP address is used for interconnection between compute node hosts and storage devices. If
this IP address is used as a Layer 3 interface, you need to plan the IP address of the VLANIF gateway.
Quantity planning suggestions: Plan the quantity based on the actual deployment and service scale. This
IP address can be dynamically adjusted for future use.

FusionCompute 192.168.40.11/24 Usage: The IP addresses are used for FusionCompute cluster interconnection and northbound
management IP management, which include two FusionCompute management IP addresses and one floating IP address.
address of the DCS Quantity planning suggestions: Plan the quantity based on the actual deployment and service scale. The
system IP addresses can be dynamically adjusted for future use.


eDME management IP 192.168.40.11/24 Usage: The IP addresses are used for the network communication of eDME, including three management
address of the DCS IP addresses and two floating IP addresses of three nodes in the eDME cluster.
system Quantity planning suggestions: Plan the quantity based on the actual deployment and service scale. The
IP addresses can be dynamically adjusted for future use.

eBackup management 192.168.40.11/24 Usage: The IP addresses are used for the network communication of eBackup, including three
IP address of DCS management IP addresses and two floating IP addresses of eBackup.

UltraVR management 192.168.40.11/24 Usage: The IP address is used for the network communication of UltraVR, including one management IP
IP address of DCS address of UltraVR.

Except DCS management and storage IP address and out-of-band management IP address, other IP addresses are globally reserved for iMaster NCE-Fabric to build
the overlay VPC network.
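Several of the entries above are consumed in /30 point-to-point blocks, for example the Layer 3 interconnection links between spine leaf and server leaf nodes and the iMaster NCE-Fabric interconnection range 10.125.97.240 to 10.125.97.255/30. The following is a minimal sketch of carving such blocks with the Python standard ipaddress module; the segment value comes from the example in the table, and the script is illustrative only.

import ipaddress

# Carve /30 point-to-point blocks from the planned interconnection segment.
# 10.125.97.240/28 corresponds to the example range 10.125.97.240-255 above.
segment = ipaddress.ip_network("10.125.97.240/28")

links = list(segment.subnets(new_prefix=30))
print(f"{segment} yields {len(links)} point-to-point /30 links")

for link in links:
    a, b = list(link.hosts())  # two usable addresses per /30 link
    print(f"{link}: {a} <-> {b}")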

NVMe over RoCE Networking Planning


Figure 8 shows how to connect a host to a storage system on a dual-switch RoCE network. Connect the ports with the IDs described in the table.

Figure 8 Dual-switch RoCE networking diagram

Table 10 Example of IP address planning for dual-switch RoCE networking

Port ID Port Description VLAN ID IP Address Subnet Mask

1 Port Slot1.P0 on Host001 Connects to port A.IOM0.P0 and port B.IOM0.P0 on Storage001 using 55 192.168.5.5 255.255.255.0
Switch001.

2 Port Slot1.P1 on Host001 Connects to port A.IOM0.P1 and port B.IOM0.P1 on Storage001 using 66 192.168.6.5 255.255.255.0
Switch002.

3 Port A.IOM0.P0 on Connects to port Slot1.P0 on Host001 using Switch001. 55 192.168.5.6 255.255.255.0
Storage001

5 Port A.IOM0.P1 on Connects to port Slot1.P1 on Host001 using Switch002. 66 192.168.6.6 255.255.255.0
Storage001

4 Port B.IOM0.P0 on Connects to port Slot1.P0 on Host001 using Switch001. 55 192.168.5.7 255.255.255.0
Storage001

6 Port B.IOM0.P1 on Connects to port Slot1.P1 on Host001 using Switch002. 66 192.168.6.7 255.255.255.0
Storage001
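In the plan above, every pair of ports cabled to the same switch shares one VLAN and one subnet, while the two switches carry different VLANs and subnets. The following is a minimal consistency check of that pairing; the port names, VLAN IDs, and addresses are copied from Table 10, and the checking logic itself is only an illustration.

import ipaddress

# Each entry: (port, switch, VLAN ID, IP address/mask) as planned in Table 10.
plan = [
    ("Host001 Slot1.P0",     "Switch001", 55, "192.168.5.5/24"),
    ("Host001 Slot1.P1",     "Switch002", 66, "192.168.6.5/24"),
    ("Storage001 A.IOM0.P0", "Switch001", 55, "192.168.5.6/24"),
    ("Storage001 A.IOM0.P1", "Switch002", 66, "192.168.6.6/24"),
    ("Storage001 B.IOM0.P0", "Switch001", 55, "192.168.5.7/24"),
    ("Storage001 B.IOM0.P1", "Switch002", 66, "192.168.6.7/24"),
]

# All ports connected to the same switch must share one VLAN and one subnet.
by_switch = {}
for port, switch, vlan, addr in plan:
    network = ipaddress.ip_interface(addr).network
    by_switch.setdefault(switch, []).append((port, vlan, network))

for switch, ports in by_switch.items():
    vlans = {vlan for _, vlan, _ in ports}
    subnets = {net for _, _, net in ports}
    consistent = len(vlans) == 1 and len(subnets) == 1
    print(f"{switch}: VLANs={vlans}, subnets={subnets}, consistent={consistent}")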

3.1.3 System Requirements


Local PC Requirements

Management System Resource Requirements

Storage Device Requirements


Network Requirements

Physical Networking Requirements

3.1.3.1 Local PC Requirements


Table 1 Local PC requirements

Item Requirement

Memory > 2 GB

Disk The available space of the disk partition where SmartKit is installed is greater than 50 GB.

Resolution For better visual effect, the recommended resolution is 1920 x 1080 or higher.

Network The local PC can communicate with the planned management plane.

OS Windows 10 and Windows 11

Browser Google Chrome 103 or later


(recommended)
Mozilla Firefox 101 or Mozilla Firefox 102
Edge 103 or later
Safari 15 or later

Security The firewall on the local PC has been disabled.

Permissions You are advised to use the administrator account on the local PC. Otherwise, SmartKit will display a dialog box asking you to
(recommended) obtain the administrator permissions during eDME deployment, and SmartKit can run the eDME deployment script only after you
confirm the operation.
You can perform the following operations to modify User Account Control settings to control whether to display the dialog box.
Open the Windows Control Panel.
Choose System > Security and Maintenance > Change User Account Control settings.
Move the slider on the left to Never notify. After the deployment is complete, restore the default value.

Installation tool You have installed SmartKit.
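Part of the table above can be checked automatically before installation. The following is a minimal pre-check sketch using only the Python standard library; the drive letter and thresholds are illustrative, and the script is not part of SmartKit.

import platform
import shutil

# Illustrative pre-check against the local PC requirements table.
SMARTKIT_INSTALL_PATH = "C:\\" if platform.system() == "Windows" else "/"
REQUIRED_FREE_GB = 50  # free space required on the SmartKit installation partition

usage = shutil.disk_usage(SMARTKIT_INSTALL_PATH)
free_gb = usage.free / (1024 ** 3)

print(f"OS: {platform.system()} {platform.release()}")
status = "OK" if free_gb > REQUIRED_FREE_GB else "NOT OK"
print(f"Free space on {SMARTKIT_INSTALL_PATH}: {free_gb:.1f} GB "
      f"(required: > {REQUIRED_FREE_GB} GB) -> {status}")

if platform.system() != "Windows":
    print("Note: the requirements table lists Windows 10 and Windows 11 as the supported OS.")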

3.1.3.2 Management System Resource Requirements


DCS supports separated and hyper-converged deployment of compute and storage nodes. Separated deployment includes deployment for flash
storage and scale-out storage. The management systems of the two deployment modes have the following requirements on host resources:

Separated deployment

Flash storage deployment: The requirements on management system host resources include the requirements on VRM and eDME
components.

Scale-out storage deployment: The requirements on management system host resources include the requirements on VRM, eDME, and
(optional) FSM components.

Hyper-converged deployment

Hyper-converged deployment: The requirements on management system host resources include the requirements on eDME
components.

For details, see the following tables.

Table 1 VRM node specifications (Intel/Hygon/AMD/HiSilicon)

VRM Node Management Scale Specifications of VRM Nodes for Connecting to eDME (Container Management Disabled on FusionCompute) Specifications of VRM Nodes for Connecting to eDME (Container Management Enabled on FusionCompute)

1000 VMs or 1 to 50 physical hosts vCPUs ≥ 8 vCPUs ≥ 8

Memory size ≥ 8 GB Memory size ≥ 12 GB
Disk size ≥ 140 GB Disk size ≥ 620 GB
NOTE:

The default disk capacity of the VRM VM is 120 GB.


The remaining disk capacity of the CNA host must be
greater than or equal to 140 GB.

3000 VMs or 51 to 100 physical hosts vCPUs ≥ 12 vCPUs ≥ 12


NOTE:
Memory size ≥ 16 GB Memory size ≥ 22 GB
You are advised to deploy VRM on
physical servers to support such Disk size ≥ 140 GB Disk size ≥ 920 GB
specifications. NOTE:

The default disk capacity of the VRM VM is 120 GB.


The remaining disk capacity of the CNA host must be
greater than or equal to 140 GB.

5000 VMs or 101 to 200 physical hosts vCPUs ≥ 30 vCPUs ≥ 30


NOTE:
Memory size ≥ 40 GB Memory size ≥ 50 GB
You are advised to deploy VRM on
physical servers to support such Disk size ≥ 140 GB Disk size ≥ 1220 GB
specifications. NOTE:

The default disk capacity of the VRM VM is 120 GB.


The remaining disk capacity of the CNA host must be
greater than or equal to 140 GB.

10,000 VMs or 201 to 1000 physical Not supported Not supported


hosts
NOTE:

VRM can be deployed on only


physical servers to support such
specifications.

Table 2 Specifications requirements for VRM nodes (Phytium)

VRM Node Management Scale Specifications of VRM Nodes for Connecting to eDME (Container Management Disabled on FusionCompute) Specifications of VRM Nodes for Connecting to eDME (Container Management Enabled on FusionCompute)

1000 VMs or 1 to 50 physical hosts vCPUs ≥ 12 vCPUs ≥ 12


NOTE:
Memory size ≥ 16 GB Memory size ≥ 22 GB
You are advised to deploy VRM on
physical servers to support such
Disk size ≥ 140 GB Disk size ≥ 920 GB
specifications. NOTE:

The default disk capacity of the VRM VM is 120 GB.


The remaining disk capacity of the CNA host must be
greater than or equal to 140 GB.

3000 VMs or 51 to 100 physical hosts vCPUs ≥ 30 vCPUs ≥ 30


NOTE:
Memory size ≥ 40 GB Memory size ≥ 50 GB
You are advised to deploy VRM on
physical servers to support such
Disk size ≥ 140 GB Disk size ≥ 1220 GB
specifications. NOTE:

The default disk capacity of the VRM VM is 120 GB.


The remaining disk capacity of the CNA host must be
greater than or equal to 140 GB.

5000 VMs or 101 to 200 physical hosts Not supported Not supported
NOTE:

VRM can be deployed on only


physical servers to support such
specifications.
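The tiers in Table 1 can be expressed as a simple lookup that returns the minimum VRM VM specification for a given management scale. The sketch below encodes only the Intel/Hygon/AMD/HiSilicon column with container management disabled on FusionCompute; it is an illustration, not a Huawei sizing tool.

# Minimum VRM node specifications (Intel/Hygon/AMD/HiSilicon, container
# management disabled on FusionCompute), transcribed from Table 1.
# Each tier: (max_physical_hosts, vcpus, memory_gb, disk_gb)
VRM_TIERS = [
    (50,  8,  8,  140),   # up to 1000 VMs or 50 hosts
    (100, 12, 16, 140),   # up to 3000 VMs or 100 hosts
    (200, 30, 40, 140),   # up to 5000 VMs or 200 hosts
]

def vrm_spec(physical_hosts: int):
    """Return (vcpus, memory_gb, disk_gb) of the smallest matching tier."""
    for max_hosts, vcpus, memory_gb, disk_gb in VRM_TIERS:
        if physical_hosts <= max_hosts:
            return vcpus, memory_gb, disk_gb
    raise ValueError("Above 200 hosts, deploy VRM on physical servers "
                     "(see the 10,000-VM row in Table 1).")

print(vrm_spec(80))  # -> (12, 16, 140)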

Table 3 Host resource requirements of the eDME management system

Management Level HA Mode Management Scale CPU Memory Storage Space

LITE Three-node cluster 1,000 VMs ≥6 ≥ 48 GB System disk: ≥ 55 GB


Data disk: ≥ 600 GB

Five-node cluster (three on the O&M 1,000 VMs ≥ 6 (O&M portal) ≥ 48 GB (O&M System disk: ≥ 55 GB
portal and two on the operation portal) ≥ 8 (Hygon or Arm) portal) Data disk:
(O&M portal) ≥ 32 GB ≥ 600 GB (O&M portal)
≥ 8 (operation (operation portal)
≥ 820 GB (operation portal)
portal)

L1 Three-node cluster 10,000 VMs ≥ 16 ≥ 64 GB System disk: ≥ 55 GB


Data disk:
≥ 1,024 GB when 3,000 VMs
are deployed
≥ 1,536 GB when 5,000 VMs
are deployed
≥ 2,560 GB when 10,000 VMs
are deployed

Five-node cluster (three on the O&M 10,000 VMs ≥ 16 (O&M portal) ≥ 64 GB (O&M System disk: ≥ 55 GB
portal and two on the operation portal) ≥ 16 (operation portal) Data disk:
portal) ≥ 32 GB ≥ 1,024 GB (O&M portal)
(operation portal) when 3,000 VMs are deployed
≥ 1,536 GB (O&M portal)
when 5,000 VMs are deployed
≥ 2,560 GB (O&M portal)
when 10,000 VMs are deployed
≥ 820 GB (operation portal)

Table 4 Resource planning for Elastic Container Engine

VM Host Name CPU (Cores) Memory (GB) Storage Space (GB)

eDMEContainer01 ≥4 ≥8 System disk ≥ 55 GB


Data disk ≥ 2,900 GB

eDMEContainer02 ≥4 ≥8 System disk ≥ 55 GB


Data disk ≥ 2,900 GB

Table 5 Resource planning for Auto Scaling

VM Host Name CPU (Cores) Memory (GB) Storage Space (GB)

eDMEAutoScaling01 ≥4 ≥8 System disk ≥ 55 GB


Data disk ≥ 500 GB

eDMEAutoScaling02 ≥4 ≥8 System disk ≥ 55 GB


Data disk ≥ 500 GB

Table 6 Resource planning for FSM, eBackup, and UltraVR

Component HA Mode Number of VMs Management Scale CPU Memory Storage Space

(Optional) FSM Active/standby 2 3 to 64 storage nodes ≥4 ≥ 8 GB ≥ 100 GB

65 to 128 storage nodes ≥8 ≥ 8 GB ≥ 100 GB

129 to 256 storage nodes ≥ 16 ≥ 16 GB ≥ 160 GB

eBackup Single-node deployment or multi-node deployment 1 - ≥8 ≥ 16 GB ≥ 120 GB

UltraVR Single-node deployment 1 Fewer than 3,000 VMs ≥4 ≥ 8 GB ≥ 60 GB

Table 7 HiCloud resource planning

VM Host Name VM Type CPU Memory (GB) System Disk (GB) Data Disk 1 (GB) Data Disk 2 (GB) Data Disk 3 (GB) Data Disk 4 (GB)

paas-core Management 8 24 40 100 1,060 50 /


plane


platform-node1 Data plane 8 24 40 100 765 200 310

platform-node2 Data plane 8 24 40 100 565 100 300

platform-node3 Data plane 8 24 40 100 765 200 310

gkit Management 4 8 40 200 / / /


plane
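When reserving cluster capacity for HiCloud, it can be useful to total the per-VM figures in Table 7. The following sketch aggregates the values copied from the table; the aggregation script itself is only an illustration.

# Per-VM plan from Table 7: (name, cpu_cores, memory_gb, disks_gb)
hicloud_vms = [
    ("paas-core",      8, 24, [40, 100, 1060, 50]),
    ("platform-node1", 8, 24, [40, 100, 765, 200, 310]),
    ("platform-node2", 8, 24, [40, 100, 565, 100, 300]),
    ("platform-node3", 8, 24, [40, 100, 765, 200, 310]),
    ("gkit",           4, 8,  [40, 200]),
]

total_cpu = sum(cpu for _, cpu, _, _ in hicloud_vms)
total_memory = sum(memory for _, _, memory, _ in hicloud_vms)
total_disk = sum(sum(disks) for _, _, _, disks in hicloud_vms)

print(f"HiCloud totals: {total_cpu} CPU cores, {total_memory} GB memory, {total_disk} GB storage")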

Table 8 eDataInsight resource planning

VM Function CPU Overcommitment Ratio vCPU (Cores) Memory (GB) System Disk (GB) Service Disk (GB) Data Disk (GB) Description

VMs for deploying the 1:1 8 32 92 200 - Used for CloudSOP installation. A maximum
eDataInsight of 1,000 hosts can be managed.
management plane:
three

VMs for deploying 1:1.5 16 64 92 200 ≥ 500 Used for eDataInsight component
eDataInsight installation. The number of VMs and CPU
components: three or cores and the sizes of memory and data disks
more can be dynamically adjusted.

Table 9 SFS resource planning

VM Host Name CPU (Cores) Memory (GB) System Disk (GB)

SFS_DJ01 8 8 240

SFS_DJ02 8 8 240

Table 10 VM requirements for eCampusCore deployment

VM Host Name Number of vCPUs (Cores) Memory (GB) System Disk (GB) Data Disk 1 (GB) Data Disk 2 (GB) Data Disk 3 (GB)

installer 4 32 280 560 200 -

nfs-dns-1 4 8 120 350 500 -

nfs-dns-2 4 8 120 350 500 -

foundation-1 8 32 120 200 200 -

foundation-2 8 32 120 200 200 -

foundation-3 8 32 120 200 200 -

ops-1 3 8 120 100 180 200

ops-2 3 8 120 100 180 200

gaussv5-1 8 32 120 100 500 2048

gaussv5-2 8 32 120 100 500 2048

3.1.3.3 Storage Device Requirements


Storage devices must meet certain requirements to ensure that software related to DCS can be correctly installed.

FusionCompute can be installed only on a local storage device.

If local storage devices are used, only available space on the disk where you install the host OS and other bare disks can be used for data
storage.

If shared storage devices are used, including SAN and NAS storage devices, you must configure the management IP addresses and storage link IP
addresses for them. The following conditions must be met for different storage devices:


If SAN devices are used and you have requirements for thin provisioning and storage cost reduction, you are advised to use the thin
provisioning function provided by the Virtual Image Management System (VIMS), rather than the Thin LUN function of SAN devices.
If you use the Thin LUN function of the underlying physical storage devices, an alarm indicating insufficient storage space may be generated
after you delete a VM commonly because the storage space used by the VM is not zeroed out.

If SAN storage devices are used, configure LUNs or storage pools (datastores) as planned and map them to corresponding hosts.

If NAS storage devices are used, configure shared directories (datastores) and a list of hosts that can access the shared directories as planned,
and configure no_all_squash and no_root_squash.

The OS compatibility of some non-Huawei SAN storage devices varies depending on the LUN space. For example, if the storage space of a
LUN on a certain SAN storage device is greater than 2 TB, certain OSs can identify only 2 TB storage space on the LUN. Therefore, review
your storage device product documentation to understand the OS compatibility of the non-Huawei SAN storage devices before you use the
devices.

If SAN storage devices are used, you are advised to connect storage devices to hosts using the iSCSI protocol. The iSCSI connection does not
require additional switches, thereby reducing costs.

Except for FusionCompute, which uses local storage, other components should preferentially use shared storage. eDataInsight must use shared storage.

In the DCS environment, a single virtualized SAN datastore can be added only to hosts using the same type of CPUs.
In a system that uses datastores provided by shared storage, add the datastores to all hosts in the same cluster to ensure that the VM migration within a cluster is
not affected by the datastores.
Local disks can be provided only for the host accommodating the disks. Pay attention to the following when using local storage:
Size local disks roughly in proportion to the host's compute resources. With this proportion configured, local storage is exhausted at about the same time as the host compute resources, so neither type of resource is left idle and wasted.

Storage virtualization provides better performance in small-scale scenarios. Therefore, it is recommended that a maximum of 16 hosts be connected
to the same virtual datastore.

3.1.3.4 Network Requirements


For details about how network planes communicate, see Network Overview . Switch configurations must meet the following requirements:

SNMP and SSH must be enabled for the switches to enhance security. SNMPv3 is recommended. For details, see "SNMP Configuration" in the
corresponding switch product documentation. For example, see "Configuration" > "Configuration Guide" > "System Management
Configuration" > "SNMP Configuration" in CloudEngine 8800 and 6800 Series Switches Product Documentation .

To ensure networking reliability, you are advised to configure M-LAG for switches. If NICs are sufficient, you can use two or more NICs for
connecting the host to each plane. For details, see "M-LAG Configuration" in the corresponding switch product documentation. For details, see
"Configuration" > "Configuration Guide" > "Ethernet Switching Configuration" > "M-LAG Configuration" in CloudEngine 8800 and 6800
Series Switches Product Documentation .

Table 1 describes the requirements for communication between network planes in the system.

Table 1 Interconnection requirements for each network plane

Communication Plane Description Requirement

BMC plane Specifies the plane used by the BMC network ports on hosts. This plane enables The management plane and the BMC plane of
remote access to the BMC system of a server. the VRM node can communicate with those
planes of the eDME node. The management
plane and the BMC plane can be combined.

Management Specifies the plane used by the management system to manage all nodes in a VRM nodes can communicate with CNA nodes
plane unified manner. This plane provides the following IP addresses: over the management plane.
Management IP addresses of all hosts, that is, IP addresses of the management The eDataInsight management plane can
network ports on hosts communicate with the FusionCompute
IP addresses of management VMs management plane.

IP addresses of storage device controllers In the decoupled storage-compute scenario, the


eDataInsight management plane needs to
NOTE: communicate with the OceanStor Pacific HDFS
The management plane is accessible to the IP addresses in all network segments by management plane.
default, because the network plans of different customers vary. You can deploy
physical firewalls to deny access from IP addresses that are not included in the
network plan.
If you use a firewall to set access rules for the floating IP address of the VRM node,
set the same access rules for the management IP addresses of the active and standby
VRM nodes.
The DCS management plane has ports that provide management services for
external systems. If the management plane is deployed on an untrusted network,
there is a high probability that external DoS/DDoS attacks occur. You are advised to
deploy the management plane on the customer's private network or in the trusted
zone of the firewall to protect the DCS system from external attacks.
You are advised to configure the eth0 port on a host as the management plane
network port. If a host has more than four network ports, you are advised to
configure both eth0 and eth1 as the management plane network ports and bind
them to work in active/standby mode during installation.

Service plane Specifies the plane used by user VMs. The management plane where eDataInsight
management nodes reside needs to
communicate with the VM service plane of
compute nodes.
In the decoupled storage-compute scenario, the
eDataInsight service plane needs to
communicate with the OceanStor Pacific HDFS
service plane.

Storage plane This plane is used for interconnection between hosts and storage units of storage Hosts communicate properly with storage
devices and for processing foreground data between storage nodes. This plane devices over the storage plane.
provides the following IP addresses: You are advised not to use the management
Storage IP addresses of all hosts, that is, IP addresses of the storage network ports plane to carry storage services. This ensures
on hosts storage service continuity even when you
subsequently expand the capacity for the
Storage IP addresses of storage devices storage plane.
If the multipathing mode is in use, configure multiple VLANs for the storage
plane.

Backend storage Back-end storage plane of scale-out storage: -


plane Storage nodes are interconnected and all storage nodes use IP addresses of the
back-end storage plane.

SmartKit plane Specifies the management plane of the host where SmartKit is located. The SmartKit plane communicates with the
BMC plane and management plane.

iMaster NCE- Specifies the iMaster NCE-Fabric management plane. The iMaster NCE-Fabric plane communicates
Fabric plane with the management plane.

3.1.3.5 Physical Networking Requirements


For details about the physical networking requirements, see Basic Network Architecture .

Table 1 Networking requirements

Item Requirement

Network The management network and service network must be isolated.


If only one service subnet is available at a site, an additional internal subnet must be configured on a switch to function as the management
network.

Switch In the physical networking, a single leaf switch can be deployed or the M-LAG networking can be used.

Cluster All servers in a cluster are of the same model, and the VLAN assignments for all clusters are the same.

3.2 Installation Process


Figure 1 Installation process of DCS


Table 1 Installation process of DCS

Phase Subtask Application Scenario Description

Preparing for Obtaining Documents, Tools, Separated and hyper- Obtain the documents, tools, and software packages required for the
installation and Software Packages converged installation.
deployment
Integration Design scenarios The integration design phase covers the planning and design of DCS,
including the LLD of the system architecture, resource requirements,
compute system, network system, storage system, and O&M. The LLD
template is output to provide guidance for software and hardware
installation.

Planning Communication Plan the communication ports and protocols used for DCS.
Ports

Accounts and Passwords Obtain the passwords and accounts used for deploying DCS.

Preparing Data Plan the host and VRM information required for installing the software.

Deploying Installing Devices Hardware involved in DCS includes servers, storage devices, and switches,
hardware or hardware devices in hyper-converged scenarios.

Installing Signal Cables Install signal cables for servers and switches.

Configuring Hardware Configure the installed servers, storage devices, and switches, or hardware
Devices devices in hyper-converged scenarios.

Deploying software Unified DCS Deployment Separated Unified DCS deployment indicates that SmartKit is used to install
(Separated Deployment deployment scenario FusionCompute, eDME, UltraVR (optional), eBackup (optional),
Scenario) eDataInsight (optional), HiCloud (optional), and SFS (optional).
FusionCompute virtualizes hardware resources and centrally manages
virtual resources, service resources, and user resources. Create three
management VMs on FusionCompute. Management VMs are used to install
eDME and (optional) FSM.
eDME is a Huawei-developed intelligent O&M platform that centrally
manages software and hardware for virtualization scenarios.


Configuring Interconnection Separated This configuration is required only in the network overlay SDN solution.
Between iMaster NCE- deployment scenario FusionCompute associates with iMaster NCE-Fabric. iMaster NCE-Fabric
Fabric and FusionCompute detects VM login, logout, and migration status and automatically configures
the VM interworking network.

Configuring Interconnection Separated This configuration is required only in the network overlay SDN solution.
Between iMaster NCE- deployment scenario Configure iMaster NCE-Fabric to interconnect with eDME so that iMaster
Fabric and eDME NCE-Fabric can be managed in eDME.

Installing FabricInsight Separated This configuration is required only in the network overlay SDN solution.
deployment scenario Install FabricInsight to interconnect with eDME.

(Optional) Installing FSM Separated Only in scale-out storage deployment scenarios, two VMs created on
deployment scenario FusionCompute are deployed in active/standby mode.

Installing eDME (Hyper- Hyper-converged When DCS is used in the hyper-converged deployment scenario, eDME is
Converged Deployment) deployment scenario deployed on VMs created on two MCNA nodes and one SCNA node of
FusionCube 1000H. For details about eDataInsight and HiCloud
deployment procedures, see Installation Using SmartKit .

Initial configuration Initial Service Separated and hyper- Initialize the system of DCS using the initial configuration wizard, such as
Configurations converged creating clusters, adding hosts, adding storage devices, and configuring
deployment networks.
scenarios

(Optional) (Optional) Installing DR and Separated and hyper- eBackup&UltraVR virtualization backup and DR software is used to
Installing DR and Backup Software converged implement VM data backup and DR, providing a unified DR and backup
backup software deployment protection solution for data centers in all regions and scenarios.
scenarios
UltraVR is a piece of DR management software that relies on storage to
provide VM data protection and restoration functions.
eBackup is a piece of Huawei-developed backup software for virtual
environments.

3.3 Preparing for Installation


Obtaining Documents, Tools, and Software Packages

Integration Design

Planning Communication Ports

Accounts and Passwords

Preparing Data

Compatibility Query

3.3.1 Obtaining Documents, Tools, and Software Packages

Preparing Documents
Table 1 lists the documents required for installing DCS.

Table 1 Documents to be prepared

Document Key Information How to Obtain

Datacenter Virtualization Solution Software deployment plan For enterprises: Visit


2.1.0 Integration Design Suite https://support.huawei.com/enterprise , search for the
document by name, and download it.
Datacenter Virtualization Solution Communication ports and protocols
2.1.0 Communication Matrix For carriers: Visit https://support.huawei.com , search for
the document by name, and download it.


Datacenter Virtualization Solution Information about the mapping between software and
2.1.0 Version Mapping hardware versions

Datacenter Virtualization Solution Before installing the software, download the software
2.1.0 Software Package Download packages of DCS components.
List (by SmartKit) Datacenter Virtualization Solution 2.1.0 Software
Package Download List (by SmartKit) can be imported
to SmartKit to automatically download software
packages.

iMaster NCE-Fabric This document is required only for the network overlay
V100R024C00 Product SDN solution.
Documentation Install iMaster NCE-Fabric and configure
interconnection between iMaster NCE-Fabric and
FusionCompute.

SmartKit 24.0.0 User Guide Describes how to use SmartKit. Among these available
methods:
For details about the SmartKit installation process, see
section "Installing SmartKit."
For details about how to use SmartKit to download the
software packages required by FusionCompute, see
section "Software Packages."

Tools
Table 2 describes the tools to be prepared before the installation.

Table 2 Tools to be obtained

Tool Function How to Obtain

SmartKit SmartKit is a collection of IT product service tools, including Huawei storage, server, and cloud Enterprises: Click here.
computing service tools, such as tools required for deployment, maintenance, and upgrade.
Carriers: Click here.
SmartKit installation package: SmartKit_24.0.0.zip.
Use SmartKit to deploy DCS in a unified manner. If Datacenter Virtualization Solution Deployment is
installed offline, you need to obtain the basic virtualization O&M software package
(SmartKit_24.0.0_Tool_Virtualization_Service.zip).

PuTTY A cross-platform remote access tool, which is used to access nodes on a Windows OS during software You can visit the chiark homepage
installation. to download the PuTTY software.
You are advised to use PuTTY of
the latest version for a successful
login to the storage system.

WinSCP A cross-platform file transfer tool, which is used to transfer files between Windows and Linux OSs. You can visit the WinSCP
homepage to download the
WinSCP software.

Verifying Software Packages


When downloading a software package, you also need to download the verification file of the software package. You can download the required
verification file for the software package from the Automatic Verification Signature File column in the software package list at the support
website.
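The authoritative check is performed with the CMS and CRL signature files downloaded together with each package. Independently of that, a quick local integrity check against a digest published on the support website (where one is provided) can be scripted as follows; the file name is a placeholder, and the script is only an illustration.

import hashlib

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    """Compute the SHA-256 digest of a downloaded package, chunk by chunk."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder file name: compare the printed digest with the value shown for
# the package on the support website, if one is published there.
print(sha256_of("FusionCompute_CNA-8.8.0-X86_64.iso"))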

FusionCompute Software Package


The FusionCompute installation tool SmartKit can be used to install hosts and VRM nodes in a unified manner. Obtain the software packages
described in Table 3 or Table 4.

Table 3 Software packages required for tool-based installation in the x86 architecture

Software Package Description How to Obtain

FusionCompute-LinuxInstaller-8.8.0-X86_64.zip FusionCompute installation tool Enterprise users: Click here.

Carrier users: Click here.
FusionCompute_CNA-8.8.0-X86_64.iso FusionCompute host OS

FusionCompute_VRM-8.8.0-X86_64.zip VRM VM template

FusionCompute_SIA-8.1.0.1-GuestOSDriver_X86.zip virtio-win driver package

Table 4 Software packages required for tool-based installation in the Arm architecture

Software Package Description How to Obtain

FusionCompute-LinuxInstaller-8.8.0-ARM_64.zip FusionCompute installation tool Enterprise users: Click here.

FusionCompute_CNA-8.8.0-ARM_64.iso FusionCompute host OS Carrier users: Click here.

FusionCompute_VRM-8.8.0-ARM_64.zip VRM VM template

After obtaining the software packages, do not change the names of the software packages. Otherwise, the software packages cannot be verified when they are
uploaded. As a result, the software packages cannot be installed.

eDME Software Package


eDME can be installed on a server or VM running EulerOS. Table 5 lists the software packages required for the installation. Download the required
software packages based on the OS architecture and save them in the same directory.

Table 5 Software package list (EulerOS)

Software Package Description How to Obtain

eDME_24.0.0_DeployTool.zip eDME software deployment tool Enterprises: Click here.


package.
Carriers: Click here.
eDME_24.0.0_Software_Euler_ARM.zip eDME installation package and CMS
eDME_24.0.0_Software_Euler_ARM.zip.cms and CRL digital signature files used
by Arm servers and VRM VMs.
eDME_24.0.0_Software_Euler_ARM.zip.crl
eDME_24.0.0_EulerOS_V2.0SP12_dvd_ARM_64.iso
eDME_24.0.0_EulerOS_V2.0SP12_dvd_ARM_64.iso.cms
eDME_24.0.0_EulerOS_V2.0SP12_dvd_ARM_64.iso.crl

eDME_24.0.0_Software_Euler_X86.zip eDME installation package and CMS


eDME_24.0.0_Software_Euler_X86.zip.cms and CRL digital signature files used
eDME_24.0.0_Software_Euler_X86.zip.crl by x86 servers and VRM VMs.
eDME_24.0.0_EulerOS_V2.0SP12_dvd_X86_64.iso
eDME_24.0.0_EulerOS_V2.0SP12_dvd_X86_64.iso.cms
eDME_24.0.0_EulerOS_V2.0SP12_dvd_X86_64.iso.crl

DCSIMS_24.0.0_Manager_Euler.zip DCSIMS software installation Enterprises: Click here.


DCSIMS_24.0.0_Manager_Euler.zip.cms package and CMS and CRL digital
signature files. Carriers: Click here.
DCSIMS_24.0.0_Manager_Euler.zip.crl
CAUTION: In the multi-tenant scenario, you need to download the DCSIMS software installation package.
Go to the target FusionCompute version, and click next to the DCSIMS software installation package to download the digital signature verification files.

To deploy services such as Elastic Load Balance (ELB) and Domain Name Service (DNS), import the following files on the Template
Management page of FusionCompute:

Software Package Description How to Obtain

eDME_DVN-24.0.0-ARM_64.zip ELB and DNS service templates Enterprises: Click here.


eDME_DVN-24.0.0-X86_64.zip
Carriers: Click here.

To deploy the DCS eDataInsight management plane, download the software installation package for the DCS eDataInsight management plane, and
the matching digital signature verification file and certificate verification file listed in the following table.

eDataInsight_24.0.0_Manager_Euler.zip Software installation package for the DCS Enterprises: Click here.
eDataInsight management plane

Carriers: Click here.
eDataInsight_24.0.0_Manager_Euler.zip.cms Digital signature verification file of the software
installation package for the DCS eDataInsight Click next to the eDataInsight software
management plane installation package to download the digital
signature verification files.
eDataInsight_24.0.0_Manager_Euler.zip.crl Certificate verification file of the software
installation package for the DCS eDataInsight
management plane

To deploy the DCS ECE service, download the software installation package for the DCS ECE service, and the matching digital signature
verification file and certificate verification file listed in the following table.

DCSCCS_24.0.0_Manager_Euler.zip DCS ECE service installation package Enterprises: Click here.

DCSCCS_24.0.0_Manager_Euler.zip.cms Digital signature verification file for the DCS Carriers: Click here.
ECE service installation package
Click next to the DCSCCS software installation
package to download the digital signature verification files.
DCSCCS_24.0.0_Manager_Euler.zip.crl Certificate verification file for the DCS ECE
service software installation package

To deploy the DCS AS service, download the software installation package for the DCS AS service, and the matching digital signature verification
file and certificate verification file listed in the following table.

DCSAS_24.0.0_Manager_Euler.zip DCS AS service installation package Enterprises: Click here.

DCSAS_24.0.0_Manager_Euler.zip.cms Digital signature verification file for Carriers: Click here.


the DCS AS service software
installation package Go to the target FusionCompute version, and click next to the
DCSAS software installation package to download the digital
signature verification files.
DCSAS_24.0.0_Manager_Euler.zip.crl Certificate verification file for the DCS
AS service software installation
package

UltraVR Software Package

Table 6 Software package list

Software Package Name Description How to Obtain

Template files: Download and install the UltraVR software package as Enterprises:
required. For details, see Installation and Uninstallation in Click here.
OceanStor_BCManager_8.6.0_UltraVR_VHD_for_Euler_X86.zip
the OceanStor BCManager 8.6.0 UltraVR User Guide.
OceanStor_BCManager_8.6.0_UltraVR_VHD_for_Euler_ARM.zip Then, install the patch file by following the instructions Carriers: Click
provided in OceanStor BCManager 8.6.0.SPC200 UltraVR here.
Patch file:
Patch Installation Guide.
OceanStor BCManager 8.6.0.SPC200_UltraVR_for_Euler.zip

Before connecting CSHA to eDME, obtain the required adaptation package, signature, and certificate file.

Table 7 Adaptation package, signature, and certificate file required for connecting CSHA to eDME

Software Package Name Description Download Link

resource_uniteAccess_csha_8.6.0 The .zip package contains the following For enterprise users, click here, search for
NOTE: files: the software package by name, and
Adaptation package download it.
The version number in the software package name varies with
site conditions. Use the actual version number. resource_uniteAccess_csha_8.6.0.tar.gz For carrier users, click here, search for the
Signature file software package by name, and download
it.
resource_uniteAccess_csha_8.6.0.tar.gz.cms
Certificate file
resource_uniteAccess_csha_8.6.0.tar.gz.crl

eBackup Software Package

Table 8 Software package list

Software Package Name Description How to Obtain


eBackup template files eBackup template files, which are used Enterprises: Click
to install eBackup. here.
Arm template file:
OceanStor BCManager Carriers: Click
8.5.1.SPC100_eBackup_KVMtemplate_euler_aarch64_virtualization.zip here.
x86 template file:
OceanStor BCManager
8.5.1.SPC100_eBackup_KVMtemplate_euler_x86_64_virtualization.zip

OceanStor Pacific Series Deployment Tool

Table 9 Software package list

Software Package Name Description How to Obtain

OceanStor- OceanStor Pacific series deployment tool, which is used to install the Enterprises: Click
Pacific_8.2.1_DeviceManagerClient.zip management node of the OceanStor Pacific series software. here.
Carriers: Click
here.

eDataInsight Software Package


Before using SmartKit to install eDataInsight, obtain software packages listed in Table 10.

Table 10 Required software packages for tool installation

Software Package Name Software Description How to Obtain

eDataInsight_24.0.0_DayuImage_Euler-x86_64.zip VM image files Enterprises: Click here.


eDataInsight_24.0.0_DayuImage_Euler-aarch64.zip Carriers: Click here.

eDataInsight_24.0.0_Software_Euler-x86_64.zip eDataInsight service software packages

eDataInsight_24.0.0_Software_Euler-aarch64.zip

After obtaining the software packages, do not change the names of the software packages. Otherwise, the software packages cannot be verified when they are
uploaded. As a result, the software packages cannot be installed.

HiCloud Software Package


You can use GKit Live to download all software packages in one-click mode, improving software package download efficiency and reducing
manual download errors. GKit Live is an online tool and does not have a strong mapping relationship with the CMP HiCloud version. Perform
specific operations based on actual pages.

1. Use a browser to access GKit Live at https://info.support.huawei.com/gkitlive/index?lang=zh#/?lang=en.

2. Log in to GKit Live by using your W3 account.

If you do not have a Huawei account, contact Huawei technical support to log in to GKit Live.

3. Set filter criteria by referring to Table 11 and click Filter. If you need help, click Help Center on the right of the page to view the detailed
process.

Table 11 Search criteria

Parameter Configuration

Domain Select IT Consulting & System Integration.


Product Select CMP HiCloud.

Version Select CMP HiCloud 25.1.0.

Bundle Select CMP HiCloud.

Support scene Select this parameter according to the architecture to be installed. Select EulerOS-X86 or EulerOS-ARM.

4. Select required software packages, click Apply for download, fill in the application based on actual conditions, and click Submit
Application. For details about the software packages, see Obtaining HiCloud Software Packages from Huawei Support Website .

5. (Optional) Download the build file.


A build file records information about downloaded software packages, facilitating software package tracing.

6. After the application is approved, select all software packages and their digital signature verification files, and click Download Files to
download them to the PC.

SFS Software Package


Before using SmartKit to install SFS, obtain the software packages listed in Table 12.

Table 12 SFS software package list

Software Package Name Software Description How to Obtain

Template files: Install the SFS software package specific to the architecture, and then install the Enterprises:
patch file by following the instructions provided in STaaS Solution 8.5.0.SPC1 SFS Patch Click here.
DCS_SFS_8.5.0_ARM.zip
Installation Guide.
DCS_SFS_8.5.0_X86.zip Carriers: Click
here.
Patch file:
STaaS_Solution_SFS_DJ_PATCH-
8.5.0.SPC1.tar.gz

resource_uniteAccess_sfs_8.5.0.tar.gz Before connecting SFS to eDME, obtain the required adaptation package, signature,
and certificate file.
resource_uniteAccess_sfs_8.5.0.tar.gz.cms
Adaptation package: *.tar.gz
resource_uniteAccess_sfs_8.5.0.tar.gz.crl Signature file: *.tar.gz.cms
Certificate file: *.tar.gz.crl

eCampusCore Software Package

In the package names, <version> indicates the version number of the component. Download the corresponding software package based on the requirements.
In the package names listed in Table 13 and Table 14, <hardware-platform> indicates the hardware platform. The DCS application and data integration for
multi-tenant scenarios supports both x86_64 and aarch64 platforms. Tenants can select either of the platforms based on the requirements. Therefore, you need to
download both the x86_64 and aarch64 software packages.
Table 16 lists available VM templates. You need to download the VM templates corresponding to the architecture of the FusionCompute platform. Otherwise,
the installation may fail. You can view the FusionCompute architecture type on the host overview page.

Table 13 Service component software packages

Category Software Package Description

Multi-tenant management Common fundamental eCampusCore_<version>_PreInstallation_<hardware- Tool installation package.


software package (on the components platform>.zip
O&M portal)
eCampusCore_<version>_IntegrationFramework_<hardware- Integration framework
platform>.zip installation tool package.

eCampusCore_<version>_BasicService_EulerOS_<hardware- Software packages of the


platform>.tgz common fundamental
components.
eCampusCore_<version>_Middleware_EulerOS_<hardware-
platform>.tgz


O&M management eCampusCore_<version>_OPS_EulerOS_<hardware- Platform O&M installation


platform>.tgz package.

eCampusCore_<version>_Portal_EulerOS_<hardware- Installation package of the


platform>.tgz Platform Portal.

Link Services (System eCampusCore_<version>_LinkSoft_EulerOS_<hardware- Software package of the


Integration Service) platform>.tgz System Integration Service.

eCampusCore_<version>_IO.tgz Software package of the


campus link assets.

Link Services (Device eCampusCore_<version>_LinkDevice_EulerOS_<hardware- Software package of the


Integration Service) platform>.tgz Device Integration Service.

Service Openness CampusCore_<version>_APIGW_EulerOS_<hardware- Software package of the


(APIGW) platform>.tgz APIGW.

Instance management eCampusCore_<version>_PaaSServiceProvision_<hardware- Software package of the


service package platform>.tgz PaaS Instance Management
service.

Multi-tenant service Multi-tenant service eCampusCore_<version>_PaaSeLink.zip Adaptation package of


adaptation package (on the on eCampusCore eCampusCore.
operation portal)
APIGW service eCampusCore_<version>_PaaSAPIGW.zip Adaptation package of the
APIGW service.

Table 14 Asset package of the Device Integration Service

Category Software Package Description

Multi-tenant management Link Services (Device eCampusCore_<version>_DeviceAsset_EulerOS_<hardware- Asset package of the


software package (on the O&M Integration Service) platform>.tgz Device Integration
portal) Service.

Table 15 Version description document package

Category Software Package Description

Version description eCampusCore_<version>_vdd.zip Version description document package. It stores the current eCampusCore version
document information.

Table 16 VM templates

Server Type Software Package Description

x86 server VMTemplate_x86_64_Euler2.12.zip Used to deploy nodes other than the installer node and the nodes managed by
eContainer.

VMTemplate_x86_64_Euler2.12_Installer.zip Used to deploy the installer node.

VMTemplate_x86_64_CampusContainerImage.zip Used to deploy the nodes managed by eContainer.

Arm server VMTemPlate_aarch64_Euler2.12.zip Used to deploy nodes other than the installer node and the nodes managed by
eContainer.

VMTemplate_aarch64_CampusContainerImage.zip Used to deploy the nodes managed by eContainer.

VMTemplate_aarch64_Euler2.12_Installer.zip Used to deploy the installer node.

The initial password of the root user in the VM template is Huawei@12F3. During preinstallation, the password for the root user of the VM operating system (OS) is
reset.
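Because the eCampusCore package names embed <version> and <hardware-platform> placeholders, and both x86_64 and aarch64 packages must be downloaded, the concrete file names can be expanded with a short script. The following sketch uses a placeholder version string and an abridged template list taken from Table 13; it is an illustration only.

# Expand eCampusCore package-name templates into a concrete download checklist.
VERSION = "x.y.z"  # placeholder: use the actual eCampusCore component version
PLATFORMS = ["x86_64", "aarch64"]

TEMPLATES = [  # abridged from Table 13
    "eCampusCore_{v}_PreInstallation_{p}.zip",
    "eCampusCore_{v}_IntegrationFramework_{p}.zip",
    "eCampusCore_{v}_BasicService_EulerOS_{p}.tgz",
    "eCampusCore_{v}_Middleware_EulerOS_{p}.tgz",
    "eCampusCore_{v}_OPS_EulerOS_{p}.tgz",
]

checklist = [t.format(v=VERSION, p=p) for p in PLATFORMS for t in TEMPLATES]
for name in checklist:
    print(name)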

3.3.2 Integration Design


The integration design phase covers the planning and design of DCS, including the LLD of the system architecture, resource requirements, compute
system, network system, storage system, and O&M. The LLD template is output to provide guidance for software and hardware installation.


LLDesigner provides functions such as hardware configuration, device networking, and resource provisioning to quickly complete product planning
and design. You can use LLDesigner to simply and quickly generate a high-quality LLD plan. For details, see Planning Using LLDesigner . For
details about the integration design, see Datacenter Virtualization Solution 2.1.0 Integration Design Suite.
After the integration design is complete, install hardware and software devices of the project based on the LLD.
Before the installation, plan the compatibility information properly. For details about the compatibility information, see Huawei Storage
Interoperability Navigator.
Planning Using LLDesigner

3.3.2.1 Planning Using LLDesigner

Scenario
LLDesigner provides an integrated tool for network planning of DCS site deployment delivery. It works with SmartKit to generate an LLD template
to guide software and hardware installation, improving delivery efficiency.

Prerequisites
A W3 account is required for login. If you do not have a W3 account, contact Huawei technical support engineers.

Operation Process
Figure 1 LLD planning flowchart

Procedure
1. Use your W3 account to log in to eService website and choose Delivery Service > Storage > Deployment & Delivery > LLDesigner. The
LLDesigner page is displayed.

2. Click Create LLD. On the LLDesigner page that is displayed, choose Datacenter Virtualization Solution > Customize Devices to Create
LLD to create a project.

3. Set Service Type, Rep Office/Business Dept, Project Name, and Contract No., and click OK. The Solution Design page is displayed.

Items marked with * are mandatory.

4. Design the solution.

a. Set the solution scenario.


Set the solution scenario based on the site requirements. Table 1 describes the related parameters. Click Next.

Table 1 Solution scenario parameters

Parameter Value Option Example Value

Solution Version DCS 1.2.0 and later DCS 2.0.0

Component FusionCompute FusionCompute

eDME
eBackup
UltraVR
HiCloud
eDataInsight
eCampusCore
SFS

Storage Type Flash Storage -

Scale-out Storage

Flash Storage Type This parameter is valid when Storage Type is set to Flash Storage. -
IP SAN Storage
FC SAN Storage

Scale-Out Storage Type This parameter is valid when Storage Type is set to Scale-out Storage. -
Structured

Network Type Non-SDN Network overlay SDN

Network overlay SDN

Out-of-Band Management This parameter is valid when Network Type is set to Network overlay SDN. Yes
Yes
No

Scale-Out Storage Networking Scenario Replication -

iSCSI
Separated front- and back-end storage networks

Networking Mode Layer 2 networking Layer 2 networking

Layer 3 networking

b. Configure the device management parameters.

Click Add Device and enter device information. Right-click slots in the device diagram to add or modify interface cards. After the
setting is complete, click OK. After the configuration is complete, click Next.

Table 2 Device information

Parameter Value Option

Device Type Server


Switch
Flash storage

Scale-out Storage
NOTE:
Flash storage is included in the options when Flash storage is selected for Storage Type.
Scale-out storage is included in the options when Scale-out storage is selected for Storage Type.

CPU Architecture This parameter is valid when Device Type is set to Server.
X86
ARM

Network Type This parameter is valid when Device Type is set to Switch.
IP
FC

Storage Series This parameter is valid when Device Type is set to Flash storage.
OceanStor 6.x
OceanStor Dorado

Subtype This parameter is valid when Device Type is set to Scale-out storage.
Block

Device Model Select a device model based on the site requirements.

Device Quantity The device quantity must be a positive integer.

Disk Type If Device Type is set to Flash storage, the options are as follows:
NL_SAS
SAS_SSD
SAS_HDD
NVMe_SSD
If Device Type is set to Scale-out storage, the options are as follows:
SATA
SAS_HDD
NVMe_SSD
SAS_SSD

Disk Capacity This parameter is valid when Device Type is set to Flash storage or Scale-out storage.
Set the disk capacity based on the site requirements.

Disk Quantity This parameter is valid when Device Type is set to Flash storage or Scale-out storage.
The device quantity must be a positive integer.

Add at least one server and one switch.


At least three devices of the same model must be configured for the scale-out storage block service.

c. Select device models.


Set Device Model and Device Quantity, and click Next.

5. Implement engineering design.

a. Plan the scale-out storage network. Table 3 describes the parameters.

This step is supported only for scale-out storage.

Table 3 Scale-out storage networking parameters

Parameter Value Option


Back-End Storage Network Type 10GE TCP, 25GE TCP, 25GE RoCE, 100GE TCP, 100GE RoCE, and 100Gb IB

Multi-IP for Back-End Storage Network This parameter is valid when Back-End Storage Network Type is set to 25GE RoCE or 100GE RoCE.
The default value is 2.

Front-End Storage Network Type 10GE TCP, 25GE TCP, 25GE RoCE, 100GE TCP, 100GE RoCE, and 100Gb IB

Multi-IP for Front-End Storage Network This parameter is valid when Back-End Storage Network Type is set to 25GE RoCE or 100GE RoCE.
The default value is 2.

Replication Network Type 10GE TCP, 25GE TCP, and 100GE TCP

Replication Network IP Addresses The value can be 1 or 2.

iSCSI Network Type The value can be 10GE TCP or 25GE TCP.

iSCSI Network IP Address The value can be 1 or 2.

b. Configure traffic planning.


Select whether to use the shared service and management plane, configure Bond Mode for the management plane and service plane,
configure the MTU values, and click Next.

c. Configure naming rules.


Configure naming rules for cabinets, devices, and nodes. Columns with can be modified. You can view modification results in the
naming examples in the lower part. After the configuration is complete, click Next.

d. Configure the cabinet layout.

Configure the cabinet layout. You can click Add Cabinet and Reset Cabinet to perform corresponding operations. Click the setting
button in the upper left corner of the cabinet table to modify weight and power of devices. After the configuration is complete, click
Next.

The default cabinet power is 6,000 W. You can manually change it as required.

e. Plan disks.

To add disks, click Add Disk in the Operation column. To delete the added disks, click Remove in the Operation column.

This step is supported only for flash storage.

f. Configure device names.


Configure device names. You can click and modify specific node names and specific device names. After the configuration is
complete, click Next.

g. Plan clusters.

i. Compute cluster

Click Add, set Cluster Name and Management Cluster, select nodes to be added to the cluster from the node list, and click
OK. After the configuration is complete, click Next.

Only compute nodes are displayed in the node list.

ii. Control cluster


Set parameters such as Control Cluster Name, Metadata Storage Mode, and Available Nodes.

This step is supported only for scale-out storage.

iii. Replication Cluster


Select nodes to be added to the replication cluster, and click Next.

This step is supported only for scale-out storage.

h. Design the network topology.

Modify the network topology based on the network planning. You can drag a device to the desired position, scroll the mouse wheel
forward or backward to zoom in or zoom out on the topology, click a device icon to view the device information, and click the setting
button in the upper right corner of the figure legend to modify the line color and width. After the configuration is complete, click
Next.

The number of switches in the network topology is automatically generated as required.

6. Plan resources.

a. Flash storage

i. Perform the storage pool planning. Set Performance Layer Name and Performance Layer Hot Spare Policy for
the flash storage device, and click Add Storage Pool in the Operation column to add a device to a storage pool.
Click Add Storage Pools in Batches to add storage pools for devices with the same disk specifications in batches.

Table 4 Storage pool parameters

Parameter Value Option

Storage Pool Name -

Disks The disk information is the same as that configured in 4.b.

RAID Policy RAID 5


RAID 6
RAID 10
RAID-TP

Max. RAID Member Disks 4 to 25

Capacity Layer Hot Spare Policy None


Low
High
Custom

Capacity Alarm Threshold (%) 1 to 95

Capacity Used Up Alarm Threshold (%) 2 to 99

Performance Layer Quota (TB) -

ii. Perform the LUN planning. Click Add LUN in the Operation column to add the device to a LUN. Click Add LUNs in
Batches to add LUNs for devices configured with storage pools.

Table 5 Configuring LUNs

Parameter Value Option

LUN Name Prefix -

LUN Capacity -

LUN Quantity -

Owning Storage Pool -

b. Scale-out storage


i. Click the add button to configure storage pool information. Table 6 describes the related parameters.

Table 6 Storage pool information

Parameter Description

Storage Pool Name Name of a user-defined storage pool.

Service Type Service type of the storage pool. In this example, the value is Block Service.

Main Storage Type Main storage type of the disk pool.

Cache Type Cache type of the disk pool.

Encryption Type Encryption type of the storage pool. The options are Common and Self-encrypting. You are advised to select Common. If you have high requirements on data security, select Self-encrypting.
Common: This type of storage pool does not support data encryption.
Self-encrypting: This type of storage pool supports data encryption.

Security Level Security level of the disk pool. The options are Node and Cabinet.

Redundancy Policy Redundancy policy of the storage pool. The options are EC and Data copy.

EC EC redundancy policy of the disk pool.
NOTE:
This parameter is valid when Redundancy Policy is set to EC.

Data Fragment The value must be an even number ranging from 4 to 22.
NOTE:
This parameter is valid when Redundancy Policy is set to EC.

Data Copy Policy Number of data copies allowed in the storage pool. The value can be 2 or 3.
NOTE:
This parameter is valid when Redundancy Policy is set to Data Copy.

ii. After the storage pool information is configured, click the expand icon next to the storage pool to configure the disk pool. Select nodes and
set Required Disk Quantity. After the setting is complete, click OK to save the information.

Each node must be configured with at least four main storage disks.

iii. Click Next.

7. Design virtual nodes.


Set virtual node parameters for each component as prompted. For details, see the deployment parameter description of each component in
Installation Using SmartKit. After the configuration is complete, click Next.

In deployment scenarios, the following parameters are mandatory:


Owning CNA Name of the eDME component
Parameters under SmartKit Parameters of the eBackup and UltraVR components
Owning CNA Name and VM Shared Storage Name of the eDataInsight component. Set VM Shared Storage Name based on the actual name on
FusionCompute.
Owning CNA Name and Datastore of the HiCloud, eCampusCore, and SFS components.

8. Perform network planning.


Configure network segments or VLANs for the network plane as prompted. You can click network parameter values to customize the
parameters. Click Add to add a network plane. Set network planning parameters for compute nodes, storage nodes, and virtual nodes in
sequence. For details, see the parameter description of each component in Installation Using SmartKit. After the configuration is complete,
click Next.


If VRMs are deployed in active/standby mode, an arbitration IP address can be used to determine the activity status of the active and standby VRM
nodes.
In deployment scenarios, DVS Port Group Name of the eBackup component on the Backup Nodes page is mandatory. Set DVS Port Group Name
based on the actual name on FusionCompute.
In deployment scenarios, Service IP Address, Service Plane Subnet Mask, and Service Plane Port Group of the cloudsop and ndp nodes of the
eDataInsight component on the HDFS Nodes page are mandatory. DVS Port Group Name of the HiCloud component is mandatory.

9. Perform FusionCompute deployment configuration. Set Login Information, General Host Information, Time Zone and NTP
Information, Management Data Backup Information, eDataInsight Information, and HiCloud Information. For details, see the CNA
installation parameter description and VRM installation parameter description in Installation Using SmartKit. After the configuration is
complete, click Next. If you do not need to set those parameters, click Skip to ignore the current configuration and proceed to the next step.

In deployment scenarios, parameters in the FusionCompute Deployment Configuration page are mandatory.

10. Export the document.

Select LLD Design Document and click Export to export the LLD design document package to the local PC. Decompress the
package to obtain the LLD design document. Delivery personnel can complete project delivery based on the data planned in the
LLD design document.

Select LLD Deployment Document and click Export to export the LLD deployment document package to the local PC.
Decompress the package to obtain the LLD deployment parameter template. Delivery personnel can import the LLD deployment
parameter template to SmartKit for deployment.

After the parameters are set and the LLD deployment document is exported, you need to manually enter information such as the user name,
password, and installation package path in the document, for example, BMC user name, BMC password, FusionCompute login user name, and
FusionCompute login password.
Configure Network Port of Management IP Address for the host that serves as the first node.
Only one host can be configured as the first node and this node is the management node by default. Other hosts can only be configured as non-
first nodes. The first node also needs to be configured with the network port name of the management IP address.
You can leave parameter Network Port of Management IP Address blank for nodes other than the first node.

3.3.3 Planning Communication Ports


You need to understand and plan the communication ports and protocols used for DCS. For details about the port information to be planned for
DCS, see Datacenter Virtualization Solution 2.1.0 Communication Matrix.

3.3.4 Accounts and Passwords


For details about how to obtain the passwords and accounts used for installing DCS, see Datacenter Virtualization Solution 2.1.0 Account List.

3.3.5 Preparing Data


In DCS, SmartKit is used to install components. Data to be prepared before the installation is described in the following sections.
Preparing Data for FusionCompute

Preparing Data for eDataInsight

Preparing Data for eCampusCore

3.3.5.1 Preparing Data for FusionCompute


Table 1 lists the data required for installing FusionCompute using SmartKit.

Table 1 Data preparation

Data Type Parameter Description Example Value


DHCP service parameters Start IP address of the DHCP pool This parameter is mandatory. 192.168.60.6

DHCP pool capacity 50

Together, these two parameters specify the start IP address and the number of IP addresses that can be allocated by the DHCP service to the hosts to be installed, and therefore determine the IP address segment that the DHCP service can allocate (see the worked sizing example after Table 1).
The IP address segment must meet the following requirements:
The IP address segment is planned in the management plane network segment and does not conflict with the management plane IP addresses of the planned nodes, such as the planned host management plane IP addresses and the VRM node management plane IP address.
The IP addresses in the IP address segment must be unused IP addresses.
The DHCP IP addresses may be used by devices not included in the original plan. Therefore, configure the number of IP addresses in the DHCP address pool to be at least twice the number of physical nodes.
No other DHCP service should exist in the environment. Only the DHCP service of the installation tool can be used.
Check whether PXE boot is enabled.

DHCP mask/prefix This parameter is mandatory. 255.255.255.0


Specifies the subnet mask of the IP address segment assigned by the DHCP
pool.

DHCP gateway This parameter is mandatory. 192.168.60.1


Specifies the gateway of the IP address segment assigned by the DHCP pool.

Basic Node Information root Password This parameter is mandatory. -


Specifies the password of the OS account for logging in to the CNA node. You
need to set the password during the installation.

grub Password This parameter is mandatory. -


Specifies the password of the internal system account. You need to set the
password during the installation.

gandalf Password Specifies the password of user gandalf for logging in to the CNA node to be -
installed.

Redis Password Redis password, which is set during CNA environment initialization. -

Host node information Host name This parameter is mandatory. CNA01
NOTE:
You need to prepare information about at least three hosts.
Identifies a host in the system.
The host name can contain only digits, letters, hyphens (-), and underscores (_). It must start with a letter or a digit and cannot exceed 63 characters.
MAC address This parameter is optional. 08:19:A6:9A:**:**
Specifies the MAC address of the physical port on the host for PXE booting
host OSs. If the network configuration needs to be specified before host
installation, obtain the MAC address to identify the target host. For details, see
the host hardware documentation.
To obtain the MAC address, log in to the BMC system of the server, choose Information > System Info > Network, and click the expand icon on the left of the target port name in the Port Properties area. When installing the Great Wall server,
you must manually enter the MAC address of an available NIC.
NOTE:

The BIOS page varies depending on the server model and iBMC version. The
method of obtaining the MAC address described here is for reference only.

BMC IP This parameter is mandatory. -


BMC IP indicates the BMC system IP address of the host server.

BMC username This parameter is mandatory. -


Authentication is required for BMC login. This parameter is mandatory.
Otherwise, the installation will fail.

BMC password This parameter is mandatory. -


Authentication is required for BMC login. This parameter is mandatory.
Otherwise, the installation will fail.


CNA management This parameter is optional for Custom Installation. 192.168.60.6


IP This parameter is mandatory for One-Click Installation.

Subnet mask This parameter is optional for Custom Installation. -


This parameter is mandatory for One-Click Installation.

Gateway This parameter is optional for Custom Installation. -


This parameter is mandatory for One-Click Installation.

Management plane The default value is 0 and you can retain the default value. -
VLAN tag

Network Port of The current node is the first node. You need to specify a network port in -
Management IP management IP address configuration, for example, eth0.
Address

VRM VM information Management IP This parameter is mandatory. 192.168.60.31


address Specifies the IP address of the VRM VM. Ensure that the IP address is in the
management plane network segment. You are advised to select the management
IP address from a private network segment, because a management IP address
from a public network segment may pose security risks.

Management plane This parameter is optional. 2


VLAN It specifies the VLAN of the management plane. If no value is specified, the
system uses VLAN 0 by default.

VM specifications This parameter is mandatory. Customized


configuration Specifies the methods of configuring VRM VM specifications. configuration
method
The value can be By deployment scale or Customized configuration.

CPU and memory This parameter is mandatory. 8 CPUs, 8 GB


(GB) In the system scale configuration, CPU and memory specifications are as memory
follows:
1000 VMs, 50 PMs (not supported by Phytium): 4 CPUs, 6 GB memory
3000 VMs, 100 PMs: 8 CPUs, 8 GB memory
5000 VMs, 200 PMs: 12 CPUs, 16 GB memory
When performing customized configuration, refer to the relationship between the above system scales and specifications.

root This parameter is mandatory. -


Specifies the password of the OS account for logging in to the VRM node. You
need to set the password during the installation.

GRUB This parameter is mandatory. -


Specifies the password of the internal system account. You need to set the
password during the installation.

Active/Standby VRM Floating IP address This parameter is mandatory. 192.168.60.30


node information This parameter is required only when two VRM nodes are deployed in
active/standby mode.
Specifies the floating IP address of the VRM management plane.

Arbitration IP This parameter is mandatory. 192.168.60.1


address This parameter is required only when two VRM nodes are deployed in
active/standby mode.
A minimum of one arbitration IP address needs to be specified, and a maximum
of three arbitration IP addresses can be specified.
You are advised to set the first arbitration IP address to the gateway address of
the management plane, and set other arbitration IP addresses to IP addresses of
servers that can communicate with the management plane, such as the AD
server or the DNS server.
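Worked sizing example for the DHCP parameters above: with the example values in Table 1 (start IP address 192.168.60.6 and a pool capacity of 50), the DHCP service can allocate the 50 addresses from 192.168.60.6 to 192.168.60.55. Under the rule of at least twice the number of physical nodes, such a pool is sufficient for a deployment of up to 25 physical nodes, provided that none of these addresses is otherwise in use on the management plane.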

3.3.5.2 Preparing Data for eDataInsight


(Optional) Creating an Authentication User on OceanStor Pacific HDFS in the Decoupled Storage-Compute Scenario

(Optional) Collecting OceanStor Pacific HDFS Domain Names and Users in the Decoupled Storage-Compute Scenario


3.3.5.2.1 (Optional) Creating an Authentication User on OceanStor Pacific HDFS in the Decoupled Storage-Compute Scenario

You need to create a user on OceanStor Pacific for authentication and interconnection based on the installed components of eDataInsight.

You can use the NTP service on the storage cluster or compute cluster to synchronize time. If the NTP service is used, ensure that the NTP server can
communicate with the management network of the storage cluster or compute cluster.
Multiple eDataInsight components cannot connect to the same OceanStor Pacific HDFS.

Deploying OceanStor Pacific Series HDFS Storage Service

Configuring Basic HDFS Storage Services

Configuring NTP Time Synchronization

Configuring Users on the Storage

3.3.5.2.1.1 Deploying OceanStor Pacific Series HDFS Storage Service


Deploy the service by following instructions provided in OceanStor Pacific Series 8.2.1 Software Installation Guide.

3.3.5.2.1.2 Configuring Basic HDFS Storage Services


Configure basic services, such as storage pools, namespaces, and service networks. For details, see OceanStor Pacific Series 8.2.1 Basic Service
Configuration Guide for HDFS.

3.3.5.2.1.3 Configuring NTP Time Synchronization


The time of the storage cluster must be the same as that of the compute cluster.

You can use the NTP service on the storage cluster or compute cluster to synchronize time. If the NTP service is used, ensure that the NTP server can communicate
with the management network of the storage cluster.

Procedure
1. Log in to DeviceManager.

2. Choose Settings > Time Settings and click Modify.

3. Select Synchronize with NTP server time.


Set NTP synchronization parameters. Table 1 lists the parameters.

Table 1 NTP synchronization parameters

Parameter Description

NTP Server IP address of an NTP server. A maximum of three NTP server addresses can be configured.
Address You can click Test to verify the availability of the NTP server.
Ensure that the time of multiple NTP servers is the same. Otherwise, the time synchronization function will be abnormal.

NTP Authentication Whether to enable NTP authentication. After NTP authentication is enabled, the system authenticates and identifies the NTP server.
NTP authentication can be enabled only when NTPv4 or later is used. After the NTP server is authenticated, the time is automatically synchronized to the storage device.

Time Zone Time zone where the cluster is located.

4. Click Save.
Confirm your operation as prompted.
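For example, before saving the settings, you can check from any host that can reach the planned NTP server whether the server responds. This is a minimal sketch only; it assumes the ntpdate client is available on that host, and 192.168.60.1 is a hypothetical NTP server address used for illustration:
# ntpdate -q 192.168.60.1
If the command returns the server's stratum and offset instead of timing out, the NTP server is reachable from that network.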

3.3.5.2.1.4 Configuring Users on the Storage


Configuring Static Mapping

Configuring Proxy Users on the Storage

3.3.5.2.1.4.1 Configuring Static Mapping

Procedure
1. Log in to DeviceManager, choose Resources > Access > Authentication User and click the UNIX Users tab.

Select the account corresponding to the namespace to be interconnected from the Account drop-down list.

2. Create a user-defined user group on the Local Authentication User Group tab page.

User groups ossgroup, supergroup, root, and hadoop must be created. Other user groups can be added as required.

The ID of user group root must be set to 0.

If the LDAP is connected, the ID of user group supergroup must be 10001. There is no requirement on the IDs of other user groups.

3. Create a user on the Local Authentication User tab page.

The primary group of user root is root, and the secondary group of user root is supergroup. The primary groups of other users are
supergroup.

Create users based on the following mapping between users and user groups:

Table 1 Mapping between users and user groups

User User Group 1 (Primary Group) User Group 2 User Group 3 User Group 4

hbase hadoop supergroup - -

ossuser ossgroup supergroup hadoop kafkaadmin

hdfs hadoop supergroup - -

hive hive hadoop supergroup -


spark2x hadoop ossgroup supergroup -

flink hadoop supergroup - -

HTTP hadoop - - -

kafka kafkaadmin - - -

mapred hadoop supergroup - -

yarn supergroup - - -

ftpserver supergroup hadoop - -

Users root, spark2x, hdfs, flink, ossuser, hbase, yarn, mapred, and HTTP must be created.

User HTTP is required for Kerberos interconnection.

Set Super User Group to supergroup, UMASK to 022, and Mapping Rule to DEFAULT.

4. On the Account page, click the desired account. On the displayed page, click the Protocol tab and click Modify. Type supergroup in Super
User Group and click OK.

3.3.5.2.1.4.2 Configuring Proxy Users on the Storage


Security mode: Users HTTP, spark2x, yarn, hive, and hdfs need to be configured as proxy users. You can add other proxy users as required.
The proxy users must correspond to the authentication users.

Simple mode: User omm needs to be configured as a proxy user. You can add other proxy users as required.


Procedure
1. Log in to DeviceManager, choose Resources > Access > Accounts, and select the configured account. The following figure shows an
example. Click Protocol. The access control page of HDFS service is displayed.

2. In the Proxy User area, click Add to add proxy users.


Type * in the Host, User Group, and User text boxes.

3.3.5.2.2 (Optional) Collecting OceanStor Pacific HDFS Domain Names and Users in the Decoupled Storage-Compute Scenario

In the decoupled storage-compute scenario, you need to collect OceanStor Pacific HDFS domain names and users, including the following
parameters: OceanStor Pacific HDFS DNS address, DNS IP address, OceanStor Pacific HDFS management IP address, and management plane user
name and password.

Obtaining the DNS IP Address


1. Log in to DeviceManager and choose Resources > Service Network.

2. Click the setting button next to Subnet. The Manage Subnet page is displayed.

3. Select a subnet and view the value of IP Address of General DNS Service in Subnet.
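For example, you can verify that the collected HDFS domain name resolves through the collected DNS IP address before proceeding. This is a minimal sketch; hdfs.example.com and 192.168.80.10 are hypothetical placeholders for the collected domain name and DNS IP address:
# nslookup hdfs.example.com 192.168.80.10
If the command returns the IP addresses of the HDFS service, the DNS information has been collected correctly.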


3.3.5.3 Preparing Data for eCampusCore


Planning Data

Checking the FusionCompute Environment

Service deployment requires the container management capability of FusionCompute. Before the installation, you need to ensure that the environment has been equipped with the capability.

Obtaining the eDME Certificate

Before the installation, you need to obtain the eDME certificate.

Obtaining the FusionCompute Certificate

Before the installation, you need to obtain the FusionCompute certificate.

Creating and Configuring the OpsMon User

3.3.5.3.1 Planning Data

Network Planning
Planning the IP addresses of the service component

Table 1 IP address planning for service VMs

VM Name Host FusionCompute Management Subnet VIP

installer 10.168.52.11 N/A

nfs-dns-1 10.168.52.12 FusionCompute management subnet VIP (NFS_VIP) of NFS. An example is 10.168.52.20.

nfs-dns-2 10.168.52.13

foundation-1 10.168.52.14 FusionCompute management subnet VIP (CAMPUSLBSERVICE_VIP) of the internal gateway. An example is 10.168.52.21.

foundation-2 10.168.52.15

foundation-3 10.168.52.16

ops-1 10.168.52.17 N/A

ops-2 10.168.52.18

gaussv5-1 10.168.52.19 FusionCompute management subnet VIP (GAUSSV5_VIP) of the database. An example is 10.168.52.22.

gaussv5-2 10.168.52.20

Planning IP addresses of the container cluster


Table 2 IP addresses of the container cluster

Parameter IP Address Example Description

FusionCompute eContainer cluster 10.168.52.20 Start IP address of the management network segment of the master node in the container cluster
master node manage start IP created on FusionCompute. The management network segment must be the same as the eDME
network segment.

FusionCompute eContainer cluster 10.168.52.24 End IP address of the management network segment of the master node in the container cluster
master node manage end IP created on FusionCompute.
The container cluster IP address segment must include at least five IP addresses.
All IP addresses between the start IP address and end IP address cannot conflict with other IP
addresses.

Password Planning
To ensure successful installation, the planned password must meet the following software verification rules:

Contain 10 to 32 characters, including uppercase letters, lowercase letters, digits, and the special character @.

Not contain more than 3 identical characters or more than 2 consecutive identical characters.

Not be the same as the username or the reverse of the username, regardless of the letter case. The password cannot contain admin or its reverse, a mobile phone number, or an email address, and cannot be a weak password.

Not contain root, huawei, admin, campusSnmp, sysomc, hicampus, or gandalf (case-sensitive).
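The following is a minimal sketch (not part of the product tooling) for pre-checking a planned password against the length and character-class rules above on any Linux host with bash; it does not cover the forbidden-word or repetition rules, and the value ExamplePw@2025 is a hypothetical placeholder, not a recommended password:
pw='ExamplePw@2025'
[[ ${#pw} -ge 10 && ${#pw} -le 32 && $pw =~ [A-Z] && $pw =~ [a-z] && $pw =~ [0-9] && $pw == *@* ]] && echo "basic checks passed" || echo "basic checks failed"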

The passwords involved are as follows.

Password Description

Password of the O&M Password of the O&M management console and database, password of the admin user of the eCampusCore O&M
management console and management console, password of the sysadmin user of the database, password of the eDME image repository, and
database machine-machine account password.

Password used in SNMPv3 SNMPv3 authorization and authentication password used by service components. The verification rules are the same as
authentication those for the passwords of the O&M console and database.
The password cannot be the same as the password of the O&M management console or database.
The value is the same as that of Common default password on SmartKit.

sysomc user password Password of the sysomc user created on the VM during installation.
The value is the same as that of Common default password 2 on SmartKit.

Password of the root user of Password of the root user of the VM.
the VM The password cannot be the same as the template password (Huawei@12F3).

Password of the interface To ensure that FusionCompute interfaces can be properly called during the installation, you need to create the interface
interconnection user interconnection user OpsMon with the system management permission on the FusionCompute page before the
installation. You can preconfigure the user password.

Machine-machine account Machine-machine account password for interconnecting with eDME after the service is deployed.
password

3.3.5.3.2 Checking the FusionCompute Environment


Service deployment requires the container management capability of FusionCompute. Before the installation, you need to ensure that the
environment has been equipped with the capability.

Prerequisites
Log in to FusionCompute as a user associated with the Administrator role.

Procedure
1. Log in to FusionCompute as the admin user.


Enter https://Floating IP address of the VRM node:8443 in the address bar.

2. In the navigation pane, click , choose System Management > System Configuration > License Management, and check whether the
license authorization information meets the requirements. If you do not have the related license, contact Huawei technical support.

For FusionCompute 8.7 and later versions, eContainer Suite License for 1 vCPU is available.

For versions earlier than FusionCompute 8.7, Platinum Edition is available.

3. Check whether the container management function is enabled.


Check whether Container Management is displayed in the navigation pane.

If yes, the container management function has been enabled and the environment meets the requirements.

If no, choose System Management > System Configuration > Service and Management Nodes. Then, choose More > Enable
Container Management to enable the container management function.

4. Check whether the VM image and software package have been uploaded to the content library of the container.

Check whether the VM image has been uploaded. For details, see "Configuring a VM Image" in FusionCompute 8.8.0 Product
Documentation.

Check whether the software package has been uploaded. For details, see "Configuring a Software Package" in FusionCompute 8.8.0
Product Documentation.

3.3.5.3.3 Obtaining the eDME Certificate


Before the installation, you need to obtain the eDME certificate.

Prerequisites
You have logged in to the eDME O&M portal as an O&M administrator user at https://eDME O&M portal IP address:31943.

Procedure
1. In the navigation pane, choose Settings > Certificate Management. On the page that is displayed, click APIGWService.

2. Locate the certificate whose Certificate Alias is server_chain.cer and click the download icon in the Operation column to obtain the certificate.

3.3.5.3.4 Obtaining the FusionCompute Certificate


Before the installation, you need to obtain the FusionCompute certificate.

Prerequisites
You have obtained the management IP address of one of the FusionCompute VRM nodes.

In the navigation pane of FusionCompute, click the search icon and search for the node name VRM01.

You have obtained the passwords for the gandalf and root users of the VRM nodes of FusionCompute.

Procedure


1. Log in to the VRM node on FusionCompute as the gandalf user using the management IP address of the VRM node.

2. Switch to the root user.


$ su - root

3. Run the following command and obtain the content of the FusionCompute certificate from the command output:
# cat /etc/galax/certs/vrm/rootCA.crt

4. Exit the root user.


# exit

5. Create a text file on the local PC, change the file name extension to .crt, and name the file rootCA.crt.

6. Copy the content obtained in 3 to the rootCA.crt file and save the file.
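For example, to confirm that the saved file contains a valid certificate before using it (an optional check, assuming the openssl client is available on the local PC):
# openssl x509 -in rootCA.crt -noout -subject -dates
The command prints the certificate subject and validity period if the content was copied correctly.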

3.3.5.3.5 Creating and Configuring the OpsMon User


Before installing service components of eCampusCore, you need to configure the virtualization platform username and password. This section
describes how to create the OpsMon user.

Context
Requirements for creating an interface interconnection user on FusionCompute are described as follows:

Permission Type: Select Interface interconnection user.

Username: Enter OpsMon.

Permission Type: Select System administrator.

Role: Select administrator.

Procedure
FusionCompute 8.6.0 is used as an example. The GUIs of other versions are different. For details, see related FusionCompute documents.

1. Log in to FusionCompute as the admin user.

2. In the navigation pane of FusionCompute, click the System Management icon to go to the System Management page.

3. Choose Rights Management > User Management. On the User Management page, create the OpsMon user.

Permission Type: Select Interface interconnection user.

Username: Enter OpsMon.


Permission Type: Select System administrator.

Role: Select administrator.

Password: Set the OpsMon user password. The only special character that the password can contain is the at sign (@).
4. View the password policy.
Choose Rights Management > Rights Management Policy. On the page that is displayed, check whether the policy Interface interconnection user forcibly change passwords upon a reset or initial login is enabled.

If the value for the policy is Yes, call the interface by referring to FusionCompute xxx APIs to change the password.

If the value for the policy is No, no action is required.

If the policy does not exist, no action is required.

3.3.6 Compatibility Query


For details about the compatibility information of servers, storage devices, and switches, see Huawei Storage Interoperability Navigator.

3.4 Deploying Hardware


Hardware Scenarios

Installing Devices

Installing Signal Cables

Powering On the System

Configuring Hardware Devices

3.4.1 Hardware Scenarios

Scenario Overview
The typical configuration of DCS is separated deployment (with servers, leaf switches, spine switches, and flash storage or scale-out storage) or
hyper-converged deployment.

Figure 1 shows the networking of the separated deployment scenario.

For details about hyper-converged deployment, see Product Description > System Architecture > Logical Architecture in FusionCube
1000H Product Documentation (FusionCompute).

Figure 1 Typical scenario networking (separated deployment)


Networking description

All switching devices are configured in M-LAG mode on the standard Layer 2 network.

Leaf switches connect all the network ports on host network planes (management plane, service plane, and storage plane) to the network, as
shown in Figure 2.

Leaf switches connect the management plane and storage plane of storage devices to the network. Figure 3 shows the connection of flash
storage. For details about the connection of scale-out storage, see "Network Planning" in OceanStor Pacific Series 8.2.1 Product
Documentation. For details about the connection in hyper-converged scenarios, see FusionCube 1000H Product Documentation
(FusionCompute).

Leaf switches connect to each switch to implement plane interconnection between devices. Spine switches connect the cloud data center to
external networks.

Figure 2 Connecting hosts to access switches

Figure 3 Connecting storage devices (flash storage) to access switches


3.4.2 Installing Devices


This section describes the installation of hardware devices. Table 1 lists some typical models as a reference guide. For details about other supported
servers, storage devices, and switches, see Huawei Storage Interoperability Navigator.

Table 1 Supported device models (typical models)

Device Type Model Reference

Server TaiShan 200 (model 2280) For details about installation operations, see TaiShan Server Hardware Installation SOP.
TaiShan 200 (model 2480)
TaiShan 200K (model 2280K)

Storage device OceanStor 5310 For details about installation operations, see OceanStor 5310 Series V700R001C00
(Version 6) Installation Guide.

OceanStor 5510 For details about installation operations, see OceanStor 5510 Series and 5610
(Version 6) V700R001C00 Installation Guide.

OceanStor 5610
(Version 6)

OceanStor 6810 For details about installation operations, see OceanStor 6x10 and 18x10 Series
(Version 6) V700R001C00 Installation Guide.

OceanStor 18510
(Version 6)
OceanStor 18810
(Version 6)

OceanStor Dorado For details about installation operations, see OceanStor Dorado 3000 V700R001C00
3000 6.x.x Installation Guide.

OceanStor Dorado For details about installation operations, see OceanStor Dorado 5000 and Dorado
5000 6.x.x 6000 V700R001C00 Installation Guide.

OceanStor Dorado
6000 6.x.x

OceanStor Dorado For details about installation operations, see OceanStor Dorado 8000 and Dorado
8000 6.x.x 18000 V700R001C00 Installation Guide.

OceanStor Dorado
18000 6.x.x


OceanStor Pacific For details about installation operations, see OceanStor Pacific Series 8.2.1
9920 Installation Guide.

OceanStor Pacific
9540
OceanStor Pacific
9520

Hyper-convergence FusionCube 1000H For details about installation operations, see FusionCube 1000H Product
Documentation (FusionCompute).

Leaf or border leaf switch CE6881-48S6CQ For details about installation operations, see CloudEngine 9800, 8800, 7800, 6800,
and 5800 Series Switches Hardware Installation and Maintenance Guide (V100 and
CE6857F-48S6CQ V200).
CE6857F-48T6CQ
CE6860-SAN

Spine switch CE8850-SAN For details about installation operations, see CloudEngine 9800, 8800, 7800, 6800,
and 5800 Series Switches Hardware Installation and Maintenance Guide (V100 and
V200).

Out-of-band management switch (only CE5882-48T4S For details about installation operations, see CloudEngine 5882 Switch
for the network overlay SDN solution) V200R020C10 Product Documentation.

SDN controller (only for the network iMaster NCE-Fabric For details about installation operations, see iMaster NCE-Fabric V100R024C00
overlay SDN solution) appliance Product Documentation.

Firewall (only for the network overlay USG6650E For details about installation operations, see HUAWEI USG6000, USG9500, and
SDN solution) (enterprise) NGFW Module V500R005C20 Product Documentation.
Eudemon1000E-G5
(carrier)

When deploying a host, check whether the RAID independent power supply (battery or capacitor) system works properly. If any exception occurs, disable the
RAID cache. Otherwise, files may be damaged due to unexpected power-off.
For details about how to check whether the RAID independent power supply works properly and how to disable the RAID cache, see the corresponding product
documentation of the server.

3.4.3 Installing Signal Cables


Separated Deployment Networking

Hyper-Converged Deployment Networking

3.4.3.1 Separated Deployment Networking

This section applies only to flash storage and scale-out storage deployment scenarios.

Procedure
1. Determine the positions of ports on the TaiShan 200 server (model 2280).
You are advised to view the port locations through the 3D display of the server, which is available on the Huawei technical support website.

2. Determine the positions of ports on the switch.


Each CE6881-48S6CQ switch provides 48 10GE downlink ports and 6 uplink ports.
Each CE8850-SAN provides 32 downlink ports and 8 uplink ports.

3. (If the network overlay SDN solution is not used) Use cables to connect servers, switches, and storage devices.

The Mgmt ports are GE ports of servers for BMC hardware management. You can select a networking mode based on the site requirements.
Servers connect to 48-port switches through O/E converters.

Servers directly connect to GE switches.

Figure 1 Networking diagram (without the network overlay SDN solution)

Table 1 Port connection planning

Device Port Planning

Compute leaf switch Uplink port Ports 1 and 2 are connected to downlink ports 1 to 4 of the spine switch.
(Using CE6881-48S6CQ as an Ports 3 and 4 are connected to uplink ports 3 and 4 of the other compute leaf switch and configured in
example) M-LAG mode.
Ports 5 and 6 are reserved.

Downlink Ports 1 to 12 are connected to ports 1 and 2 of the management plane planned for the server.
port Ports 13 to 24 are connected to ports 3 and 4 planned for the service plane of the server.
Ports 25 to 47 are reserved.
Port 48 is connected to uplink ports 1 to 4 of the BMC access switch.

Storage leaf switch Uplink port Ports 1 and 2 are connected to downlink ports 5 to 8 of the spine switch.
(Using CE6881-48S6CQ as an Ports 3 and 4 are connected to uplink ports 3 and 4 of the other storage leaf switch and configured in
example) M-LAG mode.
Ports 5 and 6 are reserved.

Downlink Ports 1 to 30 are connected to ports 5 to 8 of the storage plane planned for the server.
port Ports 31 to 42 are connected to ports 1 to 8 of the storage device.
Ports 43 to 48 are reserved.

Spine switch Uplink port Ports 1 and 2 are connected to the uplink network.
(Using CE8850-SAN as an Ports 3 and 4 are connected to ports 3 and 4 of the other spine switch and configured in M-LAG mode.
example) Ports 5 to 8 are reserved.

Downlink Ports 1 to 4 are connected to uplink ports 1 and 2 of the compute leaf switch.
port Ports 5 to 8 are connected to uplink ports 1 and 2 of the storage leaf switch.
Ports 9 to 32 are reserved.

Server Uplink port Ports 1 and 2 are planned as management plane ports, which are connected to downlink ports 1 to 12
of the compute leaf switch.
Ports 3 and 4 are planned as service plane ports, which are connected to downlink ports 13 to 24 of the
compute leaf switch.
Ports 5 to 8 are planned as storage plane ports, which are connected to downlink ports 1 to 30 of the
storage leaf switch.
The management network port is connected to downlink ports 1 to 6 of the BMC access switch.

Storage device Uplink port Ports 1 to 8 are connected to downlink ports 31 to 42 of the storage leaf switch.
The management network port is connected to downlink ports 7 to 12 of the BMC access switch.

BMC access switch Uplink port Ports 1 to 4 are connected to downlink port 48 of the compute leaf switch.
(Using CE5882-48T4S as an
example) Downlink Ports 1 to 6 are connected to the management network port of the server.
port Ports 7 to 12 are connected to the management network port of the storage device.
Ports 13 to 48 are reserved.

4. Optional: (Only for the network overlay SDN solution) Confirm the port positions of the SDN controller and out-of-band management
switch.
Use cables to connect servers, switches, and storage devices.

Figure 2 Networking diagram (network overlay SDN solution)


Table 2 Port connection planning (network overlay SDN solution)

Device Port Planning

Compute leaf switch Uplink port Ports 1 and 2 are connected to downlink ports 1 to 4 of the spine switch.
(Using CE6881-48S6CQ as an example) Ports 3 and 4 are connected to uplink ports 3 and 4 of the other compute leaf switch and
configured in M-LAG mode.
Port 5 is connected to downlink ports 8 and 9 of the out-of-band management switch.
Port 6 is reserved.

Downlink Ports 1 to 12 are connected to ports 1 and 2 of the management plane planned for the server.
port Ports 13 to 24 are connected to ports 3 and 4 planned for the service plane of the server.
Ports 25 to 47 are reserved.
Port 48 is connected to uplink ports 1 to 4 of the BMC access switch.

Storage leaf switch Uplink port Ports 1 and 2 are connected to downlink ports 5 to 8 of the spine switch.
(Using CE6881-48S6CQ as an example) Ports 3 and 4 are connected to uplink ports 3 and 4 of the other storage leaf switch and
configured in M-LAG mode.
Port 5 is connected to downlink ports 10 and 11 of the out-of-band management switch.
Port 6 is reserved.

Downlink Ports 1 to 30 are connected to ports 5 to 8 of the storage plane planned for the server.
port Ports 31 to 42 are connected to ports 1 to 8 of the storage device.
Ports 43 to 48 are reserved.

Spine switch Uplink port Ports 1 and 2 are connected to the uplink network.
(Using CE8850-SAN as an example) Ports 3 and 4 are connected to uplink ports 3 and 4 of the other spine switch and configured
in M-LAG mode.
Port 5 is connected to downlink port 7 of the out-of-band management switch.
Ports 6 to 8 are reserved.

Downlink Ports 1 to 4 are connected to uplink ports 1 and 2 of the compute leaf switch.
port Ports 5 to 8 are connected to uplink ports 1 and 2 of the storage leaf switch.
Ports 9 to 32 are reserved.

Server Uplink port Ports 1 and 2 are planned as management plane ports, which are connected to downlink ports
1 to 12 of the compute leaf switch.
Ports 3 and 4 are planned as service plane ports, which are connected to downlink ports 13 to
24 of the compute leaf switch.
Ports 5 to 8 are planned as storage plane ports, which are connected to downlink ports 1 to
30 of the storage leaf switch.
The management network port is connected to downlink ports 4 to 9 of the BMC access
switch.

Storage device Uplink port Ports 1 to 8 are connected to downlink ports 31 to 42 of the storage leaf switch.
The management network port is connected to downlink ports 10 to 15 of the BMC access
switch.

SDN controller Uplink port Ports 1 and 2 are connected to downlink ports 1 to 3 of the out-of-band management switch.
(Using the iMaster NCE-Fabric appliance Ports 3 and 4 are connected to downlink ports 4 to 6 of the out-of-band management switch.
as an example) The management network port is connected to downlink ports 1 to 3 of the BMC access
switch.

Out-of-band management switch Uplink port Ports 1 and 2 are connected to uplink ports 1 and 2 of the other out-of-band management
(Using CE5882-48T4S as an example) switch and configured in M-LAG mode.
Ports 3 and 4 are reserved.

Downlink Ports 1 to 3 are connected to ports 1 and 2 of the SDN controller.


port Ports 4 to 6 are connected to ports 3 and 4 of the SDN controller.
Port 7 is connected to uplink port 5 of the spine switch.
Ports 8 and 9 are connected to uplink port 3 of the compute leaf switch.

Ports 10 and 11 are connected to uplink port 3 of the storage leaf switch.

BMC access switch Uplink port Ports 1 to 4 are connected to downlink port 48 of the compute leaf switch.
(Using CE5882-48T4S as an example)
Downlink Ports 1 to 3 are connected to the management network port of the SDN controller.
port Ports 4 to 9 are connected to the management network port of the server.
Ports 10 to 15 are connected to management network port of the storage device.
Ports 16 to 48 are reserved.

Firewall - 10GE port 1 in slots 1 and 3 is used to connect to the active and standby firewalls.
10GE port 0 in slots 1 to 5 is used to connect to the core switch.

3.4.3.2 Hyper-Converged Deployment Networking


For details about how to install signal cables for switches and servers in the hyper-converged deployment scenario, see Installation and
Configuration > Hardware Deployment > Installing Signal Cables in FusionCube 1000H Product Documentation (FusionCompute).

3.4.4 Powering On the System

Scenarios
After all hardware devices are installed, you need to power on the entire system for trial running and check whether the hardware devices are
successfully installed.

Operation Process
Figure 1 System power-on process

Procedure
1. Turn on the power switches of the power distribution cabinet (PDC).

The voltage of the AC input power supply to the cabinet must be 200 V to 240 V to prevent damage to devices and ensure the safety of the installation personnel.
Before powering on the system, ensure that uplink ports are not connected to customers' switching devices.
Power on basic cabinets and then power on extension cabinets.
Turn off switches on the power distribution units (PDUs) before turning on the switches that control power supply to all cabinets in the PDC.

a. On the PDC side, a professional from the customer turns on the power switches for all cabinets in sequence.

b. Obtain the information on the PDU monitor or use a multimeter to check if the PDU output voltage stays between 200 V and 240 V.


If the PDU output voltage is not between 200 V and 240 V, contact customers and Huawei technical support engineers immediately. Do not perform
the subsequent procedure.

2. Turn on the power switches of the PDUs one by one in the cabinet.

When the PDUs are powered on, the server is powered on.

3. Turn on the power switches of devices.

4. Check the status of devices.

a. Check the running status of the fans. Ensure that the fans run at full speed and then at even speed, and that the sound of the fans is
normal.

b. Check the indicator status on device panels to ensure that the devices are working properly.

3.4.5 Configuring Hardware Devices


After hardware devices are deployed, you need to perform initial configuration for the hardware devices.

For details about the separated deployment scenario, see Configuring Switches, Configuring Storage Devices, and Configuring Servers. If the network overlay SDN solution is used, see (Optional) Configuring Network Devices.

For details about the hyper-converged deployment scenario, see Configuring Hyper-Converged System Hardware Devices.

Configuring Servers

Configuring Storage Devices

Configuring Switches

Configuring Hyper-Converged System Hardware Devices

(Optional) Configuring Network Devices

3.4.5.1 Configuring Servers


This document provides guidance for engineers to configure servers. For details, see Table 1.

Table 1 Server configuration reference

Server Model Configuration Reference

TaiShan 200 (model 2280) For details, see Huawei Server OS Installation Guide (Arm).

TaiShan 200 (model 2480)

TaiShan 200K (model 2280K)

When deploying a host, check whether the RAID independent power supply (battery or capacitor) system works properly. If any exception occurs, disable the
RAID cache. Otherwise, files may be damaged due to unexpected power-off.
For details about how to check whether the RAID independent power supply works properly and how to disable the RAID cache, see the corresponding product
documentation of the server.

Logging In to a Server Using the BMC

Checking the Server


Configuring RAID 1

3.4.5.1.1 Logging In to a Server Using the BMC

Scenarios
Log in to the iBMC page using the BMC IP address to set the parameters of the server.

Process
Figure 1 shows the process for logging in to the server using the BMC.

Figure 1 Login process

Procedure
Configure the login environment.

1. Connect the network port of the local computer to the BMC management port of the server using the network cable.

2. Set the IP address of the local computer and default BMC IP address of the server to the same network segment.
For example, set the IP address to 192.168.2.10, and subnet mask to 255.255.255.0.

The default BMC IP address of the server is 192.168.2.100, and the default subnet mask is 255.255.255.0.
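For example, on a Windows PC you can set the local IP address from a command prompt and then verify connectivity to the default BMC IP address. This is a convenience sketch; the adapter name Ethernet is an assumption and must be replaced with the actual adapter name, and the address can equally be set through the Windows network settings GUI:
netsh interface ipv4 set address name="Ethernet" static 192.168.2.10 255.255.255.0
ping 192.168.2.100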

Set the properties of Internet Explorer.

3. On the menu bar of the Internet Explorer, choose Tools > Internet Options.
The Internet Options dialog box is displayed.

Windows 10 with Internet Explorer 11 installed is used as an example in the following descriptions.

4. Click the Connections tab and then LAN settings.


The Local Area Network (LAN) Settings dialog box is displayed.

5. In the Proxy server area, deselect Use a proxy server for your LAN.

6. Click OK.
The Local Area Network (LAN) Settings dialog box is closed.

7. Click OK.


The Internet Options dialog box is closed.

Log in to the management page of the server.

8. Restart the browser, enter https://IP address of the BMC management port in the address bar, and press Enter.
For example, enter https://192.168.2.100.
The system prompts Certificate Error.

9. Click Continue to this website.


The iBMC login page is displayed.

10. Enter the username and password and select This iBMC from the Log on to drop-down list.

The default username for logging in to the iBMC system is Administrator, and the default password is Admin@9000.
Change the default password upon your first login to ensure the system security.

11. Click Log In.

12. Check whether the Security Information dialog box asking "Do you want to display the nonsecure items?" is displayed.

If yes, go to 13.

If no, no further action is required.

13. Click Yes.


The iBMC page is displayed.

3.4.5.1.2 Checking the Server

Scenarios
Log in to all servers through their BMC ports to check server version information and the number of hard disks.

Procedure
Checking the disk status

1. On the iBMC page, choose System Info > Storage > Views, and check the status of hard disks.
If Health Status is Normal, the disk is functional.

3.4.5.1.3 Configuring RAID 1


(Recommended) Configuring RAID 1 on the BMC WebUI

Logging In to a Server Using the BMC WebUI to Configure RAID 1

3.4.5.1.3.1 (Recommended) Configuring RAID 1 on the BMC WebUI

Scenarios
This section guides software commissioning engineers to log in to the BMC WebUI to configure the disks of a server to RAID 1.

Procedure
1. Log in to the iBMC WebUI.
For details, see Logging In to a Server Using the BMC .

2. In the navigation tree on the iBMC page, choose System Info.


3. Choose Storage > Configure.


The RAID configuration page is displayed, as shown in Figure 1.

Figure 1 RAID configuration in the Arm architecture

4. Click the expand icon next to Logical Drive to open the logical disk configuration menu.

5. Click the option button before Create.


The logical disk creation page is displayed, as shown in Figure 2.

Figure 2 Logical disk creation in the Arm architecture

Table 1 Configuration items for logical disk creation

Configuration Item Description

Name Name of a logical disk

Strip Size Size of a data strip on each physical disk

Read Policy Data read policy for a logical disk


Read Ahead: enables the read ahead function. The controller pre-reads sequential data or predicts data to be used and stores it in
the cache.
No Read Ahead: disables the read ahead function.

Write Policy Data write policy for a logical disk


Write Through: After the disk subsystem receives all data, the controller notifies the host that data transmission is complete.
Write Back with BBU: When no battery backup unit (BBU) is configured or the configured BBU is faulty, the controller
automatically switches to the Write Through mode.
Write Back: After the controller cache receives all data, the controller notifies the host that data transmission is complete.

IO Policy I/O policy for reading data from special logical disks, which does not affect the pre-reading cache. The value can be either of the
following:
Cached IO: All the read and write requests are processed by the cache of the RAID controller. Select this value only when
CacheCade 1.1 is configured.
Direct IO: This value has different meanings in read and write scenarios.

In read scenarios, data is directly read from physical disks. (If Read Policy is set to Read Ahead, data read requests are processed
by the cache of the RAID controller.)
In write scenarios, data write requests are processed by the cache of the RAID controller. (If Write Policy is set to Write
Through, data is directly written into physical disks without being processed by the RAID controller cache.)

Disk Cache Policy The physical disk cache policy can be any of the following:
Enable: indicates that data is written into the cache before being written into a physical disk. This option improves data write
performance. However, data will be lost if there is no protection mechanism against power failures.
Disable: indicates that data is written into a physical disk without caching the data. Data is not lost if power failures occur.
Disk's default: indicates that the default cache policy is used.

Access Policy Access policy for a logical disk


Read Write: Read and write operations are allowed.
Read Only: The logical disk is read-only.
Blocked: Access to the logical disk is denied.

Initialization State Initialization method for a created logical disk


No Init: Initialization is not performed.
Quick Init: writes zeros for the first and last 10 MB of a logical disk. Then, the logical disk status changes to Optimal.
Full Init: initializes the entire logical disk to 0. Before the initialization is complete, the logical disk status is initialization.

Level RAID level of a logical disk

Number of Drives Number of physical disks in each subgroup when the RAID level is 10, 50, or 60.
per Span

Disk Physical disk to be added to a logical disk

Capacity Capacity of a logical disk

6. Set the parameters as described in Table 1 and click Save.

3.4.5.1.3.2 Logging In to a Server Using the BMC WebUI to Configure RAID 1

Scenarios
This section guides software commissioning engineers to log in to the server through the BMC WebUI to configure the disks of a server to RAID 1.

Operation Process
Figure 1 shows the process for configuring RAID 1.

Figure 1 RAID 1 configuration process in the Arm architecture


Procedure
1. Restart the server.

a. Log in to the iBMC WebUI.


For details, see Logging In to a Server Using the BMC .

b. On the menu bar, choose Remote. The Remote Console page is displayed, as shown in Figure 2.

Figure 2 Remote Console


c. Click Java Integrated Remote Console (Private), Java Integrated Remote Console (Shared), HTML5 Integrated Remote
Console (Private), or HTML5 Integrated Remote Console (Shared). The real-time operation console of the server is displayed, as
shown in Figure 3 or Figure 4.

Java Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
Java Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS and perform
operations on the server using the iBMC. The users can view the operations of each other.
HTML5 Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
HTML5 Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS and perform
operations on the server using iBMC. The users can view the operations of each other.

Figure 3 Real-time operation console (Java)

Figure 4 Real-time operation console (HTML5)

d. On the Remote Virtual Console, click the power control button on the menu bar.

e. Select Reset.
The Are you sure to perform this operation dialog box is displayed.

f. Click Yes.
The server restarts.

2. Log in to the management page of the Avago SAS3508.


a. When the BIOS startup screen is displayed during the restart, press Delete promptly.

b. In the displayed dialog box, enter the BIOS password.

The default BIOS password is Admin@9000. Change the administrator password immediately after the initial login, and change it periodically for security purposes.
Enter the administrator password to go to the administrator page. The server will be locked after three consecutive wrong password attempts. You can restart the server to unlock it.

c. On the BIOS page, use arrow keys to select Advanced.

d. On the Advanced page, select Avago MegaRAID <SAS3508> Configuration Utility and press Enter. The Dashboard View page is
displayed.


3. On the Dashboard View page, select Main Menu and press Enter. Then select Configuration Management and press Enter. Select
Create Virtual Drive and press Enter. The Create Virtual Drive page is displayed.

If RAID has been configured, you need to format the disk. On the Configuration Management page, select Clear Configuration and press Enter. On the
displayed confirmation page, select Confirm and press Enter. Then select Yes and press Enter to format the disk.

4. On the Create Virtual Drive screen, select Select RAID level using the up and down arrow keys and press Enter. Select RAID1 from the
drop-down list and press Enter.

5. On the Create Virtual Drive screen, select Default Initialization using the up and down arrow keys and press Enter. Select Fast from the
drop-down list and press Enter.


6. Select Select Drives From using the up and down arrow keys and press Enter. Select Unconfigured Capacity using the up and down arrow
keys.

7. Select Select Drives using the up and down arrow keys and press Enter. Select the first (Drive C0 & C1:01:02) and the second (Drive C0 &
C1:01:05) disks using the up and down arrow keys to configure RAID 1.

The Drive C0 & C1 prefix may vary on different servers. Identify a disk by the 01:0x value displayed after Drive C0 & C1.
Press the up and down arrow keys to select the target disk, and press Enter. [X] after a disk indicates that the disk has been selected.


8. Select Apply Changes using the up and down arrow keys to save the settings. The message The operation has been performed
successfully. is displayed. Press the down arrow key to choose OK and press Enter to complete the configuration of member disks.

9. Select Save Configuration and press Enter. The operation confirmation page is displayed. Select Confirm and press Enter. Select Yes and
press Enter. The message The operation has been performed successfully. is displayed. Select OK using the down arrow key and press
Enter.

10. Check the configuration result.

a. Press Esc to return to the Main Menu page.

b. Select Virtual Drive Management and press Enter. Current RAID information is displayed.

3.4.5.2 Configuring Storage Devices


In the separated deployment of DCS, the storage can be flash storage or scale-out storage. Table 1 lists some typical models for reference. For details about other supported storage devices, see Huawei Storage Interoperability Navigator.

Table 1 Supported device models (typical models)

Device Type Model Reference

Storage device OceanStor 5310 (Version 6), OceanStor 5510 (Version 6), OceanStor 5610 (Version 6), OceanStor 6810 (Version 6), OceanStor 18510 (Version 6), OceanStor 18810 (Version 6)
For details about the configuration, see "Configuring Basic Storage Services" in OceanStor V700R001C00 Initialization Guide and OceanStor V700R001C00 Basic Storage Service Configuration Guide for Block.

Storage device OceanStor Pacific 9920, OceanStor Pacific 9540, OceanStor Pacific 9520
For details about the configuration, see Installation > Hardware Installation Guide and Installation > Software Installation Guide > Installing the Block Service > Connecting to FusionCompute in OceanStor Pacific Series 8.2.1 Product Documentation.

Storage device OceanStor Dorado 3000, OceanStor Dorado 5000, OceanStor Dorado 6000
For details about the configuration, see Configure > Basic Storage Service Configuration Guide for Block in OceanStor Dorado 2000, 3000, 5000, and 6000 V700R001C00 Product Documentation.

Storage device OceanStor Dorado 8000, OceanStor Dorado 18000
For details about the configuration, see Configure > Basic Storage Service Configuration Guide for Block in OceanStor Dorado 8000 and Dorado 18000 V700R001C00 Product Documentation.

Hyper-convergence FusionCube 1000H
For details about configuration, see FusionCube 1000H Product Documentation (FusionCompute).

3.4.5.3 Configuring Switches


DCS involves leaf and spine switches. Leaf switches connect to the devices to implement network plane interconnection between them. Spine switches connect the cloud data center to external networks.

Use the latest matching software version (V200R022C00SPC500 or later) for the following switches. Earlier versions (such as V200R020C10SPC600) may cause
occasional traffic failures.

Table 1 lists some switches as a reference guide. For details about other supported switches, see Huawei Storage Interoperability Navigator.

Table 1 Configuring switches

Type Model Configuration Reference

Leaf or border leaf switch CE6881-48S6CQ, CE6857F-48S6CQ, CE6857F-48T6CQ, CE6860-SAN, CE6863-48S6CQ (only for SDN), CE6863E-48S6CQ (only for SDN), CE6881-48T6CQ (only for SDN)
For details, see Configuration > Configuration Guide > Ethernet Switching Configuration in CloudEngine 8800 and 6800 Series Switches Product Documentation.

Spine switch CE8850-SAN, CE9860-4C-E1 (only for SDN), CE8850-64CQ-E1 (only for SDN), CE8851 (only for SDN), CE16804 (only for SDN)
For details, see Configuration > Configuration Guide > Ethernet Switching Configuration in CloudEngine 8800 and 6800 Series Switches Product Documentation.

For details about typical configurations of switches in DCS, see Physical Network Interconnection Reference .

3.4.5.4 Configuring Hyper-Converged System Hardware Devices


Configure the hyper-converged server. For details, see "Site Deployment" in FusionCube 1000H Product Documentation (FusionCompute).

3.4.5.5 (Optional) Configuring Network Devices


For details about how to perform the initial configuration of the iMaster NCE-Fabric appliance, see Installation > Software Installation >
Installing iMaster NCE-Fabric > Installing iMaster NCE-Fabric (Preinstallation, Installation on 2288X V5 Physical Machines) in iMaster
NCE-Fabric V100R024C00 Product Documentation.
You are advised to configure the DCS network overlay SDN using Zero Touch Provisioning (ZTP).
ZTP allows newly delivered or unconfigured devices to automatically load version files, deploy the underlay network, and be managed by iMaster
NCE-Fabric after they are powered on and started. ZTP-based simplified deployment enables quick rollout and management of data center network
devices.
DCS is used with the CloudFabric Easy Data Center Network Solution and supports automatic network device management and initial network
configuration using ZTP. For details, see CloudFabric Data Center Network Solution V100R024C00 Best Practices (Easy Scenario).

3.5 Deploying Software


Unified DCS Deployment (Separated Deployment Scenario)

This section describes how to use SmartKit to install components, such as FusionCompute, eDME, DR and backup software, eDataInsight, HiCloud, and SFS,
and how to perform initial configuration.

Configuring Interconnection Between iMaster NCE-Fabric and FusionCompute

Configuring Interconnection Between iMaster NCE-Fabric and eDME

Installing FabricInsight

(Optional) Installing FSM

Installing eDME (Hyper-Converged Deployment)

3.5.1 Unified DCS Deployment (Separated Deployment Scenario)


This section describes how to use SmartKit to install components, such as FusionCompute, eDME, DR and backup software, eDataInsight, HiCloud,
and SFS, and how to perform initial configuration.
Installation Process

Installation Using SmartKit

Initial Configuration After Installation

Checking Before Service Provisioning

3.5.1.1 Installation Process


Figure 1 shows the installation process.

Figure 1 Unified deployment process of DCS


Table 1 Installation process

Phase Description

Installing using SmartKit Use SmartKit to install FusionCompute, UltraVR, eBackup, eDME, eDataInsight, HiCloud, SFS, and eCampusCore. After FusionCompute is installed, SmartKit can be used to configure the backup server (optional) and the FusionCompute NTP clock source and time zone. Before installing eDME, use SmartKit to create management VMs, that is, to create VMs on FusionCompute for installing eDME.

Configuring bonding for host network ports After FusionCompute or eDME is installed, the system configures a bond port named Mgnt_Aggr consisting of one network port for the host by default. You need to manually add network ports to the bond port to improve the network reliability of the system.

(Optional) Checking the system environment before service provisioning After the FusionCompute system is deployed and before services are provisioned, use SmartKit to perform preventive maintenance, which helps optimize system configurations.

Configuring the FusionCompute system After FusionCompute is installed, load license files and configure MAC address segments for sites on FusionCompute.

Installing Tools for eDME Tools provides drivers for VMs. After VM creation and OS installation are complete for eDME, install the Tools provided by Huawei on the VMs to improve VM I/O performance and implement VM hardware monitoring and other advanced functions. Some features are available only after Tools is installed. For details about such features, see their prerequisites or constraints.

Performing initial configuration for eDME After eDME is installed, perform initial configuration to ensure that you can use the functions provided by eDME.

3.5.1.2 Installation Using SmartKit

Scenarios
This section guides the administrator to use SmartKit to install the FusionCompute, UltraVR, eBackup, eDME, eDataInsight, HiCloud, SFS, and
eCampusCore components.

The UltraVR and SFS components of the current DCS version are patch versions. After installing the two components, upgrade them to the corresponding patch
versions.


Prerequisites
You have installed SmartKit. For details about how to install and run SmartKit, see "Deploying SmartKit" in SmartKit 24.0.0 User Guide.

You have installed Datacenter Virtualization Solution Deployment on SmartKit.

On the home page of SmartKit, click the Virtualization tab, click Function Management, and check whether the status of Datacenter Virtualization
Solution Deployment is Installed. If the status is Uninstalled, you can use either of the following methods to install the software:
On the home page of SmartKit, click the Virtualization tab, click Function Management, select Datacenter Virtualization Solution Deployment, and
click Install.
Import the software package for the basic virtualization O&M service (SmartKit_24.0.0_Tool_Virtualization_Service.zip).

1. On the home page of SmartKit, click the Virtualization tab and click Function Management. On the page that is displayed, click
Import. In the Import dialog box, select the software package for the basic virtualization O&M service and click OK.
2. In the dialog box that is displayed, click OK. In the Verification and Installation dialog box that is displayed, click Install. In the dialog
box indicating a successful import, click OK. The status of Datacenter Virtualization Solution Deployment changes to Installed.

You have imported the software package of the eDME deployment tool (eDME_version_DeployTool.zip). The procedure is the same as that
for importing the basic virtualization O&M software package.

To install UltraVR, the following conditions must be met:

You have obtained the username and password for logging in to FusionCompute on the VM where the UltraVR is to be installed.

You have verified the software integrity of the UltraVR template file. For details, see Verifying the Software Package .

The stability of the server system time is critical to UltraVR. Do not change the system time when UltraVR is running. If the system
time changes, restart the UltraVR service. For details about how to restart UltraVR, see section "Starting the UltraVR" in OceanStor
BCManager 8.6.0 UltraVR User Guide.

To install eBackup, the following conditions must be met:

You have obtained the username and password for logging in to FusionCompute.

In the eDataInsight decoupled storage-compute scenario, deploy OceanStor Pacific HDFS in advance. For details, see OceanStor Pacific Series
8.2.1 Software Installation Guide.

Before installing eCampusCore, you need to install FusionCompute and eDME.

Procedure
1. On the home page of SmartKit, click the Virtualization tab. Click Datacenter Virtualization Solution Deployment in Site Deployment
Delivery.

2. Click DCS Deployment. The Site Deployment Delivery page is displayed.

3. On the Tasks tab page, click Create Task. The Basic Configuration page is displayed.

On the Site Deployment Delivery page, click Support List to view the list of servers supported by SmartKit.

4. Set Task Name, select DCS Deployment, and click Create.

5. In the Confirm Message dialog box that is displayed, click Continue.

6. On the Installation Policy page, select an installation policy as required. Click Next.

7. Configure parameters.

Online modification configuration: FusionCompute, UltraVR, eBackup, and eDME are supported.
Excel import configuration: All components are supported.
Quick configuration: FusionCompute, UltraVR, eBackup, and eDME are supported.

Modify the configuration online. Manually configure related parameters on the page. After the configuration is complete, go to 16.


To configure FusionCompute parameters, go to 8.

To configure UltraVR parameters, go to 10.

To configure eBackup parameters, go to 11.

To configure eDME parameters, go to 9.


Import configurations using an Excel file. Click the Excel Import Configuration tab. Click Download File Template, fill in the template, and import the parameter file. If the import fails, check the parameter file. If the parameters are imported successfully, you can view the imported parameters on the Online Modification Configuration tab page. Then, go to 16.

To configure eDataInsight parameters, go to 12.

To configure HiCloud parameters, go to 13.

To configure SFS parameters, go to 14.

To configure eCampusCore parameters, go to 15.

Quickly fill in the configuration. Click the Quick Configuration tab, set parameters as prompted, and click Generate Parameter. If a
parameter error is reported, clear the error as prompted. If the parameters are correct, go to 16.
8. Configure FusionCompute parameters.
On the Online Modification Configuration tab page, click Add FusionCompute in the FusionCompute Parameter Configuration area.
On the Add FusionCompute page in the right pane, set related parameters.

a. Set System Name and select the path where the software package is stored. If the software package is not downloaded, download it as
instructed in FusionCompute Software Package .

b. Set Install the Mellanox virtual NIC driver. If it is set to Yes, ensure that the driver installation package exists in the path where the
software package is stored.

If the host uses Mellanox ConnectX-4 or Mellanox ConnectX-5 series NICs, and the NICs are in the compatibility list, you need to install the
Mellanox NIC driver.

c. CNA Installation. Select an installation scenario, set DHCP service parameters, configure basic node information, and configure the
node list.

Table 1 Parameters for installing CNA

Parameter Description

Installation Scenario Installation scenario.


New installation: In the new installation scenario, the user disk is formatted. For a new installation, you need to set Size of the
swap partition.
NOTE:

If an x86-based host is installed, the swap partition is 30 GB by default.

Fault recovery installation: In the fault recovery scenario, the user disk is not formatted.

Configure DHCP Service

DHCP Pool Start IP Address Start IP address and the number of IP addresses that can be assigned by the DHCP service to the host to be installed. The IP address must be an unused IP address. You are advised to use an IP address in the management plane network segment and ensure that the IP address does not conflict with other planned IP addresses.

DHCP Mask Subnet mask of the IP address segment assigned from the DHCP pool.

DHCP Gateway Gateway of the IP address segment assigned from the DHCP pool.

DHCP Pool Capacity The DHCP IP addresses may be used by devices not included in the original plan. Therefore, configure the number of IP addresses in the DHCP address pool to at least twice the number of physical nodes (see the sizing sketch after this table).

Basic Node Information

Use Software RAID for System Disk For HiSilicon and Phytium Arm architectures, you can configure whether to use software RAID for system disks. The value can be Yes or No.

root Password Password of the OS account for logging in to the CNA node. You need to set the password during the installation.
NOTE:

After setting the password, click Downwards to automatically paste the password to the grub password, gandalf password, and Redis
password.

grub Password Password of the internal system account. You need to set the password during the installation.

gandalf Password Password of user gandalf for logging in to the CNA node to be installed.

Redis Password Redis password, which is set during CNA environment initialization.

Entering Node Information

Management Node or Not This parameter cannot be set for the first node. The value can be Yes or No.

MAC Address This parameter cannot be set for the first node.
It specifies the MAC address of the physical port on the host for PXE booting host OSs. If the network configuration needs to be
specified before host installation, obtain the MAC address to identify the target host. For details, see the host hardware
documentation.

iBMC IP BMC system IP address of the standby server.

iBMC Username Username for BMC login authentication. This parameter is mandatory. Otherwise, the installation will fail.

iBMC Password Password for BMC login authentication. This parameter is mandatory. Otherwise, the installation will fail.

CNA Management IP Address If this parameter is not specified, the IP address assigned by the DHCP server is used. If this parameter is specified, the specified IP address is used for configuration.

CNA Subnet Mask If this parameter is not specified, the DHCP subnet mask is used. If this parameter is specified, the specified subnet mask is used for configuration.

CNA Subnet Gateway If this parameter is not specified, the DHCP gateway is used. If this parameter is specified, the specified gateway is used for configuration.

CNA Cluster Name Name of the cluster to which the host belongs. This parameter cannot be set for the first node and can be set only when Management Node or Not is set to No. After you enter a cluster name, the host is automatically added to the specified cluster.

Management Plane VLAN Tag Whether the management plane VLAN has a VLAN tag. The default value is 0. The value ranges from 0 to 4094.
0 indicates that the management plane VLAN is not specified. If you do not specify a management plane VLAN, set the VLAN type on the access switch port connected to the management network port to untagged so that the aggregation switch is reachable to the uplink IP packets from the management plane through the VLAN.
Other values indicate a specified management plane VLAN. If you specify a management plane VLAN, set the VLAN type on the access switch port connected to the management network port to tagged so that the management plane and the switch can communicate with each other.

Network Port of Management IP Address If the current node is the first node, you need to specify a network port for the management IP address configuration, for example, eth0.
NOTE:
For details about how to check the network port name of the first CNA node, see How Do I Determine the Network Port Name of the First CNA Node?
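
As a quick cross-check of the DHCP parameters above (pool start IP address, mask, gateway, and a pool capacity of at least twice the number of physical nodes), the following minimal Python sketch is one possible pre-check of a plan before it is entered in SmartKit; the addresses shown are placeholders, not recommended values.

import ipaddress

def check_dhcp_pool(start_ip, netmask, gateway, pool_size, node_count, reserved_ips=()):
    """Sanity-check a DHCP pool plan for CNA installation (illustration only)."""
    network = ipaddress.ip_network(f"{gateway}/{netmask}", strict=False)
    start = ipaddress.ip_address(start_ip)
    end = start + pool_size - 1
    problems = []
    # The pool should sit inside the management plane network segment.
    if start not in network or end not in network:
        problems.append("pool does not fit inside the management network segment")
    # The documentation recommends at least twice the number of physical nodes.
    if pool_size < 2 * node_count:
        problems.append("pool capacity is smaller than twice the number of physical nodes")
    # The pool must not overlap other planned IP addresses (gateway, VRM, and so on).
    for ip in map(ipaddress.ip_address, reserved_ips):
        if start <= ip <= end:
            problems.append(f"pool overlaps planned address {ip}")
    return problems or ["plan looks consistent"]

# Hypothetical plan: 20 hosts, a pool of 40 addresses starting at 192.168.10.100.
print(check_dhcp_pool("192.168.10.100", "255.255.255.0", "192.168.10.1",
                      pool_size=40, node_count=20,
                      reserved_ips=["192.168.10.1", "192.168.10.50"]))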

d. Install VRM nodes. Select a deployment scenario and set parameters of VRM nodes.

Table 2 Parameters for installing VRM nodes

Parameter Description

VRM Installation and Deployment Scenarios Installation and deployment scenario of VRM nodes. The value can be Active/Standby Installation or Single-node Installation.

Active VRM Node Name VRM node name, that is, the VM name.


Management IP Address of Active VRM Node IP address of the VRM VM. The IP address must be within the management plane network segment. You are advised to select an IP address from a private network segment, because a management IP address from a public network segment may pose security risks.

CNA Management IP Address of Active VRM Node CNA management IP address of the active node in active and standby deployment mode.

CNA root Password of Active VRM Node Password of user root for logging in to the CNA node.
NOTE:
After setting the password, click Downwards. The system automatically pastes the password to the following passwords (from CNA gandalf Password of Active Node to galax Password).

CNA gandalf Password of Active VRM Node Password of user gandalf for logging in to the CNA node.

Standby VRM Node Name Name of the standby VRM node in active and standby deployment mode, that is, the VM name.

Management IP Address of Standby VRM Node IP address of the standby VRM VM in active and standby deployment mode. The IP address must be within the management plane network segment. You are advised to select an IP address from a private network segment, because a management IP address from a public network segment may pose security risks.

CNA Management IP Address of Standby VRM Node CNA management IP address of the standby node in active and standby deployment mode.

CNA root Password of Standby VRM Node Password of user root for the standby CNA node in active and standby deployment mode.

CNA gandalf Password of Standby VRM Node Password of user gandalf for the standby CNA node in active and standby deployment mode.

Subnet Mask Subnet mask.

Subnet Gateway Subnet gateway.

Management Plane VLAN VLAN of the management plane. If no value is specified, the system uses VLAN 0 by default. For details about the
configuration method, see Table 3.

Container Management Whether to enable the container management function in the VRM node.

Configuration Mode Configuration mode. The value can be Normal (by scale) or Custom.

Configuration Item When Configuration Mode is set to Custom, configure the CPU and memory.
CPU: 4 to 20 (cores)
Memory: 6 to 30 (GB)
When Configuration Mode is set to Normal: Select one of the following configuration items:
Container Management disabled:
1000VM,50PM: VRM requires 4 CPUs, 6 GB memory, and 120 GB disk space.
3000VM,100PM: VRM requires 8 CPUs, 8 GB memory, and 120 GB disk space.
Container Management enabled:
1000VM,50PM,10 K8S Group,500 K8S Node: VRM requires 4 CPUs, 8 GB memory, 332 GB disk space plus the image
repository capacity (GB).
3000VM,100PM,20 K8S Group,1000 K8S Node: VRM requires 8 CPUs, 12 GB memory, 332 GB disk space plus the
image repository capacity (GB).
NOTE:
VM: virtual machine; PM: physical machine (host)
In the DCS scenario, FusionCompute requires at least 8 CPUs and 8 GB memory.

Image repository capacity After Container Management is enabled, Image repository capacity can be configured.


User Rights Management Mode User permission management mode. The value can be Common (Recommended) or Role-based.

admin Login Password Login password of user admin. This parameter is mandatory when User Rights Management Mode is set to Common.

sysadmin Login Password Login password of user sysadmin. This parameter is mandatory when User Rights Management Mode is set to Role-
based.

secadmin Login Password Login password of user secadmin. This parameter is mandatory when User Rights Management Mode is set to Role-
based.

secauditor Login Password Login password of user secauditor. This parameter is mandatory when User Rights Management Mode is set to Role-
based.

root Password Password of the OS account for logging in to the VRM node. You need to set the password during the installation.

grub Password Password of the internal system account. You need to set the password during the installation.

gandalf Password Password of user gandalf for logging in to the VM where VRM is to be installed.

postgres Password Password of user postgres for logging in to the VM where VRM is to be installed.

galax Password Password of user galax for logging in to the VM where VRM is to be installed.

Floating IP Address Floating IP address of the VRM management plane.

Arbitration IP Address01, Arbitration IP Address02, Arbitration IP Address03 Arbitration IP addresses. At least one arbitration IP address needs to be specified, and a maximum of three arbitration IP addresses can be specified. You are advised to set the first arbitration IP address to the gateway address of the management plane, and set the other arbitration IP addresses to IP addresses of servers that can communicate with the management plane, such as the AD server or the DNS server.

Time Management

Time Zone Local time zone of the system. The system determines whether to enable Daylight Saving Time (DST) based on the time
zone you set.

NTP When configuring the NTP clock source, enable the NTP function.

NTP Server NTP server IP address or domain name. You can enter one to three IP addresses or domain names of the NTP servers. You
are advised to use an external NTP server running a Linux or Unix OS. If you enter a domain name for the configuration,
ensure that a DNS server is available.
If no external NTP server is deployed, set this parameter to the management IP address of the host where the active VRM
node resides.
NOTE:

If no external NTP server is available, set the time of the node that functions as the NTP server to the correct time. For details, see
How Do I Manually Change the System Time on a Node? .

Synchronization Interval (s) Interval for time synchronization, in seconds.

Management Data Backup Configuration

Data Backup to Third-party FTP Server/Host Key information is backed up to a third-party FTP server or host. If the system is abnormal, the backup data can be used to restore the system.
If data is backed up to a third-party FTP server, the VRM node automatically uploads key data to the FTP backup server at 02:00 every day.
If data is backed up to a host, the system automatically copies the management data excluding monitoring data to the /opt/backupdb directory on the host every hour. The host retains only data generated in one day.
NOTE:
If no FTP server is used, select Host (monitoring data will not be backed up).

Protocol Type If Third-party FTP server is selected, FTPS and FTP are supported.
You are advised to select FTPS to enhance file transmission security. If the FTP server does not support the FTPS protocol,
select FTP.
NOTE:

If FTPS is used, you need to deselect the TLS session resumption option of the FTP server.


IP Address IP address of the third-party backup server.

Username Username for logging in to the FTP server.

Password Password for logging in to the FTP server.

Port Port number of the FTP server.

Backup Upload Path Relative path for storing backup files on the third-party backup server.
If multiple sites share the same backup server, set the directory to /Backup/VRM-VRM IP address/ for easy identification. If
the VRM nodes are deployed in active/standby mode, enter the floating IP address of the VRM nodes.

FTP Server Root Certificate If data is backed up to a third-party FTP server and the protocol type is FTPS, you need to import the root certificate of the server certificate.
NOTE:
If the root certificate is not imported, an alarm is displayed, indicating that the certificate of the FTP server for management data backup is invalid.

Node Name To back up data to a host, select the node corresponding to the host.
NOTE:
The backup directory is /opt/backupdb, which cannot be changed.
A maximum of five hosts can be selected. You are advised to select hosts in different clusters.

Table 3 Configuring the management plane VLAN

Link Type / Layer 2 Networking Configuration on the Switch / Description

Access
Layer 2 networking configuration on the switch: If the default VLANs of the switch ports are the same, you need to configure the management plane VLAN for the network ports on the host.
Description: For details about how to view the VLAN of a switch port, see the official guide of the switch vendor. This port type does not support allowing multiple VLANs and Layer 2 isolation. Therefore, you are advised not to use it as the uplink of the storage plane or service plane.

Trunk
Layer 2 networking configuration on the switch: Based on the actual network plan:
If the VLAN has been added to the list of allowed VLANs of the switch ports, you need to configure the management plane VLAN for the network ports on the host.
If the default VLANs of the switch ports have been added to the list of allowed VLANs, you do not need to configure the management plane VLAN for the network ports on the host.
Description: For details about how to view the VLAN of a switch port, see the official guide of the switch vendor.

Hybrid
Layer 2 networking configuration on the switch: Based on the actual network plan:
If the VLAN has been added to the list of allowed VLANs of the switch ports or the switch ports have been configured to carry the VLAN tag when sending data frames, you need to configure a VLAN for the network ports on the host.
If the default VLANs of the switch ports have been added to the list of allowed VLANs or the switch ports have been configured to remove the VLAN tag when sending data frames, you do not need to configure a VLAN for the network ports on the host.
Description: For details about how to view the VLAN of a switch port, see the official guide of the switch vendor.

Table 3 is for reference only. The actual networking depends on the actual network plan. A summary sketch of the tagged/untagged rule follows.
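
The host-side Management Plane VLAN Tag setting (Table 1) and the switch-side port configuration in Table 3 follow one simple rule: a value of 0 expects the switch port to deliver management traffic untagged, and any other value expects the management VLAN to be carried tagged. The following minimal Python sketch only restates that rule for quick reference; it is an illustration, not a substitute for the switch vendor documentation.

def switch_port_tagging_for_mgmt_vlan(mgmt_vlan_tag: int) -> str:
    """Restate the Management Plane VLAN Tag rule (value range 0-4094).

    Illustration only; always follow the actual network plan and the
    switch vendor documentation.
    """
    if not 0 <= mgmt_vlan_tag <= 4094:
        raise ValueError("Management Plane VLAN Tag must be in the range 0-4094")
    if mgmt_vlan_tag == 0:
        # No management VLAN specified on the host: the switch port facing the
        # management network port should deliver the traffic untagged.
        return ("host sends untagged frames; configure the switch port as "
                "untagged for the management VLAN")
    # A management VLAN is specified on the host: the switch port must carry
    # that VLAN tagged (for example, allow it on a trunk or hybrid port).
    return (f"host tags frames with VLAN {mgmt_vlan_tag}; configure the switch "
            "port to carry this VLAN tagged")

print(switch_port_tagging_for_mgmt_vlan(0))
print(switch_port_tagging_for_mgmt_vlan(100))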

e. Click Confirm.

9. Set eDME parameters.


On the Online Modification Configuration tab page, click Add eDME in the eDME Parameter Configuration area. On the Add eDME
page in the right pane, set related parameters.

a. Select the path of the installation package, that is, the path of the folder where the deployment software package is stored. If the
software package is not downloaded, download it as instructed in eDME Software Package . After you select a path, the tool
automatically uploads the software package to the deployment node. If you do not select a path, manually upload the software package
to the /opt/install directory on the deployment node.


b. Configure eDME information.

Table 4 eDME information

Parameter Description

System Language System language.

CPU Architecture CPU architecture of the deployment node. The value can be X86 or ARM.

Management Level L1: In three-node deployment, 10,000 VMs can be managed. LITE: In three-node deployment, 1,000 VMs can be managed.

OS Type The OS type is EulerOS.

Enable SFTP Whether to enable SFTP.
Yes: After the installation is complete, file upload and download are allowed on the node.
No: After the installation is complete, file upload and download are not allowed on the node.

Automatic VM creation Whether to automatically create a VM. The value can be Yes or No. Other parameters are valid only when this
parameter is set to Yes.

FusionCompute IP IP address of FusionCompute used for creating a VM.

FusionCompute login user Username for logging in to FusionCompute.

FusionCompute login password Password for logging in to FusionCompute.

Subnet mask of the node Subnet mask of the node.

Subnet gateway of the node Node gateway.

Primary Node Host Name Host name of the active node.

Primary Node IP Address IP address of the active node.

Primary Node root Password Password of user root for logging in to the active node.

CNA name of primary node Name of the CNA to which the active node belongs.

Disk space of primary node Disk space of the active node, in GB.

Child Node 1 Host Name Host name of child node 1.

Child Node 1 IP Address IP address of child node 1.

Child Node 1 root Password Password of user root on child node 1.

CNA name of child node 1 Name of the CNA to which child node 1 belongs.

Disk space of child node 1 Disk space of child node 1, in GB.

Child Node 2 Host Name Host name of child node 2.

Child Node 2 IP Address IP address of child node 2.

Child Node 2 root Password Password of user root on child node 2.

CNA name of child node 2 Name of the CNA to which child node 2 belongs.

Disk space of child node 2 Disk space of child node 2, in GB.

Deploy Operation Portal or not Whether to deploy an operation portal for eDME. The value can be No or Yes.


Operation Portal Node 1 Host Name Host name of an operation portal node. This parameter is valid only when the operation portal is to be deployed.

Operation Portal Node 1 IP Address IP address of the operation portal node. This parameter is valid only when the operation portal is to be deployed.

Operation Portal Node 1 root Password Password of the root account of the operation portal node. This parameter is valid only when the operation portal is to be deployed.

CNA name of Operation Portal node 1 Name of the CNA to which operation portal node 1 belongs. This parameter is valid only when the operation portal is to be deployed.

Disk space of Operation Portal node 1 Disk space used by operation portal node 1. This parameter is valid only when the operation portal is to be deployed.

Operation Portal Node 2 Host Name Host name of an operation portal node. This parameter is valid only when the operation portal is to be deployed.

Operation Portal Node 2 IP Address IP address of the operation portal node. This parameter is valid only when the operation portal is to be deployed.

Operation Portal Node 2 root Password Password of the root account of the operation portal node. This parameter is valid only when the operation portal is to be deployed.

CNA name of Operation Portal node 2 Name of the CNA to which operation portal node 2 belongs. This parameter is valid only when the operation portal is to be deployed.

Disk space of Operation Portal node 2 Disk space used by operation portal node 2 (unit: GB). This parameter is valid only when the operation portal is to be deployed.

Operation Portal Floating IP Address Management floating IP address used to log in to the operation portal. It must be in the same network segment as the node IP address and has not been used. This parameter is valid only when the operation portal is to be deployed.

Operation Portal Global Load Balancing IP Address This parameter is used to configure global load balancing. It must be in the same network segment as the IP address of the operation portal node and has not been used. This parameter is valid only when the operation portal is to be deployed.

Operation Portal Management Password Password for logging in to the operation portal as user bss_admin. This parameter is valid only when the operation portal is to be deployed.

Operation Portal SDN Scenario The value can be HARD SDN or NO SDN. This parameter is valid only when Deploy Operation Portal or not is set to Yes.

Deploy DCS Auto Scaling Service The value can be No or Yes. This parameter is valid only when Deploy Operation Portal or not is set to Yes. The AS service parameters take effect only when Deploy DCS Auto Scaling Service is set to Yes.

Auto Scaling Service Node 1 Host Name The host naming rules are as follows (a validation sketch follows this table):
The value contains 2 to 32 characters.
The value contains only uppercase or lowercase letters (A to Z or a to z), digits, and hyphens (-), and cannot contain two consecutive hyphens (--). The value must start with a letter and cannot end with a hyphen (-).
The name cannot be localhost or localhost.localdomain.

Auto Scaling Service Node 1 IP Address IP address of AS node 1.

Auto Scaling Service Node 1 root Password Password of user root for logging in to AS node 1.

CNA name of Auto Scaling Service node 1 Name of the CNA to which AS node 1 belongs.

Disk space of Auto Scaling Service node 1 Recommended disk space ≥ 555 GB (system disk space ≥ 55 GB; data disk space ≥ 500 GB).

Auto Scaling Service Node 2 Host Name For details, see the parameter description of Auto Scaling Service Node 1 Host Name.

Auto Scaling Service Node 2 IP Address IP address of AS node 2.


Auto Scaling Service Node 2 root Password Password of user root for logging in to AS node 2.

CNA name of Auto Scaling Service node 2 Name of the CNA to which AS node 2 belongs.

Disk space of Auto Scaling Service node 2 Recommended disk space ≥ 555 GB (system disk space ≥ 55 GB; data disk space ≥ 500 GB).

Deploy Elastic Container Engine Service The value can be No or Yes. This parameter is valid only when Operation Portal SDN Scenario is set to HARD SDN. The Elastic Container Engine (ECE) parameters are valid only when Deploy Elastic Container Engine Service is set to Yes.

Elastic Container Engine Node 1 Host Name The host naming rules are as follows:
The value contains 2 to 32 characters.
The value contains only uppercase or lowercase letters (A to Z or a to z), digits, and hyphens (-), and cannot contain two consecutive hyphens (--). The value must start with a letter and cannot end with a hyphen (-).
The name cannot be localhost or localhost.localdomain.

Elastic Container Engine Node 1 IP Address IP address of ECE node 1.
NOTE:
If Automatic VM creation is set to Yes, enter an IP address that is not in use.
If Automatic VM creation is set to No, enter the IP address of the node where the OS has been deployed.

Elastic Container Engine Node 1 root Password Password of user root for logging in to ECE node 1.

Elastic Container Engine Node 1 Public Service Domain IP Address Public service domain IP address of ECE node 1.
NOTE:
If Automatic VM creation is set to Yes, enter an IP address that is not in use.
If Automatic VM creation is set to No, enter the IP address of the node where the OS has been deployed.

CNA name of Elastic Container Engine Service node 1 Name of the CNA to which ECE node 1 belongs.

Disk space of Elastic Container Engine Service node 1 Recommended disk space ≥ 2,955 GB (system disk space ≥ 55 GB; data disk space ≥ 2,900 GB).

Elastic Container Engine Node 2 Host Name For details, see the parameter description of Elastic Container Engine Node 1 Host Name.

Elastic Container Engine Node 2 IP Address For details, see the parameter description of Elastic Container Engine Node 1 IP Address.

Elastic Container Engine Node 2 root Password Password of user root for logging in to ECE node 2.

Elastic Container Engine Node 2 Public Service Domain IP Address For details, see the parameter description of Elastic Container Engine Node 1 Public Service Domain IP Address.

CNA name of Elastic Container Engine Service node 2 Name of the CNA to which ECE node 2 belongs.

Disk space of Elastic Container Engine Service node 2 Recommended disk space ≥ 2,955 GB (system disk space ≥ 55 GB; data disk space ≥ 2,900 GB).

Elastic Container Engine Floating IP Address Floating IP address used for the ECE service. It must be an idle IP address in the same network segment as the IP address of the ECE node.

Elastic Container Engine Public Service Domain Floating IP Address Floating IP address used for the communication between the K8s cluster and the ECE node. It must be an idle IP address in the same network segment as the public service domain IP address of the ECE node.

Elastic Container Engine Global Load Balancing IP Address IP address used to configure load balancing for the ECE service. It must be an idle IP address in the same network segment as the IP address of the ECE node.


Subnet mask of the public service domain of the Elastic Container Engine Service Subnet mask of the public service domain of the ECE service.

Port group of the Elastic Container Engine Service in the public service domain Port group of the public service domain of the ECE service.
NOTE:
If a port group has been created on FusionCompute, set this parameter to the name of the created port group.
If no port group has been created on FusionCompute, set this parameter to the name of the port group planned for FusionCompute.

IP Address Gateway of Elastic Container Engine public Service Domain Set the IP address gateway of the ECE public service domain.

Elastic Container Engine Public Service Network-BMS&VIP Subnet Segment Set the BMS and VIP subnet segments for the ECE public service network.

IP Address Segment of Elastic Container Engine Public Service Network Client Set the IP address segment of the ECE public service network client.

Manage FusionCompute or not Whether to enable eDME to take over FusionCompute. The value can be No or Yes.
NOTE:
eDME can manage FusionCompute only when both FusionCompute and eDME are deployed.

Interface Username Interface username. This parameter is valid only when Manage FusionCompute or not is set to Yes.

Interface Account Password Password of the interface account. This parameter is valid only when Manage FusionCompute or not is set to
Yes.

SNMP Security Username SNMP security username. This parameter is valid only when Manage FusionCompute or not is set to Yes.

SNMP Encryption Password SNMP encryption password. This parameter is valid only when Manage FusionCompute or not is set to Yes.

SNMP Authentication Password SNMP authentication password. This parameter is valid only when Manage FusionCompute or not is set to Yes.

Management Floating IP address Floating IP address of the management plane. This IP address is used to access the management and O&M portals. It must be unused and in the same network segment as the node IP addresses.

Southbound Floating IP Address Southbound floating IP address. The southbound floating IP address is used by third-party systems to report alarms.

Network Port Network port name.
For the x86 architecture, the default value is enp4s1.
For the Arm architecture, the default value is enp4s0.

Whether to install eDataInsight Manager Whether to deploy the DCS eDataInsight management plane. The value can be Yes or No. If yes, prepare the product software package of the DCS eDataInsight management plane in advance.

Initial admin Password of Management Portal Initial password of user admin on the management portal.
NOTE:
After setting the password, click Downwards. The system automatically pastes the password to the following passwords (from Initial admin Password of Management Portal to sftpuser Password). The rules of each password are different. If the verification fails after the password is copied downwards, you need to change the password separately.

Initial admin Password of O&M Portal Initial password of user admin on the O&M portal.

sopuser Password Password of user sopuser. The sopuser account is used for routine O&M.

ossadm Password Password of user ossadm. The ossadm account is used to install and manage the system.


ossuser Password Password of user ossuser. The ossuser account is used to install and run the product software.

Database sys Password Database sys password. The database sys account is used to manage and maintain the Zenith database and has the
highest operation rights on the database.

Database dbuser Password Database dbuser password.

rts Password Password of user rts. The rts account is used for authentication between processes and RabbitMQ during process
communication.

KMC Protection Password KMC protection password. KMC is a key management component.

ER Certificate Password ER certificate password. The ER certificate is used to authenticate the management or O&M portal when you
access the portal on a browser.

Elasticsearch Password Elasticsearch password, which is used for Elasticsearch authentication.

ETCD Password ETCD password, which is used for ETCD authentication.

ETCD root Password ETCD root password, which is used for ETCD root user authentication.

sftpuser Password Set the password of user sftpuser.

Whether to install object storage service Whether to deploy the object storage service. The value can be No, Yes-PoE Authentication, or Yes-IAM Authentication.

Whether to install application backup service Whether to deploy the application backup service during operation portal deployment.

If you use Export Parameters to export an XLSX file, you can operate or view the file only in Office 2007 or later version.
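
The host naming rules quoted for the Auto Scaling Service and Elastic Container Engine node host names in Table 4 can be checked mechanically. The following minimal Python sketch is one possible validator for those rules as stated above; it is an illustration only, and the function name is not part of any Huawei tool.

import re

def is_valid_node_hostname(name: str) -> bool:
    """Check a host name against the naming rules quoted in Table 4:
    2 to 32 characters; letters, digits, and hyphens only; starts with a
    letter; does not end with a hyphen; no two consecutive hyphens; and
    not localhost or localhost.localdomain."""
    if name.lower() in ("localhost", "localhost.localdomain"):
        return False
    if not 2 <= len(name) <= 32:
        return False
    if "--" in name:
        return False
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9-]*[A-Za-z0-9]", name) is not None

# Examples: a valid name, a name ending with a hyphen, and a reserved name.
print(is_valid_node_hostname("as-node-01"))   # True
print(is_valid_node_hostname("as-node-01-"))  # False
print(is_valid_node_hostname("localhost"))    # False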

10. Configure UltraVR parameters.


On the Online Modification Configuration tab page, click Add UltraVR in the UltraVR Parameter Configuration area. On the Add
UltraVR page in the right pane, set related parameters.

Table 5 UltraVR parameters

Parameter Description

UltraVR Template File Directory Directory where the downloaded UltraVR template file and signature file are stored. If the template file is not downloaded, download it as instructed in UltraVR Software Package.

Owning CNA Name of VM Template Name of the CNA to which the template VM and UltraVR VM are bound when they are created. The CNA provides storage resources for the VMs.

FusionCompute IP IP address of FusionCompute used for creating a VM.

FusionCompute Login User Username for logging in to FusionCompute.

FusionCompute Login Password Password for logging in to FusionCompute.

VM Name Name of the UltraVR VM created on FusionCompute.

Management IP Address Management IP address of the UltraVR VM.

Subnet Mask Subnet mask of the management plane.

Default Gateway Default gateway of the management plane.

If you use Export Parameters to export an XLSX file, you can operate or view the file only in Office 2007 or later version.

11. Configure eBackup parameters.


On the Online Modification Configuration tab page, click Add eBackup in the eBackup Parameter Configuration area. On the Add
eBackup page in the right pane, set related parameters.

Table 6 eBackup parameters

Parameter Description

eBackup Template File Directory Directory where the downloaded eBackup template file and signature file are stored. If the software package is
not downloaded, download it as instructed in eBackup Software Package .

Owning CNA Name of VM Template Name of the CNA to which the template VM is bound when the template VM is created. The CNA provides storage resources for the VM.

FusionCompute IP IP address of FusionCompute used for creating an eBackup VM.

FusionCompute Login User Username for logging in to FusionCompute.

FusionCompute Login Password Password for logging in to FusionCompute.

Add VM You can click Add VM to add a VM.

Server Role Specifies the role for VM initialization. Only backup servers or backup proxies are supported.
Backup Server
Backup Proxy
NOTE:

If two VMs are deployed at the same time, only one backup server and one backup proxy are supported.

Owning CNA Name of VM Name of the CNA to which the eBackup VM is bound when the eBackup VM is created. The CNA provides
storage resources for the VM.

msuser Password Password of the interface interconnection user (msuser by default). The password is used for VM initialization.

hcp Password Password of the SSH connection user so that the password can be automatically changed during VM
initialization.

Internal Communication Plane IP Address of Backup Server Management IP address of the backup server connected to the backup proxy. This parameter can be configured when Server Role is set to Backup Proxy.

hcp Password of Backup Server Password of the hcp account of the backup server connected to the backup proxy.
This parameter can be configured when Server Role is set to Backup Proxy.

Backup Server root Password Password of the root account of the backup server connected to the backup proxy. If the password is not
changed, you can view the default password in the eBackup account list.
This parameter can be configured when Server Role is set to Backup Proxy.

Production/Backup Management Plane Parameters
NOTE:
If you want to combine this plane with another plane, click Use other plane parameters to select a plane. The values of IP Address and Subnet Mask will be changed to those of the selected plane by default.

IP Address Management IP address of the production/backup management plane. The IP address must be in the same
network segment as the management plane of FusionCompute.

Subnet Mask Subnet mask of the production/backup management plane.

Default Gateway Default gateway of the production/backup management plane.

Internal Communication Plane Parameters
NOTE:
If you want to combine this plane with another plane, click Use other plane parameters to select a plane. The values of IP Address, Subnet Mask, and DVS Port Group Name will be changed to those of the selected plane by default.

IP Address Management IP address of the internal communication plane. You are advised to use an IP address that is in the
same network segment as the management plane of FusionCompute.

Floating IP address Floating IP address of the internal communication plane. The value must be the same when multiple VMs are
deployed. You are advised to use an IP address that is in the same network segment as the management plane of
FusionCompute.

Subnet Mask Subnet mask of the internal communication plane.

127.0.0.1:51299/icslite/print/pages/resource/print.do? 217/488
15/07/2025, 11:19 127.0.0.1:51299/icslite/print/pages/resource/print.do?

DVS Port Group Name Port group name of a DVS.


If the FusionCompute management plane is selected, set this parameter to managePortgroup.
If an independent network plane is planned, you need to manually create DVSs and port groups as planned after
FusionCompute is deployed. In this case, you do not need to set this parameter. When deploying eBackup, set
the port group name as prompted.

IPv4 Route Information Route information of the network plane. Configure the information based on the network plan.

You can click the corresponding button to configure IPv4Destination Network, IPv4Destination Network Mask, and IPv4Route Gateway.

Production Storage Plane Parameters; Backup Storage Plane Parameters
NOTE:
If you want to combine this plane with another plane, click Use other plane parameters to select a plane. The values of IP Address, Subnet Mask, and DVS Port Group Name will be changed to those of the selected plane by default.

IP Address Management IP address of the production storage plane or backup storage plane. You are advised to use an
independent network plane.

Subnet Mask Subnet mask of the production storage plane or backup storage plane.

DVS Port Group Name Port group name of a DVS.


If the FusionCompute management plane is selected, set this parameter to managePortgroup.
If an independent network plane is planned, you need to manually create DVSs and port groups as planned after
FusionCompute is deployed. In this case, you do not need to set this parameter. When deploying eBackup, set
the port group name as prompted.

IPv4 Route Information Route information of the network plane. Configure the information based on the network plan.

You can click the corresponding button to configure IPv4Destination Network, IPv4Destination Network Mask, and IPv4Route Gateway.

If network planes are combined based on the eBackup network plane planning and the DHCP service is deployed on the management plane, delete
unnecessary NICs by following instructions provided in "Deleting a NIC" in FusionCompute 8.8.0 Product Documentation. Otherwise, the eBackup service
may be inaccessible.

12. Set eDataInsight parameters.

Import configurations using an EXCEL file. If multiple components are deployed together, on the Parameter Configuration page,
click the Excel Import Configuration tab and click Download File Template. If eDataInsight or HiCloud is deployed
independently, on the Advanced Service Configuration page, click Download File Template. After filling in the template, import
the parameter file. If the import fails, check the parameter file. If the parameters are imported successfully, you can view the
imported parameters on the Advanced Service Configuration page.

a. In the storage-compute decoupled scenario, the eDataInsight management plane and the OceanStor Pacific HDFS management plane need to communicate with each other, and the eDataInsight service plane and the OceanStor Pacific HDFS service plane need to communicate with each other.
b. The floating IP address of the OceanStor Pacific HDFS management plane is the IP address for logging in to the OceanStor Pacific HDFS web
UI and is on the same management plane as the management IP address.
c. The IP address of the DNS server of OceanStor Pacific HDFS belongs to the service plane and needs to communicate with the eDataInsight
service plane.
d. If OceanStor Pacific HDFS is not reinstalled when eDataInsight is reinstalled, you need to delete related data by referring to section "Deleting
Residual Historical HBase Data from OceanStor Pacific HDFS" in the eDataInsight Product Documentation of the corresponding version
before the reinstallation. Otherwise, HBase component usage will be affected after the reinstallation.
e. If there is a VM cluster with correct specifications that is created using the eDataInsight image, the VM cluster can be used to install and
deploy eDataInsight. VM Deploy Type must be set to Manual Deploy.
f. The value of eDataInsight Deployment Scenarios determines an installation mode:
If the value is custom, the custom installation mode is used. You can choose to install NdpYarn, NdpHDFS, NdpSpark,
NdpKafka, NdpHive, NdpHBase, NdpRanger, NdpFlume, NdpFTP, NdpFlink, NdpES, NdpRedis, NdpClickHouse, and
NdpStarRocks as needed. NdpDiagnos, NdpKerberos, NdpStatus, NdpTool, NdpZooKeeper, and NdpLDAP will be installed
by default.
If the value is hadoop, the Hadoop analysis cluster installation mode is used. NdpFlume, NdpHBase, NdpHDFS, NdpHive,
NdpSpark, NdpYarn, NdpFlink, NdpRanger, NdpDiagnos, NdpTool, NdpKerberos, NdpStatus, NdpZooKeeper, and
NdpLDAP will be installed by default and cannot be modified. If there is an NDP node specified for installing NdpHDFS,
the data disk size of the node must be at least 500 GB, and the data disk size of other nodes must be at least 100 GB. If there
is no node specified for installing NdpHDFS, the data disk size of all nodes must be at least 500 GB.

If the value is clickhouse, the ClickHouse analysis cluster installation mode is used. NdpHDFS, NdpFlume,
NdpClickHouse, NdpDiagnos, NdpStatus, NdpTool, NdpKerberos, NdpZooKeeper, and NdpLDAP will be installed by
default and cannot be modified. If there is an NDP node specified for installing NdpHDFS or NdpClickHouse, the data disk
size of the node must be at least 500 GB, and the data disk size of other nodes must be at least 100 GB. If there is no node
specified for installing NdpHDFS or NdpClickHouse, the data disk size of all nodes must be at least 500 GB.
If one or more of NdpSpark, NdpFlink, and NdpHive are installed, NdpHudi will be installed by default.

Table 7 eDataInsight parameters

Parameter Description

FusionCompute IP This parameter is mandatory.


In the separated deployment scenario, enter the management IP address of FusionCompute.

FusionCompute login user This parameter is mandatory.


In the separated deployment scenario, enter the FusionCompute administrator account, for example, admin or sysadmin.

FusionCompute login password This parameter is mandatory.
In the separated deployment scenario, enter the password of the FusionCompute administrator. The password can contain 8 to 64 characters and must contain uppercase letters, lowercase letters, digits, and special characters ~@#^*-_+[{}]:./?%=!. A character can be repeated a maximum of three times. The password cannot contain consecutive identical characters. The password cannot be the same as its reverse, regardless of letter case. (A basic check sketch is provided after this table.)

Language This parameter is mandatory.


Language used for the deployment.

Floating IP This parameter is mandatory.


IP address provided by the eDataInsight service that is accessible to external networks.

Floating gateway This parameter is mandatory.


Gateway provided by the eDataInsight service that is accessible to external networks.

Floating Subnet Mask This parameter is mandatory.


Subnet mask provided by the eDataInsight service that is accessible to external networks.

sopuser password This parameter is mandatory.


Password of a common user. This user is used for remote login in SSH mode and to transfer files based on SFTP.

ossadm password This parameter is mandatory.


Password of a management and maintenance user of the management plane. This user is used to install, start, stop, and
manage the management plane.

ossuser password This parameter is mandatory.


Password of a common user. This user is used to run and start services on the service plane.

ftpuser password This parameter is mandatory.


Password of a common user. This user is used to transfer files based on SFTP. Remote login in SSH mode is disabled for this
user.

redisDbUser password This parameter is mandatory.


Password of the Redis database administrator.

zenithSys password This parameter is mandatory.


Password of the Gauss database administrator. This user is used to modify database configurations, add, delete, modify, and
query databases and database users, and change user passwords. This user can only be used for local login.

adminWebService password This parameter is mandatory.
Password of a service plane administrator. This user is used for web login authentication and service plane operations.

adminWebControl password This parameter is mandatory.
Password of a management plane administrator. This user is used for web login authentication and management plane operations.

ldapManager password This parameter is mandatory.


Password of the LDAP management user of the NDP component.

kerberosManager password This parameter is mandatory.
Password of the Kerberos management user of the NDP component.

root password This parameter is mandatory.


Password of user root during the deployment of a VM.


allOsDefault password This parameter is mandatory.


Password of the default user of all hosts.

eDataInsight CNA name of the VM Template This parameter is mandatory.
Name of the CNA to which the eDataInsight VM template belongs, i.e., name of the CNA where the template resides after being uploaded to FusionCompute.

eDataInsight datastore name of the VM template This parameter is mandatory.
Name of the shared storage to which the eDataInsight VM template belongs, which is used when the template is uploaded to FusionCompute. If the shared storage of the VM template is of the NoF type, the shared storage of the VM must also be of the NoF type.

eDataInsight software package address This parameter is mandatory.
eDataInsight software package address, i.e., address of a local eDataInsight software package.

eDataInsight Deployment Scenarios This parameter is mandatory.
eDataInsight deployment scenarios: custom/clickhouse/hadoop
custom: customized components are to be deployed.
clickhouse: ClickHouse cluster components are to be deployed.
hadoop: Hadoop cluster components are to be deployed.

VM Deploy Type This parameter is mandatory.


By default, Tool Deploy is selected, which indicates that the tool will provision eDataInsight VMs during the deployment process. If you select Manual Deploy, the VM provisioning step will be skipped during the deployment process because available VMs are assumed to already exist in the environment.
CAUTION:

If you select Manual Deploy, the VM manually provisioned must meet the following requirements:
The VM template must be provided in the eDataInsight_24.0.0_DayuImage_Euler-x86_64.zip (x86 architecture) or
eDataInsight_24.0.0_DayuImage_Euler-aarch64.zip (Arm architecture) software package.
IP addresses must be configured in a specified sequence. That is, the service plane port group and service plane IP address must be
configured for the network port of the first NIC of the VM.
After the VM is provisioned, delete unnecessary NICs, create and attach required disks, and then start the VM.

HDFS Storage and Compute ID This parameter is mandatory.
ISAC: storage-compute coupled; SACS: storage-compute decoupled.

DNS server address of the Pacific HDFS This parameter is mandatory for storage-compute decoupled.
Address of the OceanStor Pacific domain name server, which is used to convert a domain name to an IP address.

IP address of the DNS server of the Pacific HDFS This parameter is mandatory for storage-compute decoupled.
IP address of the OceanStor Pacific domain name server.

Pacific HDFS management IP address This parameter is mandatory for storage-compute decoupled.
IP address of the OceanStor Pacific HDFS management node, which is used for permission configuration and mutual trust of certificates.

Username for logging in to the Pacific HDFS This parameter is mandatory for storage-compute decoupled.
Username for logging in to OceanStor Pacific HDFS.

Password for logging in to the Pacific HDFS This parameter is mandatory for storage-compute decoupled.
Password for logging in to OceanStor Pacific HDFS.

Floating IP address of the Pacific HDFS management plane This parameter is mandatory for storage-compute decoupled.
Floating IP address of the management plane of OceanStor Pacific HDFS, which is used to access the distributed file system.
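The following is a minimal pre-check sketch for the FusionCompute administrator password rules described in the FusionCompute login password row of Table 7. It only covers the length and character-class rules; the repetition, consecutive-character, and reverse-string rules still need a manual check. The snippet assumes a local bash shell, and PASS is a hypothetical placeholder rather than a real credential.

# Minimal sketch: verify length (8-64) and the four required character classes.
# PASS is a placeholder; do not store real credentials in scripts.
PASS='ExamplePwd#2025'
LEN=${#PASS}
if [ "$LEN" -ge 8 ] && [ "$LEN" -le 64 ] \
   && printf '%s' "$PASS" | grep -q '[A-Z]' \
   && printf '%s' "$PASS" | grep -q '[a-z]' \
   && printf '%s' "$PASS" | grep -q '[0-9]' \
   && printf '%s' "$PASS" | grep -q '[]~@#^*_+[{}:./?%=!-]'; then
    echo "Basic length and character-class rules met; check the repetition and reverse rules manually."
else
    echo "Password does not meet the basic rules."
fi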

Table 8 eDataInsight VM plane parameters

Parameter Description

VM Type The eDataInsight platform uses two types of VMs: CloudSOP (management VMs) and computeNode (service VMs).

VM Host Name 1. There are only three CloudSOP VMs. The VM names must be cloudsop followed by a digit (for example, cloudsop1 to cloudsop3) and should not be changed.
2. The number of computeNode VMs is N (N can be 3, 4, ...). The VM names must be ndp followed by a number (starting from 1 and consecutive, for example, ndp1, ndp2, ndp3) and should not be changed.
3. You do not need to configure the management plane IP address for computeNode VMs.

CNA Name Set the owning CNA.


Management plane IP This parameter needs to be configured only for CloudSOP VMs.

Custom Components Indicates the node where a component is to be installed.
NOTE:
In the storage-compute coupled scenario, if one of the NdpHDFS, NdpClickHouse, NdpES, and NdpStarRocks components is deployed, the minimum size of the data disk on the node must be 500 GB. Otherwise, the minimum size of the data disk must be 100 GB.
In the storage-compute decoupled scenario, if one of the NdpClickHouse, NdpES, and NdpStarRocks components is deployed, the minimum size of the data disk on the node must be 500 GB. Otherwise, the minimum size of the data disk must be 100 GB.

CPU The minimum number of CPUs of a CloudSOP VM is 8.


The minimum number of CPUs of a computeNode VM is 16.
NOTICE:
StarRocks has restrictions on the CPU instruction set. If StarRocks is selected for eDataInsight installation and the node where StarRocks is installed does not support the corresponding instruction set, the eDataInsight installation will fail in the environment check step.
The restrictions are as follows:
The x86 CPU must support the AVX2 instruction set. Run the following command to check whether the instruction set is supported:
cat /proc/cpuinfo | grep avx2
If there is a return value in the command output, the AVX2 instruction set is supported.
The Arm CPU must support the ASIMD instruction set. Run the following command to check whether the instruction set is supported:
cat /proc/cpuinfo | grep asimd
If there is a return value in the command output, the ASIMD instruction set is supported.
Huawei Kunpeng 916 and Kunpeng 920 support StarRocks.
If you use a VM and the CPU supports the corresponding instruction set, but the command does not find it, check whether the virtualization platform reduces the CPU instruction set types. (A consolidated check sketch is provided after this table.)

Memory (GB) The minimum memory size of a CloudSOP VM is 32 GB.


The minimum memory size of a computeNode VM is 64 GB.

Service Disk (GB) The minimum service disk size of a CloudSOP VM is 200 GB.
The minimum service disk size of a computeNode VM is 200 GB.

Data Disk (GB) No data disk is mounted to a CloudSOP VM (set this parameter to 0) by default. The minimum size of the data disk of a
computeNode VM is 100 GB.

Service plane IP Set the IP address of the service plane.


This IP address is used for service communication between eDataInsight components. This IP address needs to communicate with
the service plane of OceanStor Pacific HDFS in the storage-compute decoupled scenario.

Service plane subnet mask Set the subnet mask of the node.

Service plane port group Set the port group of the service plane. This port group cannot be the same as that of the management plane.

Shared Storage name Name of the shared storage provided by FusionCompute.
NOTE:
If you want to enter the name of a shared storage of the NoF type, ensure that the 1 GB hugepage memory has been enabled for the CNA node to which the shared storage belongs.
If you want to use shared storage of the NoF type on a VM, ensure that the shared storage to which the VM template belongs is also of the NoF type.
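As a convenience, the two instruction-set checks from the NOTICE in the CPU row above can be combined into a single pre-check. This is only a sketch assuming shell access to each candidate NDP node; it reuses the /proc/cpuinfo checks already described and adds no new requirements.

# Combined pre-check before assigning NdpStarRocks to a node (run on each candidate node).
if grep -q avx2 /proc/cpuinfo; then
    echo "x86 node: AVX2 supported, NdpStarRocks can be installed here."
elif grep -q asimd /proc/cpuinfo; then
    echo "Arm node: ASIMD supported, NdpStarRocks can be installed here."
else
    echo "Neither AVX2 nor ASIMD found: do not assign NdpStarRocks to this node."
fi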

Complete configurations on the eDataInsight VM Plane List sheet by referring to Table 9.

Table 9 eDataInsight VM plane configuration example

VM Host Name | CNA Name | VM Type | Management Plane IP | Custom Component | CPU | Memory (GB) | Service Disk (GB) | Data Disk (GB) | Service Plane IP | Service Plane Subnet Mask | Service Plane Port Group | Shared Storage Name

cloudsop1 | linux-CNA01 | cloudSop | 51.x.x.61 | - | 8 | 32 | 200 | 0 | 192.168.6.62 | 255.255.255.0 | yewu | NoF

cloudsop2 | linux-CNA02 | cloudSop | 51.x.x.63 | - | 8 | 32 | 200 | 0 | 192.168.6.64 | 255.255.255.0 | yewu | NoF

cloudsop3 | linux-CNA03 | cloudSop | 51.x.x.65 | - | 8 | 32 | 200 | 0 | 192.168.6.66 | 255.255.255.0 | yewu | NoF


ndp1 | linux-CNA01 | computeNode | - | NdpStarRocks | 16 | 64 | 200 | 500 | 192.168.6.67 | 255.255.255.0 | yewu | NoF

ndp2 | linux-CNA02 | computeNode | - | NdpStarRocks | 16 | 64 | 200 | 500 | 192.168.6.68 | 255.255.255.0 | yewu | NoF

ndp3 | linux-CNA03 | computeNode | - | NdpStarRocks | 16 | 64 | 200 | 500 | 192.168.6.69 | 255.255.255.0 | yewu | NoF

Table 10 eDataInsight components

Component | Value of Install or Not | Component Information

NdpHBase - A column-oriented, high-performance, massive-scale, real-time, scale-out storage service.

NdpYarn - A service that manages and allocates cluster resources in a unified manner.

NdpHDFS - Provides the Hadoop distributed file system service.

NdpRanger - Provides Ranger authentication for the NDP component.

NdpSpark - Provides memory-based distributed computing services.

NdpFlume - Provides a distributed service for collecting, aggregating, and moving massive log data.

NdpFTP - Provides FTP-based access to HDFS.

NdpKafka - A distributed, partitioned, and replicated publish-subscribe messaging system.

NdpHive - Data warehouse service based on Hadoop.

NdpFlink - Provides a distributed computing framework oriented to stream processing and batch processing.

NdpRedis - High-performance, real-time KV datastore.

NdpClickHouse - Columnar database management system (DBMS) for online analytical processing (OLAP).

NdpES - Provides the full-text search service for eDataInsight.

NdpStarRocks - Uses StarRocks to implement a high-performance interactive analytical database with all-scenario massively
parallel processing (MPP).

13. Configure HiCloud parameters.


Import configurations using an EXCEL file. If multiple components are deployed together, on the Parameter Configuration page, click the
Excel Import Configuration tab and click Download File Template. If eDataInsight or HiCloud is deployed independently, on the
Advanced Service Configuration page, click Download File Template. After filling in the template, import the parameter file. If the import
fails, check the parameter file. If the parameters are imported successfully, you can view the imported parameters on the Advanced Service
Configuration page.

a. Configure parameters on the HiCloud Parameters sheet.

Configure related parameters based on Table 11. Retain the default values for the parameters that are not described in the table.

Table 11 Parameters on the HiCloud Parameters sheet

Category Parameter Description Mandatory Example Value

Virtualization FusionCompute IP In separated deployment mode, enter the management IP address Mandatory in 192.168.*.166
Basic of FusionCompute. separated
Information deployment
mode

FusionCompute login In separated deployment mode, enter the administrator account Mandatory in admin
user of FusionCompute. separated
deployment
mode


FusionCompute login In separated deployment mode, enter the password of an Mandatory in Huawei12!@
password administrator account of FusionCompute. separated
deployment
mode

General Parameters | NTP IP address of the PaaS management plane | NTP IP address of the PaaS management zone, which is used to synchronize the clock source of the time zone. If there are multiple NTP IP addresses, separate them with commas (,) and ensure that the time of the multiple clock sources is the same. | No | -
NOTE:
Before setting this parameter, ensure that the IP address of the NTP clock source is correct and available. If the configuration is incorrect, the installation will fail. You can also leave this parameter blank. After the installation is successful, configure NTP by referring to Common Operations > Modifying NTP Configurations in the Installation Guide. (A reachability check sketch is provided after this table.)

Software package path Local path for storing obtained software packages. Yes D:\Hicloud\package

GDE op_svc_servicestage Password of the op_svc_servicestage tenant of the management Yes cnp2024@HW
Management tenant password zone. This parameter can be customized.
Plane Password NOTE:

In addition to the password rules specified in the template, the


password cannot contain three or more consecutive identical or
adjacent characters (such as abc and 123).

op_svc_cfe tenant Password of the op_svc_cfe tenant of the management zone. Yes cnp2024@HW
password This parameter can be customized.
NOTE:

In addition to the password rules specified in the template, the


password cannot contain three or more consecutive identical or
adjacent characters (such as abc and 123).

op_svc_pom tenant Password of the op_svc_pom tenant of the management zone. Yes cnp2024@HW
password This parameter can be customized.
NOTE:

In addition to the password rules specified in the template, the


password cannot contain three or more consecutive identical or
adjacent characters (such as abc and 123).

tenant password Password of a tenant. The value must be the same as the Yes cnp2024@HW
password of the op_svc_cfe tenant of the management zone. This
is the confirmation password of the tenant.

install_op_svc_cfe user Password of the installation user of the tenant. This parameter Yes cnp2024@HW
password of tenant can be customized.
NOTE:

In addition to the password rules specified in the template, the


password cannot contain three or more consecutive identical or
adjacent characters (such as abc and 123).

Password of the paas Password of the paas user of the management zone node. Yes Image0@Huawei123
user of the management The password is preset in the image. The value is fixed at
zone node Image0@Huawei123 and does not need to be changed.

sshusr password of Password of the sshusr user of the management zone node. Yes Image0@Huawei123
management node The password is preset in the image. The value is fixed at
Image0@Huawei123 and does not need to be changed.

root password of Password of the root user of the management zone node. Yes Image0@Huawei123
management node The password is preset in the image. The value is fixed at
Image0@Huawei123 and does not need to be changed.

paas password of data Password of the paas user of the data zone node. Yes Image0@Huawei123
node The password is preset in the image. The value is fixed at
Image0@Huawei123 and does not need to be changed.

root password of data Password of the root user of the data zone node. Yes Image0@Huawei123
node The password is preset in the image. The value is fixed at
Image0@Huawei123 and does not need to be changed.


GDE Data Plane system administrator Password of a system administrator. This parameter can be Yes cnp2024@HW
Password admin password customized.

security administrator Password of the security administrator. This parameter can be Yes cnp2024@HW
secadmin password customized.

security auditor Password of the security auditor. This parameter can be Yes cnp2024@HW
secauditor password customized.

GDE Network DVS port group name Name of the distributed virtual switch (DVS) port group in Yes managePortgroup
Plane FusionCompute Manager. The default name is
Configuration managePortgroup.
To query the name, perform the following steps:
Log in to FusionCompute Manager as the admin user.
Choose Resource Pool > DVS > Port Group.
Check the port group name, which is the required name.

Default gateway Default gateway of FusionCompute Manager. Yes 192.168.*.11


To query the gateway, perform the following steps:
Log in to FusionCompute Manager as the admin user.
Choose Resource Pool and find the VM named VRM02.
Click the Configuration tab, and then click NICs.
Click the NIC name and view the value of Default gateway,
which is the required value.

Subnet mask Subnet mask of FusionCompute Manager. Yes 255.255.224.0


To query the mask, perform the following steps:
Log in to FusionCompute Manager as the admin user.
Choose Resource Pool and find the VM named VRM02.
Click the Configuration tab, and then click NICs.
Click the NIC name, find the record whose IPv4 Destination
Network is the default gateway, and view the value of IPv4
Destination Network Mask, which is the required value.

Floating IP of Floating IP address of the management zone VM. Yes 192.168.*.15


management plane

Keepalived VIP (1) of keepalived VIP addresses of VMs in the data zone, each of which Yes 192.168.*.16
data zone must be unique.

Keepalived VIP (2) of keepalived VIP addresses of VMs in the data zone, each of which Yes 192.168.*.17
data zone must be unique.

Keepalived VIP (3) of keepalived VIP addresses of VMs in the data zone, each of which Yes 192.168.*.18
data zone must be unique.

Gaussdb VIP of data GaussDB VIP address of VMs in the data zone. Yes 192.168.*.19
zone

Service Machine account Machine-machine account password. This parameter can be Yes Changeme_123@
Deployment password customized.
Parameters
eDME OC plane IP IP address of the eDME O&M portal. No 192.168.*.250

eDME SC plane IP IP address of the eDME operation portal No 192.168.*.251
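Before importing the HiCloud parameters, you can optionally confirm that the planned NTP clock source is reachable, since an unavailable NTP IP address causes the installation to fail (see the NTP parameter note in this table). The sketch below assumes the ntpdate client is available on the host used for the check and that 192.168.0.100 is a placeholder for your planned NTP IP address; if ntpdate is not installed, use an equivalent NTP query tool.

# Query (without setting the clock) the planned NTP clock source; the IP address is a placeholder.
ntpdate -q 192.168.0.100
# A reply that lists the server with a stratum value indicates the clock source is reachable.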

b. Configure parameters on the HiCloud VM Parameters sheet.


Complete configurations on this sheet based on the IP addresses in Table 12.

Table 12 Parameters on the HiCloud VM Parameters sheet

Parameter Host Name Example value

Management IP Address paas-core 192.168.*.190

platform-node1 192.168.*.191


platform-node2 192.168.*.192

platform-node3 192.168.*.193

gkit 192.168.*.194

Owning CNA Name paas-core MCNA01

platform-node1 MCNA01

platform-node2 MCNA02

platform-node3 MCNA02

gkit MCNA02

Datastore Name paas-core HCI_StoragePool0

platform-node1 HCI_StoragePool0

platform-node2 HCI_StoragePool0

platform-node3 HCI_StoragePool0

gkit autoDS_MCNA02

Table 13 Networking plan

Host Name | VM Type | Management Zone IP Address | Floating IP Address of Management Zone | Keepalived VIP Address of Data Zone (1) | Keepalived VIP Address of Data Zone (2) | Keepalived VIP Address of Data Zone (3) | GaussDB VIP Address of Data Zone

paas-core | Management zone | 192.168.**.177 | 192.168.**.200 | / | / | / | /

platform-node1 | Data zone | 192.168.**.178 | / | 192.168.**.201 | 192.168.**.202 | 192.168.**.203 | 192.168.**.210

platform-node2 | Data zone | 192.168.**.179 | / | 192.168.**.201 | 192.168.**.202 | 192.168.**.203 | /

platform-node3 | Data zone | 192.168.**.180 | / | / | / | / | 192.168.**.210

GKit | Management zone | 192.168.**.240 | / | / | / | / | /

All types of VIP addresses must be planned on two nodes, and the VIP addresses of the same type must be the same.
The slash (/) in the table indicates that networking planning is not involved.
IP addresses in the table are only examples. Actual IP addresses may vary.

c. After the parameters are imported successfully, click Next.

14. Configure SFS parameters.


Import configurations using an EXCEL file. If multiple components are deployed together, on the Parameter Configuration page, click the
Excel Import Configuration tab and click Download File Template. If SFS is deployed independently, on the Advanced Service
Configuration page, click Download File Template. After filling in the template, import the parameter file. If the import fails, check the
parameter file. If the parameters are imported successfully, you can view the imported parameters on the Advanced Service Configuration
page.
Set the parameters listed in Table 14 and Table 15.

Table 14 SFS parameters

Category Parameter Description


Virtualization FusionCompute IP Floating IP address of FusionCompute.


basic information
FusionCompute Login Username for logging in to FusionCompute.
User

FusionCompute Login Password for logging in to FusionCompute.


Password

General parameter Template File Local path for storing the template file of the SFS VM. If the software package is not downloaded,
download it as instructed in SFS Software Package .

Owning CNA of Name of the CNA to which the SFS VM template is uploaded.
Template

Owning Datastore of Name of the datastore where the SFS VM template resides after being uploaded to FusionCompute.
Template

Template Default Default password of user root.


Password NOTE:

Obtain the default password of account root from the "Type A (Background)" sheet in STaaS Solution 8.5.0
Account List (for DME and eDME). After the installation is complete, the password of account root is
automatically updated to the password of user root on the management plane.

SFS management Language OS language of the management plane VM, which must be the same as the language configured on eDME.
configuration
Floating IP Address of Floating IP address of the GaussDB database of the management plane VM. It is recommended that the IP
GaussDB address be the next consecutive IP address after the IP addresses of SFS VMs. For example, if the IP
addresses of SFS_DJ are 192.168.1.10 and 192.168.1.11, you are advised to set the IP address to
192.168.1.12.

Floating IP Address of Floating IP address of the management plane VM. It is recommended that the IP address be the next
Management Portal consecutive IP address after the floating IP address of the GaussDB database. For example, if the floating
IP address of the GaussDB database is 192.168.1.12, you are advised to set this IP address to 192.168.1.13.

Default Gateway Default gateway of the FusionCompute management plane.

Subnet Mask Subnet mask of the FusionCompute management plane.

eDME Operation Portal IP address used by SFS to interconnect with the eDME operation portal.
IP Address

SFS management password | root Password of Management Portal | Password of user root on the management plane.

djmanager Password of Management Portal | Password of user djmanager on the management plane.

SFS-DJ Management Password | SFS-DJ management password.
NOTE:
It is used as the password of the sfs_dj_admin account to log in to the SFS-DJ foreground management page, and as the password of machine-machine accounts (such as op_svc_sfs or op_service_sfs) for connecting the SFS-DJ background to other services.

Table 15 SFS VM parameters

Parameter Description

VM Host Name Host name of the SFS VM.


NOTE:

Two-node deployment. SFS needs to be deployed on two nodes simultaneously. You are advised to deploy SFS on different CNA
nodes.

Management IP Management IP address of the VM.


Address

Owning CNA Name of the CNA to which the SFS VM template is uploaded.

Datastore Name of the datastore associated with the CNA node.


NOTE:
If shared storage has been connected to FusionCompute, set Datastore to the name of the datastore associated with FusionCompute.
If shared storage has not been connected to FusionCompute, set Datastore to the name of the datastore planned for FusionCompute.


CPU The minimum number of CPUs is 8.

Memory (GB) The minimum memory size is 8 GB.

System Disk (GB) The minimum system disk size is 240 GB.

15. Configure eCampusCore parameters.


Import configurations using an EXCEL file. If multiple components are deployed together, on the Parameter Configuration page, click the
Excel Import Configuration tab and click Download File Template. If eCampusCore is deployed independently, on the Advanced Service
Configuration page, click Download File Template. After filling in the template, import the parameter file. If the import fails, check the
parameter file. If the parameters are imported successfully, you can view the imported parameters on the Advanced Service Configuration
page.
Set the parameters listed in Table 16 and Table 17.

Table 16 eCampusCore parameters

Category Parameter Description

Virtualization Basic FusionCompute IP Floating IP address of FusionCompute.


Information
FusionCompute login user FusionCompute user used by SmartKit to create VMs.
The FusionCompute user must be admin or a user associated with the administrator role.
Type: Select Local user.
Permission Type: Select System administrator.

FusionCompute login Password of the FusionCompute user.


password

Parameters required for software package path Local path for storing VM templates, certificates, and software packages, for example,
deploying the eCampusCore D:\packages. The following files must be included:
basic edition
Obtained software packages and ASC verification files.
VM template. The template file does not need to be decompressed.
Obtained eDME and FusionCompute certificates.

Owning CNA of Template CNA to which the VM template belongs. The template is uploaded to the CNA where
FusionCompute is installed.

Owning Datastore of Data storage to which the eCampusCore VM template belongs. The template is uploaded to
Template the FusionCompute data storage.
NOTE:
If FusionCompute has been interconnected with shared datastore, set this parameter to the
name of the datastore.
If FusionCompute has not been interconnected with shared datastore, set this parameter to the
name of the planned datastore.

Template Default Password Preset password of the template file. Set this parameter to Huawei@12F3.

Password of the O&M Password of the O&M management console and database, password of the admin user of
management console and the eCampusCore O&M management console, password of the sysadmin user of the
database database, password of the eDME image repository, and machine-machine account
password.

Password used in SNMPv3 SNMPv3 authorization password used by service components. The verification rules are
authentication the same as those of the password of O&M management console and database, but the two
passwords must be different.

Upper-Level DNS Address You do not need to set this parameter.

FusionCompute eContainer Start IP address of the FusionCompute management subnet in the eContainer cluster. An
cluster master node manage example is 10.168.100.2.
start IP

FusionCompute eContainer End IP address of the FusionCompute management subnet in the eContainer cluster. An
cluster master node manage example is 10.168.100.6.
end IP

Floating IP address of the VIP of the FusionCompute management subnet segment of the internal gateway.
internal gateway


GaussV5 floating IP Database VIP.

NFS floating IP NFS VIP, which is used for internal access and NFS configuration.

Configuring IaaS Subnet Mask Subnet mask of the FusionCompute management subnet. An example is 255.255.255.0.
Information of the
Integration Framework Management Plane Port Subnet port group name. Reuse the FusionCompute management network port group and
Group Name set this parameter to managePortgroup.
To obtain the port group information, log in to FusionCompute and choose Resource Pool
> Network. On the page that is displayed, click the ManagementDVS switch to view
information on the Port Groups tab page.

Management Plane Gateway Gateway address of the FusionCompute management subnet. Set this parameter based on
the services and management network segments.

Interface Interconnection Username / Password of the interface interconnection user User and password used by eCampusCore to call FusionCompute interfaces to complete deployment tasks. Set these parameters to the user and password configured during OpsMon user configuration.

Configuring Global VM root password Password of the root user of the VM to be applied for, which cannot be the same as
Parameters of the Template Default Password.
Integration Framework
sysomc user password Password of the sysomc user created on the VM during installation.

Language Language of the VM on the management plane. Set this parameter the same as the current
language configured on the eDME.

Time Zone/Region Region of the time zone to which the deployment environment belongs.

Time Zone/Area Region of the time zone to which the deployment environment belongs.

NTP IP IP address of the NTP server.


If an external NTP service has been configured for the project, you can log in to
FusionCompute and choose System Management > System Configuration > Time
Management to obtain the address.
If the project does not provide the external NTP service, set this parameter to the host IP
address, for example, the IP address of CNA01 or CNA02. You can click a host on the
Resource Pool page of FusionCompute to view the host details and obtain the address.

eDME Operation Portal Floating IP address of the eDME operation portal. Set this parameter to the IP address for
Floating IP Address logging in to the eDME operation portal.

Table 17 eCampusCore VM parameters

Parameter Description

VM Host Name Host name of the eCampusCore VM. Retain the default value in the template.

Management IP Address Management IP address of the VM. Set this parameter.

Owning CNA Name CNA where the VM that is automatically created during the installation is located.

Datastore Name Name of the datastore associated with the CNA node.
NOTE:
If FusionCompute has been interconnected with shared datastore, set this parameter to the name of the datastore.
If FusionCompute has not been interconnected with shared datastore, set this parameter to the name of the planned datastore.

CPU CPU specifications. Retain the default value in the template.

Memory (GB) Memory specifications. Retain the default value in the template.

System Disk (GB) System disk specifications. Retain the default value in the template.

Data Disk (GB) Data disk specifications. Retain the default value in the template.

16. Click Next. On the Confirm Parameter Settings page that is displayed, check the configuration information. If the information is correct,
click Deploy Now.


17. Go to the Pre-deployment Check page, perform an automatic check and a manual check, and check whether each check item is passed. If any check item fails, perform operations as prompted to meet the check standard. If all check items are passed, click Execute Task.

18. Go to the Execute Deployment page and check the status of each execution item. After all items are successfully executed, click Finish.

If you use Export Report to export an XLSX file, you can operate or view the file only in Office 2007 or later version.
If the status of the Interconnecting and Configuring Shared Storage execution item is Pause, you need to configure the shared storage.
Click Continue in the Operation column of the Interconnecting and Configuring Shared Storage execution item.
Add shared storage based on the storage type. For details, see Operation and Maintenance > Service Management > Storage
Management in FusionCompute 8.8.0 Product Documentation.
After the shared storage is added, select the added shared storage or local disk from the datastore list to configure the datastore for the
VM.

Before using eVol storage (NVMe), ensure that the available hugepage memory capacity is greater than the memory capacity required for deploying management VMs. Ensure that 1 GB hugepage memory has been enabled on the hosts where the VMs reside (a quick check sketch is provided after these notes).
If an eVol storage device is used, ensure that the same association protocol is used between all associated hosts and the eVol storage device.
If VMs use scale-out storage, disk copy requirements must be met to ensure that the memory reservation is 100%.
When eVol storage or scale-out block storage is used, add virtualized local disks to the CNA nodes where eDME is deployed. For details, see
Operation and Maintenance > Service Management > Storage Resource Creation (for Local Disks) > Adding a Datastore in FusionCompute
8.8.0 Product Documentation. The disk space required by each eDME VM is 2 GB.
If the current deployment task fails and the deployment cannot continue, log in to FusionCompute and manually delete the VMs, VM templates, and
image files created in the current task.
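A quick way to confirm the hugepage prerequisite mentioned in the notes above is to inspect /proc/meminfo on the CNA host. This is a generic Linux check shown only as a sketch; the exact hugepage sizing must still follow the eVol planning rules above.

# Check hugepage configuration on the CNA host (generic Linux check).
grep -i huge /proc/meminfo
# Hugepagesize: 1048576 kB indicates 1 GB hugepages; HugePages_Free shows the remaining capacity.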

19. If the first CNA node fails to be deployed, return to the Site Deployment Delivery page to view the list of supported servers.
If the server is not supported, you need to manually install the host. For details, see "Installing Hosts Using ISO Images (x86)" and "Installing
Hosts Using ISO Images (Arm)" in FusionCompute 8.8.0 Product Documentation.
After the CNA is manually installed, click Skip in the Operation column of the First CNA node deployment execution item on the
installation tool.

20. After FusionCompute is installed, click View Portal Link on the Execute Deployment page to view the FusionCompute address. Click the
FusionCompute address to go to the VRM login page.

After the new FusionCompute environment is installed, if you log in to the environment within 30 minutes, the CNA status is normal, and the alarm ALM-
10.1000027 Heartbeat Communication Between the Host and VRM Interrupted is generated, the alarm will be automatically cleared after 30 minutes.
Otherwise, clear the alarm by following the instructions provided in ALM-10.1000027 Heartbeat Communication Between the Host and VRM
Interrupted in FusionCompute 8.8.0 Product Documentation.

21. After eDME is installed, click View Portal Link on the Execute Deployment page to view the eDME address. Click the eDME address to
go to the login page of the O&M portal or the operation portal. You can log in to the O&M portal and operation portal to check whether the
eDME software is successfully installed. For details, see Verifying the Installation .

3.5.1.3 Initial Configuration After Installation


Configuring Bonding for Host Network Ports

Configuring FusionCompute After Installation

Configuring eDME After Installation

Configuring HiCloud After Installation

Configuring CSHA After Installation

Configuring eCampusCore After Installation

3.5.1.3.1 Configuring Bonding for Host Network Ports


To improve reliability, you are advised to configure bonding for host network ports that are connected to the management plane, service plane, and storage plane. After FusionCompute is installed, the system configures a bond port named Mgnt_Aggr consisting of one network port for the host by default. You need to manually add network ports.

Procedure
Determine the method of binding network ports.

1. Determine the method of binding network ports.

To manually bind network ports one by one, go to 2.


This mode is suitable for a scenario where there are a few hosts and only one or a small number of bound ports are needed for each host.

To bind network ports in batches, go to 10.


This mode is suitable for a scenario where there are a large number of hosts or multiple bound ports are required for each host.

Manually bind network ports one by one.

2. In the navigation pane, click .


The Resource Pool page is displayed.

3. Click the Hosts tab.

4. Click the host where the network port to be bound is located.


The Summary tab page is displayed.

5. On the Configuration tab page, choose Network > Aggregation Port.

6. Click Bind Network Port.


The Bind Network Port page is displayed, as shown in Figure 1.

Figure 1 Binding network ports

7. Set Name and Binding Mode for the network ports.

In active-backup mode, you can specify a primary network port among the selected network ports. If the primary network port has been specified, you
can configure the updelay of the primary network port as prompted.


In load sharing mode, configure port aggregation on the switches connected to the ports so that the host ports to be bound are configured in the same
Eth-trunk as the ports on the peer switches. Otherwise, network communication will be abnormal.
In LACP mode, some switches need to enable the bridge protocol data unit (BPDU) protocol packet forwarding function on the Eth-trunk. For details
about whether to enable the function, see the user guide of the corresponding switch model. If the function needs to be enabled and the switch is
Huawei S5300, run the following commands:
<S5352_01>sys
[S5352_01]interface Eth-Trunk x
[S5352_01-Eth-Trunkx]mode lacp-static
[S5352_01-Eth-Trunkx]bpdu enable

For details about how to configure port aggregation on a switch, see the switch user guide.
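For reference only, the following is a hedged example of static (manual load-balance) port aggregation on a Huawei VRP-based switch, in the same style as the LACP example above. The Eth-Trunk number and GE 0/0/1 to 0/0/2 ports are illustrative placeholders, and the exact commands depend on the switch model, so always follow the user guide of the specific switch.

<Switch>system-view
[Switch]interface Eth-Trunk 1
[Switch-Eth-Trunk1]mode manual load-balance
[Switch-Eth-Trunk1]trunkport GigabitEthernet 0/0/1 to 0/0/2
[Switch-Eth-Trunk1]quit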

The following binding modes are available for common NICs:

Active-backup: applies to scenarios where two network ports are to be bound. This mode provides high reliability. The bandwidth of the bound port in this mode equals that of a member port.

Round-robin: applies to scenarios where two or more network ports are to be bound. The bandwidth of the bound port in this mode is
higher than that of a member port, because the member ports share workloads in sequence.
This mode may result in data packet disorder because traffic is evenly sent to each port. Therefore, MAC address-based load
balancing prevails over Round-robin in load sharing mode.

IP address and port-based load balancing: applies to scenarios where two or more network ports are to be bound. The bandwidth of
the bound port in this mode is higher than that of a member port, because the member ports share workloads based on the IP address and
port-based load sharing algorithm.
Source-destination-port-based load balancing algorithm: When the packets contain IP addresses and ports, the member ports share
workloads based on the source and destination IP addresses, ports, and MAC addresses. When the packets contain only IP addresses, the
member ports share workloads based on the IP addresses and MAC addresses. When the packets contain only MAC addresses, the
member ports share workloads based on the MAC addresses.

MAC address-based load balancing: applies to scenarios where two or more network ports are to be bound. The bandwidth of the
bound port in this mode is higher than that of a member port, because the member ports share workloads based on the MAC addresses
of the source and destination ports.
This mode is recommended when most network traffic is on the layer 2 network. This mode allows network traffic to be evenly
distributed based on MAC addresses.

MAC address-based LACP: This mode is developed based on the MAC address-based load balancing mode. In MAC address-
based LACP mode, the bound port can automatically detect faults on the link layer and trigger a switchover if a link fails.

IP address-based LACP: applies to scenarios where two or more network ports are to be bound. The bandwidth of the bound port in
this mode is higher than that of a member port, because the member ports share workloads based on the source-destination-IP-address-
based load sharing algorithm. When the packets contain IP addresses, the member ports share workloads based on the IP addresses and

127.0.0.1:51299/icslite/print/pages/resource/print.do? 231/488
15/07/2025, 11:19 127.0.0.1:51299/icslite/print/pages/resource/print.do?

MAC addresses. When the packets contain only MAC addresses, the member ports share workloads based on the MAC addresses. In
this mode, the bound port can also automatically detect faults on the link layer and trigger a switchover if a link fails.
This mode is recommended when most network traffic goes across layer 3 networks.

IP address and port-based LACP: applies to scenarios where two or more network ports are to be bound. The bandwidth of the bound
port in this mode is higher than that of a member port, because the member ports share workloads based on the IP address and port-
based load sharing algorithm. In this mode, the bound port can also automatically detect faults on the link layer and trigger a switchover
if a link fails.

(x86 architecture) The following binding modes are available for DPDK-driven physical NICs:

DPDK-driven active/standby: used for the user-mode network port binding. The principle and application scenario are the
same as those of Active-backup for common NICs while this can provide better network packet processing than Active-
backup.

DPDK-driven LACP based on the source and destination MAC addresses: used for user-mode network port binding.
The principle and application scenario are the same as those of MAC address-based LACP for common NICs while this
mode can provide better network packet processing than MAC address-based LACP.

DPDK-driven LACP based on the source and destination IP addresses and ports: used for user-mode network port
binding. The principle and application scenario are the same as those of Source-destination-port-based load balancing
algorithm for common NICs while this mode can provide better network packet processing than Source-destination-port-
based load balancing algorithm.

Only NICs of the same type can be bound to Mellanox MT27712A0 NICs, and a bond supports a maximum of four Mellanox network ports.

8. In the network port list, select the physical network ports to be bound.

You are advised to bind network ports on different NICs to prevent network interruption caused by the fault of a single NIC.

9. Click OK.
The network ports are bound.
To change the binding mode of a bound port, locate the row that contains the bound port, click Modify, and change the binding mode in the
displayed dialog box.

In active-backup binding mode, you can select the primary network port from the Primary Network Port drop-down list. If the primary network port
has been specified, you can configure the updelay of the primary network port as prompted.

Switching between different load sharing modes or between different LACP modes interrupts network communication of the bound network port for 2s
to 3s.

If the binding mode is changed from the active-backup mode to load sharing mode, port aggregation must be configured on the switch to which
network ports are connected. If the binding mode is changed from the load sharing mode to active-backup mode, the port aggregation configured on the
switch must be canceled. Otherwise, the network communication may be abnormal.
If the binding mode is changed from the LACP mode to another mode, port configurations must be modified on the switch to which network ports are connected. If the binding mode is changed from another mode to the LACP mode, port aggregation in LACP mode must be configured on the switch to which network ports are connected. Otherwise, the network communication may be abnormal.
Configuration operations on the switch may interrupt the network communication. After the configurations are complete, the network communication is
automatically restored. If the network communication is not restored, perform either of the following methods:
Ping the destination IP address from the switch to trigger a MAC table update.
Select a member port in port aggregation, disable other ports on the switch, change the binding mode, and enable those ports.
When the resources of the host management domain are fully loaded, the Modify Aggregation Port operation cannot be scheduled due to insufficient
resources. As a result, the network interruption duration is prolonged. You can increase the number of CPUs in the management domain to solve this
problem.

After this step is complete, no further action is required.

Bind network ports in batches.

10. In the navigation pane, click .


The Resource Pool page is displayed.

11. On the Cluster tab page, click the cluster to be bound with network ports in batches.
The Summary tab page is displayed.

12. Choose More > Batch Operation > Batch Bind Network Ports in the upper part of the page.
The Batch Bind Network Ports page is displayed, as shown in Figure 2.

Figure 2 Binding network ports in batches

13. Click Obtain Template for Binding Network Ports in Batches.


The page for downloading templates is displayed.

14. Download the template to the local PC.


The default file name of the template is BindNetworkPort_template.xls.

15. Open the template file and set the network port parameters on the Config sheet based on the information provided on the
Host_BindNetworkPort sheet.
The parameters include Host IP Address, Host ID, Network Port Names, Network Port IDs, Bound Network Port Name, and Binding
Mode.

16. Save and close the template file.

17. On the Batch Bind Network Ports page, click Select File.
A dialog box is displayed.

18. Select the configured template file and click Open.

19. Click OK.


You can click Recent Tasks in the lower left corner to view the task progress.


If a dialog box is displayed, indicating that the operation fails, rectify the fault based on the failure cause and try again.

3.5.1.3.2 Configuring FusionCompute After Installation


For details about how to initialize FusionCompute services, see "Operation and Maintenance > Service Management" in FusionCompute 8.8.0 Product Documentation.
Loading a FusionCompute License File

(Optional) Configuring MAC Address Segments

3.5.1.3.2.1 Loading a FusionCompute License File

Scenarios
This section guides software commissioning engineers to load a license file to a site after FusionCompute is installed so that FusionCompute can
provide licensed services for this site within the specified period.
You can obtain the license using either of the following methods:

Apply for a license based on the electronic serial number (ESN) and load the license file.

Share a license file with another site. When a license is shared, the total number of physical resources (CPUs) and container resources (vCPUs)
at each site cannot exceed the license limit.

Prerequisites
Conditions
You need to obtain the following information before sharing a license file with another site:

If VRM nodes are deployed in active/standby mode at the site, you have obtained the VRM node floating IP address.

Username and password of the FusionCompute administrator of the peer site

Data
Data preparation is not required for this operation.

Procedure
Log in to FusionCompute.

1. Log in to FusionCompute. For details, see Logging In to FusionCompute .

2. In the navigation pane of FusionCompute, click .


The System Management page is displayed.

3. Choose System Management > System Configuration > License Management.


The License Management page is displayed.

4. Click Load License File.


The Load License File page is displayed.

5. Choose a method to load the license file.

To load a new license file, go to 6.

To share a license file with another site, go to 15.

Load a new license file.


6. Select License server and check whether the value of License server IP address is 127.0.0.1.

a. If yes, go to 7.

b. If no, set License server IP address to 127.0.0.1, and click OK. Then, go to 7.

7. Select Independent license.

8. Click Obtain ESN.


Make a note of the displayed ESN.
The ESN may be already displayed on the page before you click Obtain ESN.

9. Apply for a license file based on the obtained ESN.


For details, see FusionCompute 8.8.0 License Delivery Guide. To obtain the guide:

For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it.

For carrier users: Visit https://support.huawei.com , search for the document by name, and download it.

10. Click Select next to Upload path.


A dialog box is displayed.

11. Select the obtained license file and click OK.


The license file is loaded.

Share a license file with another site.

If the VRM version of the license client is FusionCompute 8.8.0, a VRM node in a version earlier than FusionCompute 8.8.0 cannot be used as a license server.

12. Check the client version.

If the version is earlier than 8.3.0, go to 13.

If the version is 8.3.0 or later, go to 14.

13. Run the following command on the VRM node of a later version to transfer the script to the /home/GalaX8800/ directory of the VRM node
of an earlier version. Then, move the script to the /opt/galax/gms/common/modsysinfo/ directory.
scp -o UserknownHostsFile=/dev/null -o StrictHostKeyChecking=no /opt/galax/gms/common/modsysinfo/keystoreManage.sh
gandalf@IP address of the VRM node of an earlier version:/home/GalaX8800/
cp /home/GalaX8800/keystoreManage.sh /opt/galax/gms/common/modsysinfo/

14. Import the VRM certificate.

a. Import the VRM certificate of the site where the license file has been loaded to the local end. For details, see "Manually Importing
the Root Certificate" in FusionCompute 8.8.0 O&M Guide.

b. Import the VRM certificate of the local end to the site where the license file has been loaded.

To obtain the VRM certificate of the local end, perform the following steps:

i. Use PuTTY and the management IP address to log in to the active VRM node as user gandalf.

ii. Run the following command and enter the password of user root to switch to user root:
su - root

iii. Run the following command to copy server.crt to the /home/GalaX8800 directory:
cp /etc/galax/certs/vrm/server.crt /home/GalaX8800/

iv. Run the following command to modify the permission on server.crt:


chmod 777 /home/GalaX8800/server.crt

v. Run the following command to enable SFTP:


sh /opt/galax/gms/common/util/configSshSftp.sh open
The command is successfully executed if information similar to the following is displayed:


open sftp service succeed.

vi. Use WinSCP to upload server.crt in /home/GalaX8800 to the local PC.

vii. Run the following command to disable the SFTP service:


sh /opt/galax/gms/common/util/configSshSftp.sh close
The command is successfully executed if information similar to the following is displayed:

close sftp service succeed.

15. Select License server.

16. Set the following parameters:

License server IP address: Enter the management IP address of the VRM node of the site that has the license file loaded. If the site
has two VRM nodes working in active/standby mode, enter the floating IP address of the VRM nodes.

Account: Enter the username of the FusionCompute administrator of the site that has the license file loaded.

Password: Enter the password of the FusionCompute administrator of the site that has the license file loaded.

The FusionCompute administrator at the site where the license has been loaded must be a new machine-machine account whose Subrole is
administrator or a new system super administrator account.
The VRM that is activated in associated mode cannot be set as the license server.
The keys of VRM nodes that share the license must be the same. If they are different, change them to be the same.
If the VRM nodes of different versions share the license, change the keys of the later version to the keys of the earlier version. The procedure is as
follows:

a. Run the following command on the VRM nodes of the later version to transfer the script to the /home/GalaX8800/ directory of the VRM nodes
of the earlier version.
scp -o UserknownHostsFile=/dev/null -o StrictHostKeyChecking=no /opt/galax/root/vrm/tomcat/script/updateLmKey.sh gandalf@IP
address of a VRM node in an earlier version:/home/GalaX8800/
b. Run the following command on the VRM nodes of the earlier version to query the keys of VRM nodes of the earlier version.
sh /home/GalaX8800/updateLmKey.sh query
c. Run the following command on the VRM nodes of the later version to change the keys of the later version to those of the earlier version. After
this command is executed, the VRM service automatically restarts.
sh /opt/galax/root/vrm/tomcat/script/updateLmKey.sh set
The following command output is displayed:
Please Enter aes key:

Enter the key and press Enter. If the following information is displayed in the command output, the key is changed successfully.
Redirecting to /bin/systemctl restart vrmd.service
success

If VRM nodes of the same version share a license, change the key of the client to that of the server. The procedure is as follows:

a. Run the following command on the server VRM node to query the key:
sh /opt/galax/root/vrm/tomcat/script/updateLmKey.sh query
b. Run the following command on the client VRM node to set the key of the client to the key of the server: After this command is executed, the
VRM service automatically restarts.
sh /opt/galax/root/vrm/tomcat/script/updateLmKey.sh set
The following command output is displayed:
Please Enter aes key:

Enter the key and press Enter. If the following information is displayed in the command output, the key is changed successfully.
Redirecting to /bin/systemctl restart vrmd.service
success

17. Click OK.


The license file is shared.

3.5.1.3.2.2 (Optional) Configuring MAC Address Segments

Scenarios


This section describes how administrators configure available MAC address segments on FusionCompute so that the system can allocate a unique MAC address to each VM.

FusionCompute provides 100,000 MAC addresses for users, ranging from 28:6E:D4:88:B2:A1 to 28:6E:D4:8A:39:40. The first 5000 addresses
(28:6E:D4:88:B2:A1 to 28:6E:D4:88:C6:28) are dedicated for VRM VMs. The default address segment for new VMs is 28:6E:D4:88:C6:29 to
28:6E:D4:8A:39:40.

If only one FusionCompute environment is available on the Layer 2 network, the FusionCompute environment can use the default address
segment (28:6E:D4:88:C6:29 to 28:6E:D4:8A:39:40) provided by the system. In this case, skip this section.

If multiple FusionCompute environments are available on the Layer 2 network, you need to divide the default address segment based on the
number of VMs in each FusionCompute environment and allocate unique MAC address segments to each FusionCompute environment.
Otherwise, MAC addresses allocated to VMs may overlap, adversely affecting VM communication.

When configuring a custom MAC address segment, change the default MAC address segment to the custom address segment or add a new
address segment. A maximum of five MAC address segments can be configured for each FusionCompute environment, and the segments
cannot overlap.

Prerequisites
Conditions
You have logged in to FusionCompute.
Data
The MAC address segments for user VMs have been planned.

The address segments to be configured and the reserved 5000 MAC addresses dedicated for VRM VMs cannot overlap.
If only one FusionCompute environment is available on the Layer 2 network, you can use the default MAC address segment (28:6E:D4:88:C6:29 to
28:6E:D4:8A:39:40).
If multiple FusionCompute environments are available on the Layer 2 network, you need to divide the default MAC address segment based on the number of
VMs in each FusionCompute environment.
For example, if two FusionCompute environments are available on the Layer 2 network, divide the remaining 95,000 MAC addresses between the two FusionCompute environments, such as 45,000 MAC addresses to one environment and 50,000 MAC addresses to the other environment.
The following MAC address segments can be allocated:
The MAC address segment for FusionCompute 1 (the first 45,000 addresses): 28:6E:D4:88:C6:29 to 28:6E:D4:89:75:F0
The MAC address segment for FusionCompute 2 (the last 50,000 addresses): 28:6E:D4:89:75:F1 to 28:6E:D4:8A:39:40
The same rule applies when there are multiple environments.
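The segment boundaries in the example above can be verified with simple hexadecimal arithmetic on the MAC address, treated as a 48-bit integer. The following bash sketch is only a planning aid and is not part of the FusionCompute procedure; the start address and the 45,000-address count are taken from the example above.

# Start of the default user VM segment 28:6E:D4:88:C6:29, written as a 48-bit hex integer
START=0x286ED488C629
# Last address of a 45,000-address segment: start + 45000 - 1
printf '%012X\n' $(( START + 45000 - 1 ))   # 286ED48975F0 -> 28:6E:D4:89:75:F0
# First address of the following segment: start + 45000
printf '%012X\n' $(( START + 45000 ))       # 286ED48975F1 -> 28:6E:D4:89:75:F1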

Procedure

1. In the navigation pane, click .


The Resource Pool page is displayed.

2. Choose the Configuration tab.


The MAC Address Pool page is displayed.

3. Click Add MAC Address.


The Add MAC Address dialog box is displayed.

4. Specify the start and end MAC addresses.

5. Click OK.
The MAC address segment is configured.
To modify or delete a MAC address segment, locate the row where the target MAC address segment resides and click Modify or Delete.

3.5.1.3.3 Configuring eDME After Installation


After eDME is installed, perform initial configuration to ensure that you can use the functions provided by eDME.
(Optional) Configuring the NTP Service

(Optional) Loading a License File


(Optional) Configuring SSO for FusionCompute (Applicable to Virtualization Scenarios)

Expanding Partition Capacity

Enabling Optional Components

3.5.1.3.3.1 (Optional) Configuring the NTP Service


This operation enables you to configure the Network Time Protocol (NTP) service for eDME and ensure that the time of eDME is the same as that
of managed resources such as storage devices and VRMs.

If no NTP server is configured, the time of eDME may differ from that of managed resources and eDME may fail to obtain the performance data of the managed
resources. You are advised to configure the NTP service.

Context
NTP is a protocol that synchronizes the time of a computer system to Coordinated Universal Time (UTC). Servers that support NTP are called NTP
servers.

Precautions
Before configuring the NTP server, check the time difference between eDME and the NTP server. The time difference between the NTP server and
eDME cannot exceed 24 hours. The current NTP server time cannot be earlier than eDME installation time.
For example, if the current NTP server system time is 2021-04-05 16:01:49 UTC+08:00 and eDME was installed at 2021-04-06 16:30:20 UTC+08:00, the NTP server time is earlier than the eDME installation time and therefore does not meet this requirement.
To check the system time of the eDME node, perform the following steps:

1. Use PuTTY to log in to the eDME node as user sopuser using the static IP address of the node.
The initial password of user sopuser is configured during eDME installation.

2. Run the sudo su ossadm command to switch to user ossadm.


The initial password of user ossadm is configured during eDME installation.

3. Run date to check whether the system time is consistent with the actual time.
If the system time of eDME is later than the NTP server time, you need to run the following command to restart the service after you
configure the NTP server and time synchronization is complete: cd /opt/oss/manager/agent/bin && . engr_profile.sh && export
mostart=true && ipmc_adm -cmd startapp. If the system time of eDME is earlier than the NTP server time, you do not need to run this
command.
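To quantify the time difference described above before configuring the NTP server, the two clocks can be compared as Unix timestamps. The following is a minimal sketch run on the eDME node; the NTP server time shown is a placeholder that you would read from the NTP server itself, and the 24-hour limit is the one stated in this section.

# Current eDME system time as a Unix timestamp
EDME_TS=$(date +%s)
# Placeholder: system time read from the NTP server (example value)
NTP_TS=$(date -d "2021-04-05 16:01:49 +0800" +%s)
# Absolute difference in seconds; it must be less than 86400 (24 hours)
DIFF=$(( EDME_TS - NTP_TS )); [ "$DIFF" -lt 0 ] && DIFF=$(( -DIFF ))
echo "Time difference: ${DIFF}s (limit: 86400s)"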

Procedure
1. Visit https://Login IP address of the management portal:31945 and press Enter.

For eDME multi-node deployment (with or without two nodes of the operation portal), use the management floating IP address to log in.

2. Enter the username and password to log in to the eDME management portal.
The default username is admin, and the initial password is configured during installation of eDME.

3. Choose Maintenance > Time Management > Configure NTP.


The Configure NTP page is displayed.

4. Click Add and configure NTP information, as shown in Table 1.

Table 1 Parameters

Parameter: NTP Server IP Address
Description: IP address of the NTP server that functions as the clock source.
Value range: IPv4 address

Parameter: Encryption Mode
Description: Encryption mode of the NTP server.
Value range: NTP v4 Authentication; NTP v4

Parameter: Calculation Digest
Description: Digest type of the NTP server.
NOTICE: The MD5 encryption algorithm has security risks. You are advised to use SHA256, which is more secure.
Value range: MD5; SHA256

Parameter: Key Index
Description: Used to quickly search for the key value and digest type during the communication authentication with the NTP server. The value must be the same as Key Index configured on the NTP server.
NOTE: This parameter is mandatory when Encryption Mode is set to NTP v4 Authentication.
Value range: An integer ranging from 1 to 65,534, excluding 10,000.

Parameter: Key
Description: NTP authentication character string, which is an important part for generating a digest during the communication authentication with the NTP server. The value must be the same as Key set on the NTP server.
NOTE: This parameter is mandatory when Encryption Mode is set to NTP v4 Authentication.
Value range: A string of a maximum of 30 ASCII characters. Spaces and number signs (#) are not supported.

Parameter: Role
Description: Active or standby status of the NTP server.
Value range: Active; Standby

Parameter: Operation
Description: Operation that can be performed on the configured NTP server.
Value range: Verify; Delete

5. Click Apply.

6. In the alert dialog box that is displayed, click OK.

7. Set the time of managed resources to be the same as that of eDME. For example, for storage devices, log in to the storage device management page and set the device time.

You can set the time in either of the following ways:

This section uses OceanStor Dorado 6.x devices as an example. The operations for setting the time vary according to the device model. For details, see the
online help of the storage device.

Set automatic NTP synchronization.

a. Choose Settings > Basic Information > Device Time.

b. Enable NTP Synchronization.

i. In NTP Server Address, enter the IPv4 address or domain name of the NTP server.

The value must be the same as that of <NTP server address> in 4.

ii. (Optional) Click Test.

iii. (Optional) Select Enable next to NTP Authentication. Import the NTP CA certificate to CA Certificate.

Only when NTPv4 or later is used, NTP authentication can be enabled to authenticate the NTP server and automatically synchronize
the time to the storage device.


c. Click Save and confirm your operation as prompted.

Synchronize the time manually.

a. Choose Settings > Basic Information > Device Time.

b. Click next to Device Time to change the device time to be the same as the time of eDME.

If you set the time manually, there may be time difference. Ensure that the time difference is less than 1 minute.

3.5.1.3.3.2 (Optional) Loading a License File


For details about how to load the license file, see eDME License Application Guide.

3.5.1.3.3.3 (Optional) Configuring SSO for FusionCompute (Applicable to


Virtualization Scenarios)
This operation enables you to configure single sign-on (SSO) to log in to FusionCompute from eDME without entering a password.

After the SSO configuration is complete, if eDME is faulty, you may fail to log in to the connected FusionCompute system. For details, see "Failed to Log In to
FusionCompute Due to the Fault" in eDME Product Documentation.
During the SSO configuration, you must ensure that no virtualization resource-related task is running on eDME, such as creating a VM or datastore. Otherwise,
such tasks may fail.

Prerequisites
FusionCompute has been installed.

You have logged in to eDME.

FusionCompute uses the common rights management mode.

Procedure
1. Log in to the O&M portal as the admin user. The O&M portal address is https://IP address for logging in to the O&M portal:31943.

In multi-node deployment, the IP address for logging in to the O&M portal is the floating management IP address.
The default password of user admin is the password set during eDME installation.

2. In the navigation pane on the left of the eDME O&M portal, choose Settings > Security Management > Authentication.

3. In the left navigation pane, choose SSO Configuration > CAS SSO Configuration.

4. On the SSO Servers tab page, click Create.

5. Select IPv4 address or IPv6 address for IP Address Type.

FusionCompute supports IPv4 and IPv6 addresses.

6. In the text box of IPv4 address or IPv6 address, enter the IP address for logging in to the FusionCompute web client.

7. Click OK.

8. Log in to FusionCompute to be interconnected.

Login addresses:

IPv4: https://IP address for logging in to the FusionCompute web client:8443

IPv6: https://IP address for logging in to the FusionCompute web client:8443


Username and password: Obtain them from the administrator.

9. In the navigation bar on the left of the FusionCompute home page, click to enter the System page.

10. Choose System > Connect To > Cloud Management.

11. (Optional) Upon the first configuration, click on the right of Interconnected Cloud Management to enable cloud management
settings.

12. Select ManageOne/eDME Maintenance Portal for Interconnected System.

13. Enter the login IP address of the eDME O&M portal in the System IP Address text box.

In multi-node deployment, the IP address for logging in to the O&M portal is the floating management IP address.

14. Click Save.

15. Click OK.

After the operation is complete, the system is interrupted for about 2 minutes. After the login mode is switched, you need to log out of the system and log in to
the system again.
If any fault occurs on the O&M portal of ManageOne or eDME after SSO is configured, the login to FusionCompute may fail. In this case, you need to log in to
the active VRM node to cancel SSO.
Run the following command on the active VRM to cancel SSO:
python /opt/galax/root/vrm/tomcat/script/omsconfig/bin/sm/changesso/changesso.py -m ge

3.5.1.3.3.4 Expanding Partition Capacity


If the free disk space of a mount point is insufficient, new disks can be added for capacity expansion. The procedure is as follows:

Procedure
1. Mount new disks to the node to be expanded.

2. Use PuTTY to log in to the eDME node as user sopuser via the static IP address of the node.
The initial password of sopuser is configured during eDME installation.

3. Run the sudo su - root command to switch to user root.

For the SUSE OS, run the su - root command to switch to user root.

The initial password of user root is configured during OS installation.

4. Run the bash /opt/dme/tools/expanding_disk_capacity.sh command.

5. Enter the mount point name as prompted.

6. Enter the disk name as prompted. If it is left blank, all the disks with free space are used.

DOS (MBR) disks that already have four partitions and GPT disks that already have 128 partitions cannot be used for capacity expansion, because their partition tables are full.

7. Enter the capacity to be expanded as prompted. If it is left blank, all free space of all disks is used for capacity expansion.

8. After the script is executed successfully, run the df -h command to check whether the capacity expansion is successful.
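Standard Linux commands can be used to confirm that the new disk is visible before running the script and that the mount point has grown afterwards. A minimal sketch, assuming a generic mount point such as /opt (replace it with the mount point entered in 5):

# Confirm that the newly attached disk (for example, /dev/vdb) is visible
lsblk
# Record the current size and usage of the mount point before expansion
df -h /opt
# ... run bash /opt/dme/tools/expanding_disk_capacity.sh as described in 4 to 7 ...
# Check the mount point again; the Size and Avail values should have increased
df -h /opt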

3.5.1.3.3.5 Enabling Optional Components


This section describes how to enable components after the eDME installation.


Prerequisites
You have obtained the address, account, and password for logging in to the eDME management portal. The account must have the permission to add
product features.

Procedure
1. Open a browser, visit https://IP address:31945, and press Enter.

If eDME is deployed on multiple nodes, use the floating management IP address.

2. On the top navigation bar, choose Product > Optional Component Management.

3. On the Optional Component Management page, locate the component to be enabled and toggle on the button to enable it.

When you enable optional components, only one component can be enabled at a time. That is, if a component is being enabled, you cannot enable the next component until the current one has been enabled.
After you enable a component, the system checks whether the resources (CPU, memory, and disk) on which the component depends meet the
requirements. If the resources do not meet the requirements, a dialog box is displayed. If a disk is missing during component deployment, expand the
capacity by referring to Expanding Partition Capacity .
Currently, the following optional components are supported: LiteCA Service, Data Protection Service, and Container Management Service. After you
enable the container management service, a dialog box is displayed, asking you to set the graph database password. The password must meet the
strength requirements of the graph database password.
To modify the VM configuration when the Modify VM permission is disabled, enable the Modify VM permission. To ensure information security,
disable the permission after the modification is complete.

Data Protection Service: provides ransomware protection capabilities for VMs. When installing this component, add VM resources for the O&M
node: number of vCPU cores (1) and memory (2 GB).
LiteCA service: provides basic CA capabilities. When installing this component, add VM resources for the O&M node: number of vCPU cores (1) and
memory (500 MB).
Container management service: provides the capability of taking over container resources. When installing this component, add VM resources for the
O&M node: number of vCPU cores (2) and memory (10 GB).

3.5.1.3.4 Configuring HiCloud After Installation


Configuring kdump on FusionCompute

Configuring Certificates

Configuring CAS SSO

Importing an Adaptation Package (Either One)

3.5.1.3.4.1 Configuring kdump on FusionCompute

Procedure


1. Log in to FusionCompute as an administrator.


You can query the username, password, and URL for logging in to FusionCompute on the HiCloud Parameters sheet of the SmartKit installation template configured during installation.

2. In the navigation pane on the FusionCompute console, click . The Resource Pool page is displayed.

3. Query all VMs of HiCloud and power down the VMs.

a. Locate the row that contains the VM to be stopped, and choose More > Power > Stop.

b. Click OK.

There are four VMs for HiCloud: paas-core, platform-node1, platform-node2, and platform-node3.

4. On the Clusters tab page, click the name of the cluster where HiCloud is located. The Summary tab page of the cluster is displayed.

5. On the Configuration tab page, choose Configuration > VM Override Policy. The VM Override Policy page is displayed.

6. Click Add above the list. A dialog box is displayed.

7. In the displayed dialog box, select all VMs of HiCloud.

8. In the Edit VM override policy area, set the following parameters. Retain the default values for the parameters that are not listed in Table 1.

Table 1 Parameter description

Parameter: Host Fault Policy
Configuration: Set this parameter to Using Cluster Policy.

Parameter: VM Fault Handling Policy
Configuration: Set this parameter to Restart VM. Select VM Fault Handling Delay and set the VM fault handling delay to 900 seconds.

9. Click OK. The VM override item is added.

10. Power on all VMs of HiCloud.

a. Locate the row that contains the VM to be started, click More and choose Power > Start.

b. Click OK.

3.5.1.3.4.2 Configuring Certificates


Importing the CMP HiCloud Certificate to eDME

Obtaining Certificates to Be Imported to GDE

Importing Certificates to GDE

(Optional) Changing the Certificate Chain Verification Mode

Restarting CMP HiCloud Services

3.5.1.3.4.2.1 Importing the CMP HiCloud Certificate to eDME

Scenarios
If Digital Certificate Authentication is enabled, operations in this section are mandatory. If Digital Certificate Authentication is disabled, operations in this section are optional.

You can perform the following steps to query the status of Digital Certificate Authentication:


1. Log in to the eDME O&M portal as the O&M administrator admin at https://IP address:31943.
IP address is the value of eDME OC plane IP on the HiCloud Parameters sheet in the SmartKit installation template configured
during installation.
The password is set during eDME installation. Obtain the password from the environment administrator.

2. Choose Infrastructure > Configuration and Management > Access Settings. The Access Settings page is displayed.

3. Expand the Set certificates area and check the status of Digital Certificate Authentication.

indicates that the function is disabled.

indicates that the function is enabled.

The status of Digital Certificate Authentication is controlled by the solution. Generally, it is Enabled by default. You do not need to set it.

Before performing operations in this section, ensure that the certificates of FusionCompute, storage devices, and network services have been
imported and interconnected successfully. Otherwise, basic functions of HiCloud will be affected.
For details about how to import and interconnect certificates of FusionCompute, storage devices, and network services, see Datacenter Virtualization Solution x.x.x Product Documentation on the Datacenter Virtualization Solution support website.

Procedure
1. Enter the URL of the GDE data zone in the address box of a browser, press Enter, and log in to the WebUI. URL of the GDE data zone
portal: https://IP address:38443.
IP address: the value of Keepalived VIP (2) of data plane described in 1.5.1.2-12.
Password of the admin user: the value of system administrator admin password on the HiCloud Parameters sheet described in 1.5.1.2-12.

Figure 1 Exporting the certificate

2. Click in the address box and choose Certificate is not valid.

The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.

3. Click the Details tab and click Export.

4. Change the file name extension of the exported file to CER.

If the exported file does not have a file name extension, add the file name extension .cer.
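As an alternative to exporting the certificate from the browser in 2 to 4, the server certificate can be pulled from the command line on any host that can reach the GDE data zone portal. This is only a sketch, assuming openssl is available; the IP address is a placeholder for the Keepalived VIP (2) of the data plane.

# Fetch the GDE data zone portal certificate and save it directly as a .cer (PEM) file (IP address is a placeholder)
echo | openssl s_client -connect 192.168.0.17:38443 2>/dev/null | openssl x509 -outform PEM > gde_data_zone.cer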

5. Log in to the eDME O&M portal as the O&M administrator admin at https://IP address:31943.


IP address is the value of eDME OC plane IP on the HiCloud Parameters sheet in the SmartKit installation template configured during
installation.
The password is set during eDME installation. Obtain the password from the environment administrator.

6. Choose Settings > System Management > Certificate Management > Service Certificate Management > ThirdPartyCloudService
Certificates > Trust Certificates and import the third-party trust certificate.

3.5.1.3.4.2.2 Obtaining Certificates to Be Imported to GDE


When CMP HiCloud services need to interconnect with a third-party system, the certificate of the third-party system must be imported to the GDE
data zone to ensure the normal operation of services. For details about certificates, see Table 1.

Table 1 Mapping between services and certificates

CMP HiCloud Service: All services
Third-Party System Certificate: Certificate trust chain
Certificate Function: Used for the interconnection between CMP HiCloud and eDME.
Precaution: If the interconnected eDME is deployed in active/standby mode, import the certificates of both the active and standby eDMEs to GDE.

CMP HiCloud Service: BMS service
Third-Party System Certificate: iBMC certificate and root certificate
Certificate Function: Used for the interconnection between the BMS service and iBMC.
Precaution: If the service interconnects with iBMCs of multiple vendors, import the iBMC certificate of each vendor. The root certificate is a part of the certificate chain. The root certificate and level-2 certificate also must be imported. The xFusion root certificate is the same as the Huawei root certificate. You can use the same method to obtain the xFusion root certificate. If the root certificates of all or some servers cannot be obtained, change the certificate chain verification mode. For details, see (Optional) Changing the Certificate Chain Verification Mode.

CMP HiCloud Service: VMware integration service
Third-Party System Certificate: vCenter certificate
Certificate Function: Used for the interconnection between the VMware integration service and vCenter.
Precaution: If the service interconnects with multiple vCenter systems, import the certificate of each vCenter system. The PM certificate managed by vCenter is used by the operation portal to connect to VNC.

CMP HiCloud Service: VMware integration service
Third-Party System Certificate: NSX-T certificate
Certificate Function: Used for the interconnection between the VMware integration service and NSX-T.
Precaution: If the service interconnects with multiple NSX-T systems, import the certificate of each NSX-T system.

CMP HiCloud Service: Security service
Third-Party System Certificate: DBAPPSecurity cloud certificate
Certificate Function: Used for the interconnection between the security service and DBAPPSecurity cloud.
Precaution: The security service interconnects with only one set of DBAPPSecurity cloud system. You only need to import one certificate.

CMP HiCloud Service: Database service
Third-Party System Certificate: VastEM certificate
Certificate Function: Used for the interconnection between the database service and VastEM.
Precaution: If the service interconnects with multiple VastEM systems, import the certificate in either of the following ways: if multiple VastEM systems use the same certificate, you only need to import the certificate once; if the VastEM systems use different certificates, import certificates with different subjects.

Obtaining the Certificate Trust Chain

Exporting the iBMC Certificate and Root Certificate

Exporting vCenter Certificates

Exporting the NSX-T Certificate

Exporting the DBAPPSecurity Cloud Certificate


Exporting the VastEM Certificate

3.5.1.3.4.2.2.1 Obtaining the Certificate Trust Chain

Procedure
1. Enter the IP address of the eDME management zone in the address box of the browser, press Enter, and log in. URL of eDME management
zone: https://IP address:31945
IP address: is the value of eDME OC plane IP described in 1.5.1.2-12 .
User name: is admin by default.
Password: Obtain it from the environment administrator.

2. Search for Certificate Management and go to this page.

3. Choose ER product.

4. On the Identity Certificate page, click Export Trust Chain on the right to export the trust chain to the local PC.

3.5.1.3.4.2.2.2 Exporting the iBMC Certificate and Root Certificate

Exporting the iBMC Certificate


1. Enter the URL of iBMC in the address box of a browser, press Enter, and log in to the WebUI.
The iBMC IP address is the IP address of your BMS.

2. Click in the address box and choose Certificate is not valid.

The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.

3. In the displayed dialog box, click the Details tab and click Export to export the certificate in Base64-encoded ASCII, single certificate
(*.pem;*.crt) format.


Exporting the Root Certificate


1. Use a browser to access Huawei PKI system at https://hwpki.huawei.com/pki/#/digitalCertificate and download the root certificate and level-
2 CA certificate.

2. Download the root certificate.

a. Click the Root certificate card. The Root certificate page is displayed.

b. Locate the row that contains Huawei Equipment CA.cer certificate.

c. Click in the Operation column to download the root certificate to the local PC.

3. Download the level-2 CA certificate.

a. Click the Level-2 CA certificate card. The Level-2 CA certificate page is displayed.

b. Locate the row that contains Huawei IT Product CA.cer.

c. Click in the Operation column to download the level-2 CA certificate to the local PC.

3.5.1.3.4.2.2.3 Exporting vCenter Certificates

Exporting the vCenter System Certificate



1. Enter the URL of vCenter in the address box of a browser and press Enter to access the WebUI.
The vCenter URL is provided by service users.

2. Click in the address box and choose Certificate is not valid.

The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.

3. In the displayed dialog box, click the Details tab and click Export to export the certificate in Base64-encoded ASCII, single certificate
(*.pem;*.crt) format.

4. Change the file name extension of the exported file to CER.

Exporting the PM Certificate Managed by vCenter


1. Enter the URL of ESXi in the address box of a browser, press Enter, and log in to the WebUI.


The ESXi URL is provided by service users.

2. Click in the address box and choose Certificate is not valid.

The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.

3. In the displayed dialog box, click the Details tab and click Export to export the certificate in Base64-encoded ASCII, single certificate
(*.pem;*.crt) format.

4. Change the file name extension of the exported file to CER.

3.5.1.3.4.2.2.4 Exporting the NSX-T Certificate

Procedure


1. Enter the URL of the NSX-T resource pool in the address box of a browser, press Enter, and log in to the WebUI.
The URL is provided by service users.

2. Click in the address box and choose Certificate is not valid.

The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.

3. In the displayed dialog box, click the Details tab and click Export to export the certificate in Base64-encoded ASCII, single certificate
(*.pem;*.crt) format.

4. Change the file name extension of the exported file to CER.

3.5.1.3.4.2.2.5 Exporting the DBAPPSecurity Cloud Certificate

Procedure

1. Log in as a system administrator to the backend node of DBAPPSecurity cloud management platform in SSH mode.
The IP address, user name, and password for login are provided by the vendor of DBAPPSecurity.

2. Download the guanjia-nginx.crt certificate from the /etc/nginx/ssl/certs/ directory.

Figure 1 Downloading the certificate
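If a command-line transfer is preferred, the certificate can also be copied with scp from the DBAPPSecurity backend node. This is a minimal sketch only; the IP address and user name are placeholders for the login information provided by the DBAPPSecurity vendor.

# Copy guanjia-nginx.crt from the DBAPPSecurity backend node to the current local directory (IP address and user are placeholders)
scp admin@192.168.10.20:/etc/nginx/ssl/certs/guanjia-nginx.crt .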

3.5.1.3.4.2.2.6 Exporting the VastEM Certificate

Procedure
1. Enter the URL of VastEM in the address box of a browser, press Enter, and log in to the WebUI.
The VastEM URL is provided by service users.

2. Click in the address box and choose Certificate is not valid.

The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.

3. In the displayed dialog box, click the Details tab and click Export to export the certificate in Base64-encoded ASCII, single certificate
(*.pem;*.crt) format.


4. Change the file name extension of the exported file to CER.

3.5.1.3.4.2.3 Importing Certificates to GDE


Service operation requires necessary certificates. You need to import certificates based on service requirements. Table 1 lists the mapping between
services and certificates to be imported.

Table 1 Mapping between services and certificates

Service Name: gatewayuser
Certificate: Certificate trust chain. For details about how to obtain it, see Obtaining the Certificate Trust Chain.
Remarks: All services require this certificate.

Service Name: bms
Certificate: Certificate trust chain, iBMC certificate, and root certificate. For details about how to obtain them, see Obtaining the Certificate Trust Chain and Exporting the iBMC Certificate and Root Certificate.
Remarks: The BMS service requires the certificates.

Service Name: vmware-computeservice
Certificate: Certificate trust chain, vCenter certificate, and NSX-T certificate. For details about how to obtain them, see Obtaining the Certificate Trust Chain, Exporting the vCenter System Certificate, and Exporting the NSX-T Certificate.
Remarks: The VMware integration service requires the certificates.

Service Name: security
Certificate: The PM certificate managed by vCenter is used by the operation portal to connect to VNC. Therefore, you need to import the certificate to the security service. For details about how to obtain it, see Exporting the PM Certificate Managed by vCenter.

Service Name: security
Certificate: Certificate trust chain and DBAPPSecurity cloud certificate. For details about how to obtain them, see Obtaining the Certificate Trust Chain and Exporting the DBAPPSecurity Cloud Certificate.
Remarks: The security service requires them.

Service Name: dbaas
Certificate: Certificate trust chain and VastEM certificate. For details about how to obtain them, see Obtaining the Certificate Trust Chain and Exporting the VastEM Certificate.
Remarks: The database service requires the certificates.

Procedure
1. Log in as the admin user to the GDE data zone portal at https://IP address:38443.
IP address: the value of Keepalived VIP (2) of data plane described in 1.5.1.2-12.
Password of the admin user: the value of system administrator admin password on the HiCloud Parameters sheet described in 1.5.1.2-12.

2. Click the GDE Common Configuration card.


3. Click the menu icon in the upper left corner of the page and choose Password and Key > Certificate Management.

4. On the Certificate Management tab page, click Edit Certificate in the Operation column of the service for which certificates need to be
imported. The gatewayuser service is used as an example.

5. On the Edit Certificate page, click the Trust Certificate tab and click Add in the Operation column.

6. In the Change Reminder dialog box, click OK. The Add xxx Trust Certificate dialog box is displayed.

7. In the displayed Add xxx Trust Certificate dialog box, customize a value for Alias Name and click Upload File under Trust Certificate
File.


8. Upload the certificate to be imported and click Add.

To upload multiple certificates, click Add to upload them one by one. Ensure that all certificates are added.

9. Click Submit. Click OK in the High risk dialog box.

The system automatically reloads the certificate. The new settings will take effect within 3 minutes. If such settings do not take effect 3 minutes later, check
whether the certificate is correctly imported or restart the corresponding service. For details about the mapping between services and service applications, see
Table 2. For details about how to restart a service, see Restarting Services .

Table 2 Mapping between to-be-restarted services and corresponding service applications

Service: gatewayuser
Service Application: hicloud-common-user-gateway, hicloud-common-admin-gateway
Remarks: All services require this certificate.

Service: bms
Service Application: hicloud-bms-service
Remarks: The BMS service requires the certificates.

Service: vmware-computeservice
Service Application: hicloud-vmware-coretask, hicloud-vmware-scheduler, hicloud-vmware-service
Remarks: The VMware integration service requires the certificates.

Service: security
Service Application: hicloud-security-service, hicloud-security-gateway
Remarks: The security service requires the certificates.

Service: dbaas
Service Application: hicloud-dbaas-service
Remarks: The database service requires the certificates.

3.5.1.3.4.2.4 (Optional) Changing the Certificate Chain Verification Mode


If the third-party systems interconnected with the VMware integration service, BMS service, or security gateway can provide a complete and correct
certificate chain, skip this section. If they cannot, perform operations in this section to change the certificate chain verification mode to certificate
lockout.

Operations in this section are mandatory for database services.

Prerequisites
You have obtained the value of the cert-white-list parameter and performed the following steps:

1. Obtain the certificate of the third-party system interconnected with the service. For details about how to obtain the certificate, see Obtaining
Certificates to Be Imported to GDE .

2. Open the obtained certificate as a text file to obtain the certificate body.

Figure 1 Certificate body

3. Delete the newline characters from the certificate body to obtain the certificate character string. The character string is the value of
cert_white_list.

Figure 2 Certificate string
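Instead of deleting the newline characters by hand in a text editor, the single-line string can be produced on the command line. A minimal sketch, assuming the exported certificate file is named cert.cer (a placeholder) and that, as shown in the figures above, only the line breaks need to be removed:

# Print the certificate content as a single line by removing all CR/LF characters (file name is a placeholder)
tr -d '\r\n' < cert.cer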


Procedure
1. Log in to the GDE management zone as the op_svc_cfe tenant in tenant login mode at https://IP address:31943.
IP address: is the value of Floating IP of management plane on the HiCloud Parameters sheet described in 1.5.1.2-12 .
The password of the op_svc_cfe tenant is the value of op_svc_cfe tenant password on the HiCloud Parameters sheet described in 1.5.1.2-
12 .

2. Choose Maintenance > Instance Deployment > TOSCA Stack Deployment. The TOSCA Stack Deployment page is displayed.

3. Search for the stack corresponding to the service as required.

Table 1 Stacks corresponding to the services

Service: VMware integration service
Stack: hicloud-vmware-service-hicloud-vmware-service

Service: BMS service
Stack: hicloud-bms-service-hicloud-bms-service

Service: Database service
Stack: hicloud-dbaas-service-hicloud-dbaas-service

Service: Security gateway service
Stack: hicloud-security-gateway-hicloud-security-gateway

4. Click Upgrade in the Operation column of the found stack.

5. Modify parameters.

a. Change the value of Value After Upgrade for the full-chain-check parameter to false.

The default value of full-chain-check is true.

b. Change the value of Value After Upgrade of cert-white-list to the character string obtained in Prerequisites.

If multiple certificates need to be imported, separate certificate character strings with commas (,).

6. Click Upgrade Now.

After clicking Upgrade Now, wait for 3 to 5 minutes. If the stack status changes to Normal, the parameter has been successfully modified.

7. Manually restart the applications corresponding to the involved services by referring to Restarting Services .

For details about the mapping between services and corresponding service applications, see Table 2 .

8. (Optional) Perform this step if the BMS service certificate needs to be imported. Otherwise, skip this step.

a. Log in to the FEP for accessing the BMS as the paas user in SSH mode.

b. Run the following command to edit the cert_white.list file:


vi /home/paas/BMSProxy/conf/cert_white.list

c. Add the certificate content mentioned in Prerequisites to the file, save the file, and exit.

The certificate will take effect automatically 3 minutes later.
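If the single-line certificate string has already been prepared (for example, with the tr sketch shown in Prerequisites), it can be appended to the file without an interactive editor. A minimal sketch, assuming the string is stored in a file named cert_oneline.txt (a placeholder) on the FEP:

# Append the prepared one-line certificate string to the BMS proxy whitelist (input file name is a placeholder)
cat cert_oneline.txt >> /home/paas/BMSProxy/conf/cert_white.list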

3.5.1.3.4.2.5 Restarting CMP HiCloud Services

Involved Services
To restart a service, you only need to restart the applications related to the service. The mapping is as follows:

Table 1 Service and corresponding service applications

Service: gatewayuser
Service Application: hicloud-common-user-gateway, hicloud-common-admin-gateway
Involved Service: All services

Service: bms
Service Application: hicloud-bms-service
Involved Service: BMS service

Service: vmware-computeservice
Service Application: hicloud-vmware-coretask, hicloud-vmware-scheduler, hicloud-vmware-service, hicloud-security-gateway
Involved Service: VMware integration service

Service: security
Service Application: hicloud-security-service, hicloud-security-gateway
Involved Service: Security service

Service: dbaas
Service Application: hicloud-dbaas-service
Involved Service: Database service

Procedure
Restart involved service applications by referring to Restarting Services .

If a new certificate is imported after a CMP HiCloud service is restarted, the service must be restarted again after certificate import.

3.5.1.3.4.3 Configuring CAS SSO

Procedure
1. Log in to the eDME O&M portal at https://IP address:31943 as the O&M administrator admin.
IP address is the value of eDME OC plane IP on the HiCloud Parameters sheet in the SmartKit installation template configured during
installation.
The password is set during eDME installation. Obtain the password from the environment administrator.

2. Click Enter System.

3. Click in the upper left corner and choose Settings > Security Management > Authentication.


4. In the navigation pane, choose SSO Configuration > CAS SSO Configuration.

5. Perform the following substeps for each whitelisted IP address listed in Table 1.

a. On the SSO Servers tab page, click Create.

b. On the displayed CAS SSO Configuration page, set IP address type to IPv4 address.

c. Set IPv4 address to a whitelisted IP address.

d. Click OK.

Table 1 Whitelisted IP addresses

Whitelist IP Address: IP address of the LB port on GDE
Value: Value of Keepalived VIP (2) of data plane described in 1.5.1.2-12

Whitelist IP Address: IP address of the platform-node1 node
Value: Value of Management IP Address of platform-node1 described in 1.5.1.2-12

Whitelist IP Address: IP address of the platform-node2 node
Value: Value of Management IP Address of platform-node2 described in 1.5.1.2-12

Whitelist IP Address: IP address of the platform-node3 node
Value: Value of Management IP Address of platform-node3 described in 1.5.1.2-12

After the configuration is complete, you can view the configured IP address on the CAS SSO Configuration page.

3.5.1.3.4.4 Importing an Adaptation Package (Either One)


An adaptation package can be imported on the eDME O&M portal or using SmartKit. For details about how to import it on the eDME O&M portal,
see Importing an Adaptation Package on the eDME O&M Portal . For details about how to import it using SmartKit, see Importing an Adaptation
Package Using SmartKit . You can select either of the two methods.


Importing an Adaptation Package on the eDME O&M Portal

Importing an Adaptation Package Using SmartKit

3.5.1.3.4.4.1 Importing an Adaptation Package on the eDME O&M Portal

Prerequisites
You have obtained the adaptation package of each service by referring to Table 5 .

You have extracted adaptation packages and their certificates and signature files from the obtained packages.

The tar.gz file is the adaptation package.

The tar.gz.cms file is the signature file.

The tar.gz.crl file is the certificate file.

Procedure
1. Log in to the eDME O&M portal as the O&M administrator admin at https://IP address:31943.
IP address is the value of eDME OC plane IP on the HiCloud Parameters sheet in the SmartKit installation template configured during
installation.
The password is set during eDME installation. Obtain the password from the environment administrator.

2. Choose Multi-tenant Service > Service Management > Service Management. The Service Management page is displayed.

3. Click Add Third-Party Service. In the Configure Gateway Info step, click Next.

The gateway information is automatically configured.

4. In the Import Adaptation Package step, upload the obtained adaptation packages and the corresponding signature and certificate files. Click
Next.


5. In the Configure Deployment Parameter step, set required parameters. Click Next.

address: Set this parameter to the value of eDME SC plane IP described in 1.5.1.2-12 .

gateway_ip: Set this parameter to the value of Keepalived VIP (2) of data plane described in 1.5.1.2-12 .

port: (Optional) If this parameter is displayed, retain the default value.

6. Set the service account by referring to Table 1.

Table 1 Parameter description

Parameter: Service Account
Configuration: Retain the default value.

Parameter: Set Password
Configuration: Set the password to that of the machine-machine account. The machine-machine account password is the value of Machine account password described in 1.5.1.2-12.

Parameter: Confirm Password
Configuration: Enter the password again.

7. Click OK.

If Server error is reported after the adaptation package is imported, wait for 10 to 15 minutes for the machine-machine account configuration to take
effect.
To modify parameters after the adaptation package is imported, perform the following steps:

a. Choose Multi-tenant Service > Service Management > Adaptation Package Management. The Adaptation Package
Management page is displayed.
b. In the Operation column of the service whose parameters need to be modified, click Modify Parameter. On the displayed page, modify the parameters and click OK.
c. Click Deploy in the Operation column of the service. Parameter modification is successful when the service status changes to
Installation succeeded.

If an imported adaptation package needs to be updated, perform the following steps:

a. Choose Multi-tenant Service > Service Management > Adaptation Package Management. The Adaptation Package
Management page is displayed.
b. Select the service whose adaptation package needs to be updated and click Update.
c. Import the new adaptation package and the corresponding signature and certificate files, and click Next.
d. Click OK.


3.5.1.3.4.4.2 Importing an Adaptation Package Using SmartKit

Prerequisites
You have obtained the adaptation package of each service by referring to Table 5 and saved it to a local directory.

Procedure
1. Log in to SmartKit, and click the Virtualization tab to access the Datacenter Virtualization Solution Deployment page.

2. Click DCS Deployment to start the DCS deployment tool.

3. Click Create Task. On the displayed page, enter the task name, select Connecting DCS Project Components to eDME, and click Create.

4. In the Interconnection Policy step, click HiCloud Interconnection. Then, click Next.

5. In the Parameter Configuration step, click Download Template on the left of the page to download the SmartKit interconnection template
to the local PC.

6. In the HiCloud Parameter List sheet of the template, set all parameters according to Table 1, save the settings, and close the file.

Table 1 Template parameter configuration

Category: Basic Interconnection Parameters

Parameter: Floating IP Address of eDME Operation Portal
Description: IP address for logging in to the eDME operation portal.
Mandatory or Not: Mandatory
Example Value: 192.168.*.251

Parameter: Account Username of eDME Operation Portal
Description: Administrator account of the eDME operation portal.
Mandatory or Not: Mandatory
Example Value: bss_admin

Parameter: Account Password of eDME Operation Portal
Description: Password of the administrator account for logging in to the eDME operation portal.
NOTE: The password must be the new password after the first login. The initial password set during installation is invalid.
Mandatory or Not: Mandatory
Example Value: cnp2024@HW

Category: HiCloud Adaptation Parameters

Parameter: Management IP Address of GKit VM
Description: IP address for logging in to the GKit VM.
Mandatory or Not: Mandatory
Example Value: 192.168.*.194

Parameter: PaaS Account Username of GKit VM
Description: Username of the paas user for logging in to the GKit VM.
Mandatory or Not: Mandatory
Example Value: paas

Parameter: PaaS Account Password of GKit VM
Description: Password of the paas user for logging in to the GKit VM.
Mandatory or Not: Mandatory
Example Value: Image0@Huawei123

Parameter: Keepalived VIP (2) of Data Zone
Description: Set this parameter to the value of Keepalived VIP (2) of Data Zone in Table 11.
Mandatory or Not: Mandatory
Example Value: 192.168.*.17

Parameter: Machine-Machine Account Password
Description: Machine-machine account password. Set this parameter to the value of Machine-Machine Account Password in Table 11.
Mandatory or Not: Mandatory
Example Value: Changeme_123@

Parameter: Adaptation Package Address
Description: Local path for storing the adaptation package.
Mandatory or Not: Mandatory
Example Value: D:\Hicloud\package01

Parameter: Interconnect with VMware Integration Service
Description: Select Yes or No.
Mandatory or Not: Mandatory
Example Value: Yes

Parameter: Interconnect with Security Service
Description: Select Yes or No.
Mandatory or Not: Mandatory
Example Value: Yes

Parameter: Interconnect with Database Service
Description: Select Yes or No.
Mandatory or Not: Mandatory
Example Value: Yes

Parameter: Interconnect with BMS
Description: Select Yes or No.
Mandatory or Not: Mandatory
Example Value: Yes

7. In the Upload Template area, click Browse Files and select the configured template to upload it to SmartKit.

8. Click Next to go to the Pre-interconnection Check step.

9. On the displayed page, perform an automatic check and a manual check, and check whether each item passes the check. If any check fails,
perform operations as prompted on the page to ensure that the check result meets the check standard.

10. After the check is successful, click Execute Task in the lower part of the page to interconnect with eDME.

The interconnection duration varies depending on the local network quality.

3.5.1.3.5 Configuring CSHA After Installation


Interconnecting with eDME

3.5.1.3.5.1 Interconnecting with eDME


This section describes how to install the CSHA adaptation package to interconnect CSHA with eDME.

Obtaining the Adaptation Package and Document


Before interconnecting with eDME, obtain the required adaptation package, signature, and certificate file.

Table 1 lists the adaptation package, tool, and reference document required for connecting to eDME.

Table 1 Adaptation package, signature, and certificate file required for connecting CSHA to eDME

Software Package Name: resource_uniteAccess_csha_8.6.0
NOTE: The version number in the software package name varies with site conditions. Use the actual version number.

Description: The .zip package contains the following files:
Adaptation package: resource_uniteAccess_csha_8.6.0.tar.gz
Signature file: resource_uniteAccess_csha_8.6.0.tar.gz.cms
Certificate file: resource_uniteAccess_csha_8.6.0.tar.gz.crl

Download Link: For enterprise users, click here, search for the software package by name, and download it. For carrier users, click here, search for the software package by name, and download it.

Connecting to eDME
1. Log in to the eDME O&M portal using a browser.
Login address: https://IP address of the node:31943, for example, https://192.168.125.10:31943
The default user name is admin, and the initial password is configured during installation of eDME.

2. Configure the UltraVR redirection link.

a. On the home page of the eDME O&M portal, click .

b. Choose My Favorites from the main menu and click Manage on the right of the Quick Links area. The Link Management page
is displayed. Click Add and add the UltraVR access link to Common Links as prompted. After the link is added, you can click it
in Common Links to go to the UltraVR page. For details, see Adding a Link in eDME Product Documentation.

Set Access Mode to Account and password authentication and System Type to UltraVR.

3. Obtain the UltraVR trust certificate.

a. Use PuTTY to log in to the UltraVR server as user root.

b. Go to the /opt/BCManager/Runtime/LegoRuntime/certs/cacert directory.

c. Copy trust certificate cacert.pem.
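To get cacert.pem onto the PC from which the eDME portal is accessed, a standard scp download can be used as well. This is a sketch only; the IP address is a placeholder for the UltraVR server address.

# Download the UltraVR trust certificate to the current local directory (IP address is a placeholder)
scp root@192.168.20.30:/opt/BCManager/Runtime/LegoRuntime/certs/cacert/cacert.pem .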

4. Import a trust UltraVR certificate.

a. On the home page of the eDME O&M portal, click .

b. Choose Settings > System Management > Certificate Management. The Certificate Management page is displayed.

c. Click SouthBoundNodeService, click the Trust Certificates tab, and import the certificate obtained in 3.

After the certificate function is enabled, if no certificate is imported or an incorrect certificate is imported, services may be affected.

5. On the Add Third-Party Service page of eDME, connect the CSHA service to the eDME platform.
For details, see Adding Third-Party Services in eDME Product Documentation.

When configuring gateway information, log in to the UltraVR system, choose Settings > Cloud Service Configuration Management > eDME
Configuration, and add Tenant Northbound API Configuration and Tenant Southbound API Configuration of the corresponding tenant. Set
Domain Name/IP Address and Port based on the procedure for configuring the gateway in accessing a third-party service on eDME.
When configuring tenant southbound APIs on UltraVR, set the user name and password to be the same as those set in the operation of configuring a
service account for accessing a third-party service on eDME.
Table 2 lists the CSHA parameters to be configured.

Table 2 Parameter description

Parameter: address
Description: IP address of the operation plane

Parameter: ip
Description: UltraVR IP

Parameter: port
Description: UltraVR port

6. Deploy the CSHA adaptation package.


For details, see Deploying a Cloud Service Adaptation Package in eDME Product Documentation.

7. Configure AZs on eDME.

a. Create production and DR AZs.

i. Click in the upper left corner of the page and choose Infrastructure > Management and Configuration >
AZs from the main menu.

ii. On the O&M View tab page, click Create.

iii. Enter the name of the production AZ.


The name length ranges from 1 to 64 bytes. The name contains only letters, digits, underscores (_), hyphens (-), and periods (.).

iv. Click OK.

v. Repeat the preceding steps to create a DR AZ.

b. Add the FusionCompute cluster to be protected to the production and DR AZs.

i. Click in the upper left corner of the page and choose Infrastructure > Management and Configuration > AZs
from the main menu.

ii. Click the Operation View tab.

iii. Locate the row that contains the target production AZ, and click the number of Operation Resource Pools.

iv. On the Operation Resource Pools page, click Add above the cluster list.

v. On the Add Cluster page, select the FusionCompute cluster to be protected.

vi. Click OK.

vii. Repeat the preceding steps to add a cluster in the DR AZ.

c. Bring production and DR AZs online.

i. Click in the upper left corner of the page and choose Infrastructure > Management and Configuration > AZs
from the main menu.

ii. Click the Operation View tab.

iii. Locate the row that contains the target production AZ and choose > Operate Online.

iv. In the displayed dialog box, click OK.

v. Repeat the preceding steps to bring the DR AZ online.

8. Create production and DR host groups.

a. Click in the upper left corner of the page and choose Infrastructure > Virtualization > Virtualization Platform from the
main menu.

b. On the Virtualization Resources tab page, click the cluster under the FusionCompute site and click the Associated Resources
tab.

c. Click Create Host Group in the upper right corner.

d. Enter the name of the production host group, and select the production AZ and production host.

e. Click OK.

f. Repeat the preceding steps to create a DR host group.

9. Configure UltraVR.

a. Update sites.
Log in to the UltraVR server as the admin user and choose Resources > LocalServer > Update.


b. Modify storage device information.

i. Click LocalServer.

ii. Select the production or DR site and the corresponding storage device, and choose More > Modify.

iii. In the displayed Modify Device Information dialog box, enter the user name and password.

Obtain the user name and password from the storage device administrator.
After the password of a storage device is changed, UltraVR continuously accesses the storage device. You are advised to create a user
for access by UltraVR on the storage device. The user must have administrator rights, for example, user mm_user of array storage and
user cmdadmin of Huawei distributed block storage.

c. Modify FusionCompute device information.

i. Click LocalServer.

ii. Select the production or DR site and the corresponding FusionCompute, and click Modify.

iii. In the displayed Modify Device Information dialog box, enter the user name and password.

Obtain the username and password of the interface interconnection user from the FusionCompute system administrator.

d. Create a datastore mapping.

i. Select the production site or DR site and select FusionCompute.

ii. On the Datastore Mapping tab page, click Add, select the production site and DR site, and click Next. The page for
configuring the mapping view is displayed.

iii. Select datastore resources with active-active relationships and add them to the mapping view.
You can view the added mapping in the Mapping View area.
Select other datastore resources with active-active relationships and add them to the mapping view.

iv. Click Finish.

3.5.1.3.6 Configuring eCampusCore After Installation


Interconnecting the O&M Plane with eDME

To ensure normal communication between eCampusCore and eDME, you need to configure the interconnection.

Importing a VM Template on the Operation Portal

To ensure that VMs can be created when subsequent service instances are applied for, you need to import the VM template to the FusionCompute environment
on the operation portal after the product installation and set VM specifications on eDME.

Configuring the eDME Image Repository

Before installing eCampusCore components on the operation portal, you need to create an eDME image repository.


3.5.1.3.6.1 Interconnecting the O&M Plane with eDME


To ensure normal communication between eCampusCore and eDME, you need to configure the interconnection.
Interconnecting eCampusCore with eDME

This section describes how to use SmartKit to interconnect Link Services with eDME.

Importing the Service Certificate to the eDME

After the installation is complete, you need to import the service certificate to the eDME environment to ensure that the eDME can properly access service
interfaces.

Configuring the Login Mode

After the configuration, you can log in to eCampusCore on eDME in single sign-on (SSO) mode.

3.5.1.3.6.1.1 Interconnecting eCampusCore with eDME


This section describes how to use SmartKit to interconnect Link Services with eDME.

Prerequisites
You have installed eCampusCore by referring to Installation Using SmartKit .

You have obtained the service adaptation package eCampusCore_<version>_PaaSeLink.zip and its .asc verification file by referring to Table
13 , and stored them in the local directory, for example, D:\packages.

Procedure
1. On the home page of SmartKit, click the Virtualization tab. In Site Deployment Delivery, click Datacenter Virtualization Solution
Deployment.

2. Click DCS Deployment. The Site Deployment Delivery page is displayed.

3. On the Tasks tab page, click Create Task. The Basic Configuration page is displayed.

4. Set Task Name. In the Select Scenario area, select Connecting DCS Project Components to eDME and click Create in the lower part of
the page.

5. In the Interconnection Policy area, select eCampusCore Interconnection and click Next.

6. On the Parameter Configuration page, click Download File Template and set interconnection parameters in the template based on Table 1.

Table 1 eCampusCore parameters

Category Parameter Description

Basic Interconnection Floating IP Address of Floating IP address of the eDME operation portal. Set this parameter to the IP address for logging
Parameters eDME Operation Portal in to the eDME operation portal.

Account Username of eDME operation portal account bss_admin. This account is used to access the eDME operation
eDME Operation Portal portal to obtain the user authentication token for service interconnection.

Account Password of Password of the eDME operation portal account. This account is used to access the eDME
eDME Operation Portal operation portal to obtain the user authentication token for service interconnection.

eCampusCore Management IP address Set this parameter to the IP address of the installer VM configured in 15 .
Adaptation Parameters of the installer VM

Account Username of User for logging in to the installer VM. Set this parameter to sysomc.
Installer VM

Account Password of Set this parameter to the password of the sysomc user configured in 15 .
Installer VM

Floating IP address of the Set this parameter to the value of Floating IP address of the internal gateway in 15 .
internal gateway


Machine-Machine Set this parameter to the value of Password of the O&M management console and database in
Account Password 15 .

Adaptation Package Address of the adaptation package, which is used to store the adaptation package and its
Address verification file on the local host for the interconnection with the eDME service.

Interconnect with Whether to interconnect with the PaaSeLink service. If this parameter is set to Yes, the
PaaSeLink Service eCampusCore PaaSeLink service adaptation package is imported and the target service is deployed
through eDME operation portal APIs.

Interconnect with Whether to interconnect with the PaaSAPIGW service. If this parameter is set to Yes, the
PaaSAPIGW Service eCampusCore PaaSAPIGW service adaptation package is imported and the target service is
deployed through eDME operation portal APIs.

7. Click Browse Directory, upload the parameter template, and click Next.

8. On the Pre-interconnection Check page, perform an automatic check and a manual check, and check whether each check item passes. If any check item fails, perform operations as prompted to meet the check standard. If all check items pass, click Execute Task.

9. The Execute Interconnection page is displayed. After all items are successfully executed, click Finish.

Task parameters cannot be modified. If the task fails to be created, create a task again.

3.5.1.3.6.1.2 Importing the Service Certificate to the eDME


After the installation is complete, you need to import the service certificate to the eDME environment to ensure that the eDME can properly access
service interfaces.

Prerequisites
You have logged in to the eDME O&M portal (https://IP address of the eDME O&M portal:31943) as a user attached to the Administrators role.

Procedure
1. Export the service certificate.

a. Choose Multi-tenant Service > Service Management > Service Management, and search for the service created in
Interconnecting eCampusCore with eDME .

b. Click Redirect in the Operation column.

c. On the error page that is displayed, click here to go to the login page.

d. Enter username and password to log in.

Username: admin

Password: password of the O&M management console and database configured in Table 16 .

e. Choose System > Certificate > CA and click the download button in the Operation column to download the certificate.


2. Import the certificate to the eDME environment.

a. On the eDME O&M portal, choose Settings > System Management > Certificate Management.

b. On the page that is displayed, select ThirdPartyCloudService.

c. Click the Trust Certificates tab and click Import to import the certificate.

Configuration Item Description

Certificate alias Set this parameter as required, for example, Link.

Certificate format Select PEM.

Certificate file Select the certificate to be exported.

Remarks You do not need to set this parameter.

3. Check the service status.


After the service certificate is imported, you can choose Multi-tenant Service > Service Management > Service Management and check
that the value of Running Status of the service is Normal.

3.5.1.3.6.1.3 Configuring the Login Mode


After the configuration, you can log in to eCampusCore on eDME in single sign-on (SSO) mode.
Configuring Multi-Session Login

After the installation is complete, you need to change the login mode so that multiple sessions are allowed for a single user and set the maximum number of
concurrent online users to ensure that the instance service page can be accessed properly.

Configuring SSO for the eDME O&M Portal

To ensure that redirection from the eDME O&M portal is normal, you need to configure SSO interconnection using SAML with the eDME O&M portal as the
server (IdP) and eCampusCore as the client (SP).

3.5.1.3.6.1.3.1 Configuring Multi-Session Login


After the installation is complete, you need to change the login mode so that multiple sessions are allowed for a single user and set the maximum
number of concurrent online users to ensure that the instance service page can be accessed properly.

Prerequisites
You have logged in to the eDME O&M portal (https://IP address of the eDME O&M portal:31943) as a user attached to the Administrators role.

Procedure
1. Log in to the service.

a. Choose Multi-tenant Service > Service Management > Service Management, and search for the service created in
Interconnecting eCampusCore with eDME .

b. Click Redirect in the Operation column.

c. On the error page that is displayed, click here to go to the login page.


d. Enter username and password to log in.

Username: admin

Password: password of the O&M management console and database configured in Table 16 .

2. Choose System > Authentication > System Login Mode.

3. Toggle on Enable single-user multi-session and set Max. Allowed Online Users to 50.

3.5.1.3.6.1.3.2 Configuring SSO for the eDME O&M Portal


To ensure that redirection from the eDME O&M portal is normal, you need to configure SSO interconnection using SAML with the eDME O&M
portal as the server (IdP) and eCampusCore as the client (SP).

Procedure
1. Obtain the eDME product certificate.

a. Log in to a node on the eDME O&M portal as the sopuser user.

b. Run the following command to switch to the root user:


# su - root

c. Run the following commands to set the private key password:


# cd /opt/dme/cert_tool/
# bash deal_cert.sh

The entered password is the private key password. Remember the password for future use.

d. After the commands are executed, save the following three files that are generated in the /home/sopuser/SAML_SSO directory to
your local PC.

signing_cert.pem: public key file

signing_key_new.pem: private key file

ca.pem: trust certificate

e. Exit the root user.


# exit
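Before importing the files on the eDME O&M portal, you can optionally check them on the node. The following is a minimal sketch, assuming the files are still in /home/sopuser/SAML_SSO and that the openssl client is available on the node; the commands only display certificate details and verify that the private key password you set can decrypt the key, and they do not modify any file.

# cd /home/sopuser/SAML_SSO
# openssl x509 -in signing_cert.pem -noout -subject -dates
# openssl x509 -in ca.pem -noout -subject -issuer
# openssl pkey -in signing_key_new.pem -noout

The last command prompts for the private key password set earlier. If any command reports an error, generate the certificate files again before continuing.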

2. Import the product certificate.

a. Log in to the eDME O&M portal at https://IP address of the eDME O&M portal:31943 as a user attached to the Administrators role.

b. Import the product certificate. Choose Settings > System Management > Certificate Management from the main menu.

c. Click UniSSOWebsite_SAML_IdP.

d. Click the Identity Certificates tab page and click Import to import the public and private key files.


Configuration Item Description

Certificate alias Customize a certificate alias, for example, edme.

Public key file Select the public key file obtained in Obtain the eDME product certificate.

Private key file Select the private key file obtained in Obtain the eDME product certificate.

Private key password Enter the private key password set in Obtain the eDME product certificate.

Certificate chain You do not need to set this parameter.

Remarks You do not need to set this parameter.

3. Obtain the metadata of the eDME O&M portal.

a. Log in to the eDME O&M portal at https://IP address of the eDME O&M portal:31943 as a user attached to the Administrators role.

b. Choose Settings > Security Management > Authentication.

c. In the navigation pane, choose SSO Configuration > SAML SSO Configuration.

d. Click the SP Configuration tab page and click Export Metadata to obtain the eDME O&M portal metadata.

4. Log in to eCampusCore.

a. Log in to the eDME O&M portal at https://IP address of the eDME O&M portal:31943 as a user attached to the Administrators role.

b. Choose Multi-tenant Service > Service Management > Service Management, and search for the service created during
installation.

c. Click Redirect in the Operation column.

d. On the error page that is displayed, click here to go to the login page.

e. Enter username and password to log in.

Username: admin

Password: password of the O&M management console and database configured in Table 16 .

5. Obtain the metadata and CA certificate of eCampusCore.

a. Choose System > Certificate > CA and click Download to obtain the certificate.

b. Choose System > Authentication > SAML, click the SAML Client Configuration tab page, and click Export Metadata to
obtain the metadata file.

6. Configure SAML on the eDME O&M portal.

a. Log in to the eDME O&M portal at https://IP address of the eDME O&M portal:31943 as a user attached to the Administrators role.

b. Choose Settings > System Management > Certificate Management from the main menu.

c. Click UniSSOWebsite_SAML_IdP.

d. On the Trust Certificates tab page, click Import, set Certificate alias, and import the certificate file of eCampusCore obtained in
5.


e. Choose Settings > Security Management > Authentication.

f. In the navigation pane, choose SSO Configuration > SAML SSO Configuration.

g. Click the SP Configuration tab, and click Create.

Parameter Description

Name Customize a name, for example, eCampusCore.

Protocol Retain the default value SAML2.0.

Status Enable it.

Description You do not need to set this parameter.

Metadata file Select the metadata file of eCampusCore obtained in 5 and click Upload.

Attribute Mapping Rules Click + to configure the following two mapping rules:
Local Attribute: Role; Mapped Key: roleName
Local Attribute: Username; Mapped Key: userName

7. Configure the SAML on eCampusCore.

a. On the eCampusCore page, choose System > Authentication > SAML.

b. On the SAML Client Configuration tab page, click Create.

Parameter Description

Identity Provider Set this parameter to eDME_OPS.

Status Enable it.

Upload Metadata Select the metadata file of the eDME O&M portal obtained in 3.

Convert User Name Choose Use Server Name.


NOTE:

Redirection is not allowed for the eDME user with the same name as the eCampusCore local user.

User Attribute Name Set this parameter to userName.

Role Attribute Name Set this parameter to roleName.

Binding relationship between a remote role Configure the role mapping rules. After the configuration, a remote role has the same permissions as the
and a local role server role after logging in to the server.
Local Role: DCS_Operations_Administrator
Remote Role: Administrators

8. Verify the configuration.

a. Exit the eCampusCore login page.


b. Log in to the eDME O&M portal as a user attached to the Administrators role, choose Multi-tenant Service > Service
Management > Service Management, and search for the service created in Interconnecting eCampusCore with eDME .

c. Click Redirect in the Operation column and check that the service page can be displayed without login.

3.5.1.3.6.2 Importing a VM Template on the Operation Portal


To ensure that VMs can be created when subsequent service instances are applied for, you need to import the VM template to the FusionCompute
environment on the operation portal after the product installation and set VM specifications on eDME.
Importing a VM Template

Creating VM Specifications

Before the installation, ensure that related VM specifications have been created.

3.5.1.3.6.2.1 Importing a VM Template

Prerequisites
You have obtained the VM template corresponding to the FusionCompute architecture type by referring to Table 16 . The FusionCompute
architecture type can be viewed on the host overview page.

Procedure
1. Decompress the VM template package on the local PC to obtain the template files. Check whether the *.vhd and *.ovf files are in the same
directory. If no, obtain them again.

After decompression, ensure that the .vhd disk files are in the same directory as the .ovf files.
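If you use a Linux maintenance terminal, the directory check can also be done from the command line. The following is a minimal sketch that assumes the package was downloaded to /tmp and uses VMTemplate_x86_64_Euler2.11.zip as an example name (replace it with the actual package name); it extracts the package and lists the *.vhd and *.ovf files so that you can confirm they are in the same directory.

# cd /tmp
# unzip -q VMTemplate_x86_64_Euler2.11.zip -d ./template
# find ./template -name "*.ovf" -o -name "*.vhd"

If the listed *.vhd and *.ovf files are not under the same directory, obtain the template package again.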

2. Log in to FusionCompute as a user associated with the Administrator role.

3. In the navigation pane, click .


The Resource Pool page is displayed.

4. Right-click Resource Pool and choose Import > Import Template from the shortcut menu.
The Create Template dialog box is displayed.
Select Import from Local PC as the template source. Upload the *.vhd and *.ovf files in the decompressed template folder on the local PC
respectively.

Figure 1 Creating a template

5. Click Next.


Go to the Basic Information step.

Name: Set the template name.

Enter the template name as required. If the name does not meet the requirements, the VM may fail to be created.
For the VM template packages VMTemplate_x86_64_CampusContainerImage.zip and VMTemplate_aarch64_CampusContainerImage.zip, the template name is a fixed value: eContainer-K8S-VMImage-EulerOS-2.12-64bit-fc.8.7.RC1-Campus.
If you use another template package to create a VM, the template name must be the same as the name of the decompressed directory.
For example, after VMTemplate_x86_64_Euler2.11.zip is decompressed, the *.vhd and *.ovf files are stored in the Euler2.11_23.2 directory, so Euler2.11_23.2 is the template name.

Set Compute Resource: Select a cluster at the site.

6. Click Next.
The Datastore page is displayed. Select the shared data store interconnected with FusionCompute, for example, HCI_StoragePool0.

7. Click Next to go to the Configure Template page.

If all configurations have default values, you do not need to change them.

Configure the NICs.

If the NIC configuration is empty, select a port group for NIC1 and NIC2, for example, managePortgroup.

If the NIC configuration is not empty, no configuration is required.

8. Click Next to go to the Confirm Info page. Check the configured values and start the creation.

Do not refresh the browser page until the creation task is complete.
If the upload task fails to be created, a dialog box is displayed. Click Load Certificate to load the certificate and click Continue Uploading.

9. Import other templates by referring to 4 to 8.

Both the Arm and x86 templates need to be imported so that you can use the templates to apply for instances of different server types during instance
provisioning.

10. Log in to the eDME O&M portal at https://IP address of the eDME O&M portal:31943 as a user attached to the Administrators role.

11. Choose Infrastructure > Virtualization > Virtualization Platform.


Select the VRM to which the templates are to be synchronized and click Synchronize.

12. Choose Infrastructure > Virtualization > VMs.


On the VM Templates tab page, click of the imported templates in the Operation column and select Operate Online from the drop-
down list.

3.5.1.3.6.2.2 Creating VM Specifications


Before the installation, ensure that related VM specifications have been created.

Context
The following VM specifications are required for applying for an ECS:

1C2G

2C4G

2C8G

3C8G

4C8G

4C16G

8C16G

8C32G

Procedure
1. Log in to the eDME O&M portal at https://IP address of the eDME O&M portal:31943 as a user attached to the Administrators role.

2. Choose Infrastructure > Virtualization > Virtualization Platform.

3. On the VM Specifications tab page, click Create Specification and perform the following steps to create a specification:

a. Set basic information.

Specification Name and Display Name: Customize the names.

CPU Architecture and Vendor: Set them based on site requirements.

vCPUs and Memory Size: Set them as planned.


b. Set the features.


Toggle on Advanced Parameters and deselect Enabled for Clock synchronization. Retain the default values for other parameters.

c. Set the available range. Select Public.

d. Associate clusters. Associate clusters based on site requirements.

4. Repeat 3 to create all VM specifications.

3.5.1.3.6.3 Configuring the eDME Image Repository


Before installing eCampusCore components on the operation portal, you need to create an eDME image repository.

Prerequisites


You have created an eContainer image repository account and obtained the eContainer image repository information, including the repository
address, repository account and password, and CA certificate.

You have logged in to the eDME operation portal as a VDC user who has the CCS Admin permissions.

Procedure
1. Copy a repository address to a browser and open it. On the Harbor login page, use the repository account and password to log in to Harbor.

2. Choose Users > NEW USER to create an account for logging in to the image repository.

Username: Set this parameter to e-campus-core.

Email: Set this parameter as required.

First and last name: Set this parameter as required.

Password: The value is the same as the password of the O&M management console and database in Planning Data.

Comments: Set this parameter based on the site requirements.

3. Select the new user and click SET AS ADMIN to set the new account as an administrator.

4. In the navigation pane, choose Projects. Click NEW PROJECT.

Project Name: Set this parameter to cube-repo-space.

Access Level: Set this parameter to Public.

Project quota limits: Set this parameter to -1. The unit is GiB.

Proxy Cache: Turn off this switch.

5. In the eDME navigation pane, choose Container > Elastic Container Engine > Repository.

6. Click Create. The dialog box for creating an image repository is displayed.

Name: Set this parameter to image-repo-cce.

Repository Address: address of the eContainer image repository.

Login Account: Set this parameter to e-campus-core.

Login Password: Enter the password for logging in to the image repository.

Description: Set this parameter as required.

CA Certificate: CA root certificate of the eContainer image repository server certificate.

7. Click OK.
The eDME image repository is created.
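You can optionally verify the new account and repository from any host that has the Docker client installed and can reach the repository address. This is a minimal sketch; it assumes the host either trusts the CA certificate of the repository or uses the address as a test-only insecure registry, and it only logs in and out without pushing anything.

# docker login -u e-campus-core <Repository address>
# docker logout <Repository address>

If the login succeeds, the account created in 2 and the network path to the repository are both working.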

3.5.1.4 Checking Before Service Provisioning


System Management

Site Deployment Quality Check

3.5.1.4.1 System Management


This section describes how to configure the environment before performing DCS inspection operations.

Prerequisites
You have logged in to SmartKit.

The DCS inspection service has been installed on SmartKit.


Use either of the following methods to install the virtualization inspection service:
Method 1: On the SmartKit home page, click the Virtualization tab, click Function Management, select Datacenter Virtualization Solution
Inspection, and click Install.
Method 2: Import the software package of the DCS inspection service (SmartKit_version_Tool_Virtualization_Inspection.zip).

1. On the home page of SmartKit, click the Virtualization tab and click Function Management. On the page that is displayed, click
Import. In the Import dialog box, select the software package of the virtualization inspection service and click OK.
2. In the dialog box that is displayed, click OK. In the Verification and Installation dialog box that is displayed, click Install. In the dialog
box that is displayed indicating a successful import, click OK. The status of Datacenter Virtualization Solution Inspection changes to
Installed.

Procedure
1. Access the inspection tool.

a. On the SmartKit home page, click the Virtualization tab. In Routine Maintenance, click Datacenter Virtualization Solution
Inspection.

b. Click DCS Inspection to access the virtualization inspection service page.

2. Add environment information.

a. Choose System Management from the main menu.

b. In the navigation pane, choose Environment Configuration.

c. Click Create an Environment.

d. In the displayed Create an Environment dialog box, set Environment Name and Customer Cloud Name, and click OK.

The environment information can be added only once. If the environment information has been added, it cannot be added again. If you need to add an
environment, delete the original environment first.

3. Add nodes.

a. Choose System Management from the main menu.

b. In the navigation pane, choose Environment Configuration.

c. Select the customer cloud and click Add Node in the Operation Column. The Add Node page is displayed.

d. Set parameters based on the selected product type.

e. Click OK. In the Add Node dialog box that is displayed, confirm the node information, select all nodes, and click OK. To add
multiple sets of devices of the same type, repeat the operations for adding a customer cloud and adding a node.

3.5.1.4.2 Site Deployment Quality Check


After the components are deployed, you can use the inspection function of SmartKit to check the environment before service provisioning, to
determine whether the current environment configuration is optimal.

Prerequisites
You have logged in to SmartKit.

The environment has been configured. For details, see System Management .

To inspect HiCloud, import the HiCloud inspection package by following instructions provided in Using SmartKit to Perform Health Check on
CMP HiCloud .

Procedure
1. Access the inspection tool.


a. On the SmartKit home page, click the Virtualization tab. In Routine Maintenance, click Datacenter Virtualization Solution
Inspection.

b. Click DCS Inspection to access the virtualization inspection service page.


2. Create a health check task.

a. In the main menu, click Health Check to go to the Health Check Tasks page.

b. In the upper left corner, click Create to go to the Create Task page. In the Task Scenario area, select Quality check.

c. Set parameters based on Table 1.

Table 1 Creating a task

Parameter Description

Task Name Enter a name for the health check task.

Task Scenario Select a scenario where the health check task is executed.
Routine Health Check: Check basic check items required for routine O&M.
Pre-upgrade Check: Before the upgrade, check whether the system status meets the upgrade requirements.
Quality Check: After deploying the FusionCompute environment, check the environment before service rollout.
Post-upgrade acceptance: After the system is upgraded, check whether the system is normal.

Task Policy Indicates the execution policy of an inspection task.


Real-time task: The health check task is triggered immediately.
Scheduled task: The health check task is executed at a specified time. If you select this option, set Execution Time.

Send check report via email Indicates whether to enable the email push task.

Customer Cloud Select the target customer cloud where the health check task is executed.

Select Objects Select the target object for the health check task.
Management: nodes of services on the management plane
Select at least one node to execute a health check task.

Select Check Items Select the items for the health check task.
Select at least one item for the health check task.
NOTE:

By default, all check items of all nodes are selected. To modify the items, select the needed nodes.

d. Click Create Now.


You can view the created health task in the task list.

3. View the inspection result.

a. Click the name of a finished health check task.

b. On the task details page, perform operations described in Table 2.

Table 2 Operations on the task details page

Task Name Operation

Viewing basic information about the task
In the Basic Information area, view the name and status of the current task.

Viewing the object check pass rate and check item pass rate
The pass rates of objects and check items are displayed in pie charts. You can select By environment or By product.

Viewing component check results
Viewing the details of faults:
On the Component Check Result tab page, click the Check Item Fault Details tab and view the faulty object displayed in the Object Name column.
In the Object Name column, click a check item to view its check results.
Click a link in the Check Item ID column to view the troubleshooting suggestions for the check item.
Checking the status of each object in the task:
On the Component Check Result tab page, click Object Check Details and click Details in the Operation column. The Node Details dialog box is displayed, showing the results of the check items selected for the health check task.
Click a link in the Check Item ID column to view the troubleshooting suggestions for the check item.
NOTE:
Determine the object status based on the results of check items. The object status can be Passed or Failed. If all check items are passed, the object status is Passed. If any check item fails, the object status is Failed.

Exporting the health check report
In the upper right corner of the page, click Export Report.
Select a report type (Basic Report, Quality Report, or Synthesis Report). If you select Synthesis Report, enter the Customer Name (name of the user of the health check report) and Signature (name of the provider of the health check report).
NOTE:
The synthesis report file is in Word format.
Click OK to export the report.

If a storage plane is not planned, the result of check item Network103 does not comply with the best practice. This issue does not need to be handled and will be automatically resolved after storage interfaces are added.
If a large amount of valid alarm information exists in the inspection environment, the inspection task may take a long time to complete. Wait patiently.

3.5.2 Configuring Interconnection Between iMaster NCE-Fabric and FusionCompute


This operation is optional in the multi-tenant scenario and mandatory in the non-multi-tenant scenario.
For details about how to interconnect iMaster NCE-Fabric with FusionCompute, see Configuration Guide > Traditional Mode > Commissioning > Interconnecting with FusionCompute in iMaster NCE-Fabric V100R024C00 Product Documentation.

3.5.3 Configuring Interconnection Between iMaster NCE-Fabric and eDME


In the multi-tenant scenario, log in to the eDME O&M portal, choose Infrastructure > Network > Network Service > Setting > SDN Connection
to configure SDN connection. For details, see Configuring SDN Connection .
(Optional) For details about how to interconnect iMaster NCE-Fabric with eDME, see O&M Portal > Settings > Security Management >
Authentication Management > SSO Configuration in eDME 24.0.0 Product Documentation.

3.5.4 Installing FabricInsight


For details about how to install FabricInsight, see sections "Single-Node System Installation (x86, VM)" and "Single-Node System Installation (ARM, VM)" in the iMaster NCE-FabricInsight V100R024C00 Product Documentation.

1. When creating a VM, ensure that the memory size is greater than or equal to 128 GB. Configure at least two disks, including one system disk and one or more data disks. The system disk capacity should be greater than 900 GB and the total data disk capacity should be greater than 4000 GB (the minimum feasible configuration is a 900 GB system disk and 500 GB of total available data disk capacity). You need to configure the capacity of the system and data disks before starting the VM; otherwise, you have to re-create the VM.
2. After a VM is created, you can expand but not reduce the disk capacity. You are advised to set Configuration Mode to Thick provisioning lazy zeroed or
Thin Provisioning to reduce the time for creating disks. At least 40 CPU cores are required in the Arm environment.


3. Only one NIC needs to be configured for a VM. When creating a VM, do not select Start VM immediately after the creation. Delete the extra NICs before
starting the VM.

3.5.5 (Optional) Installing FSM


In the scale-out storage deployment scenario, DeviceManager_Client is used to deploy the active and standby management nodes of the OceanStor
Pacific series storage software.

Prerequisites
The network connection between the installation PC and all nodes is normal.

The command prompt for user root is #.

The communication between the management nodes and storage nodes is normal. You can ping the management IP addresses of other nodes
from one node to check whether the communication is normal.
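The ping check described in the prerequisites can be run as a simple loop. The following is a minimal sketch, run as user root on one node; the IP addresses are placeholders and must be replaced with the planned management IP addresses of the other nodes.

# for ip in 192.168.100.11 192.168.100.12 192.168.100.13; do ping -c 3 -W 2 $ip >/dev/null && echo "$ip reachable" || echo "$ip NOT reachable"; done

Investigate any address reported as NOT reachable before starting the installation.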

Procedure
1. Install the scale-out block storage. For details, see Installation > Software Installation Guide > Installing the Block Service > Connecting
to FusionCompute in OceanStor Pacific Series 8.2.1 Product Documentation for the desired version.

3.5.6 Installing eDME (Hyper-Converged Deployment)


Network Planning


Firewall Planning

SmartKit-based Installation (Recommended)

(Optional) Configuring Data Disk Partitions Using Commands (EulerOS)

Post-installation Check

Initial Configuration

Software Uninstallation

3.5.6.1 Network Planning


Table 1 describes the IP address planning for eDME multi-node deployment.

Table 1 IP address planning for eDME multi-node cluster deployment

Name Quantity

Primary node IP address: 1
Node IP address: 2 or 4 (2 for three-node cluster deployment; 4 for five-node cluster deployment)
Floating management IP address: 1
Subnet mask: 1
Gateway: 1
Southbound floating IP address: 1
Operation Portal IP Address (multi-tenant service): 2
Operation Portal Floating IP Address (multi-tenant service): 1
Operation Portal Global Load Balancing IP Address (multi-tenant service): 1
Elastic Container Engine IP Address (ECE service): 2
Elastic Container Engine Node 2 IP Address (ECE service): 2
Elastic Container Engine Floating IP Address (ECE service): 2
Elastic Container Engine Global Load Balancing IP Address (ECE service): 1
Auto scaling service node: 2
Security service node (situational awareness service): 2

Description (applies to the preceding IP addresses)
The primary node IP address, node IP addresses, floating management IP address, and southbound floating IP address must be in the same network segment as the customer's management network and must be unused IP addresses.
The IP addresses of operation portal nodes must be in the same network segment as the tenants' management network and must communicate with the management plane over a Layer 3 network. You are advised to plan the operation portal node IP addresses on the same network as the management plane.
The floating IP address of the ECE, the load balancing IP address of the ECE, and the IP addresses of the ECE nodes must be in the same network segment and not in use.
The floating IP address of the public service domain of the ECE and the IP address of network port 2 of an ECE node must be in the same network segment and not in use.
NOTE:
The IP addresses and subnet mask must be in IPv4 format.
Currently, network parameters cannot be modified during the installation. Do not perform any operation that may modify network parameters.
During network planning, ensure that each node can communicate with one another.
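You can also check from an existing host on the management network that the planned addresses are not already occupied. The following is a minimal sketch with placeholder addresses that must be replaced with the planned node and floating IP addresses; any address that answers the ping is already in use and must be re-planned (hosts with ICMP disabled are not detected by this check).

# for ip in 192.168.125.10 192.168.125.11 192.168.125.12; do ping -c 2 -W 1 $ip >/dev/null && echo "$ip is already in use" || echo "$ip appears to be free"; done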

3.5.6.2 Firewall Planning


For details about the communication IP addresses, port numbers, and protocols configured on the firewall, see LLD Template.
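After the firewall rules from the LLD Template are applied, you can spot-check that a required port is reachable from the relevant network segment. The following is a minimal sketch that uses only bash built-ins; the management floating IP address 192.168.125.10 and port 31943 (the eDME O&M portal port used elsewhere in this document) are examples and should be replaced with the addresses and ports listed in your LLD.

# timeout 3 bash -c 'cat < /dev/null > /dev/tcp/192.168.125.10/31943' && echo "port reachable" || echo "port blocked or host unreachable"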

3.5.6.3 SmartKit-based Installation (Recommended)


You can use SmartKit to install eDME.

Scenario
This section describes how to use SmartKit to install eDME.

Prerequisites
You have installed SmartKit. For details about how to install and run SmartKit, see "Deploying SmartKit" in SmartKit 24.0.0 User Guide.

You have installed Datacenter Virtualization Solution Deployment on SmartKit.

On the home page of SmartKit, click the Virtualization tab, click Function Management, and check whether the status of Datacenter Virtualization
Solution Deployment is Installed. If the status is Uninstalled, you can use either of the following methods to install the software:
On the home page of SmartKit, click the Virtualization tab, click Function Management, select Datacenter Virtualization Solution Deployment, and
click Install.
Import the software package for the basic virtualization O&M service (SmartKit_24.0.0_Tool_Virtualization_Service.zip).

1. On the home page of SmartKit, click the Virtualization tab and click Function Management. On the page that is displayed, click
Import. In the Import dialog box, select the software package for the basic virtualization O&M service and click OK.
2. In the dialog box that is displayed, click OK. In the Verification and Installation dialog box that is displayed, click Install. In the dialog
box indicating a successful import, click OK. The status of Datacenter Virtualization Solution Deployment changes to Installed.

You have imported the software package of the eDME deployment tool (eDME_version_DeployTool.zip). The procedure is the same as that
for importing the basic virtualization O&M software package.

Procedure
1. On the home page of SmartKit, click the Virtualization tab. Click Datacenter Virtualization Solution Deployment in Site Deployment
Delivery.

2. Click DCS Deployment. The Site Deployment Delivery page is displayed.

3. On the Tasks tab page, click Create Task. The Basic Configuration page is displayed.

On the Site Deployment Delivery page, click Support List to view the list of servers supported by SmartKit.

4. Set Task Name, select DCS Deployment, and click Create.

5. In the Confirm Message dialog box that is displayed, click Continue.

6. On the Installation Policy page, select eDME Installation. Click Next.

7. Configure parameters.

To modify the configuration online, go to 8 to manually set related parameters on the page.

To import configurations using an EXCEL file, click the Excel Import Configuration tab. Click Download File Template, fill in
the template, and import the parameter file. If the import fails, check the parameter file. If the parameters are imported successfully,


you can view the imported parameters on the Online Modification Configuration tab page. Then, go to 9.

To quickly fill in the configuration, click the Quick Configuration tab, set parameters as prompted, and click Generate Parameter.
If a parameter error is reported, clear the error as prompted. If the parameters are correct, go to 9.
8. Set eDME parameters.
On the Online Modification Configuration tab page, click Add eDME in the eDME Parameter Configuration area. On the Add eDME
page in the right pane, set related parameters.

a. Select the path of the installation package, that is, the path of the folder where the deployment software package is stored. If the
software package is not downloaded, download it as instructed in eDME Software Package . After you select a path, the tool
automatically uploads the software package to the deployment node. If you do not select a path, manually upload the software package
to the /opt/install directory on the deployment node.

b. Configure eDME information.

Table 1 eDME information

Parameter Description

System Language System language.

CPU Architecture CPU architecture of the deployment node.


X86
ARM

Management Level L1: In three-node deployment, 10,000 VMs can be managed.


LITE: In three-node deployment, 1,000 VMs can be managed.

OS Type The OS type is EulerOS.

Enable SFTP Whether to enable SFTP.


Yes: After the installation is complete, file upload and download are allowed on the node.
No: After the installation is complete, file upload and download are not allowed on the node.

Automatic VM creation Whether to automatically create a VM. The value can be Yes or No. Other parameters are valid only when this
parameter is set to Yes.

FusionCompute IP IP address of FusionCompute used for creating a VM.

FusionCompute login user Username for logging in to FusionCompute.

FusionCompute login password Password for logging in to FusionCompute.

Subnet mask of the node Subnet mask of the node.

Subnet gateway of the node Node gateway.

Primary Node Host Name Host name of the active node.

Primary Node IP Address IP address of the active node.

Primary Node root Password Password of user root for logging in to the active node.

CNA name of primary node Name of the CNA to which the active node belongs.

Disk space of primary node Disk space of the active node, in GB.

Child Node 1 Host Name Host name of child node 1.

Child Node 1 IP Address IP address of child node 1.

Child Node 1 root Password Password of user root on child node 1.

CNA name of child node 1 Name of the CNA to which child node 1 belongs.

Disk space of child node 1 Disk space of child node 1, in GB.


Child Node 2 Host Name Host name of child node 2.

Child Node 2 IP Address IP address of child node 2.

Child Node 2 root Password Password of user root on child node 2.

CNA name of child node 2 Name of the CNA to which child node 2 belongs.

Disk space of child node 2 Disk space of child node 2, in GB.

Deploy Operation Portal or not Whether to deploy an operation portal for eDME.
No
Yes

Operation Portal Node 1 Host Host name of an operation portal node. This parameter is valid only when the operation portal is to be deployed.
Name

Operation Portal Node 1 IP Address IP address of the operation portal node. This parameter is valid only when the operation portal is to be deployed.

Operation Portal Node 1 root Password of the root account of the operation portal node. This parameter is valid only when the operation portal is
Password to be deployed.

CNA name of Operation Portal Name of the CNA to which operation portal node 1 belongs. This parameter is valid only when the operation portal
node 1 is to be deployed.

Disk space of Operation Portal node Disk space used by operation portal node 1. This parameter is valid only when the operation portal is to be
1 deployed.

Operation Portal Node 2 Host Host name of an operation portal node. This parameter is valid only when the operation portal is to be deployed.
Name

Operation Portal Node 2 IP Address IP address of the operation portal node. This parameter is valid only when the operation portal is to be deployed.

Operation Portal Node 2 root Password of the root account of the operation portal node. This parameter is valid only when the operation portal is
Password to be deployed.

CNA name of Operation Portal Name of the CNA to which operation portal node 2 belongs. This parameter is valid only when the operation portal
node 2 is to be deployed.

Disk space of Operation Portal node Disk space used by operation portal node 2 (unit: GB). This parameter is valid only when the operation portal is to
2 be deployed.

Operation Portal Floating IP Management floating IP address used to log in to the operation portal. It must be in the same network segment as
Address the node IP address and has not been used. This parameter is valid only when the operation portal is to be deployed.

Operation Portal Global Load This parameter is used to configure global load balancing. It must be in the same network segment as the IP address
Balancing IP Address of the operation portal node and has not been used. This parameter is valid only when the operation portal is to be
deployed.

Operation Portal Management Password for logging in to the operation portal as user bss_admin. This parameter is valid only when the operation
Password portal is to be deployed.

Operation Portal SDN Scenario HARD SDN


NO SDN
This parameter is valid only when Deploy Operation Portal or not is set to Yes.

Deploy DCS Auto Scaling Service No


Yes
This parameter is valid only when Deploy Operation Portal or not is set to Yes.
The AS service parameters take effect only when Deploy DCS Auto Scaling Service is set to Yes.

Auto Scaling Service Node 1 Host The host naming rules are as follows:
Name
The value contains 2 to 32 characters.
The value contains only uppercase or lowercase letters (A to Z or a to z), digits, and hyphens (-), and cannot contain
two consecutive hyphens (--). The value must start with a letter and cannot end with a hyphen (-).
The name cannot be localhost or localhost.localdomain.


Auto Scaling Service Node 1 IP IP address of AS node 1.


Address

Auto Scaling Service Node 1 root Password of user root for logging in to AS node 1.
Password

CNA name of Auto Scaling Service Name of the CNA to which AS node 1 belongs.
node 1

Disk space of Auto Scaling Service Recommended disk space ≥ 555 GB (system disk space ≥ 55 GB; data disk space ≥ 500 GB)
node 1

Auto Scaling Service Node 2 Host For details, see the parameter description of Auto Scaling Service Node 1 Host Name.
Name

Auto Scaling Service Node 2 IP IP address of AS node 2.


Address

Auto Scaling Service Node 2 root Password of user root for logging in to AS node 2.
Password

CNA name of Auto Scaling Service Name of the CNA to which AS node 2 belongs.
node 2

Disk space of Auto Scaling Service Recommended disk space ≥ 555 GB (system disk space ≥ 55 GB; data disk space ≥ 500 GB)
node 2

Deploy Elastic Container Engine No


Service
Yes
This parameter is valid only when Operation Portal SDN Scenario is set to HARD SDN.
The Elastic Container Engine (ECE) parameters are valid only when Deploy Elastic Container Engine Service is
set to Yes.

Elastic Container Engine Node 1 The host naming rules are as follows:
Host Name
The value contains 2 to 32 characters.
The value contains only uppercase or lowercase letters (A to Z or a to z), digits, and hyphens (-), and cannot contain
two consecutive hyphens (--). The value must start with a letter and cannot end with a hyphen (-).
The name cannot be localhost or localhost.localdomain.

Elastic Container Engine Node 1 IP IP address of ECE node 1.


Address NOTE:
If Automatic VM creation is set to Yes, enter an IP address that is not in use.
If Automatic VM creation is set to No, enter the IP address of the node where the OS has been deployed.

Elastic Container Engine Node 1 Password of user root for logging in to ECE node 1.
root Password

Elastic Container Engine Node 1 Public service domain IP address of ECE node 1.
Public Service Domain IP Address NOTE:
If Automatic VM creation is set to Yes, enter an IP address that is not in use.
If Automatic VM creation is set to No, enter the IP address of the node where the OS has been deployed.

CNA name of Elastic Container Name of the CNA to which ECE node 1 belongs.
Engine Service node 1

Disk space of Elastic Container Recommended disk space ≥ 2,955 GB (system disk space ≥ 55 GB; data disk space ≥ 2,900 GB)
Engine Service node 1

Elastic Container Engine Node 2 For details, see the parameter description of Elastic Container Engine Node 1 Host Name.
Host Name

Elastic Container Engine Node 2 IP For details, see the parameter description of Elastic Container Engine Node 1 IP Address.
Address

Elastic Container Engine Node 2 Password of user root for logging in to ECE node 2.
root Password

Elastic Container Engine Node 2 For details, see the parameter description of Elastic Container Engine Node 1 Public Service Domain IP
Public Service Domain IP Address Address.


CNA name of Elastic Container Name of the CNA to which ECE node 2 belongs.
Engine Service node 2

Disk space of Elastic Container Recommended disk space ≥ 2,955 GB (system disk space ≥ 55 GB; data disk space ≥ 2,900 GB)
Engine Service node 2

Elastic Container Engine Floating Floating IP address used for the ECE service. It must be an idle IP address in the same network segment as the IP
IP Address address of the ECE node.

Elastic Container Engine Public Floating IP address used for the communication between the K8s cluster and the ECE node. It must be an idle IP
Service Domain Floating IP address in the same network segment as the public service domain IP address of the ECE node.
Address

Elastic Container Engine Global IP address used to configure load balancing for the ECE service. It must be an idle IP address in the same network
Load Balancing IP Address segment as the IP address of the ECE node.

Subnet mask of the public service Subnet mask of the public service domain of the ECE service.
domain of the Elastic Container
Engine Service

Port group of the Elastic Container Port group of the public service domain of the ECE service.
Engine Service in the public service NOTE:
domain
If a port group has been created on FusionCompute, set this parameter to the name of the created port group.
If no port group has been created on FusionCompute, set this parameter to the name of the port group planned for
FusionCompute.

IP Address Gateway of Elastic Set the IP address gateway of the ECE public service domain.
Container Engine public Service
Domain

Elastic Container Engine Public Set the BMS and VIP subnet segments for the ECE public service network.
Service Network-BMS&VIP
Subnet Segment

IP Address Segment of Elastic Set the IP address segment of the ECE public service network client.
Container Engine Public Service
Network Client

Manage FusionCompute or not Whether to enable eDME to take over FusionCompute.


No
Yes
NOTE:

eDME can manage FusionCompute only when both FusionCompute and eDME are deployed.

Interface Username Interface username. This parameter is valid only when Manage FusionCompute or not is set to Yes.

Interface Account Password Password of the interface account. This parameter is valid only when Manage FusionCompute or not is set to
Yes.

SNMP Security Username SNMP security username. This parameter is valid only when Manage FusionCompute or not is set to Yes.

SNMP Encryption Password SNMP encryption password. This parameter is valid only when Manage FusionCompute or not is set to Yes.

SNMP Authentication Password SNMP authentication password. This parameter is valid only when Manage FusionCompute or not is set to Yes.

Management Floating IP address Floating IP address of the management plane.


IP address used to access the management and O&M portals. It must be unused and in the same network segment
as the node IP addresses.

Southbound Floating IP Address Southbound floating IP address.


The southbound floating IP address is used by third-party systems to report alarms.

Network Port Network port name.


For the x86 architecture, the default value is enp4s1.
For the Arm architecture, the default value is enp4s0.

Whether to install eDataInsight Whether to deploy the DCS eDataInsight management plane. If yes, prepare the product software package of the
Manager DCS eDataInsight management plane in advance.
Yes

No

Initial admin Password of Initial password of user admin on the management portal.
Management Portal NOTE:

After setting the password, click Downwards. The system automatically pastes the password to the following passwords
(from Initial admin Password of Management Portal to sftpuser Password). The rules of each password are different.
If the verification fails after the password is copied downwards, you need to change the password separately.

Initial admin Password of O&M Initial password of user admin on the O&M portal.
Portal

sopuser Password Password of user sopuser. The sopuser account is used for routine O&M.

ossadm Password Password of user ossadm. The ossadm account is used to install and manage the system.

ossuser Password Password of user ossuser. The ossuser account is used to install and run the product software.

Database sys Password Database sys password. The database sys account is used to manage and maintain the Zenith database and has the
highest operation rights on the database.

Database dbuser Password Database dbuser password.

rts Password Password of user rts. The rts account is used for authentication between processes and RabbitMQ during process
communication.

KMC Protection Password KMC protection password. KMC is a key management component.

ER Certificate Password ER certificate password. The ER certificate is used to authenticate the management or O&M portal when you
access the portal on a browser.

Elasticsearch Password Elasticsearch password, which is used for Elasticsearch authentication.

ETCD Password ETCD password, which is used for ETCD authentication.

ETCD root Password ETCD root password, which is used for ETCD root user authentication.

sftpuser Password Set the password of user sftpuser.

Whether to install object storage Whether to deploy the object storage service.
service
No
Yes-PoE Authentication
Yes-IAM Authentication

Whether to install application Whether to deploy the application backup service during operation portal deployment.
backup service

If you use Export Parameters to export an XLSX file, you can operate or view the file only in Office 2007 or later version.

9. Click Next. On the displayed Confirm Parameter Settings page, check the configuration information. If the information is correct, click
Deploy Now.

10. Go to the Pre-deployment Check page and check whether each check item passes the check. If any check item fails, perform operations as prompted to meet the check standard. If all check items pass, click Execute Task.

11. Go to the Execute Deployment page. Check the status of each execution item of eDME. After all items are successfully executed, click
Finish.

If you use Export Report to export an XLSX file, you can operate or view the file only in Office 2007 or later version.

12. After eDME is installed, click View Portal Link on the Perform Deployment page to view the eDME address. Click it to go to the login
page of the O&M portal. You can log in to the O&M portal and operation portal to check whether eDME is successfully installed. For
details, see Post-installation Check .


3.5.6.4 (Optional) Configuring Data Disk Partitions Using Commands (EulerOS)


After the OS is installed, if partitions are insufficient for installing eDME and partitions /opt and /opt/log do not exist on the disk, configure them by
referring to this section.

Procedure
1. Log in to the eDME node as user root using SSH.

2. Run the bash parting_data_disk.sh command and set the sizes of /opt/log and /opt as prompted, as shown in Figure 1.

The capacity of /opt/log is 50 GiB.


After the configuration is complete, you can run the df -h command to view the current system disk usage.

Figure 1 Disk partitioning
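After the script finishes, you can confirm that the new mount points exist and have the sizes you entered. This is a minimal sketch that only displays the current layout and changes nothing:

# df -h /opt /opt/log
# lsblk

If /opt or /opt/log is missing or its size is not as expected, review the values entered at the prompts before installing eDME.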

3.5.6.5 Post-installation Check


Checking the O&M Portal After Installation

Checking the Operation Portal After Installation (Multi-Tenant Services)

3.5.6.5.1 Checking the O&M Portal After Installation


After eDME is installed, log in to the O&M portal to check whether the installation is successful.

Context
The default session timeout duration of the eDME O&M portal is 30 minutes. If you do not perform any operations within this timeout duration, you
will be logged out automatically.

Prerequisites
You have installed the required version of the browser.

You have obtained the address for accessing the eDME O&M portal.

You have obtained the login user password if you log in using password authentication.


You have set the required resolution on the local PC.

User Login Procedure


1. On the maintenance terminal, enter https://Node IP address:31943 (for example, https://192.168.125.10:31943) in the address box of a
browser and press Enter.

Click Advanced in the security warning dialog box that is displayed when no security certificate is installed. Select Proceed to Management floating
IP address of eDME (unsafe).
If multi-node clusters are deployed, use the management floating IP address to log in to eDME.
In the multi-node cluster deployment scenario, automatic active/standby switchover is supported. After the active/standby switchover, it takes about 10
minutes to start all services on the new active node. During this period, the O&M portal can be accessed, but some operations may fail. Wait until the
services are restarted and try again.

2. Select the password authentication mode for login.

Password authentication

a. Enter the username and password. The default username is admin, and the initial password is configured during installation of eDME.

b. Click Log In.

c. If you fail to log in to the O&M portal, check the following causes:

If the username or password is incorrect in the first login, you are required to enter a verification code in the second login.

If the system displays a message indicating that the login password must be changed upon the first login or be reset,
change the password as instructed.

If you forget your login password, you can use the email address or mobile number you specified to retrieve the
password.

3. Click Enter System.

After the installation succeeds, eDME functions become available about 10 minutes later.
After eDME is successfully installed, the license is in a 90-day grace period. To better use this product, contact technical support engineers to apply for a license as soon as possible.
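If the login page cannot be opened at all, a quick reachability check from the maintenance terminal helps distinguish a network problem from a browser or certificate issue. A minimal sketch, assuming the curl tool is available and using the example address from step 1:

# Check whether the O&M portal port responds (-k skips certificate verification, -I requests headers only).
curl -kI https://192.168.125.10:31943
# If the connection is refused or times out, verify the network path to the node.
ping -c 3 192.168.125.10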

Post-login Check

On the navigation bar, hover the mouse pointer over the alarm icon. The latest alarms are displayed. Click View All Alarms to go to the Alarms page.
If there is only one alarm indicating that the license is invalid, eDME is running properly.

3.5.6.5.2 Checking the Operation Portal After Installation (Multi-Tenant Services)

After eDME is installed, log in to the operation portal to check whether the installation is successful.

Context
The default session timeout duration of the eDME operation portal is 30 minutes. If you do not perform any operations within this timeout duration,
you will be logged out automatically.

Prerequisites
You have installed the required version of the browser.

You have obtained the address for accessing the eDME operation portal.

You have obtained the login user password if you log in using password authentication. The default username of the operation administrator is
bss_admin.

You have set the required resolution on the local PC.

User Login Procedure


The following uses Google Chrome as an example.

1. Open Google Chrome.

2. Enter the login address in the address bar and press Enter.
Login address: https://IP address of the eDME operation portal, for example, https://10.10.10.10

3. Click Advanced in the security warning dialog box that is displayed when no security certificate is installed. Select Proceed to Floating
management IP address of eDME (unsafe).

4. Select the password authentication mode for login.

Password authentication

a. Enter the username and password.

b. Click Log In to log in to the eDME operation portal.

If the username or password is incorrect in the first login, you are required to enter a verification code in the second login.

If the system displays a message indicating that the login password must be changed upon the first login or be reset,
change the password as instructed.

The operation portal does not support password retrieval. Keep your password secure.

Post-login Check
Log in to the eDME operation portal as user bss_admin. If the login is successful and the page is displayed properly, eDME is running properly.

3.5.6.6 Initial Configuration


After eDME is installed, perform initial configuration to ensure that eDME functions properly.
(Optional) Configuring the NTP Service

(Optional) Loading a License File

(Optional) Configuring SSO for FusionCompute (Applicable to Virtualization Scenarios)

(Optional) Adding Static Routes

3.5.6.6.1 (Optional) Configuring the NTP Service


This operation enables you to configure the Network Time Protocol (NTP) service for eDME and ensure that the time of eDME is the same as that
of managed resources such as storage devices and VRMs.

If no NTP server is configured, the time of eDME may differ from that of managed resources and eDME may fail to obtain the performance data of the managed
resources. You are advised to configure the NTP service.

Context

NTP is a protocol that synchronizes the time of a computer system to Coordinated Universal Time (UTC). Servers that support NTP are called NTP
servers.

Precautions
Before configuring the NTP server, check the time difference between eDME and the NTP server. The time difference between the NTP server and eDME cannot exceed 24 hours, and the current NTP server time cannot be earlier than the eDME installation time.
For example, if the current NTP server system time is 2021-04-05 16:01:49 UTC+08:00 and eDME was installed at 2021-04-06 16:30:20 UTC+08:00, the NTP server time is earlier than the installation time, which does not meet the requirement.
To check the system time of the eDME node, perform the following steps:

1. Use PuTTY to log in to the eDME node as user sopuser using the static IP address of the node.
The initial password of user sopuser is configured during eDME installation.

2. Run the sudo su ossadm command to switch to user ossadm.


The initial password of user ossadm is configured during eDME installation.

3. Run date to check whether the system time is consistent with the actual time.
If the system time of eDME is later than the NTP server time, after you configure the NTP server and time synchronization is complete, restart the service by running the following command:
cd /opt/oss/manager/agent/bin && . engr_profile.sh && export mostart=true && ipmc_adm -cmd startapp
If the system time of eDME is earlier than the NTP server time, you do not need to run this command.
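Before applying the NTP configuration, you can also roughly compare the node time with the planned clock source from the command line. A minimal sketch, assuming the ntpdate client is installed on the eDME node and 192.0.2.10 is a placeholder NTP server address:

# Show the current system time of the eDME node, including the time zone.
date
# Query the NTP server without changing the local clock (-q is query-only) and show the offset.
ntpdate -q 192.0.2.10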

Procedure
1. Visit https://Login IP address of the management portal:31945 and press Enter.

For eDME multi-node deployment (with or without two nodes of the operation portal), use the management floating IP address to log in.

2. Enter the username and password to log in to the eDME management portal.
The default username is admin, and the initial password is configured during installation of eDME.

3. Choose Maintenance > Time Management > Configure NTP.


The Configure NTP page is displayed.

4. Click Add and configure NTP information, as shown in Table 1.

Table 1 Parameters

Parameter: NTP Server IP Address
Description: IP address of the NTP server that functions as the clock source.
Value Range: IPv4 address

Parameter: Encryption Mode
Description: Encryption mode of the NTP server.
Value Range: NTP v4 Authentication; NTP v4

Parameter: Calculation Digest
Description: Digest type of the NTP server.
NOTICE: The MD5 encryption algorithm has security risks. You are advised to use SHA256, which is more secure.
Value Range: MD5; SHA256

Parameter: Key Index
Description: Used to quickly search for the key value and digest type during the communication authentication with the NTP server. The value must be the same as Key Index configured on the NTP server.
NOTE: This parameter is mandatory when Encryption Mode is set to NTP v4 Authentication.
Value Range: An integer ranging from 1 to 65,534, excluding 10,000.

Parameter: Key
Description: NTP authentication character string, which is an important part for generating a digest during the communication authentication with the NTP server. The value must be the same as Key set on the NTP server.
NOTE: This parameter is mandatory when Encryption Mode is set to NTP v4 Authentication.
Value Range: A string of a maximum of 30 ASCII characters. Spaces and number signs (#) are not supported.

Parameter: Role
Description: Active or standby status of the NTP server.
Value Range: Active; Standby

Parameter: Operation
Description: Operation that can be performed on the configured NTP server.
Value Range: Verify; Delete

5. Click Apply.

6. In the alert dialog box that is displayed, click OK.

7. For example, for storage devices, log in to the storage device management page and set the device time to be the same as that in eDME.

You can set the time in either of the following ways:

This section uses OceanStor Dorado 6.x devices as an example. The operations for setting the time vary according to the device model. For details, see the
online help of the storage device.

Set automatic NTP synchronization.

a. Choose Settings > Basic Information > Device Time.

b. Enable NTP Synchronization.

i. In NTP Server Address, enter the IPv4 address or domain name of the NTP server.

The value must be the same as that of <NTP server address> in 4.

ii. (Optional) Click Test.

iii. (Optional) Select Enable next to NTP Authentication. Import the NTP CA certificate to CA Certificate.

NTP authentication can be enabled only when NTPv4 or later is used. It authenticates the NTP server before the time is automatically synchronized to the storage device.

c. Click Save and confirm your operation as prompted.

Synchronize the time manually.

a. Choose Settings > Basic Information > Device Time.

b. Click the modify icon next to Device Time to change the device time to be the same as the time of eDME.

If you set the time manually, there may be time difference. Ensure that the time difference is less than 1 minute.

3.5.6.6.2 (Optional) Loading a License File


For details about how to load the license file, see eDME License Application Guide.

3.5.6.6.3 (Optional) Configuring SSO for FusionCompute (Applicable to Virtualization Scenarios)

This operation enables you to configure single sign-on (SSO) to log in to FusionCompute from eDME without entering a password.

After the SSO configuration is complete, if eDME is faulty, you may fail to log in to the connected FusionCompute system. For details, see "Failed to Log In to
FusionCompute Due to the Fault" in eDME Product Documentation.
During the SSO configuration, you must ensure that no virtualization resource-related task is running on eDME, such as creating a VM or datastore. Otherwise,
such tasks may fail.

Prerequisites
FusionCompute has been installed.

You have logged in to eDME.

FusionCompute uses the common rights management mode.

Procedure
1. Log in to the O&M portal as the admin user. The O&M portal address is https://IP address for logging in to the O&M portal:31943.

In multi-node deployment, the IP address for logging in to the O&M portal is the floating management IP address.
The default password of user admin is the password set during eDME installation.

2. In the navigation pane on the left of the eDME O&M portal, choose Settings > Security Management > Authentication.

3. In the left navigation pane, choose SSO Configuration > CAS SSO Configuration.

4. On the SSO Servers tab page, click Create.

5. Select IPv4 address or IPv6 address for IP Address Type.

FusionCompute supports IPv4 and IPv6 addresses.

6. In the text box of IPv4 address or IPv6 address, enter the IP address for logging in to the FusionCompute web client.

7. Click OK.

8. Log in to FusionCompute to be interconnected.

Login addresses:

IPv4: https://IP address for logging in to the FusionCompute web client:8443

IPv6: https://IP address for logging in to the FusionCompute web client:8443

Username and password: Obtain them from the administrator.

9. In the navigation bar on the left of the FusionCompute home page, click the System icon to enter the System page.

10. Choose System > Connect To > Cloud Management.

11. (Optional) Upon the first configuration, click the switch on the right of Interconnected Cloud Management to enable the cloud management settings.

12. Select ManageOne/eDME Maintenance Portal for Interconnected System.

13. Enter the login IP address of the eDME O&M portal in the System IP Address text box.

In multi-node deployment, the IP address for logging in to the O&M portal is the floating management IP address.

14. Click Save.

15. Click OK.

After the operation is complete, the system is interrupted for about 2 minutes. After the login mode is switched, you need to log out of the system and log in to
the system again.
If any fault occurs on the O&M portal of ManageOne or eDME after SSO is configured, the login to FusionCompute may fail. In this case, you need to log in to
the active VRM node to cancel SSO.
Run the following command on the active VRM to cancel SSO:
python /opt/galax/root/vrm/tomcat/script/omsconfig/bin/sm/changesso/changesso.py -m ge

3.5.6.6.4 (Optional) Adding Static Routes


By default, the eDME server component Docker uses network segments: 172.17.0.0/16, 172.18.0.0/16, and 172.19.0.0/16. To take over devices in
the 172.17.xxx.xxx/xx network segment, you need to add static routes.

Procedure
1. Use PuTTY to log in to the eDME node as user sopuser (which is set during deployment) using the management IP address.

2. Run the su - root command to switch to user root.

3. Run the vi /etc/sysconfig/static-routes command, press i to enter the insert mode, and add static routes to the configuration file. After the
addition, press Esc and enter :wq to save the configuration and exit. Enter :q! to forcibly exit without saving any changes.
Enter the network segment to be accessed and the next-hop address based on the site requirements. If multiple network segments need
to be taken over, configure multiple network segments based on site requirements.
Example

172.17.0.0/24 is the network segment to be accessed, and 192.168.1.1 is the next-hop gateway address.

any net 172.17.0.0/24 gw 192.168.1.1

In the alternative format, 172.17.0.0 indicates the network segment to be accessed, 255.255.255.0 indicates the subnet mask of the network segment, and 192.168.1.1 indicates the next-hop gateway address.

any net 172.17.0.0 netmask 255.255.255.0 gw 192.168.1.1

Run the following command on the node to query the default gateway (next-hop gateway address):
ip route show default
The following information is displayed, where 192.168.1.1 is the next-hop gateway address:

default via 192.168.1.1 dev ens0

4. Run the following command to restart the network service:


service network restart
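After the network service restarts, you can confirm that the new static route is active. A minimal sketch, reusing the example segment and gateway above (the device address 172.17.0.10 is hypothetical):

# Check that the route to the managed segment appears in the routing table.
ip route show | grep 172.17.0.0
# Optionally test reachability of a device in that segment.
ping -c 3 172.17.0.10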

3.5.6.7 Software Uninstallation


Uninstall eDME if it is no longer needed or fails to be installed.

Prerequisites
The node where eDME is installed is running properly.

You have obtained the password of user root for logging in to the node where eDME is to be uninstalled from the administrator.

Precautions
Ensure that the eDME-residing device is not powered off or restarted when eDME is being uninstalled. Otherwise, exceptions may occur.

Procedure

1. Use PuTTY to log in to the eDME node as user sopuser (which is set during deployment) using the management IP address.

2. Run the su - root command to switch to user root.

3. Run the cd /opt/dme/tools command to go to the directory where the uninstallation script resides.

4. Run the following command to uninstall the software:

sh uninstall.sh Southbound floating IP address set during eDME installation

If eDME is deployed in multi-node cluster mode (with or without two nodes of the operation portal), perform 1 to 6 on each node to uninstall eDME.

5. Enter y as prompted.

If the system displays the message "uninstalled eDME successfully", eDME has been uninstalled successfully.

If a message is displayed indicating that the component uninstallation failed, go to /var/log/dme_data and open the uninstall.log file to check the failure cause (a log-check sketch follows this procedure). Rectify the fault and uninstall eDME again. If eDME still fails to be uninstalled, contact Huawei technical support engineers.

6. Run the exit command to close PuTTY and log out.
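If the uninstallation fails, the following minimal sketch (the keyword filter is hypothetical; adjust it as needed) pulls the most relevant lines from the uninstall.log file mentioned above:

# Show the last lines of the uninstallation log.
tail -n 50 /var/log/dme_data/uninstall.log
# Filter for likely failure indicators and show the most recent matches.
grep -inE "error|fail" /var/log/dme_data/uninstall.log | tail -n 20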

3.6 (Optional) Installing DR and Backup Software


Disaster Recovery (DR)

Backup

3.6.1 Disaster Recovery (DR)


Local HA

Metropolitan HA

Active-Standby DR

Geo-Redundant 3DC DR

3.6.1.1 Local HA
Local HA for Flash Storage

Local HA for Scale-Out Storage

Local HA for eVol Storage

3.6.1.1.1 Local HA for Flash Storage


Installing and Configuring the DR System

DR Commissioning

3.6.1.1.1.1 Installing and Configuring the DR System


Installation and Configuration Process

Preparing for Installation

Configuring Switches

Configuring Storage

Installing FusionCompute

Creating DR VMs

Configuring HA and Resource Scheduling Policies for a DR Cluster

3.6.1.1.1.1.1 Installation and Configuration Process


Figure 1 shows the process for installing and configuring the DR system.

Figure 1 Installation and configuration process

3.6.1.1.1.1.2 Preparing for Installation


Note the following requirements for installing the DR system.

Installation Requirements
Table 1 lists the installation requirements for the DR system.

Table 1 Installation requirements

Object: Local PC
Description: The PC that is used for the installation.
Requirement: The local PC only needs to meet the requirements for installing FusionCompute; there is no special requirement for it.
Remarks: For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.

Object: Server
Description: The server that functions as a host (CNA node) on FusionCompute.
Requirement: The server must meet the following requirements:
- Meets the host requirements for installing FusionCompute.
- Supports the FC HBA port and can communicate with the FC switches.
NOTE: If it is a blade server, such as the E6000 or E9000 server, the blades must be able to connect to the FC switch modules using dedicated FC network ports.

Object: Flash storage
Description: Products used for storage management and storage DR.
Requirement: The system must meet the following requirements for installing flash storage:
- The environment is satisfactory for installing flash storage.
- The quorum server is deployed in a third place.
- The one-way network transmission delay between the quorum server and the production site or the DR site is less than or equal to 10 ms, which is suitable for a bandwidth of 1 Mbit/s.

Object: Access switch
Description: Access switches of the storage, management, and service planes.
Requirement: There are no special requirements for the Ethernet access switches on the management and service planes. The access switches on the storage plane must meet the following requirements:
- FC switches are recommended. Ethernet switches can also be used.
- FC switches or Ethernet switches must be compatible with hosts and flash storage.
Remarks: None

Object: Aggregation switch
Description: Ethernet aggregation switches and FC aggregation switches at the production and DR sites.
Requirement: The Ethernet aggregation switches must support VRRP.
Remarks: None

Object: Core switch, Firewall
Description: Core switches and firewalls at the production and DR sites.
Requirement: No special requirement.
Remarks: None

Object: Network
Description: Network between the production site and the DR site.
Requirement: The network must meet the following requirements:
- The flash storage heartbeat plane uses a large Layer 2 network.
- In the large Layer 2 network, the RTT between any two sites is less than or equal to 1 ms.
- The quorum plane of flash storage must be connected using a Layer 3 virtual private network (L3VPN).
Remarks: None

Documents
Table 2 lists the documents required for deploying the DR solution.

Table 2 Documents

Type: Integration design document
Name: Network integration design
Description: Describes the deployment plan, networking plan, and the bandwidth plan.
How to Obtain: Obtain this document from the engineering supervisor.

Type: Integration design document
Name: Data planning template for the network integration design
Description: Provides the network data plan result, such as the IP plan of nodes, storage plan, and plan of VLANs, zones, gateways, and routes.
How to Obtain: Obtain this document from the engineering supervisor.

Type: Version document
Name: Datacenter Virtualization Solution 2.1.0 Version Mapping
Description: Provides information about hardware and software version mapping.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Type: FusionCompute product documentation
Name: FusionCompute Product Documentation
Description: Provides guidance on installation, initial configuration, and commissioning of FusionCompute.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Type: Flash storage product documentation
Name: OceanStor Series Product Documentation
Description: Includes storage installation, configuration, and commissioning as well as the HyperMetro feature guide.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.
NOTE: For detailed version information, see Constraints and Limitations.

Type: Switch product documentation
Name: Switch document package
Description: Provides information about how to configure the switches by running commands.
How to Obtain: This document package is provided by the switch vendor.

Type: Server product documentation
Name: Server document package
Description: Provides information about how to configure the servers.
How to Obtain: This document package is provided by the server vendor.

After obtaining required documents by referring to Datacenter Virtualization Solution 2.1.0 Version Mapping, make preparations for the installation, such as obtaining
the software packages and installation tools. The details are not described in this document.

Preparing Software Packages and Licenses


The DR solution has no special requirements for the software packages. Obtain the software packages and license files for the following products
based on Datacenter Virtualization Solution 2.1.0 Version Mapping and Constraints and Limitations :

FusionCompute

Flash storage

3.6.1.1.1.1.3 Configuring Switches

Scenarios
In the local HA for flash storage scenario, the switch configuration is the same as that in the normal deployment scenario. This section describes
only the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure switches based on the network device documentation and the data plan.

Procedure
Configure Ethernet access switches.

1. Configure the Ethernet access switches based on the data plan and the Ethernet access switch documents.
The system has no special configuration requirements for the Ethernet access switches.

Configure FC access switches.

2. Configure the FC access switches based on the data plan and the FC access switch documents.
The FC aggregation switches deployed at two sites must be connected to each other using optical fibers. The zones and cascading must be
configured. There are no other special requirements.

Configure aggregation switches.

3. Configure the Ethernet aggregation switches and FC aggregation switches based on the data plan and the aggregation switch documents.
Note the following configurations for the Ethernet aggregation switches:

Except for the active-active quorum channel, the VLANs of the other planes at one site must also be configured at the other site.

When configuring the VLANs of a site at another site, configure VRRP for the active and standby gateways on the Ethernet aggregation
switches based on the VLANs. For a VLAN, the gateway at the site where VM services are deployed is configured as the active
gateway, and the gateway at the other site is configured as the standby gateway.

The Layer 2 interconnection between the Ethernet aggregation switches and the core switches needs to be configured.

The Layer 3 interconnection (implemented by using the VLANIF interface) between the Ethernet aggregation and core switches needs
to be configured for accessing services from external networks.

Configure Ethernet core switches.

4. Configure the Ethernet core switches based on the data plan and the Ethernet core switch documents.
Note the following configurations for the Ethernet core switches:

Configure the Layer 2 interconnection between a local core switch and the peer core switch on the local core switch. Then, bind the
multiple links between the two sites to a trunk to prevent loops.

Advertise exact routes (by VLAN) on the core switch at the active gateway side, and advertise non-exact routes on the core switch at the standby gateway side. The route precision is controlled by the subnet mask.

If a firewall is deployed and Network Address Translation (NAT) needs to be configured for the firewall, advertise the external routes on the firewall instead of on the core switch. Advertise exact routes on the firewall at the production site, and non-exact routes on the firewall at the DR site.
The firewall configurations, such as ACL and NAT, must be manually set to be the same at both sites.

3.6.1.1.1.1.4 Configuring Storage

Scenarios
This task describes how to configure OceanStor V5/Dorado series storage in the flash storage HA scenario.

Procedure
For flash storage, see "HyperMetro Feature Guide for Block" in OceanStor Product Documentation for the desired model.

When configuring multipathing policies, follow instructions in OceanStor Dorado and OceanStor 6.x and V700R001 DM-Multipath Configuration Guide for
FusionCompute.

3.6.1.1.1.1.5 Installing FusionCompute

Scenarios
This section describes how to install FusionCompute in the HA solution for flash storage. The FusionCompute installation method depends on
whether the large Layer 2 network on the management or service plane is connected.

If the large Layer 2 network is connected on the management and service planes, install FusionCompute by following the normal procedure,
and deploy the standby VRM node at the DR site.

If the large Layer 2 network is not connected on the management or service plane, install the host by following the normal procedure. Note the
requirements for installing VRM: Deploy both the active and standby VRM nodes at the production site first. After the large Layer 2 network is
connected, deploy the standby VRM node at the DR site.

Prerequisites
Conditions

You have made preparations for the FusionCompute installation, including configuring servers, storage devices, and the network, and obtaining
the required data, software packages, license files, documents, and tools.

The FusionCompute installation plan meets the deployment requirements described in Deployment Principles .

Data
You have obtained the password of the VRM database.

Process
Figure 1 shows the process for installing FusionCompute.

Figure 1 FusionCompute installation process

Procedure
For details about the installation and initial configuration methods of the FusionCompute components, see Installation Using SmartKit .
Install hosts.

1. Install hosts at the production site and the DR site.

Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is connected.

2. Check whether the large Layer 2 network is connected.

If yes, go to 3.

If no, go to 5.

3. Install the active and standby VRM nodes.


Install the active and standby VRM nodes by following the normal procedure, and deploy the standby VRM node at the DR site based on the
data plan.

4. Perform the initial configuration of FusionCompute.


The initial configuration includes loading the license file, configuring the NTP clock source and the time zone, configuring the backup server,
creating clusters, adding hosts, adding storage devices to hosts, and adding network resources to VMs.
Note the following configuration requirements:

Add the DR hosts at the production site and the DR site to the planned DR cluster, which includes the management cluster.

Enable the HA and DRS functions in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and
Policy Delay to 3 to 5 minutes (configure it based on the environment requirements). Set Migration Threshold of the DRS to
Conservative. Otherwise, the DR policies cannot take effect using the DRS advanced rules.

If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.

Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.

Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.

Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.

After this step, the FusionCompute installation is complete.

Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is not connected.

5. Install the active and standby VRM nodes.


Install the active and standby VRM nodes by following the normal procedure.

6. Perform the initial configuration of FusionCompute.


The initial configuration includes loading the license file, configuring the NTP clock source and the time zone, configuring the backup server,
creating clusters, adding hosts, adding storage devices to hosts, and adding network resources to VMs.
Note the following configuration requirements:

Add the DR hosts at the production site to the planned DR cluster, which includes the management cluster.

Enable the HA and DRS functions in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and
Policy Delay to 3 to 5 minutes (configure it based on the environment requirements). Set Migration Threshold of the DRS to
Conservative. Otherwise, the DR policies cannot take effect using the DRS advanced rules.

If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.

Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.

Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.

Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.

Add hosts at the DR site to a DR cluster after the large Layer 2 network is connected.

7. Add hosts at the DR site to a DR cluster and configure the DR hosts.


If a datastore becomes read-only during this operation, rectify the fault by referring to "Datastore Is Read-Only Due to Removal of a HyperMetro LUN" in FusionCompute 8.8.0 Troubleshooting.
Note the following configuration requirements:

Add the DR hosts at the DR site to the planned DR cluster, which includes the management cluster.

Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.

Provide descriptions to indicate that the hosts and datastores are used for DR when adding DR hosts and creating datastores.

Ensure that the hosts planned to run the VRM VMs use the same distributed virtual switches (DVSs) as those used by hosts where VMs
of the original nodes are deployed.

After adding hosts at the DR site to the DR cluster, configure time synchronization on the node.
For details, see "Setting Time Synchronization on a Host" in FusionCompute 8.8.0 User Guide (Virtualization).

Enable the VM template deployment function on the standby VRM node at the production site.

8. On FusionCompute, view and make a note of the ID of the standby VRM VM.

Check whether the target standby VRM node is the default standby node. If it is not the default standby node, perform a switchover between the active and
standby nodes.

9. Use PuTTY to log in to the active VRM node.


Ensure that the management IP address and user gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

10. Run the following command to switch to user root:


su - root

11. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.

XXX is specified in seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

12. Run the following command on the active VRM node to enable the standby VRM VM to be cloned to a VM:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh i-00000002
Information similar to the following is displayed:
Please import database password:

13. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:

Open VM i-00000002 operating authority success.

Use the standby VRM template to deploy VMs at the DR site.

14. On FusionCompute, stop the standby VRM VM.


For details, see "Stopping a VM" in FusionCompute 8.8.0 User Guide (Virtualization). After the standby VRM VM is stopped, the system
generates the Failed Heartbeat Communication Between Active and Standby VRM Nodes alarm.

15. On FusionCompute, convert the standby VRM VM to a VM template.


For details, see "Creating a VM Template" in FusionCompute 8.8.0 User Guide (Virtualization). Select the plan for converting a VM to a
template.

16. On FusionCompute, deploy a VM using the VM template of the standby VRM VM.
For details, see "Deploying a VM Using an Existing Template in the System" in FusionCompute 8.8.0 User Guide (Virtualization).

In the Set Compute Resource area, select a host in the intra-city DR center and select Bind to the selected host. Use the virtualized local storage of the
selected host as the target storage. When configuring VM attributes, select Customize using the Customization Wizard and configure the NIC information
to ensure the NIC information is consistent with that of the standby VRM VM. You can delete the standby VRM VM template at the production site only after
the standby VRM node is deployed at the DR site and the active/standby relationship is restored.

Start the standby VRM VM at the DR site.

17. Use PuTTY to log in to the host where the standby VRM VM resides.
Ensure that the management IP address and user gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

18. Run the following command to switch to user root:


su - root

19. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.

XXX is specified in seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

20. Run the following command to add the standby VRM VM ID to the host configuration file:
echo "vm_id" > /etc/vna-api/vrminfo
In the command, vm_id indicates the standby VRM VM ID.

21. Start the standby VRM VM.


For details, see "Starting/Waking Up a VM" in FusionCompute 8.8.0 User Guide (Virtualization).

After the standby VRM VM is started at the DR site, the system automatically restores the active/standby relationship, and then you can delete the standby
VRM VM template at the production site.

Replace the license file.

22. Load the license file again.


Because the ESN of the standby VRM VM is changed, apply for a new license and load the new license file. For details, see Updating the
License File .

Disable the template deployment function of the standby VRM VM at the DR site after the standby VRM VM is deployed.

23. Use PuTTY to log in to the active VRM node.


Ensure that the management IP address and user gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

24. Run the following command to switch to user root:


su - root

25. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.

XXX is specified in seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

26. Run the following command on the active VRM VM to disable the template deployment function of the standby VRM VM:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh i-00000002
Information similar to the following is displayed:

Please import database password:

27. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:

Close VM operating authority success.

After this step, the FusionCompute installation is complete.

3.6.1.1.1.1.6 Creating DR VMs

Scenarios
After deploying the DR system, create DR service VMs by following the normal procedure. Then, the DR system automatically implements the DR
function on the DR VMs.

Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating DR VMs.

Procedure
Create DR service VMs in the DR cluster.
For details, see VM Provisioning .

3.6.1.1.1.1.7 Configuring HA and Resource Scheduling Policies for a DR Cluster

Scenarios
In the flash storage HA scenario, you need to configure and enable the HA and compute resource scheduling policies for the DR cluster. In this way, when HA is triggered for DR VMs, they are preferentially started on hosts at the local site, which prevents cross-site VM HA and startup during normal system operation. If all hosts at the local site are faulty, the VMs are automatically started on hosts at the DR site through the cluster resource scheduling function to provide HA.

Prerequisites
Conditions
The HA and compute resource scheduling functions have been enabled for the DR cluster.
Data
You have obtained the lists of the hosts and VMs that provide local services in the DR cluster.

Procedure
1. Log in to FusionCompute.

2. Configure HA policies for a DR cluster.


For details, see "Configuring the HA Policy for a Cluster" in FusionCompute 8.8.0 User Guide (Virtualization).
Note the following configuration requirements:

Host Fault Policy: Set it to HA.

Datastore Fault Policy

Datastore Fault Handling by Host: Set this parameter to HA.

Policy Delay: You are advised to set this parameter to 5 minutes (configure it based on the environment requirements).

If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting
the policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the
I/O Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei
technical support to confirm that services will not be affected and then disable the function.

Configure the group fault control policy. If Group Fault Control is enabled, you need to manually disable it.

3. Configure the compute resource scheduling policies for a DR cluster.


Note the following configuration requirements:

For details about how to configure compute resource scheduling policies, see "Configuring Compute Resource Scheduling Policies" in
FusionCompute 8.8.0 User Guide (Virtualization).

Automation Level: Set it to Automatic. In this case, the system automatically migrates VMs to achieve automatic service
DR.

Measure By: Set it to the object for determining the migration threshold. You are advised to set it to CPU and Memory.

Migration Threshold: The advanced rule takes effect if this parameter is set to Conservative for all time intervals.

Configure Host Group, VM Group, and Rule Group. For details, see "Configuring a Host Group for a Cluster", "Configuring a VM
Group for a Cluster", and "Configuring a Rule Group for a Cluster" in FusionCompute 8.8.0 User Guide (Virtualization).

Add hosts running at the local site to the host group of the local site.

Add VMs running at the local site to the VM group of the local site.

If a host in the host group at the local site is faulty, its VMs will be preferentially scheduled to other hosts in the host group based on the cluster HA policy. If the entire local site is faulty, the VMs will be scheduled to the host group at the other site based on the cluster HA policy.

When setting a local VM group or local host group rule, set Type to VMs to hosts and Rule to Should run on host group.

3.6.1.1.1.2 DR Commissioning
Commissioning Process

Commissioning DR Switchover

Commissioning DR Data Reprotection

Commissioning DR Switchback

3.6.1.1.1.2.1 Commissioning Process

Purpose

Verify that the DR site properly takes over services if the production site is faulty.

Check that the data of the DR site and production site is synchronized after the DR site takes over services.

Verify that services are properly switched back to the production site after it recovers.

Prerequisites
The flash storage local HA system has been deployed.

DR services can be successfully deployed.

You have configured the HA and cluster scheduling policies for a DR cluster.

You have checked that the VM to be migrated is not bound to a host and does not have a CD/DVD-ROM drive or Tools mounted.

Commissioning Process
Figure 1 shows the DR solution commissioning process.

Figure 1 Commissioning process

Procedure
Execute the following test cases:

Commissioning DR Switchover

Commissioning DR Data Reprotection

Commissioning DR Switchback

Expected Result
The result of each test case meets expectation.

3.6.1.1.1.2.2 Commissioning DR Switchover

Purpose
By powering off the DR hosts and storage devices at the production site, check whether DR can be automatically implemented on VMs, that is, check the availability of the flash storage local HA solution.

Constraints and Limitations


None

Prerequisites
The flash storage local HA system has been deployed.
You have checked that the VM to be migrated is not bound to a host and does not have a CD/DVD-ROM drive or Tools mounted.

Procedure
1. On FusionCompute, make a note of the status of the VMs at the production site.
Make a note of the number of running VMs at the production site.

2. Power off all the DR hosts and DR flash storage at the production site.

3. On FusionCompute, check the migration status of the running VMs.


If all the running VMs are migrated from the production site to the DR site and these VMs are in the running state, the system can implement
DR on VMs.

4. Select a VM and log in to it using VNC.


If the VNC login page is displayed, the VM is running properly.

5. Execute the following test cases and verify that all these cases can be executed successfully.

Create VMs.

Migrate VMs.

Stop VMs.

Restart VMs.

Start VMs.

Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.

Additional Information
None

3.6.1.1.1.2.3 Commissioning DR Data Reprotection

Purpose
After rectifying the faults at the production site, commission the data resynchronization function by powering on the flash storage devices at the
production site and powering off the flash storage devices at the DR site.

Constraints and Limitations


None

Prerequisites

You have executed the DR switchover test case.

Procedure
1. Randomly select a DR VM and save a test file on the VM.

2. Power on the flash storage devices at the production site.

3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices at the original
DR site.

4. Verify that the DR VM is in the running state.

5. Open the test file saved on the VM in 1 to check the file consistency.

Expected Result
The VM is running properly and the test file data is consistent.

Additional Information
None

3.6.1.1.1.2.4 Commissioning DR Switchback

Purpose
Commission the availability of the DR switchback function in the flash storage local HA scenario by powering on the DR hosts at the original
production site and enabling the compute resource scheduling function for the DR cluster.

Constraints and Limitations


None

Prerequisites
You have commissioned the DR data reprotection function.
You have checked that the VM to be migrated is not bound to a host and does not have a CD/DVD-ROM drive or Tools mounted.

Procedure
1. Power on the DR hosts at the production site.

2. On FusionCompute, enable the compute resource scheduling function.

3. Perform the switchover between the active and standby VRM nodes to change the VRM node at the production site to the active node.

4. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts at the production site.

Expected Result
VMs are running properly on the hosts at the production site and the services at the production site are running properly.

Additional Information
None

3.6.1.1.2 Local HA for Scale-Out Storage


Installing and Configuring the DR System

DR Commissioning

3.6.1.1.2.1 Installing and Configuring the DR System


Installation and Configuration Process

Preparing for Installation

Configuring Switches

Installing FusionCompute

Configuring Storage Devices

Configuring HA Policies for a DR Cluster

Creating DR VMs

Creating a Protected Group

3.6.1.1.2.1.1 Installation and Configuration Process


Figure 1 shows the process for installing and configuring the DR system.

Figure 1 Installation and Configuration Process

3.6.1.1.2.1.2 Preparing for Installation


Before the software installation, the following preparations are required.

Installation Requirements
Table 1 describes the installation requirements.

Table 1 Installation requirements

Object: Local PC
Description: PCs used for the installation.
Requirement: Local PCs only need to meet the requirements for installing FusionCompute; there is no special requirement for them.
Remarks: For details about the requirements of FusionCompute on local PCs, servers, and storage devices, see System Requirements.

Object: Server
Description: Server that functions as a host (CNA node) on FusionCompute.
Requirement: The server must meet the following requirements:
- Meets the host requirements for installing FusionCompute.
- Supports the FC HBA port and can communicate with the FC switches.
NOTE: In the x86 architecture, if it is a blade server, such as the E6000 or E9000 server, the blades must be able to connect to the FC switch modules using dedicated FC network ports.

Object: Storage device
Description: Storage devices used in the DR solution.
Requirement:
- Huawei block storage must be used and must meet the storage compatibility requirements of UltraVR.
- Independent GE/10GE/25GE network ports must be used for block storage HA. Each block storage provides at least two network ports for replication.

Object: Access switch
Description: Access switches of the storage, management, and service planes.
Requirement: Each access switch has sufficient network ports to connect to the block storage HA network ports.
Remarks: None

Object: Aggregation switch
Description: Aggregation switches in the production AZ and DR AZ.
Requirement: A route to the storage replication network must be configured for aggregation switches to route IP addresses.
Remarks: None

Object: Core switch
Description: Core switches in the production AZ and DR AZ.
Requirement: A route to the storage replication network must be configured for core switches to route IP addresses.
Remarks: None

Object: Firewall
Description: Firewalls in the production AZ and DR AZ.
Requirement: No special requirement.
Remarks: None

Object: Network environment
Description: Network environment between the production AZ and DR AZ.
Requirement: The network environment between the production and DR AZs must meet the following requirements:
- The management plane has a bandwidth of at least 10 Mbit/s.
- The bandwidth of the storage replication plane depends on the total amount of data changed in a replication period. The formula is as follows: Bandwidth (Mbit/s) = Number of VMs to be protected x Average amount of data changed per VM per replication period (MB) x 8 / (VM replication period (minutes) x 60). (A worked example follows this table.)
Remarks: None

Preparing Documents
Table 2 lists the documents required for deploying the DR solution.

Table 2 Preparing documents

Document Category: Integration design document
Document Name: Network integration design
Description: Describes the deployment plan, networking plan, and the bandwidth plan.
How to Obtain: Obtain this document from the engineering supervisor.

Document Category: Integration design document
Document Name: Data planning template for the network integration design
Description: Describes the network data plan, such as the node IP address plan and the storage plan.
How to Obtain: Obtain this document from the engineering supervisor.

Document Category: Version document
Document Name: Datacenter Virtualization Solution 2.1.0 Version Mapping
Description: Provides information about hardware and software version mapping.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Document Category: UltraVR product document
Document Name: UltraVR User Guide
Description: Provides guidance on how to install, configure, and commission UltraVR.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Document Category: OceanStor Pacific series product documentation
Document Name: OceanStor Pacific Series Product Documentation
Description: Provides guidance on how to install, configure, and commission the block storage devices.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Document Category: Server product documentation
Document Name: Server document package
Description: Provides information about how to configure the servers.
How to Obtain: This document package is provided by the server vendor.

Document Category: Switch product documentation
Document Name: Switch document package
Description: Provides information about how to configure the switches by running commands.
How to Obtain: This document package is provided by the switch vendor.

After obtaining required documents by referring to Datacenter Virtualization Solution 2.1.0 Version Mapping, make preparations for the installation, such as obtaining
the software packages and installation tools. The details are not described in this document.

Software Packages
The DR solution has no special requirements for the software packages. Obtain the software packages of UltraVR listed by referring to Datacenter
Virtualization Solution 2.1.0 Version Mapping.

FusionCompute

UltraVR

OceanStor Pacific series storage

3.6.1.1.2.1.3 Configuring Switches

Scenarios
In the local HA DR scenario for scale-out storage, the switch configuration is the same as that in the normal deployment scenario. This section only
describes the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure switches based on the network device documentation and the data plan.

Procedure
Configure Ethernet access switches.

1. Configure the Ethernet access switches based on the data plan and the Ethernet access switch documents.
The system has no special configuration requirements for the Ethernet access switches.

Configure aggregation switches.

2. Configure the Ethernet aggregation switches based on the data plan and the aggregation switch documents.
Note the following configurations for the Ethernet aggregation switches:

Except for the active-active quorum channel, the VLANs of the other planes in one AZ must also be configured in the other AZ.

When configuring the VLANs in another AZ, configure the Virtual Router Redundancy Protocol (VRRP) for the active and standby
gateways on the Ethernet aggregation switches based on the VLANs. For a VLAN, the gateway in the AZ where VM services are
deployed is configured as the active gateway, and the gateway in the other AZ is configured as the standby gateway.

Configure the Layer 2 interconnection between the Ethernet aggregation switches and core switches.

Configure the Layer 3 interconnection (implemented by using the VLANIF) between the Ethernet aggregation switches and core
switches for accessing services from external networks.

Configure Ethernet core switches.

3. Configure the Ethernet core switches based on the data plan and the Ethernet core switch documents.
Note the following configurations for the Ethernet core switches:

Configure the Layer 2 interconnection between the local core switch and the core switch in the peer AZ. Then, bind the multiple
Ethernet links between the AZs to a trunk to prevent loops.


Advertise exact routes (by VLAN) on the core switch working at the active gateway side, and non-exact routes (by VLAN) on the core
switch working at the standby gateway side. The route precision is controlled by the subnet mask.

If a firewall is deployed and NAT must be configured for the firewall, advertise the external routes on the firewall instead of the core switch. Advertise
exact routes on the firewall in the production AZ, and non-exact routes on the firewall in the DR AZ.
Manually set the firewall configurations such as the access control list (ACL) and NAT in the two AZs to be the same.

3.6.1.1.2.1.4 Installing FusionCompute

Scenarios
This section describes how to install FusionCompute in the local HA DR solution for scale-out storage. The FusionCompute installation method to
be used depends on whether the large Layer 2 network is connected on the management or service plane.

If the large Layer 2 network is connected, install FusionCompute by following the common procedure, and deploy the standby VRM node in
the DR AZ.

If the large Layer 2 network is not connected, install the hosts by following the normal procedure. The VRM installation differs from the normal procedure as follows: first deploy both the active and standby VRM nodes in the production AZ, and after the large Layer 2 network is connected, redeploy the standby VRM node in the DR AZ.

Prerequisites
Conditions

You have made preparations for the FusionCompute installation, including configuring servers, storage devices, and the network, and obtaining
the required data, software packages, license files, documents, and tools.

The FusionCompute installation plan meets the deployment requirements described in Deployment Principles .

Data
You have obtained the password of the VRM database.

Operation Process
Figure 1 shows the FusionCompute installation process in the local HA DR scenario of scale-out storage.

Figure 1 FusionCompute installation process


Procedure
For details about the installation and initial configuration methods of the FusionCompute components, see Installation Using SmartKit .
Install hosts.

1. Install hosts at the production site and the DR site.

Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is connected.

2. Check whether the large Layer 2 network is connected.

If yes, go to 3.

If no, go to 6.

3. Install the active and standby VRM nodes.


Install the active and standby VRM nodes by following the normal procedure, and deploy the standby VRM node at the DR site based on the
data plan.

4. Perform the initial configuration of FusionCompute.


The initial configuration includes loading the license file, configuring the NTP clock source and the time zone, configuring the backup server,
creating clusters, adding hosts, adding storage devices to hosts, and adding network resources to VMs.
Note the following configuration requirements:

Add the DR hosts at the production site and the DR site to the planned DR cluster, which includes the management cluster.

Enable HA in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and Policy Delay to 3 to 5
minutes (configure it based on the environment requirements).


If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.

When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.

Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.

Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.

5. Configure the hosts at the production site and DR site to preferentially use their own block storage resources.
For details, see "Configuration" > "Basic Service Configuration Guide for Block" > "Configuring Basic Services" in OceanStor Pacific Series
8.2.1 Product Documentation.
After this step, the FusionCompute installation is complete.

Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is not connected.

6. Install the active and standby VRM nodes.


Install the active and standby VRM nodes by following the normal procedure.

7. Perform the initial configuration of FusionCompute.


The initial configuration includes loading the license file, configuring the NTP clock source and the time zone, configuring the backup server,
creating clusters, adding hosts, adding storage devices to hosts, and adding network resources to VMs.
Note the following configuration requirements:

Add the DR hosts at the production site to the planned DR cluster, which includes the management cluster.

Enable HA in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and Policy Delay to 3 to 5
minutes (configure it based on the environment requirements).

If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.

When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.

Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.

Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.

Add hosts at the DR site to a DR cluster after the large Layer 2 network is connected.

8. Add hosts at the DR site to a DR cluster and configure the DR hosts.


For details, see "Adding Hosts" in FusionCompute 8.8.0 User Guide (Virtualization).
Note the following configuration requirements:

Add the DR hosts at the DR site to the planned DR cluster, which includes the management cluster.

When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.

Provide descriptions to indicate that the hosts and datastores are used for DR when adding DR hosts and creating datastores.

Ensure that the hosts planned to run the VRM VMs use the same DVSs as the hosts where VMs of the original nodes are deployed.


After adding hosts at the DR site to the DR cluster, configure time synchronization on the node.
For details, see "Setting Time Synchronization on a Host" in FusionCompute 8.8.0 User Guide (Virtualization).

9. Configure the hosts at the production site and DR site to preferentially use their own block storage resources.
For details, see "Configuration" > "Basic Service Configuration Guide for Block" > "Configuring Basic Services" in OceanStor Pacific Series
8.2.1 Product Documentation.
After this step, the FusionCompute installation is complete.

Enable the VM template deployment function on the standby VRM node at the production site.

10. On FusionCompute, view and make a note of the ID of the standby VRM VM.

Check whether the target standby VRM node is the default standby node. If it is not the default standby node, perform a switchover between the active and
standby nodes.

11. Use PuTTY to log in to the active VRM node.


Ensure that the management IP address and user gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

12. Run the following command to switch to user root:


su - root

13. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.

The unit of XXX is second.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

14. Run the following command on the active VRM node to enable the standby VRM VM to be cloned to a VM:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh i-00000002
Information similar to the following is displayed:

Please import database password:

15. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:
Open VM i-00000002 operating authority success.

Use the standby VRM template to deploy VMs at the DR site.

16. On FusionCompute, stop the standby VRM VM.


For details, see "Stopping a VM" in FusionCompute 8.8.0 User Guide (Virtualization). After the standby VRM VM is stopped, the system
generates the Failed Heartbeat Communication Between Active and Standby VRM Nodes alarm.

17. On FusionCompute, convert the standby VRM VM to a VM template.


For details, see "Creating a VM Template" in FusionCompute 8.8.0 User Guide (Virtualization). Select the plan for converting a VM to a
template.

18. On FusionCompute, deploy a VM using the VM template of the standby VRM VM.
For details, see "Deploying a VM Using an Existing Template in the System" in FusionCompute 8.8.0 User Guide (Virtualization).

In the Set Compute Resource area, select a host in the intra-city DR center and select Bind to the selected host. Use the virtualized local storage of the
selected host as the target storage. When configuring VM attributes, select Customize using the Customization Wizard and configure the NIC information
to ensure the NIC information is consistent with that of the standby VRM VM. You can delete the standby VRM VM template at the production site only after
the standby VRM node is deployed at the DR site and the active/standby relationship is restored.

Start the standby VRM VM at the DR site.

19. Use PuTTY to log in to the host where the standby VRM VM resides.
Ensure that the management IP address and user gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

20. Run the following command to switch to user root:


su - root

21. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.

The unit of XXX is second.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

22. Run the following command to add the standby VRM VM ID to the host configuration file:
echo "vm_id" > /etc/vna-api/vrminfo
In the command, vm_id indicates the standby VRM VM ID.
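For example, if the standby VRM VM ID noted earlier is i-00000002 (an illustrative value consistent with the earlier example), run the following commands to write the ID and then display the file to confirm its content:
echo "i-00000002" > /etc/vna-api/vrminfo
cat /etc/vna-api/vrminfo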

23. Start the standby VRM VM.


For details, see "Starting/Waking Up a VM" in FusionCompute 8.8.0 User Guide (Virtualization).

After the standby VRM VM is started at the DR site, the system automatically restores the active/standby relationship, and then you can delete the standby
VRM VM template at the production site.

Replace the license file.

24. Load the license file again.


Because the ESN of the standby VRM VM is changed, apply for a new license and load the new license file. For details, see Updating the
License File .

Disable the template deployment function of the standby VRM VM at the DR site after the standby VRM VM is deployed.

25. Use PuTTY to log in to the active VRM node.


Ensure that the management IP address and user gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

26. Run the following command to switch to user root:


su - root

27. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.

The unit of XXX is second.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

28. Run the following command on the active VRM VM to disable the template deployment function of the standby VRM VM:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh i-00000002
Information similar to the following is displayed:

Please import database password:

29. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:


Close VM operating authority success.

After this step, the FusionCompute installation is complete.

3.6.1.1.2.1.5 Configuring Storage Devices

Scenarios
In the local HA DR scenario for scale-out storage, the storage device configuration is the same as that in the normal deployment scenario. This
section only describes the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure storage devices based on the storage device documentation and the data plan.

Procedure
1. Install the scale-out storage. For details, see "Installation" > "Software Installation Guide" > "Installing the Block Service" > "Connecting to
FusionCompute" in OceanStor Pacific Series 8.2.1 Product Documentation for the desired version.

2. Connect the storage system as instructed in "Storage Resource Creation (Scale-Out Block Storage)" in FusionCompute 8.8.0 User Guide
(Virtualization).
Plan the names of datastores in a unified manner, for example, DR_datastore01.

3. (Optional) In the converged deployment scenario, create a storage port as a replication port by following instructions provided in "Adding a
Storage Port" in FusionCompute 8.8.0 User Guide (Virtualization).

4. Add a remote device. For details, see "Checking the License", "Creating a Replication Cluster", and "Adding a Remote Device" in
"Configuration" > "Feature Guide" > "HyperMetro Feature Guide for Block"> "Installation and Configuration" > "Configuring HyperMetro"
in OceanStor Pacific Series 8.2.1 Product Documentation for the desired version.

5. Disable I/O suspension and forwarding.


Run the following command to switch to user dsware:
su - dsware -s /bin/bash

Disable the active-active I/O suspension function of scale-out storage.

[dsware@FS1_01 root]$ /opt/dsware/client/bin/dswareTool.sh --op globalParametersOperation -opType modify -parameter g_dsware_io_hanging_switch:close

This operation is high risk,please input y to continue:y

[Sat Sep 18 16:28:20 CST 2021] DswareTool operation start.

Enter User Name:admin

Enter Password :

Login server success.

Operation finish successfully. Result Code:0

The count of successful nodes is 5.

The count of failed nodes is 0.

[Sat Sep 18 16:28:30 CST 2021] DswareTool operation end.

Disable the active-active forwarding function of scale-out storage.


[dsware@FS1_01 root]$ /opt/dsware/client/bin/dswareTool.sh --op globalParametersOperation -opType modify -parameter metro_io_fwd_switch:0

This operation is high risk,please input y to continue:y

[Sat Sep 18 16:25:52 CST 2021] DswareTool operation start.

Enter User Name:admin


Enter Password :

Login server success.

Operation finish successfully. Result Code:0

The count of successful nodes is 5.

The count of failed nodes is 0.

[Sat Sep 18 16:26:03 CST 2021] DswareTool operation end.

You need to log in to the FSM node and run the preceding commands on both storage clusters.

6. (Optional) Run the dsware and diagnose commands to check the status of the I/O suspension and forwarding functions. For details, see
OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer).

3.6.1.1.2.1.6 Configuring HA Policies for a DR Cluster

Scenarios
In the HA scenario, configure the HA policy for the DR cluster to enable the DR VMs to start and implement the HA function on local hosts
preferentially.

Prerequisites
Conditions
The HA function has been enabled in the DR cluster.
Data
You have obtained the lists of the hosts and VMs that provide local services in the DR cluster.

Procedure
1. Log in to FusionCompute.

2. Configure HA policies for the DR cluster.


For details, see "Configuring the HA Policy for a Cluster" in FusionCompute 8.8.0 User Guide (Virtualization).
Note the following configuration requirements:

The default HA function of the cluster must be enabled. Otherwise, the HA DR solution fails.

Host Fault Policy: Set it to HA.

Datastore Fault Policy

The following configuration takes effect only when the host datastore is created on virtualized SAN storage and the disk is a scale-out block storage
disk, an eVol disk, or an RDM shared storage disk.

Datastore Fault Handling by Host: Set it to HA.

Policy Delay: You are advised to set this parameter to 3 to 5 minutes (configure it based on the environment requirements).

If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting
the policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the
I/O Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei
technical support to confirm that services will not be affected and then disable the function.

Configure the group fault control policy. If Group Fault Control is enabled, you need to manually disable it.


Fault Control Period (hour): The value ranges from 1 to 168, and the default value is 2.

Number of Hosts That Allow VM HA: The value ranges from 1 to 128. The default value is 2.

3.6.1.1.2.1.7 Creating DR VMs

Scenarios
After deploying the DR system, create DR VMs by following the normal procedure. Then, the DR system automatically implements the DR
function on the DR VMs.

Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating DR VMs.

Procedure
Create DR VMs in the DR cluster.
For details, see VM Provisioning .

3.6.1.1.2.1.8 Creating a Protected Group

Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.

Procedure
Configure DR policies.
For details, see "DR Configuration" > "HA Solution" > "Creating a Protected Group" in OceanStor BCManager 8.6.0 UltraVR User Guide.

3.6.1.1.2.2 DR Commissioning
Commissioning Process

Commissioning DR Switchover

Commissioning DR Data Reprotection

Commissioning DR Switchback

Backing Up Configuration Data

3.6.1.1.2.2.1 Commissioning Process

Purpose
Check whether the DR AZ properly takes over services if the production AZ is faulty.

Check whether services can be switched back after the fault in the production AZ is rectified.

Check whether the DR site properly takes over services when the production AZ is in maintenance as planned.


After commissioning, back up the management data by exporting the configuration data. The data can be used to restore the system if an
exception occurs or an operation has not achieved the expected result.

Prerequisites
The local HA DR system for scale-out storage has been deployed.

DR services can be successfully deployed.

DR policies have been configured.

Commissioning Process
Figure 1 shows the DR solution commissioning process.

Figure 1 Commissioning process

Procedure
Perform the following operations:

Commissioning DR Switchover

Commissioning DR Data Reprotection

Commissioning DR Switchback

Backing Up Configuration Data

Expected Result
The result of each operation meets expectation.

3.6.1.1.2.2.2 Commissioning DR Switchover

Purpose
Verify the availability of the local HA DR solution for scale-out storage by disconnecting the storage link and executing a recovery plan to check
whether the VM can be recovered.

Constraints and Limitations


None

Prerequisites
The local HA DR system for scale-out storage has been deployed.

DR policies have been configured.

Procedure
1. Query and make a note of the number of DR VMs in the production AZ on FusionCompute of the production site.

2. Commission DR switchover.
If some hosts or VMs in the production AZ are faulty:
When a disaster occurs, VMs on CNA1 in the production AZ are unavailable for a short period of time (depending on the time taken to start the VMs). After the disaster recovery, the VMs on CNA1 are migrated to CNA2, and the DR VMs in the DR AZ access storage resources in the DR AZ. After the faulty hosts are recovered, migrate the VMs back to the production AZ.

After the DR switchover is successful, execute the required test cases at the DR site and ensure that the execution is successful.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the number of DR VMs is consistent with that at the production site.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on hosts in the DR AZ, and services in the DR AZ are normal.

Additional Information
None

3.6.1.1.2.2.3 Commissioning DR Data Reprotection

Purpose
By powering on the scale-out storage devices in the original production AZ and then powering off the scale-out storage devices in the original DR
AZ, commission the data resynchronization function after the DR switchover in the original DR AZ.

Constraints and Limitations


None

Prerequisites
You have executed the DR switchover test case.

Procedure
1. Randomly select a DR VM and save test files on the VM (for example, by generating a file and recording its checksum, as shown in the sketch after this procedure).


2. Power on the storage devices in the original production AZ.

3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices in the original
DR AZ.

4. Wait for a period of time until the VM status becomes normal and check the consistency of the VM testing files.
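A minimal way to create and later verify the test files mentioned in steps 1 and 4 is shown below, assuming a Linux guest OS; the file name and size are arbitrary placeholders.
# Before the test, on the selected DR VM: create a test file and record its checksum.
dd if=/dev/urandom of=/root/dr_test.bin bs=1M count=100
sha256sum /root/dr_test.bin > /root/dr_test.sha256
# After step 4, on the same VM: verify that the file content is unchanged.
sha256sum -c /root/dr_test.sha256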

Expected Result
The VM is running properly and the test file data is consistent.

Additional Information
None

3.6.1.1.2.2.4 Commissioning DR Switchback

Purpose
Commission the availability of the HA DR switchback function of scale-out storage by powering on the DR hosts in the production AZ and
manually migrating VMs.

Constraints and Limitations


None

Prerequisites
You have commissioned the DR data protection function.

Procedure
1. Power on the DR hosts in the original production AZ.

2. Manually migrate the VMs back to the production AZ.

3. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts in the production AZ.

Expected Result
VMs are running properly on the hosts in the production AZ, and services in the production AZ are normal.

Additional Information
None

3.6.1.1.2.2.5 Backing Up Configuration Data

Scenarios
This section guides administrators to back up configuration data on UltraVR to back up a database before performing critical operations, such as a
system upgrade or critical data modification, or after changing the configuration. The backup data can be used to restore the database if an exception
occurs or the operation has not achieved the expected result.
The system supports automatic backup and manual backup.


If you choose automatic backup, prepare an SFTP server and configure the SFTP server information on UltraVR. After the configuration is
complete, the system backs up system data to the SFTP server at 02:00 every day based on the UltraVR server time. The UltraVR server time
at the production site and DR site must be consistent. An SFTP server can retain backup data for a maximum of seven days. Data older than
seven days will be automatically deleted. If a backup task fails, the system generates an alarm. The alarm will be automatically cleared when
the next backup task succeeds. The backup directory is:
Linux: /SFTP user/CloudComputing/DRBackup/eReplication management IP address/YYYY-MM-DD/Auto/ConfigData.zip
Windows: \CloudComputing\DRBackup\eReplication management IP address\YYYY-MM-DD\Auto\ConfigData.zip

If you choose manual backup, manually export the system configuration data and save it locally.
During manual backup, export both the configuration data at the production site and that at the DR site.

Prerequisites
Conditions

You have logged in to UltraVR.

You have obtained the IP address, username, password, and port of the SFTP server if you choose automatic backup.

Procedure
Automatic backup

1. On UltraVR, choose Settings.

2. In the navigation pane, choose Data Maintenance > System Configuration Data.

3. Choose Automatic Backup.

4. Configure the backup server information.

SFTP IP

SFTP User Name

SFTP Password

SFTP Port

Encryption Password

To secure configuration data, the backup server must use the SFTP protocol.

5. Click OK.

6. In the Warning dialog box that is displayed, read the content of the dialog box carefully and click OK.

After you select Automatic Backup, for any change of the SFTP server information, you can directly modify the information and click OK.
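To confirm that the daily automatic backup is actually being written to the backup server, you can list the expected backup file from any host that can reach the server. The following is a minimal sketch; the SFTP user (drbackup), server address (192.0.2.50), UltraVR management IP address (192.0.2.10), and date are placeholders to be replaced with your own values.
# List the backup file under the SFTP user's home directory.
sftp drbackup@192.0.2.50 <<'EOF'
ls -l CloudComputing/DRBackup/192.0.2.10/2025-06-09/Auto/ConfigData.zip
EOF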

Manual backup

1. On UltraVR, choose Settings.

2. In the navigation pane, choose Data Maintenance > System Configuration Data.

3. Choose Manual Backup.

4. In the System Configuration Data area, click Export, enter the encryption password, and click OK.

5. Download the ConfigData.zip file to your local system.

3.6.1.1.3 Local HA for eVol Storage



DR System Installation and Configuration

DR Commissioning

3.6.1.1.3.1 DR System Installation and Configuration


Installation and Configuration Process

Preparing for Installation

Configuring Switches

Installing FusionCompute

Configuring Storage Devices

Installing UltraVR

Creating DR VMs

Configuring DR Policies

3.6.1.1.3.1.1 Installation and Configuration Process


Figure 1 shows the process for installing and configuring the DR system.

Figure 1 Installation and configuration process

3.6.1.1.3.1.2 Preparing for Installation


Note the following requirements for installing the DR system.


Installation Requirements
Table 1 lists the installation requirements for the DR system.

Table 1 Installation requirements

Object | Description | Requirement | Remarks

Local PC | The PC that is used for the installation | The local PC only needs to meet the requirements for installing FusionCompute and there is no special requirement for it. | For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements .

Server | The server that functions as a host (CNA node) on FusionCompute | The server must meet the following requirements: Meets the host requirements for installing FusionCompute. Supports the FC HBA port and can communicate with the FC switches. NOTE: In the x86 architecture, if it is a blade server, such as the E6000 or E9000 server, the blades must be able to connect to the FC switch modules using dedicated FC network ports. | Same as above.

Storage device | Storage devices used in the DR solution | eVol storage must be used and meet the storage compatibility requirements of UltraVR. Independent 10GE/25GE network ports are used for eVol storage replication. Each eVol storage device provides at least two network ports for storage replication. | Same as above.

Access switch | Access switches of the storage, management, and service planes | Each access switch has sufficient network ports to connect to the data replication network ports on the eVol storage devices. | None

Aggregation switch, Core switch | Aggregation and core switches at the production and DR sites | A route to the data replication network is configured for aggregation and core switches to route IP addresses. | None

Firewall | Firewalls at the production and DR sites | No special requirement. | None

Network environment | Network between the production site and the DR site | The network must meet the following requirements: The management plane has a bandwidth of at least 10 Mbit/s. The bandwidth of the data replication plane depends on the total amount of data changed in the replication period, which is calculated as follows: Number of VMs to be protected x Average amount of data changed in the replication period per VM (MB) x 8/(Replication period (minute) x 60). | None
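As a worked example of the data replication bandwidth formula above (a sketch with purely illustrative numbers: 100 protected VMs, 500 MB of changed data per VM per period, and a 30-minute replication period; substitute the figures from your own data plan):
# Illustrative values only; replace them with the figures from your data plan.
vms=100                # number of VMs to be protected
change_mb_per_vm=500   # average data changed per VM in one replication period (MB)
period_min=30          # replication period (minutes)
# Bandwidth (Mbit/s) = VMs x changed MB per VM x 8 / (period in minutes x 60)
awk -v v="$vms" -v c="$change_mb_per_vm" -v p="$period_min" \
    'BEGIN { printf "Required data replication bandwidth: %.1f Mbit/s\n", v * c * 8 / (p * 60) }'
With these example numbers, the result is about 222 Mbit/s.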

Preparing Documents
Table 2 lists the documents required for deploying the DR solution.

Table 2 Preparing documents

Document Category | Document Name | Description | How to Obtain

Integration design document | Network Integration Design | Describes the deployment plan, networking plan, and the bandwidth plan. | Obtain this document from the engineering supervisor.

Integration design document | Network Integration Design Data Planning | Describes the network data plan, such as the node IP address plan and the storage plan. | Obtain this document from the engineering supervisor.

Version document | Datacenter Virtualization Solution xxx Version Mapping (NOTE: xxx indicates the software version.) | Provides information about hardware and software version mapping. | For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: Visit https://support.huawei.com , search for the document by name, and download it.

UltraVR product document | UltraVR User Guide | Provides guidance on how to install, configure, and commission UltraVR. | For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: Visit https://support.huawei.com , search for the document by name, and download it.

OceanStor Dorado series product documentation | OceanStor Dorado Series Product Documentation | Provides guidance on how to install, configure, and commission OceanStor Dorado storage. | For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: Visit https://support.huawei.com , search for the document by name, and download it.

Server product documentation | Server document package | Provides guidance on how to configure the servers. | This document package is provided by the server vendor.

Switch product documentation | Switch document package | Provides guidance on how to configure the switches by running commands. | This document package is provided by the switch vendor.

After obtaining the related documents by referring to Datacenter Virtualization Solution xxx Version Mapping, make preparations for the installation, such as
obtaining the software packages and installation tools. For details about the installation preparations, see the related documents.

Software Packages
The DR solution has no special requirements for the software packages. Obtain the software packages of UltraVR by referring to Datacenter
Virtualization Solution xxx Version Mapping.

FusionCompute

UltraVR

OceanStor Dorado series storage

3.6.1.1.3.1.3 Configuring Switches

Scenarios
In the replication DR scenario for eVol storage, the switch configuration is the same as that in a common deployment scenario without the DR
system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure switches based on the network device documentation and the data plan.

Procedure
Configure access switches.

1. Configure access switches based on the data plan and the access switch documents.
Each access switch must have enough network ports to connect to the storage replication ports on the OceanStor Dorado devices. There are
no other special configuration requirements.

Configure aggregation switches.

2. Configure aggregation switches based on the data plan and the aggregation switch documents.
A route to the storage replication network must be configured for aggregation switches to route IP addresses.

Configure core switches.

3. Configure core switches based on the data plan and the core switch documents.


A route to the storage replication network must be configured for core switches to route IP addresses. Management planes of the production
site and the DR site must be able to communicate with each other.

3.6.1.1.3.1.4 Installing FusionCompute

Scenarios
This section guides software commissioning engineers to install FusionCompute in the replication DR scenario for eVol storage.

Procedure
For details, see "Installing FusionCompute" in FusionCompute 8.8.0 Software Installation Guide.

3.6.1.1.3.1.5 Configuring Storage Devices

Scenarios
In the replication DR scenario for eVol storage, the storage device configuration is the same as that in a common deployment scenario without the
DR system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure storage devices based on the storage device documentation and the data plan.

Procedure
1. Install OceanStor Dorado storage systems as instructed in "Install and Initialize" > "Installation Guide" in OceanStor Dorado Series Product
Documentation.

2. Connect storage devices. For details, see "Storage Resource Creation (for eVol Storage)" in FusionCompute 8.8.0 User Guide (Virtualization)
of this document.
Plan the names of datastores in a unified manner, for example, DR_datastore01.

3. Create a storage port as a remote replication port. For details, see "Adding a Storage Port" in FusionCompute 8.8.0 User Guide
(Virtualization) of this document.

You are advised to use two physical NICs to form an aggregation port and create a storage port on the aggregation port.
It is recommended that the remote replication plane be separated from the storage plane. That is, the remote replication port and storage port are created
on different aggregation ports or NICs.

4. Configure storage remote replication as instructed in "Configure" > "HyperReplication Feature Guide for Block" > "Configuring and
Managing HyperReplication (System Users)" > "Configuring HyperReplication" in OceanStor Dorado Series Product Documentation.

3.6.1.1.3.1.6 Installing UltraVR

Scenarios
This section guides software commissioning engineers to install and configure the UltraVR DR management software to implement the eVol
storage-based replication DR solution.

Prerequisites
Conditions

You have installed and configured FusionCompute.

You have obtained the software packages and the data required for installing UltraVR.


Procedure
1. Install the UltraVR DR management software.
For details, see Installation and Uninstallation in UltraVR User Guide.

2. Configure the UltraVR DR management software.


For details, see DR Configuration > Active-Passive DR Solution in UltraVR User Guide.

3.6.1.1.3.1.7 Creating DR VMs

Scenarios
After the DR system is installed, you can create VMs by following the normal service process and use the DR system to protect these VMs.

Prerequisites
Conditions
You have completed the initial service configuration.
Data
You have obtained the data required for creating VMs.

Procedure
1. Determine the DR VM creation mode.

To create DR VMs, go to 2.

To implement the DR solution for existing VMs, go to 3.

2. Create DR VMs on the planned DR volumes. For details, see Provisioning a VM .

3. Migrate VMs that do not reside on DR volumes to the DR volumes and migrate non-DR VMs residing on DR volumes to non-DR volumes.
For details, see "Migrating a VM (Change Compute Resource)" in FusionCompute 8.8.0 User Guide (Virtualization).
During VM storage migration, non-DR VMs can only be migrated to DR datastores through whole storage migration. VMs to which multiple
disks are attached cannot be migrated through single-disk migration.

After DR VMs are created, VM information changes. In this case, you can update resource information manually or using UltraVR periodic polling. For details, see
DR Management > Active-Passive DR Solution > DR Protection > Refreshing Resource Information in UltraVR User Guide.

3.6.1.1.3.1.8 Configuring DR Policies

Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.

Procedure
1. Check whether DR policies are configured for the first time.

If yes, go to 2.

If no, no further action is required.

2. Configure DR policies for the first time.


For details, see "DR Configuration" > "Active-Passive DR Solution" > "Creating a Protected Group" in UltraVR User Guide.

3.6.1.1.3.2 DR Commissioning

Commissioning Process

Commissioning DR Switchover

Commissioning DR Data Reprotection

Commissioning DR Switchback

3.6.1.1.3.2.1 Commissioning Process

Purpose
Verify that the DR site properly takes over services if the production site is faulty.

Check that the data of the DR site and production site is synchronized after the DR site takes over services.

Verify that the production site properly takes over services back when it is recovered.

Prerequisites
A DR system for eVol storage has been deployed.

DR services can be successfully deployed.

You have configured the HA and cluster scheduling policies for a DR cluster.

You have checked that the VM to be migrated is not bound to a host and does not have a CD/DVD-ROM drive or Tools mounted.

Commissioning Process
Figure 1 shows the DR solution commissioning process.

Figure 1 Commissioning process

Commissioning Procedure
Execute the following test cases:

Commissioning DR Switchover

Commissioning DR Data Reprotection



Commissioning DR Switchback

Expected Result
The result of each test case meets expectation.

3.6.1.1.3.2.2 Commissioning DR Switchover

Purpose
By powering off the network devices deployed at the production site, check whether automatic DR is implemented for VMs and confirm the
availability of the DR solution for eVol storage.

Constraints and Limitations


None

Prerequisites
A DR system for eVol storage has been deployed.
You have checked that the VM to be migrated is not bound to a host and does not have a CD/DVD-ROM drive or Tools mounted.

Commissioning Procedure
1. On FusionCompute, make a note of the status of the VMs at the production site.
Make a note of the number of running VMs at the production site.

2. Power off all DR hosts and DR eVol storage at the production site.

3. On FusionCompute, check the migration status of the running VMs.


If all the running VMs are migrated from the production site to the DR site and these VMs are in the running state, the system can implement
DR on VMs.

4. Select a VM and log in to it using VNC.


If the VNC login page is displayed, the VM is running properly.

5. Execute the following test cases and verify that all these cases can be executed successfully.

Create VMs.

Migrate VMs.

Stop VMs.

Restart VMs.

Start VMs.

Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.

Additional Information
None


3.6.1.1.3.2.3 Commissioning DR Data Reprotection

Purpose
By powering on the eVol storage device deployed at the original production site and then powering off the eVol storage device deployed at the
original DR site, commission the data resynchronization function after the original production site recovers.

Constraints and Limitations


None

Prerequisites
You have executed the DR switchover commissioning case.

Commissioning Procedure
1. Randomly select a DR VM and save a test file on the VM.

2. Power on the eVol storage device at the original production site.

3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices at the original
DR site.

4. Verify that the DR VM is in the running state.

5. Open the test file saved on the VM in 1 to check the file consistency.

Expected Result
The VM is running properly and the test file data is consistent.

Additional Information
None

3.6.1.1.3.2.4 Commissioning DR Switchback

Purpose
By powering on the DR hosts deployed at the production site and enabling the compute resource scheduling function in the DR cluster, commission
the availability of the switchback function provided by the DR solution for eVol storage.

Constraints and Limitations


None

Prerequisites
You have commissioned the DR data reprotection function.
You have checked that the VM to be migrated is not bound to a host and does not have a CD/DVD-ROM drive or Tools mounted.

Procedure
1. Power on the DR hosts at the production site.


2. On FusionCompute, enable the compute resource scheduling function.

3. Perform the switchover between the active and standby VRM nodes to change the VRM node at the production site to the active node.

4. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts at the production site.

Expected Result
VMs are running properly on the hosts at the production site and the services at the production site are running properly.

Additional Information
None

3.6.1.2 Metropolitan HA
Metropolitan HA for Flash Storage

Metropolitan HA for Scale-Out Storage

Metropolitan HA for eVol Storage

3.6.1.2.1 Metropolitan HA for Flash Storage


Installing and Configuring the DR System

DR Commissioning

3.6.1.2.1.1 Installing and Configuring the DR System


Installation and Configuration Process

Preparing for Installation

Configuring Switches

Configuring Storage

Installing FusionCompute

Creating DR VMs

Configuring HA and Resource Scheduling Policies for a DR Cluster

3.6.1.2.1.1.1 Installation and Configuration Process


Figure 1 shows the process for installing and configuring the DR system.

Figure 1 Installation and configuration process


3.6.1.2.1.1.2 Preparing for Installation


Note the following requirements for installing the DR system.

Installation Requirements
Table 1 lists the installation requirements for the DR system.

Table 1 Installation requirements

Object | Description | Requirement | Remarks

Local PC | The PC that is used for the installation | The local PC only needs to meet the requirements for installing FusionCompute and there is no special requirement for it. | For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements .

Server | The server that functions as a host (CNA node) on FusionCompute | The server must meet the following requirements: Meets the host requirements for installing FusionCompute. Supports the FC HBA port and can communicate with the FC switches. NOTE: If it is a blade server, such as the E6000 or E9000 server, the blades must be able to connect to the FC switch modules using dedicated FC network ports. | Same as above.

Flash storage | Products used for storage management and storage DR | This system must meet the following requirements for installing flash storage: The environment is satisfactory for installing flash storage. Deploy the quorum server in a third place. The one-way network transmission delay between the quorum server and the production site or DR site is less than or equal to 10 ms, which is suitable for a bandwidth of 1 Mbit/s. | Same as above.

Access switch | Access switches of the storage, management, and service planes | There are no special requirements for the Ethernet access switches on the management and service planes. The access switches on the storage plane must meet the following requirements: FC switches are recommended. Ethernet switches can also be used. FC switches or Ethernet switches must be compatible with hosts and flash storage. | None

Aggregation switch | Ethernet aggregation switches and FC aggregation switches at the production and DR sites | The Ethernet aggregation switches must support VRRP. | None

Core switch, Firewall | Core switches and firewalls at the production and DR sites | No special requirement. | None

Network environment | Network between the production site and the DR site | The network must meet the following requirements: The flash storage heartbeat plane uses a large Layer 2 network. In the large Layer 2 network, the RTT between any two sites is less than or equal to 1 ms. The quorum plane of flash storage must be connected using a Layer 3 virtual private network (L3VPN). | None
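A quick way to spot-check the inter-site RTT requirement (less than or equal to 1 ms on the large Layer 2 network) is to run a short ping series from a host at one site toward a peer address at the other site and read the average value in the summary line; the peer address below (192.0.2.20) is a placeholder.
# Send 100 probes at 200 ms intervals and print the packet-loss and rtt min/avg/max/mdev summary lines.
ping -c 100 -i 0.2 192.0.2.20 | tail -n 2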

Documents
Table 2 lists the documents required for deploying the DR solution.

Table 2 Documents

Type | Name | Description | How to Obtain

Integration design document | Network integration design | Describes the deployment plan, networking plan, and the bandwidth plan. | Obtain this document from the engineering supervisor.

Integration design document | Data planning template for the network integration design | Provides the network data plan result, such as the IP plan of nodes, storage plan, and plan of VLANs, zones, gateways, and routes. | Obtain this document from the engineering supervisor.

Version document | Datacenter Virtualization Solution 2.1.0 Version Mapping | Provides information about hardware and software version mapping. | For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: Visit https://support.huawei.com , search for the document by name, and download it.

FusionCompute product documentation | FusionCompute Product Documentation | Provides guidance on installation, initial configuration, and commissioning of FusionCompute. | For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: Visit https://support.huawei.com , search for the document by name, and download it.

Flash storage product documentation | OceanStor Series Product Documentation | Includes storage installation, configuration, and commissioning as well as the HyperMetro feature guide. | For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: Visit https://support.huawei.com , search for the document by name, and download it. NOTE: For detailed version information, see Constraints and Limitations .

Switch product documentation | Switch document package | Provides information about how to configure the switches by running commands. | This document package is provided by the switch vendor.

Server product documentation | Server document package | Provides information about how to configure the servers. | This document package is provided by the server vendor.

After obtaining required documents by referring to Datacenter Virtualization Solution 2.1.0 Version Mapping, make preparations for the installation, such as obtaining
the software packages and installation tools. The details are not described in this document.

Preparing Software Packages and Licenses



The DR solution has no special requirements for the software packages. Obtain the software packages and license files for the following products
based on Datacenter Virtualization Solution 2.1.0 Version Mapping and Constraints and Limitations :

FusionCompute

Flash storage

3.6.1.2.1.1.3 Configuring Switches

Scenarios
In the metropolitan HA scenarios for flash storage, the switch configurations are the same as those in the normal deployment scenario. This section
only describes the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure the switches based on the data plan provided in the network device documents.

Procedure
Configure Ethernet access switches.

1. Configure the Ethernet access switches based on the data plan and the Ethernet access switch documents.
The system has no special configuration requirements for the Ethernet access switches.

Configure FC access switches.

2. Configure the FC access switches based on the data plan and the FC access switch documents.
The FC aggregation switches deployed at two sites must be connected to each other using optical fibers. The zones and cascading must be
configured. There are no other special requirements.

Configure aggregation switches.

3. Configure the Ethernet aggregation switches and FC aggregation switches based on the data plan and the aggregation switch documents.
Note the following configurations for the Ethernet aggregation switches:

Except for the active-active quorum channel, configure the VLANs of the other planes at the other site as well.

When configuring the VLANs of a site at another site, configure VRRP for the active and standby gateways on the Ethernet aggregation
switches based on the VLANs. For a VLAN, the gateway at the site where VM services are deployed is configured as the active
gateway, and the gateway at the other site is configured as the standby gateway.

The Layer 2 interconnection between the Ethernet aggregation switches and the core switches needs to be configured.

The Layer 3 interconnection (implemented by using the VLANIF interface) between the Ethernet aggregation and core switches needs
to be configured for accessing services from external networks.

Configure Ethernet core switches.

4. Configure the Ethernet core switches based on the data plan and the Ethernet core switch documents.
Note the following configurations for the Ethernet core switches:

Configure the Layer 2 interconnection between a local core switch and the peer core switch on the local core switch. Then, bind the
multiple links between the two sites to a trunk to prevent loops.

Advertise exact routes (by VLAN) on the core switch working at the active gateway side, and non-exact routes (by VLAN) on the core switch working at the standby gateway side. The route precision is controlled by the subnet mask.

If a firewall is deployed and Network Address Translation (NAT) needs to be configured for the firewall, advertise the external routes on the firewall instead of the core switch. Advertise exact routes on the firewall at the production site, and non-exact routes on the firewall at the DR site.
The firewall configurations, such as the ACL and NAT, must be manually set to be the same at the two sites.

3.6.1.2.1.1.4 Configuring Storage

Scenarios
This section guides software commissioning engineers to configure flash storage in the metropolitan HA DR scenario.

Procedure
For flash storage, see "HyperMetro Feature Guide for Block" in OceanStor Product Documentation for the desired model.

When configuring multipathing policies, follow instructions in OceanStor Dorado and OceanStor 6.x and V700R001 DM-Multipath Configuration Guide for
FusionCompute.
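After the multipathing configuration is applied, a minimal sanity check (assuming DM-Multipath is used on the CNA hosts as described in the guide above) is to list the multipath topology on each host and confirm that paths to both HyperMetro storage arrays are present and active:
# Run on each CNA host as user root.
multipath -ll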

3.6.1.2.1.1.5 Installing FusionCompute

Scenarios
This section describes how to install FusionCompute in the HA solution for flash storage. The FusionCompute installation method depends on
whether the large Layer 2 network on the management or service plane is connected.

If the large Layer 2 network is connected on the management and service planes, install FusionCompute by following the normal procedure,
and deploy the standby VRM node at the DR site.

If the large Layer 2 network is not connected on the management or service plane, install the hosts by following the normal procedure. The VRM installation differs from the normal procedure as follows: first deploy both the active and standby VRM nodes at the production site, and after the large Layer 2 network is connected, redeploy the standby VRM node at the DR site.

Prerequisites
Conditions

You have made preparations for the FusionCompute installation, including configuring servers, storage devices, and the network, and obtaining
the required data, software packages, license files, documents, and tools.

The FusionCompute installation plan meets the deployment requirements described in Deployment Principles .

Data
You have obtained the password of the VRM database.

Process
Figure 1 shows the process for installing FusionCompute.

Figure 1 FusionCompute installation process


Procedure
For details about the installation and initial configuration methods of the FusionCompute components, see Installation Using SmartKit .
Install hosts.

1. Install hosts at the production site and the DR site.

Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is connected.

2. Check whether the large Layer 2 network is connected.

If yes, go to 3.

If no, go to 5.

3. Install the active and standby VRM nodes.


Install the active and standby VRM nodes by following the normal procedure, and deploy the standby VRM node at the DR site based on the
data plan.

4. Perform the initial configuration of FusionCompute.


The initial configuration includes loading the license file, configuring the NTP clock source and the time zone, configuring the backup server,
creating clusters, adding hosts, adding storage devices to hosts, and adding network resources to VMs.
Note the following configuration requirements:

Add the DR hosts at the production site and the DR site to the planned DR cluster, which includes the management cluster.

Enable the HA and DRS functions in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and
Policy Delay to 3 to 5 minutes (configure it based on the environment requirements). Set Migration Threshold of the DRS to
Conservative. Otherwise, the DR policies cannot take effect using the DRS advanced rules.

If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.

Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.

Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.

Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.

After this step, the FusionCompute installation is complete.

Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is not connected.

5. Install the active and standby VRM nodes.


Install the active and standby VRM nodes by following the normal procedure.

6. Perform the initial configuration of FusionCompute.


The initial configuration includes loading the license file, configuring the NTP clock source and the time zone, configuring the backup server,
creating clusters, adding hosts, adding storage devices to hosts, and adding network resources to VMs.
Note the following configuration requirements:

Add the DR hosts at the production site to the planned DR cluster, which includes the management cluster.

Enable the HA and DRS functions in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and
Policy Delay to 3 to 5 minutes (configure it based on the environment requirements). Set Migration Threshold of the DRS to
Conservative. Otherwise, the DR policies cannot take effect using the DRS advanced rules.

If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.

Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.

Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.

Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.

Add hosts at the DR site to a DR cluster after the large Layer 2 network is connected.

7. Add hosts at the DR site to a DR cluster and configure the DR hosts.


For details, see "Adding Hosts" in FusionCompute 8.8.0 User Guide (Virtualization).
Note the following configuration requirements:

Add the DR hosts at the DR site to the planned DR cluster, which includes the management cluster.

Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.

Provide descriptions to indicate that the hosts and datastores are used for DR when adding DR hosts and creating datastores.

Ensure that the hosts planned to run the VRM VMs use the same distributed virtual switches (DVSs) as those used by hosts where VMs
of the original nodes are deployed.

After adding hosts at the DR site to the DR cluster, configure time synchronization on the node.


For details, see "Setting Time Synchronization on a Host" in FusionCompute 8.8.0 User Guide (Virtualization).

Enable the VM template deployment function on the standby VRM node at the production site.

8. On FusionCompute, view and make a note of the ID of the standby VRM VM.

Check whether the target standby VRM node is the default standby node. If it is not the default standby node, perform a switchover between the active and
standby nodes.

9. Use PuTTY to log in to the active VRM node.


Ensure that the management IP address and user gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

10. Run the following command to switch to user root:


su - root

11. Run the TMOUT=XXX command to set the session timeout interval and prevent automatic logout during the operation.

The unit of XXX is seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.
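For example, the following sets the session timeout to 30 minutes (1800 is an illustrative value; choose a value that fits your maintenance window):

TMOUT=1800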

12. Run the following command on the active VRM node to enable the standby VRM VM to be cloned to a VM:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh i-00000002
Information similar to the following is displayed:

Please import database password:

13. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:

Open VM i-00000002 operating authority success.

Use the standby VRM template to deploy VMs at the DR site.

14. On FusionCompute, stop the standby VRM VM.


For details, see "Stopping a VM" in FusionCompute 8.8.0 User Guide (Virtualization). After the standby VRM VM is stopped, the system
generates the Failed Heartbeat Communication Between Active and Standby VRM Nodes alarm.

15. On FusionCompute, convert the standby VRM VM to a VM template.


For details, see "Creating a VM Template" in FusionCompute 8.8.0 User Guide (Virtualization). Select the method of converting the VM to a template.

16. On FusionCompute, deploy a VM using the VM template of the standby VRM VM.
For details, see "Deploying a VM Using an Existing Template in the System" in FusionCompute 8.8.0 User Guide (Virtualization).

In the Set Compute Resource area, select a host in the intra-city DR center and select Bind to the selected host. Use the virtualized local storage of the
selected host as the target storage. When configuring VM attributes, select Customize using the Customization Wizard and configure the NIC information
to ensure the NIC information is consistent with that of the standby VRM VM. You can delete the standby VRM VM template at the production site only after
the standby VRM node is deployed at the DR site and the active/standby relationship is restored.

Start the standby VRM VM at the DR site.

17. Use PuTTY to log in to the host where the standby VRM VM resides.
Ensure that the management IP address and user gandalf are used for login.


The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

18. Run the following command to switch to user root:


su - root

19. Run the TMOUT=XXX command to set the session timeout interval and prevent automatic logout during the operation.

The unit of XXX is seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

20. Run the following command to add the standby VRM VM ID to the host configuration file:
echo "vm_id" > /etc/vna-api/vrminfo
In the command, vm_id indicates the standby VRM VM ID.
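For example, if the standby VRM VM ID recorded earlier is i-00000002 (an illustrative value, consistent with the earlier examples), run:

echo "i-00000002" > /etc/vna-api/vrminfo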

21. Start the standby VRM VM.


For details, see "Starting/Waking Up a VM" in FusionCompute 8.8.0 User Guide (Virtualization).

After the standby VRM VM is started at the DR site, the system automatically restores the active/standby relationship, and then you can delete the standby
VRM VM template at the production site.

Replace the license file.

22. Load the license file again.


Because the ESN of the standby VRM VM is changed, apply for a new license and load the new license file. For details, see Updating the
License File .

Disable the template deployment function of the standby VRM VM at the DR site after the standby VRM VM is deployed.

23. Use PuTTY to log in to the active VRM node.


Ensure that the management IP address and user gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

24. Run the following command to switch to user root:


su - root

25. Run the TMOUT=XXX command to set the session timeout interval and prevent automatic logout during the operation.

The unit of XXX is seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

26. Run the following command on the active VRM VM to disable the template deployment function of the standby VRM VM:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh i-00000002
Information similar to the following is displayed:

Please import database password:

27. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:
Close VM operating authority success.

After this step, the FusionCompute installation is complete.


3.6.1.2.1.1.6 Creating DR VMs

Scenarios
After deploying the DR system, create DR service VMs by following the normal procedure. Then, the DR system automatically implements the DR
function on the DR VMs.

Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating DR VMs.

Procedure
Create DR service VMs in the DR cluster.
For details, see VM Provisioning .

3.6.1.2.1.1.7 Configuring HA and Resource Scheduling Policies for a DR Cluster

Scenarios
In the metropolitan HA DR scenarios for flash storage, HA and compute resource scheduling policies need to be configured for a DR cluster. Then,
DR VMs can start and implement HA on local hosts preferentially, preventing cross-site VM start and HA. If all local hosts are faulty, the cluster
resource scheduling function automatically enables VMs to start and implement HA on hosts at the DR site.

Prerequisites
Conditions
The HA and compute resource scheduling functions have been enabled for the DR cluster.
Data
You have obtained the lists of the hosts and VMs that provide local services in the DR cluster.

Procedure
1. Log in to FusionCompute.

2. Configure HA policies for a DR cluster.


For details, see "Configuring the HA Policy for a Cluster" in FusionCompute 8.8.0 User Guide (Virtualization).
Note the following configuration requirements:

Host Fault Policy: Set it to HA.

Datastore Fault Policy

Datastore Fault Handling by Host: Set this parameter to HA.

Policy Delay: You are advised to set this parameter to 5 minutes (configure it based on the environment requirements).

If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting
the policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the
I/O Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei
technical support to confirm that services will not be affected and then disable the function.

Configure the group fault control policy. If Group Fault Control is enabled, you need to manually disable it.


3. Configure the compute resource scheduling policies for a DR cluster.


Note the following configuration requirements:

For details about how to configure compute resource scheduling policies, see "Configuring Compute Resource Scheduling Policies" in
FusionCompute 8.8.0 User Guide (Virtualization).

Automation Level: Set it to Automatic. In this case, the system automatically migrates VMs to achieve automatic service
DR.

Measure By: Set it to the object for determining the migration threshold. You are advised to set it to CPU and Memory.

Migration Threshold: The advanced rule takes effect if this parameter is set to Conservative for all time intervals.

Configure Host Group, VM Group, and Rule Group. For details, see "Configuring a Host Group for a Cluster", "Configuring a VM
Group for a Cluster", and "Configuring a Rule Group for a Cluster" in FusionCompute 8.8.0 User Guide (Virtualization).

Add hosts running at the local site to the host group of the local site.

Add VMs running at the local site to the VM group of the local site.

If a host in the host group at the local site is faulty, its VMs are preferentially scheduled to other hosts in the same host group based on the cluster HA policy. If the entire local site is faulty, the VMs are scheduled to the host group at the other site based on the cluster HA policy.

When setting a local VM group or local host group rule, set Type to VMs to hosts and Rule to Should run on host group.

3.6.1.2.1.2 DR Commissioning
Commissioning Process

Commissioning DR Switchover

Commissioning DR Data Reprotection

Commissioning DR Switchback

3.6.1.2.1.2.1 Commissioning Process

Purpose
Verify that the DR site properly takes over services if the production site is faulty.

Check that the data of the DR site and production site is synchronized after the DR site takes over services.

Verify that the production site properly takes over services back when it is recovered.

Prerequisites
You have deployed the metropolitan HA system for flash storage.

DR services can be successfully deployed.

You have configured HA and cluster scheduling policies for the DR cluster.

Commissioning Process
Figure 1 shows the DR solution commissioning process.

Figure 1 Commissioning process


Procedure
Perform the following operations:

Commissioning DR Switchover

Commissioning DR Data Reprotection

Commissioning DR Switchback

Expected Result
The result of each test case meets expectation.

3.6.1.2.1.2.2 Commissioning DR Switchover

Purpose
By powering off the network devices at the production site, check whether automatic DR is implemented for VMs to verify the availability of the
metropolitan HA solution for flash storage.

Constraints and Limitations


None

Prerequisites
You have deployed the metropolitan HA system for flash storage.

Procedure
1. On FusionCompute, make a note of the status of the VMs at the production site.
Make a note of the number of running VMs at the production site.

2. Power off all the DR hosts and DR flash storage at the production site.

3. On FusionCompute, check the migration status of the running VMs.


If all the running VMs are migrated from the production site to the DR site and these VMs are in the running state, the system can implement
DR on VMs.


4. Select a VM and log in to it using VNC.


If the VNC login page is displayed, the VM is running properly.

5. Execute the following test cases and verify that all of them succeed.

Create VMs.

Migrate VMs.

Stop VMs.

Restart VMs.

Start VMs.

Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.

Additional Information
None

3.6.1.2.1.2.3 Commissioning DR Data Reprotection

Purpose
After rectifying the faults at the production site, commission the data resynchronization function by powering on the flash storage devices at the
production site and powering off the flash storage devices at the DR site.

Constraints and Limitations


None

Prerequisites
You have executed the DR switchover test case.

Procedure
1. Randomly select a DR VM and save a test file on the VM.

2. Power on the flash storage devices at the production site.

3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices at the original
DR site.

4. Verify that the DR VM is in the running state.

5. Open the test file saved on the VM in step 1 and check that its content is consistent (see the example sketch below).
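A minimal way to perform steps 1 and 5, assuming a Linux guest OS (the file path and content are illustrative), is to record a checksum of the test file and verify it after the storage switchover:

# On the selected DR VM, before powering on the production storage (step 1):
echo "DR reprotection test" > /tmp/dr_test.txt
sha256sum /tmp/dr_test.txt > /tmp/dr_test.sha256

# After the storage at the original DR site is powered off (step 5):
sha256sum -c /tmp/dr_test.sha256    # reports OK if the file content is unchanged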

Expected Result
The VM is running properly and the test file data is consistent.

Additional Information


None

3.6.1.2.1.2.4 Commissioning DR Switchback

Purpose
Verify the availability of the switchback function of the metropolitan HA DR solution for flash storage by powering on the DR hosts at the original production site and enabling the compute resource scheduling function of the DR cluster.

Constraints and Limitations


None

Prerequisites
You have commissioned the DR data reprotection function.

Procedure
1. Power on the DR hosts at the production site.

2. On FusionCompute, enable the compute resource scheduling function.

3. Perform the switchover between the active and standby VRM nodes to change the VRM node at the production site to the active node.

4. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts at the production site.

Expected Result
VMs are running properly on the hosts at the production site, and services at the production site are normal.

Additional Information
None

3.6.1.2.2 Metropolitan HA for Scale-Out Storage


Installing and Configuring the DR System

DR Commissioning

3.6.1.2.2.1 Installing and Configuring the DR System


Installation and Configuration Process

Preparing for Installation

Configuring Switches

Installing FusionCompute

Configuring Storage Devices

Configuring HA Policies for a DR Cluster

Creating DR VMs


Creating a Protected Group

3.6.1.2.2.1.1 Installation and Configuration Process


Figure 1 shows the process for installing and configuring the DR system.

Figure 1 Installation and Configuration Process

3.6.1.2.2.1.2 Preparing for Installation


Note the following requirements for installing the DR system.

Installation Requirements
Table 1 lists the installation requirements for the DR system.

Table 1 Installation requirements

Object: Local PC
Description: The PC that is used for the installation.
Requirement: The local PC only needs to meet the requirements for installing FusionCompute; there is no special requirement for it.
Remarks: For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.

Object: Server
Description: The server that functions as a host (CNA node) on FusionCompute.
Requirement: The server must meet the following requirements:
- Meets the host requirements for installing FusionCompute.
- Supports the FC HBA port and can communicate with the FC switches.
NOTE: In the x86 architecture, if it is a blade server, such as the E6000 or E9000 server, the blades must be able to connect to the FC switch modules using dedicated FC network ports.

Object: Storage device
Description: Storage devices used in the DR solution.
Requirement:
- Must be block storage meeting the storage compatibility requirements of UltraVR.
- The metropolitan HA solution for scale-out storage uses independent 10GE/25GE network ports. Each block storage device provides at least two network ports for storage replication.

Object: Access switch
Description: Access switches of the storage, management, and service planes.
Requirement: Each access switch has sufficient network ports to connect to the HA network ports on the block storage devices.
Remarks: None

Object: Aggregation switch
Description: Aggregation switches at the production and DR sites.
Requirement: A route to the data replication network is configured for aggregation switches to route IP addresses.
Remarks: None

Object: Core switch
Description: Core switches at the production and DR sites.
Requirement: A route to the data replication network is configured for core switches to forward IP addresses.
Remarks: None

Object: Firewall
Description: Firewalls at the production and DR sites.
Requirement: No special requirement.
Remarks: None

Object: Network
Description: Network between the production site and the DR site.
Requirement: The network must meet the following requirements:
- The management plane has a bandwidth of at least 10 Mbit/s.
- The bandwidth of the storage replication plane depends on the total amount of data changed in the VM replication period. Calculation formula: Number of VMs to be protected x Average amount of data changed in the replication period per VM (MB) x 8/(Replication period (minute) x 60). A worked example follows this table.
Remarks: None
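As a worked example of the bandwidth formula above (all figures are illustrative): protecting 100 VMs, each changing about 500 MB of data within a 15-minute replication period, requires roughly 100 x 500 x 8 / (15 x 60) ≈ 444 Mbit/s on the storage replication plane. The same calculation can be run on any Linux node:

echo "scale=1; 100 * 500 * 8 / (15 * 60)" | bc    # prints 444.4 (Mbit/s)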

Preparing Documents
Table 2 lists the documents required for deploying the DR solution.

Table 2 Preparing documents

Document category: Integration design document
Document name: Network integration design document
Description: Describes the deployment plan, networking plan, and the bandwidth plan.
How to obtain: Obtain this document from the engineering supervisor.

Document category: Integration design document
Document name: Data planning template for the network integration design
Description: Describes the network data plan, such as the node IP address plan and the storage plan.
How to obtain: Obtain this document from the engineering supervisor.

Document category: Version document
Document name: Datacenter Virtualization Solution 2.1.0 Version Mapping
Description: Provides information about hardware and software version mapping.
How to obtain: For enterprise users: Visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: Visit https://support.huawei.com, search for the document by name, and download it.

Document category: UltraVR product document
Document name: UltraVR User Guide
Description: Provides guidance on how to install, configure, and commission UltraVR.
How to obtain: For enterprise users: Visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: Visit https://support.huawei.com, search for the document by name, and download it.

Document category: OceanStor Pacific Series Product Documentation
Document name: OceanStor Pacific Series Product Documentation
Description: Provides guidance on how to install, configure, and commission the block storage devices.
How to obtain: For enterprise users: Visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: Visit https://support.huawei.com, search for the document by name, and download it.

Document category: Server product documentation
Document name: Server document package
Description: Provides information about how to configure the servers.
How to obtain: This document package is provided by the server vendor.

Document category: Switch product documentation
Document name: Switch document package
Description: Provides information about how to configure the switches by running commands.
How to obtain: This document package is provided by the switch vendor.

After obtaining required documents by referring to Datacenter Virtualization Solution 2.1.0 Version Mapping, make preparations for the installation, such as obtaining
the software packages and installation tools. The details are not described in this document.

Software Packages
The DR solution has no special requirements for the software packages. Obtain the following software packages by referring to Datacenter Virtualization Solution 2.1.0 Version Mapping:

FusionCompute

UltraVR

OceanStor Pacific series storage

3.6.1.2.2.1.3 Configuring Switches

Scenarios
In the metropolitan HA scenario for scale-out storage, the switch configurations are the same as those in the common deployment scenario. This
section describes only the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure switches based on the network device documentation and the data plan.

Procedure
Configure Ethernet access switches.

1. Configure the Ethernet access switches based on the data plan and the Ethernet access switch documents.
The system has no special configuration requirements for the Ethernet access switches.

Configure aggregation switches.

2. Configure the Ethernet aggregation switches based on the data plan and the aggregation switch documents.
Note the following configurations for the Ethernet aggregation switches:

Except for the active-active quorum channel, configure the VLANs of the other planes at the other site as well.

When configuring the VLANs of one site at the other site, configure VRRP for the active and standby gateways on the Ethernet aggregation switches based on these VLANs. For each VLAN, configure the gateway at the site where the VM services are deployed as the active gateway and the gateway at the other site as the standby gateway.


The Layer 2 interconnection between the Ethernet aggregation switches and the core switches needs to be configured.

The Layer 3 interconnection (implemented by using the VLANIF interface) between the Ethernet aggregation and core switches needs
to be configured for accessing services from external networks.
Configure Ethernet core switches.

3. Configure the Ethernet core switches based on the data plan and the Ethernet core switch documents.
Note the following configurations for the Ethernet core switches:

On each local core switch, configure the Layer 2 interconnection with the peer core switch. Then bind the multiple links between the two sites into a single trunk to prevent loops.

Distribute exact (per-VLAN) routes on the core switch on the active gateway side, and distribute non-exact (less specific) routes on the core switch on the standby gateway side. The route precision is controlled by the subnet mask.

If a firewall is deployed and Network Address Translation (NAT) needs to be configured on it, distribute the external routes on the firewall instead of on the core switch: distribute exact routes on the firewall at the production site and non-exact routes on the firewall at the DR site. The firewall configurations, such as ACL and NAT rules, must be manually kept the same at both sites.

3.6.1.2.2.1.4 Installing FusionCompute

Scenarios
This section describes how to install FusionCompute in the metropolitan HA solution for scale-out storage. The FusionCompute installation method
depends on whether the large Layer 2 network on the management or service plane is connected.

If the large Layer 2 network is connected on the management and service planes, install FusionCompute by following the normal procedure,
and deploy the standby VRM node at the DR site.

If the large Layer 2 network is not connected on the management or service plane, install the host by following the normal procedure. Note the
requirements for installing VRM: Deploy both the active and standby VRM nodes at the production site first. After the large Layer 2 network is
connected, deploy the standby VRM node at the DR site.

Prerequisites
Conditions

You have made preparations for the FusionCompute installation, including configuring servers, storage devices, and the network, and obtaining
the required data, software packages, license files, documents, and tools.

The FusionCompute installation plan meets the deployment requirements described in Deployment Principles .

Data
You have obtained the password of the VRM database.

Process
Figure 1 shows the FusionCompute installation process in the metropolitan HA scenario for scale-out storage.

Figure 1 FusionCompute installation process


Procedure
For details about the installation and initial configuration methods of the FusionCompute components, see Installation Using SmartKit .
Install hosts.

1. Install hosts at the production site and the DR site.

Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is connected.

2. Check whether the large Layer 2 network is connected.

If yes, go to 3.

If no, go to 6.

3. Install the active and standby VRM nodes.


Install the active and standby VRM nodes by following the normal procedure, and deploy the standby VRM node at the DR site based on the
data plan.

4. Perform the initial configuration of FusionCompute.


The initial configuration includes loading the license file, configuring the NTP clock source and the time zone, configuring the backup server,
creating clusters, adding hosts, adding storage devices to hosts, and adding network resources to VMs.
Note the following configuration requirements:

Add the DR hosts at the production site and the DR site to the planned DR cluster, which includes the management cluster.

Enable HA in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and Policy Delay to 3 to 5 minutes (configure it based on the environment requirements).

If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.

When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.

Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.

Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.

5. Configure the hosts at the production site and DR site to preferentially use their own block storage resources.
For details, see "Configuration" > "Basic Service Configuration Guide for Block" > "Configuring Basic Services" in OceanStor Pacific Series
8.2.1 Product Documentation.
After this step, the FusionCompute installation is complete.

Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is not connected.

6. Install the active and standby VRM nodes.


Install the active and standby VRM nodes by following the normal procedure.

7. Perform the initial configuration of FusionCompute.


The initial configuration includes loading the license file, configuring the NTP clock source and the time zone, configuring the backup server,
creating clusters, adding hosts, adding storage devices to hosts, and adding network resources to VMs.
Note the following configuration requirements:

Add the DR hosts at the production site to the planned DR cluster, which includes the management cluster.

Enable HA in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and Policy Delay to 3 to 5 minutes (configure it based on the environment requirements).

If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.

When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.

Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.

Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.

Add hosts at the DR site to a DR cluster after the large Layer 2 network is connected.

8. Add hosts at the DR site to a DR cluster and configure the DR hosts.


For details, see "Adding Hosts" in FusionCompute 8.8.0 User Guide (Virtualization).
Note the following configuration requirements:

Add the DR hosts at the DR site to the planned DR cluster, which includes the management cluster.

When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.

Provide descriptions to indicate that the hosts and datastores are used for DR when adding DR hosts and creating datastores.

Ensure that the hosts planned to run the VRM VMs use the same DVSs as the hosts where VMs of the original nodes are deployed.

After adding hosts at the DR site to the DR cluster, configure time synchronization on the node.
For details, see "Setting Time Synchronization on a Host" in FusionCompute 8.8.0 User Guide (Virtualization).

9. Configure the hosts at the production site and DR site to preferentially use their own block storage resources.


For details, see "Configuration" > "Basic Service Configuration Guide for Block" > "Configuring Basic Services" in OceanStor Pacific Series
8.2.1 Product Documentation.
After this step, the FusionCompute installation is complete.

Enable the VM template deployment function on the standby VRM node at the production site.

10. On FusionCompute, view and make a note of the ID of the standby VRM VM.

Check whether the target standby VRM node is the default standby node. If it is not the default standby node, perform a switchover between the active and
standby nodes.

11. Use PuTTY to log in to the active VRM node.


Ensure that the management IP address and user gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

12. Run the following command to switch to user root:


su - root

13. Run the TMOUT=XXX command to set the session timeout interval and prevent automatic logout during the operation.

The unit of XXX is seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

14. Run the following command on the active VRM node to enable the standby VRM VM to be cloned to a VM:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh i-00000002
Information similar to the following is displayed:

Please import database password:

15. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:

Open VM i-00000002 operating authority success.

Use the standby VRM template to deploy VMs at the DR site.

16. On FusionCompute, stop the standby VRM VM.


For details, see "Stopping a VM" in FusionCompute 8.8.0 User Guide (Virtualization). After the standby VRM VM is stopped, the system
generates the Failed Heartbeat Communication Between Active and Standby VRM Nodes alarm.

17. On FusionCompute, convert the standby VRM VM to a VM template.


For details, see "Creating a VM Template" in FusionCompute 8.8.0 User Guide (Virtualization). Select the method of converting the VM to a template.

18. On FusionCompute, deploy a VM using the VM template of the standby VRM VM.
For details, see "Deploying a VM Using an Existing Template in the System" in FusionCompute 8.8.0 User Guide (Virtualization).

In the Set Compute Resource area, select a host in the intra-city DR center and select Bind to the selected host. Use the virtualized local storage of the
selected host as the target storage. When configuring VM attributes, select Customize using the Customization Wizard and configure the NIC information
to ensure the NIC information is consistent with that of the standby VRM VM. You can delete the standby VRM VM template at the production site only after
the standby VRM node is deployed at the DR site and the active/standby relationship is restored.

Start the standby VRM VM at the DR site.

19. Use PuTTY to log in to the host where the standby VRM VM resides.


Ensure that the management IP address and user gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

20. Run the following command to switch to user root:


su - root

21. Run the TMOUT=XXX command to set the session timeout interval and prevent automatic logout during the operation.

The unit of XXX is seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

22. Run the following command to add the standby VRM VM ID to the host configuration file:
echo "vm_id" > /etc/vna-api/vrminfo
In the command, vm_id indicates the standby VRM VM ID.

23. Start the standby VRM VM.


For details, see "Starting/Waking Up a VM" in FusionCompute 8.8.0 User Guide (Virtualization).

After the standby VRM VM is started at the DR site, the system automatically restores the active/standby relationship, and then you can delete the standby
VRM VM template at the production site.

Replace the license file.

24. Load the license file again.


Because the ESN of the standby VRM VM is changed, apply for a new license and load the new license file. For details, see Updating the
License File .

Disable the template deployment function of the standby VRM VM at the DR site after the standby VRM VM is deployed.

25. Use PuTTY to log in to the active VRM node.


Ensure that the management IP address and user gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

26. Run the following command to switch to user root:


su - root

27. Run the TMOUT=XXX command to set the session timeout interval and prevent automatic logout during the operation.

The unit of XXX is seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

28. Run the following command on the active VRM VM to disable the template deployment function of the standby VRM VM:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh i-00000002
Information similar to the following is displayed:

Please import database password:

29. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:

Close VM operating authority success.

After this step, the FusionCompute installation is complete.


3.6.1.2.2.1.5 Configuring Storage Devices

Scenarios
In the metropolitan HA scenario for scale-out storage, the storage device configuration is the same as that in a normal deployment scenario without the DR system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure storage devices based on the storage device documentation and the data plan.

Procedure
1. Install the scale-out storage. For details, see "Installation" > "Software Installation Guide" > "Installing the Block Service" > "Connecting to
FusionCompute" in OceanStor Pacific Series 8.2.1 Product Documentation for the desired version.

2. Connect the storage system as instructed in "Storage Resource Creation (Scale-Out Block Storage)" in FusionCompute 8.8.0 User Guide
(Virtualization).
Plan the names of datastores in a unified manner, for example, DR_datastore01.

3. (Optional) In the converged deployment scenario, create a storage port as a replication port by following instructions provided in "Adding a
Storage Port" in FusionCompute 8.8.0 User Guide (Virtualization).

4. Add a remote device. For details, see "Checking the License", "Creating a Replication Cluster", and "Adding a Remote Device" in
"Configuration" > "Feature Guide" > "HyperMetro Feature Guide for Block"> "Installation and Configuration" > "Configuring HyperMetro"
in OceanStor Pacific Series 8.2.1 Product Documentation for the desired version.

5. Disable I/O suspension and forwarding.


Run the following command to switch to user dsware:
su - dsware -s /bin/bash

Disable the active-active I/O suspension function of scale-out storage:

[dsware@FS1_01 root]$ /opt/dsware/client/bin/dswareTool.sh --op globalParametersOperation -opType modify -parameter g_dsware_io_hanging_switch:close

This operation is high risk,please input y to continue:y

[Sat Sep 18 16:28:20 CST 2021] DswareTool operation start.

Enter User Name:admin

Enter Password :

Login server success.

Operation finish successfully. Result Code:0

The count of successful nodes is 5.

The count of failed nodes is 0.

[Sat Sep 18 16:28:30 CST 2021] DswareTool operation end.

Disable the active-active forwarding function of scale-out storage:

[dsware@FS1_01 root]$ /opt/dsware/client/bin/dswareTool.sh --op globalParametersOperation -opType modify -parameter metro_io_fwd_switch:0

This operation is high risk,please input y to continue:y

[Sat Sep 18 16:25:52 CST 2021] DswareTool operation start.

Enter User Name:admin

Enter Password :

Login server success.


Operation finish successfully. Result Code:0

The count of successful nodes is 5.

The count of failed nodes is 0.

[Sat Sep 18 16:26:03 CST 2021] DswareTool operation end.

You need to log in to the FSM node and run the preceding commands on both storage clusters.

6. (Optional) Run the dsware and diagnose commands to check the status of the I/O suspension and forwarding functions. For details, see
OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer).

3.6.1.2.2.1.6 Configuring HA Policies for a DR Cluster

Scenarios
In the HA scenario, configure the HA policy for the DR cluster to enable the DR VMs to start and implement the HA function on local hosts
preferentially.

Prerequisites
Conditions
The HA function has been enabled in the DR cluster.
Data
You have obtained the lists of the hosts and VMs that provide local services in the DR cluster.

Procedure
1. Log in to FusionCompute.

2. Configure HA policies for the DR cluster.


For details, see "Configuring the HA Policy for a Cluster" in FusionCompute 8.8.0 User Guide (Virtualization).
Note the following configuration requirements:

The default HA function of the cluster must be enabled. Otherwise, the HA DR solution fails.

Host Fault Policy: Set it to HA.

Datastore Fault Policy

The following configuration takes effect only when the host datastore is created on virtualized SAN storage and the disk is a scale-out block storage
disk, an eVol disk, or an RDM shared storage disk.

Datastore Fault Handling by Host: Set it to HA.

Policy Delay: You are advised to set this parameter to 3 to 5 minutes (configure it based on the environment requirements).

If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting
the policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the
I/O Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei
technical support to confirm that services will not be affected and then disable the function.

Configure the group fault control policy. If Group Fault Control is enabled, you need to manually disable it.

Fault Control Period (hour): The value ranges from 1 to 168, and the default value is 2.

Number of Hosts That Allow VM HA: The value ranges from 1 to 128. The default value is 2.


3.6.1.2.2.1.7 Creating DR VMs

Scenarios
After deploying the DR system, create DR VMs by following the normal procedure. Then, the DR system automatically implements the DR
function on the DR VMs.

Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating DR VMs.

Procedure
Create DR VMs in the DR cluster.
For details, see VM Provisioning .

3.6.1.2.2.1.8 Creating a Protected Group

Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.

Procedure
Configure DR policies.
For details, see "DR Configuration" > "HA Solution" > "Creating a Protected Group" in OceanStor BCManager 8.6.0 UltraVR User Guide.

3.6.1.2.2.2 DR Commissioning
Commissioning Process

Commissioning DR Switchover

Commissioning DR Data Reprotection

Commissioning DR Switchback

Backing Up Configuration Data

3.6.1.2.2.2.1 Commissioning Process

Purpose
Verify that the DR site properly takes over services when the production site is faulty.

Verify that the production site properly takes over services back after it recovers.

Verify that the DR site properly takes over services when the production site is in maintenance as planned.

After commissioning, back up the management data by exporting the configuration data. The data can be used to restore the system if an
exception occurs or an operation has not achieved the expected result.


Prerequisites
The metropolitan HA system for scale-out storage has been deployed.

DR services can be successfully deployed.

DR policies have been configured.

Commissioning Process
Figure 1 shows the DR solution commissioning process.

Figure 1 Commissioning process

Commissioning Procedure
Execute the following test cases:

Commissioning DR Switchover

Commissioning DR Data Reprotection

Commissioning DR Switchback

Backing Up Configuration Data

Expected Result
The result of each test case meets expectation.

3.6.1.2.2.2.2 Commissioning DR Switchover

Purpose
By disconnecting the storage link or simulating a host fault, check whether DR is implemented for the VMs to verify that the metropolitan HA solution for scale-out storage is available.

Constraints and Limitations


None

Prerequisites

The metropolitan HA system for scale-out storage has been deployed.

DR policies have been configured.

Commissioning Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.

2. Commission DR switchover.
If some hosts or VMs at the production site are faulty:
When a disaster occurs, VMs on CNA1 at the production site are unavailable for a short period of time (depending on the time taken to start
the new VMs). After DR, VMs on CNA1 are migrated to CNA2, and VMs at the DR site access storage resources at the DR site. After the
hosts at the production site recover, you can migrate VMs back to the production site.

After the DR switchover is successful, execute the required test cases at the DR site and ensure that the execution is successful.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the number of DR VMs is consistent with that at the production site.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.

Additional Information
None

3.6.1.2.2.2.3 Commissioning DR Data Reprotection

Purpose
Power on scale-out storage devices at the production site and power off the scale-out storage devices at the DR site to check whether the data
synchronization function is supported after commissioning the DR switchover.

Constraints and Limitations


None

Prerequisites
You have executed the DR switchover test case.

Procedure
1. Randomly select a DR VM and save a test file on the VM.

2. Power on the storage device at the original production site.

3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices at the original
DR site.


4. Wait for a period of time until the VM status becomes normal and check the consistency of the VM test files.

Expected Result
The VM is running properly and the test file data is consistent.

Additional Information
None

3.6.1.2.2.2.4 Commissioning DR Switchback

Purpose
By powering on the DR hosts deployed at the original production site and manually migrating VMs, verify the availability of the switchback function of the metropolitan HA solution for scale-out storage.

Constraints and Limitations


None

Prerequisites
You have commissioned the DR data reprotection function.

Commissioning Procedure
1. Power on the DR hosts at the production site.

2. Manually migrate VMs back to the production site.

3. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts at the production site.

Expected Result
VMs are running properly on the hosts at the production site and the services at the production site are running properly.

Additional Information
None

3.6.1.2.2.2.5 Backing Up Configuration Data

Scenarios
This section guides administrators to back up the UltraVR configuration data before performing critical operations, such as a system upgrade or critical data modification, or after changing the configuration. The backup data can be used to restore the database if an exception occurs or the operation has not achieved the expected result.
The system supports automatic backup and manual backup.

If you choose automatic backup, prepare an SFTP server and configure the SFTP server information on UltraVR. After the configuration is
complete, the system backs up system data to the SFTP server at 02:00 every day based on the UltraVR server time. The UltraVR server time
at the production site and DR site must be consistent. An SFTP server can retain backup data for a maximum of seven days. Data older than
seven days will be automatically deleted. If a backup task fails, the system generates an alarm. The alarm will be automatically cleared when
the next backup task succeeds. The backup directory is:


Linux: /SFTP user/CloudComputing/DRBackup/eReplication management IP address/YYYY-MM-DD/Auto/ConfigData.zip


Windows: \CloudComputing\DRBackup\eReplication management IP address\YYYY-MM-DD\Auto\ConfigData.zip

If you choose manual backup, manually export the system configuration data and save it locally.
During manual backup, export both the configuration data at the production site and that at the DR site.

Prerequisites
Conditions

You have logged in to UltraVR.

You have obtained the IP address, username, password, and port of the SFTP server if you choose automatic backup.

Procedure
Automatic backup

1. On UltraVR, choose Settings.

2. In the navigation pane, choose Data Maintenance > System Configuration Data.

3. Choose Automatic Backup.

4. Configure the backup server information.

SFTP IP

SFTP User Name

SFTP Password

SFTP Port

Encryption Password

To secure configuration data, the backup server must use the SFTP protocol.

5. Click OK.

6. In the Warning dialog box that is displayed, read the content of the dialog box carefully and click OK.

After you select Automatic Backup, for any change of the SFTP server information, you can directly modify the information and click OK.

Manual backup

1. On UltraVR, choose Settings.

2. In the navigation pane, choose Data Maintenance > System Configuration Data.

3. Choose Manual Backup.

4. In the System Configuration Data area, click Export, enter the encryption password, and click OK.

5. Download the ConfigData.zip file to your local system.

3.6.1.2.3 Metropolitan HA for eVol Storage


DR System Installation and Configuration

DR Commissioning


3.6.1.2.3.1 DR System Installation and Configuration


Installation and Configuration Process

Preparing for Installation

Configuring Switches

Installing FusionCompute

Configuring Storage Devices

Installing UltraVR

Creating DR VMs

Configuring DR Policies

3.6.1.2.3.1.1 Installation and Configuration Process


Figure 1 shows the process for installing and configuring the DR system.

Figure 1 Installation and configuration process

3.6.1.2.3.1.2 Preparing for Installation


Note the following requirements for installing the DR system.

Installation Requirements
Table 1 lists the installation requirements for the DR system.


Table 1 Installation requirements

Object: Local PC
Description: The PC that is used for the installation.
Requirement: The local PC only needs to meet the requirements for installing FusionCompute and there is no special requirement for it.
Remarks: For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.

Object: Server
Description: The server that functions as a host (CNA node) on FusionCompute.
Requirement: The server must meet the following requirements:
- Meets the host requirements for installing FusionCompute.
- Supports the FC HBA port and can communicate with the FC switches.
NOTE: In the x86 architecture, if it is a blade server, such as the E6000 or E9000 server, the blades must be able to connect to the FC switch modules using dedicated FC network ports.
Remarks: For details, see System Requirements.

Object: Storage device
Description: Storage devices used in the DR solution.
Requirement:
- eVol storage must be used and meet the storage compatibility requirements of UltraVR.
- Independent 10GE/25GE network ports are used for eVol storage replication. Each eVol storage device provides at least two network ports for storage replication.
Remarks: For details, see System Requirements.

Object: Access switch
Description: Access switches of the storage, management, and service planes.
Requirement: Each access switch has sufficient network ports to connect to the data replication network ports on the eVol storage devices.
Remarks: None

Object: Aggregation switch/Core switch
Description: Aggregation and core switches at the production and DR sites.
Requirement: A route to the data replication network is configured for aggregation and core switches to route IP addresses.
Remarks: None

Object: Firewall
Description: Firewalls at the production and DR sites.
Requirement: No special requirement.
Remarks: None

Object: Network environment
Description: Network between the production site and the DR site.
Requirement: The network must meet the following requirements:
- The management plane has a bandwidth of at least 10 Mbit/s.
- The bandwidth of the data replication plane depends on the total amount of data changed in the replication period, which is calculated as follows: Number of VMs to be protected x Average amount of data changed in the replication period per VM (MB) x 8/(Replication period (minute) x 60). (A worked example of this calculation follows this table.)
Remarks: None
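The data replication bandwidth requirement in Table 1 can be estimated before deployment. The following Python sketch simply applies the formula above; the number of protected VMs, the average changed data per VM, and the replication period are hypothetical planning values and must be replaced with the figures from the data plan.

    # Estimate the data replication bandwidth between the production and DR sites,
    # using the formula from Table 1:
    #   bandwidth (Mbit/s) = VMs x changed data per VM (MB) x 8 / (period (min) x 60)
    def replication_bandwidth_mbits(protected_vms, avg_changed_mb_per_vm, period_minutes):
        """Return the estimated replication bandwidth in Mbit/s."""
        return protected_vms * avg_changed_mb_per_vm * 8 / (period_minutes * 60)

    if __name__ == "__main__":
        # Hypothetical planning values; replace with the figures from the data plan.
        vms = 100                # number of VMs to be protected
        changed_mb_per_vm = 300  # average data changed per VM in one replication period (MB)
        period_min = 30          # replication period (minutes)
        mbits = replication_bandwidth_mbits(vms, changed_mb_per_vm, period_min)
        print(f"Estimated replication bandwidth: {mbits:.1f} Mbit/s")
        # 100 x 300 MB x 8 / (30 x 60 s) = about 133.3 Mbit/s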

Preparing Documents
Table 2 lists the documents required for deploying the DR solution.

Table 2 Preparing documents

Document Category: Integration design document
Document Name: Network Integration Design
Description: Describes the deployment plan, networking plan, and the bandwidth plan.
How to Obtain: Obtain this document from the engineering supervisor.

Document Category: Integration design document
Document Name: Network Integration Design Data Planning
Description: Describes the network data plan, such as the node IP address plan and the storage plan.
How to Obtain: Obtain this document from the engineering supervisor.

Document Category: Version document
Document Name: Datacenter Virtualization Solution xxx Version Mapping (NOTE: xxx indicates the software version.)
Description: Provides information about hardware and software version mapping.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Document Category: UltraVR product document
Document Name: UltraVR User Guide
Description: Provides guidance on how to install, configure, and commission UltraVR.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Document Category: OceanStor Dorado series product documentation
Document Name: OceanStor Dorado Series Product Documentation
Description: Provides guidance on how to install, configure, and commission OceanStor Dorado storage.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Document Category: Server product documentation
Document Name: Server document package
Description: Provides guidance on how to configure the servers.
How to Obtain: This document package is provided by the server vendor.

Document Category: Switch product documentation
Document Name: Switch document package
Description: Provides guidance on how to configure the switches by running commands.
How to Obtain: This document package is provided by the switch vendor.

After obtaining the related documents by referring to Datacenter Virtualization Solution xxx Version Mapping, make preparations for the installation, such as
obtaining the software packages and installation tools. For details about the installation preparations, see the related documents.

Software Packages
The DR solution has no special requirements for the software packages. Obtain the following software packages by referring to Datacenter Virtualization Solution xxx Version Mapping:

FusionCompute

UltraVR

OceanStor Dorado series storage

3.6.1.2.3.1.3 Configuring Switches

Scenarios
In the replication DR scenario for eVol storage, the switch configuration is the same as that in a common deployment scenario without the DR
system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure switches based on the network device documentation and the data plan.

Procedure
Configure access switches.

1. Configure access switches based on the data plan and the access switch documents.
Each access switch must have enough network ports to connect to the storage replication ports on the OceanStor Dorado devices. There are
no other special configuration requirements.

Configure aggregation switches.

2. Configure aggregation switches based on the data plan and the aggregation switch documents.
A route to the storage replication network must be configured for aggregation switches to route IP addresses.

Configure core switches.

3. Configure core switches based on the data plan and the core switch documents.
A route to the storage replication network must be configured for core switches to route IP addresses. Management planes of the production
site and the DR site must be able to communicate with each other.
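As a quick check after the routes are configured, cross-site management-plane reachability can be verified from a node on the local management plane. The following Python sketch is a minimal example under assumed conditions (Linux ping syntax, ICMP permitted by the firewalls); the peer management IP address is a placeholder.

    # Verify that the management plane of the peer site is reachable after the
    # switch routes have been configured.
    import subprocess

    PEER_MGMT_IP = "192.0.2.30"  # placeholder: a management IP address at the peer site

    def peer_reachable(ip: str, count: int = 3) -> bool:
        """Return True if the peer answers ICMP echo requests."""
        result = subprocess.run(["ping", "-c", str(count), ip],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    if __name__ == "__main__":
        state = "reachable" if peer_reachable(PEER_MGMT_IP) else "NOT reachable"
        print(f"Peer management plane {PEER_MGMT_IP} is {state}")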


3.6.1.2.3.1.4 Installing FusionCompute

Scenarios
This section guides software commissioning engineers to install FusionCompute in the replication DR scenario for eVol storage.

Procedure
For details, see "Installing FusionCompute" in FusionCompute 8.8.0 Software Installation Guide.

3.6.1.2.3.1.5 Configuring Storage Devices

Scenarios
In the replication DR scenario for eVol storage, the storage device configuration is the same as that in a common deployment scenario without the
DR system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure storage devices based on the storage device documentation and the data plan.

Procedure
1. Install OceanStor Dorado storage systems as instructed in "Install and Initialize" > "Installation Guide" in OceanStor Dorado Series Product
Documentation.

2. Connect storage devices. For details, see "Storage Resource Creation (for eVol Storage)" in FusionCompute 8.8.0 User Guide (Virtualization)
of this document.
Plan the names of datastores in a unified manner, for example, DR_datastore01.

3. Create a storage port as a remote replication port. For details, see "Adding a Storage Port" in FusionCompute 8.8.0 User Guide
(Virtualization) of this document.

You are advised to use two physical NICs to form an aggregation port and create a storage port on the aggregation port.
It is recommended that the remote replication plane be separated from the storage plane. That is, the remote replication port and storage port are created
on different aggregation ports or NICs.

4. Configure storage remote replication as instructed in "Configure" > "HyperReplication Feature Guide for Block" > "Configuring and
Managing HyperReplication (System Users)" > "Configuring HyperReplication" in OceanStor Dorado Series Product Documentation.

3.6.1.2.3.1.6 Installing UltraVR

Scenarios
This section guides software commissioning engineers to install and configure the UltraVR DR management software to implement the eVol
storage-based replication DR solution.

Prerequisites
Conditions

You have installed and configured FusionCompute.

You have obtained the software packages and the data required for installing UltraVR.

Procedure

1. Install the UltraVR DR management software.


For details, see Installation and Uninstallation in UltraVR User Guide.

2. Configure the UltraVR DR management software.


For details, see DR Configuration > Active-Passive DR Solution in UltraVR User Guide.

3.6.1.2.3.1.7 Creating DR VMs

Scenarios
After the DR system is installed, you can create VMs by following the normal service process and use the DR system to protect these VMs.

Prerequisites
Conditions
You have completed the initial service configuration.
Data
You have obtained the data required for creating VMs.

Procedure
1. Determine the DR VM creation mode.

To create DR VMs, go to 2.

To implement the DR solution for existing VMs, go to 3.

2. Create DR VMs on the planned DR volumes. For details, see Provisioning a VM .

3. Migrate VMs that do not reside on DR volumes to the DR volumes and migrate non-DR VMs residing on DR volumes to non-DR volumes.
For details, see "Migrating a VM (Change Compute Resource)" in FusionCompute 8.8.0 User Guide (Virtualization).
During VM storage migration, non-DR VMs can only be migrated to DR datastores through whole storage migration. VMs to which multiple
disks are attached cannot be migrated through single-disk migration.

After DR VMs are created, VM information changes. In this case, you can update resource information manually or using UltraVR periodic polling. For details, see
DR Management > Active-Passive DR Solution > DR Protection > Refreshing Resource Information in UltraVR User Guide.

3.6.1.2.3.1.8 Configuring DR Policies

Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.

Procedure
1. Check whether DR policies are configured for the first time.

If yes, go to 2.

If no, no further action is required.

2. Configure DR policies for the first time.


For details, see "DR Configuration" > "Active-Passive DR Solution" > "Creating a Protected Group" in UltraVR User Guide.

3.6.1.2.3.2 DR Commissioning
Commissioning Process


Commissioning DR Switchover

Commissioning DR Data Reprotection

Commissioning DR Switchback

3.6.1.2.3.2.1 Commissioning Process

Purpose
Verify that the DR site properly takes over services if the production site is faulty.

Check that the data of the DR site and production site is synchronized after the DR site takes over services.

Verify that the production site properly takes over services back when it is recovered.

Prerequisites
A DR system for eVol storage has been deployed.

DR services can be successfully deployed.

You have configured the HA and cluster scheduling policies for a DR cluster.

You have checked that the VM to be migrated is not bound to a host and does not have a CD/DVD-ROM drive or Tools mounted.

Commissioning Process
Figure 1 shows the DR solution commissioning process.

Figure 1 Commissioning process

Commissioning Procedure
Execute the following test cases:

Commissioning DR Switchover

Commissioning DR Data Reprotection

Commissioning DR Switchback


Expected Result
The result of each test case meets expectations.

3.6.1.2.3.2.2 Commissioning DR Switchover

Purpose
By powering off the DR hosts and eVol storage devices deployed at the production site, check whether automatic DR is implemented for VMs and confirm the availability of the DR solution for eVol storage.

Constraints and Limitations


None

Prerequisites
A DR system for eVol storage has been deployed.
You have checked that the VM to be migrated is not bound to a host and does not have a CD/DVD-ROM drive or Tools mounted.

Commissioning Procedure
1. On FusionCompute, make a note of the status of the VMs at the production site.
Make a note of the number of running VMs at the production site.

2. Power off all DR hosts and DR eVol storage at the production site.

3. On FusionCompute, check the migration status of the running VMs.


If all the running VMs are migrated from the production site to the DR site and these VMs are in the running state, the system can implement
DR on VMs.

4. Select a VM and log in to it using VNC.


If the VNC login page is displayed, the VM is running properly.

5. Execute the following test cases and verify that all these cases can be executed successfully.

Create VMs.

Migrate VMs.

Stop VMs.

Restart VMs.

Start VMs.

Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.

Additional Information
None

3.6.1.2.3.2.3 Commissioning DR Data Reprotection


Purpose
By powering on the eVol storage device deployed at the original production site and then powering off the eVol storage device deployed at the
original DR site, commission the data resynchronization function after the original production site recovers.

Constraints and Limitations


None

Prerequisites
You have executed the DR switchover commissioning case.

Commissioning Procedure
1. Randomly select a DR VM and save a test file on the VM.

2. Power on the eVol storage device at the original production site.

3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices at the original
DR site.

4. Verify that the DR VM is in the running state.

5. Open the test file saved on the VM in 1 to check the file consistency (an optional checksum-based check is sketched after these steps).
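The file check in 1 and 5 can be made deterministic with a checksum. The following Python sketch is an optional, minimal example run inside the selected DR VM; the file path is hypothetical. Run it without arguments in 1 to write the test file and print its hash, then rerun it with the verify argument in 5 and compare the two hashes.

    # Write a marker file before the reprotection test, then verify it afterwards.
    import hashlib
    import sys
    from pathlib import Path

    TEST_FILE = Path("/tmp/dr_consistency_check.txt")  # hypothetical path inside the VM

    def sha256_of(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "verify":
            # Step 5: after the storage at the original DR site is powered off, check the file.
            print(f"{TEST_FILE}: sha256 = {sha256_of(TEST_FILE)}")
        else:
            # Step 1: save test data on the DR VM before the resynchronization.
            TEST_FILE.write_text("DR reprotection test marker\n")
            print(f"Wrote {TEST_FILE}, sha256 = {sha256_of(TEST_FILE)}")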

Expected Result
The VM is running properly and the test file data is consistent.

Additional Information
None

3.6.1.2.3.2.4 Commissioning DR Switchback

Purpose
By powering on the DR hosts deployed at the production site and enabling the compute resource scheduling function in the DR cluster, commission
the availability of the switchback function provided by the DR solution for eVol storage.

Constraints and Limitations


None

Prerequisites
You have commissioned the DR data reprotection function.
You have checked that the VM to be migrated is not bound to a host and does not have a CD/DVD-ROM drive or Tools mounted.

Procedure
1. Power on the DR hosts at the production site.

2. On FusionCompute, enable the compute resource scheduling function.

3. Perform the switchover between the active and standby VRM nodes to change the VRM node at the production site to the active node.


4. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts at the production site.

Expected Result
VMs are running properly on the hosts at the production site and the services at the production site are running properly.

Additional Information
None

3.6.1.3 Active-Standby DR
Active-Standby DR Solution for Flash Storage

Active-Standby DR Solution for Scale-Out Storage

3.6.1.3.1 Active-Standby DR Solution for Flash Storage


DR System Installation and Configuration

DR Commissioning

3.6.1.3.1.1 DR System Installation and Configuration


Installation and Configuration Process

Preparing for Installation

Configuring Switches

Configuring Storage Devices

Creating DR VMs

Configuring the Remote Replication Relationship

Configuring DR Policies

3.6.1.3.1.1.1 Installation and Configuration Process


Figure 1 shows the process for installing and configuring the DR system.

Figure 1 Installation and configuration process


3.6.1.3.1.1.2 Preparing for Installation


Note the following requirements for installing the DR system.

Installation Requirements
Table 1 lists the installation requirements for the DR system.

Table 1 Installation requirements

Object: Local PC
Description: The PC that is used for the installation.
Requirement: The local PC only needs to meet the requirements for installing FusionCompute and there is no special requirement for it.
Remarks: For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.

Object: Server
Description: The server that functions as a host (CNA node) on FusionCompute.
Requirement: The server must meet the following requirements:
- Meets the host requirements for installing FusionCompute.
- Supports the FC HBA port and can communicate with the FC switches.
NOTE: If it is a blade server, such as the E6000 or E9000 server, the blades must be able to connect to the FC switch modules using dedicated FC network ports.
Remarks: For details, see System Requirements.

Object: Storage device
Description: Storage devices used in the DR solution.
Requirement:
- Must be the SAN devices meeting the storage compatibility requirements of UltraVR.
- The SAN devices use independent GE/10GE/25GE network ports for data replication, and each SAN device provides at least two network ports for data replication.
Remarks: For details, see System Requirements.

Object: Access switch
Description: Access switches of the storage, management, and service planes.
Requirement: Each access switch has sufficient network ports to connect to the data replication network ports on the SAN devices.
Remarks: None

Object: Aggregation switch/Core switch
Description: Aggregation and core switches at the production and DR sites.
Requirement: A route to the data replication network is configured for aggregation and core switches to route IP addresses.
Remarks: None

Object: Firewall
Description: Firewalls at the production and DR sites.
Requirement: No special requirement.
Remarks: None

Object: Network
Description: Network between the production site and the DR site.
Requirement: The network must meet the following requirements:
- The management plane has a bandwidth of at least 10 Mbit/s.
- The bandwidth of the data replication plane depends on the volume of all the changed data in the VM replication period, which is calculated as follows: Number of VMs to be protected x Average volume of the changed data in the VM replication period (MB) x 8/(VM replication period (minute) x 60).
Remarks: None

Documents
Table 2 lists the documents required for deploying the DR solution.

Table 2 Documents

Document Category: Integration design document
Document Name: Network integration design
Description: Describes the deployment plan, networking plan, and the bandwidth plan.
How to Obtain: Obtain this document from the engineering supervisor.

Document Category: Integration design document
Document Name: Data planning template for the network integration design
Description: Describes the network data plan, such as the node IP address plan and the storage plan.
How to Obtain: Obtain this document from the engineering supervisor.

Document Category: Version document
Document Name: Datacenter Virtualization Solution 2.1.0 Version Mapping
Description: Provides information about hardware and software version mapping.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Document Category: UltraVR product document
Document Name: UltraVR User Guide
Description: Provides guidance on how to install, configure, and commission UltraVR.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Document Category: SAN product documentation
Document Name: SAN device document package, for example, OceanStor Pacific Series Product Documentation.
Description: Provides guidance on how to install, configure, and commission the SAN devices.
How to Obtain: This document package is provided by the SAN device vendor. (NOTE: For detailed version information, see Constraints and Limitations.)

Document Category: Switch product documentation
Document Name: Switch document package
Description: Provides information about how to configure the switches by running commands.
How to Obtain: This document package is provided by the switch vendor.

Document Category: Server product documentation
Document Name: Server document package
Description: Provides information about how to configure the servers.
How to Obtain: This document package is provided by the server vendor.

After obtaining the related documents by referring to Datacenter Virtualization Solution 2.1.0 Version Mapping, make preparations for the installation, such as
obtaining the software packages and installation tools. For details about the installation preparations, see the related documents.

Software Packages
The DR solution has no special requirements for the software packages. Obtain the software packages of UltraVR by referring to Datacenter
Virtualization Solution 2.1.0 Version Mapping.


3.6.1.3.1.1.3 Configuring Switches

Scenarios
In the active-standby DR scenario for flash storage, the switch configuration is the same as that in a common deployment scenario without the DR
system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure switches based on the network device documentation and the data plan.

Procedure
Configure access switches.

1. Configure access switches based on the data plan and the access switch documents.
Each access switch must have enough network ports to connect to the data replication network ports on the SAN devices. There are no
special configuration requirements.

Configure aggregation switches.

2. Configure aggregation switches based on the data plan and the aggregation switch documents.
A route to the data replication network is configured for aggregation switches to forward IP addresses.

Configure core switches.

3. Configure core switches based on the data plan and the core switch documents.
A route to the data replication network is configured for core switches to forward IP addresses. Management planes of the production site and
the DR site must be able to communicate with each other.

3.6.1.3.1.1.4 Configuring Storage Devices

Scenarios
In the active-standby DR scenario for flash storage, the storage device configuration is the same as that in a common deployment scenario without
the DR system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure storage devices based on the storage device documentation and the data plan. OceanStor V5 series storage is used as an
example. For details, see "Configuring Basic Storage Services" in the block service section in OceanStor Product Documentation.

Procedure
1. Complete the initial configuration of the SAN devices based on the data plan in the SAN device documents.

If the storage is subject to the restriction that a LUN mapped to a host cannot be the secondary LUN of a remote replication pair, remote replication cannot be created on such a LUN. In this case, configure basic storage services in the DR center only after completing Configuring the Remote Replication Relationship.

2. On FusionCompute, use the planned DR LUNs to create datastores and name these datastores in a unified manner to simplify management,
such as DR_datastore01. Select Virtualization when creating datastores.

3.6.1.3.1.1.5 Creating DR VMs

Scenarios
After the DR system is installed, you can create VMs by following the normal service process and use the DR system to protect these VMs.


Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating VMs.

Procedure
1. Determine the DR VM creation mode.

To create DR VMs, go to 2.

To implement the DR solution for existing VMs, go to 3.

2. Create DR VMs on the planned DR LUNs. For details, see VM Provisioning .

3. Migrate VMs that are not created on DR LUNs to the DR LUNs and migrate non-DR VMs created on DR LUNs to the non-DR LUNs.
For details, see "Migrating a Whole VM" in FusionCompute 8.8.0 User Guide (Virtualization).
During VM storage migration, non-DR VMs can only be migrated to DR datastores through whole storage migration. VMs to which multiple
disks are attached cannot be migrated through single-disk migration.

After DR VMs are created, VM information changes. In this case, you can update resource information manually or using UltraVR periodic polling. For details, see
DR Management > Active-Passive DR Solution > DR Protection > Refreshing Resource Information in UltraVR User Guide.

3.6.1.3.1.1.6 Configuring the Remote Replication Relationship

Scenarios
After the DR system is installed and configured and DR VMs are provisioned or migrated to DR LUNs, configure the remote replication
relationship and consistency groups for DR LUNs on storage devices. Then, the remote replication feature of storage devices can be used to
implement DR for VMs.

Prerequisites
Conditions

The DR system has been installed and configured.

DR VMs have been provisioned or migrated to DR LUNs.

Procedure
1. Configure asynchronous or synchronous remote replication for DR LUNs on the storage device as planned. For details, see
"HyperReplication Feature Guide for Block" in the storage device product documentation.
The sizes of the active and standby LUNs must be the same. If resource LUNs need to be created, you need to configure one resource LUN
for each of controller A and controller B of the storage system. It is recommended that the size of each resource LUN be set to half the
maximum size of a resource pool supported by the storage system. Resource LUNs and the active remote replication LUN must be in
different resource pools.

2. Perform initial data synchronization.


Because the initial data synchronization takes a long time, either of the following synchronization schemes can be used to reduce the synchronization duration (a rough duration estimate is sketched after these options):

Before initial synchronization, migrate the SAN devices at the DR site to the production site to reduce bandwidth consumption.

Deploy a high-bandwidth network for initial synchronization, for example, a 10GE network.
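To judge which scheme is needed, the initial synchronization time can be estimated roughly. The following Python sketch is a back-of-the-envelope calculation under assumed values (decimal units, an assumed 70% usable share of the link); the capacity and bandwidth figures are hypothetical.

    # Rough estimate of the initial remote replication synchronization time.
    def initial_sync_hours(total_data_gb, link_mbps, efficiency=0.7):
        """Return the estimated synchronization time in hours.

        total_data_gb -- total capacity of the DR LUNs to synchronize (GB)
        link_mbps     -- replication link bandwidth (Mbit/s)
        efficiency    -- assumed usable fraction of the link
        """
        total_mbit = total_data_gb * 8 * 1000  # GB -> Mbit (decimal units)
        return total_mbit / (link_mbps * efficiency) / 3600

    if __name__ == "__main__":
        # Hypothetical values; replace with the figures from the data plan.
        hours = initial_sync_hours(total_data_gb=20000, link_mbps=10000)
        print(f"Estimated initial synchronization time: {hours:.1f} h")
        # 20 TB over a 10GE link at 70% efficiency: about 6.3 hours.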


Precautions for initial synchronization are as follows:

Allow only necessary I/O services at the production site during the synchronization. The reason is that the remote replication may be interrupted when resources
of the resource LUN are used up. If this case occurs, you need to manually initiate incremental synchronization.
Before the synchronization is complete, do not perform DR operations, such as a DR switchover, a scheduled migration, or a DR drill.

Precautions for using remote asynchronous replication are as follows:

After the DR relationship is established between the production and DR sites, avoid creating VMs on or migrating VMs to DR LUNs and migrating or deleting
VMs from DR LUNs.
If capacity expansion is required, create a LUN, create VMs that use the storage resources of the LUN, evaluate the recovery point objective (RPO), and
configure a remote replication LUN pair. If data is copied using networks, the initial synchronization consumes more time.
If you have to provision VMs to the existing remote replication LUNs, the following conditions must be met:
Provision VMs during off-peak hours.
The VMs to be provisioned should use thin provisioning disks or thick provisioning lazy zeroed disks rather than common disks.
The total amount of data on all VMs provisioned each time must not exceed 90% of the resource LUN capacity (a quick batch-size check is sketched after this list). After provisioning a batch of VMs, perform immediate synchronization on storage devices. Then, provision another batch of VMs.
After VM provisioning, check whether the RPO meets the service demand and adjust the RPO as required.
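The 90% batch-size precaution above can be turned into a quick planning check. The following Python sketch is a minimal example with hypothetical figures: it reports how many VMs fit into one provisioning batch for a given resource LUN capacity and average data volume per VM.

    # Plan VM provisioning batches so that the data written per batch stays
    # within 90% of the resource LUN capacity.
    def max_vms_per_batch(resource_lun_capacity_gb, avg_vm_data_gb, limit_ratio=0.9):
        """Return how many VMs can be provisioned in one batch."""
        usable_gb = resource_lun_capacity_gb * limit_ratio
        return int(usable_gb // avg_vm_data_gb)

    if __name__ == "__main__":
        # Hypothetical values; replace with the figures from the storage plan.
        resource_lun_gb = 2048  # capacity of the resource LUN (GB)
        avg_vm_data_gb = 60     # average data volume written per provisioned VM (GB)
        total_vms = 100         # total number of VMs to provision

        per_batch = max_vms_per_batch(resource_lun_gb, avg_vm_data_gb)
        batches = -(-total_vms // per_batch)  # ceiling division
        print(f"Provision at most {per_batch} VMs per batch "
              f"({batches} batches for {total_vms} VMs); "
              "synchronize the remote replication after each batch.")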

3.6.1.3.1.1.7 Configuring DR Policies

Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.

Procedure
1. Check whether DR policies are configured for the first time.

If yes, go to 2.

If no, go to 3.

2. Configure DR policies for the first time.


For details, see "DR Configuration" > "Active-Passive DR Solution" > "Creating a Protected Group" in UltraVR User Guide.

3. Modify DR policies.
For details, see DR Management > Active-Passive DR Solution > DR Protection > Modifying Protection Policies in UltraVR User
Guide.

3.6.1.3.1.2 DR Commissioning
Commissioning Process

Commissioning a DR Test

Commissioning Scheduled Migration

Commissioning Fault Recovery

Commissioning Reprotection

Commissioning DR Switchback

Backing Up Configuration Data

3.6.1.3.1.2.1 Commissioning Process

Purpose

Verify that the DR site properly takes over services if the production site is faulty.

Verify that the production site properly takes over services back when it is recovered.

Verify that a recovery plan is feasible, and adjust and optimize the recovery procedure as required.

Verify that the DR site properly takes over services when the production site is in maintenance as planned.

After commissioning, back up the management data by exporting the configuration data. The data can be used to recover the system if an
exception occurs or an operation has not achieved the expected result.

Prerequisites
The active-standby DR system for flash storage has been deployed.

DR services can be successfully deployed.

DR policies have been configured.

Commissioning Process
Figure 1 shows the DR solution commissioning process.

Figure 1 Commissioning process

Procedure
Execute the following test cases:

Commissioning a DR Test

Commissioning Scheduled Migration

Commissioning Fault Recovery

Commissioning Reprotection

Commissioning DR Switchback

Backing Up Configuration Data

Expected Result
The result of each test case meets expectations.

3.6.1.3.1.2.2 Commissioning a DR Test



Purpose
Verify that a recovery plan is correct and executable by testing the recovery plan.

Constraints and Limitations


None

Prerequisites
The active-standby DR system for flash storage has been deployed.

DR policies have been configured.

A recovery plan has been created.

Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.

2. Commission a DR test.
For details, see "DR Management" > "Active-Passive DR Solution" > "DR Recovery" > "DR Testing in the DR Center" in OceanStor
BCManager 8.6.0 UltraVR User Guide.

During DR test commissioning, before clearing drilling data from the DR site, execute the required test cases at the DR site and ensure that the execution is
successful.
On FusionCompute at the DR site, view the number of drill VMs and ensure that the number of drill VMs is consistent with that of DR VMs at the
production site.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts at the DR site. Services at the DR site are running properly, and services at the production site are not
affected.

Additional Information
None

3.6.1.3.1.2.3 Commissioning Scheduled Migration

Purpose
Verify that the scheduled migration function is available by executing a recovery plan.

Constraints and Limitations


None

Prerequisites

The active-standby DR system for flash storage has been deployed.

DR policies have been configured.

A recovery plan has been created.

The test and clearing operations have been successfully performed.

Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.

2. Commission scheduled migration.


For details, see DR Management > Active-Passive DR Solution > DR Recovery > Planned Migration of Services in the Production
Center in UltraVR User Guide.

After the scheduled migration, execute the required test cases at the DR site and ensure that the execution is successful before performing reprotection.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the numbers of DR VMs at the DR and production sites are consistent.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts at the DR site. Services at the DR site are running properly, and services at the production site are not
affected.

Additional Information
None

3.6.1.3.1.2.4 Commissioning Fault Recovery

Purpose
Disconnect the storage link and execute a recovery plan to check whether DR can be implemented for VMs and further confirm the availability of
the array-based replication DR solution.

Constraints and Limitations


None

Prerequisites
The active-standby DR system for flash storage has been deployed.

DR policies have been configured.

A recovery plan that meets the requirements has been created.

The test and clearing operations have been successfully performed.

Procedure

1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.

2. Commission fault recovery.


For details, see DR Management > Active-Passive DR Solution > DR Recovery > Service Migration in the Production Center After
Fault Recovery in UltraVR User Guide.

After the fault is rectified, execute the required test cases at the DR site and ensure that the execution is successful before performing reprotection.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the numbers of DR VMs at the DR and production sites are consistent.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.

Additional Information
None

3.6.1.3.1.2.5 Commissioning Reprotection

Purpose
Verify that the reprotection function is available by executing a recovery plan.

Constraints and Limitations


None

Prerequisites
The active-standby DR system for flash storage has been deployed.

DR policies have been configured.

A recovery plan has been created.

Scheduled VM migration or fault recovery has been performed.

All faults at the production site have been rectified.

Procedure
1. Check the reprotection type.

For scheduled migration, go to 2.

For fault recovery, go to 3.

2. Perform reprotection in scheduled migration scenarios.


For details, see reprotection related contents in DR Management > Active-Passive DR Solution > DR Recovery > Planned Migration of
Services in the Production Center in UltraVR User Guide.


3. Perform reprotection in the fault recovery scenarios.


For details, see reprotection related contents in DR Management > Active-Passive DR Solution > DR Recovery > Service Migration in
the Production Center After Fault Recovery in UltraVR User Guide.

Expected Result
The reprotection is successful.

Additional Information
None

3.6.1.3.1.2.6 Commissioning DR Switchback

Purpose
After services are switched from the production site to the DR site through the scheduled migration, switch the services back to the production site
based on the drill plan.
Services are migrated from the production site to the DR site when a recoverable fault, such as a power outage, occurs. After the production site
recovers from the fault, synchronize the data generated during DR from the DR site to the production site and then migrate services back to the
production site.

Constraints and Limitations


None

Prerequisites
The active-standby DR system for flash storage has been deployed.

DR policies have been configured.

A recovery plan has been created.

All faults at the production site have been rectified.

Reprotection has been performed.

Procedure
1. Determine the DR switchback type.

For scheduled migration, go to 2.

For fault recovery, go to 3.

2. Commission DR switchback in scheduled migration scenarios.


For details, see DR Management > Active-Passive DR Solution > DR Recovery > Planned Switchback of Services in the DR Center in
UltraVR User Guide.

3. Commission DR switchback in fault recovery scenarios.


For details, see DR Management > Active-Passive DR Solution > DR Recovery > Service Switchback in the DR Center After Fault
Recovery in UltraVR User Guide.

After commissioning DR switchback, execute the following test cases at the production site and ensure that the execution is successful.

Select a running VM randomly and log in to the VM using VNC.

If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
The DR switchback is successful. VMs are running on the hosts at the production site properly. Services at the production site are running properly
and services at the DR site are not affected.

Additional Information
None

3.6.1.3.1.2.7 Backing Up Configuration Data

Scenarios
This section describes how administrators back up the UltraVR configuration data (database) before performing critical operations, such as a system upgrade or critical data modification, or after changing the configuration. The backup data can be used to restore the database if an exception occurs or an operation does not achieve the expected result.
The system supports automatic backup and manual backup.

If you choose automatic backup, prepare an SFTP server and configure the SFTP server information on UltraVR. After the configuration is
complete, the system backs up system data to the SFTP server at 02:00 every day based on the UltraVR server time. The UltraVR server time
at the production site and DR site must be consistent. An SFTP server can retain backup data for a maximum of seven days. Data older than
seven days will be automatically deleted. If a backup task fails, the system generates an alarm. The alarm will be automatically cleared when
the next backup task succeeds. The backup directory is:
Linux: /SFTP user/CloudComputing/DRBackup/eReplication management IP address/YYYY-MM-DD/Auto/ConfigData.zip
Windows: \CloudComputing\DRBackup\eReplication management IP address\YYYY-MM-DD\Auto\ConfigData.zip

If you choose manual backup, manually export the system configuration data and save it locally.
During manual backup, export both the configuration data at the production site and that at the DR site.

Prerequisites
Conditions

You have logged in to UltraVR.

You have obtained the IP address, username, password, and port of the SFTP server if you choose automatic backup.

Procedure
Automatic backup

1. On UltraVR, choose Settings.

2. In the navigation pane, choose Data Maintenance > System Configuration Data.

3. Choose Automatic Backup.

4. Configure the backup server information.

SFTP IP

SFTP User Name


SFTP Password

SFTP Port

Encryption Password

To secure configuration data, the backup server must use the SFTP protocol.

5. Click OK.

6. In the Warning dialog box that is displayed, read the content of the dialog box carefully and click OK.

After Automatic Backup is selected, you can change the SFTP server information at any time by directly modifying it and clicking OK.

Manual backup

1. On UltraVR, choose Settings.

2. In the navigation pane, choose Data Maintenance > System Configuration Data.

3. Choose Manual Backup.

4. In the System Configuration Data area, click Export, enter the encryption password, and click OK.

5. Download the ConfigData.zip file to your local system.

3.6.1.3.2 Active-Standby DR Solution for Scale-Out Storage


DR System Installation and Configuration

DR Commissioning

Solution Overview

Maintenance Guide

3.6.1.3.2.1 DR System Installation and Configuration


Installation and Configuration Process

Preparing for Installation

Configuring Switches

Configuring Storage Devices

Creating DR VMs

Configuring DR Policies

3.6.1.3.2.1.1 Installation and Configuration Process


Figure 1 shows the process for installing and configuring the DR system.

Figure 1 Installation and configuration process


3.6.1.3.2.1.2 Preparing for Installation


Note the following requirements for installing the DR system.

Installation Requirements
Table 1 lists the installation requirements for the DR system.

Table 1 Installation requirements

Object: Local PC
Description: The PC that is used for the installation.
Requirement: The local PC only needs to meet the requirements for installing FusionCompute and there is no special requirement for it.
Remarks: For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.

Object: Server
Description: The server that functions as a host (CNA node) on FusionCompute.
Requirement: The server must meet the following requirements:
- Meets the host requirements for installing FusionCompute.
- Supports the FC HBA port and can communicate with the FC switches.
NOTE: In the x86 architecture, if it is a blade server, such as the E6000 or E9000 server, the blades must be able to connect to the FC switch modules using dedicated FC network ports.
Remarks: For details, see System Requirements.

Object: Storage device
Description: Storage devices used in the DR solution.
Requirement:
- Must be block storage meeting the storage compatibility requirements of UltraVR.
- The block storage uses independent 10GE/25GE network ports for replication. Each block storage device provides at least two network ports for storage replication.
Remarks: For details, see System Requirements.

Object: Access switch
Description: Access switches of the storage, management, and service planes.
Requirement: Each access switch has sufficient network ports to connect to the data replication network ports on the block storage devices.
Remarks: None

Object: Aggregation switch/Core switch
Description: Aggregation and core switches at the production and DR sites.
Requirement: A route to the data replication network is configured for aggregation and core switches to route IP addresses.
Remarks: None

Object: Firewall
Description: Firewalls at the production and DR sites.
Requirement: No special requirement.
Remarks: None

Object: Network
Description: Network between the production site and the DR site.
Requirement: The network must meet the following requirements:
- The management plane has a bandwidth of at least 10 Mbit/s.
- The bandwidth of the data replication plane depends on the volume of all the changed data in the VM replication period, which is calculated as follows: Number of VMs to be protected x Average volume of the changed data in the VM replication period (MB) x 8/(VM replication period (minute) x 60).
Remarks: None


Preparing Documents
Table 2 lists the documents required for deploying the DR solution.

Table 2 Preparing documents

Document Category: Integration design document
Document Name: Network integration design
Description: Describes the deployment plan, networking plan, and the bandwidth plan.
How to Obtain: Obtain this document from the engineering supervisor.

Document Category: Integration design document
Document Name: Data planning template for the network integration design
Description: Describes the network data plan, such as the node IP address plan and the storage plan.
How to Obtain: Obtain this document from the engineering supervisor.

Document Category: Version document
Document Name: Datacenter Virtualization Solution xxx Version Mapping (NOTE: xxx indicates the software version.)
Description: Provides information about hardware and software version mapping.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Document Category: UltraVR product document
Document Name: UltraVR User Guide
Description: Provides guidance on how to install, configure, and commission UltraVR.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Document Category: OceanStor Pacific series product documentation
Document Name: OceanStor Pacific Series Product Documentation
Description: Provides guidance on how to install, configure, and commission the block storage devices.
How to Obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Document Category: Server product documentation
Document Name: Server document package
Description: Provides information about how to configure the servers.
How to Obtain: This document package is provided by the server vendor.

Document Category: Switch product documentation
Document Name: Switch document package
Description: Provides information about how to configure the switches by running commands.
How to Obtain: This document package is provided by the switch vendor.

After obtaining the related documents by referring to Datacenter Virtualization Solution xxx Version Mapping, make preparations for the installation, such as
obtaining the software packages and installation tools. For details about the installation preparations, see the related documents.

Software Packages
The DR solution has no special requirements for the software packages. Obtain the following software packages by referring to Datacenter Virtualization Solution xxx Version Mapping:

FusionCompute

UltraVR

OceanStor Pacific series storage

3.6.1.3.2.1.3 Configuring Switches

Scenarios
In the active/standby DR scenario for scale-out storage, the switch configuration is the same as that in a common deployment scenario without the
DR system deployed. This section only describes the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure switches based on the data plan provided in the network device documents.

Procedure
Configure access switches.

1. Configure access switches based on the data plan and the access switch documents.
Each access switch must have enough network ports to connect to the block storage replication network ports. There are no special
configuration requirements.

Configure aggregation switches.

2. Configure aggregation switches based on the data plan and the aggregation switch documents.
A route to the data replication network is configured for aggregation switches to forward IP addresses.

Configure core switches.

3. Configure core switches based on the data plan and the core switch documents.
A route to the data replication network is configured for core switches to forward IP addresses. Management planes of the production site and
the DR site must be able to communicate with each other.

3.6.1.3.2.1.4 Configuring Storage Devices

Scenarios
In the active-standby DR scenario for scale-out storage, the storage device configuration is the same as that in a common deployment scenario
without the DR system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure storage devices based on the storage device documentation and the data plan.

Procedure
1. Install the scale-out storage. For details, see "Installation" > "Software Installation Guide" > "Installing the Block Service" > "Connecting to
FusionCompute" in OceanStor Pacific Series Product Documentation.

2. Connect the storage system as instructed in "Storage Resource Creation (Scale-Out Block Storage)" in FusionCompute 8.8.0 User Guide
(Virtualization) of this document.
Plan the names of datastores in a unified manner, for example, DR_datastore01.

3. Create a storage port as a remote replication port. For details, see "Adding a Storage Port" in FusionCompute 8.8.0 User Guide
(Virtualization) of this document.

You are advised to use two physical NICs to form an aggregation port and create a storage port on the aggregation port.
It is recommended that the remote replication plane be separated from the storage plane. That is, the remote replication port and storage port are created
on different aggregation ports or NICs.

4. Complete storage remote replication configurations. For details, see "Checking the License", "Creating a Replication Cluster", and "Adding a
Remote Device" in "Configuration" > "Feature Guide" > "HyperReplication Feature Guide for Block" > "Configuring HyperReplication" in
OceanStor Pacific Series Product Documentation.

3.6.1.3.2.1.5 Creating DR VMs

Scenarios
After the DR system is installed, you can create VMs by following the normal service process and use the DR system to protect these VMs.


Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating VMs.

Procedure
1. Determine the DR VM creation mode.

To create DR VMs, go to 2.

To implement the DR solution for existing VMs, go to 3.

2. Create DR VMs on the planned DR volumes. For details, see VM Provisioning .

3. Migrate VMs that are not created on DR volumes to the DR volumes and migrate non-DR VMs created on DR volumes to the non-DR
volumes.
For details, see "Migrating a VM" in FusionCompute 8.8.0 User Guide (Virtualization).
During VM storage migration, non-DR VMs can only be migrated to DR datastores through whole storage migration. VMs to which multiple
disks are attached cannot be migrated through single-disk migration.

After DR VMs are created, VM information changes. In this case, you can update resource information manually or using UltraVR periodic polling. For details, see
DR Management > Active-Passive DR Solution > DR Protection > Refreshing Resource Information in UltraVR User Guide.

3.6.1.3.2.1.6 Configuring DR Policies

Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.

Procedure
1. Check whether DR policies are configured for the first time.

If yes, go to 2.

If no, no further action is required.

2. Configure DR policies for the first time.


For details, see "DR Configuration" > "Active-Passive DR Solution" > "Creating a Protected Group" in UltraVR User Guide.

3.6.1.3.2.2 DR Commissioning
Commissioning Process

Commissioning a DR Test

Commissioning Scheduled Migration

Commissioning Fault Recovery

Commissioning Reprotection

Commissioning DR Switchback


Backing Up Configuration Data

3.6.1.3.2.2.1 Commissioning Process

Purpose
Verify that the DR site properly takes over services if the production site is faulty.

Verify that the production site properly takes over services back when it is recovered.

Verify that a recovery plan is feasible, and adjust and optimize the recovery procedure as required.

Verify that the DR site properly takes over services when the production site is in maintenance as planned.

After commissioning, back up the management data by exporting the configuration data. The data can be used to recover the system if an
exception occurs or an operation has not achieved the expected result.

Prerequisites
The active-standby DR system for scale-out storage has been deployed.

DR services can be successfully deployed.

DR policies have been configured.

Commissioning Process
Figure 1 shows the DR solution commissioning process.

Figure 1 Commissioning process

Procedure
Execute the following test cases:

Commissioning a DR Test

Commissioning Scheduled Migration

Commissioning Fault Recovery

Commissioning Reprotection

Commissioning DR Switchback


Backing Up Configuration Data

Expected Result
The result of each test case meets expectations.

3.6.1.3.2.2.2 Commissioning a DR Test

Purpose
Verify that a recovery plan is correct and executable by testing the recovery plan.

Constraints and Limitations


None

Prerequisites
The active-standby DR system for scale-out storage has been deployed.

DR policies have been configured.

A recovery plan has been created.

Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.

2. Commission a DR test.
For details, see DR Management > Active-Passive DR Solution > DR Recovery > DR Testing in the DR Center in UltraVR User Guide.

During DR test commissioning, before clearing drilling data from the DR site, execute the required test cases at the DR site and ensure that the execution is
successful.
On FusionCompute at the DR site, view the number of drill VMs and ensure that the number of drill VMs is consistent with that of DR VMs at the
production site.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts at the DR site. Services at the DR site are running properly, and services at the production site are not
affected.

Additional Information
None

3.6.1.3.2.2.3 Commissioning Scheduled Migration

Purpose


Verify that the scheduled migration function is available by executing a recovery plan.

Constraints and Limitations


None

Prerequisites
The active-standby DR system for scale-out storage has been deployed.

DR policies have been configured.

A recovery plan has been created.

The test and clearing operations have been successfully performed.

Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.

2. Commission scheduled migration.


For details, see DR Management > Active-Passive DR Solution > DR Recovery > Planned Migration of Services in the Production
Center in UltraVR User Guide.

After the scheduled migration, execute the required test cases at the DR site and ensure that the execution is successful before performing reprotection.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the numbers of DR VMs at the DR and production sites are consistent.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts at the DR site. Services at the DR site are running properly, and services at the production site are not
affected.

Additional Information
None

3.6.1.3.2.2.4 Commissioning Fault Recovery

Purpose
Disconnect the storage link and execute a recovery plan to check whether VM DR is supported and the active-standby DR solution for scale-out
storage is available.

Constraints and Limitations


None

Prerequisites

The active-standby DR system for scale-out storage has been deployed.

DR policies have been configured.

A recovery plan that meets the requirements has been created.

The test and clearing operations have been successfully performed.

Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.

2. Commission fault recovery.


For details, see DR Management > Active-Passive DR Solution > DR Recovery > Service Migration in the Production Center After
Fault Recovery in UltraVR User Guide.

After the fault is rectified, execute the required test cases at the DR site and ensure that the execution is successful before performing reprotection.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the numbers of DR VMs at the DR and production sites are consistent.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.

Additional Information
None

3.6.1.3.2.2.5 Commissioning Reprotection

Purpose
Verify that the reprotection function is available by executing a recovery plan.

Constraints and Limitations


None

Prerequisites
The active-standby DR system for scale-out storage has been deployed.

DR policies have been configured.

A recovery plan has been created.

Scheduled VM migration or fault recovery has been performed.

All faults at the production site have been rectified.

Procedure


1. Check the reprotection type.

For scheduled migration, go to 2.

For fault recovery, go to 3.

2. Perform reprotection in scheduled migration scenarios.


For details, see reprotection related contents in DR Management > Active-Passive DR Solution > DR Recovery > Planned Migration of
Services in the Production Center in UltraVR User Guide.

3. Perform reprotection in the fault recovery scenarios.


For details, see reprotection related contents in DR Management > Active-Passive DR Solution > DR Recovery > Service Migration in
the Production Center After Fault Recovery in UltraVR User Guide.

Expected Result
The reprotection is successful.

Additional Information
None

3.6.1.3.2.2.6 Commissioning DR Switchback

Purpose
After services are switched from the production site to the DR site through the scheduled migration, switch the services back to the production site
based on the drill plan.
Services are migrated from the production site to the DR site when a recoverable fault, such as a power outage, occurs. After the production site
recovers from the fault, synchronize the data generated during DR from the DR site to the production site and then migrate services back to the
production site.

Constraints and Limitations


None

Prerequisites
The active-standby DR system for scale-out storage has been deployed.

DR policies have been configured.

A recovery plan has been created.

All faults at the production site have been rectified.

Reprotection has been performed.

Procedure
1. Determine the DR switchback type.

For scheduled migration, go to 2.

For fault recovery, go to 3.

2. Commission DR switchback in scheduled migration scenarios.


For details, see DR Management > Active-Passive DR Solution > DR Recovery > Planned Switchback of Services in the DR Center in
UltraVR User Guide.


3. Commission DR switchback in fault recovery scenarios.


For details, see DR Management > Active-Passive DR Solution > DR Recovery > Service Switchback in the DR Center After Fault
Recovery in UltraVR User Guide.

After commissioning DR switchback, execute the following test cases at the production site and ensure that the execution is successful.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
The DR switchback is successful. VMs are running on the hosts at the production site properly. Services at the production site are running properly
and services at the DR site are not affected.

Additional Information
None

3.6.1.3.2.2.7 Backing Up Configuration Data

Scenarios
This section guides administrators through backing up configuration data on UltraVR before performing critical operations, such as a
system upgrade or critical data modification, or after changing the configuration. The backup data can be used to restore the database if an exception
occurs or the operation does not achieve the expected result.
The system supports automatic backup and manual backup.

If you choose automatic backup, prepare an SFTP server and configure the SFTP server information on UltraVR. After the configuration is
complete, the system backs up system data to the SFTP server at 02:00 every day based on the UltraVR server time. The UltraVR server time
at the production site and the DR site must be consistent. The SFTP server retains backup data for a maximum of seven days; data older than
seven days is automatically deleted. If a backup task fails, the system generates an alarm, which is automatically cleared when the next
backup task succeeds. The backup directory is as follows (a path sketch follows this list):
Linux: /SFTP user/CloudComputing/DRBackup/eReplication management IP address/YYYY-MM-DD/Auto/ConfigData.zip
Windows: \CloudComputing\DRBackup\eReplication management IP address\YYYY-MM-DD\Auto\ConfigData.zip

If you choose manual backup, manually export the system configuration data and save it locally.
During manual backup, export both the configuration data at the production site and that at the DR site.
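The directory layout described above can be scripted, for example to verify that the expected daily archive exists on the SFTP server or to audit the seven-day retention window. The following Python sketch is illustrative only; the SFTP user home directory and the eReplication management IP address are placeholder values, not values from this document.

    from datetime import date, timedelta
    from pathlib import PurePosixPath

    # Placeholder values for illustration; replace with the planned ones.
    SFTP_USER_HOME = "/sftpuser"     # home directory of the SFTP user (Linux layout)
    MGMT_IP = "192.168.100.20"       # eReplication management IP address

    def daily_backup_path(day: date) -> PurePosixPath:
        # Expected automatic-backup path for a given day:
        # /<SFTP user>/CloudComputing/DRBackup/<management IP>/<YYYY-MM-DD>/Auto/ConfigData.zip
        return (PurePosixPath(SFTP_USER_HOME) / "CloudComputing" / "DRBackup"
                / MGMT_IP / day.isoformat() / "Auto" / "ConfigData.zip")

    def is_outside_retention(day: date, retention_days: int = 7) -> bool:
        # Backups older than the retention window are deleted automatically by the system.
        return (date.today() - day) > timedelta(days=retention_days)

    print(daily_backup_path(date.today()))
    print(is_outside_retention(date.today() - timedelta(days=10)))   # True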

Prerequisites
Conditions

You have logged in to UltraVR.

You have obtained the IP address, username, password, and port of the SFTP server if you choose automatic backup.

Procedure
Automatic backup

1. On UltraVR, choose Settings.


2. In the navigation pane, choose Data Maintenance > System Configuration Data.

3. Choose Automatic Backup.

4. Configure the backup server information.

SFTP IP

SFTP User Name

SFTP Password

SFTP Port

Encryption Password

To secure configuration data, the backup server must use the SFTP protocol.

5. Click OK.

6. In the Warning dialog box that is displayed, read the content of the dialog box carefully and click OK.

If the SFTP server information changes after you select Automatic Backup, you can directly modify the information and click OK.

Manual backup

1. On UltraVR, choose Settings.

2. In the navigation pane, choose Data Maintenance > System Configuration Data.

3. Choose Manual Backup.

4. In the System Configuration Data area, click Export, enter the encryption password, and click OK.

5. Download the ConfigData.zip file to your local system.

3.6.1.4 Geo-Redundant 3DC DR


DR System Installation and Configuration

DR Commissioning

3.6.1.4.1 DR System Installation and Configuration


Installation and Configuration Process

Preparing for Installation

Configuring Switches

Installing FusionCompute

Configuring Storage Devices

Creating DR VMs

Configuring HA and Resource Scheduling Policies for a DR Cluster

Configuring the Remote Replication Relationship (Non-Ring Networking Mode)

Configuring DR Policies


3.6.1.4.1.1 Installation and Configuration Process


Figure 1 and Figure 2 show the process for installing and configuring the DR system.

Figure 1 Installation and configuration process (non-ring networking)

Figure 2 Installation and configuration process (ring networking)


3.6.1.4.1.2 Preparing for Installation


Note the following requirements for installing the DR system.

Installation Requirements
Table 1 lists the installation requirements for the DR system.

Table 1 Installation requirements

Local PC
  Description: The PC that is used for the installation.
  Requirement: The local PC only needs to meet the requirements for installing FusionCompute; there is no special requirement for it.
  Remarks: For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.

Server
  Description: The server that functions as a host (CNA node) on FusionCompute.
  Requirement: The server must meet the following requirements:
    Meets the host requirements for installing FusionCompute.
    Supports the FC HBA port and can communicate with the FC switches.
    NOTE: If it is a blade server, such as the E6000 or E9000 server, the blades must be able to connect to the FC switch modules using dedicated FC network ports.
  Remarks: See System Requirements.

Storage device
  Description: Storage devices used in the DR solution.
  Requirement: The storage devices must meet the following requirements:
    Must be SAN devices meeting the storage compatibility requirements of UltraVR.
    Meets the storage device requirements for installing FusionCompute.
  Remarks: See System Requirements.

Access switch
  Description: Access switches of the storage, management, and service planes.
  Requirement: There are no special requirements for the Ethernet access switches on the management and service planes. The access switches on the storage plane must meet the following requirements:
    The FC switches in use must be compatible with hosts and FC SAN storage.
    Access switches must have sufficient network ports to connect to IP SAN storage replication network ports.
  Remarks: None

Aggregation switch
  Description: Ethernet aggregation switches and FC aggregation switches at the production and DR sites.
  Requirement: The Ethernet aggregation switches must support VRRP.
  Remarks: None

Core switch/Firewall
  Description: Core switches and firewalls at the production and DR sites.
  Requirement: No special requirement.
  Remarks: None

Network
  Description: Network between the production site and the DR site.
  Requirement: The network must meet the following requirements:
    The management plane has a bandwidth of at least 10 Mbit/s.
    The bandwidth of the data replication plane depends on the volume of all the changed data in the VM replication period, calculated as follows: Number of VMs to be protected x Average volume of changed data in the VM replication period (MB) x 8/(VM replication period (minute) x 60).
    In the production center and intra-city DR center, the management plane connects to the service plane using a large Layer 2 network. In the large Layer 2 network, the RTT between any two sites is less than or equal to 1 ms.
  Remarks: None
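The data replication bandwidth formula in the table above can be evaluated quickly during planning. The following Python sketch applies the formula exactly as stated; the sample figures are placeholders, not sizing recommendations.

    def replication_bandwidth_mbps(num_vms: int, avg_changed_mb: float, period_min: float) -> float:
        # Number of VMs to be protected x average changed data per replication period (MB) x 8,
        # divided by (VM replication period (minute) x 60), giving Mbit/s.
        return num_vms * avg_changed_mb * 8 / (period_min * 60)

    # Placeholder plan: 50 protected VMs, 300 MB of changed data each per 15-minute period.
    print(round(replication_bandwidth_mbps(50, 300, 15), 1), "Mbit/s")   # 133.3 Mbit/s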

Documents
Table 2 lists the documents required for deploying the DR solution.

Table 2 Documents

Integration design document
  Document name: Network integration design
  Description: Describes the deployment plan, networking plan, and bandwidth plan.
  How to obtain: Obtain this document from the engineering supervisor.

Integration design document
  Document name: Data planning template for the network integration design
  Description: Provides the network data plan result, such as the IP plan of nodes, the storage plan, and the plan of VLANs, zones, gateways, and routes.
  How to obtain: Obtain this document from the engineering supervisor.

Version document
  Document name: FusionCompute X.X.X Version Mapping
  Description: Provides information about hardware and software version mapping.
  How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

UltraVR product document
  Document name: UltraVR User Guide
  Description: Provides guidance on how to install, configure, and commission UltraVR.
  How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

V3, V5, or Dorado series storage product documentation
  Document name: OceanStor Pacific Series Product Documentation
  Description: Includes storage installation, configuration, and commissioning as well as the HyperMetro feature guide. In a ring network, the configuration guide for the 3DC scenario is included.
    NOTE: For detailed version information, see Constraints and Limitations.
  How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.

Switch product documentation
  Document name: Switch document package
  Description: Provides information about how to configure the switches by running commands.
  How to obtain: This document package is provided by the switch vendor.

Server product documentation
  Document name: Server document package
  Description: Provides information about how to configure the servers.
  How to obtain: This document package is provided by the server vendor.

After obtaining the related documents by referring to FusionCompute X.X.X Version Mapping, make preparations for the installation, such as obtaining the software
packages, installation tools, and license files. For details about the installation preparations, see the related documents.

Software Packages and License Files


The DR solution has no special requirements for the software packages. Obtain the software packages and license files for the following products by
referring to FusionCompute X.X.X Version Mapping and Constraints and Limitations .

FusionCompute

UltraVR

V3, V5, or Dorado series storage

3.6.1.4.1.3 Configuring Switches

Scenarios
In the geo-redundant 3DC DR scenario, configure a switch. The switch configuration is the same as that in a common deployment scenario with no
DR system deployed. For details about the switch configurations for the production site and intra-city DR site, see Configuring Switches . For
details about the switch configurations for the remote DR site, see Configuring Switches .

When deploying the DR system, configure switches based on the network device documentation and the data plan.

3.6.1.4.1.4 Installing FusionCompute

Scenarios
In the 3DC DR scenario, install one set of FusionCompute system in the production center and intra-city DR center. The active and standby VRM
nodes are deployed in the production center and intra-city DR center, respectively. For details, see Installing FusionCompute . For details about how
to install one set of FusionCompute system in the remote DR center, see Installation Using SmartKit .

3.6.1.4.1.5 Configuring Storage Devices

Scenarios
In the geo-redundant 3DC DR scenario, the storage device configuration is the same as that in a common deployment scenario with no DR system
deployed. This section describes only the special configuration requirements and precautions in the DR scenario.

When deploying the DR system, configure storage devices based on the storage device documentation and the data plan. OceanStor V5 series storage is used as an
example. For details, see "Configuring Basic Storage Services" in the block service section in OceanStor Product Documentation.

Procedure
1. Check the networking mode in the geo-redundant 3DC DR scenario.

In a ring network, complete the initial configuration of the active-active and asynchronous remote replication in a DR Star consisting
of three DCs according to the online documentation of SAN storage. Then, go to 3.

In a non-ring network, configure storage devices in the production center and those in the intra-city DR center by following the steps
provided in Configuring Storage .


2. Complete the initial configuration of the SAN storage devices in the remote DR center based on the SAN storage device documentation and
the data plan.

If the storage imposes the restriction that a LUN mapped to a host cannot be used as the secondary LUN of a remote replication pair, remote replication cannot be created for that LUN. In this case, configure basic storage services only after configuring the remote replication relationship in the DR center. For details, see
Configuring the Remote Replication Relationship (Non-Ring Networking Mode) .

3. On FusionCompute, use the planned DR LUNs to create datastores and name these datastores in a unified manner to simplify management,
such as DR_datastore01. Select Virtualization when creating datastores.
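If many DR datastores are planned, the uniform names suggested above (DR_datastore01, DR_datastore02, and so on) can be generated in advance. A minimal Python sketch; the prefix and count are assumed inputs, not values mandated by this document:

    def dr_datastore_names(count, prefix="DR_datastore"):
        # Zero-padded, uniformly named DR datastores: DR_datastore01, DR_datastore02, ...
        return ["{}{:02d}".format(prefix, i) for i in range(1, count + 1)]

    print(dr_datastore_names(3))   # ['DR_datastore01', 'DR_datastore02', 'DR_datastore03']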

3.6.1.4.1.6 Creating DR VMs

Scenarios
After the DR system is installed, you can create VMs by following the normal service process and use the DR system to protect these VMs.

Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating VMs.

Procedure
1. Determine the DR VM creation mode.

To create DR VMs, go to 2.

To implement the DR solution for existing VMs, go to 3.

2. Create DR VMs on the planned DR LUNs. For details, see VM Provisioning .

3. Migrate VMs that are not created on DR LUNs to the DR LUNs and migrate non-DR VMs created on DR LUNs to the non-DR LUNs.
For details, see "Migrating a Whole VM" in FusionCompute 8.8.0 User Guide (Virtualization).
During VM storage migration, non-DR VMs can be migrated to DR datastores only through whole-VM storage migration. VMs with multiple
disks attached cannot be migrated through single-disk migration.

After DR VMs are created, VM information changes. In this case, you can update resource information manually or using UltraVR periodic polling. For details, see
DR Management > Geo-Redundant DR Solution > DR Protection > Refreshing Resource Information in UltraVR User Guide.

3.6.1.4.1.7 Configuring HA and Resource Scheduling Policies for a DR Cluster

Scenarios
In the geo-redundant 3DC DR scenario, a production center and an intra-city DR center are deployed as an active-active data center. You need to
configure HA and resource scheduling policies for a DR cluster to meet the active-active scenario. For details, see Configuring HA and Resource
Scheduling Policies for a DR Cluster .

3.6.1.4.1.8 Configuring the Remote Replication Relationship (Non-Ring


Networking Mode)

Scenarios


In the geo-redundant 3DC DR scenario with the non-ring networking mode, if the remote replication relationship is established between an active-
active data center and a remote DR center, configure the remote replication relationship for DR LUNs. For details, see Configuring the Remote
Replication Relationship .

3.6.1.4.1.9 Configuring DR Policies

Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.

Procedure
1. Check whether DR policies are configured for the first time.

If yes, go to 2.

If no, go to 3.

2. Configure DR policies for the first time.


For details, see "DR Configuration" > "Geo-Redundant DR Solution" > "Creating a Protected Group" in UltraVR User Guide.

3. Modify DR policies.
For details, see DR Management > Geo-Redundant DR Solution > DR Protection > Modifying Protection Policies in UltraVR User
Guide.

3.6.1.4.2 DR Commissioning
Commissioning Process

Commissioning a DR Test

Commissioning Scheduled Migration

Commissioning Fault Recovery

Commissioning Reprotection

Commissioning DR Switchback

Backing Up Configuration Data

3.6.1.4.2.1 Commissioning Process

Purpose
Verify that the DR site properly takes over services if the production site is faulty.

Verify that services are properly switched back to the production site after it recovers.

Verify that a recovery plan is feasible, and adjust and optimize the recovery procedure as required.

Verify that the DR site properly takes over services when the production site undergoes planned maintenance.

After commissioning, back up the management data by exporting the configuration data. The data can be used to recover the system if an
exception occurs or an operation has not achieved the expected result.

Prerequisites


The geo-redundant 3DC DR system has been deployed.

DR services can be successfully deployed.

DR policies have been configured.

Commissioning Process
Figure 1 shows the DR solution commissioning process.

Figure 1 Commissioning process

Procedure
For details about how to commission an active-active data center that consists of a production center and an intra-city DR center, see "DR and
Backup" > "Metropolitan Active-Active DR (Using OceanStor V3/V5/Dorado Series)" > "DR Commissioning" in FusionCompute 8.8.0 DR
and Backup.

To commission the array-based replication DR system, which consists of an active-active data center and a remote DR center, execute the
following test cases:

Commissioning a DR Test

Commissioning Scheduled Migration

Commissioning Fault Recovery

Commissioning Reprotection

Commissioning DR Switchback

Backing Up Configuration Data

Expected Result
The result of each test case meets expectation.

3.6.1.4.2.2 Commissioning a DR Test

Purpose
Verify that a recovery plan is correct and executable by testing the recovery plan.

Constraints and Limitations



None

Prerequisites
The geo-redundant 3DC DR system has been deployed.

DR policies have been configured.

A recovery plan has been created.

Procedure
1. On FusionCompute in the production center, make a note of the number of DR VMs in the production center.

2. Commission a DR test.
For details, see DR Management > Geo-Redundant DR Solution > DR (HyperMetro Expansion) > DR Testing in UltraVR User Guide.

During DR test commissioning, before clearing drilling data from the remote DR center, execute the required test cases in the remote DR center and ensure that the
execution is successful.

On FusionCompute of the remote DR center, ensure that the number of drill VMs is consistent with that of DR VMs in the production center.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).

For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are properly running on the hosts in the remote DR center. The services in the remote DR center are running properly, and the services in the
active-active data center are not affected.

Additional Information
None

3.6.1.4.2.3 Commissioning Scheduled Migration

Purpose
To commission DR, switch production service systems to the remote DR center as planned.

Constraints and Limitations


None

Prerequisites
The geo-redundant 3DC DR system has been deployed.

DR policies have been configured.

A recovery plan has been created.

The test and clearing operations have been successfully performed.


Procedure
1. On FusionCompute in the production center, make a note of the number of DR VMs in the production center.

2. Commission scheduled migration.


For details, see DR Management > Geo-Redundant DR Solution > DR (HyperMetro Expansion) > Performing Planned Service
Migration from a HyperMetro Data Center to the Remote DR Center in the UltraVR User Guide.

After the scheduled migration, execute the required test cases in the remote DR center and ensure that the execution is successful before performing
reprotection.
On FusionCompute of the remote DR center, ensure that the number of DR VMs is consistent with that in the production center.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts in the remote DR center, and the services in the remote DR center are also running properly.

Services in the active-active data center are paused temporarily as planned.

The DR protection group becomes invalid.

Additional Information
None

3.6.1.4.2.4 Commissioning Fault Recovery

Purpose
If an unrecoverable fault occurs in an active-active data center that consists of the production center and intra-city DR center, enable the fault
recovery function of the DR plan to switch services to the remote DR center.
The fault recovery involves the reconstruction of the entire DR system. Therefore, exercise caution when using this function.

Constraints and Limitations


None

Prerequisites
The geo-redundant 3DC DR system has been deployed.

DR policies have been configured.

A recovery plan has been created.

The test and clearing operations have been successfully performed.

Procedure
1. On FusionCompute in the production center, make a note of the number of DR VMs in the production center.

2. Commission fault recovery.


For details, see DR Management > Geo-Redundant DR Solution > DR (HyperMetro Expansion) > Migrating Services to the Remote
DR Center upon a Fault Occurring in HyperMetro Data Centers in UltraVR User Guide.

After the fault is rectified, execute the required test cases at the remote DR site and ensure that the execution is successful.
On FusionCompute of the remote DR center, ensure that the number of DR VMs is consistent with that in the production center.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result
VMs are running properly on the hosts in the remote DR center, and the services in the remote DR center are also running properly.

Additional Information
None

3.6.1.4.2.5 Commissioning Reprotection

Purpose
Verify that the reprotection function is available by executing a recovery plan.

Constraints and Limitations


None

Prerequisites
The geo-redundant 3DC DR system has been deployed.

DR policies have been configured.

A recovery plan has been created.

Scheduled VM migration or fault recovery has been performed.

All faults at the production site have been rectified.

Procedure
1. Check the reprotection type.

For scheduled migration, go to 2.

For fault recovery, go to 3.

2. Perform reprotection in scheduled migration scenarios.


For details, see reprotection related contents in DR Management > Geo-Redundant DR Solution > DR (HyperMetro Expansion) >
Performing Planned Service Migration from a HyperMetro Data Center to the Remote DR Center in UltraVR User Guide.

3. Perform reprotection in the fault recovery scenarios.


For details, see DR Management > Geo-Redundant DR Solution > DR (HyperMetro Expansion) > Migrating Data from the DR
Center back to HyperMetro Data Centers in UltraVR User Guide.


Expected Result
The reprotection is successful.

Additional Information
None

3.6.1.4.2.6 Commissioning DR Switchback

Purpose
After services are switched from the production center to the remote DR center through the scheduled migration, manually switch the services back
to the active-active data center that consists of the production center and intra-city DR center.
Services are migrated from the active-active data center to the remote DR center when a recoverable fault, such as a power failure, occurs. After the
active-active data center recovers from the fault, synchronize data generated during DR from the remote DR center to the active-active data center
and switch services back to the active-active data center.
This section describes how to commission a DR switchback.

Prerequisites
The geo-redundant 3DC DR system has been deployed.

DR policies have been configured.

A recovery plan has been created.

All faults at the production site have been rectified.

Procedure
1. Determine the DR switchback type.

For scheduled migration, go to 2.

For fault recovery, go to 3.

2. Commission DR switchback in scheduled migration scenarios.


For details, see DR Management > Geo-Redundant DR Solution > DR (HyperMetro Expansion) > Performing Planned Failback of
Services in the Remote DR Center in UltraVR User Guide.

3. Commission DR switchback in fault recovery scenarios.


For details, see DR Management > Geo-Redundant DR Solution > DR (HyperMetro Expansion) > Migrating Data from the DR
Center back to HyperMetro Data Centers in the UltraVR User Guide.

After commissioning DR switchback, perform the operations in Configuring HA and Resource Scheduling Policies for a DR Cluster and configure the production
center as the preferred site on the storage to ensure that services are running in the production center. Execute the required test cases in the production center, and
ensure that the execution is successful.

Select a running VM randomly and log in to the VM using VNC.


If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).

Expected Result

The DR switchback is successful and VMs are running on hosts in the production center properly. Services in the production center and intra-
city DR center are running properly, and services in the remote DR center are not affected.

The status of the DR protection group is normal.

Services in the active-active data center are normal.

The remote replication service is normal.

Additional Information
None

3.6.1.4.2.7 Backing Up Configuration Data

Scenarios
This section guides administrators through backing up configuration data on UltraVR before performing critical operations, such as a
system upgrade or critical data modification, or after changing the configuration. The backup data can be used to restore the database if an exception
occurs or the operation does not achieve the expected result.
The system supports automatic backup and manual backup.

If you choose automatic backup, prepare an SFTP server and configure the SFTP server information on UltraVR. After the configuration is
complete, the system backs up system data to the SFTP server at 02:00 every day based on the UltraVR server time. The UltraVR server time
at the production site and the DR site must be consistent. The SFTP server retains backup data for a maximum of seven days; data older than
seven days is automatically deleted. If a backup task fails, the system generates an alarm, which is automatically cleared when the next
backup task succeeds. The backup directory is as follows:
Linux: /SFTP user/CloudComputing/DRBackup/eReplication management IP address/YYYY-MM-DD/Auto/ConfigData.zip
Windows: \CloudComputing\DRBackup\eReplication management IP address\YYYY-MM-DD\Auto\ConfigData.zip

If you choose manual backup, manually export the system configuration data and save it locally.
During manual backup, export both the configuration data at the production site and that at the DR site.

Prerequisites
Conditions

You have logged in to UltraVR.

You have obtained the IP address, username, password, and port of the SFTP server if you choose automatic backup (a connectivity pre-check sketch follows).
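Before entering the SFTP server information in UltraVR, it can be useful to confirm from the maintenance terminal that the server accepts the planned credentials. The following Python sketch uses the third-party paramiko library (an assumption; it is not part of the solution software) and placeholder connection values:

    import paramiko

    def check_sftp(host, port, user, password):
        # Returns True if an SFTP session can be opened with the given credentials.
        try:
            transport = paramiko.Transport((host, port))
            transport.connect(username=user, password=password)
            sftp = paramiko.SFTPClient.from_transport(transport)
            sftp.listdir(".")            # simple round trip to prove the session works
            sftp.close()
            transport.close()
            return True
        except Exception as exc:         # connection, authentication, or permission failure
            print("SFTP check failed:", exc)
            return False

    # Placeholder values; replace with the planned SFTP server information.
    print(check_sftp("192.168.100.30", 22, "backupuser", "********"))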

Procedure
Automatic backup

1. On UltraVR, choose Settings.

2. In the navigation pane, choose Data Maintenance > System Configuration Data.

3. Choose Automatic Backup.

4. Configure the backup server information.

SFTP IP

SFTP User Name

SFTP Password

SFTP Port

Encryption Password


To secure configuration data, the backup server must use the SFTP protocol.

5. Click OK.

6. In the Warning dialog box that is displayed, read the content of the dialog box carefully and click OK.

If the SFTP server information changes after you select Automatic Backup, you can directly modify the information and click OK.

Manual backup

1. On UltraVR, choose Settings.

2. In the navigation pane, choose Data Maintenance > System Configuration Data.

3. Choose Manual Backup.

4. In the System Configuration Data area, click Export, enter the encryption password, and click OK.

5. Download the ConfigData.zip file to your local system.

3.6.2 Backup
Centralized Backup Solution

3.6.2.1 Centralized Backup Solution


Installing and Configuring the Backup System

Backup Commissioning

3.6.2.1.1 Installing and Configuring the Backup System


Installation and Configuration Process

Preparing for Installation

Installing the eBackup Server

Connecting the eBackup Server to FusionCompute

3.6.2.1.1.1 Installation and Configuration Process


Figure 1 shows the process for installing and configuring the centralized backup solution.

Figure 1 Installation and configuration process of the centralized backup solution


3.6.2.1.1.2 Preparing for Installation


For details, see Installation and Uninstallation > Preparing for Installation in OceanStor BCManager eBackup User Guide (Virtualization). To
obtain the guide:

For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it.

For carrier users: Visit https://support.huawei.com , search for the document by name, and download it.

The following items need to be obtained or prepared:

Software packages and the license file

A PC

Network

Installation data

Documents of the related components

3.6.2.1.1.3 Installing the eBackup Server

Scenarios
1. This section guides software commissioning engineers through installing eBackup in a virtual environment, by referring to OceanStor BCManager
eBackup User Guide (Virtualization), after FusionCompute has been installed and configured, so that VMs can be backed up using eBackup.

2. This section describes how to install eBackup using a template on FusionCompute. For details, see Installation and Uninstallation >
Installing eBackup > Installing eBackup Using a Template in OceanStor BCManager eBackup User Guide (Virtualization).

3. After eBackup is installed on servers, configure one server as the backup server and set related parameters. For details, see Installation and
Uninstallation > Configuring eBackup Servers > Configuring a Backup Server in OceanStor BCManager eBackup User Guide
(Virtualization).

4. If the backup proxy has been planned in the eBackup backup management system, configure other servers on which eBackup is installed as
the backup proxies and set related parameters. For details, see Installation and Uninstallation > Configuring eBackup Servers >
(Optional) Configuring a Backup Proxy in OceanStor BCManager eBackup User Guide (Virtualization).

Prerequisites
Conditions

You have installed and configured FusionCompute.

You have obtained the username and password for logging in to FusionCompute.

You have completed the preparations for installing eBackup by referring to the prerequisites described in Installation and Uninstallation >
Installing eBackup > Installing eBackup Using a Template in OceanStor BCManager eBackup User Guide (Virtualization).

If scale-out block storage is used, eBackup VMs must be deployed on the compute nodes of scale-out block storage, and Switching Mode of
the host storage ports on the compute nodes must be set to OVS forwarding mode. For details, see Backup > Configuring Production
Storage > Adding an eBackup Server to a Huawei Distributed Block Storage Cluster in OceanStor BCManager eBackup User Guide
(Virtualization).

You have created VBS clients by referring to "Creating VBS Clients" in OceanStor Pacific Series Product Documentation (Huawei Engineer)
if scale-out block storage is used.

You have created a DVS connecting to the service plane and added an uplink to the DVS on the GE/10GE network.

The firewall on the maintenance terminal has been disabled.

Process
Figure 1 shows the process for installing and configuring eBackup.

Figure 1 eBackup installation and configuration process

Procedure
Perform the following operations on both the active and standby nodes.
Add an uplink to the DVS that connects to the management plane.


After FusionCompute installation is complete, if no management plane network port has been added, the DVS connecting to the management plane has
only two uplinks, that is, the management plane ports of the hosts where the active and standby VRM VMs reside. Before installing the eBackup
server, add the management plane port of the host where the eBackup backup server resides to the DVS connecting to the management
plane based on the data plan.
1. Log in to FusionCompute.

2. Select an operation based on the network.

For a GE network, go to 3.

For a 10GE network, go to 6.

3. Add an uplink to the DVS that connects to the management plane.


Add the management plane port of the host that runs the eBackup server based on the data plan.
For details, see "Adding an Uplink" in FusionCompute 8.8.0 User Guide (Virtualization).

Add an uplink to the DVS that connects to the storage plane.


If no storage plane network port has been added after FusionCompute installation is complete, the DVS connecting to the storage plane has only two
uplinks, that is, the storage plane ports of the hosts where the active and standby VRM VMs reside. Before installing the eBackup server, add
the storage plane port of the host where the eBackup backup server resides to the DVS connecting to the storage plane based on the data plan.

4. Select an operation based on the network.

For a GE network, go to 5.

For a 10GE network, go to 6.

5. Add an uplink to the DVS that connects to the storage plane.


Add the storage plane port of the host that runs the eBackup server based on the data plan. For details, see "Adding an Uplink" in
FusionCompute 8.8.0 User Guide (Virtualization).

Create port groups.

6. Create port groups on FusionCompute based on the data plan.


For details about the principles for planning port groups, see Networking Solution . For details about how to create a port group, see "Adding
a Port Group" in FusionCompute 8.8.0 User Guide (Virtualization).
Table 1 lists the requirements for creating a port group.

Table 1 Port group configuration requirements

Management plane port group
  The port group must be newly created on the management DVS.
  The port group must be named based on the data plan. For example, the port group is named Mgmt_eBackup.
  Port Type is set to Access.
  Outbound Traffic Shaping and Inbound Traffic Shaping must be selected, and their six parameters must be set to the same value based on the networking type:
    In GE networking mode, the parameter values are set to 512.
    In 10GE networking mode, the parameter values are set to 5120.
  Priority is set to Low.
  (Recommended) Connection mode is set to VLAN, and the VLAN ID is set to 0.
  Keep the default values for other settings.

Service plane port group 01
  The port group must be newly created on the service plane DVS.
  The port group must be named based on the data plan. For example, the port group is named Svc01_eBackup.
  Port Type is set to Access.
  Outbound Traffic Shaping and Inbound Traffic Shaping must be selected, and their six parameters must be set to the same value based on the networking type:
    In GE networking mode, the parameter values are set to 512.
    In 10GE networking mode, the parameter values are set to 5120.
  Priority is set to Low.
  (Recommended) Connection mode is set to VLAN, and the VLAN ID is set based on the user's network plan.
  Keep the default values for other settings.

Storage plane port group 01
  The port group must be newly created on the storage plane DVS.
  The port group must be named based on the data plan. For example, the port group is named Strg01_eBackup.
  Port Type is set to Access.
  Outbound Traffic Shaping and Inbound Traffic Shaping must be selected, and their six parameters must be set to the same value based on the networking type:
    In GE networking mode, the parameter values are set to 512.
    In 10GE networking mode, the parameter values are set to 5120.
  Priority is set to Low.
  (Recommended) Connection mode is set to VLAN, and the VLAN ID is set based on the user's network plan.
  Keep the default values for other settings.

Storage plane port group 02
  The port group must be newly created on the storage plane DVS.
  The port group must be named based on the data plan. For example, the port group is named Strg02_eBackup.
  Port Type is set to Access.
  Outbound Traffic Shaping and Inbound Traffic Shaping must be selected, and their six parameters must be set to the same value based on the networking type:
    In GE networking mode, the parameter values are set to 512.
    In 10GE networking mode, the parameter values are set to 5120.
  Priority is set to Low.
  (Recommended) Connection mode is set to VLAN, and the VLAN ID is set based on the user's network plan.
  Keep the default values for other settings.
  NOTE: If the production storage can communicate with the backup storage, storage plane port group 01 and storage plane port group 02 can be combined.
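The four port groups in Table 1 differ only in the DVS, the name, the VLAN ID, and the traffic-shaping value (512 in GE mode, 5120 in 10GE mode). The following Python sketch records the plan as data before it is entered on FusionCompute; the names are the examples from the table, and the VLAN IDs are placeholders from an assumed network plan:

    from dataclasses import dataclass

    @dataclass
    class PortGroupPlan:
        name: str        # for example, Mgmt_eBackup, taken from the data plan
        dvs: str         # DVS on which the port group is created
        vlan_id: int     # 0 for the management plane example; otherwise per the network plan
        networking: str  # "GE" or "10GE"

        def traffic_shaping(self) -> int:
            # Value applied to all six inbound/outbound traffic-shaping parameters.
            return 512 if self.networking == "GE" else 5120

    plan = [
        PortGroupPlan("Mgmt_eBackup",   "management DVS",    0,   "10GE"),
        PortGroupPlan("Svc01_eBackup",  "service plane DVS", 101, "10GE"),
        PortGroupPlan("Strg01_eBackup", "storage plane DVS", 201, "10GE"),
        PortGroupPlan("Strg02_eBackup", "storage plane DVS", 202, "10GE"),
    ]

    for pg in plan:
        print(pg.name, pg.dvs, pg.vlan_id, pg.traffic_shaping())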

Install eBackup.

7. Create eBackup VMs. For details, see "Importing a VM using a Template" in FusionCompute 8.8.0 Product Documentation.

The eBackup software is installed in the /opt/huawei-cloud-protection/ebackup directory.


Installation logs are stored in the /opt/OceanStor_Backup_Software_xxx_Software and /var/log/messages directories.

8. Use a template to deploy eBackup server VMs based on the data plan and by referring to OceanStor BCManager eBackup User Guide
(Virtualization).
Note the following requirements for deploying eBackup VMs:

The eBackup VMs must be named as planned. For example, the eBackup VM is named eBackup Server 01.

eBackup VMs must be deployed on specified hosts.

Select the port group to which the VM NICs belong based on the rules listed in Table 2.

Select the datastores to be used by the eBackup servers based on the data plan.

The VM specifications must meet the minimal configuration requirements of the eBackup server. For details, see "Checking the
Deployment Environment" in OceanStor BCManager eBackup User Guide (Virtualization).

Deselect Synchronize with host clock when configuring the clock synchronization policy.

QoS settings:

In the CPU resource control area, set the value of Reserved (MHz) to the maximum value.

In the memory resource control area, set the value of Reserved (MB) to the maximum value. (x86 architecture)


Keep the default values for other settings.

Table 2 VM NIC port groups

NIC 1 (eth0): Used as the network port for the eBackup management plane. Select the newly created management plane port group, for example, Mgmt_eBackup.

NIC 2 (eth1): Used as the network port for the eBackup internal communication plane. Select the newly created service plane port group 01, for example, Svc01_eBackup.

NIC 3 (eth2): Used as the network port for the eBackup backup storage plane. Select the newly created storage plane port group 01, for example, Strg01_eBackup.

NIC 4 (eth3): Used as the network port for the eBackup production storage plane. Select the newly created storage plane port group 02, for example, Strg02_eBackup.

Configure eBackup.

9. Configure eBackup by referring to Installation and Uninstallation > Configuring eBackup Servers in OceanStor BCManager eBackup
User Guide (Virtualization).

Configure one eBackup-installed server as the backup server. For details, see "Configuring a Backup Server" in chapter
"Configuring eBackup Servers".

If backup proxies are planned in the eBackup backup management system, you need to initialize the servers that have eBackup
installed into backup proxies. For details, see "(Optional) Configuring a Backup Proxy" in chapter "Configuring eBackup Servers".

To configure eBackup as an HA system, you need to set HA parameters. For details, see section "(Optional) Configuring the HA
Function" in Installation and Uninstallation.

3.6.2.1.1.4 Connecting the eBackup Server to FusionCompute

Scenarios
Create storage units on the eBackup server and connect the eBackup server to FusionCompute.

Prerequisites
Conditions

You have installed the eBackup server.

You have installed and configured external shared storage devices.

The eBackup server is communicating properly with the management planes of FusionCompute and the external shared storage devices.

Storage units have been configured on eBackup. For details, see "Creating a Storage Unit" in OceanStor BCManager eBackup User Guide
(Virtualization).

Data
Obtain required data by referring to pre-backup preparation contents in OceanStor BCManager eBackup User Guide (Virtualization).

Procedure
1. On the FusionCompute web client, create an account for configuring interconnection with the eBackup server.
You are advised to create a dedicated account for the interconnection instead of using an existing account. For details about how to create an
account, see Creating a User . When creating the account, set User Type to Interface interconnection user and Role to administrator, as
shown in Figure 1.

Figure 1 Creating an account for interconnection (common mode)


2. Configure the VRM information on the eBackup management console.


For details, see the descriptions in "Adding a FusionSphere Protected Environment" in OceanStor BCManager eBackup User Guide
(Virtualization). The account used to add a FusionSphere protected environment is the new account created in 1.

3.6.2.1.2 Backup Commissioning


Commissioning VM Backup

Commissioning VM Restoration

3.6.2.1.2.1 Commissioning VM Backup

Purpose
Verify the availability of the VM backup function by creating backup policies and monitoring the execution results of the backup tasks. You are
advised to verify the availability of both the CBT backup plan and the snapshot plan.

Constraints and Limitations


The capacity of the eBackup storage unit is available.

Prerequisites
The backup system has been installed and configured.

Commissioning Procedure

Do not use a VM that is running at the production site as the commissioning VM. Instead, create a VM to perform the commissioning.

1. Create storage units.


For details, see "Creating a Storage Unit" in OceanStor BCManager eBackup User Guide (Virtualization).


2. Create storage pools.


For details, see "Creating a Storage Pool" in OceanStor BCManager eBackup User Guide (Virtualization).

3. Create storage repositories.


For details, see "Creating a Repository" in OceanStor BCManager eBackup User Guide (Virtualization).

4. Create a protected set.


For details, see "Creating a Protected Set" in OceanStor BCManager eBackup User Guide (Virtualization).

5. Create backup policies.


For details, see "Creating a Backup Policy" in OceanStor BCManager eBackup User Guide (Virtualization).

6. Create a backup plan.


For details, see "Creating a Backup Plan" in OceanStor BCManager eBackup User Guide (Virtualization).

7. Execute the backup job immediately.


For details, see "(Optional) Executing a Backup Job Manually" in OceanStor BCManager eBackup User Guide (Virtualization).

8. On the eBackup management console, choose Monitor > Job to check the VM backup status.
For details, see "(Optional) Viewing a Backup Job" in OceanStor BCManager eBackup User Guide (Virtualization).

Expected Result
The VM backup is successful.

Additional Information
None

3.6.2.1.2.2 Commissioning VM Restoration

Purpose
Verify the availability of the VM restoration function by restoring a VM using the backup data.

Constraints and Limitations


The storage capacity on FusionCompute is available.

Prerequisites
The backup data of the VM is available.

Commissioning Procedure

Do not use a VM that is running at the production site as the commissioning VM. Instead, create a VM to perform the commissioning.

1. On the eBackup management console, perform the restoration task.

If the data on the VM is damaged, select Restore VM Disk to Original VM, Restore VM Disk to Specific VM or Restore VM
Disk to Specific Disk.

If the VM is faulty, select Restore VM to New Location.

For details, see "Restoring FusionSphere VMs" in OceanStor BCManager eBackup User Guide (Virtualization).

2. On the eBackup management console, choose Monitor > Job to check the VM or VM disk restoration status.
For details, see "(Optional) Viewing a Restore Job" in OceanStor BCManager eBackup User Guide (Virtualization).


The restoration task is complete if the task state is successful.

3. x86 architecture: Log in to the restored VM using VNC and ensure that the VM is running properly without BSOD or black screen.
Arm architecture: Log in to the restored VM using VNC and ensure that the VM is running properly without black screen.

Expected Result
The restored VM is running properly.

Additional Information
None

3.7 Verifying the Installation


After the hardware and software of DCS are deployed, you need to verify the deployment to ensure that the DCS system is running properly.

Procedure
Verify the eDME installation.

1. On the maintenance terminal, type https://Node IP address:31943 (for example, https://192.168.125.10:31943) in the address box of a
browser and press Enter.

If eDME is deployed in a three-node cluster, use the management floating IP address to log in to eDME.
In the three-node cluster deployment scenario, automatic active/standby switchover is supported. After the active/standby switchover, it takes about
10 minutes to start all services on the new active node. During this period, the O&M portal can be accessed, but some operations may fail. Wait until
the services are restarted and try again.
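Before opening the browser, reachability of the eDME portal on port 31943 (use the management floating IP address for a three-node cluster) can be checked from the maintenance terminal. A minimal Python sketch using only the standard library; the IP address is the example from step 1, and certificate verification is skipped on the assumption that the portal may present a self-signed certificate:

    import socket
    import ssl

    def portal_reachable(host, port=31943, timeout=5.0):
        # Returns True if a TCP connection and TLS handshake with the portal succeed.
        context = ssl.create_default_context()
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE      # assumption: possibly self-signed certificate
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    print("TLS established:", tls.version())
                    return True
        except OSError as exc:
            print("Portal not reachable:", exc)
            return False

    print(portal_reachable("192.168.125.10"))    # example IP address from step 1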

2. Enter the username and password.


The default username is admin, and the initial password is that configured during installation of eDME.

3. Click Log In.


If the login is successful, the eDME software has been successfully installed.

4. Connect to a VRM as instructed in Adding a VRM .

5. On the navigation bar, choose Provisioning > Virtualization Service > VMs.

6. Click Create.

7. On the Create VM page, select Customize.


Set basic VM parameters. Table 1 describes the related parameters.

Table 1 Basic VM information

Name: VM name.

Description: VM description.

Compute Resources: Compute resource to which the VM belongs. You can select a cluster or host.
  NOTE:
  If a cluster is selected, the system randomly selects a host in the cluster to create a VM.
  If a host is selected, a VM is created on the specified host.
  If a VM is bound to a host, the VM can run on this host only. The VM can be migrated to another host, but the HA function becomes invalid.

OS: Type of the OS to be installed on a VM. The options are Linux, Windows, and Other.
  NOTE:
  The OS type of a VM created in an Arm cluster or on an Arm host must be set to Linux.
  When you install an OS, the OS type and version must be consistent with the information you specify here. Otherwise, VM faults may occur when the VM runs.


8. Select a datastore for the VM and click Next.

9. Set VM configuration parameters. Table 2 describes the related parameters.

Table 2 VM configuration parameters

CPU: Number of VM CPU cores.

Memory: Memory capacity of a VM.

Disk: Disk capacity of a VM.

NIC: Port group used by the VM NIC on the DVS. Click Set and configure NIC Type, I/O Ring Size, and Queues.

GPU Resource Group: GPU resource group used by a VM. If a GPU resource group needs to be added, select the GPU virtualization mode. If you add a GPU resource group with the Pass-through allocation method to the VM, reserve 100% memory resources and set no limit to memory resources.

Graphics Card: Type and size of the graphics card on a VM.
  NOTE: VMs of the Arm architecture support only graphics cards of the virtio type.

Floppy Drive: Floppy drive file to be mounted. The default value is Unmount. You can manually select the floppy drive file to be mounted.
  NOTE: This parameter is available only when the OS type is not Linux.

If you need to add a disk, NIC, or GPU resource group to the VM, click Add Device and select the device to be added.

10. (Optional) Configure VM options.


Set Basic Options, including the VM boot mode and VNC keyboard settings.

Boot modes: Network, CD/DVD-ROM, Disk, and Specific device boot sequence

VNC keyboard settings: English (US), French, German, Italian, Russian, and Spanish

Set Advanced Settings, as shown in Table 3.

Table 3 VM advanced options

Parameter Description

Clock Synchronization Policy Selecting Synchronize with host clock: The VM periodically synchronizes time with the host.
Deselecting Synchronize with host clock: The user can set the VM time. The customized time depends on the VM
RTC time. After setting the VM system time, you need to synchronize the RTC time with the customized VM system
time. The system time of all hosts at a site must be the same. Otherwise, the time changes after a VM HA task is
performed, a VM is migrated, a VM is woken up from the hibernated state (x86 architecture), or a VM is restored using
a snapshot.

Boot Firmware You do not need to set this parameter for an Arm VM, whose boot firmware is UEFI.
NOTE:
This parameter is unavailable when you create a VM from a template or clone a VM.

Latency (ms) This parameter is available only when Boot Firmware is BIOS.

EVS Affinity Selecting EVS Affinity: The VM CPUs, memory, and EVS forwarding cores share the same NUMA node on a host,
and VMs with vhost-user NICs feature the optimal NIC rate.
Deselecting EVS Affinity: The VM EVS affinity does not take effect. The VM CPUs and memory are allocated to
multiple NUMA nodes on a host randomly, and VMs with vhost-user NICs do not feature the optimal NIC rate.


NUMA Topology Adjustment Enabling NUMA Topology Adjustment: The system automatically calculates the VM NUMA topology based on VM
configurations, advanced NUMA parameters, and the physical server's NUMA configurations and sets the NUMA
affinity of the VM and physical server, enabling optimal VM memory access performance.

HPET Selecting this option can meet the precise timing requirements of multimedia players and other applications (such as
attendance and billing).

Security VM Set the security VM type. Security Type and Security VM Type are available when this option is selected.
Security Type can be set to the following:
Antivirus: provides antivirus for virtualization.
Deep packet inspection (DPI): provides network protection for virtualization.
Security VM Type can be set to SVM or GVM:
SVM: secure VM
When Security Type is set to Antivirus, an SVM provides antivirus services for a guest VM (GVM), such as virus
scanning, removal, and real-time monitoring. The VM template is provided by the antivirus vendor.
When Security Type is set to DPI, an SVM provides the following for a GVM: network intrusion detection, network
vulnerability scanning, and firewall services. The VM template is provided by the third-party security vendor.
GVM: An end user VM that uses the antivirus or DPI function provided by the SVM. If DPI virtualization is used, the
GVM performance deteriorates.

11. Click Next.


The Confirm Information page is displayed. Confirm the information about the VM.
If you need to start a VM after the VM is created, select Start VM immediately after creation.

After confirming the VM information, click OK.

Verify the FusionCompute installation.

1. After FusionCompute is installed, click View Portal Link on the Execute Deployment page to view the FusionCompute address. Click the
FusionCompute address to go to the VRM login page.

After the new FusionCompute environment is installed, if you log in to the environment within 30 minutes and find that the CNA status is normal but the alarm
ALM-10.1000027 Heartbeat Communication Between the Host and VRM Interrupted is generated, the alarm will be automatically cleared after 30 minutes.
Otherwise, clear the alarm by following the instructions provided in ALM-10.1000027 Heartbeat Communication Between the Host and VRM
Interrupted in FusionCompute 8.8.0 Product Documentation.

2. Log in to the VRM to check whether FusionCompute is successfully installed.

Verify the UltraVR installation.

1. After UltraVR is installed, click View Portal Link on the Execute Deployment page to view the UltraVR address. Click the UltraVR
address to go to the login page of the management page.

2. Log in to the management page to check whether UltraVR is successfully installed. The login user name and password can be obtained from
Datacenter Virtualization Solution 2.1.0 Account List.

Verify the eBackup installation.

1. After eBackup is installed, click View Portal Link on the Execute Deployment page to view the eBackup address. Click the eBackup
address to go to the login page of the management page.

2. Log in to the management page to check whether eBackup is successfully installed. The login user name and password can be obtained from
Datacenter Virtualization Solution 2.1.0 Account List.

3.8 Initial Service Configurations


Perform initial service configurations on FusionCompute, including host and cluster management, storage management, and network management.

Figure 1 Resource creation process


Table 1 Resource creation process

Main Process Procedure

1. Host and cluster Create a cluster. For details, see Operation and Maintenance > Service Management > Compute Resource Management >
management Cluster Management > Creating a Cluster in FusionCompute 8.8.0 Product Documentation.

Add hosts to the cluster. For details, see Operation and Maintenance > Service Management > Compute Resource
Management > Host Management > Adding Hosts in FusionCompute 8.8.0 Product Documentation.

Set time synchronization on a host. For details, see Operation and Maintenance > Service Management > Compute Resource
Management > Host Management > Setting Time Synchronization on a Host in FusionCompute 8.8.0 Product
Documentation.

Add storage ports to the host. For details, see Operation and Maintenance > Service Management > Compute Resource
Management > System Port Management > Adding a Storage Port in FusionCompute 8.8.0 Product Documentation.

2. Storage Add storage resources to the site. For details, see Operation and Maintenance > Service Management > Storage Management
management > Storage Resource Management > Add Storage Resources to a Site in FusionCompute 8.8.0 Product Documentation.

Associate storage resources with a host. For details, see Operation and Maintenance > Service Management > Storage
Management > Storage Resource Management > Associating Storage Resources with a Host in FusionCompute
8.8.0 Product Documentation.

Scan for storage devices. For details, see Operation and Maintenance > Service Management > Storage Management >
Storage Resource Management > Scanning Storage Devices in FusionCompute 8.8.0 Product Documentation.

Add datastores. For details, see Operation and Maintenance > Service Management > Storage Management > Data Storage
Management > Add Datastores in FusionCompute 8.8.0 Product Documentation.

Create a disk. For details, see Operation and Maintenance > Service Management > Storage Management > Disk
Management > Creating a Disk in FusionCompute 8.8.0 Product Documentation.

3. Network Create a DVS. For details, see Operation and Maintenance > Service Management > Network Management > DVS
management Management > Create a DVS in FusionCompute 8.8.0 Product Documentation.

Add an uplink. For details, see Operation and Maintenance > Service Management > Network Management > Upstream
Link Group Management > Adding an Uplink in FusionCompute 8.8.0 Product Documentation.

Add a VLAN pool. For details, see Operation and Maintenance > Service Management > Network Management >
Distributed Virtual Switch Management > Adding a VLAN Pool in FusionCompute 8.8.0 Product Documentation.

Add a MUX VLAN. For details, see Operation and Maintenance > Service Management > Network Management >
Distributed Virtual Switch Management > Adding a MUX VLAN in FusionCompute 8.8.0 Product Documentation.

Create a port group. For details, see Operation and Maintenance > Service Management > Network Management > Port
Group Management > Adding a Port Group in FusionCompute 8.8.0 Product Documentation.


3.9 Appendixes
FAQ

Common Operations

Physical Network Interconnection Reference

Introduction to Tools

Verifying the Software Package

VM-related Concepts

3.9.1 FAQ
How Do I Handle the Issue that System Installation Fails Because the Disk List Cannot Be Obtained?

How Do I Handle the Issue that VM Creation Fails Due to Time Difference?

What Do I Do If the Error "kernel version in isopackage.sdf file does not match current" Is Reported During System Installation?

How Do I Handle Common Problems During Hygon Server Installation?

How Can I Handle the Issue that a Local Virtualized Datastore Fails to Be Added Due to a GPT Partition During Tool-based Installation?

How Can I Handle the Issue that the Node Fails to Be Remotely Connected During the Host Configuration for Customized VRM Installation?

How Do I Handle the Issue that the Mozilla Firefox Browser Prompts Connection Timeout During the Login to FusionCompute?

How Do I Handle the Storage Device Detection Failure on a FusionCompute Host During VRM Installation?

How Do I Configure an IP SAN Initiator?

How Do I Configure an FC SAN Initiator?

How Do I Configure Time Synchronization Between the System and an NTP Server of the w32time Type?

How Do I Configure Time Synchronization Between the System and a Host When an External Linux Clock Source Is Used?

How Do I Reconfigure Host Parameters?

How Do I Replace Huawei-related Information in FusionCompute?

How Do I Measure Disk IOPS?

What Should I Do If a Linux VM with More Than 32 CPU Cores Cannot Be Started?

How Do I Query the FusionCompute SIA Version?

What Should I Do If Tools Installed on Some OSs Fails to be Started?

Expanding the Data Disk Capacity


How Do I Manually Change the System Time on a Node?

How Do I Handle the Issue that VRM Services Become Abnormal Because the DNS Is Unavailable?

What Can I Do If an Error Message Is Displayed Indicating That the Sales Unit HCore Is Not Supported When I Import Licenses on
FusionCompute?

How Do I Determine the Network Port Name of the First CNA Node?

Troubleshooting

3.9.1.1 How Do I Handle the Issue that System Installation Fails Because the Disk
List Cannot Be Obtained?

Symptom
System installation fails because the disk list cannot be obtained. Figure 1 or Figure 2 shows failure information.

Figure 1 Installation failure information in the x86 architecture

Figure 2 Installation failure information in the Arm architecture

Possible Causes
No installation disk is available in the system. As a result, the installation fails and the preceding information is reported.

Storage media on the server are not initialized. As a result, the installation fails and the preceding information is reported.

The server was used, and its RAID controllers and disks contain residual data. As a result, the installation fails and the preceding information is
reported.

The system may not have a RAID controller card driver. You need to confirm the hardware driver model, download the driver from the official
website, and install it. For details about how to install the driver, see FusionCompute SIA Device Driver Installation Guide.

Troubleshooting Guideline
Before installing the system, initialize the RAID controllers and disks on the server and delete their residual data.


Procedure
1. Check the system architecture.

For an Arm architecture, go to 2.

For an x86 architecture, go to 26.

Restart the server.

2. Log in to the iBMC WebUI.

3. For details, see Logging In to a Server Using the BMC .

4. On the menu bar, choose Remote. The Remote Console page is displayed, as shown in Figure 3.

Figure 3 Remote Console

5. Click Java Integrated Remote Console (Private), Java Integrated Remote Console (Shared), HTML5 Integrated Remote Console
(Private), or HTML5 Integrated Remote Console (Shared). The real-time desktop of the server is displayed, as shown in Figure 4 or
Figure 5.

Java Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
Java Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS and perform operations on
the server using the iBMC. The users can view the operations of each other.
HTML5 Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
HTML5 Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS and perform operations
on the server using the iBMC. The users can view the operations of each other.

Figure 4 Real-time operation console (Java)


Figure 5 Real-time operation console (HTML5)

6. On Remote Virtual Console, click or on the menu bar.

7. Select Reset.
The Are you sure to perform this operation dialog box is displayed.

8. Click Yes.
The server restarts.

Log in to the Configuration Utility screen of the Avago SAS3508.

9. When the following information is displayed during the server restart, press Delete quickly.


10. In the displayed dialog box, enter the BIOS password.

The default password for logging in to the BIOS is Admin@9000. Change the administrator password immediately after your first login.
For security purposes, change the administrator password periodically.
The system will be locked if incorrect passwords are entered three consecutive times. You need to restart the server to unlock it.

11. On the BIOS screen, use arrow keys to select Advanced.

12. On the Advanced screen, select Avago MegaRAID <SAS3508> Configuration Utility and press Enter. The Dashboard View screen is
displayed.


13. Check whether the RAID array has been created for system disks on the server.

If yes, go to 14.

If no, go to 16.

14. On the Dashboard View screen, select Main Menu and press Enter. Then select Configuration Management and press Enter.

15. On the Configuration Management screen, select Clear Configuration and press Enter. On the displayed confirmation screen, select
Confirm and press Enter. Then select Yes and press Enter to format the hard disk.

Figure 6 Clear Configuration


Figure 7 Confirmation screen

16. On the Dashboard View screen, select Main Menu and press Enter. Then select Configuration Management and press Enter. Select
Create Virtual Drive and press Enter. The Create Virtual Drive screen is displayed.


17. On the Create Virtual Drive screen, select Select RAID level using the up and down arrow keys and press Enter. Create a RAID array
(RAID 1 is used as an example) using disks. Select RAID1 from the drop-down list box, and press Enter.

18. On the Create Virtual Drive screen, select Default Initialization using the up and down arrow keys and press Enter. Select Fast from the
drop-down list box and press Enter.

19. Select Select Drives From using the up and down arrow keys and press Enter. Select Unconfigured Capacity using the up and down
arrow keys.


20. Select Select Drives using the up and down arrow keys and press Enter. Select the first (Drive C0 & C1:01:02) and the second (Drive C0 &
C1:01:05) disks using the up and down arrow keys to configure RAID 1.

Drive C0 & C1 may vary on different servers. You can select a disk by entering 01:0x after Drive C0 & C1.
Press the up and down arrow keys to select the corresponding disk, and press Enter. [X] after a disk indicates that the disk has been selected.

21. Select Apply Changes using the up and down arrow keys to save the settings. The message "The operation has been performed
successfully." is displayed. Press the down arrow key to choose OK and press Enter to complete the configuration of member disks.


22. Select Save Configuration and press Enter. The operation confirmation screen is displayed. Select Confirm and press Enter. Select Yes
and press Enter. The message "The operation has been performed successfully." is displayed. Select OK using the down arrow key and
press Enter.

23. Press ESC to return to the Main Menu screen. Select Virtual Drive Management and press Enter to view the RAID information.


24. Press F10 to save all the configurations and exit the BIOS.

25. Reinstall the system.


No further action is required.

26. Before installing a system, access the disk RAID controller page to view disk information. Figure 8 shows disk information. The method for
accessing the RAID controller page varies depending on the RAID controller card in use. For example, if RAID controller card 2308 is
used, press Ctrl+C to access the disk RAID controller page.

Figure 8 Disk information

27. Check whether the RAID array has been created for system disks on the server.

If yes, select Manage Volume in Figure 8 to access the page shown in Figure 9 and then click Delete Volume to delete the residual
RAID disk information from the system.

If no, go to 28.

Figure 9 RAID operation page

28. Create a RAID array using disks, as shown in Figure 10.

Figure 10 Creating a RAID array


29. After configuration is complete, select Save changes then exit this menu on the screen to exit, as shown in Figure 11.

Figure 11 Completing configuration

30. Reinstall the system.

3.9.1.2 How Do I Handle the Issue that VM Creation Fails Due to Time
Difference?

Symptom
The VM fails to be created when VRM is installed using an installation tool. In some scenarios, a message indicating that this issue may be caused by a
time difference is displayed. If the message is not displayed, the log may show that the time difference exceeded 5 minutes before the VM creation
failure.

Procedure
1. Click Install VRM.
Check whether the VM is successfully created.

If yes, no further action is required.

If no, the local PC may be a VM that has not been restarted for a long time. In this case, go to 2.

2. Close the tool.


A dialog box is displayed.

3. Select Save.

4. Restart the PC and open the tool again.


A dialog box is displayed.

5. Select Continue.


If the data is not saved when you close the tool, or if VRM is uninstalled and then reinstalled instead of the installation being continued when you open the tool
again, the residual host data is not cleared. In this case, you need to install the host and VRM again. Otherwise, the system may prompt you that the host has been
added to another site when you configure the host.

6. Check whether the issue is resolved.

If yes, no further action is required.

If no, contact technical support for assistance.

3.9.1.3 What Do I Do If the Error "kernel version in isopackage.sdf file does not
match current" Is Reported During System Installation?

Symptom
During the system installation, an error is reported when the installation information is compared with the isopackage.sdf file. As a result, the
installation fails. Figure 1 shows the reported information.

Figure 1 Installation failure information

Possible Causes
During the installation, both the local and remote ISO files are mounted to the server.

Procedure
1. Confirm the ISO file to be installed.

If the ISO file is stored in the local CD/DVD-ROM drive, go to 2.

If the ISO file is stored in the remote CD/DVD-ROM drive, go to 3.

2. Disconnect the ISO file in the iBMC system of the host.


After performing this step, go to 5.

3. Remove the ISO file from the local CD/DVD-ROM drive.


After performing this step, go to 4.

4. Reinstall the host using the ISO file of the remote CD/DVD-ROM drive.
Install the host. For details, see "Installing Hosts Using ISO Images (x86)" or "Installing Hosts Using ISO Images (Arm)" in FusionCompute
8.8.0 Product Documentation.
No further action is required.

5. Reinstall the host using the ISO file of the local CD/DVD-ROM drive.

3.9.1.4 How Do I Handle Common Problems During Hygon Server Installation?


Problem 1: After CNA is installed on a Hygon server and the cat /proc/cpuinfo command is executed in the OS to query the CPU type, the CPU
type is displayed as AMD.
Solution: Upgrade the BMC version to 0.93(QL1-R12-059-0000) and the BIOS version to 0SSSX249. For details about how to upgrade BMC and
BIOS, see the corresponding product documentation.


Problem 2: After the Hygon server is added to the FusionCompute cluster and the server hardware configuration is queried, an extra USB device
American Megatrends Inc.. Virtual Cdrom Device is displayed in the USB device list.
Solution: The BMC virtual media (keyboard, mouse, and CD/DVD) of the Hygon server uses the virtual USB protocol. As a result, the BMC virtual
media is displayed as USB devices in the OS. Ignore this problem.
Problem 3: Setting PXE polling on the Hygon server fails. If the first network port in the boot device is not the planned PXE network port, the PXE
installation fails.
Solution: Set the planned PXE network port to the first network port in the PXE boot device in the BIOS.
Problem 4: After a USB device is inserted into the Hygon server after the system is installed, the host startup sequence changes.
Solution: After the USB device is inserted, access the BIOS to change the boot sequence and set the disk as the first boot device.
Problem 5: After the Hygon server is restarted, the host boot sequence changes, and the OS cannot be accessed after the restart.
Solution: Restart the server, enter the BIOS, and set the disk as the first boot device.

3.9.1.5 How Can I Handle the Issue that a Local Virtualized Datastore Fails to Be
Added Due to a GPT Partition During Tool-based Installation?

Symptom
The message "add storage failed" is displayed when you add a datastore during the VRM installation using the FusionCompute installation tool.

Error message "Storage device exists GPT partition, please clear and try again." is displayed in the log information.

Procedure

The following operations are high-risk operations because they will delete and format the specified storage device. Before performing the following operations, ensure
that the local disk of the GPT partition to be deleted is not used.

1. Take a note of the value of the storageUnitUrn field displayed in the log information about the storage device that fails to be added.
For example: urn:sites:54830A53:storageunits:F1E6FF755C8C4AB49A8BD2791F1A4E3E

2. Use PuTTY to log in to the host.


Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.

3. Run the following command and enter the password of user root to switch to user root:
su - root

4. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.


The unit of XXX is second.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.
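For example, the following setting (600 is an illustrative value; choose a timeout that suits your security policy) logs the session out after 10 minutes of inactivity:
TMOUT=600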

5. Run the following command to go to the installation tool directory:


cd /home/GalaX8800/Directory where the installation tool is located
Example: cd /home/GalaX8800/FusionCompute-LinuxInstaller-8.8.0-ARM_64

6. Run the following command to check the storage device name:


Obtain the storage device name based on the value of urn in 1.
vim data/datastore.json

The value of the name field is the name of the storage device.

In the following example command output, the storage device name is HUS726T4TALA600_V6KV5K2S.

7. Run the following command to query the IP address of the host:


vim interface/vrm.json

If the storage device name is on the right of master in 6, the host IP address is the value of master_ip. If the storage device name is on the right of slave in 6,
the host IP address is the value of slave_ip.

8. Use PuTTY to log in to the host.


Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.

The host IP address is the IP address recorded in 7.

9. Run the following command and enter the password of user root to switch to user root:
su - root

10. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.

The unit of XXX is second.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

11. Run the following command to view the storage device path:
redis-cli -p 6543 -a Redis password hget StorageUnit:Storage device name op_path

For details about the default password of Redis, see "Account Information Overview" in FusionCompute 8.8.0 O&M Guide. The storage device name is the
name obtained in 6.
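For example, using the storage device name HUS726T4TALA600_V6KV5K2S from 6 (replace Redis password with the actual password), the command might be as follows:
redis-cli -p 6543 -a Redis password hget StorageUnit:HUS726T4TALA600_V6KV5K2S op_path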

12. Run the following command to delete the signature of the file system on the local disk:


This operation will clear the original data on the disk, which is a high-risk operation. Before performing this operation, ensure that the disk is not in use.

wipefs -a Device path


For example, run the following command: wipefs -a /dev/disk/by-id/scsi-36101b5442bcc70002792c1db0ef69d4d
If information similar to the following is displayed, the signature of the file system on the local disk is successfully deleted:

13. Check whether the preceding command is executed successfully.

If yes, go to 14.

If no, contact technical support for assistance.

14. Perform the following operations based on the installation mode:

For Custom Installation, click Install VRM.

For One-Click Installation, click Start Installation.

15. Check whether the task is executed successfully.

If yes, no further action is required.

If no, contact technical support for assistance.

3.9.1.6 How Can I Handle the Issue that the Node Fails to Be Remotely
Connected During the Host Configuration for Customized VRM Installation?

Symptom
When a host is installed using an ISO image, gandalf is not initialized. As a result, the system displays a message indicating that the remote
connection to the node fails during the host configuration for customized VRM installation.

Solution
Check whether the IP address of the host where the VRM is to be installed is correct.

Check whether the password of user root for logging in to the host where the VRM is to be installed is correct.

Check whether the following command has been executed on the host to set the password of user gandalf and whether the password is correct:
cnaInit

In this document, the letter I in cnaInit is an uppercase i.

If you enter an incorrect password of user gandalf for logging in to the host, the user will be locked for 5 minutes. To manually unlock the
user, log in to the locked CNA node as user root through remote control (KVM) and run the faillock --reset command.

3.9.1.7 How Do I Handle the Issue that the Mozilla Firefox Browser Prompts
Connection Timeout During the Login to FusionCompute?

Symptom
FusionCompute is reinstalled multiple times and the Mozilla Firefox browser is used to log in to the management page. As a result, too many
certificates are loaded to the Mozilla Firefox browser. When FusionCompute is installed again and the Mozilla Firefox browser is used to log in to
the management page, the certificate cannot be loaded. As a result, the login fails and the browser prompts connection timeout.


Possible Causes
FusionCompute is reinstalled repeatedly.

Procedure

1. Click in the upper right corner of the browser and choose Options.

2. In the Network Proxy area of the General page, click Settings (E).
The Connection Settings dialog box is displayed.

3. Check whether the network proxy has been disabled.

If yes, go to 5.

If no, go to 4.

4. Select No proxy and click OK to disable the network proxy.

5. In the Privacy and Security area, locate Security.

6. In the Security area, click View Certificates under the Certificate module.
The Certificate Manager dialog box is displayed.

7. On the Servers and Authorities tab pages, delete certificates that conflict with those used by the current management interface.
The certificates to be deleted are those that use the same IP address as the VRM node. For example, if the IP address of the VRM node is
192.168.62.27:

On the Servers tab page, delete the certificates of servers whose IP addresses are 192.168.62.27:XXX.

On the Authorities tab page, delete the certificates named 192.168.62.27.

8. Close the browser and then open it to log in to FusionCompute.

3.9.1.8 How Do I Handle the Storage Device Detection Failure on a FusionCompute Host During VRM Installation?

Scenarios
In the following scenarios, the host may fail to scan storage resources on the corresponding storage devices:

During the installation of hosts using the x86 architecture, the size of the swap partition is 30 GB by default. If you select auto to automatically
configure the swap partition size, the swap partition size is in proportion to the memory size. When the host has a large memory size, the swap
partition may occupy so much storage space that the system disk has no available space, and no disks other than the system disk are available.

The local disk of the host has residual partition information. In this case, you need to manually clear the residual information on the storage
devices.

Prerequisites
Conditions
You have obtained the IP address for logging in to the host.
Data
Data preparation is not required for this operation.

Procedure
1. Use PuTTY to log in to the host.
Ensure that the management IP address and username gandalf are used for login.


The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.

2. Run the following command and enter the password of user root to switch to user root:
su - root

3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.

The unit of XXX is second.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

4. Run the following command to query host disk information:


lsblk

For a host using the Arm architecture, go to 5.

For a host using the x86 architecture, determine the number of disks based on the value in the NAME column in the command output.

If the host has only one disk, install the host again, manually specify the swap partition size, and install VRM again.

The swap partition size is required to be greater than or equal to 30 GB. If the disk space is insufficient for host installation and VRM installation,
replace it with another disk.

If the host has other disks except the system disk and VRM can be created on other disks, go to 5.

Delete disk partitions on the host.

Do not clear the partitions on the system disk when clearing the disk partitions on the host. Otherwise, the host becomes unavailable unless you reinstall an OS on the
host.
/dev/sda is the default system disk on a host. However, the system may select another disk as the system disk, or a user may specify a system disk during the host
installation. Therefore, distinguish between the system disk and user disks when deleting host disk partitions.

5. Run the following command to query the name of the existing disk on the host:
fdisk -l

6. In the command output, locate the Device column that contains the partitioned disks, and make a note of the disk names.
Information similar to the following is displayed:

Disk /dev/sdb: 300.0 GB, 300000000000 bytes

256 heads, 63 sectors/track, 36330 cylinders, total 585937500 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Device Boot Start End Blocks Id System

/dev/sdb1 * 1 583983103 291991551+ ee GPT

/dev/sdb2 583983104 824621840 291991551+ ee Linux

...

Partition /dev/sdb1 of the disk is displayed in the Device column, and you need to make a note of the disk name /dev/sdb.

7. Run the following command to switch to the cleared disk:


fdisk Disk name


Information similar to the following is displayed:

Command (m for help):

8. Enter d and press Enter.


Information similar to the following is displayed:

Partition number (1-4)

If the disk has only one partition, the partition will be automatically deleted, and information similar to the following will be displayed:

Selected partition 1

In this case, the d command automatically deletes the unique partition; go to 10.

9. Enter the ID of the partition to be deleted and press Enter.

10. Check whether all partitions are deleted.

If yes, go to 12.

If no, go to 11.

11. Repeat 8 to 10 to delete other partitions.

12. Enter w to save the configuration and exit the fdisk mode.
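The following is an illustrative fdisk session that deletes the only partition on /dev/sdb (the disk name is taken from the example output in 6; the exact prompts may vary with the fdisk version):
fdisk /dev/sdb
Command (m for help): d
Selected partition 1
Command (m for help): w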

13. Check whether the local storage resources can be used.

If yes, no further action is required.

If no, contact technical support for assistance.

3.9.1.9 How Do I Configure an IP SAN Initiator?

Scenarios
An IP SAN initiator is required for IP SAN storage devices to map hosts and storage devices using the world wide name (WWN) generated after the
storage devices are associated with hosts.
OceanStor 5500 V3 is used as an example in this section. For more details, see the documentation delivered with the storage device.

Prerequisites
Conditions

You have logged in to the storage management system, and the storage devices have been detected.

You have obtained the host WWN.

You have configured the logical host (group) and LUNs on the storage management system of the SAN storage device, including creating a
logical host (group), dividing LUNs, and configuring the mapping between LUNs and the logical host (group).

You have obtained OceanStor 5500 V3 Product Documentation:

For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download the document for
the desired version.

For carrier users: Visit https://support.huawei.com , search for the document by name, and download the document for the desired
version.

Data
Data preparation is not required for this operation.

Procedure
1. Check whether the storage resource has been associated with the host.


If yes, go to 3.

If no, go to 2.
2. Create an initiator.
For details, see "Creating an Initiator" in OceanStor 5500 V3 Product Documentation.

3. Add the initiator to the host.


For details, see "Adding an Initiator to a Host" in OceanStor 5500 V3 Product Documentation.

3.9.1.10 How Do I Configure an FC SAN Initiator?

Scenarios
An FC SAN initiator is required for FC SAN devices to map hosts and storage devices using the world wide name (WWN) generated after the
storage devices are associated with hosts. This section describes how to obtain the WWN of the host and configure the FC SAN initiator.
OceanStor 5500 V3 is used as an example in this section. For more details, see the documentation delivered with the storage device.

Prerequisites
Conditions

Hosts have been added to FusionCompute.

You have logged in to FusionCompute.

You have configured the logical host (group) and LUNs on the storage management system of the SAN storage device, including creating a
logical host (group), dividing LUNs, and configuring the mapping between LUNs and the logical host (group).

You have obtained OceanStor 5500 V3 Product Documentation:

For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download the document for
the desired version.

For carrier users: Visit https://support.huawei.com , search for the document by name, and download the document for the desired
version.

Data
Data preparation is not required for this operation.

Procedure

1. In the navigation pane of FusionCompute, click .


The Resource Pool page is displayed.

2. Click the Host tab.

3. Click the target host.


The Summary tab page is displayed.

4. On the Configuration tab page, choose Storage > Storage Adapter.

5. Click Scan.

6. Click Recent Tasks in the lower left corner. In the expanded task list, verify that the scan operation is successful.

7. Make a note of the WWN value of the host.


If the system uses FC SAN devices, the host WWN cannot be customized.

8. Configure the FC initiator based on the WWN value.


Configure the obtained host WWN as a new initiator on the logic host (group) initiator of the storage device.
For details, see "Creating an Initiator" and "Adding an Initiator to a Host" in OceanStor 5500 V3 Product Documentation.


3.9.1.11 How Do I Configure Time Synchronization Between the System and an NTP Server of the w32time Type?

Scenarios
If the clock source is an NTP server of the w32time type, configure one host (or a VRM node, if the VRM node is deployed on a physical server)
to synchronize time with the clock source, and then set this host or VRM node as the system clock source. This type of clock source is called the
internal clock source. Then configure time synchronization between the system and the internal clock source.

Prerequisites
Conditions

You have obtained the IP address or domain name of the NTP server of the w32time type of the Windows OS.

If the NTP server domain name is to be used, ensure that a domain name server (DNS) is available. For details, see "Configuring the DNS
Server" in FusionCompute 8.8.0 O&M Guide.

You have obtained the password of user root and the management IP address of the host or VRM node that is to be configured as the internal
clock source.

You have logged in to FusionCompute.

Procedure
Configure time synchronization between a host or VRM node and a w32time-type NTP server.

1. Use PuTTY to log in to the host or VRM node to be set as the internal clock source.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.

2. Run the following command and enter the password of user root to switch to user root:
su - root

3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.

The unit of XXX is second.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

4. Run the following command to synchronize time between the host or VRM node and the NTP server:
service ntpd stop;/usr/sbin/ntpdate NTPServer && /sbin/hwclock -w -u > /dev/null 2>&1; service ntpd start
You can set NTPServer to the NTP server IP address or domain name. If you enter a domain name for the configuration, ensure that a DNS
is available.

If the command output contains the following information, run this command again:

the NTP socket is in use, exiting

5. Run the following commands to set the time synchronization interval to 20 minutes:
sed -i -e '/ntpdate/d' /etc/crontab
echo "*/20 * * * * root service ntpd stop;/usr/sbin/ntpdate NTPServer > /dev/null 2>&1 && /sbin/hwclock -w -u > /dev/null
2>&1;service ntpd start" >>/etc/crontab
You can set NTPServer to the NTP server IP address or domain name. If you enter a domain name for the configuration, ensure that a DNS
is available.
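For example, if the NTP server IP address is 192.168.0.100 (a hypothetical address; replace it with the actual NTP server IP address or domain name), the commands would be:
sed -i -e '/ntpdate/d' /etc/crontab
echo "*/20 * * * * root service ntpd stop;/usr/sbin/ntpdate 192.168.0.100 > /dev/null 2>&1 && /sbin/hwclock -w -u > /dev/null 2>&1;service ntpd start" >>/etc/crontab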


6. Run the following command to restart the service for the configuration to take effect:
service crond restart
The configuration is successful if information similar to the following is displayed:

Redirecting to /bin/systemctl restart crond.service

7. Run the following command to configure the host or VRM node as the internal clock source:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip Management IP address of the host or VRM node that is to be configured as the
internal clock source -cycle 6 -timezone Local time zone -force true
In the preceding command, the value of Local time zone is in Continent/Region format and must be the time zone used by the external clock
source.
For example, Local time zone is set to Asia/Beijing.
If the command output contains the following information, the configuration is successful:

excute configNtp.pl success.

If a host is set as an internal clock source, the configuration causes the restart of the service processes on the host. If more than 40 VMs run on the host, the
service process restart will take a long time, triggering VM fault recovery tasks. However, the VMs will not be migrated to another host. After the service
processes restart, the fault recovery tasks will be automatically canceled.
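A complete example of the command in 7, assuming the internal clock source node's management IP address is 192.168.10.11 (a hypothetical address) and the time zone from the preceding example is used:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip 192.168.10.11 -cycle 6 -timezone Asia/Beijing -force true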

8. Run the following command to check whether the synchronization status is normal:
ntpq -p
Information similar to the following is displayed:

remote refid st t when poll reach delay offset jitter

==============================================================================

*LOCAL(0) .LOCL. 10 l 34 64 377 0.000 0.000 0.001

If the remote column contains * 6 to 10 minutes after you run the command, the synchronization status is normal.

Configure time synchronization for FusionCompute.

9. In the navigation pane of FusionCompute, click .


The System Management page is displayed.

10. Choose System Management > System Configuration > Time Management.
The Time Management page is displayed.

11. Configure the following parameters in the Time Management area:

NTP Server: Set it to the management IP address of the host or VRM node that has been configured as the internal clock source.

If a VRM node is set as the internal clock source and the VRM nodes are deployed in active/standby mode, NTP server must be set to the management IP
address of the active VRM node instead of the floating IP address of the VRM nodes.

Synchronization Interval (seconds): Set it to 64 seconds.

12. Click Save.


A dialog box is displayed.

13. Click OK.


The time zone and NTP clock source are configured.

The configuration takes effect only after the FusionCompute service processes restart, which results in a temporary service interruption and may cause the antivirus
service to become abnormal. Proceed with the subsequent operation only after the service processes restart.


3.9.1.12 How Do I Configure Time Synchronization Between the System and a Host When an External Linux Clock Source Is Used?

Scenarios
If an external Linux clock source is used, manually configure a host to synchronize time with the external clock source. Set the host or VRM node as
the system clock source, which is also called the internal clock source. Then configure time synchronization between the system and the internal
clock source.

Impact on the System


After the NTP clock source is configured, the system restarts the VRM process. As a result, FusionCompute will be interrupted for 3 minutes, and
the antivirus service may become abnormal.

If a host is set as an internal clock source, service processes on the host will restart during the configuration. If more than 40 VMs run on the host, it may take a long
time to restart the processes, triggering fault recovery tasks for these VMs. However, these VMs will not be migrated to another host. After the service processes on
the host have restarted, the fault recovery tasks will be automatically canceled.

Prerequisites
Conditions

You have obtained the IP address or domain name of the external clock source.

If the NTP server domain name is to be used, ensure that a DNS is available. For details, see "Configuring the DNS Server" in FusionCompute
8.8.0 O&M Guide.

You have obtained the password of user root and the management IP address of the host or VRM node that is to be configured as the internal
clock source.

You have logged in to FusionCompute.

Procedure
Configure time synchronization between the system and the host or VRM node functioning as the internal clock source.

1. Use PuTTY to log in to the host or VRM node to be set as the internal clock source.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.

2. Run the following command and enter the password of user root to switch to user root:
su - root

3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.

The unit of XXX is second.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

4. Manually set the time on the host to be consistent with that on the external clock source. For details, see How Do I Manually Change the
System Time on a Node?

5. In the navigation pane of FusionCompute, click .


The System Management page is displayed.

6. Choose System > System Configuration > Time Management.


The Time Management page is displayed.

7. Configure the following parameters in the Time Management area:

NTP Server: Set it to the management IP address of the host that has been configured as the internal clock source.

Synchronization Interval (seconds): Set it to 64 seconds.

8. Click Save.
A dialog box is displayed.

9. Click OK.
The time zone and NTP clock source are configured.

The configuration takes effect only after the FusionCompute service processes restart, which results in a temporary service interruption and may cause the antivirus
service to become abnormal. Proceed with the subsequent operation only after the service processes restart.

Configure time synchronization between the host or VRM node and the external Linux clock source.

10. Configure time synchronization between the host and the external clock source. For details, see "Setting Time Synchronization on a Host" in
FusionCompute 8.8.0 Product Documentation.

11. Switch back to the VRM node that is configured as the internal clock source and run the following command to synchronize time between
the VRM node and the external clock source:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip External clock source IP address or domain name -cycle 6 -force true
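For example, if the external clock source domain name is ntp.example.com (a hypothetical domain name; ensure that a DNS is available when a domain name is used), the command would be:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip ntp.example.com -cycle 6 -force true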

3.9.1.13 How Do I Reconfigure Host Parameters?

Scenarios
If some parameters are incorrectly configured during the host installation process, the host cannot be added to FusionCompute. In this case, you
can run the hostconfig command to reconfigure the host parameters.
The following parameters can be reconfigured:

Host management IP address

Gateway address of the management plane

Host name

VLAN

Prerequisites
The OS has been installed on the host.

You have obtained the IP address, username, and password for logging in to the BMC system of the host.

You have obtained the password of user root for logging in to the host.

The host is not added to the site or cluster that has FusionCompute installed.

Procedure
Log in to the host.

1. Open the browser on the local PC, enter the following IP address in the address bar, and press Enter:
https://Host BMC IP address

2. Log in to the BMC system on the host as prompted.


For details about the default username and password for logging in to the BMC system, see the required server documentation. If the
username and password have been changed, obtain the new username and password from the administrator.


The host management page is displayed.

If you cannot log in to the BMC system of a single blade server (in the x86 architecture), you are advised to log in to the SMM of the blade server and open
the remote control window of the server.

3. Click Remote Control.


For some Huawei servers, you may need to click Remote Virtual Console (requiring JRE) on the Remote Control page.
The remote control window is displayed.

4. Log in to the host as user root.

5. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.

The unit of XXX is second.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

Modify host configurations.

6. Run the following command to enter Main Installation Window, as shown in Figure 1:
hostconfig

Figure 1 Main Installation Window

When configuring host data:


Press Tab or the up and down arrow keys to move the cursor.
Press Enter to select or execute the item on which the cursor is located.

7. Choose Network > eth0 to enter the IP Configuration for eth0 screen, as shown in Figure 2.

Configure only one management NIC for a host. If you configure IP addresses for other NICs, network communication may fail.

Figure 2 IP Configuration for eth0


8. Configure the IP address and subnet mask of eth0.

IP Address: Enter the IP address of the host management plane.

Netmask: Enter the subnet mask of the host.

During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical keypad on the right.

9. Select OK to finish the eth0 configuration.

10. Enter the gateway address of the host management plane in Default Gateway, as shown in Figure 3.

Figure 3 Network Configuration

11. After the configuration is complete, select OK.

After the network configuration is complete, you can enter the gateway address of the management plane and IP addresses of other planes in Test Network to
check whether the newly configured IP addresses are reachable.

12. Select Hostname. The Hostname Configuration screen is displayed, as shown in Figure 4.

Figure 4 Hostname Configuration

13. Delete existing information, enter the new host name, and select OK.

14. Check the operations that need to be performed on the VLAN.

To add or modify a VLAN, go to 15.

To delete the VLAN, go to 17.

If you do not need to perform any operation on the VLAN, go to 20.

15. Select VLAN. The VLAN Configuration screen is displayed, as shown in Figure 5.

Figure 5 VLAN Configuration


16. Configure the VLAN ID, IP address, and subnet mask and select OK to complete VLAN configuration.

VLAN ID: Enter the VLAN of the host management plane.

IP Address: Enter the IP address of the host management plane.

Netmask: Enter the subnet mask of the host.

During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical keypad on the right.

After the operations are complete, go to 19.

17. Select VLAN. The VLAN Configuration screen is displayed, as shown in Figure 6.

Figure 6 VLAN Configuration

18. Select Delete to delete the VLAN.

After deleting the VLAN, switch to the Network screen to reconfigure network information.

19. After the VLAN is configured, select Network and check whether the gateway is successfully configured based on the gateway information
in the Network Information list.

If the modified gateway exists, go to 20.

If the modified gateway does not exist, go to 10.

20. Press Esc to exit the host configuration screen.


The configurations take effect immediately. The changed host name is displayed after your next login.

3.9.1.14 How Do I Replace Huawei-related Information in FusionCompute?

Scenarios
The FusionCompute web client displays Huawei-related information, including the product name, technical support website, product documentation
links, online help links, copyright information, system language, system logo (displayed in different areas on the web client), and background images

on the login page, system page, and About page, as shown in Figure 1. This section describes how to change or hide such information.

Figure 1 Huawei-related information on the web client

: product name

: technical support website

: product documentation links


: online help links

: copyrights information

: system name

: system logo in the browser address box

: system logo on the login page and About page

: background image

: system logo in the upper left corner of Online Help

Prerequisites
Conditions
You have prepared the following images:

: A system logo in 16 x 16 pixel size displayed in the browser address box. The image must be named favicon.ico and saved in ICO
format.

: A system logo in 48 x 48 pixel size displayed on the login page and About page. The image must be named huaweilogo.png and saved
in PNG format.

: A background image in 550 x 550 pixel size. The image must be named login_enbg.png. The image is saved in PNG format.

: A system logo in 33 x 33 pixel size displayed in the upper left corner of Online Help. The image must be named huaweilogo.gif and
saved in GIF format.

PuTTY is available.

WinSCP is available.

Ensure that SFTP is enabled on CNA or VRM nodes. For details, see "Enabling SFTP on CNA Nodes" in FusionCompute 8.8.0 O&M Guide.

Procedure
1. Use WinSCP to log in to the active VRM node.
Ensure that the management plane floating IP address and username gandalf are used for login.

2. Set the transfer mode of WinSCP to Binary.


Method for setting the WinSCP transfer mode: Open WinSCP, choose Options > Preferences > Transfer, select Binary on the right side,
and click OK.

3. Copy the prepared images to the /opt/galax/vrmportal/tomcat/script/portalSh/syslogo/third directory to replace the original images.

4. Use PuTTY to log in to the active VRM node.


Ensure that the management plane floating IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.

5. Run the following command and enter the password of user root to switch to user root:


su - root

6. Run the TMOUT=XXX command to set the session timeout period so that you are not automatically logged out while performing the subsequent steps.

XXX is the timeout period, in seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

7. Run the following command to open and edit the configuration file:
vi /opt/galax/vrmportal/tomcat/script/portalSh/syslogo/third/systitle.conf
Figure 2 shows information in the configuration file.

Figure 2 Modifying the configuration file

The entered information can contain only letters, digits, spaces, and special characters _-,.©:/

Set title to the new content. The value is a string of 1 to 18 characters (one uppercase letter is considered as two characters).

Set link to the new content. The value is a string of 1 to 100 characters (one uppercase letter is considered as two characters).

Set loginProductSupportText to false (to display information) or true (to hide information).

Set headProductSupportText to false (to display information) or true (to hide information).

Set copyrightEnUs to the new content displayed when the system language is English. The value is a string of 1 to 100
characters (one uppercase letter is considered as two characters).

Set portalsysNameEnUs to the new content displayed when the system language is English. The value is a string of 1 to 18
characters (one uppercase letter is considered as two characters).
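
A minimal sketch of an edited systitle.conf, assuming the file uses simple key=value lines; every value below is a hypothetical example, not a default:

title=Cloud Platform
link=https://support.example.com
loginProductSupportText=false
headProductSupportText=false
copyrightEnUs=Copyright 2025 Example Co., Ltd. All rights reserved.
portalsysNameEnUs=Cloud Platform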

8. Press Esc and enter :wq to save the configuration and exit the vi editor.

9. Install JDK 1.8.0.211 on the local PC.

10. Configure environment variables.

a. Right-click Computer and choose Properties > Advanced system settings > Environment Variables.

b. In the System variables area, find Path and click Edit.

c. Add the JDK installation directory, such as C:\Program Files\Java\jdk1.8.0_77\bin, to Path.


Adjacent directories in Path are separated with a semicolon (;).

11. Transcode the configuration file.

a. Use WinSCP to copy file systitle.conf in the /opt/galax/vrmportal/tomcat/script/portalSh/syslogo/third directory to the local
PC.

b. Open the CLI on the local PC and switch to the directory in which file systitle.conf is saved.


c. Run the following command to transcode the configuration file:


native2ascii -encoding UTF-8 "systitle.conf" systeminfo_en_US.properties

d. Copy the newly generated file systeminfo_en_US.properties to the /opt/galax/vrmportal/tomcat/script/portalSh/syslogo/third


directory.
If the file already exists, overwrite it.

12. Run the following command to make the configuration take effect:
sh /opt/galax/root/vrmportal/tomcat/script/portalSh/syslogo/modifylogo.sh third
The configuration is successful if information similar to the following is displayed:

change syslogo and systitle to third success.

Redirecting to /bin/systemctl restart portal.service

13. Use the browser to access the FusionCompute web client and check whether the new information is displayed, such as the system logo,
product name, copyrights information, and support website.

14. Disable the SFTP service. For details, see "Disabling SFTP on CNA Nodes" in FusionCompute 8.8.0 O&M Guide.

Additional Information
Related Tasks
Restore the default Huawei logo.

1. Use PuTTY to log in to the active VRM node.


Ensure that the management plane floating IP address and username gandalf are used for login.

2. Run the following command and enter the password of user root to switch to user root:
su - root

3. Run the TMOUT=XXX command to set the session timeout period so that you are not automatically logged out while performing the subsequent steps.

XXX is the timeout period, in seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

4. Install JDK 1.8.0.211 on the local PC.

5. Transcode the configuration file.

a. Use WinSCP to copy file systitle.conf in the /opt/galax/vrmportal/tomcat/script/portalSh/syslogo/huawei directory to the local
PC.

b. Open the CLI on the local PC and switch to the directory in which file systitle.conf is saved.

c. Run the following command to transcode the configuration file:


native2ascii -encoding UTF-8 "systitle.conf" systeminfo_en_US.properties

d. Copy the newly generated file systeminfo_en_US.properties to the /opt/galax/vrmportal/tomcat/script/portalSh/syslogo/huawei


directory.
If the file already exists, overwrite it.

6. Run the following command to restore the default logo:


sh /opt/galax/root/vrmportal/tomcat/script/portalSh/syslogo/modifylogo.sh huawei
The configuration is successful if information similar to the following is displayed:
change syslogo and systitle to huawei success.

Redirecting to /bin/systemctl restart portal.service

7. Use the browser to access the FusionCompute web client and check whether the default Huawei interface is displayed.


3.9.1.15 How Do I Measure Disk IOPS?


This section describes how to use the third-party fio tool to check the disk IOPS.

Procedure
1. Download the DeployTool deployment tool package by referring to Table 1.

Table 1 Tool package for DeployTool deployment

eDME_24.0.0_DeployTool.zip: software deployment package of eDME.

Enterprise users: obtain the package from https://support.huawei.com/enterprise .

Carrier users: obtain the package from https://support.huawei.com .

2. Decompress eDME_24.0.0_DeployTool.zip on the local PC. The fio software is stored in the \DeployTool\pkgs directory.

Figure 1 fio software list

3. Use PuTTY to log in to the node to be detected as user root through the static IP address of the node. Run the mkdir -p /opt/disk_test
command to create a test directory. The following uses the x86 EulerOS as an example to describe how to upload the fio_Euler_X86 file to
the /opt/disk_test directory.

4. Run the following command to test the I/O performance of the disk:
chmod 700 /opt/disk_test/fio_Euler_X86;/opt/disk_test/fio_Euler_X86 --name=Stress --rw=randwrite --direct=1 --ioengine=libaio --numjobs=1 --filename=/opt/disk_test/fio_tmp_file --bs=8k --iodepth=1 --loops=100 --runtime 20 --size=xxGB;rm -f /opt/disk_test/fio_tmp_file
In the command, xx indicates 90% of the remaining disk space, in GB.
The disk IOPS value is reported in the iops field of the command output.

In the preceding commands, x86 EulerOS is used as an example. If other OSs are used, replace fio_Euler_X86 in the commands with other names. For
example, if EulerOS in the Arm architecture is used, replace fio_Euler_X86 in the commands with fio_Euler_ARM.
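
For reference, a filled-in invocation might look as follows, assuming roughly 56 GB of free space under /opt so that 90% is about 50 GB; the size is an assumption and must be adjusted to your own disk:

chmod 700 /opt/disk_test/fio_Euler_X86
/opt/disk_test/fio_Euler_X86 --name=Stress --rw=randwrite --direct=1 --ioengine=libaio --numjobs=1 --filename=/opt/disk_test/fio_tmp_file --bs=8k --iodepth=1 --loops=100 --runtime 20 --size=50G
rm -f /opt/disk_test/fio_tmp_file

The iops value in the write section of the fio summary is the disk IOPS.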

5. Run the rm -rf /opt/disk_test command to delete test-related files.

3.9.1.16 What Should I Do If a Linux VM with More Than 32 CPU Cores Cannot
Be Started?

Scenarios
If more than 32 CPU cores are required for VMs running certain OSs, you need to upgrade the Linux OS kernel. For details about supported OSs,
see FusionCompute SIA Huawei Guest OS Compatibility Guide (KVM) (x86 architecture) or FusionCompute SIA Huawei Guest OS Compatibility
Guide (Arm) (Arm architecture).

For details about how to query the FusionCompute SIA version, see "How Do I Query the FusionCompute SIA Version?" in FusionCompute 8.8.0 O&M Guide.
To obtain FusionCompute SIA Huawei Guest OS Compatibility Guide (KVM) or FusionCompute SIA Huawei Guest OS Compatibility Guide (Arm), perform the
following steps:

For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download the document for the desired version.
For carrier users: Visit https://support.huawei.com , search for the document by name, and download the document for the desired version.

Prerequisites
Conditions


You have obtained the following RPM packages (available at https://vault.centos.org/6.6/updates/x86_64/Packages/) for upgrading the system
kernel:

kernel-2.6.32-504.12.2.el6.x86_64.rpm

kernel-firmware-2.6.32-504.12.2.el6.noarch.rpm

Procedure
1. Use WinSCP to copy the RPM packages to any directory on the VM.
For example, copy the packages to the /home directory.

2. Use PuTTY to log in to the VM OS.


Ensure that username root is used for login.

3. Run the following command to switch to the directory where the RPM packages are saved:
cd /home

4. Run the following command to install the RPM packages:


rpm -ivh kernel-2.6.32-504.12.2.el6.x86_64.rpm
rpm -ivh kernel-firmware-2.6.32-504.12.2.el6.noarch.rpm

5. After the packages are installed, run the following command to restart the VM and make the new kernel take effect:
reboot
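
Optionally, after the VM restarts you can confirm that the new kernel is active; this check is not part of the original procedure:

uname -r

The output should contain 2.6.32-504.12.2.el6.x86_64 if the upgrade took effect.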

3.9.1.17 How Do I Query the FusionCompute SIA Version?

Scenarios
This section guides you to query the version of FusionCompute SIA installed in the system.

Procedure
1. Use PuTTY to log in to the active VRM node.
Log in to the node using the management IP address as user gandalf.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .

2. Run the following command and enter the password of user root to switch to user root:
su - root

3. Run the TMOUT=XXX command to set the session timeout period so that you are not automatically logged out while performing the subsequent steps.

XXX is the timeout period, in seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

4. Run the following command to query the FusionCompute SIA version:


rpm -qa | grep SIA
The command output lists the installed SIA package. The version string in the package name, for example 24.0.RC2, is the FusionCompute SIA version.

3.9.1.18 What Should I Do If Tools Installed on Some OSs Fails to be Started?

Symptom

Tools fails to be started after it is installed on a VM running a Linux OS.

Possible Causes
The open source qemu-guest-agent service is installed on the VM, which occupies the vport. However, vm-agent also needs to use the vport. As a
result, Tools cannot be started normally.

Procedure

1. In the navigation pane, click .


The Resource Pool page is displayed.

2. On the VM tab page, enter the search criteria and click .


The query result is displayed.
Search criteria can be Type, Status, Name, IP Address, MAC Address, ID, Description, and UUID.

3. (Optional) On the VM page, click Advanced Search on the top of the VM list, enter or select the search criteria, and then click Search.
The query result is displayed.
Search criteria can be IP Address, VM ID, VM Name, MAC Address, Description, UUID, Tools Status, Owning Cluster/Host, Type,
and Status.

4. Locate the row that contains the target VM and click Log In Using VNC.
The VNC login window is displayed, and the VM can be viewed in the VNC window.

5. Log in to the VM using VNC as the root user.

6. On the VM desktop displayed in the VNC window, enter the command line interface (CLI) mode (For instructions about how to enter the
CLI mode, see the OS operation guide).
The CLI window is displayed.

7. Determine whether Tools is installed.


If yes, uninstall Tools as instructed in "Uninstalling the Tools from a Linux VM" in FusionCompute 8.8.0 User Guide (Virtualization), and
then perform 8.
If no, go to 8.

8. Run the following command to check whether the qemu-guest-agent service exists:
ps -eaf | grep qemu-ga
If the command output contains qemu-guest-agent, the qemu-guest-agent service exists in the system.

root 618 1 0 20:27 ? 00:00:00 /usr/bin/qemu-ga -p /dev/virtio-ports/org.qemu.guest_agent.0

root 12341 1663 0 20:30 tty1 00:00:00 grep --color=auto qemu-ga

If yes, go to 9.

If no, go to 13.

9. Run the following command to delete the open source qemu-guest-agent service.
The following uses CentOS as an example. For details about the commands for other OSs, see the corresponding guide.
rpm -e qemu-guest-agent

10. Run the following command to restart the VM:


reboot

11. Install Tools for the VM.


For details, see "Installing Tools on a Linux OS" in FusionCompute 8.8.0 User Guide (Virtualization).

12. Run the following command to check whether Tools is started normally:
service vm-agent status
If the command output shows that the server is in the running state, Tools is started properly.

Active: active (running) since Fri 2020-12-18.....


If yes, no further action is required.

If no, go to 13.

13. Contact technical support for assistance.

3.9.1.19 Expanding the Data Disk Capacity


This section describes how to expand the resource management scale by expanding the data disk capacity of an eDME node after eDME is installed.
You can expand the data disk capacity of an eDME node in either of the following ways: expanding the capacity of existing disks of the node or
adding disks to the node. You are advised to expand the data disk capacity by adding disks to the node.

Method of Adding Disks


1. Add virtual disks to the node. For a virtual disk, see Creating and Attaching a Disk to a VM .

2. Use PuTTY to log in to the eDME node to be expanded as user sopuser.

3. Run the su - root command to switch to user root.

4. (Optional) Run the following command to query the capacity of the opt_vol partition and the name of the new disk:

The capacity of the opt_vol partition on the data disk is critical to the resource management scale. The following procedure expands the capacity of the opt_vol partition
as an example.

lsblk

5. Run the following command to create a physical volume group with the same name on the new disk:

pvcreate /dev/Name of the new disk

6. Run the following command to add the capacity of the new physical volume group to the oss_vg volume group:

vgextend oss_vg /dev/Name of the new disk

7. Run the following command to expand the opt_vol partition:

The unit of the expanded capacity is GB, for example, 500 GB.
The expanded capacity must be less than the capacity of the disk to be added. For example, if the capacity of the new disk is 100 GB, the expanded
capacity can be 99.9 GB at most.

lvextend -L +Expanded capacity /dev/oss_vg/opt_vol

8. Run the following command to check whether the capacity expansion is successful:

lsblk
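
As a worked example of steps 4 to 8, assume the new 100 GB disk appears as /dev/vdc in the lsblk output (the device name is an assumption; use the name shown on your node):

lsblk                                  # identify the new disk, for example /dev/vdc, and note the current size of opt_vol
pvcreate /dev/vdc                      # create a physical volume on the new disk
vgextend oss_vg /dev/vdc               # add the physical volume to the oss_vg volume group
lvextend -L +99G /dev/oss_vg/opt_vol   # grow opt_vol by 99 GB, slightly less than the 100 GB disk
lsblk                                  # confirm that opt_vol now shows the larger size

If the file system inside opt_vol does not grow automatically, it may also need to be extended with resize2fs /dev/mapper/oss_vg-opt_vol, as in the method for existing disks below; treat this as an assumption and verify the usable size with df -h.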

Expanding the Capacity of Existing Disks


1. Increase the disk capacity of the node. For details, see Expanding the VM Disk Capacity .

2. Use PuTTY to log in to the eDME node to be expanded as user sopuser.

3. Run the su - root command to switch to user root.

4. Run the following command to check the disk information of the node:
fdisk -l

5. Run the following command to create a partition: /dev/vdb is used as an example. Replace it with the actual name.
fdisk /dev/vdb
Enter n, press Enter, and then enter w as prompted.

[root@eDME01 sopuser]# fdisk /dev/vdb


Welcome to fdisk (util-linux 2.35.2).

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

Command (m for help): n //Enter n.

Partition type

p primary (1 primary, 0 extended, 3 free)

e extended (container for logical partitions)

Select (default p): //Press Enter.

Using default response p.

Partition number (2-4, default 2): //Press Enter.

First sector (1237319680-1279262719, default 1237319680): //Press Enter.

Last sector, +/-sectors or +/-size{K,M,G,T,P} (1237319680-1279262719, default 1279262719): //Press Enter.

Created a new partition 2 of type 'Linux' and of size 20 GiB.

Command (m for help): w //Enter w to save the partition.

The partition table has been altered.

Syncing disks.

6. Run the following command to view the new partition: The /dev/vdb2 partition is used as an example in the following commands.
fdisk /dev/vdb
[root@eDME01 sopuser]# fdisk /dev/vdb

Welcome to fdisk (util-linux 2.35.2).

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

Command (m for help): p //Enter p to view the partition information.

Disk /dev/vdb: 610 GiB, 654982512640 bytes, 1279262720 sectors

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disklabel type: dos

Disk identifier: 0xd0bbf5a8

Device Boot Start End Sectors Size Id Type


/dev/vdb1 2048 1237319679 1237317632 590G 8e Linux LVM

/dev/vdb2 1237319680 1279262719 41943040 20G 83 Linux

To exit the command, press Ctrl+C.

7. Run the following command to read the partition again:


partprobe

8. Run the following command to create a physical volume:


pvcreate /dev/vdb2

[root@eDME01 sopuser]# pvcreate /dev/vdb2

Physical volume "/dev/vdb2" successfully created.

9. Run the following command to add the new physical volume to the oss_vg volume group:
vgextend oss_vg /dev/vdb2

10. Run the following command to expand the opt_vol partition:


lvextend -L +Expanded capacity /dev/oss_vg/opt_vol
[root@eDME01 ~]# lvextend -L +29G /dev/oss_vg/opt_vol

Size of logical volume oss_vg/opt_vol changed from 550.02 GiB (140806 extents) to 579.02 GiB (148230 extents).

Logical volume oss_vg/opt_vol successfully resized.

The unit of the expanded capacity is GB, for example, 500 GB.
The expanded capacity must be less than the expanded node disk capacity. For example, if the expanded node disk capacity is 100 GB, the maximum
expanded capacity can be 99.9 GB.

11. Run the following command to identify the partition size again:
resize2fs /dev/mapper/oss_vg-opt_vol

12. Run the following command to check whether the capacity expansion is successful:
lsblk
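
As an additional check that is not part of the original procedure, you can compare the usable file-system size before and after the expansion:

df -h | grep opt_vol

The line for /dev/mapper/oss_vg-opt_vol should show the enlarged size.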

3.9.1.20 How Do I Manually Change the System Time on a Node?

Scenarios
If no external clock source is deployed, configure the host accommodating the VRM VM as the NTP clock source. In this case, the system time on
the target host or physical server must be accurate.

Prerequisites
You have obtained the passwords of users gandalf and root of the node to be configured as the NTP clock source.

Procedure
Log in to the operating system of the node.

1. Use PuTTY to log in to the node to be set as the NTP clock source.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.

2. Run the following command and enter the password of user root to switch to user root:
su - root


3. Run the TMOUT=XXX command to set the session timeout period so that you are not automatically logged out while performing the subsequent steps.

XXX is the timeout period, in seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

Manually change the system time on a node.

4. Check whether any external NTP clock source is configured for the node.

If yes, go to 5.

If no, go to 6.

5. Run the following command to set the node as its NTP clock source:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip 127.0.0.1 -cycle 6 -timezone Local time zone -force true
For example, if the local time zone is Asia/Beijing and the node is a physical server that has VRM installed, run the following command:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip 127.0.0.1 -cycle 6 -timezone Asia/Beijing -force true

6. Run the date command to check whether the current system time is accurate.

If yes, go to 11.

If no, go to 7.

7. Run the required command to stop a corresponding process based on the node type.

If the node is a host, run the following command:


perl /opt/galax/gms/common/config/restartCnaProcess.pl

If the node is a VRM node, run the following command:


sh /opt/galax/gms/common/ha/stop_ha.sh

8. Run the following command to rectify the system time of the node:
date -s Current time
The current time must be set in HH:MM:SS format.
For example, if the current time is 16:20:15, run the following command:
date -s 16:20:15

9. Run the following command to synchronize the new time to the basic input/output system (BIOS) clock:
/sbin/hwclock -w -u

10. Run the required command to start a corresponding process based on the node type.

If the node is a host, run the following command:


service monitord start

If the node is a VRM node, run the following command:


service had start

11. After 3 minutes, run the ntpq -p command.


Information similar to the following is displayed:

remote refid st t when poll reach delay offset jitter

==============================================================================

*LOCAL(0) .LOCL. 5 l 58 64 377 0.000 0.000 0.001

If * is displayed on the left of LOCAL, the time service is running properly on the node. The node can be used as an NTP clock source.
If * is not displayed, run the ntpq -p command again five to ten minutes later to check the time service running status.


3.9.1.21 How Do I Handle the Issue that VRM Services Become Abnormal
Because the DNS Is Unavailable?

Symptom
If the configured DNS server is invalid or faulty, the system becomes abnormal after any of the following operations is performed. As a result, you cannot log in to
FusionCompute to change the DNS configuration.

Configuring the NTP

Adjusting the network

Changing the quorum server IP address

Restarting VRM nodes

Other operations that may cause VRM service or node restart

Possible Causes
An invalid DNS is configured.

The configured DNS is faulty.

Procedure
1. Use PuTTY to log in to one VRM node.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.

2. Run the following command and enter the password of user root to switch to user root:
su - root

3. Run the TMOUT=XXX command to set the session timeout period so that you are not automatically logged out while performing the subsequent steps.

XXX is the timeout period, in seconds.


If you run the TMOUT=0 command, no timeout occurs, but this poses security risks. Exercise caution when running this command.

4. Run the following command to clear the DNS configurations:


echo > /etc/resolv.conf
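
As an optional check that is not part of the original procedure, confirm that the file no longer contains any DNS server entries:

cat /etc/resolv.conf

The command should print nothing except an empty line.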

5. Run the following command to restart the VRM service:


service vrmd restart

6. Repeat 1 to 5 to clear the DNS configurations for the other VRM node.

7. Wait for 10 minutes and then check whether you can log in to FusionCompute successfully.

If yes, no further action is required.

If no, go to 8.

After the system is recovered, configure the DNS on FusionCompute again.

8. Contact technical support for assistance.


3.9.1.22 What Can I Do If an Error Message Is Displayed Indicating That the Sales Unit HCore Is Not Supported When I Import Licenses on FusionCompute?

Symptom
After FusionCompute is upgraded from a version earlier than 850 to 850 or later, or after FusionCompute 850 or later is newly installed, importing licenses
whose sales unit is HCore fails with an error message indicating that licenses with the sales unit HCore are not supported.

Possible Causes
The imported licenses contain licenses with the sales unit HCore.

Fault Diagnosis
None

Procedure
1. Log in to the iAuth platform using a W3 account and password.

2. Click Apply By Application.

3. On the Apply By Application page, select ESDP-Electronic Software Delivery Platform in Enter an application, enter GTS in Enter
Privilege, and click Search.

4. Select the required permissions and click Next.

5. Enter the application information as prompted and submit the application.

6. Log in to ESDP using the obtained ESDP account.

7. In the navigation pane, choose License Commissioning and Maintenance > License Split.

8. Click Add Node, enter the ESN, and click Search to search for the license information. Select the license information and click OK.

9. Set Region, Rep Office, Approver, and Description.


Example: Application reason: Modify the license sales unit of FusionCompute 8.6.0 from HCore to CPU.

10. After the splitting, set Product Name to FusionCompute, Version to 8, and set ESN, and click Preview License.

11. Confirm that the license splitting result meets the expectation (16 HCore: 1 CPU (round down to the nearest integer)) and click Submit.

12. Confirm the information in the displayed dialog box and click OK. The dialog box asks whether to continue license splitting because the operation
has the following effects: the annual fee NE is processed as a common NE, the annual fee time and annual fee code remain unchanged, and only
common BOMs are changed.

13. After the AMS manager approves the modification, the license splitting is complete.

14. Refresh the license on ESDP to obtain the split license file.


Related Information
None

3.9.1.23 How Do I Determine the Network Port Name of the First CNA Node?
Method 1: View the onboard network ports where network cables are inserted. They are numbered from 0 from left to right. If the network
ports are numbered to X, the corresponding deployment network port is ethX.

Method 2:

1. Retain only the network connection of the network port corresponding to the first CNA node.

a. If the operation is performed on site, you can remove unnecessary network cables.

b. If the operation is performed remotely, you can disable unnecessary network ports on the switch.

2. Mount the CNA image to view the network port name.

a. Log in to the iBMC WebUI of the first node and go to the remote virtual console.

b. Click the CD/DVD icon on the top, select the image file, and click Connect.

c. After the connection is successful, click the Boot Options icon on the navigation bar and change the boot mode to
CD/DVD.

d. Click Power Control, select Forced Restart, and click OK. Wait for the server to restart.

e. After the server is started, select Installation on the installation option page and press Enter.

f. When the main installation window is displayed, choose Network > IPv4. The IPv4 network configuration page is
displayed.

g. On the IPv4 network configuration page, you can view all NIC addresses. If a network port name is marked with an
asterisk (*), the network is connected. ethX corresponding to the network port marked with an asterisk (*) is the
network port name of the first CNA node.

Method 3: Log in to the target host over SSH and run the ethtool -p ethX command. The indicator of the corresponding physical network port lights up or blinks,
so you can determine the eth number of the network port of the first CNA node by trying each port in turn, as in the example below.
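
A minimal sketch of this check, assuming the candidate port is eth0 (repeat for eth1, eth2, and so on until the correct port lights up):

ethtool -p eth0 10    # blink the indicator of eth0 for 10 seconds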

Method 4: For some servers, you can view the MAC address of each network port on iBMC and determine the network port name of the first
CNA node based on the MAC address of this network port.

1. On iBMC, check the MAC address of the network port of the first CNA node.

2. Perform step 2 in method 2. ethX corresponding to the MAC address of the network port of the first CNA node is the name of the
network port on the first CNA node.

3.9.1.24 Troubleshooting

Problem 1: Changing the Non-default Password of gandalf for Logging In to Host 02 to the
Default One
Possible cause: The default password of the gandalf user for logging in to host 02 is changed.
Solution: Change the password of the gandalf user for logging in to host 02 to the default password.

Problem 2: Host Unreachable


Possible cause 1: The host is faulty.
Solution: Check whether the host is faulty (for example, restart or stop the host). Ensure that the host is running properly and perform the operation
that failed last time again.
Possible cause 2: When the host OS starts, the user changes the default boot path. In this case, the host does not boot from the default KVM kernel.
Solution: Restart the host. After the host is successfully restarted, perform the operation that failed last time again.

Possible cause 3: The IP address of a host conflicts with the IP address of another device on the network.

Solution:

Reinstall the target host and assign an IP address that does not conflict with that of other devices on the network to the host.

Use the tool to install VRM nodes again.

Possible cause 4: When the installation tool PXE is used to install hosts in a batch, the host installation progress varies. The IP addresses of the
installed hosts are temporarily occupied by those which are being installed.
Solution: Ensure that the address segment of the DHCP pool is different from the IP addresses of the planned host nodes to avoid IP address
conflicts. For details, see "Data Preparation" in FusionCompute 8.8.0 Product Documentation. You are advised to install a maximum of 10 hosts at a
time.

Problem 3: Incorrect Password of root for Logging In to Host 02


Possible causes:

The root password is incorrect.

The root account is locked because incorrect passwords are entered for multiple consecutive times.

Solution:

If the password of the root user is incorrect, enter the correct password.

If the root account is locked, wait for 5 minutes and try again.

Problem 4: Duplicate Host OS Names at the Same Site


Possible cause: The host OS names at the same site are duplicate.
Solution:
Log in to the OS of host 02 and run the following command:
sudo hostnamectl --static set-hostname host-name
If no command output is displayed, the change is successful. Continue the installation.
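To double-check the new name (an optional step), query the configured static hostname:
hostnamectl status | grep "Static hostname"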

Problem 5: The Host Where the Installation Tool Is Installed Does Not Automatically Start
Services After Being Restarted
Possible cause: The command for automatically starting services fails to be executed during the host startup.
Solution:

1. Use PuTTY to log in to the MCNA node.


Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.

The MCNA node is the CNA node on which the installation tool is installed.

2. Run the following command and enter the password of user root to switch to user root:
su - root

3. Run the following command to start the installation service:


webInstaller start

Problem 6: PXE-based Host Installation Failed or Timed Out


Possible cause: The firewall on the local PC blocks the communication between the PC and the host.
Solution: Disable the Windows firewall and other software firewalls on the local PC and start the hosts through the network.

Possible cause: The IP address of the installation tool node (that is, the configured DHCP service address) cannot communicate with the
installation plane.

Solution:

Check the physical connection between the installation tool node and the host to be installed. Ensure that no hardware fault, such as
network cable or network port damage, occurs.

Check the physical devices between the installation tool node and the host to be installed, such as switches and firewalls. Ensure that the
DHCP, TFTP, and FTP ports are not disabled or the rates of the ports are not limited.

TFTP and FTP have security risks. You are advised to use secure protocols such as SFTP and FTPS.

If the IP address of the installation tool is not in the network segment of the installation plane, check whether the DHCP relay is
configured on the switch.

If a VLAN is configured for the host management plane, ensure that the installation plane and the host management plane are in different
VLANs and the installation tool can communicate with the two planes.

During PXE-based installation, ensure that the data packets on the PXE port do not carry VLAN tags and that the network configuration allows these packets.
On the switch, configure the port of a node to be installed in PXE mode with the same PVID as nodes installed in non-PXE mode, with the VLAN untagged.

After the possible faults are rectified, boot the hosts from the network again.

Possible cause: Multiple DHCP servers are deployed on the installation plane.
Solution: Disable redundant DHCP servers to ensure that the installation tool provides DHCP services.

Possible cause: The host to be installed is connected to multiple network ports, and DHCP servers exist on the network planes of multiple
network ports.
Solution: Disable DHCP servers on non-installation planes to ensure that the installation tool provides DHCP services.

Possible cause: The host to be installed supports boot from the network, but this function is not configured during booting.
Solution: Configure hosts to be installed to boot from the network (by referring to corresponding hardware documentation), and then update
the host installation progress to Installation Progress in the Install Host step of the PXE process.

Possible cause: Hosts to be installed do not support booting from the network.
Solution: Install the hosts by mounting ISO images.

Possible cause: Hosts to be installed fail to boot from the network.


Solution: Use the remote control interface of the host baseboard management controller (BMC) to observe the host boot process and
troubleshoot the hosts based on the messages displayed on the interface and the host hardware documentation.

Possible cause: Packet loss or delay occurs due to network congestion or high loads on the switch.
Solution: Ensure that the network workloads are light during the installation process. If more than 10 hosts are to be installed, boot 10 hosts
from the network per batch.

Problem 7: Automatic Logout After Login Using a Firefox Browser Is Successful but an Error
Message Indicating that the User Has Not Logged In or the Login Times Out Is Displayed When
the User Clicks on the Operation Page
Possible cause: The time of the server where the FusionCompute web installation tool is deployed is not synchronized with the local time. As a
result, the Firefox browser considers that the registered session has expired.
Solution: Change the local time, or run the date -s xx:xx:xx command (xx:xx:xx indicates hours:minutes:seconds) on the server, so that the local time
is the same as the time of the server where the web installation tool is deployed. Then refresh the browser and log in again.


Problem 8: Alarm "ALM-15.1000103 VM Disk Usage Exceeds the Threshold" Is Generated


During Software Installation
Possible cause: The system detects the VM disk usage every 60 seconds. This alarm is generated when the system detects that the VM disk usage is
greater than or equal to the specified alarm threshold for three consecutive times.
Solution: The current system disk is used for system installation and is not allocated to service systems. The capacity meets the planning
requirements. You do not need to handle the alarm. For details about how to mask the alarm for a VM object, see Adding an Alarm Masking Rule .

3.9.2 Common Operations


How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?

Logging In to FusionCompute

Installing Tools for eDME

Uninstalling the Tools from a Linux VM

Checking the Status and Version of the Tools

Configuring the BIOS on Hygon Servers

Setting Google Chrome (Applicable to Self-Signed Certificates)

Setting Mozilla Firefox

Obtaining HiCloud Software Packages from Huawei Support Website

Restarting Services

3.9.2.1 How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?

Scenarios
This section guides you to use PuTTY and the private key required for authentication to log in to a target node.

Prerequisites
You have obtained the private key certificate matching the public key certificate.
You have obtained the password of the private key certificate if the private key certificate is encrypted.

Procedure
1. Check whether PuTTY was used to log in to the target node in private-public key pair authentication mode on the local PC.

If yes, go to 7.

If no or you cannot confirm, go to 2.

2. Run PuTTY and enter the IP address of the target node and the SSH port number (22 by default).

3. In the Category area in the left pane, choose Connection > SSH > Auth.
The SSH authentication configuration page is displayed.

4. Click Browse, select the prepared private key certificate in the displayed window, and click Open.


The file name extension of the private key certificate is *.ppk. Contact the administrator to obtain the private key certificate.
The following figure shows the screen after the configuration.

Figure 1 Configuring the private key certificate

5. In the Category area in the left pane, select Session.


The main page is displayed.

6. To facilitate subsequent access, create a custom session in Saved Sessions and click Save.
The following figure shows the session configuration page.

Figure 2 Saving a session


After this step, go to 8.

7. Select a saved session and click Load.

8. Click Open.

9. Enter the username for logging in to the target node as prompted.


If the private key certificate is encrypted, enter the password of the private key certificate as prompted.

3.9.2.2 Logging In to FusionCompute

Scenarios
This section guides administrators to log in to FusionCompute to manage virtual, service, and user resources in a centralized manner.

Prerequisites
Conditions

The browser for logging in to FusionCompute is available.

You have configured the Google Chrome or Mozilla Firefox browser. For details, see Setting Google Chrome (Applicable to Self-Signed
Certificates) or Setting Mozilla Firefox .

The browser resolution is set to 1280 x 1024 or higher based on the service requirement to ensure the optimum display effect on
FusionCompute.

If the security certificate was not installed when the Google Chrome browser was set, the browser may display a message indicating that the web page cannot be
displayed upon first login to FusionCompute or to a VM using VNC. In this case, press F5 to refresh the web page.
The system supports the following browsers:

Google Chrome 118, Google Chrome 119, and Google Chrome 120
Mozilla Firefox 118, Mozilla Firefox 119, and Mozilla Firefox 120
Microsoft Edge 118, Microsoft Edge 119, and Microsoft Edge 120

Data
Table 1 describes the data required for performing this operation.

Table 1 Required data

Parameter: IP address of the VRM node
Description: Specifies the floating IP address of the VRM nodes if the VRM nodes are deployed in active/standby mode.
Example value: 192.168.40.3

Parameter: Username/Password
Description: Specifies the username and password used for logging in to FusionCompute.
Example value (common mode): Username admin. Password: for tool-based VRM installation, the password set during the installation; for manual VRM installation using an ISO image, the password set when executing the initialization script after the installation is complete.

Parameter: User type
Description: Specifies the type of the user to log in to the system. Local user: log in to the system using a local username and password. Domain user: log in to the system using a domain username and password.
Example value: Local user

Parameter: Login type
Description: Specifies the login type. Local: used to log in to FusionCompute of the current site.
Example value: Local


Procedure
1. Open Mozilla Firefox.

2. Enter the following address and press Enter.


For IPv4, enter https://[IP address of the VRM node]:8443.
If two VRM nodes are deployed in active/standby mode, the IP address is the floating IP address of the VRM nodes.
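For example, with the sample floating IP address from Table 1, you would enter https://192.168.40.3:8443; substitute the address of your own VRM node.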

When accessing the IP address, the system automatically converts the IP address into the HTTPS address to improve access security.
If a firewall is deployed between the local PC and FusionCompute, enable port 8443 on the firewall.

The login page is displayed.


If the FusionCompute management system page is displayed, single sign-on (SSO) is not configured.

3. Set Username and Password, select User type, and click Login. If you attempt to log in to the system again after the initial login fails, you
also need to set Verification code.
Enter the username and password based on the permission management mode configured during VRM installation.

If it is your first login using the administrator username, the system will ask you to change the password of the admin user.
The password must meet the following requirements:
The password contains 8 to 32 characters.
The password must contain at least one space or one of the following special characters: `~!@#$%^&*()-_=+\|[{}];:'",<.>/?
The password must contain at least two types of the following characters:
Uppercase letters
Lowercase letters
Digits

The FusionCompute management page is displayed after you log in to the system.

The user is automatically logged out of the FusionCompute management system in case of any of the following circumstances:

The current user's session times out.


The system administrator deletes the current user.
The system administrator manually locks the current user out.
The maximum number of connections allowed by the current user has been changed to a smaller value, and the number of login sessions of this user has
exceeded the changed value.

After you log in to FusionCompute, you can learn the product functions from the online help, product tutorial, and alarm help.

3.9.2.3 Installing Tools for eDME

Scenarios
After bare VM creation and OS installation are complete for eDME, you need to install Tools provided by Huawei on the VMs to improve the VM
I/O performance and implement VM hardware monitoring and other advanced functions. Some features are available only after Tools is installed.
For details about such features, see their prerequisites or constraints.
In addition to using Tools delivered with FusionCompute, you can also obtain the FusionCompute SIA software package that is compatible with
FusionCompute and OS from Huawei official website. After obtaining the FusionCompute SIA software package, you can install the latest Tools to
use new features. For details about the compatibility information and installation guide, see the FusionCompute SIA product documentation.

Prerequisites
Conditions

An OS has been installed on the VM.

Tools has not been installed on the VM. If Tools has been installed, uninstall it by referring to Uninstalling the Tools from a Linux VM .


The free space of the system disk must be greater than 20 MB.

You have installed the gzip tool for the VM OS. For details about how to install gzip, see the product documentation of the OS in use. You can
run the tar command to decompress the software package.

Procedure
Mount Tools to the VM.
The Tools installation file is stored on the host. The VM can access the installation file only after Tools is mounted to the VM.

1. In the navigation pane, click .


The Resource Pool page is displayed.

2. Click the VM tab.

3. Locate the row that contains the target VM and choose More > Tools > Mount Tools.
A dialog box is displayed.

4. Click OK.

Log in to the VM using VNC.

5. Locate the row that contains the target VM and choose Log In Using VNC.
The VNC login window is displayed.

6. Log in to the VM using VNC as user root.

Install Tools.

7. On the VM desktop displayed in the VNC window, enter the command line interface (CLI) mode (For instructions about how to enter the
CLI mode, see the OS operation guide).
The CLI window is displayed.

8. Run the following command to check whether the qemu-guest-agent service exists:
ps -eaf | grep qemu-ga
If information similar to the following is displayed and contains qemu-ga or qemu-guest-agent, the qemu-guest-agent service exists in the
OS:

root 618 1 0 20:27 ? 00:00:00 /usr/bin/qemu-ga -p /dev/virtio-ports/org.qemu.guest_agent.0

root 12341 1663 0 20:30 tty1 00:00:00 grep --color=auto qemu-ga

If yes, go to 9.

If no, go to 11.

9. Run the following command to delete the open source qemu-guest-agent service.
The following uses CentOS as an example. For details about the commands for other OSs, see the corresponding guide.
rpm -e qemu-guest-agent

10. Run the following command to restart the VM:


reboot

11. Run the following command to create an xvdd directory:


mkdir xvdd

12. Run the following command to mount a CD/DVD-ROM drive to the VM:
mount mounting path xvdd
For example, mount /dev/sr0 xvdd

For the Kylin V10 OS, run the following command to mount the CD/DVD-ROM drive to the VM:
mount -t iso9660 -o loop mounting path xvdd


The directory for mounting a CD/DVD-ROM drive to a VM varies with the version of Linux OS running on the VM.

For details about the OSs and corresponding mounting directories in the x86 architecture, see Table 1.

All OSs supported in the Arm architecture support both the ISO file and CD/DVD-ROM drive, and the mount directory can only be
/dev/sr0. Table 2 lists some OSs and mount directories. For details about other OSs, see FusionCompute SIA Guest OS Compatibility
Guide (Arm).

Table 1 Mapping between Linux OS versions and mounting directories (x86 architecture)

Linux Version Supported CD/DVD-ROM Drive Type Mounting Directory

All EulerOS versions (including openEuler) ISO file or CD/DVD-ROM drive /dev/sr0

Table 2 Mapping between Linux OS versions and mounting directories (Arm architecture)

Linux Version Supported CD/DVD-ROM Drive Type Mounting Directory

EulerOS 2.8 to 2.9 64-bit ISO file or CD/DVD-ROM drive /dev/sr0

13. Run the following command to switch to the xvdd directory:


cd xvdd

14. Run the following command to view the required Tools installation package:
ls
The following information is displayed:

.bz2 package:

...

vmtools-xxxx.tar.bz2

vmtools-xxxx.tar.bz2.sha256

.gz package:

...

vmtools-xxxx.tar.gz

vmtools-xxxx.tar.gz.sha256

15. Run the following commands to copy the Tools installation package to the root directory:

.bz2 package:
cp vmtools-xxxx.tar.bz2 /root
cd /root

.gz package:
cp vmtools-xxxx.tar.gz /root
cd /root

16. Run the following command to decompress the Tools installation package:

.bz2 package:
tar -xjvf vmtools-xxxx.tar.bz2

.gz package:
tar -xzvf vmtools-xxxx.tar.gz

17. Run the following command to go to the Tools installation directory:


cd vmtools

18. Run the following command to install Tools:


./install


If the following information is displayed, the installation is complete:

The UVP VMTools is installed successfully.

Reboot the system for the installation to take effect.

If the following information is displayed, the Tools version is incompatible with the VM OS. Download the latest Tools and install it; if the installation
still fails, contact technical support:

unsupported linux version

The latest Tools software package is stored in FusionCompute_SIA-xxx-GuestOSDriver_xxx.zip. Download the software package of
the latest version from the following website:

For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it.

For carrier users: Visit https://support.huawei.com , search for the document by name, and download it.

After downloading the software package, perform the following steps:

a. Obtain the vmtools-linux.iso package from the following directory in the software package (version 8.1.0.1 in the x86 scenario
is used as an example here):
FusionCompute_SIA-8.1.0.1-GuestOSDriver_X86.zip\uvp-vmtools-3.0.0-019.060.x86_64.rpm\uvp-vmtools-3.0.0-
019.060.x86_64.cpio\.\opt\patch\programfiles\vmtools\

b. Unmount Tools. Locate the row that contains the target VM and choose More > Tools > Unmount Tools.

c. Mount the file to the VM. For details, see "Mounting a CD/DVD-ROM Drive or an ISO File" in FusionCompute 8.8.0 User
Guide (Virtualization).

d. Perform 12 again.

Run the ./install -i command to install Tools if the x86 VM runs one of the following OSs:
DOPRA ICTOM V002R003 EIMP
DOPRA ICTOM V002R003 IMAOS
Red Hat Enterprise Linux 3.0
Red Hat Enterprise Linux 3.4

Verify the installation.

After Tools is installed, restart the VM for Tools to take effect.

19. Run the following command to restart the VM:


reboot

20. After the VM restarts, log in to the VM using VNC as user root.

21. On the VM desktop displayed in the VNC window, enter the command line interface (CLI) mode (For instructions about how to enter the
CLI mode, see the OS operation guide).
The CLI window is displayed.

22. Run the following command to check the Tools installation:


service vm-agent status
If the server is in the running status, Tools is successfully installed.

kvm-SLES11SP1x64:~ # service vm-agent status

server (pid 4190 4159) is running ...

If Tools fails to be installed on some Arm-based OSs, for example, Kylin and UOS, see "What Should I Do If Tools Installed on Some OSs Fails to be
Started?" in FusionCompute 8.8.0 Maintenance Cases.


Additional Information
Related Tasks
Uninstalling the Tools from a Linux VM
Related Concepts
Introduction to Tools

3.9.2.4 Uninstalling the Tools from a Linux VM

Scenarios
Uninstall the Tools from a VM if the Tools malfunctions or you have misoperated the VM.

After you uninstall the Tools, install it again in a timely manner. Otherwise, the VM performance deteriorates, VM hardware cannot be monitored, and other advanced
VM functions become unavailable.
For details about Tools functions, see Introduction to Tools .

Impact on the System


After the Tools is uninstalled, the VM performance deteriorates, VM hardware cannot be monitored, and other advanced VM functions become
unavailable.

Prerequisites
Conditions
Tools has been installed on the VM.

Procedure
Uninstall the Tools.

1. Log in to the VM as user root using Virtual Network Computing (VNC).

2. On the VM desktop displayed in the VNC window, enter the command line interface (CLI) mode (For instructions about how to enter the
CLI mode, see the OS operation guide).
The CLI window is displayed.

3. Run the following command to uninstall the Tools:


sh /etc/.vmtools/uninstall

Warning: If the guest is suspicious of having no virtio driver,uninstall UVP VMTools may cause the guest inoperatable after being rebooted.

Press 'Y/y' to continue. Do you want to uninstall? [Y/n]

If Tools cannot be uninstalled by performing the preceding operation, run the following command to uninstall it again:
cd /etc/.vmtools
./uninstall

4. Enter y and press Enter.


The Tools is uninstalled if the following information is displayed:
The UVP VMTools is uninstalled successfully.

Reboot the system for the installation to take effect.

5. Run the following command to restart the VM:


reboot


Follow-up Procedure
After you uninstall the Tools, install it again in a timely manner. Otherwise, the VM performance deteriorates, VM hardware cannot be monitored,
and other advanced VM functions become unavailable.
For details, see Installing Tools for eDME .

3.9.2.5 Checking the Status and Version of the Tools

Scenarios
During Tools upgrade preparation or verification process, check the running status and version of the Tools on FusionCompute.
Tools can be in one of the following states:

Not Running: No Tools is installed on the VM.

Running: Tools is running properly. However, the system failed to obtain Tools version due to a network fault or the version mismatch
between the VRM node and the host.

Running (Current Version: x.x.x.xx): Tools is running properly and the Tools version is x.x.x.xx.

Not Running (Current Version: x.x.x.xx): Tools has been installed on the VM that is in the Stopped or Hibernated state and Tools was
x.x.x.xx when the VM was running last time.

Prerequisites
Conditions
You have logged in to FusionCompute.

Procedure
Search for a VM.

1. In the navigation pane, click .


The Resource Pool page is displayed.

2. On the VM tab page, enter the search criteria and click .


The query result is displayed.
Search criteria can be Type, Status, Name, IP Address, MAC Address, ID, Description, and UUID.

3. (Optional) On the VM page, click Advanced Search on the top of the VM list, enter or select the search criteria, and then click Search.
The query result is displayed.
Search criteria can be IP Address, VM ID, VM Name, MAC Address, Description, UUID, Tools Status, Cluster/Host, Type, and Status.

View the Tools version.

4. Click the name of the VM to be queried.


The Summary page is displayed.

5. Locate the row that contains Tools on the Summary page and view the Tools version.

Tools can be in one of the following states:

Not Running: No Tools is installed on the VM.

Running: Tools is running properly. However, the system failed to obtain Tools version due to a network fault or the version mismatch
between the VRM node and the host.

Running (Current Version: x.x.x.xx): Tools is running properly and the Tools version is x.x.x.xx.

Not Running (Current Version: x.x.x.xx): Tools has been installed on the VM that is in the Stopped or Hibernated state and the
Tools version was x.x.x.xx when the VM was running last time.



3.9.2.6 Configuring the BIOS on Hygon Servers


This section describes how to configure the BIOS, including the server boot mode, network driver mode, NIC PXE configuration, BIOS password,
and GUI language. For details about BIOS settings, see the user manual of the corresponding server model at the official website of the server.

Only Hygon servers support BIOS configuration. For details about the BIOS parameters, see the server vendor's configurations.

Two Methods for Accessing the BIOS


Enter the BIOS during startup. When the screen displays Suma, press Del. The system enters the BIOS setting program. Then, select subitems
using arrow keys and press Enter to enter the submenu.

Access the BIOS from the OS: run ipmitool chassis bootdev bios && ipmitool power reset or ipmitool power cycle. After the server restarts, the
BIOS setting page is displayed.
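
For reference, the in-OS method can be run as follows (a minimal sketch: it assumes the ipmitool package is installed in the host OS and that the commands are run as the root user):

# Set the next boot device to the BIOS setup and restart the server.
ipmitool chassis bootdev bios && ipmitool power reset
# Alternatively, power cycle the server instead of resetting it.
ipmitool chassis bootdev bios && ipmitool power cycle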

Dimmed options are unavailable. Options that contain submenus are marked with a submenu symbol.
For details about how to set baseline parameters, see Table 1.

Table 1 Basic parameters

Main Menu Sub Menu Required Parameter

CPU SMT Mode Enable

SVM Mode Enable

Core Performance Boost Enabled

Global C-state Control Disable

SR-IOV Enable

RDSEED and RDRAND Control Auto

L1 Stream HW Prefetcher Auto

L2 Stream HW Prefetcher Auto

Numa Mode Enabled

Memory Bank Interleaving Enable

Channel Interleaving Enable

IO IOMMU Enabled

SATA Mode AHCI

Volatile Write Cache Disabled

Management Configuration Console Redirection (SOL) Enabled

Terminal Type (SOL) VT100+

Bits per second (SOL) 115200

Flow Control (SOL) None

Data Bits (SOL) 8

Parity (SOL) None

Stop Bits (SOL) 1

BMC Time SyncMode Local Time


Boot Option Boot mode UEFI only

Network Do not launch

Legacy PXE boot retry 0

UEFI PXE retry Count 1

Network Stack Configuration Network Stack Enabled

Misc Option BIOS Hotkey Support Enabled

3.9.2.7 Setting Google Chrome (Applicable to Self-Signed Certificates)

Scenarios
This section guides administrators through configuring the Google Chrome browser before logging in to FusionCompute for the first time. After the
configuration, you can use Google Chrome to perform operations on FusionCompute.
Related configurations for Google Chrome, such as certificate configuration, are required.
Google Chrome 115 is used as an example.

If the security certificate is not installed when the Google Chrome browser is configured, the download capability and speed for converting a VM to a template and
importing a template are limited.

Prerequisites
Conditions

You have logged in to FusionCompute using Google Chrome 118 or later.

You have obtained the IP address of the VRM node.

Data
Data preparation is not required for this operation.

Procedure
Enter the login page.

1. Open Google Chrome.

2. Enter the following address and press Enter.


IPv4

Active/standby deployment: https://[Floating IP address of the VRM nodes]:8443

Single-node deployment: https://[IP address of the VRM node]:8443

3. Click Continue to this website (not recommended).


In common mode, the FusionCompute login page is displayed.

If a firewall is deployed between the local PC and FusionCompute, enable port 8443 on the firewall.
The HTTPS protocol used by FusionCompute supports only TLS 1.2. If SSL 2.0, SSL 3.0, TLS 1.0, or TLS 1.1 is used, the FusionCompute system cannot be
accessed.
If Google Chrome slows down after running for a period of time and no data needs to be saved, press F6 on the current page to move the cursor to the address
bar of the browser. Then, press F5 to refresh the page and increase the browser running speed.
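
If you need to confirm from the local PC that only TLS 1.2 is accepted, a quick check can be made with OpenSSL (a sketch only; 192.168.10.10 is a placeholder for the VRM IP address, and the available protocol options depend on the OpenSSL build on the PC):

# Expected to succeed: TLS 1.2 handshake with the FusionCompute portal.
openssl s_client -connect 192.168.10.10:8443 -tls1_2 < /dev/null
# Expected to fail: TLS 1.1 and older protocols are rejected by the server.
openssl s_client -connect 192.168.10.10:8443 -tls1_1 < /dev/null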

Export the root certificate (Windows 10).


4. In the address bar of the browser, click Not secure.


Select Certificate is not valid. The Certificate Viewer dialog box is displayed.

5. Click the Details tab.


In the Certificate Hierarchy area, select the root certificate (that is, ssoserver at the top layer).

6. Click Export in the lower right corner of the dialog box.


Select a path for saving the certificate and click Save.

Import the root certificate (Windows 10).

7. Click the menu icon next to the address bar of the browser and click Settings.
Select Privacy and security.

8. On the Privacy and security tab page, click Security.


Click Manage certificates.

9. Click Import.
The Certificate Import Wizard dialog box is displayed.

10. Perform operations as prompted and go to the next step.

11. Click Browse on the line where the file name is located.
Select the exported certificate.

To use a self-signed certificate, you need to generate a root certificate, issue a level-2 certificate based on the root certificate, use the level-2 certificate as the
web certificate, and import the root certificate to the certificate management page of the browser.

12. Click Next.


The Certificate Store dialog box is displayed.

13. Select Place all certificates in the following store and click Browse.
The Select Certificate Store dialog box is displayed.

14. Select Trusted Root Certification Authorities and click OK.


The Certificate Store dialog box is displayed.

15. Click Next.


The Completing the Certificate Import Wizard dialog box is displayed.

16. Click Finish.


The Security Warning dialog box is displayed.

17. Click OK.


A dialog box is displayed, indicating that the importing is successful.

18. Click OK.


You are returned to the certificate management dialog box.

Clear browsing data.

19. Click the menu icon next to the address bar of the browser and select More tools.
Click Clear browsing data.
The Clear browsing data dialog box is displayed.

20. Select the following options:

Browsing history

Cookies and other site data

Cached images and files

21. Click Clear data.


Browsing data is deleted.


22. Close all Chrome tabs and restart Chrome.

23. In the address bar of the browser, repeat step 2 to access the login page. You can see that Not secure is no longer displayed in non-Chinese
cryptographic algorithm scenarios.

3.9.2.8 Setting Mozilla Firefox

Scenarios
This section guides administrators through configuring the Mozilla Firefox browser before logging in to FusionCompute for the first time so that they
can use Mozilla Firefox to perform operations on FusionCompute.

Prerequisites
Conditions
You have obtained the floating IP address of the VRM management nodes.
Data
Data preparation is not required for this operation.

Procedure
1. Open Mozilla Firefox.

2. Enter the following address and press Enter.


For IPv4, enter https://[IP address of the VRM node]:8443.

If a firewall is deployed between the local PC and FusionCompute, enable port 8443 on the firewall.

3. Expand I Understand the Risks and click Add Exception.


The Add Security Exception window is displayed.

4. Verify that Permanently store this exception is selected and click Confirm Security Exception.
Mozilla Firefox setting is complete.

3.9.2.9 Obtaining HiCloud Software Packages from Huawei Support Website


Obtaining GDE Software Packages

Obtaining Product Software Packages

3.9.2.9.1 Obtaining GDE Software Packages


Obtain x86-based and Arm-based GDE software packages by referring to x86 and Arm, respectively.
x86

Arm

3.9.2.9.1.1 x86

GDE Kernel Software Packages

Table 1 GDE Kernel software packages

No. Package Name Package Description Obtaining Method


1 GDEKernel_25.1.0.SPC6_Software_Any-Any_Pkg- - Access the path for obtaining GDE


ovs-upgrade-tool-Any.zip Kernel software packages at the Support
website, select the 25.1.0.SPC6 version,
2 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel basic O&M software package. and download the required software
X86_Docker-apmbase-Any.7z packages.

3 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel stack component installation


X86_Docker-aos-Any.7z package.

4 GDEKernel_25.1.0.SPC6_Software_EulerOS- Basic package of the GDE Kernel OM zone.


X86_Docker-apme-Any.7z

5 GDEKernel_25.1.0.SPC6_Software_EulerOS- Log package of the GDE Kernel OM zone.


X86_Docker-apmlog-Any.7z

6 GDEKernel_25.1.0.SPC6_Software_EulerOS- Installation packages of the backup and


X86_Docker-BackupService-Any.7z restoration service.

7 GDEKernel_25.1.0.SPC6_Software_EulerOS-
X86_Docker-BackupWebsite-Any.7z

8 GDEKernel_25.1.0.SPC6_Software_EulerOS-
X86_Pkg-BackupAgentOM-Any.7z

9 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel frontend framework package.


X86_Docker-console-Any.7z

10 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel IAM component installation


X86_Docker-iam-Any.7z package.

11 GDEKernel_25.1.0.SPC6_Software_EulerOS- Basic component package of the GDE Kernel


X86_Docker-k8s-Any.7z management zone.

12 GDEKernel_25.1.0.SPC6_Software_EulerOS- Plugin package of the nodes that are


X86_Docker-k8s-plugin-Any.7z heterogeneously managed, which is used to
manage data zone x86 nodes when the
management zone is deployed in Arm+EulerOS
scenarios.

13 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel OVS and Canal component


X86_Docker-ovs-canal-Any.7z installation package. If the installation package
is available on GKit, you do not need to add
them again.

14 GDEKernel_25.1.0.SPC6_Software_EulerOS- Management zone plugin package. The p2p


X86_Docker-p2p-Any.7z plugin can accelerate the image and software
package download in a large-scale cluster.

15 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel POM component installation


X86_Docker-pom-Any.7z package.

16 GDEKernel_25.1.0.SPC6_Software_EulerOS- Prometheus component installation package.


X86_Docker-Prometheus-OP.7z The component is mandatory in the
management zone and optional in the data zone.

17 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel PSM component installation


X86_Docker-psm-Any.7z package.

18 GDEKernel_25.1.0.SPC6_Software_EulerOS- SLM component installation package.


X86_Docker-SLM-Any.7z

19 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel TRM component installation


X86_Docker-trm-Any.7z package.

20 GDEKernel_25.1.0.SPC6_Software_EulerOS- Software package required by the management


X86_Pkg-ovs-ipvlan-Any.zip zone, and common DSP software package in
the data zone.

21 GDEKernel_25.1.0.SPC6_Tool_Any-Any_Pkg- Management zone online upgrade tool package.


NoKit-Any.7z

22 DSP_25.1.0.SPC6_Tool_Any-Any_Pkg-GKit- GKit tool package.


Any.zip


DSP Software Packages

Table 2 DSP software packages

No. Package Name Package Description Obtaining Method

1 DSP_25.1.0.SPC6_Software_Any-Any_Any- Asset package of information collection and Access the path for downloading DSP
Assets-Any.zip inspection. The package is used to help O&M software packages at the Support website,
personnel detect, locate, and resolve problems, find Digital Service Platform
improving the maintainability of Data Cube services. 25.1.0.SPC6, and download the required
software packages.
2 DSP_25.1.0.SPC6_Software_Any-Any_Pkg- Software package on which the data zone depends. It
DSPBase-Any.7z supports public key creation, installation, upgrade,
and secondary development in the data zone.

3 DSP_25.1.0.SPC6_Software_Any-Any_Any- Middleware service inspection collection package,


MiddlewareAssets-Any.zip which is mandatory.

4 DSP_25.1.0.SPC6_Software_EulerOS- GaussDB visualized O&M service package for the


X86_Docker-WhitZard-Any.7z management zone.

5 DSP_25.1.0.SPC6_Software_EulerOS- GaussDB V1R3 installation package.


X86_Pkg-GaussDBV1R3-Any.7z

6 DSP_25.1.0.SPC6_Software_EulerOS- GaussDB V3R1 installation package.


X86_Pkg-GaussDBV3R1-Any.7z

7 DSP_25.1.0.SPC6_Software_EulerOS- Installation package of the container-based GaussDB


X86_Docker-GaussDBV1R3-Any.7z service used in the management zone.

8 DSP_25.1.0.SPC6_Software_Any-Any_Pkg- Huawei JRE installation package.


JREClient-Any.zip

9 DSP_25.1.0.SPC6_Software_EulerOS- Layered service image package.


X86_Docker-Boot-Any.7z

10 DSP_25.1.0.SPC6_Software_EulerOS- CSE installation package.


X86_Docker-CSEBase-Any.7z

11 DSP_25.1.0.SPC6_Software_EulerOS- Image package used by services in the data zone to


X86_Docker-CSEMesher-Any.7z interwork with CSE in Mesher mode.

12 DSP_25.1.0.SPC6_Software_EulerOS- CSE installation package.


X86_Docker-CSEService-Any.7z

13 DSP_25.1.0.SPC6_Software_EulerOS- Software package required when the custom indicator


X86_Docker-ExporterService-Any.7z collection function is enabled. The PCE, DIS, and
BDI services of Data Cube depend on this software
package.

14 DSP_25.1.0.SPC6_Software_EulerOS- Installation package of GDE Log Service (GLS).


X86_Docker-GLS-Any.7z

15 DSP_25.1.0.SPC6_Software_EulerOS- LB service installation packages.


X86_Docker-LB-Any.7z

16 DSP_25.1.0.SPC6_Software_EulerOS-
X86_Docker-LBKeepalived-Any.7z

17 DSP_25.1.0.SPC6_Software_EulerOS- Software package required by the management zone,


X86_Docker-LibForLayer-OP.7z and common DSP software package in the data zone.

18 DSP_25.1.0.SPC6_Software_EulerOS- License service installation package.


X86_Docker-License-Any.7z

19 DSP_25.1.0.SPC6_Software_EulerOS- Software package of the log analysis service. To


X86_Docker-LogAnalysisService-OP.7z install the log analysis service, you need to install the
ClickHouse, Filebeat, and ConfigCenterService
services that the log analysis service depends on.

20 DSP_25.1.0.SPC6_Software_EulerOS- MiddlewareConsole service installation package.


X86_Docker-MiddlewareConsole-Any.7z


21 DSP_25.1.0.SPC6_Software_EulerOS- Object storage service installation package.


X86_Docker-Minio-Any.7z

22 DSP_25.1.0.SPC6_Software_EulerOS- RabbitMQ service installation package.


X86_Docker-RabbitMQ-Any.7z

23 DSP_25.1.0.SPC6_Software_EulerOS- Data-zone Redis service installation package on


X86_Docker-Redis-Any.7z which the management zone depends.

24 DSP_25.1.0.SPC6_Software_EulerOS- RedisCluster service installation package.


X86_Docker-RedisCluster-Any.7z

25 DSP_25.1.0.SPC6_Software_EulerOS- TSM service installation package.


X86_Docker-TSM-Any.7z

26 DSP_25.1.0.SPC6_Software_EulerOS- XMS installation package.


X86_Docker-XMS-Any.7z

27 DSP_25.1.0.SPC6_Software_EulerOS- MiddlewareAgent service installation package, which


X86_Pkg-MiddlewareAgent-Any.7z is required when the GaussDB V1R3, GaussDB
V3R1, Elasticsearch, and MinIO services need to be
installed.

28 DSP_25.1.0.SPC6_Software_EulerOS- Software package of OMAgent which is the extended


X86_Pkg-OMAgent-Any.7z plugin of XMS and provides the function of changing
the OS user passwords in the data zone.

29 DSP_25.1.0.SPC6_Tool_EulerOS- CSE installation package.


X86_Docker-CSEInstallTool-Any.7z

30 DSP_25.1.0.SPC6_Software_EulerOS- ZooKeeper service installation package.


X86_Docker-ZooKeeper-Any.7z

31 DSP_25.1.0.SPC6_Software_EulerOS- GdeAdapter dependency package.


X86_Docker-ConfigCenterService-Any.7z

32 DSP_25.1.0.SPC6_Software_Any-Any_Pkg- GaussDB V1R3 client installation package.


GaussDBV1R3Client-Any.zip

33 DSP_25.1.0.SPC6_Software_Any-Any_Pkg- GaussDB V3R1 client installation package.


GaussDBV3R1Client-Any.zip

34 DSP_25.1.0.SPC6_Software_EulerOS- Software package required by the management zone,


X86_Docker-MiddlewareLibs-Any.7z and common DSP software package in the data zone.

35 DSP_25.1.0.SPC6_Software_EulerOS- GAMAdapter installation package.


X86_Docker-GAMAdapter-Any.7z

36 DSP_25.1.0.SPC6_Software_EulerOS- Installation package of the unified authentication and


X86_Docker-GAM-Any.7z authorization service.

IT Infra Software Packages

Table 3 IT Infra software packages

No. Package Name Package Description Obtaining Method

1 ITInfra_5.1.0.SPC8_Software_Euleros2sp12- Software package required by the Go to the path for obtaining IT Infra software
X86_Docker-DockerForLayer-OP.7z management zone, and common DSP packages at the Support website, find IT Infra
software package in the data zone. 5.1.0.SPC8, and download the required
software packages.
2 ITInfra_5.1.0.SPC8_Software_Euleros2sp12-X86_Pkg- EulerOS V2.0 SP10 (x86) OS image
FusionSphereVMImage40g-Any.zip package. It can be used to create VMs
on x86-based FusionSphere
OpenStack.

3 ITInfra_5.1.0.SPC8_Software_Euleros2sp12-X86_Pkg- Original GKit VM image package,


OriginalISO-OP.zip which is used to create the GKit VM.

ADC Software Packages



Table 4 ADC software packages

No. Package Name Package Description Obtaining Method

1 ADC_25.1.0.SPC6_Software_EulerOS- Basic installation packages Access the path for obtaining ADC software packages at the
X86_Docker-Base-Web-Package-Any.7z of the general job Support website, click Application Development Center
orchestration service. 25.1.0.SPC6, and download the required software packages.
2 ADC_25.1.0.SPC6_Software_EulerOS-
X86_Docker-Base-Package-Any.7z

3 ADC_25.1.0.SPC6_Software_EulerOS-
X86_Docker-Base-Package-Lib-Any.7z

4 ADC_25.1.0.SPC6_Software_Any-Any_Docker- portal-init service


Portal-Init-Any.7z installation package.

5 ADC_25.1.0.SPC6_Software_Any-Any_Docker- portal-web service


Portal-Web-Any.7z installation package.

6 ADC_25.1.0.SPC6_Software_Any-Any_Docker- Portal service installation


Portal-Any.7z package.

3.9.2.9.1.2 Arm

GDE Kernel Software Packages

Table 1 GDE Kernel software packages

No. Package Name Package Description Obtaining Method

1 GDEKernel_25.1.0.SPC6_Software_Any-Any_Pkg- - Access the path for obtaining GDE


ovs-upgrade-tool-Any.zip Kernel software packages at the Support
website, select the 25.1.0.SPC6 version,
2 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel basic O&M software package. and download the required software
Aarch64_Docker-apmbase-Any.7z packages.

3 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel stack component installation


Aarch64_Docker-aos-Any.7z package.

4 GDEKernel_25.1.0.SPC6_Software_EulerOS- Basic package of the GDE Kernel OM zone.


Aarch64_Docker-apme-Any.7z

5 GDEKernel_25.1.0.SPC6_Software_EulerOS- Log package of the GDE Kernel OM zone.


Aarch64_Docker-apmlog-Any.7z

6 GDEKernel_25.1.0.SPC6_Software_EulerOS- Installation packages of the backup and


Aarch64_Docker-BackupService-Any.7z restoration service.

7 GDEKernel_25.1.0.SPC6_Software_EulerOS-
Aarch64_Docker-BackupWebsite-Any.7z

8 GDEKernel_25.1.0.SPC6_Software_EulerOS-
Aarch64_Pkg-BackupAgentOM-Any.7z

9 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel frontend framework package.


Aarch64_Docker-console-Any.7z

10 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel IAM component installation


Aarch64_Docker-iam-Any.7z package.

11 GDEKernel_25.1.0.SPC6_Software_EulerOS- Basic component package of the GDE Kernel


Aarch64_Docker-k8s-Any.7z management zone.

12 GDEKernel_25.1.0.SPC6_Software_EulerOS- Plugin package of the nodes that are


Aarch64_Docker-k8s-plugin-Any.7z heterogeneously managed, which is used to
manage data zone x86 nodes when the
management zone is deployed in
Arm+EulerOS scenarios.


13 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel OVS and Canal component


Aarch64_Docker-ovs-canal-Any.7z installation package. If the installation package
is available on GKit, you do not need to add
them again.

14 GDEKernel_25.1.0.SPC6_Software_EulerOS- Management zone plugin package. The p2p


Aarch64_Docker-p2p-Any.7z plugin can accelerate the image and software
package download in a large-scale cluster.

15 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel POM component installation


Aarch64_Docker-pom-Any.7z package.

16 GDEKernel_25.1.0.SPC6_Software_EulerOS- Prometheus component installation package.


Aarch64_Docker-Prometheus-OP.7z The component is mandatory in the
management zone and optional in the data
zone.

17 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel PSM component installation


Aarch64_Docker-psm-Any.7z package.

18 GDEKernel_25.1.0.SPC6_Software_EulerOS- SLM component installation package.


Aarch64_Docker-SLM-Any.7z

19 GDEKernel_25.1.0.SPC6_Software_EulerOS- GDE Kernel TRM component installation


Aarch64_Docker-trm-Any.7z package.

20 GDEKernel_25.1.0.SPC6_Software_EulerOS- Software package required by the management


Aarch64_Pkg-ovs-ipvlan-Any.zip zone, and common DSP software package in
the data zone.

21 GDEKernel_25.1.0.SPC6_Tool_Any-Any_Pkg- Management zone online upgrade tool


NoKit-Any.7z package.

22 DSP_25.1.0.SPC6_Tool_Any-Any_Pkg-GKit-Any.zip GKit tool package.

DSP Software Packages

Table 2 DSP software packages

No. Package Name Package Description Obtaining Method

1 DSP_25.1.0.SPC6_Software_Any-Any_Any- Asset package of information collection and Access the path for downloading DSP
Assets-Any.zip inspection. The package is used to help O&M software packages at the Support website,
personnel detect, locate, and resolve problems, find Digital Service Platform
improving the maintainability of Data Cube services. 25.1.0.SPC6, and download the required
software packages.
2 DSP_25.1.0.SPC6_Software_Any-Any_Pkg- Software package on which the data zone depends. It
DSPBase-Any.7z supports public key creation, installation, upgrade,
and secondary development in the data zone.

3 DSP_25.1.0.SPC6_Software_Any-Any_Any- Middleware service inspection collection package,


MiddlewareAssets-Any.zip which is mandatory.

4 DSP_25.1.0.SPC6_Software_EulerOS- GaussDB visualized O&M service package for the


Aarch64_Docker-WhitZard-Any.7z management zone.

5 DSP_25.1.0.SPC6_Software_EulerOS- GaussDB V1R3 installation package.


Aarch64_Pkg-GaussDBV1R3-Any.7z

6 DSP_25.1.0.SPC6_Software_EulerOS- GaussDB V3R1 installation package.


Aarch64_Pkg-GaussDBV3R1-Any.7z

7 DSP_25.1.0.SPC6_Software_EulerOS- Installation package of the container-based GaussDB


Aarch64_Docker-GaussDBV1R3-Any.7z service used in the management zone.

8 DSP_25.1.0.SPC6_Software_Any-Any_Pkg- Huawei JRE installation package.


JREClient-Any.zip

9 DSP_25.1.0.SPC6_Software_EulerOS- Layered service image package.


Aarch64_Docker-Boot-Any.7z


10 DSP_25.1.0.SPC6_Software_EulerOS- CSE installation package.


Aarch64_Docker-CSEBase-Any.7z

11 DSP_25.1.0.SPC6_Software_EulerOS- Image package used by services in the data zone to


Aarch64_Docker-CSEMesher-Any.7z interwork with CSE in Mesher mode.

12 DSP_25.1.0.SPC6_Software_EulerOS- CSE installation package.


Aarch64_Docker-CSEService-Any.7z

13 DSP_25.1.0.SPC6_Software_EulerOS- Software package required when the custom indicator


Aarch64_Docker-ExporterService-Any.7z collection function is enabled. The PCE, DIS, and
BDI services of Data Cube depend on this software
package.

14 DSP_25.1.0.SPC6_Software_EulerOS- Installation package of GDE Log Service (GLS).


Aarch64_Docker-GLS-Any.7z

15 DSP_25.1.0.SPC6_Software_EulerOS- LB service installation packages.


Aarch64_Docker-LB-Any.7z

16 DSP_25.1.0.SPC6_Software_EulerOS-
Aarch64_Docker-LBKeepalived-Any.7z

17 DSP_25.1.0.SPC6_Software_EulerOS- Software package required by the management zone,


Aarch64_Docker-LibForLayer-OP.7z and common DSP software package in the data zone.

18 DSP_25.1.0.SPC6_Software_EulerOS- License service installation package.


Aarch64_Docker-License-Any.7z

19 DSP_25.1.0.SPC6_Software_EulerOS- Software package of the log analysis service. To


Aarch64_Docker-LogAnalysisService-OP.7z install the log analysis service, you need to install the
ClickHouse, Filebeat, and ConfigCenterService
services that the log analysis service depends on.

20 DSP_25.1.0.SPC6_Software_EulerOS- MiddlewareConsole service installation package.


Aarch64_Docker-MiddlewareConsole-Any.7z

21 DSP_25.1.0.SPC6_Software_EulerOS- Object storage service installation package.


Aarch64_Docker-Minio-Any.7z

22 DSP_25.1.0.SPC6_Software_EulerOS- RabbitMQ service installation package.


Aarch64_Docker-RabbitMQ-Any.7z

23 DSP_25.1.0.SPC6_Software_EulerOS- Data-zone Redis service installation package on


Aarch64_Docker-Redis-Any.7z which the management zone depends.

24 DSP_25.1.0.SPC6_Software_EulerOS- RedisCluster service installation package.


Aarch64_Docker-RedisCluster-Any.7z

25 DSP_25.1.0.SPC6_Software_EulerOS- TSM service installation package.


Aarch64_Docker-TSM-Any.7z

26 DSP_25.1.0.SPC6_Software_EulerOS- XMS installation package.


Aarch64_Docker-XMS-Any.7z

27 DSP_25.1.0.SPC6_Software_EulerOS- MiddlewareAgent service installation package, which


Aarch64_Pkg-MiddlewareAgent-Any.7z is required when the GaussDB V1R3, GaussDB
V3R1, Elasticsearch, and MinIO services need to be
installed.

28 DSP_25.1.0.SPC6_Software_EulerOS- Software package of OMAgent which is the extended


Aarch64_Pkg-OMAgent-Any.7z plugin of XMS and provides the function of changing
the OS user passwords in the data zone.

29 DSP_25.1.0.SPC6_Tool_EulerOS- CSE installation package.


Aarch64_Docker-CSEInstallTool-Any.7z

30 DSP_25.1.0.SPC6_Software_EulerOS- ZooKeeper service installation package.


Aarch64_Docker-ZooKeeper-Any.7z

31 DSP_25.1.0.SPC6_Software_EulerOS- GdeAdapter dependency package.


Aarch64_Docker-ConfigCenterService-Any.7z


32 DSP_25.1.0.SPC6_Software_Any-Any_Pkg- GaussDB V1R3 client installation package.


GaussDBV1R3Client-Any.zip

33 DSP_25.1.0.SPC6_Software_Any-Any_Pkg- GaussDB V3R1 client installation package.


GaussDBV3R1Client-Any.zip

34 DSP_25.1.0.SPC6_Software_EulerOS- Software package required by the management zone,


Aarch64_Docker-MiddlewareLibs-Any.7z and common DSP software package in the data zone.

35 DSP_25.1.0.SPC6_Software_EulerOS- GAMAdapter installation package.


Aarch64_Docker-GAMAdapter-Any.7z

36 DSP_25.1.0.SPC6_Software_EulerOS- Installation package of the unified authentication and


Aarch64_Docker-GAM-Any.7z authorization service.

IT Infra Software Packages

Table 3 IT Infra software packages

No. Package Name Package Description Obtaining Method

1 ITInfra_5.1.0.SPC8_Software_Euleros2sp12- Software package required by the Go to the path for obtaining IT Infra software
Aarch64_Docker-DockerForLayer-OP.7z management zone, and common DSP packages at the Support website, find IT Infra
software package in the data zone. 5.1.0.SPC8, and download the required
software packages.
2 ITInfra_5.1.0.SPC8_Software_Euleros2sp12- EulerOS V2.0 SP10 (Arm) OS image
Aarch64_Pkg-FusionSphereVMImage40g-OP.zip package. It can be used to create VMs
on Arm-based FusionSphere
OpenStack.

3 ITInfra_5.1.0.SPC8_Software_Euleros2sp12- Original GKit VM image package,


Aarch64_Pkg-OriginalISO-OP.zip which is used to create the GKit VM.

ADC Software Packages

Table 4 ADC software packages

No. Package Name Package Description Obtaining Method

1 ADC_25.1.0.SPC6_Software_EulerOS- Basic installation packages Access the path for obtaining ADC software packages at the
Aarch64_Docker-Base-Web-Package-Any.7z of the general job Support website, click Application Development Center
orchestration service. 25.1.0.SPC6, and download the required software packages.
2 ADC_25.1.0.SPC6_Software_EulerOS-
Aarch64_Docker-Base-Package-Any.7z

3 ADC_25.1.0.SPC6_Software_EulerOS-
Aarch64_Docker-Base-Package-Lib-Any.7z

4 ADC_25.1.0.SPC6_Software_Any-Any_Docker- portal-init service


Portal-Init-Any.7z installation package.

5 ADC_25.1.0.SPC6_Software_Any-Any_Docker- portal-web service


Portal-Web-Any.7z installation package.

6 ADC_25.1.0.SPC6_Software_Any-Any_Docker- Portal service installation


Portal-Any.7z package.

3.9.2.9.2 Obtaining Product Software Packages


Table 1 VM creation script package

No. Package Name Package Description Obtaining Method

1 HiCloud_25.1.0_Tool_Any- VM creation script package, which is Go to the CMP HiCloud path at the Support website, select
Any_HicloudVMToolBox-OP.zip used to automatically create the GKit the 25.1.0 version, and download the software package in the
VM. Software area.


Table 2 Scenario package and installation script package

No. Package Name Package Description Obtaining Method

1 HiCloud_25.1.0_Scene_Any- Scenario package. Go to the CMP HiCloud path at the Support website, select the
Any_DCS.zip 25.1.0 version, and download the software package in the
Version Documentation area.

2 HiCloud_25.1.0_Tool_Any- Silent installation script package, Go to the CMP HiCloud path at the Support website, select the
Any_HicloudAutoInstallTool-OP.zip which is used to install GDE and CMP 25.1.0 version, and download the software package in the
HiCloud services. Software area.

Table 3 QEMU binary package (obtaining packages by architecture)

No. Package Name Package Description Obtaining Method

1 HiCloud_25.1.0_Tool_Euler- QEMU binary package, which is used to process the Go to the CMP HiCloud path at the Support
Aarch64_Docker-Qemu.zip QCOW2 image and is integrated into VMTools. This website, select the 25.1.0 version, and download
package applies to the Arm architecture. the software package in the Software area.

HiCloud_25.1.0_Tool_Euler- QEMU binary package, which is used to process the


X86_Docker-Qemu.zip QCOW2 image and is integrated into VMTools. This
package applies to the x86 architecture.

Table 4 CMP HiCloud service packages

No. Package Name Package Description Obtaining Method

1 HiCloud_25.1.0_Software_Euler-Any_Docker- CommonUserGateway service Go to the CMP HiCloud path at the Support website, select
CommonUserGateway.7z package the 25.1.0 version, and download the software package in
the Software area.
2 HiCloud_25.1.0_Software_Euler-Any_Docker- CommonAdminGateway service
CommonAdminGateway.7z package

3 HiCloud_25.1.0_Software_Euler-Any_Docker- DBaasService package


DBaasService.7z

4 HiCloud_25.1.0_Software_Euler-Any_Docker- VMwareService package


VMwareService.7z

5 HiCloud_25.1.0_Software_Euler-Any_Docker- VMwareCoretask service


VMwareCoretask.7z package

6 HiCloud_25.1.0_Software_Euler-Any_Docker- VMwareScheduler service


VMwareScheduler.7z package

7 HiCloud_25.1.0_Software_Euler-Any_Docker- CommonAdminUI frontend


CommonAdminUI.7z service package

8 HiCloud_25.1.0_Software_Euler-Any_Docker- BMS service package


BMSService.7z

9 HiCloud_25.1.0_Software_Euler-Any_Docker- Security service package


SecurityService.7z

10 HiCloud_25.1.0_Software_Euler-Any_Docker- Security gateway service


SecurityGateway.7z package

11 HiCloud_25.1.0_Software_Euler-Any_Docker- Chinese cryptographic service


Hsm-Adapter-Service.7z package

12 HiCloud_25.1.0_Software_Euler-Any_Docker- Package used to mount general


GdeAdapter.7z volumes

Table 5 Service adaptation package.

No. Package Name Package Description Obtaining Method

1 HiCloud_25.1.0_Software_Euler-Any_Docker- DBaas adaptation Go to the CMP HiCloud path at the Support website, select the 25.1.0
DBaasSCUI.7z package. version, and download the software package in the Software area.


2 HiCloud_25.1.0_Software_Euler-Any_Docker- BMS service adaptation


BMSSCUI.7z package.

3 HiCloud_25.1.0_Software_Euler-Any_Docker- VMware adaptation


VMwareSCUI.7z package.

4 HiCloud_25.1.0_Software_Euler-Any_Docker- Security service


SecuritySCUI.7z adaptation package.

3.9.2.10 Restarting Services

Procedure
1. Log in to the GDE management zone as the op_svc_cfe tenant in tenant login mode at https://IP address:31943.
IP address: the value of Floating IP of management plane on the HiCloud Parameters sheet described in 1.5.1.2-12.
The password of the op_svc_cfe tenant is the value of op_svc_cfe tenant password on the HiCloud Parameters sheet described in 1.5.1.2-12.

2. Choose Maintenance > Instance Deployment > K8S Application Deployment. The K8S Application Deployment page is displayed.

3. Search for the application corresponding to the service to be restarted and click Stop in the Operation column of the found record.

4. After the application is stopped, click Start in the Operation column.

The restart takes about 20 seconds.

3.9.3 Physical Network Interconnection Reference


The single-DC single-core networking is used. The management and service leaf switches are deployed together and connected to the core spine
switch. Traffic on the management and data planes is isolated by VLANs.
It is recommended that access leaf switches, core spine switches, and border leaf switches be configured in M-LAG mode. This enables simple
networking configuration, Layer 2 interconnection within a data center, and clear logical networking.

The data center egress connects to external routers of the customer network through border leaf switches. The networking topology supports Layer 2
or Layer 3:

Layer 2 networking topology: spine + leaf (integrated deployment of border and spine switches)

Layer 3 networking topology: border + spine + leaf

The following uses Layer 3 networking as an example to describe the networking and configuration.

Figure 1 Networking diagram


Topology description

1. Core spine switches, service and management leaf switches, storage leaf switches, and border leaf switches are configured in M-LAG mode.
Leaf and border leaf switches use 40GE or 100GE ports in the uplink and implement full-mesh networking with core spine switches by
configuring Eth-Trunks. The number of uplink ports can be adjusted based on project bandwidth requirements.

2. Border leaf switches are connected to external routers of the customer network through 10GE ports. The number of ports depends on the
actual networking requirements. If spine switches and border leaf switches are deployed in an integrated manner, configure the ports on spine
switches.

3. Management and service leaf switches as well as storage leaf switches are connected to converged nodes and compute nodes through 10GE
ports. Each bond on a node consists of two ports, and a node can use four network ports (two storage network ports and two combined
management and service network ports) or six network ports (two storage network ports, two management network ports, and two service
network ports). Select the number of ports based on the actual situation.

4. Firewalls and load balancers are deployed in load balancing mode and are connected to border leaf switches through 10GE ports in bypass
mode. Two 10GE ports are used in the uplink and downlink, respectively. If spine switches and border leaf switches are deployed in an
integrated manner, configure Eth-Trunks on spine switches.

5. IP SAN storage nodes are connected to storage leaf switches through 10GE ports. At least two 10GE ports are required. The number of ports
to be configured depends on the networking scale and performance requirements.

6. BMC access switches are connected to BMC ports of each node, storage devices, and management ports of switches. BMC access switches
are also connected to spine core switches for remote management and routine maintenance of devices.

7. Firewalls, load balancers, and border leaf switches implement full-mesh networking at Layer 3 by configuring the VRF, static routing
protocol, or OSPF routing protocol. If spine switches and border leaf switches are deployed in an integrated manner, configure the VRF,
static routing protocol, or OSPF routing protocol on spine switches.

8. Border leaf switches and core spine switches implement full-mesh networking at Layer 3 by configuring the static routing protocol or OSPF
routing protocol. If spine switches and border leaf switches are deployed in an integrated manner, no configuration is required.

9. Border leaf switches and routers of the customer network implement full-mesh networking at Layer 3 by configuring the static routing
protocol or OSPF routing protocol. If spine switches and border leaf switches are deployed in an integrated manner, configure the static
routing protocol or OSPF routing protocol on spine switches.

Configuration prerequisites

Border leaf switches and spine switches are deployed in an integrated manner. Basic configurations, such as M-LAG mode, have been
completed for spine and leaf switches.

Leaf switches are connected to spine switches through the 40GE0/0/47 and 40GE0/0/48 ports in the uplink.


The network segment provided by the customer is used. The gateway is configured on spine switches, and a routing protocol is configured
between routers of the customer network and spine switches.

Management plane VLAN: The VLAN ID is 2, the gateway is 192.168.10.1, and the subnet mask is 255.255.255.0. Two network ports on the
server are configured in the active-standby bonding mode and connected to the 10GE0/0/1 port of leaf switches.

Storage plane VLAN: The VLAN ID is 11, the gateway is 192.168.11.1, and the subnet mask is 255.255.255.0. Two network ports on the
server are configured in the static Link Aggregation Control Protocol (LACP) bonding mode and connected to the 10GE0/0/2 port of leaf
switches.

Service plane VLAN: The VLAN ID is 12, the gateway is 192.168.12.1, and the subnet mask is 255.255.255.0. Two network ports on the
server are configured in the static LACP bonding mode and connected to the 10GE0/0/3 port of leaf switches.

When configuring network port bonding for servers to connect to corresponding switches, you can use the commands for configuring the active-standby mode and
LACP mode for the management network, service network, and storage network.
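
For reference, the server-side bonding described above can be configured as in the following sketch, which uses nmcli on a generic EulerOS/RHEL-style host. The connection names, device names (eth0 to eth3), and IP address are placeholders, and FusionCompute hosts normally configure bonds through the product's own installation and portal tooling rather than manually.

# Management plane: active-standby (active-backup) bond; no LACP configuration is needed on the switch side.
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100"
nmcli connection add type ethernet con-name bond0-port1 ifname eth0 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname eth1 master bond0
nmcli connection modify bond0 ipv4.method manual ipv4.addresses 192.168.10.11/24 ipv4.gateway 192.168.10.1
# Service plane: 802.3ad (LACP) bond, matching the static LACP Eth-Trunk on the leaf switches.
nmcli connection add type bond con-name bond1 ifname bond1 bond.options "mode=802.3ad,miimon=100"
nmcli connection add type ethernet con-name bond1-port1 ifname eth2 master bond1
nmcli connection add type ethernet con-name bond1-port2 ifname eth3 master bond1
# Bring the bond and port connections up with nmcli connection up <name> after reviewing the settings.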

Configuration example
The following uses Huawei switches as an example to describe network device configurations:

1. Configurations for access leaf switches (run the following commands on both leaf switches)

a. Create VLANs.
Create VLANs for the management plane, service plane, and storage plane.
vlan batch 2 11 12

b. Create an Eth-Trunk.
# Configure uplink ports for connecting leaf switches to spine switches.
# Configure the 40GE0/0/47 and 40GE0/0/48 ports as the Eth-Trunk interface to connect to spine switches in the uplink.
interface Eth-Trunk1
trunkport 40GE0/0/47
trunkport 40GE0/0/48
port link-type trunk
undo port trunk allow-pass vlan 1
port trunk allow-pass vlan 2 11 to 12
# Configure the switch port connected to the management NIC.
# Configure the 10GE0/0/1 port of the management plane to connect to servers in the downlink.
# If the management NIC on the server is configured in the active-standby bonding mode, no Eth-Trunk configuration is required on the
switch.
interface 10GE0/0/1
port hybrid pvid vlan 2
undo port hybrid vlan 1
port hybrid untagged vlan 2
stp edged-port enable
# Configure the switch ports connected to the service NIC.
# Configure the 10GE0/0/3 port as the Eth-Trunk interface of the service plane to connect to servers in the downlink.
# If the service NIC of the server is configured in a bonding mode, the switch must be configured in the static LACP mode.
interface Eth-Trunk12
trunkport 10GE0/0/3
undo port hybrid vlan 1
port hybrid tagged vlan 12
stp edged-port enable
mode lacp-static
dfs-group 1 m-lag 1
# Configure the switch ports connected to the storage NIC.
# Configure the 10GE0/0/2 port as the Eth-Trunk interface of the storage plane to connect to servers in the downlink.


# If the storage NIC of the server is configured in a bonding mode, the switch must be configured in the static LACP mode.
interface Eth-Trunk11
trunkport 10GE0/0/2
undo port hybrid vlan 1
port hybrid tagged vlan 11
stp edged-port enable
mode lacp-static
dfs-group 1 m-lag 2

2. Configurations for core spine switches (run the following commands on both spine switches)

a. Create VLAN interfaces.


# Configure the management plane IP address and gateway on the core switch.
interface Vlanif2
ip address 192.168.10.1 255.255.255.0
# Configure the gateway address of the storage plane of the switch.
interface Vlanif11
ip address 192.168.11.1 255.255.255.0
# Configure the gateway address of the service plane of the switch.
interface Vlanif12
ip address 192.168.12.1 255.255.255.0

b. Create an Eth-Trunk.
# Create an Eth-Trunk for connecting to network devices in the downlink.
# Assume that ports 40GE0/0/1 and 40GE0/0/2 are used to connect to CE6881 in the downlink.
interface Eth-Trunk1
trunkport 40GE0/0/1
trunkport 40GE0/0/2
port link-type trunk
undo port trunk allow-pass vlan 1
port trunk allow-pass vlan 2 11 to 12

In this interconnection mode, gateways are configured on the customer network. Therefore, you only need to focus on the uplink port trunk configuration.

3.9.4 Introduction to Tools

Overview
Tools provides drivers for VMs.
After VM creation and OS installation are complete, you need to install Tools provided by Huawei on the VMs to improve the VM I/O performance
and implement VM hardware monitoring and other advanced functions. Some features are available only after the Tools is installed. For details
about such features, see the prerequisites or constraints about features.

Functions
After the Tools is installed and started on a VM, the VM provides the following functions:

Table 1 Functions provided by Tools

Function Description

High-performance I/O Providing high-performance disk I/O and network I/O functions for a VM.

Hardware monitoring Obtaining the IP address of a specified NIC.

Obtaining the CPU and memory usage of a VM.
Obtaining the space usage of each disk or partition on a VM.

Advanced functions Adjusting the CPU specifications of the VM in the running state
Creating a VM snapshot
(Intel architecture) VM BSOD detection
Synchronizing time from the host to the VM
Advanced functions for VM NICs, including quality of service (QoS) settings
Automatically upgrading drivers for the VM, including Tools driver

Declaration: This feature is a high-risk feature. Using this feature complies with industry practices. However, end user data may be required for implementing the
feature. Exercise caution and obtain end users' consent when using this feature.
You can use Tools to query the name of the host housing the VM, the NIC IP address, system time, and VM idle values.

Precautions
Do not install any non-Huawei Tools on VMs running on the FusionCompute system.

Ensure that the Tools matches the OS version of the VM.

Do not perform any of the following operations when installing Tools. Otherwise, the installation may fail or the system may be unstable after
installation:

Forcibly exiting the installation process

Forcibly stopping the VM

Forcibly powering off the host

3.9.5 Verifying the Software Package


To prevent a software package from being maliciously tampered with during transmission or storage, download the corresponding digital signature
file for integrity verification when downloading the software package.
After the software package is downloaded from the Huawei Support website, verify its PGP digital signature by referring to the OpenPGP Signature
Verification Guide. If the software package fails the verification, do not use the software package, and contact Huawei technical support.
Before a software package is used for installation or upgrade, its digital signature also needs to be verified according to the OpenPGP Signature
Verification Guide to ensure that the software package is not tampered with.
Carrier customers: visit https://support.huawei.com/carrier/digitalSignatureAction
Enterprise customers: visit https://support.huawei.com/enterprise/en/tool/pgp-verify-TL1000000054.
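
As an illustration, a typical OpenPGP verification flow with GnuPG looks like the following. This is a sketch only: the key file name, package name, and .asc signature file name are placeholders, and the authoritative steps are those in the OpenPGP Signature Verification Guide.

# Import the Huawei OpenPGP public key downloaded from the Support website (file name is a placeholder).
gpg --import huawei-pgp-public-key.asc
# Verify the detached signature against the downloaded software package (file names are placeholders).
gpg --verify HiCloud_25.1.0_Scene_Any-Any_DCS.zip.asc HiCloud_25.1.0_Scene_Any-Any_DCS.zip
# Use the package only if the output reports a good signature from the expected Huawei key.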

3.9.6 VM-related Concepts

Related Concepts
VM
A VM is a virtual computer that runs an OS and applications.
A VM runs on a host, obtains CPUs, memory, and other compute resources, as well as USB devices, and uses network connection and storage access
capabilities of the host. Multiple VMs can run concurrently on one host.
The VM creation location can be a host or a cluster. After a VM is created, you can migrate it to another host, or perform operations to adjust its
specifications and peripherals, such as adding NICs, attaching disks, binding USB devices, and mounting CD/DVD-ROM drives.
VM template
A VM template is a copy of a VM, containing an OS, applications, and VM specification configurations. It is used to create VMs that have OSs
installed. Compared with creating bare VMs, creating VMs from a template saves considerable time.
A VM template can be created by converting a VM, cloning a VM, or cloning an existing template. You can convert a template to a VM or deploy a
VM from a template. You can also export a template from a site and import the template to another site to create a VM.
A VM template file can be in OVA or OVF format, and an image file can be in QCOW2 or VHD format.


A template in OVA format contains only one OVA file. A VM template in OVF format consists of one OVF file and multiple VHD files.

OVF file: provides the description information about the VM. The file name is the same as the VM template, for example, template01.ovf.

VHD file: indicates the VM disk file. A VM disk file is generated for each VM disk. The file name format is Template name-Disk
identifier.vhd, for example, template01-sdad.vhd.
Disk identifier in the VHD file name consists of the disk bus type and disk serial number.

hd (x86) indicates that the disk bus type is IDE, a maximum of 3 disks in this bus type are supported, the slot number ranges from 1 to 3,
the disks are numbered as a, c, and d, and the corresponding disk slot number ranges from 1 to 3.

vd indicates that the disk bus type is VIRTIO, a maximum of 25 disks in this bus type are supported, the slot number ranges from 1 to 25,
the disks are numbered from a to y, and the corresponding disk slot number ranges from 1 to 25.

sd indicates that the disk bus type is SCSI, a maximum of 60 disks in this bus type are supported, the slot number ranges from 1 to 60, the
disks are numbered from a to z, aa to az, and ba to bh, and the corresponding disk slot number ranges from 1 to 26, 27 to 52, and 53 to 60.

QCOW2 file: A QCOW2 image is a disk image supported by the QEMU simulator. Such a file is used to represent a block device disk with a
fixed size. Compared with a RAW image, a QCOW2 image has the following features:

Occupies less disk space.

Supports Copy-On-Write (COW). The image file only represents changes made to an underlying disk.

Supports snapshots and can contain multiple historical snapshots.

Supports zlib compression and encryption by following Advanced Encryption Standard (AES).
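
As an illustration of these properties, QCOW2 images can be created and inspected with the standard qemu-img utility (a sketch only; the file names and size are examples, not part of the FusionCompute procedures):

# Create an empty 40 GB QCOW2 disk image.
qemu-img create -f qcow2 example-disk.qcow2 40G
# Show the format, virtual size, allocated size, and any snapshots contained in the image.
qemu-img info example-disk.qcow2
# Convert a RAW image to a compressed QCOW2 image.
qemu-img convert -f raw -O qcow2 -c source-disk.raw compressed-disk.qcow2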

You can import OVF and QCOW2 files from the local PC or a share directory.
You can import OVA files only from a share directory, and the OVA files must be OVA templates exported from FusionCompute.

What is a CPU Socket?


A VM CPU socket is the virtual socket into which vCPUs are installed; it reflects the vCPU topology.

VM Creation Methods
Table 1 describes the VM creation methods provided by FusionCompute.

Table 1 VM creation methods

Creation Method Description Suggested Scenario

Creating a bare VM
Description: A bare VM is like a blank physical computer without an OS installed. You can create a bare VM on a host or in a cluster, and
configure VM specifications, such as the number of CPUs and NICs and the size of memory or disks. After a bare VM is created, install an OS
on it. The procedure for installing an OS on a bare VM is the same as that for installing an OS on a physical computer.
Suggested scenario: After the system is installed, the first VM is needed. The OSs or specifications of existing VMs or templates in the system
do not meet user requirements. A bare VM needs to be converted or cloned to a template for VM creation; before cloning or converting it to a
template, install an OS on it.


Creating a VM from a template
Description: Use an existing VM template to create a VM that has specifications similar to the template. You can convert an existing template
on the site to a VM, or deploy a VM from the template. You can also export a template from a peer site and import it to the current site to
create a VM. After a template is converted to a VM, the template disappears, and all attributes of the VM are identical to the template
attributes. The new VM inherits the following attributes of the template, and you can customize other attributes:
VM OS type and version
Number and size of VM disks and bus type
Number of VM NICs
Suggested scenario: The OSs and specifications of templates in the system meet user requirements. Creating a VM from a template reduces
time. You can also export a template from a peer site and import it to the current site to create a VM.

Cloning a VM from an existing VM
Description: Clone a VM to obtain a VM that has specifications similar to the source VM. The new VM inherits the following attributes of the
source VM, and you can customize other attributes:
VM OS type and version
Number and size of VM disks and bus type
Number of VM NICs
Suggested scenario: If multiple similar VMs are required, you can install different software products on one VM and clone it multiple times to
obtain the required VMs. If a VM will be frequently used to create VMs through cloning, you can convert or clone it to a template.

Requirements for VM Creation


All tasks required in the software commissioning phase are complete, and the compute, storage, and network resources for creating a VM are
available.

For computing-intensive and network-intensive services, VMs may fail to meet service requirements after virtualization. Therefore, evaluate
whether such services are suitable for cloud migration before migrating them to the cloud.

During VM creation, only OSs supported by FusionCompute can be installed on VMs. Click FusionCompute Compatibility Query to obtain the
supported OSs.

For details about the VM OSs that support the SCSI or VIRTIO bus type and the maximum CPU and memory specifications supported by the
VM OSs, see FusionCompute SIA Guest OS Compatibility Guide (x86) or FusionCompute SIA Guest OS Compatibility Guide (Arm).

For details about how to query the FusionCompute SIA version, see "How Do I Query the FusionCompute SIA Version?" in FusionCompute 8.8.0 O&M Guide.

127.0.0.1:51299/icslite/print/pages/resource/print.do? 488/488

You might also like