Doc01 - Datacenter Virtualization Solution Product
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services
and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all
statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, either
express or implied.
The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of
the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: https://www.huawei.com
Email: support@huawei.com
Contents
Contents
1 Library Information
1.1 Change History
1.2 Conventions
Symbol Conventions
General Conventions
Command Conventions
Command Use Conventions
GUI Conventions
GUI Image Conventions
Keyboard Operations
Mouse Operations
1.3 How to Obtain and Update Documentation
Obtaining Documentation
Updating Documentation
1.4 Feedback
1.5 Technical Support
2 Product Overview
2.1 Solution Overview
2.1.1 Datacenter Virtualization Solution
Definition
Features
2.1.2 Multi-Tenant Service Overview
2.2 Application Scenario
2.3 Solution Architecture
2.3.1 Functional Architecture
2.3.2 Basic Network Architecture
2.4 Software Description
2.4.1 FusionCompute
Overview
Technical Highlights
2.4.2 eDME
Overview
Application Scenarios
2.4.3 eBackup
Overview
Virtual Backup Solution
2.4.4 UltraVR
Overview
Highlights
2.4.5 HiCloud
Overview
Highlights
2.4.6 eDataInsight
Overview
Application Scenarios
2.4.7 iMaster NCE-Fabric
Overview
Highlights
2.4.8 eCampusCore
Overview
Application and Data Integration Service
2.4.9 eContainer
Overview
Application Scenarios
2.5 Hardware Description
2.5.1 Server
2.5.2 Switch
2.5.3 Storage Device
2.5.4 iMaster NCE-Fabric Appliance
2.6 System Security
Security Threats
Security Architecture
Security Value
2.7 Technical Specifications
2.8 Feature Description
2.8.1 Compute
Overview
Technical Highlights
2.8.2 Storage
Virtualized storage
Block storage service
File storage service
2.8.3 Network
2.8.3.1 Network Overlay SDN
2.8.4 DR
2.8.5 Backup
2.8.6 Multi-Tenancy
2.8.6.1 ECS
2.8.6.1.1 What Is an ECS?
Definition
Functions
2.8.6.1.2 Advantages
2.8.6.1.3 Application Scenarios
2.8.6.1.4 Related Services
2.8.6.1.5 Implementation Principles
Architecture
Workflow
2.8.6.2 BMS
2.8.6.2.1 BMS Definition
2.8.6.2.2 Benefits
2.8.6.2.3 Application Scenarios
2.8.6.2.4 Functions
2.8.6.2.5 Related Services
2.8.6.3 IMS
2.8.6.3.1 What Is Image Management Service?
Definition
Type
2.8.6.3.2 Advantages
2.8.6.3.3 Application Scenarios
2.8.6.3.4 Relationship with Other Services
2.8.6.3.5 Working Principle
Architecture
Specifications
2.8.6.4 AS
2.8.6.4.1 Introduction
Definition
Functions
2.8.6.4.2 Benefits
2.8.6.4.3 Application Scenarios
2.8.6.4.4 Usage Restrictions
2.8.6.4.5 Working Principles
Architecture
2.8.6.5 Elastic Container Engine
2.8.6.5.1 Introduction
Definition
Functions
2.8.6.5.2 Benefits
Ease of Use
High Performance
Security and Reliability
Fault and Performance Monitoring
2.8.6.5.3 Relationship with Other Services
2.8.6.5.4 Working Principles
2.8.6.5.5 Basic Concepts
K8s Cluster and Node
K8s Cluster Storage Class
K8s Cluster Namespace
2.8.6.6 SWR
2.8.6.6.1 Overview
Definition
Functions
2.8.6.6.2 Benefits
Ease of Use
Security and Reliability
2.8.6.6.3 Relationship with Other Services
2.8.6.6.4 Basic Concepts
Image
Container
Image Repository
2.8.6.7 Block Storage Service
2.8.6.7.1 What Is the Block Storage Service?
Definition
Functions
2.8.6.7.2 Advantages
2.8.6.7.3 Relationships with Other Services
2.8.6.7.4 Implementation Principles
Architecture
2.8.6.8 OBS
2.8.6.8.1 What Is the Object Storage Service?
Definition
Functions
2.8.6.8.2 Advantages
2.8.6.8.3 Related Concepts
Bucket
Object
AK/SK
Endpoint
Quota Management
Access Permission Control
2.8.6.8.4 Application Scenarios
Backup and Active Archiving
Video Storage
2.8.6.8.5 Implementation Principles
Logical Architecture
Workflow
2.8.6.8.6 User Roles and Permissions
2.8.6.8.7 Restrictions
2.8.6.8.8 How to Use the Object Storage Service
Third-Party Client
Object Service API
SDK
Mainstream Software
How to Use S3 Browser
2.8.6.9 SFS
2.8.6.9.1 What Is Scalable File Service?
Definition
Functions
2.8.6.9.2 Advantages
2.8.6.9.3 Relationship with Other Services
2.8.6.9.4 Application Scenario
Video Cloud
Media Processing
2.8.6.9.5 Constraints and Limitations
2.8.6.9.6 Implementation Principle
Architecture
Workflow
2.8.6.10 VPC Service
2.8.6.10.1 What Is Virtual Private Cloud?
Concept
Function
Benefits
2.8.6.10.2 Region Type Differences
2.8.6.10.3 Application Scenarios (Region Type II)
Secure and Isolated Network Environment
Common Web Applications
2.8.6.10.4 Application Scenarios (Region Type III)
Secure and Isolated Network Environment
2.8.6.10.5 Implementation Principles (Region Type II)
2.8.6.10.6 Constraints
2.8.6.10.7 Relationships with Other Cloud Services
2.8.6.11 EIP Service
2.8.6.11.1 What Is an EIP?
Definition
Network Solution
Functions
2.8.6.11.2 Benefits
2.8.6.11.3 Application Scenarios
Using an EIP to Enable an ECS in a VPC to Access an Extranet
Using an EIP and SNAT to Enable ECSs in a VPC to Access an Extranet
2.8.6.11.4 Relationship with Other Cloud Services
2.8.6.11.5 Constraints
2.8.6.12 Security Group Service
2.8.6.12.1 Security Group Overview
2.8.6.12.2 Constraints and Limitations
2.8.6.13 NAT Service
2.8.6.13.1 What Is the NAT Service?
2.8.6.13.2 Benefits
2.8.6.13.3 Application Scenarios
2.8.6.13.4 Constraints and Limitations
2.8.6.13.5 Relationships with Other Services
2.8.6.14 ELB
2.8.6.14.1 What Is Elastic Load Balance?
Definition
Functions
2.8.6.14.2 Benefits
2.8.6.14.3 Application Scenarios
Load Distribution
Capacity Expansion
2.8.6.14.4 Relationships with Other Cloud Services
2.8.6.14.5 Accessing and Using ELB
2.8.6.15 vFW
2.8.6.15.1 What Is Virtual Firewall?
2.8.6.15.2 Advantages
2.8.6.15.3 Application Scenarios
2.8.6.15.4 Constraints
2.8.6.15.5 Relationships with Other Cloud Services
2.8.6.15.6 Accessing and Using vFW
2.8.6.16 DNS
2.8.6.16.1 What Is Domain Name Service?
2.8.6.16.2 Advantages
2.8.6.16.3 Application Scenarios
Managing Host Names of Cloud Servers
Replacing a Cloud Server Without Service Interruption
Accessing Cloud Resources
2.8.6.16.4 Restrictions
2.8.6.16.5 Related Services
2.8.6.17 VPN
2.8.6.17.1 What Is Virtual Private Network?
Networking Solution
Key Technologies
2.8.6.17.2 Advantages
2.8.6.17.3 Application Scenarios
Deploying a VPN to Connect a VPC to a Local Data Center
Deploying a VPN to Connect a VPC to Multiple Local Data Centers
Cross-Region Interconnection Between VPCs
2.8.6.17.4 Related Services
2.8.6.17.5 Restrictions and Limitations
2.8.6.18 Public Service Network
2.8.6.18.1 Concept
2.8.6.18.2 Function
2.8.6.18.3 Benefits
2.8.6.18.4 Application Scenarios
2.8.6.18.5 Constraints
2.8.6.18.6 Procedure
2.8.6.19 CSHA
2.8.6.19.1 What Is Cloud Server High Availability?
Definition
Restrictions
2.8.6.19.2 Benefits
2.8.6.19.3 Application Scenarios
2.8.6.19.4 Implementation Principles
2.8.6.19.5 Relationships with Other Cloud Services
2.8.6.19.6 Key Indicators
2.8.6.19.7 Access and Usage
2.8.6.20 Backup Service
2.8.6.20.1 What Is the Backup Service?
Definition
2.8.6.20.2 User Roles and Permissions
2.8.6.20.3 Related Concepts
2.8.6.21 VMware Cloud Service
2.8.6.21.1 Introduction to VMware Integration Service
2.8.6.21.1.1 VMware ECS
2.8.6.21.1.2 VMware EVS Disk
2.8.6.21.2 Benefits
2.8.6.21.3 Application Scenarios
2.8.6.21.4 Functions
2.8.6.22 Application and Data Integration Service
2.8.6.22.1 Overview of the Application and Data Integration Service
2.8.6.22.1.1 Introduction to System Integration Service
2.8.6.22.1.1.1 Functions
Related Concepts
Connection Management
Connection Tools
Link Engine: DataLink
Link Engine: LinkFlow
Link Engine: MsgLink
Connection Assets: Integration Assets
I/O Asset Compatibility
Built-in Gateway Functions
2.8.6.22.1.1.2 Values and Benefits
2.8.6.22.1.1.3 Usage Scenarios
2.8.6.22.1.2 Introduction to Device Integration Service
2.8.6.22.1.2.1 Definition
Concepts
2.8.6.22.1.2.2 Functions
LinkDevice
LinkDeviceEdge
Device Connection
Device Management
Message Communication
Monitoring and O&M
Edge Management
2.8.6.22.1.2.3 Values and Benefits
2.8.6.22.1.2.4 Usage Scenarios
2.8.6.22.1.3 Introduction to the APIGW Service
2.8.6.22.1.3.1 Functions
Gateway Management
API Lifecycle Management
2.8.6.22.1.3.2 Values and Benefits
2.8.6.22.1.3.3 Application Scenarios
2.8.7 O&M Management
3 Installation and Deployment
3.1 Installation Overview
3.1.1 Deployment Solution
Overview
Separated Deployment Scenario
Hyper-Converged Deployment Scenario
Deployment Modes and Principles
3.1.2 Network Overview
Network Plane Planning (Non-SDN)
VLAN Planning Principles (Non-SDN)
(Optional) IP Address Planning Principles (Non-SDN Solution)
(Optional) Network Plane Planning (Network Overlay SDN Solution)
(Optional) VLAN Planning Principles (Network Overlay SDN Solution)
(Optional) IP Address Planning Principles (Network Overlay SDN Solution)
NVMe over RoCE Networking Planning
3.1.3 System Requirements
3.1.3.1 Local PC Requirements
3.1.3.2 Management System Resource Requirements
3.1.3.3 Storage Device Requirements
3.1.3.4 Network Requirements
3.1.3.5 Physical Networking Requirements
3.2 Installation Process
3.3 Preparing for Installation
3.3.1 Obtaining Documents, Tools, and Software Packages
Preparing Documents
Tools
Verifying Software Packages
FusionCompute Software Package
eDME Software Package
UltraVR Software Package
eBackup Software Package
OceanStor Pacific Series Deployment Tool
eDataInsight Software Package
HiCloud Software Package
SFS Software Package
eCampusCore Software Package
3.3.2 Integration Design
3.3.2.1 Planning Using LLDesigner
Scenario
Prerequisites
Operation Process
Procedure
3.3.3 Planning Communication Ports
3.3.4 Accounts and Passwords
3.3.5 Preparing Data
3.3.5.1 Preparing Data for FusionCompute
3.3.5.2 Preparing Data for eDataInsight
3.3.5.2.1 (Optional) Creating an Authentication User on OceanStor Pacific HDFS in the Decoupled Storage-Compute Scenario
3.3.5.2.1.1 Deploying OceanStor Pacific Series HDFS Storage Service
3.3.5.2.1.2 Configuring Basic HDFS Storage Services
3.3.5.2.1.3 Configuring NTP Time Synchronization
Procedure
3.3.5.2.1.4 Configuring Users on the Storage
3.3.5.2.1.4.1 Configuring Static Mapping
Procedure
3.3.5.2.1.4.2 Configuring Proxy Users on the Storage
Procedure
3.3.5.2.2 (Optional) Collecting OceanStor Pacific HDFS Domain Names and Users in the Decoupled Storage-Compute Scenario
Obtaining the DNS IP Address
3.3.5.3 Preparing Data for eCampusCore
3.3.5.3.1 Planning Data
Network Planning
Password Planning
3.3.5.3.2 Checking the FusionCompute Environment
Prerequisites
Procedure
3.3.5.3.3 Obtaining the eDME Certificate
Prerequisites
Procedure
3.3.5.3.4 Obtaining the FusionCompute Certificate
Prerequisites
Procedure
3.3.5.3.5 Creating and Configuring the OpsMon User
Context
Procedure
3.3.6 Compatibility Query
3.4 Deploying Hardware
3.4.1 Hardware Scenarios
Scenario Overview
3.4.2 Installing Devices
3.4.3 Installing Signal Cables
3.4.3.1 Separated Deployment Networking
Procedure
3.4.3.2 Hyper-Converged Deployment Networking
3.4.4 Powering On the System
Scenarios
Operation Process
Procedure
3.4.5 Configuring Hardware Devices
3.4.5.1 Configuring Servers
3.4.5.1.1 Logging In to a Server Using the BMC
Scenarios
Process
Procedure
3.4.5.1.2 Checking the Server
Scenarios
Procedure
3.4.5.1.3 Configuring RAID 1
3.4.5.1.3.1 (Recommended) Configuring RAID 1 on the BMC WebUI
Scenarios
Procedure
3.4.5.1.3.2 Logging In to a Server Using the BMC WebUI to Configure RAID 1
Scenarios
Operation Process
Procedure
3.4.5.2 Configuring Storage Devices
3.4.5.3 Configuring Switches
3.4.5.4 Configuring Hyper-Converged System Hardware Devices
3.4.5.5 (Optional) Configuring Network Devices
3.5 Deploying Software
3.5.1 Unified DCS Deployment (Separated Deployment Scenario)
3.5.1.1 Installation Process
3.5.1.2 Installation Using SmartKit
Scenarios
Prerequisites
Procedure
3.5.1.3 Initial Configuration After Installation
3.5.1.3.1 Configuring Bonding for Host Network Ports
Procedure
3.5.1.3.2 Configuring FusionCompute After Installation
3.5.1.3.2.1 Loading a FusionCompute License File
Scenarios
Prerequisites
Procedure
3.5.1.3.2.2 (Optional) Configuring MAC Address Segments
Scenarios
Prerequisites
Procedure
3.5.1.3.3 Configuring eDME After Installation
3.5.1.3.3.1 (Optional) Configuring the NTP Service
Context
Precautions
Procedure
3.5.1.3.3.2 (Optional) Loading a License File
3.5.1.3.3.3 (Optional) Configuring SSO for FusionCompute (Applicable to Virtualization Scenarios)
Prerequisites
Procedure
3.5.1.3.3.4 Expanding Partition Capacity
Procedure
3.5.1.3.3.5 Enabling Optional Components
Prerequisites
Procedure
3.5.1.3.4 Configuring HiCloud After Installation
3.5.1.3.4.1 Configuring kdump on FusionCompute
Procedure
3.5.1.3.4.2 Configuring Certificates
3.5.1.3.4.2.1 Importing the CMP HiCloud Certificate to eDME
Scenarios
Procedure
3.5.1.3.4.2.2 Obtaining Certificates to Be Imported to GDE
3.5.1.3.4.2.2.1 Obtaining the Certificate Trust Chain
Procedure
3.5.1.3.4.2.2.2 Exporting the iBMC Certificate and Root Certificate
Exporting the iBMC Certificate
Exporting the Root Certificate
3.5.1.3.4.2.2.3 Exporting vCenter Certificates
Exporting the vCenter System Certificate
Exporting the PM Certificate Managed by vCenter
3.5.1.3.4.2.2.4 Exporting the NSX-T Certificate
Procedure
3.5.1.3.4.2.2.5 Exporting the DBAPPSecurity Cloud Certificate
Procedure
3.5.1.3.4.2.2.6 Exporting the VastEM Certificate
Procedure
3.5.1.3.4.2.3 Importing Certificates to GDE
Procedure
3.5.1.3.4.2.4 (Optional) Changing the Certificate Chain Verification Mode
Prerequisites
Procedure
3.5.1.3.4.2.5 Restarting CMP HiCloud Services
Involved Services
Procedure
3.5.1.3.4.3 Configuring CAS SSO
Procedure
3.5.1.3.4.4 Importing an Adaptation Package (Either One)
3.5.1.3.4.4.1 Importing an Adaptation Package on the eDME O&M Portal
Prerequisites
Procedure
3.5.1.3.4.4.2 Importing an Adaptation Package Using SmartKit
Prerequisites
Procedure
3.5.1.3.5 Configuring CSHA After Installation
3.5.1.3.5.1 Interconnecting with eDME
Obtaining the Adaptation Package and Document
Connecting to eDME
3.5.1.3.6 Configuring eCampusCore After Installation
3.5.1.3.6.1 Interconnecting the O&M Plane with eDME
3.5.1.3.6.1.1 Interconnecting eCampusCore with eDME
Prerequisites
Procedure
3.5.1.3.6.1.2 Importing the Service Certificate to the eDME
Prerequisites
Procedure
3.5.1.3.6.1.3 Configuring the Login Mode
3.5.1.3.6.1.3.1 Configuring Multi-Session Login
Prerequisites
Procedure
3.5.1.3.6.1.3.2 Configuring SSO for the eDME O&M Portal
Procedure
3.5.1.3.6.2 Importing a VM Template on the Operation Portal
3.5.1.3.6.2.1 Importing a VM Template
Prerequisites
Procedure
3.5.1.3.6.2.2 Creating VM Specifications
Context
Procedure
3.5.1.3.6.3 Configuring the eDME Image Repository
Prerequisites
Procedure
3.5.1.4 Checking Before Service Provisioning
3.5.1.4.1 System Management
Prerequisites
Procedure
3.5.1.4.2 Site Deployment Quality Check
Prerequisites
Procedure
3.5.2 Configuring Interconnection Between iMaster NCE-Fabric and FusionCompute
3.5.3 Configuring Interconnection Between iMaster NCE-Fabric and eDME
3.5.4 Installing FabricInsight
3.5.5 (Optional) Installing FSM
Prerequisites
Procedure
3.5.6 Installing eDME (Hyper-Converged Deployment)
3.5.6.1 Network Planning
3.5.6.2 Firewall Planning
3.5.6.3 SmartKit-based Installation (Recommended)
Scenario
Prerequisites
Procedure
3.5.6.4 (Optional) Configuring Data Disk Partitions Using Commands (EulerOS)
Procedure
3.5.6.5 Post-installation Check
3.5.6.5.1 Checking the O&M Portal After Installation
Context
Prerequisites
User Login Procedure
Post-login Check
3.5.6.5.2 Checking the Operation Portal After Installation (Multi-Tenant Services)
Context
Prerequisites
User Login Procedure
Post-login Check
3.5.6.6 Initial Configuration
3.5.6.6.1 (Optional) Configuring the NTP Service
Context
Precautions
Procedure
3.5.6.6.2 (Optional) Loading a License File
3.5.6.6.3 (Optional) Configuring SSO for FusionCompute (Applicable to Virtualization Scenarios)
Prerequisites
Procedure
3.5.6.6.4 (Optional) Adding Static Routes
Procedure
3.5.6.7 Software Uninstallation
Prerequisites
Precautions
Procedure
3.6 (Optional) Installing DR and Backup Software
3.6.1 Disaster Recovery (DR)
3.6.1.1 Local HA
3.6.1.1.1 Local HA for Flash Storage
3.6.1.1.1.1 Installing and Configuring the DR System
3.6.1.1.1.1.1 Installation and Configuration Process
3.6.1.1.1.1.2 Preparing for Installation
Installation Requirements
Documents
Preparing Software Packages and Licenses
3.6.1.1.1.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.1.1.1.4 Configuring Storage
Scenarios
Procedure
3.6.1.1.1.1.5 Installing FusionCompute
Scenarios
Prerequisites
Process
Procedure
3.6.1.1.1.1.6 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.1.1.1.7 Configuring HA and Resource Scheduling Policies for a DR Cluster
Scenarios
Prerequisites
Procedure
3.6.1.1.1.2 DR Commissioning
3.6.1.1.1.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.1.1.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.1.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.1.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.2 Local HA for Scale-Out Storage
3.6.1.1.2.1 Installing and Configuring the DR System
3.6.1.1.2.1.1 Installation and Configuration Process
3.6.1.1.2.1.2 Preparing for Installation
Installation Requirements
Preparing Documents
Software Packages
3.6.1.1.2.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.1.2.1.4 Installing FusionCompute
Scenarios
Prerequisites
Operation Process
Procedure
3.6.1.1.2.1.5 Configuring Storage Devices
Scenarios
Procedure
3.6.1.1.2.1.6 Configuring HA Policies for a DR Cluster
Scenarios
Prerequisites
Procedure
3.6.1.1.2.1.7 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.1.2.1.8 Creating a Protected Group
Scenarios
Procedure
3.6.1.1.2.2 DR Commissioning
3.6.1.1.2.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.1.2.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.2.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.2.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.1.2.2.5 Backing Up Configuration Data
Scenarios
Prerequisites
Procedure
3.6.1.1.3 Local HA for eVol Storage
3.6.1.1.3.1 DR System Installation and Configuration
3.6.1.1.3.1.1 Installation and Configuration Process
3.6.1.1.3.1.2 Preparing for Installation
Installation Requirements
Preparing Documents
Software Packages
3.6.1.1.3.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.1.3.1.4 Installing FusionCompute
Scenarios
Procedure
3.6.1.1.3.1.5 Configuring Storage Devices
Scenarios
Procedure
3.6.1.1.3.1.6 Installing UltraVR
Scenarios
Prerequisites
Procedure
3.6.1.1.3.1.7 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.1.3.1.8 Configuring DR Policies
Scenarios
Procedure
3.6.1.1.3.2 DR Commissioning
3.6.1.1.3.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Commissioning Procedure
Expected Result
3.6.1.1.3.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.1.3.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.1.3.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.2 Metropolitan HA
3.6.1.2.1 Metropolitan HA for Flash Storage
3.6.1.2.1.1 Installing and Configuring the DR System
3.6.1.2.1.1.1 Installation and Configuration Process
3.6.1.2.1.1.2 Preparing for Installation
Installation Requirements
Documents
Preparing Software Packages and Licenses
3.6.1.2.1.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.2.1.1.4 Configuring Storage
Scenarios
Procedure
3.6.1.2.1.1.5 Installing FusionCompute
Scenarios
Prerequisites
Process
Procedure
3.6.1.2.1.1.6 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.2.1.1.7 Configuring HA and Resource Scheduling Policies for a DR Cluster
Scenarios
Prerequisites
Procedure
3.6.1.2.1.2 DR Commissioning
3.6.1.2.1.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.2.1.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.2.1.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.2.1.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.2.2 Metropolitan HA for Scale-Out Storage
3.6.1.2.2.1 Installing and Configuring the DR System
3.6.1.2.2.1.1 Installation and Configuration Process
3.6.1.2.2.1.2 Preparing for Installation
Installation Requirements
Preparing Documents
Software Packages
3.6.1.2.2.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.2.2.1.4 Installing FusionCompute
Scenarios
Prerequisites
Process
Procedure
3.6.1.2.2.1.5 Configuring Storage Devices
Scenarios
Procedure
3.6.1.2.2.1.6 Configuring HA Policies for a DR Cluster
Scenarios
Prerequisites
Procedure
3.6.1.2.2.1.7 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.2.2.1.8 Creating a Protected Group
Scenarios
Procedure
3.6.1.2.2.2 DR Commissioning
3.6.1.2.2.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Commissioning Procedure
Expected Result
3.6.1.2.2.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.2.2.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.2.2.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.2.2.2.5 Backing Up Configuration Data
Scenarios
Prerequisites
Procedure
3.6.1.2.3 Metropolitan HA for eVol Storage
3.6.1.2.3.1 DR System Installation and Configuration
3.6.1.2.3.1.1 Installation and Configuration Process
3.6.1.2.3.1.2 Preparing for Installation
Installation Requirements
Preparing Documents
Software Packages
3.6.1.2.3.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.2.3.1.4 Installing FusionCompute
Scenarios
Procedure
3.6.1.2.3.1.5 Configuring Storage Devices
Scenarios
Procedure
3.6.1.2.3.1.6 Installing UltraVR
Scenarios
Prerequisites
Procedure
3.6.1.2.3.1.7 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.2.3.1.8 Configuring DR Policies
Scenarios
Procedure
3.6.1.2.3.2 DR Commissioning
3.6.1.2.3.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Commissioning Procedure
Expected Result
3.6.1.2.3.2.2 Commissioning DR Switchover
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.2.3.2.3 Commissioning DR Data Reprotection
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.1.2.3.2.4 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3 Active-Standby DR
3.6.1.3.1 Active-Standby DR Solution for Flash Storage
3.6.1.3.1.1 DR System Installation and Configuration
3.6.1.3.1.1.1 Installation and Configuration Process
3.6.1.3.1.1.2 Preparing for Installation
Installation Requirements
Documents
Software Packages
3.6.1.3.1.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.3.1.1.4 Configuring Storage Devices
Scenarios
Procedure
3.6.1.3.1.1.5 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.3.1.1.6 Configuring the Remote Replication Relationship
Scenarios
Prerequisites
Procedure
3.6.1.3.1.1.7 Configuring DR Policies
Scenarios
Procedure
3.6.1.3.1.2 DR Commissioning
3.6.1.3.1.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.3.1.2.2 Commissioning a DR Test
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.1.2.3 Commissioning Scheduled Migration
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.1.2.4 Commissioning Fault Recovery
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.1.2.5 Commissioning Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.1.2.6 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.1.2.7 Backing Up Configuration Data
Scenarios
Prerequisites
Procedure
3.6.1.3.2 Active-Standby DR Solution for Scale-Out Storage
3.6.1.3.2.1 DR System Installation and Configuration
3.6.1.3.2.1.1 Installation and Configuration Process
3.6.1.3.2.1.2 Preparing for Installation
Installation Requirements
Preparing Documents
Software Packages
3.6.1.3.2.1.3 Configuring Switches
Scenarios
Procedure
3.6.1.3.2.1.4 Configuring Storage Devices
Scenarios
Procedure
3.6.1.3.2.1.5 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.3.2.1.6 Configuring DR Policies
Scenarios
Procedure
3.6.1.3.2.2 DR Commissioning
3.6.1.3.2.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.3.2.2.2 Commissioning a DR Test
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.2.2.3 Commissioning Scheduled Migration
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.2.2.4 Commissioning Fault Recovery
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.2.2.5 Commissioning Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.2.2.6 Commissioning DR Switchback
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.3.2.2.7 Backing Up Configuration Data
Scenarios
Prerequisites
Procedure
3.6.1.4 Geo-Redundant 3DC DR
3.6.1.4.1 DR System Installation and Configuration
3.6.1.4.1.1 Installation and Configuration Process
3.6.1.4.1.2 Preparing for Installation
Installation Requirements
Documents
Software Packages and License Files
3.6.1.4.1.3 Configuring Switches
Scenarios
3.6.1.4.1.4 Installing FusionCompute
Scenarios
3.6.1.4.1.5 Configuring Storage Devices
Scenarios
Procedure
3.6.1.4.1.6 Creating DR VMs
Scenarios
Prerequisites
Procedure
3.6.1.4.1.7 Configuring HA and Resource Scheduling Policies for a DR Cluster
Scenarios
3.6.1.4.1.8 Configuring the Remote Replication Relationship (Non-Ring Networking Mode)
Scenarios
3.6.1.4.1.9 Configuring DR Policies
Scenarios
Procedure
3.6.1.4.2 DR Commissioning
3.6.1.4.2.1 Commissioning Process
Purpose
Prerequisites
Commissioning Process
Procedure
Expected Result
3.6.1.4.2.2 Commissioning a DR Test
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.4.2.3 Commissioning Scheduled Migration
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.4.2.4 Commissioning Fault Recovery
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.4.2.5 Commissioning Reprotection
Purpose
Constraints and Limitations
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.4.2.6 Commissioning DR Switchback
Purpose
Prerequisites
Procedure
Expected Result
Additional Information
3.6.1.4.2.7 Backing Up Configuration Data
Scenarios
Prerequisites
Procedure
3.6.2 Backup
3.6.2.1 Centralized Backup Solution
3.6.2.1.1 Installing and Configuring the Backup System
3.6.2.1.1.1 Installation and Configuration Process
3.6.2.1.1.2 Preparing for Installation
3.6.2.1.1.3 Installing the eBackup Server
Scenarios
Prerequisites
Process
Procedure
3.6.2.1.1.4 Connecting the eBackup Server to FusionCompute
Scenarios
Prerequisites
Procedure
3.6.2.1.2 Backup Commissioning
3.6.2.1.2.1 Commissioning VM Backup
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.6.2.1.2.2 Commissioning VM Restoration
Purpose
Constraints and Limitations
Prerequisites
Commissioning Procedure
Expected Result
Additional Information
3.7 Verifying the Installation
Procedure
3.8 Initial Service Configurations
3.9 Appendixes
3.9.1 FAQ
3.9.1.1 How Do I Handle the Issue that System Installation Fails Because the Disk List Cannot Be Obtained?
Symptom
Possible Causes
Troubleshooting Guideline
Procedure
3.9.1.2 How Do I Handle the Issue that VM Creation Fails Due to Time Difference?
Symptom
Procedure
3.9.1.3 What Do I Do If the Error "kernel version in isopackage.sdf file does not match current" Is Reported During System Installation?
Symptom
Possible Causes
Procedure
3.9.1.4 How Do I Handle Common Problems During Hygon Server Installation?
3.9.1.5 How Can I Handle the Issue that a Local Virtualized Datastore Fails to Be Added Due to a GPT Partition During Tool-based Installation?
Symptom
Procedure
3.9.1.6 How Can I Handle the Issue that the Node Fails to Be Remotely Connected During the Host Configuration for Customized VRM Installation?
Symptom
Solution
3.9.1.7 How Do I Handle the Issue that the Mozilla Firefox Browser Prompts Connection Timeout During the Login to FusionCompute?
Symptom
Possible Causes
Procedure
3.9.1.8 How Do I Handle the Storage Device Detection Failure on a FusionCompute Host During VRM Installation?
Scenarios
Prerequisites
Procedure
3.9.1.9 How Do I Configure an IP SAN Initiator?
Scenarios
Prerequisites
Procedure
3.9.1.10 How Do I Configure an FC SAN Initiator?
Scenarios
Prerequisites
Procedure
3.9.1.11 How Do I Configure Time Synchronization Between the System and an NTP Server of the w32time Type?
Scenarios
Prerequisites
Procedure
3.9.1.12 How Do I Configure Time Synchronization Between the System and a Host When an External Linux Clock Source Is Used?
Scenarios
Impact on the System
Prerequisites
Procedure
3.9.1.13 How Do I Reconfigure Host Parameters?
Scenarios
Prerequisites
Procedure
3.9.1.14 How Do I Replace Huawei-related Information in FusionCompute?
Scenarios
Prerequisites
Procedure
Additional Information
3.9.1.15 How Do I Measure Disk IOPS?
Procedure
3.9.1.16 What Should I Do If a Linux VM with More Than 32 CPU Cores Cannot Be Started?
Scenarios
Prerequisites
Procedure
3.9.1.17 How Do I Query the FusionCompute SIA Version?
Scenarios
Procedure
3.9.1.18 What Should I Do If Tools Installed on Some OSs Fails to be Started?
Symptom
Possible Causes
Procedure
3.9.1.19 Expanding the Data Disk Capacity
Method of Adding Disks
Expanding the Capacity of Existing Disks
3.9.1.20 How Do I Manually Change the System Time on a Node?
Scenarios
Prerequisites
Procedure
3.9.1.21 How Do I Handle the Issue that VRM Services Become Abnormal Because the DNS Is Unavailable?
Symptom
Possible Causes
Procedure
3.9.1.22 What Can I Do If an Error Message Is Displayed Indicating That the Sales Unit HCore Is Not Supported When I Import Licenses on FusionCompute?
Symptom
Possible Causes
Fault Diagnosis
Procedure
Related Information
3.9.1.23 How Do I Determine the Network Port Name of the First CNA Node?
3.9.1.24 Troubleshooting
Problem 1: Changing the Non-default Password of gandalf for Logging In to Host 02 to the Default One
Problem 2: Host Unreachable
Problem 3: Incorrect Password of root for Logging In to Host 02
Problem 4: Duplicate Host OS Names at the Same Site
Problem 5: The Host Where the Installation Tool Is Installed Does Not Automatically Start Services After Being Restarted
Problem 6: PXE-based Host Installation Failed or Timed Out
Problem 7: Automatic Logout After Login Using a Firefox Browser Is Successful but an Error Message Indicating that the User Has Not Logged In or the Login Times Out Is Displayed When the User Clicks on the Operation Page
Problem 8: Alarm "ALM-15.1000103 VM Disk Usage Exceeds the Threshold" Is Generated During Software Installation
3.9.2 Common Operations
3.9.2.1 How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?
Scenarios
Prerequisites
Procedure
3.9.2.2 Logging In to FusionCompute
Scenarios
Prerequisites
Procedure
3.9.2.3 Installing Tools for eDME
Scenarios
Prerequisites
Procedure
Additional Information
3.9.2.4 Uninstalling the Tools from a Linux VM
Scenarios
Impact on the System
Prerequisites
Procedure
Follow-up Procedure
3.9.2.5 Checking the Status and Version of the Tools
Scenarios
Prerequisites
Procedure
3.9.2.6 Configuring the BIOS on Hygon Servers
Two Methods for Accessing the BIOS
3.9.2.7 Setting Google Chrome (Applicable to Self-Signed Certificates)
Scenarios
Prerequisites
Procedure
3.9.2.8 Setting Mozilla Firefox
Scenarios
Prerequisites
Procedure
3.9.2.9 Obtaining HiCloud Software Packages from Huawei Support Website
3.9.2.9.1 Obtaining GDE Software Packages
3.9.2.9.1.1 x86
GDE Kernel Software Packages
DSP Software Packages
IT Infra Software Packages
ADC Software Packages
3.9.2.9.1.2 Arm
GDE Kernel Software Packages
DSP Software Packages
IT Infra Software Packages
ADC Software Packages
3.9.2.9.2 Obtaining Product Software Packages
3.9.2.10 Restarting Services
Procedure
3.9.3 Physical Network Interconnection Reference
3.9.4 Introduction to Tools
Overview
Functions
Precautions
3.9.5 Verifying the Software Package
3.9.6 VM-related Concepts
Related Concepts
VM Creation Methods
Requirements for VM Creation
1 Library Information
Change History
Conventions
How to Obtain and Update Documentation
Feedback
Technical Support
1.2 Conventions
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
DANGER: Indicates a hazard with a high level of risk which, if not avoided, will result in death or serious injury.
WARNING: Indicates a hazard with a medium level of risk which, if not avoided, could result in death or serious injury.
CAUTION: Indicates a hazard with a low level of risk which, if not avoided, could result in minor or moderate injury.
NOTICE: Indicates a potentially hazardous situation which, if not avoided, could result in equipment damage, data loss, performance deterioration, or unanticipated results. NOTICE is used to address practices not related to personal injury.
General Conventions
Format Description
Boldface: Names of files, directories, folders, and users are in boldface. For example, log in as user root. File paths are in boldface, for example, C:\Program Files\Huawei.
Courier New: Terminal display is in Courier New. The messages input on terminals by users are displayed in boldface.
"": Double quotation marks indicate the section name in the document.
Command Conventions
Format Description
Italic: Command arguments (replaced by specific values in an actual command) are in italic.
{ x | y | ... }: Items are grouped in braces ({ }) and separated by vertical bars (|). One item must be selected.
[ x | y | ... ]: Optional items are grouped in square brackets ([ ]) and separated by vertical bars (|). Only one item or no item can be selected.
{ x | y | ... }*: Items are grouped in braces ({ }) and separated by vertical bars (|). A minimum of one item or a maximum of all items can be selected.
[ x | y | ... ]*: Optional items are grouped in square brackets ([ ]) and separated by vertical bars (|). Multiple items or no item can be selected.
GUI Conventions
Format Description
Boldface: Buttons, menus, parameters, tabs, windows, and dialog titles are in boldface. For example, click OK.
>: Multi-level menus are in boldface and separated by greater-than signs (>). For example, choose File > Create > Folder.
Keyboard Operations
Format Description
Key: Press the key. For example, press Enter, Tab, Backspace, and a.
Key 1+Key 2: Press the keys simultaneously. For example, pressing Ctrl+Alt+A means that the three keys should be pressed concurrently.
Key 1, Key 2: Press the keys in turn. For example, press Alt and F in turn.
Mouse Operations
Format Description
Click: Select and release the primary mouse button without moving the pointer.
Double-click: Press the primary mouse button twice continuously and quickly without moving the pointer.
Drag: Press and hold the primary mouse button and move the pointer to a certain position.
1.3 How to Obtain and Update Documentation
Obtaining Documentation
You can obtain documentation using any of the following methods:
Use the online search function provided by ICS Lite to find the documentation package you want and download it. This method is
recommended because you can directly load the required documentation to ICS Lite. For details about how to download a documentation
package, see the online help of ICS Lite.
Visit the Huawei support website to download the desired documentation package.
Apply for the documentation CD-ROM from your local Huawei office.
To use ICS Lite or visit the Huawei technical support website, you need a registered user account. You can apply for a user account at the support website or by contacting the service manager at your local Huawei office.
Updating Documentation
You can update documentation using either of the following methods:
Enable the documentation upgrade function of ICS Lite to automatically detect the latest versions of your local documentation and load them to ICS Lite as required. This method is recommended. For details, see the online help of ICS Lite.
Download the latest documentation packages from the Huawei technical support websites.
To use ICS Lite or visit the Huawei technical support website, you need a registered user account. You can apply for a user account at the support website or by contacting the service manager at your local Huawei office.
1.4 Feedback
Your opinions and suggestions are warmly welcomed. You can send your feedback on the product documents, online help, or release notes in the
following ways:
Give your feedback using the information provided on the Contact Us page at the Huawei technical support websites:
If the problem persists, you can contact the local Huawei representative office or the company's headquarters.
2 Product Overview
Solution Overview
Application Scenario
Solution Architecture
Software Description
Hardware Description
System Security
Technical Specifications
Feature Description
2.1.1 Datacenter Virtualization Solution
Definition
The advent of new data center technologies and business demands poses tremendous challenges to traditional data centers (DCs). To rise to these challenges and follow technology trends, Huawei launches the next-generation Datacenter Virtualization Solution (DCS).
DCS uses eDME as the full-stack management software for data centers. With a unified management interface, open APIs, cloud-based AI enablement, and multi-dimensional intelligent risk prediction and optimization, eDME implements automatic management and intelligent O&M of resources throughout the lifecycle from planning, construction, and O&M to optimization, and manages multiple data centers in a unified manner, helping customers simplify management and improve data center O&M efficiency.
FusionCompute is used as the cloud operating system (OS) software to consolidate resources in each physical data center. It virtualizes hardware resources to help carriers and enterprises build secure, green, and energy-saving data centers, reducing operating expense (OPEX) and ensuring system security and reliability.
eBackup and UltraVR are used to implement VM data backup and disaster recovery (DR), and provide a unified DR protection solution for data centers in all regions and scenarios.
HiCloud is used to provide the security service, which supports heterogeneous management and resource provisioning of VMware. eDataInsight functions as a big data platform to meet application requirements in typical big data scenarios.
DCS uses software-defined networking (SDN) hardware and iMaster NCE-Fabric to implement automated network linkage configuration and control network devices, enabling automatic and fast service orchestration.
DCS is a service-driven solution that features cloud-pipe synergy and helps carriers and enterprises manage physically distributed, logically unified
resources throughout their lifecycles.
Physical distribution
Physical distribution indicates that multiple data centers of an enterprise are distributed in different regions. After data center virtualization
components are deployed in physical data centers in different regions, IT resources can be consolidated to provide services in a unified manner.
Logical unification
Logical unification indicates that data center full-stack management software uniformly manages multiple data centers in different regions.
Features
Reliability
This solution enhances device-level, data-level, and solution-level reliability. The distributed architecture improves overall reliability and therefore lowers the reliability requirements on a single device.
Availability
The system delivers remarkable availability by employing hardware/link redundancy deployment, high-availability clusters, and application
fault tolerance (FT) features.
Security
The solution complies with the industry security specifications to ensure data center security. It focuses on the security of networks, hosts,
virtualization, and data.
Lightweight
You can start small with just three nodes and flexibly select DCS deployment specifications according to your service scale.
Scalability
Data center resources must be flexibly adjusted to meet actual service load requirements, and the IT infrastructure is loosely coupled with
service systems. Therefore, you only need to add IT hardware devices when service systems require capacity expansion.
Openness
Various types of servers and storage devices based on the x86 or Arm hardware platform and mainstream Linux and Windows OSs are
supported in data centers for flexible selection. Open APIs are provided to flexibly interconnect with cloud management software.
2.1.2 Multi-Tenant Service Overview
Service Description
Elastic Cloud Server (ECS): An ECS is a compute server that consists of vCPUs, memory, and disks. ECSs are easy to obtain and scalable, and can be used on-demand. ECSs work with other services including storage service, Virtual Private Cloud (VPC), and Cloud Server Backup Service (CSBS) to build an efficient, reliable, and secure compute environment, ensuring uninterrupted and stable running of your services.
Bare Metal Server (BMS): A BMS features both the scalability of VMs and high performance of physical servers. It provides dedicated servers on the cloud for users and enterprises, delivering the performance and security required by core databases, critical applications, high-performance computing (HPC), and big data. Tenants can apply for and use BMSs on demand.
Image Management Service (IMS): An image is an ECS template that contains software and mandatory configurations, including operating systems (OSs), preinstalled public applications, and the user's private applications or service data. Images are classified into public, private, and shared images. IMS allows you to manage images easily. You can apply for ECSs using a public or private image. In addition, you can also create a private image from an ECS or external image file.
Auto Scaling (AS): AS is a service that automatically adjusts service resources according to AS policies configured based on user service requirements. When service demands increase, AS automatically adds ECS instances to ensure computing capabilities. When service demands decrease, AS automatically reduces ECS instances to reduce costs (see the sketch after this table).
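The AS behavior described above is essentially a threshold-driven control loop. The following sketch illustrates that loop in Python; it is not the product implementation, and the metric, thresholds, step size, and instance limits are assumptions chosen only for illustration.

```python
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    # Hypothetical policy values; real AS policies are configured in the product.
    scale_out_cpu: float = 70.0   # add instances above this average CPU %
    scale_in_cpu: float = 30.0    # remove instances below this average CPU %
    step: int = 1                 # instances added or removed per adjustment
    min_instances: int = 2
    max_instances: int = 10

def desired_instance_count(current: int, avg_cpu: float, policy: ScalingPolicy) -> int:
    """Return the target ECS instance count for one evaluation cycle."""
    if avg_cpu > policy.scale_out_cpu:
        target = current + policy.step
    elif avg_cpu < policy.scale_in_cpu:
        target = current - policy.step
    else:
        target = current
    # Clamp to the configured group limits.
    return max(policy.min_instances, min(policy.max_instances, target))

print(desired_instance_count(current=4, avg_cpu=85.0, policy=ScalingPolicy()))  # 5
print(desired_instance_count(current=4, avg_cpu=20.0, policy=ScalingPolicy()))  # 3
```

Clamping the target keeps the group within its configured minimum and maximum sizes regardless of the metric value.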
Service Description
Block storage service: The block storage service provides block storage space for VMs. You can create Elastic Volume Service (EVS) disks in online mode and attach them to VMs. The block storage service provides various persistent storage devices. You can choose disk types based on your needs and store files and build databases on EVS disks.
Object Storage Service (OBS): The OBS is an object-based mass storage service. It provides mass, secure, reliable, and cost-effective data storage capabilities, including bucket creation, modification, and deletion (see the sketch after this table).
Scalable File Service (SFS): SFS provides ECSs and BMSs in HPC scenarios with a high-performance shared file system that can be scaled on demand. It is compatible with standard file protocols (NFS, CIFS, OBS, and DPC) and is scalable to petabytes of capacity to meet the needs of mass data and bandwidth-intensive applications.
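OBS is operated through buckets and objects with AK/SK credentials and an endpoint (see the related concepts under 2.8.6.8), so it is typically driven through S3-compatible clients or SDKs. The sketch below uses the third-party boto3 library as one such client; the endpoint URL, credentials, and bucket name are placeholders for illustration and must be replaced with the values issued for your own deployment.

```python
import boto3

# Placeholder endpoint and AK/SK credentials; replace with the values
# issued for your OBS tenant (see the AK/SK and Endpoint concepts in 2.8.6.8).
s3 = boto3.client(
    "s3",
    endpoint_url="https://obs.example.internal",
    aws_access_key_id="YOUR_AK",
    aws_secret_access_key="YOUR_SK",
)

bucket = "demo-bucket"
s3.create_bucket(Bucket=bucket)                                    # create a bucket
s3.put_object(Bucket=bucket, Key="logs/app.log", Body=b"hello")    # upload an object
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):  # list objects
    print(obj["Key"], obj["Size"])
```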
Service Description
VPC service: A VPC is a logically isolated virtual network environment that is built for ECSs and is configured and managed by users for improving the security of user resources and simplifying user network deployment.
Elastic IP address (EIP) service: An EIP is a static IP address on a network outside the cloud (also called external network), can be directly accessed through the network, and is mapped to the instance bound to the EIP using network address translation (NAT).
Security group service: A security group is a logical group which provides access policies for cloud servers that have the same security protection requirements and are mutually trusted in the same VPC. After a security group is created, you can define different access rules in the security group to protect servers that are added to it (see the sketch after this table).
NAT service: The NAT service provides network address translation for cloud servers in a VPC so that the cloud servers can share an EIP to access the Internet or can be accessed by an external network. The NAT service provides two functions: source network address translation (SNAT) and destination network address translation (DNAT).
Elastic load balance (ELB) service: ELB distributes access traffic to multiple backend cloud servers based on forwarding policies. ELB can expand the access handling capability of application systems through traffic distribution and achieve a higher level of fault tolerance and performance. ELB also improves system availability by eliminating single points of failure (SPOFs). In addition, ELB is deployed on the internal and external networks in a unified manner and supports access from the internal and external networks.
Virtual firewall (vFW): The vFW controls VPC access and supports blacklists and whitelists (allow and deny policies). Based on the inbound and outbound Access Control List (ACL) rules associated with a VPC, the vFW determines whether data packets are allowed to flow into or out of the VPC.
Domain Name Service (DNS): A DNS service translates frequently-used domain names into IP addresses for servers to connect to each other. You can enter a domain name in a browser to visit a website or web application.
Virtual Private Network (VPN): A VPN provides a secure, reliable encrypted communication channel that meets industry standards between remote users and their VPCs. Such a channel can seamlessly extend users' data center (DC) to a VPC.
Public service network: The public service network is used for a server to communicate with ECSs, VIPs, or BMSs in all VPCs of a user. With the public service network, you can quickly deploy VPC shared services.
Service Description
Cloud Server High Availability (CSHA): CSHA adopts the HyperMetro feature of storage to implement cross-AZ VM HA. It is mainly used to provide DR protection for ECSs with zero recovery point objective (RPO). If the production AZ is faulty, it can implement the fast DR switchover with minute-level recovery time objective (RTO) to ensure service continuity.
Backup service: The backup service provides a unified operation portal for tenants in the multi-tenant scenario. Administrators can define backup service specifications to form a logically unified backup resource pool for multiple physically dispersed backup devices, helping tenants quickly obtain backup services, simplifying configuration, and improving resource provisioning efficiency.
Service Description
VMware cloud server: A VMware ECS is a compute server provided by vCenter that can be obtained and elastically expanded at any time. After adding the VMware cloud service to the vCenter resource pool, you can synchronize VMware ECSs to eDME and implement unified management.
Service Description
Elastic Container Engine: As an enterprise-level K8s cluster hosting service, Elastic Container Engine (ECE) enables management for cluster lifecycle, container images, and containerized applications, as well as container monitoring and O&M. In addition, it provides highly scalable and reliable cloud-native application deployment and management solutions. Therefore, it is a good choice for you to achieve application modernization (see the sketch after this table).
SoftWare Repository for Container: SoftWare Repository for Container (SWR) provides easy, secure, and reliable management of container images throughout their lifecycle. It is compatible with the Registry V2 protocol of the community and allows you to manage container images through a GUI, CLI, or native APIs. SWR can be seamlessly integrated with ECE to help customers quickly deploy containerized applications and build a one-stop solution for cloud native applications.
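Because ECE hosts standard K8s clusters, workloads on them can be inspected and managed with ordinary Kubernetes tooling once a kubeconfig for the cluster is available. The sketch below uses the official kubernetes Python client; the kubeconfig source and the namespace are assumptions for illustration, not an ECE-specific interface.

```python
from kubernetes import client, config

# Load the kubeconfig exported for the (hypothetical) ECE-hosted cluster.
config.load_kube_config()

apps = client.AppsV1Api()

# List the Deployments in a placeholder namespace and print their readiness.
for dep in apps.list_namespaced_deployment(namespace="default").items:
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.name}: {ready}/{dep.spec.replicas} replicas ready")
```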
Service Description
Application and data integration service: The application and data integration service is used to build an enterprise-level connection platform for connecting enterprise IT systems with OT devices. It provides multiple connection options including API, message, data, and device access to enable enterprises to create digital twins based on the physical world and speed up their digital transformation.
Function Description
Data center virtualization management platform: It provides full-stack O&M management of site resources from hardware infrastructure to virtual resources, supports unified management of resource sites in multiple data centers in different regions, and enables system administrators (including hardware administrators) to manage hardware and software on one easy-to-use, unified, and intelligent O&M portal.
Virtualization + Container dual-stack resource pool: Virtual resource pools are built upon the physical infrastructure and are classified into virtual compute, virtual storage, and virtual network resource pools. Container resource pools are containerized application resource pools constructed based on virtualization resource pools.
Virtualization DR and backup: Active-standby, active-active, and 3DC DR management is provided based on storage remote replication and active-active capabilities to ensure service continuity. Data is replicated to dump devices. If a system fault or data loss occurs, the backup data can be used to recover the system or data. This function provides a unified DR protection solution for data centers in all regions and scenarios.
Virtualization security: Full-stack security protection is implemented for data storage, network transmission, management and O&M, host systems, and VMs to ensure secure service access.
Hardware infrastructure: Hardware infrastructure includes servers, storage devices, network devices, backup devices, and security devices required by data centers. Based on different service requirements, this layer provides multiple hardware deployment architectures.
Professional services: Professional full-lifecycle virtualization services are offered, covering consulting, planning and design, delivery implementation, migration, and training and certification.
Component Description
eCampusCore: This component is optional and can be deployed only on the Region Type II network (network overlay SDN scenario). eCampusCore is an enterprise-level platform for application and data integration. It provides connections between IT systems and OT devices and pre-integrated assets for digital scenarios in the enterprise market.
BMC plane
Purpose: This plane is used by the baseboard management controller (BMC) network port on a host. The BMC plane enables remote access to the BMC system of the server and controllers of storage devices. The BMC port and the storage management port are connected to the leaf switch.
Communication requirement: The management planes of eDME and Virtual Resource Management (VRM) nodes can communicate with the BMC plane. The management plane and the BMC plane can be combined.
Switch configuration: Configure the port on the switch connected to the BMC network interface card (NIC) on the server to allow VLANs of the BMC plane to pass without any tags. If the management plane and the BMC plane are deployed in a converged manner, you are advised to configure the port on the switch connected to the management plane VLAN to work in Access mode, or set the management plane VLAN to PVID VLAN in Trunk mode.
Management plane
Purpose: This plane is used for the management of all nodes in a unified manner, the communication between all nodes, and the monitoring, O&M, and VM management of the entire system. Management plane IP addresses include host management IP addresses and IP addresses for the management of VMs.
Communication requirement: The eDME, VRM, and FSM nodes are deployed on the management plane and can communicate with each other.
Switch configuration: Configure the port on the switch connected to the management plane NIC on the server to allow VLANs of the management plane to pass without any tags. You are advised to configure the port on the switch connected to the management plane VLAN to work in Access mode, or set the management plane VLAN to PVID VLAN in Trunk mode.
Storage plane (enterprise IP SAN storage service access)
Purpose: Hosts can communicate with the storage devices through this plane. Storage plane IP addresses include the storage IP addresses of all hosts and storage devices.
Communication requirement: The storage plane of each host can communicate with the storage plane of storage devices.
Switch configuration: Configure the port on the switch connected to the SAN storage plane NIC on the server to allow VLANs of the storage plane to pass with tags. You are advised to configure the port on the switch connected to the storage plane NIC on the server to work in Trunk mode and allow VLANs of the storage plane to pass. VLAN tags need to be removed when the traffic of the storage plane passes the storage port of SAN devices. Configure the port on the switch connected to the storage port on IP SAN devices to allow VLANs of the storage plane to pass without any tags. You are advised to configure the port on the switch connected to the storage port of SAN devices to work in Access mode.
Storage plane (enterprise FC SAN storage service access)
Purpose: Hosts can communicate with the storage devices through this plane. Host bus adapters (HBAs) on all hosts are connected to Fiber Channel (FC) switches.
Communication requirement: The storage plane of each host can communicate with the storage plane of storage devices.
Switch configuration: FC switch ports connected to the storage plane HBAs on the server are configured to the same zone.
Storage plane (scale-out storage service access)
Purpose: Hosts can communicate with scale-out storage services through this plane. The IP addresses used on this plane include storage IP addresses of all hosts and service IP addresses of scale-out storage.
Communication requirement: The storage plane of each host can communicate with the service plane of the scale-out storage nodes.
Switch configuration: Configure the port on the switch connected to the service plane NIC of the scale-out storage to allow VLANs of the storage plane to pass with tags. You are advised to configure the port on the switch connected to the service plane NIC of OceanStor Pacific Block to work in Trunk mode and allow VLANs of the storage plane to pass.
Back-end storage plane
Purpose: Back-end storage plane of scale-out storage. Storage nodes are interconnected, and all storage nodes use IP addresses of the back-end storage plane.
Communication requirement: The scale-out storage nodes must use an independent plane to communicate with each other.
Switch configuration: Configure the port on the switch connected to the back-end storage plane NIC of scale-out storage to allow VLANs of the back-end storage plane to pass with tags. You are advised to configure the port on the switch connected to the scale-out back-end storage plane NIC to work in Trunk mode and allow VLANs of the back-end storage plane to pass.
Service plane
Purpose: This plane is used by the service data of user VMs.
Communication requirement: VMs communicate with the service plane.
Switch configuration: VLAN tags do not need to be removed when the traffic of the service plane passes the service NIC of a host. Configure the port on the switch connected to the service plane NIC on the server to allow VLANs of the service plane to pass with tags. You are advised to configure the port on the switch connected to the service plane NIC on the server to work in Trunk mode and allow VLANs of the service plane to pass.
Storage replication plane
Purpose: Replication service plane between storage devices. Devices at storage sites are interconnected through IP or FC ports for storage data replication.
Communication requirement: Devices in the storage sites are interconnected through the wavelength division multiplexing (WDM) technology.
Switch configuration: FC or IP ports on storage devices are connected across sites through WDM devices.
External network plane
Purpose: Used for communication between the internal network and external network of data centers. This plane is the egress for VMs to access external network services and the ingress for external networks to access the data center network. Public IP addresses need to be planned and configured by customers for border leaf switches.
Communication requirement: Border leaf switches are connected to customers' physical endpoints (PEs) or routing devices.
Switch configuration: Static or dynamic routes need to be configured for firewalls, load balancers, border leaf switches, and egress routers.
Public service plane
Purpose: The public service network consists of the client network and server network. The client network plane reuses the service plane. The server network plane is determined by the carried service. For example, the Object Storage Service (OBS) is deployed on the server, reuses the storage plane, and connects the client network to the OBS storage network through route configuration.
Communication requirement: Border leaf switches can communicate with the server network (for example, OBS) of public services.
Switch configuration: Border leaf switches are connected to the server network of public services through routes.
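The per-plane VLAN tagging and switch port-mode recommendations above can be recorded in a simple plan before configuration. The following Python sketch is only an illustration of such a plan; the VLAN IDs, plane names, and the check are hypothetical and are not values prescribed by this document.

from dataclasses import dataclass

@dataclass
class PlanePlan:
    plane: str        # network plane name
    vlan_id: int      # hypothetical VLAN ID chosen by the network planner
    port_mode: str    # "access" or "trunk", per the recommendations above
    tagged: bool      # whether frames reach the switch port with a VLAN tag

PLAN = [
    PlanePlan("BMC", 10, "access", False),
    PlanePlan("management", 20, "access", False),
    PlanePlan("storage", 30, "trunk", True),
    PlanePlan("service", 40, "trunk", True),
]

def check_isolation(plan):
    # Planes are isolated by VLANs, so every plane must use a distinct VLAN ID.
    vlan_ids = [p.vlan_id for p in plan]
    if len(vlan_ids) != len(set(vlan_ids)):
        raise ValueError("each plane must use its own VLAN ID")

check_isolation(PLAN)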
DCS typical networking includes single-DC single-core networking, single-DC single-core Layer 2 networking, single-DC single-core Layer 3
networking, and network overlay SDN networking.
Border leaf node: border function nodes that connect to firewalls, routers, or transmission devices to transmit external traffic to data
center networks and implement interconnection between data centers. Border leaf and spine nodes can be deployed in converged mode.
Value-added service (VAS): Firewalls and load balancers are connected to spine or border leaf nodes in bypass mode, and 10GE ports are
used for networking.
Spine: core nodes that provide high-speed IP forwarding and connect to each function leaf node through high-speed interfaces.
Leaf: access nodes, including the leaf switches connected to the compute nodes and converged nodes and the leaf switches connected to
the storage nodes. The leaf switches connected to compute nodes and converged nodes provide the capability of connecting compute and
management resources such as virtualized or non-virtualized servers to the fabric network. Leaf switches connected to storage nodes
provide the capability of connecting IP storage nodes to the fabric network.
For the interconnection between leaf and spine nodes, it is recommended that 40GE or 100GE ports be used for networking based on the oversubscription ratio (OSR); a worked OSR example follows this list.
BMC access switch: connects to the BMC network port of converged and compute nodes or the management port of storage devices, and connects to spine switches in the uplink direction. If the BMC access switch is not deployed, the BMC network port or the management port of storage devices needs to be directly connected to leaf access switches.
FusionCompute and eDME system management software can be deployed on converged nodes, and remaining resources can be used to
deploy VM compute services. You can deploy either IP or FC storage nodes, depending on your network performance requirements.
Distributed virtual switch (DVS): runs on compute and converged nodes to connect VMs to fabric networks.
Compute/Converged nodes: The nodes are connected to leaf nodes and can be configured with management, storage, and service ports.
DR service ports are optional. The service NICs use 10GE networking.
Storage replication service: Storage nodes are aggregated by storage top of rack (TOR) or FC switches and then interconnected with
remote storage devices through optical transport network (OTN) or WDM devices. This enables synchronous replication, asynchronous
replication, and active-active services on storage nodes, which can be configured based on customers' networking DR requirements.
For a new data center, it is recommended that the customer configure leaf, border, and spine switches according to the preceding networking (spine and border switches can be deployed in converged mode). If the customer has already planned the network, provide the network requirements to the customer based on the number of converged nodes, compute nodes, and TOR switches, for example, the number of 10GE ports on leaf switches and the number of 40GE or 100GE uplink ports on TOR switches.
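As a worked illustration of the oversubscription ratio referenced in the leaf/spine item above: the OSR of a leaf switch is its aggregate server-facing (downlink) bandwidth divided by its aggregate fabric-facing (uplink) bandwidth. The port counts and speeds below are hypothetical and are not sizing guidance from this document.

def oversubscription_ratio(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    # OSR = total downlink bandwidth / total uplink bandwidth of a leaf switch.
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# Example: 48 x 10GE server-facing ports and 4 x 100GE uplinks give an OSR of 1.2:1,
# which satisfies both the 4:1 and 2:1 targets mentioned for the two topologies below.
print(oversubscription_ratio(48, 10, 4, 100))  # 1.2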
A fabric network refers to a large switching network consisting of multiple switches, which is distinguished from the network with a single switch.
A fabric network consists of a group of interconnected spine and leaf nodes. All nodes connected to the network can communicate with each other.
Multiple tenants can share one physical device and one tenant can also use multiple servers, greatly saving costs and improving resource usage.
Compute nodes support deployment with four or six network ports. When four network ports are used, the management and service ports are deployed in converged mode, and the storage ports are deployed independently. When six network ports are used, the management ports, service ports, and storage ports are deployed independently and isolated by VLANs. If the DR service software is deployed on compute nodes, the DR service ports and storage service ports are deployed in converged mode in the four- or six-port configuration. Compute nodes can also be configured with eight network ports. In this case, the management ports, service ports, storage ports, and DR ports are deployed independently and isolated by VLANs. The following two networking topologies are recommended based on the service bandwidth and scale:
Single-DC single-core Layer 2 networking: spine + leaf (converged deployment of border and spine nodes)
The network has a single region, single DC, and single physical egress, with a scale of no more than 2,000 VMs and 350 servers.
The management plane and service plane do not need to be physically isolated and can share core switches.
The inter-rack east-west traffic is small, and the service traffic OSR is less than 4:1.
Spine, service leaf, and border leaf nodes are deployed in converged mode.
Single-DC single-core Layer 3 networking: border leaf + spine + leaf
The network has a single region, single DC, and single physical egress, with a scale of no more than 5,000 VMs and 1,000 servers.
The management plane and service plane do not need to be physically isolated and can share core switches.
The inter-rack east-west traffic is large, and the service traffic OSR is less than 2:1.
Spine, service leaf, and border leaf nodes are deployed separately.
In the Layer 3 networking topology and iMaster NCE management networking diagram, border leaf nodes are independently deployed as
border routers. Firewalls and load balancers are connected to border leaf nodes in bypass mode. The networking of four or six network ports is
supported. The networking diagram is shown in Figure 5.
Figure 4 Layer 2 networking topology and iMaster NCE management networking diagram (converged deployment of spine and border nodes)
Figure 5 Layer 3 networking topology and iMaster NCE management networking diagram (separated deployment of spine and border nodes)
2.4.1 FusionCompute
Overview
FusionCompute is a cloud OS. It virtualizes hardware resources and centrally manages virtual resources, service resources, and user resources. It
uses compute, storage, and network virtualization technologies to virtualize compute, storage, and network resources. It centrally schedules and
manages virtual resources over unified interfaces. FusionCompute provides high system security and reliability and reduces the OPEX, helping
carriers and enterprises build secure, green, and energy-saving data centers.
Technical Highlights
FusionCompute uses virtualization management software to divide compute resources into multiple VM resources, providing you with high-performance, operable, and manageable VMs.
Uses QoS to ensure resource allocation and prevent users from affecting each other.
2.4.2 eDME
Overview
eDME is an intelligent O&M platform designed for DCS to centrally manage software and hardware. It also provides lightweight cloud solutions for
small- and medium-sized data centers.
Provides automatic management and intelligent O&M throughout the lifecycle of data center virtualization infrastructure, including planning,
construction, O&M, and optimization, helping customers simplify management and improve data center O&M efficiency.
Provides lightweight, elastic, agile, and efficient cloud solutions, and offers multi-level VDCs, computing, storage, network, DR and backup, security, database, and heterogeneous virtualization capabilities as services. This addresses customers' challenges in planning, using, and managing resources and improves the efficiency of enterprise business departments.
Opens northbound and southbound APIs. Northbound APIs can be used to connect to customers' existing O&M platforms and various cloud management platforms. Southbound APIs can be used to take over third-party devices through plug-ins or standard protocols.
Application Scenarios
eDME centrally manages software and hardware in virtualization scenarios. It controls, manages, and collaboratively analyzes databases, data tools,
servers, switches, and storage devices. Based on Huawei's unified intelligent O&M management platform, eDME provides real-time O&M and
closed-loop problem solving based on alarms, a wide range of report analysis capabilities based on unified datasets, and automatic O&M capabilities
based on policies and AI. In the southbound and northbound ecosystem, eDME can connect to customers' existing O&M platforms and various
cloud management platforms through northbound protocols such as RESTful, SNMP, Telnet, and Redfish. In addition, eDME supports typical third-
party devices, including servers, switches, and storage devices, through the open standard southbound interfaces in the industry.
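As a hedged illustration of northbound integration, the sketch below polls alarms over a RESTful interface. The host, path, and authentication header are placeholders only; they are not the documented eDME northbound API, which should be taken from the eDME API reference.

import requests

EDME_HOST = "https://edme.example.com"         # placeholder address
TOKEN = "<token from the authentication API>"  # placeholder credential

def fetch_alarms():
    # Hypothetical alarm-polling call; the path and header name are placeholders.
    response = requests.get(
        f"{EDME_HOST}/rest/alarms",
        headers={"X-Auth-Token": TOKEN},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()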
2.4.3 eBackup
Overview
eBackup is Huawei-developed backup software for cloud and virtual environments. Employing VM snapshots, disk snapshots, and Changed Block Tracking (CBT), eBackup provides comprehensive protection for user data in virtualization scenarios.
eBackup supports backup and restoration of VM data and disk data in virtualization scenarios.
Backup object
Indicates an object to be backed up. eBackup Virtual Backup Solution can protect data of VMs on Huawei FusionCompute.
Backup storage
Supports Network Attached Storage (NAS), Simple Storage Service (S3), and Storage Area Network (SAN) storage.
Backup policy
Supports the creation of protection policies for VMs and one or multiple disks. Permanent incremental backup is supported to reduce the
amount of data to be backed up.
2.4.4 UltraVR
Overview
UltraVR is a piece of DR service management software for enterprise data centers. It enables you to configure DR services in a simple and efficient
manner, monitor the running status of DR services in a visualized manner, and quickly complete data recovery and tests.
Highlights
Convenience and efficiency
UltraVR supports process-based DR service configuration. In addition, it supports one-click DR tests, planned migration, fault recovery, and
reprotection.
Visualization
UltraVR enables you to easily manage the entire DR process by graphically displaying physical topologies of global DR and logical topologies
of service protection. You can easily understand the execution status of protected groups and recovery plans.
Integration
UltraVR integrates with storage resource management. It meets the DR O&M requirements in various application scenarios, such as active-
standby data centers, geo-redundancy with three data centers, and active-active data centers, reducing O&M costs and improving O&M
efficiency.
High reliability
The multi-site deployment of UltraVR improves the reliability of the DR service management and scheduling system. In addition, the
automatic backup of management data ensures quick recovery of the management system.
2.4.5 HiCloud
Overview
CMP is a core component that implements unified management of heterogeneous resource pools in cloud data centers and converts resources into
cloud services. It distinguishes cloud data centers from traditional data centers.
CMP HiCloud is an industry-leading hybrid cloud management platform that provides unified management of heterogeneous resources. It supports
management of cloud services such as Huawei hybrid cloud and VMware. In addition, CMP supports dual stacks of x86 and Arm, and features
unified service orchestration, resource scheduling, and cross-cloud deployment, which enable it to quickly adapt to customer's cloud platforms and
promote service innovation.
Highlights
HiCloud builds industry-leading competitiveness based on the following aspects:
Supports Docker image installation and fast boot based on the Docker container technologies.
Supports interconnection with third-party IT O&M management systems (ITSM, ITOM, and CMDB).
Provides HA cloud management platform solutions based on the customers' service requirements and live network environment.
Implements automatic allocation, automatic billing, unified management, and unified O&M on tenant resources, improving rollout and O&M
efficiency.
2.4.6 eDataInsight
Overview
Huawei eDataInsight is a distributed data processing system that provides large-capacity data storage, query, and analysis capabilities to meet enterprise requirements in various scenarios.
Application Scenarios
The advent of competition in Internet finance poses challenges to financial enterprises and urges them to reconstruct decision-making and service systems based on big data analysis and mining to improve their competitiveness and customer satisfaction. In the big data era, banks need to focus on data instead of transactions to address the challenges of real-time processing of multidimensional, massive data and Internet business.
Huawei eDataInsight can solve the problems of financial enterprises from different aspects and improve their competitiveness. For example:
Precise push
Distributed online banking logs can be collected more efficiently. User preference is analyzed based on the online banking logs to
achieve precise push, greatly improving online banking users' experience.
All target users can be covered with less than 20% of the original volume of recommendation SMS messages, achieving precise push.
2.4.7 iMaster NCE-Fabric
Overview
iMaster NCE-Fabric functions as the SDN controller to manage switches in data centers and automatically deliver service configurations.
Highlights
iMaster NCE-Fabric implements network automation and automatic and fast service orchestration.
iMaster NCE-Fabric associates network resources with compute resources to implement automatic network configuration, reducing the
configuration workload of network administrators.
FusionCompute interconnects with iMaster NCE-Fabric to associate compute and network resources, implementing automatic provisioning of
virtual networks and automatic network configuration during VM provisioning, HA, and migration.
2.4.8 eCampusCore
Overview
As a core component of the campus digital platform, eCampusCore is dedicated to building an enterprise-level integration platform. It provides IT
system and OT device connection, heterogeneous AI algorithm integration, lightweight data processing, and application O&M capabilities for digital
scenarios in the enterprise market. In addition, it provides flexible multi-form deployment adaptation capabilities for industry solutions to support
enterprise digital transformation.
2.4.9 eContainer
Overview
Huawei eContainer is a cloud-native container platform that provides cloud-native infrastructure management and unified orchestration and
scheduling of K8s-based containerized applications. This platform addresses the core requirements of enterprises undergoing cloud-native digital
transformation. eContainer offers two services in DCS: Elastic Container Engine (ECE) and SoftWare Repository for Container (SWR). The primary
capabilities include:
You can manage container images easily without having to build and maintain a platform.
You can manage container images throughout their lifecycles, including uploading, downloading, and deleting container images.
The community Registry V2 protocol is supported. Container images can be managed through the community CLI (such as containerd, iSula, and Docker) and native APIs; a small query example follows this list.
A container image security isolation mechanism is provided at the resource set granularity based on the multi-tenant service to secure
data access.
The image storage space can be flexibly configured based on service requirements, reducing initial resource costs.
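Because SWR is compatible with the community Registry V2 protocol, standard Registry V2 endpoints such as /v2/_catalog and /v2/<name>/tags/list can be used to query it. The sketch below is illustrative only; the registry address and credentials are placeholders, and a real SWR deployment may additionally require token-based authentication.

import requests

REGISTRY = "https://swr.example.com"   # placeholder registry address
AUTH = ("user", "password")            # placeholder credentials

def list_repositories():
    # GET /v2/_catalog is the standard Registry V2 repository listing endpoint.
    response = requests.get(f"{REGISTRY}/v2/_catalog", auth=AUTH, timeout=30)
    response.raise_for_status()
    return response.json().get("repositories", [])

def list_tags(repository):
    # GET /v2/<name>/tags/list returns the tags of a single repository.
    response = requests.get(f"{REGISTRY}/v2/{repository}/tags/list", auth=AUTH, timeout=30)
    response.raise_for_status()
    return response.json().get("tags", [])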
Application Scenarios
DCS AI full-stack
The DCS AI full-stack solution is built based on the DCS eContainer platform. It provides an underlying AI platform for training and inference in
healthcare, finance, coal mining, scientific research, and many other scenarios.
The XPU K8s cluster provided by ECE interconnects with AI development platform ModelMate, enabling AI users to provision their own AI
platforms to complete end-to-end AI training and inference as well as data processing.
SWR provides container image management capabilities for the AI full-stack solution. It supports unified management of container images
related to AI training and inference and data processing services, simplifying the process of deploying containerized applications.
2.5.1 Server
For details about servers supported by DCS, visit Huawei Storage Interoperability Navigator.
2.5.2 Switch
Table 1 lists the switches supported by DCS. For details about other switches, visit Huawei Storage Interoperability Navigator.
Category Model
CE6857F-48S6CQ
FM6857E-48S6CQ
FM6857-48S6CQ-EI
GE switch FM5855E-48T4S2Q
Security Threats
In addition to addressing security threats of traditional data centers, new data centers also face the following new security threats and challenges:
Storage-layer infrastructure
If static data is damaged and the error cannot be detected immediately, incorrect data may be returned to the host, causing service
exceptions.
Data may not be entirely cleared after the compute resource or storage space is released.
Network-layer infrastructure
The distributed deployment of data center resources complicates route and domain name configuration and therefore makes the data
center more vulnerable to network attacks, such as domain name server (DNS) and distributed denial-of-service (DDoS) attacks. DDoS
attacks come not only from the external network but also from the internal network.
Logical isolation instead of physical isolation and the change of the network isolation model produce security vulnerabilities in the
original isolation of an enterprise network.
Multiple tenants share compute resources, which may result in risks in resource sharing such as user data leakage, caused by improper
isolation measures.
Host-layer infrastructure
The hypervisor runs at the highest privilege level (even higher than that of the OS). If the hypervisor is compromised, all VMs running on it are fully exposed to attack.
If VMs do not have security measures or security measures are not automatically created, keys for accessing and managing VMs may
be stolen, services (such as FTP and SSH) that are not patched in a timely manner may be attacked, accounts with weak passwords or
without passwords may be stolen, and systems that are not protected by host firewalls may be attacked.
Security Architecture
Huawei provides a security solution to face the threats and challenges posed to virtualization. Figure 1 shows the security solution architecture.
User data on different VMs is isolated at the virtualization layer to prevent data theft and ensure data resilience.
Data access control is implemented. In FusionCompute, different access policies are configured for different volumes. Only users with
the access permission can access a volume, and different volumes are isolated from each other.
Residual information is protected. When reclaiming resources, the system can format the physical bits of logical volumes to prevent residual data from being recovered. After the physical disks of a data center are replaced, the system administrator of the data center needs to degauss them or physically destroy them to prevent data leakage. Data storage uses a reliability mechanism: one or more copies of backup data are stored so that data is not lost and services are not affected even if storage devices such as disks become faulty.
Cyber resilience
Network isolation is adopted. The network communication plane is divided into the service plane, storage plane, and management
plane, and these planes are isolated from each other. As a result, operations of the management platform do not affect service running
and end users cannot damage basic platform management.
Network transmission security is ensured. Data in transit may be replicated, modified, forged, intercepted, or monitored, so the integrity, confidentiality, and validity of data must be protected during network transmission. HTTPS is used for pages that contain sensitive data, and SSL-based transmission channels are used for system administrators to access the management system. Users access VMs using HTTPS, and data transmission channels are encrypted using SSL.
Host security
VM isolation is implemented. Resources of VMs on the same physical host are isolated, preventing data theft and malicious attacks
and ensuring the independent running environment for each VM. Users can only access resources allocated to their own VMs, such as
hardware and software resources and data, ensuring secure VM isolation.
OS hardening is implemented. Compute nodes, storage nodes, and management nodes run on EulerOS Linux. Host OS security is
ensured by the following security configurations:
Harden SSH services. Control access permissions for files and directories.
Security patches are provided. Software design defects result in system vulnerabilities. System security patches must be installed
periodically to fix these vulnerabilities and protect the system against attacks by viruses, worms, and hackers. The patches include
virtualization platform security patches and user VM security patches.
Web security is ensured. When users access the web service platform using HTTP, the platform automatically redirects their access requests to HTTPS links to enhance access security.
Role-based permission management assigns different permissions for different resources to users to ensure system security.
Log management is implemented. Logs record system running statuses and users' operations on the system, and can be used to query
user behaviors and locate problems. Logs are classified into operation logs and run logs. Operation logs record system security
information.
Security Value
Unified and comprehensive security policies
The centralized management of compute resources makes it easier to deploy boundary protection. Comprehensive security management
measures, such as security policies, unified data management, security patch management, and unexpected event management, can be taken to
manage compute resources. For users, this also means having professional security expert teams to protect user resources and data.
2.8.1 Compute
FusionCompute virtualizes compute resources and uses eDME to provision VMs on a unified page.
Overview
FusionCompute uses the compute resource virtualization technology to virtualize compute resources and manage virtual resources and service
resources in a centralized manner. Multiple VMs can be deployed on one physical server so that one server can function as multiple servers.
Technical Highlights
Improves resource utilization of data center infrastructure.
Leverages high availability and powerful restoration capabilities of virtualized infrastructure to provide rapid automatic fault recovery for
services, reducing data center costs and increasing system uptime.
2.8.2 Storage
Virtualized storage
Storage virtualization abstracts storage devices as datastores. VMs are stored as a group of files in their own directories in datastores. A datastore is a
logical container that is similar to a file system. It hides the features of each storage device and provides a unified model to store VM files. Storage
virtualization helps the system better manage virtual infrastructure storage resources, greatly improving storage resource utilization and flexibility
and increasing application uptime.
The following storage units can be encapsulated as datastores:
Logical unit numbers (LUNs) on storage area network (SAN) storage, including Internet Small Computer Systems Interface (iSCSI) and fibre
channel (FC) SAN storage
File systems on network attached storage (NAS) devices
Storage pools of mass block storage
Local disks of hosts
Storage pools of eVol storage
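A minimal sketch of the datastore abstraction described above follows; the backend names mirror the list of storage units, while the datastore name and capacity are hypothetical.

from dataclasses import dataclass
from enum import Enum

class Backend(Enum):
    SAN_LUN = "LUN on iSCSI or FC SAN storage"
    NAS_FILE_SYSTEM = "file system on a NAS device"
    SCALE_OUT_BLOCK_POOL = "storage pool of mass block storage"
    LOCAL_DISK = "local disk of a host"
    EVOL_POOL = "storage pool of eVol storage"

@dataclass
class Datastore:
    name: str          # hypothetical datastore name
    backend: Backend   # the storage unit encapsulated by this datastore
    capacity_gb: int   # hypothetical capacity

    def describe(self):
        return f"{self.name}: {self.capacity_gb} GB backed by {self.backend.value}"

print(Datastore("ds-example-01", Backend.SAN_LUN, 4096).describe())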
Block storage service
Provisions resources automatically based on service levels. Storage resources are allocated on demand based on service scenarios, maximizing storage resource utilization.
Provides convenient task management. You can view the steps and execution results of the block storage service provisioning process in the task center.
File storage service
Orchestrates the creation, deletion, and modification operations of file systems. You can customize parameters to configure related resources in one step.
Provides convenient task management. You can view the steps and execution results of the file storage service provisioning process in the task center.
2.8.3 Network
The distributed virtual switch (DVS) service depends on the FusionCompute virtualization suite. After hardware virtualization is complete, you can
provision and manage DVSs in eDME to enable communication among VMs and between VMs and external networks.
Definition
A DVS connects to VM NICs through port groups and to host physical NICs through uplinks, thereby connecting VMs to the external network, as
shown in Figure 1.
Table 1 describes the concepts of each network element (NE) in the figure.
Table 1 Concepts
NE Description
DVS: A DVS is similar to a layer-2 physical switch. It connects to VMs through port groups and connects to physical networks through uplinks.
Port group: A port group is a virtual logical port which is similar to a network attribute template, used to define the VM NIC attributes and the mode in which a VM NIC connects to the network through a DVS. When VLAN is used: no IP address is assigned to the VM NIC that uses the port group corresponding to the VLAN (you need to manually assign an IP address to the VM NIC), but the VM connects to the VLAN defined by the port group.
Uplink: An uplink connects a DVS to a physical NIC on a host for VM data transfer.
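The relationships in Table 1 can be summarized in a small data model: a DVS owns port groups (which carry the VLAN used by VM NICs) and uplinks (which bind the DVS to physical host NICs). The Python sketch below is illustrative only, and all names and VLAN IDs are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PortGroup:
    name: str
    vlan_id: int          # VM NICs using this port group join this VLAN

@dataclass
class Uplink:
    host: str
    physical_nic: str     # physical NIC on the host used for VM data transfer

@dataclass
class DistributedVirtualSwitch:
    name: str
    port_groups: List[PortGroup] = field(default_factory=list)
    uplinks: List[Uplink] = field(default_factory=list)

dvs = DistributedVirtualSwitch("dvs-service")
dvs.port_groups.append(PortGroup("pg-vlan40", vlan_id=40))
dvs.uplinks.append(Uplink(host="host-01", physical_nic="eth2"))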
Benefits
Beneficiary Benefit
Customer: The logical architecture is similar to that of traditional switches, which is easy for IT O&M personnel to understand and use. You can view the steps and execution results of the DVS provisioning process on the task center page.
Network service provisioning: The network administrator uses the network controller to allocate network resources to specified services
or applications. The network controller automatically delivers service configurations of overlay networks and access configurations of
VMs or PMs.
Compute service provisioning: The compute administrator creates, deletes, and migrates compute and storage resources using
FusionCompute. iMaster NCE-Fabric can automatically detect the operations performed by FusionCompute on compute resources.
In the network virtualization – computing scenario, the functions of each layer are as follows:
The service presentation layer is oriented to data center users. iMaster NCE-Fabric provides GUIs for network administrators,
implementing network service orchestration, policy provisioning, automated deployment, and O&M management.
The network control layer is the core of the network virtualization - computing scenario. iMaster NCE-Fabric implements
network modeling and instantiation, collaborates virtual and physical networks, and provides network resource pooling and
automation. In addition, as the key component for separating SDN network control from forwarding, iMaster NCE-Fabric
constructs an entire network view to uniformly control and deliver service flow tables.
The network service layer is the infrastructure of a data center network, providing high-speed channels for carrying services, including
L2-L3 basic network services and L4-L7 value-added network services. The network service layer uses the flat spine-leaf architecture. As
core nodes on the Virtual Extensible LAN (VXLAN) fabric network, spine nodes provide high-speed IP forwarding, and connect to leaf
nodes of various functions through high-speed interfaces. As access nodes on a VXLAN fabric network, leaf nodes connect various
network devices to the VXLAN network.
The compute access layer supports access from virtualized servers and physical servers.
A virtualized server indicates that a physical server is virtualized into multiple VMs and vSwitches using virtualization
technologies. VMs connect to the fabric network through vSwitches. iMaster NCE-Fabric is compatible with mainstream
products that have virtualized servers.
Physical servers are considered as logical ports by iMaster NCE-Fabric. Physical servers are connected to the fabric network
through logical ports.
iMaster NCE-Fabric
Definition
On the iMaster NCE-Fabric page, configure a port group and associate the specified VLAN of the port group with the logical switch. Different
port groups of the same logical switch communicate with each other at Layer 2 through the logical switch, as shown in Figure 2.
Table 1 describes the concepts of each network element (NE) in the figure.
Table 1 Concepts
NE Description
LogicSwitch: By configuring a port group, you can associate a specified VLAN of the port group with a logical switch.
PortGroup: A port group is a virtual logical port which is similar to a network attribute template, used to define the VM NIC attributes and the mode in which a VM NIC connects to the network through a DVS. When VLAN is used: no IP address is assigned to the VM NIC that uses the port group corresponding to the VLAN (you need to manually assign an IP address to the VM NIC), but the VM connects to the VLAN defined by the port group.
vm: VM
SDN: DCS uses iMaster NCE-Fabric to implement network automation and automatic and fast service orchestration.
Computing association: FusionCompute associates with iMaster NCE-Fabric. iMaster NCE-Fabric detects VM login, logout, and
migration status and automatically configures the VM interworking network.
FusionCompute: The solution uses the network overlay SDN solution and supports association between FusionCompute and iMaster
NCE-Fabric to implement automatic provisioning of virtual network services and automatic network configuration during VM
provisioning, HA, and migration.
Benefits
Beneficiary Benefit
Customer: The overlay virtualized network based on the Virtual Extensible LAN (VXLAN) and SDN enables configuration of server virtualization and network automation without changing the existing network. This simplifies conventional network deployment, enables fast service rollout, improves service deployment flexibility, and meets customers' requirements for dynamic service changes.
2.8.4 DR
DCS uses the unified management software UltraVR to provide multiple DR solutions, including the local high availability (HA) solution,
metropolitan HA solution, active-standby DR solution, and geo-redundant 3DC DR solution. It provides customers with all-region and all-scenario
DR solutions within a single data center, between data centers, and to the cloud, maximizing the service continuity.
DCS provides key capabilities such as unified DR management, DR process automation, failover, reprotection, scheduled migration, and DR drills.
The local HA solution uses the storage active-active feature (HyperMetro) and virtualization HA capability to ensure zero RPO and minute-level
RTO in a single data center in the event that storage devices or hosts are faulty, maximizing the customer service continuity.
The metropolitan HA solution uses the storage active-active feature and VM HA capability of two data centers in the same city to ensure that the intra-city DR center can quickly take over services from the production center if the production center is faulty. The solution enables zero RPO and minute-level RTO, maximizing the service continuity.
The active-standby DR solution replicates data between two data centers that are located far from each other. If the production center is faulty, the
DR software can be used for failover, providing minute-level RPO and hour-level RTO DR capabilities, and maximizing the service continuity. In
addition, the solution also provides multiple automatic management functions, such as DR drills, reprotection, and failback.
The active-standby DR solution is classified into the active-standby DR solution based on the storage-layer replication and the DR solution based on
the application-layer replication.
The active-standby DR solution based on storage-layer replication uses the Huawei storage replication capability (HyperReplication) and Huawei
DR management software UltraVR to implement synchronous or asynchronous DR protection.
The active-standby DR solution based on application-layer replication uses Information2's byte-level replication and SQL semantic-level replication
to implement asynchronous active-standby DR protection.
The geo-redundant 3DC DR solution is a combination of the metropolitan HA solution and active-standby DR solution. It provides DR protection by
storing multiple copies of the same data across data centers. This enables the service recovery when any of two data centers fails, maximizing the
service continuity.
The network architecture of the geo-redundant 3DC DR solution is classified into the following two types:
Cascading architecture: The metropolitan active-active DR solution is deployed between the production center and intra-city DR center. In addition,
the active-standby DR solution is deployed between the intra-city DR center and remote DR center.
Ring architecture: The metropolitan HA solution is deployed between the production center and intra-city DR center, and the active-standby DR
solution is deployed between the production center and remote DR center. In addition, the active-standby DR solution is deployed between the intra-
city DR center and remote DR center as a backup DR solution. If the replication link between the production center and remote DR center is faulty, a
replication link is enabled between the intra-city DR center and the remote DR center.
2.8.5 Backup
DCS provides backup solutions for VMs and applications on Huawei virtualization platforms, such as centralized backup, to effectively cope with
data damage or loss caused by human errors, viruses, or natural disasters.
The centralized backup solution combines eBackup with Huawei backup storage, general-purpose storage, or cloud storage, uses VM snapshot and
Changed Block Tracking (CBT) technologies to implement high-performance full and incremental backup of VMs, and provides VM- or disk-level
restoration and ultimate experience of deduplication and compression.
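The following sketch is a conceptual illustration (not eBackup's implementation) of why CBT shrinks incremental backups: after one full copy, only the blocks flagged as changed since the previous backup are copied, and restoration replays the increments on top of the full image.

def full_backup(disk_blocks):
    """Copy every block of the disk; returns the full backup image."""
    return dict(disk_blocks)

def incremental_backup(disk_blocks, changed_block_ids):
    """Copy only the blocks flagged by CBT since the previous backup."""
    return {block_id: disk_blocks[block_id] for block_id in changed_block_ids}

def restore(base_image, increments):
    """Apply incremental images on top of the full backup to rebuild the disk."""
    disk = dict(base_image)
    for increment in increments:
        disk.update(increment)
    return disk

# Example: a 5-block disk where only blocks 1 and 3 changed after the full backup.
disk = {0: "a", 1: "b", 2: "c", 3: "d", 4: "e"}
base = full_backup(disk)
disk[1], disk[3] = "B", "D"
inc1 = incremental_backup(disk, changed_block_ids={1, 3})
assert restore(base, [inc1]) == disk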
2.8.6 Multi-Tenancy
Multi-tenant service management presents one set of virtual resources as multiple sets for multiple tenants. These tenants share the same platform, but resources of different tenants are isolated, and each tenant can view and use only its own resources. This saves IT investment and simplifies O&M management.
2.8.6.1 ECS
Definition
An Elastic Cloud Server (ECS) is a virtual compute server that consists of vCPUs, memory, disks, and other required resources. ECSs are easy to
obtain and scalable. In addition, you can use ECSs on demand. The ECS service works with storage services, Virtual Private Cloud (VPC), and
Cloud Server Backup Service (CSBS) services to build an efficient, reliable, and secure computing environment, ensuring stability and continuity of
your data and applications. The resources used by the ECS service, including vCPUs and memory, are hardware resources that are consolidated
using the virtualization technology.
When creating an ECS, you can customize the number of vCPUs, memory size, image type, and more. After an ECS is created, you can use it in the same way as a local computer or physical server. ECSs provide relatively inexpensive compute and storage resources on demand. A unified management platform simplifies management and maintenance, enabling you to focus on services.
Functions
The ECS service allows you to perform the following operations. For details about the application process and supported functions, see Table 1.
When applying for an ECS, you can configure the ECS's specifications, images, network, disks, and advanced parameters.
Manage the lifecycle of ECSs, including starting, stopping, restarting, and deleting them; clone ECSs; convert ECSs into images; create
snapshots for ECSs; modify vCPUs and memory of ECSs.
2.8.6.1.2 Advantages
Compared with traditional servers, ECSs are easy to provision and use, and have high reliability, security, and scalability.
Reliability
ECS: The ECS service can work with other cloud services, such as storage services and disaster recovery & backup, to allow specification modification, data backup, recovery using a backup, and rapid recovery from a fault.
Traditional server: Traditional servers, subject to hardware reliability issues, may easily fail. You need to manually back up their data. You need to manually restore their data, which may be complex and time-consuming.
Security
ECS: The security service ensures that ECSs work in a secure environment, protects your data, hosts, and web pages, and checks whether ECSs are under brute force attacks and whether remote logins are performed, enhancing your system security and mitigating the risks of hacker intrusion.
Traditional server: You need to purchase and deploy security measures additionally. It is difficult to perform access control on multiple users to multiple servers.
Scalability
ECS: You can modify the ECS specifications, including the number of vCPUs and memory size. You can expand the capacity of the system disk and data disk.
Traditional server: Configurations are fixed and are difficult to meet changing needs. Hardware upgrade is required for modifying configuration, which takes a long time, and the service interruption time is uncontrollable. Service scalability and continuity cannot be guaranteed.
Ease of use
ECS: A simple and easy-to-use unified management console streamlines operations and maintenance. A wide range of products are provided, including network, storage, DR, and more, which can be provisioned and deployed in a one-stop manner.
Traditional server: Without software support, users must repeat all steps when adding each new server. It is difficult for you to obtain all required services from one service provider.
Ease of provision
ECS: After deploying an entire cloud and finishing necessary configurations, you can customize the number of vCPUs, memory size, images, and networks to apply for ECSs at any time.
Traditional server: When using traditional servers, you must buy and assemble the components and install the operating systems (OSs).
We provide multiple types of ECSs to meet requirements of various scenarios. ECSs are widely used for:
Databases and other applications that require fast data exchange and processing
For high-performance relational databases, NoSQL databases, and other applications that require high I/O performance on servers, you can
choose ultra-high I/O ECSs and use high-performance local NVMe SSDs as data disks to provide better read and write performance and lower
latency, improving the file read and write rate.
Block storage service: The block storage service provides the storage function for ECSs. Users can create EVS disks online and attach them to ECSs.
Image Management Service (IMS): You can apply for an ECS using a public, private, or shared image. You can also convert an ECS to a private image.
Virtual Private Cloud (VPC): VPC provides networks for ECSs. You can use the rich functions of VPC to flexibly configure a secure running environment for ECSs.
Architecture
Figure 1 ECS logical architecture
Type Description
Console: ECS_UI is a console centered on the ECS service and manages relevant resources.
Composite API (ECS): Provides a backend service for ECSs. It can be seen as the server end of ECS_UI and can call FusionCompute components. Requests sent by an ECS from the console are forwarded by ECS_UI to Composite API and are returned to ECS_UI after being processed by Composite API.
Unified operation: Composite API reports ECS quota, order, product information, and metering and charging information to the eDME operation module.
Unified O&M: Composite API reports ECS log, monitoring, and alarm information to the eDME O&M module.
Workflow
The following figure shows the workflow for creating an ECS.
1. Submit the application on the ECS page, corresponding to step 1 in the preceding figure.
2. a. The ECS API of Composite API calls the EVS API of Composite API.
   b. EVS creates volumes in the storage pool according to storage resource application policies.
3. a. The ECS API of Composite API calls the VPC API of Composite API.
   b. Region Type II: The VPC API calls the DVN service to create an EIP, a port, and more.
      Region Type III: The VPC API calls the DVN service to create a port and more.
4. The ECS interface delivers the request to FusionCompute to create an ECS in the compute resource pool.
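The workflow above can be summarized in pseudocode. All functions below are local stubs that stand in for the Composite API, EVS, VPC/DVN, and FusionCompute calls; they are not real API names or signatures.

def evs_create_volumes(disk_specs):
    # Step 2: EVS creates volumes in the storage pool per the application policy.
    return [f"volume-{i}" for i, _ in enumerate(disk_specs)]

def dvn_create_network(region_type):
    # Step 3: the VPC API calls the DVN service; Region Type II also gets an EIP.
    resources = ["port"]
    if region_type == "II":
        resources.append("eip")
    return resources

def fusioncompute_create_vm(flavor, volumes, network):
    # Step 4: FusionCompute creates the ECS in the compute resource pool.
    return {"flavor": flavor, "volumes": volumes, "network": network}

def create_ecs(flavor, disk_specs, region_type):
    volumes = evs_create_volumes(disk_specs)
    network = dvn_create_network(region_type)
    return fusioncompute_create_vm(flavor, volumes, network)

print(create_ecs("4vCPU-8GB", disk_specs=[{"size_gb": 100}], region_type="II"))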
2.8.6.2 BMS
2.8.6.2.2 Benefits
Compared with VMs and PMs, BMSs have no feature or performance loss. For details, see Table 1. Y indicates supported, N indicates unsupported,
and N/A indicates that the function is not involved.
No performance loss Y Y N
Network VPC Y N Y
User-defined network Y N N
Security-demanding scenario
Financial and security industries have high compliance requirements, and some customers have strict data security requirements. BMSs meet
the requirements for exclusive, dedicated resource usage, data isolation, as well as operation monitoring and tracking.
2.8.6.2.4 Functions
The following figure shows the functions of the BMS service.
Function Description
BMS lifecycle management: Users can start, shut down, restart, and delete BMSs.
Application for BMSs with specified flavors: O&M administrators can predefine BMS flavors and associate them with BMSs. Tenants can apply for BMSs of different flavors based on application scenarios.
BMS OS installation using an image: O&M administrators can create images, which are generally common standard OS images. This function also implements OS pre-installation.
Attachment and management of multiple EVS disks: The following operations are supported for EVS disks: attachment, detachment, capacity expansion, and retry.
Multiple NICs and specified IP addresses for NICs: Users can configure IP addresses for each physical NIC of a BMS.
EIP binding: Users can bind public IP addresses that have been applied for to BMSs.
BMS management: Users can configure BMC IP address segments, BMC user names, and passwords to manage BMSs.
Task log viewing: Users can view asynchronous task execution records and logs on the BMS O&M portal.
Block storage service: The block storage service provides storage for BMSs. You can create EVS disks online and attach them to BMSs. EVS disks of BMSs can only use centralized storage.
Virtual Private Cloud (VPC): VPC provides networks for BMSs. You can use the rich functions of VPC to flexibly configure a secure running environment for BMSs.
2.8.6.3 IMS
Definition
An image is an Elastic Cloud Server (ECS) template that contains software and other necessary configurations, including the OS, preinstalled public
applications, and the user's private applications or service data. Images are classified into public, private, and shared images.
Image Management Service (IMS) provides easy-to-use, self-service image management functions. You can apply for an ECS using a public,
private, or shared image. You can also create a private image from an ECS or an external image file.
Type
Public image
A public image is a standard image provided by the cloud platform system. A public image contains the common standard OS and preinstalled
public applications. It provides easy and convenient self-service image management functions, and is visible to all users. You can conveniently
use a public image to create an ECS.
Private image
A private image is created by a user based on an ECS or an external image file. A private image is visible only to the user who created it. A private image contains the OS, preinstalled public applications, and the user's private applications and service data.
Private images can be classified into the following types by user service:
ECS image
An ECS image contains the OS, preinstalled public applications, and the user's private applications and service data.
You can use a system disk image to create ECSs so that you do not need to repeatedly configure the ECSs.
You can apply for an EVS disk using a created data disk image to quickly migrate data.
You can use an ECS image to create ECSs so that an ECS can be migrated quickly as a whole.
Shared image
A shared image is a private image that you have created and is shared with other resource sets. After the image is shared, the recipient can use
the shared image to quickly create a cloud server running the same image environment.
2.8.6.3.2 Advantages
Convenience
You can create private images from ECSs or external image files, and create ECSs in batches using an image.
Security
An image file has multiple redundant copies, ensuring high data durability.
Flexibility
You can manage images in custom mode on the GUI or using the API.
Consistency
You can deploy and upgrade application systems using images so that O&M will be more efficient and the application environments will be
consistent.
You can create a private image using an existing ECS and use the private image to create ECSs in batches. In this way, services can be quickly
migrated or deployed in batches. The advantages of this scenario are as follows:
A private image can be created using an ECS so that services can be flexibly migrated.
Images are designed for durability, and the image data will not be lost.
Block storage service: You can apply for an EVS disk using a data disk image.
ECS: You can create ECSs using an image and can convert an ECS to an image.
Auto Scaling (AS) service: You can create an AS configuration using an image.
Architecture
Figure 1 Logical architecture of IMS
Network service: Manages VPCs, security groups (SGs), and elastic IP addresses (EIPs).
Unified operation: Reports ECS-related quota, order, product information, and metering information to eDME for unified operation.
Unified O&M: Reports ECS-related operation logs, monitoring information, and alarms to the eDME O&M module.
Specifications
Table 2 describes the image specifications.
Number of images supported by a single region: 500, including public and private images.
Size of a private image file that can be uploaded: The size of an image file uploaded in HTTPS mode is less than 6 GB. The maximum size of an image file uploaded in NFS or CIFS mode depends on the corresponding image specifications; the maximum size is 256 GB.
Size of a public image file that can be exported: The maximum size of an image file that can be exported to the local PC is 6 GB. The maximum size of an image that can be exported to a shared path depends on the corresponding image specifications; the maximum size is 256 GB.
2.8.6.4 AS
Introduction
Benefits
Application Scenarios
Usage Restrictions
Working Principles
2.8.6.4.1 Introduction
Definition
Auto Scaling (AS) is a service that automatically adjusts resources based on your service requirements and configured AS policies. When service
demands increase, AS automatically adds elastic cloud server (ECS) instances to ensure computing capabilities. When service demands decrease, AS
automatically reduces ECS instances to reduce costs.
Functions
AS provides the following functions:
Manages the AS group life cycle, including creating, enabling, disabling, modifying, and deleting an AS group.
Automatically adds instances to or removes them from an AS group based on configured AS policies.
Configures the image, flavors, and other configuration information for implementing scaling actions based on the AS configurations.
Manages the expected, minimum, and maximum numbers of instances in an AS group and maintains the expected number of ECS instances to
ensure that services run properly.
Checks the health of ECS instances in an AS group and automatically replaces unhealthy instances.
Works with the elastic load balance (ELB) service to automatically bind load balancers to ECS instances in an AS group.
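The policy-driven behavior described above can be pictured as a control loop that compares a monitored metric with thresholds and keeps the instance count within the configured minimum and maximum. The following is a conceptual sketch only, not the product's scheduling code; the thresholds, step size, and CPU metric are illustrative.

    # Conceptual AS control loop: thresholds, bounds, and metric are illustrative.
    def desired_instances(current, cpu_avg, min_inst=2, max_inst=10,
                          scale_out_at=70.0, scale_in_at=30.0, step=1):
        """Return the adjusted instance count for one evaluation cycle."""
        if cpu_avg >= scale_out_at:
            current += step          # demand is high: add instances
        elif cpu_avg <= scale_in_at:
            current -= step          # demand is low: remove instances to cut costs
        return max(min_inst, min(max_inst, current))

    print(desired_instances(current=4, cpu_avg=85.0))  # -> 5
    print(desired_instances(current=4, cpu_avg=20.0))  # -> 3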
2.8.6.4.2 Benefits
AS has the following advantages:
Improved availability
AS helps users ensure that the application system consistently has a proper resource capacity to comply with traffic requirements. When AS
works with ELB, an AS group automatically adds available instances to the load balancer listener, through which the incoming traffic is evenly
distributed across the instances in the AS group.
AS automatically adds or removes ECS instances as service demands increase or decrease. In addition, you can set the expected values in the AS group when or after creating the AS group, and AS ensures that the number of ECS instances in the AS group is always the expected value.
The number of service requests increases abruptly or the access volume fluctuates.
Computing and storage resources need to be dynamically adjusted based on the computing workload. AS checks the health of ECS instances in an AS group and automatically replaces unhealthy instances.
Common deployment
AS adds new instances to the application when necessary and stops adding instances when they are no longer needed. In this way, you do not need to prepare a large number of ECS instances for an expected marketing activity or unexpected peak hours, which ensures system reliability and reduces system operating costs.
AS can work with the object storage service to send to-be-processed data back to the object storage. Additionally, AS can integrate with ELB to
use ECSs in an AS group for data processing, and perform capacity expansion or reduction based on the ECS load.
Only applications that are stateless and can be scaled out can run on ECS instances in an AS group. AS automatically releases ECS instances.
Therefore, the ECS instances in AS groups cannot save application status information (such as sessions) and related data (such as database data
and logs). If an application requires that ECS instances save status or log information, you can save required information to an independent
server.
Stateless: There is no record or reference for previous transactions of an application. Each transaction is made as if from scratch for the first time.
Stateless application instances do not locally store data that needs to be persisted.
For example, a stateless transaction can be regarded as a vending machine: one request corresponds to one response.
Stateful: A record of previous transactions is kept. Operations are performed based on previous transactions, and the current transaction may be affected by them. Stateful application instances locally store data that needs to be persisted.
For example, stateful transactions can be regarded as online banking or email, which are performed in the context of previous transactions.
Scale-out: An application can be deployed on multiple ECSs.
Resource requirements of AS: AS is a native service and needs to be deployed on two new ECSs. The service requires a quad-core CPU, 8 GB
memory, system disk with a minimum capacity of 55 GB, and data disk with a minimum capacity of 500 GB.
Table 1 AS quotas
Architecture
Figure 1 Logical architecture of AS
Elastic cloud service (ECS): A service that manages the lifecycle of elastic cloud servers (ECSs).
Virtual private cloud (VPC): Provides network services for ECSs. You can use the functions provided by VPC to configure the operating environment for ECSs in a secure and flexible manner.
Elastic load balance (ELB): Distributes traffic across multiple backend servers based on the configured rules.
Elastic IP (EIP): Provides independent public IP addresses and bandwidth for Internet access.
Data monitoring: Displays the CPU usage, memory usage, NIC inbound and outbound traffic, and instance change trends of instances in each AS group.
AsService01 and AsService02: The two AS backend services (which exist in active-active mode) provide lifecycle management for AS groups, AS configurations, and AS policies. After a tenant performs an operation on the eDME operation portal UI, the request is sent from the gateway and route bus to AsService01 or AsService02. After processing the request, AsService01 or AsService02 returns the response to the UI.
AS-schedule: AS task scheduling module, which provides scheduling by periodic or scheduled policy.
AS-monitor: AS monitoring module, which provides scheduling by monitoring policy and monitoring data management.
AS-service: AS core module, which provides the function of creating ECS instances after policies are triggered.
2.8.6.5 ECE
Benefits
Working Principles
Basic Concepts
2.8.6.5.1 Introduction
Definition
As an enterprise-level K8s cluster hosting service, Elastic Container Engine (ECE) manages the cluster lifecycle, container images, and containerized applications, and provides container monitoring and O&M. In addition, it provides highly scalable and reliable containerized application deployment and management solutions, making it a good choice for application modernization.
Functions
Container cluster management
You can:
1. Create, display, delete, and configure K8s clusters, and configure cluster certificates, DNS, and NTP.
2. Create, delete, and scale container node pools, manage nodes, and configure node labels and annotations in batches.
3. Delete nodes, configure labels, annotations, and schedulability for container nodes, and drain and evict container groups.
4. Add, delete, and configure namespaces, and configure quotas and resource limits.
5. View the VPC and subnet of a cluster, including the management network and service network, as well as the container network plug-in, network type, Service CIDR, and Pod CIDR.
6. Upgrade K8s cluster versions and K8s cluster node OS versions.
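Because ECE hosts standard K8s clusters, routine operations such as listing nodes or creating a namespace can also be scripted with the community Kubernetes Python client once a kubeconfig for the cluster is available. The sketch below assumes such a kubeconfig exists locally; it is not an ECE-specific API, and the namespace name and labels are illustrative.

    from kubernetes import client, config

    # Assumes a kubeconfig for the hosted cluster has been downloaded locally.
    config.load_kube_config()
    core = client.CoreV1Api()

    # List cluster nodes and their labels (node pools are usually distinguished by labels).
    for node in core.list_node().items:
        print(node.metadata.name, node.metadata.labels)

    # Create a namespace with an illustrative label.
    ns = client.V1Namespace(metadata=client.V1ObjectMeta(name="demo", labels={"team": "ops"}))
    core.create_namespace(ns)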
2.8.6.5.2 Benefits
ECE is a container service built on popular Docker and Kubernetes technologies and offers a wealth of features best suited to enterprises' demands
for running container clusters at scale. With unique advantages in system reliability, performance, and compatibility with open-source communities,
ECE can meet the diverse needs of enterprises interested in building containerized services.
Ease of Use
Create a K8s cluster in one-click mode on the web UI, manage Elastic Cloud Server (ECS) or Bare Metal Server (BMS) nodes, and implement
automatic deployment and O&M for containerized applications in a one-stop manner.
Easily add or remove cluster nodes and workloads on the web UI, and upgrade K8s clusters in one-click mode.
Utilize deeply-integrated Application Service Mesh (ASM) and Helm charts, which ensure out-of-the-box usability.
High Performance
ECE supports the iSula container engine. The engine provides high-performance container cluster services to support high-concurrency and large-scale scenarios, featuring fast startup and low resource usage.
ECE offers enhanced capabilities such as high availability, domain-based tenant management, quota control, and authentication.
ECS: Tenants can create node pools in K8s clusters based on ECSs.
BMS: Tenants can create node pools in K8s clusters based on BMSs.
Scalable File Service (SFS): SFS can be used as persistent storage for a container, and the storage file is mounted to the container during Job creation.
Block storage: Provides disk storage services for ECSs and BMSs, and supports elastic binding, scalability, and sharing.
Virtual Private Cloud (VPC): When creating K8s clusters, tenants can select VPCs and subnets. A VPC can contain multiple clusters.
Web portal: Provides a unified and easy-to-use operation interface for users.
eDME unified microservice: Provides unified OMS functions for microservices on the platform, including alarms and logs.
eDME framework: Provides functions such as the traffic gateway, service gateway, and service center for microservices on the platform.
Installation and deployment tools: Provide installation, deployment, and upgrade tools for the container platform.
K8s cluster: Provides a container operating environment for users, including the service K8s cluster management plane components, CCDB, the Docker and iSula container engines, the Calico container network, the container storage CSI plug-in, and traffic ingress.
K8s cluster node OS: Provides an operating environment for K8s clusters, including the OS and driver.
Control management (kube-controller-manager): controls resources and puts resources in the expected state.
Scheduler (kube-scheduler): schedules resources to nodes based on the resource usage of nodes and the scheduling policy specified by
the user.
The worker nodes run K8s agents and service containers, including:
Node agent (kubelet): receives Pod requests and ensures that the container specified by the Pod is running properly.
2.8.6.6 SWR
Overview
Benefits
Basic Concepts
2.8.6.6.1 Overview
Definition
SoftWare Repository for Container (SWR) provides easy, secure, and reliable management of container images throughout their lifecycle. It is
compatible with the Registry V2 protocol of the community and allows you to manage container images through a GUI, CLI, or native APIs. SWR
can be seamlessly integrated with Elastic Container Engine (ECE) to help customers quickly deploy containerized applications and build a one-stop
solution for cloud native applications.
Functions
Container image storage configuration
O&M administrators can configure container image storage and view the storage space occupied by container images on the O&M portal.
O&M administrators can manually collect image garbage on the O&M portal to release container image storage space.
2.8.6.6.2 Benefits
Ease of Use
You can directly push and pull container images without building a platform or performing O&M.
You can manage container images throughout their lifecycle on the SWR console.
Image
A container image is a template that provides a standard format for packaging containerized applications. When deploying containerized
applications, you can use images from the public image repository or your private image repository. For example, a container image can contain a
complete Ubuntu OS with only the required application and its dependencies installed. A container image is used to create a container.
Docker provides an easy way to create and update images. You can also download images created by other users.
Container
A container is a running instance created by a container image. Multiple containers can run on one node. A container is essentially a process. Unlike
a process directly executed on a host, the container process runs in its own independent namespace.
The relationship between an image and a container is similar to that between a class and an instance in object-oriented programming. An image provides a static definition, and a container is a running entity of the image. Containers can be created, started, stopped, deleted, and suspended.
Image Repository
An image repository is used to store container images. A single image repository can correspond to a specific containerized application and host
different versions of the application.
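The image, container, and repository concepts above map directly onto everyday client operations. A minimal sketch with the community Docker SDK for Python follows; the registry address and repository path are placeholders, and SWR-specific login details are omitted.

    import docker

    client = docker.from_env()

    # Pull an image (a static template), then run it as a container (a running instance).
    image = client.images.pull("ubuntu:22.04")
    output = client.containers.run("ubuntu:22.04", "echo hello", remove=True)
    print(output)

    # Tag the image into a repository path and push it to a registry (placeholder address).
    image.tag("registry.example.com/team/ubuntu", tag="22.04")
    client.images.push("registry.example.com/team/ubuntu", tag="22.04")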
2.8.6.7 EVS
Advantages
Implementation Principles
Definition
The block storage service provides block storage space for instances. Users can create disks online and attach them to instances.
In this document, instances refer to the Elastic Cloud Servers (ECSs) or bare metal servers (BMSs) that you apply for. Elastic Cloud Server (ECS)
disks are also referred to as disks in this document.
Functions
The block storage service provides various persistent storage devices. You can choose disk types based on your needs and store files and build
databases on EVS disks. The block storage service has the following features:
Elastic scalability
You can configure storage capacity and expand the capacity on demand to deal with your service data increase.
Shared disk
Multiple instances can access (read and write) a shared disk at the same time, meeting the requirements of key enterprises that require cluster
deployment and high availability (HA).
2.8.6.7.2 Advantages
Table 1 compares the block storage service and object storage service.
Table 1 Comparison between the block storage service and object storage service
Usage mode
  Block storage service: Provides persistent block storage for compute services such as instances. EVS disks feature high availability, high durability, and low latency. You can format EVS disks, create file systems on them, and persistently store data on them.
  Object storage service: Provides RESTful APIs that are compatible with Amazon S3. You can use browsers or third-party tools to access object storage and use RESTful APIs to perform secondary development on OBS.
Data access mode
  Block storage service: Data can only be accessed on the internal network of data centers.
  Object storage service: Data can be accessed on the Internet.
Storage capacity
  Block storage service: Virtualized SAN storage: a single disk supports a maximum of 64 TB. NAS storage: a single disk supports a maximum of 64 TB. Scale-out block storage: a single disk supports a maximum of 32 TB. eVOL storage: a single disk supports a maximum of 64 TB. Block storage: a single disk supports a maximum of 64 TB.
  Object storage service: The capacity is unlimited. Therefore, planning is not required.
Storage backend
  Block storage service: Supports virtualized SAN storage, NAS storage, Huawei scale-out block storage, eVOL storage, and block storage.
  Object storage service: OceanStor Pacific.
Recommended scenario
  Block storage service: Scenarios such as database, enterprise office applications, and development and testing.
  Object storage service: Scenarios such as big data storage, video and image storage, and backup and archiving. It can also provide storage for other private cloud services (such as IMS).
ECS: You can attach EVS disks to ECSs to provide scalable block storage.
BMS: You can attach iSCSI-type EVS disks to BMSs to provide scalable block storage.
Architecture
The block storage service consists of the block storage service console, block storage service APIs, datastores, and storage devices. Figure 1 shows
the logical architecture of EVS.
Block storage service console: Provides tenants with an entry to the block storage service. Tenants can apply for EVS disks on the console.
API (block storage service): The block storage service API encapsulates or combines the logic based on the native Cinder interface to implement certain block storage service functions. The block storage service API can be invoked by the EVS console or tenants.
Datastore: A datastore provides persistent block storage and manages block storage resources. With it, you can create disk types, create disks on storage devices, and attach disks to ECSs.
Infrastructure: Infrastructure refers to the physical storage devices that provide block storage based on physical resources. The following storage devices can be used as the storage backend of the block storage service: virtualized SAN storage, NAS storage, Huawei scale-out block storage, and eVOL storage.
Unified eDME operation: Provides quota management, order management, product management, and service detail records (SDRs) for the block storage service.
Unified eDME O&M: Provides disk type management, performance monitoring, logging, and alarm reporting for the block storage service.
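Because the block storage service API is built on the native Cinder interface, the end-to-end flow (create a volume of a given disk type, then attach it to a server) resembles standard OpenStack block storage calls. The sketch below uses the community openstacksdk purely as an illustration; the cloud profile, disk type, and server name are assumptions, and the product's own API may differ.

    # Illustrative sketch with the community openstacksdk against a Cinder-compatible endpoint.
    import openstack

    conn = openstack.connect(cloud="mycloud")   # credentials come from a local clouds.yaml

    # Create a 100 GB volume of an illustrative disk type and wait until it is usable.
    volume = conn.block_storage.create_volume(name="app-data", size=100, volume_type="sas")
    conn.block_storage.wait_for_status(volume, status="available")

    # Attach the volume to an existing server (name is a placeholder).
    server = conn.compute.find_server("app-server-01")
    conn.compute.create_volume_attachment(server, volume_id=volume.id)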
2.8.6.8 OBS
What Is the Object Storage Service?
Advantages
Related Concepts
Application Scenarios
Implementation Principles
Restrictions
Definition
Object Storage Service (OBS) is a scale-out storage service that provides capabilities for mass, secure, reliable, and cost-effective data storage. With
OBS, you can easily create, modify, and delete buckets.
Object storage devices and services are becoming increasingly popular in research and markets, providing a viable alternative to established block
and file storage services. OBS is a cloud storage service that can store unstructured data such as documents, images, and audiovisual videos,
combining the advantages of block storage (direct and fast access to disks) and file storage (distributed and shared).
The OBS system and a single bucket do not have restrictions on the total data volume and number of objects, providing users ultra-large capacity to
store files of any type. OBS can be used by common users, websites, enterprises, and developers.
As an Internet-oriented service, OBS provides web service interfaces over Hypertext Transfer Protocol (HTTP) and Hypertext Transfer Protocol
Secure (HTTPS). Users can use the OBS console or a browser to access and manage data stored in OBS on any computer connected to the Internet
anytime, anywhere. In addition, OBS supports SDK and API interfaces, which enable users to easily manage data stored in OBS and develop
various upper-layer service applications.
Functions
OBS provides the following functions:
2.8.6.8.2 Advantages
Table 1 compares the block storage service and object storage service.
Usage mode
  Block storage service: Provides persistent block storage for compute services such as instances, ensuring high availability, high durability, and low latency. You can format EVS disks, create file systems on them, and persistently store data on them.
  Object storage service: Provides RESTful APIs that are compatible with Amazon S3. You can use browsers or third-party tools to access object storage and use RESTful APIs to perform secondary development on OBS.
Data access mode
  Block storage service: Data is accessed only on the internal network of data centers.
  Object storage service: Data is accessed on the Internet.
Storage capacity
  Block storage service: Virtualized SAN storage: a single disk supports a maximum of 64 TB.
  Object storage service: The capacity is unlimited. Therefore, advance planning is not required.
Backend storage
  Block storage service: Supports virtualized SAN storage, NAS storage, Huawei scale-out block storage, eVOL storage, and block storage.
  Object storage service: OceanStor Pacific.
Recommended scenario
  Block storage service: Supports scenarios such as database, enterprise office applications, and development and testing.
  Object storage service: Supports scenarios such as big data storage, video and image storage, and backup and archiving. It can also provide storage for other private cloud services (such as IMS).
Bucket
A bucket is a container that stores objects in OBS. OBS provides flat storage in the form of buckets and objects. Unlike the conventional multi-layer
directory structure of file systems, all objects in a bucket are stored at the same logical layer.
In OBS, each bucket name must be unique and cannot be changed. When you create a bucket, OBS creates a default access control list (ACL). You
can configure an ACL to grant users permissions (including READ, WRITE, and FULL_CONTROL) on the bucket. Only authorized users can
perform bucket operations, such as creating, deleting, viewing, and configuring the bucket ACL. A user can create a maximum of 100 buckets.
However, the number and total size of objects in a bucket are not restricted. Users do not need to worry about system scalability.
OBS is a service based on the Representational State Transfer (REST) style HTTP and HTTPS protocols. You can locate resources using Uniform
Resource Locator (URL).
Object
An object is a basic data storage unit of OBS. It consists of file data and metadata that describes the attributes. Data uploaded to OBS is stored into
buckets as objects.
An object consists of data, metadata, and a key.
A key specifies the name of an object. An object key is a string ranging from 1 to 1024 characters in UTF-8 format. Each object in a bucket
must have a unique key.
Metadata describes an object and contains system metadata and user metadata. All the metadata is uploaded to OBS as key-value pairs.
System metadata is automatically generated by OBS and is used for processing object data. It includes object attributes such as Date,
Content-length, Last-modify, and Content-MD5.
User metadata is specified by users to describe objects when they upload the objects.
Generally, objects are managed as files. However, OBS is an object-based storage service and it does not involve the file and folder concepts. For
easy data management, OBS provides a method to simulate virtual folders. By adding a slash (/) in an object name, for example, test/123.jpg, you
can simulate test as a folder and 123.jpg as the name of a file under the test folder. However, the key remains test/123.jpg.
On the OBS management console, users can work with folders directly, as they are used to doing.
AK/SK
An access credential of the object service includes an access key (AK) and a secret access key (SK). An AK and an SK are generated in pairs and are
character strings randomly generated by the authentication service. They are used in the authentication process of service requests.
An AK corresponds to only one tenant or user. A tenant or user can have two AKs at the same time. OBS (compatible with Amazon S3 APIs)
identifies a tenant or user accessing the system based on the AK.
A tenant or user generates authentication information based on the SK and request header. An SK corresponds to an AK.
Endpoint
Endpoint indicates the domain name used by OBS to provide services. OBS provides services for external systems in HTTP RESTful API mode.
Different domain names are required for accessing different regions. The endpoints required for accessing the same zone through the intranet and
extranet are different.
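Because OBS is compatible with Amazon S3 APIs, an S3 SDK such as boto3 can exercise the endpoint, AK/SK, bucket, and object concepts described above. The endpoint URL, credentials, and names below are placeholders; obtain the real values as described in the referenced procedures.

    import boto3

    # Placeholders: replace with the real OBS endpoint and the AK/SK of your account.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://obs.example.com:5443",
        aws_access_key_id="<AK>",
        aws_secret_access_key="<SK>",
    )

    s3.create_bucket(Bucket="demo-bucket")

    # A slash in the object key simulates a folder: "test/" shows as a folder on the console.
    with open("123.jpg", "rb") as f:
        s3.put_object(Bucket="demo-bucket", Key="test/123.jpg", Body=f)

    for obj in s3.list_objects_v2(Bucket="demo-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])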
Quota Management
A quota is a resource management and control technology that allocates and manages the maximum number of resources (including resource
capacity and quantity) available to a single virtual data center (VDC), preventing resources from being overused by users in a single VDC and
affecting other VDCs. The platform allows you to set OBS quotas for VDCs at all levels.
OBS quotas include:
If the number of resources in a VDC reaches the quota value, the resources cannot be requested. Delete idle resources or contact the administrator to
modify the quota. For details about how to modify quotas, see Managing Quotas.
When the object storage service of multiple storage devices is enabled, the resource types of all object storage services are displayed in the quota information,
that is, site name + total number of files or total space capacity. Change the total number of files or total space capacity of the object storage service based on
the site name.
If an account with the same name as the resource set ID exists on the storage device, the system automatically synchronizes the quota of the account and
displays the quota in the VDC or resource set quota information when the object storage service is enabled.
When a storage device is removed or the object storage service is disabled, the resource type of the object storage service at the target site in the quota
information is also removed.
Video Storage
OBS provides large storage capacity for video and image storage solutions and applies to mass and unstructured video data to meet requirements for
storing high quality video data.
Figure 2 shows the architecture.
Logical Architecture
Figure 1 shows the logical architecture of OBS.
Unified operation of eDME
  IAM/POE: Provides identity identification and access management for OBS.
Unified O&M of eDME
  Performance management: Manages infrastructure performance metrics and analyzes performance data.
  Log management: Aggregates and queries the operation and running logs of tenants.
  Alarm management: Receives, stores, and centrally monitors and queries alarm data, helping O&M personnel quickly rectify faults based on alarm information.
Infrastructure
  OceanStor Pacific: As the storage backend, it provides object storage functions.
Workflow
Figure 2 shows the OBS workflow.
1. Operation administrators create resource management tenants and resource administrators on the eDME operation portal.
2. Resource administrators apply for object storage resources on the OBS console.
3. The OBS console invokes the S3 APIs of the OceanStor Pacific OBS object and big data storage device to create a bucket.
OBS VDC administrator: Requires the VDC management permission (VDC Admin) or the all cloud service management permission (Tenant Administrator). A user with these permissions can perform any operation on OBS resources.
NOTE: OBS RW supports only POE authentication and refers to the permissions to manage user keys, query buckets, and read and write bucket objects. In the IAM authentication scenario, the Tenant Administrator or OBS Admin permission must be granted to both organizations and resource sets. Otherwise, the user does not have the permission to operate buckets.
VDC operator: Requires the VDC query permission (VDC User) or the all cloud service management permission (Tenant Administrator).
Table 2 lists the operations that users in different roles can perform.
2.8.6.8.7 Restrictions
The restrictions on OBS are as follows:
OBS is accessed based on domain names. Before using OBS, configure the IP address of the DNS server on the client.
A user cannot use the global domain name to access the buckets and objects in a non-default region.
When a third-party S3 client is used to access the OBS, only the domain name of the default region and the global domain name can be used to
create buckets. You are advised to create buckets on the OBS console.
Even though a user is assigned all permissions of another tenant's buckets, the user's permissions are still restricted by its role.
OBS permission control and quota can be configured only in first-level VDCs.
Currently, only storage devices whose authentication mode is POE support tenant quotas.
You are not advised to modify bucket configurations on the storage device already added to eDME.
Currently, IAM authentication does not support the metric report function.
Third-Party Client
Users can use a third-party client to manage storage resources of the object service. For example, users can install S3 Browser on a local host
and perform operations such as creating buckets and uploading and downloading objects on a GUI to facilitate resource management.
The object service is compatible with multiple types of clients. For details, visit Huawei Storage Interoperability Navigator.
"How to Use S3 Browser" uses S3 Browser as an example to describe how to configure and use it.
For details about how to install and use each client, see the official website of the client.
The object service provides REST APIs. You can invoke these APIs using HTTP or HTTPS requests to create buckets, upload objects, and
download objects.
You can visit the OceanStor Scale-Out Storage Developer Center to view how to invoke object service APIs and related service APIs. We also
provide quick start of object service APIs to help you quickly understand how to use APIs in simple scenarios.
SDK
SDK encapsulates REST APIs provided by the object service to simplify user development. You can invoke the API functions provided by
SDK to use the service functions provided by the object service.
You can visit the OceanStor Scale-Out Storage Developer Center to view how to set up an SDK development environment and SDK API
descriptions. We also provide program samples with source codes for the object service to help you quickly get started.
Mainstream Software
The object service can be used for data archiving (financial check images, medical images, government and enterprise electronic documents,
and IoV scenarios) and backup (carrier and enterprise application backup). It is compatible with multiple third-party backup and archiving
software, such as Veritas NetBackup, Commvault Simpana, and Rubrik. You need to configure the interconnection with the object service on
the third-party backup and archiving software. For details about the compatibility, see Huawei Storage Interoperability Navigator.
How to obtain
For details about how to manage object service resources using S3 Browser, visit the S3 Browser official website.
When using S3 Browser to list objects in a bucket, if there are a large number of objects, set S3 Browser to display the objects in multiple pages.
REST Endpoint: IPv4 format: access domain name or service IP address of the object service:port number. IPv6 format: access domain name or [service IP address] of the object service:port number. If the HTTP protocol is used, the port number is 5080 or 80. If the HTTPS protocol is used, the port number is 443 or 5443.
NOTE: Obtain the access domain name of the object storage service by referring to Obtaining the Object Storage Access Address.
Access Key ID: AK and SK generated during account creation. For details, see Creating a User Access Key.
Encrypt Access Keys with a password: After selecting this parameter and setting a password, the account will be protected by the password.
Use secure transfer (SSL/TLS): Select this parameter only when the HTTPS protocol is used.
2.8.6.9 SFS
What Is Scalable File Service?
Advantages
Application Scenario
Implementation Principle
Definition
Scalable File Service (SFS) provides Elastic Cloud Servers (ECSs) and Bare Metal Servers (BMSs) in high-performance computing (HPC)
scenarios with a high-performance shared file system that can be scaled on demand. It is compatible with standard file protocols (NFS, CIFS, OBS,
and DPC) and is scalable to petabytes of capacity to meet the needs of massive amounts of data and bandwidth-intensive applications. Figure 1
describes how to use SFS.
Functions
SFS provides the following functions:
2.8.6.9.2 Advantages
Ease of use
An easy-to-use operation interface is provided for you to quickly create and manage file systems without worrying about the deployment,
expansion, and optimization of file systems.
File sharing
Multiple ECSs of different types can concurrently access videos and images.
Automatic attachment
After installing the automatic attachment plug-in on a VM, you can select a shared file system on the SFS page and the file system is
automatically attached to the VM.
BMS: In HPC scenarios, file systems can be mounted to BMSs for data sharing.
Video Cloud
SFS applies to the video cloud scenario to store video and image files.
Video files vary with specific independent software vendors (ISVs). Generally, they are large files of 1 GB to 4 GB.
Images are classified into checkpoint images and analysis images. Generally, they are massive numbers of small images (about 2 billion images a year) with sizes ranging from 30 KB to 500 KB.
Media Processing
SFS with high bandwidth and large capacity enables shared file storage for video editing, transcoding, composition, high-definition video, and 4K
video on demand, satisfying multi-layer HD video and 4K video editing requirements.
Figure 2 shows the architecture of the media processing scenario.
Capacity adjustment: You can adjust the capacity only when the file system is in the Available state. If you adjust the capacity of a newly created file system, an error may be reported. In this case, wait for 5 to 10 minutes and then adjust the capacity again.
Supported protocols: Currently, SFS supports the NFS, CIFS, DPC, and OBS protocols. OceanStor Dorado/OceanStor 6.1.x supports NFSv3, NFSv4, and NFSv4.1, whereas OceanStor Pacific supports NFSv3 and NFSv4.1. The DPC protocol can only be used in the attachment to BMSs.
File system deletion: If you delete a newly created file system, an error may be reported. In this case, wait 5 to 10 minutes and then delete the file system again.
Architecture
Figure 1 shows the logical architecture of SFS.
Unified operation
  IAM: Provides Identity and Access Management (IAM) for SFS.
  Order management: Manages orders submitted by users.
  Service management: Different services are defined based on the registered cloud services, and unified service management is provided.
Unified O&M
  Performance management: Monitors performance indicators of the infrastructure and analyzes monitoring data.
  Log management: Aggregates and queries the operation and running logs of tenants.
  Alarm management: Receives, stores, and centrally monitors and queries alarm data, helping O&M personnel quickly rectify faults based on alarm information.
OceanStor DJ (Manila): Functions as the SFS server to receive requests from the SFS console.
Infrastructure
  Storage device: A file storage device that provides file system storage space for SFS. The following storage devices are supported: OceanStor Dorado 6.1.x, OceanStor 6.1.x, and OceanStor Pacific series.
Workflow
Figure 2 shows the SFS workflow.
2. The SFS console invokes the API of OceanStor DJ (Manila) to deliver the request to the storage device.
3. OceanStor DJ (Manila) invokes the storage device API to create or manage file systems.
Constraints
2.8.6.10 VPC
Concept
The Virtual Private Cloud (VPC) service enables you to provision logically isolated, configurable, and manageable virtual networks for Elastic
Cloud Servers (ECSs), improving the security of user resources and simplifying user network deployment.
You can select IP address ranges, create subnets, and customize security groups (SGs) and NAT rules in a VPC, which enables you to manage and
configure your network conveniently and modify your network securely and rapidly. You can also customize ECS access rules within a security
group and between security groups to enhance access control over cloud servers in subnets.
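Selecting an IP address range and dividing it into subnets, as described above, can be prototyped with Python's standard ipaddress module before the plan is configured in the VPC service. The CIDR blocks and tier names below are illustrative only.

    import ipaddress

    # Illustrative VPC CIDR; choose ranges that match your own network plan.
    vpc_cidr = ipaddress.ip_network("192.168.0.0/16")

    # Carve three /24 subnets out of the VPC range, e.g. for web, app, and data tiers.
    subnets = list(vpc_cidr.subnets(new_prefix=24))[:3]
    for name, net in zip(["web", "app", "data"], subnets):
        first_host = next(net.hosts())        # often reserved for the subnet gateway
        print(f"{name}: {net}, gateway candidate: {first_host}")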
Function
Elastic and flexible connection to an extranet (supported only in Region Type II)
A VPC enables you to access an extranet flexibly and with excellent performance.
Elastic IP address (EIP): An EIP is a static extranet IP address and can be dynamically bound to or unbound from an ECS. If your VPC
contains just one or only a few ECSs, you only need to bind an EIP to each ECS for the ECS to communicate with an extranet.
Source network address translation (SNAT): The SNAT function maps the IP addresses of a subnet in a VPC to an EIP, thereby allowing
the ECSs in the subnet to access an extranet. After the SNAT function is enabled for a subnet, all ECSs in the subnet can access an
extranet using the same EIP.
Destination network address translation (DNAT): If ECSs in a VPC need to provide services for an extranet, you can use the DNAT
function. The requests for accessing an EIP using a specified protocol and port are forwarded based on the mapping between IP addresses
and ports to the specified port of the target ECS. In addition, multiple ECSs can share an EIP and bandwidth to precisely control
bandwidth resources.
DHCP
DHCP automates the assignment of IP addresses to ECSs in a subnet.
Users or network administrators can use DHCP to configure all computers in a centralized manner.
VPC peering
In the network overlay SDN scenario, you can create a VPC peering connection between two VPCs so that subnets under the VPCs can
communicate with each other.
Benefits
A VPC facilitates internal network management and configuration, and ensures secure and quick network changes.
Flexible deployment: You can customize network division to fully control private networks.
Secure and reliable network: Full logical isolation is implemented. You can configure different access rules on demand to improve network
security.
Various network connections: The VPC supports various network connections, meeting your cloud service requirements in a flexible and
efficient manner.
Infrastructure (network node) requirements
  Region Type II: Three servers (used to deploy the SDN controller), plus network devices used with the SDN controller, for example, core/aggregation switches, access switches, and firewalls.
  Region Type III: No physical network node needs to be added.
Cloud service: Table 2 describes cloud service availability in the two scenarios.
Table 2 Cloud service availability in Region Type II and Region Type III scenarios
Service collaboration layer: Implements collaboration among compute, storage, and network resources.
Network control layer and resource pool (Region Type II): Implements service policy orchestration, network modeling, and network instantiation based on hardware devices.
2.8.6.10.6 Constraints
Table 1 lists the constraints on the functions and features of the VPC service.
Table 1 Constraints
Subnet (Region Type II)
  Subnet: In a VPC, communication inside a subnet is at Layer 2, and different subnets communicate with each other at Layer 3. After a subnet is created, the CIDR block cannot be changed.
  Internal subnet: The ECSs in an internal subnet of a VPC can communicate with each other at Layer 2 but cannot communicate with the ECSs in another subnet (VPC subnet or internal subnet) of the VPC. The internal subnet NIC of an ECS cannot be bound with an EIP, and does not support the SNAT function. After a subnet is created, the CIDR block cannot be changed.
The IP addresses used for the gateway and DHCP cannot be changed.
NOTE:
The current version does not support multicast. Multicast packets sent by service VMs from the cloud platform or by extranets to the cloud
platform are processed as broadcast packets on the virtual network of the cloud platform. If there are a large number of such packets, broadcast
flooding may occur, which will affect the virtual network performance. Specifically, it will deteriorate the communication quality of other non-
multicast services. Before adding multicast to the cloud, contact technical support engineers for evaluation.
Subnet (Region Type III)
  Subnet: The ECSs in a subnet can communicate with each other at Layer 2, but cannot communicate with other subnets in the VPC. After a subnet is created, the CIDR block cannot be changed.
  The IP addresses used for the gateway and DHCP cannot be changed.
NOTE:
The current version does not support multicast. Multicast packets sent by service VMs from the cloud platform or by extranets to the cloud
platform are processed as broadcast packets on the virtual network of the cloud platform. If there are a large number of such packets, broadcast
flooding may occur, which will affect the virtual network performance. Specifically, it will deteriorate the communication quality of other non-
multicast services. Before adding multicast to the cloud, contact technical support engineers for evaluation.
DHCP: Allocation pools can be set only during subnet creation and cannot be modified.
VPC peering: VPC peering is not transitive. For example, even if VPC B is peered with VPC A and VPC C, respectively, VPC A is not peered with VPC C.
A VPC peering connection can be created between two VPCs in a region. The VPCs can belong to different resource sets.
Only one VPC peering connection can be created between two VPCs.
VPC peering connection route: You can add multiple routes for a VPC peering connection. To enable communication between multiple local subnets and multiple peer subnets in two VPCs, you only need to add routes without the need to add VPC peering connections.
A VPC can be peered with multiple VPCs at the same time. The route destination address of the VPC cannot overlap with the VPC's subnet. The route destination addresses of all VPC peering connections of the VPC cannot overlap with each other.
2.8.6.11 EIP
Benefits
Application Scenarios
Constraints
Definition
An elastic IP address (EIP) is a static IP address on an external network (referred to as an extranet; an extranet can be the Internet or the local area network (LAN) of an enterprise). An EIP is accessible from an extranet.
All IP addresses configured for instances in a LAN are private IP addresses which cannot be used to access an extranet. When applications running
on instances need to access an extranet, you can bind an EIP so that instances in a Virtual Private Cloud (VPC) can communicate with the extranet
using a fixed external IP address.
EIPs can be flexibly bound to or unbound from resources, such as Elastic Cloud Servers (ECSs) associated with subnets in a VPC. An instance
bound with an EIP can directly use this IP address to communicate with an extranet, but this IP address cannot be viewed on the instance.
Network Solution
Hardware firewalls are used to translate between private and external IP addresses.
Functions
Elastically binding an external IP address
EIPs provide flexible, high-performance access to an extranet. You can apply for an independent external IP address, which can be bound as
needed to an ECS to allow the ECS to access an extranet. The binding and unbinding take effect immediately.
2.8.6.11.2 Benefits
An EIP is used to enable an extranet to access cloud resources. An EIP can be bound to or unbound from various service resources to meet different service requirements.
You can bind an EIP to an ECS so that the ECS can access an extranet.
ECS: An NIC of an ECS can be bound to an EIP. In this case, the ECS is associated with the EIP.
SNAT: SNAT maps the IP addresses in a network segment in a VPC to EIPs so that the ECSs in the subnet can access an extranet. After an SNAT rule is created, all ECSs in the subnet can access an extranet using the configured EIP.
DNAT: This service uses the mapping between IP addresses and ports to forward the requests for accessing an EIP through specified protocols and ports to the specified ports of target ECSs. In addition, multiple ECSs can share an EIP and the bandwidth to precisely control bandwidth resources.
2.8.6.11.5 Constraints
Before using EIPs, learn about the constraints described in Table 1.
Binding and unbinding
  An instance interface can be bound to only one EIP.
  An EIP can be bound to only one instance interface.
  EIP binding and unbinding take effect immediately.
  EIP binding and unbinding do not affect the running of instances.
  Each of the active and extension NICs can be bound to an EIP.
  An EIP can be bound only on a Type II network.
2.8.6.13 NAT
Benefits
Application Scenarios
The SNAT function translates private IP addresses in a VPC to a public IP address by binding an EIP to SNAT rules, providing multiple ECSs
across availability zones (AZs) in a VPC with secure and efficient access to the Internet.
Figure 1 shows how SNAT works.
The DNAT function enables ECSs across AZs in a VPC to share an EIP to provide services for the Internet by binding an EIP to DNAT rules.
Figure 2 shows how DNAT works.
2.8.6.13.2 Benefits
Flexible deployment
The NAT service can be deployed across subnets and AZs. Cross-AZ deployment ensures high availability (HA). Any fault in a single AZ does not affect the service continuity of the NAT service.
Lower costs
Multiple ECSs can share an EIP. When ECSs in a VPC send data to the Internet or provide application services to the Internet, the NAT service
translates private IP addresses to a public IP address or maps a public IP address to the specified private IP address. Multiple ECSs share an
EIP. You do not need to apply for multiple public IP addresses and bandwidth resources for ECSs to access the Internet, which effectively
reduces costs.
If ECSs in a VPC send a large number of requests for accessing the Internet, you can use the SNAT function to enable the ECSs to share one or
more EIPs to access the Internet without exposing their private IP addresses. In a VPC, each subnet corresponds to one SNAT rule, and each
SNAT rule is configured with one EIP. Figure 1 shows the networking diagram.
Configuring DNAT rules to enable ECSs to provide services accessible from the Internet
If ECSs in a VPC need to provide services for the Internet, use the DNAT function.
When the DNAT function binds an EIP to DNAT rules and the Internet accesses the EIP using a specified protocol and a specified port, the
DNAT service forwards the request to the corresponding port of the target ECS based on the mapping between IP addresses and ports. In this
way, multiple ECSs can share an EIP and bandwidth resources.
Each ECS is configured with one DNAT rule. If there are multiple ECSs, you can create a DNAT rule for each ECS to share one or more EIPs.
Figure 2 shows the networking diagram.
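The SNAT and DNAT behavior above boils down to address and port translation rules. The sketch below models only the DNAT lookup, mapping an EIP, protocol, and port to a private address and port, with illustrative addresses; it is a conceptual model, not the product's implementation.

    # Conceptual DNAT lookup: requests hitting an EIP on a protocol/port are forwarded
    # to a specific private address and port. All addresses below are illustrative.
    dnat_rules = {
        ("203.0.113.10", "tcp", 443): ("192.168.1.20", 8443),
        ("203.0.113.10", "tcp", 80):  ("192.168.1.21", 8080),
    }

    def forward(eip, proto, port):
        target = dnat_rules.get((eip, proto, port))
        if target is None:
            raise LookupError("no DNAT rule for this EIP/protocol/port")
        return target

    print(forward("203.0.113.10", "tcp", 443))   # -> ('192.168.1.20', 8443)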
Only one SNAT rule can be created for each VPC subnet.
SNAT and DNAT should not share EIPs. SNAT and DNAT rules are configured for different services. If SNAT and DNAT rules reuse the same
EIP, resource preemption will occur.
When both the EIP and NAT services are configured for an ECS, data will be forwarded through the EIP.
Each port on an ECS can have only one DNAT rule and be mapped to only one EIP.
ECS: The NAT service enables other cloud services in a VPC to access the Internet or provide services for the Internet. (See Configuring an SNAT Rule to Enable ECSs to Access the Internet and Configuring DNAT Rules to Enable ECSs to Provide Services Accessible from the Internet.)
VPC: ECSs in a VPC can be interconnected with the Internet through NAT. (See Configuring an SNAT Rule to Enable ECSs to Access the Internet.)
EIP: The NAT service enables ECSs in a VPC to share one or more EIPs to access the Internet or provide services for the Internet. (See Configuring an SNAT Rule to Enable ECSs to Access the Internet and Configuring DNAT Rules to Enable ECSs to Provide Services Accessible from the Internet.)
2.8.6.14 ELB
What Is Elastic Load Balance?
Benefits
Application Scenarios
Definition
Elastic Load Balance (ELB) is a service that automatically distributes incoming traffic across multiple backend servers based on predefined
forwarding policies.
ELB can expand the access handling capability of application systems through traffic distribution and achieve a higher level of fault tolerance
and performance.
ELB helps eliminate single points of failure (SPOFs), improving availability of the whole system.
In addition, ELB is deployed on the internal and external networks in a unified manner and supports access from the internal and external
networks.
You can create a load balancer and configure the servers and listening ports required for services on a web-based, unified graphical user interface (GUI) for cloud computing management.
Functions
ELB provides a way to configure the load balancing capability. A self-service web-based console is provided for you to easily configure the service
and quickly spin up more capacity for load balancing.
ELB provides the following functions:
Support for access from the internal and external networks.
2.8.6.14.2 Benefits
ELB has the following advantages:
Automatically detects and removes abnormal nodes and automatically routes the traffic to normal nodes.
Expands elastic capacity based on application loads without service interruption when traffic fluctuates.
Concurrent connections: A large number of concurrent connections are supported, meeting users' traffic requirements.
Flexible combination of components: Various service components can be flexibly combined to meet various service and performance
requirements of customers.
Service deployment in seconds: Complex engineering deployment processes such as engineering planning and cabling are not required.
Services can be deployed and rolled out in seconds.
No fixed asset investment: Customers do not need to invest in fixed assets such as equipment rooms, power supply, construction, and
hardware materials. Services can be easily deployed and rolled out.
Seamless system update: Provides smooth and seamless rollout of all new services and upgrades to ensure service continuity.
Smooth performance improvement: When you need to expand deployment resources to meet service requirements, the one-stop
expansion service frees you from hardware upgrade troubles.
Load Distribution
For websites with heavy traffic or internal office systems of governments or enterprises, ELB helps distribute service loads to multiple backend
servers, improving service processing capabilities. ELB also performs health checks on backend servers to automatically remove malfunctioning
ones and redistribute service loads among backend server groups. A backend server group consists of multiple backend servers.
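Conceptually, the load distribution and health checks described above amount to rotating requests across healthy backends while skipping malfunctioning ones. The following is an illustrative model only, with placeholder addresses; it is not ELB's actual scheduling algorithm.

    from itertools import cycle

    # Illustrative backend pool and health-check results.
    backends = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]
    healthy = {b: True for b in backends}
    _pool = cycle(backends)

    def next_backend():
        # Round-robin over the pool, skipping backends marked unhealthy.
        for _ in range(len(backends)):
            b = next(_pool)
            if healthy[b]:
                return b
        raise RuntimeError("no healthy backend available")

    healthy["192.168.1.11"] = False                 # simulate a failed health check
    print([next_backend() for _ in range(4)])       # traffic goes only to healthy nodes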
Capacity Expansion
For applications featuring unpredictable and large fluctuations in demand, for example, video or e-commerce websites, ELB can automatically scale
their capacities.
Virtual Private Cloud (VPC): Requires the virtual IP address (VIP) and subnets assigned in the VPC service.
Elastic Cloud Server (ECS) and Bare Metal Server (BMS): Provide the traffic distribution control function for backend servers. The backend servers for ELB can be ECSs or BMSs.
Elastic IP Address (EIP): An EIP can be bound to a load balancer. If the subnet is an internal subnet, EIPs cannot be bound to load balancers.
Elastic Container Engine (ECE): Provides load balancing services for external systems through ELB.
Web UI
Log in to the eDME operation portal as a tenant user. In the navigation pane on the left, click Network and select the cloud service.
API
If you want to integrate the cloud service into third-party systems for secondary development, call APIs to access ELB.
2.8.6.15 vFW
What Is Virtual Firewall?
Advantages
Application Scenarios
Constraints
Edge firewall
An edge firewall is deployed at a VPC border or Internet border to control traffic entering and leaving a VPC from a public network, traffic between VPCs (through VPC peering), and traffic between a VPC and the local data center (public service BMS/VIP).
Distributed firewall
A distributed firewall can control access between ECSs in a VPC. Compared with a security group, a distributed firewall provides more
efficient and convenient security protection, with less impact on network performance.
2.8.6.15.2 Advantages
vFW provides layered and flexible network ACLs. It enables you to easily manage access rules for VPCs and ECSs, enhancing cloud server
protection.
vFW has the following advantages:
Supports traffic filtering based on the protocol number, source or destination port number, and source or destination IP address.
Allows multiple VPCs or ECSs to use the same ACL policy, improving usability.
Simplifies the customer configuration in scenarios where multiple projects are interconnected by default.
2.8.6.15.4 Constraints
Table 1 Constraints on vFWs

Edge firewall:
An edge firewall can be associated with multiple VPCs, but a VPC can be associated with only one edge firewall.
By default, an edge firewall denies all traffic. You need to add custom rules to allow required traffic.
An edge firewall does not affect the mutual access between cloud servers in an associated VPC.

Distributed firewall:
Layer 3 ports (such as gateway and DHCP ports) cannot be associated.
The public service network ECS is not protected by a distributed firewall.
A distributed firewall can be associated with multiple VM NICs, but a VM NIC can be associated with only one distributed firewall.
Based on the default rule, a distributed firewall denies all inbound traffic and allows all outbound traffic. You need to add custom rules to allow required traffic.
For persistent connection applications, both inbound and outbound rules that allow all traffic must be configured. Otherwise, persistent connections will be interrupted due to rule changes or cloud server migration.

Firewall rule:
Supported protocols: TCP, UDP, ICMP, and ANY (all protocols)
Supported policy types: Allow and Deny
The firewall can control traffic by source IP address, destination IP address, source port, and destination port.
Rules are matched in sequence; if two rules of a firewall conflict, the rule ahead in sequence takes effect.
The firewall can control the traffic on IPv4 networks.
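The rule behavior described in the preceding constraints can be illustrated with a minimal Python sketch (not the vFW implementation): rules are evaluated in sequence, the first matching rule takes effect, and the default action applies when no rule matches. All field names are illustrative.

# Minimal first-match rule evaluation with a default action (illustrative only).
def evaluate(rules, packet, default_action="deny"):
    for rule in rules:                      # rules ahead in sequence take precedence
        if all(rule.get(k) in (None, "any", packet.get(k))
               for k in ("protocol", "src_ip", "dst_ip", "src_port", "dst_port")):
            return rule["action"]           # "allow" or "deny"
    return default_action                   # e.g. deny all by default on an edge firewall

rules = [
    {"protocol": "tcp", "dst_port": 22, "action": "deny"},    # placed first, wins on conflict
    {"protocol": "tcp", "dst_port": None, "action": "allow"}, # allow other TCP traffic
]
print(evaluate(rules, {"protocol": "tcp", "dst_port": 22}))   # -> deny (first matching rule)
print(evaluate(rules, {"protocol": "udp", "dst_port": 53}))   # -> deny (default action)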
VPC: An edge firewall can be associated with a VPC to provide security protection for the VPC.
ECS: A distributed firewall can be associated with an ECS NIC to provide security protection for the ECS.
Web UI
Log in to the eDME operation portal as a tenant user. In the navigation pane on the left, click Network. On the Network page, click Virtual
Firewalls.
API
If you need to integrate the cloud service into third-party systems for secondary development, call APIs to access vFW. For details, see section
"Network Services" in eDME 24.0.0 Operation Portal API ReferenceeDME 24.0.0 API Reference.
2.8.6.16 DNS
What Is Domain Name Service?
Advantages
Application Scenarios
Restrictions
Related Services
When a cloud server in a VPC requests a private domain name, the private DNS server directly returns a private IP address mapped to the
domain name.
When the cloud server requests a public domain name, the private DNS server forwards the request to a public DNS server on the Internet and
returns the public IP address obtained from the public DNS server.
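The resolution behavior described above can be summarized in a simplified Python sketch (not the DNS service implementation): names found in a private zone are answered directly with private IP addresses, and other names are forwarded to a public DNS server. The zone contents and names are hypothetical examples.

# Simplified private-zone resolution with forwarding (illustrative only).
import socket

PRIVATE_ZONE = {
    "db01.example.internal": "192.168.1.10",
    "app01.example.internal": "192.168.1.20",
}

def resolve(name):
    if name in PRIVATE_ZONE:                     # private domain name: answered locally
        return PRIVATE_ZONE[name]
    return socket.gethostbyname(name)            # public domain name: forwarded upstream

print(resolve("db01.example.internal"))          # -> 192.168.1.10, request never leaves the VPC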
2.8.6.16.2 Advantages
High performance: Offers a new generation of efficient and stable resolution services, enabling tens of millions of concurrent queries on a
single node.
Easy access to cloud resources: Apply for domain names for cloud resources and host them in DNS so that you can access your cloud resources using domain names.
Isolation of core data: A private DNS server provides domain name resolution for cloud servers carrying core data, enabling communications
while safeguarding the core data. You do not need to bind EIPs to these cloud servers.
Allows one private zone to be associated with multiple VPCs for unified management.
After configuring the preceding private domain names, you will be able to quickly determine the locations and usages of cloud servers during
routine management and maintenance.
For example, multiple cloud servers are deployed in the same VPC and communicate with each other using private IP addresses. The private IP
addresses are coded into the internal APIs called among the cloud servers. If one cloud server is replaced in the system, the private IP address
changes accordingly. In this case, you also need to change that IP address in the APIs and re-publish the website, bringing inconvenience for system
maintenance.
However, if you create a private zone for each cloud server in the VPCs and map domain names to private IP addresses, the cloud servers will be
able to communicate using private domain names. When you replace one of the cloud servers, you only need to change the IP address in record sets,
instead of modifying the code.
If a public DNS server is configured for subnets of the VPC associated with a private zone, domain name requests for accessing cloud resources from cloud servers in the VPC will be directed to the Internet. Steps 1 to 10 in the right part of Figure 1 illustrate how a domain name is resolved when a cloud server accesses OBS and SMN within the VPC. The request is directed to the Internet, resulting in long access latency and poor user experience.
If a private DNS server has been configured for the VPC subnets, it directly processes domain name requests for accessing cloud resources
from cloud servers in the VPC. When a cloud server accesses cloud services like OBS and SMN, the private DNS server will return private IP
addresses of these services, instead of routing the requests to the Internet, reducing latency and improving performance. Steps 1 to 4 in the left
part of Figure 1 show the process.
2.8.6.16.4 Restrictions
Table 1 describes the restrictions on DNS.
Table 1 Restrictions

Domain name constraints: When delivering a service domain name, use a root domain name that is different from the external service domain name of the cloud platform.

Record set:
A maximum of 500 record sets can be added for each private zone.
By default, the system creates SOA and NS record sets for each private zone. These record sets cannot be deleted, modified, or manually added.
You can add A, CNAME, MX, TXT, SRV, and PTR record sets for a private zone.
Figure 1 and Table 1 show the relationship between DNS and other services.
Elastic Cloud Server (ECS)/Bare Metal Server (BMS): DNS provides domain name resolution for ECSs or BMSs.
Virtual Private Cloud (VPC): The VPC service provides basic service networks for DNS. After a private zone is associated with a VPC, record sets of the private zone are accessible to the VPC.
2.8.6.17 VPN
What Is Virtual Private Network?
Advantages
Application Scenarios
Related Services
VPN gateway
A VPN gateway is an egress gateway of a VPC. You can use a VPN gateway to enable encrypted communication between a VPC and your data center, or between a VPC in one region and a VPC in another region. A VPN gateway works together with the remote gateway in the local data center or in a VPC in another region. Each local data center must have a remote gateway, and each VPC must have a VPN gateway. A VPN gateway can connect to one or more remote gateways, so the VPN service allows you to set up point-to-point or point-to-multipoint VPN connections.
Remote gateway
Specifies the public IP address of the VPN device in your data center or in a VPC in another region. This IP address is used for communicating with ECSs or BMSs in a specified VPC.
VPN connection
A VPN connection uses Internet-based IPsec encryption technology. With tunnel encryption, VPN connections use encrypted security services to establish confidential and secure communication tunnels between different networks.
A VPN connection links a VPN gateway to the remote gateway of a user data center by establishing a secure and reliable encrypted tunnel between them. Currently, only Internet Protocol Security (IPsec) VPN is supported.
Networking Solution
Professional network hardware devices are used to establish an encrypted communication tunnel for network connectivity.
Key Technologies
Transmission protocols: ESP, AH, and AH-ESP are supported.
2.8.6.17.2 Advantages
Secure and reliable data
Professional Huawei devices encrypt transmitted data using Internet Key Exchange (IKE) and Internet Protocol Security (IPsec), and provide a carrier-class reliability mechanism, ensuring stable running of the VPN service in terms of hardware, software, and links.
Low-cost connection
IPsec channels are set up over the Internet. Compared with traditional connection modes, VPN connections produce lower costs.
Flexible architecture
Professional one-step services are provided through long-term cooperation and close contact with carriers.
With the VPN between a VPC and your traditional data center, you can easily use the ECSs and block storage resources in the cloud. Applications
can be migrated to the cloud and additional web servers can be created to increase the computing capacity on a network. In this way, a hybrid cloud
is built, which reduces IT O&M costs and protects enterprise core data from being leaked.
VPC: VPN builds a communication tunnel between a VPC and a traditional data center; therefore, the VPC service is used together with VPN.
CIDR blocks of the local subnet: The CIDR blocks of the local subnet in a data center must be within the CIDR blocks of the private network (all the subnet CIDR blocks of the VPC).
CIDR blocks of the remote subnet: The CIDR blocks of the remote subnet in a data center cannot overlap with any of the subnet CIDR blocks of the VPC (excluding the CIDR blocks of the internal subnets).
VPN gateway: Each VPN gateway can be associated with only one VPC.
VPN connection: A VPN gateway can connect to multiple subnets in the associated VPC. All VPN connections under the same VPN gateway cannot overlap with each other. All remote subnets under the same VPN gateway cannot overlap with each other. The CIDR blocks of the local subnets of all VPN connections under the same VPN gateway cannot overlap with each other.
Correct example:
VPN connection 1: CIDR block of the local subnet is 10.0.0.0/24, and CIDR blocks of the remote subnet are 192.168.0.0/24 and 192.168.1.0/24.
VPN connection 2: CIDR block of the local subnet is 10.0.1.0/24, and CIDR block of the remote subnet is 192.168.2.0/24.
VPN connection 3: CIDR block of the local subnet is 10.0.2.0/24, and CIDR block of the remote subnet is 192.168.2.0/24.
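The non-overlap constraint shown in the correct example can be checked with Python's ipaddress module, as in the following illustrative sketch; the connection names mirror the example above.

# Check that local subnets of VPN connections under one gateway do not overlap.
import itertools
from ipaddress import ip_network

local_subnets = {
    "vpn-connection-1": ip_network("10.0.0.0/24"),
    "vpn-connection-2": ip_network("10.0.1.0/24"),
    "vpn-connection-3": ip_network("10.0.2.0/24"),
}

for (name_a, net_a), (name_b, net_b) in itertools.combinations(local_subnets.items(), 2):
    if net_a.overlaps(net_b):
        raise ValueError(f"{name_a} and {name_b} have overlapping local subnets")
print("No overlapping local subnets; the configuration matches the correct example.")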
Function
Benefits
Application Scenarios
Constraints
Procedure
2.8.6.18.1 Concept
The public service network (supported only in Region Type II) is used for the communication between a server and ECSs, virtual IP addresses
(VIPs), or BMSs in all VPCs of a user. The IP addresses of the public service network are classified into two types: server IP address and client IP
address. To ensure that the client IP address pool is large enough, the IP mask cannot exceed 15. Generally, the address range is within 100.64.0.0/10, which is reserved by RFC 6598 for carrier-grade NAT internal networks and is generally not accessed directly, so it can be used for client IP addresses. Each ECS or BMS is automatically assigned a client IP address when it is created. Client IP addresses are allocated in a unified manner to ensure that they do not overlap. ECSs and BMSs cannot access each other through their client IP addresses.
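The addressing rules described above can be illustrated with a short Python sketch (not the platform's allocator): client IP addresses are taken from the 100.64.0.0/10 carrier-grade NAT range, the pool mask does not exceed 15, and each ECS or BMS receives a unique address. The VM names are hypothetical.

# Illustrative client IP pool for the public service network.
from ipaddress import ip_network

client_pool = ip_network("100.64.0.0/10")
assert client_pool.prefixlen <= 15, "client IP pool mask must not exceed 15"

hosts = client_pool.hosts()                                   # lazy generator over usable addresses
allocated = {f"vm-{i:03d}": next(hosts) for i in range(3)}    # unique, non-overlapping addresses
print(allocated)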
2.8.6.18.2 Function
Through the public service network, ECSs or BMSs in all VPCs of a user can access specified services (such as DNS and OBS services) deployed
by the user.
2.8.6.18.3 Benefits
With the public service network, you can quickly deploy a service shared by all VPCs.
Flexible deployment: The public service network is deployed outside the VPC network and they are independent of each other. The public
service network can be shared by all VPCs and does not need to be configured for each VPC. Both the server and client support ECSs or
BMSs.
Easy expansion: The public service network is independently deployed outside the VPC network and exclusively uses a network segment. The
number of the public service network servers can be dynamically adjusted based on the client access volume in the VPC.
2. The API gateway is configured with the server IP address of the public service. VMs in the service zone can call interfaces of the API
gateway through the public service address.
3. The OBS and SFS services may be accessed by all VMs. To solve the address overlapping problem, all VMs access the OBS and SFS server
IP addresses through the public service address.
2.8.6.18.5 Constraints
The restrictions on the public service network are as follows:
2. ECSs or BMSs in a VPC internal subnet cannot access the public service network.
3. The network segments of the server and client cannot be modified after the public service network is created.
4. The public service network communicates with the multi-tenant switch through a route.
5. There must be sufficient client network segments reserved (the IP mask cannot exceed 15) to ensure that public service client IP addresses
can be allocated to ECSs and BMSs in all VPC subnets.
6. The network segments of the public service client and server cannot be the same as those of the VPC subnets.
2.8.6.18.6 Procedure
This section describes how to create and manage a public service network.
Figure 1 Procedure
Table 1 Procedure

Preparation:
Plan the IP network segment where the server is deployed and pre-allocate a large network segment to the client on the public service network.
Create a Layer 3 shared egress on the iMaster NCE-Fabric controller. For details, see section "Direct Connection Between the Border Leaf Switches and Public Service TOR Switch" in Datacenter Virtualization Solution 2.1.0 Multi-Tenant Network Configuration Best Practices (SDN).
Determine the physical networking and configure a route for the switch. For details, see section "PSN Switch Configuration" in Datacenter Virtualization Solution 2.1.0 Multi-Tenant Network Configuration Best Practices (SDN).

Management:
After the public service network is created, the server and client networks cannot be modified.
The public service network cannot be deleted if it contains resources. You need to delete the NIC resources of all VPCs before deleting the network.
2.8.6.19 CSHA
What Is Cloud Server High Availability?
Benefits
Application Scenarios
Implementation Principles
Key Indicators
Definition
Cloud Server High Availability (CSHA) is a cross-AZ VM HA DR service implemented based on the active-active storage technology. It provides
DR protection with zero RPO for ECSs. If the production AZ is faulty, minute-level RTO DR switchover is provided to ensure service continuity.
Restrictions
Restrictions on CSHA are as follows:
Public
DR can be implemented for VMs that are created (manually or from images) or fully cloned.
Protected VM disks support only Huawei scale-out block storage and eVol storage.
DR cannot be implemented for VMs with peripherals, such as GPUs and USB devices. If peripherals are added to a VM for which DR
has been configured, DR cannot be implemented for the peripherals.
DR cannot be implemented for VMs with SCSI transparent transmission disks and disks in SCSI transparent transmission mode.
DR cannot be implemented for VMs for which the Security VM option is enabled.
DR cannot be implemented for VMs that use raw device mapping (RDM) shared disks.
DR cannot be implemented for VMs for which Security VM Type is set to SVM.
DR cannot be performed for VMs for which the high-precision timer is enabled.
DR cannot be implemented for VMs that have snapshots. If a VM snapshot is created after a DR protected group is created, the VM
snapshot is also deleted when the protected group is deleted.
Only one-to-one active-active pair DR protection is supported. Active-active replication links must be configured for the used storage
devices. The types, versions, and computing architectures of the active-active storage devices at both ends must be the same.
The disks used by a protected VM must all belong to the same Huawei storage device.
The time of the node where UltraVR is installed must be synchronized with that of FusionCompute.
Flash storage
Supports HA service DR in two scenarios: converged deployment and separated deployment of Huawei scale-out block storage.
DR volumes (volumes configured with HyperMetro pairs) do not support disk migration.
The port group access mode supports only the VLAN access mode and does not support the subnet access mode.
2.8.6.19.2 Benefits
With the UltraVR-based DR capability in virtualization scenarios, the DR management workload of O&M administrators is heavy and tenant self-service capabilities are insufficient. This cannot meet the self-management requirements of governments and enterprises with large-scale IT infrastructure, resulting in high centralized O&M costs and slow responses to service management requirements.
As a cross-AZ cloud server high availability service, CSHA enables self-service management for users in each sub-department of the customer, reduces dependency on centralized management by the IT department, and improves management efficiency.
Dual-write enables I/O requests of an application server to be synchronized to both a local LUN and a remote LUN.
Data Change Logs (DCLs) record data changes of the storage systems in the two data centers.
Two HyperMetro storage systems can process hosts' I/O requests concurrently. To prevent access conflicts when different hosts access the same storage address at the same time, a lock allocation mechanism is used: data can be written to a storage system only when the lock allocation mechanism permits it. A storage system that has not been granted the lock must wait until the previous I/O is complete and the lock is released by the other storage system before it obtains the write permission.
Dual-write enables application servers' I/O requests to be delivered to both local and remote caches, ensuring data consistency between
the caches.
If the storage system in one data center malfunctions, the DCL records data changes. After the storage system recovers, the data changes
are synchronized to the storage system, ensuring data consistency across data centers.
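The dual-write and DCL behavior described above can be summarized in a highly simplified Python sketch (not the HyperMetro implementation): a write goes to both the local and remote caches; if the remote write fails, the change is recorded in a data change log and replayed once the remote storage system recovers. All names are illustrative.

# Highly simplified dual-write with a data change log (illustrative only).
class HyperMetroPairSketch:
    def __init__(self):
        self.local_lun, self.remote_lun, self.dcl = {}, {}, []

    def write(self, address, data, remote_ok=True):
        self.local_lun[address] = data                 # dual-write: local cache
        if remote_ok:
            self.remote_lun[address] = data            # dual-write: remote cache
        else:
            self.dcl.append((address, data))           # record differential data in the DCL

    def resynchronize(self):
        """After the faulty storage system recovers, replay the DCL."""
        while self.dcl:
            address, data = self.dcl.pop(0)
            self.remote_lun[address] = data

pair = HyperMetroPairSketch()
pair.write(0x10, b"blk-A")
pair.write(0x20, b"blk-B", remote_ok=False)            # remote write fails -> logged in DCL
pair.resynchronize()
assert pair.local_lun == pair.remote_lun               # data consistent across data centers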
Figure 1 shows the HyperMetro write I/O process when an I/O request is delivered from an application server and causes data changes.
If writing to either cache fails, the system converts the log into a DCL that records the differential data between the local and remote
LUNs.
If writing to either cache fails, HyperMetro is suspended and each storage system sends an arbitration request to the cloud platform quorum server. The
winning storage system continues providing services while the other stops. In the background, the storage systems use the DCL to synchronize data
between them. Once the data on the local and remote LUNs is identical, HyperMetro services are restored.
Only Huawei UltraPath can be used in the HyperMetro solution. Huawei UltraPath has the region-based access optimization capability, reducing the number of
interactions between sites. In addition, Huawei UltraPath is optimized for active-active scenarios. It can identify geographical locations and reduce cross-site
access, thereby reducing latency and improving storage system performance. UltraPath can read data from the local or remote storage system. However, if the
local storage system is working properly, UltraPath preferentially reads data from the local storage system, preventing data read across data centers.
1. The application server applies for the read permission from HyperMetro.
If the link between the storage systems in the two data centers is down, the cloud platform quorum server determines which storage system continues providing
services for application servers.
2. HyperMetro enables the local storage system to respond to the read I/O request of the application server.
If the local storage system is working improperly, HyperMetro enables the application server to read data from the remote storage system.
The remote storage system returns data to HyperMetro.
If the data of one storage system is abnormal, HyperMetro uses data on the other storage system to repair the data, ensuring data consistency between the two
data centers.
EVS block storage service (related to ECS and CSHA): Provides the DR management capability of block storage services for CSHA.
Table 1 Specifications
Protected group - Number of protected groups: 2000. Remarks: You are advised to set the CPU of the DR management VM to 8 vCPUs. Protected groups in service-oriented scenarios and virtualization scenarios cannot coexist.
Related Concepts
Definition
The backup service provides a unified operation portal for tenants in DCS multi-tenant scenarios. Administrators can define backup service
specifications to form a logically unified backup resource pool for multiple physically dispersed backup devices, helping tenants quickly obtain the
backup service, simplifying configuration, and improving resource provisioning efficiency.
Tenants focus on the backup service capabilities required by services instead of the networking and configuration of backup resources. This greatly
simplifies the use of the backup service and facilitates the configuration of backup capabilities for VMs and disks.
Backup service operation permission
Role: Virtual data center (VDC) administrator
Permissions: VDC management permission; management permission on all cloud services
Description: A user with this permission can perform any operation on the backup service.
Table 2 lists the common operations that can be performed on the backup service by default after you have the backup service permissions.
Benefits
Application Scenarios
Functions
2.8.6.21.2 Benefits
High density
Easy management
System optimization
Existing VMware resource pools can be synchronized and managed by different tenants through the VMware integration service.
This solution can replace the managed VMware solution based on FusionSphere OpenStack.
The following resources can be managed: clusters, physical machines, resource pools, VMs, basic networks, storage, disks, and
templates on vCenter.
New management
VMware resource pools can be managed in the VMware integration service module. The resource management operations, such as requesting
VMs and EVS disks and managing compute, network, storage, image, and snapshot resources, are controlled by order quota.
2.8.6.21.4 Functions
The following figure shows the functions of the VMware integration service.
VM lifecycle management: Users can start, shut down, restart, and delete VMs.
Application for VMs with specified flavors: An O&M administrator predefines different VM flavors. Tenants can apply for VMs of different flavors based on application scenarios.
VM snapshot creation: Tenants can create snapshots for their own VMs and restore the snapshots when necessary.
VM image creation: Tenants can create images for their own VMs, provision VMs by using their own images, and publish private images as public images.
VM cloning: Tenants can quickly replicate VMs by cloning their own VMs.
EVS disk creation: Tenants can create EVS disks of different specifications and attach them to VMs.
VM and EVS disk instance metering: VMs and EVS disk instances can be metered based on flavors.
Support for NSX-T networks: Network services include security group, load balancing, and security policy.
One-click synchronization of existing vCenter resources: Existing vCenter resources, such as VMs, storage devices, images, resource pools, and clusters, can be synchronized with one click.
Task log viewing: Asynchronous task execution records and logs can be viewed on the operation portal.
traffic, authorization and access control, monitoring, and version of all OpenAPIs. On the O&M portal, Application and Data Integration provides
the PaaS Instance Management service to implement service instance provisioning, alarm reporting, and preventive maintenance for service
provisioning and maintenance.
Introduction to System Integration Service
This section describes the definition, functions, benefits, application scenarios, and availability of the System Integration Service.
Usage Scenarios
2.8.6.22.1.1.1 Functions
The System Integration Service is a service that revolutionizes the connections of southbound subsystems on campus networks. It offers full-stack
integrated access channels for these subsystems, including service, data, and message access. Additionally, it allows users to build connection
management capabilities based on subsystems, integration assets, and connectors. Users can also take advantage of visualized integrated asset
development and O&M capabilities.
Related Concepts
Figure 1 Concepts
App: An app is a virtual unit for an integration task. It does not correspond to an actual external or internal system. It is an entity that manages
integration assets.
LinkFlow: provides API integration-oriented asset development capabilities, implements full lifecycle management from script-based
API design, development, test, to release, and supports API re-opening after orchestration.
MsgLink: provides asset development capabilities oriented to message integration, provides secure and standard message channels, and
supports message release, subscription, and permission management.
DataLink: provides asset development capabilities oriented to data integration, connects to external data sources based on connectors,
and implements flexible, fast, and non-intrusive data connections between multiple data sources such as texts, messages, APIs, and
structured data.
Subsystem: A subsystem is a digital description or modeling of a physical IT system connected to System Integration Service. Generally, one
subsystem corresponds to one physical IT system. You can define subsystem events, services, and connection resources on the page to build
subsystems and authorize apps to develop integrated assets.
Event: Events are defined by subsystems. Developers can invoke LinkFlow functions or MsgLink interfaces to send events defined in
subsystems.
Service: Services are defined by subsystems. When developing function APIs in LinkFlow, developers can use the functions to access
third-party services defined in subsystems.
Connection: Connections are defined by subsystems. After subsystems are associated with connector instances, LinkFlow and
DataLink are authorized as the data source of data APIs or the source and target ends of DataLink tasks.
Connection Management
Integration object management based on app center: Users can create multiple integration objects, such as APIs, connector instances, and
topics, to establish interconnection interfaces with integrated systems or apps.
Connection management based on subsystems: Users can build models for integrated subsystems to describe external IT systems and leverage
the lifecycle management capabilities, such as adding, deleting, modifying, and querying subsystems. This system also supports logical
integration of subsystems by apps.
System integration relationship view: The system provides a unified view of the number of subsystems contained in each integration app.
Fine-grained authorization management: Supports app-based sub-account authorization and isolation. Sub-accounts can manage, modify,
delete, and integrate authorized integrated apps, while apps belonging to different users (sub-accounts) are isolated from each other.
Administrators (primary accounts) can authorize apps to different users (sub-accounts) for ISV SaaS integration.
Connector management: Multiple data sources and integration objects can be created and connected through connectors, including data
connectors (MySQL, Oracle, PostgreSQL/openGauss, ClickHouse, DaMeng, Gauss100, Gauss200/DWS, SQL Server, eDataInsight Hive,
eDataInsight HBase, eDataInsight ClickHouse, eDataInsight StarRocks, HANA, and Vastbase G100) and message connectors (MsgLink,
Kafka, and WebSocket), and protocol connectors (FTP, LDAP, API, SOAP, and eDataInsight HDFS).
JWT token and AK/SK authentication: Both JWT token authentication and AK/SK authentication are supported.
System statistics analysis: The operation overview provides one-stop monitoring of core metrics, including apps, subsystems, APIs, connectors,
and topics. It also offers O&M visualization capabilities for service access, DataLink, and message access, allowing users to view run logs of
services or tasks, and release and subscription records and running tracks of messages.
Deployment and capacity expansion of hardware in multiple forms and specifications: Lite, Std, and Pro specifications are provided to be
compatible with different scenarios. The Std edition can be upgraded to the Pro edition for capacity expansion.
Elastic capacity expansion of software licenses: Users can purchase licenses on demand. The number of key integration objects that can be
created in the software is controlled by licenses.
Connection Tools
Web-based integrated asset development: offers web-based tools that allow users to easily develop integrated assets for their apps. This one-
stop solution includes visualized development tools and enables seamless integration with external subsystems.
DataLink development: provides asset development capabilities oriented to data integration, connects to external data sources based on
connectors, and implements flexible, fast, and non-intrusive data connections between multiple data sources such as texts, messages, APIs, and
structured data.
LinkFlow development: provides API integration-oriented asset development capabilities, implements full lifecycle management from script-
based API design, development, test, to release, and supports API re-opening after orchestration.
MsgLink development: provides asset development capabilities oriented to message (MQ) integration, provides secure and standard message
channels, and supports message release, subscription, and permission management.
Integrated asset package management: provides the capabilities of importing and exporting integrated asset packages and supports on-demand
installation of integrated assets in different service scenarios.
Asset overview: displays the overall classification statistics and details of assets on which the user has permission.
1. Real-time scheduling
2. Periodic scheduling by minute, hour, day, week, or month, or by using the Quartz Cron expression
MySQL: Scheduled task collection and write of MySQL 5.7 are supported. The CDC function in Binlog mode of the MySQL database is supported.
API: Data can be collected and written through scheduled tasks, with the following restrictions (a validation sketch follows this table):
Only HTTP and HTTPS are supported. Other protocols such as SOAP and RPC are not supported.
Multiple authentication modes are supported, such as Basic Auth and OAuth 2.0.
Only JSON and XML messages can be parsed.
For JSON, only one-layer array structures are supported. Nesting is not allowed, and obtaining data from paths at different levels is not supported.
Data can be written through real-time tasks.
SQL Server: Data can be collected and written through scheduled tasks. Data can be collected and written through real-time tasks (by creating composite tasks), and data can be incrementally synchronized. Currently, data can be synchronized from the SQL Server database to relational databases such as MySQL, Oracle, and SQL Server.
GaussDB 100: For GaussDB 100 Kernel 503, data can be written through scheduled and real-time tasks.
DWS: Data can be collected and written through scheduled tasks. Data can be written through real-time tasks.
FTP: FTP data sources of the FTP and SFTP protocols can be created. Scheduled tasks can be used to collect or write data. Files in CSV, TXT, XLS, and XLSX formats can be parsed and mapped; other types of files can only be migrated.
Hive (DCS eDataInsight): Data can be collected and written through scheduled tasks. Data can be written through real-time tasks.
HBase (DCS eDataInsight): Data can be collected and written through scheduled tasks. Data can be written through real-time tasks.
HDFS (DCS eDataInsight): Data can be collected and written through scheduled tasks. Data can be written through real-time tasks.
ClickHouse (DCS eDataInsight): Data can be collected and written through scheduled tasks. Data can be written through real-time tasks.
ClickHouse Connector: Data can be collected and written through scheduled tasks. Data can be written through real-time tasks. Versions 20.7 and later are supported.
Vastbase G100: Data can be collected and written through scheduled tasks. Data can be written through real-time tasks.
SAP HANA: Data can be collected and written through scheduled tasks. Data can be written through real-time tasks.
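The JSON format restriction noted for the API connector above can be expressed as a small validation sketch in Python: only a one-layer array of flat records is considered parseable. The function name is illustrative and not part of the product.

# Illustrative check for the "one-layer array, no nesting" JSON restriction.
import json

def is_parseable_api_payload(text):
    data = json.loads(text)
    if not isinstance(data, list):
        return False
    return all(isinstance(row, dict) and
               not any(isinstance(v, (dict, list)) for v in row.values())
               for row in data)

print(is_parseable_api_payload('[{"id": 1, "name": "a"}]'))          # True: flat records
print(is_parseable_api_payload('[{"id": 1, "tags": ["x", "y"]}]'))   # False: nested structure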
API design: Online API design is supported, including the design of basic API information, headers, parameters, paths, and methods.
API development:
Online API development and orchestration based on JavaScript scripts are supported.
During script orchestration of integrated apps, library functions can be used to call services and events defined on subsystems.
API test: HTTPS requests are supported. The request header, request body, and request parameters can be customized, and a complete response
packet is returned.
API deployment: APIs can be deployed on the APIGW for external use.
Batch import and export APIs: Batch import and export in JSON or YAML format are supported.
API authorization: Authorized apps can be created and bound to specified APIs.
SSL link encryption: Native RocketMQ capabilities are supported. SSL encryption can be configured for instance access to ensure security.
Message query: Users can query message details by topic, publisher ID, message ID, message key, and creation time.
Message statistics: Message statistics can be queried at different time granularities, and the query time range can be flexibly set.
Message tracing: The system monitors the production and consumption processes of a single message, such as the production time,
consumption time, and client name.
Topic management: Users can view topic information, including basic topic information, publisher information, and subscriber information.
Subscription Management: Users can view subscription configurations, number of backlogged messages, consumption rate, and consumption
details on the platform.
Dead letter queue: Messages that cannot be processed are stored in the dead letter queue for unified analysis and processing.
Interconnection between the client SDK and integrated apps: The client SDK is provided to send messages to or subscribe to topics of
integrated apps through interfaces exposed by the SDK. Only the Java SDK is provided.
Interconnection between the client SDK and subsystems: The system provides the client SDK and allows external IT systems to report events
to Link Services through interfaces exposed by the SDK.
External network mapping address access: The external network mapping addresses of service nodes can be configured to allow messages to
be published to and subscribed to using the internal and external network addresses at the same time.
1. Supports pre-integration with Huawei IVS video management and analysis subsystems.
3. Supports pre-integration with third-party video transcoding platforms (Jovision and AllCam).
The system is preconfigured with assets for integrating with IoT platforms.
1. Preconfigures over 500 assets for pre-integration with third-party subsystems, such as parking, access control, lighting, and building
facilities.
Easy management: Multiple integration assets, such as data integration assets, API integration assets, and message integration assets, can be
centrally managed on one console.
All campus data is quickly integrated based on the data access and message access components, preprocessed, and opened to different backend
services. For instance, facial data from the turnstile system, device status from the video surveillance system, and switch and device
information from the street lamp system are transmitted to backend services in real time or in asynchronous batches for analysis and linkage
management.
Provides southbound control capabilities to help enterprises obtain data and build smart campus brains
The LinkSoft service provides a channel for integrating and sharing data, messages, and services, allowing enterprises to use AI, video
analysis, and big data cloud services to build a real smart campus brain that converges IT, OT, and AI.
The Device Integration Service enables the platform to model and manage IoT devices, implementing digital twins for IoT devices.
Functions
Usage Scenarios
2.8.6.22.1.2.1 Definition
The Device Integration Service enables the platform to model and manage IoT devices, implementing digital twins for IoT devices.
Concepts
The Device Integration Service manages devices by device model category, product, and device model.
Device model: A device model is a thing model. It allows you to abstract products in the same type from different models and vendors to form
a standard model for unified management. You can define basic device information, supported commands, and event information by attribute,
command, and event.
Device model classification: You can classify device models for easier management.
Product: A product model contains abstract definitions based on a device model. The product attributes, commands, and events are derived
from the corresponding subsets of the device model.
Device instance: A device instance is a specific instance of a product and also a management object in the Device Integration Service.
2.8.6.22.1.2.2 Functions
LinkDevice
The Device Integration Service enables the platform to model and manage IoT devices, implementing digital twins for IoT devices. Additionally, it
allows the access, aggregation, model mapping, and local control of IoT devices, provides various protocol drivers to connect different types of
southbound devices while shielding the details of southbound connection technology, and offers unified device data reporting and command delivery
interfaces.
LinkDeviceEdge
LinkDeviceEdge is a remote node of LinkDevice and interacts with LinkDevice to connect distributed devices, collect data of the devices, and
support local data preprocessing. LinkDeviceEdge is launched as an independent software product and can be installed in a cloud-based
environment or on an independent server.
Device Connection
Multiple access modes
Built-in and extended protocol connectors supported by LinkDevice can be connected to the devices or device systems.
Devices or device systems can be connected to LinkDevice through the partner gateway or LinkDeviceEdge.
Devices or device systems can be connected to LinkDevice through LinkSoft. The integration apps can be developed through
LinkFlow.
Devices or device systems can be connected using the LinkDevice SDK. The LinkDevice SDK supports the C and Java languages.
Devices can be connected using OPC UA. Password and certificate authentication, transmission encryption, and security policies and
modes are supported.
Devices can be connected using the standard MQTT protocol of LinkDevice. Password authentication and transmission encryption are supported (see the sketch after this list).
MQTT can be used to connect to T3 IoT devices, such as MEGSKY intelligent convergent terminals.
Extended protocol connectors: They can be connected to protocols through customized device connectors (maximum: 128) and protocol plug-
ins.
Connector management: IoT protocols can be managed as connectors (including built-in and extended protocol connectors). Connector
configuration templates can be imported or exported.
Product-based data collection templates can be defined, allowing users to configure attribute point mapping for all devices of a product
quickly.
Channels can be configured for connection to southbound devices based on the connector configuration of each protocol.
Mapping between protocol points and device thing models can be configured. Data can be imported or exported in batches.
Southbound points can be read or written based on the point mapping configuration.
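As an illustration of the standard MQTT access mode listed above (password authentication plus transmission encryption), the following hedged sketch uses the open-source paho-mqtt client. The broker address, credentials, and topic layout are hypothetical; the actual LinkDevice connection parameters and topic conventions are defined by the product.

# Hedged sketch of MQTT device access with password authentication and TLS.
import json
import ssl
import paho.mqtt.client as mqtt

# paho-mqtt 1.x style constructor; paho-mqtt 2.x additionally requires a CallbackAPIVersion argument.
client = mqtt.Client(client_id="device-001")
client.username_pw_set("device-001", "device-password")    # password authentication
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)                # transmission encryption (TLS)
client.connect("linkdevice.example.com", 8883, keepalive=60)   # hypothetical broker address
client.loop_start()

# Report a thing-model attribute; the topic and payload format are illustrative only.
client.publish("devices/device-001/properties/report",
               json.dumps({"temperature": 23.5}), qos=1)
client.loop_stop()
client.disconnect()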
Device Management
Thing model management: This function allows users to classify, add, delete, query, modify, import, and export thing models, or manage thing
model attributes and commands.
Product management: This function allows users to manage products based on thing models and distinguish gateways from devices.
Device lifecycle management: This function allows users to add, delete, query, or modify device instances, monitor device statuses (online or
offline), query the device list based on filter criteria, import or export device registration data in batches, or report device status changes.
Device groups: This function allows users to add, delete, query, and modify device groups, or add device instances to device groups.
Device relationships: Association and management between gateways and their subdevices, between platforms and edge nodes, and between
platforms, edge nodes, gateways, and gateway subdevices are supported.
Dynamic device discovery: Dynamic device discovery can be configured. Partner turnstiles and access control systems can be dynamically
discovered and connected.
Device shadows: This function allows users to cache real-time device data, configure and convert point mapping configuration for device thing
models, or read and report device data.
Device operations: This function allows users to read and write device attributes, query device events, or deliver device commands.
Device data storage: This function allows users to store device data locally or define the storage period.
100 TPS is supported for device data storage. If the device data reporting rate exceeds 100 TPS, some data may not be processed, resulting in timeout and
packet loss.
Device linkage
Event triggering can be scheduled. Device status changes and attributes can be reported.
The device status, attributes, and time range can be configured as judgement conditions.
The following actions are supported: attribute and command delivery, and alarm and notification reporting.
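A minimal Python sketch (not the LinkDevice rule engine) of the linkage logic described above: a reported attribute is evaluated against a condition and a time range, and a match triggers command delivery and alarm reporting. All names and thresholds are illustrative.

# Illustrative device linkage rule: condition on attribute and time range, then actions.
from datetime import time

def on_attribute_report(device_id, attribute, value, report_time,
                        deliver_command, raise_alarm):
    in_time_range = time(8, 0) <= report_time <= time(20, 0)      # condition: time range
    if attribute == "temperature" and value > 30 and in_time_range:
        deliver_command(device_id, {"fan": "on"})                 # action: command delivery
        raise_alarm(device_id, f"temperature too high: {value}")  # action: alarm reporting

on_attribute_report("device-001", "temperature", 32.5, time(14, 30),
                    deliver_command=lambda d, c: print("command", d, c),
                    raise_alarm=lambda d, m: print("alarm", d, m))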
Message Communication
Device data transfer: Device data status information can be transferred to third-party systems, including Kafka and OpenGaussDB.
Device data subscription and push: Device data can be pushed to MsgLink for apps to subscribe to and obtain the data. The following
operations are supported: product-based data source configuration, data push by consumer group, and push policy configuration.
Two-way transparent transmission: Device messages can be transparently transmitted to apps. Apps can deliver messages to devices in
asynchronous mode.
API openness: Device services can be provided through REST APIs. APIs for configuring and querying LinkDevice as well as device read and
write APIs are provided. Read and write instructions can be forwarded to devices.
Southbound MQTT
The southbound MQTT interface is provided for third-party devices to directly connect to LinkDevice.
Device attribute points and thing model data can be read or written. Device statuses and product information can be synchronized.
Device status monitoring: The latest device statuses can be monitored and displayed in real time. The statuses include Online, Offline,
Unactivated, and Unactivated(Expired).
Connector status monitoring: The latest statuses of built-in and extended connectors can be monitored and displayed in real time. The statuses
include Online, Offline, Unactivated, and Unactivated(Expired).
Users can configure the device statuses, attributes, commands, and event logs.
Message statistics: Statistics on the number of messages can be collected based on device instances and connectors.
System settings: Security settings and device data storage can be configured or queried.
Requirements for the maximum storage duration of historical data are described as follows:
If FusionCube is deployed as the foundation, the duration can be set to one year.
Edge Management
Edge node management
Edge data processing: Reported device thing model data can be pre-processed.
Data filtering is supported to find the device data that meets the filter criteria.
Data can be aggregated based on maximum, minimum, or latest values in a specified time window (see the sketch after this list).
Edge data storage: When an edge device is disconnected from LinkDevice, device data can be stored for a maximum of seven days. When the
connection is restored, the device data generated during the disconnection period can be reported.
MQTT is supported for upstream and downstream communication, and bidirectional communication between northbound devices and
LinkDevice.
Data of southbound devices, southbound gateways, and gateway subdevices can be read, written, and reported.
Edge connection management: IoT protocols can be managed as connectors, including built-in and third-party extended protocol connectors.
Built-in protocol connectors include ModbusTCP, MQTT, and OPC UA.
Channels can be configured for connection to southbound devices based on the connector configuration of each protocol.
Mapping between protocol points and device thing models can be configured. Data can be imported or exported in batches.
Collected point data can be pre-processed, including point scaling, threshold check, and fluctuation suppression.
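The windowed aggregation described in the edge data processing items above can be illustrated with the following minimal Python sketch; the data layout and function name are illustrative only and do not represent the LinkDeviceEdge implementation.

# Illustrative windowed aggregation of device data points (max, min, or latest).
def aggregate_window(points, mode="max"):
    """points: list of (timestamp, value) tuples collected in one time window."""
    values = [v for _, v in points if v is not None]       # simple data filtering
    if not values:
        return None
    if mode == "max":
        return max(values)
    if mode == "min":
        return min(values)
    if mode == "latest":
        return max(points, key=lambda p: p[0])[1]           # value with the newest timestamp
    raise ValueError(f"unsupported aggregation mode: {mode}")

window = [(1, 21.0), (2, 23.4), (3, None), (4, 22.1)]
print(aggregate_window(window, "max"), aggregate_window(window, "latest"))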
Capabilities - Flexible protocols: Supports mainstream device access protocols to meet the requirements of mainstream protocol devices and access scenarios. Provides a plugin mechanism for custom protocol parsing.
Costs - Quick access: Allows partner devices to be plug-and-play, requiring minimal manual intervention, so they can be used immediately after being connected.
Costs - Quick deployment: Works with LinkTool to realize fast configuration and deployment.
Security - System security: Provides digital certificates, one-device-one-key access security, and EulerOS security capabilities.
Ecosystem - Third-party interconnection: Integrates upstream and downstream ecosystem resources to provide value-added services.
Intelligent city (including water affairs, water conservancy, electric power, urban management, environmental protection, and emergency
response)
Intelligent warehousing
Intelligent healthcare
Application Scenarios
2.8.6.22.1.3.1 Functions
The APIGW service functions as the service openness portal and allows campus customers to easily create, publish, maintain, monitor, and protect APIs, and manage the calls, traffic, authorization and access control, monitoring, and versions of the open APIs.
Gateway Management
API management based on the application center: It supports project management based on the application dimension, API release, test, and
authorization on applications, and API permission management based on the application dimension.
Fine-grained authorization management: The system supports application-based sub-account authorization and isolation. This means that users
(sub-accounts) can manage, modify, delete, and integrate authorized integrated applications, while applications belonging to different users
(sub-accounts) are isolated from each other. And administrators (primary accounts) can authorize applications to different users (sub-accounts)
for ISV SaaS integration.
JWT token authentication, AK/SK authentication, and OAuth authentication are supported.
System analysis: One-stop monitoring of core metrics, such as the number of applications, APIs, and API visits, is available on the home page.
In addition, O&M visualization is supported, so that users can view API run logs and operator operation records.
API statistics analysis: This function is used to analyze the visit and request details of the system or a specific API, and collect and analyze API
access logs.
Security authentication: Apps can be authenticated using app keys and secrets.
API traffic control: Multi-dimensional API traffic control (by IP address, app, and API) is supported; a rate-limiting sketch follows below.
API test: Users can orchestrate and debug request headers, request bodies, and request parameters.
API routing:
Backend load balancing: The round robin algorithm is supported to implement load balancing.
API consumers can access APIs through the gateway without the need to know the specific background service address, making integration
development simpler.
APIGW separates and protects backend services for API users, creating a security barrier that reduces the impact and damage on backend
services. This ensures the stable operation of backend services and optimizes the integration architecture.
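The multi-dimensional traffic control mentioned above can be illustrated with a simplified Python sketch (not the APIGW implementation): each dimension/key pair, such as an IP address, an app, or an API, is limited to a fixed number of calls per time window. All names and limits are illustrative.

# Illustrative per-dimension rate limiter using a sliding time window.
import time
from collections import defaultdict

class WindowRateLimiter:
    def __init__(self, limit, window_seconds):
        self.limit, self.window = limit, window_seconds
        self.counters = defaultdict(list)               # (dimension, key) -> call timestamps

    def allow(self, dimension, key):
        now = time.time()
        calls = [t for t in self.counters[(dimension, key)] if now - t < self.window]
        self.counters[(dimension, key)] = calls
        if len(calls) >= self.limit:
            return False                                 # over the limit: reject the call
        calls.append(now)
        return True

limiter = WindowRateLimiter(limit=2, window_seconds=60)
print([limiter.allow("app", "app-001") for _ in range(3)])   # -> [True, True, False]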
Alarm management
eDME provides the alarm management capability to monitor alarms generated by each component in DCS in real time and allows you to
acknowledge and clear alarms.
Definition
To ensure the normal running of the network, the network administrator or maintenance personnel must periodically monitor and handle
alarms.
Benefits
Beneficiary: Customer
Alarm management allows you to view, monitor, and handle all alarms of DCS on one UI in real time.
Alarm management provides a series of functions, such as masking, aggregation, correlation, and automatic acknowledgment, to help you automatically identify and reduce invalid alarms and efficiently handle alarms.
Alarms can be remotely notified by short message service (SMS) or email, helping you learn about the alarm status in the system in real time.
Check policy
With the check policy function provided by eDME, you can periodically or manually check the system capacity and performance to detect
resource health risks in advance.
Definition
A check policy is used to periodically or manually check resources in terms of performance, capacity, availability, configuration, reclamation,
and low-load resources. If the preset check policies do not meet your check requirements, you can customize check policies.
Benefits
Beneficiary: Customer
You can use the preset check policies and customized check policies to identify and handle resource risks in advance, ensuring healthy and stable operation of the data center.
The system provides manual and scheduled checks so that you can customize check policies based on scenarios.
The system supports the remote notification function, including SMS and email notifications, helping you learn about the system health status in real time.
Inventory management
eDME allows you to manage and maintain hardware resources, such as Ethernet switches, FC switches, servers, and storage devices.
Definition
After interconnecting with hardware resources such as Ethernet switches, FC switches, servers, and storage devices, eDME can display
resource attributes, status, performance, and capacity, and supports resource configuration and maintenance.
Benefits
Beneficiary: Customer
After adding an Ethernet switch, you can view the resource, performance, and status information about the Ethernet switch, configure VLANs, and manage link information, improving the O&M efficiency of the Ethernet switch.
After adding a physical server, you can view the resource, performance, and status information about the server, and perform maintenance operations such as turning on indicators and restarting the server, improving the O&M efficiency of the physical server.
After adding an FC switch, you can view the resource, performance, and status information about the FC switch, and manage zone information, improving the O&M efficiency of the FC switch.
After adding a storage device, you can view the resource, performance, and status information about the storage device, and manage storage pools, improving the O&M efficiency of the storage device.
Performance analysis
eDME provides the end-to-end (E2E) performance analysis function. Then, you can analyze performance based on collected data and quickly
locate problems.
Definition
By creating an analysis view for a resource object (device or virtualization resource), you can analyze the performance of the specified
resource and its associated resources.
Benefits
Beneficiary: Customer
With the performance analysis function, you can obtain the performance, alarm, and status information about a resource and its associated resources, helping quickly locate the root cause of a fault and improving O&M efficiency.
Topology view
eDME provides the topology view function, allowing you to view the E2E association relationships between resources and quickly analyze the
impact scope of problems.
Definition
The system can display the topology associated with the selected resource object (device or virtualization resource).
Benefits
Beneficiary: Customer
In the topology view, you can view the topology of a resource and its associated resources, including resource alarms and link information, helping you quickly analyze the impact scope of problems.
Security management
eDME provides security management functions, such as user management, user policy management, and authentication management, to help
users ensure the security of user information and the system.
Definition
Security management covers user management, user policy management, and authentication management, helping you manage user rights, authentication modes, and sessions, and set account policies, password policies, and login modes.
Benefits
Beneficiary: Customer
- The security management function assigns roles to users and manages the role rights, implementing optimal resource allocation and permission management, and improving O&M efficiency.
- This function allows you to set user account access policies, password policies, and login modes, helping you customize secure user and login policies to improve system security.
Installation Process
Deploying Hardware
Deploying Software
Appendixes
Network Overview
System Requirements
Overview
Datacenter Virtualization Solution (DCS) provides two deployment solutions.
Separated deployment: Compute and storage nodes are deployed separately, and the storage type is flash storage or scale-out storage.
The management domain system includes virtualization resource management (VRM) deployment and eDME cluster deployment.
Management domain services are virtualized on three physical nodes.
Hyper-converged deployment: Compute and storage nodes are deployed together, and FusionCube 1000H (FusionCompute) is used for the
compute resource pool.
The management domain system includes eDME cluster deployment only. eDME is deployed on two Management Computing Node
Agent (MCNA) nodes and one Storage Computing Node Agent (SCNA) node of FusionCube 1000H.
eDME can be deployed on multiple nodes. For details about the deployment specifications, see Management System Resource Requirements . This section uses the
three-node deployment mode as an example.
CNA: A compute virtualization component, which provides compute resources for FusionCompute. A host also provides storage resources when local disks are used for storage.
VRM: A management node of FusionCompute, which provides an interface for users to centrally manage virtual resources. VRM supports active/standby deployment. The active and standby nodes are virtualized on two hosts. The VRM management IP address is configured as the management plane IP address.
NOTE: In the active/standby mode, the management nodes are deployed on two VMs. If the active node is faulty, the system rapidly switches services to the standby node to ensure service continuity. Therefore, the active/standby deployment provides higher reliability than the single-node deployment.
eDME: eDME is a Huawei-developed intelligent O&M platform that centrally manages software and hardware for virtualization scenarios. eDME supports three-node deployment (non-multi-tenant scenario) and five-node deployment (multi-tenant scenario). The failure of a single node does not affect the management services.
(Optional) iMaster NCE-Fabric: Used only in the network overlay SDN solution. iMaster NCE-Fabric manages switches in the data center and automatically delivers service configurations. FusionCompute associates with iMaster NCE-Fabric. iMaster NCE-Fabric detects VM login, logout, and migration status and automatically configures the VM interworking network.
(Optional) FSM: A server that runs the FusionStorage Manager (FSM) process of OceanStor Pacific series storage. It provides operation and maintenance (O&M) functions, such as alarm reporting, monitoring, logging, and configuration, for OceanStor Pacific series block storage. FSM needs to be deployed in active/standby mode only in the scale-out storage deployment scenario.
OceanStor Dorado 3000 V6: A flash storage device, which provides storage resources. The figure uses OceanStor Dorado 3000 V6 as an example. For other models, see Huawei Storage Interoperability Navigator.
OceanStor Pacific 9520: A scale-out storage device, which provides storage resources. The figure uses OceanStor Pacific 9520 as an example. For other models, see Huawei Storage Interoperability Navigator.
eBackup: A piece of Huawei-developed backup software for virtual environments. It provides comprehensive protection for user data in virtualization scenarios based on VM/disk snapshot and Changed Block Tracking (CBT) technologies.
eCampusCore: This component is optional and can be deployed only on the Region Type II network (network overlay SDN scenario). eCampusCore is an enterprise-level platform for application and data integration. It provides connections between IT systems and OT devices and pre-integrated assets for digital scenarios in the enterprise market.
FusionCube 1000H: A hyper-converged deployment solution, which includes servers, storage devices, and switches. eDME is deployed on two MCNA nodes and one SCNA node of FusionCube 1000H.
- MCNA: A node that provides management and compute functions. Management software, such as FusionCube Vision, VRM, and FusionStorage Manager, is deployed on MCNA.
- SCNA: A node that provides storage and compute functions.
eDME: eDME is a Huawei-developed intelligent O&M platform that centrally manages software and hardware for virtualization scenarios. eDME supports three-node deployment (non-multi-tenant scenario) and five-node deployment (multi-tenant scenario). The failure of a single node does not affect the customer's management services.
CNA (physical deployment): Multiple hosts are deployed based on customer requirements on compute resources to provide virtual compute resources. A host also provides storage resources when local storage is used. When VRM nodes are deployed on VMs, hosts must be specified to create the VMs. If a small number of hosts, for example, fewer than 10 hosts, are used, you can add all the hosts to the management cluster, enabling integrated deployment of the management cluster and the user service cluster. If a large number of hosts are deployed, you are advised to add the hosts to one or multiple service clusters by the services they provide to facilitate service management. To maximize compute resource utilization for a cluster, you are advised to configure the same distributed switches and datastores for hosts in the same cluster.
VRM (virtualization deployment): In VRM virtualization deployment, select two hosts in the management cluster and deploy the active and standby VRM VMs on these hosts.
eDME (virtualization deployment): In separated deployment, select three hosts in the management cluster for eDME and deploy three nodes on these hosts. In hyper-converged deployment, eDME is deployed on two MCNA nodes and one SCNA node of FusionCube 1000H.
(Optional) iMaster NCE-Fabric (physical deployment): iMaster NCE-Fabric is used only in the network overlay SDN solution. The network overlay SDN solution applies only to separated deployment scenarios. iMaster NCE-Fabric is delivered in appliance mode, facilitating deployment and improving reliability. In the single-cluster deployment solution of iMaster NCE-Fabric, three servers are deployed as a cluster in a data center (DC) to manage all switches in the DC. An iMaster NCE-Fabric cluster can also manage switches in multiple DCs. FusionCompute associates with iMaster NCE-Fabric. iMaster NCE-Fabric detects VM login, logout, and migration status and automatically configures the VM interworking network.
(Optional) FSM (virtualization deployment): FSM must be deployed in active/standby mode on VMs created on FusionCompute only in scale-out storage deployment scenarios.
SFS/ECE/AS service (virtualization deployment): The SFS, Elastic Container Engine, or Auto Scaling service is deployed on two VMs. For each service, the two VMs must be deployed at the same time, and you are advised to deploy the two VMs on different CNA nodes.
eCampusCore (virtualization deployment): At least three FusionCompute physical servers (CNA hosts) are required. The type of the storage pool used by VMs must be Scale-Out Block Storage or Virtualized SAN Storage. Virtualized local disks cannot be used. The remaining resources of a single physical host are calculated based on the division scheme (see the sketch after this list). Examples are as follows:
- The nfs-dns-1, foundation-1, and gaussv5-1 VMs are planned on the physical machine CNA01. The available CPU and memory resources of CNA01 must be greater than or equal to the total CPU and memory resources of all these VMs (20C72G).
- The nfs-dns-2, foundation-2, and gaussv5-2 VMs are planned on the physical machine CNA02. The available CPU and memory resources of CNA02 must be greater than or equal to the total CPU and memory resources of all these VMs (20C72G).
- The installer, foundation-2, ops-1, and ops-2 VMs are planned on the physical machine CNA03. The available CPU and memory resources of CNA03 must be greater than or equal to the total CPU and memory resources of all these VMs (18C80G).
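As a cross-check of the division scheme described above, the following Python sketch sums the planned VM requirements on one host and compares them with the host's available resources. Only the VM names come from the examples; the per-VM vCPU and memory figures and the CNA01 capacity are illustrative assumptions chosen so the totals match the 20C/72G example.

# Minimal sketch: verify that a CNA host can accommodate the VMs planned on it.
def check_host_capacity(host_name, available_vcpus, available_mem_gb, planned_vms):
    """Return True if the host has enough vCPUs and memory for all planned VMs."""
    need_vcpus = sum(vm["vcpus"] for vm in planned_vms)
    need_mem = sum(vm["mem_gb"] for vm in planned_vms)
    ok = available_vcpus >= need_vcpus and available_mem_gb >= need_mem
    print(f"{host_name}: need {need_vcpus}C/{need_mem}G, "
          f"have {available_vcpus}C/{available_mem_gb}G -> {'OK' if ok else 'INSUFFICIENT'}")
    return ok

# Hypothetical per-VM sizing for CNA01 (totals 20C/72G as in the example above).
cna01_vms = [
    {"name": "nfs-dns-1",    "vcpus": 4, "mem_gb": 16},
    {"name": "foundation-1", "vcpus": 8, "mem_gb": 32},
    {"name": "gaussv5-1",    "vcpus": 8, "mem_gb": 24},
]
check_host_capacity("CNA01", available_vcpus=24, available_mem_gb=96, planned_vms=cna01_vms)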
Management plane: monitors the whole system, performs maintenance for system operations, including system configuration, system loading,
and alarm reporting, and manages VMs, including creating, deleting, and scheduling VMs.
Service plane: provides communication for virtual network interface cards (NICs) of VMs with external devices.
Storage plane: provides communication for the storage system and storage resources for VMs. This plane is used for storing and accessing VM
data including data in the system disk and user disk of VMs.
Backend storage plane: This plane is provided for scale-out storage only and is used for interconnection between hosts and storage units of
storage devices and processing background data between storage nodes.
Flash storage: Figure 1 and Figure 2 show the relationship between system communication planes of DCS.
Figure 1 Communication plane relationship diagram (example: flash storage, four network ports)
Figure 2 Communication plane relationship diagram (example: flash storage, six network ports)
Figure 3 Communication plane relationship diagram (example with eDataInsight included: flash storage, four network ports)
Scale-out storage: Figure 4 and Figure 5 show the relationship between system communication planes of DCS.
Figure 4 Communication plane relationship diagram (example: scale-out storage, four network ports)
Figure 5 Communication plane relationship diagram (example: scale-out storage, six network ports)
The Baseboard Management Controller (BMC) network port of each node can be assigned to the BMC plane or the management plane.
You are advised to bind network ports on different NICs to the same plane to prevent network interruption caused by the fault of a single NIC.
When binding network ports on different NICs, ensure that the models of the NICs to be bound are the same. If the models of the NICs to be bound are
different, bind the network ports on the same NIC.
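The NIC-binding rule above (prefer two ports on different NICs of the same model; otherwise fall back to two ports on the same NIC) can be expressed as a small selection routine. The following Python sketch is illustrative only; the port names, NIC identifiers, and model strings are assumptions, not values defined by the product.

# Minimal sketch: pick two ports for one bond according to the rule above.
def pick_bond_members(ports):
    """ports: list of dicts with 'name', 'nic', and 'model'. Returns two port names."""
    # Preferred: two ports on different NICs of the same model.
    for i, p in enumerate(ports):
        for q in ports[i + 1:]:
            if p["nic"] != q["nic"] and p["model"] == q["model"]:
                return [p["name"], q["name"]]
    # Fallback: two ports on the same NIC.
    for i, p in enumerate(ports):
        for q in ports[i + 1:]:
            if p["nic"] == q["nic"]:
                return [p["name"], q["name"]]
    raise ValueError("not enough ports to form a bond")

ports = [
    {"name": "eth0", "nic": "NIC1", "model": "modelA"},
    {"name": "eth1", "nic": "NIC1", "model": "modelA"},
    {"name": "eth2", "nic": "NIC2", "model": "modelA"},
    {"name": "eth3", "nic": "NIC2", "model": "modelA"},
]
print(pick_bond_members(ports))  # e.g. ['eth0', 'eth2']: ports on different NICs of the same model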
Management plane: connects BMC ports of nodes to provide remote hardware device management for system management and maintenance.
Storage plane: enables data communication between VBS and OSD nodes or between OSD nodes.
Service plane: enables communication between compute nodes and VBS nodes through the iSCSI protocol.
The VLAN planning principles for the separated deployment solution are as follows:
Flash storage: Table 1 shows the VLAN assignment on the system communication plane of DCS.
Scale-out storage: Table 2 shows the VLAN planning for the system communication plane of DCS.
For details about the VLAN planning principles for the hyper-converged deployment solution, see Installation and Configuration > Site
Deployment > Site Deployment (Preinstallation) > Planning Data > Network Parameters and Installation and Configuration > Site
Deployment > Site Deployment (Onsite Installation) > Planning Data > Network Parameters in FusionCube 1000H Product
Documentation (FusionCompute).
Table 1 VLAN assignment on the system communication plane of DCS (flash storage)
Management plane:
- Network ports eth0 and eth1 on hosts (Bond1) and network ports eth0 and eth1 on the active and standby VRM nodes (Bond1): Network ports eth0 and eth1 on each node are assigned to the management plane VLAN, and the VLAN to which network ports eth0 and eth1 on the node belong becomes the default VLAN of the management plane.
- BMC network ports on VRM and hosts: The switch port connected to the BMC network port on each node is assigned to the BMC plane VLAN, and the VLAN to which the BMC network port on the node belongs is the default VLAN of the BMC plane. NOTE: The BMC network port can be assigned to an independent BMC plane or to the same VLAN to which the management network port is assigned. The specific assignment depends on the actual network planning.
Storage plane:
- Storage network ports A1, A2, A3, A4, B1, B2, B3, and B4 on SAN storage devices, and storage network ports eth2 and eth3 on hosts (Bond2): The VLAN is configured as required. At least one VLAN must be assigned. For higher reliability, it is recommended that more VLANs be assigned. The network port eth2 can access ports A1, A2, B1, and B2 over the Layer 2 network. The network port eth3 can access ports A3, A4, B3, and B4 over the Layer 2 network. This allows compute resources to access storage resources through multiple paths. Therefore, the storage plane network reliability is ensured.
Service plane:
- Service network ports eth0 and eth1 on hosts (Bond1): The service plane is divided into multiple VLANs to isolate VMs. All data packets from different VLANs are forwarded over the service network ports on the CNA node. The data packets are marked with VLAN tags and sent to the service plane network port of the switch at the access layer. NOTE: In the four-network-port scenario, the management plane and the service plane share physical network ports and are logically isolated by VLANs.
Table 2 VLAN planning for the system communication plane of DCS (scale-out storage)
Management plane:
- Network ports eth0 and eth1 on hosts (Bond1) and network port eth0 on the active and standby VRM nodes: Network ports eth0 and eth1 on each node are assigned to the management plane VLAN, and the VLAN to which network ports eth0 and eth1 on the node belong becomes the default VLAN of the management plane.
- BMC network ports on VRM and hosts: The switch port connected to the BMC network port on each node is assigned to the BMC plane VLAN, and the VLAN to which the BMC network port on the node belongs is the default VLAN of the BMC plane. NOTE: The BMC network port can be assigned to an independent BMC plane or to the same VLAN to which the management network port is assigned. The specific assignment depends on the actual network planning.
Storage plane:
- SLOT5-1 and SLOT5-2: The storage VLAN is assigned based on the planning.
- Storage network ports eth2 and eth3 on hosts (Bond2): Storage network ports eth2 and eth3 form bond 2, which forms a VLAN with the storage network ports.
Backend storage plane:
- SLOT3-1 and SLOT3-2: The backend storage VLAN is assigned based on the network planning.
Service plane:
- Service network ports eth4 and eth5 on hosts: The service plane is divided into multiple VLANs to isolate VMs. All data packets from different VLANs are forwarded over the service network ports (eth4 and eth5) on the CNA node. The data packets are marked with VLAN tags and sent to the service plane network port of the switch at the access layer. NOTE: In the four-network-port scenario, the management plane and the service plane share physical network ports and are logically isolated by VLANs.
Two floating IP addresses of the ECE nodes
One IP address for ECE load balancing
Two IP addresses of the AS service nodes
If one set of IP SAN storage is used, at least two management IP addresses are required for the storage server.
Table 5 lists the requirements for management IP addresses of network devices. You can adjust the IP addresses based on the actual networking.
If there are two compute leaf switches, two management leaf switches, two storage leaf switches, and two spine switches, at least 13 management IP addresses are
required.
Table 6 describes the IP address planning for the storage plane. The following table lists the requirements for storage plane IP addresses when IP
SAN storage is used. When FC SAN storage is used, storage plane IP addresses are not required. When the OceanStor Pacific block storage is used,
you need to allocate an OceanStor Pacific block storage plane IP address to each host so that each host can access the OceanStor Pacific block
storage resource pool.
IP SAN storage IP address: Each host is assigned two IP addresses and each storage port is assigned one IP address. The IP address of the IP SAN storage interface and the IP address of the storage sub-interface on the CNA server must be configured.
OceanStor Pacific block storage IP address: Each host is assigned an OceanStor Pacific block storage plane IP address so that each host can access OceanStor Pacific block storage resources. Generally, the backend storage plane of the OceanStor Pacific block storage is a VLAN plane. The IP addresses are used only for the internal storage plane of the OceanStor Pacific block storage (they can be private addresses and do not communicate with external networks).
Total number of IP addresses required by the IP SAN storage plane = Number of IP SAN storage ports + 2 × Number of CNA nodes. For example, if one dual-controller system is used, each controller has four ports for IP SAN storage, and 10 CNA nodes are deployed, 28 (8 + 10 × 2) storage IP addresses are required.
Total number of IP addresses required by the OceanStor Pacific block storage plane = Number of storage nodes + Number of CNA nodes. For example, if 20 storage nodes form an OceanStor Pacific block storage pool and 20 CNA nodes are deployed, 40 (20 + 20) storage IP addresses are required.
Service IP address planning: Refer to the number of VMs and vNICs and reserve certain resources. Total number of IP addresses required by the service plane = Total number of VM NICs × 120%. (These counting rules are illustrated in the sketch after this list.)
Public network IP address planning: For details, see the public network mapping.
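The counting rules above translate directly into arithmetic. The following Python sketch restates them; the first two calls reuse the worked examples from the text, and the 500 VM NICs in the last call is an assumed figure for illustration.

import math

def ip_san_storage_ips(storage_ports, cna_nodes):
    # Total = number of IP SAN storage ports + 2 x number of CNA nodes
    return storage_ports + 2 * cna_nodes

def pacific_block_storage_ips(storage_nodes, cna_nodes):
    # Total = number of storage nodes + number of CNA nodes
    return storage_nodes + cna_nodes

def service_plane_ips(vm_nic_count):
    # Total = total number of VM NICs x 120% (rounded up)
    return math.ceil(vm_nic_count * 1.2)

print(ip_san_storage_ips(storage_ports=8, cna_nodes=10))          # 28, the worked example above
print(pacific_block_storage_ips(storage_nodes=20, cna_nodes=20))  # 40, the worked example above
print(service_plane_ips(vm_nic_count=500))                        # 600; 500 VM NICs is an assumed figure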
Management plane (VLAN name: Management; VLAN ID: 3; server leaf switch; Manage):
- eDME cloud platform management plane, which connects to the FusionCompute management plane, storage management plane, server BMC management plane, and switch management plane.
- Internal management plane, which includes VRM, CNA, iMaster NCE-Fabric interconnection, O&M management, network configuration management, upgrade configuration management, alarm channel, information collection, and common services.
iMaster NCE-Fabric management plane (VLAN name: iMaster NCE-Fabric_Manage; VLAN ID: 10; server leaf switch; Manage):
- Used for northbound communication and Linux management, such as FusionCompute interconnection, web access, and Linux management login.
- Internal communication plane of the iMaster NCE-Fabric node.
iMaster NCE-Fabric management plane (VLAN name: iMaster NCE-Fabric_Service; VLAN ID: 11; server leaf switch; N/A):
- Used for communication with network devices in the southbound direction through protocols like NETCONF, SNMP, and OpenFlow.
Storage plane (VLAN name: Storage_Data; VLAN ID: 16 to 30; leaf switch; N/A):
- Used for communication between compute nodes and service storage nodes. Gateways can be deployed on leaf nodes.
Service plane (VLAN name: OverLay_Service; VLAN ID: 31 to 999/1000 to 1999/2000 to 2999; N/A):
- Tenant VMs carry services over network overlay tunnels.
Storage and server: The interconnection ports between spine and leaf nodes adopt the Layer 3 networking mode. Therefore, the switching
VLANs of server leaf nodes and storage leaf nodes can be planned independently, and the VLANs must be unique in the Layer 2
interconnection domain of the switch.
iMaster NCE-Fabric: This VLAN is used for interconnection between the border Leaf switch and firewalls and between the border leaf switch
and LBs. The global VLAN required for VPC service provisioning must be unique.
Device interconnection VLAN (2 to 30):
Usage: When the underlay network is manually constructed, VLANIF interfaces are used to establish links between some devices, including iMaster NCE-Fabric in-band management links between firewalls and gateways, service links between F5 (third-party load balancers) and gateways, and management links between iMaster NCE-Fabric servers and server leaf nodes.
Quantity planning suggestions:
- For management links between firewalls and gateways, a group of firewalls requires one VLAN.
- For service links between F5 load balancers and gateways, each VPN requiring the load balancing function in the VPC occupies one VLAN (which can be the service VLAN described in subsequent sections).
- The management network of the iMaster NCE-Fabric cluster requires one VLAN.
- To sum up, plan interconnection VLAN resources in advance based on the actual service design.
Service VLAN (31 to 999/1000 to 1999/2000 to 2999):
Usage: This VLAN is used by physical machines and VMs to connect to server leaf nodes on the SDN when iMaster NCE-Fabric delivers overlay network configurations.
Quantity planning suggestions: Service VLANs of different subnets can be reused. It is recommended that 3000 service VLANs be reused. This VLAN can be dynamically adjusted for future use.
iMaster NCE-Fabric interconnection VLAN (3000 to 3499):
Usage: This VLAN is used for interconnection between logical routers in tenant VPCs and tenant vSYS firewalls when iMaster NCE-Fabric delivers overlay network configurations.
Quantity planning suggestions: In a VPC, each service VPN requiring the vSYS firewall occupies one VLAN. This VLAN can be the service VLAN. This VLAN can be dynamically adjusted for future use.
Reserved VLAN for Layer 3 main interfaces (3500 to 4000):
Usage: This VLAN is required for configuring Layer 3 main interfaces on CE68 series switches when the underlay network is manually constructed. Creating a Layer 3 main interface occupies a VLAN. Therefore, you need to set the range of reserved VLANs in advance so that the system can automatically occupy the reserved VLANs when creating a Layer 3 main interface.
Quantity planning suggestions: The reserved VLANs can be dynamically adjusted as required. It is recommended that six reserved VLANs be configured for the CE6855 switch and 32 reserved VLANs be configured for the CE7855 switch.
Default reserved VLANs (4064 to 4094):
Usage: These VLANs serve as internal control plane channels of switches and channels for transmitting user service data of some features.
Quantity planning suggestions: You are advised to retain the default value. The reserved VLAN range can be changed on a CE series switch using command lines so that the default reserved VLAN range does not overlap with the planned or existing ones.
DCS management plane VLAN (2 to 15):
Usage: This VLAN is used for the management communication among FusionCompute, eDME, and BMC management (1 VLAN), and for management communication between iMaster NCE-Fabric southbound and northbound (2 VLANs). The value must be unique in a device.
Quantity planning suggestions: You are advised to reserve 14 VLANs. By default, 3 VLANs are used. If the BMC network plane is independently planned, 4 VLANs are required.
DCS storage plane VLAN (16 to 30):
Usage: This VLAN is used for the communication plane between compute node hosts and storage devices. The value must be unique in a device.
Quantity planning suggestions: You are advised to reserve 15 VLANs. By default, 4 VLANs are used. You can assign VLANs based on the storage device and service type.
Except DCS management and storage plane VLANs, other VLANs are globally reserved for iMaster NCE-Fabric to build the overlay VPC network.
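A VLAN plan like the one above can be recorded as a small data structure and checked for accidental overlaps (for example, to keep the default reserved range apart from the planned ranges, as required above). The following Python sketch is illustrative only; the ranges are the defaults from the table, and the device interconnection VLAN range (2 to 30) spans the same numbers as the DCS management and storage plane ranges, so it is left out of this particular check.

# Minimal sketch: verify that planned VLAN ranges do not overlap.
vlan_plan = {
    "DCS management plane VLAN": (2, 15),
    "DCS storage plane VLAN": (16, 30),
    "Service VLAN": (31, 2999),
    "iMaster NCE-Fabric interconnection VLAN": (3000, 3499),
    "Reserved VLAN for Layer 3 main interfaces": (3500, 4000),
    "Default reserved VLANs": (4064, 4094),
}

def check_no_overlap(plan):
    # Sort ranges by their starting VLAN ID and compare each pair of neighbours.
    ranges = sorted(plan.items(), key=lambda item: item[1][0])
    for (name_a, (_, end_a)), (name_b, (start_b, _)) in zip(ranges, ranges[1:]):
        if start_b <= end_a:
            raise ValueError(f"VLAN ranges overlap: {name_a} and {name_b}")
    print("No overlapping VLAN ranges in the plan")

check_no_overlap(vlan_plan)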
Management plane: device management IP address, SDN controller management and control protocol IP address, and management plane IP
address (eDME, FusionCompute, iMaster NCE-Fabric, and storage management IP address).
Service plane: storage service IP address, Layer 3 interconnection IP address of underlay switches, overlay service IP address, and public
network IP address.
Out-of-band management IP address (192.168.39.11/24)
Usage: Out-of-band management address of the device, which is used to remotely log in to and manage the device. The following network ports are involved:
- BMC network port of the server (including compute node, converged node, and iMaster NCE-Fabric node)
- Management network port of the switch (Meth)
- Management network port of the firewall (GigabitEthernet0/0/0)
- Management network port of the F5 LB (Mgmt)
- Management network port of the storage device (Mgmt)
Planning suggestions: Plan the number of IP addresses based on the number of devices on the live network.
Loopback IP address (10.125.99.1/32)
Usage: This IP address is used as the VTEP address, router ID, iMaster NCE-Fabric in-band management address, and DFS group address when the underlay network is manually deployed.
Planning suggestions: Each switch needs to be configured with two loopback addresses. In the full M-LAG networking scenario, the loopback configuration of each CE series switch is as follows: the two member devices in the M-LAG have the same loopback 0 address but different loopback 1 addresses.
- Loopback 0: VTEP address
- Loopback 1: router ID, iMaster NCE-Fabric in-band management address, and DFS group address
Device interconnection IP address (10.125.97.1/29)
Usage: This IP address is used as the IP address for the interconnection between spine leaf nodes and server leaf nodes and as the management IP address for the interconnection between spine leaf nodes, firewalls, and LBs when the underlay network is manually deployed.
Planning suggestions:
- The Layer 3 interconnection link between a spine leaf node and a server leaf node occupies a 30-bit network segment. If the networking scale is large, you can bundle multiple links into an Eth-Trunk and configure the Eth-Trunk as a Layer 3 main interface to reduce the number of IP addresses to be used.
- The management interconnection link between a group of firewalls and spine nodes occupies five IP addresses (two firewall IP addresses, two physical addresses on spine nodes, and one virtual IP address of VRRP).
- The service interconnection link between a group of F5 LBs and spine nodes occupies at least seven IP addresses (two interconnection interface IP addresses on F5 LBs, one floating IP address, two physical addresses on spine nodes, and one virtual IP address of VRRP). The number of service VIPs depends on the service type. Different service VPNs can use the same IP address.
- Calculate the number of occupied address segments and the specific range based on the network scale.
iMaster NCE-Fabric cluster access IP address (10.125.100.1/24)
Usage: This address is used to deploy the various addresses required by the iMaster NCE-Fabric cluster server and to configure the gateway of the cluster on server leaf switches when the underlay network is manually deployed.
Planning suggestions: For the dual-plane networking, two network segments need to be planned:
- Each server in the cluster is configured with its own NIC bond address. Each server requires two IP addresses (in different network segments). If the cluster has three nodes, 6 IP addresses are required. If the cluster has five nodes, 10 IP addresses are required. The number of IP addresses required by other nodes can be obtained in the same manner.
- Server leaf nodes are deployed in M-LAG mode, and VRRP is configured as the gateway of the controller cluster. Each plane requires two physical IP addresses and one virtual IP address. Therefore, six IP addresses are required for the two planes.
- One southbound floating IP address is required for the controller cluster. That is, the entire cluster requires only one IP address.
- One northbound floating IP address is required for the controller cluster. That is, the entire cluster requires only one IP address.
- Four IP addresses are required for the internal communication of the controller cluster, which are in the same network segment as the northbound floating IP address.
iMaster NCE-Fabric interconnection IP address (10.125.97.240 to 10.125.97.255/30)
Usage: This address is used when iMaster NCE-Fabric delivers overlay network configurations. When tenant service traffic needs to pass through the firewall, this IP address is used for service interconnection between the tenant VPC and the tenant vSYS firewall through the spine node.
Planning suggestions: A pair of interconnection IP addresses with a mask having 30 consecutive leading 1-bits is required for a group of firewalls. This IP address can be dynamically adjusted for future use.
Public IP address (-)
Usage: NAT (Network Address Translation) address pool of the SDN DC, which includes the NAT addresses used by tenants and the NAT addresses delivered by iMaster NCE-Fabric.
Planning suggestions: Set this IP address based on the actual public network services.
Interconnection IP address (10.125.91.0 to 10.125.91.255/30)
Usage: This address is used when iMaster NCE-Fabric delivers overlay network configurations. If multiple VPCs need to communicate with each other through different firewall groups or gateway groups, iMaster NCE-Fabric needs to deliver interconnection IP addresses for interconnection. (For details, see Configuration Guide > Traditional Mode > Commissioning > Resource Pool Management > Configuring Global Resources in iMaster NCE-Fabric V100R024C00 Product Documentation.)
Planning suggestions: Set this IP address based on the actual networking application scenario. This IP address is not involved in the scenario where a single gateway group and a single physical firewall group are deployed. This IP address can be dynamically adjusted for future use.
Service IP address (10.132.1.0/24)
Usage: This IP address is used to create a VBDIF Layer 3 interface as the gateway of PMs or VMs when iMaster NCE-Fabric delivers overlay network configurations.
Planning suggestions: Plan this IP address based on the actual deployment and service scale. This IP address can be dynamically adjusted for future use.
Storage IP address (192.168.1.11/24)
Usage: This IP address is used for interconnection between compute node hosts and storage devices. If this IP address is used as a Layer 3 interface, you need to plan the IP address of the VLANIF gateway.
Quantity planning suggestions: Plan the quantity based on the actual deployment and service scale. This IP address can be dynamically adjusted for future use.
FusionCompute management IP address of the DCS system (192.168.40.11/24)
Usage: The IP addresses are used for FusionCompute cluster interconnection and northbound management, which include two FusionCompute management IP addresses and one floating IP address.
Quantity planning suggestions: Plan the quantity based on the actual deployment and service scale. The IP addresses can be dynamically adjusted for future use.
eDME management IP address of the DCS system (192.168.40.11/24)
Usage: The IP addresses are used for the network communication of eDME, including three management IP addresses and two floating IP addresses for the three nodes in the eDME cluster.
Quantity planning suggestions: Plan the quantity based on the actual deployment and service scale. The IP addresses can be dynamically adjusted for future use.
eBackup management IP address of DCS (192.168.40.11/24)
Usage: The IP addresses are used for the network communication of eBackup, including three management IP addresses and two floating IP addresses of eBackup.
UltraVR management IP address of DCS (192.168.40.11/24)
Usage: The IP address is used for the network communication of UltraVR, including one management IP address of UltraVR.
Except DCS management and storage IP address and out-of-band management IP address, other IP addresses are globally reserved for iMaster NCE-Fabric to build
the overlay VPC network.
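The per-link and per-cluster counts in the planning notes above can be added up when sizing address segments. The following Python sketch does so; the individual counts are taken from the planning notes, while the three-node cluster size and the idea of summing everything into one total are assumptions for illustration.

# Minimal sketch: add up IP-address requirements stated in the planning notes above.
def firewall_group_mgmt_ips():
    # Two firewall IPs + two physical IPs on spine nodes + one VRRP virtual IP
    return 5

def f5_group_service_ips():
    # Two interconnection IPs on the F5 LBs + one floating IP
    # + two physical IPs on spine nodes + one VRRP virtual IP (minimum)
    return 7

def nce_fabric_cluster_ips(cluster_nodes=3):
    per_node = 2 * cluster_nodes  # two NIC bond addresses per server (dual-plane networking)
    gateways = 6                  # two physical + one virtual IP per plane, for two planes
    floating = 1 + 1              # one southbound + one northbound floating IP
    internal = 4                  # internal communication of the controller cluster
    return per_node + gateways + floating + internal

print(firewall_group_mgmt_ips())     # 5 per firewall group
print(f5_group_service_ips())        # at least 7 per F5 LB group
print(nce_fabric_cluster_ips(3))     # 18 for an assumed three-node cluster under these rules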
1. Port Slot1.P0 on Host001: Connects to port A.IOM0.P0 and port B.IOM0.P0 on Storage001 using Switch001. VLAN 55, IP address 192.168.5.5, subnet mask 255.255.255.0.
2. Port Slot1.P1 on Host001: Connects to port A.IOM0.P1 and port B.IOM0.P1 on Storage001 using Switch002. VLAN 66, IP address 192.168.6.5, subnet mask 255.255.255.0.
3. Port A.IOM0.P0 on Storage001: Connects to port Slot1.P0 on Host001 using Switch001. VLAN 55, IP address 192.168.5.6, subnet mask 255.255.255.0.
5. Port A.IOM0.P1 on Storage001: Connects to port Slot1.P1 on Host001 using Switch002. VLAN 66, IP address 192.168.6.6, subnet mask 255.255.255.0.
4. Port B.IOM0.P0 on Storage001: Connects to port Slot1.P0 on Host001 using Switch001. VLAN 55, IP address 192.168.5.7, subnet mask 255.255.255.0.
6. Port B.IOM0.P1 on Storage001: Connects to port Slot1.P1 on Host001 using Switch002. VLAN 66, IP address 192.168.6.7, subnet mask 255.255.255.0.
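The rows above pair each connection with a VLAN, an IP address, and a subnet mask (reading the 55/66 column as VLAN IDs, which is an interpretation of the flattened table). The following Python sketch replays those rows and checks that ports in the same VLAN share one subnet; it is an illustrative aid, not part of the product.

# Minimal sketch: verify that ports in the same VLAN use the same subnet.
import ipaddress

plan = [
    {"port": "Host001 Slot1.P0",     "vlan": 55, "ip": "192.168.5.5", "mask": "255.255.255.0"},
    {"port": "Host001 Slot1.P1",     "vlan": 66, "ip": "192.168.6.5", "mask": "255.255.255.0"},
    {"port": "Storage001 A.IOM0.P0", "vlan": 55, "ip": "192.168.5.6", "mask": "255.255.255.0"},
    {"port": "Storage001 A.IOM0.P1", "vlan": 66, "ip": "192.168.6.6", "mask": "255.255.255.0"},
    {"port": "Storage001 B.IOM0.P0", "vlan": 55, "ip": "192.168.5.7", "mask": "255.255.255.0"},
    {"port": "Storage001 B.IOM0.P1", "vlan": 66, "ip": "192.168.6.7", "mask": "255.255.255.0"},
]

subnets = {}
for row in plan:
    net = ipaddress.ip_network(f'{row["ip"]}/{row["mask"]}', strict=False)
    subnets.setdefault(row["vlan"], set()).add(net)

for vlan, nets in sorted(subnets.items()):
    status = "OK" if len(nets) == 1 else "MISMATCH"
    print(f"VLAN {vlan}: {sorted(str(n) for n in nets)} -> {status}")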
Network Requirements
Memory: > 2 GB.
Disk: The available space of the disk partition where SmartKit is installed is greater than 50 GB.
Resolution: For better visual effect, the recommended resolution is 1920 x 1080 or higher.
Network: The local PC can communicate with the planned management plane.
Permissions (recommended): You are advised to use the administrator account on the local PC. Otherwise, SmartKit will display a dialog box asking you to obtain the administrator permissions during eDME deployment, and SmartKit can run the eDME deployment script only after you confirm the operation. You can modify the User Account Control settings to control whether the dialog box is displayed:
- Open the Windows Control Panel.
- Choose System > Security and Maintenance > Change User Account Control settings.
- Move the slider on the left to Never notify. After the deployment is complete, restore the default value.
Separated deployment
- Flash storage deployment: The requirements on management system host resources include the requirements on the VRM and eDME components.
- Scale-out storage deployment: The requirements on management system host resources include the requirements on the VRM, eDME, and (optional) FSM components.
Hyper-converged deployment
- The requirements on management system host resources include the requirements on the eDME component.
VRM Node Management Scale | Specifications of VRM Nodes for Connecting to eDME (Container Management Disabled on FusionCompute) | Specifications of VRM Nodes for Connecting to eDME (Container Management Enabled on FusionCompute)
- Container management disabled: memory size ≥ 8 GB, disk size ≥ 140 GB. Container management enabled: memory size ≥ 12 GB, disk size ≥ 620 GB.
- 5000 VMs or 101 to 200 physical hosts: not supported (with container management disabled or enabled).
Component | HA Mode | Number of VMs | Management Scale | CPU | Memory | Storage Space
- HA mode: five-node cluster (three on the O&M portal and two on the operation portal); management scale: 1,000 VMs; CPU: ≥ 6 (O&M portal), ≥ 8 (Hygon or Arm, O&M portal), ≥ 8 (operation portal); memory: ≥ 48 GB (O&M portal), ≥ 32 GB (operation portal); storage space: system disk ≥ 55 GB; data disk ≥ 600 GB (O&M portal) and ≥ 820 GB (operation portal).
- HA mode: five-node cluster (three on the O&M portal and two on the operation portal); management scale: 10,000 VMs; CPU: ≥ 16 (O&M portal), ≥ 16 (operation portal); memory: ≥ 64 GB (O&M portal), ≥ 32 GB (operation portal); storage space: system disk ≥ 55 GB; data disk ≥ 1,024 GB (O&M portal) when 3,000 VMs are deployed, ≥ 1,536 GB (O&M portal) when 5,000 VMs are deployed, ≥ 2,560 GB (O&M portal) when 10,000 VMs are deployed, and ≥ 820 GB (operation portal).
VM Host Name | VM Type | CPU | Memory (GB) | System Disk (GB) | Data Disk 1 (GB) | Data Disk 2 (GB) | Data Disk 3 (GB) | Data Disk 4 (GB)
- VMs for deploying the eDataInsight management plane: three; 1:1; CPU: 8; memory: 32 GB; system disk: 92 GB; data disk 1: 200 GB; data disk 2: -. Used for CloudSOP installation. A maximum of 1,000 hosts can be managed.
- VMs for deploying eDataInsight components: three or more; 1:1.5; CPU: 16; memory: 64 GB; system disk: 92 GB; data disk 1: 200 GB; data disk 2: ≥ 500 GB. Used for eDataInsight component installation. The number of VMs and CPU cores and the sizes of memory and data disks can be dynamically adjusted.
VM Host Name | Number of vCPUs (Cores) | Memory (GB) | System Disk (GB) | Data Disk 1 (GB) | Data Disk 2 (GB) | Data Disk 3 (GB)
- SFS_DJ01: 8 vCPUs; 8 GB memory; 240 GB system disk.
- SFS_DJ02: 8 vCPUs; 8 GB memory; 240 GB system disk.
If local storage devices are used, only available space on the disk where you install the host OS and other bare disks can be used for data
storage.
If shared storage devices are used, including SAN and NAS storage devices, you must configure the management IP addresses and storage link IP
addresses for them. The following conditions must be met for different storage devices:
If SAN devices are used and you have requirements for thin provisioning and storage cost reduction, you are advised to use the thin
provisioning function provided by the Virtual Image Management System (VIMS), rather than the Thin LUN function of SAN devices.
If you use the Thin LUN function of the underlying physical storage devices, an alarm indicating insufficient storage space may be generated after you delete a VM, typically because the storage space used by the VM is not zeroed out.
If SAN storage devices are used, configure LUNs or storage pools (datastores) as planned and map them to corresponding hosts.
If NAS storage devices are used, configure shared directories (datastores) and a list of hosts that can access the shared directories as planned,
and configure no_all_squash and no_root_squash.
The OS compatibility of some non-Huawei SAN storage devices varies depending on the LUN space. For example, if the storage space of a
LUN on a certain SAN storage device is greater than 2 TB, certain OSs can identify only 2 TB storage space on the LUN. Therefore, review
your storage device product documentation to understand the OS compatibility of the non-Huawei SAN storage devices before you use the
devices.
If SAN storage devices are used, you are advised to connect storage devices to hosts using the iSCSI protocol. The iSCSI connection does not
require additional switches, thereby reducing costs.
Except for FusionCompute which uses local storage, other components should preferentially use shared storage. eDataInsight must use shared storage.
In the DCS environment, a single virtualized SAN datastore can be added only to hosts using the same type of CPUs.
In a system that uses datastores provided by shared storage, add the datastores to all hosts in the same cluster to ensure that the VM migration within a cluster is
not affected by the datastores.
Local disks can be provided only for the host accommodating the disks. Pay attention to the following when using local storage:
Configure the local disk capacity in approximate proportion to the host's compute resources. With this proportion configured, if the host compute resources become exhausted, local storage resources become exhausted at roughly the same time, preventing resource waste caused by an imbalance.
Storage virtualization provides better performance in small-scale scenarios. Therefore, it is recommended that a maximum of 16 hosts be connected
to the same virtual datastore.
SNMP and SSH must be enabled for the switches to enhance security. SNMPv3 is recommended. For details, see "SNMP Configuration" in the
corresponding switch product documentation. For example, see "Configuration" > "Configuration Guide" > "System Management
Configuration" > "SNMP Configuration" in CloudEngine 8800 and 6800 Series Switches Product Documentation .
To ensure networking reliability, you are advised to configure M-LAG for switches. If NICs are sufficient, you can use two or more NICs for
connecting the host to each plane. For details, see "M-LAG Configuration" in the corresponding switch product documentation. For details, see
"Configuration" > "Configuration Guide" > "Ethernet Switching Configuration" > "M-LAG Configuration" in CloudEngine 8800 and 6800
Series Switches Product Documentation .
Table 1 describes the requirements for communication between network planes in the system.
BMC plane: Specifies the plane used by the BMC network ports on hosts. This plane enables remote access to the BMC system of a server. Communication requirement: The management plane and the BMC plane of the VRM node can communicate with those planes of the eDME node. The management plane and the BMC plane can be combined.
Management plane: Specifies the plane used by the management system to manage all nodes in a unified manner. This plane provides the following IP addresses: management IP addresses of all hosts (that is, IP addresses of the management network ports on hosts) and IP addresses of management VMs. Communication requirement: VRM nodes can communicate with CNA nodes over the management plane. The eDataInsight management plane can communicate with the FusionCompute management plane.
Service plane: Specifies the plane used by user VMs. Communication requirement: The management plane where eDataInsight management nodes reside needs to communicate with the VM service plane of compute nodes. In the decoupled storage-compute scenario, the eDataInsight service plane needs to communicate with the OceanStor Pacific HDFS service plane.
Storage plane: This plane is used for interconnection between hosts and storage units of storage devices and for processing foreground data between storage nodes. This plane provides the following IP addresses: storage IP addresses of all hosts (that is, IP addresses of the storage network ports on hosts) and storage IP addresses of storage devices. If the multipathing mode is in use, configure multiple VLANs for the storage plane. Communication requirement: Hosts communicate properly with storage devices over the storage plane. You are advised not to use the management plane to carry storage services. This ensures storage service continuity even when you subsequently expand the capacity of the storage plane.
SmartKit plane: Specifies the management plane of the host where SmartKit is located. Communication requirement: The SmartKit plane communicates with the BMC plane and management plane.
iMaster NCE-Fabric plane: Specifies the iMaster NCE-Fabric management plane. Communication requirement: The iMaster NCE-Fabric plane communicates with the management plane.
Switch: In the physical networking, a single leaf switch can be deployed or the M-LAG networking can be used.
Cluster: All servers in a cluster are of the same model, and the VLAN assignments for all clusters are the same.
Preparing for installation - Obtaining Documents, Tools, and Software Packages (separated and hyper-converged deployment): Obtain the documents, tools, and software packages required for the installation.
Integration design: The integration design phase covers the planning and design of DCS, including the LLD of the system architecture, resource requirements, compute system, network system, storage system, and O&M. The LLD template is output to provide guidance for software and hardware installation.
Planning - Communication Ports: Plan the communication ports and protocols used for DCS.
Planning - Accounts and Passwords: Obtain the passwords and accounts used for deploying DCS.
Planning - Preparing Data: Plan the host and VRM information required for installing the software.
Deploying hardware - Installing Devices: Hardware involved in DCS includes servers, storage devices, and switches, or hardware devices in hyper-converged scenarios.
Deploying hardware - Installing Signal Cables: Install signal cables for servers and switches.
Deploying hardware - Configuring Hardware Devices: Configure the installed servers, storage devices, and switches, or hardware devices in hyper-converged scenarios.
Deploying software - Unified DCS Deployment (Separated Deployment Scenario): Unified DCS deployment indicates that SmartKit is used to install FusionCompute, eDME, UltraVR (optional), eBackup (optional), eDataInsight (optional), HiCloud (optional), and SFS (optional). FusionCompute virtualizes hardware resources and centrally manages virtual resources, service resources, and user resources. Create three management VMs on FusionCompute. Management VMs are used to install eDME and (optional) FSM. eDME is a Huawei-developed intelligent O&M platform that centrally manages software and hardware for virtualization scenarios.
Deploying software - Configuring Interconnection Between iMaster NCE-Fabric and FusionCompute (separated deployment scenario): This configuration is required only in the network overlay SDN solution. FusionCompute associates with iMaster NCE-Fabric. iMaster NCE-Fabric detects VM login, logout, and migration status and automatically configures the VM interworking network.
Deploying software - Configuring Interconnection Between iMaster NCE-Fabric and eDME (separated deployment scenario): This configuration is required only in the network overlay SDN solution. Configure iMaster NCE-Fabric to interconnect with eDME so that iMaster NCE-Fabric can be managed in eDME.
Deploying software - Installing FabricInsight (separated deployment scenario): This configuration is required only in the network overlay SDN solution. Install FabricInsight to interconnect with eDME.
Deploying software - (Optional) Installing FSM (separated deployment scenario): Only in scale-out storage deployment scenarios, two VMs created on FusionCompute are deployed in active/standby mode.
Deploying software - Installing eDME (Hyper-Converged Deployment): When DCS is used in the hyper-converged deployment scenario, eDME is deployed on VMs created on two MCNA nodes and one SCNA node of FusionCube 1000H. For details about eDataInsight and HiCloud deployment procedures, see Installation Using SmartKit.
Initial configuration - Initial Service Configurations (separated and hyper-converged deployment scenarios): Initialize the system of DCS using the initial configuration wizard, such as creating clusters, adding hosts, adding storage devices, and configuring networks.
(Optional) Installing DR and Backup Software (separated and hyper-converged deployment scenarios): eBackup&UltraVR virtualization backup and DR software is used to implement VM data backup and DR, providing a unified DR and backup protection solution for data centers in all regions and scenarios. UltraVR is a piece of DR management software that relies on storage to provide VM data protection and restoration functions. eBackup is a piece of Huawei-developed backup software for virtual environments.
Integration Design
Preparing Data
Compatibility Query
Preparing Documents
Table 1 lists the documents required for installing DCS.
Datacenter Virtualization Solution 2.1.0 Version Mapping: Information about the mapping between software and hardware versions.
Datacenter Virtualization Solution 2.1.0 Software Package Download List (by SmartKit): Before installing the software, download the software packages of DCS components. Datacenter Virtualization Solution 2.1.0 Software Package Download List (by SmartKit) can be imported to SmartKit to automatically download software packages.
iMaster NCE-Fabric V100R024C00 Product Documentation: This document is required only for the network overlay SDN solution. Install iMaster NCE-Fabric and configure interconnection between iMaster NCE-Fabric and FusionCompute.
SmartKit 24.0.0 User Guide: Describes how to use SmartKit. In particular: for details about the SmartKit installation process, see section "Installing SmartKit"; for details about how to use SmartKit to download the software packages required by FusionCompute, see section "Software Packages."
Tools
Table 2 describes the tools to be prepared before the installation.
SmartKit: A collection of IT product service tools, including Huawei storage, server, and cloud computing service tools, such as tools required for deployment, maintenance, and upgrade. SmartKit installation package: SmartKit_24.0.0.zip. Use SmartKit to deploy DCS in a unified manner. If Datacenter Virtualization Solution Deployment is installed offline, you need to obtain the basic virtualization O&M software package (SmartKit_24.0.0_Tool_Virtualization_Service.zip). Download: Enterprises: Click here. Carriers: Click here.
PuTTY: A cross-platform remote access tool, which is used to access nodes on a Windows OS during software installation. You are advised to use PuTTY of the latest version for a successful login to the storage system. Download: You can visit the chiark homepage to download the PuTTY software.
WinSCP: A cross-platform file transfer tool, which is used to transfer files between Windows and Linux OSs. Download: You can visit the WinSCP homepage to download the WinSCP software.
Table 3 Software packages required for tool-based installation in the x86 architecture
FusionCompute_CNA-8.8.0-X86_64.iso: FusionCompute host OS. Carrier users: Click here.
Table 4 Software packages required for tool-based installation in the Arm architecture
After obtaining the software packages, do not change the names of the software packages. Otherwise, the software packages cannot be verified when they are
uploaded. As a result, the software packages cannot be installed.
To deploy services such as Elastic Load Balance (ELB) and Domain Name Service (DNS), import the following files on the Template
Management page of FusionCompute:
To deploy the DCS eDataInsight management plane, download the software installation package for the DCS eDataInsight management plane, and
the matching digital signature verification file and certificate verification file listed in the following table.
eDataInsight_24.0.0_Manager_Euler.zip: Software installation package for the DCS eDataInsight management plane. Enterprises: Click here. Carriers: Click here.
eDataInsight_24.0.0_Manager_Euler.zip.cms: Digital signature verification file of the software installation package for the DCS eDataInsight management plane. Click next to the eDataInsight software installation package to download the digital signature verification files.
eDataInsight_24.0.0_Manager_Euler.zip.crl: Certificate verification file of the software installation package for the DCS eDataInsight management plane.
To deploy the DCS ECE service, download the software installation package for the DCS ECE service, and the matching digital signature
verification file and certificate verification file listed in the following table.
DCSCCS_24.0.0_Manager_Euler.zip.cms: Digital signature verification file for the DCS ECE service installation package. Carriers: Click here. Click next to the DCSCCS software installation package to download the digital signature verification files.
DCSCCS_24.0.0_Manager_Euler.zip.crl: Certificate verification file for the DCS ECE service software installation package.
To deploy the DCS AS service, download the software installation package for the DCS AS service, and the matching digital signature verification
file and certificate verification file listed in the following table.
Template files (download and install the UltraVR software package as required; for details, see Installation and Uninstallation in the OceanStor BCManager 8.6.0 UltraVR User Guide, and then install the patch file by following the instructions provided in OceanStor BCManager 8.6.0.SPC200 UltraVR Patch Installation Guide):
- OceanStor_BCManager_8.6.0_UltraVR_VHD_for_Euler_X86.zip
- OceanStor_BCManager_8.6.0_UltraVR_VHD_for_Euler_ARM.zip
Patch file:
- OceanStor BCManager 8.6.0.SPC200_UltraVR_for_Euler.zip
Download: Enterprises: Click here. Carriers: Click here.
Before connecting CSHA to eDME, obtain the required adaptation package, signature, and certificate file.
Table 7 Adaptation package, signature, and certificate file required for connecting CSHA to eDME
resource_uniteAccess_csha_8.6.0: The .zip package contains the following files:
- Adaptation package: resource_uniteAccess_csha_8.6.0.tar.gz
- Signature file: resource_uniteAccess_csha_8.6.0.tar.gz.cms
- Certificate file: resource_uniteAccess_csha_8.6.0.tar.gz.crl
NOTE: The version number in the software package name varies with site conditions. Use the actual version number.
Download: For enterprise users, click here, search for the software package by name, and download it. For carrier users, click here, search for the software package by name, and download it.
eBackup template files: Used to install eBackup. Download: Enterprises: Click here. Carriers: Click here.
- Arm template file: OceanStor BCManager 8.5.1.SPC100_eBackup_KVMtemplate_euler_aarch64_virtualization.zip
- x86 template file: OceanStor BCManager 8.5.1.SPC100_eBackup_KVMtemplate_euler_x86_64_virtualization.zip
OceanStor-Pacific_8.2.1_DeviceManagerClient.zip: OceanStor Pacific series deployment tool, which is used to install the management node of the OceanStor Pacific series software. Download: Enterprises: Click here. Carriers: Click here.
eDataInsight_24.0.0_Software_Euler-aarch64.zip
After obtaining the software packages, do not change the names of the software packages. Otherwise, the software packages cannot be verified when they are
uploaded. As a result, the software packages cannot be installed.
If you do not have a Huawei account, contact Huawei technical support to log in to GKit Live.
3. Set filter criteria by referring to Table 11 and click Filter. If you need help, click Help Center on the right of the page to view the detailed
process.
Parameter: Support scene. Configuration: Select this parameter according to the architecture to be installed. Select EulerOS-X86 or EulerOS-ARM.
4. Select required software packages, click Apply for download, fill in the application based on actual conditions, and click Submit
Application. For details about the software packages, see Obtaining HiCloud Software Packages from Huawei Support Website .
6. After the application is approved, select all software packages and their digital signature verification files, and click Download Files to
download them to the PC.
Template files (install the SFS software package specific to the architecture, and then install the patch file by following the instructions provided in STaaS Solution 8.5.0.SPC1 SFS Patch Installation Guide):
- DCS_SFS_8.5.0_ARM.zip
- DCS_SFS_8.5.0_X86.zip
Patch file:
- STaaS_Solution_SFS_DJ_PATCH-8.5.0.SPC1.tar.gz
Download: Enterprises: Click here. Carriers: Click here.
Before connecting SFS to eDME, obtain the required adaptation package, signature, and certificate file:
- Adaptation package (*.tar.gz): resource_uniteAccess_sfs_8.5.0.tar.gz
- Signature file (*.tar.gz.cms): resource_uniteAccess_sfs_8.5.0.tar.gz.cms
- Certificate file (*.tar.gz.crl): resource_uniteAccess_sfs_8.5.0.tar.gz.crl
In the package names, <version> indicates the version number of the component. Download the corresponding software package based on the requirements.
In the package names listed in Table 13 and Table 14, <hardware-platform> indicates the hardware platform. The DCS application and data integration for
multi-tenant scenarios supports both x86_64 and aarch64 platforms. Tenants can select either of the platforms based on the requirements. Therefore, you need to
download both the x86_64 and aarch64 software packages.
Table 16 lists available VM templates. You need to download the VM templates corresponding to the architecture of the FusionCompute platform. Otherwise,
the installation may fail. You can view the FusionCompute architecture type on the host overview page.
Version description document: eCampusCore_<version>_vdd.zip. Version description document package, which stores the current eCampusCore version information.
Table 16 VM templates
x86 server: VMTemplate_x86_64_Euler2.12.zip. Used to deploy nodes other than the installer node and the nodes managed by eContainer.
Arm server: VMTemPlate_aarch64_Euler2.12.zip. Used to deploy nodes other than the installer node and the nodes managed by eContainer.
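Because downloading the wrong template can make the installation fail, it can help to derive the required template name from the platform architecture before downloading. The sketch below maps the architectures in Table 16 to the template file names; the helper itself is hypothetical, and the architecture string should match what the FusionCompute host overview page reports.

import platform

# Template file names as listed in Table 16.
TEMPLATES = {
    "x86_64": "VMTemplate_x86_64_Euler2.12.zip",
    "aarch64": "VMTemPlate_aarch64_Euler2.12.zip",
}

def template_for(arch: str) -> str:
    """Return the VM template that matches the given CPU architecture."""
    try:
        return TEMPLATES[arch]
    except KeyError:
        raise ValueError(f"Unsupported architecture: {arch}") from None

if __name__ == "__main__":
    # platform.machine() reports the architecture of the machine running this
    # script; substitute the architecture shown on the FusionCompute host
    # overview page when checking a remote platform.
    print(template_for(platform.machine()))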
The initial password of the root user in the VM template is Huawei@12F3. During preinstallation, the password for the root user of the VM operating system (OS) is
reset.
LLDesigner provides functions such as hardware configuration, device networking, and resource provisioning so that you can quickly complete product planning and design and generate a high-quality LLD. For details, see Planning Using LLDesigner. For details about the integration design, see Datacenter Virtualization Solution 2.1.0 Integration Design Suite.
After the integration design is complete, install hardware and software devices of the project based on the LLD.
Before the installation, plan the compatibility information properly. For details about the compatibility information, see Huawei Storage
Interoperability Navigator.
Planning Using LLDesigner
Scenario
LLDesigner provides an integrated tool for network planning of DCS site deployment delivery. It works with SmartKit to generate an LLD template
to guide software and hardware installation, improving delivery efficiency.
Prerequisites
A W3 account is required for login. If you do not have a W3 account, contact Huawei technical support engineers.
Operation Process
Figure 1 LLD planning flowchart
Procedure
1. Use your W3 account to log in to the eService website and choose Delivery Service > Storage > Deployment & Delivery > LLDesigner. The LLDesigner page is displayed.
2. Click Create LLD. On the LLDesigner page that is displayed, choose Datacenter Virtualization Solution > Customize Devices to Create
LLD to create a project.
3. Set Service Type, Rep Office/Business Dept, Project Name, and Contract No., and click OK. The Solution Design page is displayed.
eDME
eBackup
UltraVR
HiCloud
eDataInsight
eCampusCore
SFS
Scale-out Storage
Flash Storage Type: This parameter is valid when Storage Type is set to Flash Storage. Options: IP SAN Storage, FC SAN Storage. Default: -
Scale-Out Storage Type: This parameter is valid when Storage Type is set to Scale-out Storage. Options: Structured. Default: -
Out-of-Band Management: This parameter is valid when Network Type is set to Network overlay SDN. Options: Yes, No. Default: Yes
iSCSI
Separated front- and back-end storage networks
Layer 3 networking
Click Add Device and enter device information. Right-click slots in the device diagram to add or modify interface cards. After the
setting is complete, click OK. After the configuration is complete, click Next.
Scale-out Storage
NOTE:
Flash storage is included in the options when Flash storage is selected for Storage Type.
Scale-out storage is included in the options when Scale-out storage is selected for Storage Type.
CPU Architecture: This parameter is valid when Device Type is set to Server. Options: X86, ARM.
Network Type: This parameter is valid when Device Type is set to Switch. Options: IP, FC.
Storage Series: This parameter is valid when Device Type is set to Flash storage. Options: OceanStor 6.x, OceanStor Dorado.
Subtype: This parameter is valid when Device Type is set to Scale-out storage. Options: Block.
Disk Type: If Device Type is set to Flash storage, the options are NL_SAS, SAS_SSD, SAS_HDD, and NVMe_SSD. If Device Type is set to Scale-out storage, the options are SATA, SAS_HDD, NVMe_SSD, and SAS_SSD.
Disk Capacity: This parameter is valid when Device Type is set to Flash storage or Scale-out storage. Set the disk capacity based on the site requirements.
Disk Quantity: This parameter is valid when Device Type is set to Flash storage or Scale-out storage. The value must be a positive integer.
Back-End Storage Network Type: 10GE TCP, 25GE TCP, 25GE RoCE, 100GE TCP, 100GE RoCE, and 100Gb IB
Multi-IP for Back-End Storage Network: This parameter is valid when Back-End Storage Network Type is set to 25GE RoCE or 100GE RoCE. The default value is 2.
Front-End Storage Network Type: 10GE TCP, 25GE TCP, 25GE RoCE, 100GE TCP, 100GE RoCE, and 100Gb IB
Multi-IP for Front-End Storage Network: This parameter is valid when Back-End Storage Network Type is set to 25GE RoCE or 100GE RoCE. The default value is 2.
Replication Network Type: 10GE TCP, 25GE TCP, and 100GE TCP
iSCSI Network Type: The value can be 10GE TCP or 25GE TCP.
Configure the cabinet layout. You can click Add Cabinet and Reset Cabinet to perform corresponding operations. Click the setting
button in the upper left corner of the cabinet table to modify weight and power of devices. After the configuration is complete, click
Next.
The default cabinet power is 6,000 W. You can manually change it as required.
e. Plan disks.
To add disks, click Add Disk in the Operation column. To delete the added disks, click Remove in the Operation column.
g. Plan clusters.
i. Compute cluster
Click Add, set Cluster Name and Management Cluster, select nodes to be added to the cluster from the node list, and click
OK. After the configuration is complete, click Next.
Modify the network topology based on the network planning. You can drag a device to the desired position, scroll the mouse wheel
forward or backward to zoom in or zoom out on the topology, click a device icon to view the device information, and click the setting
button in the upper right corner of the figure legend to modify the line color and width. After the configuration is complete, click
Next.
6. Plan resources.
a. Flash storage
i. Perform the storage pool planning. Set Performance Layer Name and Performance Layer Hot Spare Policy for
the flash storage device, and click Add Storage Pool in the Operation column to add a device to a storage pool.
Click Add Storage Pools in Batches to add storage pools for devices with the same disk specifications in batches.
ii. Perform the LUN planning. Click Add LUN in the Operation column to add the device to a LUN. Click Add LUNs in
Batches to add LUNs for devices configured with storage pools.
LUN Capacity -
LUN Quantity -
b. Scale-out storage
i. Click the icon to configure storage pool information. Table 6 describes the related parameters.
Service Type: Service type of the storage pool. In this example, the value is Block Service.
Encryption Type: Encryption type of the storage pool. The options are Common and Self-encrypting. You are advised to select Common. If you have high requirements on data security, select Self-encrypting. Common: this type of storage pool does not support data encryption. Self-encrypting: this type of storage pool supports data encryption.
Security Level: Security level of the disk pool. The options are Node and Cabinet.
Redundancy Policy: Redundancy policy of the storage pool. The options are EC and Data copy.
Data Fragment: The value must be an even number ranging from 4 to 22.
Data Copy Policy: Number of data copies allowed in the storage pool. The value can be 2 or 3.
ii. After the storage pool information is configured, click the icon next to the storage pool to configure the disk pool. Select nodes and set Required Disk Quantity. After the setting is complete, click OK to save the information.
Each node must be configured with at least four main storage disks.
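The storage pool parameters above carry simple numeric constraints: Data Fragment must be an even number from 4 to 22 (EC redundancy policy), the number of data copies must be 2 or 3 (data copy policy), and each node needs at least four main storage disks. A hypothetical pre-check such as the following can catch planning mistakes before the values are entered; it is not part of LLDesigner or DeviceManager.

def check_scale_out_pool(data_fragment: int, data_copies: int, disks_per_node: dict) -> list:
    """Return a list of violations of the constraints described above."""
    problems = []
    # Data Fragment: an even number ranging from 4 to 22 (applies to the EC policy).
    if not (4 <= data_fragment <= 22 and data_fragment % 2 == 0):
        problems.append(f"Data Fragment {data_fragment} must be an even number from 4 to 22")
    # Data Copy Policy: 2 or 3 copies (applies to the data copy policy).
    if data_copies not in (2, 3):
        problems.append(f"Data copies {data_copies} must be 2 or 3")
    # Each node must be configured with at least four main storage disks.
    for node, disks in disks_per_node.items():
        if disks < 4:
            problems.append(f"Node {node} has only {disks} main storage disks (minimum is 4)")
    return problems

if __name__ == "__main__":
    print(check_scale_out_pool(data_fragment=12, data_copies=3,
                               disks_per_node={"node-1": 6, "node-2": 3}))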
If VRMs are deployed in active/standby mode, an arbitration IP address can be used to determine the activity status of the active and standby VRM
nodes.
In deployment scenarios, DVS Port Group Name of the eBackup component on the Backup Nodes page is mandatory. Set DVS Port Group Name
based on the actual name on FusionCompute.
In deployment scenarios, Service IP Address, Service Plane Subnet Mask, and Service Plane Port Group of the cloudsop and ndp nodes of the
eDataInsight component on the HDFS Nodes page are mandatory. DVS Port Group Name of the HiCloud component is mandatory.
9. Perform FusionCompute deployment configuration. Set Login Information, General Host Information, Time Zone and NTP
Information, Management Data Backup Information, eDataInsight Information, and HiCloud Information. For details, see the CNA
installation parameter description and VRM installation parameter description in Installation Using SmartKit . After the configuration is
complete, click Next. If you do not need to set those parameters, click Skip to ignore the current configuration and proceed to the next step.
In deployment scenarios, parameters in the FusionCompute Deployment Configuration page are mandatory.
Select LLD Design Document and click Export to export the LLD design document package to the local PC. Decompress the
package to obtain the LLD design document. Delivery personnel can complete project delivery based on the data planned in the
LLD design document.
Select LLD Deployment Document and click Export to export the LLD deployment document package to the local PC.
Decompress the package to obtain the LLD deployment parameter template. Delivery personnel can import the LLD deployment
parameter template to SmartKit for deployment.
After the parameters are set and the LLD deployment document is exported, you need to manually enter information such as the user name,
password, and installation package path in the document, for example, BMC user name, BMC password, FusionCompute login user name, and
FusionCompute login password.
Configure Network Port of Management IP Address for the host as the first node.
Only one host can be configured as the first node and this node is the management node by default. Other hosts can only be configured as non-
first nodes. The first node also needs to be configured with the network port name of the management IP address.
You can leave parameter Network Port of Management IP Address blank for nodes other than the first node.
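The first-node rules above lend themselves to a quick consistency check over the planned host list before the LLD deployment document is filled in. The sketch below uses hypothetical field names for the planned entries and is not part of LLDesigner or SmartKit.

def check_first_node_plan(nodes: list) -> list:
    """Check the first-node rules described above over a planned host list.

    Each node is a dict with hypothetical keys:
      'name', 'is_first_node' (bool), 'mgmt_port' (e.g. 'eth0' or '').
    """
    problems = []
    first_nodes = [n for n in nodes if n.get("is_first_node")]
    # Only one host can be configured as the first node (the management node).
    if len(first_nodes) != 1:
        problems.append(f"expected exactly 1 first node, found {len(first_nodes)}")
    # The first node must specify the network port of the management IP address.
    for node in first_nodes:
        if not node.get("mgmt_port"):
            problems.append(f"first node {node['name']} has no Network Port of Management IP Address")
    return problems or ["first-node plan looks consistent"]

if __name__ == "__main__":
    plan = [
        {"name": "CNA01", "is_first_node": True, "mgmt_port": "eth0"},
        {"name": "CNA02", "is_first_node": False, "mgmt_port": ""},
    ]
    print(check_first_node_plan(plan))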
gandalf Password: Specifies the password of user gandalf for logging in to the CNA node to be installed.
Redis Password: Redis password, which is set during CNA environment initialization.
Information > System Info > Network, and click on the left of the target
port name in the Port Properties area. When installing the Great Wall server,
you must manually enter the MAC address of an available NIC.
NOTE:
The BIOS page varies depending on the server model and iBMC version. The
method of obtaining the MAC address described here is for reference only.
Management plane VLAN tag: The default value is 0 and you can retain the default value.
Network Port of Management IP Address: The current node is the first node. You need to specify a network port in management IP address configuration, for example, eth0.
(Optional) Collecting OceanStor Pacific HDFS Domain Names and Users in the Decoupled Storage-Compute Scenario
You can use the NTP service on the storage cluster or compute cluster to synchronize time. If the NTP service is used, ensure that the NTP server can
communicate with the management network of the storage cluster or compute cluster.
Multiple eDataInsight components cannot connect to the same OceanStor Pacific HDFS.
Procedure
1. Log in to DeviceManager.
NTP Server Address: IP address of an NTP server. A maximum of three NTP server addresses can be configured. You can click Test to verify the availability of the NTP server. Ensure that the time of multiple NTP servers is the same. Otherwise, the time synchronization function will be abnormal.
NTP Authentication: Whether to enable NTP authentication. After NTP authentication is enabled, the system authenticates and identifies the NTP server. NTP authentication can be enabled only when NTPv4 or later is used, so that the NTP server is authenticated and the time is automatically synchronized to the storage device.
4. Click Save.
Confirm your operation as prompted.
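Because the time reported by multiple NTP servers must be consistent, it can be worth comparing them from a management host before saving the configuration. The sketch below assumes the third-party Python package ntplib is installed and uses placeholder server addresses; the 1-second threshold is an arbitrary example, and network latency makes this only a rough check.

import ntplib  # third-party package: pip install ntplib

def ntp_offsets(servers, max_skew_seconds=1.0):
    """Query each NTP server and report whether their reported times agree."""
    client = ntplib.NTPClient()
    times = {}
    for host in servers:
        try:
            response = client.request(host, version=4, timeout=5)
            times[host] = response.tx_time  # server transmit timestamp (seconds since epoch)
        except Exception as exc:  # unreachable server, timeout, etc.
            print(f"{host}: query failed ({exc})")
    if len(times) >= 2:
        spread = max(times.values()) - min(times.values())
        verdict = "consistent" if spread <= max_skew_seconds else "INCONSISTENT"
        print(f"Spread across servers: {spread:.3f} s -> {verdict}")
    return times

if __name__ == "__main__":
    # Replace with the NTP server addresses planned for the storage cluster.
    ntp_offsets(["192.0.2.10", "192.0.2.11"])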
Procedure
1. Log in to DeviceManager, choose Resources > Access > Authentication User and click the UNIX Users tab.
Select the account corresponding to the namespace to be interconnected from the Account drop-down list.
2. Create a user-defined user group on the Local Authentication User Group tab page.
User groups ossgroup, supergroup, root, and hadoop must be created. Other user groups can be added as required.
If the LDAP is connected, the ID of user group supergroup must be 10001. There is no requirement on the IDs of other user groups.
The primary group of user root is root, and the secondary group of user root is supergroup. The primary groups of other users are
supergroup.
Create users based on the following mapping between users and user groups:
User User Group 1 (Primary Group) User Group 2 User Group 3 User Group 4
HTTP hadoop - - -
kafka kafkaadmin - - -
yarn supergroup - - -
Users root, spark2x, hdfs, flink, ossuser, hbase, yarn, mapred, and HTTP must be created.
Set Super User Group to supergroup, UMASK to 022, and Mapping Rule to DEFAULT.
4. On the Account page, click the desired account. On the displayed page, click the Protocol tab and click Modify. Type supergroup in Super
User Group and click OK.
Simple mode: User omm needs to be configured as a proxy user. You can add other proxy users as required.
Procedure
1. Log in to DeviceManager, choose Resources > Access > Accounts, and select the configured account. The following figure shows an
example. Click Protocol. The access control page of HDFS service is displayed.
2. Click the setting button next to Subnet. The Manage Subnet page is displayed.
3. Select a subnet and view the value of IP Address of General DNS Service in Subnet.
Service deployment requires the container management capability of FusionCompute. Before the installation, ensure that the environment has been prepared accordingly.
Network Planning
Planning the IP addresses of the service component
nfs-dns-1 10.168.52.12 FusionCompute management subnet VIP (NFS_VIP) of NFS. An example is 10.168.52.20.
nfs-dns-2 10.168.52.13
foundation-3 10.168.52.16
ops-2 10.168.52.18
FusionCompute eContainer cluster master node manage start IP: 10.168.52.20. Start IP address of the management network segment of the master node in the container cluster created on FusionCompute. The management network segment must be the same as the eDME network segment.
FusionCompute eContainer cluster master node manage end IP: 10.168.52.24. End IP address of the management network segment of the master node in the container cluster created on FusionCompute.
The container cluster IP address segment must include at least five IP addresses. All IP addresses between the start IP address and end IP address cannot conflict with other IP addresses.
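The constraints above (start and end IP in the eDME management network segment, at least five addresses in the range, no conflicts with other planned addresses) can be checked with Python's standard ipaddress module. The CIDR and the list of already planned addresses below are placeholders based on the planning example.

import ipaddress

def check_master_ip_range(start_ip, end_ip, management_cidr, planned_ips):
    """Validate the eContainer master node management IP range against the rules above."""
    network = ipaddress.ip_network(management_cidr, strict=False)
    start, end = ipaddress.ip_address(start_ip), ipaddress.ip_address(end_ip)
    problems = []
    if start not in network or end not in network:
        problems.append("start/end IP is outside the management network segment")
    count = int(end) - int(start) + 1
    if count < 5:
        problems.append(f"range contains only {count} addresses; at least 5 are required")
    conflicts = [ip for ip in planned_ips
                 if int(start) <= int(ipaddress.ip_address(ip)) <= int(end)]
    if conflicts:
        problems.append(f"range overlaps already planned addresses: {conflicts}")
    return problems or ["range looks valid"]

if __name__ == "__main__":
    # The CIDR and planned address list are illustrative placeholders.
    print(check_master_ip_range("10.168.52.20", "10.168.52.24", "10.168.52.0/24",
                                ["10.168.52.12", "10.168.52.13", "10.168.52.16", "10.168.52.18"]))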
Password Planning
To ensure successful installation, the planned password must meet the following software verification rules:
Contain 10 to 32 characters, including uppercase letters, lowercase letters, digits, and the special character @.
Not contain more than 3 identical characters or more than 2 consecutive identical characters.
Not be the same as the reverse of it, regardless of the letter case. The password cannot contain admin or the reverse of it, a mobile phone
number, or an email address and cannot be a weak password.
Not contain root, huawei, admin, campusSnmp, sysomc, hicampus, or gandalf (case-sensitive).
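A subset of these rules (length, character classes, the @ special character, consecutive identical characters, and forbidden substrings) can be pre-checked with a small script like the one below. It is a hypothetical helper, not the product's actual validator, and it does not cover the reversal, phone number, email address, or weak-password checks.

import re

FORBIDDEN = ["root", "huawei", "admin", "campusSnmp", "sysomc", "hicampus", "gandalf"]

def check_password(pw: str) -> list:
    """Check a planned password against a subset of the rules listed above."""
    problems = []
    if not 10 <= len(pw) <= 32:
        problems.append("length must be 10 to 32 characters")
    if not (re.search(r"[A-Z]", pw) and re.search(r"[a-z]", pw)
            and re.search(r"[0-9]", pw) and "@" in pw):
        problems.append("must contain uppercase, lowercase, digits, and the special character @")
    if re.search(r"[^A-Za-z0-9@]", pw):
        problems.append("@ is assumed to be the only permitted special character")
    if re.search(r"(.)\1\1", pw):  # three identical characters in a row
        problems.append("must not contain more than 2 consecutive identical characters")
    hits = [word for word in FORBIDDEN if word in pw]  # case-sensitive, as stated
    if hits:
        problems.append(f"must not contain: {', '.join(hits)}")
    return problems or ["password passes these checks"]

if __name__ == "__main__":
    print(check_password("Example@2025ok"))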
Password descriptions:
Password of the O&M management console and database: Password of the O&M management console and database, password of the admin user of the eCampusCore O&M management console, password of the sysadmin user of the database, password of the eDME image repository, and machine-machine account password.
Password used in SNMPv3 authentication: SNMPv3 authorization and authentication password used by service components. The verification rules are the same as those for the passwords of the O&M console and database. The password cannot be the same as the password of the O&M management console or database. The value is the same as that of Common default password on SmartKit.
sysomc user password: Password of the sysomc user created on the VM during installation. The value is the same as that of Common default password 2 on SmartKit.
Password of the root user of the VM: Password of the root user of the VM. The password cannot be the same as the template password (Huawei@12F3).
Password of the interface interconnection user: To ensure that FusionCompute interfaces can be properly called during the installation, you need to create the interface interconnection user OpsMon with the system management permission on the FusionCompute page before the installation. You can preconfigure the user password.
Machine-machine account password: Machine-machine account password for interconnecting with eDME after the service is deployed.
Prerequisites
Log in to FusionCompute as a user associated with the Administrator role.
Procedure
1. Log in to FusionCompute as the admin user.
2. In the navigation pane, click , choose System Management > System Configuration > License Management, and check whether the
license authorization information meets the requirements. If you do not have the related license, contact Huawei technical support.
For FusionCompute 8.7 and later versions, eContainer Suite License for 1 vCPU is available.
If yes, the container management function has been enabled and the environment meets the requirements.
If no, choose System Management > System Configuration > Service and Management Nodes. Then, choose More > Enable
Container Management to enable the container management function.
4. Check whether the VM image and software package have been uploaded to the content library of the container.
Check whether the VM image has been uploaded. For details, see "Configuring a VM Image" in FusionCompute 8.8.0 Product
Documentation.
Check whether the software package has been uploaded. For details, see "Configuring a Software Package" in FusionCompute 8.8.0
Product Documentation.
Prerequisites
You have logged in to the eDME O&M portal as an O&M administrator user at https://eDME O&M portal IP address:31943.
Procedure
1. In the navigation pane, choose Settings > Certificate Management. On the page that is displayed, click APIGWService.
2. Locate the certificate whose Certificate Alias is server_chain.cer and click the icon in the Operation column to obtain the certificate.
Prerequisites
You have obtained the management IP address of one of the FusionCompute VRM nodes.
In the navigation pane of FusionCompute, click and search for the node name VRM01.
You have obtained the passwords for the gandalf and root users of the VRM nodes of FusionCompute.
Procedure
1. Log in to the VRM node on FusionCompute as the gandalf user using the management IP address of the VRM node.
3. Run the following command and obtain the content of the FusionCompute certificate from the command output:
# cat /etc/galax/certs/vrm/rootCA.crt
5. Create a text file on the local PC, change the file type to .crt, and rename the file rootCA.crt.
6. Copy the content obtained in 3 to the rootCA.crt file and save the file.
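Before using the saved rootCA.crt for interconnection, you may want to confirm that the copied content is a well-formed PEM certificate. The sketch below assumes the third-party Python package cryptography is installed; it only parses and displays the certificate and performs no validation against FusionCompute.

from cryptography import x509  # third-party package: pip install cryptography

def describe_certificate(path: str = "rootCA.crt") -> None:
    """Parse the saved PEM certificate and print its subject and validity period."""
    with open(path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    print("Subject:   ", cert.subject.rfc4514_string())
    print("Issuer:    ", cert.issuer.rfc4514_string())
    print("Not before:", cert.not_valid_before)
    print("Not after: ", cert.not_valid_after)

if __name__ == "__main__":
    describe_certificate()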
Context
Requirements for creating an interface interconnection user on FusionCompute are described as follows:
Procedure
FusionCompute 8.6.0 is used as an example. The GUIs of other versions are different. For details, see related FusionCompute documents.
3. Choose Rights Management > User Management. On the User Management page, create the OpsMon user.
Role: Select administrator.
Password: Set the OpsMon user password. The only special character that the password can contain is the at sign (@).
4. View the password policy.
Choose Rights Management > Rights Management Policy. On the page that is displayed, check whether the policy Interface interconnection user forcibly change passwords upon a reset or initial login is enabled.
If the value for the policy is Yes, call the interface by referring to FusionCompute xxx APIs to change the password.
Installing Devices
Scenario Overview
The typical configuration of DCS is separated deployment (with servers, leaf switches, spine switches, and flash storage or scale-out storage) or
hyper-converged deployment.
For details about hyper-converged deployment, see Product Description > System Architecture > Logical Architecture in FusionCube
1000H Product Documentation (FusionCompute).
Networking description
All switching devices are configured in M-LAG mode on the standard Layer 2 network.
Leaf switches connect all the network ports on host network planes (management plane, service plane, and storage plane) to the network, as
shown in Figure 2.
Leaf switches connect the management plane and storage plane of storage devices to the network. Figure 3 shows the connection of flash
storage. For details about the connection of scale-out storage, see "Network Planning" in OceanStor Pacific Series 8.2.1 Product
Documentation. For details about the connection in hyper-converged scenarios, see FusionCube 1000H Product Documentation
(FusionCompute).
Leaf switches connect to each switch to implement plane interconnection between devices. Spine switches connect the cloud data center to
external networks.
Server: TaiShan 200 (model 2280). For details about installation operations, see TaiShan Server Hardware Installation SOP.
Storage device:
OceanStor 5310 (Version 6): For details about installation operations, see OceanStor 5310 Series V700R001C00 Installation Guide.
OceanStor 5510 (Version 6) and OceanStor 5610 (Version 6): For details about installation operations, see OceanStor 5510 Series and 5610 V700R001C00 Installation Guide.
OceanStor 6810 (Version 6), OceanStor 18510 (Version 6), and OceanStor 18810 (Version 6): For details about installation operations, see OceanStor 6x10 and 18x10 Series V700R001C00 Installation Guide.
OceanStor Dorado 3000 6.x.x: For details about installation operations, see OceanStor Dorado 3000 V700R001C00 Installation Guide.
OceanStor Dorado 5000 6.x.x and OceanStor Dorado 6000 6.x.x: For details about installation operations, see OceanStor Dorado 5000 and Dorado 6000 V700R001C00 Installation Guide.
OceanStor Dorado 8000 6.x.x and OceanStor Dorado 18000 6.x.x: For details about installation operations, see OceanStor Dorado 8000 and Dorado 18000 V700R001C00 Installation Guide.
OceanStor Pacific 9920, OceanStor Pacific 9540, and OceanStor Pacific 9520: For details about installation operations, see OceanStor Pacific Series 8.2.1 Installation Guide.
Hyper-convergence: FusionCube 1000H. For details about installation operations, see FusionCube 1000H Product Documentation (FusionCompute).
Leaf or border leaf switch: CE6881-48S6CQ, CE6857F-48S6CQ, CE6857F-48T6CQ, and CE6860-SAN. For details about installation operations, see CloudEngine 9800, 8800, 7800, 6800, and 5800 Series Switches Hardware Installation and Maintenance Guide (V100 and V200).
Spine switch: CE8850-SAN. For details about installation operations, see CloudEngine 9800, 8800, 7800, 6800, and 5800 Series Switches Hardware Installation and Maintenance Guide (V100 and V200).
Out-of-band management switch (only for the network overlay SDN solution): CE5882-48T4S. For details about installation operations, see CloudEngine 5882 Switch V200R020C10 Product Documentation.
SDN controller (only for the network overlay SDN solution): iMaster NCE-Fabric appliance. For details about installation operations, see iMaster NCE-Fabric V100R024C00 Product Documentation.
Firewall (only for the network overlay SDN solution): USG6650E (enterprise), Eudemon1000E-G5 (carrier). For details about installation operations, see HUAWEI USG6000, USG9500, and NGFW Module V500R005C20 Product Documentation.
When deploying a host, check whether the RAID independent power supply (battery or capacitor) system works properly. If any exception occurs, disable the
RAID cache. Otherwise, files may be damaged due to unexpected power-off.
For details about how to check whether the RAID independent power supply works properly and how to disable the RAID cache, see the corresponding product
documentation of the server.
This section applies only to flash storage and scale-out storage deployment scenarios.
Procedure
1. Determine the positions of ports on the TaiShan 200 server (model 2280).
You are advised to view the port location through the 3D display of the server. For details about the 3D display of the server, click here.
3. (If the network overlay SDN solution is not used) Use cables to connect servers, switches, and storage devices.
The Mgmt ports are GE ports of servers for BMC hardware management. You can select a networking mode based on the site requirements.
Servers connect to 48-port switches through O/E converters.
Servers directly connect to GE switches.
Compute leaf switch (using CE6881-48S6CQ as an example):
Uplink ports: Ports 1 and 2 are connected to downlink ports 1 to 4 of the spine switch. Ports 3 and 4 are connected to uplink ports 3 and 4 of the other compute leaf switch and configured in M-LAG mode. Ports 5 and 6 are reserved.
Downlink ports: Ports 1 to 12 are connected to ports 1 and 2 of the management plane planned for the server. Ports 13 to 24 are connected to ports 3 and 4 planned for the service plane of the server. Ports 25 to 47 are reserved. Port 48 is connected to uplink ports 1 to 4 of the BMC access switch.
Storage leaf switch (using CE6881-48S6CQ as an example):
Uplink ports: Ports 1 and 2 are connected to downlink ports 5 to 8 of the spine switch. Ports 3 and 4 are connected to uplink ports 3 and 4 of the other storage leaf switch and configured in M-LAG mode. Ports 5 and 6 are reserved.
Downlink ports: Ports 1 to 30 are connected to ports 5 to 8 of the storage plane planned for the server. Ports 31 to 42 are connected to ports 1 to 8 of the storage device. Ports 43 to 48 are reserved.
Spine switch (using CE8850-SAN as an example):
Uplink ports: Ports 1 and 2 are connected to the uplink network. Ports 3 and 4 are connected to ports 3 and 4 of the other spine switch and configured in M-LAG mode. Ports 5 to 8 are reserved.
Downlink ports: Ports 1 to 4 are connected to uplink ports 1 and 2 of the compute leaf switch. Ports 5 to 8 are connected to uplink ports 1 and 2 of the storage leaf switch. Ports 9 to 32 are reserved.
Server:
Uplink ports: Ports 1 and 2 are planned as management plane ports, which are connected to downlink ports 1 to 12 of the compute leaf switch. Ports 3 and 4 are planned as service plane ports, which are connected to downlink ports 13 to 24 of the compute leaf switch. Ports 5 to 8 are planned as storage plane ports, which are connected to downlink ports 1 to 30 of the storage leaf switch. The management network port is connected to downlink ports 1 to 6 of the BMC access switch.
Storage device:
Uplink ports: Ports 1 to 8 are connected to downlink ports 31 to 42 of the storage leaf switch. The management network port is connected to downlink ports 7 to 12 of the BMC access switch.
BMC access switch (using CE5882-48T4S as an example):
Uplink ports: Ports 1 to 4 are connected to downlink port 48 of the compute leaf switch.
Downlink ports: Ports 1 to 6 are connected to the management network port of the server. Ports 7 to 12 are connected to the management network port of the storage device. Ports 13 to 48 are reserved.
4. Optional: (Only for the network overlay SDN solution) Confirm the port positions of the SDN controller and out-of-band management
switch.
Use cables to connect servers, switches, and storage devices.
Compute leaf switch (using CE6881-48S6CQ as an example):
Uplink ports: Ports 1 and 2 are connected to downlink ports 1 to 4 of the spine switch. Ports 3 and 4 are connected to uplink ports 3 and 4 of the other compute leaf switch and configured in M-LAG mode. Port 5 is connected to downlink ports 8 and 9 of the out-of-band management switch. Port 6 is reserved.
Downlink ports: Ports 1 to 12 are connected to ports 1 and 2 of the management plane planned for the server. Ports 13 to 24 are connected to ports 3 and 4 planned for the service plane of the server. Ports 25 to 47 are reserved. Port 48 is connected to uplink ports 1 to 4 of the BMC access switch.
Storage leaf switch (using CE6881-48S6CQ as an example):
Uplink ports: Ports 1 and 2 are connected to downlink ports 5 to 8 of the spine switch. Ports 3 and 4 are connected to uplink ports 3 and 4 of the other storage leaf switch and configured in M-LAG mode. Port 5 is connected to downlink ports 10 and 11 of the out-of-band management switch. Port 6 is reserved.
Downlink ports: Ports 1 to 30 are connected to ports 5 to 8 of the storage plane planned for the server. Ports 31 to 42 are connected to ports 1 to 8 of the storage device. Ports 43 to 48 are reserved.
Spine switch (using CE8850-SAN as an example):
Uplink ports: Ports 1 and 2 are connected to the uplink network. Ports 3 and 4 are connected to uplink ports 3 and 4 of the other spine switch and configured in M-LAG mode. Port 5 is connected to downlink port 7 of the out-of-band management switch. Ports 6 to 8 are reserved.
Downlink ports: Ports 1 to 4 are connected to uplink ports 1 and 2 of the compute leaf switch. Ports 5 to 8 are connected to uplink ports 1 and 2 of the storage leaf switch. Ports 9 to 32 are reserved.
Server:
Uplink ports: Ports 1 and 2 are planned as management plane ports, which are connected to downlink ports 1 to 12 of the compute leaf switch. Ports 3 and 4 are planned as service plane ports, which are connected to downlink ports 13 to 24 of the compute leaf switch. Ports 5 to 8 are planned as storage plane ports, which are connected to downlink ports 1 to 30 of the storage leaf switch. The management network port is connected to downlink ports 4 to 9 of the BMC access switch.
Storage device:
Uplink ports: Ports 1 to 8 are connected to downlink ports 31 to 42 of the storage leaf switch. The management network port is connected to downlink ports 10 to 15 of the BMC access switch.
SDN controller (using the iMaster NCE-Fabric appliance as an example):
Uplink ports: Ports 1 and 2 are connected to downlink ports 1 to 3 of the out-of-band management switch. Ports 3 and 4 are connected to downlink ports 4 to 6 of the out-of-band management switch. The management network port is connected to downlink ports 1 to 3 of the BMC access switch.
Out-of-band management switch (using CE5882-48T4S as an example):
Uplink ports: Ports 1 and 2 are connected to uplink ports 1 and 2 of the other out-of-band management switch and configured in M-LAG mode. Ports 3 and 4 are reserved.
Ports 10 and 11 are connected to uplink port 3 of the storage leaf switch.
BMC access switch (using CE5882-48T4S as an example):
Uplink ports: Ports 1 to 4 are connected to downlink port 48 of the compute leaf switch.
Downlink ports: Ports 1 to 3 are connected to the management network port of the SDN controller. Ports 4 to 9 are connected to the management network port of the server. Ports 10 to 15 are connected to the management network port of the storage device. Ports 16 to 48 are reserved.
Firewall:
10GE port 1 in slots 1 and 3 is used to connect to the active and standby firewalls. 10GE port 0 in slots 1 to 5 is used to connect to the core switch.
Scenarios
After all hardware devices are installed, you need to power on the entire system for trial running and check whether the hardware devices are
successfully installed.
Operation Process
Figure 1 System power-on process
Procedure
1. Turn on the power switches of the power distribution cabinet (PDC).
Ensure that the voltage of the AC input power supply to the cabinet is 200 V to 240 V to prevent damage to devices and ensure the safety of the installation personnel.
Before powering on the system, ensure that uplink ports are not connected to customers' switching devices.
Power on basic cabinets and then power on extension cabinets.
Turn off switches on the power distribution units (PDUs) before turning on the switches that control power supply to all cabinets in the PDC.
a. On the PDC side, a professional from the customer turns on the power switches for all cabinets in sequence.
b. Obtain the information on the PDU monitor or use a multimeter to check if the PDU output voltage stays between 200 V and 240 V.
If the PDU output voltage is not between 200 V and 240 V, contact customers and Huawei technical support engineers immediately. Do not perform
the subsequent procedure.
2. Turn on the power switches of the PDUs one by one in the cabinet.
When the PDUs are powered on, the server is powered on.
a. Check the running status of the fans. Ensure that the fans run at full speed and then at even speed, and that the sound of the fans is
normal.
b. Check the indicator status on device panels to ensure that the devices are working properly.
For details about the separated deployment scenario, see Configuring Switches, Configuring Storage Devices, and Configuring Servers. If the network overlay SDN solution is used, see (Optional) Configuring Network Devices.
For details about the hyper-converged deployment scenario, see Configuring Hyper-Converged System Hardware Devices.
Configuring Servers
Configuring Switches
TaiShan 200 (model 2280): For details, see Huawei Server OS Installation Guide (Arm).
When deploying a host, check whether the RAID independent power supply (battery or capacitor) system works properly. If any exception occurs, disable the
RAID cache. Otherwise, files may be damaged due to unexpected power-off.
For details about how to check whether the RAID independent power supply works properly and how to disable the RAID cache, see the corresponding product
documentation of the server.
Configuring RAID 1
Scenarios
Log in to the iBMC page using the BMC IP address to set the parameters of the server.
Process
Figure 1 shows the process for logging in to the server using the BMC.
Procedure
Configure the login environment.
1. Connect the network port of the local computer to the BMC management port of the server using the network cable.
2. Set the IP address of the local computer and default BMC IP address of the server to the same network segment.
For example, set the IP address to 192.168.2.10, and subnet mask to 255.255.255.0.
The default BMC IP address of the server is 192.168.2.100, and the default subnet mask is 255.255.255.0.
3. On the menu bar of the Internet Explorer, choose Tools > Internet Options.
The Internet Options dialog box is displayed.
Windows 10 with Internet Explorer 11 installed is used as an example in the following descriptions.
5. In the Proxy server area, deselect Use a proxy server for your LAN.
6. Click OK.
The Local Area Network (LAN) Settings dialog box is closed.
7. Click OK.
8. Restart the browser, enter https://IP address of the BMC management port in the address bar, and press Enter.
For example, enter https://192.168.2.100.
The system prompts Certificate Error.
10. Enter the username and password and select This iBMC from the Log on to drop-down list.
The default username for logging in to the iBMC system is Administrator, and the default password is Admin@9000.
Change the default password upon your first login to ensure the system security.
12. Check whether the Security Information dialog box asking "Do you want to display the nonsecure items?" is displayed.
If yes, go to 13.
Scenarios
Log in to all servers through their BMC ports to check server version information and the number of hard disks.
Procedure
Checking the disk status
1. On the iBMC page, choose System Info > Storage > Views, and check the status of hard disks.
If Health Status is Normal, the disk is functional.
Scenarios
This section guides software commissioning engineers to log in to the BMC WebUI to configure the disks of a server to RAID 1.
Procedure
1. Log in to the iBMC WebUI.
For details, see Logging In to a Server Using the BMC .
4. Click next to Logical Drive to open the logical disk configuration menu.
IO Policy: I/O policy for reading data from special logical disks, which does not affect the pre-reading cache. The value can be either of the following:
Cached IO: All the read and write requests are processed by the cache of the RAID controller. Select this value only when CacheCade 1.1 is configured.
Direct IO: This value has different meanings in read and write scenarios. In read scenarios, data is directly read from physical disks. (If Read Policy is set to Read Ahead, data read requests are processed by the cache of the RAID controller.) In write scenarios, data write requests are processed by the cache of the RAID controller. (If Write Policy is set to Write Through, data is directly written into physical disks without being processed by the RAID controller cache.)
Disk Cache Policy: The physical disk cache policy can be any of the following:
Enable: indicates that data is written into the cache before being written into a physical disk. This option improves data write performance. However, data will be lost if there is no protection mechanism against power failures.
Disable: indicates that data is written into a physical disk without caching the data. Data is not lost if power failures occur.
Disk's default: indicates that the default cache policy is used.
Number of Drives per Span: Number of physical disks in each subgroup when the RAID level is 10, 50, or 60.
Scenarios
This section guides software commissioning engineers to log in to the server through the BMC WebUI to configure the disks of a server to RAID 1.
Operation Process
Figure 1 shows the process for configuring RAID 1.
Procedure
1. Restart the server.
b. On the menu bar, choose Remote. The Remote Console page is displayed, as shown in Figure 2.
c. Click Java Integrated Remote Console (Private), Java Integrated Remote Console (Shared), HTML5 Integrated Remote
Console (Private), or HTML5 Integrated Remote Console (Shared). The real-time operation console of the server is displayed, as
shown in Figure 3 or Figure 4.
Java Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
Java Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS and perform
operations on the server using the iBMC. The users can view the operations of each other.
HTML5 Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
HTML5 Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS and perform
operations on the server using iBMC. The users can view the operations of each other.
e. Select Reset.
The Are you sure to perform this operation dialog box is displayed.
f. Click Yes.
The server restarts.
The default BIOS password is Admin@9000. After the initial login, set the administrator password immediately.
For security purposes, change the administrator password periodically.
Enter the administrator password to go to the administrator page. The server will be locked after three consecutive failures with wrong
passwords. You can restart the server to unlock it.
d. On the Advanced page, select Avago MegaRAID <SAS3508> Configuration Utility and press Enter. The Dashboard View page is
displayed.
3. On the Dashboard View page, select Main Menu and press Enter. Then select Configuration Management and press Enter. Select
Create Virtual Drive and press Enter. The Create Virtual Drive page is displayed.
If RAID has been configured, you need to format the disk. On the Configuration Management page, select Clear Configuration and press Enter. On the
displayed confirmation page, select Confirm and press Enter. Then select Yes and press Enter to format the disk.
4. On the Create Virtual Drive screen, select Select RAID level using the up and down arrow keys and press Enter. Select RAID1 from the
drop-down list and press Enter.
5. On the Create Virtual Drive screen, select Default Initialization using the up and down arrow keys and press Enter. Select Fast from the
drop-down list and press Enter.
6. Select Select Drives From using the up and down arrow keys and press Enter. Select Unconfigured Capacity using the up and down arrow
keys.
7. Select Select Drives using the up and down arrow keys and press Enter. Select the first (Drive C0 & C1:01:02) and the second (Drive C0 &
C1:01:05) disks using the up and down arrow keys to configure RAID 1.
Drive C0 & C1 may vary on different servers. You can select a disk by entering 01:0x after Drive C0 & C1.
Press the up and down arrow keys to select the corresponding disk, and press Enter. [X] after a disk indicates that the disk has been selected.
8. Select Apply Changes using the up and down arrow keys to save the settings. The message The operation has been performed
successfully. is displayed. Press the down arrow key to choose OK and press Enter to complete the configuration of member disks.
9. Select Save Configuration and press Enter. The operation confirmation page is displayed. Select Confirm and press Enter. Select Yes and
press Enter. The message The operation has been performed successfully. is displayed. Select OK using the down arrow key and press
Enter.
b. Select Virtual Drive Management and press Enter. Current RAID information is displayed.
Storage device:
OceanStor 5310 (Version 6), OceanStor 5510 (Version 6), OceanStor 5610 (Version 6), OceanStor 6810 (Version 6), OceanStor 18510 (Version 6), and OceanStor 18810 (Version 6): For details about the configuration, see "Configuring Basic Storage Services" in OceanStor V700R001C00 Initialization Guide and OceanStor V700R001C00 Basic Storage Service Configuration Guide for Block.
OceanStor Pacific 9920, OceanStor Pacific 9540, and OceanStor Pacific 9520: For details about the configuration, see Installation > Hardware Installation Guide and Installation > Software Installation Guide > Installing the Block Service > Connecting to FusionCompute in OceanStor Pacific Series 8.2.1 Product Documentation.
OceanStor Dorado 3000, OceanStor Dorado 5000, and OceanStor Dorado 6000: For details about the configuration, see Configure > Basic Storage Service Configuration Guide for Block in OceanStor Dorado 2000, 3000, 5000, and 6000 V700R001C00 Product Documentation.
OceanStor Dorado 8000 and OceanStor Dorado 18000: For details about the configuration, see Configure > Basic Storage Service Configuration Guide for Block in OceanStor Dorado 8000 and Dorado 18000 V700R001C00 Product Documentation.
Hyper-convergence: FusionCube 1000H. For details about the configuration, see FusionCube 1000H Product Documentation (FusionCompute).
Use the latest matching software version (V200R022C00SPC500 or later) for the following switches. Earlier versions (such as V200R020C10SPC600) may cause
occasional traffic failures.
Table 1 lists some switches as a reference guide. For details about other supported switches, see Huawei Storage Interoperability Navigator.
Leaf or border leaf switch: CE6881-48S6CQ, CE6857F-48S6CQ, CE6857F-48T6CQ, CE6860-SAN, CE6863-48S6CQ (only for SDN), CE6863E-48S6CQ (only for SDN), and CE6881-48T6CQ (only for SDN). For details, see Configuration > Configuration Guide > Ethernet Switching Configuration in CloudEngine 8800 and 6800 Series Switches Product Documentation.
Spine switch: CE8850-SAN, CE9860-4C-E1 (only for SDN), CE8850-64CQ-E1 (only for SDN), CE8851 (only for SDN), and CE16804 (only for SDN). For details, see Configuration > Configuration Guide > Ethernet Switching Configuration in CloudEngine 8800 and 6800 Series Switches Product Documentation.
For details about typical configurations of switches in DCS, see Physical Network Interconnection Reference .
This section describes how to use SmartKit to install components, such as FusionCompute, eDME, DR and backup software, eDataInsight, HiCloud, and SFS,
and how to perform initial configuration.
Installing FabricInsight
Phase descriptions:
Installing using SmartKit: Use SmartKit to install FusionCompute, UltraVR, eBackup, eDME, eDataInsight, HiCloud, SFS, and eCampusCore. After FusionCompute is installed, SmartKit can be used to configure the backup server (optional) and the FusionCompute NTP clock source and time zone. Before installing eDME, use SmartKit to create management VMs, that is, create VMs on FusionCompute for installing eDME.
Configuring bonding for host network ports: After FusionCompute or eDME is installed, the system configures a bond port named Mgnt_Aggr consisting of one network port for the host by default. You need to manually add network ports to the bond port to improve the network reliability of the system.
(Optional) Checking the system environment before service provisioning: After the FusionCompute system is deployed and before services are provisioned, use SmartKit to perform preventive maintenance, which helps optimize system configurations.
Configuring the FusionCompute system: After FusionCompute is installed, load license files and configure MAC address segments for sites on FusionCompute.
Performing initial configuration for eDME: After eDME is installed, perform initial configuration to ensure that you can use the functions provided by eDME.
Scenarios
This section guides the administrator to use SmartKit to install the FusionCompute, UltraVR, eBackup, eDME, eDataInsight, HiCloud, SFS, and
eCampusCore components.
The UltraVR and SFS components of the current DCS version are patch versions. After installing the two components, upgrade them to the corresponding patch
versions.
Prerequisites
You have installed SmartKit. For details about how to install and run SmartKit, see "Deploying SmartKit" in SmartKit 24.0.0 User Guide.
On the home page of SmartKit, click the Virtualization tab, click Function Management, and check whether the status of Datacenter Virtualization
Solution Deployment is Installed. If the status is Uninstalled, you can use either of the following methods to install the software:
On the home page of SmartKit, click the Virtualization tab, click Function Management, select Datacenter Virtualization Solution Deployment, and
click Install.
Import the software package for the basic virtualization O&M service (SmartKit_24.0.0_Tool_Virtualization_Service.zip).
1. On the home page of SmartKit, click the Virtualization tab and click Function Management. On the page that is displayed, click
Import. In the Import dialog box, select the software package for the basic virtualization O&M service and click OK.
2. In the dialog box that is displayed, click OK. In the Verification and Installation dialog box that is displayed, click Install. In the dialog
box indicating a successful import, click OK. The status of Datacenter Virtualization Solution Deployment changes to Installed.
You have imported the software package of the eDME deployment tool (eDME_version_DeployTool.zip). The procedure is the same as that
for importing the basic virtualization O&M software package.
You have obtained the username and password for logging in to FusionCompute on the VM where the UltraVR is to be installed.
You have verified the software integrity of the UltraVR template file. For details, see Verifying the Software Package .
The stability of the server system time is critical to UltraVR. Do not change the system time when UltraVR is running. If the system
time changes, restart the UltraVR service. For details about how to restart UltraVR, see section "Starting the UltraVR" in OceanStor
BCManager 8.6.0 UltraVR User Guide.
You have obtained the username and password for logging in to FusionCompute.
In the eDataInsight decoupled storage-compute scenario, deploy OceanStor Pacific HDFS in advance. For details, see OceanStor Pacific Series
8.2.1 Software Installation Guide.
Procedure
1. On the home page of SmartKit, click the Virtualization tab. Click Datacenter Virtualization Solution Deployment in Site Deployment
Delivery.
3. On the Tasks tab page, click Create Task. The Basic Configuration page is displayed.
On the Site Deployment Delivery page, click Support List to view the list of servers supported by SmartKit.
6. On the Installation Policy page, select an installation policy as required. Click Next.
7. Configure parameters.
Online modification configuration: FusionCompute, UltraVR, eBackup, and eDME are supported.
Excel import configuration: All components are supported.
Quick configuration: FusionCompute, UltraVR, eBackup, and eDME are supported.
Modify the configuration online. Manually configure related parameters on the page. After the configuration is complete, go to 16.
Quickly fill in the configuration. Click the Quick Configuration tab, set parameters as prompted, and click Generate Parameter. If a
parameter error is reported, clear the error as prompted. If the parameters are correct, go to 16.
8. Configure FusionCompute parameters.
On the Online Modification Configuration tab page, click Add FusionCompute in the FusionCompute Parameter Configuration area.
On the Add FusionCompute page in the right pane, set related parameters.
a. Set System Name and select the path where the software package is stored. If the software package is not downloaded, download it as
instructed in FusionCompute Software Package .
b. Set Install the Mellanox virtual NIC driver. If it is set to Yes, ensure that the driver installation package exists in the path where the
software package is stored.
If the host uses Mellanox ConnectX-4 or Mellanox ConnectX-5 series NICs, and the NICs are in the compatibility list, you need to install the
Mellanox NIC driver.
c. CNA Installation. Select an installation scenario, set DHCP service parameters, configure basic node information, and configure the
node list.
Parameter Description
Fault recovery installation: In the fault recovery scenario, the user disk is not formatted.
DHCP Pool Start IP Address: Start IP address and the number of IP addresses that can be assigned by the DHCP service to the host to be installed. The IP address must be an unused IP address. You are advised to use an IP address in the management plane network segment and ensure that the IP address does not conflict with other planned IP addresses.
DHCP Mask: Subnet mask of the IP address segment assigned from the DHCP pool.
DHCP Gateway: Gateway of the IP address segment assigned from the DHCP pool.
DHCP Pool Capacity: The DHCP IP addresses may be used by devices not included in the original plan. Therefore, configure the number of IP addresses in the DHCP address pool to at least twice the number of physical nodes (a small sizing sketch follows this table).
Use Software RAID for System Disk: For HiSilicon and Phytium Arm architectures, you can configure whether to use software RAID for system disks. Options: Yes, No.
root Password: Password of the OS account for logging in to the CNA node. You need to set the password during the installation.
NOTE: After setting the password, click Downwards to automatically paste the password to the grub password, gandalf password, and Redis password.
grub Password: Password of the internal system account. You need to set the password during the installation.
gandalf Password: Password of user gandalf for logging in to the CNA node to be installed.
Redis Password: Redis password, which is set during CNA environment initialization.
Management Node or Not: This parameter cannot be set for the first node. Options: Yes, No.
MAC Address: This parameter cannot be set for the first node. It specifies the MAC address of the physical port on the host for PXE booting host OSs. If the network configuration needs to be specified before host installation, obtain the MAC address to identify the target host. For details, see the host hardware documentation.
iBMC Username: Username for BMC login authentication. This parameter is mandatory. Otherwise, the installation will fail.
iBMC Password: Password for BMC login authentication. This parameter is mandatory. Otherwise, the installation will fail.
CNA Management IP Address: If this parameter is not specified, the IP address assigned by the DHCP server is used. If this parameter is specified, the specified IP address is used for configuration.
CNA Subnet Mask: If this parameter is not specified, the DHCP subnet mask is used. If this parameter is specified, the specified subnet mask is used for configuration.
CNA Subnet Gateway: If this parameter is not specified, the DHCP gateway is used. If this parameter is specified, the specified gateway is used for configuration.
CNA Cluster Name: Name of the cluster to which the host belongs. This parameter cannot be set for the first node and can be set only when Management Node or Not is set to No. After you enter a cluster name, the host is automatically added to the specified cluster.
Management Plane VLAN Tag: Whether the management plane VLAN has a VLAN tag. The default value is 0. The value ranges from 0 to 4094.
0 indicates that the management plane VLAN is not specified. If you do not specify a management plane VLAN, set the VLAN type on the access switch port connected to the management network port to untagged so that the aggregation switch is reachable to the uplink IP packets from the management plane through the VLAN.
Other values indicate a specified management plane VLAN. If you specify a management plane VLAN, set the VLAN type on the access switch port connected to the management network port to tagged so that the management plane and the switch can communicate with each other.
Network Port of Management IP Address: If the current node is the first node, you need to specify a network port in management IP address configuration, for example, eth0.
NOTE: For details about how to check the network port name of the first CNA node, see How Do I Determine the Network Port Name of the First CNA Node?
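The DHCP pool sizing rule above can be sanity-checked before delivery. The following Python sketch is illustrative only (it is not part of SmartKit; the function name and example values are assumptions): it derives the last address of the pool from the start IP address and the capacity, and verifies that the capacity is at least twice the number of physical nodes and that the pool stays inside the management network segment.

# Illustrative only: checks the DHCP pool sizing rule described above
# (pool capacity should be at least twice the number of physical nodes)
# and derives the last address of the pool from the start IP and capacity.
import ipaddress

def plan_dhcp_pool(start_ip: str, mask: str, physical_nodes: int, capacity: int):
    network = ipaddress.ip_network(f"{start_ip}/{mask}", strict=False)
    start = ipaddress.ip_address(start_ip)
    end = start + capacity - 1
    if capacity < 2 * physical_nodes:
        raise ValueError(f"Pool capacity {capacity} is less than twice the "
                         f"number of physical nodes ({physical_nodes}).")
    if end not in network:
        raise ValueError(f"Pool end address {end} falls outside {network}.")
    return start, end

# Example: 10 hosts on an assumed management segment 192.168.10.0/24.
print(plan_dhcp_pool("192.168.10.100", "255.255.255.0", physical_nodes=10, capacity=20))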
d. Install VRM nodes. Select a deployment scenario and set parameters of VRM nodes.
Parameter Description
Active VRM Node Name: VRM node name, that is, the VM name.
Management IP Address of Active VRM Node: IP address of the VRM VM. The IP address must be within the management plane network segment. You are advised to select an IP address from a private network segment, because a management IP address from a public network segment may pose security risks.
CNA Management IP Address of Active VRM Node: CNA management IP address of the active node in active and standby deployment mode.
CNA root Password of Active VRM Node: Password of user root for logging in to the CNA node.
NOTE: After setting the password, click Downwards. The system automatically pastes the password to the following passwords (from CNA gandalf Password of Active Node to galax Password).
CNA gandalf Password of Active VRM Node: Password of user gandalf for logging in to the CNA node.
Standby VRM Node Name: Name of the standby VRM node in active and standby deployment mode, that is, the VM name.
Management IP Address of Standby VRM Node: IP address of the standby VRM VM in active and standby deployment mode. The IP address must be within the management plane network segment. You are advised to select an IP address from a private network segment, because a management IP address from a public network segment may pose security risks.
CNA Management IP Address of Standby VRM Node: CNA management IP address of the standby node in active and standby deployment mode.
CNA root Password of Standby VRM Node: Password of user root for the standby CNA node in active and standby deployment mode.
CNA gandalf Password of Standby VRM Node: Password of user gandalf for the standby CNA node in active and standby deployment mode.
Management Plane VLAN: VLAN of the management plane. If no value is specified, the system uses VLAN 0 by default. For details about the configuration method, see Table 3.
Container Management: Whether to enable the container management function in the VRM node.
Configuration Item: When Configuration Mode is set to Custom, configure the CPU (4 to 20 cores) and memory (6 to 30 GB). When Configuration Mode is set to Normal, select one of the following configuration items (see the sizing sketch after this parameter table):
Container Management disabled:
1000VM,50PM: VRM requires 4 CPUs, 6 GB memory, and 120 GB disk space.
3000VM,100PM: VRM requires 8 CPUs, 8 GB memory, and 120 GB disk space.
Container Management enabled:
1000VM,50PM,10 K8S Group,500 K8S Node: VRM requires 4 CPUs, 8 GB memory, and 332 GB disk space plus the image repository capacity (GB).
3000VM,100PM,20 K8S Group,1000 K8S Node: VRM requires 8 CPUs, 12 GB memory, and 332 GB disk space plus the image repository capacity (GB).
NOTE: VM: virtual machine; PM: physical machine (host). In the DCS scenario, FusionCompute requires at least 8 CPUs and 8 GB memory.
Image repository capacity: After Container Management is enabled, Image repository capacity can be configured.
admin Login Password: Login password of user admin. This parameter is mandatory when User Rights Management Mode is set to Common.
sysadmin Login Password: Login password of user sysadmin. This parameter is mandatory when User Rights Management Mode is set to Role-based.
secadmin Login Password: Login password of user secadmin. This parameter is mandatory when User Rights Management Mode is set to Role-based.
secauditor Login Password: Login password of user secauditor. This parameter is mandatory when User Rights Management Mode is set to Role-based.
root Password: Password of the OS account for logging in to the VRM node. You need to set the password during the installation.
grub Password: Password of the internal system account. You need to set the password during the installation.
gandalf Password: Password of user gandalf for logging in to the VM where VRM is to be installed.
postgres Password: Password of user postgres for logging in to the VM where VRM is to be installed.
galax Password: Password of user galax for logging in to the VM where VRM is to be installed.
Time Management
Time Zone: Local time zone of the system. The system determines whether to enable Daylight Saving Time (DST) based on the time zone you set.
NTP: When configuring the NTP clock source, enable the NTP function.
NTP Server: NTP server IP address or domain name. You can enter one to three IP addresses or domain names of the NTP servers. You are advised to use an external NTP server running a Linux or Unix OS. If you enter a domain name for the configuration, ensure that a DNS server is available. If no external NTP server is deployed, set this parameter to the management IP address of the host where the active VRM node resides.
NOTE: If no external NTP server is available, set the time of the node that functions as the NTP server to the correct time. For details, see How Do I Manually Change the System Time on a Node?
Data Backup to Third-party FTP Server/Host: Key information is backed up to a third-party FTP server or host. If the system is abnormal, the backup data can be used to restore the system. If data is backed up to a third-party FTP server, the VRM node automatically uploads key data to the FTP backup server at 02:00 every day. If data is backed up to a host, the system automatically copies the management data excluding monitoring data to the /opt/backupdb directory on the host every hour. The host retains only data generated in one day.
NOTE: If no FTP server is used, select Host (monitoring data will not be backed up).
Protocol Type: If Third-party FTP server is selected, FTPS and FTP are supported. You are advised to select FTPS to enhance file transmission security. If the FTP server does not support the FTPS protocol, select FTP.
NOTE: If FTPS is used, you need to deselect the TLS session resumption option of the FTP server.
Backup Upload Path: Relative path for storing backup files on the third-party backup server. If multiple sites share the same backup server, set the directory to /Backup/VRM-VRM IP address/ for easy identification. If the VRM nodes are deployed in active/standby mode, enter the floating IP address of the VRM nodes.
FTP Server Root Certificate: If data is backed up to a third-party FTP server and the protocol type is FTPS, you need to import the root certificate of the server certificate.
NOTE: If the root certificate is not imported, an alarm is displayed, indicating that the certificate of the FTP server for management data backup is invalid.
Node Name: To back up data to a host, select the node corresponding to the host.
NOTE: The backup directory is /opt/backupdb, which cannot be changed. A maximum of five hosts can be selected. You are advised to select hosts in different clusters.
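As a planning aid, the Normal configuration items listed above can be expressed as a small lookup. The following sketch is illustrative only (the dictionary keys and function name are not part of any product interface); it returns the CPU, memory, and disk figures quoted in this table and, when container management is enabled, adds the image repository capacity to the disk requirement.

# Illustrative sizing helper based on the "Normal" configuration items above.
# The figures are documentation values from this table, not an API.
VRM_SPECS = {
    # (scale, container management enabled) -> (CPUs, memory GB, disk GB)
    ("1000VM,50PM", False): (4, 6, 120),
    ("3000VM,100PM", False): (8, 8, 120),
    ("1000VM,50PM,10 K8S Group,500 K8S Node", True): (4, 8, 332),
    ("3000VM,100PM,20 K8S Group,1000 K8S Node", True): (8, 12, 332),
}

def vrm_requirements(scale: str, container_mgmt: bool, image_repo_gb: int = 0):
    cpus, mem_gb, disk_gb = VRM_SPECS[(scale, container_mgmt)]
    # With container management enabled, the image repository capacity is added to the disk.
    if container_mgmt:
        disk_gb += image_repo_gb
    return {"cpu": cpus, "memory_gb": mem_gb, "disk_gb": disk_gb}

print(vrm_requirements("3000VM,100PM,20 K8S Group,1000 K8S Node", True, image_repo_gb=100))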
Access: If the default VLANs of the switch ports are the same, you need to configure the management plane VLAN for the network ports on the host. For details about how to view the VLAN of a switch port, see the official guide of the switch vendor. This port type does not support allowing multiple VLANs and layer 2 isolation. Therefore, you are advised not to use it as the uplink of the storage plane or service plane.
Trunk: Based on the actual network plan: if the VLAN has been added to the list of allowed VLANs of the switch ports, you need to configure the management plane VLAN for the network ports on the host; if the default VLANs of the switch ports have been added to the list of allowed VLANs, you do not need to configure the management plane VLAN for the network ports on the host. For details about how to view the VLAN of a switch port, see the official guide of the switch vendor.
Hybrid: Based on the actual network plan: if the VLAN has been added to the list of allowed VLANs of the switch ports or the switch ports have been configured to carry the VLAN tag when sending data frames, you need to configure a VLAN for the network ports on the host; if the default VLANs of the switch ports have been added to the list of allowed VLANs or the switch ports have been configured to remove the VLAN tag when sending data frames, you do not need to configure a VLAN for the network ports on the host. For details about how to view the VLAN of a switch port, see the official guide of the switch vendor.
Table 3 is for reference only. The actual networking depends on the actual network plan.
e. Click Confirm.
a. Select the path of the installation package, that is, the path of the folder where the deployment software package is stored. If the
software package is not downloaded, download it as instructed in eDME Software Package . After you select a path, the tool
automatically uploads the software package to the deployment node. If you do not select a path, manually upload the software package
to the /opt/install directory on the deployment node.
Parameter Description
Automatic VM creation: Whether to automatically create a VM. The value can be Yes or No. The other parameters are valid only when this parameter is set to Yes.
Primary Node root Password: Password of user root for logging in to the active node.
CNA name of primary node: Name of the CNA to which the active node belongs.
Disk space of primary node: Disk space of the active node, in GB.
CNA name of child node 1: Name of the CNA to which child node 1 belongs.
CNA name of child node 2: Name of the CNA to which child node 2 belongs.
Deploy Operation Portal or not: Whether to deploy an operation portal for eDME. Options: Yes, No.
Operation Portal Node 1 Host Name: Host name of an operation portal node. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Node 1 IP Address: IP address of the operation portal node. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Node 1 root Password: Password of the root account of the operation portal node. This parameter is valid only when the operation portal is to be deployed.
CNA name of Operation Portal node 1: Name of the CNA to which operation portal node 1 belongs. This parameter is valid only when the operation portal is to be deployed.
Disk space of Operation Portal node 1: Disk space used by operation portal node 1. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Node 2 Host Name: Host name of an operation portal node. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Node 2 IP Address: IP address of the operation portal node. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Node 2 root Password: Password of the root account of the operation portal node. This parameter is valid only when the operation portal is to be deployed.
CNA name of Operation Portal node 2: Name of the CNA to which operation portal node 2 belongs. This parameter is valid only when the operation portal is to be deployed.
Disk space of Operation Portal node 2: Disk space used by operation portal node 2 (unit: GB). This parameter is valid only when the operation portal is to be deployed.
Operation Portal Floating IP Address: Management floating IP address used to log in to the operation portal. It must be in the same network segment as the node IP address and has not been used. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Global Load Balancing IP Address: This parameter is used to configure global load balancing. It must be in the same network segment as the IP address of the operation portal node and has not been used. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Management Password: Password for logging in to the operation portal as user bss_admin. This parameter is valid only when the operation portal is to be deployed.
Auto Scaling Service Node 1 Host Name: The host naming rules are as follows (a name-check sketch follows this parameter table): The value contains 2 to 32 characters. The value contains only uppercase or lowercase letters (A to Z or a to z), digits, and hyphens (-), and cannot contain two consecutive hyphens (--). The value must start with a letter and cannot end with a hyphen (-). The name cannot be localhost or localhost.localdomain.
Auto Scaling Service Node 1 root Password: Password of user root for logging in to AS node 1.
CNA name of Auto Scaling Service node 1: Name of the CNA to which AS node 1 belongs.
Disk space of Auto Scaling Service node 1: Recommended disk space ≥ 555 GB (system disk space ≥ 55 GB; data disk space ≥ 500 GB).
Auto Scaling Service Node 2 Host Name: For details, see the parameter description of Auto Scaling Service Node 1 Host Name.
Auto Scaling Service Node 2 root Password: Password of user root for logging in to AS node 2.
CNA name of Auto Scaling Service node 2: Name of the CNA to which AS node 2 belongs.
Disk space of Auto Scaling Service node 2: Recommended disk space ≥ 555 GB (system disk space ≥ 55 GB; data disk space ≥ 500 GB).
Elastic Container Engine Node 1 Host Name: The host naming rules are as follows: The value contains 2 to 32 characters. The value contains only uppercase or lowercase letters (A to Z or a to z), digits, and hyphens (-), and cannot contain two consecutive hyphens (--). The value must start with a letter and cannot end with a hyphen (-). The name cannot be localhost or localhost.localdomain.
Elastic Container Engine Node 1 root Password: Password of user root for logging in to ECE node 1.
Elastic Container Engine Node 1 Public Service Domain IP Address: Public service domain IP address of ECE node 1.
NOTE: If Automatic VM creation is set to Yes, enter an IP address that is not in use. If Automatic VM creation is set to No, enter the IP address of the node where the OS has been deployed.
CNA name of Elastic Container Engine Service node 1: Name of the CNA to which ECE node 1 belongs.
Disk space of Elastic Container Engine Service node 1: Recommended disk space ≥ 2,955 GB (system disk space ≥ 55 GB; data disk space ≥ 2,900 GB).
Elastic Container Engine Node 2 Host Name: For details, see the parameter description of Elastic Container Engine Node 1 Host Name.
Elastic Container Engine Node 2 IP Address: For details, see the parameter description of Elastic Container Engine Node 1 IP Address.
Elastic Container Engine Node 2 root Password: Password of user root for logging in to ECE node 2.
Elastic Container Engine Node 2 Public Service Domain IP Address: For details, see the parameter description of Elastic Container Engine Node 1 Public Service Domain IP Address.
CNA name of Elastic Container Engine Service node 2: Name of the CNA to which ECE node 2 belongs.
Disk space of Elastic Container Engine Service node 2: Recommended disk space ≥ 2,955 GB (system disk space ≥ 55 GB; data disk space ≥ 2,900 GB).
Elastic Container Engine Floating IP Address: Floating IP address used for the ECE service. It must be an idle IP address in the same network segment as the IP address of the ECE node.
Elastic Container Engine Public Service Domain Floating IP Address: Floating IP address used for the communication between the K8s cluster and the ECE node. It must be an idle IP address in the same network segment as the public service domain IP address of the ECE node.
Elastic Container Engine Global Load Balancing IP Address: IP address used to configure load balancing for the ECE service. It must be an idle IP address in the same network segment as the IP address of the ECE node.
Subnet mask of the public service domain of the Elastic Container Engine Service: Subnet mask of the public service domain of the ECE service.
Port group of the Elastic Container Engine Service in the public service domain: Port group of the public service domain of the ECE service.
NOTE: If a port group has been created on FusionCompute, set this parameter to the name of the created port group. If no port group has been created on FusionCompute, set this parameter to the name of the port group planned for FusionCompute.
IP Address Gateway of Elastic Container Engine public Service Domain: Set the IP address gateway of the ECE public service domain.
Elastic Container Engine Public Service Network-BMS&VIP Subnet Segment: Set the BMS and VIP subnet segments for the ECE public service network.
IP Address Segment of Elastic Container Engine Public Service Network Client: Set the IP address segment of the ECE public service network client.
eDME can manage FusionCompute only when both FusionCompute and eDME are deployed.
Interface Username: Interface username. This parameter is valid only when Manage FusionCompute or not is set to Yes.
Interface Account Password: Password of the interface account. This parameter is valid only when Manage FusionCompute or not is set to Yes.
SNMP Security Username: SNMP security username. This parameter is valid only when Manage FusionCompute or not is set to Yes.
SNMP Encryption Password: SNMP encryption password. This parameter is valid only when Manage FusionCompute or not is set to Yes.
SNMP Authentication Password: SNMP authentication password. This parameter is valid only when Manage FusionCompute or not is set to Yes.
Whether to install eDataInsight Manager: Whether to deploy the DCS eDataInsight management plane. If yes, prepare the product software package of the DCS eDataInsight management plane in advance. Options: Yes, No.
Initial admin Password of Management Portal: Initial password of user admin on the management portal.
NOTE: After setting the password, click Downwards. The system automatically pastes the password to the following passwords (from Initial admin Password of Management Portal to sftpuser Password). The rules of each password are different. If the verification fails after the password is copied downwards, you need to change the password separately.
Initial admin Password of O&M Portal: Initial password of user admin on the O&M portal.
sopuser Password: Password of user sopuser. The sopuser account is used for routine O&M.
ossadm Password: Password of user ossadm. The ossadm account is used to install and manage the system.
ossuser Password: Password of user ossuser. The ossuser account is used to install and run the product software.
Database sys Password: Database sys password. The database sys account is used to manage and maintain the Zenith database and has the highest operation rights on the database.
rts Password: Password of user rts. The rts account is used for authentication between processes and RabbitMQ during process communication.
KMC Protection Password: KMC protection password. KMC is a key management component.
ER Certificate Password: ER certificate password. The ER certificate is used to authenticate the management or O&M portal when you access the portal on a browser.
ETCD root Password: ETCD root password, which is used for ETCD root user authentication.
Whether to install object storage service: Whether to deploy the object storage service. Options: No, Yes-PoE Authentication, Yes-IAM Authentication.
Whether to install application backup service: Whether to deploy the application backup service during operation portal deployment.
If you use Export Parameters to export an XLSX file, you can operate or view the file only in Office 2007 or later version.
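The host naming rules quoted above for the Auto Scaling Service and Elastic Container Engine nodes can be checked with a short script before the parameters are filled in. The following Python sketch only illustrates those rules; the function name is hypothetical and the deployment tool performs its own validation.

# Minimal check of the host naming rules quoted above for the Auto Scaling and
# Elastic Container Engine nodes; the rules come from this table, the code is a sketch.
import re

_HOSTNAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9-]{0,30}[A-Za-z0-9]$")

def is_valid_node_hostname(name: str) -> bool:
    if name.lower() in ("localhost", "localhost.localdomain"):
        return False
    if "--" in name:
        return False
    # 2 to 32 characters, starts with a letter, no trailing hyphen,
    # letters/digits/hyphens only.
    return bool(_HOSTNAME_RE.fullmatch(name))

print(is_valid_node_hostname("as-node-1"))   # True
print(is_valid_node_hostname("1node"))       # False: must start with a letter
print(is_valid_node_hostname("node--1"))     # False: consecutive hyphens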
Parameter Description
UltraVR Template File Directory: Directory where the downloaded UltraVR template file and signature file are stored. If the template file is not downloaded, download it as instructed in UltraVR Software Package.
Owning CNA Name of VM Template: Name of the CNA to which the template VM and UltraVR VM are bound when they are created. The CNA provides storage resources for the VMs.
If you use Export Parameters to export an XLSX file, you can operate or view the file only in Office 2007 or later version.
On the Online Modification Configuration tab page, click Add eBackup in the eBackup Parameter Configuration area. On the Add
eBackup page in the right pane, set related parameters.
Parameter Description
eBackup Template File Directory: Directory where the downloaded eBackup template file and signature file are stored. If the software package is not downloaded, download it as instructed in eBackup Software Package.
Owning CNA Name of VM Template: Name of the CNA to which the template VM is bound when the template VM is created. The CNA provides storage resources for the VM.
Server Role: Specifies the role for VM initialization. Only backup servers or backup proxies are supported. Options: Backup Server, Backup Proxy.
NOTE: If two VMs are deployed at the same time, only one backup server and one backup proxy are supported.
Owning CNA Name of VM: Name of the CNA to which the eBackup VM is bound when the eBackup VM is created. The CNA provides storage resources for the VM.
msuser Password: Password of the interface interconnection user (msuser by default). The password is used for VM initialization.
hcp Password: Password of the SSH connection user so that the password can be automatically changed during VM initialization.
Internal Communication Plane IP Address of Backup Server: Management IP address of the backup server connected to the backup proxy. This parameter can be configured when Server Role is set to Backup Proxy.
hcp Password of Backup Server: Password of the hcp account of the backup server connected to the backup proxy. This parameter can be configured when Server Role is set to Backup Proxy.
Backup Server root Password: Password of the root account of the backup server connected to the backup proxy. If the password is not changed, you can view the default password in the eBackup account list. This parameter can be configured when Server Role is set to Backup Proxy.
IP Address: Management IP address of the production/backup management plane. The IP address must be in the same network segment as the management plane of FusionCompute (a subnet-check sketch follows the note after this table).
IP Address: Management IP address of the internal communication plane. You are advised to use an IP address that is in the same network segment as the management plane of FusionCompute.
Floating IP address: Floating IP address of the internal communication plane. The value must be the same when multiple VMs are deployed. You are advised to use an IP address that is in the same network segment as the management plane of FusionCompute.
IPv4 Route Information: Route information of the network plane. Configure the information based on the network plan. You can click to configure IPv4 Destination Network, IPv4 Destination Network Mask, and IPv4 Route Gateway.
IP Address: Management IP address of the production storage plane or backup storage plane. You are advised to use an independent network plane.
Subnet Mask: Subnet mask of the production storage plane or backup storage plane.
IPv4 Route Information: Route information of the network plane. Configure the information based on the network plan. You can click to configure IPv4 Destination Network, IPv4 Destination Network Mask, and IPv4 Route Gateway.
If network planes are combined based on the eBackup network plane planning and the DHCP service is deployed on the management plane, delete
unnecessary NICs by following instructions provided in "Deleting a NIC" in FusionCompute 8.8.0 Product Documentation. Otherwise, the eBackup service
may be inaccessible.
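The requirement that the eBackup production/backup management plane IP address be in the same network segment as the FusionCompute management plane can be verified quickly. The sketch below is illustrative only; the network and address values are placeholders, not values from this document.

# Quick check (a sketch, not part of the tool) that the planned eBackup management
# IP address sits in the same network segment as the FusionCompute management plane.
import ipaddress

def in_same_segment(ip: str, management_network: str) -> bool:
    return ipaddress.ip_address(ip) in ipaddress.ip_network(management_network, strict=False)

fc_mgmt_network = "192.168.10.0/24"   # FusionCompute management plane (placeholder)
ebackup_mgmt_ip = "192.168.10.50"     # planned eBackup management IP (placeholder)
print(in_same_segment(ebackup_mgmt_ip, fc_mgmt_network))  # True in this example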
Import configurations using an EXCEL file. If multiple components are deployed together, on the Parameter Configuration page,
click the Excel Import Configuration tab and click Download File Template. If eDataInsight or HiCloud is deployed
independently, on the Advanced Service Configuration page, click Download File Template. After filling in the template, import
the parameter file. If the import fails, check the parameter file. If the parameters are imported successfully, you can view the
imported parameters on the Advanced Service Configuration page.
a. In the storage-compute decoupled scenario, the eDataInsight management plane and the OceanStor Pacific HDFS management plane need to communicate with each other, and the eDataInsight service plane and the OceanStor Pacific HDFS service plane need to communicate with each other.
b. The floating IP address of the OceanStor Pacific HDFS management plane is the IP address for logging in to the OceanStor Pacific HDFS web
UI and is on the same management plane as the management IP address.
c. The IP address of the DNS server of OceanStor Pacific HDFS belongs to the service plane and needs to communicate with the eDataInsight
service plane.
d. If OceanStor Pacific HDFS is not reinstalled when eDataInsight is reinstalled, you need to delete related data by referring to section "Deleting
Residual Historical HBase Data from OceanStor Pacific HDFS" in the eDataInsight Product Documentation of the corresponding version
before the reinstallation. Otherwise, HBase component usage will be affected after the reinstallation.
e. If there is a VM cluster with correct specifications that is created using the eDataInsight image, the VM cluster can be used to install and
deploy eDataInsight. VM Deploy Type must be set to Manual Deploy.
f. The value of eDataInsight Deployment Scenarios determines an installation mode:
If the value is custom, the custom installation mode is used. You can choose to install NdpYarn, NdpHDFS, NdpSpark,
NdpKafka, NdpHive, NdpHBase, NdpRanger, NdpFlume, NdpFTP, NdpFlink, NdpES, NdpRedis, NdpClickHouse, and
NdpStarRocks as needed. NdpDiagnos, NdpKerberos, NdpStatus, NdpTool, NdpZooKeeper, and NdpLDAP will be installed
by default.
If the value is hadoop, the Hadoop analysis cluster installation mode is used. NdpFlume, NdpHBase, NdpHDFS, NdpHive,
NdpSpark, NdpYarn, NdpFlink, NdpRanger, NdpDiagnos, NdpTool, NdpKerberos, NdpStatus, NdpZooKeeper, and
NdpLDAP will be installed by default and cannot be modified. If there is an NDP node specified for installing NdpHDFS,
the data disk size of the node must be at least 500 GB, and the data disk size of other nodes must be at least 100 GB. If there
is no node specified for installing NdpHDFS, the data disk size of all nodes must be at least 500 GB.
If the value is clickhouse, the ClickHouse analysis cluster installation mode is used. NdpHDFS, NdpFlume, NdpClickHouse, NdpDiagnos, NdpStatus, NdpTool, NdpKerberos, NdpZooKeeper, and NdpLDAP will be installed by default and cannot be modified. If there is an NDP node specified for installing NdpHDFS or NdpClickHouse, the data disk size of the node must be at least 500 GB, and the data disk size of other nodes must be at least 100 GB. If there is no node specified for installing NdpHDFS or NdpClickHouse, the data disk size of all nodes must be at least 500 GB (see the sizing sketch after this list).
If one or more of NdpSpark, NdpFlink, and NdpHive are installed, NdpHudi will be installed by default.
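The data-disk sizing rules for the hadoop and clickhouse installation modes can be restated as a small check. The following Python sketch is illustrative only; the node names, component sets, and disk sizes in the example are assumptions used to show the rule, not values required by the product.

# Sketch of the data-disk sizing rule described above for the hadoop and clickhouse
# installation modes; node roles and sizes below are illustrative inputs.
def check_data_disks(scenario: str, nodes: dict) -> list:
    """nodes maps a node name to {'components': set_of_components, 'data_disk_gb': int}."""
    heavy = {"hadoop": {"NdpHDFS"}, "clickhouse": {"NdpHDFS", "NdpClickHouse"}}[scenario]
    any_heavy_node = any(n["components"] & heavy for n in nodes.values())
    problems = []
    for name, n in nodes.items():
        # 500 GB for nodes carrying NdpHDFS/NdpClickHouse (or all nodes if none is
        # explicitly assigned), 100 GB for the remaining nodes.
        required = 500 if (n["components"] & heavy or not any_heavy_node) else 100
        if n["data_disk_gb"] < required:
            problems.append(f"{name}: {n['data_disk_gb']} GB < required {required} GB")
    return problems

nodes = {
    "ndp1": {"components": {"NdpHDFS", "NdpYarn"}, "data_disk_gb": 500},
    "ndp2": {"components": {"NdpYarn"}, "data_disk_gb": 100},
}
print(check_data_disks("hadoop", nodes))  # an empty list means the plan satisfies the rule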
Parameter Description
If you select Manual Deploy, the VM manually provisioned must meet the following requirements:
The VM template must be provided in the eDataInsight_24.0.0_DayuImage_Euler-x86_64.zip (x86 architecture) or
eDataInsight_24.0.0_DayuImage_Euler-aarch64.zip (Arm architecture) software package.
IP addresses must be configured in a specified sequence. That is, the service plane port group and service plane IP address must be
configured for the network port of the first NIC of the VM.
After the VM is provisioned, delete unnecessary NICs, create and attach required disks, and then start the VM.
DNS server address of the Pacific HDFS: This parameter is mandatory in the storage-compute decoupled scenario. It is the address of the OceanStor Pacific domain name server, which is used to convert a domain name to an IP address.
Parameter Description
VM Type: The eDataInsight platform uses two types of VMs: CloudSOP (management VMs) and computeNode (service VMs).
VM Host Name: 1. There are only three CloudSOP VMs. The VM names must be cloudsopN (where N is a digit) and should not be changed. 2. The number of computeNode VMs is N (N can be 3, 4, ...). The VM names must be ndp1, ndp2, and so on, starting from 1 and consecutive, and should not be changed. 3. You do not need to configure the management plane IP address for computeNode VMs. (A naming-check sketch follows this table.)
Management plane IP: This parameter needs to be configured only for CloudSOP VMs.
Service Disk (GB): The minimum service disk size of a CloudSOP VM is 200 GB. The minimum service disk size of a computeNode VM is 200 GB.
Data Disk (GB): No data disk is mounted to a CloudSOP VM (set this parameter to 0) by default. The minimum size of the data disk of a computeNode VM is 100 GB.
Service plane port group: Set the port group of the service plane. This port group cannot be the same as that of the management plane.
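The naming and counting rules above lend themselves to a quick pre-check of the planned VM list. The sketch below only illustrates those rules (the function name is hypothetical); it assumes the names in the printed table mean cloudsop followed by a digit and ndp followed by a number, consistent with the example VMs ndp1 to ndp3 shown next.

# Sketch of the VM naming and counting rules above: exactly three CloudSOP VMs named
# cloudsop<digit>, and N (N >= 3) computeNode VMs named ndp1, ndp2, ... consecutively.
import re

def check_vm_names(cloudsop_vms, compute_vms):
    issues = []
    # Exactly three CloudSOP VMs, each named cloudsop<digit> (for example, cloudsop1).
    if len(cloudsop_vms) != 3 or not all(re.fullmatch(r"cloudsop\d", n) for n in cloudsop_vms):
        issues.append("There must be exactly three CloudSOP VMs named cloudsop<digit>.")
    # N computeNode VMs (N >= 3) named ndp1, ndp2, ... consecutively from 1.
    expected = {f"ndp{i}" for i in range(1, len(compute_vms) + 1)}
    if len(compute_vms) < 3 or set(compute_vms) != expected:
        issues.append("computeNode VMs must be named ndp1..ndpN (N >= 3, consecutive).")
    return issues

print(check_vm_names(["cloudsop1", "cloudsop2", "cloudsop3"], ["ndp1", "ndp2", "ndp3"]))  # []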
The following example lists three computeNode VMs. The columns are: VM Host Name | CNA Name | VM Type | Management Plane IP | Custom Component | CPU | Memory (GB) | Service Disk (GB) | Data Disk (GB) | Service Plane IP | Service Plane Subnet Mask | Service Plane Port Group | Shared Storage Name.
ndp1 | linux-CNA01 | computeNode | - | NdpStarRocks | 16 | 64 | 200 | 500 | 192.168.6.67 | 255.255.255.0 | yewu | NoF
ndp2 | linux-CNA02 | computeNode | - | NdpStarRocks | 16 | 64 | 200 | 500 | 192.168.6.68 | 255.255.255.0 | yewu | NoF
ndp3 | linux-CNA03 | computeNode | - | NdpStarRocks | 16 | 64 | 200 | 500 | 192.168.6.69 | 255.255.255.0 | yewu | NoF
NdpYarn - A service that manages and allocates cluster resources in a unified manner.
NdpFlume - Provides a service for collecting, aggregating, and transferring log and event data.
NdpFlink - Provides a distributed computing framework oriented to stream processing and batch processing.
NdpClickHouse - Columnar database management system (DBMS) for online analysis (OLAP).
NdpStarRocks - Uses StarRocks to implement a high-performance interactive analytical database with all-scenario massively
parallel processing (MPP).
Configure related parameters based on Table 11. Retain the default values for the parameters that are not described in the table.
Each entry below lists the category and parameter, the description, whether the parameter is mandatory, and an example value.
Virtualization Basic Information - FusionCompute IP: In separated deployment mode, enter the management IP address of FusionCompute. (Mandatory in separated deployment mode; example: 192.168.*.166)
Virtualization Basic Information - FusionCompute login user: In separated deployment mode, enter the administrator account of FusionCompute. (Mandatory in separated deployment mode; example: admin)
Virtualization Basic Information - FusionCompute login password: In separated deployment mode, enter the password of an administrator account of FusionCompute. (Mandatory in separated deployment mode; example: Huawei12!@)
General Parameters - NTP IP address of the PaaS management plane: NTP IP address of the PaaS management zone, which is used to synchronize the clock source of the time zone. If there are multiple NTP IP addresses, separate them with commas (,) and ensure that the time of multiple clock sources is the same. (Optional; example: -)
General Parameters - Software package path: Local path for storing obtained software packages. (Mandatory; example: D:\Hicloud\package)
GDE Management Plane Password - op_svc_servicestage tenant password: Password of the op_svc_servicestage tenant of the management zone. This parameter can be customized. (Mandatory; example: cnp2024@HW)
GDE Management Plane Password - op_svc_cfe tenant password: Password of the op_svc_cfe tenant of the management zone. This parameter can be customized. (Mandatory; example: cnp2024@HW)
GDE Management Plane Password - op_svc_pom tenant password: Password of the op_svc_pom tenant of the management zone. This parameter can be customized. (Mandatory; example: cnp2024@HW)
GDE Management Plane Password - tenant password: Password of a tenant. The value must be the same as the password of the op_svc_cfe tenant of the management zone. This is the confirmation password of the tenant. (Mandatory; example: cnp2024@HW)
GDE Management Plane Password - install_op_svc_cfe user password of tenant: Password of the installation user of the tenant. This parameter can be customized. (Mandatory; example: cnp2024@HW)
GDE Management Plane Password - Password of the paas user of the management zone node: The password is preset in the image. The value is fixed at Image0@Huawei123 and does not need to be changed. (Mandatory; example: Image0@Huawei123)
GDE Management Plane Password - sshusr password of management node: Password of the sshusr user of the management zone node. The password is preset in the image. The value is fixed at Image0@Huawei123 and does not need to be changed. (Mandatory; example: Image0@Huawei123)
GDE Management Plane Password - root password of management node: Password of the root user of the management zone node. The password is preset in the image. The value is fixed at Image0@Huawei123 and does not need to be changed. (Mandatory; example: Image0@Huawei123)
GDE Management Plane Password - paas password of data node: Password of the paas user of the data zone node. The password is preset in the image. The value is fixed at Image0@Huawei123 and does not need to be changed. (Mandatory; example: Image0@Huawei123)
GDE Management Plane Password - root password of data node: Password of the root user of the data zone node. The password is preset in the image. The value is fixed at Image0@Huawei123 and does not need to be changed. (Mandatory; example: Image0@Huawei123)
GDE Data Plane Password - system administrator admin password: Password of a system administrator. This parameter can be customized. (Mandatory; example: cnp2024@HW)
GDE Data Plane Password - security administrator secadmin password: Password of the security administrator. This parameter can be customized. (Mandatory; example: cnp2024@HW)
GDE Data Plane Password - security auditor secauditor password: Password of the security auditor. This parameter can be customized. (Mandatory; example: cnp2024@HW)
GDE Network Plane Configuration - DVS port group name: Name of the distributed virtual switch (DVS) port group in FusionCompute Manager. The default name is managePortgroup. To query the name, log in to FusionCompute Manager as the admin user, choose Resource Pool > DVS > Port Group, and check the port group name, which is the required name. (Mandatory; example: managePortgroup)
GDE Network Plane Configuration - Keepalived VIP (1) of data zone: keepalived VIP address of VMs in the data zone; each keepalived VIP must be unique. (Mandatory; example: 192.168.*.16)
GDE Network Plane Configuration - Keepalived VIP (2) of data zone: keepalived VIP address of VMs in the data zone; each keepalived VIP must be unique. (Mandatory; example: 192.168.*.17)
GDE Network Plane Configuration - Keepalived VIP (3) of data zone: keepalived VIP address of VMs in the data zone; each keepalived VIP must be unique. (Mandatory; example: 192.168.*.18)
GDE Network Plane Configuration - Gaussdb VIP of data zone: GaussDB VIP address of VMs in the data zone. (Mandatory; example: 192.168.*.19)
Service Deployment Parameters - Machine account password: Machine-machine account password. This parameter can be customized. (Mandatory; example: Changeme_123@)
eDME - OC plane IP: IP address of the eDME O&M portal. (Optional; example: 192.168.*.250)
Node planning example (host name: management IP address, owning CNA, datastore):
platform-node1: 192.168.*.191, MCNA01, HCI_StoragePool0
platform-node2: 192.168.*.192, MCNA02, HCI_StoragePool0
platform-node3: 192.168.*.193, MCNA02, HCI_StoragePool0
gkit: 192.168.*.194, MCNA02, autoDS_MCNA02
The node table uses the following columns: Host Name, VM Type, Management Zone IP, Floating IP Address of Management Zone, Keepalived VIP Address of Data Zone (1), Keepalived VIP Address of Data Zone (2), Keepalived VIP Address of Data Zone (3), and GaussDB VIP Address of Data Zone.
All types of VIP addresses must be planned on two nodes, and the VIP addresses of the same type must be the same (a quick consistency-check sketch follows).
The slash (/) in the table indicates that network planning is not involved.
IP addresses in the table are only examples. Actual IP addresses may vary.
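Because each keepalived VIP and the GaussDB VIP of the data zone must be unique and belong to the planned segment, a short consistency check can be run on the plan. The sketch below is illustrative only; the concrete addresses stand in for the masked 192.168.*.x examples from the table.

# Sketch only: verifies that the planned keepalived VIPs and the GaussDB VIP of the
# data zone are all different and belong to the expected management segment.
import ipaddress

def check_vips(vips: dict, segment: str) -> list:
    problems = []
    if len(set(vips.values())) != len(vips):
        problems.append("VIP addresses must be unique.")
    net = ipaddress.ip_network(segment, strict=False)
    for name, ip in vips.items():
        if ipaddress.ip_address(ip) not in net:
            problems.append(f"{name} ({ip}) is outside {net}.")
    return problems

vips = {
    "Keepalived VIP (1)": "192.168.1.16",
    "Keepalived VIP (2)": "192.168.1.17",
    "Keepalived VIP (3)": "192.168.1.18",
    "GaussDB VIP": "192.168.1.19",
}
print(check_vips(vips, "192.168.1.0/24"))  # an empty list means the plan is consistent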
General parameter - Template File: Local path for storing the template file of the SFS VM. If the software package is not downloaded, download it as instructed in SFS Software Package.
General parameter - Owning CNA of Template: Name of the CNA to which the SFS VM template is uploaded.
General parameter - Owning Datastore of Template: Name of the datastore where the SFS VM template resides after being uploaded to FusionCompute.
NOTE: Obtain the default password of account root from the "Type A (Background)" sheet in STaaS Solution 8.5.0 Account List (for DME and eDME). After the installation is complete, the password of account root is automatically updated to the password of user root on the management plane.
SFS management configuration - Language: OS language of the management plane VM, which must be the same as the language configured on eDME.
SFS management configuration - Floating IP Address of GaussDB: Floating IP address of the GaussDB database of the management plane VM. It is recommended that the IP address be the next consecutive IP address after the IP addresses of SFS VMs. For example, if the IP addresses of SFS_DJ are 192.168.1.10 and 192.168.1.11, you are advised to set the IP address to 192.168.1.12. (A small IP-planning sketch follows this table.)
SFS management configuration - Floating IP Address of Management Portal: Floating IP address of the management plane VM. It is recommended that the IP address be the next consecutive IP address after the floating IP address of the GaussDB database. For example, if the floating IP address of the GaussDB database is 192.168.1.12, you are advised to set this IP address to 192.168.1.13.
SFS management configuration - eDME Operation Portal IP Address: IP address used by SFS to interconnect with the eDME operation portal.
SFS management password - root Password of Management Portal: Password of user root on the management plane. It is used as the password of the sfs_dj_admin account to log in to the SFS-DJ foreground management page, and the password of machine-machine accounts (such as op_svc_sfs or op_service_sfs) for connecting the SFS-DJ background to other services.
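The "next consecutive IP address" recommendation for the GaussDB and management portal floating IP addresses can be derived from the SFS VM addresses. The following sketch is illustrative only and simply reproduces the example from the table (192.168.1.10 and 192.168.1.11 leading to 192.168.1.12 and 192.168.1.13).

# Sketch of the recommendation above: the GaussDB floating IP follows the SFS VM IPs,
# and the management portal floating IP follows the GaussDB floating IP.
import ipaddress

def recommend_sfs_floating_ips(sfs_vm_ips: list):
    last = max(ipaddress.ip_address(ip) for ip in sfs_vm_ips)
    gaussdb_fip = last + 1          # next consecutive IP after the SFS VM IPs
    portal_fip = gaussdb_fip + 1    # next consecutive IP after the GaussDB floating IP
    return str(gaussdb_fip), str(portal_fip)

print(recommend_sfs_floating_ips(["192.168.1.10", "192.168.1.11"]))  # ('192.168.1.12', '192.168.1.13')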
Parameter Description
Two-node deployment: SFS needs to be deployed on two nodes simultaneously. You are advised to deploy SFS on different CNA nodes.
Owning CNA: Name of the CNA to which the SFS VM template is uploaded.
System Disk (GB): The minimum system disk size is 240 GB.
Parameters required for deploying the eCampusCore basic edition:
software package path: Local path for storing VM templates, certificates, and software packages, for example, D:\packages. The following files must be included: the obtained software packages and ASC verification files; the VM template (the template file does not need to be decompressed); the obtained eDME and FusionCompute certificates.
Owning CNA of Template: CNA to which the VM template belongs. The template is uploaded to the CNA where FusionCompute is installed.
Owning Datastore of Template: Data storage to which the eCampusCore VM template belongs. The template is uploaded to the FusionCompute data storage.
NOTE: If FusionCompute has been interconnected with shared datastore, set this parameter to the name of the datastore. If FusionCompute has not been interconnected with shared datastore, set this parameter to the name of the planned datastore.
Template Default Password: Preset password of the template file. Set this parameter to Huawei@12F3.
Password of the O&M management console and database: Password of the O&M management console and database, password of the admin user of the eCampusCore O&M management console, password of the sysadmin user of the database, password of the eDME image repository, and machine-machine account password.
Password used in SNMPv3 authentication: SNMPv3 authorization password used by service components. The verification rules are the same as those of the password of the O&M management console and database, but the two passwords must be different.
FusionCompute eContainer cluster master node manage start IP: Start IP address of the FusionCompute management subnet in the eContainer cluster. An example is 10.168.100.2.
FusionCompute eContainer cluster master node manage end IP: End IP address of the FusionCompute management subnet in the eContainer cluster. An example is 10.168.100.6.
Floating IP address of the internal gateway: VIP of the FusionCompute management subnet segment of the internal gateway.
NFS floating IP: NFS VIP, which is used for internal access and NFS configuration.
Configuring IaaS Information of the Integration Framework:
Subnet Mask: Subnet mask of the FusionCompute management subnet. An example is 255.255.255.0.
Management Plane Port Group Name: Subnet port group name. Reuse the FusionCompute management network port group and set this parameter to managePortgroup. To obtain the port group information, log in to FusionCompute and choose Resource Pool > Network. On the page that is displayed, click the ManagementDVS switch to view information on the Port Groups tab page.
Management Plane Gateway: Gateway address of the FusionCompute management subnet. Set this parameter based on the services and management network segments.
Interface Interconnection Username / Password of the interface interconnection user: User and password used by eCampusCore to call FusionCompute interfaces to complete deployment tasks. Set these parameters to the user and password configured during OpsMon user configuration.
Configuring Global Parameters of the Integration Framework:
VM root password: Password of the root user of the VM to be applied for, which cannot be the same as the Template Default Password.
sysomc user password: Password of the sysomc user created on the VM during installation.
Language: Language of the VM on the management plane. Set this parameter to the same value as the current language configured on the eDME.
Time Zone/Region: Region of the time zone to which the deployment environment belongs.
Time Zone/Area: Area of the time zone to which the deployment environment belongs.
eDME Operation Portal Floating IP Address: Floating IP address of the eDME operation portal. Set this parameter to the IP address for logging in to the eDME operation portal.
Parameter Description
VM Host Name: Host name of the eCampusCore VM. Retain the default value in the template.
Owning CNA Name: CNA where the VM that is automatically created during the installation is located.
Datastore Name: Name of the datastore associated with the CNA node.
NOTE: If FusionCompute has interconnected with shared datastore, set this parameter to the name of the datastore. If FusionCompute has not interconnected with shared datastore, set this parameter to the name of the planned datastore.
Memory (GB): Memory specifications. Retain the default value in the template.
System Disk (GB): System disk specifications. Retain the default value in the template.
Data Disk (GB): Data disk specifications. Retain the default value in the template.
16. Click Next. On the Confirm Parameter Settings page that is displayed, check the configuration information. If the information is correct,
click Deploy Now.
17. Go to the Pre-deployment Check page, perform an automatic check and a manual check, and check whether each check item is passed. If any check item fails, perform operations as prompted to meet the check standard. If all check items are passed, click Execute Task.
18. Go to the Execute Deployment page and check the status of each execution item. After all items are successfully executed, click Finish.
If you use Export Report to export an XLSX file, you can operate or view the file only in Office 2007 or later version.
If the status of the Interconnecting and Configuring Shared Storage execution item is Pause, you need to configure the shared storage.
Click Continue in the Operation column of the Interconnecting and Configuring Shared Storage execution item.
Add shared storage based on the storage type. For details, see Operation and Maintenance > Service Management > Storage
Management in FusionCompute 8.8.0 Product Documentation.
After the shared storage is added, select the added shared storage or local disk from the datastore list to configure the datastore for the
VM.
Before using eVol storage (NVMe), ensure that the available hugepage memory capacity is greater than the memory capacity required for deploying the management VMs, and ensure that 1 GB hugepage memory has been enabled on the hosts where the VMs reside (a quick host-side check sketch follows these notes).
If an eVol storage device is used, ensure that the same association protocol is used between all associated hosts and the eVol storage device.
If VMs use scale-out storage, disk copy requirements must be met to ensure that the memory reservation is 100%.
When eVol storage or scale-out block storage is used, add virtualized local disks to the CNA nodes where eDME is deployed. For details, see
Operation and Maintenance > Service Management > Storage Resource Creation (for Local Disks) > Adding a Datastore in FusionCompute
8.8.0 Product Documentation. The disk space required by each eDME VM is 2 GB.
If the current deployment task fails and the deployment cannot continue, log in to FusionCompute and manually delete the VMs, VM templates, and
image files created in the current task.
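The hugepage prerequisite for eVol storage can be checked on each target CNA before deployment. The sketch below is illustrative only and is not a SmartKit or FusionCompute tool: it reads /proc/meminfo on a Linux host and compares the free hugepage memory with the memory planned for the management VMs; the required_gb value is an assumed input.

# Rough host-side check (Linux only) of the hugepage requirement mentioned above.
def free_hugepage_gb(meminfo_path: str = "/proc/meminfo") -> float:
    values = {}
    with open(meminfo_path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            values[key.strip()] = rest.strip()
    free_pages = int(values.get("HugePages_Free", "0"))
    size_kb = int(values.get("Hugepagesize", "0 kB").split()[0])
    return free_pages * size_kb / (1024 * 1024)

required_gb = 64  # assumed total memory of the management VMs planned on this host
available = free_hugepage_gb()
print(f"free hugepage memory: {available:.1f} GB, required: {required_gb} GB, "
      f"ok: {available > required_gb}")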
19. If the first CNA node fails to be deployed, return to the Site Deployment Delivery page to view the list of supported servers.
If the server is not supported, you need to manually install the host. For details, see "Installing Hosts Using ISO Images (x86)" and "Installing
Hosts Using ISO Images (Arm)" in FusionCompute 8.8.0 Product Documentation.
After the CNA is manually installed, click Skip in the Operation column of the First CNA node deployment execution item on the
installation tool.
20. After FusionCompute is installed, click View Portal Link on the Execute Deployment page to view the FusionCompute address. Click the
FusionCompute address to go to the VRM login page.
After the new FusionCompute environment is installed, if you log in to the environment within 30 minutes, the CNA status is normal, and the alarm ALM-
10.1000027 Heartbeat Communication Between the Host and VRM Interrupted is generated, the alarm will be automatically cleared after 30 minutes.
Otherwise, clear the alarm by following the instructions provided in ALM-10.1000027 Heartbeat Communication Between the Host and VRM
Interrupted in FusionCompute 8.8.0 Product Documentation.
21. After eDME is installed, click View Portal Link on the Execute Deployment page to view the eDME address. Click the eDME address to
go to the login page of the O&M portal or the operation portal. You can log in to the O&M portal and operation portal to check whether the
eDME software is successfully installed. For details, see Verifying the Installation .
Procedure
Determine the method of binding network ports.
In active-backup mode, you can specify a primary network port among the selected network ports. If the primary network port has been specified, you
can configure the updelay of the primary network port as prompted.
In load sharing mode, configure port aggregation on the switches connected to the ports so that the host ports to be bound are configured in the same
Eth-trunk as the ports on the peer switches. Otherwise, network communication will be abnormal.
In LACP mode, some switches need to enable the bridge protocol data unit (BPDU) protocol packet forwarding function on the Eth-trunk. For details
about whether to enable the function, see the user guide of the corresponding switch model. If the function needs to be enabled and the switch is
Huawei S5300, run the following commands:
<S5352_01>sys
[S5352_01]interface Eth-Trunk x
[S5352_01-Eth-Trunkx]mode lacp-static
[S5352_01-Eth-Trunkx]bpdu enable
For details about how to configure port aggregation on a switch, see the switch user guide.
Active-backup: applies to scenarios where two network ports are to be bound. This mode provides high reliability. The bandwidth of the bound port in this mode equals that of a member port. (A simple mode-selection sketch follows this list of modes.)
Round-robin: applies to scenarios where two or more network ports are to be bound. The bandwidth of the bound port in this mode is
higher than that of a member port, because the member ports share workloads in sequence.
This mode may result in data packet disorder because traffic is evenly sent to each port. Therefore, MAC address-based load balancing is preferred over round-robin among the load sharing modes.
IP address and port-based load balancing: applies to scenarios where two or more network ports are to be bound. The bandwidth of
the bound port in this mode is higher than that of a member port, because the member ports share workloads based on the IP address and
port-based load sharing algorithm.
Source-destination-port-based load balancing algorithm: When the packets contain IP addresses and ports, the member ports share
workloads based on the source and destination IP addresses, ports, and MAC addresses. When the packets contain only IP addresses, the
member ports share workloads based on the IP addresses and MAC addresses. When the packets contain only MAC addresses, the
member ports share workloads based on the MAC addresses.
MAC address-based load balancing: applies to scenarios where two or more network ports are to be bound. The bandwidth of the
bound port in this mode is higher than that of a member port, because the member ports share workloads based on the MAC addresses
of the source and destination ports.
This mode is recommended when most network traffic is on the layer 2 network. This mode allows network traffic to be evenly
distributed based on MAC addresses.
MAC address-based LACP: This mode is developed based on the MAC address-based load balancing mode. In MAC address-
based LACP mode, the bound port can automatically detect faults on the link layer and trigger a switchover if a link fails.
IP address-based LACP: applies to scenarios where two or more network ports are to be bound. The bandwidth of the bound port in this mode is higher than that of a member port, because the member ports share workloads based on the source-destination-IP-address-based load sharing algorithm. When the packets contain IP addresses, the member ports share workloads based on the IP addresses and MAC addresses. When the packets contain only MAC addresses, the member ports share workloads based on the MAC addresses. In this mode, the bound port can also automatically detect faults on the link layer and trigger a switchover if a link fails.
This mode is recommended when most network traffic goes across layer 3 networks.
IP address and port-based LACP: applies to scenarios where two or more network ports are to be bound. The bandwidth of the bound
port in this mode is higher than that of a member port, because the member ports share workloads based on the IP address and port-
based load sharing algorithm. In this mode, the bound port can also automatically detect faults on the link layer and trigger a switchover
if a link fails.
(x86 architecture) The following binding modes are available for DPDK-driven physical NICs:
DPDK-driven active/standby: used for user-mode network port binding. The principle and application scenario are the same as those of Active-backup for common NICs, but this mode provides better network packet processing performance than Active-backup.
DPDK-driven LACP based on the source and destination MAC addresses: used for user-mode network port binding. The principle and application scenario are the same as those of MAC address-based LACP for common NICs, but this mode provides better network packet processing performance than MAC address-based LACP.
DPDK-driven LACP based on the source and destination IP addresses and ports: used for user-mode network port binding. The principle and application scenario are the same as those of the source-destination-port-based load balancing algorithm for common NICs, but this mode provides better network packet processing performance than that algorithm.
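For hosts that use common (kernel-driven) NICs, you can check the effective binding mode and the link state of each member port from the host shell. The following is a minimal sketch, assuming the host exposes standard Linux bonding information under /proc/net/bonding and that the bond name is known; it does not apply to DPDK-driven bonds, which run in user mode.
ls /proc/net/bonding/                # list kernel-managed bonds on the host
cat /proc/net/bonding/<bond name>    # shows the bonding mode, LACP state, and per-member link status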
Only NICs of the same type can be bound to Mellanox MT27712A0 NICs, and a bond supports a maximum of four Mellanox network ports.
8. In the network port list, select the physical network ports to be bound.
You are advised to bind network ports on different NICs to prevent network interruption caused by the fault of a single NIC.
9. Click OK.
The network ports are bound.
To change the binding mode of a bound port, locate the row that contains the bound port, click Modify, and change the binding mode in the
displayed dialog box.
In active-backup binding mode, you can select the primary network port from the Primary Network Port drop-down list. If the primary network port
has been specified, you can configure the updelay of the primary network port as prompted.
Switching between different load sharing modes or between different LACP modes interrupts network communication of the bound network port for 2s
to 3s.
If the binding mode is changed from the active-backup mode to load sharing mode, port aggregation must be configured on the switch to which
network ports are connected. If the binding mode is changed from the load sharing mode to active-backup mode, the port aggregation configured on the
switch must be canceled. Otherwise, the network communication may be abnormal.
If the binding mode is changed from the LACP mode to another mode, port configurations must be modified on the switch to which network ports
are connected. If the binding mode is changed from another mode to the LACP mode, port aggregation in LACP mode must be configured on the
switch to which network ports are connected. Otherwise, the network communication may be abnormal.
Configuration operations on the switch may interrupt the network communication. After the configurations are complete, the network communication is
automatically restored. If the network communication is not restored, perform either of the following operations:
Ping the destination IP address from the switch to trigger a MAC table update.
Select a member port in port aggregation, disable other ports on the switch, change the binding mode, and enable those ports.
When the resources of the host management domain are fully loaded, the Modify Aggregation Port operation cannot be scheduled due to insufficient
resources. As a result, the network interruption duration is prolonged. You can increase the number of CPUs in the management domain to solve this
problem.
11. On the Cluster tab page, click the cluster to be bound with network ports in batches.
The Summary tab page is displayed.
12. Choose More > Batch Operation > Batch Bind Network Ports in the upper part of the page.
The Batch Bind Network Ports page is displayed, as shown in Figure 2.
15. Open the template file and set the network port parameters on the Config sheet based on the information provided on the
Host_BindNetworkPort sheet.
The parameters include Host IP Address, Host ID, Network Port Names, Network Port IDs, Bound Network Port Name, and Binding
Mode.
17. On the Batch Bind Network Ports page, click Select File.
A dialog box is displayed.
If a dialog box is displayed, indicating that the operation fails, rectify the fault based on the failure cause and try again.
Scenarios
This section guides software commissioning engineers to load a license file to a site after FusionCompute is installed so that FusionCompute can
provide licensed services for this site within the specified period.
You can obtain the license using either of the following methods:
Apply for a license based on the electronic serial number (ESN) and load the license file.
Share a license file with another site. When a license is shared, the total number of physical resources (CPUs) and container resources (vCPUs)
at each site cannot exceed the license limit.
Prerequisites
Conditions
You need to obtain the following information before sharing a license file with another site:
If VRM nodes are deployed in active/standby mode at the site, you have obtained the VRM node floating IP address.
Data
Data preparation is not required for this operation.
Procedure
Log in to FusionCompute.
6. Select License server and check whether the value of License server IP address is 127.0.0.1.
a. If yes, go to 7.
b. If no, set License server IP address to 127.0.0.1, and click OK. Then, go to 7.
For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it.
For carrier users: Visit https://support.huawei.com , search for the document by name, and download it.
If the VRM version of the license client is FusionCompute 8.8.0, a VRM node in a version earlier than FusionCompute 8.8.0 cannot be used as a license server.
13. Run the following command on the VRM node of a later version to transfer the script to the /home/GalaX8800/ directory of the VRM node
of an earlier version. Then, move the script to the /opt/galax/gms/common/modsysinfo/ directory.
scp -o UserknownHostsFile=/dev/null -o StrictHostKeyChecking=no /opt/galax/gms/common/modsysinfo/keystoreManage.sh
gandalf@IP address of the VRM node of an earlier version:/home/GalaX8800/
cp /home/GalaX8800/keystoreManage.sh /opt/galax/gms/common/modsysinfo/
a. Import the VRM certificate of the site where the license file has been loaded to the local end. For details, see "Manually Importing
the Root Certificate" in FusionCompute 8.8.0 O&M Guide.
b. Import the VRM certificate of the local end to the site where the license file has been loaded.
To obtain the VRM certificate of the local end, perform the following steps:
i. Use PuTTY and the management IP address to log in to the active VRM node as user gandalf.
ii. Run the following command and enter the password of user root to switch to user root:
su - root
iii. Run the following command to copy server.crt to the /home/GalaX8800 directory:
cp /etc/galax/certs/vrm/server.crt /home/GalaX8800/
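Before importing the copied certificate at the peer site, you can optionally inspect it first. A minimal check, assuming OpenSSL is available on the VRM node:
openssl x509 -in /home/GalaX8800/server.crt -noout -subject -issuer -dates   # confirm the subject, issuer, and validity period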
License server IP address: Enter the management IP address of the VRM node of the site that has the license file loaded. If the site
has two VRM nodes working in active/standby mode, enter the floating IP address of the VRM nodes.
Account: Enter the username of the FusionCompute administrator of the site that has the license file loaded.
Password: Enter the password of the FusionCompute administrator of the site that has the license file loaded.
The FusionCompute administrator at the site where the license has been loaded must be a new machine-machine account whose Subrole is
administrator or a new system super administrator account.
The VRM that is activated in associated mode cannot be set as the license server.
The keys of VRM nodes that share the license must be the same. If they are different, change them to be the same.
If the VRM nodes of different versions share the license, change the keys of the later version to the keys of the earlier version. The procedure is as
follows:
a. Run the following command on the VRM nodes of the later version to transfer the script to the /home/GalaX8800/ directory of the VRM nodes
of the earlier version.
scp -o UserknownHostsFile=/dev/null -o StrictHostKeyChecking=no /opt/galax/root/vrm/tomcat/script/updateLmKey.sh gandalf@IP
address of a VRM node in an earlier version:/home/GalaX8800/
b. Run the following command on the VRM nodes of the earlier version to query the keys of VRM nodes of the earlier version.
sh /home/GalaX8800/updateLmKey.sh query
c. Run the following command on the VRM nodes of the later version to change the keys of the later version to those of the earlier version. After
this command is executed, the VRM service automatically restarts.
sh /opt/galax/root/vrm/tomcat/script/updateLmKey.sh set
The following command output is displayed:
Please Enter aes key:
Enter the key and press Enter. If the following information is displayed in the command output, the key is changed successfully.
Redirecting to /bin/systemctl restart vrmd.service
success
If VRM nodes of the same version share a license, change the key of the client to that of the server. The procedure is as follows:
a. Run the following command on the server VRM node to query the key:
sh /opt/galax/root/vrm/tomcat/script/updateLmKey.sh query
b. Run the following command on the client VRM node to set the key of the client to the key of the server. After this command is executed, the
VRM service automatically restarts.
sh /opt/galax/root/vrm/tomcat/script/updateLmKey.sh set
The following command output is displayed:
Please Enter aes key:
Enter the key and press Enter. If the following information is displayed in the command output, the key is changed successfully.
Redirecting to /bin/systemctl restart vrmd.service
success
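After the key is set and the VRM service restarts, you can confirm that the service is running again before retrying the license operation. A minimal check, assuming vrmd is managed by systemd as indicated by the command output above:
systemctl status vrmd.service --no-pager   # the service should be in the active (running) state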
Scenarios
This section guides administrators to configure available MAC address segments for the system on FusionCompute to allocate a unique MAC
address to each VM.
FusionCompute provides 100,000 MAC addresses for users, ranging from 28:6E:D4:88:B2:A1 to 28:6E:D4:8A:39:40. The first 5000 addresses
(28:6E:D4:88:B2:A1 to 28:6E:D4:88:C6:28) are dedicated for VRM VMs. The default address segment for new VMs is 28:6E:D4:88:C6:29 to
28:6E:D4:8A:39:40.
If only one FusionCompute environment is available on the Layer 2 network, the FusionCompute environment can use the default address
segment (28:6E:D4:88:C6:29 to 28:6E:D4:8A:39:40) provided by the system. In this case, skip this section.
If multiple FusionCompute environments are available on the Layer 2 network, you need to divide the default address segment based on the
number of VMs in each FusionCompute environment and allocate unique MAC address segments to each FusionCompute environment.
Otherwise, MAC addresses allocated to VMs may overlap, adversely affecting VM communication.
When configuring a custom MAC address segment, change the default MAC address segment to the custom address segment or add a new
address segment. A maximum of five MAC address segments can be configured for each FusionCompute environment, and the segments
cannot overlap.
Prerequisites
Conditions
You have logged in to FusionCompute.
Data
The MAC address segments for user VMs have been planned.
The address segments to be configured and the reserved 5000 MAC addresses dedicated for VRM VMs cannot overlap.
If only one FusionCompute environment is available on the Layer 2 network, you can use the default MAC address segment (28:6E:D4:88:C6:29 to
28:6E:D4:8A:39:40).
If multiple FusionCompute environments are available on the Layer 2 network, you need to divide the default MAC address segment based on the number of
VMs in each FusionCompute environment.
For example, if two FusionCompute environments are available on the Layer 2 network, divide the 95,000 available MAC addresses between the two
FusionCompute environments, for example, 45,000 MAC addresses to one environment and 50,000 MAC addresses to the other environment.
The following MAC address segments can be allocated:
The MAC address segment for FusionCompute 1 (the first 45,000 addresses): 28:6E:D4:88:C6:29 to 28:6E:D4:89:75:F0
The MAC address segment for FusionCompute 2 (the last 50,000 addresses): 28:6E:D4:89:75:F1 to 28:6E:D4:8A:39:40
The same rule applies when there are multiple environments.
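The segment boundaries above can be derived by treating the MAC addresses as 48-bit integers. The following is a minimal Bash sketch that reproduces the two example segments; the variable names and the helper function are illustrative only.
BASE=$((0x286ED488C629))                                       # 28:6E:D4:88:C6:29, the first address available for user VMs
to_mac() { printf '%012X' "$1" | sed 's/../&:/g; s/:$//'; }    # format a 48-bit integer as a colon-separated MAC address
ENV1_END=$((BASE + 45000 - 1))
ENV2_START=$((BASE + 45000))
ENV2_END=$((BASE + 95000 - 1))
echo "Environment 1: $(to_mac "$BASE") to $(to_mac "$ENV1_END")"         # 28:6E:D4:88:C6:29 to 28:6E:D4:89:75:F0
echo "Environment 2: $(to_mac "$ENV2_START") to $(to_mac "$ENV2_END")"   # 28:6E:D4:89:75:F1 to 28:6E:D4:8A:39:40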
Procedure
5. Click OK.
The MAC address segment is configured.
To modify or delete a MAC address segment, locate the row where the target MAC address segment resides and click Modify or Delete.
If no NTP server is configured, the time of eDME may differ from that of managed resources and eDME may fail to obtain the performance data of the managed
resources. You are advised to configure the NTP service.
Context
NTP is a protocol that synchronizes the time of a computer system to Coordinated Universal Time (UTC). Servers that support NTP are called NTP
servers.
Precautions
Before configuring the NTP server, check the time difference between eDME and the NTP server. The time difference between the NTP server and
eDME cannot exceed 24 hours. The current NTP server time cannot be earlier than eDME installation time.
For example, if the current NTP server system time is 2021-04-05 16:01:49 UTC+08:00 and eDME was installed at 2021-04-06 16:30:20 UTC+08:00,
the NTP server time is earlier than the eDME installation time and the NTP server cannot be used.
To check the system time of the eDME node, perform the following steps:
1. Use PuTTY to log in to the eDME node as user sopuser using the static IP address of the node.
The initial password of user sopuser is configured during eDME installation.
3. Run date to check whether the system time is consistent with the actual time.
If the system time of eDME is later than the NTP server time, you need to run the following command to restart the service after you
configure the NTP server and time synchronization is complete: cd /opt/oss/manager/agent/bin && . engr_profile.sh && export
mostart=true && ipmc_adm -cmd startapp. If the system time of eDME is earlier than the NTP server time, you do not need to run this
command.
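To judge which case applies, you can compare the two timestamps directly on the eDME node. A minimal sketch, assuming GNU date is available; the NTP server time shown here is the value from the example above and must be replaced with the actual NTP server time:
EDME_TIME=$(date +%s)                                 # current eDME system time, as a Unix timestamp
NTP_TIME=$(date -d "2021-04-05 16:01:49 +0800" +%s)   # replace with the actual NTP server time
echo "eDME is ahead of the NTP server by $(( (EDME_TIME - NTP_TIME) / 3600 )) hours"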
Procedure
1. Visit https://Login IP address of the management portal:31945 and press Enter.
For eDME multi-node deployment (with or without two nodes of the operation portal), use the management floating IP address to log in.
2. Enter the username and password to log in to the eDME management portal.
The default username is admin, and the initial password is configured during installation of eDME.
Table 1 Parameters
NTP Server IP Address: IP address of the NTP server that functions as the clock source. Value: an IPv4 address.
Key Index: Used to quickly search for the key value and digest type during the communication authentication with the NTP server. Value: an integer ranging from 1 to 65,534, excluding 10,000. NOTE: The value must be the same as the Key Index configured on the NTP server.
Key: NTP authentication character string, which is an important part for generating a digest during the communication authentication with the NTP server. Value: a string of a maximum of 30 ASCII characters; spaces and number signs (#) are not supported. NOTE: The value must be the same as the Key set on the NTP server.
Operation: Operations that can be performed on the configured NTP server: Verify and Delete.
5. Click Apply.
7. For example, for storage devices, log in to the storage device management page and set the device time to be the same as that in eDME.
This section uses OceanStor Dorado 6.x devices as an example. The operations for setting the time vary according to the device model. For details, see the
online help of the storage device.
i. In NTP Server Address, enter the IPv4 address or domain name of the NTP server.
iii. (Optional) Select Enable next to NTP Authentication. Import the NTP CA certificate to CA Certificate.
NTP authentication can be enabled only when NTPv4 or later is used. It authenticates the NTP server and automatically synchronizes
the time to the storage device.
b. Click next to Device Time to change the device time to be the same as the time of eDME.
If you set the time manually, there may be a time difference. Ensure that the time difference is less than 1 minute.
After the SSO configuration is complete, if eDME is faulty, you may fail to log in to the connected FusionCompute system. For details, see "Failed to Log In to
FusionCompute Due to the Fault" in eDME Product Documentation.
During the SSO configuration, you must ensure that no virtualization resource-related task is running on eDME, such as creating a VM or datastore. Otherwise,
such tasks may fail.
Prerequisites
FusionCompute has been installed.
Procedure
1. Log in to the O&M portal as the admin user. The O&M portal address is https://IP address for logging in to the O&M portal:31943.
In multi-node deployment, the IP address for logging in to the O&M portal is the floating management IP address.
The default password of user admin is the password set during eDME installation.
2. In the navigation pane on the left of the eDME O&M portal, choose Settings > Security Management > Authentication.
3. In the left navigation pane, choose SSO Configuration > CAS SSO Configuration.
6. In the text box of IPv4 address or IPv6 address, enter the IP address for logging in to the FusionCompute web client.
7. Click OK.
Login addresses:
9. In the navigation bar on the left of the FusionCompute home page, click to enter the System page.
11. (Optional) Upon the first configuration, click on the right of Interconnected Cloud Management to enable cloud management
settings.
13. Enter the login IP address of the eDME O&M portal in the System IP Address text box.
In multi-node deployment, the IP address for logging in to the O&M portal is the floating management IP address.
After the operation is complete, the system is interrupted for about 2 minutes. After the login mode is switched, you need to log out of the system and log in to
the system again.
If any fault occurs on the O&M portal of ManageOne or eDME after SSO is configured, the login to FusionCompute may fail. In this case, you need to log in to
the active VRM node to cancel SSO.
Run the following command on the active VRM to cancel SSO:
python /opt/galax/root/vrm/tomcat/script/omsconfig/bin/sm/changesso/changesso.py -m ge
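In sketch form, the full recovery sequence from a management workstation is as follows, assuming SSH access to the active VRM node as user gandalf, as used elsewhere in this guide; the IP address is a placeholder:
ssh gandalf@<management IP address of the active VRM node>
su - root                                                     # enter the password of user root when prompted
python /opt/galax/root/vrm/tomcat/script/omsconfig/bin/sm/changesso/changesso.py -m ge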
Procedure
1. Mount new disks to the node to be expanded.
2. Use PuTTY to log in to the eDME node as user sopuser via the static IP address of the node.
The initial password of sopuser is configured during eDME installation.
For the SUSE OS, run the su - root command to switch to user root.
6. Enter the disk name as prompted. If it is left blank, all the disks with free space are used.
DOS (MBR) disks that already have four partitions and GPT disks that already have 128 partitions cannot be used for capacity expansion.
7. Enter the capacity to be expanded as prompted. If it is left blank, all free space of all disks is used for capacity expansion.
8. After the script is executed successfully, run the df -h command to check whether the capacity expansion is successful.
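A quick way to confirm the expansion from the command line, assuming the standard lsblk and df utilities are available on the eDME node; the mount point shown is only an example:
lsblk          # the newly mounted disk should be listed
df -h /opt     # example mount point; the available capacity should have increased after the script runs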
Prerequisites
You have obtained the address, account, and password for logging in to the eDME management portal. The account must have the permission to add
product features.
Procedure
1. Open a browser, visit https://IP address:31945, and press Enter.
2. On the top navigation bar, choose Product > Optional Component Management.
3. On the Optional Component Management page, locate the component to be enabled and toggle on the button to enable it.
Only one optional component can be enabled at a time. That is, if a component is being enabled, you cannot enable the next one until the current
operation is complete.
After you enable a component, the system checks whether the resources (CPU, memory, and disk) on which the component depends meet the
requirements. If the resources do not meet the requirements, a dialog box is displayed. If a disk is missing during component deployment, expand the
capacity by referring to Expanding Partition Capacity .
Currently, the following optional components are supported: LiteCA Service, Data Protection Service, and Container Management Service. After you
enable the container management service, a dialog box is displayed, asking you to set the graph database password. The password must meet the
strength requirements of the graph database password.
To modify the VM configuration when the Modify VM permission is disabled, enable the Modify VM permission. To ensure information security,
disable the permission after the modification is complete.
Data Protection Service: provides ransomware protection capabilities for VMs. When installing this component, add VM resources for the O&M
node: number of vCPU cores (1) and memory (2 GB).
LiteCA service: provides basic CA capabilities. When installing this component, add VM resources for the O&M node: number of vCPU cores (1) and
memory (500 MB).
Container management service: provides the capability of taking over container resources. When installing this component, add VM resources for the
O&M node: number of vCPU cores (2) and memory (10 GB).
Configuring Certificates
Procedure
2. In the navigation pane on the FusionCompute console, click . The Resource Pool page is displayed.
a. Locate the row that contains the VM to be stopped, and choose More > Power > Stop.
b. Click OK.
There are four VMs for HiCloud: paas-core, platform-node1, platform-node2, and platform-node3.
4. On the Clusters tab page, click the name of the cluster where HiCloud is located. The Summary tab page of the cluster is displayed.
5. On the Configuration tab page, choose Configuration > VM Override Policy. The VM Override Policy page is displayed.
8. In the Edit VM override policy area, set the following parameters. Retain the default values for the parameters that are not listed.
Parameter Configuration
a. Locate the row that contains the VM to be started, click More and choose Power > Start.
b. Click OK.
Scenarios
If Digital Certificate Authentication is enabled, the operations in this section are mandatory. If Digital Certificate Authentication is disabled,
the operations in this section are optional.
You can perform the following steps to query the status of Digital Certificate Authentication:
1. Log in to the eDME O&M portal as the O&M administrator admin at https://IP address:31943.
IP address is the value of eDME OC plane IP on the HiCloud Parameters sheet in the SmartKit installation template configured
during installation.
The password is set during eDME installation. Obtain the password from the environment administrator.
2. Choose Infrastructure > Configuration and Management > Access Settings. The Access Settings page is displayed.
3. Expand the Set certificates area and check the status of Digital Certificate Authentication.
The status of Digital Certificate Authentication is controlled by the solution. Generally, it is Enabled by default. You do not need to set it.
Before performing operations in this section, ensure that the certificates of FusionCompute, storage devices, and network services have been
imported and interconnected successfully. Otherwise, basic functions of HiCloud will be affected.
For details about how to import and interconnect the certificates of FusionCompute, storage devices, and network services, see Datacenter
Virtualization Solution x.x.x Product Documentation on the support website.
Procedure
1. Enter the URL of the GDE data zone in the address box of a browser, press Enter, and log in to the WebUI. URL of the GDE data zone
portal: https://IP address:38443.
IP address: the value of Keepalived VIP (2) of data plane described in 1.5.1.2-12.
Password of the admin user: the value of system administrator admin password on the HiCloud Parameters sheet described in 1.5.1.2-12.
The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.
If the exported file does not have a file name extension, add the file name extension .cer.
5. Log in to the eDME O&M portal as the O&M administrator admin at https://IP address:31943.
IP address is the value of eDME OC plane IP on the HiCloud Parameters sheet in the SmartKit installation template configured during
installation.
The password is set during eDME installation. Obtain the password from the environment administrator.
6. Choose Settings > System Management > Certificate Management > Service Certificate Management > ThirdPartyCloudService
Certificates > Trust Certificates and import the third-party trust certificate.
All services – Certificate trust chain: Used for the interconnection between CMP HiCloud and eDME. If the interconnected eDME is deployed in active/standby mode, import the certificates of both the active and standby eDMEs to GDE.
BMS service – iBMC certificate and root certificate: Used for the interconnection between the BMS service and iBMC. If the service interconnects with iBMCs of multiple vendors, import the iBMC certificate of each vendor. The root certificate is a part of the certificate chain; the root certificate and the level-2 certificate must also be imported. The xFusion root certificate is the same as the Huawei root certificate, and you can use the same method to obtain it. If the root certificates of all or some servers cannot be obtained, change the certificate chain verification mode. For details, see (Optional) Changing the Certificate Chain Verification Mode.
VMware integration service – vCenter certificate: Used for the interconnection between the VMware integration service and vCenter. If the service interconnects with multiple vCenter systems, import the certificate of each vCenter system. The PM certificate managed by vCenter is used by the operation portal to connect to VNC.
VMware integration service – NSX-T certificate: Used for the interconnection between the VMware integration service and NSX-T. If the service interconnects with multiple NSX-T systems, import the certificate of each NSX-T system.
Security service – DBAPPSecurity cloud certificate: Used for the interconnection between the security service and DBAPPSecurity cloud. The security service interconnects with only one set of DBAPPSecurity system, so you only need to import one certificate.
Database service – VastEM certificate: Used for the interconnection between the database service and VastEM. If the service interconnects with multiple VastEM systems, import the certificate in either of the following ways: if multiple VastEM systems use the same certificate, you only need to import the certificate once; if the VastEM systems use different certificates, import certificates with different subjects.
Procedure
1. Enter the IP address of the eDME management zone in the address box of the browser, press Enter, and log in. URL of eDME management
zone: https://IP address:31945
IP address: the value of eDME OC plane IP described in 1.5.1.2-12.
User name: admin by default.
Password: Obtain it from the environment administrator.
3. Choose ER product.
4. On the Identity Certificate page, click Export Trust Chain on the right to export the trust chain to the local PC.
The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.
3. In the displayed dialog box, click the Details tab and click Export to export the certificate in Base64-encoded ASCII, single certificate
(*.pem;*.crt) format.
a. Click the Root certificate card. The Root certificate page is displayed.
c. Click in the Operation column to download the root certificate to the local PC.
a. Click the Level-2 CA certificate card. The Level-2 CA certificate page is displayed.
c. Click in the Operation column to download the level-2 CA certificate to the local PC.
1. Enter the URL of vCenter in the address box of a browser and press Enter to access the WebUI.
The vCenter URL is provided by service users.
The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.
3. In the displayed dialog box, click the Details tab and click Export to export the certificate in Base64-encoded ASCII, single certificate
(*.pem;*.crt) format.
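If a browser is not convenient, the vCenter server certificate can also be retrieved from the command line. A minimal sketch, assuming OpenSSL and network access to the vCenter system on port 443; the address and output file name are placeholders:
openssl s_client -connect <vCenter address>:443 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > vcenter.pem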
The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.
3. In the displayed dialog box, click the Details tab and click Export to export the certificate in Base64-encoded ASCII, single certificate
(*.pem;*.crt) format.
Procedure
1. Enter the URL of the NSX-T resource pool in the address box of a browser, press Enter, and log in to the WebUI.
The URL is provided by service users.
The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.
3. In the displayed dialog box, click the Details tab and click Export to export the certificate in Base64-encoded ASCII, single certificate
(*.pem;*.crt) format.
Procedure
1. Log in as a system administrator to the backend node of DBAPPSecurity cloud management platform in SSH mode.
The IP address, user name, and password for login are provided by the vendor of DBAPPSecurity.
Procedure
1. Enter the URL of VastEM in the address box of a browser, press Enter, and log in to the WebUI.
The VastEM URL is provided by service users.
The certificate information entry shown in the figure may vary depending on the browser version and personal settings. This section uses Google Chrome as
an example.
3. In the displayed dialog box, click the Details tab and click Export to export the certificate in Base64-encoded ASCII, single certificate
(*.pem;*.crt) format.
gatewayuser: Certificate trust chain. For details about how to obtain it, see Obtaining the Certificate Trust Chain. All services require this certificate.
bms: Certificate trust chain, iBMC certificate, and root certificate. For details about how to obtain them, see Obtaining the Certificate Trust Chain and Exporting the iBMC Certificate and Root Certificate. The BMS service requires these certificates.
vmware-computeservice: Certificate trust chain, vCenter certificate, and NSX-T certificate. For details about how to obtain them, see Obtaining the Certificate Trust Chain, Exporting the vCenter System Certificate, and Exporting the NSX-T Certificate. The VMware integration service requires these certificates.
security: PM certificate managed by vCenter, which is used by the operation portal to connect to VNC. Therefore, you need to import the certificate to the security service. For details about how to obtain it, see Exporting the PM Certificate Managed by vCenter.
security: Certificate trust chain and DBAPPSecurity cloud certificate. For details about how to obtain them, see Obtaining the Certificate Trust Chain and Exporting the DBAPPSecurity Cloud Certificate. The security service requires these certificates.
dbaas: Certificate trust chain and VastEM certificate. For details about how to obtain them, see Obtaining the Certificate Trust Chain and Exporting the VastEM Certificate. The database service requires these certificates.
Procedure
1. Log in as the admin user to the GDE data zone portal at https://IP address:38443.
IP address: the value of Keepalived VIP (2) of data plane described in 1.5.1.2-12.
Password of the admin user: the value of system administrator admin password on the HiCloud Parameters sheet described in 1.5.1.2-12.
3. Click the menu icon in the upper left corner of the page and choose Password and Key > Certificate Management.
4. On the Certificate Management tab page, click Edit Certificate in the Operation column of the service for which certificates need to be
imported. The gatewayuser service is used as an example.
5. On the Edit Certificate page, click the Trust Certificate tab and click Add in the Operation column.
6. In the Change Reminder dialog box, click OK. The Add xxx Trust Certificate dialog box is displayed.
7. In the displayed Add xxx Trust Certificate dialog box, customize a value for Alias Name and click Upload File under Trust Certificate
File.
To upload multiple certificates, click Add to upload them one by one. Ensure that all certificates are added.
The system automatically reloads the certificate. The new settings will take effect within 3 minutes. If such settings do not take effect 3 minutes later, check
whether the certificate is correctly imported or restart the corresponding service. For details about the mapping between services and service applications, see
Table 2. For details about how to restart a service, see Restarting Services .
hicloud-common-admin-gateway
hicloud-vmware-scheduler
hicloud-vmware-service
hicloud-security-gateway
Prerequisites
You have obtained the value of the cert-white-list parameter and performed the following steps:
1. Obtain the certificate of the third-party system interconnected with the service. For details about how to obtain the certificate, see Obtaining
Certificates to Be Imported to GDE .
2. Open the obtained certificate as a text file to obtain the certificate body.
3. Delete the newline characters from the certificate body to obtain the certificate character string. The character string is the value of
cert_white_list.
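The single-line string can also be produced without manual editing. A minimal sketch, assuming the exported certificate is saved as third_party_cert.pem (the file names are placeholders); adjust it if only the Base64 body between the BEGIN and END lines is required:
tr -d '\r\n' < third_party_cert.pem > cert_white_list_value.txt        # strip all newline characters from the certificate
cat cert1_oneline.txt cert2_oneline.txt | paste -sd, -                 # join several one-line strings with commas if multiple certificates are needed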
Procedure
1. Log in to the GDE management zone as the op_svc_cfe tenant in tenant login mode at https://IP address:31943.
IP address: is the value of Floating IP of management plane on the HiCloud Parameters sheet described in 1.5.1.2-12 .
The password of the op_svc_cfe tenant is the value of op_svc_cfe tenant password on the HiCloud Parameters sheet described in 1.5.1.2-
12 .
2. Choose Maintenance > Instance Deployment > TOSCA Stack Deployment. The TOSCA Stack Deployment page is displayed.
Service Stack
5. Modify parameters.
a. Change the value of Value After Upgrade for the full-chain-check parameter to false.
b. Change the value of Value After Upgrade of cert-white-list to the character string obtained in Prerequisites.
If multiple certificates need to be imported, separate certificate character strings with commas (,).
After clicking Upgrade Now, wait for 3 to 5 minutes. If the stack status changes to Normal, the parameter has been successfully modified.
7. Manually restart the applications corresponding to the involved services by referring to Restarting Services .
For details about the mapping between services and corresponding service applications, see Table 2 .
8. (Optional) Perform this step if the BMS service certificate needs to be imported. Otherwise, skip this step.
a. Log in to the FEP for accessing the BMS as the paas user in SSH mode.
b. Run the following command to open the certificate whitelist file:
vi /home/paas/BMSProxy/conf/cert_white.list
c. Add the certificate content mentioned in Prerequisites to the file, save the file, and exit.
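In sketch form, assuming the single-line certificate string was saved as cert_white_list_value.txt (a placeholder name), the entry can be appended and checked as follows:
cat cert_white_list_value.txt >> /home/paas/BMSProxy/conf/cert_white.list   # append the certificate string as a new line
tail -n 1 /home/paas/BMSProxy/conf/cert_white.list                          # confirm the new entry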
Involved Services
To restart a service, you only need to restart the applications related to the service. The mapping is as follows:
hicloud-common-admin-gateway
hicloud-vmware-scheduler
hicloud-vmware-service
hicloud-security-gateway
hicloud-security-gateway
Procedure
Restart involved service applications by referring to Restarting Services .
If a new certificate is imported after a CMP HiCloud service is restarted, the service must be restarted again after certificate import.
Procedure
1. Log in to the eDME O&M portal at https://IP address:31943 as the O&M administrator admin.
IP address is the value of eDME OC plane IP on the HiCloud Parameters sheet in the SmartKit installation template configured during
installation.
The password is set during eDME installation. Obtain the password from the environment administrator.
3. Click in the upper left corner and choose Settings > Security Management > Authentication.
4. In the navigation pane, choose SSO Configuration > CAS SSO Configuration.
b. On the displayed CAS SSO Configuration page, set IP address type to IPv4 address.
d. Click OK.
IP address of the LB port on GDE: value of Keepalived VIP (2) of data plane described in 1.5.1.2-12
IP address of the platform-node1 node: value of Management IP Address of platform-node1 described in 1.5.1.2-12
IP address of the platform-node2 node: value of Management IP Address of platform-node2 described in 1.5.1.2-12
IP address of the platform-node3 node: value of Management IP Address of platform-node3 described in 1.5.1.2-12
After the configuration is complete, you can view the configured IP address on the CAS SSO Configuration page.
Prerequisites
You have obtained the adaptation package of each service by referring to Table 5 .
You have extracted adaptation packages and their certificates and signature files from the obtained packages.
Procedure
1. Log in to the eDME O&M portal as the O&M administrator admin at https://IP address:31943.
IP address is the value of eDME OC plane IP on the HiCloud Parameters sheet in the SmartKit installation template configured during
installation.
The password is set during eDME installation. Obtain the password from the environment administrator.
2. Choose Multi-tenant Service > Service Management > Service Management. The Service Management page is displayed.
3. Click Add Third-Party Service. In the Configure Gateway Info step, click Next.
4. In the Import Adaptation Package step, upload the obtained adaptation packages and the corresponding signature and certificate files. Click
Next.
5. In the Configure Deployment Parameter step, set required parameters. Click Next.
address: Set this parameter to the value of eDME SC plane IP described in 1.5.1.2-12 .
gateway_ip: Set this parameter to the value of Keepalived VIP (2) of data plane described in 1.5.1.2-12 .
Set Password: Set the password to that of the machine-machine account. The machine-machine account password is the value of Machine account password described in 1.5.1.2-12.
7. Click OK.
If Server error is reported after the adaptation package is imported, wait for 10 to 15 minutes for the machine-machine account configuration to take
effect.
To modify parameters after the adaptation package is imported, perform the following steps:
a. Choose Multi-tenant Service > Service Management > Adaptation Package Management. The Adaptation Package
Management page is displayed.
b. In the Operation column of the service whose parameters need to be modified, click Modify Parameter. On the displayed page, modify the
parameters and click OK.
c. Click Deploy in the Operation column of the service. Parameter modification is successful when the service status changes to
Installation succeeded.
a. Choose Multi-tenant Service > Service Management > Adaptation Package Management. The Adaptation Package
Management page is displayed.
b. Select the service whose adaptation package needs to be updated and click Update.
c. Import the new adaptation package and the corresponding signature and certificates, and click Next.
Click OK.
Prerequisites
You have obtained the adaptation package of each service by referring to Table 5 and saved it to a local directory.
Procedure
1. Log in to SmartKit, and click the Virtualization tab to access the Datacenter Virtualization Solution Deployment page.
3. Click Create Task. On the displayed page, enter the task name, select Connecting DCS Project Components to eDME, and click Create.
4. In the Interconnection Policy step, click HiCloud Interconnection. Then, click Next.
5. In the Parameter Configuration step, click Download Template on the left of the page to download the SmartKit interconnection template
to the local PC.
6. In the HiCloud Parameter List sheet of the template, set all parameters according to Table 1, save the settings, and close the file.
Basic Interconnection Parameters:
Floating IP Address of eDME Operation Portal: IP address for logging in to the eDME operation portal. Mandatory. Example: 192.168.*.251
Account Username of eDME Operation Portal: Administrator account of the eDME operation portal. Mandatory. Example: bss_admin
Account Password of eDME Operation Portal: Password of the administrator account for logging in to the eDME operation portal. Mandatory. Example: cnp2024@HW. NOTE: The password must be the new password after the first login. The initial password set during installation is invalid.
HiCloud Adaptation Parameters:
Management IP Address of GKit VM: IP address for logging in to the GKit VM. Mandatory. Example: 192.168.*.194
PaaS Account Username of GKit VM: paas user for logging in to the GKit VM. Mandatory. Example: paas
PaaS Account Password of GKit VM: Password of the paas user for logging in to the GKit VM. Mandatory. Example: Image0@Huawei123
Keepalived VIP (2) of Data Zone: Set this parameter to the value of Keepalived VIP (2) of Data Zone in Table 11. Mandatory. Example: 192.168.*.17
Adaptation Package Address: Local path for storing the adaptation package. Mandatory. Example: D:\Hicloud\package01
7. In the Upload Template area, click Browse Files and select the configured template to upload it to SmartKit.
9. On the displayed page, perform an automatic check and a manual check, and check whether each item passes the check. If any check fails,
perform operations as prompted on the page to ensure that the check result meets the check standard.
10. After the check is successful, click Execute Task in the lower part of the page to interconnect with eDME.
Table 1 lists the adaptation package, tool, and reference document required for connecting to eDME.
Table 1 Adaptation package, signature, and certificate file required for connecting CSHA to eDME
Software package: resource_uniteAccess_csha_8.6.0
NOTE: The version number in the software package name varies with site conditions. Use the actual version number.
The .zip package contains the following files:
Adaptation package: resource_uniteAccess_csha_8.6.0.tar.gz
Signature file: resource_uniteAccess_csha_8.6.0.tar.gz.cms
Certificate file: resource_uniteAccess_csha_8.6.0.tar.gz.crl
How to obtain: For enterprise users, click here, search for the software package by name, and download it. For carrier users, click here, search for the software package by name, and download it.
Connecting to eDME
1. Log in to the eDME O&M portal using a browser.
Login address: https://IP address of the node:31943, for example, https://192.168.125.10:31943
The default user name is admin, and the initial password is configured during installation of eDME.
b. Choose My Favorites from the main menu and click Manage on the right of the Quick Links area. The Link Management page
is displayed. Click Add and add the UltraVR access link to Common Links as prompted. After the link is added, you can click it
in Common Links to go to the UltraVR page. For details, see Adding a Link in eDME Product Documentation.
Set Access Mode to Account and password authentication and System Type to UltraVR.
b. Choose Settings > System Management > Certificate Management. The Certificate Management page is displayed.
c. Click SouthBoundNodeService, click the Trust Certificates tab, and import the certificate obtained in 3.
After the certificate function is enabled, if no certificate is imported or an incorrect certificate is imported, services may be affected.
5. On the Add Third-Party Service page of eDME, connect the CSHA service to the eDME platform.
For details, see Adding Third-Party Services in eDME Product Documentation.
When configuring gateway information, log in to the UltraVR system, choose Settings > Cloud Service Configuration Management > eDME
Configuration, and add Tenant Northbound API Configuration and Tenant Southbound API Configuration of the corresponding tenant. Set
Domain Name/IP Address and Port based on the procedure for configuring the gateway in accessing a third-party service on eDME.
When configuring tenant southbound APIs on UltraVR, set the user name and password to be the same as those set in the operation of configuring a
service account for accessing a third-party service on eDME.
Table 2 lists the CSHA parameters to be configured.
ip: UltraVR IP address
For details, see Deploying a Cloud Service Adaptation Package in eDME Product Documentation.
i. Click in the upper left corner of the page and choose Infrastructure > Management and Configuration >
AZs from the main menu.
i. Click in the upper left corner of the page and choose Infrastructure > Management and Configuration > AZs
from the main menu.
iii. Locate the row that contains the target production AZ, and click the number of Operation Resource Pools.
iv. On the Operation Resource Pools page, click Add above the cluster list.
i. Click in the upper left corner of the page and choose Infrastructure > Management and Configuration > AZs
from the main menu.
iii. Locate the row that contains the target production AZ and choose > Operate Online.
a. Click in the upper left corner of the page and choose Infrastructure > Virtualization > Virtualization Platform from the
main menu.
b. On the Virtualization Resources tab page, click the cluster under the FusionCompute site and click the Associated Resources
tab.
d. Enter the name of the production host group, and select the production AZ and production host.
e. Click OK.
9. Configure UltraVR.
a. Update sites.
Log in to the UltraVR server as the admin user and choose Resources > LocalServer > Update.
i. Click LocalServer.
ii. Select the production or DR site and the corresponding storage device, and choose More > Modify.
iii. In the displayed Modify Device Information dialog box, enter the user name and password.
Obtain the user name and password from the storage device administrator.
After the password of a storage device is changed, UltraVR continuously accesses the storage device. You are advised to create a user
for access by UltraVR on the storage device. The user must have administrator rights, for example, user mm_user of array storage and
user cmdadmin of Huawei distributed block storage.
i. Click LocalServer.
ii. Select the production or DR site and the corresponding FusionCompute, and click Modify.
iii. In the displayed Modify Device Information dialog box, enter the user name and password.
Obtain the username and password of the interface interconnection user from the FusionCompute system administrator.
ii. On the Datastore Mapping tab page, click Add, select the production site and DR site, and click Next. The page for
configuring the mapping view is displayed.
iii. Select datastore resources with active-active relationships and add them to the mapping view.
You can view the added mapping in the Mapping View area.
Select other datastore resources with active-active relationships and add them to the mapping view.
To ensure normal communication between eCampusCore and eDME, you need to configure the interconnection.
To ensure that VMs can be created when subsequent service instances are applied for, you need to import the VM template to the FusionCompute environment
on the operation portal after the product installation and set VM specifications on eDME.
Before installing eCampusCore components on the operation portal, you need to create an eDME image repository.
This section describes how to use SmartKit to interconnect Link Services with eDME.
After the installation is complete, you need to import the service certificate to the eDME environment to ensure that the eDME can properly access service
interfaces.
After the configuration, you can log in to eCampusCore on eDME in single sign-on (SSO) mode.
Prerequisites
You have installed eCampusCore by referring to Installation Using SmartKit .
You have obtained the service adaptation package eCampusCore_<version>_PaaSeLink.zip and its .asc verification file by referring to Table
13 , and stored them in the local directory, for example, D:\packages.
Procedure
1. On the home page of SmartKit, click the Virtualization tab. In Site Deployment Delivery, click Datacenter Virtualization Solution
Deployment.
3. On the Tasks tab page, click Create Task. The Basic Configuration page is displayed.
4. Set Task Name. In the Select Scenario area, select Connecting DCS Project Components to eDME and click Create in the lower part of
the page.
5. In the Interconnection Policy area, select eCampusCore Interconnection and click Next.
6. On the Parameter Configuration page, click Download File Template and set interconnection parameters in the template based on Table 1.
Basic Interconnection Parameters:
Floating IP Address of eDME Operation Portal: Floating IP address of the eDME operation portal. Set this parameter to the IP address for logging in to the eDME operation portal.
Account Username of eDME Operation Portal: eDME operation portal account bss_admin. This account is used to access the eDME operation portal to obtain the user authentication token for service interconnection.
Account Password of eDME Operation Portal: Password of the eDME operation portal account. This account is used to access the eDME operation portal to obtain the user authentication token for service interconnection.
eCampusCore Adaptation Parameters:
Management IP address of the installer VM: Set this parameter to the IP address of the installer VM configured in 15.
Account Username of Installer VM: User for logging in to the installer VM. Set this parameter to sysomc.
Account Password of Installer VM: Set this parameter to the password of the sysomc user configured in 15.
Floating IP address of the internal gateway: Set this parameter to the value of Floating IP address of the internal gateway in 15.
Machine-Machine Account Password: Set this parameter to the value of Password of the O&M management console and database in 15.
Adaptation Package Address: Address of the adaptation package, which is used to store the adaptation package and its verification file on the local host for the interconnection with the eDME service.
Interconnect with PaaSeLink Service: Whether to interconnect with the PaaSeLink service. If this parameter is set to Yes, the eCampusCore PaaSeLink service adaptation package is imported and the target service is deployed through eDME operation portal APIs.
Interconnect with PaaSAPIGW Service: Whether to interconnect with the PaaSAPIGW service. If this parameter is set to Yes, the eCampusCore PaaSAPIGW service adaptation package is imported and the target service is deployed through eDME operation portal APIs.
7. Click Browse Directory, upload the parameter template, and click Next.
8. On the Pre-interconnection Check page, perform an automatic check and a manual check, and check whether each check item is passed. If
any check item fails, perform operations as prompted to meet the check standard. If all check items are passed, click Execute Task.
9. The Execute Interconnection page is displayed. After all items are successfully executed, click Finish.
Task parameters cannot be modified. If the task fails to be created, create a task again.
Prerequisites
You have logged in to the eDME O&M portal as a user attached to the Administrators role at https://IP address of the eDME O&M portal:31943.
Procedure
1. Export the service certificate.
a. Choose Multi-tenant Service > Service Management > Service Management, and search for the service created in
Interconnecting eCampusCore with eDME .
c. On the error page that is displayed, click here to go to the login page.
Username: admin
Password: password of the O&M management console and database configured in Table 16 .
e. Choose System > Certificate > CA and click the download button in the Operation column to download the certificate.
a. On the eDME O&M portal, choose Settings > System Management > Certificate Management.
c. Click the Trust Certificates tab and click Import to import the certificate.
After the installation is complete, you need to change the login mode so that multiple sessions are allowed for a single user and set the maximum number of
concurrent online users to ensure that the instance service page can be accessed properly.
To ensure that redirection from the eDME O&M portal is normal, you need to configure SSO interconnection using SAML with the eDME O&M portal as the
server (IdP) and eCampusCore as the client (SP).
Prerequisites
You have logged in to the eDME O&M portal as a user attached to the Administrators role at https://IP address of the eDME O&M portal:31943.
Procedure
1. Log in to the service.
a. Choose Multi-tenant Service > Service Management > Service Management, and search for the service created in
Interconnecting eCampusCore with eDME .
c. On the error page that is displayed, click here to go to the login page.
Username: admin
3. Toggle on Enable single-user multi-session and set Max. Allowed Online Users to 50.
Procedure
1. Obtain the eDME product certificate.
The entered password is the private key password. Remember the password for future use.
d. After the commands are executed, save the following three files that are generated in the /home/sopuser/SAML_SSO directory to
your local PC.
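For example, to confirm that the three generated files are present before copying them to the local PC (the exact file names vary by environment):
ls -l /home/sopuser/SAML_SSO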
a. You have logged in to the eDME O&M portal (https://IP address of the eDME O&M portal:31943) as a user attached to the Administrators role.
b. Import the product certificate. Choose Settings > System Management > Certificate Management from the main menu.
c. Click UniSSOWebsite_SAML_IdP.
d. Click the Identity Certificates tab page and click Import to import the public and private key files.
Public key file Select the public key file obtained in Obtain the eDME product certificate.
Private key file Select the private key file obtained in Obtain the eDME product certificate.
Private key password Select the private key password obtained in Obtain the eDME product certificate.
a. You have logged in to the eDME O&M portal (https://IP address of the eDME O&M portal:31943) as a user attached to the Administrators role.
c. In the navigation pane, choose SSO Configuration > SAML SSO Configuration.
d. Click the SP Configuration tab page and click Export Metadata to obtain the eDME O&M portal metadata.
4. Log in to eCampusCore.
a. You have logged in to the eDME O&M portal (https://IP address of the eDME O&M portal:31943) as a user attached to the Administrators role.
b. Choose Multi-tenant Service > Service Management > Service Management, and search for the service created during
installation.
d. On the error page that is displayed, click here to go to the login page.
Username: admin
Password: password of the O&M management console and database configured in Table 16 .
a. Choose System > Certificate > CA and click Download to obtain the certificate.
b. Choose System > Authentication > SAML, click the SAML Client Configuration tab page, and click Export Metadata to
obtain the metadata file.
a. You have logged in to the eDME O&M portal (https://IP address of the eDME O&M portal:31943) as a user attached to the Administrators role.
b. Choose Settings > System Management > Certificate Management from the main menu.
c. Click UniSSOWebsite_SAML_IdP.
d. On the Trust Certificates tab page, click Import, set Certificate alias, and import the certificate file of eCampusCore obtained in
5.
f. In the navigation pane, choose SSO Configuration > SAML SSO Configuration.
Metadata file: Select the metadata file of eCampusCore obtained in 5 and click Upload.
Attribute Mapping Rules: Click + to configure the following two mapping rules:
Local Attribute: Role; Mapped Key: roleName
Local Attribute: Username; Mapped Key: userName
Upload Metadata: Select the metadata file of the eDME O&M portal obtained in 3.
Redirection is not allowed for an eDME user with the same name as an eCampusCore local user.
Binding relationship between a remote role and a local role: Configure the role mapping rules. After the configuration, a remote role has the same permissions as the server role after logging in to the server.
Local Role: DCS_Operations_Administrator
Remote Role: Administrators
b. Log in to the eDME O&M portal as a user attached to the Administrators role, choose Multi-tenant Service > Service
Management > Service Management, and search for the service created in Interconnecting eCampusCore with eDME .
c. Click Redirect in the Operation column and check that the service page can be displayed without login.
Creating VM Specifications
Before the installation, ensure that related VM specifications have been created.
Prerequisites
You have obtained the VM template corresponding to the FusionCompute architecture type by referring to Table 16 . The FusionCompute
architecture type can be viewed on the host overview page.
Procedure
1. Decompress the VM template package on the local PC to obtain the template files. Check whether the *.vhd and *.ovf files are in the same
directory. If no, obtain them again.
After decompression, ensure that the .vhd disk files are in the same directory as the .ovf files.
4. Right-click Resource Pool and choose Import > Import Template from the shortcut menu.
The Create Template dialog box is displayed.
Select Import from Local PC as the template source. Upload the *.vhd and *.ovf files in the decompressed template folder on the local PC
respectively.
5. Click Next.
You need to enter the template name as required. If the name does not meet the requirements, the VM may fail to be created.
The template name for the VM template packages VMTemplate_x86_64_CampusContainerImage.zip and VMTemplate_aarch64_CampusContainerImage.zip is a fixed value: eContainer-K8S-VMImage-EulerOS-2.12-64bit-fc.8.7.RC1-Campus.
If you use another template to create a VM, the template name must be consistent with the name of the decompressed directory.
For example, after VMTemplate_x86_64_Euler2.11.zip is decompressed, the *.vhd and *.ovf files are stored in the Euler2.11_23.2 directory.
Euler2.11_23.2 is the template name.
6. Click Next.
The Datastore page is displayed. Select the shared data store interconnected with FusionCompute, for example, HCI_StoragePool0.
If all configurations have default values, you do not need to change them.
If the NIC configuration is empty, select a port group for NIC1 and NIC2, for example, managePortgroup.
8. Click Next to go to the Confirm Info page. Check the configured values and start the creation.
After the task is created, do not refresh the browser before the task is complete.
Do not refresh the browser page until the creation task is complete.
If the upload task fails to be created, a dialog box is displayed. Click Load Certificate to load the certificate and click Continue Uploading.
Both the Arm and x86 templates need to be imported so that you can use the templates to apply for instances of different server types during instance
provisioning.
10. Log in to the eDME O&M portal (https://IP address of the eDME O&M portal:31943) as a user attached to the Administrators role.
Context
The following VM specifications are required for applying for an ECS:
1C2G
2C4G
2C8G
3C8G
4C8G
4C16G
8C16G
8C32G
Procedure
1. Log in to the eDME O&M portal (https://IP address of the eDME O&M portal:31943) as a user attached to the Administrators role.
3. On the VM Specifications tab page, click Create Specification and perform the following steps to create a specification:
Prerequisites
You have created an eContainer image repository account and obtained the eContainer image repository information, including the repository
address, repository account and password, and CA certificate.
You have logged in to the eDME operation portal as a VDC user who has the CCS Admin permissions.
Procedure
1. Copy a repository address to a browser and open it. On the Harbor login page, use the repository account and password to log in to Harbor.
2. Choose Users > NEW USER to create an account for logging in to the image repository.
Password: The value is the same as the password of the O&M management console and database in Planning Data.
3. Select the new user and click SET AS ADMIN to set the new account as an administrator.
Project quota limits: Set this parameter to -1. The unit is GiB.
5. In the eDME navigation pane, choose Container > Elastic Container Engine > Repository.
Login Password: Enter the password for logging in to the image repository.
7. Click OK.
The eDME image repository is created.
Prerequisites
You have logged in to SmartKit.
Use either of the following methods to install the virtualization inspection service:
Method 1: On the SmartKit home page, click the Virtualization tab, click Function Management, select Datacenter Virtualization Solution
Inspection, and click Install.
Method 2: Import the software package of the DCS inspection service (SmartKit_version_Tool_Virtualization_Inspection.zip).
1. On the home page of SmartKit, click the Virtualization tab and click Function Management. On the page that is displayed, click
Import. In the Import dialog box, select the software package of the virtualization inspection service and click OK.
2. In the dialog box that is displayed, click OK. In the Verification and Installation dialog box that is displayed, click Install. In the dialog
box that is displayed indicating a successful import, click OK. The status of Datacenter Virtualization Solution Inspection changes to
Installed.
Procedure
1. Access the inspection tool.
a. On the SmartKit home page, click the Virtualization tab. In Routine Maintenance, click Datacenter Virtualization Solution
Inspection.
d. In the displayed Create an Environment dialog box, set Environment Name and Customer Cloud Name, and click OK.
The environment information can be added only once. If the environment information has been added, it cannot be added again. If you need to add an
environment, delete the original environment first.
3. Add nodes.
c. Select the customer cloud and click Add Node in the Operation Column. The Add Node page is displayed.
e. Click OK. In the Add Node dialog box that is displayed, confirm the node information, select all nodes, and click OK. To add
multiple sets of devices of the same type, repeat the operations for adding a customer cloud and adding a node.
Prerequisites
You have logged in to SmartKit.
The environment has been configured. For details, see System Management .
To inspect HiCloud, import the HiCloud inspection package by following instructions provided in Using SmartKit to Perform Health Check on
CMP HiCloud .
Procedure
1. Access the inspection tool.
a. On the SmartKit home page, click the Virtualization tab. In Routine Maintenance, click Datacenter Virtualization Solution
Inspection.
a. In the main menu, click Health Check to go to the Health Check Tasks page.
b. In the upper left corner, click Create to go to the Create Task page. In the Task Scenario area, select Quality check.
Task Scenario: Select a scenario where the health check task is executed.
Routine Health Check: checks the basic check items required for routine O&M.
Pre-upgrade Check: before the upgrade, checks whether the system status meets the upgrade requirements.
Quality Check: after the FusionCompute environment is deployed, checks the environment before service rollout.
Post-upgrade Acceptance: after the system is upgraded, checks whether the system is normal.
Send check report via email: Indicates whether to enable the email push task.
Customer Cloud: Select the target customer cloud where the health check task is executed.
Select Objects: Select the target objects for the health check task. Management: nodes of services on the management plane. Select at least one node to execute the health check task.
Select Check Items: Select the items for the health check task. Select at least one item for the health check task.
NOTE:
By default, all check items of all nodes are selected. To modify the items, select the needed nodes.
Viewing basic information about the task: In the Basic Information area, view the name and status of the current task.
Viewing the object check pass rate and check item pass rate: The pass rates of objects and check items are displayed in pie charts. You can select By environment or By product.
Determine the object status based on the results of the check items. The object status can be Passed or Failed. If all check items are passed, the object status is Passed. If any check item fails, the object status is Failed.
Exporting the health check report: In the upper right corner of the page, click Export Report. Select a report type (Basic Report, Quality Report, or Synthesis Report). If you select Synthesis Report, enter the Customer Name (name of the user of the health check report) and Signature (name of the provider of the health check report).
NOTE:
If a storage plane is not planned, the result of check item Network103 does not comply with the best practice. This issue does not need to be handled
and will be automatically resolved after storage interfaces are added.
If a large amount of valid alarm information exists in the inspection environment, the inspection task may take a long time to complete. Wait patiently.
1. When creating a VM, ensure that the memory size is greater than or equal to 128 GB and that at least two disks are configured, including one system disk and one or more data disks. The system disk capacity should be greater than 900 GB and the total data disk capacity should be greater than 4000 GB. (The minimum configuration supported by the system is a 900 GB system disk and 500 GB of total available data disk capacity.) Configure the capacities of the system and data disks before starting the VM; otherwise, you have to recreate the VM.
2. After a VM is created, you can expand but not reduce the disk capacity. You are advised to set Configuration Mode to Thick provisioning lazy zeroed or Thin Provisioning to reduce the time for creating disks. At least 40 CPU cores are required in the Arm environment.
3. Only one NIC needs to be configured for a VM. When creating a VM, do not select Start VM immediately after the creation. Delete the extra NICs before
starting the VM.
Prerequisites
The network connection between the installation PC and all nodes is normal.
The communication between the management nodes and storage nodes is normal. You can ping the management IP addresses of other nodes
from one node to check whether the communication is normal.
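For example, a quick connectivity check from one node (the IP address below is a placeholder for the management IP address of a peer node):
ping -c 4 192.168.10.12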
Procedure
1. Install the scale-out block storage. For details, see Installation > Software Installation Guide > Installing the Block Service > Connecting
to FusionCompute in OceanStor Pacific Series 8.2.1 Product Documentation for the desired version.
Firewall Planning
Post-installation Check
Initial Configuration
Software Uninstallation
Parameter (quantity):
Primary node IP address (1)
Node IP address (2 for three-node cluster deployment; 4 for five-node cluster deployment)
Floating management IP address (1)
Subnet mask (1)
Gateway (1)
Southbound floating IP address (1)
Description:
The primary node IP address, node IP addresses, floating management IP address, and southbound floating IP address must be in the same network segment as the customer's management network and must be unused IP addresses.
NOTE:
The IP address and subnet mask must be in IPv4 format.
Currently, no network parameter can be modified during the installation. Do not perform any operation that may modify network parameters.
The IP addresses of operation portal nodes must be in the same network segment as the tenants' management network and must communicate with the management plane over a Layer 3 network. You are advised to plan the operation portal node IP addresses on the same network as the management plane.
The floating IP address of the ECE, the load balancing IP address of the ECE, and the IP address of the ECE node must be in the same network segment and must not be in use.
The floating IP address of the public service domain of the ECE and the IP address of network port 2 of an ECE node must be in the same network segment and must not be in use.
During network planning, ensure that all nodes can communicate with one another.
Scenario
This section describes how to use SmartKit to install eDME.
Prerequisites
You have installed SmartKit. For details about how to install and run SmartKit, see "Deploying SmartKit" in SmartKit 24.0.0 User Guide.
On the home page of SmartKit, click the Virtualization tab, click Function Management, and check whether the status of Datacenter Virtualization
Solution Deployment is Installed. If the status is Uninstalled, you can use either of the following methods to install the software:
On the home page of SmartKit, click the Virtualization tab, click Function Management, select Datacenter Virtualization Solution Deployment, and
click Install.
Import the software package for the basic virtualization O&M service (SmartKit_24.0.0_Tool_Virtualization_Service.zip).
1. On the home page of SmartKit, click the Virtualization tab and click Function Management. On the page that is displayed, click
Import. In the Import dialog box, select the software package for the basic virtualization O&M service and click OK.
2. In the dialog box that is displayed, click OK. In the Verification and Installation dialog box that is displayed, click Install. In the dialog
box indicating a successful import, click OK. The status of Datacenter Virtualization Solution Deployment changes to Installed.
You have imported the software package of the eDME deployment tool (eDME_version_DeployTool.zip). The procedure is the same as that
for importing the basic virtualization O&M software package.
Procedure
1. On the home page of SmartKit, click the Virtualization tab. Click Datacenter Virtualization Solution Deployment in Site Deployment
Delivery.
3. On the Tasks tab page, click Create Task. The Basic Configuration page is displayed.
On the Site Deployment Delivery page, click Support List to view the list of servers supported by SmartKit.
7. Configure parameters.
To modify the configuration online, go to 8 to manually set related parameters on the page.
To import configurations using an EXCEL file, click the Excel Import Configuration tab. Click Download File Template, fill in
the template, and import the parameter file. If the import fails, check the parameter file. If the parameters are imported successfully,
you can view the imported parameters on the Online Modification Configuration tab page. Then, go to 9.
To quickly fill in the configuration, click the Quick Configuration tab, set parameters as prompted, and click Generate Parameter.
If a parameter error is reported, clear the error as prompted. If the parameters are correct, go to 9.
8. Set eDME parameters.
On the Online Modification Configuration tab page, click Add eDME in the eDME Parameter Configuration area. On the Add eDME
page in the right pane, set related parameters.
a. Select the path of the installation package, that is, the path of the folder where the deployment software package is stored. If the
software package is not downloaded, download it as instructed in eDME Software Package . After you select a path, the tool
automatically uploads the software package to the deployment node. If you do not select a path, manually upload the software package
to the /opt/install directory on the deployment node.
Automatic VM creation: Whether to automatically create a VM. The value can be Yes or No. The other parameters are valid only when this parameter is set to Yes.
Primary Node root Password: Password of user root for logging in to the active node.
CNA name of primary node: Name of the CNA to which the active node belongs.
Disk space of primary node: Disk space of the active node, in GB.
CNA name of child node 1: Name of the CNA to which child node 1 belongs.
CNA name of child node 2: Name of the CNA to which child node 2 belongs.
Deploy Operation Portal or not: Whether to deploy an operation portal for eDME. The value can be Yes or No.
Operation Portal Node 1 Host Name: Host name of an operation portal node. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Node 1 IP Address: IP address of the operation portal node. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Node 1 root Password: Password of the root account of the operation portal node. This parameter is valid only when the operation portal is to be deployed.
CNA name of Operation Portal node 1: Name of the CNA to which operation portal node 1 belongs. This parameter is valid only when the operation portal is to be deployed.
Disk space of Operation Portal node 1: Disk space used by operation portal node 1. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Node 2 Host Name: Host name of an operation portal node. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Node 2 IP Address: IP address of the operation portal node. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Node 2 root Password: Password of the root account of the operation portal node. This parameter is valid only when the operation portal is to be deployed.
CNA name of Operation Portal node 2: Name of the CNA to which operation portal node 2 belongs. This parameter is valid only when the operation portal is to be deployed.
Disk space of Operation Portal node 2: Disk space used by operation portal node 2 (unit: GB). This parameter is valid only when the operation portal is to be deployed.
Operation Portal Floating IP Address: Management floating IP address used to log in to the operation portal. It must be in the same network segment as the node IP address and must not be in use. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Global Load Balancing IP Address: Used to configure global load balancing. It must be in the same network segment as the IP address of the operation portal node and must not be in use. This parameter is valid only when the operation portal is to be deployed.
Operation Portal Management Password: Password for logging in to the operation portal as user bss_admin. This parameter is valid only when the operation portal is to be deployed.
Auto Scaling Service Node 1 Host Name: The host naming rules are as follows: the value contains 2 to 32 characters; the value contains only uppercase or lowercase letters (A to Z or a to z), digits, and hyphens (-), and cannot contain two consecutive hyphens (--); the value must start with a letter and cannot end with a hyphen (-); the name cannot be localhost or localhost.localdomain.
Auto Scaling Service Node 1 root Password: Password of user root for logging in to AS node 1.
CNA name of Auto Scaling Service node 1: Name of the CNA to which AS node 1 belongs.
Disk space of Auto Scaling Service node 1: Recommended disk space ≥ 555 GB (system disk space ≥ 55 GB; data disk space ≥ 500 GB).
Auto Scaling Service Node 2 Host Name: For details, see the parameter description of Auto Scaling Service Node 1 Host Name.
Auto Scaling Service Node 2 root Password: Password of user root for logging in to AS node 2.
CNA name of Auto Scaling Service node 2: Name of the CNA to which AS node 2 belongs.
Disk space of Auto Scaling Service node 2: Recommended disk space ≥ 555 GB (system disk space ≥ 55 GB; data disk space ≥ 500 GB).
Elastic Container Engine Node 1 Host Name: The host naming rules are as follows: the value contains 2 to 32 characters; the value contains only uppercase or lowercase letters (A to Z or a to z), digits, and hyphens (-), and cannot contain two consecutive hyphens (--); the value must start with a letter and cannot end with a hyphen (-); the name cannot be localhost or localhost.localdomain.
Elastic Container Engine Node 1 root Password: Password of user root for logging in to ECE node 1.
Elastic Container Engine Node 1 Public Service Domain IP Address: Public service domain IP address of ECE node 1. NOTE: If Automatic VM creation is set to Yes, enter an IP address that is not in use. If Automatic VM creation is set to No, enter the IP address of the node where the OS has been deployed.
CNA name of Elastic Container Engine Service node 1: Name of the CNA to which ECE node 1 belongs.
Disk space of Elastic Container Engine Service node 1: Recommended disk space ≥ 2,955 GB (system disk space ≥ 55 GB; data disk space ≥ 2,900 GB).
Elastic Container Engine Node 2 Host Name: For details, see the parameter description of Elastic Container Engine Node 1 Host Name.
Elastic Container Engine Node 2 IP Address: For details, see the parameter description of Elastic Container Engine Node 1 IP Address.
Elastic Container Engine Node 2 root Password: Password of user root for logging in to ECE node 2.
Elastic Container Engine Node 2 Public Service Domain IP Address: For details, see the parameter description of Elastic Container Engine Node 1 Public Service Domain IP Address.
CNA name of Elastic Container Engine Service node 2: Name of the CNA to which ECE node 2 belongs.
Disk space of Elastic Container Engine Service node 2: Recommended disk space ≥ 2,955 GB (system disk space ≥ 55 GB; data disk space ≥ 2,900 GB).
Elastic Container Engine Floating IP Address: Floating IP address used for the ECE service. It must be an idle IP address in the same network segment as the IP address of the ECE node.
Elastic Container Engine Public Service Domain Floating IP Address: Floating IP address used for the communication between the K8s cluster and the ECE node. It must be an idle IP address in the same network segment as the public service domain IP address of the ECE node.
Elastic Container Engine Global Load Balancing IP Address: IP address used to configure load balancing for the ECE service. It must be an idle IP address in the same network segment as the IP address of the ECE node.
Subnet mask of the public service domain of the Elastic Container Engine Service: Subnet mask of the public service domain of the ECE service.
Port group of the Elastic Container Engine Service in the public service domain: Port group of the public service domain of the ECE service. NOTE: If a port group has been created on FusionCompute, set this parameter to the name of the created port group. If no port group has been created on FusionCompute, set this parameter to the name of the port group planned for FusionCompute.
IP Address Gateway of Elastic Container Engine Public Service Domain: Set the IP address gateway of the ECE public service domain.
Elastic Container Engine Public Service Network-BMS&VIP Subnet Segment: Set the BMS and VIP subnet segments for the ECE public service network.
IP Address Segment of Elastic Container Engine Public Service Network Client: Set the IP address segment of the ECE public service network client.
eDME can manage FusionCompute only when both FusionCompute and eDME are deployed.
Interface Username: Interface username. This parameter is valid only when Manage FusionCompute or not is set to Yes.
Interface Account Password: Password of the interface account. This parameter is valid only when Manage FusionCompute or not is set to Yes.
SNMP Security Username: SNMP security username. This parameter is valid only when Manage FusionCompute or not is set to Yes.
SNMP Encryption Password: SNMP encryption password. This parameter is valid only when Manage FusionCompute or not is set to Yes.
SNMP Authentication Password: SNMP authentication password. This parameter is valid only when Manage FusionCompute or not is set to Yes.
Whether to install eDataInsight Manager: Whether to deploy the DCS eDataInsight management plane. The value can be Yes or No. If yes, prepare the product software package of the DCS eDataInsight management plane in advance.
Initial admin Password of Management Portal: Initial password of user admin on the management portal. NOTE: After setting the password, click Downwards. The system automatically pastes the password to the following passwords (from Initial admin Password of Management Portal to sftpuser Password). The rules of each password are different. If the verification fails after the password is copied downwards, you need to change the password separately.
Initial admin Password of O&M Portal: Initial password of user admin on the O&M portal.
sopuser Password: Password of user sopuser. The sopuser account is used for routine O&M.
ossadm Password: Password of user ossadm. The ossadm account is used to install and manage the system.
ossuser Password: Password of user ossuser. The ossuser account is used to install and run the product software.
Database sys Password: Database sys password. The database sys account is used to manage and maintain the Zenith database and has the highest operation rights on the database.
rts Password: Password of user rts. The rts account is used for authentication between processes and RabbitMQ during process communication.
KMC Protection Password: KMC protection password. KMC is a key management component.
ER Certificate Password: ER certificate password. The ER certificate is used to authenticate the management or O&M portal when you access the portal in a browser.
ETCD root Password: ETCD root password, which is used for ETCD root user authentication.
Whether to install object storage service: Whether to deploy the object storage service. The value can be No, Yes-PoE Authentication, or Yes-IAM Authentication.
Whether to install application backup service: Whether to deploy the application backup service during operation portal deployment.
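The host naming rules listed above can be sanity-checked before the parameters are filled in. The following is a minimal shell sketch (the sample name is a placeholder and the check is not part of the deployment tool):
name="as-node-01"
if echo "$name" | grep -Eq '^[A-Za-z][A-Za-z0-9-]{0,30}[A-Za-z0-9]$' \
   && ! echo "$name" | grep -q -- '--' \
   && [ "$name" != "localhost" ] && [ "$name" != "localhost.localdomain" ]; then
  echo "host name OK"        # 2 to 32 characters, starts with a letter, does not end with a hyphen
else
  echo "host name violates the naming rules"
fi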
If you use Export Parameters to export an XLSX file, you can operate or view the file only in Office 2007 or later version.
9. Click Next. On the displayed Confirm Parameter Settings page, check the configuration information. If the information is correct, click
Deploy Now.
10. Go to the Pre-deployment Check page and check whether each check item passes the check. If any check item fails, perform the operations as prompted to meet the check standard. If all check items pass, click Execute Task.
11. Go to the Execute Deployment page. Check the status of each execution item of eDME. After all items are successfully executed, click
Finish.
If you use Export Report to export an XLSX file, you can operate or view the file only in Office 2007 or later version.
12. After eDME is installed, click View Portal Link on the Perform Deployment page to view the eDME address. Click it to go to the login
page of the O&M portal. You can log in to the O&M portal and operation portal to check whether eDME is successfully installed. For
details, see Post-installation Check .
Procedure
1. Log in to the eDME node as user root using SSH.
2. Run the bash parting_data_disk.sh command and set the sizes of /opt/log and /opt as prompted, as shown in Figure 1.
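A minimal sketch of these two steps (the node IP address is a placeholder; the script prompts for the sizes, so the comments only summarize them):
ssh root@192.168.10.11          # step 1: log in to the eDME node as user root
bash parting_data_disk.sh       # step 2: enter the sizes of /opt/log and /opt when prompted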
Context
The default session timeout duration of the eDME O&M portal is 30 minutes. If you do not perform any operations within this timeout duration, you
will be logged out automatically.
Prerequisites
You have installed the required version of the browser.
You have obtained the address for accessing the eDME O&M portal.
You have obtained the login user password if you log in using password authentication.
Click Advanced in the security warning dialog box that is displayed when no security certificate is installed. Select Proceed to Management floating
IP address of eDME (unsafe).
If multi-node clusters are deployed, use the management floating IP address to log in to eDME.
In the multi-node cluster deployment scenario, automatic active/standby switchover is supported. After the active/standby switchover, it takes about 10
minutes to start all services on the new active node. During this period, the O&M portal can be accessed, but some operations may fail. Wait until the
services are restarted and try again.
Password authentication
a. Enter the username and password. The default username is admin, and the initial password is configured during installation of eDME.
c. If you fail to log in to the O&M portal, check the following causes:
If the username or password is incorrect in the first login, you are required to enter a verification code in the second login.
If the system displays a message indicating that the login password must be changed upon the first login or be reset,
change the password as instructed.
If you forget your login password, you can use the email address or mobile number you specified to retrieve the
password.
After the installation is successful, the eDME function is available about 10 minutes later.
After eDME is successfully installed, the license file is in the grace period of 90 days. To better use this product, contact technical support engineers to
apply for a license as soon as possible.
Post-login Check
On the navigation bar, hover the mouse pointer over . The latest alarms are displayed. Click View All Alarms to go to the Alarms page.
If there is only one alarm indicating that the license is invalid, eDME is running properly.
Context
The default session timeout duration of the eDME operation portal is 30 minutes. If you do not perform any operations within this timeout duration,
you will be logged out automatically.
Prerequisites
You have installed the required version of the browser.
You have obtained the address for accessing the eDME operation portal.
You have obtained the login user password if you log in using password authentication. The default username of the operation administrator is
bss_admin.
2. Enter the login address in the address bar and press Enter.
Login address: https://IP address of the eDME operation portal, for example, https://10.10.10.10
3. Click Advanced in the security warning dialog box that is displayed when no security certificate is installed. Select Proceed to Floating
management IP address of eDME (unsafe).
Password authentication
If the username or password is incorrect in the first login, you are required to enter a verification code in the second login.
If the system displays a message indicating that the login password must be changed upon the first login or be reset,
change the password as instructed.
The operation portal does not support password retrieval. Keep your password secure.
Post-login Check
Log in to the eDME operation portal as user bss_admin. If the login is successful and the page is displayed properly, eDME is running properly.
If no NTP server is configured, the time of eDME may differ from that of managed resources and eDME may fail to obtain the performance data of the managed
resources. You are advised to configure the NTP service.
Context
NTP is a protocol that synchronizes the time of a computer system to Coordinated Universal Time (UTC). Servers that support NTP are called NTP
servers.
Precautions
Before configuring the NTP server, check the time difference between eDME and the NTP server. The time difference between the NTP server and
eDME cannot exceed 24 hours. The current NTP server time cannot be earlier than eDME installation time.
For example, if the current NTP server system time is 2021-04-05 16:01:49 UTC+08:00 and eDME was installed at 2021-04-06 16:30:20 UTC+08:00, the NTP server time is earlier than the eDME installation time and therefore does not meet the requirement.
To check the system time of the eDME node, perform the following steps:
1. Use PuTTY to log in to the eDME node as user sopuser using the static IP address of the node.
The initial password of user sopuser is configured during eDME installation.
3. Run date to check whether the system time is consistent with the actual time.
If the system time of eDME is later than the NTP server time, you need to run the following command to restart the service after you
configure the NTP server and time synchronization is complete: cd /opt/oss/manager/agent/bin && . engr_profile.sh && export
mostart=true && ipmc_adm -cmd startapp. If the system time of eDME is earlier than the NTP server time, you do not need to run this
command.
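Put together, the check and the conditional restart look as follows on the eDME node (commands taken from the steps above; run the restart command only if the eDME system time was later than the NTP server time, and only after time synchronization is complete):
date
cd /opt/oss/manager/agent/bin && . engr_profile.sh && export mostart=true && ipmc_adm -cmd startapp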
Procedure
1. Visit https://Login IP address of the management portal:31945 and press Enter.
For eDME multi-node deployment (with or without two nodes of the operation portal), use the management floating IP address to log in.
2. Enter the username and password to log in to the eDME management portal.
The default username is admin, and the initial password is configured during installation of eDME.
Table 1 Parameters
NTP Server IP Address: IP address of the NTP server that functions as the clock source. Value: an IPv4 address.
Key Index: Used to quickly search for the key value and digest type during the communication authentication with the NTP server. The value must be the same as the Key Index configured on the NTP server. Value: an integer ranging from 1 to 65,534, excluding 10,000.
Key: NTP authentication character string, which is an important part for generating a digest during the communication authentication with the NTP server. The value must be the same as the Key set on the NTP server. Value: a string of a maximum of 30 ASCII characters; spaces and number signs (#) are not supported.
Operation: Operations that can be performed on the configured NTP server: Verify and Delete.
5. Click Apply.
7. For example, for storage devices, log in to the storage device management page and set the device time to be the same as that in eDME.
This section uses OceanStor Dorado 6.x devices as an example. The operations for setting the time vary according to the device model. For details, see the
online help of the storage device.
i. In NTP Server Address, enter the IPv4 address or domain name of the NTP server.
iii. (Optional) Select Enable next to NTP Authentication. Import the NTP CA certificate to CA Certificate.
Only when NTPv4 or later is used, NTP authentication can be enabled to authenticate the NTP server and automatically synchronize
the time to the storage device.
b. Click next to Device Time to change the device time to be the same as the time of eDME.
If you set the time manually, there may be a time difference. Ensure that the time difference is less than 1 minute.
After the SSO configuration is complete, if eDME is faulty, you may fail to log in to the connected FusionCompute system. For details, see "Failed to Log In to
FusionCompute Due to the Fault" in eDME Product Documentation.
During the SSO configuration, you must ensure that no virtualization resource-related task is running on eDME, such as creating a VM or datastore. Otherwise,
such tasks may fail.
Prerequisites
FusionCompute has been installed.
Procedure
1. Log in to the O&M portal as the admin user. The O&M portal address is https://IP address for logging in to the O&M portal:31943.
In multi-node deployment, the IP address for logging in to the O&M portal is the floating management IP address.
The default password of user admin is the password set during eDME installation.
2. In the navigation pane on the left of the eDME O&M portal, choose Settings > Security Management > Authentication.
3. In the left navigation pane, choose SSO Configuration > CAS SSO Configuration.
6. In the text box of IPv4 address or IPv6 address, enter the IP address for logging in to the FusionCompute web client.
7. Click OK.
Login addresses:
9. In the navigation bar on the left of the FusionCompute home page, click to enter the System page.
11. (Optional) Upon the first configuration, click on the right of Interconnected Cloud Management to enable cloud management
settings.
13. Enter the login IP address of the eDME O&M portal in the System IP Address text box.
In multi-node deployment, the IP address for logging in to the O&M portal is the floating management IP address.
After the operation is complete, the system is interrupted for about 2 minutes. After the login mode is switched, you need to log out of the system and log in to
the system again.
If any fault occurs on the O&M portal of ManageOne or eDME after SSO is configured, the login to FusionCompute may fail. In this case, you need to log in to
the active VRM node to cancel SSO.
Run the following command on the active VRM to cancel SSO:
python /opt/galax/root/vrm/tomcat/script/omsconfig/bin/sm/changesso/changesso.py -m ge
Procedure
1. Use PuTTY to log in to the eDME node as user sopuser (which is set during deployment) using the management IP address.
3. Run the vi /etc/sysconfig/static-routes command, press i to enter the insert mode, and add static routes to the configuration file. After the
addition, press Esc and enter :wq to save the configuration and exit. Enter :q! to forcibly exit without saving any changes.
Enter the network segment to be accessed and the next-hop address based on the site requirements. If multiple network segments need
to be taken over, configure multiple network segments based on site requirements.
Example
172.17.0.0/24 is the network segment to be accessed, and 192.168.1.1 is the next-hop address gateway.
In other formats, 172.17.0.0 indicates the network segment to be accessed, 255.255.255.0 indicates the mask of the network segment, and
192.168.1.1 indicates the next-hop gateway address.
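Assuming the node uses the classic static-routes syntax (verify against an existing entry or the OS documentation on the eDME node before saving), the example above could be written as:
any net 172.17.0.0 netmask 255.255.255.0 gw 192.168.1.1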
Run the following command on the node to query the default gateway (next-hop IP address gateway):
ip route show default
The following information is displayed, where 192.168.1.1 is the next-hop address gateway:
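default via 192.168.1.1 dev eth0
(Illustrative output; the device name, such as eth0, varies by environment.)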
Prerequisites
The node where eDME is installed is running properly.
You have obtained the password of user root for logging in to the node where eDME is to be uninstalled from the administrator.
Precautions
Ensure that the eDME-residing device is not powered off or restarted when eDME is being uninstalled. Otherwise, exceptions may occur.
Procedure
1. Use PuTTY to log in to the eDME node as user sopuser (which is set during deployment) using the management IP address.
3. Run the cd /opt/dme/tools command to go to the directory where the uninstallation script resides.
If eDME is deployed in multi-node cluster mode (with or without two nodes of the operation portal), perform 1 to 6 on each node to uninstall eDME.
5. Enter y as prompted.
If the system displays the message "uninstalled eDME successfully", eDME has been uninstalled successfully.
If a message indicating the component uninstallation failed is displayed, go to /var/log/dme_data and open the uninstall.log file to
check the failure cause. Rectify the fault and uninstall eDME again. If eDME still fails to be uninstalled, contact Huawei technical
support engineers.
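For example, to inspect the most recent uninstallation messages (log path taken from the note above):
cd /var/log/dme_data && tail -n 100 uninstall.log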
Backup
Metropolitan HA
Active-Standby DR
Geo-Redundant 3DC DR
Local HA
Metropolitan HA
Active-Standby DR
Geo-Redundant 3DC DR
3.6.1.1 Local HA
Local HA for Flash Storage
DR Commissioning
Configuring Switches
Configuring Storage
Installing FusionCompute
Creating DR VMs
Installation Requirements
Table 1 lists the installation requirements for the DR system.
Local PC (the PC that is used for the installation): The local PC only needs to meet the requirements for installing FusionCompute; there are no special requirements for it. For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.
Server (the server that functions as a host, that is, a CNA node, on FusionCompute): The server must meet the following requirements: it meets the host requirements for installing FusionCompute, and it supports the FC HBA port and can communicate with the FC switches.
Flash storage (products used for storage management and storage DR): The system must meet the following requirements for installing flash storage: the environment is satisfactory for installing flash storage; the quorum server is deployed at a third site; and the one-way network transmission delay between the quorum server and the production site or the DR site is less than or equal to 10 ms, which is suitable for 1 Mbit/s bandwidth.
Access switch (access switches of the storage, management, and service planes): There are no special requirements for the Ethernet access switches on the management and service planes. The access switches on the storage plane must meet the following requirements: FC switches are recommended, although Ethernet switches can also be used; the FC or Ethernet switches must be compatible with the hosts and flash storage. Reference: none.
Aggregation switch (Ethernet aggregation switches and FC aggregation switches at the production and DR sites): The Ethernet aggregation switches must support VRRP. Reference: none.
Network (network between the production site and the DR site): The network must meet the following requirements: the flash storage heartbeat plane uses a large Layer 2 network; in the large Layer 2 network, the RTT between any two sites is less than or equal to 1 ms; and the quorum plane of flash storage must be connected using a Layer 3 virtual private network (L3VPN). Reference: none.
Documents
Table 2 lists the documents required for deploying the DR solution.
Table 2 Documents
Table 2 Documents
Integration design document - Network integration design: Describes the deployment plan, networking plan, and bandwidth plan. Obtain this document from the engineering supervisor.
Integration design document - Data planning template for the network integration design: Provides the network data plan result, such as the IP plan of nodes, the storage plan, and the plan of VLANs, zones, gateways, and routes. Obtain this document from the engineering supervisor.
Version document - Datacenter Virtualization Solution 2.1.0 Version Mapping: Provides information about hardware and software version mapping. For enterprise users: visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: visit https://support.huawei.com, search for the document by name, and download it.
FusionCompute product documentation - FusionCompute Product Documentation: Provides guidance on installation, initial configuration, and commissioning of FusionCompute. For enterprise users: visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: visit https://support.huawei.com, search for the document by name, and download it.
Flash storage product documentation - OceanStor Series Product Documentation: Includes storage installation, configuration, and commissioning as well as the HyperMetro feature guide. For enterprise users: visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: visit https://support.huawei.com, search for the document by name, and download it.
Switch product documentation - Switch document package: Provides information about how to configure the switches by running commands. This document package is provided by the switch vendor.
Server product documentation - Server document package: Provides information about how to configure the servers. This document package is provided by the server vendor.
After obtaining required documents by referring to Datacenter Virtualization Solution 2.1.0 Version Mapping, make preparations for the installation, such as obtaining
the software packages and installation tools. The details are not described in this document.
FusionCompute
Flash storage
Scenarios
In the local HA for flash storage scenario, the switch configuration is the same as that in the normal deployment scenario. This section describes
only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure switches based on the network device documentation and the data plan.
Procedure
Configure Ethernet access switches.
1. Configure the Ethernet access switches based on the data plan and the Ethernet access switch documents.
The system has no special configuration requirements for the Ethernet access switches.
2. Configure the FC access switches based on the data plan and the FC access switch documents.
The FC aggregation switches deployed at two sites must be connected to each other using optical fibers. The zones and cascading must be
configured. There are no other special requirements.
3. Configure the Ethernet aggregation switches and FC aggregation switches based on the data plan and the aggregation switch documents.
Note the following configurations for the Ethernet aggregation switches:
Except for the active-active quorum channel, configure the VLANs of the other planes at the other site as well.
When configuring the VLANs of a site at another site, configure VRRP for the active and standby gateways on the Ethernet aggregation
switches based on the VLANs. For a VLAN, the gateway at the site where VM services are deployed is configured as the active
gateway, and the gateway at the other site is configured as the standby gateway.
The Layer 2 interconnection between the Ethernet aggregation switches and the core switches needs to be configured.
The Layer 3 interconnection (implemented by using the VLANIF interface) between the Ethernet aggregation and core switches needs
to be configured for accessing services from external networks.
4. Configure the Ethernet core switches based on the data plan and the Ethernet core switch documents.
Note the following configurations for the Ethernet core switches:
Configure the Layer 2 interconnection between a local core switch and the peer core switch on the local core switch. Then, bind the
multiple links between the two sites to a trunk to prevent loops.
Distribute exact routes (per VLAN) on the core switch working at the active gateway side, and distribute non-exact routes on the core switch working at the standby gateway side. The route precision is controlled by the subnet mask; for example, the active gateway side could advertise a more specific per-VLAN prefix (such as a /24) while the standby gateway side advertises only a broader summary, so that external traffic prefers the active site by longest-prefix match.
If a firewall is deployed and Network Address Translation (NAT) needs to be configured on the firewall, distribute the external routes on the firewall instead of on the core switch. Distribute exact routes on the firewall at the production site and non-exact routes on the firewall at the DR site.
The firewall configurations, such as ACL and NAT, must be manually set to be the same.
Scenarios
This task describes how to configure OceanStor V5/Dorado series storage in the flash storage HA scenario.
Procedure
For flash storage, see "HyperMetro Feature Guide for Block" in OceanStor Product Documentation for the desired model.
When configuring multipathing policies, follow instructions in OceanStor Dorado and OceanStor 6.x and V700R001 DM-Multipath Configuration Guide for
FusionCompute.
Scenarios
This section describes how to install FusionCompute in the HA solution for flash storage. The FusionCompute installation method depends on
whether the large Layer 2 network on the management or service plane is connected.
If the large Layer 2 network is connected on the management and service planes, install FusionCompute by following the normal procedure,
and deploy the standby VRM node at the DR site.
If the large Layer 2 network is not connected on the management or service plane, install the host by following the normal procedure. Note the
requirements for installing VRM: Deploy both the active and standby VRM nodes at the production site first. After the large Layer 2 network is
connected, deploy the standby VRM node at the DR site.
Prerequisites
Conditions
You have made preparations for the FusionCompute installation, including configuring servers, storage devices, and the network, and obtaining
the required data, software packages, license files, documents, and tools.
The FusionCompute installation plan meets the deployment requirements described in Deployment Principles .
Data
You have obtained the password of the VRM database.
Process
Figure 1 shows the process for installing FusionCompute.
Procedure
For details about the installation and initial configuration methods of the FusionCompute components, see Installation Using SmartKit .
Install hosts.
Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is connected.
If yes, go to 3.
If no, go to 5.
Add the DR hosts at the production site and the DR site to the planned DR cluster, which includes the management cluster.
Enable the HA and DRS functions in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and
Policy Delay to 3 to 5 minutes (configure it based on the environment requirements). Set Migration Threshold of the DRS to
Conservative. Otherwise, the DR policies cannot take effect using the DRS advanced rules.
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.
Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.
Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.
Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.
Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is not connected.
Add the DR hosts at the production site to the planned DR cluster, which includes the management cluster.
Enable the HA and DRS functions in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and
Policy Delay to 3 to 5 minutes (configure it based on the environment requirements). Set Migration Threshold of the DRS to
Conservative. Otherwise, the DR policies cannot take effect using the DRS advanced rules.
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.
Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.
Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.
Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.
Add hosts at the DR site to a DR cluster after the large Layer 2 network is connected.
Add the DR hosts at the DR site to the planned DR cluster, which includes the management cluster.
Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.
Provide descriptions to indicate that the hosts and datastores are used for DR when adding DR hosts and creating datastores.
Ensure that the hosts planned to run the VRM VMs use the same distributed virtual switches (DVSs) as those used by hosts where VMs
of the original nodes are deployed.
After adding hosts at the DR site to the DR cluster, configure time synchronization on the node.
For details, see "Setting Time Synchronization on a Host" in FusionCompute 8.8.0 User Guide (Virtualization).
Enable the VM template deployment function on the standby VRM node at the production site.
8. On FusionCompute, view and make a note of the ID of the standby VRM VM.
Check whether the target standby VRM node is the default standby node. If it is not the default standby node, perform a switchover between the active and
standby nodes.
11. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
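For example, the following is a hedged sketch; the value is an assumption, so choose one that matches your site's security policy. In bash, 0 disables the inactivity timeout for the current session:
TMOUT=0    # disable the shell inactivity timeout for this session; alternatively, set a large value in seconds, such as 1800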
12. Run the following command on the active VRM node to enable the standby VRM VM to be cloned to a VM:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh i-00000002
Information similar to the following is displayed:
Please import database password:
13. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:
Open VM i-00000002 operating authority success.
16. On FusionCompute, deploy a VM using the VM template of the standby VRM VM.
For details, see "Deploying a VM Using an Existing Template in the System" in FusionCompute 8.8.0 User Guide (Virtualization).
In the Set Compute Resource area, select a host in the intra-city DR center and select Bind to the selected host. Use the virtualized local storage of the
selected host as the target storage. When configuring VM attributes, select Customize using the Customization Wizard and configure the NIC information
to ensure the NIC information is consistent with that of the standby VRM VM. You can delete the standby VRM VM template at the production site only after
the standby VRM node is deployed at the DR site and the active/standby relationship is restored.
17. Use PuTTY to log in to the host where the standby VRM VM resides.
Ensure that the management IP address and user gandalf are used for login.
The system supports login authentication using a password or a private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?
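If you are connecting from a Linux or macOS terminal rather than PuTTY, an equivalent hedged example is shown below; the IP address is an assumption for illustration:
ssh gandalf@192.168.100.21                      # password authentication
ssh -i ~/.ssh/id_rsa gandalf@192.168.100.21     # private-public key pair authentication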
19. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
20. Run the following command to add the standby VRM VM ID to the host configuration file:
echo "vm_id" > /etc/vna-api/vrminfo
In the command, vm_id indicates the standby VRM VM ID.
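For example, if the standby VRM VM ID recorded earlier is i-00000002 (the same example ID used above), run:
echo "i-00000002" > /etc/vna-api/vrminfo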
After the standby VRM VM is started at the DR site, the system automatically restores the active/standby relationship, and then you can delete the standby
VRM VM template at the production site.
Disable the template deployment function of the standby VRM VM at the DR site after the standby VRM VM is deployed.
25. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
26. Run the following command on the active VRM VM to disable the template deployment function of the standby VRM VM:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh i-00000002
Information similar to the following is displayed:
Please import database password:
27. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:
Scenarios
After deploying the DR system, create DR service VMs by following the normal procedure. Then, the DR system automatically implements the DR
function on the DR VMs.
Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating DR VMs.
Procedure
Create DR service VMs in the DR cluster.
For details, see VM Provisioning .
Scenarios
In the flash storage HA scenario, you need to configure and enable the HA and compute resource scheduling policies for the DR cluster. In this way,
when HA is triggered for DR VMs, they preferentially start on hosts at the local site, which prevents cross-site VM HA and cross-site startup during
normal system operation. If all hosts at the local site are faulty, the cluster resource scheduling function automatically starts the VMs on hosts at
the DR site to provide HA.
Prerequisites
Conditions
The HA and compute resource scheduling functions have been enabled for the DR cluster.
Data
You have obtained the lists of the hosts and VMs that provide local services in the DR cluster.
Procedure
1. Log in to FusionCompute.
Policy Delay: You are advised to set this parameter to 5 minutes (configure it based on the environment requirements).
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting
the policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the
I/O Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei
technical support to confirm that services will not be affected and then disable the function.
Configure the group fault control policy. If Group Fault Control is enabled, you need to manually disable it.
For details about how to configure compute resource scheduling policies, see "Configuring Compute Resource Scheduling Policies" in
FusionCompute 8.8.0 User Guide (Virtualization).
Automation Level: Set it to Automatic. In this case, the system automatically migrates VMs to achieve automatic service
DR.
Measure By: Set it to the object for determining the migration threshold. You are advised to set it to CPU and Memory.
Migration Threshold: The advanced rule takes effect if this parameter is set to Conservative for all time intervals.
Configure Host Group, VM Group, and Rule Group. For details, see "Configuring a Host Group for a Cluster", "Configuring a VM
Group for a Cluster", and "Configuring a Rule Group for a Cluster" in FusionCompute 8.8.0 User Guide (Virtualization).
Add hosts running at the local site to the host group of the local site.
Add VMs running at the local site to the VM group of the local site.
If a host in the host group at the local site is faulty, its VMs are preferentially restarted on other hosts in the same host group based on the
cluster HA policy. If the entire local site is faulty, the VMs are restarted on hosts in the host group at the other site based on the cluster HA
policy.
When setting a local VM group or local host group rule, set Type to VMs to hosts and Rule to Should run on host group.
3.6.1.1.1.2 DR Commissioning
Commissioning Process
Commissioning DR Switchover
Commissioning DR Switchback
Purpose
Verify that the DR site properly takes over services if the production site is faulty.
Check that the data of the DR site and production site is synchronized after the DR site takes over services.
Verify that the production site properly takes over services back when it is recovered.
Prerequisites
The flash storage local HA system has been deployed.
You have configured the HA and cluster scheduling policies for a DR cluster.
You have checked that the VM to be migrated is not bound to a specific host and has no CD/DVD-ROM drive or Tools mounted.
Commissioning Process
Figure 1 shows the DR solution commissioning process.
Procedure
Execute the following test cases:
Commissioning DR Switchover
Commissioning DR Switchback
Expected Result
The result of each test case meets expectation.
Purpose
By powering off the network devices at the production site, check whether DR can be automatically implemented on VMs, that is, check the
availability of the flash storage local HA DR solution.
Prerequisites
The flash storage local HA system has been deployed.
You have checked that the VM to be migrated is not bound to a specific host and has no CD/DVD-ROM drive or Tools mounted.
Procedure
1. On FusionCompute, make a note of the status of the VMs at the production site.
Make a note of the number of running VMs at the production site.
2. Power off all the DR hosts and DR flash storage at the production site.
5. Execute the following test cases and verify that all these cases can be executed successfully.
Create VMs.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.
Additional Information
None
Purpose
After rectifying the faults at the production site, commission the data resynchronization function by powering on the flash storage devices at the
production site and powering off the flash storage devices at the DR site.
Prerequisites
Procedure
1. Randomly select a DR VM and save a test file on the VM.
3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices at the original
DR site.
5. Open the test file saved on the VM in 1 to check the file consistency.
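A minimal sketch of creating the test file in 1 and verifying it in 5 is shown below; the file name and path are assumptions for illustration:
echo "DR reprotection test $(date)" > /root/dr_test.txt    # run on the selected DR VM before powering off the storage at the original DR site
md5sum /root/dr_test.txt > /root/dr_test.md5               # record the checksum of the test file
md5sum -c /root/dr_test.md5                                # run after resynchronization to confirm the file is unchanged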
Expected Result
The VM is running properly and the test file data is consistent.
Additional Information
None
Purpose
Commission the availability of the DR switchback function in the flash storage local HA scenario by powering on the DR hosts at the original
production site and enabling the compute resource scheduling function for the DR cluster.
Prerequisites
You have commissioned the DR data reprotection function.
You have checked that the VM to be migrated is not bound to a specific host and has no CD/DVD-ROM drive or Tools mounted.
Procedure
1. Power on the DR hosts at the production site.
3. Perform the switchover between the active and standby VRM nodes to change the VRM node at the production site to the active node.
4. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts at the production site.
Expected Result
VMs are running properly on the hosts at the production site and the services at the production site are running properly.
Additional Information
None
DR Commissioning
Configuring Switches
Installing FusionCompute
Creating DR VMs
Installation Requirements
Table 1 describes the installation requirements.
Local PC: PCs used for the installation. Local PCs only need to meet the requirements for installing FusionCompute; there are no special requirements for them. For details about the requirements of FusionCompute on local PCs, servers, and storage devices, see System Requirements.
Server: A server that functions as a host (CNA node) on FusionCompute. The server must meet the following requirements:
Meets the host requirements for installing FusionCompute.
Supports the FC HBA port and can communicate with the FC switches.
Storage device: Storage devices used in the DR solution. Huawei block storage must be used and must meet the storage compatibility requirements of UltraVR. Independent GE/10GE/25GE network ports must be used for block storage HA, and each block storage device provides at least two network ports for replication.
Access switch: Access switches of the storage, management, and service planes. Each access switch must have sufficient network ports to connect to the block storage HA network ports.
Aggregation switch: Aggregation switches in the production AZ and DR AZ. A route to the storage replication network must be configured on the aggregation switches so that they can route IP addresses.
Core switch: Core switches in the production AZ and DR AZ. A route to the storage replication network must be configured on the core switches so that they can route IP addresses.
Network environment: The network environment between the production AZ and DR AZ must meet the following requirements:
The management plane has a bandwidth of at least 10 Mbit/s.
The bandwidth of the storage replication plane (in Mbit/s) depends on the total amount of data changed in a replication period. The formula is as follows: Number of VMs to be protected x Average amount of data changed per VM in a replication period (MB) x 8/(Replication period (minute) x 60). (See the worked example after this table.)
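As a worked example of the bandwidth formula above (all figures are hypothetical): protecting 200 VMs with an average of 500 MB of changed data per VM in each 30-minute replication period requires roughly 444 Mbit/s of replication bandwidth. The same calculation as a one-line shell sketch:
echo "200 500 30" | awk '{printf "Replication bandwidth: %.1f Mbit/s\n", $1 * $2 * 8 / ($3 * 60)}'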
Preparing Documents
Table 2 lists the documents required for deploying the DR solution.
Integration design document: Network integration design. Describes the deployment plan, networking plan, and bandwidth plan. Obtain this document from the engineering supervisor.
Version document: Datacenter Virtualization Solution 2.1.0 Version Mapping. Provides information about hardware and software version mapping. For enterprise users: Visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: Visit https://support.huawei.com, search for the document by name, and download it.
UltraVR product document: UltraVR User Guide. Provides guidance on how to install, configure, and commission UltraVR. For enterprise users: Visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: Visit https://support.huawei.com, search for the document by name, and download it.
OceanStor Pacific series product documentation: OceanStor Pacific Series Product Documentation. Provides guidance on how to install, configure, and commission the block storage devices. For enterprise users: Visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: Visit https://support.huawei.com, search for the document by name, and download it.
Server product documentation: Server document package. Provides information about how to configure the servers. This document package is provided by the server vendor.
Switch product documentation: Switch document package. Provides information about how to configure the switches by running commands. This document package is provided by the switch vendor.
After obtaining required documents by referring to Datacenter Virtualization Solution 2.1.0 Version Mapping, make preparations for the installation, such as obtaining
the software packages and installation tools. The details are not described in this document.
Software Packages
The DR solution has no special requirements for the software packages. Obtain the software packages of UltraVR listed by referring to Datacenter
Virtualization Solution 2.1.0 Version Mapping.
FusionCompute
UltraVR
Scenarios
In the local HA DR scenario for scale-out storage, the switch configuration is the same as that in the normal deployment scenario. This section only
describes the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure switches based on the network device documentation and the data plan.
Procedure
Configure Ethernet access switches.
1. Configure the Ethernet access switches based on the data plan and the Ethernet access switch documents.
The system has no special configuration requirements for the Ethernet access switches.
2. Configure the Ethernet aggregation switches based on the data plan and the aggregation switch documents.
Note the following configurations for the Ethernet aggregation switches:
For all planes except the active-active quorum channel, configure the VLANs in the other AZ as well.
When configuring the VLANs in the other AZ, configure the Virtual Router Redundancy Protocol (VRRP) for the active and standby
gateways on the Ethernet aggregation switches on a per-VLAN basis. For each VLAN, configure the gateway in the AZ where VM services are
deployed as the active gateway and the gateway in the other AZ as the standby gateway.
Configure the Layer 2 interconnection between the Ethernet aggregation switches and core switches.
Configure the Layer 3 interconnection (implemented by using the VLANIF) between the Ethernet aggregation switches and core
switches for accessing services from external networks.
3. Configure the Ethernet core switches based on the data plan and the Ethernet core switch documents.
Note the following configurations for the Ethernet core switches:
Configure the Layer 2 interconnection between the local core switch and the core switch in the peer AZ. Then, bind the multiple
Ethernet links between the AZs to a trunk to prevent loops.
Advertise exact routes (by VLAN) on the core switch working at the active gateway side, and non-exact routes (by VLAN) on the core
switch working at the standby gateway side. The route precision is controlled by the subnet mask.
If a firewall is deployed and NAT must be configured for the firewall, advertise the external routes on the firewall instead of the core switch. Advertise
exact routes on the firewall in the production AZ, and non-exact routes on the firewall in the DR AZ.
Manually set the firewall configurations such as the access control list (ACL) and NAT in the two AZs to be the same.
Scenarios
This section describes how to install FusionCompute in the local HA DR solution for scale-out storage. The FusionCompute installation method to
be used depends on whether the large Layer 2 network is connected on the management or service plane.
If the large Layer 2 network is connected, install FusionCompute by following the common procedure, and deploy the standby VRM node in
the DR AZ.
If the large Layer 2 network is not connected, install the hosts by following the normal procedure. Note the following difference from the normal
VRM installation procedure: deploy both the active and standby VRM nodes in the production AZ first. After the large Layer 2 network is
connected, deploy the standby VRM node in the DR AZ.
Prerequisites
Conditions
You have made preparations for the FusionCompute installation, including configuring servers, storage devices, and the network, and obtaining
the required data, software packages, license files, documents, and tools.
The FusionCompute installation plan meets the deployment requirements described in Deployment Principles .
Data
You have obtained the password of the VRM database.
Operation Process
Figure 1 shows the FusionCompute installation process in the local HA DR scenario of scale-out storage.
Procedure
For details about the installation and initial configuration methods of the FusionCompute components, see Installation Using SmartKit .
Install hosts.
Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is connected.
Check whether the large Layer 2 network is connected on the management and service planes.
If yes, go to 3.
If no, go to 6.
Add the DR hosts at the production site and the DR site to the planned DR cluster, which includes the management cluster.
Enable the HA in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and Policy Delay to 3 to 5
minutes (configure it based on the environment requirements).
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.
When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.
Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.
Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.
5. Configure the hosts at the production site and DR site to preferentially use their own block storage resources.
For details, see "Configuration" > "Basic Service Configuration Guide for Block" > "Configuring Basic Services" in OceanStor Pacific Series
8.2.1 Product Documentation.
After this step, the FusionCompute installation is complete.
Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is not connected.
Add the DR hosts at the production site to the planned DR cluster, which includes the management cluster.
Enable the HA in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and Policy Delay to 3 to 5
minutes (configure it based on the environment requirements).
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.
When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.
Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.
Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.
Add hosts at the DR site to a DR cluster after the large Layer 2 network is connected.
Add the DR hosts at the DR site to the planned DR cluster, which includes the management cluster.
When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.
Provide descriptions to indicate that the hosts and datastores are used for DR when adding DR hosts and creating datastores.
Ensure that the hosts planned to run the VRM VMs use the same DVSs as the hosts where VMs of the original nodes are deployed.
After adding hosts at the DR site to the DR cluster, configure time synchronization on the node.
For details, see "Setting Time Synchronization on a Host" in FusionCompute 8.8.0 User Guide (Virtualization).
9. Configure the hosts at the production site and DR site to preferentially use their own block storage resources.
For details, see "Configuration" > "Basic Service Configuration Guide for Block" > "Configuring Basic Services" in OceanStor Pacific Series
8.2.1 Product Documentation.
After this step, the FusionCompute installation is complete.
Enable the VM template deployment function on the standby VRM node at the production site.
10. On FusionCompute, view and make a note of the ID of the standby VRM VM.
Check whether the target standby VRM node is the default standby node. If it is not the default standby node, perform a switchover between the active and
standby nodes.
13. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
14. Run the following command on the active VRM node to enable the standby VRM VM to be cloned to a VM:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh i-00000002
Information similar to the following is displayed:
Please import database password:
15. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:
Open VM i-00000002 operating authority success.
18. On FusionCompute, deploy a VM using the VM template of the standby VRM VM.
For details, see "Deploying a VM Using an Existing Template in the System" in FusionCompute 8.8.0 User Guide (Virtualization).
In the Set Compute Resource area, select a host in the intra-city DR center and select Bind to the selected host. Use the virtualized local storage of the
selected host as the target storage. When configuring VM attributes, select Customize using the Customization Wizard and configure the NIC information
to ensure the NIC information is consistent with that of the standby VRM VM. You can delete the standby VRM VM template at the production site only after
the standby VRM node is deployed at the DR site and the active/standby relationship is restored.
19. Use PuTTY to log in to the host where the standby VRM VM resides.
Ensure that the management IP address and user gandalf are used for login.
The system supports login authentication using a password or a private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?
21. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
22. Run the following command to add the standby VRM VM ID to the host configuration file:
echo "vm_id" > /etc/vna-api/vrminfo
In the command, vm_id indicates the standby VRM VM ID.
After the standby VRM VM is started at the DR site, the system automatically restores the active/standby relationship, and then you can delete the standby
VRM VM template at the production site.
Disable the template deployment function of the standby VRM VM at the DR site after the standby VRM VM is deployed.
27. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
28. Run the following command on the active VRM VM to disable the template deployment function of the standby VRM VM:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh i-00000002
Information similar to the following is displayed:
Please import database password:
29. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:
Scenarios
In the local HA DR scenario for scale-out storage, the storage device configuration is the same as that in the normal deployment scenario. This
section only describes the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure storage devices based on the storage device documentation and the data plan.
Procedure
1. Install the scale-out storage. For details, see "Installation" > "Software Installation Guide" > "Installing the Block Service" > "Connecting to
FusionCompute" in OceanStor Pacific Series 8.2.1 Product Documentation for the desired version.
2. Connect the storage system as instructed in "Storage Resource Creation (Scale-Out Block Storage)" in FusionCompute 8.8.0 User Guide
(Virtualization).
Plan the names of datastores in a unified manner, for example, DR_datastore01.
3. (Optional) In the converged deployment scenario, create a storage port as a replication port by following instructions provided in "Adding a
Storage Port" in FusionCompute 8.8.0 User Guide (Virtualization).
4. Add a remote device. For details, see "Checking the License", "Creating a Replication Cluster", and "Adding a Remote Device" in
"Configuration" > "Feature Guide" > "HyperMetro Feature Guide for Block"> "Installation and Configuration" > "Configuring HyperMetro"
in OceanStor Pacific Series 8.2.1 Product Documentation for the desired version.
Enter Password :
Enter Password :
You need to log in to the FSM node and run the preceding commands on both storage clusters.
6. (Optional) Run the dsware and diagnose commands to check the status of the I/O suspension and forwarding functions. For details, see
OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer).
Scenarios
In the HA scenario, configure the HA policy for the DR cluster so that the DR VMs preferentially start and perform HA on hosts at the local
site.
Prerequisites
Conditions
The HA function has been enabled in the DR cluster.
Data
You have obtained the lists of the hosts and VMs that provide local services in the DR cluster.
Procedure
1. Log in to FusionCompute.
The default HA function of the cluster must be enabled. Otherwise, the HA DR solution fails.
The following configuration takes effect only when the host datastore is created on virtualized SAN storage and the disk is a scale-out block storage
disk, an eVol disk, or an RDM shared storage disk.
Policy Delay: You are advised to set this parameter to 3 to 5 minutes (configure it based on the environment requirements).
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting
the policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the
I/O Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei
technical support to confirm that services will not be affected and then disable the function.
Configure the group fault control policy. If Group Fault Control is enabled, you need to manually disable it.
Fault Control Period (hour): The value ranges from 1 to 168, and the default value is 2.
Number of Hosts That Allow VM HA: The value ranges from 1 to 128. The default value is 2.
Scenarios
After deploying the DR system, create DR VMs by following the normal procedure. Then, the DR system automatically implements the DR
function on the DR VMs.
Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating DR VMs.
Procedure
Create DR VMs in the DR cluster.
For details, see VM Provisioning .
Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.
Procedure
Configure DR policies.
For details, see "DR Configuration" > "HA Solution" > "Creating a Protected Group" in OceanStor BCManager 8.6.0 UltraVR User Guide.
3.6.1.1.2.2 DR Commissioning
Commissioning Process
Commissioning DR Switchover
Commissioning DR Switchback
Purpose
Check whether the DR AZ properly takes over services if the production AZ is faulty.
Check whether services can be switched back after the fault in the production AZ is rectified.
Check whether the DR site properly takes over services when the production AZ is in maintenance as planned.
After commissioning, back up the management data by exporting the configuration data. The data can be used to restore the system if an
exception occurs or an operation has not achieved the expected result.
Prerequisites
The local HA DR system for scale-out storage has been deployed.
Commissioning Process
Figure 1 shows the DR solution commissioning process.
Procedure
Perform the following operations:
Commissioning DR Switchover
Commissioning DR Switchback
Expected Result
The result of each operation meets expectation.
Purpose
Verify the availability of the local HA DR solution for scale-out storage by disconnecting the storage link and executing a recovery plan to check
whether the VM can be recovered.
None
Prerequisites
The local HA DR system for scale-out storage has been deployed.
Procedure
1. Query and make a note of the number of DR VMs in the production AZ on FusionCompute of the production site.
2. Commission DR switchover.
If some hosts or VMs in the production AZ are faulty:
When a disaster occurs, VMs on CNA1 in the production AZ are unavailable for a short period of time (depending on the time taken to start
the VMs). After the disaster recovery, VMs on CNA1 are migrated to CNA2, and DR VMs in the DR AZ access storage resources in the DR
AZ. After the faulty hosts are recovered, migrate the VMs back to the production AZ.
After the DR switchover is successful, execute the required test cases at the DR site and ensure that the execution is successful.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the number of DR VMs is consistent with that at the production site.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on hosts in the DR AZ, and services in the DR AZ are normal.
Additional Information
None
Purpose
By powering on the scale-out storage devices in the original production AZ and then powering off the scale-out storage devices in the original DR
AZ, commission the data resynchronization function after the DR switchover in the original DR AZ.
Prerequisites
You have executed the DR switchover test case.
Procedure
1. Randomly select a DR VM and save test files on the VM.
3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices in the original
DR AZ.
4. Wait for a period of time until the VM status becomes normal and check the consistency of the VM testing files.
Expected Result
The VM is running properly and the test file data is consistent.
Additional Information
None
Purpose
Commission the availability of the HA DR switchback function of scale-out storage by powering on the DR hosts in the production AZ and
manually migrating VMs.
Prerequisites
You have commissioned the DR data protection function.
Procedure
1. Power on the DR hosts in the original production AZ.
3. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts in the production AZ.
Expected Result
VMs are running properly on the hosts in the production AZ, and services in the production AZ are normal.
Additional Information
None
Scenarios
This section guides administrators through backing up the UltraVR configuration data (database) before performing critical operations, such as a
system upgrade or critical data modification, or after changing the configuration. The backup data can be used to restore the database if an exception
occurs or the operation does not achieve the expected result.
The system supports automatic backup and manual backup.
If you choose automatic backup, prepare an SFTP server and configure the SFTP server information on UltraVR. After the configuration is
complete, the system backs up system data to the SFTP server at 02:00 every day based on the UltraVR server time. The UltraVR server time
at the production site and DR site must be consistent. An SFTP server can retain backup data for a maximum of seven days. Data older than
seven days will be automatically deleted. If a backup task fails, the system generates an alarm. The alarm will be automatically cleared when
the next backup task succeeds. The backup directory is as follows (see the example after this list):
Linux: /SFTP user/CloudComputing/DRBackup/eReplication management IP address/YYYY-MM-DD/Auto/ConfigData.zip
Windows: \CloudComputing\DRBackup\eReplication management IP address\YYYY-MM-DD\Auto\ConfigData.zip
If you choose manual backup, manually export the system configuration data and save it locally.
During manual backup, export both the configuration data at the production site and that at the DR site.
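As a quick, hedged check that the automatic backup described above is being produced, you can list the expected archive on the SFTP server. The SFTP user (drbackup) and eReplication management IP address (192.168.10.10) below are assumptions for illustration:
ls -l /drbackup/CloudComputing/DRBackup/192.168.10.10/$(date +%F)/Auto/ConfigData.zip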
Prerequisites
Conditions
You have obtained the IP address, username, password, and port of the SFTP server if you choose automatic backup.
Procedure
Automatic backup
2. In the navigation pane, choose Data Maintenance > System Configuration Data.
Set the following SFTP server parameters:
SFTP IP
SFTP Password
SFTP Port
Encryption Password
To secure configuration data, the backup server must use the SFTP protocol.
5. Click OK.
6. In the Warning dialog box that is displayed, read the content of the dialog box carefully and click OK.
After you select Automatic Backup, if the SFTP server information changes, you can directly modify the information and click OK.
Manual backup
2. In the navigation pane, choose Data Maintenance > System Configuration Data.
4. In the System Configuration Data area, click Export, enter the encryption password, and click OK.
DR Commissioning
Configuring Switches
Installing FusionCompute
Installing UltraVR
Creating DR VMs
Configuring DR Policies
Installation Requirements
Table 1 lists the installation requirements for the DR system.
Local PC: The PC that is used for the installation. The local PC only needs to meet the requirements for installing FusionCompute; there are no special requirements for it. For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.
Server: The server that functions as a host (CNA node) on FusionCompute. The server must meet the following requirements:
Meets the host requirements for installing FusionCompute.
Supports the FC HBA port and can communicate with the FC switches.
Storage device: Storage devices used in the DR solution. eVol storage must be used and must meet the storage compatibility requirements of UltraVR. Independent 10GE/25GE network ports are used for eVol storage replication, and each eVol storage device provides at least two network ports for storage replication.
Access switch: Access switches of the storage, management, and service planes. Each access switch must have sufficient network ports to connect to the data replication network ports on the eVol storage devices.
Aggregation switch and core switch: Aggregation and core switches at the production and DR sites. A route to the data replication network must be configured on the aggregation and core switches so that they can route IP addresses.
Network environment: The network between the production site and the DR site must meet the following requirements:
The management plane has a bandwidth of at least 10 Mbit/s.
The bandwidth of the data replication plane depends on the total amount of data changed in the replication period, which is calculated as follows: Number of VMs to be protected x Average amount of data changed per VM in the replication period (MB) x 8/(Replication period (minute) x 60).
Preparing Documents
Table 2 lists the documents required for deploying the DR solution.
Integration design document: Network Integration Design. Describes the deployment plan, networking plan, and bandwidth plan. Obtain this document from the engineering supervisor.
Version document: Datacenter Virtualization Solution xxx Version Mapping (xxx indicates the software version). Provides information about hardware and software version mapping. For enterprise users: Visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: Visit https://support.huawei.com, search for the document by name, and download it.
UltraVR product document: UltraVR User Guide. Provides guidance on how to install, configure, and commission UltraVR. For enterprise users: Visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: Visit https://support.huawei.com, search for the document by name, and download it.
OceanStor Dorado series product documentation: OceanStor Dorado Series Product Documentation. Provides guidance on how to install, configure, and commission OceanStor Dorado storage. For enterprise users: Visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: Visit https://support.huawei.com, search for the document by name, and download it.
Server product documentation: Server document package. Provides guidance on how to configure the servers. This document package is provided by the server vendor.
Switch product documentation: Switch document package. Provides guidance on how to configure the switches by running commands. This document package is provided by the switch vendor.
After obtaining the related documents by referring to Datacenter Virtualization Solution xxx Version Mapping, make preparations for the installation, such as
obtaining the software packages and installation tools. For details about the installation preparations, see the related documents.
Software Packages
The DR solution has no special requirements for the software packages. Obtain the software packages of UltraVR by referring to Datacenter
Virtualization Solution xxx Version Mapping.
FusionCompute
UltraVR
Scenarios
In the replication DR scenario for eVol storage, the switch configuration is the same as that in a common deployment scenario without the DR
system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure switches based on the network device documentation and the data plan.
Procedure
Configure access switches.
1. Configure access switches based on the data plan and the access switch documents.
Each access switch must have enough network ports to connect to the storage replication ports on the OceanStor Dorado devices. There are
no other special configuration requirements.
2. Configure aggregation switches based on the data plan and the aggregation switch documents.
A route to the storage replication network must be configured for aggregation switches to route IP addresses.
3. Configure core switches based on the data plan and the core switch documents.
A route to the storage replication network must be configured for core switches to route IP addresses. Management planes of the production
site and the DR site must be able to communicate with each other.
Scenarios
This section guides software commissioning engineers to install FusionCompute in the replication DR scenario for eVol storage.
Procedure
For details, see "Installing FusionCompute" in FusionCompute 8.8.0 Software Installation Guide.
Scenarios
In the replication DR scenario for eVol storage, the storage device configuration is the same as that in a common deployment scenario without the
DR system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure storage devices based on the storage device documentation and the data plan.
Procedure
1. Install OceanStor Dorado storage systems as instructed in "Install and Initialize" > "Installation Guide" in OceanStor Dorado Series Product
Documentation.
2. Connect storage devices. For details, see "Storage Resource Creation (for eVol Storage)" in FusionCompute 8.8.0 User Guide (Virtualization)
of this document.
Plan the names of datastores in a unified manner, for example, DR_datastore01.
3. Create a storage port as a remote replication port. For details, see "Adding a Storage Port" in FusionCompute 8.8.0 User Guide
(Virtualization) of this document.
You are advised to use two physical NICs to form an aggregation port and create a storage port on the aggregation port.
It is recommended that the remote replication plane be separated from the storage plane. That is, the remote replication port and storage port are created
on different aggregation ports or NICs.
4. Configure storage remote replication as instructed in "Configure" > "HyperReplication Feature Guide for Block" > "Configuring and
Managing HyperReplication (System Users)" > "Configuring HyperReplication" in OceanStor Dorado Series Product Documentation.
Scenarios
This section guides software commissioning engineers to install and configure the UltraVR DR management software to implement the eVol
storage-based replication DR solution.
Prerequisites
Conditions
You have obtained the software packages and the data required for installing UltraVR.
Procedure
1. Install the UltraVR DR management software.
For details, see Installation and Uninstallation in UltraVR User Guide.
Scenarios
After the DR system is installed, you can create VMs by following the normal service process and use the DR system to protect these VMs.
Prerequisites
Conditions
You have completed the initial service configuration.
Data
You have obtained the data required for creating VMs.
Procedure
1. Determine the DR VM creation mode.
To create DR VMs, go to 2.
3. Migrate VMs that do not reside on DR volumes to the DR volumes and migrate non-DR VMs residing on DR volumes to non-DR volumes.
For details, see "Migrating a VM (Change Compute Resource)" in FusionCompute 8.8.0 User Guide (Virtualization).
During VM storage migration, non-DR VMs can only be migrated to DR datastores through whole storage migration. VMs to which multiple
disks are attached cannot be migrated through single-disk migration.
After DR VMs are created, VM information changes. In this case, you can update resource information manually or using UltraVR periodic polling. For details, see
DR Management > Active-Passive DR Solution > DR Protection > Refreshing Resource Information in UltraVR User Guide.
Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.
Procedure
1. Check whether DR policies are configured for the first time.
If yes, go to 2.
3.6.1.1.3.2 DR Commissioning
Commissioning Process
Commissioning DR Switchover
Commissioning DR Switchback
Purpose
Verify that the DR site properly takes over services if the production site is faulty.
Check that the data of the DR site and production site is synchronized after the DR site takes over services.
Verify that the production site properly takes over services back when it is recovered.
Prerequisites
A DR system for eVol storage has been deployed.
You have configured the HA and cluster scheduling policies for a DR cluster.
You have checked that the VM to be migrated is not bound to a specific host and has no CD/DVD-ROM drive or Tools mounted.
Commissioning Process
Figure 1 shows the DR solution commissioning process.
Commissioning Procedure
Execute the following test cases:
Commissioning DR Switchover
Commissioning DR Switchback
Expected Result
The result of each test case meets expectation.
Purpose
By powering off the network devices deployed at the production site, check whether automatic DR is implemented for VMs and confirm the
availability of the DR solution for eVol storage.
Prerequisites
A DR system for eVol storage has been deployed.
You have checked that the VM to be migrated is not bound to a specific host and has no CD/DVD-ROM drive or Tools mounted.
Commissioning Procedure
1. On FusionCompute, make a note of the status of the VMs at the production site.
Make a note of the number of running VMs at the production site.
2. Power off all DR hosts and DR eVol storage at the production site.
5. Execute the following test cases and verify that all these cases can be executed successfully.
Create VMs.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.
Additional Information
None
Purpose
By powering on the eVol storage device deployed at the original production site and then powering off the eVol storage device deployed at the
original DR site, commission the data resynchronization function after the original production site recovers.
Prerequisites
You have executed the DR switchover commissioning case.
Commissioning Procedure
1. Randomly select a DR VM and save a test file on the VM.
3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices at the original
DR site.
5. Open the test file saved on the VM in 1 to check the file consistency.
Expected Result
The VM is running properly and the test file data is consistent.
Additional Information
None
Purpose
By powering on the DR hosts deployed at the production site and enabling the compute resource scheduling function in the DR cluster, commission
the availability of the switchback function provided by the DR solution for eVol storage.
Prerequisites
You have commissioned the DR data reprotection function.
You have checked that the VM to be migrated is not bound to a specific host and has no CD/DVD-ROM drive or Tools mounted.
Procedure
1. Power on the DR hosts at the production site.
3. Perform the switchover between the active and standby VRM nodes to change the VRM node at the production site to the active node.
4. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts at the production site.
Expected Result
VMs are running properly on the hosts at the production site and the services at the production site are running properly.
Additional Information
None
3.6.1.2 Metropolitan HA
Metropolitan HA for Flash Storage
DR Commissioning
Configuring Switches
Configuring Storage
Installing FusionCompute
Creating DR VMs
Installation Requirements
Table 1 lists the installation requirements for the DR system.
Local PC: The PC that is used for the installation. The local PC only needs to meet the requirements for installing FusionCompute; there are no special requirements for it. For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.
Server: The server that functions as a host (CNA node) on FusionCompute. The server must meet the following requirements:
Meets the host requirements for installing FusionCompute.
Supports the FC HBA port and can communicate with the FC switches.
Flash storage: Products used for storage management and storage DR. The system must meet the following requirements for installing flash storage:
The environment meets the installation requirements of the flash storage.
The quorum server is deployed at a third place.
The one-way network transmission delay between the quorum server and the production site or the DR site is less than or equal to 10 ms, which applies to a quorum link bandwidth of 1 Mbit/s.
Access switch: Access switches of the storage, management, and service planes. There are no special requirements for the Ethernet access switches on the management and service planes. The access switches on the storage plane must meet the following requirements:
FC switches are recommended. Ethernet switches can also be used.
The FC switches or Ethernet switches must be compatible with the hosts and flash storage.
Aggregation switch: Ethernet aggregation switches and FC aggregation switches at the production and DR sites. The Ethernet aggregation switches must support VRRP.
Network environment: The network between the production site and the DR site must meet the following requirements (see the check example after this table):
The flash storage heartbeat plane uses a large Layer 2 network.
In the large Layer 2 network, the RTT between any two sites is less than or equal to 1 ms.
The quorum plane of flash storage must be connected using a Layer 3 virtual private network (L3VPN).
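A simple, hedged way to sanity-check the latency requirements above from a host at one site is shown below; the IP addresses are assumptions for illustration, and the one-way delay is roughly half of the reported RTT:
ping -c 20 192.168.50.21    # peer-site host on the flash storage heartbeat plane: average RTT should be no more than 1 ms
ping -c 20 10.10.30.5       # quorum server: average RTT should be no more than 20 ms, that is, about 10 ms one way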
Documents
Table 2 lists the documents required for deploying the DR solution.
Table 2 Documents
Integration design documents
Network integration design: Describes the deployment plan, networking plan, and the bandwidth plan.
Data planning template for the network integration design: Provides the network data plan result, such as the IP plan of nodes, storage plan, and plan of VLANs, zones, gateways, and routes.
How to obtain: Obtain these documents from the engineering supervisor.
Version document
Datacenter Virtualization Solution 2.1.0 Version Mapping: Provides information about hardware and software version mapping.
How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.
FusionCompute product documentation
FusionCompute Product Documentation: Provides guidance on installation, initial configuration, and commissioning of FusionCompute.
How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.
Flash storage product documentation
OceanStor Series Product Documentation: Includes storage installation, configuration, and commissioning as well as the HyperMetro feature guide.
How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.
Switch product documentation
Switch document package: Provides information about how to configure the switches by running commands.
How to obtain: This document package is provided by the switch vendor.
Server product documentation
Server document package: Provides information about how to configure the servers.
How to obtain: This document package is provided by the server vendor.
After obtaining required documents by referring to Datacenter Virtualization Solution 2.1.0 Version Mapping, make preparations for the installation, such as obtaining
the software packages and installation tools. The details are not described in this document.
The DR solution has no special requirements for the software packages. Obtain the software packages and license files for the following products
based on Datacenter Virtualization Solution 2.1.0 Version Mapping and Constraints and Limitations :
FusionCompute
Flash storage
Scenarios
In the metropolitan HA scenarios for flash storage, the switch configurations are the same as those in the normal deployment scenario. This section
only describes the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure the switches based on the data plan provided in the network device documents.
Procedure
Configure Ethernet access switches.
1. Configure the Ethernet access switches based on the data plan and the Ethernet access switch documents.
The system has no special configuration requirements for the Ethernet access switches.
2. Configure the FC access switches based on the data plan and the FC access switch documents.
The FC aggregation switches deployed at two sites must be connected to each other using optical fibers. The zones and cascading must be
configured. There are no other special requirements.
3. Configure the Ethernet aggregation switches and FC aggregation switches based on the data plan and the aggregation switch documents.
Note the following configurations for the Ethernet aggregation switches:
Except for the VLAN of the active-active quorum channel, configure the VLANs of the other planes at the other site as well.
When configuring the VLANs of a site at the other site, configure VRRP for the active and standby gateways on the Ethernet aggregation switches based on the VLANs. For a VLAN, the gateway at the site where VM services are deployed is configured as the active gateway, and the gateway at the other site is configured as the standby gateway (see the configuration sketch after this list).
The Layer 2 interconnection between the Ethernet aggregation switches and the core switches needs to be configured.
The Layer 3 interconnection (implemented by using the VLANIF interface) between the Ethernet aggregation and core switches needs
to be configured for accessing services from external networks.
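The following is a minimal configuration sketch for the Ethernet aggregation switch acting as the active gateway, assuming a Huawei VRP-style CLI and hypothetical values (VLAN 100, subnet 192.168.100.0/24, virtual gateway address 192.168.100.1). Adapt the VLAN IDs, IP addresses, and priorities to the data plan. On the aggregation switch at the other site, configure the same virtual IP address for the VLAN with a lower VRRP priority so that it acts as the standby gateway.
vlan batch 100
interface Vlanif 100
ip address 192.168.100.2 255.255.255.0
vrrp vrid 100 virtual-ip 192.168.100.1
vrrp vrid 100 priority 120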
4. Configure the Ethernet core switches based on the data plan and the Ethernet core switch documents.
Note the following configurations for the Ethernet core switches:
Configure the Layer 2 interconnection between a local core switch and the peer core switch on the local core switch. Then, bind the
multiple links between the two sites to a trunk to prevent loops.
Distribute exact routes (one per VLAN subnet) on the core switch on the active gateway side, and distribute non-exact (less precise) routes on the core switch on the standby gateway side. The route precision is controlled by the subnet mask.
If a firewall is deployed and Network Address Translation (NAT) needs to be configured on the firewall, distribute the external routes on the firewall instead of on the core switch. Distribute exact routes on the firewall at the production site and non-exact routes on the firewall at the DR site.
The firewall configurations, such as the ACL and NAT settings, must be manually configured to be the same at both sites.
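For example, with hypothetical addressing, the core switch (or firewall) on the active gateway side could advertise the exact service subnet 192.168.100.0/24, while the core switch (or firewall) on the standby gateway side advertises only the less precise summary 192.168.0.0/16. Because longer prefixes are preferred during route lookup, external traffic enters through the active site as long as the exact route is reachable and falls back to the standby site only after that route is withdrawn.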
Scenarios
This section guides software commissioning engineers to configure flash storage in the metropolitan HA DR scenario.
Procedure
For flash storage, see "HyperMetro Feature Guide for Block" in OceanStor Product Documentation for the desired model.
When configuring multipathing policies, follow instructions in OceanStor Dorado and OceanStor 6.x and V700R001 DM-Multipath Configuration Guide for
FusionCompute.
Scenarios
This section describes how to install FusionCompute in the HA solution for flash storage. The FusionCompute installation method depends on
whether the large Layer 2 network on the management or service plane is connected.
If the large Layer 2 network is connected on the management and service planes, install FusionCompute by following the normal procedure,
and deploy the standby VRM node at the DR site.
If the large Layer 2 network is not connected on the management or service plane, install the host by following the normal procedure. Note the
requirements for installing VRM: Deploy both the active and standby VRM nodes at the production site first. After the large Layer 2 network is
connected, deploy the standby VRM node at the DR site.
Prerequisites
Conditions
You have made preparations for the FusionCompute installation, including configuring servers, storage devices, and the network, and obtaining
the required data, software packages, license files, documents, and tools.
The FusionCompute installation plan meets the deployment requirements described in Deployment Principles .
Data
You have obtained the password of the VRM database.
Process
Figure 1 shows the process for installing FusionCompute.
Procedure
For details about the installation and initial configuration methods of the FusionCompute components, see Installation Using SmartKit .
Install hosts.
Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is connected.
If the large Layer 2 network is connected, go to 3.
If it is not connected, go to 5.
Add the DR hosts at the production site and the DR site to the planned DR cluster, which includes the management cluster.
Enable the HA and DRS functions in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and Policy Delay to 3 to 5 minutes (configure it based on the environment requirements). Set Migration Threshold of the DRS to Conservative. Otherwise, the DR policies configured using the DRS advanced rules cannot take effect.
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.
Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.
Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.
Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.
Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is not connected.
Add the DR hosts at the production site to the planned DR cluster, which includes the management cluster.
Enable the HA and DRS functions in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and Policy Delay to 3 to 5 minutes (configure it based on the environment requirements). Set Migration Threshold of the DRS to Conservative. Otherwise, the DR policies configured using the DRS advanced rules cannot take effect.
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.
Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.
Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.
Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.
Add hosts at the DR site to a DR cluster after the large Layer 2 network is connected.
Add the DR hosts at the DR site to the planned DR cluster, which includes the management cluster.
Select only LUNs with the SAN active-active configurations in Configuring Storage and set datastores to Virtualization when creating
datastores for the hosts in the DR cluster.
Provide descriptions to indicate that the hosts and datastores are used for DR when adding DR hosts and creating datastores.
Ensure that the hosts planned to run the VRM VMs use the same distributed virtual switches (DVSs) as those used by hosts where VMs
of the original nodes are deployed.
After adding hosts at the DR site to the DR cluster, configure time synchronization on the node.
For details, see "Setting Time Synchronization on a Host" in FusionCompute 8.8.0 User Guide (Virtualization).
Enable the VM template deployment function on the standby VRM node at the production site.
8. On FusionCompute, view and make a note of the ID of the standby VRM VM.
Check whether the target standby VRM node is the default standby node. If it is not the default standby node, perform a switchover between the active and
standby nodes.
11. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
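The value of TMOUT is in seconds. For example, assuming your security policy allows it, the following command disables automatic logout for the current session (alternatively, a value such as TMOUT=1800 extends the timeout to 30 minutes):
TMOUT=0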
12. Run the following command on the active VRM node to enable the standby VRM VM to be cloned to a VM:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh i-00000002
Information similar to the following is displayed:
13. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:
16. On FusionCompute, deploy a VM using the VM template of the standby VRM VM.
For details, see "Deploying a VM Using an Existing Template in the System" in FusionCompute 8.8.0 User Guide (Virtualization).
In the Set Compute Resource area, select a host in the intra-city DR center and select Bind to the selected host. Use the virtualized local storage of the
selected host as the target storage. When configuring VM attributes, select Customize using the Customization Wizard and configure the NIC information
to ensure the NIC information is consistent with that of the standby VRM VM. You can delete the standby VRM VM template at the production site only after
the standby VRM node is deployed at the DR site and the active/standby relationship is restored.
17. Use PuTTY to log in to the host where the standby VRM VM resides.
Ensure that the management IP address and user gandalf are used for login.
The system supports login authentication using a password or a private-public key pair. If you use a private-public key pair to authenticate the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?.
19. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
20. Run the following command to add the standby VRM VM ID to the host configuration file:
echo "vm_id" > /etc/vna-api/vrminfo
In the command, vm_id indicates the standby VRM VM ID.
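For example, if the standby VRM VM ID noted earlier is i-00000002 (the same ID used in the preceding command examples), run the following command:
echo "i-00000002" > /etc/vna-api/vrminfo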
After the standby VRM VM is started at the DR site, the system automatically restores the active/standby relationship, and then you can delete the standby
VRM VM template at the production site.
Disable the template deployment function of the standby VRM VM at the DR site after the standby VRM VM is deployed.
25. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
26. Run the following command on the active VRM VM to disable the template deployment function of the standby VRM VM:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh i-00000002
Information similar to the following is displayed:
27. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:
Close VM operating authority success.
Scenarios
After deploying the DR system, create DR service VMs by following the normal procedure. Then, the DR system automatically implements the DR
function on the DR VMs.
Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating DR VMs.
Procedure
Create DR service VMs in the DR cluster.
For details, see VM Provisioning .
Scenarios
In the metropolitan HA DR scenarios for flash storage, HA and compute resource scheduling policies need to be configured for a DR cluster. Then,
DR VMs can start and implement HA on local hosts preferentially, preventing cross-site VM start and HA. If all local hosts are faulty, the cluster
resource scheduling function automatically enables VMs to start and implement HA on hosts at the DR site.
Prerequisites
Conditions
The HA and compute resource scheduling functions have been enabled for the DR cluster.
Data
You have obtained the lists of the hosts and VMs that provide local services in the DR cluster.
Procedure
1. Log in to FusionCompute.
Policy Delay: You are advised to set this parameter to 5 minutes (configure it based on the environment requirements).
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting
the policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the
I/O Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei
technical support to confirm that services will not be affected and then disable the function.
Configure the group fault control policy. If Group Fault Control is enabled, you need to manually disable it.
For details about how to configure compute resource scheduling policies, see "Configuring Compute Resource Scheduling Policies" in
FusionCompute 8.8.0 User Guide (Virtualization).
Automation Level: Set it to Automatic. In this case, the system automatically migrates VMs to achieve automatic service
DR.
Measure By: Set it to the object for determining the migration threshold. You are advised to set it to CPU and Memory.
Migration Threshold: The advanced rule takes effect if this parameter is set to Conservative for all time intervals.
Configure Host Group, VM Group, and Rule Group. For details, see "Configuring a Host Group for a Cluster", "Configuring a VM
Group for a Cluster", and "Configuring a Rule Group for a Cluster" in FusionCompute 8.8.0 User Guide (Virtualization).
Add hosts running at the local site to the host group of the local site.
Add VMs running at the local site to the VM group of the local site.
If a host in the host group at the local site is faulty, the host resources will be preferentially scheduled to other hosts in the
host group based on the cluster HA policy. If the local site is faulty, the hosts will be scheduled to host groups at the other
site based on the cluster HA policy.
When setting a local VM group or local host group rule, set Type to VMs to hosts and Rule to Should run on host group.
3.6.1.2.1.2 DR Commissioning
Commissioning Process
Commissioning DR Switchover
Commissioning DR Switchback
Purpose
Verify that the DR site properly takes over services if the production site is faulty.
Check that the data of the DR site and production site is synchronized after the DR site takes over services.
Verify that the production site properly takes services back after it is recovered.
Prerequisites
You have deployed the metropolitan HA system for flash storage.
You have configured HA and cluster scheduling policies for the DR cluster.
Commissioning Process
Figure 1 shows the DR solution commissioning process.
Procedure
Perform the following operations:
Commissioning DR Switchover
Commissioning DR Switchback
Expected Result
The result of each test case meets expectations.
Purpose
By powering off the network devices at the production site, check whether automatic DR is implemented for VMs to verify the availability of the
metropolitan HA solution for flash storage.
Prerequisites
You have deployed the metropolitan HA system for flash storage.
Procedure
1. On FusionCompute, make a note of the status of the VMs at the production site.
Make a note of the number of running VMs at the production site.
2. Power off all the DR hosts and DR flash storage at the production site.
5. Execute the following test cases and verify that all these cases can be executed successfully.
Create VMs.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.
Additional Information
None
Purpose
After rectifying the faults at the production site, commission the data resynchronization function by powering on the flash storage devices at the
production site and powering off the flash storage devices at the DR site.
Prerequisites
You have executed the DR switchover test case.
Procedure
1. Randomly select a DR VM and save a test file on the VM.
3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices at the original
DR site.
5. Open the test file saved on the VM in step 1 to check the file consistency.
Expected Result
The VM is running properly and the test file data is consistent.
Additional Information
127.0.0.1:51299/icslite/print/pages/resource/print.do? 345/488
15/07/2025, 11:19 127.0.0.1:51299/icslite/print/pages/resource/print.do?
None
Purpose
Verify the availability of the switchback function of the metropolitan HA DR solution for flash storage by powering on the DR hosts at the original production site and enabling the compute resource scheduling function of the DR cluster.
Prerequisites
You have commissioned the DR data protection function.
Procedure
1. Power on the DR hosts at the production site.
3. Perform the switchover between the active and standby VRM nodes to change the VRM node at the production site to the active node.
4. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts at the production site.
Expected Result
VMs are running properly on the hosts at the production site, and services at the production site are normal.
Additional Information
None
DR Commissioning
Configuring Switches
Installing FusionCompute
Creating DR VMs
Installation Requirements
Table 1 lists the installation requirements for the DR system.
Local PC
Description: The PC that is used for the installation.
Requirements: The local PC only needs to meet the requirements for installing FusionCompute; there are no special requirements.
Remarks: For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.
Server
Description: The server that functions as a host (CNA node) on FusionCompute.
Requirements:
The server meets the host requirements for installing FusionCompute.
The server supports the FC HBA port and can communicate with the FC switches.
Storage device
Description: Storage devices used in the DR solution.
Requirements:
The storage must be block storage meeting the storage compatibility requirements of UltraVR.
The metropolitan HA solution for scale-out storage uses independent 10GE/25GE network ports. Each block storage device provides at least two network ports for storage replication.
Access switch
Description: Access switches of the storage, management, and service planes.
Requirements: Each access switch has sufficient network ports to connect to the HA network ports on the block storage devices.
Remarks: None
Aggregation switch
Description: Aggregation switches at the production and DR sites.
Requirements: A route to the data replication network is configured for the aggregation switches to route IP addresses.
Remarks: None
Core switch
Description: Core switches at the production and DR sites.
Requirements: A route to the data replication network is configured for the core switches to forward IP addresses.
Remarks: None
Network
Description: Network between the production site and the DR site.
Requirements:
The management plane has a bandwidth of at least 10 Mbit/s.
The bandwidth of the storage replication plane depends on the total amount of data changed in the VM replication period. Calculation formula: Number of VMs to be protected x Average amount of data changed in the replication period per VM (MB) x 8/(Replication period (minute) x 60). A worked example follows this table.
Remarks: None
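As a worked example with hypothetical values, protecting 100 VMs that each change an average of 500 MB of data in a 30-minute replication period requires a storage replication bandwidth of approximately 100 x 500 x 8/(30 x 60) ≈ 222 Mbit/s.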
Preparing Documents
Table 2 lists the documents required for deploying the DR solution.
Integration design document
Network integration design: Describes the deployment plan, networking plan, and the bandwidth plan.
How to obtain: Obtain this document from the engineering supervisor.
Version document
Datacenter Virtualization Solution 2.1.0 Version Mapping: Provides information about hardware and software version mapping.
How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.
UltraVR product document
UltraVR User Guide: Provides guidance on how to install, configure, and commission UltraVR.
How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.
OceanStor Pacific series product documentation
OceanStor Pacific Series Product Documentation: Provides guidance on how to install, configure, and commission the block storage devices.
How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.
Server product documentation
Server document package: Provides information about how to configure the servers.
How to obtain: This document package is provided by the server vendor.
Switch product documentation
Switch document package: Provides information about how to configure the switches by running commands.
How to obtain: This document package is provided by the switch vendor.
After obtaining required documents by referring to Datacenter Virtualization Solution 2.1.0 Version Mapping, make preparations for the installation, such as obtaining
the software packages and installation tools. The details are not described in this document.
Software Packages
The DR solution has no special requirements for the software packages. Obtain the software packages of UltraVR listed by referring to Datacenter
Virtualization Solution 2.1.0 Version Mapping.
FusionCompute
UltraVR
Scenarios
In the metropolitan HA scenario for scale-out storage, the switch configurations are the same as those in the common deployment scenario. This
section describes only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure switches based on the network device documentation and the data plan.
Procedure
Configure Ethernet access switches.
1. Configure the Ethernet access switches based on the data plan and the Ethernet access switch documents.
The system has no special configuration requirements for the Ethernet access switches.
2. Configure the Ethernet aggregation switches based on the data plan and the aggregation switch documents.
Note the following configurations for the Ethernet aggregation switches:
Except for the VLAN of the active-active quorum channel, configure the VLANs of the other planes at the other site as well.
When configuring the VLANs of a site at the other site, configure VRRP for the active and standby gateways on the Ethernet aggregation switches based on the VLANs. For a VLAN, the gateway at the site where VM services are deployed is configured as the active gateway, and the gateway at the other site is configured as the standby gateway.
The Layer 2 interconnection between the Ethernet aggregation switches and the core switches needs to be configured.
The Layer 3 interconnection (implemented by using the VLANIF interface) between the Ethernet aggregation and core switches needs
to be configured for accessing services from external networks.
Configure Ethernet core switches.
3. Configure the Ethernet core switches based on the data plan and the Ethernet core switch documents.
Note the following configurations for the Ethernet core switches:
Configure the Layer 2 interconnection between a local core switch and the peer core switch on the local core switch. Then, bind the
multiple links between the two sites to a trunk to prevent loops.
Distribute exact routes (one per VLAN subnet) on the core switch on the active gateway side, and distribute non-exact (less precise) routes on the core switch on the standby gateway side. The route precision is controlled by the subnet mask.
If a firewall is deployed and Network Address Translation (NAT) needs to be configured on the firewall, distribute the external routes on the firewall instead of on the core switch. Distribute exact routes on the firewall at the production site and non-exact routes on the firewall at the DR site.
The firewall configurations, such as the ACL and NAT settings, must be manually configured to be the same at both sites.
Scenarios
This section describes how to install FusionCompute in the metropolitan HA solution for scale-out storage. The FusionCompute installation method
depends on whether the large Layer 2 network on the management or service plane is connected.
If the large Layer 2 network is connected on the management and service planes, install FusionCompute by following the normal procedure,
and deploy the standby VRM node at the DR site.
If the large Layer 2 network is not connected on the management or service plane, install the host by following the normal procedure. Note the
requirements for installing VRM: Deploy both the active and standby VRM nodes at the production site first. After the large Layer 2 network is
connected, deploy the standby VRM node at the DR site.
Prerequisites
Conditions
You have made preparations for the FusionCompute installation, including configuring servers, storage devices, and the network, and obtaining
the required data, software packages, license files, documents, and tools.
The FusionCompute installation plan meets the deployment requirements described in Deployment Principles .
Data
You have obtained the password of the VRM database.
Process
Figure 1 shows the FusionCompute installation process in the metropolitan HA scenario for scale-out storage.
Procedure
For details about the installation and initial configuration methods of the FusionCompute components, see Installation Using SmartKit .
Install hosts.
Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is connected.
If the large Layer 2 network is connected, go to 3.
If it is not connected, go to 6.
Add the DR hosts at the production site and the DR site to the planned DR cluster, which includes the management cluster.
Enable the HA function in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and Policy Delay to 3 to 5 minutes (configure it based on the environment requirements).
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.
When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.
Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.
Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.
5. Configure the hosts at the production site and DR site to preferentially use their own block storage resources.
For details, see "Configuration" > "Basic Service Configuration Guide for Block" > "Configuring Basic Services" in OceanStor Pacific Series
8.2.1 Product Documentation.
After this step, the FusionCompute installation is complete.
Install the active and standby VRM nodes and perform initial configurations on them when the large Layer 2 network is not connected.
Add the DR hosts at the production site to the planned DR cluster, which includes the management cluster.
Enable the HA function in the DR cluster. Set Host Fault Policy to HA, Datastore Fault Handling by Host to HA, and Policy Delay to 3 to 5 minutes (configure it based on the environment requirements).
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting the
policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the I/O
Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei technical
support to confirm that services will not be affected and then disable the function.
When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.
Provide descriptions to indicate that the clusters, hosts, and datastores are used for DR when creating DR clusters, adding DR hosts, and
creating datastores.
Before adding storage devices to hosts, ensure that the large Layer 2 network of the storage plane is connected.
Add hosts at the DR site to a DR cluster after the large Layer 2 network is connected.
Add the DR hosts at the DR site to the planned DR cluster, which includes the management cluster.
When creating datastores for hosts in the DR cluster, you can only select scale-out block storage for configuration.
Provide descriptions to indicate that the hosts and datastores are used for DR when adding DR hosts and creating datastores.
Ensure that the hosts planned to run the VRM VMs use the same DVSs as the hosts where VMs of the original nodes are deployed.
After adding hosts at the DR site to the DR cluster, configure time synchronization on the node.
For details, see "Setting Time Synchronization on a Host" in FusionCompute 8.8.0 User Guide (Virtualization).
9. Configure the hosts at the production site and DR site to preferentially use their own block storage resources.
For details, see "Configuration" > "Basic Service Configuration Guide for Block" > "Configuring Basic Services" in OceanStor Pacific Series
8.2.1 Product Documentation.
After this step, the FusionCompute installation is complete.
Enable the VM template deployment function on the standby VRM node at the production site.
10. On FusionCompute, view and make a note of the ID of the standby VRM VM.
Check whether the target standby VRM node is the default standby node. If it is not the default standby node, perform a switchover between the active and
standby nodes.
13. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
14. Run the following command on the active VRM node to enable the standby VRM VM to be cloned to a VM:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/OpenRights.sh i-00000002
Information similar to the following is displayed:
15. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:
18. On FusionCompute, deploy a VM using the VM template of the standby VRM VM.
For details, see "Deploying a VM Using an Existing Template in the System" in FusionCompute 8.8.0 User Guide (Virtualization).
In the Set Compute Resource area, select a host in the intra-city DR center and select Bind to the selected host. Use the virtualized local storage of the
selected host as the target storage. When configuring VM attributes, select Customize using the Customization Wizard and configure the NIC information
to ensure the NIC information is consistent with that of the standby VRM VM. You can delete the standby VRM VM template at the production site only after
the standby VRM node is deployed at the DR site and the active/standby relationship is restored.
19. Use PuTTY to log in to the host where the standby VRM VM resides.
Ensure that the management IP address and user gandalf are used for login.
The system supports login authentication using a password or a private-public key pair. If you use a private-public key pair to authenticate the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?.
21. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
22. Run the following command to add the standby VRM VM ID to the host configuration file:
echo "vm_id" > /etc/vna-api/vrminfo
In the command, vm_id indicates the standby VRM VM ID.
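For example, if the standby VRM VM ID is i-00000002 (as in the preceding command examples), run the following command:
echo "i-00000002" > /etc/vna-api/vrminfo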
After the standby VRM VM is started at the DR site, the system automatically restores the active/standby relationship, and then you can delete the standby
VRM VM template at the production site.
Disable the template deployment function of the standby VRM VM at the DR site after the standby VRM VM is deployed.
27. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
28. Run the following command on the active VRM VM to disable the template deployment function of the standby VRM VM:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh Standby VRM VM ID
For example, run the following command:
sh /opt/galax/root/vrm/tomcat/script/CloseRights.sh i-00000002
Information similar to the following is displayed:
29. Enter the password for accessing the database from FusionCompute.
Change the password upon the first login and save the new password.
The command is successfully executed if the following information is displayed:
Scenarios
In the metropolitan HA scenario for scale-out storage, the storage device configuration is the same as that in a normal deployment scenario without the DR system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure storage devices based on the storage device documentation and the data plan.
Procedure
1. Install the scale-out storage. For details, see "Installation" > "Software Installation Guide" > "Installing the Block Service" > "Connecting to
FusionCompute" in OceanStor Pacific Series 8.2.1 Product Documentation for the desired version.
2. Connect the storage system as instructed in "Storage Resource Creation (Scale-Out Block Storage)" in FusionCompute 8.8.0 User Guide
(Virtualization).
Plan the names of datastores in a unified manner, for example, DR_datastore01.
3. (Optional) In the converged deployment scenario, create a storage port as a replication port by following instructions provided in "Adding a
Storage Port" in FusionCompute 8.8.0 User Guide (Virtualization).
4. Add a remote device. For details, see "Checking the License", "Creating a Replication Cluster", and "Adding a Remote Device" in
"Configuration" > "Feature Guide" > "HyperMetro Feature Guide for Block"> "Installation and Configuration" > "Configuring HyperMetro"
in OceanStor Pacific Series 8.2.1 Product Documentation for the desired version.
Enter Password :
Enter Password :
You need to log in to the FSM node and run the preceding commands on both storage clusters.
6. (Optional) Run the dsware and diagnose commands to check the status of the I/O suspension and forwarding functions. For details, see
OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer).
Scenarios
In the HA scenario, configure the HA policy for the DR cluster to enable the DR VMs to start and implement the HA function on local hosts
preferentially.
Prerequisites
Conditions
The HA function has been enabled in the DR cluster.
Data
You have obtained the lists of the hosts and VMs that provide local services in the DR cluster.
Procedure
1. Log in to FusionCompute.
The default HA function of the cluster must be enabled. Otherwise, the HA DR solution fails.
The following configuration takes effect only when the host datastore is created on virtualized SAN storage and the disk is a scale-out block storage
disk, an eVol disk, or an RDM shared storage disk.
Policy Delay: You are advised to set this parameter to 3 to 5 minutes (configure it based on the environment requirements).
If OceanStor Pacific storage is connected, ensure that the I/O suspension function on the scale-out block storage has been disabled before setting
the policy delay. For details, see "References" > "Command Reference" > "DSware Tool Command Reference" > "Usage Guide" > "Querying the
I/O Suspension Switch" in OceanStor Pacific Series 8.2.1 Product Documentation (Huawei Engineer). If the function is enabled, contact Huawei
technical support to confirm that services will not be affected and then disable the function.
Configure the group fault control policy. If Group Fault Control is enabled, you need to manually disable it.
Fault Control Period (hour): The value ranges from 1 to 168, and the default value is 2.
Number of Hosts That Allow VM HA: The value ranges from 1 to 128. The default value is 2.
Scenarios
After deploying the DR system, create DR VMs by following the normal procedure. Then, the DR system automatically implements the DR
function on the DR VMs.
Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating DR VMs.
Procedure
Create DR VMs in the DR cluster.
For details, see VM Provisioning .
Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.
Procedure
Configure DR policies.
For details, see "DR Configuration" > "HA Solution" > "Creating a Protected Group" in OceanStor BCManager 8.6.0 UltraVR User Guide.
3.6.1.2.2.2 DR Commissioning
Commissioning Process
Commissioning DR Switchover
Commissioning DR Switchback
Purpose
Verify that the DR site properly takes over services when the production site is faulty.
Verify that the production site properly takes over services back after it recovers.
Verify that the DR site properly takes over services when the production site is in maintenance as planned.
After commissioning, back up the management data by exporting the configuration data. The data can be used to restore the system if an
exception occurs or an operation has not achieved the expected result.
Prerequisites
The metropolitan HA system for scale-out storage has been deployed.
Commissioning Process
Figure 1 shows the DR solution commissioning process.
Commissioning Procedure
Execute the following test cases:
Commissioning DR Switchover
Commissioning DR Switchback
Expected Result
The result of each test case meets expectations.
Purpose
Disconnect the storage link or make a host faulty to check whether DR can be implemented for VMs and to verify that the metropolitan HA solution for scale-out storage is available.
Prerequisites
Commissioning Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.
2. Commission DR switchover.
If some hosts or VMs at the production site are faulty:
When a disaster occurs, VMs on CNA1 at the production site are unavailable for a short period of time (depending on the time taken to start
the new VMs). After DR, VMs on CNA1 are migrated to CNA2, and VMs at the DR site access storage resources at the DR site. After the
hosts at the production site recover, you can migrate VMs back to the production site.
After the DR switchover is successful, execute the required test cases at the DR site and ensure that the execution is successful.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the number of DR VMs is consistent with that at the production site.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.
Additional Information
None
Purpose
Power on scale-out storage devices at the production site and power off the scale-out storage devices at the DR site to check whether the data
synchronization function is supported after commissioning the DR switchover.
Prerequisites
You have executed the DR switchover test case.
Procedure
1. Randomly select a DR VM and save a test file on the VM.
3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices at the original
DR site.
4. Wait for a period of time until the VM status becomes normal and check the consistency of the VM test files.
Expected Result
The VM is running properly and the test file data is consistent.
Additional Information
None
Purpose
By powering on the DR hosts deployed at the original production site and manually migrating VMs, commission the availability of the switchback
function of the HA solution for scale-out storage.
Prerequisites
You have commissioned the DR data reprotection function.
Commissioning Procedure
1. Power on the DR hosts at the production site.
3. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts at the production site.
Expected Result
VMs are running properly on the hosts at the production site and the services at the production site are running properly.
Additional Information
None
Scenarios
This section guides administrators in backing up the UltraVR configuration data before performing critical operations, such as a system upgrade or critical data modification, or after changing the configuration. The backup data can be used to restore the database if an exception occurs or an operation does not achieve the expected result.
The system supports automatic backup and manual backup.
If you choose automatic backup, prepare an SFTP server and configure the SFTP server information on UltraVR. After the configuration is
complete, the system backs up system data to the SFTP server at 02:00 every day based on the UltraVR server time. The UltraVR server time
at the production site and DR site must be consistent. An SFTP server can retain backup data for a maximum of seven days. Data older than
seven days will be automatically deleted. If a backup task fails, the system generates an alarm. The alarm will be automatically cleared when
the next backup task succeeds. The backup directory is:
If you choose manual backup, manually export the system configuration data and save it locally.
During manual backup, export both the configuration data at the production site and that at the DR site.
Prerequisites
Conditions
You have obtained the IP address, username, password, and port of the SFTP server if you choose automatic backup.
Procedure
Automatic backup
2. In the navigation pane, choose Data Maintenance > System Configuration Data.
SFTP IP
SFTP Password
SFTP Port
Encryption Password
To secure configuration data, the backup server must use the SFTP protocol.
5. Click OK.
6. In the Warning dialog box that is displayed, read the content of the dialog box carefully and click OK.
After you select Automatic Backup, if the SFTP server information changes, you can directly modify the information and click OK.
Manual backup
2. In the navigation pane, choose Data Maintenance > System Configuration Data.
4. In the System Configuration Data area, click Export, enter the encryption password, and click OK.
DR Commissioning
Configuring Switches
Installing FusionCompute
Installing UltraVR
Creating DR VMs
Configuring DR Policies
Installation Requirements
Table 1 lists the installation requirements for the DR system.
Local PC
Description: The PC that is used for the installation.
Requirements: The local PC only needs to meet the requirements for installing FusionCompute; there are no special requirements.
Remarks: For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.
Server
Description: The server that functions as a host (CNA node) on FusionCompute.
Requirements:
The server meets the host requirements for installing FusionCompute.
The server supports the FC HBA port and can communicate with the FC switches.
Storage device
Description: Storage devices used in the DR solution.
Requirements:
eVol storage must be used and meet the storage compatibility requirements of UltraVR.
Independent 10GE/25GE network ports are used for eVol storage replication. Each eVol storage device provides at least two network ports for storage replication.
Access switch
Description: Access switches of the storage, management, and service planes.
Requirements: Each access switch has sufficient network ports to connect to the data replication network ports on the eVol storage devices.
Remarks: None
Aggregation switch and core switch
Description: Aggregation and core switches at the production and DR sites.
Requirements: A route to the data replication network is configured for the aggregation and core switches to route IP addresses.
Remarks: None
Network environment
Description: Network between the production site and the DR site.
Requirements:
The management plane has a bandwidth of at least 10 Mbit/s.
The bandwidth of the data replication plane depends on the total amount of data changed in the replication period, which is calculated as follows: Number of VMs to be protected x Average amount of data changed in the replication period per VM (MB) x 8/(Replication period (minute) x 60). A worked example follows this table.
Remarks: None
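As a worked example with hypothetical values, protecting 50 VMs that each change an average of 200 MB of data in a 15-minute replication period requires a data replication bandwidth of approximately 50 x 200 x 8/(15 x 60) ≈ 89 Mbit/s.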
Preparing Documents
Table 2 lists the documents required for deploying the DR solution.
Integration design document
Network Integration Design: Describes the deployment plan, networking plan, and the bandwidth plan.
How to obtain: Obtain this document from the engineering supervisor.
Version document
Datacenter Virtualization Solution xxx Version Mapping (xxx indicates the software version): Provides information about hardware and software version mapping.
How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.
UltraVR product document
UltraVR User Guide: Provides guidance on how to install, configure, and commission UltraVR.
How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.
OceanStor Dorado series product documentation
OceanStor Dorado Series Product Documentation: Provides guidance on how to install, configure, and commission OceanStor Dorado storage.
How to obtain: For enterprise users, visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users, visit https://support.huawei.com, search for the document by name, and download it.
Server product documentation
Server document package: Provides guidance on how to configure the servers.
How to obtain: This document package is provided by the server vendor.
Switch product documentation
Switch document package: Provides guidance on how to configure the switches by running commands.
How to obtain: This document package is provided by the switch vendor.
After obtaining the related documents by referring to Datacenter Virtualization Solution xxx Version Mapping, make preparations for the installation, such as
obtaining the software packages and installation tools. For details about the installation preparations, see the related documents.
Software Packages
The DR solution has no special requirements for the software packages. Obtain the software packages of UltraVR by referring to Datacenter
Virtualization Solution xxx Version Mapping.
FusionCompute
UltraVR
Scenarios
In the replication DR scenario for eVol storage, the switch configuration is the same as that in a common deployment scenario without the DR
system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure switches based on the network device documentation and the data plan.
Procedure
Configure access switches.
1. Configure access switches based on the data plan and the access switch documents.
Each access switch must have enough network ports to connect to the storage replication ports on the OceanStor Dorado devices. There are
no other special configuration requirements.
2. Configure aggregation switches based on the data plan and the aggregation switch documents.
A route to the storage replication network must be configured for aggregation switches to route IP addresses.
3. Configure core switches based on the data plan and the core switch documents.
A route to the storage replication network must be configured for core switches to route IP addresses. Management planes of the production
site and the DR site must be able to communicate with each other.
Scenarios
This section guides software commissioning engineers to install FusionCompute in the replication DR scenario for eVol storage.
Procedure
For details, see "Installing FusionCompute" in FusionCompute 8.8.0 Software Installation Guide.
Scenarios
In the replication DR scenario for eVol storage, the storage device configuration is the same as that in a common deployment scenario without the
DR system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure storage devices based on the storage device documentation and the data plan.
Procedure
1. Install OceanStor Dorado storage systems as instructed in "Install and Initialize" > "Installation Guide" in OceanStor Dorado Series Product
Documentation.
2. Connect storage devices. For details, see "Storage Resource Creation (for eVol Storage)" in FusionCompute 8.8.0 User Guide (Virtualization)
of this document.
Plan the names of datastores in a unified manner, for example, DR_datastore01.
3. Create a storage port as a remote replication port. For details, see "Adding a Storage Port" in FusionCompute 8.8.0 User Guide
(Virtualization) of this document.
You are advised to use two physical NICs to form an aggregation port and create a storage port on the aggregation port.
It is recommended that the remote replication plane be separated from the storage plane. That is, the remote replication port and storage port are created
on different aggregation ports or NICs.
4. Configure storage remote replication as instructed in "Configure" > "HyperReplication Feature Guide for Block" > "Configuring and
Managing HyperReplication (System Users)" > "Configuring HyperReplication" in OceanStor Dorado Series Product Documentation.
Scenarios
This section guides software commissioning engineers to install and configure the UltraVR DR management software to implement the eVol
storage-based replication DR solution.
Prerequisites
Conditions
You have obtained the software packages and the data required for installing UltraVR.
Procedure
Scenarios
After the DR system is installed, you can create VMs by following the normal service process and use the DR system to protect these VMs.
Prerequisites
Conditions
You have completed the initial service configuration.
Data
You have obtained the data required for creating VMs.
Procedure
1. Determine the DR VM creation mode.
To create DR VMs, go to 2.
3. Migrate VMs that do not reside on DR volumes to the DR volumes and migrate non-DR VMs residing on DR volumes to non-DR volumes.
For details, see "Migrating a VM (Change Compute Resource)" in FusionCompute 8.8.0 User Guide (Virtualization).
During VM storage migration, non-DR VMs can only be migrated to DR datastores through whole storage migration. VMs to which multiple
disks are attached cannot be migrated through single-disk migration.
After DR VMs are created, VM information changes. In this case, you can update resource information manually or using UltraVR periodic polling. For details, see
DR Management > Active-Passive DR Solution > DR Protection > Refreshing Resource Information in UltraVR User Guide.
Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.
Procedure
1. Check whether DR policies are configured for the first time.
If yes, go to 2.
3.6.1.2.3.2 DR Commissioning
Commissioning Process
Commissioning DR Switchover
Commissioning DR Switchback
Purpose
Verify that the DR site properly takes over services if the production site is faulty.
Check that the data of the DR site and production site is synchronized after the DR site takes over services.
Verify that the production site properly takes over services back when it is recovered.
Prerequisites
A DR system for eVol storage has been deployed.
You have configured the HA and cluster scheduling policies for a DR cluster.
You have checked that the VM to be migrated is not associated with a host and does not have a CD/DVD-ROM drive or Tools mounted.
Commissioning Process
Figure 1 shows the DR solution commissioning process.
Commissioning Procedure
Execute the following test cases:
Commissioning DR Switchover
Commissioning DR Switchback
Expected Result
The result of each test case meets expectations.
Purpose
By powering off the network devices deployed at the production site, check whether automatic DR is implemented for VMs and confirm the
availability of the DR solution for eVol storage.
Prerequisites
A DR system for eVol storage has been deployed.
You have checked that the VM to be migrated is not associated with a host and does not have a CD/DVD-ROM drive or Tools mounted.
Commissioning Procedure
1. On FusionCompute, make a note of the status of the VMs at the production site.
Make a note of the number of running VMs at the production site.
2. Power off all DR hosts and DR eVol storage at the production site.
5. Execute the following test cases and verify that all these cases can be executed successfully.
Create VMs.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.
Additional Information
None
Purpose
By powering on the eVol storage device deployed at the original production site and then powering off the eVol storage device deployed at the
original DR site, commission the data resynchronization function after the original production site recovers.
Prerequisites
You have executed the DR switchover commissioning case.
Commissioning Procedure
1. Randomly select a DR VM and save a test file on the VM.
3. After the HyperMetro pair or consistency group synchronization is complete (about 10 minutes), power off the storage devices at the original
DR site.
5. Open the test file saved on the VM in 1 to check the file consistency.
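A simple way to check file consistency is to record a cryptographic hash of the test file before the switchover and compare it with the hash computed when the file is opened again. The sketch below is illustrative only; the file path is a placeholder for the test file saved in step 1.

```python
# Compute a SHA-256 digest of the test file saved in step 1.
# Record the digest before the DR operation and compare it afterwards;
# matching digests indicate the data is consistent.
import hashlib

TEST_FILE = "/tmp/dr_test_file.txt"  # placeholder path of the test file from step 1

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(TEST_FILE, sha256_of(TEST_FILE))
```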
Expected Result
The VM is running properly and the test file data is consistent.
Additional Information
None
Purpose
By powering on the DR hosts deployed at the production site and enabling the compute resource scheduling function in the DR cluster, commission
the availability of the switchback function provided by the DR solution for eVol storage.
Prerequisites
You have commissioned the DR data reprotection function.
You have checked that the VM to be migrated is not associated with a host and does not have a CD/DVD-ROM drive or Tools mounted.
Procedure
1. Power on the DR hosts at the production site.
3. Perform the switchover between the active and standby VRM nodes to change the VRM node at the production site to the active node.
4. On FusionCompute, verify that all the DR VMs have been migrated back to the hosts at the production site.
Expected Result
VMs are running properly on the hosts at the production site and the services at the production site are running properly.
Additional Information
None
3.6.1.3 Active-Standby DR
Active-Standby DR Solution for Flash Storage
DR Commissioning
Configuring Switches
Creating DR VMs
Configuring DR Policies
Installation Requirements
Table 1 lists the installation requirements for the DR system.
Local PC (the PC used for the installation): The local PC only needs to meet the requirements for installing FusionCompute; there are no special requirements. For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.
Server (the server that functions as a host (CNA node) on FusionCompute): The server must meet the host requirements for installing FusionCompute, support the FC HBA port, and be able to communicate with the FC switches.
Storage device (storage devices used in the DR solution): Must be SAN devices that meet the storage compatibility requirements of UltraVR. The SAN devices use independent GE/10GE/25GE network ports for data replication, and each SAN device provides at least two network ports for data replication.
Access switch (access switches of the storage, management, and service planes): Each access switch has sufficient network ports to connect to the data replication network ports on the SAN devices.
Aggregation switch and core switch (aggregation and core switches at the production and DR sites): A route to the data replication network is configured on the aggregation and core switches so that replication traffic can be routed between the sites.
Network (network between the production site and the DR site): The management plane has a bandwidth of at least 10 Mbit/s. The bandwidth of the data replication plane depends on the volume of all the changed data in the VM replication period, calculated as follows: Number of VMs to be protected x Average volume of the changed data in the VM replication period (MB) x 8/(VM replication period (minute) x 60).
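The replication bandwidth formula above can be evaluated with a short calculation. The sketch below interprets the average changed data volume as per protected VM (which is how the multiplication by the VM count reads) and uses illustrative figures, not sizing recommendations.

```python
# Estimate the data replication bandwidth in Mbit/s using the formula above:
# VMs to protect x average changed data per VM per replication period (MB) x 8
# divided by (replication period in minutes x 60 seconds).
def replication_bandwidth_mbps(vm_count: int, changed_mb_per_vm: float, period_min: float) -> float:
    return vm_count * changed_mb_per_vm * 8 / (period_min * 60)

if __name__ == "__main__":
    # Example: 100 protected VMs, 500 MB changed per VM, 30-minute replication period.
    print(f"{replication_bandwidth_mbps(100, 500, 30):.1f} Mbit/s")  # about 222.2 Mbit/s
```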
Documents
Table 2 lists the documents required for deploying the DR solution.
Table 2 Documents
Integration design document: Network integration design. Describes the deployment plan, networking plan, and the bandwidth plan. Obtain this document from the engineering supervisor.
Integration design document: Data planning template for the network integration design. Describes the network data plan, such as the node IP address plan and the storage plan. Obtain this document from the engineering supervisor.
Version document: Datacenter Virtualization Solution 2.1.0 Version Mapping. Provides information about hardware and software version mapping. For enterprise users: visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: visit https://support.huawei.com , search for the document by name, and download it.
UltraVR product document: UltraVR User Guide. Provides guidance on how to install, configure, and commission UltraVR. For enterprise users: visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: visit https://support.huawei.com , search for the document by name, and download it.
SAN product documentation: SAN device document package, for example, OceanStor Pacific Series Product Documentation. Provides guidance on how to install, configure, and commission the SAN devices. This document package is provided by the SAN device vendor. NOTE: For detailed version information, see Constraints and Limitations.
Switch product documentation: Switch document package. Provides information about how to configure the switches by running commands. This document package is provided by the switch vendor.
Server product documentation: Server document package. Provides information about how to configure the servers. This document package is provided by the server vendor.
After obtaining the related documents by referring to Datacenter Virtualization Solution 2.1.0 Version Mapping, make preparations for the installation, such as
obtaining the software packages and installation tools. For details about the installation preparations, see the related documents.
Software Packages
The DR solution has no special requirements for the software packages. Obtain the software packages of UltraVR by referring to Datacenter
Virtualization Solution 2.1.0 Version Mapping.
Scenarios
In the active-standby DR scenario for flash storage, the switch configuration is the same as that in a common deployment scenario without the DR
system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure switches based on the network device documentation and the data plan.
Procedure
Configure access switches.
1. Configure access switches based on the data plan and the access switch documents.
Each access switch must have enough network ports to connect to the data replication network ports on the SAN devices. There are no
special configuration requirements.
2. Configure aggregation switches based on the data plan and the aggregation switch documents.
A route to the data replication network must be configured on the aggregation switches so that replication traffic can be routed between the sites.
3. Configure core switches based on the data plan and the core switch documents.
A route to the data replication network must be configured on the core switches so that replication traffic can be routed between the sites. The management planes of the production site and the DR site must be able to communicate with each other.
Scenarios
In the active-standby DR scenario for flash storage, the storage device configuration is the same as that in a common deployment scenario without
the DR system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure storage devices based on the storage device documentation and the data plan. OceanStor V5 series storage is used as an
example. For details, see "Configuring Basic Storage Services" in the block service section in OceanStor Product Documentation.
Procedure
1. Complete the initial configuration of the SAN devices based on the data plan in the SAN device documents.
If the storage system has the restriction that a LUN mapped to a host cannot be used as the secondary LUN of a remote replication pair, you cannot create remote replication for such a LUN. In this case, configure basic storage services only after completing Configuring the Remote Replication Relationship in the DR center.
2. On FusionCompute, use the planned DR LUNs to create datastores and name these datastores in a unified manner to simplify management,
such as DR_datastore01. Select Virtualization when creating datastores.
Scenarios
After the DR system is installed, you can create VMs by following the normal service process and use the DR system to protect these VMs.
Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating VMs.
Procedure
1. Determine the DR VM creation mode.
To create DR VMs, go to 2.
3. Migrate VMs that are not created on DR LUNs to the DR LUNs and migrate non-DR VMs created on DR LUNs to the non-DR LUNs.
For details, see "Migrating a Whole VM" in FusionCompute 8.8.0 User Guide (Virtualization).
During VM storage migration, non-DR VMs can only be migrated to DR datastores through whole storage migration. VMs to which multiple
disks are attached cannot be migrated through single-disk migration.
After DR VMs are created, VM information changes. In this case, you can update resource information manually or using UltraVR periodic polling. For details, see
DR Management > Active-Passive DR Solution > DR Protection > Refreshing Resource Information in UltraVR User Guide.
Scenarios
After the DR system is installed and configured and DR VMs are provisioned or migrated to DR LUNs, configure the remote replication
relationship and consistency groups for DR LUNs on storage devices. Then, the remote replication feature of storage devices can be used to
implement DR for VMs.
Prerequisites
Conditions
Procedure
1. Configure asynchronous or synchronous remote replication for DR LUNs on the storage device as planned. For details, see
"HyperReplication Feature Guide for Block" in the storage device product documentation.
The sizes of the active and standby LUNs must be the same. If resource LUNs need to be created, you need to configure one resource LUN
for each of controller A and controller B of the storage system. It is recommended that the size of each resource LUN be set to half the
maximum size of a resource pool supported by the storage system. Resource LUNs and the active remote replication LUN must be in
different resource pools.
Before initial synchronization, migrate the SAN devices at the DR site to the production site to reduce bandwidth consumption.
Deploy a high-bandwidth network for initial synchronization, for example, a 10GE network.
Allow only necessary I/O services at the production site during the synchronization. The reason is that the remote replication may be interrupted when resources
of the resource LUN are used up. If this case occurs, you need to manually initiate incremental synchronization.
Before the synchronization is complete, do not perform DR operations, such as a DR switchover, a scheduled migration, or a DR drill.
After the DR relationship is established between the production and DR sites, avoid creating VMs on or migrating VMs to DR LUNs and migrating or deleting
VMs from DR LUNs.
If capacity expansion is required, create a LUN, create VMs that use the storage resources of the LUN, evaluate the recovery point objective (RPO), and
configure a remote replication LUN pair. If data is copied using networks, the initial synchronization consumes more time.
If you have to provision VMs to the existing remote replication LUNs, the following conditions must be met:
Provision VMs during off-peak hours.
The VMs to be provisioned should use thin provisioning disks or thick provisioning lazy zeroed disks rather than common disks.
The total amount of data on all VMs provisioned each time must not exceed 90% of the resource LUN capacity. After provisioning a batch of VMs,
perform immediate synchronization on storage devices. Then, provision another batch of VMs.
After VM provisioning, check whether the RPO meets the service demand and adjust the RPO as required.
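The sizing and batching rules above are simple arithmetic and can be restated as a quick check. The sketch below uses placeholder capacities and is not a dimensioning tool; it only encodes the half-of-maximum-pool sizing guideline and the 90% batch limit described above.

```python
# Quick checks for the resource LUN sizing guideline and the provisioning batch limit.
# All capacity figures are illustrative placeholders.
def recommended_resource_lun_size_gb(max_resource_pool_gb: float) -> float:
    # One resource LUN per controller (A and B), each about half the maximum
    # resource pool size supported by the storage system.
    return max_resource_pool_gb / 2

def batch_within_limit(batch_data_gb: float, resource_lun_gb: float) -> bool:
    # Data provisioned in one batch must not exceed 90% of the resource LUN capacity.
    return batch_data_gb <= 0.9 * resource_lun_gb

if __name__ == "__main__":
    lun_gb = recommended_resource_lun_size_gb(4096)  # e.g. 2048 GB per controller
    print(lun_gb, batch_within_limit(1800, lun_gb))  # 2048.0 True (1800 <= 1843.2)
```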
Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.
Procedure
1. Check whether DR policies are configured for the first time.
If yes, go to 2.
If no, go to 3.
3. Modify DR policies.
For details, see DR Management > Active-Passive DR Solution > DR Protection > Modifying Protection Policies in UltraVR User
Guide.
3.6.1.3.1.2 DR Commissioning
Commissioning Process
Commissioning a DR Test
Commissioning Reprotection
Commissioning DR Switchback
Purpose
Verify that the DR site properly takes over services if the production site is faulty.
Verify that the production site properly takes over services back when it is recovered.
Verify that a recovery plan is feasible, and adjust and optimize the recovery procedure as required.
Verify that the DR site properly takes over services when the production site is in maintenance as planned.
After commissioning, back up the management data by exporting the configuration data. The data can be used to recover the system if an
exception occurs or an operation has not achieved the expected result.
Prerequisites
The active-standby DR system for flash storage has been deployed.
Commissioning Process
Figure 1 shows the DR solution commissioning process.
Procedure
Execute the following test cases:
Commissioning a DR Test
Commissioning Reprotection
Commissioning DR Switchback
Expected Result
The result of each test case meets expectations.
Purpose
Verify that a recovery plan is correct and executable by testing the recovery plan.
Prerequisites
The active-standby DR system for flash storage has been deployed.
Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.
2. Commission a DR test.
For details, see "DR Management" > "Active-Passive DR Solution" > "DR Recovery" > "DR Testing in the DR Center" in OceanStor
BCManager 8.6.0 UltraVR User Guide.
During DR test commissioning, before clearing drilling data from the DR site, execute the required test cases at the DR site and ensure that the execution is
successful.
On FusionCompute at the DR site, view the number of drill VMs and ensure that it is consistent with the number of DR VMs at the production site (a count comparison sketch follows this list).
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts at the DR site. Services at the DR site are running properly, and services at the production site are not
affected.
Additional Information
None
Purpose
Verify that the scheduled migration function is available by executing a recovery plan.
Prerequisites
Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.
After the scheduled migration, execute the required test cases at the DR site and ensure that the execution is successful before performing reprotection.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the numbers of DR VMs at the DR and production sites are consistent.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts at the DR site. Services at the DR site are running properly, and services at the production site are not
affected.
Additional Information
None
Purpose
Disconnect the storage link and execute a recovery plan to check whether DR can be implemented for VMs and further confirm the availability of
the array-based replication DR solution.
Prerequisites
The active-standby DR system for flash storage has been deployed.
Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.
After the fault is rectified, execute the required test cases at the DR site and ensure that the execution is successful before performing reprotection.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the numbers of DR VMs at the DR and production sites are consistent.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.
Additional Information
None
Purpose
Verify that the reprotection function is available by executing a recovery plan.
Prerequisites
The active-standby DR system for flash storage has been deployed.
Procedure
1. Check the reprotection type.
Expected Result
The reprotection is successful.
Additional Information
None
Purpose
After services are switched from the production site to the DR site through the scheduled migration, switch the services back to the production site
based on the drill plan.
Services are migrated from the production site to the DR site when a recoverable fault, such as a power outage, occurs. After the production site
recovers from the fault, synchronize the data generated during DR from the DR site to the production site and then migrate services back to the
production site.
Prerequisites
The active-standby DR system for flash storage has been deployed.
Procedure
1. Determine the DR switchback type.
After commissioning DR switchback, execute the following test cases at the production site and ensure that the execution is successful.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
The DR switchback is successful. VMs are running on the hosts at the production site properly. Services at the production site are running properly
and services at the DR site are not affected.
Additional Information
None
Scenarios
This section guides administrators to back up the UltraVR configuration data (database) before performing critical operations, such as a system upgrade or critical data modification, or after changing the configuration. The backup data can be used to restore the database if an exception occurs or the operation does not achieve the expected result.
The system supports automatic backup and manual backup.
If you choose automatic backup, prepare an SFTP server and configure the SFTP server information on UltraVR. After the configuration is
complete, the system backs up system data to the SFTP server at 02:00 every day based on the UltraVR server time. The UltraVR server time
at the production site and DR site must be consistent. An SFTP server can retain backup data for a maximum of seven days. Data older than
seven days will be automatically deleted. If a backup task fails, the system generates an alarm. The alarm will be automatically cleared when
the next backup task succeeds. The backup directory is as follows (see the path sketch after this list):
Linux: /SFTP user/CloudComputing/DRBackup/eReplication management IP address/YYYY-MM-DD/Auto/ConfigData.zip
Windows: \CloudComputing\DRBackup\eReplication management IP address\YYYY-MM-DD\Auto\ConfigData.zip
If you choose manual backup, manually export the system configuration data and save it locally.
During manual backup, export both the configuration data at the production site and that at the DR site.
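The daily backup path convention and seven-day retention described above can be expressed as a short helper, which is handy when checking the SFTP server manually. The sketch below is an assumption-laden illustration: the SFTP user and management IP address are placeholders, and it only builds the Linux-side path shown above.

```python
# Build the expected automatic-backup path (Linux layout shown above) and check
# whether a given backup date still falls inside the seven-day retention window.
# SFTP_USER and MGMT_IP are placeholders; substitute your own values.
from datetime import date, timedelta
from typing import Optional

SFTP_USER = "sftpuser"        # placeholder SFTP account
MGMT_IP = "192.168.0.100"     # placeholder eReplication management IP address

def backup_path(day: date) -> str:
    return f"/{SFTP_USER}/CloudComputing/DRBackup/{MGMT_IP}/{day:%Y-%m-%d}/Auto/ConfigData.zip"

def within_retention(day: date, today: Optional[date] = None) -> bool:
    today = today or date.today()
    return (today - day) <= timedelta(days=7)

if __name__ == "__main__":
    d = date.today() - timedelta(days=3)
    print(backup_path(d), "retained" if within_retention(d) else "deleted after 7 days")
```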
Prerequisites
Conditions
You have obtained the IP address, username, password, and port of the SFTP server if you choose automatic backup.
Procedure
Automatic backup
2. In the navigation pane, choose Data Maintenance > System Configuration Data.
SFTP IP
SFTP Password
SFTP Port
Encryption Password
To secure configuration data, the backup server must use the SFTP protocol.
5. Click OK.
6. In the Warning dialog box that is displayed, read the content of the dialog box carefully and click OK.
After you select Automatic Backup, if the SFTP server information changes, you can modify the information directly and click OK.
Manual backup
2. In the navigation pane, choose Data Maintenance > System Configuration Data.
4. In the System Configuration Data area, click Export, enter the encryption password, and click OK.
DR Commissioning
Solution Overview
Maintenance Guide
Configuring Switches
Creating DR VMs
Configuring DR Policies
Installation Requirements
Table 1 lists the installation requirements for the DR system.
Local PC (the PC used for the installation): The local PC only needs to meet the requirements for installing FusionCompute; there are no special requirements. For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.
Server (the server that functions as a host (CNA node) on FusionCompute): The server must meet the host requirements for installing FusionCompute, support the FC HBA port, and be able to communicate with the FC switches.
Storage device (storage devices used in the DR solution): Must be block storage that meets the storage compatibility requirements of UltraVR. The block storage uses independent 10GE/25GE network ports for replication, and each block storage device provides at least two network ports for storage replication.
Access switch (access switches of the storage, management, and service planes): Each access switch has sufficient network ports to connect to the data replication network ports on the block storage devices.
Aggregation switch and core switch (aggregation and core switches at the production and DR sites): A route to the data replication network is configured on the aggregation and core switches so that replication traffic can be routed between the sites.
Network (network between the production site and the DR site): The management plane has a bandwidth of at least 10 Mbit/s. The bandwidth of the data replication plane depends on the volume of all the changed data in the VM replication period, calculated as follows: Number of VMs to be protected x Average volume of the changed data in the VM replication period (MB) x 8/(VM replication period (minute) x 60).
Preparing Documents
Table 2 lists the documents required for deploying the DR solution.
Integration design document: Network integration design. Describes the deployment plan, networking plan, and the bandwidth plan. Obtain this document from the engineering supervisor.
Version document: Datacenter Virtualization Solution xxx Version Mapping (xxx indicates the software version). Provides information about hardware and software version mapping. For enterprise users: visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: visit https://support.huawei.com , search for the document by name, and download it.
UltraVR product document: UltraVR User Guide. Provides guidance on how to install, configure, and commission UltraVR. For enterprise users: visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: visit https://support.huawei.com , search for the document by name, and download it.
OceanStor Pacific series product documentation: OceanStor Pacific Series Product Documentation. Provides guidance on how to install, configure, and commission the block storage devices. For enterprise users: visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: visit https://support.huawei.com , search for the document by name, and download it.
Server product documentation: Server document package. Provides information about how to configure the servers. This document package is provided by the server vendor.
Switch product documentation: Switch document package. Provides information about how to configure the switches by running commands. This document package is provided by the switch vendor.
After obtaining the related documents by referring to Datacenter Virtualization Solution xxx Version Mapping, make preparations for the installation, such as
obtaining the software packages and installation tools. For details about the installation preparations, see the related documents.
Software Packages
The DR solution has no special requirements for the software packages. Obtain the software packages of UltraVR by referring to Datacenter
Virtualization Solution xxx Version Mapping.
FusionCompute
UltraVR
Scenarios
In the active-standby DR scenario for scale-out storage, the switch configuration is the same as that in a common deployment scenario without the DR system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure switches based on the network device documentation and the data plan.
Procedure
Configure access switches.
1. Configure access switches based on the data plan and the access switch documents.
Each access switch must have enough network ports to connect to the block storage replication network ports. There are no special
configuration requirements.
2. Configure aggregation switches based on the data plan and the aggregation switch documents.
A route to the data replication network must be configured on the aggregation switches so that replication traffic can be routed between the sites.
3. Configure core switches based on the data plan and the core switch documents.
A route to the data replication network must be configured on the core switches so that replication traffic can be routed between the sites. The management planes of the production site and the DR site must be able to communicate with each other.
Scenarios
In the active-standby DR scenario for scale-out storage, the storage device configuration is the same as that in a common deployment scenario
without the DR system deployed. This section describes only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure storage devices based on the storage device documentation and the data plan.
Procedure
1. Install the scale-out storage. For details, see "Installation" > "Software Installation Guide" > "Installing the Block Service" > "Connecting to
FusionCompute" in OceanStor Pacific Series Product Documentation.
2. Connect the storage system as instructed in "Storage Resource Creation (Scale-Out Block Storage)" in FusionCompute 8.8.0 User Guide
(Virtualization) of this document.
Plan the names of datastores in a unified manner, for example, DR_datastore01.
3. Create a storage port as a remote replication port. For details, see "Adding a Storage Port" in FusionCompute 8.8.0 User Guide
(Virtualization) of this document.
You are advised to use two physical NICs to form an aggregation port and create a storage port on the aggregation port.
It is recommended that the remote replication plane be separated from the storage plane. That is, the remote replication port and storage port are created
on different aggregation ports or NICs.
4. Complete storage remote replication configurations. For details, see "Checking the License", "Creating a Replication Cluster", and "Adding a
Remote Device" in "Configuration" > "Feature Guide" > "HyperReplication Feature Guide for Block" > "Configuring HyperReplication" in
OceanStor Pacific Series Product Documentation.
Scenarios
After the DR system is installed, you can create VMs by following the normal service process and use the DR system to protect these VMs.
Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating VMs.
Procedure
1. Determine the DR VM creation mode.
To create DR VMs, go to 2.
3. Migrate VMs that are not created on DR volumes to the DR volumes and migrate non-DR VMs created on DR volumes to the non-DR
volumes.
For details, see "Migrating a VM" in FusionCompute 8.8.0 User Guide (Virtualization).
During VM storage migration, non-DR VMs can only be migrated to DR datastores through whole storage migration. VMs to which multiple
disks are attached cannot be migrated through single-disk migration.
After DR VMs are created, VM information changes. In this case, you can update resource information manually or using UltraVR periodic polling. For details, see
DR Management > Active-Passive DR Solution > DR Protection > Refreshing Resource Information in UltraVR User Guide.
Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.
Procedure
1. Check whether DR policies are configured for the first time.
If yes, go to 2.
3.6.1.3.2.2 DR Commissioning
Commissioning Process
Commissioning a DR Test
Commissioning Reprotection
Commissioning DR Switchback
Purpose
Verify that the DR site properly takes over services if the production site is faulty.
Verify that the production site properly takes over services back when it is recovered.
Verify that a recovery plan is feasible, and adjust and optimize the recovery procedure as required.
Verify that the DR site properly takes over services when the production site is in maintenance as planned.
After commissioning, back up the management data by exporting the configuration data. The data can be used to recover the system if an
exception occurs or an operation has not achieved the expected result.
Prerequisites
The active-standby DR system for scale-out storage has been deployed.
Commissioning Process
Figure 1 shows the DR solution commissioning process.
Procedure
Execute the following test cases:
Commissioning a DR Test
Commissioning Reprotection
Commissioning DR Switchback
Expected Result
The result of each test case meets expectations.
Purpose
Verify that a recovery plan is correct and executable by testing the recovery plan.
Prerequisites
The active-standby DR system for scale-out storage has been deployed.
Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.
2. Commission a DR test.
For details, see DR Management > Active-Passive DR Solution > DR Recovery > DR Testing in the DR Center in UltraVR User Guide.
During DR test commissioning, before clearing drilling data from the DR site, execute the required test cases at the DR site and ensure that the execution is
successful.
On FusionCompute at the DR site, view the number of drill VMs and ensure that the number of drill VMs is consistent with that of DR VMs at the
production site.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts at the DR site. Services at the DR site are running properly, and services at the production site are not
affected.
Additional Information
None
Purpose
Verify that the scheduled migration function is available by executing a recovery plan.
Prerequisites
The active-standby DR system for scale-out storage has been deployed.
Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.
After the scheduled migration, execute the required test cases at the DR site and ensure that the execution is successful before performing reprotection.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the numbers of DR VMs at the DR and production sites are consistent.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts at the DR site. Services at the DR site are running properly, and services at the production site are not
affected.
Additional Information
None
Purpose
Disconnect the storage link and execute a recovery plan to check whether VM DR is supported and the active-standby DR solution for scale-out
storage is available.
Prerequisites
Procedure
1. On FusionCompute at the production site, query and make a note of the number of DR VMs at the production site.
After the fault is rectified, execute the required test cases at the DR site and ensure that the execution is successful before performing reprotection.
On FusionCompute at the DR site, view the number of DR VMs and ensure that the numbers of DR VMs at the DR and production sites are consistent.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts at the DR site and the services at the DR site are also running properly.
Additional Information
None
Purpose
Verify that the reprotection function is available by executing a recovery plan.
Prerequisites
The active-standby DR system for scale-out storage has been deployed.
Procedure
Expected Result
The reprotection is successful.
Additional Information
None
Purpose
After services are switched from the production site to the DR site through the scheduled migration, switch the services back to the production site
based on the drill plan.
Services are migrated from the production site to the DR site when a recoverable fault, such as a power outage, occurs. After the production site
recovers from the fault, synchronize the data generated during DR from the DR site to the production site and then migrate services back to the
production site.
Prerequisites
The active-standby DR system for scale-out storage has been deployed.
Procedure
1. Determine the DR switchback type.
After commissioning DR switchback, execute the following test cases at the production site and ensure that the execution is successful.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
The DR switchback is successful. VMs are running on the hosts at the production site properly. Services at the production site are running properly
and services at the DR site are not affected.
Additional Information
None
Scenarios
This section guides administrators to back up the UltraVR configuration data (database) before performing critical operations, such as a system upgrade or critical data modification, or after changing the configuration. The backup data can be used to restore the database if an exception occurs or the operation does not achieve the expected result.
The system supports automatic backup and manual backup.
If you choose automatic backup, prepare an SFTP server and configure the SFTP server information on UltraVR. After the configuration is
complete, the system backs up system data to the SFTP server at 02:00 every day based on the UltraVR server time. The UltraVR server time
at the production site and DR site must be consistent. An SFTP server can retain backup data for a maximum of seven days. Data older than
seven days will be automatically deleted. If a backup task fails, the system generates an alarm. The alarm will be automatically cleared when
the next backup task succeeds. The backup directory is:
Linux: /SFTP user/CloudComputing/DRBackup/eReplication management IP address/YYYY-MM-DD/Auto/ConfigData.zip
Windows: \CloudComputing\DRBackup\eReplication management IP address\YYYY-MM-DD\Auto\ConfigData.zip
If you choose manual backup, manually export the system configuration data and save it locally.
During manual backup, export both the configuration data at the production site and that at the DR site.
Prerequisites
Conditions
You have obtained the IP address, username, password, and port of the SFTP server if you choose automatic backup.
Procedure
Automatic backup
2. In the navigation pane, choose Data Maintenance > System Configuration Data.
SFTP IP
SFTP Password
SFTP Port
Encryption Password
To secure configuration data, the backup server must use the SFTP protocol.
5. Click OK.
6. In the Warning dialog box that is displayed, read the content of the dialog box carefully and click OK.
After you select Automatic Backup, if the SFTP server information changes, you can modify the information directly and click OK.
Manual backup
2. In the navigation pane, choose Data Maintenance > System Configuration Data.
4. In the System Configuration Data area, click Export, enter the encryption password, and click OK.
DR Commissioning
Configuring Switches
Installing FusionCompute
Creating DR VMs
Configuring DR Policies
Installation Requirements
Table 1 lists the installation requirements for the DR system.
Local PC (the PC used for the installation): The local PC only needs to meet the requirements for installing FusionCompute; there are no special requirements. For details about the requirements of FusionCompute for the local PC, servers, and storage devices, see System Requirements.
Server (the server that functions as a host (CNA node) on FusionCompute): The server must meet the host requirements for installing FusionCompute, support the FC HBA port, and be able to communicate with the FC switches.
Storage device (storage devices used in the DR solution): Must be SAN devices that meet the storage compatibility requirements of UltraVR and the storage device requirements for installing FusionCompute.
Access switch (access switches of the storage, management, and service planes): There are no special requirements for the Ethernet access switches on the management and service planes. The access switches on the storage plane must meet the following requirements: the FC switches in use must be compatible with the hosts and FC SAN storage, and the access switches must have sufficient network ports to connect to the IP SAN storage replication network ports.
Aggregation switch (Ethernet aggregation switches and FC aggregation switches at the production and DR sites): The Ethernet aggregation switches must support VRRP.
Network (network between the production site and the DR site): The management plane has a bandwidth of at least 10 Mbit/s. The bandwidth of the data replication plane depends on the volume of all the changed data in the VM replication period, calculated as follows: Number of VMs to be protected x Average volume of the changed data in the VM replication period (MB) x 8/(VM replication period (minute) x 60). In the production center and intra-city DR center, the management plane connects to the service plane using a large Layer 2 network, and in the large Layer 2 network the RTT between any two sites is less than or equal to 1 ms.
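The 1 ms RTT requirement for the large Layer 2 network can be spot-checked from a management node at either site. The sketch below parses the average RTT from the Linux ping summary line; the peer address is a placeholder from the data plan, and this is only a rough spot check, not a substitute for formal network acceptance testing.

```python
# Spot-check the average RTT to the peer site against the 1 ms requirement.
# PEER_SITE_IP is a placeholder; use a management-plane address from the data plan.
import re
import subprocess

PEER_SITE_IP = "192.168.20.10"
RTT_LIMIT_MS = 1.0

def average_rtt_ms(ip: str, count: int = 10) -> float:
    """Parse the average RTT from the Linux ping summary line 'rtt min/avg/max/mdev = ...'."""
    output = subprocess.run(
        ["ping", "-c", str(count), ip], capture_output=True, text=True, check=True
    ).stdout
    match = re.search(r"= [\d.]+/([\d.]+)/", output)
    if match is None:
        raise RuntimeError("could not parse ping output")
    return float(match.group(1))

if __name__ == "__main__":
    rtt = average_rtt_ms(PEER_SITE_IP)
    print(f"average RTT {rtt:.3f} ms:", "OK" if rtt <= RTT_LIMIT_MS else "exceeds the 1 ms limit")
```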
Documents
Table 2 lists the documents required for deploying the DR solution.
Table 2 Documents
Integration design document: Network integration design. Describes the deployment plan, networking plan, and the bandwidth plan. Obtain this document from the engineering supervisor.
Integration design document: Data planning template for the network integration design. Provides the network data plan result, such as the IP plan of nodes, the storage plan, and the plan of VLANs, zones, gateways, and routes. Obtain this document from the engineering supervisor.
Version document: FusionCompute X.X.X Version Mapping. Provides information about hardware and software version mapping. For enterprise users: visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: visit https://support.huawei.com , search for the document by name, and download it.
UltraVR product document: UltraVR User Guide. Provides guidance on how to install, configure, and commission UltraVR. For enterprise users: visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: visit https://support.huawei.com , search for the document by name, and download it.
V3, V5, or Dorado series storage product documentation: OceanStor Pacific Series Product Documentation. Includes storage installation, configuration, and commissioning as well as the HyperMetro feature guide. In a ring network, the configuration guide for the 3DC scenario is included. For enterprise users: visit https://support.huawei.com/enterprise , search for the document by name, and download it. For carrier users: visit https://support.huawei.com , search for the document by name, and download it.
Switch product documentation: Switch document package. Provides information about how to configure the switches by running commands. This document package is provided by the switch vendor.
Server product documentation: Server document package. Provides information about how to configure the servers. This document package is provided by the server vendor.
After obtaining the related documents by referring to FusionCompute X.X.X Version Mapping, make preparations for the installation, such as obtaining the software
packages, installation tools, and license files. For details about the installation preparations, see the related documents.
FusionCompute
UltraVR
Scenarios
In the geo-redundant 3DC DR scenario, the switch configuration is the same as that in a common deployment scenario without the DR system deployed. For details about the switch configuration at the production site and the intra-city DR site, see Configuring Switches. For details about the switch configuration at the remote DR site, see Configuring Switches.
When deploying the DR system, configure switches based on the network device documentation and the data plan.
Scenarios
In the 3DC DR scenario, install one FusionCompute system across the production center and intra-city DR center, with the active and standby VRM nodes deployed in the production center and intra-city DR center, respectively. For details, see Installing FusionCompute. For details about how to install a separate FusionCompute system in the remote DR center, see Installation Using SmartKit.
Scenarios
In the geo-redundant 3DC DR scenario, the storage device configuration is the same as that in a common deployment scenario with no DR system
deployed. This section describes only the special configuration requirements and precautions in the DR scenario.
When deploying the DR system, configure storage devices based on the storage device documentation and the data plan. OceanStor V5 series storage is used as an
example. For details, see "Configuring Basic Storage Services" in the block service section in OceanStor Product Documentation.
Procedure
1. Check the networking mode in the geo-redundant 3DC DR scenario.
In a ring network, complete the initial configuration of the active-active and asynchronous remote replication in a DR Star consisting
of three DCs according to the online documentation of SAN storage. Then, go to 3.
In a non-ring network, configure storage devices in the production center and those in the intra-city DR center by following the steps
provided in Configuring Storage .
2. Complete the initial configuration of the SAN storage devices in the remote DR center based on the SAN storage device documentation and
the data plan.
If the storage system has the restriction that a LUN mapped to a host cannot be used as the secondary LUN of a remote replication pair, you cannot create remote replication for such a LUN. In this case, configure basic storage services only after configuring the remote replication relationship in the DR center. For details, see Configuring the Remote Replication Relationship (Non-Ring Networking Mode).
3. On FusionCompute, use the planned DR LUNs to create datastores and name these datastores in a unified manner to simplify management,
such as DR_datastore01. Select Virtualization when creating datastores.
Scenarios
After the DR system is installed, you can create VMs by following the normal service process and use the DR system to protect these VMs.
Prerequisites
Conditions
You have finished the initial service configuration.
Data
You have obtained the data required for creating VMs.
Procedure
1. Determine the DR VM creation mode.
To create DR VMs, go to 2.
3. Migrate VMs that are not created on DR LUNs to the DR LUNs and migrate non-DR VMs created on DR LUNs to the non-DR LUNs.
For details, see "Migrating a Whole VM" in FusionCompute 8.8.0 User Guide (Virtualization).
During VM storage migration, non-DR VMs can only be migrated to DR datastores through whole storage migration. VMs to which multiple
disks are attached cannot be migrated through single-disk migration.
After DR VMs are created, VM information changes. In this case, you can update resource information manually or using UltraVR periodic polling. For details, see
DR Management > Geo-Redundant DR Solution > DR Protection > Refreshing Resource Information in UltraVR User Guide.
Scenarios
In the geo-redundant 3DC DR scenario, a production center and an intra-city DR center are deployed as an active-active data center. You need to
configure HA and resource scheduling policies for a DR cluster to meet the active-active scenario. For details, see Configuring HA and Resource
Scheduling Policies for a DR Cluster .
Scenarios
In the geo-redundant 3DC DR scenario with the non-ring networking mode, if the remote replication relationship is established between an active-
active data center and a remote DR center, configure the remote replication relationship for DR LUNs. For details, see Configuring the Remote
Replication Relationship .
Scenarios
This section guides software commissioning engineers to configure DR policies after deploying the DR system to protect DR VMs.
Procedure
1. Check whether DR policies are configured for the first time.
If yes, go to 2.
If no, go to 3.
3. Modify DR policies.
For details, see DR Management > Geo-Redundant DR Solution > DR Protection > Modifying Protection Policies in UltraVR User
Guide.
3.6.1.4.2 DR Commissioning
Commissioning Process
Commissioning a DR Test
Commissioning Reprotection
Commissioning DR Switchback
Purpose
Verify that the DR site properly takes over services if the production site is faulty.
Verify that the production site properly takes over services back when it is recovered.
Verify that a recovery plan is feasible, and adjust and optimize the recovery procedure as required.
Verify that the DR site properly takes over services when the production site is in maintenance as planned.
After commissioning, back up the management data by exporting the configuration data. The data can be used to recover the system if an
exception occurs or an operation has not achieved the expected result.
Prerequisites
Commissioning Process
Figure 1 shows the DR solution commissioning process.
Procedure
For details about how to commission an active-active data center that consists of a production center and an intra-city DR center, see "DR and
Backup" > "Metropolitan Active-Active DR (Using OceanStor V3/V5/Dorado Series)" > "DR Commissioning" in FusionCompute 8.8.0 DR
and Backup.
To commission the array-based replication DR system, which consists of an active-active data center and a remote DR center, execute the
following test cases:
Commissioning a DR Test
Commissioning Reprotection
Commissioning DR Switchback
Expected Result
The result of each test case meets expectation.
Purpose
Verify that a recovery plan is correct and executable by testing the recovery plan.
None
Prerequisites
The geo-redundant 3DC DR system has been deployed.
Procedure
1. On FusionCompute in the production center, make a note of the number of DR VMs in the production center.
2. Commission a DR test.
For details, see DR Management > Geo-Redundant DR Solution > DR (HyperMetro Expansion) > DR Testing in UltraVR User Guide.
During DR test commissioning, before clearing drilling data from the remote DR center, execute the required test cases in the remote DR center and ensure that the
execution is successful.
On FusionCompute of the remote DR center, ensure that the number of drill VMs is consistent with that of DR VMs in the production center.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are properly running on the hosts in the remote DR center. The services in the remote DR center are running properly, and the services in the
active-active data center are not affected.
Additional Information
None
Purpose
Commission DR by switching production service systems to the remote DR center as planned.
Prerequisites
The geo-redundant 3DC DR system has been deployed.
Procedure
1. On FusionCompute in the production center, make a note of the number of DR VMs in the production center.
After the scheduled migration, execute the required test cases in the remote DR center and ensure that the execution is successful before performing
reprotection.
On FusionCompute of the remote DR center, ensure that the number of DR VMs is consistent with that in the production center.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts in the remote DR center, and the services in the remote DR center are also running properly.
Additional Information
None
Purpose
If an unrecoverable fault occurs in an active-active data center that consists of the production center and intra-city DR center, enable the fault
recovery function of the DR plan to switch services to the remote DR center.
The fault recovery involves the reconstruction of the entire DR system. Therefore, exercise caution when using this function.
Prerequisites
The geo-redundant 3DC DR system has been deployed.
Procedure
1. On FusionCompute in the production center, make a note of the number of DR VMs in the production center.
For details, see DR Management > Geo-Redundant DR Solution > DR (HyperMetro Expansion) > Migrating Services to the Remote
DR Center upon a Fault Occurring in HyperMetro Data Centers in UltraVR User Guide.
After the fault is rectified, execute the required test cases at the remote DR site and ensure that the execution is successful.
On FusionCompute of the remote DR center, ensure that the number of DR VMs is consistent with that in the production center.
Select a running VM randomly and log in to the VM using VNC.
If the VNC login page is displayed, the VM is running properly.
Migrate VMs.
Stop VMs.
Restart VMs.
Start VMs.
Hibernate VMs (in the x86 architecture).
For details, see "VM Operation Management" in FusionCompute 8.8.0 User Guide (Virtualization).
Expected Result
VMs are running properly on the hosts in the remote DR center, and the services in the remote DR center are also running properly.
Additional Information
None
Purpose
Verify that the reprotection function is available by executing a recovery plan.
Prerequisites
The geo-redundant 3DC DR system has been deployed.
Procedure
1. Check the reprotection type.
Expected Result
The reprotection is successful.
Additional Information
None
Purpose
After services are switched from the production center to the remote DR center through the scheduled migration, manually switch the services back
to the active-active data center that consists of the production center and intra-city DR center.
Services are migrated from the active-active data center to the remote DR center when a recoverable fault, such as a power failure, occurs. After the
active-active data center recovers from the fault, synchronize data generated during DR from the remote DR center to the active-active data center
and switch services back to the active-active data center.
This section describes how to commission a DR switchback.
Prerequisites
The geo-redundant 3DC DR system has been deployed.
Procedure
1. Determine the DR switchback type.
After commissioning DR switchback, perform the operations in Configuring HA and Resource Scheduling Policies for a DR Cluster and configure the production
center as the preferred site on the storage to ensure that services are running in the production center. Execute the required test cases in the production center, and
ensure that the execution is successful.
Expected Result
The DR switchback is successful and VMs are running on hosts in the production center properly. Services in the production center and intra-
city DR center are running properly, and services in the remote DR center are not affected.
Additional Information
None
Scenarios
This section guides administrators to back up configuration data on UltraVR to back up a database before performing critical operations, such as a
system upgrade or critical data modification, or after changing the configuration. The backup data can be used to restore the database if an exception
occurs or the operation has not achieved the expected result.
The system supports automatic backup and manual backup.
If you choose automatic backup, prepare an SFTP server and configure the SFTP server information on UltraVR. After the configuration is
complete, the system backs up system data to the SFTP server at 02:00 every day based on the UltraVR server time. The UltraVR server time
at the production site and DR site must be consistent. An SFTP server can retain backup data for a maximum of seven days. Data older than
seven days will be automatically deleted. If a backup task fails, the system generates an alarm. The alarm will be automatically cleared when
the next backup task succeeds. The backup directory is:
Linux: /SFTP user/CloudComputing/DRBackup/eReplication management IP address/YYYY-MM-DD/Auto/ConfigData.zip
Windows: \CloudComputing\DRBackup\eReplication management IP address\YYYY-MM-DD\Auto\ConfigData.zip
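For example, assuming the SFTP login user is sftpuser and the eReplication management IP address is 192.168.10.20 (both values are hypothetical placeholders), an automatic backup created on 1 March 2025 would be stored on a Linux SFTP server at:
/sftpuser/CloudComputing/DRBackup/192.168.10.20/2025-03-01/Auto/ConfigData.zip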
If you choose manual backup, manually export the system configuration data and save it locally.
During manual backup, export both the configuration data at the production site and that at the DR site.
Prerequisites
Conditions
You have obtained the IP address, username, password, and port of the SFTP server if you choose automatic backup.
Procedure
Automatic backup
2. In the navigation pane, choose Data Maintenance > System Configuration Data.
SFTP IP
SFTP Password
SFTP Port
Encryption Password
To secure configuration data, the backup server must use the SFTP protocol.
5. Click OK.
6. In the Warning dialog box that is displayed, read the content of the dialog box carefully and click OK.
After you select Automatic Backup, for any change of the SFTP server information, you can directly modify the information and click OK.
Manual backup
2. In the navigation pane, choose Data Maintenance > System Configuration Data.
4. In the System Configuration Data area, click Export, enter the encryption password, and click OK.
3.6.2 Backup
Centralized Backup Solution
Backup Commissioning
For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it.
For carrier users: Visit https://support.huawei.com , search for the document by name, and download it.
A PC
Network
Installation data
Scenarios
1. This section guides software commissioning engineers in installing eBackup in a virtual environment, by referring to OceanStor BCManager eBackup User Guide (Virtualization), after FusionCompute has been installed and configured, so that VMs can be backed up using eBackup.
2. This section describes how to install eBackup using a template on FusionCompute. For details, see Installation and Uninstallation >
Installing eBackup > Installing eBackup Using a Template in OceanStor BCManager eBackup User Guide (Virtualization).
3. After eBackup is installed on servers, configure one server as the backup server and set related parameters. For details, see Installation and
Uninstallation > Configuring eBackup Servers > Configuring a Backup Server in OceanStor BCManager eBackup User Guide
(Virtualization).
4. If the backup proxy has been planned in the eBackup backup management system, configure other servers on which eBackup is installed as
the backup proxies and set related parameters. For details, see Installation and Uninstallation > Configuring eBackup Servers >
(Optional) Configuring a Backup Proxy in OceanStor BCManager eBackup User Guide (Virtualization).
Prerequisites
Conditions
You have obtained the username and password for logging in to FusionCompute.
You have completed the preparations for installing eBackup by referring to the prerequisites described in Installation and Uninstallation >
Installing eBackup > Installing eBackup Using a Template in OceanStor BCManager eBackup User Guide (Virtualization).
If scale-out block storage is used, eBackup VMs must be deployed on the compute nodes of scale-out block storage, and Switching Mode of
the host storage ports on the compute nodes must be set to OVS forwarding mode. For details, see Backup > Configuring Production
Storage > Adding an eBackup Server to a Huawei Distributed Block Storage Cluster in OceanStor BCManager eBackup User Guide
(Virtualization).
You have created VBS clients by referring to "Creating VBS Clients" in OceanStor Pacific Series Product Documentation (Huawei Engineer)
if scale-out block storage is used.
You have created a DVS connecting to the service plane and added an uplink to the DVS on the GE/10GE network.
Process
Figure 1 shows the process for installing and configuring eBackup.
Procedure
Perform the following operations on both the active and standby nodes.
Add an uplink to the DVS that connects to the management plane.
After FusionCompute installation is complete, if no management plane network port is added, the DVS connecting to the management plane has
only two uplinks, that is, the management plane ports of the host where the active and standby VRM VMs reside. Before installing the eBackup
server, you must add the management plane port of the host where the eBackup backup server resides to the DVS connecting to the management
plane based on the data plan.
1. Log in to FusionCompute.
For a GE network, go to 3.
For a 10GE network, go to 5.
Management plane port group
The port group must be newly created on the management DVS.
The port group must be named based on the data plan. For example, the port group is named Mgmt_eBackup.
Port Type is set to Access.
Outbound Traffic Shaping and Inbound Traffic Shaping must be selected, and their six parameters must be set to the same value based on the networking type.
In GE networking mode, the parameter values are set to 512.
In 10GE networking mode, the parameter values are set to 5120.
Priority is set to Low.
(Recommended) Connection mode is set to VLAN, and the VLAN ID is set to 0.
Keep the default values for other settings.
Service plane port group 01
The port group must be newly created on the service plane DVS.
The port group must be named based on the data plan. For example, the port group is named Svc01_eBackup.
Port Type is set to Access.
Outbound Traffic Shaping and Inbound Traffic Shaping must be selected, and their six parameters must be set to the same value based on the networking type.
In GE networking mode, the parameter values are set to 512.
In 10GE networking mode, the parameter values are set to 5120.
Priority is set to Low.
(Recommended) Connection mode is set to VLAN, and the VLAN ID is set based on the user's network plan.
Keep the default values for other settings.
Storage plane port group 01
The port group must be newly created on the storage plane DVS.
The port group must be named based on the data plan. For example, the port group is named Strg01_eBackup.
Port Type is set to Access.
Outbound Traffic Shaping and Inbound Traffic Shaping must be selected, and their six parameters must be set to the same value based on the networking type.
In GE networking mode, the parameter values are set to 512.
In 10GE networking mode, the parameter values are set to 5120.
Priority is set to Low.
(Recommended) Connection mode is set to VLAN, and the VLAN ID is set based on the user's network plan.
Keep the default values for other settings.
Storage plane port group 02
The port group must be newly created on the storage plane DVS.
The port group must be named based on the data plan. For example, the port group is named Strg02_eBackup.
Port Type is set to Access.
Outbound Traffic Shaping and Inbound Traffic Shaping must be selected, and their six parameters must be set to the same value based on the networking type.
In GE networking mode, the parameter values are set to 512.
In 10GE networking mode, the parameter values are set to 5120.
Priority is set to Low.
(Recommended) Connection mode is set to VLAN, and the VLAN ID is set based on the user's network plan.
Keep the default values for other settings.
NOTE:
If the production storage can communicate with the backup storage, the storage plane port group 01 and the storage plane port group 02 can be combined.
Install eBackup.
7. Create eBackup VMs. For details, see "Importing a VM using a Template" in FusionCompute 8.8.0 Product Documentation.
8. Use a template to deploy eBackup server VMs based on the data plan and by referring to OceanStor BCManager eBackup User Guide
(Virtualization).
Note the following requirements for deploying eBackup VMs:
The eBackup VMs must be named as planned. For example, the eBackup VM is named eBackup Server 01.
Select the port group to which the VM NICs belong based on the rules listed in Table 2.
Select the datastores to be used by the eBackup servers based on the data plan.
The VM specifications must meet the minimum configuration requirements of the eBackup server. For details, see "Checking the Deployment Environment" in OceanStor BCManager eBackup User Guide (Virtualization).
Deselect Synchronize with host clock when configuring the clock synchronization policy.
QoS settings:
In the CPU resource control area, set the value of Reserved (MHz) to the maximum value.
In the memory resource control area, set the value of Reserved (MB) to the maximum value. (x86 architecture)
NIC 1 (eth0): Used as the network port for the eBackup management plane. Select the newly created management plane port group, for example, Mgmt_eBackup.
NIC 2 (eth1): Used as the network port for the eBackup internal communication plane. Select the newly created service plane port group 01, for example, Svc01_eBackup.
NIC 3 (eth2): Used as the network port for the eBackup backup storage plane. Select the newly created storage plane port group 01, for example, Strg01_eBackup.
NIC 4 (eth3): Used as the network port for the eBackup production storage plane. Select the newly created storage plane port group 02, for example, Strg02_eBackup.
Configure eBackup.
9. Configure eBackup by referring to Installation and Uninstallation > Configuring eBackup Servers in OceanStor BCManager eBackup
User Guide (Virtualization).
Configure one eBackup-installed server as the backup server. For details, see "Configuring a Backup Server" in chapter
"Configuring eBackup Servers".
If backup proxies are planned in the eBackup backup management system, you need to initialize the servers that have eBackup
installed into backup proxies. For details, see "(Optional) Configuring a Backup Proxy" in chapter "Configuring eBackup Servers".
To configure eBackup as an HA system, you need to set HA parameters. For details, see section "(Optional) Configuring the HA
Function" in Installation and Uninstallation.
Scenarios
Create storage units on the eBackup server and connect the eBackup server to FusionCompute.
Prerequisites
Conditions
The eBackup server is communicating properly with the management planes of FusionCompute and the external shared storage devices.
Storage units have been configured on eBackup. For details, see "Creating a Storage Unit" in OceanStor BCManager eBackup User Guide
(Virtualization).
Data
Obtain required data by referring to pre-backup preparation contents in OceanStor BCManager eBackup User Guide (Virtualization).
Procedure
1. On the FusionCompute web client, create an account for configuring interconnection with the eBackup server.
You are advised to create a dedicated account for the interconnection instead of using an existing account. For details about how to create an
account, see Creating a User . When creating the account, set User Type to Interface interconnection user and Role to administrator, as
shown in Figure 1.
Commissioning VM Restoration
Purpose
Verify the availability of the VM backup function by creating backup policies and monitoring the execution results of the backup tasks. You are
advised to verify the availability of both the CBT backup plan and the snapshot plan.
Prerequisites
The backup system has been installed and configured.
Commissioning Procedure
Do not use a VM that is running at the production site as the commissioning VM. Instead, create a VM to perform the commissioning.
8. On the eBackup management console, choose Monitor > Job to check the VM backup status.
For details, see "(Optional) Viewing a Backup Job" in OceanStor BCManager eBackup User Guide (Virtualization).
Expected Result
The VM backup is successful.
Additional Information
None
Purpose
Verify the availability of the VM restoration function by restoring a VM using the backup data.
Prerequisites
The backup data of the VM is available.
Commissioning Procedure
Do not use a VM that is running at the production site as the commissioning VM. Instead, create a VM to perform the commissioning.
If the data on the VM is damaged, select Restore VM Disk to Original VM, Restore VM Disk to Specific VM or Restore VM
Disk to Specific Disk.
For details, see "Restoring FusionSphere VMs" in OceanStor BCManager eBackup User Guide (Virtualization).
2. On the eBackup management console, choose Monitor > Job to check the VM or VM disk restoration status.
For details, see "(Optional) Viewing a Restore Job" in OceanStor BCManager eBackup User Guide (Virtualization).
3. x86 architecture: Log in to the restored VM using VNC and ensure that the VM is running properly without BSOD or black screen.
Arm architecture: Log in to the restored VM using VNC and ensure that the VM is running properly without black screen.
Expected Result
The restored VM is running properly.
Additional Information
None
Procedure
Verify the eDME installation.
1. On the maintenance terminal, type https://Node IP address:31943 (for example, https://192.168.125.10:31943) in the address box of a
browser and press Enter.
If eDME is deployed in a three-node cluster, use the management floating IP address to log in to eDME.
In the three-node cluster deployment scenario, the automatic active/standby switchover is supported. After an active/standby switchover, it takes about 10 minutes to start all services on the new active node. During this period, the O&M portal can be accessed, but some operations may fail. Wait until the services are restarted and try again.
5. On the navigation bar, choose Provisioning > Virtualization Service > VMs.
6. Click Create.
Parameter Description
Name: VM name.
Description: VM description.
Compute Resources: Compute resource to which the VM belongs. You can select a cluster or host.
NOTE:
If a cluster is selected, the system randomly selects a host in the cluster to create a VM.
If a host is selected, a VM is created on the specified host.
If a VM is bound to a host, the VM can run on this host only. The VM can be migrated to another host, but the HA function becomes invalid.
OS: Type of the OS to be installed on a VM. The options are Linux, Windows, and Other.
NOTE:
The OS type of the VM created in the Arm cluster or host must be set to Linux.
When you install an OS, the OS type and version must be consistent with the information you specify here. Otherwise, VM faults may occur when the VM runs.
Parameter Description
VMs of the Arm architecture support only graphics cards of the virtio type.
Floppy Drive: Floppy drive file to be mounted. The default value is Unmount. You can manually select the floppy drive file to be mounted.
NOTE:
If you need to add a disk, NIC, or GPU resource group to the VM, click Add Device and select the device to be added.
Boot modes: Network, CD/DVD-ROM, Disk, and Specific device boot sequence
VNC keyboard settings: English (US), French, German, Italian, Russian, and Spanish
Parameter Description
Clock Synchronization Policy:
Selecting Synchronize with host clock: The VM periodically synchronizes time with the host.
Deselecting Synchronize with host clock: The user can set the VM time. The customized time depends on the VM RTC time. After setting the VM system time, you need to synchronize the RTC time with the customized VM system time. The system time of all hosts at a site must be the same. Otherwise, the time changes after a VM HA task is performed, a VM is migrated, a VM is woken up from the hibernated state (x86 architecture), or a VM is restored using a snapshot.
Boot Firmware: You do not need to set this parameter when Boot Firmware of an Arm VM is UEFI.
NOTE:
This parameter is unavailable when you create a VM from a template or clone a VM.
Latency (ms): This parameter is available only when Boot Firmware is BIOS.
EVS Affinity:
Selecting EVS Affinity: The VM CPUs, memory, and EVS forwarding cores share the same NUMA node on a host, and VMs with vhost-user NICs feature the optimal NIC rate.
Deselecting EVS Affinity: The VM EVS affinity does not take effect. The VM CPUs and memory are allocated to multiple NUMA nodes on a host randomly, and VMs with vhost-user NICs do not feature the optimal NIC rate.
NUMA Topology Adjustment: Enabling NUMA Topology Adjustment: The system automatically calculates the VM NUMA topology based on VM configurations, advanced NUMA parameters, and the physical server's NUMA configurations and sets the NUMA affinity of the VM and physical server, enabling optimal VM memory access performance.
HPET: Selecting this option can meet the precise timing requirements of multimedia players and other applications (such as attendance and billing).
Security VM: Set the security VM type. Security Type and Security VM Type are available when this option is selected.
Security Type can be set to the following:
Antivirus: provides antivirus for virtualization.
Deep packet inspection (DPI): provides network protection for virtualization.
Security VM Type can be set to SVM or GVM:
SVM: secure VM
When Security Type is set to Antivirus, an SVM provides antivirus services for a guest VM (GVM), such as virus scanning, removal, and real-time monitoring. The VM template is provided by the antivirus vendor.
When Security Type is set to DPI, an SVM provides the following for a GVM: network intrusion detection, network vulnerability scanning, and firewall services. The VM template is provided by the third-party security vendor.
GVM: An end user VM that uses the antivirus or DPI function provided by the SVM. If DPI virtualization is used, the GVM performance deteriorates.
1. After FusionCompute is installed, click View Portal Link on the Execute Deployment page to view the FusionCompute address. Click the
FusionCompute address to go to the VRM login page.
After the new FusionCompute environment is installed, if you log in to the environment within 30 minutes and find that the CNA status is normal but the alarm ALM-10.1000027 Heartbeat Communication Between the Host and VRM Interrupted is generated, the alarm will be cleared automatically after 30 minutes. Otherwise, clear the alarm by following the instructions provided in ALM-10.1000027 Heartbeat Communication Between the Host and VRM Interrupted in FusionCompute 8.8.0 Product Documentation.
1. After UltraVR is installed, click View Portal Link on the Execute Deployment page to view the UltraVR address. Click the UltraVR
address to go to the login page of the management page.
2. Log in to the management page to check whether UltraVR is successfully installed. The login user name and password can be obtained from
Datacenter Virtualization Solution 2.1.0 Account List.
1. After eBackup is installed, click View Portal Link on the Execute Deployment page to view the eBackup address. Click the eBackup
address to go to the login page of the management page.
2. Log in to the management page to check whether eBackup is successfully installed. The login user name and password can be obtained from
Datacenter Virtualization Solution 2.1.0 Account List.
1. Host and cluster management
Create a cluster. For details, see Operation and Maintenance > Service Management > Compute Resource Management > Cluster Management > Creating a Cluster in FusionCompute 8.8.0 Product Documentation.
Add hosts to the cluster. For details, see Operation and Maintenance > Service Management > Compute Resource Management > Host Management > Adding Hosts in FusionCompute 8.8.0 Product Documentation.
Set time synchronization on a host. For details, see Operation and Maintenance > Service Management > Compute Resource Management > Host Management > Setting Time Synchronization on a Host in FusionCompute 8.8.0 Product Documentation.
Add storage ports to the host. For details, see Operation and Maintenance > Service Management > Compute Resource Management > System Port Management > Adding a Storage Port in FusionCompute 8.8.0 Product Documentation.
2. Storage management
Add storage resources to the site. For details, see Operation and Maintenance > Service Management > Storage Management > Storage Resource Management > Add Storage Resources to a Site in FusionCompute 8.8.0 Product Documentation.
Associate storage resources with a host. For details, see Operation and Maintenance > Service Management > Storage Management > Storage Resource Management > Associating Storage Resources with a Host in FusionCompute 8.8.0 Product Documentation.
Scan for storage devices. For details, see Operation and Maintenance > Service Management > Storage Management > Storage Resource Management > Scanning Storage Devices in FusionCompute 8.8.0 Product Documentation.
Add datastores. For details, see Operation and Maintenance > Service Management > Storage Management > Data Storage Management > Add Datastores in FusionCompute 8.8.0 Product Documentation.
Create a disk. For details, see Operation and Maintenance > Service Management > Storage Management > Disk Management > Creating a Disk in FusionCompute 8.8.0 Product Documentation.
3. Network management
Create a DVS. For details, see Operation and Maintenance > Service Management > Network Management > DVS Management > Create a DVS in FusionCompute 8.8.0 Product Documentation.
Add an uplink. For details, see Operation and Maintenance > Service Management > Network Management > Upstream Link Group Management > Adding an Uplink in FusionCompute 8.8.0 Product Documentation.
Add a VLAN pool. For details, see Operation and Maintenance > Service Management > Network Management > Distributed Virtual Switch Management > Adding a VLAN Pool in FusionCompute 8.8.0 Product Documentation.
Add a MUX VLAN. For details, see Operation and Maintenance > Service Management > Network Management > Distributed Virtual Switch Management > Adding a MUX VLAN in FusionCompute 8.8.0 Product Documentation.
Create a port group. For details, see Operation and Maintenance > Service Management > Network Management > Port Group Management > Adding a Port Group in FusionCompute 8.8.0 Product Documentation.
3.9 Appendixes
FAQ
Common Operations
Introduction to Tools
VM-related Concepts
3.9.1 FAQ
How Do I Handle the Issue that System Installation Fails Because the Disk List Cannot Be Obtained?
How Do I Handle the Issue that VM Creation Fails Due to Time Difference?
What Do I Do If the Error "kernel version in isopackage.sdf file does not match current" Is Reported During System Installation?
How Can I Handle the Issue that a Local Virtualized Datastore Fails to Be Added Due to a GPT Partition During Tool-based Installation?
How Can I Handle the Issue that the Node Fails to Be Remotely Connected During the Host Configuration for Customized VRM Installation?
How Do I Handle the Issue that the Mozilla Firefox Browser Prompts Connection Timeout During the Login to FusionCompute?
How Do I Handle the Storage Device Detection Failure on a FusionCompute Host During VRM Installation?
How Do I Configure Time Synchronization Between the System and an NTP Server of the w32time Type?
How Do I Configure Time Synchronization Between the System and a Host When an External Linux Clock Source Is Used?
What Should I Do If a Linux VM with More Than 32 CPU Cores Cannot Be Started?
How Do I Handle the Issue that VRM Services Become Abnormal Because the DNS Is Unavailable?
What Can I Do If an Error Message Is Displayed Indicating That the Sales Unit HCore Is Not Supported When I Import Licenses on
FusionCompute?
How Do I Determine the Network Port Name of the First CNA Node?
Troubleshooting
3.9.1.1 How Do I Handle the Issue that System Installation Fails Because the Disk
List Cannot Be Obtained?
Symptom
System installation fails because the disk list cannot be obtained. Figure 1 or Figure 2 shows failure information.
Possible Causes
No installation disk is available in the system. As a result, the installation fails and the preceding information is reported.
Storage media on the server are not initialized. As a result, the installation fails and the preceding information is reported.
The server was used, and its RAID controllers and disks contain residual data. As a result, the installation fails and the preceding information is
reported.
The system may not have a RAID controller card driver. You need to confirm the hardware driver model, download the driver from the official
website, and install it. For details about how to install the driver, see FusionCompute SIA Device Driver Installation Guide.
Troubleshooting Guideline
Before installing the system, initialize the RAID controllers and disks on the server and delete their residual data.
Procedure
1. Check the system architecture.
4. On the menu bar, choose Remote. The Remote Console page is displayed, as shown in Figure 3.
5. Click Java Integrated Remote Console (Private), Java Integrated Remote Console (Shared), HTML5 Integrated Remote Console
(Private), or HTML5 Integrated Remote Console (Shared). The real-time desktop of the server is displayed, as shown in Figure 4 or
Figure 5.
Java Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
Java Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS and perform operations on
the server using the iBMC. The users can view the operations of each other.
HTML5 Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
HTML5 Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS and perform operations
on the server using the iBMC. The users can view the operations of each other.
7. Select Reset.
The Are you sure to perform this operation dialog box is displayed.
8. Click Yes.
The server restarts.
9. When the following information is displayed during the server restart, press Delete quickly.
The default password for logging in to the BIOS is Admin@9000. Change the administrator password immediately after your first login.
For security purposes, change the administrator password periodically.
The system will be locked if incorrect passwords are entered three consecutive times. You need to restart the server to unlock it.
12. On the Advanced screen, select Avago MegaRAID <SAS3508> Configuration Utility and press Enter. The Dashboard View screen is
displayed.
13. Check whether the RAID array has been created for system disks on the server.
If yes, go to 14.
If no, go to 16.
14. On the Dashboard View screen, select Main Menu and press Enter. Then select Configuration Management and press Enter.
15. On the Configuration Management screen, select Clear Configuration and press Enter. On the displayed confirmation screen, select
Confirm and press Enter. Then select Yes and press Enter to format the hard disk.
16. On the Dashboard View screen, select Main Menu and press Enter. Then select Configuration Management and press Enter. Select
Create Virtual Drive and press Enter. The Create Virtual Drive screen is displayed.
17. On the Create Virtual Drive screen, select Select RAID level using the up and down arrow keys and press Enter. Create a RAID array
(RAID 1 is used as an example) using disks. Select RAID1 from the drop-down list box, and press Enter.
18. On the Create Virtual Drive screen, select Default Initialization using the up and down arrow keys and press Enter. Select Fast from the
drop-down list box and press Enter.
19. Select Select Drives From using the up and down arrow keys and press Enter. Select Unconfigured Capacity using the up and down
arrow keys.
20. Select Select Drives using the up and down arrow keys and press Enter. Select the first (Drive C0 & C1:01:02) and the second (Drive C0 &
C1:01:05) disks using the up and down arrow keys to configure RAID 1.
Drive C0 & C1 may vary on different servers. You can select a disk by entering 01:0x after Drive C0 & C1.
Press the up and down arrow keys to select the corresponding disk, and press Enter. [X] after a disk indicates that the disk has been selected.
21. Select Apply Changes using the up and down arrow keys to save the settings. The message "The operation has been performed
successfully." is displayed. Press the down arrow key to choose OK and press Enter to complete the configuration of member disks.
22. Select Save Configuration and press Enter. The operation confirmation screen is displayed. Select Confirm and press Enter. Select Yes
and press Enter. The message "The operation has been performed successfully." is displayed. Select OK using the down arrow key and
press Enter.
23. Press ESC to return to the Main Menu screen. Select Virtual Drive Management and press Enter to view the RAID information.
24. Press F10 to save all the configurations and exit the BIOS.
26. Before installing a system, access the disk RAID controller page to view disk information. Figure 8 shows disk information. The method for
accessing the RAID controller page varies depending on the RAID controller card in use. For example, if RAID controller card 2308 is
used, press Ctrl+C to access the disk RAID controller page.
27. Check whether the RAID array has been created for system disks on the server.
If yes, select Manage Volume in Figure 8 to access the page shown in Figure 9 and then click Delete Volume to delete the residual
RAID disk information from the system.
If no, go to 28.
29. After configuration is complete, select Save changes then exit this menu on the screen to exit, as shown in Figure 11.
3.9.1.2 How Do I Handle the Issue that VM Creation Fails Due to Time
Difference?
Symptom
The VM fails to be created when VRM is installed using an installation tool. In some scenarios, a message indicating that this issue may be caused by a time difference is displayed. If the message is not displayed, you may find in the log that the time difference exceeded 5 minutes before the VM creation failure.
Procedure
1. Click Install VRM.
Check whether the VM is successfully created.
If no, the local PC may be a VM and is not restarted for a long time. In this case, go to 2.
3. Select Save.
5. Select Continue.
If you close the tool without saving data, or if you uninstall and then reinstall VRM when you open the tool again instead of continuing the VRM installation, the residual host data is not cleared. In this case, you need to install the host and VRM again. Otherwise, the system may prompt you that the host has been added to another site when you configure the host.
3.9.1.3 What Do I Do If the Error "kernel version in isopackage.sdf file does not
match current" Is Reported During System Installation?
Symptom
During the system installation, an error is reported when the installation information is compared with the isopackage.sdf file. As a result, the
installation fails. Figure 1 shows the reported information.
Possible Causes
During the installation, both the local and remote ISO files are mounted to the server.
Procedure
1. Confirm the ISO file to be installed.
4. Reinstall the host using the ISO file of the remote CD/DVD-ROM drive.
Install the host. For details, see "Installing Hosts Using ISO Images (x86)" or "Installing Hosts Using ISO Images (Arm)" in FusionCompute
8.8.0 Product Documentation.
No further action is required.
5. Reinstall the host using the ISO file of the local CD/DVD-ROM drive.
Problem 2: After the Hygon server is added to the FusionCompute cluster and the server hardware configuration is queried, an extra USB device
American Megatrends Inc.. Virtual Cdrom Device is displayed in the USB device list.
Solution: The BMC virtual media (keyboard, mouse, and CD/DVD) of the Hygon server uses the virtual USB protocol. As a result, the BMC virtual
media is displayed as USB devices in the OS. Ignore this problem.
Problem 3: Setting PXE polling on the Hygon server fails. If the first network port in the boot device is not the planned PXE network port, the PXE
installation fails.
Solution: Set the planned PXE network port to the first network port in the PXE boot device in the BIOS.
Problem 4: After a USB device is inserted into the Hygon server after the system is installed, the host startup sequence changes.
Solution: After the USB device is inserted, access the BIOS to change the boot sequence and set the disk as the first boot device.
Problem 5: After the Hygon server is restarted, the host boot sequence changes, and the OS cannot be accessed after the restart.
Solution: Restart the server, enter the BIOS, and set the disk as the first boot device.
3.9.1.5 How Can I Handle the Issue that a Local Virtualized Datastore Fails to Be
Added Due to a GPT Partition During Tool-based Installation?
Symptom
The message "add storage failed" is displayed when you add a datastore during the VRM installation using the FusionCompute installation tool.
Error message "Storage device exists GPT partition, please clear and try again." is displayed in the log information.
Procedure
The following operations are high-risk operations because they will delete and format the specified storage device. Before performing the following operations, ensure
that the local disk of the GPT partition to be deleted is not used.
1. Take a note of the value of the storageUnitUrn field displayed in the log information about the storage device that fails to be added.
For example: urn:sites:54830A53:storageunits:F1E6FF755C8C4AB49A8BD2791F1A4E3E
3. Run the following command and enter the password of user root to switch to user root:
su - root
4. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
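For example, assuming the goal is to disable the automatic logout for this maintenance session (a hypothetical choice; setting the shell variable TMOUT to 0 disables the inactivity timeout), the command would be:
TMOUT=0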
The value of the name field is the name of the storage device.
In the following example command output, the storage device name is HUS726T4TALA600_V6KV5K2S.
If the storage device name is on the right of master in 6, the host IP address is the value of master_ip. If the storage device name is on the right of slave in 6,
the host IP address is the value of slave_ip.
9. Run the following command and enter the password of user root to switch to user root:
su - root
10. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
11. Run the following command to view the storage device path:
redis-cli -p 6543 -a Redis password hget StorageUnit:Storage device name op_path
For details about the default password of Redis, see "Account Information Overview" in FusionCompute 8.8.0 O&M Guide. The storage device name is the
name obtained in 6.
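For example, assuming the storage device name obtained in 6 is HUS726T4TALA600_V6KV5K2S and <Redis password> is a placeholder for the actual Redis password (do not type the angle brackets), the command would be:
redis-cli -p 6543 -a <Redis password> hget StorageUnit:HUS726T4TALA600_V6KV5K2S op_path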
12. Run the following command to delete the signature of the file system on the local disk:
This operation will clear the original data on the disk, which is a high-risk operation. Before performing this operation, ensure that the disk is not in use.
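The exact command is provided in the product documentation for your version. As a generic illustration only (not necessarily the documented command), a file system signature on a local disk can be cleared with the standard Linux wipefs tool, where /dev/sdX is a hypothetical placeholder for the storage device path obtained in the previous step:
wipefs -a /dev/sdX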
If yes, go to 14.
3.9.1.6 How Can I Handle the Issue that the Node Fails to Be Remotely
Connected During the Host Configuration for Customized VRM Installation?
Symptom
When a host is installed using an ISO image, gandalf is not initialized. As a result, the system displays a message indicating that the remote
connection to the node fails during the host configuration for customized VRM installation.
Solution
Check whether the IP address of the host where the VRM is to be installed is correct.
Check whether the password of user root for logging in to the host where the VRM is to be installed is correct.
Check whether the following command has been executed on the host to set the password of user gandalf and whether the password is correct:
cnaInit
If you enter an incorrect password of user gandalf for logging in to the host, the user will be locked for 5 minutes. To manually unlock the
user, log in to the locked CNA node as user root through remote control (KVM) and run the faillock --reset command.
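For example, assuming only the gandalf account needs to be unlocked (a hypothetical narrower scope; the standard faillock tool also supports a per-user reset), you can run:
faillock --user gandalf --reset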
3.9.1.7 How Do I Handle the Issue that the Mozilla Firefox Browser Prompts
Connection Timeout During the Login to FusionCompute?
Symptom
FusionCompute is reinstalled multiple times and the Mozilla Firefox browser is used to log in to the management page. As a result, too many
certificates are loaded to the Mozilla Firefox browser. When FusionCompute is installed again and the Mozilla Firefox browser is used to log in to
the management page, the certificate cannot be loaded. As a result, the login fails and the browser prompts connection timeout.
Possible Causes
FusionCompute is reinstalled repeatedly.
Procedure
1. Click in the upper right corner of the browser and choose Options.
2. In the Network Proxy area of the General page, click Settings (E).
The Connection Settings dialog box is displayed.
If yes, go to 5.
If no, go to 4.
6. In the Security area, click View Certificates under the Certificate module.
The Certificate Manager dialog box is displayed.
7. On the Servers and Authorities tab pages, delete certificates that conflict with those used by the current management interface.
The certificates to be deleted are those that use the same IP address as the VRM node. For example, if the IP address of the VRM node is
192.168.62.27:
On the Servers tab page, delete the certificates of servers whose IP addresses are 192.168.62.27:XXX.
Scenarios
In the following scenarios, the host may fail to scan storage resources on the corresponding storage devices:
During the installation of hosts using the x86 architecture, the size of the swap partition is 30 GB by default. If you select auto to automatically configure the swap partition size, the swap partition size is proportional to the memory size. When the host has a large memory size, the swap partition may occupy so much storage space that the system disk has no available space, and no disks other than the system disk are available.
The local disk of the host has residual partition information. In this case, you need to manually clear the residual information on the storage
devices.
Prerequisites
Conditions
You have obtained the IP address for logging in to the host.
Data
Data preparation is not required for this operation.
Procedure
1. Use PuTTY to log in to the host.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
For a host using the x86 architecture, determine the number of disks based on the value in the NAME column in the command output.
If the host has only one disk, install the host again, manually specify the swap partition size, and install VRM again.
The swap partition size is required to be greater than or equal to 30 GB. If the disk space is insufficient for host installation and VRM installation,
replace it with another disk.
If the host has other disks except the system disk and VRM can be created on other disks, go to 5.
Do not clear the partitions on the system disk when clearing the disk partitions on the host. Otherwise, the host becomes unavailable unless you reinstall an OS on the
host.
/dev/sda is the default system disk on a host. However, the system may select another disk as the system disk, or a user may specify a system disk during the host
installation. Therefore, distinguish between the system disk and user disks when deleting host disk partitions.
5. Run the following command to query the name of the existing disk on the host:
fdisk -l
6. In the command output, locate the Device column that contains the partitioned disks, and make a note of the disk names.
Information similar to the following is displayed:
...
Partition /dev/sdb1 of the disk is displayed in the Device column, and you need to make a note of the disk name /dev/sdb.
If the disk has only one partition, the partition will be automatically deleted, and information similar to the following will be displayed:
Selected partition 1
Run the d command to automatically delete the unique partition, and then go to 10.
If yes, go to 12.
If no, go to 11.
12. Enter w to save the configuration and exit the fdisk mode.
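As a minimal illustration of 5 to 12, assuming the disk with residual partitions is /dev/sdb (a hypothetical, non-system disk noted in 6), the sequence with the standard fdisk tool is:
fdisk /dev/sdb
Then enter d to delete each residual partition and w to save the changes and exit the fdisk mode.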
Scenarios
An IP SAN initiator is required for IP SAN storage devices to map hosts and storage devices using the world wide name (WWN) generated after the
storage devices are associated with hosts.
OceanStor 5500 V3 is used as an example in this section. For more details, see the documentation delivered with the storage device.
Prerequisites
Conditions
You have logged in to the storage management system, and the storage devices have been detected.
You have configured the logical host (group) and LUNs on the storage management system of the SAN storage device, including creating a
logical host (group), dividing LUNs, and configuring the mapping between LUNs and the logical host (group).
For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download the document for
the desired version.
For carrier users: Visit https://support.huawei.com , search for the document by name, and download the document for the desired
version.
Data
Data preparation is not required for this operation.
Procedure
1. Check whether the storage resource has been associated with the host.
If yes, go to 3.
If no, go to 2.
2. Create an initiator.
For details, see "Creating an Initiator" in OceanStor 5500 V3 Product Documentation.
Scenarios
An FC SAN initiator is required for FC SAN devices to map hosts and storage devices using the world wide name (WWN) generated after the
storage devices are associated with hosts. This section describes how to obtain the WWN of the host and configure the FC SAN initiator.
OceanStor 5500 V3 is used as an example in this section. For more details, see the documentation delivered with the storage device.
Prerequisites
Conditions
You have configured the logical host (group) and LUNs on the storage management system of the SAN storage device, including creating a
logical host (group), dividing LUNs, and configuring the mapping between LUNs and the logical host (group).
For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download the document for
the desired version.
For carrier users: Visit https://support.huawei.com , search for the document by name, and download the document for the desired
version.
Data
Data preparation is not required for this operation.
Procedure
5. Click Scan.
6. Click Recent Tasks in the lower left corner. In the expanded task list, verify that the scan operation is successful.
Scenarios
If the clock source is an NTP server of the w32time type, configure one host (or a VRM node, when the VRM node is deployed on a physical server) to synchronize time with the clock source, and then set this host or VRM node as the system clock source. This type of clock source is called the internal clock source. Then configure time synchronization between the system and the internal clock source.
Prerequisites
Conditions
You have obtained the IP address or domain name of the NTP server of the w32time type of the Windows OS.
If the NTP server domain name is to be used, ensure that a domain name server (DNS) is available. For details, see "Configuring the DNS
Server" in FusionCompute 8.8.0 O&M Guide.
You have obtained the password of user root and the management IP address of the host or VRM node that is to be configured as the internal
clock source.
Procedure
Configure time synchronization between a host or VRM node and a w32time-type NTP server.
1. Use PuTTY to log in to the host or VRM node to be set as the internal clock source.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
4. Run the following command to synchronize time between the host or VRM node and the NTP server:
service ntpd stop;/usr/sbin/ntpdate NTPServer && /sbin/hwclock -w -u > /dev/null 2>&1; service ntpd start
You can set NTPServer to the NTP server IP address or domain name. If you enter a domain name for the configuration, ensure that a DNS
is available.
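For example, assuming the w32time NTP server address is 192.168.0.100 (a hypothetical value substituted for NTPServer), the command would be:
service ntpd stop;/usr/sbin/ntpdate 192.168.0.100 && /sbin/hwclock -w -u > /dev/null 2>&1; service ntpd start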
If the command output contains the following information, run this command again:
5. Run the following commands to set the time synchronization interval to 20 minutes:
sed -i -e '/ntpdate/d' /etc/crontab
echo "*/20 * * * * root service ntpd stop;/usr/sbin/ntpdate NTPServer > /dev/null 2>&1 && /sbin/hwclock -w -u > /dev/null
2>&1;service ntpd start" >>/etc/crontab
You can set NTPServer to the NTP server IP address or domain name. If you enter a domain name for the configuration, ensure that a DNS
is available.
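For example, with the same hypothetical NTP server address 192.168.0.100 substituted for NTPServer, the commands would be:
sed -i -e '/ntpdate/d' /etc/crontab
echo "*/20 * * * * root service ntpd stop;/usr/sbin/ntpdate 192.168.0.100 > /dev/null 2>&1 && /sbin/hwclock -w -u > /dev/null 2>&1;service ntpd start" >>/etc/crontab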
6. Run the following command to restart the service for the configuration to take effect:
service crond restart
The configuration is successful if information similar to the following is displayed:
7. Run the following command to configure the host or VRM node as the internal clock source:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip Management IP address of the host or VRM node that is to be configured as the
internal clock source -cycle 6 -timezone Local time zone -force true
In the preceding command, the value of Local time zone is in Continent/Region format and must be the time zone used by the external clock
source.
For example, Local time zone is set to Asia/Beijing.
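For reference, a hedged example of the complete command, assuming the management IP address of the internal clock source is 192.168.40.5 (an illustrative value):
# 192.168.40.5 and Asia/Beijing are example values; use your node's management IP address and local time zone.
perl /opt/galax/gms/common/config/configNtp.pl -ntpip 192.168.40.5 -cycle 6 -timezone Asia/Beijing -force true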
If the command output contains the following information, the configuration is successful:
If a host is set as an internal clock source, the configuration causes the restart of the service processes on the host. If more than 40 VMs run on the host, the
service process restart will take a long time, triggering VM fault recovery tasks. However, the VMs will not be migrated to another host. After the service
processes restart, the fault recovery tasks will be automatically canceled.
8. Run the following command to check whether the synchronization status is normal:
ntpq -p
Information similar to the following is displayed:
==============================================================================
If an asterisk (*) is displayed in the remote column 6 to 10 minutes after you run the command, the synchronization status is normal.
10. Choose System Management > System Configuration > Time Management.
The Time Management page is displayed.
NTP Server: Set it to the management IP address of the host or VRM node that has been configured as the internal clock source.
If a VRM node is set as the internal clock source and the VRM nodes are deployed in active/standby mode, NTP server must be set to the management IP
address of the active VRM node instead of the floating IP address of the VRM nodes.
The configuration takes effect only after the FusionCompute service processes restart, which causes a temporary service interruption and may make the antivirus service abnormal.
Proceed with the subsequent operation only after the service processes restart.
Scenarios
If an external Linux clock source is used, manually configure a host to synchronize time with the external clock source, and set this host or VRM node as the system clock source, which is also called the internal clock source. Then configure time synchronization between the system and the internal clock source.
If a host is set as an internal clock source, service processes on the host will restart during the configuration. If more than 40 VMs run on the host, it may take a long
time to restart the processes, triggering fault recovery tasks for these VMs. However, these VMs will not be migrated to another host. After the service processes on
the host have restarted, the fault recovery tasks will be automatically canceled.
Prerequisites
Conditions
You have obtained the IP address or domain name of the external clock source.
If the NTP server domain name is to be used, ensure that a DNS is available. For details, see "Configuring the DNS Server" in FusionCompute
8.8.0 O&M Guide.
You have obtained the password of user root and the management IP address of the host or VRM node that is to be configured as the internal
clock source.
Procedure
Configure time synchronization between the system and the host or VRM node functioning as the internal clock source.
1. Use PuTTY to log in to the host or VRM node to be set as the internal clock source.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
4. Manually set the time on the host to be consistent with that on the external clock source. For details, see How Do I Manually Change the
System Time on a Node?
NTP Server: Set it to the management IP address of the host that has been configured as the internal clock source.
8. Click Save.
A dialog box is displayed.
9. Click OK.
The time zone and NTP clock source are configured.
The configuration takes effect only after the FusionCompute service processes restart, which causes a temporary service interruption and may make the antivirus service abnormal. Proceed with the subsequent operation only after the service processes restart.
Configure time synchronization between the host or VRM node and the external Linux clock source.
10. Configure time synchronization between the host and the external clock source. For details, see "Setting Time Synchronization on a Host" in
FusionCompute 8.8.0 Product Documentation.
11. Switch back to the VRM node that is configured as the internal clock source and run the following command to synchronize time between
the VRM node and the external clock source:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip External clock source IP address or domain name -cycle 6 -force true
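For reference, a hedged example assuming the external Linux clock source is reachable at 192.168.50.10 (an illustrative value; a domain name can be used instead if a DNS is available):
# 192.168.50.10 is an assumed example address for the external clock source.
perl /opt/galax/gms/common/config/configNtp.pl -ntpip 192.168.50.10 -cycle 6 -force true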
Scenarios
If some parameters are incorrectly configured during the host installation process, the host cannot be added to FusionCompute. In this case, you can run the hostconfig command to reconfigure the host parameters.
The following parameters can be reconfigured:
Host name
VLAN
Prerequisites
The OS has been installed on the host.
You have obtained the IP address, username, and password for logging in to the BMC system of the host.
You have obtained the password of user root for logging in to the host.
The host is not added to the site or cluster that has FusionCompute installed.
Procedure
Log in to the host.
1. Open the browser on the local PC, enter the following IP address in the address bar, and press Enter:
https://Host BMC IP address
If you cannot log in to the BMC system of a single blade server (in the x86 architecture), you are advised to log in to the SMM of the blade server and open
the remote control window of the server.
5. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
6. Run the following command to enter Main Installation Window, as shown in Figure 1:
hostconfig
7. Choose Network > eth0 to enter the IP Configuration for eth0 screen, as shown in Figure 2.
Configure only one management NIC for a host. If you configure IP addresses for other NICs, network communication may fail.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical keypad on the right.
10. Enter the gateway address of the host management plane in Default Gateway, as shown in Figure 3.
After the network configuration is complete, you can set the gateway address of the management plane and IP addresses of other planes in Test Network to
check whether the newly configured IP addresses are available.
12. Select Hostname. The Hostname Configuration screen is displayed, as shown in Figure 4.
13. Delete existing information, enter the new host name, and select OK.
15. Select VLAN. The VLAN Configuration screen is displayed, as shown in Figure 5.
16. Configure the VLAN ID, IP address, and subnet mask and select OK to complete VLAN configuration.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical keypad on the right.
17. Select VLAN. The VLAN Configuration screen is displayed, as shown in Figure 6.
After deleting the VLAN, switch to the Network screen to reconfigure network information.
19. After the VLAN is configured, select Network and check whether the gateway is successfully configured based on the gateway information
in the Network Information list.
Scenarios
The FusionCompute web client displays Huawei-related information, including the product name, technical support website, product documentation links, online help links, copyright information, system language, system logo (displayed in different areas on the web client), and background images
on the login page, system page, and About page, as shown in Figure 1. This section guides you to change or hide such information.
Figure 1 callouts: product name, copyright information, system name, and background image
Prerequisites
Conditions
You have prepared the following images:
A system logo in 16 x 16 pixels displayed in the browser address box. The image must be named favicon.ico and saved in ICO format.
A system logo in 48 x 48 pixels displayed on the login page and the About page. The image must be named huaweilogo.png and saved in PNG format.
A background image in 550 x 550 pixels. The image must be named login_enbg.png and saved in PNG format.
A system logo in 33 x 33 pixels displayed in the upper left corner of Online Help. The image must be named huaweilogo.gif and saved in GIF format.
PuTTY is available.
WinSCP is available.
Ensure that SFTP is enabled on CNA or VRM nodes. For details, see "Enabling SFTP on CNA Nodes" in FusionCompute 8.8.0 O&M Guide.
Procedure
1. Use WinSCP to log in to the active VRM node.
Ensure that the management plane floating IP address and username gandalf are used for login.
3. Copy the prepared images to the /opt/galax/vrmportal/tomcat/script/portalSh/syslogo/third directory to replace the original images.
5. Run the following command and enter the password of user root to switch to user root:
su - root
6. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
7. Run the following command to open and edit the configuration file:
vi /opt/galax/vrmportal/tomcat/script/portalSh/syslogo/third/systitle.conf
Figure 2 shows information in the configuration file.
The entered information can contain only letters, digits, spaces, and special characters _-,.©:/
Set title to the new content. The value is a string of 1 to 18 characters (one uppercase letter is considered as two characters).
Set link to the new content. The value is a string of 1 to 100 characters (one uppercase letter is considered as two characters).
Set loginProductSupportText to false (to display information) or true (to hide information).
Set headProductSupportText to false (to display information) or true (to hide information).
Set copyrightEnUs to the new content displayed when the system language is English. The value is a string of 1 to 100
characters (one uppercase letter is considered as two characters).
Set portalsysNameEnUs to the new content displayed when the system language is English. The value is a string of 1 to 18
characters (one uppercase letter is considered as two characters).
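The following is only an illustrative sketch of the parameters described above, assuming a key=value layout and that # comment lines are accepted; the exact syntax is defined by the systitle.conf file delivered with the product, so keep its original structure and only change the values:
# Illustrative values only; keep the delivered file's syntax and the allowed character set.
title=Example portal
link=https://support.example.com
loginProductSupportText=true
headProductSupportText=true
copyrightEnUs=Copyright 2025 Example Co. Ltd. All rights reserved.
portalsysNameEnUs=Example system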
8. Press Esc and enter :wq to save the configuration and exit the vi editor.
a. Right-click Computer and choose Properties > Advanced system settings > Environment Variables.
a. Use WinSCP to copy file systitle.conf in the /opt/galax/vrmportal/tomcat/script/portalSh/syslogo/third directory to the local
PC.
b. Open the CLI on the local PC and switch to the directory in which file systitle.conf is saved.
12. Run the following command to make the configuration take effect:
sh /opt/galax/root/vrmportal/tomcat/script/portalSh/syslogo/modifylogo.sh third
The configuration is successful if information similar to the following is displayed:
13. Use the browser to access the FusionCompute web client and check whether the new information is displayed, such as the system logo, product name, copyright information, and support website.
14. Disable the SFTP service. For details, see "Disabling SFTP on CNA Nodes" in FusionCompute 8.8.0 O&M Guide.
Additional Information
Related Tasks
Restore the default Huawei logo.
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
a. Use WinSCP to copy file systitle.conf in the /opt/galax/vrmportal/tomcat/script/portalSh/syslogo/huawei directory to the local
PC.
b. Open the CLI on the local PC and switch to the directory in which file systitle.conf is saved.
7. Use the browser to access the FusionCompute web client and check whether the default Huawei interface is displayed.
Procedure
1. Download the DeployTool deployment tool package by referring to Table 1.
2. Decompress eDME_24.0.0_DeployTool.zip on the local PC. The fio software is stored in the \DeployTool\pkgs directory.
3. Use PuTTY to log in to the node to be tested as user root using the static IP address of the node. Run the mkdir -p /opt/disk_test command to create a test directory, and upload the fio file to the /opt/disk_test directory. The following uses the fio_Euler_X86 file for the x86 EulerOS as an example.
4. Run the following command to test the I/O performance of the disk:
chmod 700 /opt/disk_test/fio_Euler_X86;/opt/disk_test/fio_Euler_X86 --name=Stress --rw=randwrite --direct=1 --ioengine=libaio --numjobs=1 --filename=/opt/disk_test/fio_tmp_file --bs=8k --iodepth=1 --loops=100 --runtime 20 --size=90% of the remaining disk spaceGB;rm -f /opt/disk_test/fio_tmp_file
The IOPS value of the disk is displayed in the command output.
In the preceding commands, x86 EulerOS is used as an example. If other OSs are used, replace fio_Euler_X86 in the commands with other names. For
example, if EulerOS in the Arm architecture is used, replace fio_Euler_X86 in the commands with fio_Euler_ARM.
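For example, assuming that 90% of the remaining disk space is about 50 GB (an illustrative value; calculate it from your own free space), the command in 4 would look like this:
# 50GB is an assumed example for "90% of the remaining disk space".
chmod 700 /opt/disk_test/fio_Euler_X86;/opt/disk_test/fio_Euler_X86 --name=Stress --rw=randwrite --direct=1 --ioengine=libaio --numjobs=1 --filename=/opt/disk_test/fio_tmp_file --bs=8k --iodepth=1 --loops=100 --runtime 20 --size=50GB;rm -f /opt/disk_test/fio_tmp_file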
3.9.1.16 What Should I Do If a Linux VM with More Than 32 CPU Cores Cannot
Be Started?
Scenarios
If more than 32 CPU cores are required for VMs running certain OSs, you need to upgrade the Linux OS kernel. For details about supported OSs,
see FusionCompute SIA Huawei Guest OS Compatibility Guide (KVM) (x86 architecture) or FusionCompute SIA Huawei Guest OS Compatibility
Guide (Arm) (Arm architecture).
For details about how to query the FusionCompute SIA version, see "How Do I Query the FusionCompute SIA Version?" in FusionCompute 8.8.0 O&M Guide.
To obtain FusionCompute SIA Huawei Guest OS Compatibility Guide (KVM) or FusionCompute SIA Huawei Guest OS Compatibility Guide (Arm), perform the
following steps:
For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download the document for the desired version.
For carrier users: Visit https://support.huawei.com , search for the document by name, and download the document for the desired version.
Prerequisites
Conditions
You have obtained the following RPM packages (available at https://vault.centos.org/6.6/updates/x86_64/Packages/) for upgrading the system
kernel:
kernel-2.6.32-504.12.2.el6.x86_64.rpm
kernel-firmware-2.6.32-504.12.2.el6.noarch.rpm
Procedure
1. Use WinSCP to copy the RPM packages to any directory on the VM.
For example, copy the packages to the /home directory.
3. Run the following command to switch to the directory where the RPM packages are saved:
cd /home
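A minimal sketch of installing the two kernel packages listed in the prerequisites, assuming rpm with default options (the exact installation step in your environment may differ):
# Install the firmware package first, then the kernel package; -ivh installs alongside the existing kernel.
rpm -ivh kernel-firmware-2.6.32-504.12.2.el6.noarch.rpm
rpm -ivh kernel-2.6.32-504.12.2.el6.x86_64.rpm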
5. After the packages are installed, run the following command to restart the VM and make the new kernel take effect:
reboot
Scenarios
This section guides you to query the version of FusionCompute SIA installed in the system.
Procedure
1. Use PuTTY to log in to the active VRM node.
Log in to the node using the management IP address as user gandalf.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode? .
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
Symptom
Possible Causes
The open source qemu-guest-agent service is installed on the VM, which occupies the vport. However, vm-agent also needs to use the vport. As a
result, Tools cannot be started normally.
Procedure
3. (Optional) On the VM page, click Advanced Search on the top of the VM list, enter or select the search criteria, and then click Search.
The query result is displayed.
Search criteria can be IP Address, VM ID, VM Name, MAC Address, Description, UUID, Tools Status, Owning Cluster/Host, Type,
and Status.
4. Locate the row that contains the target VM and click Log In Using VNC.
The VNC login window is displayed, and the VM can be viewed in the VNC window.
6. On the VM desktop displayed in the VNC window, enter the command line interface (CLI) mode (For instructions about how to enter the
CLI mode, see the OS operation guide).
The CLI window is displayed.
8. Run the following command to check whether the qemu-guest-agent service exists:
ps -eaf | grep qemu-ga
If the command output contains qemu-guest-agent, the qemu-guest-agent service exists in the system.
If yes, go to 9.
If no, go to 13.
9. Run the following command to delete the open source qemu-guest-agent service.
The following uses CentOS as an example. For details about the commands for other OSs, see the corresponding guide.
rpm -e qemu-guest-agent
12. Run the following command to check whether Tools is started normally:
service vm-agent status
If the command output shows that the service is in the running state, Tools is started properly.
If no, go to 13.
4. (Optional) Run the following command to query the capacity of the opt_vol partition and the name of the new disk:
The capacity of the opt_vol partition on the data disk is critical to the resource management scale. The following uses the opt_vol partition as an example of capacity expansion.
lsblk
5. Run the following command to create a physical volume group with the same name on the new disk:
6. Run the following command to add the capacity of the new physical volume group to the oss_vg volume group:
The unit of the expanded capacity is GB, for example, 500 GB.
The expanded capacity must be less than the capacity of the disk to be added. For example, if the capacity of the new disk is 100 GB, the expanded
capacity can be 99.9 GB at most.
8. Run the following command to check whether the capacity expansion is successful:
lsblk
4. Run the following command to check the disk information of the node:
fdisk -l
5. Run the following command to create a partition: /dev/vdb is used as an example. Replace it with the actual name.
fdisk /dev/vdb
Enter n, press Enter, and then enter w as prompted.
Changes will remain in memory only, until you decide to write them.
Partition type
Syncing disks.
6. Run the following command to view the new partition: The /dev/vdb2 partition is used as an example in the following commands.
fdisk /dev/vdb
[root@eDME 01 sopuser]# fdisk /dev/vdb
Changes will remain in memory only, until you decide to write them.
9. Run the following command to add the new physical volume to the oss_vg volume group:
vgextend oss_vg /dev/vdb2
Size of logical volume oss_vg/opt_vol changed from 550.02 GiB (140806 extents) to 579.02 GiB (148230 extents).
The unit of the expanded capacity is GB, for example, 500 GB.
The expanded capacity must be less than the expanded node disk capacity. For example, if the expanded node disk capacity is 100 GB, the maximum
expanded capacity can be 99.9 GB.
11. Run the following command to extend the file system to the new partition size:
resize2fs /dev/mapper/oss_vg-opt_vol
12. Run the following command to check whether the capacity expansion is successful:
lsblk
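For reference, a condensed sketch of the whole LVM expansion sequence using standard LVM commands, assuming the new partition is /dev/vdb2 and 29 GB is added to opt_vol (illustrative values; the commands actually delivered with the product may differ):
pvcreate /dev/vdb2                              # initialize the new partition as a physical volume (assumed device name)
vgextend oss_vg /dev/vdb2                       # add it to the oss_vg volume group
lvextend -L +29G /dev/mapper/oss_vg-opt_vol     # grow the opt_vol logical volume by 29 GB (example size)
resize2fs /dev/mapper/oss_vg-opt_vol            # grow the ext file system to fill the logical volume
lsblk                                           # confirm the new capacity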
Scenarios
If no external clock source is deployed, configure the host accommodating the VRM VM as the NTP clock source. In this case, the system time on
the target host or physical server must be accurate.
Prerequisites
You have obtained the passwords of users gandalf and root of the node to be configured as the NTP clock source.
Procedure
Log in to the operating system of the node.
1. Use PuTTY to log in to the node to be set as the NTP clock source.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
4. Check whether any external NTP clock source is configured for the node.
If yes, go to 5.
If no, go to 6.
5. Run the following command to set the node as its NTP clock source:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip 127.0.0.1 -cycle 6 -timezone Local time zone -force true
For example, if the local time zone is Asia/Beijing and the node is a physical server that has VRM installed, run the following command:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip 127.0.0.1 -cycle 6 -timezone Asia/Beijing -force true
6. Run the date command to check whether the current system time is accurate.
If yes, go to 11.
If no, go to 7.
7. Run the required command to stop a corresponding process based on the node type.
8. Run the following command to rectify the system time of the node:
date -s Current time
The current time must be set in HH:MM:SS format.
For example, if the current time is 16:20:15, run the following command:
date -s 16:20:15
9. Run the following command to synchronize the new time to the basic input/output system (BIOS) clock:
/sbin/hwclock -w -u
10. Run the required command to start a corresponding process based on the node type.
==============================================================================
If * is displayed on the left of LOCAL, the time service is running properly on the node. The node can be used as an NTP clock source.
If * is not displayed, run the ntpq -p command again five to ten minutes later to check the time service running status.
3.9.1.21 How Do I Handle the Issue that VRM Services Become Abnormal
Because the DNS Is Unavailable?
Symptom
When a DNS is invalid or faulty, the system becomes faulty after the following operations are performed. As a result, users cannot log in to FusionCompute to change the DNS configuration.
Possible Causes
An invalid DNS is configured.
Procedure
1. Use PuTTY to log in to one VRM node.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key pair to authenticate
the login, see "How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?" in FusionCompute 8.8.0 O&M
Guide.
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
6. Repeat 1 to 5 to clear the DNS configurations for the other VRM node.
7. Wait for 10 minutes and then check whether you can log in to FusionCompute successfully.
If no, go to 8.
Symptom
After FusionCompute is upgraded from a version earlier than 850 to 850 or later, or after FusionCompute 850 or later is newly installed, an error message indicating that licenses with the sales unit HCore are not supported is displayed when such licenses are imported.
Possible Causes
The imported licenses contain licenses with the sales unit HCore.
Fault Diagnosis
None
Procedure
1. Log in to the iAuth platform using a W3 account and password.
3. On the Apply By Application page, select ESDP-Electronic Software Delivery Platform in Enter an application, enter GTS in Enter
Privilege, and click Search.
7. In the navigation pane, choose License Commissioning and Maintenance > License Split.
8. Click Add Node, enter the ESN, and click Search to search for the license information. Select the license information and click OK.
10. After the splitting, set Product Name to FusionCompute, set Version to 8, enter the ESN, and click Preview License.
11. Confirm that the license splitting result meets the expectation (16 HCore : 1 CPU, rounded down to the nearest integer) and click Submit.
12. Confirm the information in the displayed dialog box and click OK. The dialog box asks whether to continue license splitting because this operation has the following impact: the annual fee NE is processed as a common NE, the annual fee time and annual fee code remain unchanged, and only common BOMs are changed.
13. After the AMS manager approves the modification, the license splitting is complete.
14. Refresh the license on ESDP to obtain the split license file.
Related Information
None
3.9.1.23 How Do I Determine the Network Port Name of the First CNA Node?
Method 1: View the onboard network ports where network cables are inserted. The ports are numbered from 0 from left to right. If a network port is numbered X, the corresponding deployment network port is ethX.
Method 2:
1. Retain only the network connection of the network port corresponding to the first CNA node.
a. If the operation is performed on site, you can remove unnecessary network cables.
b. If the operation is performed remotely, you can disable unnecessary network ports on the switch.
a. Log in to the iBMC WebUI of the first node and go to the remote virtual console.
b. Click the CD/DVD icon on the top, select the image file, and click Connect.
c. After the connection is successful, click the Boot Options icon on the navigation bar and change the boot mode to
CD/DVD.
d. Click Power Control, select Forced Restart, and click OK. Wait for the server to restart.
e. After the server is started, select Installation on the installation option page and press Enter.
f. When the main installation window is displayed, choose Network > IPv4. The IPv4 network configuration page is
displayed.
g. On the IPv4 network configuration page, you can view all NIC addresses. If a network port name is marked with an
asterisk (*), the network is connected. ethX corresponding to the network port marked with an asterisk (*) is the
network port name of the first CNA node.
Method 3: Log in to the target host over SSH and run the ethtool -p ethX command. The indicator of the corresponding network port turns on, so you can determine the eth number of the network port of the first CNA node by trying the ports one by one (a sketch follows this list).
Method 4: For some servers, you can view the MAC address of each network port on iBMC and determine the network port name of the first
CNA node based on the MAC address of this network port.
1. On iBMC, check the MAC address of the network port of the first CNA node.
2. Perform step 2 in method 2. ethX corresponding to the MAC address of the network port of the first CNA node is the name of the
network port on the first CNA node.
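A minimal sketch for Method 3, assuming the onboard ports are named eth0, eth1, and so on:
# Blink the port indicator of eth0 for 10 seconds; repeat with eth1, eth2, ... until the cabled port lights up.
ethtool -p eth0 10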
3.9.1.24 Troubleshooting
Problem 1: Changing the Non-default Password of gandalf for Logging In to Host 02 to the
Default One
Possible cause: The default password of the gandalf user for logging in to host 02 is changed.
Solution: Change the password of the gandalf user for logging in to host 02 to the default password.
Possible cause 3: The IP address of a host conflicts with the IP address of another device on the network.
Solution:
Reinstall the target host and assign an IP address that does not conflict with that of other devices on the network to the host.
Possible cause 4: When the installation tool PXE is used to install hosts in a batch, the host installation progress varies. The IP addresses of the
installed hosts are temporarily occupied by those which are being installed.
Solution: Ensure that the address segment of the DHCP pool is different from the IP addresses of the planned host nodes to avoid IP address
conflicts. For details, see "Data Preparation" in FusionCompute 8.8.0 Product Documentation. You are advised to install a maximum of 10 hosts at a
time.
The root account is locked because incorrect passwords are entered for multiple consecutive times.
Solution:
If the password of the root user is incorrect, enter the correct password.
If the root account is locked, wait for 5 minutes and try again.
Problem 5: The Host Where the Installation Tool Is Installed Does Not Automatically Start
Services After Being Restarted
Possible cause: The command for automatically starting services fails to be executed during the host startup.
Solution:
The MCNA node is the CNA node on which the installation tool is installed.
2. Run the following command and enter the password of user root to switch to user root:
su - root
Possible cause: The firewall on the local PC blocks the communication between the PC and the host.
Solution: Disable the Windows firewall and other software firewalls on the local PC and start the hosts through the network.
Possible cause: The IP address of the installation tool node (that is, the configured DHCP service address) cannot communicate with the
installation plane.
Solution:
Check the physical connection between the installation tool node and the host to be installed. Ensure that no hardware fault, such as
network cable or network port damage, occurs.
Check the physical devices between the installation tool node and the host to be installed, such as switches and firewalls. Ensure that the
DHCP, TFTP, and FTP ports are not disabled or the rates of the ports are not limited.
TFTP and FTP have security risks. You are advised to use secure protocols such as SFTP and FTPS.
If the IP address of the installation tool is not in the network segment of the installation plane, check whether the DHCP relay is
configured on the switch.
If a VLAN is configured for the host management plane, ensure that the installation plane and the host management plane are in different
VLANs and the installation tool can communicate with the two planes.
During PXE-based installation, ensure that the data packets on the PXE port do not carry VLAN tags, and allow these data packets in the network settings.
On the switch port of a node to be installed in PXE mode, set the PVID to the VLAN used by nodes installed in non-PXE mode and configure the packets as untagged.
After the possible faults are rectified, boot the hosts from the network again.
Possible cause: Multiple DHCP servers are deployed on the installation plane.
Solution: Disable redundant DHCP servers to ensure that the installation tool provides DHCP services.
Possible cause: The host to be installed is connected to multiple network ports, and DHCP servers exist on the network planes of multiple
network ports.
Solution: Disable DHCP servers on non-installation planes to ensure that the installation tool provides DHCP services.
Possible cause: The host to be installed supports boot from the network, but this function is not configured during booting.
Solution: Configure hosts to be installed to boot from the network (by referring to corresponding hardware documentation), and then update
the host installation progress to Installation Progress in the Install Host step of the PXE process.
Possible cause: Hosts to be installed do not support booting from the network.
Solution: Install the hosts by mounting ISO images.
Possible cause: Packet loss or delay occurs due to network congestion or high loads on the switch.
Solution: Ensure that the network workloads are light during the installation process. If more than 10 hosts are to be installed, boot 10 hosts
from the network per batch.
Problem 7: Automatic Logout After Login Using a Firefox Browser Is Successful but an Error
Message Indicating that the User Has Not Logged In or the Login Times Out Is Displayed When
the User Clicks on the Operation Page
Possible cause: The time of the server where the FusionCompute web installation tool is deployed is not synchronized with the local time. As a
result, the Firefox browser considers that the registered session has expired.
Solution: Change the local time or run the date -s xx:xx:xx command (xx:xx:xx indicates hours:minutes:seconds) on the server to ensure that the local time is the same as the time of the server where the web installation tool is deployed. Then refresh the browser and log in again.
Logging In to FusionCompute
Restarting Services
Scenarios
This section guides you to use PuTTY and the private key required for authentication to log in to a target node.
Prerequisites
You have obtained the private key certificate matching the public key certificate.
You have obtained the password of the private key certificate if the private key certificate is encrypted.
Procedure
1. Check whether PuTTY was used to log in to the target node in private-public key pair authentication mode on the local PC.
If yes, go to 7.
2. Run PuTTY and enter the IP address of the target node and the SSH port number (22 by default).
3. In the Category area in the left pane, choose Connection > SSH > Auth.
The SSH authentication configuration page is displayed.
4. Click Browse, select the prepared private key certificate in the displayed window, and click Open.
The file name extension of the private key certificate is *.ppk. Contact the administrator to obtain the private key certificate.
The following figure shows the screen after the configuration.
6. To facilitate subsequent access, create a custom session in Saved Sessions and click Save.
The following figure shows the session configuration page.
8. Click Open.
Scenarios
This section guides administrators to log in to FusionCompute to manage virtual, service, and user resources in a centralized manner.
Prerequisites
Conditions
You have configured the Google Chrome or Mozilla Firefox browser. For details, see Setting Google Chrome (Applicable to Self-Signed
Certificates) or Setting Mozilla Firefox .
The browser resolution is set to 1280 x 1024 or higher based on the service requirement to ensure the optimum display effect on
FusionCompute.
If the security certificate was not installed when the Google Chrome browser was set, the browser may display a message indicating that the web page cannot be
displayed upon first login to FusionCompute or to a VM using VNC. In this case, press F5 to refresh the web page.
The system supports the following browsers:
Google Chrome 118, Google Chrome 119, and Google Chrome 120
Mozilla Firefox 118, Mozilla Firefox 119, and Mozilla Firefox 120
Microsoft Edge 118, Microsoft Edge 119, and Microsoft Edge 120
Data
Table 1 describes the data required for performing this operation.
IP address of the VRM node: Specifies the floating IP address of the VRM nodes if the VRM nodes are deployed in active/standby mode. Example: 192.168.40.3
Username/Password: Specifies the username and password used for logging in to FusionCompute. Example (common mode): username admin; the password is set during the installation for a tool-based VRM installation, or when executing the initialization script after a manual VRM installation using an ISO image.
User type: Specifies the type of the user to log in to the system. Local user: log in to the system using a local username and password. Domain user: log in to the system using a domain username and password. Example: Local user
Procedure
1. Open Mozilla Firefox.
When accessing the IP address, the system automatically converts the IP address into the HTTPS address to improve access security.
If a firewall is deployed between the local PC and FusionCompute, enable port 8443 on the firewall.
3. Set Username and Password, select User type, and click Login. If you attempt to log in to the system again after the initial login fails, you
also need to set Verification code.
Enter the username and password based on the permission management mode configured during VRM installation.
If it is your first login using the administrator username, the system will ask you to change the password of the admin user.
The password must meet the following requirements:
The password contains 8 to 32 characters.
The password must contain at least one space or one of the following special characters: `~!@#$%^&*()-_=+\|[{}];:'",<.>/?
The password must contain at least two types of the following characters:
Uppercase letters
Lowercase letters
Digits
The FusionCompute management page is displayed after you log in to the system.
The user is automatically logged out of the FusionCompute management system in case of any of the following circumstances:
After you log in to FusionCompute, you can learn the product functions from the online help, product tutorial, and alarm help.
Scenarios
After bare VM creation and OS installation are complete for eDME, you need to install Tools provided by Huawei on the VMs to improve the VM
I/O performance and implement VM hardware monitoring and other advanced functions. Some features are available only after Tools is installed.
For details about such features, see their prerequisites or constraints.
In addition to using Tools delivered with FusionCompute, you can also obtain the FusionCompute SIA software package that is compatible with
FusionCompute and OS from Huawei official website. After obtaining the FusionCompute SIA software package, you can install the latest Tools to
use new features. For details about the compatibility information and installation guide, see the FusionCompute SIA product documentation.
Prerequisites
Conditions
Tools has not been installed on the VM. If Tools has been installed, uninstall it by referring to Uninstalling the Tools from a Linux VM .
The free space of the system disk must be greater than 20 MB.
You have installed the gzip tool for the VM OS. For details about how to install gzip, see the product documentation of the OS in use. You can
run the tar command to decompress the software package.
Procedure
Mount Tools to the VM.
The Tools installation file is stored on the host. The VM can access the installation file only after Tools is mounted to the VM.
3. Locate the row that contains the target VM and choose More > Tools > Mount Tools.
A dialog box is displayed.
4. Click OK.
5. Locate the row that contains the target VM and choose Log In Using VNC.
The VNC login window is displayed.
Install Tools.
7. On the VM desktop displayed in the VNC window, enter the command line interface (CLI) mode (For instructions about how to enter the
CLI mode, see the OS operation guide).
The CLI window is displayed.
8. Run the following command to check whether the qemu-guest-agent service exists:
ps -eaf | grep qemu-ga
If information similar to the following is displayed and contains qemu-ga or qemu-guest-agent, the qemu-guest-agent service exists in the
OS:
If yes, go to 9.
If no, go to 11.
9. Run the following command to delete the open source qemu-guest-agent service.
The following uses CentOS as an example. For details about the commands for other OSs, see the corresponding guide.
rpm -e qemu-guest-agent
12. Run the following command to mount a CD/DVD-ROM drive to the VM:
mount mounting path xvdd
For example, mount /dev/sr0 xvdd
For the Kylin V10 OS, run the following command to mount the CD/DVD-ROM drive to the VM:
mount -t iso9660 -o loop mounting path xvdd
The directory for mounting a CD/DVD-ROM drive to a VM varies with the version of Linux OS running on the VM.
For details about the OSs and corresponding mounting directories in the x86 architecture, see Table 1.
All OSs supported in the Arm architecture support both the ISO file and CD/DVD-ROM drive, and the mount directory can only be
/dev/sr0. Table 2 lists some OSs and mount directories. For details about other OSs, see FusionCompute SIA Guest OS Compatibility
Guide (Arm).
Table 1 Mapping between Linux OS versions and mounting directories (x86 architecture)
All EulerOS versions (including openEuler): ISO file or CD/DVD-ROM drive; mounting directory: /dev/sr0
Table 2 Mapping between Linux OS versions and mounting directories (Arm architecture)
14. Run the following command to view the required Tools installation package:
ls
The following information is displayed:
.bz2 package:
...
vmtools-xxxx.tar.bz2
vmtools-xxxx.tar.bz2.sha256
.gz package:
...
vmtools-xxxx.tar.gz
vmtools-xxxx.tar.gz.sha256
15. Run the following commands to copy the Tools installation package to the root directory:
.bz2 package:
cp vmtools-xxxx.tar.bz2 /root
cd /root
.gz package:
cp vmtools-xxxx.tar.gz /root
cd /root
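Optionally, you can verify the integrity of the copied package against the delivered .sha256 file before decompressing it. A minimal sketch, assuming sha256sum is available in the guest OS and using the .gz package as an example:
# If the .sha256 file uses the standard "checksum  filename" format:
sha256sum -c vmtools-xxxx.tar.gz.sha256
# Otherwise, compare the two values manually:
sha256sum vmtools-xxxx.tar.gz
cat vmtools-xxxx.tar.gz.sha256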
16. Run the following command to decompress the Tools installation package:
.bz2 package:
tar -xjvf vmtools-xxxx.tar.bz2
.gz package:
tar -xzvf vmtools-xxxx.tar.gz
The version of Tools is incompatible with the VM OS if the following information is displayed. In this case, download the latest Tools and install it; if the installation still fails, contact technical support:
The latest Tools software package is stored in FusionCompute_SIA-xxx-GuestOSDriver_xxx.zip. Download the software package of
the latest version from the following website:
For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download it.
For carrier users: Visit https://support.huawei.com , search for the document by name, and download it.
a. Obtain the vmtools-linux.iso package from the following directory in the software package (version 8.1.0.1 in the x86 scenario is used as an example here):
FusionCompute_SIA-8.1.0.1-GuestOSDriver_X86.zip\uvp-vmtools-3.0.0-019.060.x86_64.rpm\uvp-vmtools-3.0.0-019.060.x86_64.cpio\.\opt\patch\programfiles\vmtools\
b. Unmount Tools. Locate the row that contains the target VM and choose More > Tools > Unmount Tools.
c. Mount the file to the VM. For details, see "Mounting a CD/DVD-ROM Drive or an ISO File" in FusionCompute 8.8.0 User
Guide (Virtualization).
d. Perform 12 again.
Run the ./install -i command to install Tools if the x86 VM runs one of the following OSs:
DOPRA ICTOM V002R003 EIMP
DOPRA ICTOM V002R003 IMAOS
Red Hat Enterprise Linux 3.0
Red Hat Enterprise Linux 3.4
20. After the VM restarts, log in to the VM using VNC as user root.
21. On the VM desktop displayed in the VNC window, enter the command line interface (CLI) mode (For instructions about how to enter the
CLI mode, see the OS operation guide).
The CLI window is displayed.
If Tools fails to be installed on some Arm-based OSs, for example, Kylin and UOS, see "What Should I Do If Tools Installed on Some OSs Fails to be
Started?" in FusionCompute 8.8.0 Maintenance Cases.
Additional Information
Related Tasks
Uninstalling the Tools from a Linux VM
Related Concepts
Introduction to Tools
Scenarios
Uninstall the Tools from a VM if the Tools malfunctions or you have misoperated the VM.
After you uninstall the Tools, install it again in a timely manner. Otherwise, the VM performance deteriorates, VM hardware cannot be monitored, and other advanced
VM functions become unavailable.
For details about Tools functions, see Introduction to Tools .
Prerequisites
Conditions
Tools has been installed on the VM.
Procedure
Uninstall the Tools.
2. On the VM desktop displayed in the VNC window, enter the command line interface (CLI) mode (For instructions about how to enter the
CLI mode, see the OS operation guide).
The CLI window is displayed.
Warning: If the guest is suspicious of having no virtio driver,uninstall UVP VMTools may cause the guest inoperatable after being rebooted.
If Tools cannot be uninstalled by performing the preceding operation, run the following command to uninstall it again:
cd /etc/.vmtools
./uninstall
Follow-up Procedure
After you uninstall the Tools, install it again in a timely manner. Otherwise, the VM performance deteriorates, VM hardware cannot be monitored,
and other advanced VM functions become unavailable.
For details, see Installing Tools for eDME .
Scenarios
During Tools upgrade preparation or verification process, check the running status and version of the Tools on FusionCompute.
Tools can be in one of the following states:
Running: Tools is running properly. However, the system failed to obtain Tools version due to a network fault or the version mismatch
between the VRM node and the host.
Running (Current Version: x.x.x.xx): Tools is running properly and the Tools version is x.x.x.xx.
Not Running (Current Version: x.x.x.xx): Tools has been installed on the VM that is in the Stopped or Hibernated state and Tools was
x.x.x.xx when the VM was running last time.
Prerequisites
Conditions
You have logged in to FusionCompute.
Procedure
Search for a VM.
3. (Optional) On the VM page, click Advanced Search on the top of the VM list, enter or select the search criteria, and then click Search.
The query result is displayed.
Search criteria can be IP Address, VM ID, VM Name, MAC Address, Description, UUID, Tools Status, Cluster/Host, Type, and Status.
5. Locate the row that contains Tools on the Summary page and view the Tools version.
Running: Tools is running properly. However, the system failed to obtain Tools version due to a network fault or the version mismatch
between the VRM node and the host.
Running (Current Version: x.x.x.xx): Tools is running properly and the Tools version is x.x.x.xx.
Not Running (Current Version: x.x.x.xx): Tools has been installed on the VM that is in the Stopped or Hibernated state and the
Tools version was x.x.x.xx when the VM was running last time.
Only Hygon servers support BIOS configuration. For details about the BIOS parameters, see the server vendor's configurations.
To access the BIOS from the OS, run the ipmitool chassis bootdev bios && ipmitool power reset command or the ipmitool power cycle command. The BIOS setting page is displayed after the server restarts.
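One reading of the command above as two ready-to-run variants, assuming ipmitool is installed in the OS and can reach the local BMC:
# Variant 1: set the next boot target to BIOS setup, then warm-reset the server.
ipmitool chassis bootdev bios && ipmitool power reset
# Variant 2: set the next boot target to BIOS setup, then power-cycle the server.
ipmitool chassis bootdev bios && ipmitool power cycle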
The dimmed options are unavailable. The items marked with have submenus.
For details about how to set baseline parameters, see Table 1.
SR-IOV: Enable
IO IOMMU: Enabled
Scenarios
This section guides administrators to configure the Google Chrome browser before logging in to FusionCompute for the first time. After the
configuration, you can use Google Chrome to perform operations on FusionCompute.
Related configurations, such as certificate configuration, for Google Chrome are required.
Google Chrome 115 is used as an example.
If the security certificate is not installed when the Google Chrome browser is configured, the download capability and speed for converting a VM to a template and
importing a template are limited.
Prerequisites
Conditions
Data
Data preparation is not required for this operation.
Procedure
Enter the login page.
If a firewall is deployed between the local PC and FusionCompute, enable port 8443 on the firewall.
The HTTPS protocol used by FusionCompute supports only TLS 1.2. If SSL 2.0, SSL 3.0, TLS 1.0, or TLS 1.1 is used, the FusionCompute system cannot be
accessed.
If Google Chrome slows down after running for a period of time and no data needs to be saved, press F6 on the current page to move the cursor to the address
bar of the browser. Then, press F5 to refresh the page and increase the browser running speed.
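If you need to confirm from the local PC that the address accepts TLS 1.2 on port 8443, a quick hedged check with OpenSSL (assuming the openssl client is installed; replace VRM-floating-IP with the actual address):
# A successful handshake in the output indicates that TLS 1.2 is accepted.
openssl s_client -connect VRM-floating-IP:8443 -tls1_2 </dev/null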
7. Locate the row that contains the address bar of the browser, and click Settings.
Select Privacy and security.
9. Click Import.
The Certificate Import Wizard dialog box is displayed.
11. Click Browse on the line where the file name is located.
Select the exported certificate.
To use a self-signed certificate, you need to generate a root certificate, issue a level-2 certificate based on the root certificate, use the level-2 certificate as the
web certificate, and import the root certificate to the certificate management page of the browser.
13. Select Place all certificates in the following store and click Browse.
The Select Certificate Store dialog box is displayed.
19. Locate the row that contains the address bar of the browser and select More tools.
Click Clear browsing data.
The Clear browsing data dialog box is displayed.
Browsing history
23. In the address box of the browser, repeat 2 to access the login page. You can see that Not secure is no longer displayed in non-Chinese
cryptographic algorithm scenarios.
Scenarios
This section guides administrators to configure the Mozilla Firefox browser before logging in to FusionCompute for the first time so that they can use Mozilla Firefox to perform operations on FusionCompute normally.
Prerequisites
Conditions
You have obtained the floating IP address of the VRM management nodes.
Data
Data preparation is not required for this operation.
Procedure
1. Open Mozilla Firefox.
If a firewall is deployed between the local PC and FusionCompute, enable port 8443 on the firewall.
4. Verify that Permanently store this exception is selected and click Confirm Security Exception.
Mozilla Firefox setting is complete.
3.9.2.9.1.1 x86
7 GDEKernel_25.1.0.SPC6_Software_EulerOS-X86_Docker-BackupWebsite-Any.7z
8 GDEKernel_25.1.0.SPC6_Software_EulerOS-X86_Pkg-BackupAgentOM-Any.7z
1 DSP_25.1.0.SPC6_Software_Any-Any_Any-Assets-Any.zip
Description: Asset package of information collection and inspection. The package is used to help O&M personnel detect, locate, and resolve problems, improving the maintainability of Data Cube services.
How to obtain: Access the path for downloading DSP software packages at the Support website, find Digital Service Platform 25.1.0.SPC6, and download the required software packages.
2 DSP_25.1.0.SPC6_Software_Any-Any_Pkg-DSPBase-Any.7z
Description: Software package on which the data zone depends. It supports public key creation, installation, upgrade, and secondary development in the data zone.
16 DSP_25.1.0.SPC6_Software_EulerOS-X86_Docker-LBKeepalived-Any.7z
1 ITInfra_5.1.0.SPC8_Software_Euleros2sp12-X86_Docker-DockerForLayer-OP.7z
Description: Software package required by the management zone, and common DSP software package in the data zone.
How to obtain: Go to the path for obtaining IT Infra software packages at the Support website, find IT Infra 5.1.0.SPC8, and download the required software packages.
2 ITInfra_5.1.0.SPC8_Software_Euleros2sp12-X86_Pkg-FusionSphereVMImage40g-Any.zip
Description: EulerOS V2.0 SP10 (x86) OS image package. It can be used to create VMs on x86-based FusionSphere OpenStack.
1 ADC_25.1.0.SPC6_Software_EulerOS-X86_Docker-Base-Web-Package-Any.7z
Description: Basic installation packages of the general job orchestration service.
How to obtain: Access the path for obtaining ADC software packages at the Support website, click Application Development Center 25.1.0.SPC6, and download the required software packages.
2 ADC_25.1.0.SPC6_Software_EulerOS-X86_Docker-Base-Package-Any.7z
3 ADC_25.1.0.SPC6_Software_EulerOS-X86_Docker-Base-Package-Lib-Any.7z
3.9.2.9.1.2 Arm
7 GDEKernel_25.1.0.SPC6_Software_EulerOS-Aarch64_Docker-BackupWebsite-Any.7z
8 GDEKernel_25.1.0.SPC6_Software_EulerOS-Aarch64_Pkg-BackupAgentOM-Any.7z
1 DSP_25.1.0.SPC6_Software_Any-Any_Any-Assets-Any.zip
Description: Asset package of information collection and inspection. The package is used to help O&M personnel detect, locate, and resolve problems, improving the maintainability of Data Cube services.
How to obtain: Access the path for downloading DSP software packages at the Support website, find Digital Service Platform 25.1.0.SPC6, and download the required software packages.
2 DSP_25.1.0.SPC6_Software_Any-Any_Pkg-DSPBase-Any.7z
Description: Software package on which the data zone depends. It supports public key creation, installation, upgrade, and secondary development in the data zone.
16 DSP_25.1.0.SPC6_Software_EulerOS-Aarch64_Docker-LBKeepalived-Any.7z
1. ITInfra_5.1.0.SPC8_Software_Euleros2sp12-Aarch64_Docker-DockerForLayer-OP.7z: Software package required by the management zone, and common DSP software package in the data zone.
2. ITInfra_5.1.0.SPC8_Software_Euleros2sp12-Aarch64_Pkg-FusionSphereVMImage40g-OP.zip: EulerOS V2.0 SP10 (Arm) OS image package. It can be used to create VMs on Arm-based FusionSphere OpenStack.
How to obtain: Go to the path for obtaining IT Infra software packages at the Support website, find IT Infra 5.1.0.SPC8, and download the required software packages.
Basic installation packages of the general job orchestration service:
1. ADC_25.1.0.SPC6_Software_EulerOS-Aarch64_Docker-Base-Web-Package-Any.7z
2. ADC_25.1.0.SPC6_Software_EulerOS-Aarch64_Docker-Base-Package-Any.7z
3. ADC_25.1.0.SPC6_Software_EulerOS-Aarch64_Docker-Base-Package-Lib-Any.7z
How to obtain: Access the path for obtaining ADC software packages at the Support website, click Application Development Center 25.1.0.SPC6, and download the required software packages.
1. HiCloud_25.1.0_Tool_Any-Any_HicloudVMToolBox-OP.zip: VM creation script package, which is used to automatically create the GKit VM.
How to obtain: Go to the CMP HiCloud path at the Support website, select the 25.1.0 version, and download the software package in the Software area.
1. HiCloud_25.1.0_Scene_Any-Any_DCS.zip: Scenario package. How to obtain: Go to the CMP HiCloud path at the Support website, select the 25.1.0 version, and download the software package in the Version Documentation area.
2. HiCloud_25.1.0_Tool_Any-Any_HicloudAutoInstallTool-OP.zip: Silent installation script package, which is used to install GDE and CMP HiCloud services. How to obtain: Go to the CMP HiCloud path at the Support website, select the 25.1.0 version, and download the software package in the Software area.
1. HiCloud_25.1.0_Tool_Euler-Aarch64_Docker-Qemu.zip: QEMU binary package, which is used to process the QCOW2 image and is integrated into VMTools. This package applies to the Arm architecture. How to obtain: Go to the CMP HiCloud path at the Support website, select the 25.1.0 version, and download the software package in the Software area.
1. HiCloud_25.1.0_Software_Euler-Any_Docker-CommonUserGateway.7z: CommonUserGateway service package.
2. HiCloud_25.1.0_Software_Euler-Any_Docker-CommonAdminGateway.7z: CommonAdminGateway service package.
How to obtain: Go to the CMP HiCloud path at the Support website, select the 25.1.0 version, and download the software package in the Software area.
1. HiCloud_25.1.0_Software_Euler-Any_Docker-DBaasSCUI.7z: DBaas adaptation package. How to obtain: Go to the CMP HiCloud path at the Support website, select the 25.1.0 version, and download the software package in the Software area.
Procedure
1. Log in to the GDE management zone as the op_svc_cfe tenant in tenant login mode at https://IP address:31943.
IP address: the value of Floating IP of management plane on the HiCloud Parameters sheet described in 1.5.1.2-12.
The password of the op_svc_cfe tenant is the value of op_svc_cfe tenant password on the HiCloud Parameters sheet described in 1.5.1.2-12.
2. Choose Maintenance > Instance Deployment > K8S Application Deployment. The K8S Application Deployment page is displayed.
3. Search for the application corresponding to the service to be restarted and click Stop in the Operation column of the found record.
The data center egress connects to external routers of the customer network through border leaf switches. The networking topology supports Layer 2
or Layer 3:
Layer 2 networking topology: spine + leaf (integrated deployment of border and spine switches)
The following uses Layer 3 networking as an example to describe the networking and configuration.
Topology description
1. Core spine switches, service and management leaf switches, storage leaf switches, and border leaf switches are configured in M-LAG mode.
Leaf and border leaf switches use 40GE or 100GE ports in the uplink and implement full-mesh networking with core spine switches by
configuring Eth-Trunks. The number of uplink ports can be adjusted based on project bandwidth requirements.
2. Border leaf switches are connected to external routers of the customer network through 10GE ports. The number of ports depends on the
actual networking requirements. If spine switches and border leaf switches are deployed in an integrated manner, configure the ports on spine
switches.
3. Management and service leaf switches as well as storage leaf switches are connected to converged nodes and compute nodes through 10GE
ports. Each node is configured with two-port bonds, supporting four network ports (two storage network ports and two management and
service network ports) or six network ports (two storage network ports, two management network ports, and two service network ports).
Select the number of ports depending on the actual situation.
4. Firewalls and load balancers are deployed in load balancing mode and are connected to border leaf switches through 10GE ports in bypass
mode. Two 10GE ports are used in the uplink and downlink, respectively. If spine switches and border leaf switches are deployed in an
integrated manner, configure Eth-Trunks on spine switches.
5. IP SAN storage nodes are connected to storage leaf switches through 10GE ports. At least two 10GE ports are required. The number of ports
to be configured depends on the networking scale and performance requirements.
6. BMC access switches are connected to BMC ports of each node, storage devices, and management ports of switches. BMC access switches
are also connected to spine core switches for remote management and routine maintenance of devices.
7. Firewalls, load balancers, and border leaf switches implement full-mesh networking at Layer 3 by configuring the VRF, static routing
protocol, or OSPF routing protocol. If spine switches and border leaf switches are deployed in an integrated manner, configure the VRF,
static routing protocol, or OSPF routing protocol on spine switches.
8. Border leaf switches and core spine switches implement full-mesh networking at Layer 3 by configuring the static routing protocol or OSPF
routing protocol. If spine switches and border leaf switches are deployed in an integrated manner, no configuration is required.
9. Border leaf switches and routers of the customer network implement full-mesh networking at Layer 3 by configuring the static routing
protocol or OSPF routing protocol. If spine switches and border leaf switches are deployed in an integrated manner, configure the static
routing protocol or OSPF routing protocol on spine switches.
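The following is a minimal routing sketch for the Layer 3 interconnection described in items 7 to 9, using standard VRP commands on a border leaf (or integrated spine) switch. The interconnection VLAN ID, VLANIF number, IP addresses, and OSPF process ID are hypothetical placeholders rather than values defined by this solution; replace them with the addressing planned for the project, and use either OSPF or the static route depending on the agreed interconnection mode.
# Create an interconnection VLAN and a Layer 3 interface toward the customer router (example values).
vlan batch 1000
interface Vlanif1000
ip address 10.1.1.1 255.255.255.252
# Option 1: advertise the interconnection subnet through OSPF.
ospf 100
area 0.0.0.0
network 10.1.1.0 0.0.0.3
# Option 2: configure a static default route pointing to the customer router instead of OSPF.
ip route-static 0.0.0.0 0.0.0.0 10.1.1.2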
Configuration prerequisites
Border leaf switches and spine switches are deployed in an integrated manner. Basic configurations, such as M-LAG mode, have been
completed for spine and leaf switches.
Leaf switches are connected to spine switches through the 40GE0/0/47 and 40GE0/0/48 ports in the uplink.
The network segment provided by the customer is used. The gateway is configured on spine switches, and a routing protocol is configured
between routers of the customer network and spine switches.
Management plane VLAN: The VLAN ID is 2, the gateway is 192.168.10.1, and the subnet mask is 255.255.255.0. Two network ports on the
server are configured in the active-standby bonding mode and connected to the 10GE0/0/1 port of leaf switches.
Storage plane VLAN: The VLAN ID is 11, the gateway is 192.168.11.1, and the subnet mask is 255.255.255.0. Two network ports on the
server are configured in the static Link Aggregation Control Protocol (LACP) bonding mode and connected to the 10GE0/0/2 port of leaf
switches.
Service plane VLAN: The VLAN ID is 12, the gateway is 192.168.12.1, and the subnet mask is 255.255.255.0. Two network ports on the
server are configured in the static LACP bonding mode and connected to the 10GE0/0/3 port of leaf switches.
When configuring network port bonding for servers to connect to corresponding switches, you can use the commands for configuring the active-standby mode and
LACP mode for the management network, service network, and storage network.
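As a reference for the preceding note, the following is a minimal sketch of the server-side bonding commands on EulerOS or another RHEL-compatible OS managed by NetworkManager. The NIC names (eth0 to eth3), connection names, and host IP addresses are assumptions for illustration only; adapt them to the actual NIC naming and IP plan, and use the distribution's own network configuration method if NetworkManager is not available.
# Management network: active-backup (active-standby) bond; the switch port carries VLAN 2 untagged.
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100"
nmcli connection add type ethernet ifname eth0 master bond0
nmcli connection add type ethernet ifname eth1 master bond0
nmcli connection modify bond0 ipv4.method manual ipv4.addresses 192.168.10.10/24 ipv4.gateway 192.168.10.1
# Storage network: LACP (802.3ad) bond; the switch port carries VLAN 11 tagged.
nmcli connection add type bond con-name bond1 ifname bond1 bond.options "mode=802.3ad,miimon=100"
nmcli connection add type ethernet ifname eth2 master bond1
nmcli connection add type ethernet ifname eth3 master bond1
nmcli connection add type vlan con-name bond1.11 ifname bond1.11 dev bond1 id 11 ipv4.method manual ipv4.addresses 192.168.11.10/24
# Configure the service network (VLAN 12) in the same way as the storage network, then activate the bonds.
nmcli connection up bond0
nmcli connection up bond1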
Configuration example
The following uses Huawei switches as an example to describe network device configurations:
1. Configurations for access leaf switches (Run the following commands on each of the two leaf switches.)
a. Create VLANs.
Create VLANs for the management plane, service plane, and storage plane.
vlan batch 2 11 12
b. Create an Eth-Trunk.
# Configure uplink ports for connecting leaf switches to spine switches.
# Configure the 40GE0/0/47 and 40GE0/0/48 ports as the Eth-Trunk interface to connect to spine switches in the uplink.
interface Eth-Trunk1
trunkport 40GE0/0/47
trunkport 40GE0/0/48
port link-type trunk
undo port trunk allow-pass vlan 1
port trunk allow-pass vlan 2 11 to 12
# Configure the switch port connected to the management NIC.
# Configure the 10GE0/0/1 port of the management plane to connect to servers in the downlink.
# If the management NIC on the server is configured in the active-backup bonding mode (as in the prerequisites above), no Eth-Trunk is
# required on the switch; configure the physical port directly.
interface 10GE0/0/1
port hybrid pvid vlan 2
undo port hybrid vlan 1
port hybrid untagged vlan 2
stp edged-port enable
# Configure the switch ports connected to the service NIC.
# Configure the 10GE0/0/3 port as the Eth-Trunk interface of the service plane to connect to servers in the downlink.
# If the service NIC of the server is configured in a bonding mode, the switch must be configured in the static LACP mode.
interface Eth-Trunk12
trunkport 10GE0/0/3
undo port hybrid vlan 1
port hybrid tagged vlan 12
stp edged-port enable
mode lacp-static
dfs-group 1 m-lag 1
# Configure the switch ports connected to the storage NIC.
# Configure the 10GE0/0/2 port as the Eth-Trunk interface of the storage plane to connect to servers in the downlink.
# If the storage NIC of the server is configured in a bonding mode, the switch must be configured in the static LACP mode.
interface Eth-Trunk11
trunkport 10GE0/0/2
undo port hybrid vlan 1
port hybrid tagged vlan 11
stp edged-port enable
mode lacp-static
dfs-group 1 m-lag 2
2. Configurations for core spine switches (Run the following commands on each of the two spine switches.)
a. Create VLANs.
Create VLANs for the management plane, service plane, and storage plane.
vlan batch 2 11 12
b. Create an Eth-Trunk.
# Create an Eth-Trunk for connecting to network devices in the downlink.
# Assume that ports 40GE0/0/1 and 40GE0/0/2 are used to connect to CE6881 in the downlink.
interface Eth-Trunk1
trunkport 40GE0/0/1
trunkport 40GE0/0/2
port link-type trunk
undo port trunk allow-pass vlan 1
port trunk allow-pass vlan 2 11 to 12
In this interconnection mode, gateways are configured on the customer network. Therefore, you only need to focus on the uplink port trunk configuration.
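After the configuration is complete, you can verify the link aggregation and VLAN status on each leaf and spine switch. The following display commands are a minimal verification sketch only; the Eth-Trunk IDs match the example above, and the exact output format and the availability of the M-LAG check command depend on the CE switch model and software version.
# Check that Eth-Trunk member ports are Up and that LACP negotiation succeeded on the LACP Eth-Trunks.
display eth-trunk 1
display eth-trunk 11
display eth-trunk 12
# Check that VLANs 2, 11, and 12 exist and are allowed on the expected ports.
display vlan
# Check the M-LAG pairing status between the two switches.
display dfs-group 1 m-lag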
Overview
Tools provides drivers for VMs.
After VM creation and OS installation are complete, you need to install Tools provided by Huawei on the VMs to improve the VM I/O performance
and implement VM hardware monitoring and other advanced functions. Some features are available only after the Tools is installed. For details
about such features, see the prerequisites or constraints of the corresponding features.
Functions
After the Tools is installed and started on a VM, the VM provides the following functions:
High-performance I/O: provides high-performance disk I/O and network I/O functions for a VM.
Hardware monitoring: obtains the CPU and memory usage of a VM, and the space usage of each disk or partition on a VM.
Advanced functions:
Adjusting the CPU specifications of a VM in the running state
Creating a VM snapshot
(Intel architecture) VM BSOD detection
Synchronizing time from the host to the VM
Advanced functions for VM NICs, including QoS settings
Automatically upgrading drivers for the VM, including the Tools driver
Declaration: This is a high-risk feature. Its use complies with industry practices; however, end user data may be required to implement it. Exercise caution and obtain end users' consent when using this feature.
You can use Tools to query the name of the host housing the VM, the NIC IP address, system time, and VM idle values.
Precautions
Do not install any non-Huawei Tools on VMs running on the FusionCompute system.
Do not perform any of the following operations when installing Tools. Otherwise, the installation may fail or the system may be unstable after
installation:
Related Concepts
VM
A VM is a virtual computer that runs an OS and applications.
A VM runs on a host, from which it obtains CPU, memory, and other compute resources as well as USB devices, and uses the network connection and storage access
capabilities of the host. Multiple VMs can run concurrently on one host.
The VM creation location can be a host or a cluster. After a VM is created, you can migrate it to another host, or perform operations to adjust its
specifications and peripherals, such as adding NICs, attaching disks, binding USB devices, and mounting CD/DVD-ROM drives.
VM template
A VM template is a copy of a VM, containing an OS, applications, and VM specification configurations. It is used to create VMs that have OSs
installed. Compared with creating bare VMs, creating VMs from a template takes much less time.
A VM template can be created by converting a VM, cloning a VM, or cloning an existing template. You can convert a template to a VM or deploy a
VM from a template. You can also export a template from a site and import the template to another site to create a VM.
A VM template file can be in OVA or OVF format, and an image file can be in QCOW2 or VHD format.
A template in OVA format contains only one OVA file. A VM template in OVF format consists of one OVF file and multiple VHD files.
OVF file: provides the description information about the VM. The file name is the same as the VM template, for example, template01.ovf.
VHD file: indicates the VM disk file. A VM disk file is generated for each VM disk. The file name format is Template name-Disk
identifier.vhd, for example, template01-sdad.vhd.
Disk identifier in the VHD file name consists of the disk bus type and disk serial number.
hd (x86): the disk bus type is IDE. A maximum of 3 disks of this bus type are supported; the disks are numbered a, c, and d, corresponding to disk slot numbers 1 to 3.
vd: the disk bus type is VIRTIO. A maximum of 25 disks of this bus type are supported; the disks are numbered a to y, corresponding to disk slot numbers 1 to 25.
sd: the disk bus type is SCSI. A maximum of 60 disks of this bus type are supported; the disks are numbered a to z, aa to az, and ba to bh, corresponding to disk slot numbers 1 to 26, 27 to 52, and 53 to 60.
QCOW2 file: A QCOW2 image is a disk image supported by the QEMU simulator. Such a file is used to represent a block device disk with a
fixed size. Compared with a RAW image, a QCOW2 image has the following features:
Supports Copy-On-Write (COW). The image file only represents changes made to an underlying disk.
Supports zlib compression and Advanced Encryption Standard (AES) encryption.
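For reference, QCOW2 images can be inspected and converted with the open-source qemu-img tool. This is a general-purpose sketch rather than a FusionCompute-specific procedure, and the file names used are examples only.
# Show the format, virtual size, and compression or encryption attributes of an image.
qemu-img info template01.qcow2
# Create an empty 40 GB QCOW2 image.
qemu-img create -f qcow2 new-disk.qcow2 40G
# Convert a RAW image to a compressed QCOW2 image (-c enables compression, -p shows progress).
qemu-img convert -p -c -f raw -O qcow2 disk.raw disk.qcow2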
You can import OVF and QCOW2 files from the local PC or a share directory.
You can import OVA files only from a share directory, and the OVA files must be OVA templates exported from FusionCompute.
VM Creation Methods
Table 1 describes the VM creation methods provided by FusionCompute.
Creating a bare VM
Description: A bare VM is like a blank physical computer without an OS installed. You can create a bare VM on a host or in a cluster, and configure VM specifications, such as the number of CPUs and NICs and the size of memory or disks. After a bare VM is created, install an OS on it. The procedure for installing an OS on a bare VM is the same as that for installing an OS on a physical computer.
Application scenario: After the system is installed, the first VM is needed. The OSs or specifications of existing VMs or templates in the system do not meet user requirements. A bare VM needs to be converted or cloned to a template for VM creation; before cloning or converting it to a template, install an OS on it.
Creating a VM from a template
Description: Use an existing VM template to create a VM that has similar specifications with the template. You can convert an existing template on the site to a VM, or deploy a VM from the template. You can also export a template from a peer site and import it to the current site to create a VM. After a template is converted to a VM, the template disappears, and all attributes of the VM are identical to the template attributes. The new VM inherits the following attributes of the template (you can customize other attributes): VM OS type and version; number and size of VM disks and bus type; number of VM NICs.
Application scenario: The OSs and specifications of templates in the system meet user requirements. Creating a VM from a template reduces the creation time. You can also export a template from a peer site and import it to the current site to create a VM.
Cloning a VM from an existing VM
Description: Clone a VM to obtain a VM that has similar specifications with the source VM. The new VM inherits attributes of the source VM; you can customize other attributes.
Application scenario: If multiple similar VMs are required, you can install different software products on one VM and clone it multiple times to obtain the required VMs.
For computing-intensive and network-intensive services, VMs may fail to meet service requirements after virtualization. Therefore, evaluate whether such services can be migrated to the cloud before the migration.
During VM creation, only OSs supported by FusionCompute can be installed on VMs. Click FusionCompute Compatibility Query to obtain the
supported OSs.
For details about the VM OSs that support the SCSI or VIRTIO bus type and the maximum CPU and memory specifications supported by the
VM OSs, see FusionCompute SIA Guest OS Compatibility Guide (x86) or FusionCompute SIA Guest OS Compatibility Guide (Arm).
For details about how to query the FusionCompute SIA version, see "How Do I Query the FusionCompute SIA Version?" in FusionCompute 8.8.0 O&M Guide.