Select a specific event, and view its details. Clicking the Component ID in the event details view brings
you to the physical view of the specific component in the VxRail Manager Health tab.
The example shows a VxRail cluster which has a Gen 2 appliance with four nodes and a Gen 3 node. The
appliance detail of the node shows the front view with the disks and the back view with the power supplies
and network interfaces. Clicking the node on the back view shows the details of the node.
When a new version is available, the Config icon in the left navigation bar displays a highlighted number,
and the Internet Upgrade button is displayed. The Local Upgrade button – shown on the slide – is always
available. The VxRail Manager VM must have connectivity to the internet for the internet upgrade option.
In this example we see a warning status on the nodes of a G Series appliance. Clicking the warning icon
on Node 1 takes us to the specific event. In this example we see that the message is a host health
warning. One would have to use the vSphere Web Client to explore further.
The node information panel of a G series node can also be used to initiate maintenance activities:
• Add disks
• Toggle chassis LED indicator
To see the NIC details for E, V, P, and S series (Gen 3) nodes, click the network interface ports in the
back view. The MAC addresses of the NICs and the link status are shown.
The Disk Information panel displays a message when the remaining rated write endurance reaches
certain levels:
• Warning: 30% rated write endurance remaining
• Error: 20% rated write endurance remaining
• Critical: 5% rated write endurance remaining
The Error and Critical levels are reported on the VxRail Manager Events screen.
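The thresholds above can be sketched as a simple classification. This is an illustrative helper, not part of the VxRail Manager API; the threshold values come from the text, while the function name and return strings are assumptions.

```python
def endurance_level(percent_remaining):
    """Map remaining rated write endurance (%) to the alert level
    reported by the Disk Information panel (illustrative sketch)."""
    if percent_remaining <= 5:
        return "Critical"   # reported on the Events screen
    if percent_remaining <= 20:
        return "Error"      # reported on the Events screen
    if percent_remaining <= 30:
        return "Warning"
    return "OK"
```

For example, a disk with 18% endurance remaining would be classified as Error and appear on the VxRail Manager Events screen.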
The disk information panel can also be used to initiate maintenance activities:
• Replace disks
• Toggle disk LED indicator
The power-down procedure is executed via VxRail Manager using the Shut Down Cluster feature. We
discuss the shutdown procedure in the next slide. The power-up procedure is a manual power-on of each
host. The steps to power up a VxRail Appliance are:
• Make sure the TOR switch is fully powered on.
• Power on each host manually by pressing the power button.
• All the service VMs are powered on automatically; wait several minutes for the VMs to power up. The
locator LED of Node 1 is automatically turned off when VxRail Manager starts.
• Users must power on the customer VMs manually.
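The power-up steps above follow a strict ordering. The sketch below is a hypothetical validation helper, not a VxRail tool; the step names and function are assumptions made for illustration.

```python
# Required power-up ordering, per the steps above.
POWER_UP_ORDER = [
    "tor_switch",    # TOR switch fully powered on first
    "hosts",         # press the power button on each host
    "service_vms",   # powered on automatically; allow several minutes
    "customer_vms",  # powered on manually by the user, last
]

def is_valid_power_up(sequence):
    """Return True if the proposed sequence respects the required order."""
    indices = [POWER_UP_ORDER.index(step) for step in sequence]
    return indices == sorted(indices)
```

For instance, powering on hosts before the TOR switch would be rejected.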
Go to the Config General tab and click the Shut Down button. Click Confirm in the confirmation dialog.
VxRail Manager runs a series of precheck tests. Any precheck failures must be resolved before shutting
down. After all the precheck tests pass, the Shut Down button becomes visible. Click Shut Down to initiate
the shutdown process. The VxRail Manager VM also shuts down as part of the process.
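The precheck gating described above can be expressed as a small sketch. This is an assumed representation for illustration only; VxRail Manager's actual precheck names and internals are not exposed here.

```python
def shutdown_gate(prechecks):
    """prechecks: list of (name, passed) results from the precheck run.
    The Shut Down action is only offered once every precheck passes."""
    failures = [name for name, passed in prechecks if not passed]
    return {
        "can_shut_down": not failures,  # all prechecks must pass
        "failures": failures,           # must be resolved first
    }
```

A run with one failing precheck would keep the shutdown blocked until that failure is resolved and the prechecks are rerun.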
Only capacity disks can be added to Gen 2 nodes. Gen 2 nodes have only one vSAN disk group, so a
cache disk is already in place. SSD capacity disks can be added to all-flash nodes, and HDD capacity
disks can be added to hybrid nodes.
Cache or capacity disks can be added to Gen 3 nodes. Gen 3 nodes can have more than one vSAN disk
group; the maximum number of disk groups varies with the model. Cache disks can be added to systems
that have room for more disk groups. At least one capacity disk must be added along with a cache disk.
SSD capacity disks can be added to all-flash nodes, and HDD capacity disks can be added to hybrid nodes.
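The disk-add rules for the two generations can be sketched as a validation function. This is a hypothetical helper built only from the rules stated above; the parameter names are assumptions, and real VxRail validation involves more checks than this.

```python
def validate_disk_add(generation, cache_disks, capacity_disks,
                      has_free_disk_group_slot):
    """Return (ok, reason) for a proposed disk addition, per the
    Gen 2 / Gen 3 rules described above (illustrative only)."""
    if cache_disks == 0 and capacity_disks == 0:
        return (False, "Nothing to add")
    if generation == 2 and cache_disks > 0:
        # Gen 2 nodes have a single disk group with its cache in place.
        return (False, "Gen 2 nodes accept capacity disks only")
    if cache_disks > 0 and not has_free_disk_group_slot:
        return (False, "No room for another vSAN disk group")
    if cache_disks > 0 and capacity_disks == 0:
        # A new disk group needs at least one capacity disk.
        return (False, "Add at least one capacity disk with a cache disk")
    return (True, "OK")
```

Media type still has to match the node: SSD capacity disks for all-flash nodes, HDD capacity disks for hybrid nodes.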
The add disk procedures are initiated from the VxRail Manager physical view. vSAN auto-claim of storage
is disabled. VxRail Manager handles the addition of capacity disks to an existing disk group or the
creation of new disk groups.
Multiple disk groups may be created, depending on the number of disks and the VxRail model; some
models allow more disk groups than others. Disk group creation is done automatically when the add disk
procedure is run. The table lists the disk group matrix for the various VxRail models.
The rest of the steps are not shown. VxRail performs a precheck of all the disks; after the precheck
passes, click Continue. VxRail adds the disks to the vSAN cluster, and new disk groups are created as
needed.
VxRail clusters can scale up to 64 nodes, but 1 GbE VxRail clusters can only scale up to eight nodes. An
RPQ is required for stretched clusters. Scaling and expansion are considerations an administrator should
keep in mind while designing and maintaining a VxRail environment.
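The scale limits above can be captured in a short sketch. The function is an illustrative assumption, not a VxRail tool; only the 64-node and eight-node (1 GbE) limits come from the text.

```python
def max_cluster_nodes(network_speed_gbe):
    """Maximum node count for a VxRail cluster (illustrative).
    1 GbE clusters top out at 8 nodes; otherwise the limit is 64."""
    return 8 if network_speed_gbe == 1 else 64

def can_expand(current_nodes, nodes_to_add, network_speed_gbe):
    """Would the cluster still fit within the scale limit after expansion?"""
    return current_nodes + nodes_to_add <= max_cluster_nodes(network_speed_gbe)
```

For example, growing a 10 GbE cluster from 60 to 65 nodes would exceed the 64-node limit, while adding a ninth node to a 1 GbE cluster would exceed the eight-node limit.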