1. Virtualization Management
   Storage Overview
   Virtualization Management Web Console Interface
   Supported GPUs
   GPU Pass-Through for Guest VMs
   NVIDIA GRID Virtual GPU Support on AHV
   Windows VM Provisioning
   Nutanix VirtIO for Windows
   Installing Windows on a VM
   PXE Configuration for AHV VMs
   Configuring the PXE Environment for AHV VMs
   Configuring a VM to Boot over a Network
   Uploading Files to DSF for Microsoft Windows Users
   Enabling Load Balancing of vDisks in a Volume Group
   Performing Power Operations on VMs
Copyright
   License
   Conventions
   Version
1. VIRTUALIZATION MANAGEMENT
Nutanix nodes with AHV include a distributed VM management service responsible for storing
VM configuration, making scheduling decisions, and exposing a management interface.
Snapshots
Snapshots are crash-consistent: they do not include the VM's current memory image, only
the VM configuration and its disk contents. The snapshot is taken atomically across the VM
configuration and disks to ensure consistency.
If multiple VMs are specified when creating a snapshot, all of their configurations and disks are
placed into the same consistency group. Do not specify more than 8 VMs at a time.
If no snapshot name is provided, the snapshot is referred to as "vm_name-timestamp", where the
timestamp is in ISO-8601 format (YYYY-MM-DDTHH:MM:SS.mmmmmm).
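As an illustrative sketch only (the service generates the name internally; the VM name here is a placeholder), the default name format can be reproduced in shell:

```shell
# Sketch of the default snapshot name, "vm_name-timestamp", where the
# timestamp is ISO-8601 with microsecond precision (GNU date assumed).
vm_name="vm_name"                                  # placeholder VM name
timestamp=$(date -u +%Y-%m-%dT%H:%M:%S.%6N)        # microsecond ISO-8601 timestamp
snapshot_name="${vm_name}-${timestamp}"
echo "$snapshot_name"
```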
VM Disks
A disk drive may either be a regular disk drive or a CD-ROM drive.
By default, regular disk drives are configured on the SCSI bus, and CD-ROM drives are
configured on the IDE bus. You can also configure CD-ROM drives to use the SATA bus. By
default, a disk drive is placed on the first available bus slot.
Disks on the SCSI bus may optionally be configured for passthrough on platforms that support
iSCSI. When in passthrough mode, SCSI commands are passed directly to DSF over iSCSI.
When SCSI passthrough is disabled, the hypervisor provides a SCSI emulation layer and treats
the underlying iSCSI target as a block device. By default, SCSI passthrough is enabled for SCSI
devices on supported platforms.
Host Maintenance
When a host is in maintenance mode, it is marked as unschedulable so that no new VM
instances are created on it. Subsequently, an attempt is made to evacuate VMs from the host.
If the evacuation attempt fails, the host remains in the "entering maintenance mode" state,
where it is marked unschedulable, waiting for user remediation. You can shut down VMs on the
host or move them to other nodes. Once the host has no running VMs, it is in maintenance
mode.
When a host is in maintenance mode, VMs are moved from that host to other hosts in the
cluster. After exiting maintenance mode, those VMs are automatically returned to the original
host, eliminating the need to manually move them.
Limitations
Nested Virtualization
Nutanix does not support nested virtualization (nested VMs) in an AHV cluster.
Storage Overview
Acropolis uses iSCSI and NFS for storing VM files.
Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does
not import or change any locale settings. The Nutanix software is not localized, and executing
commands with any locale other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment
variables are set to anything other than en_US.UTF-8, reconnect with an SSH
configuration that does not import or change any locale settings.
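As a sketch of that check (a hypothetical helper, not part of the Nutanix tooling), the `locale` output can be scanned for any variable set to something other than en_US.UTF-8:

```shell
# Sketch: scan `locale` output for any variable set to something other
# than en_US.UTF-8. Unset variables (e.g. "LC_ALL=") are ignored.
check_locale() {
  if grep -v 'en_US.UTF-8' | grep -v '=$' | grep -q '='; then
    echo "WARNING: non-en_US.UTF-8 locale detected"
  else
    echo "locale OK"
  fi
}
/usr/bin/locale | check_locale
```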
Note:
• As an admin user, you cannot access nCLI by using the default credentials. If you
are logging in as the admin user for the first time, you must SSH to the Controller
VM and change the default admin password first.
By default, the admin user password does not have an expiry date, but you can change the
password at any time.
When you change the admin user password, you must update any applications and scripts
using the admin user credentials for authentication. Nutanix recommends that you create a user
assigned with the admin role instead of using the admin user for authentication. The Prism Web
Console Guide describes authentication and roles.
Following are the default credentials to access a Controller VM.
Procedure
1. Log on to the Controller VM with SSH by using the management IP address of the Controller
VM and the following credentials.
2. Respond to the prompts, providing the current and new admin user password.
Changing password for admin.
Old Password:
New password:
CAUTION: Verify the data resiliency status of your cluster. If the cluster only has replication
factor 2 (RF2), you can shut down only one node in each cluster at a time. If an RF2 cluster
would have more than one node shut down, shut down the entire cluster.
You must shut down the Controller VM to shut down a node. Before you shut down the
Controller VM, put the node in maintenance mode.
When a host is in maintenance mode, VMs that can be migrated are moved from that host
to other hosts in the cluster. After exiting maintenance mode, those VMs are returned to the
original host, eliminating the need to manually move them.
If a host is put in maintenance mode, the following VMs are not migrated:
• VMs with GPUs, CPU passthrough, PCI passthrough, and host affinity policies are
not migrated to other hosts in the cluster. You can shut down such VMs by setting
the non_migratable_vm_action parameter to acpi_shutdown. If you do not want
to shut down these VMs for the duration of maintenance mode, you can set the
non_migratable_vm_action parameter to block, or manually move these VMs to another
host in the cluster.
• Agent VMs are always shut down if you put a node in maintenance mode and are powered
on again after exiting maintenance mode.
Perform the following procedure to shut down a node.
Note the value of Hypervisor address for the node you want to shut down.
c. Put the node into maintenance mode.
nutanix@cvm$ acli host.enter_maintenance_mode Hypervisor address [wait="{ true |
false }" ] [non_migratable_vm_action="{ acpi_shutdown | block }" ]
Replace Hypervisor address with either the IP address or host name of the AHV host you
want to shut down.
Set wait=true to wait for the host evacuation attempt to finish.
Set non_migratable_vm_action=acpi_shutdown if you want to shut down VMs such
as VMs with GPUs, CPU passthrough, PCI passthrough, and host affinity policies for the
duration of the maintenance mode.
If you do not want to shut down these VMs for the duration of the maintenance mode,
you can set the non_migratable_vm_action parameter to block, or manually move these
VMs to another host in the cluster.
If you set the non_migratable_vm_action parameter to block and the operation to
put the host into the maintenance mode fails, exit the maintenance mode and then
either manually migrate the VMs to another host or shut down the VMs by setting the
non_migratable_vm_action parameter to acpi_shutdown.
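For example, with a hypothetical AHV host IP address of 10.1.64.60, the following command waits for the evacuation attempt to finish and shuts down non-migratable VMs:

```
nutanix@cvm$ acli host.enter_maintenance_mode 10.1.64.60 wait=true non_migratable_vm_action=acpi_shutdown
```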
d. Shut down the Controller VM.
nutanix@cvm$ cvm_shutdown -P now
Procedure
Replace cvm_name with the name of the Controller VM that you found from the preceding
command.
5. If the node is in maintenance mode, log on to the Controller VM and take the node out of
maintenance mode.
nutanix@cvm$ acli
<acropolis> host.exit_maintenance_mode AHV-hypervisor-IP-address
If the cluster is running properly, output similar to the following is displayed for each node in
the cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848, 10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
SecureFileSync UP [7710, 7761, 7762, 7763]
Medusa UP [8029, 8073, 8074, 8176, 8221]
DynamicRingChanger UP [8324, 8366, 8367, 8426]
Pithos UP [8328, 8399, 8400, 8418]
Hera UP [8347, 8408, 8409, 8410]
Stargate UP [8742, 8771, 8772, 9037, 9045]
InsightsDB UP [8774, 8805, 8806, 8939]
InsightsDataTransfer UP [8785, 8840, 8841, 8886, 8888, 8889, 8890]
Ergon UP [8814, 8862, 8863, 8864]
Cerebro UP [8850, 8914, 8915, 9288]
Chronos UP [8870, 8975, 8976, 9031]
Curator UP [8885, 8931, 8932, 9243]
Prism UP [3545, 3572, 3573, 3627, 4004, 4076]
CIM UP [8990, 9042, 9043, 9084]
AlertManager UP [9017, 9081, 9082, 9324]
Arithmos UP [9055, 9217, 9218, 9353]
Catalog UP [9110, 9178, 9179, 9180]
Acropolis UP [9201, 9321, 9322, 9323]
Atlas UP [9221, 9316, 9317, 9318]
Uhura UP [9390, 9447, 9448, 9449]
Snmp UP [9418, 9513, 9514, 9516]
SysStatCollector UP [9451, 9510, 9511, 9518]
Note:
• You must ensure that, at any given time, the cluster has a minimum of three functioning
nodes (never-schedulable or otherwise). Note that to add your first never-schedulable
node to your Nutanix cluster, the cluster must comprise at least three schedulable
nodes.
• You can add any number of never-schedulable nodes to your Nutanix cluster.
• If you want a node that is already a part of the cluster to work as a never-
schedulable node, you must first remove that node from the cluster and then add
that node as a never-schedulable node.
• If you no longer need a node to work as a never-schedulable node, remove the node
from the cluster.
Note: Perform this step only if you want a node that is already a part of the cluster to work as
a never-schedulable node.
For information about how to remove a node from a cluster, see the Modifying a Cluster
topic in the Prism Web Console Guide.
Replace uuid-of-the-node with the UUID of the node you want to add as a never-schedulable
node.
The never-schedulable-node is an optional parameter and is required only if you want to add
a never-schedulable node.
If you no longer need a node to work as a never-schedulable node, remove the node from
the cluster.
If you want the never-schedulable node to now work as a schedulable node, remove the
node from the cluster and add the node back to the cluster without using the never-
schedulable-node parameter as follows.
nutanix@cvm$ cluster add-node node-uuid=uuid-of-the-node
Replace uuid-of-the-node with the UUID of the node you want to add.
Note: For information about how to add a node (other than a never-schedulable node) to a
cluster, see the Expanding a Cluster topic in the Prism Web Console Guide.
Procedure
3. Open Configure CVM from the gear icon in the web console.
The Configure CVM dialog box is displayed.
Procedure
2. Use a text editor such as vi to set the value of the HOSTNAME parameter in the /etc/
sysconfig/network file.
HOSTNAME=my_hostname
3. Use the text editor to replace the host name in the /etc/hostname file.
The host name is updated in the Prism web console after a few minutes.
Tip: Although it is not required for the root user to have the same password on all hosts, doing
so makes cluster management and support much easier. If you do select a different password for
one or more hosts, make sure to note the password for each host.
Procedure
The password you choose must meet the following complexity requirements:
• At least 15 characters.
• At least one upper case letter (A–Z).
• At least one lower case letter (a–z).
• At least one digit (0–9).
• At least one printable ASCII special (non-alphanumeric) character. For example, a tilde
(~), exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
• At least eight characters different from the previous password.
• At most three consecutive occurrences of any given character.
• The password cannot be the same as the last 24 passwords.
• In configurations without high-security requirements, the password must contain:
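As an illustration only (the cluster enforces these rules itself; this hypothetical helper covers just the length and character-class checks, not password history or repetition rules), the basic requirements can be tested in shell:

```shell
# Hypothetical check of the basic complexity rules: length >= 15, plus at
# least one upper-case letter, lower-case letter, digit, and special character.
check_password() {
  pw="$1"
  [ "${#pw}" -ge 15 ] || { echo "too short"; return 1; }
  printf '%s' "$pw" | grep -q '[A-Z]' || { echo "missing upper case"; return 1; }
  printf '%s' "$pw" | grep -q '[a-z]' || { echo "missing lower case"; return 1; }
  printf '%s' "$pw" | grep -q '[0-9]' || { echo "missing digit"; return 1; }
  printf '%s' "$pw" | grep -q '[^A-Za-z0-9]' || { echo "missing special"; return 1; }
  echo "basic checks passed"
}
check_password 'Example~Passw0rd123'
```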
Nutanix Software
Warning: Modifying any of the settings listed here may render your cluster inoperable.
In particular, do not, under any circumstances, delete the Nutanix Controller VM, or take a
snapshot of the Controller VM for backup.
• Nutanix does not support decreasing Controller VM memory below recommended minimum
amounts needed for cluster and add-in features. Nutanix Cluster Checks (NCC), preupgrade
cluster checks, and the AOS upgrade process detect and monitor Controller VM memory.
AHV Settings
• Configuring Layer 2 switching through Open vSwitch. When configuring Open vSwitch, you
configure bridges, bonds, and VLANs.
• Optionally changing the IP address, netmask, and default gateway that were specified for the
hosts during the imaging process.
Open vSwitch: Do not modify the OpenFlow tables that are associated with the default OVS
bridge br0.

VLANs: Add the Controller VM and the AHV host to the same VLAN. By default, the Controller
VM and the hypervisor are assigned to VLAN 0, which effectively places them on the native
VLAN configured on the upstream physical switch.

OVS bonded port (bond0): Aggregate the 10 GbE interfaces on the physical host to an OVS
bond on the default OVS bridge br0 and trunk these interfaces on the physical switch. By
default, the 10 GbE interfaces in the OVS bond operate in the recommended active-backup
mode.
Note: The mixing of bond modes across AHV hosts in the same cluster is not recommended
and not supported.

1 GbE and 10 GbE interfaces (physical host): If you want to use the 10 GbE interfaces for
guest VM traffic, make sure that the guest VMs do not use the VLAN over which the
Controller VM and hypervisor communicate. If you want to use the 1 GbE interfaces for
guest VM connectivity, follow the hypervisor manufacturer's switch port and networking
configuration guidelines. Do not include the 1 GbE interfaces in the same bond as the 10 GbE
interfaces. Also, to avoid loops, do not add the 1 GbE interfaces to bridge br0, either
individually or in a second bond. Use them on other bridges.

IPMI port on the hypervisor host: Do not trunk switch ports that connect to the IPMI
interface. Configure the switch ports as access ports for management simplicity.

Upstream physical switch: Nutanix does not recommend the use of Fabric Extenders (FEX)
or similar technologies for production use cases. While initial, low-load implementations
might run smoothly with such technologies, poor performance, VM lockups, and other issues
might occur as implementations scale upward (see Knowledge Base article KB1612). Nutanix
recommends the use of 10Gbps, line-rate, non-blocking switches with larger buffers for
production workloads.
Use an 802.3-2012 standards-compliant switch that has a low-latency, cut-through design
and provides predictable, consistent traffic latency regardless of packet size, traffic pattern,
or the features enabled on the 10 GbE interfaces. Port-to-port latency should be no higher
than 2 microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on switch ports that are
connected to the hypervisor host.
Avoid using shared buffers for the 10 GbE ports. Use a dedicated buffer for each port.

Controller VM: Do not remove the Controller VM from either the OVS bridge br0 or the
native Linux bridge virbr0.
This diagram shows the recommended network configuration for an Acropolis cluster. The
interfaces in the diagram are connected with colored lines to indicate membership to different
VLANs:
• An internal port with the same name as the default bridge; that is, an internal port named
br0. This is the access port for the hypervisor host.
• A bonded port named bond0. The bonded port aggregates all the physical interfaces
available on the node. For example, if the node has two 10 GbE interfaces and two 1 GbE
interfaces, all four interfaces are aggregated on bond0. This configuration is necessary for
Foundation to successfully image the node regardless of which interfaces are connected to
the network.
Note: Before you begin configuring a virtual network on a node, you must disassociate the
1 GbE interfaces from the bond0 port. See Configuring an Open vSwitch Bond with
Desired Interfaces on page 25.
The following diagram illustrates the default factory configuration of OVS on an Acropolis node:
The Controller VM has two network interfaces. As shown in the diagram, one network interface
connects to bridge br0. The other network interface connects to a port on virbr0. The
Controller VM uses this bridge to communicate with the hypervisor host.
Procedure
• To show interface properties such as link speed and status, log on to the Controller VM, and
then list the physical interfaces.
nutanix@cvm$ manage_ovs show_interfaces
• To show uplink configuration for a bridge, run the following command.
nutanix@cvm$ manage_ovs --bridge_name bridge show_uplinks
Replace bridge with the name of the bridge for which you want to view uplink information.
Omit the --bridge_name parameter if you want to view uplink information for the default
OVS bridge br0.
Output similar to the following is displayed:
Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2 eth1 eth0
lacp: off
lacp-fallback: false
lacp_speed: slow
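When scripting health checks, the bond mode can be extracted from captured output such as the listing above; a sketch (output format assumed as shown):

```shell
# Extract bond_mode from captured `manage_ovs show_uplinks`-style output.
output='Bridge: br0
Bond: br0-up
  bond_mode: active-backup
  interfaces: eth3 eth2 eth1 eth0
  lacp: off'
bond_mode=$(printf '%s\n' "$output" | awk -F': *' '/bond_mode/ {print $2}')
echo "$bond_mode"
```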
• To show the bridges on the host, log on to any Controller VM with SSH and list the bridges:
nutanix@cvm$ manage_ovs show_bridges
• To show the configuration of an OVS bond, log on to the Acropolis host with SSH, and then
list the configuration of the bond.
root@ahv# ovs-appctl bond/show bond_name
Procedure
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.
Replace bridge with a name for the bridge. Bridge names must not exceed six (6) characters.
The output does not indicate success explicitly, so you can append && echo success to the
command. If the bridge is created, the text success is displayed.
For example, create a bridge and name it br1.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 create_single_bridge && echo success'
Note:
Procedure
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.
Create the bond by running the following command.
nutanix@cvm$ manage_ovs --bridge_name bridge --interfaces interfaces --bond_name bond_name update_uplinks
Replace bridge with the name of the bridge on which you want to create the bond. Omit the
--bridge_name parameter if you want to create the bond on the default OVS bridge br0.
Replace bond_name with a name for the bond. The default value of --bond_name is bond0.
Replace interfaces with one of the following values:
• A comma-separated list of the interfaces that you want to include in the bond. For
example, eth0,eth1.
• A keyword that indicates which interfaces you want to include. Possible keywords:
Note: If the bridge on which you want to create the bond does not exist, you must first create
the bridge. For information about creating an OVS bridge, see Creating an Open vSwitch
Bridge on page 24. The following example assumes that a bridge named br1 exists on
every host in the cluster.
VLAN Configuration
You can set up a segmented virtual network on an Acropolis node by assigning the ports on
Open vSwitch bridges to different VLANs. VLAN port assignments are configured from the
Controller VM that runs on each node.
For best practices associated with VLAN assignments, see AHV Networking Recommendations
on page 18. For information about assigning guest VMs to a VLAN, see the Web Console
Guide.
Procedure
5. Verify connectivity to the IP address of the AHV host by performing a ping test.
Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you
are logged on to the Controller VM through its public interface. To change the VLAN ID, log on to
the internal interface that has IP address 192.168.5.254.
Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a
VLAN, do the following:
Procedure
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.
Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 10.
nutanix@cvm$ change_cvm_vlan 10
new XML:
<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<model type="virtio" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
<source bridge="br0" />
<virtualport type="openvswitch" />
</interface>
CVM external NIC successfully updated.
Procedure
a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create vm network=network [vlan_mode={kAccess | kTrunked}]
[trunked_networks=networks]
• trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk.
The parameter is processed only if vlan_mode is set to kTrunked and is ignored if
vlan_mode is set to kAccess. To include the default VLAN, VLAN 0, include it in the
list of trunked networks. To trunk all VLANs, set vlan_mode to kTrunked and skip this
parameter.
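For example, a hypothetical NIC on a VM named vm1 that trunks VLANs 10 and 20 (the VM and network names are placeholders):

```
nutanix@cvm$ acli vm.nic_create vm1 network=vlan10.br1 vlan_mode=kTrunked trunked_networks=10,20
```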
Note: Both commands include optional parameters that are not directly associated with this
procedure and are therefore not described here. For the complete command reference, see
the "VM" section in the "Acropolis Command-Line Interface" chapter of the Acropolis App
Mobility Fabric Guide.
Procedure
2. Create a VLAN.
nutanix@cvm$ acli net.create name vswitch_name=bridge-name vlan=vlan-tag
CAUTION: All Controller VMs and hypervisor hosts must be on the same subnet.
Warning: Ensure that you perform the steps in the exact order indicated in this document.
Note: You can determine the ID of the host by running the following command:
nutanix@cvm$ ncli host list
Id : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee::1234
Uuid : ffffffff-gggg-hhhh-iiii-jjjjjjjjjjj
Name : XXXXXXXXXXX-X
IPMI Address : X.X.Z.3
Controller VM Address : X.X.X.1
Hypervisor Address : X.X.Y.2
...
Only the scavenger, genesis, and zookeeper processes must be running (process ID is
displayed next to the process name).
Do not continue if the CVM has failed to enter the maintenance mode, because it can cause a
service interruption.
Procedure
1. Edit the settings of port br0, which is the internal port on the default bridge br0.
f. Assign the host to a VLAN. For information about how to add a host to a VLAN, see
Assigning an Acropolis Host to a VLAN on page 26.
g. Verify network connectivity by pinging the gateway, other CVMs, and AHV hosts.
See Controller VM Access on page 8 for information about how to log on to a Controller VM.
Genesis takes a few minutes to restart.
3. Verify if the IP address of the hypervisor host has changed. Run the following nCLI command
from any CVM other than the one in the maintenance mode.
nutanix@cvm$ ncli host list
Note: You cannot manage your guest VMs after the Acropolis service is stopped.
b. Verify if the Acropolis service is DOWN on all the CVMs, except the one in the
maintenance mode.
nutanix@cvm$ cluster status | grep -v UP
2019-09-04 14:43:18 INFO cluster:2774 Executing action status on SVMs X.X.X.1, X.X.X.2,
X.X.X.3
6. Verify if all processes on all the CVMs, except the one in the maintenance mode, are in the
UP state.
nutanix@cvm$ cluster status | grep -v UP
a. From any other CVM in the cluster, run the following command to exit the CVM from the
maintenance mode.
nutanix@cvm$ ncli host edit id=host-ID enable-maintenance-mode=false
Note: The command fails if you run the command from the CVM that is in the maintenance
mode.
Do not continue if the CVM has failed to exit the maintenance mode.
a. From any CVM in the cluster, run the following command to exit the AHV host from the
maintenance mode.
nutanix@cvm$ acli host.exit_maintenance_mode new-host-ip
In the output that is displayed, ensure that node_state equals kAcropolisNormal and
schedulable equals True.
Contact Nutanix Support if any of the steps described in this document produce unexpected
results.
Each VM supports the following maximum number of disk devices on each bus:
• SCSI: 256
• PCI: 6
• IDE: 4
Compared to legacy BIOS, UEFI firmware enables VMs to:
• Boot faster
• Avoid legacy option ROM address constraints
• Include robust reliability and fault management
• Use UEFI drivers
Note:
• Nutanix supports starting VMs with UEFI firmware in an AHV cluster, but if a
VM is added to a Protection domain and is later restored on a different cluster, the
VM might lose its boot configuration. To restore the lost boot configuration, see Setting
up Boot Device.
• Nutanix also provides limited support for VMs migrated from a Hyper-V cluster.
You can create or update VMs with UEFI firmware by using aCLI commands, the Prism web
console, or the Prism Central UI. For more information about creating a VM through the Prism
web console or Prism Central UI, see the Creating a VM (AHV) section in the Prism Web Console
Guide.
Note: If you are creating a VM by using aCLI commands, you can define the location of the
storage container for UEFI firmware and variables. The Prism web console and Prism Central UI
do not provide an option to define the storage container that stores the UEFI firmware and
variables.
For more information about the supported OSes for the guest VMs, see Compatibility Matrix for
UEFI Supported VMs on page 35.
Procedure
1. Launch aCLI. To launch aCLI, log on to any Controller VM in the cluster with SSH.
nutanix@cvm$ acli
<acropolis>
3. Specify a location of the NVRAM storage container to store the UEFI firmware and variables
by running the following command.
<acropolis> vm.create test-efi uefi_boot=true nvram_container=NutanixManagementShare
Replace NutanixManagementShare with the storage container in which you want to store the
UEFI variables. If you do not specify a container, the UEFI variables are stored in a default
NVRAM container. Nutanix recommends that you choose a storage container with at least an
RF2 storage policy to ensure VM high availability in node failure scenarios. For more
information about the RF2 storage policy, see Failure and Recovery Scenarios in the Prism
Web Console Guide.
Note: When you update the location of the storage container, clear the UEFI configuration
and update the location of nvram_container to a container of your choice.
What to do next
Go to the UEFI BIOS menu and configure the UEFI firmware settings. For more information
about accessing and setting the UEFI firmware, see Getting Familiar with UEFI Firmware Menu
on page 36.
Procedure
2. Launch the console for the VM. For more information about launching the console for a VM,
see the Managing a VM (AHV) section in the Prism Web Console Guide.
4. Use the up or down arrow key to go to Device Manager and press Enter.
The Device Manager page appears.
6. In the OVMF Settings page, use the up or down arrow key to go to the Change Preferred
field and use the right or left arrow key to increase or decrease the boot resolution.
The default boot resolution is 1280x1024.
Procedure
2. Launch the console for the VM. For more information about launching the console for a VM,
see the Managing a VM (AHV) section in the Prism Web Console Guide.
3. To go to the UEFI firmware menu, press Fn+F2 on your keyboard.
5. In the Boot Manager screen, use the up or down arrow key to select the boot device and
press Enter.
The boot device is saved. After you select and save the boot device, the VM boots up with
the new boot device.
Procedure
2. Launch the console for the VM. For more information about launching the console for a VM,
see the Managing a VM (AHV) section in the Prism Web Console Guide.
3. To go to the UEFI firmware menu, press Fn+F2 on your keyboard.
5. In the Boot Maintenance Manager screen, use the up or down arrow key to go to the Auto
Boot Time-out field.
The default boot-time value is 0 seconds.
6. In the Auto Boot Time-out field, enter the boot-time value and press Enter.
The boot-time value is changed. The VM starts after the defined boot-time value.
• Nutanix does not support converting a VM that uses IDE disks or Legacy BIOS to VMs that
use Secure Boot.
Requirements
Following are the requirements for Secure Boot:
Procedure
1. Launch aCLI. To launch aCLI, log on to any Controller VM in the cluster with SSH.
nutanix@cvm$ acli
<acropolis>
Note: Specifying the machine type is required to enable the secure boot feature. UEFI is
enabled by default when the Secure Boot feature is enabled.
Procedure
1. Launch aCLI. To launch aCLI, log on to any Controller VM in the cluster with SSH.
nutanix@cvm$ acli
<acropolis>
Note:
• If you disable the Secure Boot flag alone, the machine type remains q35 unless
you explicitly change the machine type.
• UEFI is enabled by default when the Secure Boot feature is enabled. Disabling
Secure Boot does not revert the UEFI flags.
Procedure
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix
password.
The command removes the bond and then re-creates the bond with only the 10 GbE
interfaces.
5. Create a separate OVS bridge for 1 GbE connectivity. For example, create an OVS bridge
called br1 (bridge names must not exceed six characters).
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 create_single_bridge'
6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example,
aggregate them to a bond named br1-up.
nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g --bond_name br1-up
update_uplinks'
7. Log on to any Controller VM in the cluster, create a network on a separate VLAN for the
guest VMs, and associate the new bridge with the network. For example, create a network
named vlan10.br1 on VLAN 10.
nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1
8. To enable guest VMs to use the 1 GbE interfaces, log on to the web console and assign
interfaces on the guest VMs to the network.
For information about assigning guest VM interfaces to a network, see "Creating a VM" in the
Prism Web Console Guide.
Procedure
a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create vm network=network [vlan_mode={kAccess | kTrunked}]
[trunked_networks=networks]
• trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk.
The parameter is processed only if vlan_mode is set to kTrunked and is ignored if
vlan_mode is set to kAccess. To include the default VLAN, VLAN 0, include it in the
list of trunked networks. To trunk all VLANs, set vlan_mode to kTrunked and skip this
parameter.
Note: Both commands include optional parameters that are not directly associated with this
procedure and are therefore not described here. For the complete command reference, see
the "VM" section in the "Acropolis Command-Line Interface" chapter of the Acropolis App
Mobility Fabric Guide.
Note: You cannot decrease the memory allocation and the number of CPUs on your VMs while
the VMs are powered on.
Memory OS Limitations
1. On Linux operating systems, the Linux kernel might not make the hot-plugged memory
online. If the memory is not online, you cannot use the new memory. Perform the following
procedure to make the memory online.
1. Identify the memory block that is offline.
Display the state of a memory block, replacing memoryXXX with the block number.
$ cat /sys/devices/system/memory/memoryXXX/state
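As a sketch, you can locate the offline blocks and bring each one online by writing to its state
file (root privileges are required, and memoryXXX is a placeholder for the block number):
$ grep -l offline /sys/devices/system/memory/memory*/state
$ echo online > /sys/devices/system/memory/memoryXXX/state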
2. If your VM has CentOS 7.2 as the guest OS and less than 3 GB of memory, hot-plugging
memory so that the final memory size exceeds 3 GB results in a memory-overflow condition.
To resolve the issue, restart the guest OS (CentOS 7.2) with the following kernel
command-line setting:
swiotlb=force
CPU OS Limitation
On CentOS operating systems, if the hot-plugged CPUs are not displayed in /proc/cpuinfo, you
might have to bring the CPUs online. For each hot-plugged CPU, run the following command to
bring the CPU online.
$ echo 1 > /sys/devices/system/cpu/cpu<n>/online
Procedure
Replace vm-name with the name of the VM and new_memory_size with the memory size.
Replace vm-name with the name of the VM and n with the number of CPUs.
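As a sketch, assuming that the acli vm.update command on your AOS release accepts the
memory and num_vcpus parameters, the updates look like the following:
nutanix@cvm$ acli vm.update vm-name memory=new_memory_size
nutanix@cvm$ acli vm.update vm-name num_vcpus=n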
Note: After you upgrade from a version that does not support hot-plug to a version that does,
you must power cycle each VM that was instantiated and powered on before the upgrade, so
that it becomes compatible with the memory and CPU hot-plug feature. This power cycle is
required only once after the upgrade. New VMs created on the supported version are
hot-plug compatible by default.
Procedure
Replace vm_name with the name of the VM on which you want to enable vNUMA or vUMA.
Replace x with the values for the following indicated parameters:
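As an illustration, assuming that your AOS release exposes a num_vnuma_nodes parameter on
vm.create (verify the parameter name against your acli reference), a vNUMA configuration
might look like the following:
nutanix@cvm$ acli vm.create vm_name num_vnuma_nodes=x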
Note: You can configure either pass-through or a vGPU for a guest VM but not both.
This guide describes the concepts and driver installation information. For the configuration
procedures, see the Prism Web Console Guide.
Supported GPUs
The following GPUs are supported:
Note: These GPUs are supported only by the AHV version that is bundled with the AOS release.
Limitations
GPU pass-through support has the following limitations:
• Live migration of VMs with a GPU configuration is not supported. Live migration of VMs is
necessary when the BIOS, BMC, and the hypervisor on the host are being upgraded. During
these upgrades, VMs that have a GPU configuration are powered off and then powered on
automatically when the node is back up.
• VM pause and resume are not supported.
• You cannot hot add VM memory if the VM is using a GPU.
• Hot add and hot remove support is not available for GPUs.
• You can change the GPU configuration of a VM only when the VM is turned off.
• The Prism web console does not support console access for VMs that are configured with
GPU pass-through. Before you configure GPU pass-through for a VM, set up an alternative
means to access the VM. For example, enable remote access over RDP.
Removing GPU pass-through from a VM restores console access to the VM through the
Prism web console.
Note: If the specified license is not available on the licensing server, the VM starts up and
functions normally, but the vGPU runs with reduced capability.
You must determine the vGPU profile that the VM requires, install an appropriate license on the
licensing server, and configure the VM to use that license and vGPU type. For information about
licensing for different vGPU types, see the NVIDIA GRID licensing documentation.
Guest VMs check out a license over the network when starting up and return it when shutting
down. When a license is checked back in, the vGPU is returned to the vGPU resource pool.
When powered on, guest VMs use a vGPU in the same way that they use a physical GPU that is
passed through.
• Make sure that NVIDIA GRID Virtual GPU Manager (the host driver) and the NVIDIA GRID
guest operating system driver are at the same version.
• The GPUs must run in graphics mode. If any M60 GPUs are running in compute mode, switch
the mode to graphics before you begin. See the gpumodeswitch User Guide.
• If you are using NVIDIA vGPU drivers on a guest VM and you modify the vGPU profile
assigned to the VM (in the Prism web console), you might need to reinstall the NVIDIA guest
drivers on the guest VM.
Procedure
1. If you do not have an NVIDIA GRID licensing server, set up the licensing server.
See the Virtual GPU License Server User Guide.
2. Download the guest and host drivers (both drivers are included in a single bundle) from the
NVIDIA Driver Downloads page. For information about the supported driver versions, see
Virtual GPU Software for Nutanix AHV Release Notes.
3. Install the host driver (NVIDIA GRID Virtual GPU Manager) on the AHV hosts. See Installing
NVIDIA GRID Virtual GPU Manager (Host Driver) on page 51.
4. In the Prism web console, configure a vGPU profile for the VM.
To create a VM, see Creating a VM (AHV) in the "Virtual Machine Management" chapter of
the Prism Web Console Guide. To allocate vGPUs to an existing VM, see the "Managing a VM
(AHV)" topic in that Prism Web Console Guide chapter.
a. Download the NVIDIA GRID guest operating system driver from the NVIDIA portal, install
the driver on the guest VM, and then restart the VM.
b. Configure vGPU licensing on the guest VM.
This step involves configuring the license server on the VM so that the VM can request
the appropriate license from the license server. For information about configuring vGPU
licensing on the guest VM, see the NVIDIA GRID vGPU User Guide.
Note: VMs using a GPU must be powered off if their parent host is affected by this install. If
left running, the VMs are automatically powered off when the driver installation begins on their
parent host, and then powered on after the installation is complete.
Procedure
a. Copy the RPM package to a web server to which you can connect from a Controller VM
on the AHV cluster.
b. Copy the RPM package to any Controller VM in the cluster on which you want to install
the driver.
2. Log on to any Controller VM in the cluster with SSH as the nutanix user.
Following are the default credentials of the nutanix user.
user name: nutanix
password: nutanix/4u
3. If the RPM package is available on a web server, install the driver from the server location.
nutanix@cvm$ install_host_package -u url
4. If the RPM package is available on the Controller VM, install the driver from the location to
which you uploaded the driver.
nutanix@cvm$ install_host_package -r rpm
Replace rpm with the path to the driver on the Controller VM.
Windows VM Provisioning
Nutanix VirtIO for Windows
Nutanix VirtIO is a collection of drivers for paravirtual devices that enhance the stability and
performance of virtual machines on AHV.
Nutanix VirtIO is available in two formats: an ISO file and an MSI installer file.
VirtIO Requirements
Requirements for Nutanix VirtIO for Windows.
• Operating system:
Procedure
1. Go to the Nutanix Support portal and select Downloads > Tools & Firmware.
The Tools & Firmware page appears.
» If you are creating a new Windows VM, download the ISO file. The installer is available on
the ISO if your VM does not have Internet access.
» If you are updating drivers in a Windows VM, download the MSI installer file.
» For the ISO: Upload the ISO to the cluster, as described in the Web Console Guide:
Configuring Images.
» For the MSI: open the downloaded file to run the MSI installer.
4. Read and accept the Nutanix VirtIO license agreement. Click Install.
The Nutanix VirtIO setup wizard shows a status bar and completes installation.
Procedure
1. Go to the Nutanix Support portal and browse to Downloads > Tools & Firmware.
2. Use the filter search to find the latest Nutanix VirtIO ISO.
3. Download the latest VirtIO for Windows ISO to your local machine.
Note: Nutanix recommends extracting the VirtIO ISO into the same VM where you load
Nutanix VirtIO, for easier installation.
4. Upload the ISO to the cluster, as described in the Web Console Guide: Configuring Images.
5. Locate the VM where you want to install the Nutanix VirtIO ISO and update the VM.
6. Add the Nutanix VirtIO ISO by clicking Add New Disk and complete the indicated fields.
• TYPE: CD-ROM
• OPERATION: CLONE FROM IMAGE SERVICE
• BUS TYPE: IDE
• IMAGE: Select the Nutanix VirtIO ISO
7. Click Add.
8. Log on to the VM and go to Control Panel > Device Manager.
Expand the device categories and locate the devices that use Nutanix drivers. For each
device, right-click the device, select Update Driver Software, and browse to the drive that
contains the VirtIO ISO. Follow the wizard instructions until you receive installation
confirmation.
Procedure
1. Go to the Nutanix Support portal and click Downloads > Tools & Firmware.
The Tools & Firmware page appears.
Note: The installer is available on the ISO if your VM does not have Internet access.
a. Upload the ISO to the cluster as described in the Web Console Guide: Configuring Images.
b. Mount the ISO image into the CD-ROM of each VM in the cluster that you want to
upgrade.
3. If you are updating drivers in a Windows VM, select the appropriate 32-bit or 64-bit MSI.
5. Read and accept the Nutanix VirtIO license agreement. Click Install.
The Nutanix VirtIO setup wizard shows a status bar and completes installation.
Creating a Windows VM on AHV with Nutanix VirtIO (New and Migrated VMs)
Create a Windows VM in AHV, or migrate a Windows VM from a non-Nutanix source to AHV,
with the Nutanix VirtIO drivers.
• Upload the Windows installer ISO to your cluster as described in the Web Console Guide:
Configuring Images.
• Upload the Nutanix VirtIO ISO to your cluster as described in the Web Console Guide:
Configuring Images.
Procedure
Note:
The RTC of Linux VMs must be in UTC, so select the UTC timezone if you are
creating a Linux VM.
Windows VMs preserve the RTC in the local timezone, so set up the Windows
VM with the hardware clock pointing to the desired timezone.
d. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
e. MEMORY: Enter the amount of memory for the VM (in GiBs).
5. If you are creating a Windows VM, add a Windows CD-ROM to the VM.
a. Click the pencil icon next to the CD-ROM that is already present and fill out the indicated
fields.
• TYPE: CD-ROM
• OPERATION: CLONE FROM IMAGE SERVICE
• BUS TYPE: IDE
• IMAGE: Select the Nutanix VirtIO ISO.
b. Click Add.
• TYPE: DISK
• OPERATION: ALLOCATE ON STORAGE CONTAINER
• BUS TYPE: SCSI
• STORAGE CONTAINER: Select the appropriate storage container.
• SIZE: Enter the size of the hard drive (in GiB).
b. Click Add to add the disk drive.
• TYPE: DISK
• OPERATION: CLONE FROM IMAGE
• BUS TYPE: SCSI
• CLONE FROM IMAGE SERVICE: Click the drop-down menu and choose the image
you created previously.
b. Click Add to add the disk drive.
9. Optionally, after you have migrated or created a VM, add a network interface card (NIC).
What to do next
Install Windows by following Installing Windows on a VM on page 61.
Installing Windows on a VM
Install a Windows virtual machine.
Procedure
6. Select the desired language, time and currency format, and keyboard information.
10. Click Next > Custom: Install Windows only (advanced) > Load Driver > OK > Browse.
13. Select the allocated disk space for the VM and click Next.
Windows shows the installation progress, which can take several minutes.
14. Enter your user name and password information and click Finish.
After you provide the logon information, Windows setup completes the installation, which
can take several minutes.
• An unmanaged network does not perform IPAM functions and gives VMs direct access to an
external Ethernet network. Therefore, the procedure for configuring the PXE environment
for AHV VMs is the same as for a physical machine or a VM that is running on any other
hypervisor. VMs obtain boot file information from the DHCP or PXE server on the external
network.
• A managed network intercepts DHCP requests from AHV VMs and performs IP address
management (IPAM) functions for the VMs. Therefore, you must add a TFTP server and the
required boot file information to the configuration of the managed network. VMs obtain boot
file information from this configuration.
A VM that is configured to use PXE boot boots over the network on subsequent restarts until
the boot order of the VM is changed.
Procedure
1. Log on to the Prism web console, click the gear icon, and then click Network Configuration
in the menu.
The Network Configuration dialog box is displayed.
2. On the Virtual Networks tab, click the pencil icon shown for the network for which you want
to configure a PXE environment.
The VMs that require the PXE boot information must be on this network.
a. Select the Configure Domain Settings check box and do the following in the fields shown
in the domain settings sections:
• In the TFTP Server Name field, specify the host name or IP address of the TFTP server.
If you specify a host name in this field, make sure to also specify DNS settings in the
Domain Name Servers (comma separated), Domain Search (comma separated), and
Domain Name fields.
• In the Boot File Name field, specify the boot file URL and boot file name that the VMs
must use. For example, tftp://ip_address/boot_filename.bin, where ip_address is the IP
address of the TFTP server and boot_filename.bin is the name of the boot file.
b. Click Save.
4. Click Close.
Procedure
3. Create a VM.
Replace vm with a name for the VM, and replace num_vcpus and memory with the number of
vCPUs and amount of memory that you want to assign to the VM, respectively.
For example, create a VM named nw-boot-vm.
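As a sketch, assuming the standard vm.create syntax, the command might look like the
following (the vCPU count and memory size are placeholder values):
<acropolis> vm.create nw-boot-vm num_vcpus=2 memory=2G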
Replace vm with the name of the VM and replace network with the name of the network. If the
network is an unmanaged network, make sure that a DHCP server and the boot file that the
VM requires are available on the network. If the network is a managed network, configure the
DHCP server to provide TFTP server and boot file information to the VM. See Configuring
the PXE Environment for AHV VMs on page 63.
For example, create a virtual interface for VM nw-boot-vm and place it on a network named
network1.
<acropolis> vm.nic_create nw-boot-vm network=network1
Replace vm with the name of the VM and mac_addr with the MAC address of the virtual
interface that the VM must use to boot over the network.
For example, update the boot device setting of the VM named nw-boot-vm so that the VM
uses the virtual interface with MAC address 00-00-5E-00-53-FF.
<acropolis> vm.update_boot_device nw-boot-vm mac_addr=00-00-5E-00-53-FF
Replace vm_list with the name of the VM. Replace host with the name of the host on which
you want to start the VM.
For example, start the VM named nw-boot-vm on a host named host-1.
<acropolis> vm.on nw-boot-vm host="host-1"
Procedure
1. Authenticate by using your Prism user name and password or, for advanced users, the public
key that is managed through the Prism cluster lockdown user interface.
2. Use WinSCP, with SFTP selected, to connect to a Controller VM through port 2222 and start
browsing the DSF data store.
Note: The root directory displays storage containers, and you cannot change it. You can
upload files only to one of the storage containers, not directly to the root directory. To create
or delete storage containers, use the Prism web console.
Note:
• vDisk load balancing is disabled by default for volume groups that are directly
attached to VMs.
However, vDisk load balancing is enabled by default for volume groups that are
attached to VMs by using a data services IP address.
Perform the following procedure to enable load balancing of vDisks by using aCLI.
Procedure
Note: To modify an existing volume group, you must first detach all the VMs that are
attached to that volume group before you enable vDisk load balancing.
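As a sketch, assuming that the vg.update command on your AOS release accepts a
load_balance_vm_attachments parameter, the command looks like the following:
nutanix@cvm$ acli vg.update vg_name load_balance_vm_attachments=true
Replace vg_name with the name of the volume group.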
Note: You can also perform these power operations by using the V3 API calls. For more
information, see developer.nutanix.com.
Procedure
Generated Events
The following events are generated by an AHV cluster.
Event Description
VM.CREATE A VM is created.
VM.DELETE A VM is deleted.
When a VM that is powered on is deleted,
in addition to the VM.DELETE notification, a
VM.OFF event is generated.
VM.UPDATE A VM is updated.
VM.MIGRATE A VM is migrated from one host to another.
When a VM is migrated, in addition to the
VM.MIGRATE notification, a VM.UPDATE
event is generated.
SUBNET.CREATE A virtual network is created.
SUBNET.DELETE A virtual network is deleted.
SUBNET.UPDATE A virtual network is updated.
Creating a Webhook
Send the Nutanix cluster an HTTP POST request whose body contains the information essential
to creating a webhook (the events for which you want the listener to receive notifications, the
listener URL, and other information such as a name and description of the listener).
Note: Each POST request creates a separate webhook with a unique UUID, even if the data in
the body is identical. Because each webhook generates a notification when an event occurs,
duplicate webhooks result in multiple notifications for the same event. To change a webhook,
do not send another POST request with the changes. Instead, update the existing webhook. See
Updating a Webhook on page 71.
To create a webhook, send the Nutanix cluster an API request of the following form:
Procedure
POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks
{
"metadata": {
"kind": "webhook"
},
"spec": {
"name": "string",
"resources": {
"post_url": "string",
"credentials": {
"username":"string",
Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate
values for the following parameters:
The Nutanix cluster responds to the API request with a 200 OK HTTP response that contains the
UUID of the webhook that is created. The following response is an example:
{
"status": {
"state": "PENDING"
},
"spec": {
. . .
"uuid": "003f8c42-748d-4c0b-b23d-ab594c087399"
}
}
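As an illustration, you can send the request with curl, authenticating with Prism credentials
(the file webhook.json, which holds the request body, and the admin credentials shown here
are placeholders):
$ curl -k -u admin:password -X POST -H 'Content-Type: application/json' \
  -d @webhook.json https://cluster_IP_address:9440/api/nutanix/v3/webhooks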
Listing Webhooks
You can list webhooks to view their specifications or to verify that they were created
successfully.
Procedure
• To show a single webhook, send the Nutanix cluster an API request of the following form:
GET https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
Replace cluster_IP_address with the IP address of the Nutanix cluster. Replace webhook_uuid
with the UUID of the webhook that you want to show.
• To list all the webhooks configured on the Nutanix cluster, send the Nutanix cluster an API
request of the following form:
POST https://cluster_IP_address:9440/api/nutanix/v3/webhooks/list
{
"filter": "string",
"kind": "webhook",
"sort_order": "ASCENDING",
"offset": 0,
"total_matches": 0,
"sort_column": "string",
"length": 0
}
Replace cluster_IP_address with the IP address of the Nutanix cluster and specify appropriate
values for the following parameters:
Updating a Webhook
You can update a webhook by sending a PUT request to the Nutanix cluster. You can update
the name, listener URL, event list, and description.
Procedure
PUT https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID
of the webhook you want to update, respectively. For a description of the parameters, see
Creating a Webhook on page 69.
Deleting a Webhook
Procedure
DELETE https://cluster_IP_address:9440/api/nutanix/v3/webhooks/webhook_uuid
Replace cluster_IP_address and webhook_uuid with the IP address of the cluster and the UUID of
the webhook you want to delete, respectively.
Notification Format
An event notification has the same content and format as the response to the version 3.0
REST API call associated with that event. For example, the notification generated when a VM is
powered on has the same format and content as the response to a REST API call that powers
on a VM. However, the notification also contains a notification version, an event type, and an
entity reference, as shown:
{
"version":"1.0",
"data":{
"metadata":{
"status": {
"name": "string",
"providers": {},
.
.
.
"event_type":"VM.ON",
"entity_reference":{
For VM.DELETE and SUBNET.DELETE, the UUID of the entity is included but not the metadata.
License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.
Conventions
Convention Description
root@host# command The commands are executed as the root user in the vSphere or
Acropolis host shell.
> command The commands are executed in the Hyper-V host shell.
Version
Last modified: February 27, 2020 (2020-02-27T09:58:22+05:30)