AOS 5.11

Getting Started Guide


NX Series

August 5, 2019
Contents

1. What Needs To Be Done...................................................................................... 3

2.  Unpacking the Box.................................................................................................4

3.  Mounting the Block................................................................................................ 7

4. Connecting the Nodes..........................................................................................8

5. Downloading Files (ESXi or Hyper-V)...........................................................14

6.  Creating a Cluster................................................................................................. 16

7. What To Do If You Have a Problem...............................................................29

Copyright...................................................................................................................32
License......................................................................................................................................................................... 32
Conventions............................................................................................................................................................... 32
Default Cluster Credentials..................................................................................................................................32
Version......................................................................................................................................................................... 33
1. WHAT NEEDS TO BE DONE
To get an NX or SX series system up and running, perform the following steps:
1. Unpack the block and nodes.
2. Mount the block in a rack.
3. Connect each node to the network.
4. If you want to use ESXi or Hyper-V as the hypervisor, download the necessary installation
files to your laptop (or whatever you are using as a workstation). You do not need to
download any files if you intend to use AHV as the hypervisor.
5. Launch the Foundation installer and create a cluster of your new nodes.
This manual guides you through each of these steps. If you encounter a problem at any point,
see What To Do If You Have a Problem on page 29.

Video: Video demonstrations of the documented procedures are included. The following video covers physically installing the block and connecting the nodes. Additional videos appear later in this guide: the Unpacking the Box on page 4 section provides an animated view of unpacking the box; the Creating a Cluster on page 16 section covers how to use Foundation to create a cluster and how to use the Prism web console to configure the cluster after creating it; and the What To Do If You Have a Problem on page 29 section explains how to open a ticket with Nutanix customer support if you need help. (The video links appear if you are viewing this in HTML format; they do not appear if you are viewing this in PDF or EPUB format.)

Note: The following is an SX series training video, but the procedure for installing an NX series
block is similar.

Figure 1: Hardware Installation [video]

2. UNPACKING THE BOX
About this task
The first step is to unpack the physical block from the box.

Procedure

1. Take the box to the machine room where the block will be mounted.

2. Open the box and verify the contents.
The box should include the following contents:

• NX or SX series block. The nodes you ordered should already be inserted into the block.
The block chassis varies depending on the model you purchased (see Connecting the
Nodes on page 8).
• Documentation packet (including a "Read Me First" page, welcome letter, global
regulatory compliance document, end user license agreement, warranty document, and
RoHS/REACH declaration)
• C-13/C-14 adapter power cables (two)
• [optional] Network SFP+ cables (one for each data port if ordered)

Note: If you do not order network cables from Nutanix, you must provide your own cables.

• [optional] SFP+ transceiver (if ordered)


• Rack-mounting rails (left and right)
• Rack adapter for square hole to round hole
• Bezel

Figure 2: Unpacking Box [animation]

A video on this topic is available on Vimeo: https://vimeo.com/unique_3_Connect_42_171493599

Figure 3: Sample Box Contents
3. MOUNTING THE BLOCK
About this task
Any Nutanix block is shipped with all necessary hardware to mount the block into a rack. The
block should be installed in a four-post rack that is located in a restricted access location (such
as a dedicated equipment room or service closet) with proper temperature, airflow, power, and
structural support.
The steps to mount a block vary depending on the model type. See the Nutanix Rack Mounting Guide for the instructions for your model.

4. CONNECTING THE NODES
After the block is installed in a rack, the nodes must be connected to the network and powered
on. There are many NX and SX series models, but they all come in one of the following chassis
form factors: single node (1U1N or 2U1N), two node (2U2N), or four node (2U4N). All chassis are
2U in size except for some single node models that come in a 1U size. Follow the procedure for
your chassis form factor.

Note: The following are generic procedures for each chassis form factor. Your model might vary
slightly from what is described.

Single Node Chassis (1U1N)


To connect and start up a node housed in a 1U1N chassis, do the following:
1. Connect the data-only 1GbE port to your network switch.
Each node includes a dedicated 1 GbE IPMI port (middle of the chassis) and two 1 GbE ports
to the left of the IPMI port, one a shared IPMI/data port and the other a data-only port. There
are also (optionally) up to four 10 GbE ports (far right). Connect to the data-only 1GbE port,
which is labeled Port 2 in the following figure. Your network switch must allow untagged
traffic and IPv6 multicast between the nodes, the IPMI interface, and the workstation from
which you run the installation. If in doubt, use an unmanaged switch that is not connected to your production network. (A quick way to check the IPv6 multicast requirement is shown after this procedure.) When you are done, you may plug in the remaining cables and remove the 1GbE cable if you so choose.

Figure 4: Back Panel View (1U1N)

2. Plug in the two power cables (back), press the power button on the control panel (front),
and check the LED lights to verify the node is powered on.

Figure 5: Control Panel View (1U1N)
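Node discovery relies on IPv6 multicast on the local network segment, which is why the switch requirement above exists. The following is only a quick sanity check from a Linux or macOS workstation attached to the same switch, not a Nutanix-documented procedure; eth0 is a placeholder for your actual interface name (on a Windows laptop, ping -6 ff02::1%<interface index> is the rough equivalent):

user@host$ ping6 -c 3 ff02::1%eth0

IPv6-enabled hosts on the segment, including powered-on Nutanix nodes, typically answer from their link-local (fe80::) addresses. If nothing other than your own interface replies, check the switch configuration or use an unmanaged switch as described above.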

Single Node Chassis (2U1N)


To connect and start up a node housed in a 2U1N chassis, do the following:
1. Connect the data-only 1GbE port to your network switch.
Each node includes one 10/100 IPMI port (middle of the chassis) and four 1 GbE ports to
the left of the IPMI port. The bottom left 1 GbE port is a shared data/IPMI port. The other
three are data-only ports. Connect to any of the data-only 1 GbE ports, which are labeled
Port 2, Port 3, and Port 4 in the following figure. Your network switch must allow untagged
traffic and IPv6 multicast between the nodes, the IPMI interface, and the workstation from
which you run the installation. If in doubt, use an unmanaged switch not connected to your
production network. When you are done, you may plug in the remaining cables.

Figure 6: Back Panel View (2U1N)


2. Plug in the two power cables (back), press the power button on the control panel (front),
and check the LED lights to verify the node is powered on.

Figure 7: Control Panel View (2U1N)

Two Node Chassis (2U2N)


To connect and start up nodes housed in a 2U2N chassis, do the following:
1. Connect the data-only 1GbE port to your network switch.
Each node includes a dedicated 1 GbE IPMI port (far left) and two 1 GbE ports, a shared IPMI/
data port (bottom) and a data-only port (top), to the right of the IPMI port. There are also
two 10 GbE ports (far right). The port you should use is the top 1GbE port. Your network
switch must allow untagged traffic and IPv6 multicast between the nodes, the IPMI interface,
and the workstation from which you run the installation. If in doubt, use an unmanaged
switch not connected to your production network. When you are done, you may plug in the
remaining cables and remove the 1GbE cable if you so choose.

Figure 8: Back Panel View (2U2N)


2. Plug in the two power cables (back), press the power button on the control panel for each
node (front), and check the LED lights to verify each node is powered on.

Figure 9: Control Panel View (2U2N and 2U4N)

Four Node Chassis (2U4N)
To connect and start up nodes housed in a 2U4N chassis, do the following:
1. Connect the data-only 1GbE port to your network switch.
Each node includes one 10/100 IPMI port (far left) and two 1 GbE ports to the right of the
IPMI port. There are also two 10 GbE ports to the right of the 1 GbE ports on each node. The
top 1GbE port is the data-only port. (The bottom 1 GbE port is a shared IPMI/data port.)
Your network switch must allow untagged traffic and IPv6 multicast between the nodes, the
IPMI interface, and the workstation from which you run the installation. If in doubt, use an
unmanaged switch not connected to your production network. When you are done, you may
plug in the remaining cables and remove the 1GbE cable if you so choose.

Figure 10: Back Panel View (2U4N)

2. Plug in the two power cables (back), press the power button on the control panel for each
node (front), and check the LED lights to verify each node is powered on.

Figure 11: Control Panel View (2U2N and 2U4N)

5. DOWNLOADING FILES (ESXI OR HYPER-V)
About this task
Nodes from the factory come with the AHV hypervisor and AOS installed, so you do not need
to download any files if you intend to use AHV. (Skip to Creating a Cluster on page 16.)
However, if you want to use ESXi or Hyper-V as the hypervisor, you must download both a
hypervisor ISO image and a new AOS installation bundle as follows:

Procedure

1. Open a web browser and log on to the Nutanix support portal: http://portal.nutanix.com.

2. Click Downloads > AOS (NOS) from the main menu (at the top).

Figure 12: Downloads Menu

3. In the AOS screen, click the Download <version#> button to download the AOS installation bundle named nutanix_installer_package-version#.tar.gz to any convenient location on your laptop. (If you want to verify the download, see the optional check at the end of this procedure.)

Note: There are several AOS versions you can download from this page, but choose
the current one (the one you get from clicking the Download <version#> button) unless
instructed to use a different version by Nutanix customer support.

Figure 13: AOS Download Page

4. Download the desired ESXi or Hyper-V ISO image to any convenient location on your laptop.
Hyper-V and ESXi ISOs are not available from the support portal; you must provide the
hypervisor ISO file. See the "Hypervisor ISO Images" section in the Field Installation Guide for
more information about the supported ISO images.

Note: Only standard ISO images provided by VMware or Microsoft are supported; do not
attempt to use a custom ISO image.
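Before you continue, you can optionally confirm that the AOS bundle downloaded completely. The support portal typically lists an MD5 checksum next to each release; if one is shown for your version, compare it with the checksum of the local file (the file name below is the generic pattern from step 3, with version# standing in for the actual version string):

user@host$ md5sum nutanix_installer_package-version#.tar.gz

If the computed value does not match the value on the portal, download the bundle again before starting Foundation.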

6. CREATING A CLUSTER
Before you begin

• Connect your laptop (or other workstation device) to the same subnet as the nodes you just
installed.
• Install Java version 8 (1.8.0) or later on your laptop. The applet might not work properly with an earlier Java version. (A quick way to check your installed version is shown after this list.)
• Determine the appropriate parameter values required for installation, which include the
gateway and DNS server IP addresses, the cluster name and (if needed) virtual IP address,
and the Controller VM, hypervisor, and (if needed) IPMI IP address ranges for the nodes.
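If you are not sure which Java version is installed on your laptop, you can check from a terminal or command prompt before downloading the applet. This is a generic operating system check rather than a Nutanix-specific step; a reported version of 1.8.0 or later satisfies the requirement:

user@host$ java -version

If the command is not found or reports an earlier version, install a current Java runtime before continuing.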

About this task


A software installation tool called Foundation is used to image the nodes (install the hypervisor
and Nutanix Controller VM) and create a cluster. You start this process by downloading and
running a Java applet from the Nutanix support portal.

Note: The following is an Xpress series training video, but the Foundation installation procedure
is the same for the NX series.

Figure 14: Software Installation (Foundation) [video]

Procedure

1. Open a browser on your laptop, log in to the Nutanix support portal (http://
portal.nutanix.com), and do the following:

a. Select Downloads > Foundation from the main menu.


b. Click the discovery applet link.
This downloads a bundle named FoundationApplet-offline.zip that contains the Java
applet.

Figure 15: Foundation Downloads Screen


c. Extract the contents of the downloaded bundle (FoundationApplet-offline.zip), and then start the Foundation applet by double-clicking nutanix_foundation_applet.jnlp. (An equivalent command-line sequence is sketched after the following note.)

Note: A security warning message may appear indicating that the application is from an unknown source. Click the accept and run buttons to run the application. In some cases, you might also need to add a security exception for applets that launch from the local machine: in the Java security dialog, remove any URLs you have added that begin with "file" and then re-launch the applet. If the applet then complains that you need to add an exception for "file://", add an exception for "file:/" (not "file://") and re-launch the applet.
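If you prefer the command line, or if double-clicking the .jnlp file does nothing, the same steps can be run from a terminal. This is a sketch that assumes the unzip utility and the Java Web Start launcher (javaws) are available; javaws ships with Oracle Java 8 but is absent from some newer or OpenJDK-based runtimes, and the extraction directory name used here is arbitrary:

user@host$ unzip FoundationApplet-offline.zip -d foundation_applet
user@host$ javaws foundation_applet/nutanix_foundation_applet.jnlp

Adjust the path to the .jnlp file if the archive extracts into a subfolder. If javaws is not available, use the double-click method described in step c.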

2. A window appears with a list of discovered nodes. Select (click the line for) one of the nodes
you installed and then click the Launch Foundation button. This launches the Foundation
GUI.

Note: "Discovered nodes" are factory-prepared nodes on the same subnet. If your laptop is not on the same subnet as the nodes you installed, the nodes will not be discovered. (A quick way to check which subnet your laptop is on appears after the following figure.)

If there are additional Nutanix nodes on the subnet (as in the following figure), they also
appear in the list. Just make sure you select one of the new nodes you just installed.

Figure 16: Foundation Launcher Window
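If no nodes appear in the list, first confirm that your workstation's active interface has an address on the subnet you connected the nodes to. This is a generic operating system check, not part of the Foundation workflow; on a Windows laptop, use ipconfig instead:

user@host$ ip addr show

Compare the interface address and prefix length with the subnet you planned for the nodes, and reconnect or reconfigure the laptop if they do not match.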

3. In the Discovered Nodes screen, select the nodes you installed and then click the Next
button (bottom right).
Clicking an unchecked box for a node selects that node (and adds a check mark) and vice
versa. If the only nodes on the subnet are the ones in the new block you installed, those are
the only nodes that appear on this screen, and all of them should be checked. However,
if there are additional Nutanix nodes (as in the following figure), all the available blocks
and nodes are displayed. In this case make sure only the nodes you want in the cluster are
selected. An exclamation mark icon is displayed for unavailable (already in a cluster) nodes.
If a discovered node has a VLAN tag, that tag is displayed.

Note: This document goes through a basic installation that should be appropriate in most
cases. However, there are options on this (and subsequent) screens, for example changing the
redundancy factor setting, that are not described here. See the "Creating a Cluster" chapter in
the Field Installation Guide for information about all the options.

Figure 17: Discovery Screen

4. In the Define Cluster configuration screen, do the following in the indicated fields.

Figure 18: Define Cluster Screen

a. Cluster Name: Enter a name for the cluster.


b. IP Address: Enter an external (virtual) IP address for the cluster.
This field sets a logical IP address that always points to an active Controller VM
(provided the cluster is up). This parameter is required for Hyper-V clusters and is
optional for ESXi and AHV clusters.
c. NTP Server Address: Enter the NTP server IP address or (pool) domain name.
d. DNS Server IP: Enter the DNS server IP address.
e. (optional) Check the Configure IPMI IP box to specify an IPMI address.
When this box is checked, fields for IPMI global network parameters appear. Foundation
does not require an IPMI connection, so this information is not required. However, you
can use this option to configure IPMI for your use.
f. Netmask: Enter a netmask value for the Controller VMs and hypervisor.
g. Gateway: Enter an IP address for the gateway used by the Controller VMs and
hypervisor.
h. CVM Memory: Select a memory size for the Controller VMs from the pull-down list.
This field is set initially to Default. Leave the default setting unless you are instructed to
use a different setting by Nutanix customer support.

Note: The following four fields appear only if the Configure IPMI IP box was checked.

i. IPMI Netmask: Enter the IPMI netmask value.


j. IPMI Gateway: Enter an IP address for the gateway.
k. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.
l. IPMI Password: Enter the IPMI password. The default password is ADMIN.
Check the show password box to display the password.
m. (optional) Check the Enable Testing box.
This runs the Nutanix Cluster Check (NCC) immediately after the cluster is created. NCC is a test suite that checks a variety of cluster health metrics (as reflected in step 9). This is optional because NCC can be run at any time after cluster creation (see the example after this list), and the results can be difficult for an untrained person to interpret.
n. Click the Next button (lower right).
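For reference, NCC can also be run manually after the cluster is up. A minimal sketch, assuming SSH access to any Controller VM as the nutanix user (see Default Cluster Credentials at the end of this guide); the command runs the full set of health checks:

nutanix@cvm$ ncc health_checks run_all

The results are printed to the terminal and also written to a log file on the Controller VM; see the Prism Web Console Guide for help interpreting individual checks.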

5. In the Setup Node configuration screen, do the following in the indicated fields.

Figure 19: Setup Node Screen

a. Hypervisor Hostname: Enter a base host name for the set of nodes. Host names should
contain only digits, letters, and hyphens.
The base name with a suffix of "-1" is assigned as the host name of the first node, and the base name with "-2", "-3", and so on is assigned automatically to the remaining nodes. (A worked example of how names and addresses are assigned follows step d.)
b. CVM IP: Enter the starting IP address for the set of Controller VMs across the nodes.
Enter a starting IP address in the FROM/TO line of the CVM IP column. The entered
address is assigned to the Controller VM of the first node, and consecutive IP addresses
(sequentially from the entered address) are assigned automatically to the remaining
nodes.

CAUTION: The use of a DHCP server is not supported for Controller VMs, so make sure to
assign static IP addresses to Controller VMs.

c. Hypervisor IP: Repeat the previous step for this field.


This sets the hypervisor IP addresses for all the nodes.

CAUTION: The Nutanix high availability features require that both hypervisor and
Controller VM be in the same subnet. Putting them in different subnets reduces the failure
protection provided by Nutanix and can lead to other problems. Therefore, it is strongly
recommended that you keep both hypervisor and Controller VM in the same subnet.

d. IPMI IP (when enabled): Repeat the previous step for this field.
This sets the IPMI port IP addresses for all the nodes. This column appears only when IPMI
is enabled on the previous cluster setup screen.
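For illustration only, suppose a four-node block with a base host name of nx-demo, a starting CVM IP of 10.1.1.11, a starting hypervisor IP of 10.1.1.21, and a starting IPMI IP of 10.1.1.31 (all of these values are hypothetical placeholders, not recommendations). Foundation would assign values along these lines, in the node order shown on the screen:

Node 1: host name nx-demo-1, CVM IP 10.1.1.11, hypervisor IP 10.1.1.21, IPMI IP 10.1.1.31
Node 2: host name nx-demo-2, CVM IP 10.1.1.12, hypervisor IP 10.1.1.22, IPMI IP 10.1.1.32
Node 3: host name nx-demo-3, CVM IP 10.1.1.13, hypervisor IP 10.1.1.23, IPMI IP 10.1.1.33
Node 4: host name nx-demo-4, CVM IP 10.1.1.14, hypervisor IP 10.1.1.24, IPMI IP 10.1.1.34

Any individual value can then be corrected in the Manual Input section, as described in the next step.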
e. In the Manual Input section, review the assigned host names and IP addresses. If any
of the names or addresses are not correct, enter the desired name or IP address in the
appropriate field.
There is a section for each block with a line for each node in the block. The letter
designation (A, B, C, and D) indicates the position of that node in the block.
f. When all the host names and IP addresses are correct, click the Validate Network button
at the bottom of the screen.
This does a ping test to each of the assigned IP addresses to check whether any of those
addresses are being used currently.

• If there are no conflicts (none of the addresses return a ping), the process continues.
• If there is a conflict (one or more addresses returned a ping), this screen reappears with the conflicting addresses highlighted in red. Installation will not continue until the conflict is resolved. (A manual way to track down a conflicting device is sketched after this list.)
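If validation reports a conflict and you want to identify the device that currently owns an address, you can probe it manually from your workstation. This is only a rough sketch for a Linux or macOS workstation; 10.1.1.11 is purely a placeholder address:

user@host$ ping -c 2 10.1.1.11
user@host$ arp -n 10.1.1.11

The MAC address reported by arp can help you locate the conflicting device. Resolve the conflict (or choose different addresses) and then click Validate Network again.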

6. In the Select Images configuration screen, do the following in the indicated fields.

Figure 20: Select Images Screen

a. Do one of the following:

• If AHV is the hypervisor, you do not need to enter any information in this screen, so go
to step (d).
• If ESXi or Hyper-V is the hypervisor, click the Upload Tarball button in the Acropolis
column (left), and then click the Choose File button. In the file search window, find
and select the AOS installation bundle you downloaded earlier (see Downloading Files
(ESXi or Hyper-V) on page 14) and then click the Upload button. Uploading an AOS or
hypervisor image file may take some time (possibly a few minutes).

Figure 21: File Selection Buttons


b. [ESXi or Hyper-V only] Select the hypervisor (ESX or HYPERV) in the Hypervisor column,
click the Upload ISO button, and then click the Choose File button. In the file search
window, find and select the ESXi or Hyper-V ISO image you downloaded earlier and then
click the Upload button.

Note: The hypervisor field is not active until the AOS installation bundle is selected
(uploaded) in the previous step.

Only approved hypervisor versions are permitted. To verify your version is on the
approved list, click the See Whitelist link and select the appropriate hypervisor tab in the
pop-up window.
c. [Hyper-V only] Click the option in the SKU (right) column for the Hyper-V version to use.
Five Hyper-V versions are supported: Free, Standard, Datacenter, Standard with GUI,
Datacenter with GUI. This column appears only when you select Hyper-V.
d. Do one of the following:

• If the hypervisor is AHV, click the Skip button. This creates the new cluster without
imaging the nodes.
• If the hypervisor is ESXi or Hyper-V, click the Create button at the bottom of the
screen. This images the nodes and then creates the new cluster.

7. Node imaging (or just creating a cluster) begins and the Cluster Create screen appears.
Monitor the node imaging and cluster creation progress. This screen includes the following
sections:

• Progress bar at the top (blue during normal processing or red when there is a problem).
• Cluster Creation Status section with a line for the cluster being created (status indicator,
cluster name, progress message, and log link).
• Node Status section with a line for each node being imaged (status indicator, hypervisor
IP address, progress message, and log link).

Figure 22: Foundation Progress Screen: Ongoing

The status message for each node (in the Node Status section) displays the imaging
percentage complete and current step. When installation moves to cluster creation, the
status message displays the percentage complete and current step. The full process
(imaging the nodes, creating the cluster, and running the NCC tests) takes about an hour
depending on the hypervisor. You can monitor overall progress by clicking the Log link at
the top, which displays the foundation.out contents in a separate tab or window. Click the
Log link for a node or the cluster to display that log file in a separate tab or window.

8. When processing completes successfully, do the following:
1. If you ran the NCC tests (see step 4m) and want to see the results, click the here link to
view those results.
2. If you want to save the log files, click the Export Logs link to download the
log_archive.tar file that contains all the log files.
3. Click Prism to open the Prism web console. Prism is the user interface tool for configuring
and monitoring a cluster.

Note: If processing does not complete successfully, see the "Troubleshooting" section in the Field Installation Guide. For more information about logging in to Prism, see the "Logging Into the Web Console" section of the Prism Web Console Guide.

Figure 23: Foundation Progress Screen: Successful Installation

9. In the Home dashboard of Prism (opening screen), check the cluster status in the Health
section of the dashboard.
A green heart indicates the cluster is healthy. A yellow (warning) or red (critical) heart
indicates a potential problem that should be investigated. Click the heart icon to display the
Health dashboard, which provides more information about any potential problems. (You can
also display the Health dashboard by selecting Health from the dashboard pull-down list.)
See the "Health Monitoring" chapter in the Prism Web Console Guide for more information.

Figure 24: Home Dashboard

What to do next
After verifying that the cluster is healthy, you can begin configuring it. Cluster configuration steps done soon after creating a cluster typically include specifying an outgoing SMTP server, enabling the remote support tunnel, enabling Pulse (which sends cluster status information automatically to Nutanix customer support), adding a list of alert email recipients, and creating a storage pool and one or more storage containers. (If AHV is the hypervisor, a storage pool and one storage container are created automatically when the cluster is created.) See the Prism Web Console Guide for instructions on how to do these and other configuration tasks. If you need to reset the timezone, see the nCLI cluster set-timezone command in the Acropolis Command Reference; a sketch of its use follows.
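The general form of the command is shown below for reference; confirm the exact syntax and the list of valid timezone names in the Acropolis Command Reference. America/Los_Angeles is just an example value:

ncli> cluster set-timezone timezone=America/Los_Angeles

Depending on the AOS version, cluster services may need to be restarted for the change to fully take effect; see the command reference for details.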

Note: The following is an Xpress series training video, but the procedure for configuring a cluster
through Prism is the same for the NX series.

Figure 25: Cluster Configuration (Prism) [video]


7. WHAT TO DO IF YOU HAVE A PROBLEM
If you encounter a problem, check the documents cited in this section. All of these documents
are posted on the Nutanix customer support portal (click the Documentation button on the
opening screen or select Documentation > Product Documentation from the main menu).

Hardware Issues

• If you have a hardware-related issue, see the NX and SX Series Hardware Administration and
Reference for detailed information about your NX series platform.
• If any part of the shipment is missing or damaged, contact Nutanix customer support for
assistance (see "Contacting Nutanix Customer Support" at the end of this section).

Network Issues

• If the Foundation installer cannot find the nodes or other network-related issues occur, see
the "Network Requirements" section in the Field Installation Guide. (Connecting the IPMI
port, which is described in that section, is not necessary in this case.)

File Download Issues

• If you have trouble logging on to the Nutanix support portal, see the "Logging Into the Web
Console" section in the Prism Web Console Guide.
• If you have trouble downloading a file from the Nutanix support portal, see the
"Downloading Installation Files" section in the Field Installation Guide.
• If you are not sure what ESXi and Hyper-V ISOs are supported or where to get them, see the
"Hypervisor ISO Images" section in the Field Installation Guide.

Cluster Creation Issues

• If you want more information about any of the fields in the Foundation installer screens,
see the appropriate section for each screen in the "Creating a Cluster" chapter in the Field
Installation Guide.
• If you run into a problem when trying to image the nodes or create the cluster, see the
"Troubleshooting" chapter in the Field Installation Guide.
• If you want more information about the cluster health checks, see the "Health Dashboard"
and "Alerts/Health Checks" sections in the Prism Web Console Guide.

Contacting Nutanix Customer Support


If none of the documents provide a solution to the problem, open a Nutanix customer support
case as follows:

Figure 26: Opening a Support Case [video]

1. Open a browser and log on to the Nutanix customer support portal (https://
portal.nutanix.com).

Note: You need a login account to open a case on the Nutanix customer support portal.
Follow the instructions on the login screen if you do not have a login account.

2. Click the Open Case (+) button (or select Support & Forums > Open Case from the main
menu).

Figure 27: Support Portal Opening Screen

3. Enter appropriate information in the Create a New Case fields and then click the Submit
button.

Figure 28: Create Case Screen

COPYRIGHT
Copyright 2019 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the
United States and/or other jurisdictions. All other brand and product names mentioned herein
are for identification purposes only and may be trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.

Conventions

Convention            Description

variable_value        The action depends on a value that is unique to your environment.

ncli> command         The commands are executed in the Nutanix nCLI.

user@host$ command    The commands are executed as a non-privileged user (such as nutanix) in the system shell.

root@host# command    The commands are executed as the root user in the vSphere or Acropolis host shell.

> command             The commands are executed in the Hyper-V host shell.

output                The information is displayed as output from a command or in a log file.

Default Cluster Credentials

Interface                  Target                                          Username        Password

Nutanix web console        Nutanix Controller VM                           admin           Nutanix/4u

vSphere Web Client         ESXi host                                       root            nutanix/4u

vSphere client             ESXi host                                       root            nutanix/4u

SSH client or console      ESXi host                                       root            nutanix/4u

SSH client or console      AHV host                                        root            nutanix/4u

SSH client or console      Hyper-V host                                    Administrator   nutanix/4u

SSH client                 Nutanix Controller VM                           nutanix         nutanix/4u

SSH client                 Nutanix Controller VM                           admin           Nutanix/4u

SSH client or console      Acropolis OpenStack Services VM (Nutanix OVM)   root            admin

Version
Last modified: August 5, 2019 (2019-08-05T14:15:11+05:30)
