Acropolis 5.1
27-Apr-2017
Notice
Copyright
Copyright 2017 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions

Convention            Description
user@host$ command    The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command    The commands are executed as the root user in the vSphere or Acropolis host shell.
> command             The commands are executed in the Hyper-V host shell.
Version
Last modified: April 27, 2017 (2017-04-27 23:16:28 GMT-7)
1 Cluster IP Address Configuration
AOS includes a web-based configuration tool that automates assigning IP addresses to cluster
components and creates the cluster.
Requirements
The web-based configuration tool requires that IPv6 link-local be enabled on the subnet. If IPv6 link-local is
not available, you must configure the Controller VM IP addresses and the cluster manually. The web-based
configuration tool also requires that the Controller VMs be able to communicate with each other.
All Controller VMs and hypervisor hosts must be on the same subnet. The hypervisor can be multihomed
provided that one interface is on the same subnet as the Controller VM.
Guest VMs can be on a different subnet.
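The same-subnet requirement above can be checked mechanically. The sketch below (placeholder addresses throughout) ANDs each address with the netmask and compares the resulting network addresses:

```shell
# Bitwise-AND an IPv4 address with a netmask, octet by octet.
ip_and() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  IFS=. read -r m1 m2 m3 m4 <<EOF
$2
EOF
  echo "$((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
}

# Succeeds if both addresses fall in the same subnet for the given netmask.
same_subnet() {
  [ "$(ip_and "$1" "$3")" = "$(ip_and "$2" "$3")" ]
}

# Placeholder Controller VM and hypervisor host addresses.
if same_subnet 10.1.64.60 10.1.64.10 255.255.255.0; then
  echo "same subnet"
else
  echo "different subnet"
fi
```

For the placeholder values, both addresses reduce to the network address 10.1.64.0, so they satisfy the requirement.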
a. Connect to the ESXi host with the IPMI remote console or by attaching a keyboard and monitor.
c. Press the down arrow key until Configure Management Network is highlighted and then press
Enter.
f. Press Esc and then Y to apply all changes and restart the management network.
c. Click Networking.
g. Click Close.
1. Connect to the Hyper-V host with the IPMI remote console or by attaching a keyboard and monitor.
2. Start PowerShell.
2. Assign port br0 (the internal port on the default OVS bridge, br0) to the VLAN that you want the host to be
on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag
5. Verify connectivity to the IP address of the AHV host by performing a ping test.
• Confirm that the system you are using to configure the cluster meets the following requirements:
• IPv6 link-local enabled.
• Windows 7, Windows Vista, or Mac OS.
• (Hyper-V only) Confirm that the hosts have only one type of NIC (10 GbE or 1 GbE) connected during
cluster creation. If the nodes have multiple types of network interfaces connected, disconnect them until
after you join the hosts to the domain.
Note: This procedure has been deprecated (superseded) in AOS 4.5 and later releases. Instead,
use the Foundation tool to configure a cluster. See the "Creating a Cluster" topics in the Field
Installation Guide for more information.
Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)
The value of the inet6 addr field up to the / character is the IPv6 address of the Controller VM.
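If you need to script this step, the address (minus the / character and prefix length) can be extracted with sed. The sample inet6 line below is illustrative:

```shell
# Sample inet6 line as printed by ifconfig on a Controller VM (illustrative).
line="    inet6 addr: fe80::20c:29ff:fe12:3456/64 Scope:Link"

# Keep everything after "inet6 addr: " and before the "/" character.
ipv6_addr=$(echo "$line" | sed -n 's/.*inet6 addr: *\([^/ ]*\)\/.*/\1/p')
echo "$ipv6_addr"
```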
4. Type a virtual IP address for the cluster in the Cluster External IP field.
This parameter is required for Hyper-V clusters and is optional for vSphere and AHV clusters.
You can connect to the external cluster IP address with both the web console and nCLI. If a Controller
VM restarts or fails, the external cluster IP address is relocated to another Controller VM in the cluster.
5. (Optional) If you want to enable redundancy factor 3, set Cluster Max Redundancy Factor to 3.
Redundancy factor 3 has the following requirements:
• Redundancy factor 3 can be enabled only when the cluster is created.
• A cluster must have at least five nodes for redundancy factor 3 to be enabled.
• For guest VMs to tolerate the simultaneous failure of two nodes or drives in different blocks, the data
must be stored on storage containers with replication factor 3.
• Controller VMs must be configured with 24 GB of memory.
6. Type the appropriate DNS and NTP addresses in the respective fields.
Note: You must enter NTP servers that the Controller VMs can reach in the CVM NTP Servers
field. If reachable NTP servers are not entered or if the time on the Controller VMs is ahead of
the current time, cluster services may fail to start.
For Hyper-V clusters, the CVM NTP Servers parameter must be set to the IP addresses of one or more
Active Directory domain controllers.
The Hypervisor NTP Servers parameter is not used in Hyper-V clusters.
8. Type the appropriate default gateway IP addresses in the Default Gateway row.
9. Select the check box next to each node that you want to add to the cluster.
Note: The unconfigured nodes are not listed according to their position in the block. Ensure
that you assign the intended IP address to each node.
If the cluster is running properly, output similar to the following is displayed for each node in the cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848, 10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
SecureFileSync UP [7710, 7761, 7762, 7763]
Medusa UP [8029, 8073, 8074, 8176, 8221]
DynamicRingChanger UP [8324, 8366, 8367, 8426]
Pithos UP [8328, 8399, 8400, 8418]
Hera UP [8347, 8408, 8409, 8410]
Stargate UP [8742, 8771, 8772, 9037, 9045]
InsightsDB UP [8774, 8805, 8806, 8939]
InsightsDataTransfer UP [8785, 8840, 8841, 8886, 8888, 8889,
8890]
Ergon UP [8814, 8862, 8863, 8864]
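When scripting a health check against this output, you can count service lines that are not UP. The sketch below reuses an abridged fragment of the output above:

```shell
# Sample fragment of cluster status output (abridged).
status_output='CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848, 10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
Stargate UP [8742, 8771, 8772, 9037, 9045]'

# Count service lines whose second field is not UP (the CVM header line is skipped).
down=$(echo "$status_output" | awk '!/^CVM:/ && $2 != "UP" { count++ } END { print count+0 }')

if [ "$down" -eq 0 ]; then
  echo "all services UP"
else
  echo "$down service(s) not UP"
fi
```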
→ Mac OS
$ ifconfig en0
Note the IPv6 link-local addresses, which always begin with fe80. Omit the / character and anything
following it.
→ Linux/Mac OS
$ ping6 ipv6_linklocal_addr%interface
• Replace ipv6_linklocal_addr with the IPv6 link-local address of the other laptop.
• Replace interface with the interface identifier on the other laptop (for example, 12 for Windows, eth0
for Linux, or en0 for Mac OS).
If the ping packets are answered by the remote host, IPv6 link-local is enabled on the subnet. If the
ping packets are not answered, ensure that firewalls are disabled on both laptops and try again before
concluding that IPv6 link-local is not enabled.
5. Reenable the firewalls on the laptops and disconnect them from the network.
Results:
• If IPv6 link-local is enabled on the subnet, you can use the automated IP address and cluster
configuration utility.
• If IPv6 link-local is not enabled on the subnet, you must set the IP addresses and create the cluster
manually.
Note: IPv6 connectivity issues might occur if there is a VLAN tag mismatch. ESXi hosts shipped
from the factory have no VLAN tagging configured (VLAN tag 0), while the workstation (laptop)
might be connected to an access port that uses a different VLAN tag. To avoid this mismatch,
ensure that the ESXi port is in trunking mode.
4. Create the cluster by following Creating the Cluster (Manual) on page 15.
If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848, 10977, 10992]
Replace cluster_name with a name for the cluster chosen by the customer.
Replace dns_server with the IP address of a single DNS server or with a comma-separated list of
DNS server IP addresses.
Replace ntp_server with the IP address or host name of a single NTP server or with a comma-
separated list of NTP server IP addresses or host names.
e. (Hyper-V only) Add a record for the cluster external IP address to the domain DNS server.
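For reference, the replacements above apply to a cluster create invocation of roughly the following shape. This is a sketch with placeholder values; confirm the exact flag names against the documentation for your AOS version:

```shell
# Placeholder values (hypothetical; substitute your own).
cluster_name="poc-cluster"
dns_server="10.1.64.2"
ntp_server="10.1.64.3"
cvm_ips="10.1.64.60,10.1.64.61,10.1.64.62"

# Assemble the create command as it would be run from a Controller VM (sketch).
cmd="cluster -s $cvm_ips --cluster_name=$cluster_name \
--dns_servers=$dns_server --ntp_servers=$ntp_server create"
echo "$cmd"
```

If redundancy factor 3 is wanted, --redundancy_factor=3 is likewise placed before create.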
1. Log on to any Controller VM on the same subnet as the Controller VMs that you want to include in the
new cluster.
This can be a Controller VM that is already part of a cluster. Connect to the IPv6 address because
IPv4 connectivity is lost during configuration.
2. Create a JSON file that defines the networking configuration for the new cluster.
{
"Subnet Mask": {
"Controller": "Subnet mask",
"Hypervisor": "Subnet mask",
"IPMI": "Subnet mask"
},
"Default Gateway": {
"Controller": "IPv4 address",
"Hypervisor": "IPv4 address",
"IPMI": "IPv4 address"
},
"IP Addresses": {
"block_serial_number/A": {
"Controller": "IPv4 address",
"Hypervisor": "IPv4 address",
"IPMI": "IPv4 address"
},
"block_serial_number/B": {
"Controller": "IPv4 address",
"Hypervisor": "IPv4 address",
"IPMI": "IPv4 address"
},
"block_serial_number/C": {
"Controller": "IPv4 address",
"Hypervisor": "IPv4 address",
"IPMI": "IPv4 address"
},
"block_serial_number/D": {
"Controller": "IPv4 address",
"Hypervisor": "IPv4 address",
"IPMI": "IPv4 address"
}
}
}
The IP Addresses block requires one entry for each node that you want to include in the cluster
(minimum 3 nodes). Each node is identified by the block serial number and the node position (A, B, C,
or D).
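A malformed file is a common reason creation fails at this point, so it can help to validate the JSON first. The sketch below writes an abridged example (placeholder values, and only two of the required sections) and checks it with python3:

```shell
# Write an abridged example file (placeholder addresses; a real file also
# needs the "IP Addresses" section with one entry per node).
cat > cluster_config.json <<'EOF'
{
  "Subnet Mask": {
    "Controller": "255.255.255.0",
    "Hypervisor": "255.255.255.0",
    "IPMI": "255.255.255.0"
  },
  "Default Gateway": {
    "Controller": "10.1.64.1",
    "Hypervisor": "10.1.64.1",
    "IPMI": "10.1.64.1"
  }
}
EOF

# python3 -m json.tool exits non-zero on invalid JSON.
if python3 -m json.tool cluster_config.json > /dev/null; then
  echo "JSON is valid"
else
  echo "JSON is invalid"
fi
```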
Replace cluster_config_json_file with the name of the JSON file that defines the networking
configuration for the new cluster.
If you want to configure redundancy factor 3, add the parameter --redundancy_factor=3 before create.
Redundancy factor 3 has the following requirements:
• Redundancy factor 3 can be enabled only when the cluster is created.
• A cluster must have at least five nodes for redundancy factor 3 to be enabled.
• For guest VMs to tolerate the simultaneous failure of two nodes or drives in different blocks, the data
must be stored on storage containers with replication factor 3.
• Controller VMs must be configured with 24 GB of memory.
If the cluster can be created, a success message for each Controller VM is displayed and the cluster
starts. If the cluster cannot be created, ensure the JSON file is correct and attempt the creation again.
If the cluster is running properly, output similar to the following is displayed for each node in the cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848, 10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
SecureFileSync UP [7710, 7761, 7762, 7763]
Medusa UP [8029, 8073, 8074, 8176, 8221]
DynamicRingChanger UP [8324, 8366, 8367, 8426]
Pithos UP [8328, 8399, 8400, 8418]
Hera UP [8347, 8408, 8409, 8410]
Stargate UP [8742, 8771, 8772, 9037, 9045]
InsightsDB UP [8774, 8805, 8806, 8939]
InsightsDataTransfer UP [8785, 8840, 8841, 8886, 8888, 8889,
8890]
Ergon UP [8814, 8862, 8863, 8864]
Cerebro UP [8850, 8914, 8915, 9288]
Chronos UP [8870, 8975, 8976, 9031]
Curator UP [8885, 8931, 8932, 9243]
Prism UP [3545, 3572, 3573, 3627, 4004, 4076]
CIM UP [8990, 9042, 9043, 9084]
AlertManager UP [9017, 9081, 9082, 9324]
Arithmos UP [9055, 9217, 9218, 9353]
Catalog UP [9110, 9178, 9179, 9180]
Acropolis UP [9201, 9321, 9322, 9323]
Atlas UP [9221, 9316, 9317, 9318]
Uhura UP [9390, 9447, 9448, 9449]
Snmp UP [9418, 9513, 9514, 9516]
SysStatCollector UP [9451, 9510, 9511, 9518]
Tunnel UP [9480, 9543, 9544]
ClusterHealth UP [9521, 9619, 9620, 9947, 9976, 9977,
10301]
Janus UP [9532, 9624, 9625]
NutanixGuestTools UP [9572, 9650, 9651, 9674]
MinervaCVM UP [10174, 10200, 10201, 10202, 10371]
ClusterConfig UP [10205, 10233, 10234, 10236]
APLOSEngine UP [10231, 10261, 10262, 10263]
Replace cluster_name with a name for the cluster chosen by the customer.
Replace dns_server with the IP address of a single DNS server or with a comma-separated list of
DNS server IP addresses.
Replace ntp_server with the IP address or host name of a single NTP server or with a comma-
separated list of NTP server IP addresses or host names.
e. (Hyper-V only) Add a record for the cluster external IP address to the domain DNS server.
2. Restart the node and press Delete to enter the BIOS setup utility.
There is a limited amount of time to enter BIOS before the host completes the restart process.
4. Press the down arrow key until BMC network configuration is highlighted and then press Enter.
5. Press the down arrow key until Update IPMI LAN Configuration is highlighted and press Enter to
select Yes.
1. Log on to the hypervisor host with SSH (vSphere or AHV) or remote desktop connection (Hyper-V).
→ Hyper-V
> ipmiutil lan -I mgmt_interface_ip_addr -G mgmt_interface_gateway `
-S mgmt_interface_subnet_addr -U ADMIN -P ADMIN
→ AHV
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway
• Replace mgmt_interface_ip_addr with the new IP address for the remote console.
• Replace mgmt_interface_gateway with the gateway IP address.
• Replace mgmt_interface_subnet_addr with the subnet mask for the new IP address.
→ Hyper-V
> ipmiutil lan -r -U ADMIN -P ADMIN
→ AHV
root@ahv# ipmitool -v -U ADMIN -P ADMIN lan print 1
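When verifying many nodes, the configured address can be filtered out of the lan print output with awk. The output fragment below is illustrative:

```shell
# Sample fragment of ipmitool lan print output (illustrative).
lan_print='Set in Progress         : Set Complete
IP Address Source       : Static Address
IP Address              : 10.1.64.40
Subnet Mask             : 255.255.255.0
Default Gateway IP      : 10.1.64.1'

# Extract the value of the "IP Address" field (not "IP Address Source").
ipmi_ip=$(echo "$lan_print" | awk -F': *' '$1 ~ /^IP Address +$/ { print $2 }')
echo "$ipmi_ip"
```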
You can access the ESXi console either through IPMI or by attaching a keyboard and monitor to the node.
1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.
2. Press the down arrow key until Configure Management Network is highlighted and then press Enter.
5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press
Enter. In the dialog box, provide the VLAN ID and press Enter.
7. If necessary, highlight the Set static IP address and network configuration option and press Space
to update the setting.
10. If necessary, highlight the Use the following DNS server addresses and hostname option and press
Space to update the setting.
11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your
environment and then press Enter.
12. Press Esc and then Y to apply all changes and restart the management network.
15. Verify that the default gateway and DNS servers reported by the ping test match those that you
specified earlier in the procedure and then press Enter.
Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP
addresses are configured.
1. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
Name : Ethernet
InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection
LinkSpeed : 10 Gbps
Name : Ethernet 3
InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection #2
LinkSpeed : 10 Gbps
Name : NetAdapterTeam
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver
LinkSpeed : 20 Gbps
Name : Ethernet 4
InterfaceDescription : Intel(R) I350 Gigabit Network Connection #2
LinkSpeed : 0 bps
Name : Ethernet 2
InterfaceDescription : Intel(R) I350 Gigabit Network Connection
LinkSpeed : 1 Gbps
Make a note of the Name of the 1 GbE interfaces you want to enable.
If you want to configure the interface as a standby for the 10 GbE interfaces, include the
-AdministrativeMode Standby parameter.
Perform these steps once for each 1 GbE interface you want to enable.
1. Log on to the Hyper-V host with the IPMI remote console and start a PowerShell prompt.
Name : Ethernet
InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection
LinkSpeed : 10 Gbps
Name : Ethernet 3
InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection #2
LinkSpeed : 10 Gbps
Name : NetAdapterTeam
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver
LinkSpeed : 20 Gbps
Name : Ethernet 4
InterfaceDescription : Intel(R) I350 Gigabit Network Connection #2
LinkSpeed : 0 bps
Name : Ethernet 2
InterfaceDescription : Intel(R) I350 Gigabit Network Connection
LinkSpeed : 0 bps
Make a note of the InterfaceDescription for the vEthernet adapter that links to the physical interface
you want to modify.
a. Select a network adapter by typing the Index number of the adapter you want to change (refer to the
InterfaceDescription you found in step 2 on page 24) and pressing Enter.
Warning: Do not select the network adapter with the IP address 192.168.5.1. This IP
address is required for the Controller VM to communicate with the host.
f. Enter the IP address for the default gateway and press Enter.
The host networking settings are changed.
b. Enter the primary and secondary DNS servers and press Enter.
The DNS servers are updated.
1. Log on to the Hyper-V host with the IPMI remote console and start a PowerShell prompt.
• Replace domain_name with the name of the domain for the host to join.
• Replace node_name with a new name for the host.
• Replace domain_admin_user with the domain administrator username.
The host restarts and joins the domain.
1. Edit the settings of port br0, which is the internal port on the default bridge br0.
b. Open the network interface configuration file for port br0 in a text editor.
root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0
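For a static configuration, the file typically contains entries like the following. This is a sketch with placeholder addresses; the exact keys present can vary by AHV release, so preserve any existing lines you do not intend to change.

```
DEVICE=br0
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.1.64.10
NETMASK=255.255.255.0
GATEWAY=10.1.64.1
```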
b. Update the values of the nameserver parameter and then save and close the file.
3. Assign the AHV host to a VLAN. For information about how to add the AHV host to a VLAN, see
Assigning an Acropolis Host to a VLAN in the Acropolis Hypervisor Administration Guide.
Output similar to the following indicates that interfaces eth2 and eth3 are bonded and that the bonded
port is assigned to the br0 switch. The output also indicates that the interfaces eth0 and eth1 are not on
bridge br0.
The external IP address reconfiguration script (external_ip_reconfig) performs the following tasks:
1. Puts the cluster in reconfiguration mode.
2. Restarts Genesis.
3. Prompts you to type the new netmask, gateway, and external IP addresses, and updates them.
4. Updates the IP addresses of the Zookeeper hosts.
Warning: If you are changing the Controller VM IP addresses to another subnet, network, IP
address range, or VLAN, you should also change the hypervisor management IP addresses to the
same subnet, network, IP address range, or VLAN.
1. Log on to the hypervisor with SSH (vSphere or AHV) or remote desktop connection (Hyper-V), or the
IPMI remote console.
Warning: This step affects the operation of a Nutanix cluster. Schedule a down time before
performing this step.
nutanix@cvm$ cluster stop
Note: If necessary, change the hypervisor management IP address or IPMI IP address before
you execute the external_ip_reconfig script.
4. Run the external IP address reconfiguration script (external_ip_reconfig) from any one Controller VM in
the cluster.
nutanix@cvm$ external_ip_reconfig
5. Follow the prompts to type the new netmask, gateway, and external IP addresses.
A message similar to the following is displayed after the reconfiguration is successfully completed:
External IP reconfig finished successfully. Restart all the CVMs and start the cluster.
If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
CVM: 10.1.64.60 Up
Zeus UP [3704, 3727, 3728, 3729, 3807, 3821]
Scavenger UP [4937, 4960, 4961, 4990]
SSLTerminator UP [5034, 5056, 5057, 5139]
Hyperint UP [5059, 5082, 5083, 5086, 5099, 5108]
Medusa UP [5534, 5559, 5560, 5563, 5752]
DynamicRingChanger UP [5852, 5874, 5875, 5954]
Pithos UP [5877, 5899, 5900, 5962]
Stargate UP [5902, 5927, 5928, 6103, 6108]
Cerebro UP [5930, 5952, 5953, 6106]
Chronos UP [5960, 6004, 6006, 6075]
Curator UP [5987, 6017, 6018, 6261]
Prism UP [6020, 6042, 6043, 6111, 6818]
AlertManager UP [6070, 6099, 6100, 6296]
Arithmos UP [6107, 6175, 6176, 6344]
SysStatCollector UP [6196, 6259, 6260, 6497]
Tunnel UP [6263, 6312, 6313]
ClusterHealth UP [6317, 6342, 6343, 6446, 6468, 6469, 6604, 6605,
6606, 6607]
Janus UP [6365, 6444, 6445, 6584]
NutanixGuestTools UP [6377, 6403, 6404]
What to do next: Run the following NCC checks to verify the health of the Zeus configuration:
• nutanix@cvm$ ncc health_checks system_checks zkalias_check_plugin