
Nutanix Cluster Check (NCC) 3.10 Guide

July 22, 2020
Contents

1. Nutanix Cluster Check (NCC)

2. Install and Upgrade Nutanix Cluster Check (NCC)
   NCC Compatibility with AOS and Prism Central Clusters
   Upgrading NCC on Prism Element Clusters
      Upgrading NCC by Uploading Binary and Metadata Files
   Installing NCC from an Installer File
   Upgrading NCC on Prism Central
      Upgrading NCC by Uploading Binary and Metadata Files
   Upgrading NCC with Life Cycle Manager (LCM)

3. Scheduling and Automatically Emailing NCC Results

4. Log Collection
   Collecting Logs from the Web Console with Logbay
   Logbay Log Collection (Command Line)
      Logbay Plug-ins
      Anonymizing Log Bundle Details
   Uploading Logbay Logs
      Uploading Logs to Nutanix Storage Container
      Uploading Logs to Nutanix SFTP/FTP Server
   Log Collector (Legacy Method)
      Securely Uploading Log Collector Log Files to Nutanix Support

5. Hardware Collector
   Using NCC Commands to Collect Hardware Information
   Change Log History
   Hardware Collector Information

6. NCC Usage
   Learn More About NCC Health Checks
   Displaying NCC and Logbay Help
   Run NCC Checks
      Running NCC (Prism Element)
      Running NCC (Prism Central)
   Command Line Usage Examples

7. Open Source Licenses

Copyright
   License
   Conventions
   Default Cluster Credentials
   Version
1. NUTANIX CLUSTER CHECK (NCC)
Nutanix Cluster Check (NCC) is cluster-resident software that helps diagnose cluster health
and identify configurations qualified and recommended by Nutanix. NCC continuously and
proactively runs hundreds of checks and takes the action needed to move issues toward resolution.
Depending on the issue discovered, NCC raises an alert or automatically creates a Nutanix
Support case. NCC can run provided that the individual nodes are up, regardless of cluster
state.
When run from the Controller VM command line or the web console, NCC generates a log file with
the output of the diagnostic commands selected by the user.
NCC actions are grouped into plugins and modules; an example invocation follows the definitions below.

• A plugin is a purpose-specific or component-specific code block inside a module, commonly referred to as a check. A plugin can be a single check or one or more individual related checks.
• A module is a logical group of common-purpose plugins. It can also be a logical group of common-purpose modules.
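For example, a module can be run as a whole or a single plugin can be selected within it. The following sketch uses module and plugin names taken from the examples later in this guide (the module-level run_all form follows the command patterns shown in NCC Usage):

nutanix@cvm$ ncc health_checks system_checks run_all
nutanix@cvm$ ncc health_checks system_checks --plugin_list="cluster_version_check"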

Note: Some plugins run nCLI commands and might require the user to input the nCLI password.
The password is logged as plain text. If you change the password of the admin user from
the default, you must specify the password every time you start an nCLI session from a remote
system. A password is not required if you are starting an nCLI session from a Controller VM
where you are already logged on.

Comprehensive documentation of NCC is available in the Nutanix Command Reference.

NCC Output
Each NCC plugin is a test that completes independently of other plugins. Each test completes
with one of these status types. The status might also display a link to a Nutanix Support Portal
Knowledge Base article with more details about the check, or information to help you resolve
issues NCC finds.
PASS
The tested aspect of the cluster is healthy and no further action is required. A check can
also return a PASS status if it is not applicable.
FAIL
The tested aspect of the cluster is not healthy and must be addressed. This message
requires immediate action. If you do not take immediate action, the cluster might
become unavailable or require intervention by Nutanix Support.
WARN
The plugin returned an unexpected value that you must investigate. This message
requires user intervention; resolve it as soon as possible to help maintain
cluster health.



INFO
The plugin returned an expected value that cannot, however, be evaluated as PASS or FAIL.
The plugin returns information about the tested cluster item. In some cases, the message
might indicate a recommendation from Nutanix that you should implement as soon as possible.
ERR
The plugin failed to execute. This message represents an error with the check execution
and not necessarily an error with the cluster entity. It states that the check cannot
confirm a PASS/INFO/WARN/FAIL status.

Running Health Checks


In addition to running all health checks, you can run checks as follows. See also Run NCC Checks.
Run all or some checks from the Prism Web Console

• From the Prism web console Health page, select Actions > Run Checks. Select All
checks and click Run.
• If you disable a check in the Prism web console, you cannot run it from the NCC
command line unless you enable it again from the web console.
• You can run NCC checks from the Prism web console for clusters where AOS 5.0 or
later and NCC 3.0 or later are installed. You cannot run NCC checks from the Prism
web console for clusters where AOS 4.7.x or earlier and NCC 3.0 are installed.
• For AOS clusters where it is installed, running NCC 3.0 or later from the command
line updates the Cluster Health score, including the color of the score. For some NCC
checks, you can clear the score by disabling and then re-enabling the check.
Run two or more individual checks at a time

• You can specify two or more individual checks from the command line, with each
check separated by a comma. Ensure you do not use any spaces between checks, only
a comma character. For example:
ncc health_checks system_checks \
--plugin_list="cluster_version_check,cvm_reboot_check"

Re-run failing checks

• You can re-run any NCC checks or plug-ins that reported a FAIL status.
ncc --rerun_failing_plugins=True



2. INSTALL AND UPGRADE NUTANIX CLUSTER CHECK (NCC)
Upgrade NCC by using the web console, installer file, or Life Cycle Manager.
To help maintain cluster health and take advantage of the latest NCC technology, Nutanix
recommends that you keep your NCC version current. You can upgrade NCC through the web
console, Life Cycle Manager, or command line. Also see the Acropolis Upgrade Guide for your
AOS version for more information about upgrades.
If you are adding one or more nodes to expand your cluster, the latest version of NCC might
not be installed on each newly added node. In this case, reinstall NCC in the cluster after you
have finished adding the nodes.

Note: To help ensure that Prism Central and each managed cluster are taking advantage of NCC features, ensure that:

• Each node in your cluster is running the same NCC version.
• Prism Central and each cluster managed by Prism Central are all running the same NCC version.

NCC Compatibility with AOS and Prism Central Clusters

Table 1: Supported Versions, x86 Platforms (NCC version: minimum supported AOS/Prism Central version)

• NCC 3.10.0 and later NCC 3.10.x versions: 5.5
• NCC 3.9.0 and later NCC 3.9.x versions: 5.5
• NCC 3.8.0 and later NCC 3.8.x versions: 5.5
• NCC 3.7.1 and later NCC 3.7.x versions: 5.5
• NCC 3.7 / NCC 3.7.0.1 / NCC 3.7.0.2 and NCC 3.6 and later NCC 3.6.x versions: 4.7

Note: NCC 3.7.1 and later versions in this table are not supported on clusters running AOS versions prior to 5.5 or on any AOS family for IBM CS Series platforms. NCC 3.7.0.2 is supported on clusters running AOS versions prior to 5.5. See Knowledge Base article 7796 on the Nutanix Support portal.

Table 2: Supported Versions, IBM CS Power Series Platforms (NCC version: minimum supported AOS version)

• NCC 3.9.0 for IBM Power CS Series (Power platforms only): AOS 5.11.1.2 (Power platforms)
• NCC 3.6.4 for IBM Power CS Series (Power platforms only): AOS 5.10.0.7 (Power platforms)

Upgrading NCC on Prism Element Clusters


About this task
This topic describes how to upgrade NCC software from the Prism Element web console. To install
NCC from the command line, see Installing NCC from an Installer File.

Procedure

1. Run NCC as described in Run NCC Checks.

2. Log on to the Prism web console for any node in the cluster and click the gear icon.

3. Click Upgrade Software, then click NCC in the dialog box.

4. If an update is available, click Upgrade Available and then click Download.

5. When the download process is completed, click Upgrade, then click Yes to confirm.
The Upgrade Software dialog box shows the progress of your selection.
As part of installation or upgrade, NCC automatically restarts the cluster health service on
each node in the cluster. You might observe notifications or other slight anomalies as
the service is restarting.



Upgrading NCC by Uploading Binary and Metadata Files

About this task

• Do the following steps to download NCC binary and metadata .JSON files from the Nutanix
Support Portal, then upgrade NCC through Upgrade Software in the Prism web console.
• Typically you must perform this procedure if your cluster is not directly connected to the
Internet and you cannot download the binary and metadata .JSON files through the Prism
web console.

Procedure

1. Log on to the Nutanix Support portal and select Downloads > Tools & Firmware.

2. Click the download link to save the binary gzipped TAR (.tar.gz) and metadata (.json) files on
your local media.

3. Log on to the Prism web console for any node in the cluster and click the gear icon.

4. Click Upgrade Software, then click NCC in the dialog box.

5. Click the upload the NCC binary link.

6. Click Choose File for the NCC metadata and binary files, respectively, browse to the file
locations, and click Upload Now.

7. When the upload process completes, click Upgrade, then click Yes to confirm.
The Upgrade Software dialog box shows the progress of your selection.
As part of installation or upgrade, NCC automatically restarts the cluster health service on
each node in the cluster. You might observe notifications or other slight anomalies as
the service is restarting.

Installing NCC from an Installer File


Before you begin

• If you are adding one or more nodes to expand your cluster, the latest version of NCC might not be installed on each newly added node. In this case, reinstall NCC in the cluster after you have finished adding the nodes.
• This topic describes how to install NCC from the command line by using a shell script downloaded from the Nutanix Support Portal. To upgrade NCC software from the web console, see Upgrading NCC on Prism Element Clusters or Upgrading NCC on Prism Central.

Note: To help ensure that Prism Central and each managed cluster are taking advantage of NCC features, ensure that:

• Each node in your cluster is running the same NCC version.
• Prism Central and each cluster managed by Prism Central are all running the same NCC version.



Procedure

1. From the Nutanix Support Portal Downloads > Tools & Firmware page, download, save, and
then copy the NCC installation shell file to any Controller VM in the cluster.

• Make sure that the Controller VM directory where you copy the shell file exists on all nodes in the cluster. Nutanix recommends the /home/nutanix folder. This folder should be owned by any accounts that use NCC. One way to copy the file is shown after this list.
• Note the MD5 value of the file as published on the Support Portal.
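For example, assuming SSH access to the cluster, you might copy the installer from your workstation with scp (the workstation prompt, file name, and IP address are placeholders):

user@workstation$ scp ./ncc_installer_filename.sh nutanix@cvm_ip_address:/home/nutanix/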

2. From the Controller VM, check the MD5 value of the file.
nutanix@cvm$ md5sum ./ncc_installer_filename.sh

It must match the MD5 value published on the Support Portal. If the value does not match,
delete the file and download it again from the Support Portal.

3. Make the installation file executable.


nutanix@cvm$ chmod u+x ./ncc_installer_filename.sh

4. Install NCC.
nutanix@cvm$ ./ncc_installer_filename.sh

The installation script tests the NCC file checksum before installing.

• If the file is verified, the installation script installs NCC on each node in the cluster.
• If it detects file corruption, the script prevents installation and deletes any extracted files. In this case, download the file again from the Nutanix Support portal.

5. Check the shell script output for any error messages.

• If installation is successful, a Finished Installation message is displayed. You can check any NCC-related messages in /home/nutanix/data/logs/ncc-output-latest.log.
• In some cases, output similar to the following is displayed. Depending on the NCC version installed, the installation file might log the output to /home/nutanix/data/logs/ or /home/nutanix/data/serviceability/ncc.
Copying file to all nodes [ DONE ]
+---------------+
| State | Count |
+---------------+
| Total | 1     |
+---------------+
Plugin output written to /home/nutanix/data/logs/ncc-output-latest.log
[ info ] Installing ncc globally.
[ info ] Installing ncc on 10.130.45.72, 10.130.45.73
[ info ] Installation of ncc succeeded on nodes 10.130.45.72, 10.130.45.73.

What to do next

• As part of installation or upgrade, NCC automatically restarts the cluster health service on
each node in the cluster, so you might observe notifications or other slight anomalies as the
service is being restarted.



Upgrading NCC on Prism Central
Before you begin
To help ensure that Prism Central and each managed cluster are taking advantage of NCC features, ensure that:

• Each node in your cluster is running the same NCC version.
• Prism Central and each cluster managed by Prism Central are all running the same NCC version.

To check the currently installed NCC version running on Prism Central:

• Log in to the Prism Central web console.
• From your user name link, click About Nutanix.
  The pop-up window shows the installed NCC version.

About this task


This topic describes how to upgrade NCC software from the Prism Central web console.

Figure 1: Upgrade Software: NCC



Procedure

1. Log on to the Prism Central web console as the admin user and click the gear icon.

2. Click Upgrade Software, then click NCC in the dialog box.

3. If an update is available, click Upgrade Available and then click Download.

4. When the download process is completed, click Upgrade, then click Yes to confirm.
The Upgrade Software dialog box shows the progress of your selection.
As part of installation or upgrade, NCC automatically restarts the cluster health service on
each node in the cluster, so you might observe notifications or other slight anomalies as the
service is being restarted.

Upgrading NCC by Uploading Binary and Metadata Files

About this task


Do the following steps to download NCC binary and metadata .JSON files from the Nutanix
Support Portal, then upgrade NCC through Upgrade Software in the Prism Central web
console.

Procedure

1. Log on to the Nutanix support portal and select Downloads > Tools & Firmware.

2. Click the NCC version download link to save the binary gzipped TAR (.tar.gz) and metadata
(.json) files on your local media.

3. Log on to the Prism Central web console as the admin user and click the gear icon.

4. Click Upgrade Software, then click NCC in the dialog box.

5. Click the upload the NCC binary link.

6. Click Choose File for the NCC metadata and binary files, respectively, browse to the file
locations, and click Upload Now.

7. When the upload process is completed, click Upgrade, then click Yes to confirm.
The Upgrade Software dialog box shows the progress of your selection.
As part of installation or upgrade, NCC automatically restarts the cluster health service on
each node in the cluster, so you might observe notifications or other slight anomalies as the
service is being restarted.

Upgrading NCC with Life Cycle Manager (LCM)


Upgrade NCC by using the Life Cycle Manager in the Prism Element or Prism Central web
console.

Before you begin


See the Life Cycle Manager documentation for complete information about using LCM.

Procedure

1. Upgrade NCC on your Prism Element clusters.

2. Run NCC as described in Run NCC Checks.



3. Click the gear icon in the main menu of the web console, then select LCM from the Settings
drop-down menu.

4. To show current software versions, click View By > Component.

5. Click Perform Inventory > Proceed.


Life Cycle Manager checks for its own latest version as a first step, then downloads and
installs the latest LCM framework before checking for other available software updates.
Depending on your network bandwidth, this task can take a few minutes to complete. You
might also need to refresh your browser to update the LCM page.

6. In the LCM sidebar under Updates, click Software to show the latest available NCC updates.

7. Select NCC and click Update.


LCM updates NCC on the Prism Element cluster.

8. Repeat these steps for your Prism Central cluster. See Running NCC (Prism Central) to run NCC checks on Prism Central.
3. SCHEDULING AND AUTOMATICALLY EMAILING NCC RESULTS
About this task
Nutanix Cluster Check (NCC) enables you to set the frequency of cluster checks and to email
the results of these checks. By default, this feature is disabled. Once you enable and configure
this feature, NCC:

• Runs the checks periodically according to a frequency you have set.
• Emails the results of the checks to users that you have configured to receive alert emails. That is, this feature uses the settings and infrastructure of the Alert Email Configuration feature, which can send alert information automatically to Nutanix customer support and others.
• Runs as configured even if you have upgraded your cluster's AOS or NCC version after configuring this feature.
The Web Console Guide describes how to configure Alert email so that results can be mailed to
Nutanix support and others.

Note:

• NCC results emailed to Nutanix support do not automatically create support cases.
• After adding a node to a cluster, ensure that any previously-configured email
settings exist and configure them as needed, as described in this topic.

Procedure

1. In the Health dashboard, from the Actions drop-down menu select Set NCC Frequency.

2. Select the configuration schedule.

» Every 4 hours: Select this option to run the NCC checks at four-hour intervals.
» Every Day: Select this option to run the NCC checks on a daily basis.
  Select the time of day when you want to run the checks from the Start Time field.
» Every Week: Select this option to run the NCC checks on a weekly basis.
  Select the day and time of the week when you want to run the checks from the On and Start Time fields. For example, if you select Sunday and Monday from the On field and select 3:00 p.m. from the Start Time field, the NCC checks run automatically every Sunday and Monday at 3 p.m.
The email address that you have configured in the web console is also displayed. A report is sent as an email to all the recipients.



3. Click Save.
4. LOG COLLECTION
Note: Nutanix plans to deprecate the legacy NCC Log Collector (ncc log_collector) in an
upcoming NCC release. Nutanix recommends that you use the latest Logbay utility to collect
logs.

AOS implements many logs and configuration information files that are useful for
troubleshooting issues and finding out details about a particular node or cluster. You can collect
logs for Controller VMs, file server, hardware, alerts, hypervisor, and for the system.
Log collection includes:

• Logs and configuration information for one or more Controller VMs


• Configuration information for hypervisors
• Logs generated by sysstats utilities
• Information about alerts
You can collect logs as follows.

• Collecting Logs from the Web Console with Logbay: through the Prism web console or the logbay command line
• Logbay Log Collection (Command Line): through the NCC command line

Collecting Logs from the Web Console with Logbay


Nutanix Cluster Check includes the Logbay plug-in. Logbay collects the logs from Controller
VMs and hosts. It can mask sensitive information such as IP addresses, container names, and so on.
After the task finishes, the log bundle is available for download from the Tasks dashboard.

About this task


The log bundle includes logs and configuration information from one or more Controller VMs,
configuration information for hypervisors, information about alerts, and so on. To collect the
logs and download the log bundle, perform the following procedure.
Logbay output bundles are created as .zip files, which you can download through the Prism
Element web console. Logbay randomly selects one node in the cluster and collects cluster-
wide logs on that node. By default, an RPC server listens on port 5000.
You can then upload these log bundles to an internal SFTP/FTP server or a Nutanix storage
container to help Nutanix Support investigate any reported issues. See the NCC release
notes on the support portal for supported AOS versions.

Procedure

1. In the Health dashboard, from the Actions drop-down menu, select Collect Logs.



2. In Node Selection, click + Select Nodes. Select the nodes for which you want to collect the
logs and click Done.

3. Click Next.

4. In Log Settings, select one of the following:

» All. Select this option if you want to collect the logs for all the tags.
» Specific (by tags). Select this option and click + Select Tags to collect the logs only for the selected tags, then click Done.



5. In Output Preferences, do the following in the indicated fields.

1. Select Duration. Select the duration for which you want to collect the logs, in either hours or days. Click the drop-down list to select the required option.
2. Cluster Date. Select the date from which you want to start the log collection operation. Click the drop-down list to select either Before or After to collect logs before or after the selected date.
3. Cluster Time. Select the time from which you want to start the log collection operation.
4. Select Destination for the collected logs. Click the drop-down list to select the server where you want the logs to be collected.

   • Download Locally
   • Nutanix Support FTP. If you select this option, enter the case number in the Case Number field.
   • Nutanix Support SFTP. If you select this option, enter the case number in the Case Number field.
   • Custom Server. If you select this option, enter the server name, port, username, password, and archive path.
5. Anonymize Output. Select this option if you want to mask all the sensitive information, such as IP addresses.

Figure 2: Collect Logs



6. To start the operation, click Collect.

7. After the operation completes, you can download the log bundle for the last two runs and
(as needed) add it to a support case as follows:

a. Go to the Task dashboard, find the log bundle task entry, and click the Succeeded link for
that task (in the Status column) to download the log bundle.

Note: If a pop-up blocker in your browser stops the download, turn off the pop-up blocker
and try again.

b. Log in to the support portal, click the target case in the 360 View widget on the
dashboard (or click the Create a New Case button to create a new case), and upload the
log bundle to the case (click the Choose Files button in the Attach Files section to select
the file to upload).

Logbay Log Collection (Command Line)


Logbay Commands
To use logbay, run the following commands from a Controller VM.

• Check the logbay version.


nutanix@cvm$ logbay --version

• Select the output format.


nutanix@cvm$ logbay --output=normal|json

• View logbay context-aware help.


nutanix@cvm$ logbay --help
nutanix@cvm$ logbay collect --help

• Collect logs.
nutanix@cvm$ logbay collect
nutanix@cvm$ logbay collect [Options]

By default, logbay collect collects all tags or components for the last 4 hours; individual log
bundles per Controller VM are stored locally (not aggregated to a single Controller VM).
Replace [Options] with the options listed in the table Options for the Collect Command.
• List all the available tags.
nutanix@cvm$ logbay list_tags

Logbay uses tags to easily select specific components for collection. Tags are useful for
faster and smaller collections of log files for focused troubleshooting.
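For example, to narrow the tag list to a component of interest before collecting, you might filter the output (the grep pattern is illustrative):

nutanix@cvm$ logbay list_tags | grep cvm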

Table 3: Options for the Collect Command

-t, --tags
  Specify the tag name to collect the logs for the specified tag. By default, -t collects logs for all the tags.

-x, --exclude_tags
  Exclude logs that are tagged with any of the specified tags. For example, if you want to collect Controller VM logs and at the same time filter out logs that are tagged with the Stargate tag, you can run the logbay collect -t cvm_logs -x stargate command. This command excludes all the Stargate logs and generates a log bundle for the rest of the Controller VM logs.
  Note: -x always takes priority over -t.

--aggregate=0|1
  Specify if you want to aggregate the bundle from all nodes onto the current node. By default, aggregate is set to 0.

--anonymize=0|1
  Specify if the output of logbay should be anonymized, with certain information masked or obscured. By default, anonymization is disabled (=0). Use the command logbay collect --anonymize=1 to mask the sensitive information.

-c, --case_number
  Specify the case number for uploading the log bundle to the Nutanix server.

-d, --duration=-4h0m0s
  Specify how long, relative to the start time, you want to collect logs for. For example, 300s, -1.5h, 3d2h45m0s. By default, the duration for log collection is 4 hours.

-f, --from=2019/04/09-14:00:00
  Specify the point in time from which you want the logs to be collected. By default, -f is the current cluster time.

-s, --src
  Specify the IP addresses from which you want to collect the logs. By default, -s collects logs from all the Controller VMs.

-n, --name=""
  Sets the name of the archive folder.

-D, --dst=/home/nutanix/data/logbay/bundles
  Specify the destination for your log bundle: a disk, an internal SFTP/FTP server, or a Nutanix container.

  • Use --dst=(file|ftp|sftp)://username@host/path, (ftp|sftp)://nutanix for Nutanix uploads, or --dst=container:/container_name for containers.
  • By default, the destination is set to file://nutanix@127.0.0.1/home/nutanix/data/logbay/bundles.

  Note: You must enter the Salesforce case number to upload logs to the Nutanix SFTP or FTP server.

-o, --options
  Pass these options on to lower collection layers.
  Format: --options optOne=foo,optTwo=[I,Am,A,Multi,String,Option],optThr
  For example, run the command logbay -o file_server_name_list=<FSVM name> to collect file server logs. Replace FSVM name with the file server name.

Examples

• The following command collects Controller VM logs.


nutanix@cvm$ logbay collect -t cvm_logs

• The following command collects all the Controller VM logs excluding logs that are tagged
with Stargate tag.
nutanix@cvm$ logbay collect -t cvm_logs -x stargate

• The following command collects and aggregates all individual node log bundles to a single
file on the CVM where the command is run.
nutanix@cvm$ logbay collect --aggregate=1

• The following command collects logs with anonymized output.


nutanix@cvm$ logbay collect --anonymize=1

• The following command collects logs and uploads the log files to Nutanix FTP server for
automatic case association.
nutanix@cvm$ logbay collect --dst=ftp://nutanix -c case_number

In this command, case_number is the open Nutanix Support case number provided by Nutanix
Support team.



• The following command collects logs for the last 2 hours.
nutanix@cvm$ logbay collect --duration=-2h0m

• The following command collects logs for the 6 hours and 15 minutes starting at 2 p.m. on April 9, 2019 (using cluster time and time zone).
nutanix@cvm$ logbay collect --from=2019/04/09-14:00:00 --duration=+6h15m

• The following command collects logs only for specific nodes in a cluster.
nutanix@cvm$ logbay collect -s comma_separated_list_of_CVM_IPs

In this example, comma_separated_list_of_CVM_IPs is the list of Controller VM IP addresses from which you want to collect logs.
• The following example sets the name of the archive folder to cluster_logs.
nutanix@cvm$ logbay collect --name="cluster_logs"

• The following commands collect logs and upload the log bundle to the specified destination: a disk, an internal SFTP/FTP server, or a Nutanix container.
nutanix@cvm$ logbay collect --dst=(file|ftp|sftp)://username@host/path
nutanix@cvm$ logbay collect --dst=(ftp|sftp)://nutanix
nutanix@cvm$ logbay collect --dst=container:/container_name

• The following command collects file server logs.


nutanix@cvm$ logbay collect -o file_server_name_list=FSVM_name

• The following example displays the output of collecting logs for a specific tag.
nutanix@cvm$ logbay -o normal collect -t svm_boot
Time period of collection: Sun Dec 22 17:51:58 PST 2019 - Sun Dec 22 21:51:58 PST 2019
Creating a task to collect logs...
Logbay task created ID: x.x.x.x::7ff44340-3aba-42f2-a6f5-df90a5fdf674
[====================================================================]
x.x.x.x
Archive Location: x.x.x.x:/home/nutanix/data/logbay/bundles/NTNX-
Log-2019-12-22-1577080318-33287-PE-x.x.x.x-CW.zip

Logbay Plug-ins

Table 4: Logbay Plug-ins (contents: command)

• Controller VM configuration: logbay collect -t cvm_config
• Controller VM logs: logbay collect -t cvm_logs
• Controller VM kernel logs: logbay collect -t cvm_kernel
• Cluster alerts: logbay collect -t alerts
• Hypervisor configuration:
  • For AHV, logbay collect -t ahv_config
  • For ESXi, logbay collect -t esx_config
  • For Hyper-V, logbay collect -t hyperv_config
• Hypervisor logs:
  • For AHV, logbay collect -t ahv_logs
  • For ESXi, logbay collect -t esx_logs
  • For Hyper-V, logbay collect -t hyperv_logs
• VPN logs: logbay collect -t vpn_logs
• Remote host logs: logbay collect -t ahv_logs
• File server logs:
  • Using the FSVM name, logbay collect -t file_server_logs -o file_server_name_list=<FSVM name>
  • Using the FSVM IP, logbay collect -t file_server_logs -o file_server_vm_list=<FSVM IP>
• Security logs: logbay collect -t security_logs
• Binary logs: logbay collect -t binary_logs
• Cores: logbay collect -t cores
• Activity traces: logbay collect -t activity_traces
Anonymizing Log Bundle Details

You can create an anonymized log bundle to mask certain sensitive information in the log bundles before you upload the logs to an internal FTP/SFTP server or a Nutanix container, or when you collect logs locally on a disk.

Creating an Anonymized Log Bundle

This section describes the information that is masked or obscured (anonymized) in a log bundle when you use the logbay command with the --anonymize=1 option. This option is disabled (=0) by default.
nutanix@cvm$ logbay collect -t tag_list --anonymize=1

With NCC 3.10 and later releases, logbay masks the sensitive information to preserve anonymity.



Table 5: Anonymized Cluster Information (cluster information: masked characters)

• CVM IPs: cc.cc.cc.<last octet>
• Hypervisor IPs: hh.hh.hh.<last octet>
• All other IPs: xx.yy.zz.<last octet>
• Cluster name: Cluster1
• Protection domain name: PD0, PD1, and so on
• Container name: Container0, Container1, and so on
• Hypervisor hostname: Hypervisor.hostname0, Hypervisor.hostname1, and so on

Example
The following example collects logs and masks the sensitive information.
nutanix@cvm$ logbay collect -t x --anonymize=1
Time period of collection: Sun Dec 22 17:57:25 PST 2019 - Sun Dec 22 21:57:25 PST 2019
Creating a task to collect logs...
Logbay task created ID: x.x.x.x::845f77d0-9267-4049-8efb-ccf3a9d8741e
[====================================================================]
x.x.x.x
Archive Location: x.x.x.x:/home/nutanix/data/logbay/bundles/NTNX-
Log-2019-12-22-1577080645-33287-PE-xx.xx.xx.65-CW.zip

Uploading Logbay Logs


Uploading Logs to Nutanix Storage Container
You can upload logs collected by logbay to a Nutanix storage container to avoid local disk
usage.

Note: Uploading log files to Nutanix container is supported on clusters running AOS versions 5.9
and later.

In the Controller VM, run the following command to upload log files to a Nutanix storage
container.
nutanix@cvm$ logbay collect --dst=container:/container_name

Replace container_name with the name of the storage container where you want to upload the
log files.
For example, the Controller VM SSH window displays results similar to the following.
nutanix@cvm$ logbay collect --dst=container:/<container_name>
Time period of collection: Sun Dec 22 17:55:41 PST 2019 - Sun Dec 22 21:55:41 PST 2019
Creating a task to collect logs...
Logbay task created ID: x.x.x.x::fc7a6f51-18f5-4c0f-9fa1-3a8cce77a0cb
[====================================================================]
x.x.x.x
Archive Location: /<container_name>/NTNX-Log-2019-12-22-1573466625-2227044053748017374-
PE-x.x.x.x-CW.zip



Uploading Logs to Nutanix SFTP/FTP Server
You can upload logs collected by logbay to the Nutanix SFTP or FTP server to help Nutanix
Support investigate the reported issues.
Nutanix recommends that you upload a file as a single zip or tgz (tarred and gzipped) file.
In the Controller VM, run one of the following commands to upload log files to the Nutanix SFTP
or FTP server.
nutanix@cvm$ logbay collect --dst=sftp://nutanix -c case_number
nutanix@cvm$ logbay collect --dst=ftp://nutanix -c case_number

Replace case_number with the Nutanix support case number.


For example, the Controller VM SSH window displays results similar to the following.
nutanix@cvm$ logbay collect --dst=sftp://nutanix -c 123456 -t stargate
Time period of collection: Sun Dec 22 17:54:15 PST 2019 - Sun Dec 22 21:54:15 PST 2019
Creating a task to collect logs...
Logbay task created ID: x.x.x.x::4ba91e11-784c-4c8f-a37d-617d51ed2a45
[====================================================================]
x.x.x.x
Archive Location: ftp.nutanix.com:123456/NTNX-Log-2019-12-22-1577080455-33287-PE-
x.x.x.x-CW.zip

Log Collector (Legacy Method)


Note: Nutanix plans to deprecate Log Collector in an upcoming NCC release. See the NCC
Release Notes at the Nutanix Support Portal for more information.
Nutanix recommends that you use the latest Logbay utility to collect logs.

Nutanix Cluster Check (NCC) 1.3.1 introduced the Log Collector plugin. AOS implements many
logs and configuration information files that are useful for troubleshooting issues and finding
out details about a particular node or cluster.

Note:

• In some cases, running the NCC log collector (ncc log_collector run_all) can trigger
spikes in average cluster latency.
• Log collector is a resource intensive task. Running it for a long period might cause
performance degradation on the Controller VM where you are running it.
• Use caution if business needs require high performance levels. In this case, try to run it during a maintenance window if possible.

To use the log collector, run the following command from a Controller VM.
nutanix@cvm$ ncc log_collector [plugin_name] --collector_plugin_timeout=seconds

To collect logs for specific plugins, use the --plugin_list option with a comma-separated list
of plugin names. For example, to collect Controller VM configuration, general, and kernel logs,
run the following command from a Controller VM.
nutanix@cvm$ ncc log_collector --plugin_list=cvm_config,cvm_logs,cvm_kernel_logs

[Introduced in NCC 2.3] Specify in seconds the optional collector timeout duration (time to wait
for collected results from a specified plug-in; --collector_plugin_timeout=).
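For example, the following command uses the syntax above to collect Controller VM logs while waiting up to 600 seconds for plug-in results (the timeout value is illustrative):

nutanix@cvm$ ncc log_collector cvm_logs --collector_plugin_timeout=600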



You can mask or obscure (anonymize) the sensitive information in a log bundle when you use
the NCC log collector command with the --anonymize_output=True option. Using this option
might increase log collection run times. It is disabled (false) by default.
nutanix@cvm$ ncc log_collector --anonymize_output=True

Replace plugin_name with one of the following plug-in names.

Table 6: Log Collector Plug-ins (contents: plug-in name)

• All plugins (by default, collects everything from the last four hours): run_all
• Anonymized output (specify if the output of the log collector should be anonymized, with certain information masked or obscured; disabled (=false) by default): --anonymize_output
• Controller VM configuration: cvm_config
• Controller VM system statistics: sysstats
• Controller VM logs: cvm_logs
• Controller VM kernel logs: cvm_kernel_logs
• Cluster alerts: alerts
• Hypervisor configuration: hypervisor_config
• Hypervisor logs: hypervisor_logs
• Hypervisor logs (collect Hyper-V cluster logs): hypervisor_logs --hyperv_log_level='CRITICAL','ERROR','WARNING','INFO','DEBUG' --hyperv_cluster_logs=True
  You can specify one or more comma-separated log levels, in single quotes, as shown.

Specify plugin_name help_opts to get further details about the options for plugin_name.
Log Collector stores the output in a zipped tar file named log_collector_logs.tar.gz in the /home/nutanix/data/log_collector directory on the Controller VM where the NCC command was issued.
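For example, a sketch that shows the options for the cvm_logs plug-in, following the plugin_name help_opts form described above (the plug-in name is taken from the table):

nutanix@cvm$ ncc log_collector cvm_logs help_opts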

Securely Uploading Log Collector Log Files to Nutanix Support


NCC 2.2 introduced the ncc --secure_file_transfer command to enable you to upload log
collector log files to help Nutanix Support investigate any reported issues.

About this task


This command runs the ncc log_collector run_all command, which gathers a variety of
information about your cluster. See Log Collector (Legacy Method). After
collection, it logs on to a Nutanix Support FTP server and uploads the compressed log files.

Procedure

1. Use SSH to log on to any Controller VM in your cluster.



2. Collect log files and upload them to the Nutanix Support FTP server. The command
also accepts an FTP server IP address (--ftp_server=ftp_ip_address) or a user name and
password as directed by or obtained from Nutanix Support.
nutanix@cvm$ ncc --secure_file_transfer=true --upload_file=true
--ftp_server=ftp.nutanix.com --ftp_username=user_name --ftp_password=password log_collector
run_all

For example, the Controller VM SSH window displays results similar to the following.
ncc_version: 2.2.0-22137482
cluster id: 37146
cluster name: your_cluster
node with service vm id 3
service vm external ip: ip_address
hypervisor address list: [u'ip_address']
hypervisor version: 6.3.9600 build - 9600
ipmi address list: [u'ip_address']
software version: danube-4.6-stable
software changeset ID: 18363af92e18843279a78f066f280af70a59ad27
node serial: OM159S002597
rackable unit: NX-1065-G4
node with service vm id 4
service vm external ip: ip_address
hypervisor address list: [u'ip_address']
hypervisor version: 6.3.9600 build - 9600
ipmi address list: [u'ip_address']
software version: danube-4.6-stable
software changeset ID: 18363af92e18843279a78f066f280af70a59ad27
node serial: OM159S002547
rackable unit: NX-1065-G4
node with service vm id 5
service vm external ip: ip_address
hypervisor address list: [u'ip_address']
hypervisor version: 6.3.9600 build - 9600
ipmi address list: [u'ip_address']
software version: danube-4.6-stable
software changeset ID: 18363af92e18843279a78f066f280af70a59ad27
node serial: OM159S002535
rackable unit: NX-1065-G4

Running /log_collector/cvm_config on the node

[==================================================] 100%

--------------------------------------------------+
Running /log_collector/cvm_logs on the node

[==================================================] 100%
[ WARN ]
--------------------------------------------------+
Running /log_collector/alerts on the node

[==================================================] 100%

--------------------------------------------------+
Running /log_collector/hypervisor_config on the node

[==================================================] 100%

--------------------------------------------------+
Running /log_collector/sysstats on the node



[==================================================] 100%

--------------------------------------------------+

Running /log_collector/cvm_kernel_logs on the node

[==================================================] 100%

--------------------------------------------------+

Running /log_collector/hypervisor_logs on the node

[==================================================] 100%

--------------------------------------------------+
/home/nutanix/data/log_collector/NCC-logs-2016-02-08-37146-1454929507.tar uploaded to
remote server - ip_address
Detailed information for cvm_logs:
Node ip_address:
WARN: Not collecting /home/nutanix/data/logs/cluster_health.out since its size exceeds the
maximum size limit: 0.5
Node ip_address:
WARN: Not collecting /home/nutanix/data/logs/cluster_health.out since its size exceeds the
maximum size limit: 0.5
Node ip_address:
WARN: Not collecting /home/nutanix/data/logs/cluster_health.out since its size exceeds the
maximum size limit: 0.5
+-----------------+
| State | Count |
+-----------------+
| Warning | 1 |
| Total | 7 |
+-----------------+
Plugin output written to /home/nutanix/data/logs/ncc-output-latest.log



5. HARDWARE COLLECTOR
You can use NCC commands to collect cluster or node hardware information. The ncc
hardware_info command collects cluster or node hardware information, as described in this
topic. See Using NCC Commands to Collect Hardware Information.

Using NCC Commands to Collect Hardware Information


Note: This command is supported on Nutanix, Dell, and Lenovo platforms.

Use the NCC command ncc hardware_info to collect cluster or node hardware information.
This command displays the hardware information on the console and also writes to an output
log file on the Controller VM. For example, if the Controller VM IP address is 10.5.25.46, the
command writes an output file to /home/nutanix/data/hardware_logs/10.5.25.46_output.
ncc hardware_info saves the hardware information to an output file in the following location.
/home/nutanix/data/hardware_logs/controller_VM_IP_address_output

• Node hardware information: nutanix@cvm$ ncc hardware_info show_hardware_info
  Run the command from the Controller VM you are currently logged in to.
  To maintain historical hardware configuration, the sysstat utility runs every 24 hours and stores the data in the /home/nutanix/data/logs/sysstat/hardware.info file. For historical information, see the file hardware.info.timestamp in the same directory.

• Hardware information of multiple nodes: nutanix@cvm$ ncc hardware_info show_hardware_info --cvm_ip=Controller_VM_IP_addresses
  Run the command from any Controller VM in the cluster. Replace Controller_VM_IP_addresses with a comma-separated list of the Controller VM IP addresses of all the nodes for which you want to collect the hardware information. (See the example after this list.)

• Cluster hardware information: nutanix@cvm$ ncc hardware_info
  Run the command from any Controller VM in the cluster.

• Update the hardware information: nutanix@cvm$ ncc hardware_info update_hardware_info

• Display the command help: nutanix@cvm$ ncc hardware_info help_opts

• Update the hardware information and display details on the console window: nutanix@cvm$ ncc hardware_info run_all
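For example, a sketch that collects hardware information from two nodes, assuming the --cvm_ip flag form shown above (the IP addresses are illustrative):

nutanix@cvm$ ncc hardware_info show_hardware_info --cvm_ip=10.5.25.46,10.5.25.47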

Change Log History

The hardware collector maintains the change log history in a hardware.history file in the /home/nutanix/config location. An entry is written to the change log history file upon any change to a component or its value.
The file is archived after it reaches a maximum size limit of 50 MB. The archived file is saved in the same location with the current date and time appended to the file name; for example, hardware.history_20160822173207. (A command for listing archived files follows the example entries below.)
Example file entries:
Tue, 16 Aug 2016 06:19:31 PDT|Storage controller Component changed: Old values: |{}|
(Last Detected Tue, 16 Aug 2016 05:01:28 PDT)
New values: |{location: ioc0}|

Tue, 16 Aug 2016 06:19:31 PDT|Storage controller Component changed: Old values: |{}|
(Last Detected Tue, 16 Aug 2016 05:01:28 PDT)
New values: |{location: ioc1}|

Tue, 16 Aug 2016 06:19:31 PDT|Storage controller Component changed: Old values: |{}|
(Last Detected Tue, 16 Aug 2016 05:01:28 PDT)
New values: |{location: ioc2}|
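To see which archived history files exist, you might list them from the Controller VM (a simple illustration using the location given above):

nutanix@cvm$ ls -l /home/nutanix/config/hardware.history*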

Hardware Collector Information

Table 7: Information Obtained by the Hardware Collector

System Information
Manufacturer

Product part number

Configured serial number

Product name

Chassis Information

Manufacturer

Version

Serial number

Boot up

Thermal state



Node Module

Node Position

Manufacturer

Version

Serial number

Boot up state

Thermal state

Host name

Hypervisor type

Temperature

BIOS Information

Vendor

Version

Release date

ROM size

BMC

Device id

Device revision

Firmware revision

IPMI version

Manufacturer id

Manufacturer

Product id

Device available

Physical Memory Array

Num slots

Banks

Max size

Storage Controller

Manufacturer

Product part number

Serial number

Bios version



Firmware version

System Power Supply

Location

Manufacturer

Product part number

Serial number

Revision

Max power capacity #1

Status #

Processor Information

Socket designation

Status

Type

Id #

Signature #

Version

Voltage #

External clock

Max speed

Current speed

Core count

Core enabled

Thread count

L1 cache handle

L2 cache handle

L3 cache handle

Temperature

Memory Module

Location

Manufacturer

Product part number

Serial number

Bank locator



Configured clock speed #

Maximum Capable Speed

Type

Installed size

Temperature

NIC

Manufacturer

Location

Device name #

Version

Firmware version

SSD

Manufacturer

Product part number

Serial number

Firmware version

Capacity

Location

Power on hours

HDD

Product part number

Serial number

Firmware version

Capacity

Location

Power on hours

SATADOM

Firmware version

Capacity

Serial number

Device model

Power on hours #

FAN



Status

Rpm

Location

1. # = Information available in AHV clusters only



6. NCC USAGE
The general usage of NCC is as follows:
nutanix@cvm$ ncc ncc-flags module sub-module [...] plugin plugin-flags

By default, the output files are generated at /home/nutanix/data/logs/ncc-output-latest.log.
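For example, to review the most recent results after a run (a simple illustration using the default output location):

nutanix@cvm$ tail /home/nutanix/data/logs/ncc-output-latest.log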
Alternatively, you can run all or individual checks from the Prism web console. Select Actions >
Run Checks. Select All checks and click Run. See Run NCC Checks.
Typing ncc with no arguments yields a table listing the next modules that can be run. The Type
column distinguishes between modules (M) and plugins (P). The Impact tag identifies a plugin
as intrusive or non-intrusive. By default, only non-intrusive checks are used if a module is run
with the run_all plugin.
For example:
nutanix@cvm$ ncc
+-----------------------------------------------------------------------------
| Type | Name | Impact | Short help
+-----------------------------------------------------------------------------
| M | cassandra_tools | N/A | Plugins to help with Cassandra ring analysis
| M | fix_failures | N/A | Fix failures
| M | hardware_info | N/A | Plugin to get or update node hardware information
| M | health_checks | N/A | All health checks
| M | help_opts | N/A | Show various options for ncc.
| M | log_collector | N/A | Collect logs on all CVMs.
| M | performance_checks | N/A | This module performs various performance checks on the cluster.
| M | pulsehd_collectors | N/A | Plugin to start the insights collectors

Learn More About NCC Health Checks


You can learn more about the Nutanix Cluster Check (NCC) health checks:

• The Nutanix Cluster Check (NCC) Guide for your NCC version provides more details about
NCC operation and command usage.
• The Nutanix support portal includes a series of Knowledge Base articles describing most
NCC health checks run by the ncc health_checks command. These articles are updated
regularly.
To search for new, updated or existing NCC health check articles on the Nutanix support portal:
1. Log on to the support portal and go to the Knowledge Base.
2. Click the Nutanix KB Articles filter search and type NCC Health Check.



3. You can sort the results by article number, title, or last modified date.

Figure 3: NCC Health Check KB Articles

Displaying NCC and Logbay Help


Get help about NCC from the command line. You can also run the NCC checks from the Health
dashboard of the Prism web console. Click Actions > Manage Checks, then select an NCC
check. Click the link to the Knowledge Base article for more information about that check.

About this task


See also the NCC Command Line Reference.

Procedure

• Show top-level help about available health check categories.


nutanix@cvm$ ncc health_checks

• Show top-level help about a specific available health check category. For example,
hypervisor_checks.
nutanix@cvm$ ncc health_checks hypervisor_checks

• Show all NCC flags to set your NCC configuration. Use these flags under the direction of
Nutanix Support.
nutanix@cvm$ ncc -help

• Show logbay log collector bundle help.


nutanix@cvm$ logbay collect --help

Run NCC Checks


Before doing any upgrade procedure, run Nutanix Cluster Check from Prism Element and Prism
Central.



Run NCC on Prism Element Clusters

• For Prism Element clusters, run the NCC checks from the Health dashboard of the Prism web console. You can choose to run all the checks at once, only the checks that failed or displayed a warning, or specific checks of your choice.
• If you are running checks by using the web console, you cannot collect the logs at the same time.
• You can also log on to a Controller VM and run NCC from the ncc command line.

Run NCC on Prism Central
For Prism Central clusters, log on to the Prism Central VM and run the NCC checks from the ncc command line. You cannot run NCC from the Prism Central web console.

Running NCC (Prism Element)

About this task


Before doing any upgrade procedure, run Nutanix Cluster Check from the Prism Element web
console or ncc command line.

Procedure

1. Do these steps to run NCC from the ncc command line.

a. Log on to a Controller VM.


b. Run NCC. See Displaying NCC and Logbay Help to display NCC help.
nutanix@cvm$ ncc health_checks run_all

If the check reports a status other than INFO or PASS, resolve the reported issues before
proceeding. If you are unable to resolve the issues, contact Nutanix Support for assistance.

2. Do these steps to run NCC from the Prism Element web console.

a. In the Health dashboard, from the Actions drop-down menu, select Run Checks.
b. Select the checks that you want to run for the cluster.

• All checks. Select this option to run all the checks at once.
• Only Failed and Warning Checks. Select this option to run only the checks that failed
or triggered a warning during the health check runs.
• Specific Checks. Select this option and type the name of the check or checks that you want to run in the text box that appears.
This field is auto-populated once you start typing the name of a check. The Added Checks box lists all the checks that you have selected for this run.
c. Select the Send the cluster check report in the email option to receive the report after
the cluster check.
To receive the email, ensure that you have configured email settings for alerts. For
more information, see the Web Console Guide.
The status of the run (succeeded or aborted) is available in the Tasks dashboard. By default,
all event-triggered checks are reported as passed. The Summary page of the Health dashboard
also updates with the status of the health check runs.
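
If a specific check fails, you can re-run only that check after remediation instead of the full
suite. A minimal sketch (cluster_version_check is shown for illustration; substitute the check
name reported in your NCC output):
nutanix@cvm$ ncc health_checks system_checks cluster_version_check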



Running NCC (Prism Central)

About this task


Before performing any upgrade procedure, log on to the Prism Central VM and run the NCC checks
from the ncc command line. You cannot run NCC from the Prism Central web console.

Procedure

1. Log on to the Prism Central VM.

2. Run NCC from the ncc command line. See Displaying NCC and Logbay Help on page 35 to
display NCC help.
nutanix@cvm$ ncc health_checks run_all

If the check reports a status other than INFO or PASS, resolve the reported issues before
proceeding. If you are unable to resolve the issues, contact Nutanix Support for assistance.
A sketch for confirming the installed NCC version follows this procedure.
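
You can also confirm which NCC version is installed on the Prism Central VM, for example when
verifying compatibility with AOS. A minimal sketch, run from the Prism Central VM (the
--version flag is assumed from common NCC usage; confirm it against ncc -help):
nutanix@pcvm$ ncc --version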

Command Line Usage Examples


Procedure

• Run all health checks.


nutanix@cvm$ ncc health_checks run_all

• Display default command flags.


nutanix@cvm$ ncc --ncc_interactive=false module sub-module [...] plugin \
--helpshort

• Run NCC with a named output file. You can review this file after the run; see the sketch
after the note below.


nutanix@cvm$ ncc --ncc_plugin_output_history_file=ncc.out health_checks \
hardware_checks ipmi_checks run_all

Note: These flags override the default configuration of the NCC modules and plugins. Do not
run NCC with these flags unless your cluster configuration requires these modifications.
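
After a run that uses --ncc_plugin_output_history_file, as in the last example above, you can
review the named file from the same Controller VM with a standard pager (ordinary shell usage,
not an NCC command):
nutanix@cvm$ less ncc.out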



7. Open Source Licenses

Nutanix Cluster Check (NCC) uses the following open source software.
Component: gosuri/uilive
License: MIT License


Copyright (c) 2015, Greg Osuri
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/
or sell copies of the Software, and to permit persons to whom
the Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT
WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.



Component: gosuri/uiprogress
License: MIT License


Copyright (c) 2015, Greg Osuri
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/
or sell copies of the Software, and to permit persons to whom
the Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT
WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.



Component: kr/fs
License: Copyright (c) 2012 The Go Authors. All rights reserved.


Redistribution and use in source and binary forms, with
or without modification, are permitted provided that the
following conditions are met:
* Redistributions of source code must retain the above
copyright notice, this list of conditions and the following
disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT
HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
OF SUCH DAMAGE.



Component: pkg/sftp
License: Copyright (c) 2013, Dave Cheney


All rights reserved.
Redistribution and use in source and binary forms, with
or without modification, are permitted provided that the
following conditions are met:
* Redistributions of source code must retain the above
copyright notice, this list of conditions and the following
disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT
HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
OF SUCH DAMAGE.

Component: paramiko/paramiko
License: Copyright (c) 2018 Jeff Forcier


This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied
warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the GNU Lesser General Public
License for more details.
You should have received a copy of the GNU Lesser General
Public License along with this library; if not, write to the
Free Software Foundation, Inc., 51 Franklin Street, Suite 500,
Boston, MA 02110-1335 USA



Component: djherbis/nio
License: The MIT License (MIT)


Copyright (c) 2015 Dustin H
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/
or sell copies of the Software, and to permit persons to whom
the Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT
WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Copyright
Copyright 2020 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the
United States and/or other jurisdictions. All other brand and product names mentioned herein
are for identification purposes only and may be trademarks of their respective holders.

License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.

Conventions

Convention            Description

variable_value        The action depends on a value that is unique to your environment.
ncli> command         The commands are executed in the Nutanix nCLI.
user@host$ command    The commands are executed as a non-privileged user (such as nutanix)
                      in the system shell.
root@host# command    The commands are executed as the root user in the vSphere or
                      Acropolis host shell.
> command             The commands are executed in the Hyper-V host shell.
output                The information is displayed as output from a command or in a log file.

Default Cluster Credentials

Interface                        Target                                          Username        Password

Nutanix web console              Nutanix Controller VM                           admin           Nutanix/4u
vSphere Web Client               ESXi host                                       root            nutanix/4u
vSphere Client                   ESXi host                                       root            nutanix/4u
SSH client or console            ESXi host                                       root            nutanix/4u
SSH client or console            AHV host                                        root            nutanix/4u
SSH client or console            Hyper-V host                                    Administrator   nutanix/4u
SSH client                       Nutanix Controller VM                           nutanix         nutanix/4u
SSH client                       Nutanix Controller VM                           admin           Nutanix/4u
IPMI web interface or ipmitool   Nutanix node                                    ADMIN           ADMIN
SSH client or console            Acropolis OpenStack Services VM (Nutanix OVM)   root            admin
SSH client or console            Xtract VM                                       nutanix         nutanix/4u
SSH client or console            Xplorer VM                                      nutanix         nutanix/4u

Version
Last modified: July 22, 2020 (2020-07-22T21:53:41-07:00)
