Field Installation Guide
Foundation 3.1
22-Apr-2016
Notice
Copyright
Copyright 2016 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions
Convention Description
variable_value The action depends on a value that is unique to your environment.
ncli> command The commands are executed in the Nutanix nCLI.
user@host$ command The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command The commands are executed as the root user in the vSphere or Acropolis host shell.
> command The commands are executed in the Hyper-V host shell.
output The information is displayed as output from a command or in a log file.
Default Cluster Credentials
Interface Target Username Password
Nutanix web console Nutanix Controller VM admin admin
vSphere Web Client ESXi host root nutanix/4u
vSphere client ESXi host root nutanix/4u
SSH client or console ESXi host root nutanix/4u
SSH client or console AHV host root nutanix/4u
SSH client or console Hyper-V host Administrator nutanix/4u
SSH client Nutanix Controller VM nutanix nutanix/4u
SSH client or console Acropolis OpenStack Services VM (Nutanix OVM) root admin
Version
Last modified: April 22, 2016 (2016-04-22 2:19:56 GMT-7)
Contents
Release Notes...................................................................................................................6
1: Field Installation Overview...................................................................10
2: Creating a Cluster................................................................................. 11
Discovering the Nodes.......................................................................................................................11
Defining the Cluster........................................................................................................................... 15
Setting Up the Nodes........................................................................................................................ 17
Selecting the Images......................................................................................................................... 19
Creating the Cluster...........................................................................................................................22
Configuring a New Cluster................................................................................................................ 24
3: Imaging Bare Metal Nodes................................................................... 26
Preparing Installation Environment....................................................................................................27
Preparing a Workstation......................................................................................................... 27
Setting Up the Network...........................................................................................................32
Configuring Global Parameters......................................................................................................... 33
Configuring Node Parameters........................................................................................................... 36
Configuring Image Parameters..........................................................................................................39
Configuring Cluster Parameters........................................................................................................ 40
Monitoring Progress...........................................................................................................................42
Cleaning Up After Installation............................................................................................................45
4: Downloading Installation Files.............................................................46
Foundation Files.................................................................................................................................47
Phoenix Files......................................................................................................................................48
5: Hypervisor ISO Images.........................................................................50
6: Network Requirements......................................................................... 52
7: Controller VM Memory Configurations............................................... 55
8: Hyper-V Installation Requirements......................................................57
9: Setting IPMI Static IP Address.............................................................61
10: Troubleshooting...................................................................................63
Fixing IPMI Configuration Problems..................................................................................................63
Fixing Imaging Problems................................................................................................................... 64
Frequently Asked Questions (FAQ)...................................................................................................65
11: Appendix: Imaging a Node (Phoenix)................................................72
Summary: Imaging Nutanix NX Series Nodes.................................................................................. 72
Summary: Imaging Lenovo Converged HX Series Nodes................................................................73
Preparing the ISO Images.................................................................................................................73
Nutanix NX Series Platforms.............................................................................................................74
Installing a Hypervisor (Nutanix NX Series Platforms)...........................................................74
Installing ESXi (Nutanix NX Series Platforms)....................................................................... 77
Installing Hyper-V (Nutanix NX Series Platforms).................................................................. 78
Installing AHV (Nutanix NX Series Platforms)........................................................................82
Attaching the Controller VM Image (Nutanix NX Series Platforms)....................................... 83
Lenovo Converged HX Series Platforms.......................................................................................... 85
Attaching an ISO Image (Lenovo Converged HX Series Platforms)...................................... 85
Installing ESXi (Lenovo Converged HX Series Platforms)..................................................... 87
Installing AHV (Lenovo Converged HX Series Platforms)......................................................88
Installing the Controller VM............................................................................................................... 89
Release Notes
The Foundation release notes provide brief, high-level descriptions of changes, enhancements, notes,
and cautions as applicable to various releases of Foundation software and to the use of the software with
specific hardware platforms. Where applicable, the description includes a solution or workaround.
Foundation Survey for Sales Engineers
Foundation is committed to improving install times and removing frustration from the process. To help
achieve that goal, we have created a survey that sales engineers can use to share their experience with
Foundation.
If you are a sales engineer, please fill out the survey at https://goo.gl/y61aqK.
Ideally, you should complete this survey after every install so that we have a large number of data points
and a good measure of how we are doing. Please bookmark the survey page.
Foundation Release 3.1.1
This release includes the following enhancements and changes:
• Foundation now configures both NTP and DNS servers on the hypervisor when you skip imaging and
only create a cluster.
• When you want to switch from the Controller VM–based imaging mode to the standalone imaging mode, you no longer have to manually change the value of foundation_workflow to ipmi in ~/foundation/config/foundation_settings.json. Foundation automatically detects the imaging workflow that is in progress.
This release includes fixes for the following issues:
• Foundation fails to install drivers, including the VAAI plug-in, when imaging hosts with ESXi.
[ENG-50287]
• Phoenix fails to configure the host during boot disk replacement. [ENG-50716]
• Foundation fails when you choose a replication factor of 3 (RF3).
• The Foundation user interface freezes after you correct an IP address.
• If the selected hypervisor image is listed as unsupported or deprecated in the whitelist of the selected
AOS installation bundle, Foundation fails to pass control to the first imaged node.
• Foundation fails if you do not specify a default gateway. [ENG-50379]
• After imaging, the cluster fails to start because the Foundation user interface passes the node model as
an imaging parameter. [ENG-50185]
• The Foundation service fails to start after upgrading from Foundation 3.0.x to 3.1.1.
• When run on Mac OS, the Foundation Java applet fails to discover free nodes and displays the
message “unable to route to host”.
The following issues have been identified in this release:
• After the imaging process is finished, the user interface might require a few seconds to indicate success
or failure. Any attempt to refresh the user interface after the imaging process is finished prevents that
status from being displayed. [ENG-53306]
• If you want to use Controller VM–based imaging to install AOS version 4.6 or later on AHV, make sure
that the Controller VM is not running Foundation 3.0.x. If it is running Foundation 3.0.x, first create the
cluster without imaging the nodes, and then upgrade AOS and the hypervisor.
If you are running AOS 4.6 or later and you want to install AHV and an AOS release earlier than 4.5, do
not use Controller VM–based imaging. Use standalone Foundation instead. [ENG-52449]
• If you choose to run post-imaging tests when imaging a cluster with AOS 4.6.1, the cluster creation
process appears to terminate prematurely with the message fatal: Running NCC. The error condition,
which is caused by the NCC test suite failing to run on AOS 4.6.1, can be ignored because Foundation
runs the NCC test suite only after the cluster has been created successfully. Close the Foundation user
interface, log on to any Controller VM in the cluster, and run the NCC test suite manually by using the
following command:
nutanix@cvm$ ncc health_checks run_all
Workaround: You can avoid this error condition by choosing not to run post-imaging tests with
Foundation. After you create the cluster, log on to any Controller VM and run NCC manually.
Foundation Release 3.1.0.1
This release is the same as release 3.1 except that it contains an updated Acropolis hypervisor image
(AHV 20160217) which incorporates a security advisory fix. If 3.1 is installed currently, you can download
the AHV 20160217 installer and the new whitelist to achieve the same thing as upgrading to 3.1.0.1.
Foundation Release 3.1
This release includes the following enhancements and changes:
• Foundation currently supports VLAN tagging when adding nodes to an existing cluster (see "Expanding
a Cluster" in the Web Console Guide). This VLAN tagging support has been extended in release 3.1 to
include initial node imaging when creating a cluster (see Discovering the Nodes on page 11). VLAN
tagging support is limited to the Controller VM-based version of Foundation; it is not supported in the
standalone version. VLAN support when adding nodes applies to any hypervisor (AHV, ESXi, or Hyper-
V), but VLAN support for initial node imaging is limited to AHV (not ESXi or Hyper-V).
• Error logging has been expanded and improved for usability. Enhancements include the following:
• Node and cluster log files are now named node_cvm_ip_addr.log and cluster_cluster_name.log. They are still located in ~/data/logs/foundation (Controller VM based) or /home/nutanix/foundation/log (bare metal).
• Messages now include an associated log level. In the case of a failure, you can quickly find the relevant issues by searching for messages that include ERROR or FATAL in the body, as in the following example message:
20151229 15:47:32 ERROR Command 'mkisofs -o foundation.node_10.1.87.247.iso phoenix_cd_image' returned error code 255
• There is a new debug.log file that records every bit of information Foundation outputs.
• There is a new api.log file that records certain requests made to the Foundation API.
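If you prefer to scan a log from a shell, a search along the following lines works. This is a sketch: the file name follows the node log naming described above, with 10.1.87.247 standing in for a Controller VM IP address, and the path shown is the Controller VM–based location.
nutanix@cvm$ grep -E "ERROR|FATAL" ~/data/logs/foundation/node_10.1.87.247.log
For a bare metal installation, run the same search against the files in /home/nutanix/foundation/log on the Foundation VM.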
• Progress monitoring has been improved. Some of the previously sequential tasks now run in parallel on
a per node basis, and the Foundation monitoring screen displays progress messages as the tasks are
processed (see Creating the Cluster on page 22).
• Support has been added for two additional Hyper-V SKUs, Datacenter with GUI and Standard with
GUI (see Selecting the Images on page 19).
• An AHV ISO file is no longer provided. Instead, an AHV tar file is provided, which is used to generate
an AHV ISO when needed. An AHV tar file is included as part of the AOS download bundle. You can
also download an AHV tar file from the support portal. The AHV tar file included in AOS is named
kvm_host_bundle_version#.tar.gz; choose the appropriate tar file when selecting an AHV version (see
Selecting the Images on page 19).
• When monitoring progress during a standalone Foundation (see Monitoring Progress on page 42),
clicking the general progress Log link may result in a "404 Not Found" error instead of displaying the
service.log contents. [ENG-49790]
• When you upgrade to AOS 4.6, the Controller VM-based version of Foundation is automatically
upgraded to version 3.1. In addition, Prism now supports upgrading Foundation (post 3.1) through the
standard 1-click upgrade mechanism. See the "Upgrading Foundation: 1-Click Upgrade or Manual
Upgrade" section in the Web Console Guide for instructions on how to upgrade Foundation through
Prism.
If you install AOS 4.6 on direct-from-the-factory node hardware, an older version of Foundation is installed. The workaround for this issue is to upgrade Foundation through the Prism web console after installing AOS 4.6. See the Prism Web Console Guide and KB 3068 for more information.
• To upgrade standalone (bare metal) Foundation to version 3.1 from any earlier version, do the following:
1. Download the Foundation upgrade bundle (foundation-version#.tar.gz) from the support portal to
the /home/nutanix directory.
2. Copy the /home/nutanix/foundation/config/foundation_settings.json file to a safe location. (You
will copy it back in step 5.)
3. If you want to save the existing log files, copy the /home/nutanix/foundation/log directory to a safe location or download the log archive from the following URL: http://foundation_ip:8000/foundation/log_archive.tar.
4. If you want to preserve the existing ISO files (the contents of the isos and nos directories), enter the
following commands:
$ cd /home/nutanix
$ cp -r foundation/isos .
$ cp -r foundation/nos .
5. Do one of the following:
• If upgrading from 3.x to 3.1, enter the following commands:
$ cd /home/nutanix # If not already there
$ pkill -9 foundation
$ rm -rf foundation
$ tar xf foundation-version#.tar.gz
$ cp <path>/foundation_settings.json foundation/config/foundation_settings.json
# If step 4 done, save new AHV files and restore backed up ISO files
$ mv foundation/isos/hypervisor/kvm/* isos/hypervisor/kvm/
$ mv isos foundation/isos
$ mv nos foundation/nos
$ sudo service foundation_service restart
• If upgrading from 2.x to 3.1, enter the following commands:
$ cd /home/nutanix # If not already there
$ pkill -9 foundation
$ sudo fusermount -uz foundation/tmp/fuse # It is okay if this complains at you
$ rm -rf foundation
$ tar xf foundation-version#.tar.gz
$ cd /etc/init.d
$ sudo rm foundation_service
$ sudo ln -s /home/nutanix/foundation/bin/foundation_service
$ sudo yum -y install libunwind-devel
$ cd /home/nutanix
$ cp <path>/foundation_settings.json foundation/config/foundation_settings.json
# If step 4 done, save new AHV files and restore backed up ISO files
$ mv foundation/isos/hypervisor/kvm/* isos/hypervisor/kvm/
$ mv isos foundation/isos
$ mv nos foundation/nos
$ sudo service foundation_service restart
Hardware Platform–Specific Notes
The following notes apply to the use of Foundation 3.1.x with specific hardware platforms:
NX-3175-G4
• One or more of the following issues result from the use of an unsupported 1000BASE-T Copper
SFP transceiver module (SFP-to-RJ45 adapter) when imaging NX-3175-G4 nodes:
• Foundation times out with the following message in node logs: INFO: Populating firmware information for device bmc...
• Foundation fails at random stages.
• Foundation cannot communicate with the baseboard management controller (BMC).
To avoid encountering these issues, use a supported SFP-to-RJ45 adapter. For information about
supported adapters, see KB2422.
1: Field Installation Overview
Nutanix installs the Acropolis hypervisor (AHV) and the Nutanix Controller VM at the factory before
shipping a node to a customer. To use a different hypervisor (ESXi or Hyper-V) on factory nodes or to use
any hypervisor on bare metal nodes, the nodes must be imaged in the field. This guide provides step-by-
step instructions on how to use the Foundation tool to do a field installation, which consists of installing
a hypervisor and the Nutanix Controller VM on each node and then creating a cluster. You can also use
Foundation to create just a cluster from nodes that are already imaged or to image nodes without creating
a cluster.
Note: Use Foundation to image factory-prepared (or bare metal) nodes and create a new cluster
from those nodes. Use the Prism web console (in clusters running AOS 4.5 or later) to image
factory-prepared nodes and then add them to an existing cluster. See the "Expanding a Cluster"
section in the Web Console Guide for this procedure.
A field installation can be performed for either factory-prepared nodes or bare metal nodes.
• See Creating a Cluster on page 11 to image factory-prepared nodes and create a cluster from those
nodes (or just create a cluster for nodes that are already imaged).
• See Imaging Bare Metal Nodes on page 26 to image bare metal nodes and optionally configure
them into one or more clusters.
Note: Foundation supports imaging an ESXi, Hyper-V, or AHV hypervisor on nearly all Nutanix
hardware models with some restrictions. Click here (or log into the Nutanix support portal and
select Documentation > Compatibility Matrix from the main menu) for a list of supported
configurations. To check a particular configuration, go to the Filter By fields and select the
desired model, AOS version, and hypervisor in the first three fields and then set the last field to
Foundation. In addition, check the notes at the bottom of the table.
2: Creating a Cluster
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on discovered
nodes and how to configure the nodes into a cluster. "Discovered nodes" are factory prepared nodes on
the same subnet that are not part of a cluster currently. This procedure runs the Foundation tool through
the Nutanix Controller VM.
Note: This method creates a single cluster from discovered nodes. This method is limited to
factory prepared nodes running AOS 4.5 or later. If you want to image discovered nodes without
creating a cluster, image factory prepared nodes running an earlier AOS (NOS) version, or image
bare metal nodes, see Imaging Bare Metal Nodes on page 26.
To image the nodes and create a cluster, do the following:
1. Download the required files, start the cluster creation GUI, and run discovery (see Discovering the
Nodes on page 11).
2. Define cluster parameters; specify Controller VM, hypervisor, and (optionally) IPMI global network
addresses; and (optionally) enable health tests after the cluster is created (see Defining the Cluster on
page 15).
3. Configure the discovered nodes (Setting Up the Nodes on page 17).
4. Select the AOS and hypervisor images to use (see Selecting the Images on page 19).
5. Start the process and monitor progress as the nodes are imaged and the cluster is created (see
Creating the Cluster on page 22).
6. After the cluster is created successfully, begin configuring the cluster (see Configuring a New Cluster on
page 24).
Discovering the Nodes
Before you begin:
• Physically install the Nutanix nodes at your site. See the Physical Installation Guide for your model type
for installation instructions.
Note: If you have nodes running a version lower than AOS 4.5, you cannot use this (Controller
VM-based) method to create a cluster. Contact Nutanix customer support for help in creating
the cluster using the standalone (bare metal) method.
• Your workstation must be connected to the network on the same subnet as the nodes you want to
image. Foundation does not require an IPMI connection or any special network port configuration to
image discovered nodes. See Network Requirements on page 52 for general information about the
network topology and port access required for a cluster.
• Determine the appropriate network (gateway and DNS server IP addresses), cluster (name, virtual IP
address), and node (Controller VM, hypervisor, and IPMI IP address ranges) parameter values needed
for installation.
Note: The use of a DHCP server is not supported for Controller VMs, so make sure to assign
static IP addresses to Controller VMs.
1. Open a browser, go to the Nutanix support portal (see Downloading Installation Files on page 46),
and download the following files to your workstation.
• FoundationApplet-offline.zip installation bundle from the Foundation download page.
Note: If you install from a workstation that has Internet access, you can forego downloading
this bundle and simply select the link to nutanix_foundation_applet.jnlp directly from the
support portal (see step 2). Otherwise, you must first download (and unpack) the installation
bundle.
• nutanix_installer_package-version#.tar.gz from the AOS (NOS) download page. This is the
installation bundle used for imaging the desired AOS release.
• hypervisor ISO if installing Hyper-V or ESXi. The user must provide the supported Hyper-V or ESXi
ISO (see Hypervisor ISO Images on page 50); Hyper-V and ESXi ISOs are not available from the
support portal.
• It is not necessary to download an AHV ISO because both Foundation and the AOS bundle include
an AHV installation bundle. However, you have the option to download an AHV upgrade installation
bundle if you want to install a non-default version of AHV.
2. Do one of the following:
• Click the jnlp from online bundle link on the Nutanix support portal to download and start the Java applet.
• Unzip the FoundationApplet-offline.zip installation bundle that was downloaded in step 1 and then
start (double-click) the nutanix_foundation_applet.jnlp Java applet.
The discovery process begins and a window appears with a list of discovered nodes.
Note: A security warning message may appear indicating this is from an unknown source.
Click the accept and run buttons to run the application.
Figure: Foundation Launcher Window
3. Select (click the line for) a node to be imaged from the list and then click the Launch Foundation
button.
This launches the cluster creation GUI. The selected node is imaged first and is then used to image the other nodes. Only nodes with a status field value of Free can be selected; this status indicates the node is not currently part of a cluster. A value of Unavailable indicates the node is part of an existing cluster or is otherwise unavailable. To rerun the discovery process, click the Retry discovery button.
Note: A warning message may appear stating this is not the highest available version of
Foundation found in the discovered nodes. If you select a node using an earlier Foundation
version (one that does not recognize one or more of the node models), installation may fail
when Foundation attempts to image a node of an unknown model. Therefore, select the
node with the highest Foundation version among the nodes to be imaged. (You can ignore
the warning and proceed if you do not intend to select any of the nodes that have the higher
Foundation version.)
Foundation searches the network subnet for unconfigured Nutanix nodes (factory prepared nodes that
are not part of a cluster) and then displays information about the discovered blocks and nodes in the
Discovered Nodes screen. (It does not display information about nodes that are powered off or in a
different subnet.) The discovery process normally takes just a few seconds.
Note: If you want Foundation to image nodes from an existing cluster, you must first either
remove the target nodes from the cluster or destroy the cluster.
4. Select (check the box for) the nodes to be imaged.
All discovered blocks and nodes are displayed by default, including those that are already in an existing cluster. An exclamation mark icon is displayed for unavailable (already in a cluster) nodes; these nodes cannot be selected. All available nodes are selected by default.
Note: A cluster requires a minimum of three nodes. Therefore, you must select at least three
nodes.
Note: If a discovered node has a VLAN tag, that tag is displayed. Foundation applies an
existing VLAN tag when imaging the node, but you cannot use Foundation to edit that tag or
add a tag to a node without one.
• To display just the available nodes, select the Show only new nodes option from the pull-down
list on the right of the screen. (Blocks with unavailable nodes only do not appear, but a block with
both available and unavailable nodes does appear with the exclamation mark icon displayed for the
unavailable nodes in that block.)
• To deselect nodes you do not want to image, uncheck the boxes for those nodes. Alternately, click
the Deselect All button to uncheck all the nodes and then select those you want to image. (The
Select All button checks all the nodes.)
Note: You can get help or reset the configuration at any time from the gear icon pull-down
menu (top right). Internet access is required to display the help pages, which are located in the
Nutanix support portal.
Figure: Discovery Screen
5. Set the redundancy factor (RF) for the cluster to be created.
The redundancy factor specifies the number of times each piece of data is replicated in the cluster.
• Setting this to 2 means there will be two copies of data, and the cluster can tolerate the failure of any
single node or drive.
• Setting this to 3 means there will be three copies of data, and the cluster can tolerate the failure of
any two nodes or drives in different blocks. RF 3 requires that the cluster have at least five nodes,
and it can be enabled only when the cluster is created. (In addition, containers must have replication
factor 3 for guest VM data to withstand the failure of two nodes.)
The default setting for a cluster is RF 2. To set it to RF 3, do the following:
a. Click the Change RF (2) button.
The Change Redundancy Factor window appears.
b. Click (check) the RF 3 button and then click the Save Changes button.
The window disappears and the button changes to Change RF (3), indicating that the redundancy factor is now set to 3.
Figure: Change Redundancy Factor Window
6. Click the Next button at the bottom of the screen to configure cluster parameters (see Defining the
Cluster on page 15).
Defining the Cluster
Before you begin: Complete Discovering the Nodes on page 11.
The Define Cluster configuration screen appears. This screen allows you to define a new cluster and
configure global network parameters for the Controller VM, hypervisor, and (optionally) IPMI. It also allows
you to enable diagnostic and health tests after creating the cluster.
Figure: Cluster Screen
1. In the Cluster Information section, do the following in the indicated fields:
a. Name: Enter a name for the cluster.
b. IP Address (optional): Enter an external (virtual) IP address for the cluster.
This field sets a logical IP address that always points to an active Controller VM (provided the cluster
is up), which removes the need to enter the address of a specific Controller VM. This parameter is
required for Hyper-V clusters and is optional for ESXi and AHV clusters.
c. NTP Server Address (optional): Enter the NTP server IP address or (pool) domain name.
d. DNS Server IP (optional): Enter the DNS server IP address.
2. (optional) Click the Enable IPMI slider button to specify an IPMI address.
When this button is enabled, fields for IPMI global network parameters appear below. Foundation does
not require an IPMI connection, so this information is not required. However, you can use this option to
configure IPMI for your use.
3. In the Network Information section, do the following in the indicated fields:
a. CVM Netmask: Enter the Controller VM netmask value.
b. CVM Gateway: Enter an IP address for the gateway.
c. CVM Memory: Select a memory size for the Controller VM from the pull-down list. For more
information about Controller VM memory configuration, see Controller VM Memory Configurations
on page 55.
This field is set initially to default. (The default amount varies according to the node model type.)
The other options allow you to specify a memory size of 16 GB, 24 GB, 32 GB, or 64 GB. The
default setting represents the recommended amount for the model type. Assigning more memory
than the default might be appropriate in certain situations.
d. Hypervisor Netmask: Enter the hypervisor netmask value.
e. Hypervisor Gateway: Enter an IP address for the gateway.
f. Hypervisor DNS Server IP: Enter the IP address of the DNS server.
Note: The following IPMI fields appear only if the IPMI button was enabled in the previous step.
g. IPMI Netmask: Enter the IPMI netmask value.
h. IPMI Gateway: Enter an IP address for the gateway.
i. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.
j. IPMI Password: Enter the IPMI password. The default password is ADMIN.
Check the show password box to display the password.
4. (optional) Click the Enable Testing slider button to run the Nutanix Cluster Check (NCC) after the
cluster is created.
The NCC is a test suite that checks a variety of health metrics in the cluster. The results are stored in
the ~/foundation/logs/ncc directory.
5. Click the Next button at the bottom of the screen to configure the cluster nodes (see Setting Up the
Nodes on page 17).
Setting Up the Nodes
Before you begin: Complete Defining the Cluster on page 15.
The Setup Node configuration screen appears. This screen allows you to specify the Controller VM,
hypervisor, and (if enabled) IPMI IP addresses for each node.
Figure: Node Screen
1. In the Hostname and IP Range section, do the following in the indicated fields:
a. Hypervisor Hostname: Enter a base host name for the set of nodes. Host names should contain
only digits, letters, and hyphens.
The base name with a suffix of "-1" is assigned as the host name of the first node, and the base
name with "-2", "-3" and so on are assigned automatically as the host names of the remaining nodes.
b. CVM IP: Enter the starting IP address for the set of Controller VMs across the nodes.
Enter a starting IP address in the FROM/TO line of the CVM IP column. The entered address is
assigned to the Controller VM of the first node, and consecutive IP addresses (sequentially from
the entered address) are assigned automatically to the remaining nodes. Discovered nodes are
sorted first by block ID and then by position, so IP assignments are sequential. If you do not want
all addresses to be consecutive, you can change the IP address for specific nodes by updating the
address in the appropriate fields for those nodes.
c. Hypervisor IP: Repeat the previous step for this field.
This sets the hypervisor IP addresses for all the nodes.
Caution: The Nutanix high availability features require that both hypervisor and Controller
VM be in the same subnet. Putting them in different subnets reduces the failure protection
provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended
that you keep both hypervisor and Controller VM in the same subnet.
d. IPMI IP (when enabled): Repeat the previous step for this field.
This sets the IPMI port IP addresses for all the nodes. This column appears only when IPMI is
enabled on the previous cluster setup screen.
2. In the Manual Input section, review the assigned host names and IP addresses. If any of the names or
addresses are not correct, enter the desired name or IP address in the appropriate field.
There is a section for each block with a line for each node in the block. The letter designation (A, B, C,
and D) indicates the position of that node in the block.
3. When all the host names and IP addresses are correct, click the Validate Network button at the bottom
of the screen.
This does a ping test to each of the assigned IP addresses to check whether any of those addresses
are being used currently.
• If there are no conflicts (none of the addresses return a ping), the process continues (see Selecting
the Images on page 19).
• If there is a conflict (one or more addresses returned a ping), this screen reappears with the
conflicting addresses highlighted in red. Foundation will not continue until the conflict is resolved.
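You can also reproduce this check manually from any machine on the same subnet; an address that answers is already in use. The following is a sketch, with ip_address standing in for an address you plan to assign (on a Windows workstation, use ping -n 3 instead of ping -c 3):
$ ping -c 3 ip_address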
Selecting the Images
Before you begin: Complete Setting Up the Nodes on page 17.
The Select Images configuration screen appears. This screen allows you to specify and upload the AOS
and hypervisor image files to use.
Figure: Images Screen
1. Select the AOS image as follows:
Note: An AOS image may already be present. If it is the desired version and is running on the desired hypervisor type and version, skip to step 4. However, if the hypervisor type or version is not correct, you must upload an AOS installation bundle even if the version already present is the same as the one you upload.
a. In the AOS (left) column, click the Upload Tarball button and then click the Choose File button.
Figure: File Selection Buttons
b. In the file search window, find and select the AOS installation bundle downloaded earlier (see
Discovering the Nodes on page 11) and then click the Upload button.
Uploading an image file (AOS or hypervisor) may take some time (possibly a few minutes). When a
new AOS installation bundle is uploaded, any existing AOS installation bundles in ~/foundation/nos
(if present) are deleted.
2. Select the hypervisor (AHV, ESX, or HYPERV) in the Hypervisor (middle) column and do the following:
Note: The hypervisor field is not active until the AOS installation bundle is selected (uploaded)
in the previous step. Foundation comes with an AHV image. If that is the correct image to use,
skip to the next step.
• AHV: Select the desired AHV installation bundle from the pull-down list. This list
includes the default AHV installation bundle included with Foundation (named
host_bundle_el6.nutanix.version#.tar.gz) and the AHV installation bundle included in the AOS
bundle (named kvm_host_bundle_version#.tar.gz). If you downloaded an AHV installation bundle
that you want to use (and it is not in the list), click the Upload RPM Tarball button to upload the
desired installation bundle from wherever you downloaded it.
• ESXi or Hyper-V: Click the Upload ISO button and then click the Choose File button. In the file
search window, find and select the ESXi or Hyper-V ISO image you downloaded earlier and then
click the Upload button.
Only approved hypervisor versions are permitted; Foundation will not image nodes with an unapproved
version. To verify your version is on the approved list, click the See Whitelist link and select the
appropriate hypervisor tab in the pop-up window. Nutanix updates the list as new versions are
approved, and the current version of Foundation may not have the latest list. If your version does not
appear on the list, click the Update the whitelist link to download the latest whitelist from the Nutanix
support portal.
Figure: Whitelist Compatibility List Window
3. [Hyper-V only] Click the radio button in the SKU (right) column for the Hyper-V version to use.
Five Hyper-V versions are supported: Free, Standard, Datacenter, Standard with GUI, Datacenter
with GUI. This column appears only when you select Hyper-V.
Note: See Hyper-V Installation Requirements on page 57 for additional considerations
when installing a Hyper-V cluster.
4. When both images are uploaded and ready, do one of the following:
→ To image the nodes and then create the new cluster, click the Create Cluster button at the bottom of
the screen.
→ To create the cluster without imaging the nodes, click the Skip Imaging button (in either case see
Creating the Cluster on page 22).
Note: The Skip Imaging option requires that all the nodes have the same hypervisor
and AOS version. This option is disabled if they are not all the same (with the exception of
any model NX-6035C "cold" storage nodes in the cluster that run AHV regardless of the
hypervisor running on the other nodes).
Creating the Cluster
Before you begin: Complete Selecting the Images on page 19.
After clicking the Create Cluster or Skip Imaging button (in the Select Images screen), the Create Cluster
screen appears. This is a dynamically updated display that provides progress information about node
imaging and cluster creation.
1. Monitor the node imaging and cluster creation progress.
The progress screen includes the following sections:
• Progress bar at the top (blue during normal processing or red when there is a problem).
• Cluster Creation Status section with a line for the cluster being created (status indicator, cluster
name, progress message, and log link).
• Node Status section with a line for each node being imaged (status indicator, IPMI IP address,
progress message, and log link).
Figure: Foundation Progress Screen: Ongoing
The status message for each node (in the Node Status section) displays the imaging percentage
complete and current step. The selected node (see Discovering the Nodes on page 11) is imaged
first. When that imaging is complete, the remaining nodes are imaged in parallel. The imaging process
takes about 30 minutes, so the total time is about an hour (30 minutes for the first node and another 30
minutes for the other nodes imaged in parallel). You can monitor overall progress by clicking the Log
link at the top, which displays the foundation.out contents in a separate tab or window. Click on the
Log link for a node to display the log file for that node in a separate tab or window.
Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains
more than 21 nodes, add an extra 30 minutes processing time for each group of 20 nodes.
When installation moves to cluster creation, the status message displays the percentage complete and
current step. Cluster creation happens quickly, but this step could take some time if you enabled the
post-creation tests. Click on the Log link for a cluster to display the log file for the cluster in a separate
tab or window.
2. When processing completes successfully, either open the Prism web console and begin configuring the
cluster (see Configuring a New Cluster on page 24) or exit from Foundation.
When processing completes successfully, a "Cluster creation successful" message appears. This
means imaging both the hypervisor and Nutanix Controller VM across all the nodes in the cluster was
successful (when imaging was not skipped) and cluster creation was successful.
• To configure the cluster, click the Prism link. This opens the Prism web console (login required using
the default "admin" username and password). See Configuring a New Cluster on page 24 for
initial cluster configuration steps.
• To download the log files, click the Export Logs link. This packages all the log files into a
log_archive.tar file and allows you to download that file to your workstation.
The Foundation service shuts down two hours after imaging. If you go to the cluster creation success page after a long absence and the Export Logs link does not work (or your terminal went to sleep and there is no response after refreshing it), you can point the browser to one of the Controller VM IP addresses. If the Prism web console appears, installation completed successfully, and you can get the logs from ~/data/logs/foundation on the node that was imaged first.
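If you need the log files on your workstation and the Export Logs link is no longer available, one way to copy them is over SSH, using the Controller VM credentials listed under Default Cluster Credentials. This is a sketch; replace cvm_ip with the IP address of the Controller VM that was imaged first:
$ scp -r nutanix@cvm_ip:data/logs/foundation ./foundation_logs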
Note: If nothing loads when you refresh the page (or it loads one of the configuration pages), the web browser might have missed the hand-off between the node that starts imaging and the first node imaged. This can happen because the web browser went to sleep, you closed the browser, or you lost connectivity for some other reason. In this case, enter http://cvm_ip for any Controller VM, which should open the Prism GUI if imaging has completed. If this does not work, enter http://cvm_ip:8000/gui on each of the Controller VMs in the cluster until you see the progress screen, from which you can continue monitoring progress.
Figure: Foundation Progress Screen: Successful Installation
3. If processing does not complete successfully, review and correct the problem(s), and then restart the
process.
If the progress bar turns red with a "There were errors in the installation" message and one or more
node or cluster entries have a red X in the status column, the installation failed at the node imaging or
cluster creation step. To correct such problems, see Fixing Imaging Problems on page 64. Clicking
the Back to config button returns you to the configuration screens to correct any entries. The default
per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can
expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that
amount of time.
Note: If an imaging problem occurs, it typically appears when imaging the first node. In that
case Foundation will not attempt to image the other nodes, so only the first node will be in an
unstable state. Once the problem is resolved, the first node can be re-imaged and then the
other nodes imaged normally.
Figure: Foundation Progress Screen: Unsuccessful Installation
Configuring a New Cluster
After creating the cluster, you can configure it through the Prism web console. A storage pool and a container are created automatically when the cluster is created, but many other setup options require user action. The following are common cluster setup steps typically done soon after creating a cluster. (All the sections cited in the following steps are in the Prism Web Console Guide.)
1. Verify the cluster has passed the latest Nutanix Cluster Check (NCC) tests.
a. Check the installed NCC version and update it if a later version is available (see the "Software and
Firmware Upgrades" section).
b. Run NCC if you downloaded a newer version or did not run it as part of the install.
Running NCC must be done from a command line. Open a command window, log on to any
Controller VM in the cluster with SSH, and then run the following command:
nutanix@cvm$ ncc health_checks run_all
If the check reports a status other than PASS, resolve the reported issues before proceeding. If you
are unable to resolve the issues, contact Nutanix support for assistance.
2. Specify the timezone of the cluster.
Specifying the timezone must be done from the Nutanix command line (nCLI). While logged in to the
Controller VM (see previous step), run the following commands:
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=cluster_timezone
Replace cluster_timezone with the timezone of the cluster (for example, America/Los_Angeles, Europe/
London, or Asia/Tokyo). Restart all Controller VMs in the cluster after changing the timezone. Because a
cluster can tolerate only a single Controller VM unavailable at any one time, restart the Controller VMs
in a series, waiting until one has finished starting before proceeding to the next. See the Command
Reference for more information about using the nCLI.
3. Specify an outgoing SMTP server (see the "Configuring an SMTP Server" section).
4. If the site security policy allows Nutanix customer support to access the cluster, enable the remote
support tunnel (see the "Controlling Remote Connections" section).
Caution: Failing to enable remote support prevents Nutanix support from directly addressing cluster issues. Nutanix recommends that all customers allow email alerts at a minimum because doing so enables proactive support of customer issues.
5. If the site security policy allows Nutanix support to collect cluster status information, enable the Pulse
feature (see the "Configuring Pulse" section).
This information is used by Nutanix support to diagnose potential problems and provide more informed
and proactive help.
6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert emails (see the
"Configuring Email Alerts" section).
You also have the option to specify email recipients for specific alerts (see the "Configuring Alert
Policies" section).
7. If the site security policy allows automatic downloads to update AOS and other upgradeable cluster
elements, enable that feature (see the "Software and Firmware Upgrades" section).
Note: Allow access to the following through your firewall to ensure that automatic download of
updates can function:
• *.compute-*.amazonaws.com:80
• release-api.nutanix.com:80
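One way to confirm that this access is open is a quick TCP connectivity test from a machine behind the same firewall, assuming the netcat utility is available (the wildcard Amazon hosts cannot be checked with a single name, so only the fixed endpoint is shown):
$ nc -zv release-api.nutanix.com 80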
8. License the cluster (see the "License Management" section).
9. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.
• vCenter: See the Nutanix vSphere Administration Guide.
• SCVMM: See the Nutanix Hyper-V Administration Guide.
3: Imaging Bare Metal Nodes
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on bare metal
nodes and optionally configure the nodes into one or more clusters. "Bare metal" nodes are those that
are not factory prepared or cannot be detected through discovery. You can also use this method to image
factory prepared nodes that you do not want to configure into a cluster.
Before you begin:
Note: Imaging bare metal nodes is restricted to Nutanix sales engineers, support engineers, and
partners. Contact Nutanix customer support or your partner for help with this procedure.
• Physically install the nodes at your site. See the Physical Installation Guide for your model type for
installation instructions.
• Set up the installation environment (see Preparing Installation Environment on page 27).
Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you will
get a Foundation timeout error if you do not change the boot order back to virtual CD-ROM in
the BIOS.
Note: If STP (spanning tree protocol) is enabled, it can cause Foundation to timeout during the
imaging process. Therefore, disable STP before starting Foundation.
Note: Avoid connecting any device (that is, plugging it into a USB port on a node) that presents virtual media, such as a CD-ROM. Such a device could conflict with the Foundation installation when it tries to mount the virtual CD-ROM hosting the installation ISO.
• Have ready the appropriate global, node, and cluster parameter values needed for installation. The use
of a DHCP server is not supported for Controller VMs, so make sure to assign static IP addresses to
Controller VMs.
Note: If the Foundation VM IP address set previously was configured in one (typically public)
network environment and you are imaging the cluster on a different (typically private) network
in which the current address is no longer correct, repeat step 13 in Preparing a Workstation on
page 27 to configure a new static IP address for the Foundation VM.
To image the nodes and create a cluster(s), do the following:
Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may
not reflect the latest features described in this section.)
1. Prepare the installation environment:
a. Download necessary files and prepare a workstation (see Preparing a Workstation on page 27).
b. Connect the workstation and nodes to be imaged to the network (Setting Up the Network on
page 32).
2. Start the Foundation VM and configure global parameters (see Configuring Global Parameters on
page 33).
3. Configure the nodes to image (see Configuring Node Parameters on page 36).
4. Select the images to use (see Configuring Image Parameters on page 39).
5. [optional] Configure one or more clusters to create and assign nodes to the clusters (see Configuring
Cluster Parameters on page 40).
6. Start the imaging process and monitor progress (see Monitoring Progress on page 42).
7. If a problem occurs during configuration or imaging, evaluate and resolve the problem (see
Troubleshooting on page 63).
8. [optional] Clean up the Foundation environment after completing the installation (see Cleaning Up After
Installation on page 45).
Preparing Installation Environment
Standalone (bare metal) imaging is performed from a workstation with access to the IPMI interfaces of the
nodes in the cluster. Imaging a cluster in the field requires first installing certain tools on the workstation
and then setting the environment to run those tools. This requires two preparation tasks:
1. Prepare the workstation. Preparing the workstation can be done on or off site at any time prior to
installation. This includes downloading ISO images, installing Oracle VM VirtualBox, and using
VirtualBox to configure various parameters on the Foundation VM (see Preparing a Workstation on
page 27).
2. Set up the network. The nodes and workstation must have network access to each other through a
switch at the site (see Setting Up the Network on page 32).
Preparing a Workstation
A workstation is needed to host the Foundation VM during imaging. To prepare the workstation, do the
following:
Note: You can perform these steps either before going to the installation site (if you use a portable
laptop) or at the site (if you can connect to the web).
1. Get a workstation (laptop or desktop computer) that you can use for the installation.
The workstation must have at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of disk
space (preferably SSD), and a physical (wired) network adapter.
2. Go to the Foundation download page in the Nutanix support portal (see Downloading Installation Files
on page 46) and download the following files to a temporary directory on the workstation.
• Foundation_VM_OVF-version#.tar. This tar file includes the following files:
• Foundation_VM-version#.ovf. This is the Foundation VM OVF configuration file for the version#
release, for example Foundation_VM-3.1.ovf.
• Foundation_VM-version#-disk1.vmdk. This is the Foundation VM VMDK file for the version#
release, for example Foundation_VM-3.1-disk1.vmdk.
• VirtualBox-version#-[OSX|Win].[dmg|exe]. This is the Oracle VM VirtualBox installer for Mac
OS (VirtualBox-version#-OSX.dmg) or Windows (VirtualBox-version#-Win.exe). Oracle VM
VirtualBox is a free open source tool used to create a virtualized environment on the workstation.
Note: Links to the VirtualBox files may not appear on the download page for every
Foundation version. (The Foundation 2.0 download page has links to the VirtualBox files.)
• nutanix_installer_package-version#.tar.gz. This is the tar file used for imaging the desired AOS
release. Go to the AOS (NOS) download page on the support portal to download this file.
• If you want to run the diagnostics test after creating a cluster, download the diagnostics test file(s) for
your hypervisor from the Tools & Firmware download page on the support portal:
• AHV: diagnostic.raw.img.gz
• ESXi: diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf
• Hyper-V: diagnostics_uvm.vhd.gz
3. Go to the download location and extract Foundation_VM_OVF-version#.tar by entering the following
command:
$ tar -xf Foundation_VM_OVF-version#.tar
Note: This assumes the tar command is available. If it is not, use the corresponding tar utility
for your environment.
4. Open the Oracle VM VirtualBox installer and install Oracle VM VirtualBox using the default options.
See the Oracle VM VirtualBox User Manual for installation and start up instructions (https://
www.virtualbox.org/wiki/Documentation).
Note: This section describes how to use Oracle VM VirtualBox to create a virtual environment.
Optionally, you can use an alternate tool such as VMware vSphere in place of Oracle VM
VirtualBox.
5. Create a new folder called VirtualBox VMs in your home directory.
On a Windows system this is typically C:\Users\user_name\VirtualBox VMs.
6. Copy the Foundation_VM-version#.ovf and Foundation_VM-version#-disk1.vmdk files to the VirtualBox
VMs folder that you created in step 5.
7. Start Oracle VM VirtualBox.
Figure: VirtualBox Welcome Screen
8. Click the File option of the main menu and then select Import Appliance from the pull-down list.
9. Find and select the Foundation_VM-version#.ovf file, and then click Next.
10. Click the Import button.
11. In the left column of the main screen, select Foundation_VM-version# and click Start.
The Foundation VM console launches and the VM operating system boots.
12. At the login screen, log in as the nutanix user with the password nutanix/4u.
The Foundation VM desktop appears (after it loads).
13. If you want to enable file drag-and-drop functionality between your workstation and the Foundation VM, install the VirtualBox Guest Additions as follows:
a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD
Image... from the menu.
A VBOXADDITIONS CD entry appears on the Foundation VM desktop.
b. Click OK when prompted to Open Autorun Prompt and then click Run.
c. Enter the root password (nutanix/4u) and then click Authenticate.
d. After the installation is complete, press the return key to close the VirtualBox Guest Additions
installation window.
e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.
f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.
Note: A reboot is necessary for the changes to take effect.
g. After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on
the VirtualBox window for the Foundation VM.
14. Open a terminal session and run the ifconfig command to determine if the Foundation VM was able to
get an IP address from the DHCP server.
If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as
follows:
Note: Normally, the Foundation VM needs to be on a public network in order to copy selected
ISO files to the Foundation VM in the next two steps. This might require setting a static IP
address now and setting it again when the workstation is on a different (typically private)
network for the installation (see Imaging Bare Metal Nodes on page 26).
a. Double click the set_foundation_ip_address icon on the Foundation VM desktop.
Figure: Foundation VM: Desktop
b. In the pop-up window, click the Run in Terminal button.
Figure: Foundation VM: Terminal Window
c. In the Select Action box in the terminal window, select Device Configuration.
Note: Selections in the terminal window can be made using the indicated keys only. (Mouse
clicks do not work.)
Figure: Foundation VM: Action Box
d. In the Select a Device box, select eth0.
Figure: Foundation VM: Device Configuration Box
e. In the Network Configuration box, remove the asterisk in the Use DHCP field (which is set by
default), enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields, and
then click the OK button.
Figure: Foundation VM: Network Configuration Box
f. Click the Save button in the Select a Device box and the Save & Quit button in the Select Action
box.
This saves the configuration and closes the terminal window.
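A minimal check of the new settings from a Foundation VM terminal (the gateway address below is a placeholder for the Default gateway IP entered above):
$ ifconfig eth0
$ ping -c 3 10.1.1.1
The ifconfig output should show the static IP address and netmask you configured; a successful ping to the gateway confirms basic connectivity.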
15. Copy nutanix_installer_package-version#.tar.gz (downloaded in step 2) to the /home/nutanix/foundation/nos folder. (An example copy command covering steps 15 through 17 appears after step 17.)
16. If you intend to install ESXi or Hyper-V as the hypervisor, download the hypervisor ISO image into the
appropriate folder for that hypervisor.
• ESXi ISO image: /home/nutanix/foundation/isos/hypervisor/esx
• Hyper-V ISO image: /home/nutanix/foundation/isos/hypervisor/hyperv
Note: Customers must provide a supported ESXi or Hyper-V ISO image (see Hypervisor ISO
Images on page 50). Customers do not have to provide an AHV image because Foundation
automatically puts an AHV tar file into /home/nutanix/foundation/isos/hypervisor/kvm.
17. If you intend to run the diagnostics test after the cluster is created, download the diagnostic test file(s)
into the appropriate folder for that hypervisor:
• AHV (diagnostic.raw.img.gz): /home/nutanix/foundation/isos/diags/kvm
• ESXi (diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf): /home/nutanix/foundation/
isos/diags/esx
• Hyper-V (diagnostics_uvm.vhd.gz): /home/nutanix/foundation/isos/diags/hyperv
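The copies in steps 15 through 17 can be made with scp from the machine where the files were downloaded, assuming SSH access to the Foundation VM is available. A minimal sketch for an ESXi installation; the Foundation VM address (10.1.1.10) and ISO file name are placeholders, so substitute your own values:
$ scp nutanix_installer_package-version#.tar.gz nutanix@10.1.1.10:/home/nutanix/foundation/nos/
$ scp VMware-VMvisor-Installer-6.0.0.iso nutanix@10.1.1.10:/home/nutanix/foundation/isos/hypervisor/esx/
$ scp diagnostics-disk1.vmdk diagnostics.mf diagnostics.ovf nutanix@10.1.1.10:/home/nutanix/foundation/isos/diags/esx/
When prompted, use the nutanix user's password (nutanix/4u by default). Alternatively, with the Guest Additions installed (step 13), you can drag the files onto the Foundation VM desktop and move them into the folders listed above.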
Setting Up the Network
The network must be set up properly on site before imaging nodes through the Foundation tool. To set up
the network connections, do the following:
Note: You can connect to either a managed switch (routing tables) or a flat switch (no routing
tables). A flat switch is often recommended to protect against configuration errors that could
affect the production environment. Foundation includes a multi-homing feature that allows you
to image the nodes using production IP addresses despite being connected to a flat switch (see
Configuring Global Parameters on page 33). See Network Requirements on page 52 for
general information about the network topology and port access required for a cluster.
1. Connect the first 1 GbE network interface of each node to a 1 GbE Ethernet switch. The IPMI LAN
interfaces of the nodes must be in failover mode (factory default setting).
The exact location of the port depends on the model type. See the hardware documentation for your
model to determine the port location.
→ (Nutanix NX Series) The following figure illustrates the location of the network ports on the back of
an NX-3050 (middle RJ-45 interface).
Figure: Port Locations (NX-3050)
→ (Lenovo Converged HX Series) Unlike Nutanix NX-series systems, which only require that you
connect the 1 GbE port, Lenovo HX-series systems require that you connect both the system
management (IMM) port and one of the 1 GbE or 10 GbE ports. The following figure illustrates the
location of the network ports on the back of the HX3500 and HX5500.
Figure: Port Locations (HX System)
→ (Dell XC series) Unlike Nutanix NX-series systems, which only require that you connect the 1 GbE
port, Dell XC-series systems require that you connect both the iDRAC port and one of the 1 GbE
ports.
Figure: Port Locations (XC System)
2. Connect the installation workstation (see Preparing a Workstation on page 27) to the same 1 GbE
switch as the nodes.
Configuring Global Parameters
Before you begin: Complete Imaging Bare Metal Nodes on page 26.
1. Click the Nutanix Foundation icon on the Foundation VM desktop to start the Foundation GUI.
Note: See Preparing Installation Environment on page 27 if Oracle VM VirtualBox is not
started or the Foundation VM is not running currently. You can also start the Foundation GUI by
opening a web browser and entering http://localhost:8000/gui/index.html. Once you assign
an IP to the Foundation VM, you can access it from outside VirtualBox.
Figure: Foundation VM Desktop
The Global Configuration screen appears. Use this screen to configure network addresses.
Note: You can access help from the gear icon pull-down menu (top right), but this
requires Internet access. If necessary, copy the help URL to a browser with Internet access.
Figure: Global Configuration Screen
2. In the top section of the screen, enter appropriate values for the IPMI, hypervisor, and Controller VM in
the indicated fields:
Note: The parameters in this section are global and will apply to all the imaged nodes.
Figure: Global Configuration Screen: IPMI, Hypervisor, and CVM Parameters
a. IPMI Netmask: Enter the IPMI netmask value.
b. IPMI Gateway: Enter an IP address for the gateway.
c. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.
d. IPMI Password: Enter the IPMI password. The default password is ADMIN.
Check the show password box to display the password as you type it.
e. Hypervisor Netmask: Enter the hypervisor netmask value.
f. Hypervisor Gateway: Enter an IP address for the gateway.
g. DNS Server IP: Enter the IP address of the DNS server.
h. CVM Netmask: Enter the Controller VM netmask value.
i. CVM Gateway: Enter an IP address for the gateway.
j. CVM Memory: Select a memory size for the Controller VM from the pull-down list. For more
information about Controller VM memory configuration, see Controller VM Memory Configurations
on page 55.
This field is set initially to default. (The default amount varies according to the node model type.)
The other options allow you to specify a memory size of 16 GB, 24 GB, 32 GB, or 64 GB. The
default setting represents the recommended amount for the model type. Assigning more memory
than the default might be appropriate in certain situations.
3. If you are using a flat switch (no routing tables) for installation and require access to multiple subnets,
check the Multi-Homing box in the bottom section of the screen.
When the box is checked, a line appears to enter Foundation VM virtual IP addresses. The purpose
of the multi-homing feature is to allow the Foundation VM to configure production IP addresses when
using a flat switch. Multi-homing assigns the Foundation VM virtual IP addresses on different subnets
so that you can use customer-specified IP addresses regardless of their subnet.
• Enter unique IPMI, hypervisor, and Controller VM IP addresses. Make sure that the addresses
match the subnets specified for the nodes to be imaged (see Configuring Node Parameters on
page 36).
• If this box is not checked, Foundation requires that either all IP addresses are on the same subnet or
that the configured IPMI, hypervisor, and Controller VM IP addresses are routable.
Figure: Global Configuration Screen: Multi-Homing
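Conceptually, multi-homing gives the Foundation VM's network interface one additional (virtual) address per subnet so that it can reach the IPMI, hypervisor, and Controller VM networks through the flat switch. Foundation configures this automatically when the Multi-Homing box is checked; the commands below only illustrate the effect, use placeholder addresses, and do not need to be run manually:
$ ip addr show eth0
$ sudo ip addr add 192.168.5.2/24 dev eth0
The first command lists the addresses currently assigned to eth0; the second shows the kind of secondary address assignment that multi-homing performs for each additional subnet.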
4. Click the Next button at the bottom of the screen to configure the nodes to be imaged (see Configuring
Node Parameters on page 36).
Configuring Node Parameters
Before you begin: Complete Configuring Global Parameters on page 33.
The Block & Node Config screen appears. This screen allows you to configure discovered nodes and
add other (bare metal) nodes to be imaged. Upon opening this screen, Foundation searches the network
for unconfigured Nutanix nodes (that is, factory prepared nodes that are not part of a cluster) and then
displays information about the discovered blocks and nodes. The discovery process can take several
minutes if there are many nodes on the network. Wait for the discovery process to complete before
proceeding. The message "Searching for nodes. This may take a while" appears during discovery.
Note: Foundation discovers nodes on the same subnet as the Foundation VM only. Any nodes
to be imaged that reside on a different subnet must be added explicitly (see step 2). In addition,
Foundation discovers unconfigured Nutanix nodes only. If you are running Foundation on a
preconfigured block with an existing cluster and you want Foundation to image those nodes, you
must first destroy the existing cluster in order for Foundation to discover those nodes.
Figure: Node Configuration Screen
1. Review the list of discovered nodes.
A table appears with a section for each discovered block that includes information about each node in
the block.
• You can exclude a block by clicking the X on the far right of that block. The block disappears from
the display, and the nodes in that block will not be imaged. Clicking the X on the top line removes all
the displayed blocks.
• To repeat the discovery process (search for unconfigured nodes again), click the Retry Discovery
button. You can reset all the global and node entries to the default state by selecting Reset
Configuration from the gear icon pull-down menu.
2. To image additional (bare metal) nodes, click the Add Blocks button.
A window appears to add a new block. Do the following in the indicated fields:
Figure: Add Bare Metal Blocks Window
a. Number of Blocks: Enter the number of blocks to add.
b. Nodes per Block: Enter the number of nodes to add in each block.
All added blocks get the same number of nodes. To add multiple blocks with differing nodes per
block, add the blocks as separate actions.
c. Click the Create button.
The window closes and the new blocks appear at the end of the discovered blocks table.
3. Configure the fields for each node as follows:
a. Block ID: Do nothing in this field because it is a unique identifier for the block that is assigned
automatically.
b. Position: Uncheck the boxes for any nodes you do not want to be imaged.
The value (A, B, and so on) indicates the node placement in the block such as A, B, C, D for a four-
node block. You can exclude the node in that block position from being imaged by unchecking the
appropriate box. You can check (or uncheck) all boxes by clicking Select All or (Unselect All) above
the table on the right.
c. IPMI MAC Address: For any nodes you added in step 2, enter the MAC address of the IPMI
interface in this field.
Foundation requires that you provide the MAC address for nodes it has not discovered. (This field is
read-only for discovered nodes and displays a value of "N/A" for those nodes.) The MAC address of
the IPMI interface normally appears on a label on the back of each node. (Make sure you enter the
MAC address from the label that starts with "IPMI:", not the one that starts with "LAN:".) The MAC
address appears in the standard form of six two-digit hexadecimal numbers separated by colons, for
example 00:25:90:D9:01:98.
Caution: Any existing data on the node will be destroyed during imaging. If you are using
the add node option to re-image a previously used node, do not proceed until you have
saved all the data on the node that you want to keep.
Figure: IPMI MAC Address Label
d. IPMI IP: Do one of the following in this field:
Note: If you are using a flat switch, the IP addresses must be on the same subnet as the
Foundation VM unless you configure multi-homing (see Configuring Global Parameters on
page 33).
• To specify the IPMI addresses manually, go to the line for each node and enter (or update) the IP
address in that field.
• To specify the IPMI addresses automatically, enter a starting IP address in the top line ("Start
IP address" field) of the IPMI IP column. The entered address is assigned to the IPMI port of
the first node, and consecutive IP addresses (starting from the entered address) are assigned
automatically to the remaining nodes. Discovered nodes are sorted first by block ID and then by
position, so IP assignments are sequential. If you do not want all addresses to be consecutive,
you can change the IP address for specific nodes by updating the address in the appropriate
fields for those nodes.
Note: Automatic assignment is not used for addresses ending in 0, 1, 254, or 255
because such addresses are commonly reserved by network administrators.
e. Hypervisor IP: Repeat the previous step for this field.
This sets the hypervisor IP addresses for all the nodes.
f. CVM IP: Repeat the previous step for this field.
This sets the Controller VM IP addresses for all the nodes.
Caution: The Nutanix high availability features require that both hypervisor and Controller
VM be in the same subnet. Putting them in different subnets reduces the failure protection
provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended
that you keep both hypervisor and Controller VM in the same subnet.
g. Hypervisor Hostname: Do one of the following in this field:
• A host name is automatically generated for each host (NTNX-unique_identifier). If these names
are acceptable, do nothing in this field.
Caution: Windows computer names (used in Hyper-V) have a 15 character limit. The
automatically generated names might be longer than 15 characters, which would result
in the same truncated name for multiple hosts in a Windows environment. Therefore, do
not use automatically generated names longer than 15 characters when the hypervisor is
Hyper-V.
• To specify the host names manually, go to the line for each node and enter the desired name in
that field. Host names should contain only digits, letters, and hyphens.
• To specify the host names automatically, enter a base name in the top line of the Hypervisor
Hostname column. The base name with a suffix of "-1" is assigned as the host name of the first
node, and the base name with "-2", "-3" and so on are assigned automatically as the host names
of the remaining nodes. You can specify different names for selected nodes by updating the entry
in the appropriate field for those nodes.
h. NX-6035C: Check this box for any node that is a model NX-6035C.
Model NX-6035C nodes are used for "cold" storage and run nothing but a Controller VM; user VMs
are not allowed. NX-6035C nodes run AHV (and so will be imaged with AHV) regardless of what
hypervisor runs on the other nodes in a cluster (see Configuring Image Parameters on page 39).
4. To check which IP addresses are active and reachable, click Ping Scan (above the table on the right).
This does a ping test to each IP address in the IPMI, hypervisor, and CVM IP fields. An icon indicating
a returned response or no response appears next to each field to show the ping test result for each
node. This feature is most useful when imaging a previously unconfigured set of nodes. None of
the selected IPs should be pingable. Successful pings usually indicate a conflict with the existing
infrastructure.
Note: When re-imaging a configured set of nodes using the same network configuration, failure
to ping indicates a networking issue.
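The same reachability check can be performed manually from a Foundation VM terminal; a minimal sketch with placeholder addresses (substitute the IPMI, hypervisor, and CVM addresses you entered):
$ for ip in 10.1.1.11 10.1.1.12 10.1.1.13; do
>   ping -c 1 -W 1 $ip > /dev/null && echo "$ip responds" || echo "$ip no response"
> done
As noted above, when imaging a previously unconfigured set of nodes, none of these addresses should respond.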
5. Click the Next button at the bottom of the screen to select the images to use (see Configuring Image
Parameters on page 39).
Configuring Image Parameters
Before you begin: Complete Configuring Node Parameters on page 36.
The Node Imaging configuration screen appears. This screen is for selecting the AOS package and
hypervisor image to use when imaging the nodes.
Figure: Node Imaging Screen
1. Select the hypervisor to install from the pull-down list on the left.
The following choices are available:
• ESX. Selecting ESX as the hypervisor displays the Acropolis Package and Hypervisor ISO Image
fields directly below.
• Hyper-V. Selecting Hyper-V as the hypervisor displays the Acropolis Package, Hypervisor ISO
Image, and SKU fields.
Caution: Nodes must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper-
V on nodes with less DOM capacity will fail. See Hyper-V Installation Requirements on
page 57 for additional considerations when installing a Hyper-V cluster.
• AHV. Selecting AHV as the hypervisor displays the Acropolis Package and Hypervisor ISO Image
fields.
2. In the Acropolis Package field, select the AOS package to use from the pull-down list.
Note: Click the Refresh Acropolis package link to display the current list of available images
in the ~/foundation/nos folder. If the desired AOS package does not appear in the list, you
must download it to the workstation (see Preparing Installation Environment on page 27).
3. In the Hypervisor ISO Image field, select the hypervisor ISO image to use from the pull-down list.
Note: Click the Refresh hypervisor image link to display the current list of available images
in the ~/foundation/isos/hypervisor/[esx|hyperv|kvm] folder. If the desired hypervisor ISO
image (or AHV installation bundle) does not appear in the list, you must download it to the
workstation (see Preparing a Workstation on page 27).
4. [Hyper-V only] In the SKU field, select the Hyper-V version to use from the pull-down list.
Five Hyper-V versions are supported: Free, Standard, Datacenter, Standard with GUI, Datacenter
with GUI. This column appears only when you select Hyper-V.
Note: See Hyper-V Installation Requirements on page 57 for additional considerations
when installing a Hyper-V cluster.
5. When all the settings are correct, do one of the following:
→ To create a new cluster, click the Next button at the bottom of the screen (see Configuring Cluster
Parameters on page 40).
→ To start imaging immediately (bypassing cluster configuration), click the Run Installation button at
the top of the screen (see Monitoring Progress on page 42).
Configuring Cluster Parameters
Before you begin: Complete Configuring Image Parameters on page 39.
The Clusters configuration screen appears. This screen allows you to create one or more clusters and
assign nodes to those clusters. It also allows you to enable diagnostic and health tests after creating the
cluster(s).
Figure: Cluster Configuration Screen
1. To add a new cluster that will be created after imaging the nodes, click Create New Cluster in the
Cluster Creation section at the top of the screen.
This section includes a table that is empty initially. A blank line appears in the table for the new cluster.
Enter the following information in the indicated fields:
a. Cluster Name: Enter a cluster name.
b. External IP: Enter an external (virtual) IP address for the cluster.
This field sets a logical IP address that always points to an active Controller VM (provided the cluster
is up), which removes the need to enter the address of a specific Controller VM. This parameter is
required for Hyper-V clusters and is optional for ESXi and AHV clusters. (This applies to NOS 4.0 or
later; it is ignored when imaging an earlier NOS release.)
c. CVM DNS Servers: Enter the Controller VM DNS server IP address or URL.
Enter a comma separated list to specify multiple server addresses in this field (and the next two
fields).
d. CVM NTP Servers: Enter the Controller VM NTP server IP address or URL.
You must enter an NTP server that the Controller VMs can reach. If the NTP server is not reachable
or if the time on the Controller VMs is ahead of the current time, cluster services may fail to start. (A quick reachability check is sketched at the end of this step.)
Note: For Hyper-V clusters, the CVM NTP Servers parameter must be set to the Active
Directory domain controller.
e. Hypervisor NTP Servers: Enter the hypervisor NTP server IP address or URL.
f. Max Redundancy Factor: Select a redundancy factor (2 or 3) for the cluster from the pull-down list.
This parameter specifies the number of times each piece of data is replicated in the cluster (either 2
or 3 copies). It sets how many simultaneous node failures the cluster can tolerate and the minimum
number of nodes required to support that protection.
• Setting this to 2 means there will be two copies of data, and the cluster can tolerate the failure of
any single node or drive.
• Setting this to 3 means there will be three copies of data, and the cluster can tolerate the failure
of any two nodes or drives in different blocks. A redundancy factor of 3 requires that the cluster
have at least five nodes, and it can be enabled only when the cluster is created. It is an option on
NOS release 4.0 or later. (In addition, containers must have replication factor 3 for guest VM data
to withstand the failure of two nodes.)
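Before starting the installation, you can confirm from the Foundation VM that the NTP server entered in the CVM NTP Servers field is reachable. A minimal sketch with a placeholder server name, assuming the ntpdate utility is present on the Foundation VM:
$ ping -c 3 ntp.example.com
$ ntpdate -q ntp.example.com
Keep in mind that the requirement above is reachability from the Controller VMs, so a successful check from the Foundation VM is only a first indication.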
2. To run cluster diagnostic and/or health checks after creating a cluster, check the appropriate boxes in
the Post Image Testing section.
→ Check the Diagnostics box to run a diagnostic utility on the cluster. The diagnostic utility analyzes
several performance metrics on each node in the cluster. These metrics indicate whether the cluster
is performing properly. The results are stored in the ~/foundation/logs/diagnostics directory.
Note: You must download the appropriate diagnostics test file(s) from the support portal to
run this test (see Preparing a Workstation on page 27).
→ Check the NCC Testing box to run the Nutanix Cluster Check (NCC) test suite. This is a suite of
tests that check a variety of health metrics in the cluster. The results are stored in the ~/foundation/
logs/ncc directory.
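After installation completes, the results of either test can be reviewed directly on the Foundation VM, for example:
$ ls ~/foundation/logs/diagnostics
$ ls ~/foundation/logs/ncc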
3. To assign nodes to a new cluster (from step 1), check the boxes for each node in the Block and Nodes
field to be included in that cluster.
A section for each new cluster appears in the bottom of the screen. Each section includes all the nodes
to be imaged. You can assign a node to any of the clusters (or leave it unassigned), but a node cannot
be assigned to more than one cluster.
Note: This assignment is to a new cluster only. Uncheck the boxes for any nodes you want to
add to an existing cluster, which can be done through the web console or nCLI at a later time.
4. When all settings are correct, click the Run Installation button at the top of the screen to start the
installation process (see Monitoring Progress on page 42).
Monitoring Progress
Before you begin: Complete Configuring Cluster Parameters on page 40 (or Configuring Image
Parameters on page 39 if you are not creating a cluster).
When all the global, node, and cluster settings are correct, do the following:
1. Click the Run Installation button at the top of the screen.
Figure: Run Installation Button
This starts the installation process. First, the IPMI port addresses are configured. The IPMI port
configuration processing can take several minutes depending on the size of the cluster.
Figure: IPMI Configuration Status
Note: If the IPMI port configuration fails for one or more nodes in the cluster, the installation
process stops before imaging any of the nodes. To correct a port configuration problem, see
Fixing IPMI Configuration Problems on page 63.
2. Monitor the imaging and cluster creation progress.
If IPMI port addressing is successful, Foundation moves to node imaging and displays a progress
screen. The progress screen includes the following sections:
• Progress bar at the top (blue during normal processing or red when there is a problem).
• Cluster Creation Status section with a line for each cluster being created (status indicator, cluster
name, progress message, and log link).
• Node Status section with a line for each node being imaged (status indicator, IPMI IP address,
progress message, and log link).
Figure: Foundation Progress Screen: Ongoing Installation
The status message for each node (in the Node Status section) displays the imaging percentage
complete and current step. Nodes are imaged in parallel, and the imaging process takes about 45
minutes. You can monitor overall progress by clicking the Log link at the top, which displays the
service.log contents in a separate tab or window. Click on the Log link for a node to display the log file
for that node in a separate tab or window.
Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains
more than 20 nodes, the total processing time is about 45 minutes for each group of 20 nodes.
• When installation moves to cluster creation, the status message for each cluster (in the Cluster
Creation Status section) displays the percentage complete and current step. Cluster creation
happens quickly, but this step could take some time if you selected the diagnostic and NCC post-
creation tests. Click on the Log link for a cluster to display the log file for that cluster in a separate
tab or window. (The log file is not available until after cluster creation begins, so wait for cluster
progress reporting to start before clicking this link.)
• When processing completes successfully, an "Installation Complete" message appears, along with
a green check mark in the Status field for each node and cluster. This means IPMI configuration
and imaging (both hypervisor and Nutanix Controller VM) across all the nodes in the cluster was
successful, and cluster creation was successful (if enabled).
Figure: Foundation Progress Screen: Successful Installation
3. If the progress bar turns red with a "There were errors in the installation" message and one or more
node or cluster entries have a red X in the status column, the installation failed at the node imaging or
cluster creation step. To correct such problems, see Fixing Imaging Problems on page 64. Clicking
the Back to config button returns you to the configuration screens to correct any entries. The default
per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can
expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that
amount of time.
Figure: Foundation Progress Screen: Failed Installation
Cleaning Up After Installation
Some information persists after imaging a cluster using Foundation. If you want to use the same
Foundation VM to image another cluster, the persistent information must be removed before attempting
another installation.
To remove the persistent information after an installation, go to a configuration screen and then click the
Reset Configuration option from the gear icon pull-down list in the upper right of the screen.
Clicking this button reinitializes the progress monitor, destroys the persisted configuration data, and
returns the Foundation environment to a fresh state.
Figure: Reset Configuration
4: Downloading Installation Files
Nutanix maintains a support portal where you can download the Foundation and AOS (or Phoenix) files
required to do a field installation. To download the required files, do the following:
1. Open a web browser and log in to the Nutanix Support portal: http://portal.nutanix.com.
2. Click Downloads from the main menu (at the top) and then select the desired page: AOS (NOS) to
download AOS files, Foundation to download Foundation files, or Phoenix to download Phoenix files.
Figure: Nutanix Support Portal Main Screen
3. To download a Foundation installation bundle (see Foundation Files on page 47), go to the
Foundation page and do one (or more) of the following:
→ To download the Java applet used in discovery (see Creating a Cluster on page 11), click link to
jnlp from online bundle. This downloads nutanix_foundation_applet.jnlp and allows you to start
discovery immediately.
→ To download an offline bundle containing the Java applet, click offline bundle. This downloads an
installation bundle that can be taken to environments which do not allow Internet access.
→ To download the standalone Foundation bundle (see Imaging Bare Metal Nodes on page 26), click
Foundation_VM-version#.ovf.tar. (The exact file name varies by release.) This downloads an
installation bundle that includes OVF and VMDK files.
→ To download an installation bundle used to upgrade standalone Foundation, click
foundation-version#.tar.gz (see Release Notes on page 6).
→ To download the current hypervisor ISO whitelist, click iso_whitelist.json.
Note: Use the filter option to display the files for a specific Foundation release.
Figure: Foundation Download Screen
4. To download an AOS release bundle, go to the AOS (NOS) page and click the button or link for the
desired release.
Clicking the Download version# button in the upper right of the screen downloads the latest
AOS release. You can download an earlier AOS release by clicking the appropriate Download
version# link under the ADDITIONAL RELEASES heading. The tar file to download is named
nutanix_installer_package-version#.tar.gz.
5. To download a Phoenix ISO image, go to the Phoenix page and click the file name link for the desired
Phoenix ISO image.
Note: Use the filter options to display the files for a specific Phoenix release and the desired
hypervisor type. Phoenix 2.1 or later includes support for all the hypervisors (AHV, ESXi, and
Hyper-V) in a single ISO while earlier versions have a separate ISO for each hypervisor type
(see Phoenix Files on page 48).
Foundation Files
The following table describes the files required to install Foundation. Use the latest Foundation version
available unless instructed by Nutanix customer support to use an earlier version.
File Name Description
nutanix_foundation_applet.jnlp This is the Foundation Java applet. This is the file
needed for doing a Controller VM-based installation
(see Creating a Cluster on page 11) supported in
Foundation 3.0 and later releases.
FoundationApplet-offline.zip This is an installation bundle that includes the
Foundation Java applet. Download and extract this
bundle for environments where Internet access is
not allowed.
Foundation_VM-version#.ovf This is the Foundation VM OVF configuration file
where version# is the Foundation version number.
Foundation_VM-version#-disk1.vmdk This is the Foundation VM VMDK file.
Foundation_VM-version#-disk1.qcow2 This is the Foundation VM data disk in qcow2
format.
Foundation_VM-version#.ovf.tar This is a Foundation tar file that contains
the Foundation_VM-version#.ovf and
Foundation_VM-version#-disk1.vmdk files.
Foundation 2.1 and later releases package the OVF
and VMDK files into this TAR file.
Foundation-version#.tar.gz This is a tar file used for upgrading when
Foundation is already installed (see Release Notes
on page 6).
host-bundle-el6.nutanix.version#.tar.gz This is a tar file used to generate an AHV ISO
image.
nutanix_installer_package-version#.tar.gz This is the tar file used for imaging the desired
AOS release where version# is a version and build
number. Go to the Acropolis (NOS) download
page on the support portal to download this file.
(You can download all the other files from the
Foundation download page.)
iso_whitelist.json This file contains a list of supported ISO images.
Foundation uses the whitelist to validate an ISO
file before imaging (see Selecting the Images on
page 19).
VirtualBox-version#-OSX.dmg This is the Oracle VM VirtualBox installer for Mac
OS where version# is a version and build number.
VirtualBox-version#-Win.exe This is the Oracle VM VirtualBox installer for
Windows.
Phoenix Files
The following table describes the Phoenix ISO files.
Note: Starting with release 2.1, Foundation no longer uses a Phoenix ISO file for imaging.
Phoenix ISO files are now used only for single node imaging (see Appendix: Imaging a Node
(Phoenix) on page 72) and are generated by the user from Foundation and AOS tar files. The
Phoenix ISOs available on the support portal are only for those who are using an older version of
Foundation (pre 2.1).
File Name Description
phoenix-x.x_NOS-y.y.y.iso This is the Phoenix ISO image for a selected AOS
version where x.x is the Phoenix version number
and y.y.y is the AOS version number. This version
applies to any hypervisor (AHV, ESXi, and Hyper-
V), and there is a separate file for each supported
AOS version. Version 2.1 and later (unlike earlier
versions) support a single Phoenix ISO that applies
across multiple hypervisors.
phoenix-x.x_ESX_NOS-y.y.y.iso This is the Phoenix ISO image (in version 2.0 or
earlier) for a selected AOS version on the ESXi
hypervisor where x.x is the Phoenix version number
and y.y.y is the AOS version number. There is a
separate file for each supported AOS version.
phoenix-x.x_HYPERV_NOS-y.y.y.iso This is the Phoenix ISO image (in version 2.0 or
earlier) for a selected AOS version on the Hyper-
V hypervisor. There is a separate file for each
supported AOS version.
phoenix-x.x_KVM_NOS-y.y.y.iso This is the Phoenix ISO image (in version 2.0
or earlier) for a selected AOS version on the
KVM hypervisor. There is a separate file for each
supported AOS version.
5: Hypervisor ISO Images
An AHV image is included as part of Foundation. However, customers must provide an ESXi or Hyper-
V ISO image for those hypervisors. Check with your VMware or Microsoft representative, or download an
ISO image from an appropriate VMware or Microsoft support site:
• VMware Support: http://www.vmware.com/support.html
• Microsoft Technet: http://technet.microsoft.com/en-us/evalcenter/dn205299.aspx
• Microsoft EA portal: http://www.microsoft.com/licensing/licensing-options/enterprise.aspx
• MSDN: http://msdn.microsoft.com/subscriptions/downloads/#FileId=57052
The list of supported ISO images appears in an iso_whitelist.json file used by Foundation to validate ISO
images. ISO files are identified in the whitelist by their MD5 value (not file name), so verify that the MD5
value of the ISO you want to use matches the corresponding one in the whitelist. You can download the
current whitelist from the Foundation page on the Nutanix support portal: https://portal.nutanix.com/#/page/
foundation/list
Note: The ISO images in the whitelist are the ones supported in Foundation, but some might no
longer be available from the download sites.
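A quick way to check an ISO against the whitelist from a Foundation VM terminal, assuming both files are in the current directory (the ISO file name below is a placeholder):
$ md5sum VMware-VMvisor-Installer-6.0.0.iso
$ grep -A 6 "$(md5sum VMware-VMvisor-Installer-6.0.0.iso | awk '{print $1}')" iso_whitelist.json
If the grep returns a matching whitelist entry, the image is supported; no output means the MD5 value is not in that whitelist version.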
The following table describes the fields that appear in the iso_whitelist.json file for each ISO image.
iso_whitelist.json Fields
Name Description
(n/a) Displays the MD5 value for that ISO image.
min_foundation Displays the earliest Foundation version that
supports this ISO image. For example, "2.1"
indicates you can install this ISO image using
Foundation version 2.1 or later (but not an earlier
version).
hypervisor Displays the hypervisor type (esx, hyperv, or
kvm). The "kvm" designation means the Acropolis
hypervisor (AHV). Entries with a "linux" hypervisor
are not available; they are for Nutanix internal use
only.
min_nos Displays the earliest AOS version compatible with
this hypervisor ISO. A null value indicates there are
no restrictions.
friendly_name Displays a descriptive name for the hypervisor
version, for example "ESX 6.0" or "Windows
2012r2".
version Displays the hypervisor version, for example "6.0"
or "2012r2".
unsupported_hardware Lists the Nutanix models on which this ISO cannot
be used. A blank list indicates there are no model
restrictions. However, conditional restrictions such
as the limitation that Haswell-based models support
only ESXi version 5.5 U2a or later may not be
reflected in this field.
skus (Hyper-V only) Lists which Hyper-V types (datacenter, standard,
and free) are supported with this ISO image. In
most cases, only datacenter and standard are
supported.
compatible_versions Reflects through regular expressions the hypervisor
versions that can co-exist with the ISO version in an
Acropolis cluster (primarily for internal use).
deprecated (optional field) Indicates that this hypervisor image is not
supported by the mentioned Foundation version
and higher versions. If the value is “null”, the image
is supported by all Foundation versions to date.
The following are sample entries from the whitelist for an ESX and an AHV image.
"iso_whitelist": {
"478e2c6f7a875dd3dacaaeb2b0b38228": {
"min_foundation": "2.1",
"hypervisor": "esx",
"min_nos": null,
"friendly_name": "ESX 6.0",
"version": "6.0",
"unsupported_hardware": [],
"compatible_versions": {
"esx": ["^6.0.*"]
},
"a2a97a6af6a3e397b43e3a4c7a86ee37": {
"min_foundation": "3.0",
"hypervisor": "kvm",
"min_nos": null,
"friendly_name": "20160127",
"compatible_versions": {
"kvm": [
"^el6.nutanix.20160127$"
]
},
"version": "20160127",
"deprecated": "3.1",
"unsupported_hardware": []
},
6: Network Requirements
When configuring a Nutanix block, you will need to ask for the IP addresses of components that should
already exist in the customer network, as well as IP addresses that can be assigned to the Nutanix cluster.
You will also need to make sure to open the software ports that are used to manage cluster components
and to enable communication between components such as the Controller VM, Web console, Prism
Central, hypervisor, and the Nutanix hardware.
Existing Customer Network
You will need the following information during the cluster configuration:
• Default gateway
• Network mask
• DNS server
• NTP server
You should also check whether a proxy server is in place in the network. If so, you will need the IP address
and port number of that server when enabling Nutanix support on the cluster.
New IP Addresses
Each node in a Nutanix cluster requires three IP addresses, one for each of the following components:
• IPMI interface
• Hypervisor host
• Nutanix Controller VM
All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than the Controller
VMs and hypervisor hosts can be on this network, which must be isolated and protected.
Software Ports Required for Management and Communication
The following Nutanix network port diagrams show the ports that must be open for supported hypervisors.
The diagrams also show ports that must be opened for infrastructure services.
Figure: Nutanix Network Port Diagram for VMware ESXi
Figure: Nutanix Network Port Diagram for the Acropolis Hypervisor
Figure: Nutanix Network Port Diagram for Microsoft Hyper-V
7: Controller VM Memory Configurations
This topic lists the recommended Controller VM memory allocations for models and features.
Controller VM Memory Configurations for Base Models
Platform | Recommended Memory (GB) | Default Memory (GB) | vCPUs
Default configuration for all platforms unless otherwise noted | 16 | 16 | 8
The following tables show the minimum amount of memory and vCPU requirements and recommendations
for the Controller VM on each node for platforms that do not follow the default.
Nutanix Platforms
Platform | Recommended Memory (GB) | Default Memory (GB) | vCPUs
NX-1020 | 12 | 12 | 4
NX-6035C | 24 | 24 | 8
NX-8150 | 32 | 32 | 8
NX-8150-G4 | 32 | 32 | 8
NX-9040 | 32 | 16 | 8
NX-9060-G4 | 32 | 16 | 8
Dell Platforms
Platform | Recommended Memory (GB) | Default Memory (GB) | vCPUs
XC730xd-24 | 32 | 16 | 8
XC6320-6AF | 32 | 16 | 8
XC630-10AF | 32 | 16 | 8
Lenovo Platforms
Platform | Recommended Memory (GB) | Default Memory (GB) | vCPUs
HX-5500 | 24 | 24 | 8
HX-7500 | 24 | 24 | 8
Controller VM Memory Configurations for Features
The following table lists the minimum amount of memory required when enabling features. The memory
size requirements are in addition to the default or recommended memory available for your platform
(Nutanix, Dell, Lenovo) as described in Controller VM Memory Configurations for Base Models. Adding
features cannot exceed 16 GB in additional memory.
Note: Default or recommended platform memory + memory required for each enabled feature =
total Controller VM Memory required
Feature(s) | Memory (GB)
Capacity Tier Deduplication (includes Performance Tier Deduplication) | 16
Redundancy Factor 3 | 8
Performance Tier Deduplication | 8
Cold Tier nodes (6035-C) + Capacity Tier Deduplication | 4
Performance Tier Deduplication + Redundancy Factor 3 | 16
Capacity Tier Deduplication + Redundancy Factor 3 | 16
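For example, an NX-8150 node (32 GB recommended) with Redundancy Factor 3 enabled (8 GB) requires 32 + 8 = 40 GB of Controller VM memory, and the same node with both Performance Tier Deduplication and Redundancy Factor 3 enabled requires 32 + 16 = 48 GB.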
Case study on tata clothing brand zudio in detail
 
Kenya Coconut Production Presentation by Dr. Lalith Perera
Kenya Coconut Production Presentation by Dr. Lalith PereraKenya Coconut Production Presentation by Dr. Lalith Perera
Kenya Coconut Production Presentation by Dr. Lalith Perera
 
Keppel Ltd. 1Q 2024 Business Update Presentation Slides
Keppel Ltd. 1Q 2024 Business Update  Presentation SlidesKeppel Ltd. 1Q 2024 Business Update  Presentation Slides
Keppel Ltd. 1Q 2024 Business Update Presentation Slides
 
Future Of Sample Report 2024 | Redacted Version
Future Of Sample Report 2024 | Redacted VersionFuture Of Sample Report 2024 | Redacted Version
Future Of Sample Report 2024 | Redacted Version
 
Ten Organizational Design Models to align structure and operations to busines...
Ten Organizational Design Models to align structure and operations to busines...Ten Organizational Design Models to align structure and operations to busines...
Ten Organizational Design Models to align structure and operations to busines...
 
Youth Involvement in an Innovative Coconut Value Chain by Mwalimu Menza
Youth Involvement in an Innovative Coconut Value Chain by Mwalimu MenzaYouth Involvement in an Innovative Coconut Value Chain by Mwalimu Menza
Youth Involvement in an Innovative Coconut Value Chain by Mwalimu Menza
 
Islamabad Escorts | Call 03070433345 | Escort Service in Islamabad
Islamabad Escorts | Call 03070433345 | Escort Service in IslamabadIslamabad Escorts | Call 03070433345 | Escort Service in Islamabad
Islamabad Escorts | Call 03070433345 | Escort Service in Islamabad
 
8447779800, Low rate Call girls in Kotla Mubarakpur Delhi NCR
8447779800, Low rate Call girls in Kotla Mubarakpur Delhi NCR8447779800, Low rate Call girls in Kotla Mubarakpur Delhi NCR
8447779800, Low rate Call girls in Kotla Mubarakpur Delhi NCR
 
Marketplace and Quality Assurance Presentation - Vincent Chirchir
Marketplace and Quality Assurance Presentation - Vincent ChirchirMarketplace and Quality Assurance Presentation - Vincent Chirchir
Marketplace and Quality Assurance Presentation - Vincent Chirchir
 
Annual General Meeting Presentation Slides
Annual General Meeting Presentation SlidesAnnual General Meeting Presentation Slides
Annual General Meeting Presentation Slides
 
Contemporary Economic Issues Facing the Filipino Entrepreneur (1).pptx
Contemporary Economic Issues Facing the Filipino Entrepreneur (1).pptxContemporary Economic Issues Facing the Filipino Entrepreneur (1).pptx
Contemporary Economic Issues Facing the Filipino Entrepreneur (1).pptx
 
(Best) ENJOY Call Girls in Faridabad Ex | 8377087607
(Best) ENJOY Call Girls in Faridabad Ex | 8377087607(Best) ENJOY Call Girls in Faridabad Ex | 8377087607
(Best) ENJOY Call Girls in Faridabad Ex | 8377087607
 
Innovation Conference 5th March 2024.pdf
Innovation Conference 5th March 2024.pdfInnovation Conference 5th March 2024.pdf
Innovation Conference 5th March 2024.pdf
 
International Business Environments and Operations 16th Global Edition test b...
International Business Environments and Operations 16th Global Edition test b...International Business Environments and Operations 16th Global Edition test b...
International Business Environments and Operations 16th Global Edition test b...
 
8447779800, Low rate Call girls in New Ashok Nagar Delhi NCR
8447779800, Low rate Call girls in New Ashok Nagar Delhi NCR8447779800, Low rate Call girls in New Ashok Nagar Delhi NCR
8447779800, Low rate Call girls in New Ashok Nagar Delhi NCR
 
Intro to BCG's Carbon Emissions Benchmark_vF.pdf
Intro to BCG's Carbon Emissions Benchmark_vF.pdfIntro to BCG's Carbon Emissions Benchmark_vF.pdf
Intro to BCG's Carbon Emissions Benchmark_vF.pdf
 

Field installation guide-v3_1

Preparing a Workstation.............................................................. 27
Setting Up the Network............................................................... 32
Configuring Global Parameters............................................................ 33
Configuring Node Parameters.............................................................. 36
Configuring Image Parameters............................................................. 39
Configuring Cluster Parameters........................................................... 40
Monitoring Progress...................................................................... 42
Cleaning Up After Installation........................................................... 45

4: Downloading Installation Files.............................................46
Foundation Files......................................................................... 47
Phoenix Files............................................................................ 48

5: Hypervisor ISO Images......................................................50

6: Network Requirements.......................................................52

7: Controller VM Memory Configurations........................................55

8: Hyper-V Installation Requirements..........................................57

9: Setting IPMI Static IP Address.............................................61

10: Troubleshooting...........................................................63
Fixing IPMI Configuration Problems....................................................... 63
Fixing Imaging Problems.................................................................. 64
Frequently Asked Questions (FAQ)......................................................... 65

11: Appendix: Imaging a Node (Phoenix)........................................72
Summary: Imaging Nutanix NX Series Nodes................................................. 72
Summary: Imaging Lenovo Converged HX Series Nodes........................................ 73
Preparing the ISO Images................................................................. 73
Nutanix NX Series Platforms.............................................................. 74
Installing a Hypervisor (Nutanix NX Series Platforms).......................... 74
Installing ESXi (Nutanix NX Series Platforms)................................... 77
Installing Hyper-V (Nutanix NX Series Platforms)................................ 78
Installing AHV (Nutanix NX Series Platforms).................................... 82
Attaching the Controller VM Image (Nutanix NX Series Platforms)................. 83
Lenovo Converged HX Series Platforms..................................................... 85
Attaching an ISO Image (Lenovo Converged HX Series Platforms)................... 85
Installing ESXi (Lenovo Converged HX Series Platforms).......................... 87
Installing AHV (Lenovo Converged HX Series Platforms)........................... 88
Installing the Controller VM............................................................. 89
Release Notes

The Foundation release notes provide brief, high-level descriptions of changes, enhancements, notes, and cautions as applicable to various releases of Foundation software and to the use of the software with specific hardware platforms. Where applicable, the description includes a solution or workaround.

Foundation Survey for Sales Engineers

Foundation is committed to improving install times and removing frustration from the process. To help achieve that goal, we have created a survey that sales engineers can use to share their experience with Foundation. If you are a sales engineer, please fill out the survey at https://goo.gl/y61aqK. Ideally, you should complete this survey after every install so that we have a large number of data points and a good measure of how we are doing. Please bookmark the survey page.

Foundation Release 3.1.1

This release includes the following enhancements and changes:
• Foundation now configures both NTP and DNS servers on the hypervisor when you skip imaging and only create a cluster.
• When you want to switch from the Controller VM–based imaging mode to the standalone imaging mode, you no longer have to manually change the value of foundation_workflow to ipmi in ~/foundation/config/foundation_settings.json. Foundation automatically detects the imaging workflow that is in progress.

This release includes fixes for the following issues:
• Foundation fails to install drivers, including the VAAI plug-in, when imaging hosts with ESXi. [ENG-50287]
• Phoenix fails to configure the host during boot disk replacement. [ENG-50716]
• Foundation fails when you choose a replication factor of 3 (RF3).
• The Foundation user interface freezes after you correct an IP address.
• If the selected hypervisor image is listed as unsupported or deprecated in the whitelist of the selected AOS installation bundle, Foundation fails to pass control to the first imaged node.
• Foundation fails if you do not specify a default gateway. [ENG-50379]
• After imaging, the cluster fails to start because the Foundation user interface passes the node model as an imaging parameter. [ENG-50185]
• The Foundation service fails to start after upgrading from Foundation 3.0.x to 3.1.1.
• When run on Mac OS, the Foundation Java applet fails to discover free nodes and displays the message "unable to route to host".

The following issues have been identified in this release:
• After the imaging process is finished, the user interface might require a few seconds to indicate success or failure. Any attempt to refresh the user interface after the imaging process is finished prevents that status from being displayed. [ENG-53306]
• If you want to use Controller VM–based imaging to install AOS version 4.6 or later on AHV, make sure that the Controller VM is not running Foundation 3.0.x. If it is running Foundation 3.0.x, first create the cluster without imaging the nodes, and then upgrade AOS and the hypervisor. If you are running AOS 4.6 or later and you want to install AHV and an AOS release earlier than 4.5, do not use Controller VM–based imaging. Use standalone Foundation instead. [ENG-52449]
• If you choose to run post-imaging tests when imaging a cluster with AOS 4.6.1, the cluster creation process appears to terminate prematurely with the message fatal: Running NCC. The error condition, which is caused by the NCC test suite failing to run on AOS 4.6.1, can be ignored because Foundation runs the NCC test suite only after the cluster has been created successfully. Close the Foundation user interface, log on to any Controller VM in the cluster, and run the NCC test suite manually by using the following command:
  nutanix@cvm$ ncc health_checks run_all
  Workaround: You can avoid this error condition by choosing not to run post-imaging tests with Foundation. After you create the cluster, log on to any Controller VM and run NCC manually.

Foundation Release 3.1.0.1

This release is the same as release 3.1 except that it contains an updated Acropolis hypervisor image (AHV 20160217) which incorporates a security advisory fix. If 3.1 is installed currently, you can download the AHV 20160217 installer and the new whitelist to achieve the same thing as upgrading to 3.1.0.1.

Foundation Release 3.1

This release includes the following enhancements and changes:
• Foundation currently supports VLAN tagging when adding nodes to an existing cluster (see "Expanding a Cluster" in the Web Console Guide). This VLAN tagging support has been extended in release 3.1 to include initial node imaging when creating a cluster (see Discovering the Nodes on page 11). VLAN tagging support is limited to the Controller VM-based version of Foundation; it is not supported in the standalone version. VLAN support when adding nodes applies to any hypervisor (AHV, ESXi, or Hyper-V), but VLAN support for initial node imaging is limited to AHV (not ESXi or Hyper-V).
• Error logging has been expanded and improved for usability. Enhancements include the following:
  • Node and cluster log files are now named node_cvm_ip_addr.log and cluster_cluster_name.log. They are still located in ~/data/logs/foundation (Controller VM based) or /home/nutanix/foundation/log (bare metal).
  • Messages now include an associated log level. In the case of a failure, you can quickly find the relevant issues by searching for messages that include ERROR or FATAL in the body, as in the following example message:
    20151229 15:47:32 ERROR Command 'mkisofs -o foundation.node_10.1.87.247.iso phoenix_cd_image' returned error code 255
  • There is a new debug.log file that records every bit of information Foundation outputs.
  • There is a new api.log file that records certain requests made to the Foundation API.
• Progress monitoring has been improved. Some of the previously sequential tasks now run in parallel on a per node basis, and the Foundation monitoring screen displays progress messages as the tasks are processed (see Creating the Cluster on page 22).
• Support has been added for two additional Hyper-V SKUs, Datacenter with GUI and Standard with GUI (see Selecting the Images on page 19).
• An AHV ISO file is no longer provided. Instead, an AHV tar file is provided, which is used to generate an AHV ISO when needed. An AHV tar file is included as part of the AOS download bundle. You can also download an AHV tar file from the support portal. The AHV tar file included in AOS is named kvm_host_bundle_version#.tar.gz; choose the appropriate tar file when selecting an AHV version (see Selecting the Images on page 19).
• When monitoring progress during a standalone Foundation (see Monitoring Progress on page 42), clicking the general progress Log link may result in a "404 Not Found" error instead of displaying the service.log contents. [ENG-49790]
• When you upgrade to AOS 4.6, the Controller VM-based version of Foundation is automatically upgraded to version 3.1. In addition, Prism now supports upgrading Foundation (post 3.1) through the standard 1-click upgrade mechanism. See the "Upgrading Foundation: 1-Click Upgrade or Manual Upgrade" section in the Web Console Guide for instructions on how to upgrade Foundation through Prism. If you install AOS 4.6 on direct-from-the-factory node hardware, an older version of Foundation is installed. The workaround for this issue is to upgrade Foundation through the Prism web console after installing AOS 4.6. See the Prism Web Console Guide and KB 3068 for more information.
• To upgrade standalone (bare metal) Foundation to version 3.1 from any earlier version, do the following:
  1. Download the Foundation upgrade bundle (foundation-version#.tar.gz) from the support portal to the /home/nutanix directory.
  2. Copy the /home/nutanix/foundation/config/foundation_settings.json file to a safe location. (You will copy it back in step 5.)
  3. If you want to save the existing log files, copy the /home/nutanix/foundation/log directory to a safe location or download the log archive from the following URL: http://foundation_ip:8000/foundation/log_archive.tar.
  4. If you want to preserve the existing ISO files (the contents of the isos and nos directories), enter the following commands:
     $ cd /home/nutanix
     $ cp -r foundation/isos .
     $ cp -r foundation/nos .
  5. Do one of the following:
     • If upgrading from 3.x to 3.1, enter the following commands:
       $ cd /home/nutanix    # If not already there
       $ pkill -9 foundation
       $ rm -rf foundation
       $ tar xf foundation-version#.tar.gz
       $ cp <path>/foundation_settings.json foundation/config/foundation_settings.json
       # If step 4 done, save new AHV files and restore backed up ISO files
       $ mv foundation/isos/hypervisor/kvm/* isos/hypervisor/kvm/
       $ mv isos foundation/isos
       $ mv nos foundation/nos
       $ sudo service foundation_service restart
     • If upgrading from 2.x to 3.1, enter the following commands:
       $ cd /home/nutanix    # If not already there
       $ pkill -9 foundation
       $ sudo fusermount -uz foundation/tmp/fuse    # It is okay if this complains at you
       $ rm -rf foundation
       $ tar xf foundation-version#.tar.gz
       $ cd /etc/init.d
       $ sudo rm foundation_service
       $ sudo ln -s /home/nutanix/foundation/bin/foundation_service
       $ sudo yum -y install libunwind-devel
       $ cd /home/nutanix
       $ cp <path>/foundation_settings.json foundation/config/foundation_settings.json
       # If step 4 done, save new AHV files and restore backed up ISO files
       $ mv foundation/isos/hypervisor/kvm/* isos/hypervisor/kvm/
       $ mv isos foundation/isos
       $ mv nos foundation/nos
       $ sudo service foundation_service restart

Hardware Platform–Specific Notes

The following notes apply to the use of Foundation 3.1.x with specific hardware platforms:

NX-3175-G4
• One or more of the following issues result from the use of an unsupported 1000BASE-T Copper SFP transceiver module (SFP-to-RJ45 adapter) when imaging NX-3175-G4 nodes:
  • Foundation times out with the following message in node logs: INFO: Populating firmware information for device bmc...
  • Foundation fails at random stages.
  • Foundation cannot communicate with the baseboard management controller (BMC).
To avoid encountering these issues, use a supported SFP-to-RJ45 adapter. For information about supported adapters, see KB 2422.
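After completing the standalone Foundation upgrade described above, it can be useful to confirm that the Foundation service came back up before attempting any imaging. The following is a minimal sketch rather than part of the official procedure; it assumes the foundation_service init script supports the standard status subcommand and that the Foundation GUI answers on port 8000 (the same port used for the log archive URL above).

$ # Confirm the service restarted cleanly
$ sudo service foundation_service status
$ # Confirm the web interface responds; any HTTP status code (for example 200 or 302) means it is listening
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/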
1: Field Installation Overview

Nutanix installs the Acropolis hypervisor (AHV) and the Nutanix Controller VM at the factory before shipping a node to a customer. To use a different hypervisor (ESXi or Hyper-V) on factory nodes or to use any hypervisor on bare metal nodes, the nodes must be imaged in the field. This guide provides step-by-step instructions on how to use the Foundation tool to do a field installation, which consists of installing a hypervisor and the Nutanix Controller VM on each node and then creating a cluster. You can also use Foundation to create just a cluster from nodes that are already imaged or to image nodes without creating a cluster.

Note: Use Foundation to image factory-prepared (or bare metal) nodes and create a new cluster from those nodes. Use the Prism web console (in clusters running AOS 4.5 or later) to image factory-prepared nodes and then add them to an existing cluster. See the "Expanding a Cluster" section in the Web Console Guide for this procedure.

A field installation can be performed for either factory-prepared nodes or bare metal nodes.
• See Creating a Cluster on page 11 to image factory-prepared nodes and create a cluster from those nodes (or just create a cluster for nodes that are already imaged).
• See Imaging Bare Metal Nodes on page 26 to image bare metal nodes and optionally configure them into one or more clusters.

Note: Foundation supports imaging an ESXi, Hyper-V, or AHV hypervisor on nearly all Nutanix hardware models with some restrictions. Click here (or log into the Nutanix support portal and select Documentation > Compatibility Matrix from the main menu) for a list of supported configurations. To check a particular configuration, go to the Filter By fields and select the desired model, AOS version, and hypervisor in the first three fields and then set the last field to Foundation. In addition, check the notes at the bottom of the table.
2: Creating a Cluster

This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on discovered nodes and how to configure the nodes into a cluster. "Discovered nodes" are factory prepared nodes on the same subnet that are not part of a cluster currently. This procedure runs the Foundation tool through the Nutanix Controller VM.

Note: This method creates a single cluster from discovered nodes. This method is limited to factory prepared nodes running AOS 4.5 or later. If you want to image discovered nodes without creating a cluster, image factory prepared nodes running an earlier AOS (NOS) version, or image bare metal nodes, see Imaging Bare Metal Nodes on page 26.

To image the nodes and create a cluster, do the following:
1. Download the required files, start the cluster creation GUI, and run discovery (see Discovering the Nodes on page 11).
2. Define cluster parameters; specify Controller VM, hypervisor, and (optionally) IPMI global network addresses; and (optionally) enable health tests after the cluster is created (see Defining the Cluster on page 15).
3. Configure the discovered nodes (see Setting Up the Nodes on page 17).
4. Select the AOS and hypervisor images to use (see Selecting the Images on page 19).
5. Start the process and monitor progress as the nodes are imaged and the cluster is created (see Creating the Cluster on page 22).
6. After the cluster is created successfully, begin configuring the cluster (see Configuring a New Cluster on page 24).

Discovering the Nodes

Before you begin:
• Physically install the Nutanix nodes at your site. See the Physical Installation Guide for your model type for installation instructions.
  Note: If you have nodes running a version lower than AOS 4.5, you cannot use this (Controller VM-based) method to create a cluster. Contact Nutanix customer support for help in creating the cluster using the standalone (bare metal) method.
• Your workstation must be connected to the network on the same subnet as the nodes you want to image. Foundation does not require an IPMI connection or any special network port configuration to image discovered nodes. See Network Requirements on page 52 for general information about the network topology and port access required for a cluster.
• Determine the appropriate network (gateway and DNS server IP addresses), cluster (name, virtual IP address), and node (Controller VM, hypervisor, and IPMI IP address ranges) parameter values needed for installation.
Note: The use of a DHCP server is not supported for Controller VMs, so make sure to assign static IP addresses to Controller VMs.

1. Open a browser, go to the Nutanix support portal (see Downloading Installation Files on page 46), and download the following files to your workstation.
• FoundationApplet-offline.zip installation bundle from the Foundation download page.
  Note: If you install from a workstation that has Internet access, you can forego downloading this bundle and simply select the link to nutanix_foundation_applet.jnlp directly from the support portal (see step 2). Otherwise, you must first download (and unpack) the installation bundle.
• nutanix_installer_package-version#.tar.gz from the AOS (NOS) download page. This is the installation bundle used for imaging the desired AOS release.
• Hypervisor ISO if installing Hyper-V or ESXi. The user must provide the supported Hyper-V or ESXi ISO (see Hypervisor ISO Images on page 50); Hyper-V and ESXi ISOs are not available from the support portal.
• It is not necessary to download an AHV ISO because both Foundation and the AOS bundle include an AHV installation bundle. However, you have the option to download an AHV upgrade installation bundle if you want to install a non-default version of AHV.

2. Do one of the following:
• Click the link to jnlp from online bundle link on the Nutanix support portal to download and start the Java applet.
• Unzip the FoundationApplet-offline.zip installation bundle that was downloaded in step 1 and then start (double-click) the nutanix_foundation_applet.jnlp Java applet.
The discovery process begins and a window appears with a list of discovered nodes.
Note: A security warning message may appear indicating this is from an unknown source. Click the accept and run buttons to run the application.

Figure: Foundation Launcher Window
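If double-clicking the .jnlp file does nothing on your workstation (step 2), you can usually launch it from a terminal instead. This is a suggestion rather than part of the official procedure: it assumes Java Web Start (the javaws command) is installed and on your PATH, which is typically the case with an Oracle JRE of this era.

$ # Launch the Foundation applet from the directory where the offline bundle was unzipped
$ javaws nutanix_foundation_applet.jnlp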
3. Select (click the line for) a node to be imaged from the list and then click the Launch Foundation button.
This launches the cluster creation GUI. The selected node will be imaged first and then be used to image the other nodes. Only nodes with a status field value of Free can be selected, which indicates it is not currently part of a cluster. A value of Unavailable indicates it is part of an existing cluster or otherwise unavailable. To rerun the discovery process, click the Retry discovery button.
Note: A warning message may appear stating this is not the highest available version of Foundation found in the discovered nodes. If you select a node using an earlier Foundation version (one that does not recognize one or more of the node models), installation may fail when Foundation attempts to image a node of an unknown model. Therefore, select the node with the highest Foundation version among the nodes to be imaged. (You can ignore the warning and proceed if you do not intend to select any of the nodes that have the higher Foundation version.)
Foundation searches the network subnet for unconfigured Nutanix nodes (factory prepared nodes that are not part of a cluster) and then displays information about the discovered blocks and nodes in the Discovered Nodes screen. (It does not display information about nodes that are powered off or in a different subnet.) The discovery process normally takes just a few seconds.
Note: If you want Foundation to image nodes from an existing cluster, you must first either remove the target nodes from the cluster or destroy the cluster.

4. Select (check the box for) the nodes to be imaged.
All discovered blocks and nodes are displayed by default, including those that are already in an existing cluster. An exclamation mark icon is displayed for unavailable (already in a cluster) nodes; these nodes cannot be selected. All available nodes are selected by default.
Note: A cluster requires a minimum of three nodes. Therefore, you must select at least three nodes.
Note: If a discovered node has a VLAN tag, that tag is displayed. Foundation applies an existing VLAN tag when imaging the node, but you cannot use Foundation to edit that tag or add a tag to a node without one.
• To display just the available nodes, select the Show only new nodes option from the pull-down list on the right of the screen. (Blocks with unavailable nodes only do not appear, but a block with both available and unavailable nodes does appear with the exclamation mark icon displayed for the unavailable nodes in that block.)
• To deselect nodes you do not want to image, uncheck the boxes for those nodes. Alternately, click the Deselect All button to uncheck all the nodes and then select those you want to image. (The Select All button checks all the nodes.)
Note: You can get help or reset the configuration at any time from the gear icon pull-down menu (top right). Internet access is required to display the help pages, which are located in the Nutanix support portal.
Figure: Discovery Screen

5. Set the redundancy factor (RF) for the cluster to be created.
The redundancy factor specifies the number of times each piece of data is replicated in the cluster.
• Setting this to 2 means there will be two copies of data, and the cluster can tolerate the failure of any single node or drive.
• Setting this to 3 means there will be three copies of data, and the cluster can tolerate the failure of any two nodes or drives in different blocks. RF 3 requires that the cluster have at least five nodes, and it can be enabled only when the cluster is created. (In addition, containers must have replication factor 3 for guest VM data to withstand the failure of two nodes.)
The default setting for a cluster is RF 2. To set it to RF 3, do the following:
a. Click the Change RF (2) button. The Change Redundancy Factor window appears.
b. Click (check) the RF 3 button and then click the Save Changes button. The window disappears and the RF button changes to Change RF (3) indicating the RF factor is now set to 3.
Figure: Change Redundancy Factor Window

6. Click the Next button at the bottom of the screen to configure cluster parameters (see Defining the Cluster on page 15).

Defining the Cluster

Before you begin: Complete Discovering the Nodes on page 11.
The Define Cluster configuration screen appears. This screen allows you to define a new cluster and configure global network parameters for the Controller VM, hypervisor, and (optionally) IPMI. It also allows you to enable diagnostic and health tests after creating the cluster.
Figure: Cluster Screen

1. In the Cluster Information section, do the following in the indicated fields:
a. Name: Enter a name for the cluster.
b. IP Address (optional): Enter an external (virtual) IP address for the cluster. This field sets a logical IP address that always points to an active Controller VM (provided the cluster is up), which removes the need to enter the address of a specific Controller VM. This parameter is required for Hyper-V clusters and is optional for ESXi and AHV clusters.
c. NTP Server Address (optional): Enter the NTP server IP address or (pool) domain name.
d. DNS Server IP (optional): Enter the DNS server IP address.

2. (optional) Click the Enable IPMI slider button to specify an IPMI address.
When this button is enabled, fields for IPMI global network parameters appear below. Foundation does not require an IPMI connection, so this information is not required. However, you can use this option to configure IPMI for your use.

3. In the Network Information section, do the following in the indicated fields:
a. CVM Netmask: Enter the Controller VM netmask value.
b. CVM Gateway: Enter an IP address for the gateway.
c. CVM Memory: Select a memory size for the Controller VM from the pull-down list. For more information about Controller VM memory configuration, see Controller VM Memory Configurations on page 55. This field is set initially to default. (The default amount varies according to the node model type.) The other options allow you to specify a memory size of 16 GB, 24 GB, 32 GB, or 64 GB. The default setting represents the recommended amount for the model type. Assigning more memory than the default might be appropriate in certain situations.
d. Hypervisor Netmask: Enter the hypervisor netmask value.
e. Hypervisor Gateway: Enter an IP address for the gateway.
f. Hypervisor DNS Server IP: Enter the IP address of the DNS server.
Note: The following fields appear only if the IPMI button was enabled in the previous step.
g. IPMI Netmask: Enter the IPMI netmask value.
h. IPMI Gateway: Enter an IP address for the gateway.
i. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.
j. IPMI Password: Enter the IPMI password. The default password is ADMIN. Check the show password box to display the password.

4. (optional) Click the Enable Testing slider button to run the Nutanix Cluster Check (NCC) after the cluster is created.
The NCC is a test suite that checks a variety of health metrics in the cluster. The results are stored in the ~/foundation/logs/ncc directory.

5. Click the Next button at the bottom of the screen to configure the cluster nodes (see Setting Up the Nodes on page 17).

Setting Up the Nodes

Before you begin: Complete Defining the Cluster on page 15.
The Setup Node configuration screen appears. This screen allows you to specify the Controller VM, hypervisor, and (if enabled) IPMI IP addresses for each node.
Figure: Node Screen

1. In the Hostname and IP Range section, do the following in the indicated fields:
a. Hypervisor Hostname: Enter a base host name for the set of nodes. Host names should contain only digits, letters, and hyphens. The base name with a suffix of "-1" is assigned as the host name of the first node, and the base name with "-2", "-3" and so on are assigned automatically as the host names of the remaining nodes.
b. CVM IP: Enter the starting IP address for the set of Controller VMs across the nodes. Enter a starting IP address in the FROM/TO line of the CVM IP column. The entered address is assigned to the Controller VM of the first node, and consecutive IP addresses (sequentially from the entered address) are assigned automatically to the remaining nodes. Discovered nodes are sorted first by block ID and then by position, so IP assignments are sequential. If you do not want all addresses to be consecutive, you can change the IP address for specific nodes by updating the address in the appropriate fields for those nodes.
c. Hypervisor IP: Repeat the previous step for this field. This sets the hypervisor IP addresses for all the nodes.
Caution: The Nutanix high availability features require that both hypervisor and Controller VM be in the same subnet. Putting them in different subnets reduces the failure protection provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended that you keep both hypervisor and Controller VM in the same subnet.
d. IPMI IP (when enabled): Repeat the previous step for this field. This sets the IPMI port IP addresses for all the nodes. This column appears only when IPMI is enabled on the previous cluster setup screen.
2. In the Manual Input section, review the assigned host names and IP addresses.
If any of the names or addresses are not correct, enter the desired name or IP address in the appropriate field. There is a section for each block with a line for each node in the block. The letter designation (A, B, C, and D) indicates the position of that node in the block.

3. When all the host names and IP addresses are correct, click the Validate Network button at the bottom of the screen.
This does a ping test to each of the assigned IP addresses to check whether any of those addresses are being used currently.
• If there are no conflicts (none of the addresses return a ping), the process continues (see Selecting the Images on page 19).
• If there is a conflict (one or more addresses returned a ping), this screen reappears with the conflicting addresses highlighted in red. Foundation will not continue until the conflict is resolved.
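Because Validate Network simply pings each address you assigned, you can save a round of back-and-forth in the GUI by checking the candidate addresses from your workstation beforehand. The loop below is a minimal sketch and not part of the Foundation workflow; the address list is a made-up example, and it assumes a Linux workstation on the same subnet (the ping options differ slightly on Mac OS).

$ # Any address that answers is already in use and must not be assigned to a CVM, hypervisor, or IPMI port
$ for ip in 10.1.87.101 10.1.87.102 10.1.87.103; do
>   ping -c 2 -W 1 "$ip" > /dev/null 2>&1 && echo "$ip is IN USE" || echo "$ip appears free"
> done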
Selecting the Images

Before you begin: Complete Setting Up the Nodes on page 17.
The Select Images configuration screen appears. This screen allows you to specify and upload the AOS and hypervisor image files to use.

Figure: Images Screen

1. Select the AOS image as follows:
Note: An AOS image may already be present. If it is the desired version running on the desired hypervisor type and version, skip to step 4. However, if the hypervisor type or version is not correct, you must upload an AOS installation bundle even if the present version is the same as the uploaded one.
a. In the AOS (left) column, click the Upload Tarball button and then click the Choose File button.

Figure: File Selection Buttons

b. In the file search window, find and select the AOS installation bundle downloaded earlier (see Discovering the Nodes on page 11) and then click the Upload button.
Uploading an image file (AOS or hypervisor) may take some time (possibly a few minutes). When a new AOS installation bundle is uploaded, any existing AOS installation bundles in ~/foundation/nos (if present) are deleted.

2. Select the hypervisor (AHV, ESX, or HYPERV) in the Hypervisor (middle) column and do the following:
Note: The hypervisor field is not active until the AOS installation bundle is selected (uploaded) in the previous step. Foundation comes with an AHV image. If that is the correct image to use, skip to the next step.
• AHV: Select the desired AHV installation bundle from the pull-down list. This list includes the default AHV installation bundle included with Foundation (named host_bundle_el6.nutanix.version#.tar.gz) and the AHV installation bundle included in the AOS bundle (named kvm_host_bundle_version#.tar.gz). If you downloaded an AHV installation bundle that you want to use (and it is not in the list), click the Upload RPM Tarball button to upload the desired installation bundle from wherever you downloaded it.
• ESXi or Hyper-V: Click the Upload ISO button and then click the Choose File button. In the file search window, find and select the ESXi or Hyper-V ISO image you downloaded earlier and then click the Upload button.
Only approved hypervisor versions are permitted; Foundation will not image nodes with an unapproved version. To verify your version is on the approved list, click the See Whitelist link and select the appropriate hypervisor tab in the pop-up window. Nutanix updates the list as new versions are approved, and the current version of Foundation may not have the latest list. If your version does not appear on the list, click the Update the whitelist link to download the latest whitelist from the Nutanix support portal.

Figure: Whitelist Compatibility List Window

3. [Hyper-V only] Click the radio button in the SKU (right) column for the Hyper-V version to use.
Five Hyper-V versions are supported: Free, Standard, Datacenter, Standard with GUI, and Datacenter with GUI. This column appears only when you select Hyper-V.
Note: See Hyper-V Installation Requirements on page 57 for additional considerations when installing a Hyper-V cluster.

4. When both images are uploaded and ready, do one of the following:
→ To image the nodes and then create the new cluster, click the Create Cluster button at the bottom of the screen.
→ To create the cluster without imaging the nodes, click the Skip Imaging button.
(In either case see Creating the Cluster on page 22.)
Note: The Skip Imaging option requires that all the nodes have the same hypervisor and AOS version. This option is disabled if they are not all the same (with the exception of any model NX-6035C "cold" storage nodes in the cluster that run AHV regardless of the hypervisor running on the other nodes).
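If you prefer to sanity-check an ISO before uploading it in step 2, you can compute its checksum on your workstation and compare it against the corresponding whitelist entry. This is only a hypothetical pre-check: it assumes the whitelist identifies ISO builds by MD5 checksum (as some Foundation releases do), the file name below is an example rather than a specific supported build, and the md5sum command is the Linux form (use md5 on Mac OS). When in doubt, rely on the See Whitelist link in the GUI.

$ # Example only; substitute the hypervisor ISO you actually downloaded
$ md5sum VMware-VMvisor-Installer-example.iso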
Creating the Cluster

Before you begin: Complete Selecting the Images on page 19.
After clicking the Create Cluster or Skip Imaging button (in the Select Images screen), the Create Cluster screen appears. This is a dynamically updated display that provides progress information about node imaging and cluster creation.

1. Monitor the node imaging and cluster creation progress.
The progress screen includes the following sections:
• Progress bar at the top (blue during normal processing or red when there is a problem).
• Cluster Creation Status section with a line for the cluster being created (status indicator, cluster name, progress message, and log link).
• Node Status section with a line for each node being imaged (status indicator, IPMI IP address, progress message, and log link).

Figure: Foundation Progress Screen: Ongoing

The status message for each node (in the Node Status section) displays the imaging percentage complete and current step. The selected node (see Discovering the Nodes on page 11) is imaged first. When that imaging is complete, the remaining nodes are imaged in parallel. The imaging process takes about 30 minutes, so the total time is about an hour (30 minutes for the first node and another 30 minutes for the other nodes imaged in parallel). You can monitor overall progress by clicking the Log link at the top, which displays the foundation.out contents in a separate tab or window. Click on the Log link for a node to display the log file for that node in a separate tab or window.
Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains more than 21 nodes, add an extra 30 minutes processing time for each group of 20 nodes.
When installation moves to cluster creation, the status message displays the percentage complete and current step. Cluster creation happens quickly, but this step could take some time if you enabled the post-creation tests. Click on the Log link for a cluster to display the log file for the cluster in a separate tab or window.
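If the browser tab with the progress screen is lost, you can also follow the same logs directly. The commands below are a minimal sketch rather than part of the procedure; they assume you can SSH to the Controller VM that is orchestrating the imaging (the node selected in Discovering the Nodes) and they use the per-node log names, log location, and ERROR/FATAL log levels described in the release notes above. The node IP address is a made-up example.

nutanix@cvm$ # List the Foundation log directory on the orchestrating Controller VM
nutanix@cvm$ ls ~/data/logs/foundation/
nutanix@cvm$ # Search a per-node log for failures (log level ERROR or FATAL)
nutanix@cvm$ grep -E "ERROR|FATAL" ~/data/logs/foundation/node_10.1.87.101.log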
2. When processing completes successfully, either open the Prism web console and begin configuring the cluster (see Configuring a New Cluster on page 24) or exit from Foundation.
When processing completes successfully, a "Cluster creation successful" message appears. This means imaging both the hypervisor and Nutanix Controller VM across all the nodes in the cluster was successful (when imaging was not skipped) and cluster creation was successful.
• To configure the cluster, click the Prism link. This opens the Prism web console (login required using the default "admin" username and password). See Configuring a New Cluster on page 24 for initial cluster configuration steps.
• To download the log files, click the Export Logs link. This packages all the log files into a log_archive.tar file and allows you to download that file to your workstation.
The Foundation service shuts down two hours after imaging. If you go to the cluster creation success page after a long absence and the Export Logs link does not work (or your terminal went to sleep and there is no response after refreshing it), you can point the browser to one of the Controller VM IP addresses. If the Prism web console appears, installation completed successfully, and you can get the logs from ~/data/logs/foundation on the node that was imaged first.
Note: If nothing loads when you refresh the page (or it loads one of the configuration pages), the web browser might have missed the hand-off between the node that starts imaging and the first node imaged. This can happen because the web browser went to sleep, you closed the browser, or you lost connectivity for some other reason. In this case, enter http://cvm_ip for any Controller VM, which should open the Prism GUI if imaging has completed. If this does not work, enter http://cvm_ip:8000/gui on each of the Controller VMs in the cluster until you see the progress screen, from which you can continue monitoring progress.

Figure: Foundation Progress Screen: Successful Installation

3. If processing does not complete successfully, review and correct the problem(s), and then restart the process.
If the progress bar turns red with a "There were errors in the installation" message and one or more node or cluster entries have a red X in the status column, the installation failed at the node imaging or cluster creation step. To correct such problems, see Fixing Imaging Problems on page 64. Clicking the Back to config button returns you to the configuration screens to correct any entries. The default per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that amount of time.
Note: If an imaging problem occurs, it typically appears when imaging the first node. In that case Foundation will not attempt to image the other nodes, so only the first node will be in an unstable state. Once the problem is resolved, the first node can be re-imaged and then the other nodes imaged normally.

Figure: Foundation Progress Screen: Unsuccessful Installation

Configuring a New Cluster

After creating the cluster, you can configure it through the Prism web console. A storage pool and a container are created automatically when the cluster is created, but many other setup options require user action. The following are common cluster setup steps typically done soon after creating a cluster. (All the sections cited in the following steps are in the Prism Web Console Guide.)

1. Verify the cluster has passed the latest Nutanix Cluster Check (NCC) tests.
a. Check the installed NCC version and update it if a later version is available (see the "Software and Firmware Upgrades" section).
b. Run NCC if you downloaded a newer version or did not run it as part of the install.
Running NCC must be done from a command line. Open a command window, log on to any Controller VM in the cluster with SSH, and then run the following command:
nutanix@cvm$ ncc health_checks run_all
If the check reports a status other than PASS, resolve the reported issues before proceeding. If you are unable to resolve the issues, contact Nutanix support for assistance.

2. Specify the timezone of the cluster.
Specifying the timezone must be done from the Nutanix command line (nCLI). While logged in to the Controller VM (see previous step), run the following commands:
nutanix@cvm$ ncli
ncli> cluster set-timezone timezone=cluster_timezone
Replace cluster_timezone with the timezone of the cluster (for example, America/Los_Angeles, Europe/London, or Asia/Tokyo). Restart all Controller VMs in the cluster after changing the timezone. Because a cluster can tolerate only a single Controller VM unavailable at any one time, restart the Controller VMs in a series, waiting until one has finished starting before proceeding to the next. See the Command Reference for more information about using the nCLI.

3. Specify an outgoing SMTP server (see the "Configuring an SMTP Server" section).

4. If the site security policy allows Nutanix customer support to access the cluster, enable the remote support tunnel (see the "Controlling Remote Connections" section).
Caution: Failing to enable remote support prevents Nutanix support from directly addressing cluster issues. Nutanix recommends that all customers allow email alerts at minimum because it allows proactive support of customer issues.

5. If the site security policy allows Nutanix support to collect cluster status information, enable the Pulse feature (see the "Configuring Pulse" section).
This information is used by Nutanix support to diagnose potential problems and provide more informed and proactive help.

6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert emails (see the "Configuring Email Alerts" section). You also have the option to specify email recipients for specific alerts (see the "Configuring Alert Policies" section).

7. If the site security policy allows automatic downloads to update AOS and other upgradeable cluster elements, enable that feature (see the "Software and Firmware Upgrades" section).
Note: Allow access to the following through your firewall to ensure that automatic download of updates can function (a quick connectivity check is sketched after this list):
• *.compute-*.amazonaws.com:80
• release-api.nutanix.com:80

8. License the cluster (see the "License Management" section).

9. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.
• vCenter: See the Nutanix vSphere Administration Guide.
• SCVMM: See the Nutanix Hyper-V Administration Guide.
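Related to the firewall note in step 7, a quick way to confirm that the cluster can reach the update endpoints is to test the port from a Controller VM. This is a minimal sketch rather than an official check; it assumes curl is available on the Controller VM and that receiving any HTTP response (even a redirect or an error page) indicates the firewall allows the connection. The wildcard Amazon endpoint cannot be tested directly, so only the fixed host is shown.

nutanix@cvm$ # A response header means port 80 to the release server is reachable through the firewall
nutanix@cvm$ curl -sI --connect-timeout 10 http://release-api.nutanix.com/ | head -1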
3: Imaging Bare Metal Nodes

This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on bare metal nodes and optionally configure the nodes into one or more clusters. "Bare metal" nodes are those that are not factory prepared or cannot be detected through discovery. You can also use this method to image factory prepared nodes that you do not want to configure into a cluster.

Before you begin:
Note: Imaging bare metal nodes is restricted to Nutanix sales engineers, support engineers, and partners. Contact Nutanix customer support or your partner for help with this procedure.
• Physically install the nodes at your site. See the Physical Installation Guide for your model type for installation instructions.
• Set up the installation environment (see Preparing Installation Environment on page 27).
  Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you will get a Foundation timeout error if you do not change the boot order back to virtual CD-ROM in the BIOS.
  Note: If STP (spanning tree protocol) is enabled, it can cause Foundation to time out during the imaging process. Therefore, disable STP before starting Foundation.
  Note: Avoid connecting any device (that is, plugging it into a USB port on a node) that presents virtual media, such as a CD-ROM. This could conflict with the Foundation installation when it tries to mount the virtual CD-ROM hosting the install ISO.
• Have ready the appropriate global, node, and cluster parameter values needed for installation. The use of a DHCP server is not supported for Controller VMs, so make sure to assign static IP addresses to Controller VMs.
  Note: If the Foundation VM IP address set previously was configured in one (typically public) network environment and you are imaging the cluster on a different (typically private) network in which the current address is no longer correct, repeat step 13 in Preparing a Workstation on page 27 to configure a new static IP address for the Foundation VM.

To image the nodes and create a cluster(s), do the following:
Video: Click here to see a video (MP4 format) demonstration of this procedure. (The video may not reflect the latest features described in this section.)
1. Prepare the installation environment:
a. Download necessary files and prepare a workstation (see Preparing a Workstation on page 27).
b. Connect the workstation and nodes to be imaged to the network (see Setting Up the Network on page 32).
2. Start the Foundation VM and configure global parameters (see Configuring Global Parameters on page 33).
  • 27. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 27 3. Configure the nodes to image (see Configuring Node Parameters on page 36). 4. Select the images to use (see Configuring Image Parameters on page 39). 5. [optional] Configure one or more clusters to create and assign nodes to the clusters (see Configuring Cluster Parameters on page 40). 6. Start the imaging process and monitor progress (see Monitoring Progress on page 42). 7. If a problem occurs during configuration or imaging, evaluate and resolve the problem (see Troubleshooting on page 63). 8. [optional] Clean up the Foundation environment after completing the installation (see Cleaning Up After Installation on page 45). Preparing Installation Environment Standalone (bare metal) imaging is performed from a workstation with access to the IPMI interfaces of the nodes in the cluster. Imaging a cluster in the field requires first installing certain tools on the workstation and then setting the environment to run those tools. This requires two preparation tasks: 1. Prepare the workstation. Preparing the workstation can be done on or off site at any time prior to installation. This includes downloading ISO images, installing Oracle VM VirtualBox, and using VirtualBox to configure various parameters on the Foundation VM (see Preparing a Workstation on page 27). 2. Set up the network. The nodes and workstation must have network access to each other through a switch at the site (see Setting Up the Network on page 32). Preparing a Workstation A workstation is needed to host the Foundation VM during imaging. To prepare the workstation, do the following: Note: You can perform these steps either before going to the installation site (if you use a portable laptop) or at the site (if you can connect to the web). 1. Get a workstation (laptop or desktop computer) that you can use for the installation. The workstation must have at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of disk space (preferably SSD), and a physical (wired) network adapter. 2. Go to the Foundation download page in the Nutanix support portal (see Downloading Installation Files on page 46) and download the following files to a temporary directory on the workstation. • Foundation_VM_OVF-version#.tar. This tar file includes the following files: • Foundation_VM-version#.ovf. This is the Foundation VM OVF configuration file for the version# release, for example Foundation_VM-3.1.ovf. • Foundation_VM-version#-disk1.vmdk. This is the Foundation VM VMDK file for the version# release, for example Foundation_VM-3.1-disk1.vmdk. • VirtualBox-version#-[OSX|Win].[dmg|exe]. This is the Oracle VM VirtualBox installer for Mac OS (VirtualBox-version#-OSX.dmg) or Windows (VirtualBox-version#-Win.exe). Oracle VM VirtualBox is a free open source tool used to create a virtualized environment on the workstation.
Note: Links to the VirtualBox files may not appear on the download page for every Foundation version. (The Foundation 2.0 download page has links to the VirtualBox files.)

• nutanix_installer_package-version#.tar.gz. This is the tar file used for imaging the desired AOS release. Go to the AOS (NOS) download page on the support portal to download this file.
• If you want to run the diagnostics test after creating a cluster, download the diagnostics test file(s) for your hypervisor from the Tools & Firmware download page on the support portal:
  • AHV: diagnostic.raw.img.gz
  • ESXi: diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf
  • Hyper-V: diagnostics_uvm.vhd.gz

3. Go to the download location and extract Foundation_VM_OVF-version#.tar by entering the following command:
   $ tar -xf Foundation_VM_OVF-version#.tar
   Note: This assumes the tar command is available. If it is not, use the corresponding tar utility for your environment.
4. Open the Oracle VM VirtualBox installer and install Oracle VM VirtualBox using the default options. See the Oracle VM VirtualBox User Manual for installation and startup instructions (https://www.virtualbox.org/wiki/Documentation).
   Note: This section describes how to use Oracle VM VirtualBox to create a virtual environment. Optionally, you can use an alternate tool such as VMware vSphere in place of Oracle VM VirtualBox.
5. Create a new folder called VirtualBox VMs in your home directory. On a Windows system this is typically C:\Users\user_name\VirtualBox VMs.
6. Copy the Foundation_VM-version#.ovf and Foundation_VM-version#-disk1.vmdk files to the VirtualBox VMs folder that you created in step 5.
7. Start Oracle VM VirtualBox.
   Figure: VirtualBox Welcome Screen
8. Click the File option of the main menu and then select Import Appliance from the pull-down list.
9. Find and select the Foundation_VM-version#.ovf file, and then click Next.
10. Click the Import button.
11. In the left column of the main screen, select Foundation_VM-version# and click Start. The Foundation VM console launches and the VM operating system boots.
12. At the login screen, log in as the nutanix user with the password nutanix/4u. The Foundation VM desktop appears (after it loads).
13. If you want to enable file drag-and-drop functionality between your workstation and the Foundation VM, install the VirtualBox Guest Additions as follows:
    a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD Image... from the menu. A VBOXADDITIONS CD entry appears on the Foundation VM desktop.
    b. Click OK when prompted to Open Autorun Prompt and then click Run.
    c. Enter the root password (nutanix/4u) and then click Authenticate.
    d. After the installation is complete, press the return key to close the VirtualBox Guest Additions installation window.
    e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.
    f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.
       Note: A reboot is necessary for the changes to take effect.
    g. After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on the VirtualBox window for the Foundation VM.
14. Open a terminal session and run the ifconfig command to determine if the Foundation VM was able to get an IP address from the DHCP server. If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as follows:
    Note: Normally, the Foundation VM needs to be on a public network in order to copy selected ISO files to the Foundation VM in the next two steps. This might require setting a static IP address now and setting it again when the workstation is on a different (typically private) network for the installation (see Imaging Bare Metal Nodes on page 26).
    a. Double-click the set_foundation_ip_address icon on the Foundation VM desktop.
  • 30. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 30 Figure: Foundation VM: Desktop b. In the pop-up window, click the Run in Terminal button. Figure: Foundation VM: Terminal Window c. In the Select Action box in the terminal window, select Device Configuration. Note: Selections in the terminal window can be made using the indicated keys only. (Mouse clicks do not work.) Figure: Foundation VM: Action Box d. In the Select a Device box, select eth0.
Figure: Foundation VM: Device Configuration Box
    e. In the Network Configuration box, remove the asterisk in the Use DHCP field (which is set by default), enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields, and then click the OK button.
       Figure: Foundation VM: Network Configuration Box
    f. Click the Save button in the Select a Device box and the Save & Quit button in the Select Action box. This saves the configuration and closes the terminal window.
15. Copy nutanix_installer_package-version#.tar.gz (downloaded in step 2) to the /home/nutanix/foundation/nos folder.
16. If you intend to install ESXi or Hyper-V as the hypervisor, download the hypervisor ISO image into the appropriate folder for that hypervisor.
    • ESXi ISO image: /home/nutanix/foundation/isos/hypervisor/esx
    • Hyper-V ISO image: /home/nutanix/foundation/isos/hypervisor/hyperv
    Note: Customers must provide a supported ESXi or Hyper-V ISO image (see Hypervisor ISO Images on page 50). Customers do not have to provide an AHV image because Foundation automatically puts an AHV tar file into /home/nutanix/foundation/isos/hypervisor/kvm.
17. If you intend to run the diagnostics test after the cluster is created, download the diagnostic test file(s) into the appropriate folder for that hypervisor:
    • AHV (diagnostic.raw.img.gz): /home/nutanix/foundation/isos/diags/kvm
    • ESXi (diagnostics-disk1.vmdk, diagnostics.mf, and diagnostics.ovf): /home/nutanix/foundation/isos/diags/esx
    • Hyper-V (diagnostics_uvm.vhd.gz): /home/nutanix/foundation/isos/diags/hyperv
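After the static IP is configured and the files are downloaded, you can verify the address and stage the files from a terminal on the Foundation VM. The commands below are a sketch only; the interface name eth0 matches the device selected earlier, while the download location, gateway address, and ESXi ISO file name are placeholders for your environment:
   $ ifconfig eth0                                   # confirm the static IP address took effect
   $ ping -c 3 10.1.1.1                              # confirm the default gateway (placeholder address) is reachable
   $ cp ~/Downloads/nutanix_installer_package-version#.tar.gz /home/nutanix/foundation/nos/
   $ cp ~/Downloads/esxi_installer.iso /home/nutanix/foundation/isos/hypervisor/esx/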
Setting Up the Network

The network must be set up properly on site before imaging nodes through the Foundation tool. To set up the network connections, do the following:

Note: You can connect to either a managed switch (routing tables) or a flat switch (no routing tables). A flat switch is often recommended to protect against configuration errors that could affect the production environment. Foundation includes a multi-homing feature that allows you to image the nodes using production IP addresses despite being connected to a flat switch (see Configuring Global Parameters on page 33). See Network Requirements on page 52 for general information about the network topology and port access required for a cluster.

1. Connect the first 1 GbE network interface of each node to a 1 GbE Ethernet switch. The IPMI LAN interfaces of the nodes must be in failover mode (factory default setting).
   The exact location of the port depends on the model type. See the hardware documentation for your model to determine the port location.
   → (Nutanix NX Series) The following figure illustrates the location of the network ports on the back of an NX-3050 (middle RJ-45 interface).
   Figure: Port Locations (NX-3050)
   → (Lenovo Converged HX Series) Unlike Nutanix NX-series systems, which only require that you connect the 1 GbE port, Lenovo HX-series systems require that you connect both the system management (IMM) port and one of the 1 GbE or 10 GbE ports. The following figure illustrates the location of the network ports on the back of the HX3500 and HX5500.
Figure: Port Locations (HX System)
   → (Dell XC series) Unlike Nutanix NX-series systems, which only require that you connect the 1 GbE port, Dell XC-series systems require that you connect both the iDRAC port and one of the 1 GbE ports.
   Figure: Port Locations (XC System)
2. Connect the installation workstation (see Preparing a Workstation on page 27) to the same 1 GbE switch as the nodes.

Configuring Global Parameters

Before you begin: Complete Imaging Bare Metal Nodes on page 26.

1. Click the Nutanix Foundation icon on the Foundation VM desktop to start the Foundation GUI.
   Note: See Preparing Installation Environment on page 27 if Oracle VM VirtualBox is not started or the Foundation VM is not running currently. You can also start the Foundation GUI by opening a web browser and entering http://localhost:8000/gui/index.html. Once you assign an IP to the Foundation VM, you can access it from outside VirtualBox.
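For example, if you assigned the Foundation VM the address 10.1.1.5 (a placeholder for the static IP you configured in Preparing a Workstation), you could reach the same GUI from a browser on another machine on that network at:
   http://10.1.1.5:8000/gui/index.html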
  • 34. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 34 Figure: Foundation VM Desktop The Global Configuration screen appears. Use this screen to configure network addresses. Note: You can access help from the gear icon pull-down menu (top right), but this requires Internet access. If necessary, copy the help URL to a browser with Internet access. Figure: Global Configuration Screen 2. In the top section of the screen, enter appropriate values for the IPMI, hypervisor, and Controller VM in the indicated fields: Note: The parameters in this section are global and will apply to all the imaged nodes.
  • 35. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 35 Figure: Global Configuration Screen: IPMI, Hypervisor, and CVM Parameters a. IPMI Netmask: Enter the IPMI netmask value. b. IPMI Gateway: Enter an IP address for the gateway. c. IPMI Username: Enter the IPMI user name. The default user name is ADMIN. d. IPMI Password: Enter the IPMI password. The default password is ADMIN. Check the show password box to display the password as you type it. e. Hypervisor Netmask: Enter the hypervisor netmask value. f. Hypervisor Gateway: Enter an IP address for the gateway. g. DNS Server IP: Enter the IP address of the DNS server. h. CVM Netmask: Enter the Controller VM netmask value. i. CVM Gateway: Enter an IP address for the gateway. j. CVM Memory: Select a memory size for the Controller VM from the pull-down list. For more information about Controller VM memory configuration, see Controller VM Memory Configurations on page 55. This field is set initially to default. (The default amount varies according to the node model type.) The other options allow you to specify a memory size of 16 GB, 24 GB, 32 GB, or 64 GB. The default setting represents the recommended amount for the model type. Assigning more memory than the default might be appropriate in certain situations. 3. If you are using a flat switch (no routing tables) for installation and require access to multiple subnets, check the Multi-Homing box in the bottom section of the screen. When the box is checked, a line appears to enter Foundation VM virtual IP addresses. The purpose of the multi-homing feature is to allow the Foundation VM to configure production IP addresses when using a flat switch. Multi-homing assigns the Foundation VM virtual IP addresses on different subnets so that you can use customer-specified IP addresses regardless of their subnet. • Enter unique IPMI, hypervisor, and Controller VM IP addresses. Make sure that the addresses match the subnets specified for the nodes to be imaged (see Configuring Node Parameters on page 36). • If this box is not checked, Foundation requires that either all IP addresses are on the same subnet or that the configured IPMI, hypervisor, and Controller VM IP addresses are routable.
  • 36. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 36 Figure: Global Configuration Screen: Multi-Homing 4. Click the Next button at the bottom of the screen to configure the nodes to be imaged (see Configuring Node Parameters on page 36). Configuring Node Parameters Before you begin: Complete Configuring Global Parameters on page 33. The Block & Node Config screen appears. This screen allows you to configure discovered nodes and add other (bare metal) nodes to be imaged. Upon opening this screen, Foundation searches the network for unconfigured Nutanix nodes (that is, factory prepared nodes that are not part of a cluster) and then displays information about the discovered blocks and nodes. The discovery process can take several minutes if there are many nodes on the network. Wait for the discovery process to complete before proceeding. The message "Searching for nodes. This may take a while" appears during discovery. Note: Foundation discovers nodes on the same subnet as the Foundation VM only. Any nodes to be imaged that reside on a different subnet must be added explicitly (see step 2). In addition, Foundation discovers unconfigured Nutanix nodes only. If you are running Foundation on a preconfigured block with an existing cluster and you want Foundation to image those nodes, you must first destroy the existing cluster in order for Foundation to discover those nodes. Figure: Node Configuration Screen 1. Review the list of discovered nodes. A table appears with a section for each discovered block that includes information about each node in the block.
  • 37. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 37 • You can exclude a block by clicking the X on the far right of that block. The block disappears from the display, and the nodes in that block will not be imaged. Clicking the X on the top line removes all the displayed blocks. • To repeat the discovery process (search for unconfigured nodes again), click the Retry Discovery button. You can reset all the global and node entries to the default state by selecting Reset Configuration from the gear icon pull-down menu. 2. To image additional (bare metal) nodes, click the Add Blocks button. A window appears to add a new block. Do the following in the indicated fields: Figure: Add Bare Metal Blocks Window a. Number of Blocks: Enter the number of blocks to add. b. Nodes per Block: Enter the number of nodes to add in each block. All added blocks get the same number of nodes. To add multiple blocks with differing nodes per block, add the blocks as separate actions. c. Click the Create button. The window closes and the new blocks appear at the end of the discovered blocks table. 3. Configure the fields for each node as follows: a. Block ID: Do nothing in this field because it is a unique identifier for the block that is assigned automatically. b. Position: Uncheck the boxes for any nodes you do not want to be imaged. The value (A, B, and so on) indicates the node placement in the block such as A, B, C, D for a four- node block. You can exclude the node in that block position from being imaged by unchecking the appropriate box. You can check (or uncheck) all boxes by clicking Select All or (Unselect All) above the table on the right. c. IPMI MAC Address: For any nodes you added in step 2, enter the MAC address of the IPMI interface in this field. Foundation requires that you provide the MAC address for nodes it has not discovered. (This field is read-only for discovered nodes and displays a value of "N/A" for those nodes.) The MAC address of the IPMI interface normally appears on a label on the back of each node. (Make sure you enter the MAC address from the label that starts with "IPMI:", not the one that starts with "LAN:".) The MAC
  • 38. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 38 address appears in the standard form of six two-digit hexadecimal numbers separated by colons, for example 00:25:90:D9:01:98. Caution: Any existing data on the node will be destroyed during imaging. If you are using the add node option to re-image a previously used node, do not proceed until you have saved all the data on the node that you want to keep. Figure: IPMI MAC Address Label d. IPMI IP: Do one of the following in this field: Note: If you are using a flat switch, the IP addresses must be on the same subnet as the Foundation VM unless you configure multi-homing (see Configuring Global Parameters on page 33). • To specify the IPMI addresses manually, go to the line for each node and enter (or update) the IP address in that field. • To specify the IPMI addresses automatically, enter a starting IP address in the top line ("Start IP address" field) of the IPMI IP column. The entered address is assigned to the IPMI port of the first node, and consecutive IP addresses (starting from the entered address) are assigned automatically to the remaining nodes. Discovered nodes are sorted first by block ID and then by position, so IP assignments are sequential. If you do not want all addresses to be consecutive, you can change the IP address for specific nodes by updating the address in the appropriate fields for those nodes. Note: Automatic assignment is not used for addresses ending in 0, 1, 254, or 255 because such addresses are commonly reserved by network administrators. e. Hypervisor IP: Repeat the previous step for this field. This sets the hypervisor IP addresses for all the nodes. f. CVM IP: Repeat the previous step for this field. This sets the Controller VM IP addresses for all the nodes. Caution: The Nutanix high availability features require that both hypervisor and Controller VM be in the same subnet. Putting them in different subnets reduces the failure protection provided by Nutanix and can lead to other problems. Therefore, it is strongly recommended that you keep both hypervisor and Controller VM in the same subnet. g. Hypervisor Hostname: Do one of the following in this field: • A host name is automatically generated for each host (NTNX-unique_identifier). If these names are acceptable, do nothing in this field. Caution: Windows computer names (used in Hyper-V) have a 15 character limit. The automatically generated names might be longer than 15 characters, which would result in the same truncated name for multiple hosts in a Windows environment. Therefore, do not use automatically generated names longer than 15 characters when the hypervisor is Hyper-V. • To specify the host names manually, go to the line for each node and enter the desired name in that field. Host names should contain only digits, letters, and hyphens.
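As a manual alternative to the Ping Scan described above, you can spot-check from a terminal on the Foundation VM that a planned address is not already in use before assigning it. The address shown is a placeholder:
   $ ping -c 2 10.1.1.11
For a previously unconfigured set of nodes, the ping should receive no response; a response usually means the address conflicts with existing infrastructure, while for a re-imaging run with the same network configuration a failed ping points to a networking issue.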
  • 39. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 39 • To specify the host names automatically, enter a base name in the top line of the Hypervisor Hostname column. The base name with a suffix of "-1" is assigned as the host name of the first node, and the base name with "-2", "-3" and so on are assigned automatically as the host names of the remaining nodes. You can specify different names for selected nodes by updating the entry in the appropriate field for those nodes. h. NX-6035C : Check this box for any node that is a model NX-6035C. Model NX-6035C nodes are used for "cold" storage and run nothing but a Controller VM; user VMs are not allowed. NX-6035C nodes run AHV (and so will be imaged with AHV) regardless of what hypervisor runs on the other nodes in a cluster (see Configuring Image Parameters on page 39). 4. To check which IP addresses are active and reachable, click Ping Scan (above the table on the right). This does a ping test to each IP address in the IPMI, hypervisor, and CVM IP fields. A (returned response) or (no response) icon appears next to that field to indicate the ping test result for each node. This feature is most useful when imaging a previously unconfigured set of nodes. None of the selected IPs should be pingable. Successful pings usually indicate a conflict with the existing infrastructure. Note: When re-imaging a configured set of nodes using the same network configuration, failure to ping indicates a networking issue. 5. Click the Next button at the bottom of the screen to select the images to use (see Configuring Image Parameters on page 39). Configuring Image Parameters Before you begin: Complete Configuring Node Parameters on page 36. The Node Imaging configuration screen appears. This screen is for selecting the AOS package and hypervisor image to use when imaging the nodes. Figure: Node Imaging Screen 1. Select the hypervisor to install from the pull-down list on the left. The following choices are available: • ESX. Selecting ESX as the hypervisor displays the Acropolis Package and Hypervisor ISO Image fields directly below.
  • 40. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 40 • Hyper-V. Selecting Hyper-V as the hypervisor displays the Acropolis Package, Hypervisor ISO Image, and SKU fields. Caution: Nodes must have a 64 GB DOM to install Hyper-V. Attempts to install Hyper- V on nodes with less DOM capacity will fail. See Hyper-V Installation Requirements on page 57 for additional considerations when installing a Hyper-V cluster. • AHV. Selecting AHV as the hypervisor displays the Acropolis Package and Hypervisor ISO Image fields. 2. In the Acropolis Package field, select the AOS package to use from the pull-down list. Note: Click the Refresh Acropolis package link to display the current list of available images in the ~/foundation/nos folder. If the desired AOS package does not appear in the list, you must download it to the workstation (see Preparing Installation Environment on page 27). 3. In the Hypervisor ISO Image field, select the hypervisor ISO image to use from the pull-down list. Note: Click the Refresh hypervisor image link to display the current list of available images in the ~/foundation/isos/hypervisor/[esx|hyperv|kvm] folder. If the desired hypervisor ISO image (or AHV installation bundle) does not appear in the list, you must download it to the workstation (see Preparing a Workstation on page 27). 4. [Hyper-V only] In the SKU field, select the Hyper-V version to use from the pull-down list. Five Hyper-V versions are supported: Free, Standard, Datacenter, Standard with GUI, Datacenter with GUI. This column appears only when you select Hyper-V. Note: See Hyper-V Installation Requirements on page 57 for additional considerations when installing a Hyper-V cluster. 5. When all the settings are correct, do one of the following: → To create a new cluster, click the Next button at the bottom of the screen (see Configuring Cluster Parameters on page 40). → To start imaging immediately (bypassing cluster configuration), click the Run Installation button at the top of the screen (see Monitoring Progress on page 42). Configuring Cluster Parameters Before you begin: Complete Configuring Image Parameters on page 39. The Clusters configuration screen appears. This screen allows you to create one or more clusters and assign nodes to those clusters. It also allows you to enable diagnostic and health tests after creating the cluster(s).
  • 41. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 41 Figure: Cluster Configuration Screen 1. To add a new cluster that will be created after imaging the nodes, click Create New Cluster in the Cluster Creation section at the top of the screen. This section includes a table that is empty initially. A blank line appears in the table for the new cluster. Enter the following information in the indicated fields: a. Cluster Name: Enter a cluster name. b. External IP: Enter an external (virtual) IP address for the cluster. This field sets a logical IP address that always points to an active Controller VM (provided the cluster is up), which removes the need to enter the address of a specific Controller VM. This parameter is required for Hyper-V clusters and is optional for ESXi and AHV clusters. (This applies to NOS 4.0 or later; it is ignored when imaging an earlier NOS release.) c. CVM DNS Servers: Enter the Controller VM DNS server IP address or URL. Enter a comma separated list to specify multiple server addresses in this field (and the next two fields). d. CVM NTP Servers: Enter the Controller VM NTP server IP address or URL. You must enter an NTP server that the Controller VMs can reach. If the NTP server is not reachable or if the time on the Controller VMs is ahead of the current time, cluster services may fail to start. Note: For Hyper-V clusters, the CVM NTP Servers parameter must be set to the Active Directory domain controller. e. Hypervisor NTP Servers: Enter the hypervisor NTP server IP address or URL. f. Max Redundancy Factor: Select a redundancy factor (2 or 3) for the cluster from the pull-down list. This parameter specifies the number of times each piece of data is replicated in the cluster (either 2 or 3 copies). It sets how many simultaneous node failures the cluster can tolerate and the minimum number of nodes required to support that protection.
  • 42. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 42 • Setting this to 2 means there will be two copies of data, and the cluster can tolerate the failure of any single node or drive. • Setting this to 3 means there will be three copies of data, and the cluster can tolerate the failure of any two nodes or drives in different blocks. A redundancy factor of 3 requires that the cluster have at least five nodes, and it can be enabled only when the cluster is created. It is an option on NOS release 4.0 or later. (In addition, containers must have replication factor 3 for guest VM data to withstand the failure of two nodes.) 2. To run cluster diagnostic and/or health checks after creating a cluster, check the appropriate boxes in the Post Image Testing section. → Check the Diagnostics box to run a diagnostic utility on the cluster. The diagnostic utility analyzes several performance metrics on each node in the cluster. These metrics indicate whether the cluster is performing properly. The results are stored in the ~/foundation/logs/diagnostics directory. Note: You must download the appropriate diagnostics test file(s) from the support portal to run this test (see Preparing a Workstation on page 27). → Check the NCC Testing box to run the Nutanix Cluster Check (NCC) test suite. This is a suite of tests that check a variety of health metrics in the cluster. The results are stored in the ~/foundation/ logs/ncc directory. 3. To assign nodes to a new cluster (from step 1), check the boxes for each node in the Block and Nodes field to be included in that cluster. A section for each new cluster appears in the bottom of the screen. Each section includes all the nodes to be imaged. You can assign a node to any of the clusters (or leave it unassigned), but a node cannot be assigned to more than one cluster. Note: This assignment is to a new cluster only. Uncheck the boxes for any nodes you want to add to an existing cluster, which can be done through the web console or nCLI at a later time. 4. When all settings are correct, click the Run Installation button at the top of the screen to start the installation process (see Monitoring Progress on page 42). Monitoring Progress Before you begin: Complete Configuring Cluster Parameters on page 40 (or Configuring Image Parameters on page 39 if you are not creating a cluster). When all the global, node, and cluster settings are correct, do the following: 1. Click the Run Installation button at the top of the screen. Figure: Run Installation Button This starts the installation process. First, the IPMI port addresses are configured. The IPMI port configuration processing can take several minutes depending on the size of the cluster. Figure: IPMI Configuration Status
  • 43. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 43 Note: If the IPMI port configuration fails for one or more nodes in the cluster, the installation process stops before imaging any of the nodes. To correct a port configuration problem, see Fixing IPMI Configuration Problems on page 63. 2. Monitor the imaging and cluster creation progress. If IPMI port addressing is successful, Foundation moves to node imaging and displays a progress screen. The progress screen includes the following sections: • Progress bar at the top (blue during normal processing or red when there is a problem). • Cluster Creation Status section with a line for each cluster being created (status indicator, cluster name, progress message, and log link). • Node Status section with a line for each node being imaged (status indicator, IPMI IP address, progress message, and log link). Figure: Foundation Progress Screen: Ongoing Installation The status message for each node (in the Node Status section) displays the imaging percentage complete and current step. Nodes are imaged in parallel, and the imaging process takes about 45 minutes. You can monitor overall progress by clicking the Log link at the top, which displays the service.log contents in a separate tab or window. Click on the Log link for a node to display the log file for that node in a separate tab or window. Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains more than 20 nodes, the total processing time is about 45 minutes for each group of 20 nodes. • When installation moves to cluster creation, the status message for each cluster (in the Cluster Creation Status section) displays the percentage complete and current step. Cluster creation happens quickly, but this step could take some time if you selected the diagnostic and NCC post- creation tests. Click on the Log link for a cluster to display the log file for that cluster in a separate tab or window. (The log file is not available until after cluster creation begins, so wait for cluster progress reporting to start before clicking this link.) • When processing completes successfully, an "Installation Complete" message appears, along with a green check mark in the Status field for each node and cluster. This means IPMI configuration
  • 44. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 44 and imaging (both hypervisor and Nutanix Controller VM) across all the nodes in the cluster was successful, and cluster creation was successful (if enabled). Figure: Foundation Progress Screen: Successful Installation 3. If the progress bar turns red with a "There were errors in the installation" message and one or more node or cluster entries have a red X in the status column, the installation failed at the node imaging or cluster creation step. To correct such problems, see Fixing Imaging Problems on page 64. Clicking the Back to config button returns you to the configuration screens to correct any entries. The default per-node installation timeout is 30 minutes for ESXi or 60 minutes for Hyper-V and AHV, so you can expect all the nodes (in each run of up to 20 nodes) to finish successfully or encounter a problem in that amount of time. Figure: Foundation Progress Screen: Failed Installation
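If you prefer a terminal view of the overall progress instead of the GUI Log links, you can also follow the service log directly on the Foundation VM. The path below assumes the default log directory referenced elsewhere in this guide (~/foundation/logs); verify the exact location on your Foundation version:
   $ tail -f ~/foundation/logs/service.log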
  • 45. Imaging Bare Metal Nodes | Field Installation Guide | Foundation | 45 Cleaning Up After Installation Some information persists after imaging a cluster using Foundation. If you want to use the same Foundation VM to image another cluster, the persistent information must be removed before attempting another installation. To remove the persistent information after an installation, go to a configuration screen and then click the Reset Configuration option from the gear icon pull-down list in the upper right of the screen. Clicking this button reinitializes the progress monitor, destroys the persisted configuration data, and returns the Foundation environment to a fresh state. Figure: Reset Configuration
  • 46. Downloading Installation Files | Field Installation Guide | Foundation | 46 4 Downloading Installation Files Nutanix maintains a support portal where you can download the Foundation and AOS (or Phoenix) files required to do a field installation. To download the required files, do the following: 1. Open a web browser and log in to the Nutanix Support portal: http://portal.nutanix.com. 2. Click Downloads from the main menu (at the top) and then select the desired page: AOS (NOS) to download AOS files, Foundation to download Foundation files, or Phoenix to download Phoenix files. Figure: Nutanix Support Portal Main Screen 3. To download a Foundation installation bundle (see Foundation Files on page 47), go to the Foundation page and do one (or more) of the following: → To download the Java applet used in discovery (see Creating a Cluster on page 11), click link to jnlp from online bundle. This downloads nutanix_foundation_applet.jnlp and allows you to start discovery immediately. → To download an offline bundle containing the Java applet, click offline bundle. This downloads an installation bundle that can be taken to environments which do not allow Internet access. → To download the standalone Foundation bundle (see Imaging Bare Metal Nodes on page 26), click Foundation_VM-version#.ovf.tar. (The exact file name varies by release.) This downloads an installation bundle that includes OVF and VMDK files. → To download an installation bundle used to upgrade standalone Foundation, click foundation-version#.tar.gz (see Release Notes on page 6). → To download the current hypervisor ISO whitelist, click iso_whitelist.json. Note: Use the filter option to display the files for a specific Foundation release.
  • 47. Downloading Installation Files | Field Installation Guide | Foundation | 47 Figure: Foundation Download Screen 4. To download an AOS release bundle, go to the AOS (NOS) page and click the button or link for the desired release. Clicking the Download version# button in the upper right of the screen downloads the latest AOS release. You can download an earlier AOS release by clicking the appropriate Download version# link under the ADDITIONAL RELEASES heading. The tar file to download is named nutanix_installer_package-version#.tar.gz. 5. To download a Phoenix ISO image, go to the Phoenix page and click the file name link for the desired Phoenix ISO image. Note: Use the filter options to display the files for a specific Phoenix release and the desired hypervisor type. Phoenix 2.1 or later includes support for all the hypervisors (AHV, ESXi, and Hyper-V) in a single ISO while earlier versions have a separate ISO for each hypervisor type (see Phoenix Files on page 48). Foundation Files The following table describes the files required to install Foundation. Use the latest Foundation version available unless instructed by Nutanix customer support to use an earlier version. File Name Description nutanix_foundation_applet.jnlp This is the Foundation Java applet. This is the file needed for doing a Controller VM-based installation (see Creating a Cluster on page 11) supported in Foundation 3.0 and later releases. FoundationApplet-offline.zip This is an installation bundle that includes the Foundation Java applet. Download and extract this bundle for environments where Internet access is not allowed.
  • 48. Downloading Installation Files | Field Installation Guide | Foundation | 48 File Name Description Foundation_VM-version#.ovf This is the Foundation VM OVF configuration file where version# is the Foundation version number. Foundation_VM-version#-disk1.vmdk This is the Foundation VM VMDK file. Foundation_VM-version#-disk1.qcow2 This is the Foundation VM data disk in qcow2 format. Foundation_VM-version#.ovf.tar This is a Foundation tar file that contains the Foundation_VM-version#.ovf and Foundation_VM-version#-disk1.vmdk files. Foundation 2.1 and later releases package the OVF and VMDK files into this TAR file. Foundation-version#.tar.gz This is a tar file used for upgrading when Foundation is already installed (see Release Notes on page 6). host-bundle-el6.nutanix.version#.tar.gz This is a tar file used to generate an AHV ISO image. nutanix_installer_package-version#.tar.gz This is the tar file used for imaging the desired AOS release where version# is a version and build number. Go to the Acropolis (NOS) download page on the support portal to download this file. (You can download all the other files from the Foundation download page.) iso_whitelist.json This file contains a list of supported ISO images. Foundation uses the whitelist to validate an ISO file before imaging (see Selecting the Images on page 19). VirtualBox-version#-OSX.dmg This is the Oracle VM VirtualBox installer for Mac OS where version# is a version and build number. VirtualBox-version#-Win.exe This is the Oracle VM VirtualBox installer for Windows. Phoenix Files The following table describes the Phoenix ISO files. Note: Starting with release 2.1, Foundation no longer uses a Phoenix ISO file for imaging. Phoenix ISO files are now used only for single node imaging (see Appendix: Imaging a Node (Phoenix) on page 72) and are generated by the user from Foundation and AOS tar files. The Phoenix ISOs available on the support portal are only for those who are using an older version of Foundation (pre 2.1).
  • 49. Downloading Installation Files | Field Installation Guide | Foundation | 49 File Name Description phoenix-x.x_NOS-y.y.y.iso This is the Phoenix ISO image for a selected AOS version where x.x is the Phoenix version number and y.y.y is the AOS version number. This version applies to any hypervisor (AHV, ESXi, and Hyper- V), and there is a separate file for each supported AOS version. Version 2.1 and later (unlike earlier versions) support a single Phoenix ISO that applies across multiple hypervisors. phoenix-x.x_ESX_NOS-y.y.y.iso This is the Phoenix ISO image (in version 2.0 or earlier) for a selected AOS version on the ESXi hypervisor where x.x is the Phoenix version number and y.y.y is the AOS version number. There is a separate file for each supported AOS version. phoenix-x.x_HYPERV_NOS-y.y.y.iso This is the Phoenix ISO image (in version 2.0 or earlier) for a selected AOS version on the Hyper- V hypervisor. There is a separate file for each supported AOS version. phoenix-x.x_KVM_NOS-y.y.y.iso This is the Phoenix ISO image (in version 2.0 or earlier) for a selected AOS version on the KVM hypervisor. There is a separate file for each supported AOS version.
5 Hypervisor ISO Images

An AOS ISO image is included as part of Foundation. However, customers must provide an ESXi or Hyper-V ISO image for those hypervisors. Check with your VMware or Microsoft representative, or download an ISO image from an appropriate VMware or Microsoft support site:
• VMware Support: http://www.vmware.com/support.html
• Microsoft Technet: http://technet.microsoft.com/en-us/evalcenter/dn205299.aspx
• Microsoft EA portal: http://www.microsoft.com/licensing/licensing-options/enterprise.aspx
• MSDN: http://msdn.microsoft.com/subscriptions/downloads/#FileId=57052

The list of supported ISO images appears in an iso_whitelist.json file used by Foundation to validate ISO images. ISO files are identified in the whitelist by their MD5 value (not file name), so verify that the MD5 value of the ISO you want to use matches the corresponding one in the whitelist. You can download the current whitelist from the Foundation page on the Nutanix support portal: https://portal.nutanix.com/#/page/foundation/list

Note: The ISO images in the whitelist are the ones supported in Foundation, but some might no longer be available from the download sites.

The following table describes the fields that appear in the iso_whitelist.json file for each ISO image.

iso_whitelist.json Fields

(n/a): Displays the MD5 value for that ISO image.
min_foundation: Displays the earliest Foundation version that supports this ISO image. For example, "2.1" indicates you can install this ISO image using Foundation version 2.1 or later (but not an earlier version).
hypervisor: Displays the hypervisor type (esx, hyperv, or kvm). The "kvm" designation means the Acropolis hypervisor (AHV). Entries with a "linux" hypervisor are not available; they are for Nutanix internal use only.
min_nos: Displays the earliest AOS version compatible with this hypervisor ISO. A null value indicates there are no restrictions.
friendly_name: Displays a descriptive name for the hypervisor version, for example "ESX 6.0" or "Windows 2012r2".
version: Displays the hypervisor version, for example "6.0" or "2012r2".
unsupported_hardware: Lists the Nutanix models on which this ISO cannot be used. A blank list indicates there are no model restrictions. However, conditional restrictions such as the limitation that Haswell-based models support only ESXi version 5.5 U2a or later may not be reflected in this field.
skus (Hyper-V only): Lists which Hyper-V types (datacenter, standard, and free) are supported with this ISO image. In most cases, only datacenter and standard are supported.
compatible_versions: Reflects through regular expressions the hypervisor versions that can co-exist with the ISO version in an Acropolis cluster (primarily for internal use).
deprecated (optional field): Indicates that this hypervisor image is not supported by the mentioned Foundation version and higher versions. If the value is “null”, the image is supported by all Foundation versions to date.

The following are sample entries from the whitelist for an ESX and an AHV image.

"iso_whitelist": {
  "478e2c6f7a875dd3dacaaeb2b0b38228": {
    "min_foundation": "2.1",
    "hypervisor": "esx",
    "min_nos": null,
    "friendly_name": "ESX 6.0",
    "version": "6.0",
    "unsupported_hardware": [],
    "compatible_versions": { "esx": ["^6.0.*"] }
  },
  "a2a97a6af6a3e397b43e3a4c7a86ee37": {
    "min_foundation": "3.0",
    "hypervisor": "kvm",
    "min_nos": null,
    "friendly_name": "20160127",
    "compatible_versions": { "kvm": [ "^el6.nutanix.20160127$" ] },
    "version": "20160127",
    "deprecated": "3.1",
    "unsupported_hardware": []
  },
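For example, to confirm that an ISO is on the whitelist, compute its MD5 value and search for it in iso_whitelist.json. The ISO file name below is a placeholder, and the MD5 value shown is taken from the ESX 6.0 sample entry above:
   $ md5sum esxi_installer.iso
   478e2c6f7a875dd3dacaaeb2b0b38228  esxi_installer.iso
   $ grep 478e2c6f7a875dd3dacaaeb2b0b38228 iso_whitelist.json
If grep returns no match, the image is not supported by Foundation and should not be used for imaging.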
  • 52. Network Requirements | Field Installation Guide | Foundation | 52 6 Network Requirements When configuring a Nutanix block, you will need to ask for the IP addresses of components that should already exist in the customer network, as well as IP addresses that can be assigned to the Nutanix cluster. You will also need to make sure to open the software ports that are used to manage cluster components and to enable communication between components such as the Controller VM, Web console, Prism Central, hypervisor, and the Nutanix hardware. Existing Customer Network You will need the following information during the cluster configuration: • Default gateway • Network mask • DNS server • NTP server You should also check whether a proxy server is in place in the network. If so, you will need the IP address and port number of that server when enabling Nutanix support on the cluster. New IP Addresses Each node in a Nutanix cluster requires three IP addresses, one for each of the following components: • IPMI interface • Hypervisor host • Nutanix Controller VM All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than the Controller VMs and hypervisor hosts can be on this network, which must be isolated and protected. Software Ports Required for Management and Communication The following Nutanix network port diagrams show the ports that must be open for supported hypervisors. The diagrams also shows ports that must be opened for infrastructure services.
  • 53. Network Requirements | Field Installation Guide | Foundation | 53 Figure: Nutanix Network Port Diagram for VMware ESXi Figure: Nutanix Network Port Diagram for the Acropolis Hypervisor
  • 54. Network Requirements | Field Installation Guide | Foundation | 54 Figure: Nutanix Network Port Diagram for Microsoft Hyper-V
7 Controller VM Memory Configurations

This topic lists the recommended Controller VM memory allocations for models and features.

Controller VM Memory Configurations for Base Models

Platform                                                          Recommended Memory (GB)   Default Memory (GB)   vCPUs
Default configuration for all platforms unless otherwise noted    16                        16                    8

The following tables show the minimum amount of memory and vCPU requirements and recommendations for the Controller VM on each node for platforms that do not follow the default.

Nutanix Platforms

Platform      Recommended Memory (GB)   Default Memory (GB)   vCPUs
NX-1020       12                        12                    4
NX-6035C      24                        24                    8
NX-8150       32                        32                    8
NX-8150-G4    32                        32                    8
NX-9040       32                        16                    8
NX-9060-G4    32                        16                    8

Dell Platforms

Platform      Recommended Memory (GB)   Default Memory (GB)   vCPUs
XC730xd-24    32                        16                    8
XC6320-6AF    32                        16                    8
XC630-10AF    32                        16                    8
Lenovo Platforms

Platform      Recommended Memory (GB)   Default Memory (GB)   vCPUs
HX-5500       24                        24                    8
HX-7500       24                        24                    8

Controller VM Memory Configurations for Features

The following table lists the minimum amount of memory required when enabling features. The memory size requirements are in addition to the default or recommended memory available for your platform (Nutanix, Dell, Lenovo) as described in Controller VM Memory Configurations for Base Models. The combined additional memory required for enabled features does not exceed 16 GB.

Note: Default or recommended platform memory + memory required for each enabled feature = total Controller VM memory required

Feature(s)                                                                Memory (GB)
Capacity Tier Deduplication (includes Performance Tier Deduplication)    16
Redundancy Factor 3                                                       8
Performance Tier Deduplication                                            8
Cold Tier nodes (6035-C) + Capacity Tier Deduplication                    4
Performance Tier Deduplication + Redundancy Factor 3                      16
Capacity Tier Deduplication + Redundancy Factor 3                         16
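As a worked example of the note above: an NX-8150 node (32 GB recommended) with Redundancy Factor 3 enabled (8 GB) requires 32 + 8 = 40 GB of Controller VM memory, and a default platform (16 GB) with both Performance Tier Deduplication and Redundancy Factor 3 enabled (16 GB combined) requires 16 + 16 = 32 GB.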