
ACCESS TRAINING:

Nexenta Training April 2019


GTM:
https://global.gotomeeting.com/join/876165637

Please join my meeting from your computer, tablet or smartphone.

Access Code: 876-165-637

You can also dial in using your phone.

United States (Toll Free): +1 866 899 4679


United States: +1 (571) 317-3117
Austria (Toll Free): 0 800 202144
Slovakia (Toll Free): 0 800 132 608
South Africa (Toll Free): 0 800 555 451

Class:
https://training.nexenta.com/classroom01

Notes:
https://it.it-news-and-events.info/nexenta/2019-April-Training/access.txt

Nexenta Docs:
https://nexenta.com/products/documentation

Dark-site upgrade instructions are included in
https://prodpkg.nexenta.com/nstor/5.1.2.0/docs/NS-5.1-Installation-Guide-RevA.pdf

ISO for NexentaStor 5.x and NexentaFusion 1.x:
https://nexenta.com/products/downloads/nexentastor5

Log in as a student - student1, student2, etc.
Password is nexentA01

Each student has several VMs:

- a Windows desktop/server (admin desktop + iSCSI client + CIFS client)
- two NexentaStor nodes (aka "heads" or "servers") - storage cluster
- a Linux system - your NFS client
- a NexentaFusion instance - your management GUI

From the Guacamole web page, you can launch a Windows desktop or an SSH session to a NexentaStor server.
Those will open in a separate browser tab.

To log in to the Windows desktop, use studentX and password nexentA01.

To log in to the NexentaStor servers via SSH, use admin and nexentA01.
The naming convention is sXnY. So, for student 3, the two NexentaStor nodes are s3n1 and s3n2.

The IP address conventions are:

192.168.0.XY - the tens digit X is the student number, the last digit Y identifies node 1, node 2, or the VIP; e.g. student 1's nodes are .11 and .12
192.168.1.XY - same convention on the iSCSI network; can be used for heartbeat
Default CLI login is admin/nexenta - change the password to nexentA01 once you log in!
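As an illustrative sketch (the function name here is ours, not part of the course materials), the naming/addressing convention can be written out in Python:

```python
def student_hosts(student: int) -> dict:
    """Map the class naming convention to management IPs:
    node names are sXnY, and the 4th octet is student*10 + node."""
    return {f"s{student}n{node}": f"192.168.0.{student * 10 + node}"
            for node in (1, 2)}

print(student_hosts(3))  # {'s3n1': '192.168.0.31', 's3n2': '192.168.0.32'}
```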

192.168.0.10 s1-win s1-win.edu.nexenta.com
192.168.0.11 s1n1 s1n1.edu.nexenta.com
192.168.0.12 s1n2 s1n2.edu.nexenta.com
192.168.0.13 s1-ubuntu s1-ubuntu.edu.nexenta.com
192.168.0.13 s1-fusion s1-fusion.edu.nexenta.com

Student 1's NexentaFusion server is on port 8457:
http://192.168.0.13:8457

IP addresses to use for VIPs in the cluster:


192.168.0.111 s1-vip1 s1-vip1.edu.nexenta.com
192.168.0.112 s1-vip2 s1-vip2.edu.nexenta.com

Each student environment is unconfigured and unlicensed. Please use these license tokens.

Student assignments:

#  First      Last         Firm        Email                           Token

1 John McLaughlin Nexenta john.mclaughlin@nexenta.com E9DE4D35-DEAA-47E3-9556-46A3804C9215


2 Christian Linhart ML11 christian.linhart@ml11.at 631E6F5F-FC6F-4E59-9521-50C861F41289
3 Horacio Rojas Flexential Horacio.Rojas@flexential.com DC83B75C-E87B-475C-A83E-2549C699AC08
4 Ivan Zorvan HPE van.zorvan@hpe.com 5027CED5-B1F4-43E0-B193-29C266B0E8B8
5 Joe Hockey Flexential Joe.Hockey@flexential.com 0BA298A7-6A69-44B4-B1E8-2DDA9003F389
6 Michael Kohlhauser ML11 michael.kohlhauser@ml11.at 66760DB9-1670-480D-A807-E52A29297405
7 Mzi Ndimande VNA Mzi.Ndimande@vnac.co.za E352A1BA-34F2-400A-BA46-41BAFFCB32F7
8 Radko Pal HPE radko.pal@hpe.com 2CCED926-D4A2-4555-8505-DE1A855E3EF9
9 Randy Nelson Flexential Randy.Nelson@flexential.com 065FF570-F2EC-4196-B42E-C37F3886EC4A
10 Tarun Misra VNA Tarun.Misra@vnac.co.za 2CD37A0E-79B7-4E1C-8BCB-4118D4E0CAA9
11 Zaaher Ismail VNA Zaaher.Ismail@vnac.co.za 50A314C6-CE61-4188-9161-3383C5F0D2F4
12 Brent Huntsman Flexential Brent.Huntsman@flexential.com 26B13440-4C0E-4F83-8C14-BE03B6AED0C7
Useful CLI commands

Getting Started
license activate <token>
license show
Upgrade
software version
system status
software upgrade

Review
software list
system status
Date/Time
config list system.date

e.g. config set system.date=3/31/2017 9:20:00


svc set servers=pool.ntp.org ntp
svc list ntp
svc enable ntp
config list system.date

uptime
alias date="config list system.date"
date

Set Management Address

config get all system.managementAddress

config set system.managementAddress=<IP>

Inventory

enclosure list
disk list
inventory nic
link list
route list
ip list
net list host
net list dns

Dedicated Heartbeat
ip create static bge0/heartbeat 192.168.0.1/24

Rename a host and change timezone


config set system.hostName=
config set system.timeZone=US/Eastern

Change Admin Password


user passwd -p newpass admin

Configuring SMTP Email Service

config get value system.administratorEmail


config set system.administratorEmail=XXXX@XX
config get all alert
config set alert.email.address=abcd@defghi.com
config list | grep smtp

Example:

config set smtp.host=smtp.gmail.com


config set smtp.port=465
config set smtp.useSsl=True
config set smtp.useTls=True
config set smtp.authMethods=PLAIN
config set smtp.user=abcd@gmail.com
config set smtp.password=nexenta1
config set smtp.senderEmail=test@gmail.com
NOTE: If you use gmail as in this example, you may have to login via the web and change a setting to allow for less secure access

system smtp-test

FC

config set system.fcDefaultPortMode=target

Verifying Enclosure and Disk Information

enclosure list
enclosure get all
disk list
disk get all

Managing Networks

Interfaces

link list
link get all e1000g0
inventory nic

Addresses
ip create static
e.g. ip create static e1000g1/v4 10.3.10.38/24
ip list

DNS

net create dns 8.8.8.8


net list dns

Network Route

route create <dest.> <gateway>


route list default

Aggregating NICs

link create aggr [-P ] [-L ] [-T ] [-u ] ...


Example: link create aggr aggr1 e1000g1,e1000g2

Using VLANs

link assign vlan


Example: link assign vlan vlan22 22 e1000g1
Example: link assign vlan vlan22 22 aggr1
link list

Pool

pool create-auto <redundancy> <pool> -M <max-devices> [-c <vdev-size>] [-t <media-type>] [-s <disk-size>] [-r <rpm>] [-N] [-e <enclosures>] [-R altroot] [-o <properties>] [--config-output=<flags>]

Example:
pool create-auto mirror poola-auto -c 2 -e e1,e2,e3 -M 120 -t hdd -s 2TB

pool list <pool name>

When creating a pool, you can force it to use disks even if they are currently in use by adding the -f flag.
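For instance, a hypothetical re-run of the earlier auto-create command over disks that are still labeled by another pool (an assumed scenario, shown only to illustrate where -f goes):

```shell
# -f forces reuse of disks that are currently in use by another pool
pool create-auto mirror poola-auto -c 2 -e e1,e2,e3 -M 120 -t hdd -s 2TB -f
```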

Create a new non-clustered pool with two 3-disk raidz vdevs, using the same disks as before:
a. pool create-auto raidz1 vol01 -M 6 -c 3 -N
b. -c 3 tells it to make each vdev 3 devices
c. Type y when prompted to confirm.
d. disk list -u to identify unused disks
e. pool add vol01 cache <diskName>
f. disk list -u to identify unused disks
g. pool add vol01 log mirror <disk1Name> <disk2Name>
h. pool status vol01 to confirm configuration
pool set autoexpand=yes poola

Configuring HA Cluster

svc list ha

net list host

On each node, create a host entry for the other node with net create host <IP> <name>
on s1n1: net create host 192.168.0.12 s1n2
on s1n2: net create host 192.168.0.11 s1n1

Verify times on each node

config list system.date

hacluster create [-fnv] [-d ] [-H ]
** use -f
Example: hacluster create -d "S1 test cluster" s1n1,s1n2 s1cluster

hacluster status

hacluster status -e

To add net heartbeat:


hacluster add-net-heartbeat [-nyv] <first-node> <first-ip> <second-node> <second-ip>
Example:
hacluster add-net-heartbeat -v s1n1 192.168.1.11 s1n2 192.168.1.12

To add disk heartbeat:


hacluster add-disk-heartbeat [-nyv] <first-node> <second-node> <service> <disk>
Example: hacluster add-disk-heartbeat smc-53-109 smc-53-110 hb c2t0d0

To add a VIP called vip22 on an interface called vlan22 configured over VLAN 22:
haservice add-vip mypool vip22 128.2.126.232/255.255.255.224 s1n1:vlan22,s1n2:vlan22

Connect to NexentaFusion

192.168.0.101 - student 1's NexentaFusion server, on port 8457:
http://192.168.0.101:8457
Default login is admin/nexenta; change the password on first login to Nexenta123!

Use the cog icon on the upper right to access settings:

Select DATE/TIME
Select NTP
Select Timezone
Reboot

Register
https://nexenta.com/products/downloads/nexentastor5

https://nexenta.elmegil.net/Scripts/fe.tgz

----------
iozone and some other tools:

http://sea.zfsdestroy.com/tools.tgz

Essentially, the 4.x binaries work on 5.x; fio and iozone are in there as well.

Determine back-end and front-end performance characteristics with iozone:

Create an iozone-test folder with compression disabled, 32k records.

Otherwise the synthetic data file created by iozone would be highly compressed, leading to silly, meaningless results.
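The reasoning behind disabling compression can be seen with a quick Python sketch, with zlib standing in for the filesystem's compressor (an illustration, not the actual ZFS algorithm): repetitive data shrinks to almost nothing, so a compressing dataset would report absurd throughput, while random data stays full size.

```python
import os
import zlib

# 1 MiB of zeros: roughly what a naive benchmark file looks like to a
# compressing filesystem - it shrinks to almost nothing.
zeros = bytes(1 << 20)

# 1 MiB of random bytes: effectively incompressible.
rand = os.urandom(1 << 20)

print(len(zlib.compress(zeros)))  # a few KB at most
print(len(zlib.compress(rand)))   # roughly the full 1 MiB
```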

Install iozone

Run several tests in the iozone-test folder

Start with a small data file size and a small number of threads to validate the test quickly:

iozone -ec -r 32 -s 982m -l 2 -i 0 -i 1 -i 8

Finish with a big data file size and a larger number of threads:

iozone -ec -r 32 -s 98282m -l 6 -i 0 -i 1 -i 8

use a big data file to minimize caching effects
use multiple threads to generate multiple, parallel I/O requests
don't get too carried away, or else the system will spend too much time running the benchmark and not enough time doing I/O
make sure iozone runs for many hours - perhaps overnight - to "burn in" the system

If you have two nodes, share the iozone folder to the other node and run the iozone tests via NFS. That will show the peak performance with no
network switch or competing traffic.
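A minimal sketch of that cross-node run from the Linux client side (the export path and mount point are assumptions, not from the course materials):

```shell
# Mount node 1's iozone-test share over NFS, then run the small
# validation test against it.
sudo mkdir -p /mnt/iozone-test
sudo mount -t nfs 192.168.0.11:/poola/iozone-test /mnt/iozone-test
cd /mnt/iozone-test
iozone -ec -r 32 -s 982m -l 2 -i 0 -i 1 -i 8
```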

In addition to setting expectations for production performance, iozone can be used to stress the system and shake out errors. Check logs and run
commands like

iostat -en
to look for errors
