Solaris Containers cheat sheet

This is a quick cheat sheet of the commands that can be used when working with zones (containers); for a more complete guide see Solaris zones.

Zone States
Configured                     Configuration has been completed and storage has been committed, but additional configuration is still required.
Incomplete                     The zone is in this state while it is being installed or uninstalled.
Installed                      The zone has a confirmed configuration (zoneadm is used to verify the configuration) and the Solaris packages have been installed; even though it has been installed, it still has no virtual platform associated with it.
Ready (active)                 The zone's virtual platform is established. The kernel creates the zsched process, the network interfaces are plumbed and the filesystems are mounted. The system also assigns a zone ID at this state, but no processes are associated with the zone yet.
Running (active)               A zone enters this state when the first user process is created. This is the normal state for an operational zone.
Shutting down / Down (active)  Normal transitional states while a zone is being shut down.
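
The current state of each zone can be checked from the global zone with zoneadm; an illustrative run (the zone name and path are made up, and the exact columns vary by Solaris release):

# zoneadm list -cv
  ID NAME      STATUS     PATH
   0 global    running    /
   1 testzone  installed  /zones/testzone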

Cheat sheet

Creating a zone                              zonecfg -z <zone>   (see creating a zone for more details)
Deleting a zone from the global system       zonecfg -z <zone> delete -F
Display a zone's current configuration       zonecfg -z <zone> info
Create a zone creation file                  zonecfg -z <zone> export
Verify a zone                                zoneadm -z <zone> verify
Installing a zone                            zoneadm -z <zone> install
Ready a zone                                 zoneadm -z <zone> ready
Boot a zone                                  zoneadm -z <zone> boot
Reboot a zone                                zoneadm -z <zone> reboot
Halt a zone                                  zoneadm -z <zone> halt
Uninstalling a zone                          zoneadm -z <zone> uninstall -F
Viewing zones                                zoneadm list -cv
Login into a zone                            zlogin <zone>
Login to a zone's console                    zlogin -C <zone>   (use ~. to exit)
Login into a zone in safe mode (recovery)    zlogin -S <zone>
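
Putting the lifecycle together, a minimal sketch that configures and starts a zone (the zone name, zonepath and network details are hypothetical; see creating a zone for the full procedure):

# zonecfg -z testzone
zonecfg:testzone> create
zonecfg:testzone> set zonepath=/zones/testzone
zonecfg:testzone> set autoboot=true
zonecfg:testzone> add net
zonecfg:testzone:net> set address=192.168.1.50
zonecfg:testzone:net> set physical=e1000g0
zonecfg:testzone:net> end
zonecfg:testzone> verify
zonecfg:testzone> commit
zonecfg:testzone> exit
# zoneadm -z testzone install
# zoneadm -z testzone boot
# zlogin -C testzone          (answer the sysid questions, then ~. to exit)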

Add/remove a package (global zone)           # pkgadd -G -d . <package>
  (if the -G option is missing the package will be added to all zones)
Add/remove a package (non-global zone)       # pkgadd -Z -d . <package>
  (if the -Z option is missing the package will be added to all zones)
Query packages in all non-global zones       # pkginfo -Z
Query packages in a specified zone           # pkginfo -z <zone>
List processes in a zone                     # ps -z <zone>
List the ipcs in a zone                      # ipcs -z <zone>
Process grep in a zone                       # pgrep -z <zone>
List the ptree in a zone                     # ptree -z <zone>
Display all filesystems                      # df -Zk
Display the zone's process information       # prstat -Z
  (must be logged into the zone)
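
As a quick sketch, the same observation tools can be pointed at a single zone from the global zone (the zone name is hypothetical):

# pgrep -z testzone sshd     (pids of sshd processes running in testzone)
# ps -fz testzone            (full listing of testzone's processes)
# ptree -z testzone          (process tree for testzone)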

Quick and dirty ZFS cheatsheet
December 20, 2008

Create simple striped pool:
zpool create [pool_name] [device] [device] ...
zpool create datapool c5t433127A900011C370000C00003210000d0 c5t433127B4001031250000900000540000d0

Create mirrored pool:
zpool create [pool_name] mirror [device] [device] ...
zpool create datapool mirror c5t433127A900011C370000C00003210000d0 c5t433127B4001031250000900000540000d0

Create RAID-Z pool:
zpool create [pool_name] raidz [device] [device] [device] ...
zpool create datapool raidz c5t433127A900011C370000C00003210000d0 c5t433127B4001031250000900000540000d0 c5t439257C4000019250000900000540000d0

Transform simple pool to a mirror:
zpool create [pool_name] [device]
zpool attach [pool_name] [existing_device] [new_device]
zpool create datapool c5t433127A900011C370000C00003210000d0
zpool attach datapool c5t433127A900011C370000C00003210000d0 c5t433127B4001031250000900000540000d0

Expand simple pool:
zpool create [pool_name] [device]
zpool add [pool_name] [new_device]
zpool create datapool c5t433127A900011C370000C00003210000d0
zpool add datapool c5t433127B4001031250000900000540000d0

Expand mirrored pool by attaching additional mirror:
zpool add [pool_name] mirror [new_device] [new_device]
zpool add datapool mirror c5t433127A900011C370000C00003460000d0 c5t433127B400011C370000C00003410000d0

Replace device in a pool:
zpool replace [pool_name] [old_device] [new_device]
zpool replace datapool c5t433127A900011C370000C00003410000d0 c5t433127B4001031250000900000540000d0

Destroy pool:
zpool destroy [pool_name]
zpool destroy datapool

Set pool mountpoint:
zfs set mountpoint=/path [pool_name]
zfs set mountpoint=/export/zfs datapool

Display configured pools:
zpool list

Display pool status info:
zpool status [-v] [pool_name]
zpool status -v datapool

Display pool I/O statistics:
zpool iostat [pool_name]
zpool iostat datapool

Display pool command history:
zpool history [pool_name]
zpool history datapool

Export a pool:
zpool export [pool_name]
zpool export datapool

Import a pool:
zpool import [pool_name]
zpool import datapool

Create a filesystem:
zfs create [pool_name]/[fs_name]
zfs create datapool/filesystem

Destroy a filesystem:
zfs destroy [pool_name]/[fs_name]
zfs destroy datapool/filesystem

Rename a filesystem:
zfs rename [pool_name]/[fs_name] [pool_name]/[fs_name]
zfs rename datapool/filesystem datapool/newfilesystem

Move a filesystem:
zfs rename [pool_name]/[fs_name] [pool_name]/[fs_name]/[fs_name]
zfs rename datapool/filesystem datapool/users/filesystem

Display properties of a filesystem:
zfs get all [pool_name]/[fs_name]
zfs get all datapool/filesystem

Make a snapshot:
zfs snapshot [pool_name]/[fs_name]@[time]
zfs snapshot datapool/filesystem@friday

Roll back filesystem to its snapshot:
zfs rollback [pool_name]/[fs_name]@[time]
zfs rollback datapool/filesystem@friday

Clone a filesystem:
zfs snapshot [pool_name]/[fs_name]@[time]
zfs clone [pool_name]/[fs_name]@[time] [pool_name]/[fs_name]
zfs snapshot datapool/filesystem@today
zfs clone datapool/filesystem@today datapool/filesystemclone

Backup filesystem to a file:
zfs send [pool_name]/[fs_name]@[time] > /path/to/file
zfs send datapool/filesystem@friday > /tmp/filesystem.bkp

Restore filesystem from a file:
zfs receive [pool_name]/[fs_name] < /path/to/file
zfs receive datapool/restoredfilesystem < /tmp/filesystem.bkp
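
send and receive can also be piped together to copy a filesystem without an intermediate file; a sketch, where remotehost and backuppool are hypothetical:
zfs send datapool/filesystem@friday | ssh remotehost zfs receive backuppool/filesystem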

Create ZFS volume:
zfs create -V [size] [pool_name]/[vol_name]
zfs create -V 100mb datapool/zvolume
newfs /dev/zvol/dsk/datapool/zvolume
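
A volume can also be used as a raw device, for example as additional swap; a sketch (the volume name and size are made up):
zfs create -V 1gb datapool/swapvol
swap -a /dev/zvol/dsk/datapool/swapvol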

System Controller Systems (4800, 6900)
These systems have a system controller accessible over the network. They are given a name distinct from the system(s) they control because they can control multiple different systems. To gain access to the system controller, telnet to the relevant name and choose the 'Platform Shell' option. You will then need to provide a password. The other options in this list give access to the relevant system consoles. At the command line, the following should be enough for basic operation :-

Command              Action
poweron all          Turn on all boards and start systems booting
setkeyswitch -d A    Turn off domain A (or C if you replace A in the command)
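
A sketch of a typical session (the controller name is made up; the menu and password prompt are described above):

$ telnet sc-4800.example.org
  (choose 'Platform Shell' from the menu and supply the password)
  poweron all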

eXtended System Control Facility (XSCF on the M5000)
The XSCF is provided on separate hardware from the main M5000 processing capacity. The network interfaces are distinct from those used by the server and are configured to connect via ssh. To gain access to the console, ssh into the XSCF controller and then connect to domain 0 on the system. The M5000 servers are all configured with one domain at present. The controllers are registered in the format 'jamaican-xscf.iso.port.ac.uk'.

Command                Action
poweron -d 0           Power on domain 0
poweroff -d 0          Power off domain 0
sendbreak -d 0         Send a break signal to domain 0
console -d 0           Connect to the console of domain 0
showdomainstatus -a    Show the status of all domains
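
For example, checking and reaching domain 0 (the user name is illustrative; the host name follows the format above):

$ ssh admin@jamaican-xscf.iso.port.ac.uk
XSCF> showdomainstatus -a
XSCF> console -d 0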

integrated Lights Out Manager (iLom on the T5220)
The iLom can be used via either ssh or a web browser interface. To connect to the iLom of a server use the name format 'bread-lom.iso.port.ac.uk'. Although the servers are capable of making use of an aLom interface, Sun are reportedly standardizing all controllers on the iLom model, so it would be best to familiarize yourself with the new commands.

Command                              Action
start /SP/console                    Connect to the console of the server
set /HOST send_break_action=break    Send a break signal to the host
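
For example, reaching a server console over ssh (the user name is illustrative; the host name follows the format above):

$ ssh root@bread-lom.iso.port.ac.uk
-> start /SP/console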
