Migration of Native Solaris Zones
The migration of a Native Solaris Zone (NGZ) to an LDOM is not a direct one, but a stepped one:
an NGZ first needs to be migrated to a Kernel Zone (KZ), and then that KZ needs to be migrated to an LDOM.
I spent a lot of time on the web trying to find whether anyone had done this before, and couldn't find
anything. So I took it upon myself to do the task and document the procedure for the benefit of the poor
souls who have to undergo the same.
Please note: the procedure below uses Solaris x86/64 on VMware Workstation, but it will remain
more or less the same on a SPARC system.
Points to consider:
(1) A DHCP setup is a must, at least for the purpose of the migration.
(2) The server on which the repository is configured MUST have a fixed IP.
(3) The repository must be published over HTTP, not as a filesystem/directory.
(4) Storage space is required to facilitate the migration. This is the space where the images are
stored, and it should be available both to the physical servers from which the zones are backed
up and to those they are migrated to.
(5) A private link between the global zone and the Kernel Zone is preferred, to avoid network
latency.
Map the ISO image of the OS to the CD/DVD drive to install the OS:
Select the ISO image location and file.
Select "Power on"; the OS installation starts.
Select the first (16 GB) disk for the OS installation.
Type a root password of your choice (twice).
In the meantime, you need to make some changes in the "Virtual Network Editor".
Check the VMnet1 details. They do NOT match the IP address that we chose for our server.
Log in as root and disable sendmail to avoid those annoying messages coming up on the screen.
Alternatively, you can add an FQDN to the hosts file to work around the issue.
To disable sendmail:
# svcadm disable sendmail
# svcadm disable sendmail-client
To implement the FQDN workaround, edit the /etc/hosts file and add an alias as below.
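A hypothetical entry might look like the following (the hostname SolA and the address match the lab setup used later in this document; the domain name is an assumption):

```
192.9.200.10   SolA   SolA.mydomain.local   loghost
```
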
Now, activate direct root SSH access. It's faster to use PuTTY than the console.
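On Solaris 11 this takes two changes, sketched below: root is a role by default and must be turned back into a normal user account, and sshd must be told to permit root logins.

```
root@SolA:~# rolemod -K type=normal root
root@SolA:~# # edit /etc/ssh/sshd_config and set: PermitRootLogin yes
root@SolA:~# svcadm restart svc:/network/ssh:default
```
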
We are in!!!
Download the repository files from the location below and extract them into the /Repo filesystem:
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/local-repository-2245081.html
Check existing default repository
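The check can be done with pkg (a sketch; the outputs shown in the original screenshots are omitted here). `pkg publisher` lists the configured origins, and `pkg info entire` shows the installed OS version.

```
root@SolA:~# pkg publisher
root@SolA:~# pkg info entire
```
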
As you can see in the above outputs, the Repo version is higher than the OS version.
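The update from the local repository is then initiated with pkg update (a sketch); the --accept flag acknowledges the license notice reproduced below.

```
root@SolA:~# pkg update --accept
```
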
You acknowledge that your use of this Oracle Solaris software product
is subject to, and may not exceed the use for which you are authorized,
(i) the license or cloud services terms that you accepted when you
obtained the right to use Oracle Solaris software; or (ii) the license
terms that you agreed to when you placed your Oracle Solaris software
order with Oracle; or (iii) the Oracle Solaris software license terms
included with the hardware that you acquired from Oracle; or, if (i),
(ii) or (iii) are not applicable, then, (iv) the OTN License Agreement
for Oracle Solaris (which you acknowledge you have read and agree to)
available at
http://www.oracle.com/technetwork/licenses/solaris-cluster-express-license-167852.html.
Note: Software downloaded for trial use or downloaded as replacement
media may not be used to update any unsupported software.
Packages to install: 2
Packages to update: 217
Create boot environment: Yes
Create backup boot environment: No
PHASE ITEMS
Removing old actions 1437/1437
Installing new actions 2433/2433
Updating modified actions 11925/11925
Updating package state database Done
Updating package cache 217/217
Updating image state Done
Creating fast lookup database Done
Updating package cache 1/1
---------------------------------------------------------------------------
NOTE: Please review release notes posted at:
http://www.oracle.com/pls/topic/lookup?ctx=solaris11&id=SERNS
---------------------------------------------------------------------------
As you can see above, the OS version installed and the Repo versions are matching.
PHASE ITEMS
Installing new actions 68177/68177
Updating package state database Done
Updating package cache 0/0
Updating image state Done
Creating fast lookup database Done
Updating package cache 1/1
Installation: Succeeded
done.
Next Steps: Boot the zone, then log into the zone console (zlogin -C)
Zone creation completed. Power on the zone and complete the rest of the configuration.
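For our zone, this is (a sketch):

```
root@SolA:~# zoneadm -z zone1-ngz boot
root@SolA:~# zlogin -C zone1-ngz
```
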
-bash-4.1$ cd /rpool/data
-bash-4.1$ ls -al
total 28
drwxr-xr-x 6 root root 11 May 17 20:21 .
drwxr-xr-x 3 root root 3 May 17 20:21 ..
-rw------- 1 uucp bin 0 May 17 18:06 aculog
drwxr-xr-x 2 adm adm 2 May 17 18:06 exacct
-r-------- 1 root root 28 May 17 20:20 lastlog
drwxr-xr-x 2 adm adm 2 May 17 18:06 log
-rw-r--r-- 1 root root 526 May 17 20:00 messages
drwxr-xr-x 2 root sys 2 May 17 18:06 sm.bin
drwxr-xr-x 2 root sys 2 May 17 18:06 streams
-rw-r--r-- 1 root bin 1860 May 17 20:20 utmpx
-rw-r--r-- 1 adm adm 5580 May 17 20:20 wtmpx
-bash-4.1$ logout
root@zone1-ngz:~# pwd
/root
root@zone1-ngz:~# cd /rpool/data
root@zone1-ngz:/rpool/data# >sa
root@zone1-ngz:/rpool/data# >sandeep
root@zone1-ngz:/rpool/data# >sandeep123
root@zone1-ngz:/rpool/data# pwd
/rpool/data
root@zone1-ngz:/rpool/data# ls -al
total 32
drwxr-xr-x 6 root root 15 May 17 20:23 .
drwxr-xr-x 3 root root 3 May 17 20:21 ..
-rw------- 1 uucp bin 0 May 17 18:06 aculog
drwxr-xr-x 2 adm adm 2 May 17 18:06 exacct
-r-------- 1 root root 28 May 17 20:20 lastlog
drwxr-xr-x 2 adm adm 2 May 17 18:06 log
-rw-r--r-- 1 root root 526 May 17 20:00 messages
-rw-r--r-- 1 root root 0 May 17 20:22 sa
-rw-r--r-- 1 root root 0 May 17 20:22 sandeep
-rw-r--r-- 1 root root 16384 May 17 20:23 sandeep.tar
-rw-r--r-- 1 root root 0 May 17 20:22 sandeep123
drwxr-xr-x 2 root sys 2 May 17 18:06 sm.bin
drwxr-xr-x 2 root sys 2 May 17 18:06 streams
-rw-r--r-- 1 root bin 1860 May 17 20:20 utmpx
-rw-r--r-- 1 adm adm 5580 May 17 20:20 wtmpx
root@zone1-ngz:/rpool/data#
root@zone1-ngz:/rpool/data#
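The tail of the zonecfg session appears below. The full session for creating the kernel zone might have looked like this (a sketch, assuming the SYSsolaris-kz template and the etherstub name stub0 set up in the prerequisites):

```
root@SolA:~# zonecfg -z zone1-kz
zonecfg:zone1-kz> create -t SYSsolaris-kz
zonecfg:zone1-kz> select anet id=0
zonecfg:zone1-kz:anet> set lower-link=stub0
zonecfg:zone1-kz:anet> end
```
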
zonecfg:zone1-kz> commit
zonecfg:zone1-kz> exit
root@SolA:~#
Implementation of Prerequisites:
root@SolA:~# dladm
LINK CLASS MTU STATE OVER
net0 phys 1500 up --
root@SolA:~#
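The etherstub, and the global zone's IP interface on it, are created as sketched below (the names stub0 and glbstub0 match the outputs in this document; exact options may vary in your environment):

```
root@SolA:~# dladm create-etherstub stub0
root@SolA:~# dladm create-vnic -l stub0 glbstub0
root@SolA:~# ipadm create-ip glbstub0
root@SolA:~# ipadm create-addr -T static -a 1.1.1.1/24 glbstub0/v4
```
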
root@SolA:~# dladm
LINK CLASS MTU STATE OVER
net0 phys 1500 up --
stub0 etherstub 9000 unknown --
root@SolA:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
net0 ip ok -- --
net0/v4 static ok -- 192.9.200.10/24
net0/v6 addrconf ok -- fe80::20c:29ff:feae:e4dc/10
root@SolA:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
glbstub0 ip ok -- --
glbstub0/v4 static ok -- 1.1.1.1/24
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
net0 ip ok -- --
net0/v4 static ok -- 192.9.200.10/24
net0/v6 addrconf ok -- fe80::20c:29ff:feae:e4dc/10
root@SolA:~#
root@SolA:# cd /etc/inet
root@SolA:# cp dhcpd.conf.example dhcpd4.conf
Edit the copied file to look as below.
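A minimal dhcpd4.conf for the private link might look like this (a sketch; the subnet, mask, and range are assumptions consistent with the 1.1.1.x addresses and the /27 lease seen later in this document):

```
subnet 1.1.1.0 netmask 255.255.255.224 {
  range 1.1.1.10 1.1.1.20;
  option routers 1.1.1.1;
}
```
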
root@SolA:~#
root@SolA:~# svcs -a | grep dhcp
disabled 17:44:58 svc:/network/dhcp/relay:ipv4
disabled 17:44:58 svc:/network/dhcp/relay:ipv6
disabled 17:44:58 svc:/network/dhcp/server:ipv4
disabled 17:44:58 svc:/network/dhcp/server:ipv6
root@SolA:~#
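For the migration we need the IPv4 DHCP server enabled (a sketch):

```
root@SolA:~# svcadm enable svc:/network/dhcp/server:ipv4
root@SolA:~# svcs dhcp/server:ipv4
```
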
This step is a MUST because, after an initial copy process, as part of the installation, the kernel
zone will configure its pkg publisher and point it at the global zone's repository.
If the global zone's local repository is file/filesystem based, phase 2 of the kernel zone
installation will fail with the message below:
Kindly note: you may face an error while initiating the "install", as below:
1. First, you need to be running an i5/i7-generation processor that supports
nested paging.
2. Shut down your VM, then go into the VM's container/folder, edit the
.vmx file, and add the following at the bottom:
vhv.enable = "TRUE"
Now, install the kernel zone again from the Unified Archive created earlier.
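For reference, a recovery Unified Archive like the one used here can be created in the source global zone with archiveadm (a sketch matching the file name in this document; -r requests a bootable recovery archive, -z selects the zone):

```
root@SolA:~# archiveadm create -r -z zone1-ngz /backup/zone1-ngz_reco.uar
```
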
root@SolA:~# zoneadm -z zone1-kz install -a /backup/zone1-ngz_reco.uar
Progress being logged to /var/log/zones/zoneadm.20160518T095435Z.zone1-kz.install
[Connected to zone 'zone1-kz' console]
Boot device: cdrom1 File and args: /platform/i86pc/kernel/amd64/unix -B install=true
-B aimanifest=/system/shared/ai.xml
reading module /platform/i86pc/amd64/boot_archive...done.
reading kernel file /platform/i86pc/kernel/amd64/unix...done.
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2016, Oracle and/or its affiliates. All rights reserved.
--------------Snip-----------------------------
At this time, the kernel zone installation is in progress, but you can still log in to the zone to see
whether the IP address and other parameters have been properly picked up by the zone.
In our case, the kernel zone should pick up 1.1.1.10 from the DHCP server.
Let's check:
root@solaris:~# ipadm
NAME CLASS/TYPE STATE UNDER ADDR
lo0 loopback ok -- --
lo0/v4 static ok -- 127.0.0.1/8
lo0/v6 static ok -- ::1/128
net0 ip ok -- --
net0/v4 dhcp ok -- 1.1.1.10/27
net0/v6 addrconf ok -- fe80::8:20ff:fe60:b394/10
root@solaris:~# exit
logout
root@SolA:~#
Hostname: zone1-ngz
root@zone1-ngz:~# df -h
Filesystem Size Used Available Capacity Mounted on
rpool/ROOT/solaris-recovery
15G 1.6G 12G 13% /
/devices 0K 0K 0K 0% /devices
/dev 0K 0K 0K 0% /dev
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 1.8G 1.4M 1.8G 1% /system/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/dev/kz/sdir/shared@0
4.4G 1.7M 4.4G 1% /system/shared
/usr/lib/libc/libc_hwcap1.so.1
13G 1.6G 12G 13% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
rpool/ROOT/solaris-recovery/var
15G 118M 12G 1% /var
swap 1.8G 4K 1.8G 1% /tmp
rpool/VARSHARE 15G 1.1M 12G 1% /var/share
rpool/VARSHARE/zones 15G 31K 12G 1% /system/zones
rpool/export 15G 32K 12G 1% /export
rpool/export/home 15G 32K 12G 1% /export/home
rpool 15G 35K 12G 1% /rpool
rpool/data 15G 62K 12G 1% /rpool/data
rpool/VARSHARE/pkg 15G 32K 12G 1% /var/share/pkg
rpool/VARSHARE/pkg/repositories
15G 31K 12G 1% /var/share/pkg/repositories
root@zone1-ngz:~#
root@zone1-ngz:~# cd /rpool/data
root@zone1-ngz:/rpool/data# ls -al
total 77
drwxr-xr-x 6 root root 15 May 17 20:23 .
drwxr-xr-x 4 root root 4 May 18 10:19 ..
-rw------- 1 uucp bin 0 May 17 18:06 aculog
drwxr-xr-x 2 adm adm 2 May 17 18:06 exacct
-r-------- 1 root root 28 May 17 20:20 lastlog
drwxr-xr-x 2 adm adm 2 May 17 18:06 log
-rw-r--r-- 1 root root 526 May 17 20:00 messages
-rw-r--r-- 1 root root 0 May 17 20:22 sa
-rw-r--r-- 1 root root 0 May 17 20:22 sandeep
-rw-r--r-- 1 root root 16384 May 17 20:23 sandeep.tar
-rw-r--r-- 1 root root 0 May 17 20:22 sandeep123
drwxr-xr-x 2 root sys 2 May 17 18:06 sm.bin
drwxr-xr-x 2 root sys 2 May 17 18:06 streams
-rw-r--r-- 1 root bin 1860 May 17 20:20 utmpx
-rw-r--r-- 1 adm adm 5580 May 17 20:20 wtmpx
root@zone1-ngz:/rpool/data#
As you can see above, all the data from the rpool has been successfully migrated to the
kernel zone.
All other zpool disks can now be configured into the kernel zone.
Upon reboot, the kernel zone migration is complete.