The idea is to duplicate the currently running boot environment (BE) and upgrade
the duplicate BE while the current one keeps running.
Rebooting will activate the new BE. If something goes wrong, the previous good BE
can easily be re-activated with the next reboot.
Live Upgrade divides file systems into two groups: critical (ones the OS cannot
live without, such as / and /var) and shareable (such as /export or /home).
Note: place the new BE on a disk slice. An SVM metadevice cannot be used for the
new BE, but the running BE can be on SVM.
But there is a trick here: these packages are from Solaris 9, and we want to
upgrade the new BE to Solaris 10.
So we need to uninstall these packages and install new ones matching the Solaris
release to be installed (in this case, Solaris 10).
Example:
# tmp> patchadd 137477-01
Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...
Patch number 137477-01 has been successfully installed.
See /var/sadm/patch/137477-01/log for details
Patch packages installed:
SUNWbzip
SUNWsfman
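The package swap itself can be sketched as follows, assuming the usual Live Upgrade package names (SUNWlur and SUNWluu, plus SUNWlucfg on later releases) and the Solaris 10 media mounted at /cdrom/cdrom0; both the media path and the package set are assumptions, not taken from the original text:

```shell
# Remove the Solaris 9 Live Upgrade packages...
pkgrm SUNWluu SUNWlur
# ...and install the versions shipped on the Solaris 10 media
pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWluu SUNWlur
```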
Two partitions, 0 and 3 (20 GB each), were created on disk 1. They will be
dedicated to the / and /var of the upgraded BE.
partition> p (Disk 1)
Current partition table (unnamed):
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       1 -  4122       20.00GB    (4122/0/0)   41945472
  1 unassigned    wu       0               0          (0/0/0)             0
  2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  3        var    wm    4123 -  8244       20.00GB    (4122/0/0)   41945472
  4 unassigned    wm       0               0          (0/0/0)             0
  5 unassigned    wm       0               0          (0/0/0)             0
  6 unassigned    wu       0               0          (0/0/0)             0
  7 unassigned    wm       0               0          (0/0/0)             0
/dev/dsk/c1t0d0s1  -                   swap     -      no   -
/dev/dsk/c1t0d0s0  /dev/rdsk/c1t0d0s0  /        ufs    1    no   -
/dev/dsk/c1t0d0s3  /dev/rdsk/c1t0d0s3  /var     ufs    1    no   -
/dev/dsk/c1t0d0s4  /dev/rdsk/c1t0d0s4  /.0      ufs    2    yes  -
/dev/dsk/c1t0d0s5  /dev/rdsk/c1t0d0s5  /backup  ufs    2    yes  -
swap               -                   /tmp     tmpfs  -    yes  -
The lucreate command creates the new BE; its syntax allows more than one -m
option, each specifying a critical file system for the new BE.
Here I will not specify swap under -m, so both BEs will share the same swap
slice.
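Sketched as a command, the create step would look something like this (a reconstruction, not the author's exact invocation; the slice names follow the disk 1 partition table above and the later lufslist output for newBE-disk1):

```shell
# -c names the current BE, -n names the new BE; / and /var go on the
# 20 GB slices prepared on disk 1 (c1t1d0).
lucreate -c oldBE-disk0 -n newBE-disk1 \
  -m /:/dev/dsk/c1t1d0s0:ufs \
  -m /var:/dev/dsk/c1t1d0s3:ufs
```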
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0 yes yes yes no -
newBE-disk1 no no no no ACTIVE
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0 yes yes yes no -
newBE-disk1 yes no no yes -
Let's first check the media from which we want to perform the OS upgrade.
Running luupgrade on its own prints all of its options, which makes for very
good help text.
Now let's try a dry run only (option -N) to see a 'projection' of upgrading the
new BE to Solaris 10.
Note: I use -N for the dry run and place the error and output log files on a
shared FS (/.0).
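The dry run, sketched (the media path matches the luupgrade output below; the log file names under /.0 are my own placeholders, not from the original text):

```shell
# -u: upgrade the BE; -N: dry run only; -l/-o: error and output logs
# placed on the shared /.0 file system
luupgrade -u -n newBE-disk1 \
  -s /net/unixlab/export/jumpstart/distrib/sparc/5.10u7 \
  -N -l /.0/luupgrade.err.log -o /.0/luupgrade.out.log
```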
42126 blocks
miniroot filesystem is <lofs>
Mounting miniroot at
</net/unixlab/export/jumpstart/distrib/sparc/5.10u7/Solaris_10/Tools/Boot>
Validating the contents of the media
</net/unixlab/export/jumpstart/distrib/sparc/5.10u7>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <newBE-disk1>.
Performing the operating system upgrade of the BE <newBE-disk1>.
Execute Command:
</net/unixlab/export/jumpstart/distrib/sparc/5.10u7/Solaris_10/Tools/Boot/
usr/sbin/install.d/pfinstall -L /a -c
/net/unixlab/export/jumpstart/distrib/sparc/5.10u7
/tmp/.luupgrade.profile.upgrade.5743>.
Adding operating system patches to the BE <newBE-disk1>.
Execute Command
</net/unixlab/export/jumpstart/distrib/sparc/5.10u7/Solaris_10/Tools/Boot/
usr/sbin/install.d/install_config/patch_finish -R "/a" -c
"/net/unixlab/export/jumpstart/distrib/sparc/5.10u7">.
42126 blocks
miniroot filesystem is <lofs>
Mounting miniroot at
</net/unixlab/export/jumpstart/distrib/sparc/5.10u7/Solaris_10/Tools/Boot>
Validating the contents of the media
</net/unixlab/export/jumpstart/distrib/sparc/5.10u7>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <newBE-disk1>.
Determining packages to install or upgrade for BE <newBE-disk1>.
Performing the operating system upgrade of the BE <newBE-disk1>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <newBE-disk1>.
Package information successfully updated on boot environment <newBE-disk1>.
Adding operating system patches to the BE <newBE-disk1>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <newBE-disk1> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <newBE-disk1> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <newBE-disk1>. Before you activate boot
environment <newBE-disk1>, determine if any additional system maintenance
is required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <newBE-disk1> is complete.
Installing failsafe
Failsafe install is complete.
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0 yes yes yes no -
newBE-disk1                yes      no     no        no     UPDATING (can take up to 2h)
Okay, now we need to activate the new BE, which will make it bootable on the
next reboot.
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0 yes yes yes no -
newBE-disk1 yes no no yes -
# luactivate
oldBE-disk0
# luactivate newBE-disk1
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0 yes yes no no -
newBE-disk1 yes no yes no -
Reboot
# init 6
The lufslist command shows you a nice output about a BE and its file systems.
# lufslist newBE-disk1
boot environment name: newBE-disk1
This boot environment is currently active.
This boot environment will be active on next system boot.
Filesystem              fstype   device size  Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/dsk/c1t0d0s1 swap 4298342400 - -
/dev/dsk/c1t1d0s0 ufs 21476081664 / -
/dev/dsk/c1t1d0s3 ufs 21476081664 /var -
/dev/dsk/c1t0d0s4 ufs 21476081664 /.0 -
/dev/dsk/c1t0d0s5 ufs 38752813056 /backup -
# lufslist oldBE-disk0
boot environment name: oldBE-disk0
Filesystem              fstype   device size  Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/dsk/c1t0d0s1 swap 4298342400 - -
/dev/dsk/c1t0d0s0 ufs 4298342400 / -
/dev/dsk/c1t0d0s3 ufs 4298342400 /var -
/dev/dsk/c1t0d0s4 ufs 21476081664 /.0 -
/dev/dsk/c1t0d0s5 ufs 38752813056 /backup -
You can also compare the current BE with the one specified in the command below.
# lucompare newBE-disk1
ERROR: newBE-disk1 is the active boot environment; cannot compare with
itself
# lucompare oldBE-disk0
Determining the configuration of oldBE-disk0 ...
zoneadm: global: could not get state: No such zone configured
zoneadm: failed to get zone data
< newBE-disk1
> oldBE-disk0
Processing Global Zone
Comparing / ...
Links differ
01 < /:root:root:31:16877:DIR:
02 > /:root:root:25:16877:DIR:
Permissions, Links, Group differ
01 < /lib:root:bin:7:16877:DIR:
02 > /lib:root:root:1:41471:SYMLINK:9:
02 > /lib/svc does not exist
02 > /lib/svc/bin does not exist
02 > /lib/svc/bin/lsvcrun does not exist
Etc etc
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldBE-disk0 yes no no no COMPARING
newBE-disk1 yes yes yes no -
Another check:
# cat /etc/release
Solaris 10 5/09 s10s_u7wos_08 SPARC
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 30 March 2009
# cat /etc/release~9
Solaris 9 9/05 s9s_u8wos_05 SPARC
Copyright 2005 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 04 August 2005
Tue Dec 8 09:15:23 PST 2009 : 01 : supplementary software
Tue Dec 8 09:15:24 PST 2009 : 02 : prototype tree
# df -h -F ufs
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t1d0s0 20G 4.3G 15G 22% /
/dev/dsk/c1t1d0s3 20G 86M 19G 1% /var
/dev/dsk/c1t0d0s4 20G 20M 19G 1% /.0
/dev/dsk/c1t0d0s5 36G 1.7G 33G 5% /backup
/dev/dsk/c1t0d0s0 3.9G 1.6G 2.3G 41% /.alt.tmp.b-t2b.mnt
/dev/dsk/c1t0d0s3 3.9G 33M 3.9G 1% /.alt.tmp.b-
t2b.mnt/var
# df -h -F lofs
Filesystem size used avail capacity Mounted on
/.0 20G 20M 19G 1% /.alt.tmp.b-t2b.mnt/.0
/backup 36G 1.7G 33G 5% /.alt.tmp.b-
t2b.mnt/backup
Description:
The error messages are seen when using the luupgrade command to upgrade a new
boot environment.
Cause:
An older version of Solaris Live Upgrade is being used. The Solaris Live Upgrade
packages you have installed on your system are incompatible with the media and
the release on that media.
Solution:
Always use the Solaris Live Upgrade packages from the release you are upgrading
to.
Example:
In the following example, the error message indicates that the Solaris Live
Upgrade packages on the system are not the same version as on the media.
ERROR: One or more patches required by Solaris Live Upgrade has not been
installed.
Cause:
One or more patches required by Solaris Live Upgrade are not installed on your
system. Beware that this error message does not catch all missing patches.
Solution:
Before using Solaris Live Upgrade, always install all the required patches. Ensure
that you have the most recently updated patch list by
consulting http://sunsolve.sun.com. Search for the info doc 72099 on the SunSolve
web site.
ERROR: Device mapping command </sbin/biosdev> failed. Please reboot and try
again.
Solution (reason 1):
Reboot the system and try Solaris Live Upgrade again.
Cause (reason 2):
If you reboot your system and get the same error message, you have two or more
identical disks. The device mapping command is unable to distinguish between
them.
Cannot delete the boot environment that contains the GRUB menu
Cause:
Solaris Live Upgrade imposes the restriction that a boot environment cannot be
deleted if the boot environment contains the GRUB menu.

The file system containing the GRUB menu was accidentally remade. However,
the disk has the same slices as before; for example, the disk was not
re-sliced.
Cause:
The file system that contains the GRUB menu is critical to keeping the system
bootable. Solaris Live Upgrade commands do not destroy the GRUB menu. But if
you accidentally remake or otherwise destroy the file system containing the
GRUB menu with a command other than a Solaris Live Upgrade command, the
recovery software attempts to reinstall the GRUB menu. The recovery software
puts the GRUB menu back in the same file system at the next reboot. For
example, you might have used the newfs or mkfs command on the file system and
accidentally destroyed the GRUB menu. To restore the GRUB menu correctly, the
slice must adhere to the following condition:
- Remain a part of the same Solaris Live Upgrade boot environment where the
slice resided previously
Before rebooting the system, take any necessary corrective actions on the
slice.
Solution:
Reboot the system. A backup copy of the GRUB menu is automatically installed.
The Solaris OS Recommended Patch Cluster provides critical Solaris OS security, data corruption,
and system availability fixes, hence it is advisable to patch your Solaris systems at least twice a
year, as per Oracle-Sun's Critical Patch Update release schedule. I prefer to execute the patch
cycle for my environment at the end of April and in late October every year.
Oracle-Sun CPUs are released on the Tuesday closest to the 17th of January, April, July, and
October –
See - http://www.oracle.com/technetwork/topics/security/alerts-086861.html
In my environment, I use Live Upgrade to patch our Solaris systems. The reasons for using Live
Upgrade for patching are:
1. It creates a copy of the system environment; that is, a copy of the root (/) file system.
2. Live Upgrade has a built-in feature for splitting the mirrors of an SVM-mirrored root (the detach,
attach, and preserve options of lucreate), so there is little overhead in dealing with SVM mirror
breaking separately.
3. Less downtime (not more than 15-20 minutes) and minimal risk.
4. A better back-out option: in case something breaks after patching, revert to the old BE and be back
at the stage from where you started; again, that doesn't take much downtime and is a safe option.
5. The most appropriate option for Solaris servers that have zones/containers installed.
There might be many more benefits out there, but I find the above benefits the best fit for my purpose.
So to summarize, all tasks except the reboot can be accomplished on an operational production
system; the impact on any running process is minimal. Live Upgrade is a combination of maximizing
system availability when applying changes and minimizing risk by offering the ability to reboot to a
known working state (your original environment).
Well, let's see how to do it in real life. In my current environment we have many servers that use
Solaris Volume Manager as their primary volume manager for disks and data. So let's take a look at
the procedure to patch servers that have SVM installed and configured, along with zones sitting on a
ZFS file system.
# metastat -c
d32 p 1.0GB d4
d33 p 1.0GB d4
d36 p 40GB d4
d35 p 1.0GB d4
d34 p 4.0GB d4
d60 p 16GB d4
d30 p 1.0GB d4
d31 p 1.0GB d4
d4 m 100GB d14 d24
d14 s 100GB c1t0d0s4
d24 s 100GB c1t1d0s4
d103 m 10GB d23 d13
d23 s 10GB c1t1d0s3
d13 s 10GB c1t0d0s3
d100 m 10GB d20 d10
d20 s 10GB c1t1d0s0
d10 s 10GB c1t0d0s0
d1 m 16GB d11 d21
d11 s 16GB c1t0d0s1
d21 s 16GB c1t1d0s1
Alright, my / is on d100 and /var is on d103. Let's create an alternate boot environment out of them.
Here I'm creating a metadevice d0 representing the / UFS file system, with sub-mirror d20 (d20 first
gets detached from d100 and then attached to d0). The same applies to the /var file system and its
metadevice configuration.
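A sketch of the lucreate call described above (a reconstruction assuming the standard -m detach,attach,preserve syntax; the metadevice names follow the surrounding text):

```shell
# d20/d23 are detached from the running mirrors d100/d103, attached to the
# new one-way mirrors d0/d3, and their contents preserved (no copy needed).
lucreate -n Sol10pu \
  -m /:/dev/md/dsk/d0:ufs,mirror \
  -m /:/dev/md/dsk/d20:detach,attach,preserve \
  -m /var:/dev/md/dsk/d3:ufs,mirror \
  -m /var:/dev/md/dsk/d23:detach,attach,preserve
```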
In the above command I'm creating a new boot environment called Sol10pu using the -n option; the
-m option specifies the vfstab information for a new UFS-based BE.
NOTE: The -m option is not supported for BEs based on ZFS file systems.
NOTE: If you're performing an upgrade and patching in one go, here is a point to ponder:
Before upgrading, you must install the Oracle Solaris Live Upgrade packages from the release
to which you are upgrading. New capabilities are added to the upgrade tools, so installing the
new packages from the target release is important. For example, if you need to upgrade from
Oracle Solaris 10 update 4 to Oracle Solaris 10 update 8, you must get the Oracle Solaris Live
Upgrade packages from the Oracle Solaris 10 update 8 DVD.
Once the above command finishes, you will see that your metadevice configuration has changed as follows:
# metastat -c
d32 p 1.0GB d4
d33 p 1.0GB d4
d36 p 40GB d4
d35 p 1.0GB d4
d34 p 4.0GB d4
d60 p 16GB d4
d30 p 1.0GB d4
d31 p 1.0GB d4
d4 m 100GB d14 d24
d14 s 100GB c1t0d0s4
d24 s 100GB c1t1d0s4
d103 m 10GB d23
d13 s 10GB c1t1d0s3
d100 m 10GB d20
d10 s 10GB c1t1d0s0
d3 m 10GB d13
d23 s 10GB c1t0d0s3
d0 m 10GB d10
d20 s 10GB c1t0d0s0
d1 m 16GB d11 d21
d11 s 16GB c1t0d0s1
d21 s 16GB c1t1d0s1
d0 and d3 each have one sub-mirror, and d100 and d103 each have one sub-mirror associated.
You will also be able to see two boot environments on your Solaris system:
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10 yes yes yes no -
Sol10pu yes no no yes -
Fine, so now we have two boot environments, and we are going to patch the alternate BE (Sol10pu)
using a patching tool called PCA (Patch Check Advanced), which I use to apply patches to our
Solaris systems. PCA has been set up to download patches via a local web proxy that can access
outside systems. PCA needs:
- A Perl distribution
- At least one server that is internet-facing (this server then acts as a proxy for the rest of the servers)
- The patch cross-reference file patchdiag.xref (always the latest one while patching)
- A valid Oracle support (MOS) user ID and password
- If required, some wrapper scripts around PCA
# lumount Sol10pu /a
/a
Now I'll create a temporary directory to download the missing and required patches:
# mkdir -p /patchman/patches
My next job is to generate the patch_order file:
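A sketch of that step, assuming PCA is installed with a current patchdiag.xref; turning PCA's report into a plain patch_order list is environment-specific, so the last step is only indicative:

```shell
cd /patchman/patches
pca -R /a -l missing                 # report patches missing on the ABE mounted at /a
pca -R /a -d missing                 # download them into the current directory
pca -R /a -l missing > missing.txt   # then derive patch_order (one patch ID per
                                     # line, in install order) from this report
```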
# luumount /a
Now if you populate the /patchman/patches directory, you will see the list of patches in there.
Okay, at this stage we are ready to upgrade the ABE with the patches available:
Once the patches are installed, the tool automatically unmounts the ABE Sol10pu from the mount
point /a.
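That install step, sketched with luupgrade -t, which applies a list of patches to a named BE (the patch_order path is an assumption on my part):

```shell
# Apply the downloaded patches to the ABE Sol10pu; luupgrade mounts and
# unmounts the BE by itself.
luupgrade -t -n Sol10pu -s /patchman/patches `cat /patchman/patches/patch_order`
```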
Now it's time to activate the ABE Sol10pu, which has just been patched, using the Live Upgrade utility.
# luactivate Sol10pu
A Live Upgrade Sync operation will be performed on startup of boot
environment .
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
2. Change the boot device back to the original boot environment by typing:
boot
**********************************************************************
# init 6
updating /platform/sun4v/boot_archive
SYSTEM GOING DOWN!!!!
NOTE: With Live Upgrade, always use the init 6 or shutdown commands. The halt and reboot
commands will create a big-time bang, be aware!!!
Great, it's been a week since patching, and the application and DB owners are happy with it, so upon
their confirmation we can move on to the post-patching tasks: delete the old boot environment and
rebuild the metadevices into their mirrored layout.
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10 yes no no yes -
Sol10pu yes yes yes no -
# ludelete Sol10
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment deleted.
# metastat -c
d32 p 1.0GB d4
d33 p 1.0GB d4
d36 p 40GB d4
d35 p 1.0GB d4
d34 p 4.0GB d4
d60 p 16GB d4
d30 p 1.0GB d4
d31 p 1.0GB d4
d4 m 100GB d14 d24
d14 s 100GB c1t0d0s4
d24 s 100GB c1t1d0s4
d103 m 10GB d23
d13 s 10GB c1t1d0s3
d100 m 10GB d20
d10 s 10GB c1t1d0s0
d3 m 10GB d13
d23 s 10GB c1t0d0s3
d0 m 10GB d10
d20 s 10GB c1t0d0s0
d1 m 16GB d11 d21
d11 s 16GB c1t0d0s1
d21 s 16GB c1t1d0s1
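Between the two listings, the old one-way mirrors d100 and d103 left over from the deleted BE are presumably cleared to free their sub-mirrors (a sketch of my own; metaclear removes a mirror definition without touching its sub-mirror metadevices):

```shell
metaclear d100   # old / mirror of the deleted BE
metaclear d103   # old /var mirror of the deleted BE
```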
# metastat -c
d32 p 1.0GB d4
d33 p 1.0GB d4
d36 p 40GB d4
d35 p 1.0GB d4
d34 p 4.0GB d4
d60 p 16GB d4
d30 p 1.0GB d4
d31 p 1.0GB d4
d4 m 100GB d14 d24
d14 s 100GB c1t0d0s4
d24 s 100GB c1t1d0s4
d3 m 10GB d13
d23 s 10GB c1t0d0s3
d0 m 10GB d10
d20 s 10GB c1t0d0s0
d1 m 16GB d11 d21
d11 s 16GB c1t0d0s1
d21 s 16GB c1t1d0s1
d13 s 10GB c1t1d0s3
d10 s 10GB c1t1d0s0
Next, attach the sub-mirrors d10 and d13 to metadevices d0 and d3, respectively.
# metattach d0 d10
d0: submirror d10 is attached
# metattach d3 d13
d3: submirror d13 is attached
# metastat -c
d32 p 1.0GB d4
d33 p 1.0GB d4
d36 p 40GB d4
d35 p 1.0GB d4
d34 p 4.0GB d4
d60 p 16GB d4
d30 p 1.0GB d4
d31 p 1.0GB d4
d4 m 100GB d14 d24
d14 s 100GB c1t0d0s4
d24 s 100GB c1t1d0s4
d3 m 10GB d23 d13 (resync-25%)
d23 s 10GB c1t1d0s3
d13 s 10GB c1t0d0s3
d0 m 10GB d20 d10 (resync-45%)
d20 s 10GB c1t1d0s0
d10 s 10GB c1t0d0s0
d1 m 16GB d11 d21
d11 s 16GB c1t0d0s1
d21 s 16GB c1t1d0s1
That's it. Now you're done patching your Solaris server and the zones deployed on it.
Known issue:
root@audcourtap5 # luupgrade -u -n s10u11_Jul_2016 -s /mnt
67352 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
ERROR:
The auto registration file <> does not exist or incomplete.
The auto registration file is mandatory for this upgrade.
Use -k <autoreg_file> argument along with luupgrade command.
autoreg_file is path to auto registration information file.
See sysidcfg(4) for a list of valid keywords for use in
this file.
oracle_user=xxxx
oracle_pw=xxxx
http_proxy_host=xxxx
http_proxy_port=xxxx
http_proxy_user=xxxx
http_proxy_pw=xxxx
Solution:
Create the auto registration disabled file, then upgrade the new ABE from our latest ISO.
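Sketched as commands (the -k flag comes from the error message above; auto_reg=disable is the documented keyword for skipping auto registration; the file path /var/tmp/no-autoreg is an arbitrary choice of mine):

```shell
# Create the auto registration disabled file...
echo "auto_reg=disable" > /var/tmp/no-autoreg
# ...and rerun the upgrade, pointing luupgrade at it with -k
luupgrade -u -n s10u11_Jul_2016 -s /mnt -k /var/tmp/no-autoreg
```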
LiveUpgrade problems
I came across a few incidents recently related to LiveUpgrade, and I would like to share them with the
system administrator community through this blog.
Issue #1
Problem -
Solution -
Just execute -
# devfsadm -v -C
It will clear any broken or unreferenced device links under the /dev directory. After performing this,
LiveUpgrade works like a piece of cake!
Issue #2
Problem -
After LiveUpgrade/patching from Sol10u7 to Sol10u8, I activated the ABE Sol10u8 and booted from it.
After doing so, I deleted one of the containers, as we no longer needed it, and cleared the
metadevice where the zone root resided. By deleting the container I made changes to the ABE
Sol10u8, but those were obviously not reflected in the original BE Sol10u7. So now, while deleting
Sol10u7, the LU program finds a mismatch and fails to delete the boot environment.
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10u7 yes no no yes -
Sol10u8 yes yes yes no -
# ludelete Sol10u7
ERROR: mount: /zone1: No such file or directory
ERROR: cannot mount mount point device /.alt.tmp.b-k2c.mnt/zone1 device
/zone1
ERROR: failed to mount file system /zone1 on /.alt.tmp.b-k2c.mnt/zone1
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file /etc/lu/ICF.2
ERROR: Cannot mount BE .
Unable to delete boot environment.
Solution -
I don't know if it's a Sun-supported solution or a workaround. I simply edited the /etc/lu/ICF.nn file for
the problematic BE and deleted the lines that reference the missing (or moved) file systems, and it
worked fine.
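For example (a sketch; the real file is /etc/lu/ICF.2, and the d40 metadevice and /zone1 entry below are hypothetical stand-ins for the deleted container's file system, demonstrated on a copy in /tmp):

```shell
# Fake ICF file standing in for /etc/lu/ICF.2
cat > /tmp/ICF.2 <<'EOF'
Sol10u7:/:/dev/md/dsk/d0:ufs:20982912
Sol10u7:/zone1:/dev/md/dsk/d40:ufs:2097152
EOF
cp /tmp/ICF.2 /tmp/ICF.2.bak                      # back up before hand-editing
grep -v ':/zone1:' /tmp/ICF.2.bak > /tmp/ICF.2    # drop the stale /zone1 line
cat /tmp/ICF.2
```

After this, ludelete no longer tries to mount the vanished file system.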
# ludelete Sol10u7
Determining the devices to be marked free.
INFORMATION: Unable to determine size or capacity of slice .
ERROR: An error occurred during creation of configuration file.
WARNING: Target BE BE ID <2> unable to verify file systems belonging to BE
are not mounted.
WARNING: Unable to determine disk partition configuration information for
BE .
WARNING: Unable to determine the devices/datasets to be freed for BE .
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment deleted.
# echo $?
0
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10u8 yes yes yes no -
Issue #3
Problem: -
# ludelete Sol10u8_stage1
ERROR: Read-only file system: cannot create mount point
ERROR: failed to create mount point for file system
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file
ERROR: Cannot mount BE .
Unable to delete boot environment.
Solution: -
As part of the solution, I used an unofficial method to get rid of the pesky BE: I modified /etc/lu/ICF.2
a bit. And trust me, lufslist is your true friend here.
# cat ICF.2
Sol10u8_stage1:-:/dev/md/dsk/d1:swap:67120896
Sol10u8_stage1:/:/dev/md/dsk/d0:ufs:20982912
Sol10u8_stage1:/var:/dev/md/dsk/d3:ufs:16790400
Sol10u8_stage1:/cims:/dev/md/dsk/d33:ufs:2097152
Sol10u8_stage1:/home:/dev/md/dsk/d31:ufs:2097152
Sol10u8_stage1:/rpool:rpool:zfs:0
Sol10u8_stage1:/backup:zp00/backup:zfs:0
Sol10u8_stage1:/opt/OV:zp00/opt-OV:zfs:0
Sol10u8_stage1:/opt/pw:zp00/pw:zfs:0
Sol10u8_stage1:/oracle:zp00/oracle:zfs:0
Sol10u8_stage1:/oraarch:/dev/md/dsk/d34:ufs:8388608
Sol10u8_stage1:/orabkup:zp00/orabkup:zfs:0
Sol10u8_stage1:/oemagent:zp00/oemagent:zfs:0
Sol10u8_stage1:/oradata1:zp00/oradata1:zfs:0
Sol10u8_stage1:/oradata2:zp00/oradata2:zfs:0
Sol10u8_stage1:/oradata3:zp00/oradata3:zfs:0
Sol10u8_stage1:/oradata4:zp00/oradata4:zfs:0
Sol10u8_stage1:/patrol_sw:zp00/patrol_sw:zfs:0
Sol10u8_stage1:/etc/opt/OV:/dev/md/dsk/d32:ufs:1048576
Sol10u8_stage1:/opt/patrol:/dev/md/dsk/d30:ufs:2097152
Sol10u8_stage1:/rpool/ROOT:rpool/ROOT:zfs:0
Sol10u8_stage1:/var/opt/OV:zp00/var-opt-OV:zfs:0
I wonder why ZFS file systems are listed in this file; because of them, LU tries to mount the BE on top
of the existing ZFS BE. So I simply altered the file to hold the entries below:
# vi ICF.2
"ICF.2" 22 lines, 1001 characters
Sol10u8_stage1:-:/dev/md/dsk/d1:swap:67120896
Sol10u8_stage1:/:/dev/md/dsk/d0:ufs:20982912
Sol10u8_stage1:/var:/dev/md/dsk/d3:ufs:16790400
Sol10u8_stage1:/cims:/dev/md/dsk/d33:ufs:2097152
Sol10u8_stage1:/home:/dev/md/dsk/d31:ufs:2097152
Sol10u8_stage1:/oraarch:/dev/md/dsk/d34:ufs:8388608
Sol10u8_stage1:/etc/opt/OV:/dev/md/dsk/d32:ufs:1048576
Sol10u8_stage1:/opt/patrol:/dev/md/dsk/d30:ufs:2097152
:wq!