Exadata Patching Recipes
Exadata is an engineered system consisting of compute nodes, storage cells, and InfiniBand switches or, starting with X8M, RoCE switches. Each of these components runs Oracle software that must be updated regularly. Oracle releases Exadata patches every quarter to keep these components current. The patches include fixes for all Exadata components, such as the storage cells and compute nodes, and optionally the InfiniBand or RoCE switches, and they can be applied online (rolling) or offline (non-rolling).
Exadata patching is one of the most critical and complex tasks you will perform on an Exadata Database Machine. Take extreme care before applying Exadata patches.
Exadata Patching
In this Exadata patching recipes book, we will demonstrate, hands-on, how to patch an Exadata X8M-2 Quarter Rack to Exadata System Software (ESS) version 19.3.6.
Launch patchmgr from compute node 1, which has user equivalence set up to all the storage cells, compute nodes, and IB/RoCE switches. To upgrade compute node 1 itself, run patchmgr from compute node 2 or any other node that has user equivalence set up to node 1.
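User equivalence (passwordless SSH) can be verified, or established, with dcli before any patchmgr run. A minimal sketch, assuming a host-group file ~/cell_group with one cell hostname per line (the file name is an assumption for illustration):

```shell
# Sketch: establish and verify root SSH equivalence to the storage cells.
# ~/cell_group is an assumed file listing one cell hostname per line.
dcli -g ~/cell_group -l root -k           # -k pushes the local SSH key to each host
dcli -g ~/cell_group -l root hostname     # should print each cell name without a password prompt
```

The same pattern applies to a dbs_group file for the compute nodes.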
Exadata Patch Release Frequency
Exadata patches are released at the following frequency, which is subject to change without notice.
Exadata Health Check
Oracle Autonomous Health Framework contains Oracle ORAchk, Oracle EXAchk, and Oracle Trace File Analyzer.
You have access to Oracle Autonomous Health Framework as a value add-on to your existing support contract.
There is no additional fee or license required to run Oracle Autonomous Health Framework.
Run exachk before and after Exadata patching to perform a complete Exadata stack health check, and correct any issues it reports.
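A typical invocation, assuming the default AHF install location (adjust the path to your installation):

```shell
# Sketch: run a full exachk collection before patching, and again afterwards.
# /opt/oracle.ahf is the assumed default AHF install location.
/opt/oracle.ahf/bin/exachk
```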
For complete details on AHF and how to download, install, and execute exachk on Exadata, refer to the following link: https://netsoftmate.com/blog/all-you-need-to-know-about-oracle-autonomous-health-framework/
Here I have staged the required software under the following directories for ease of understanding:
Exadata Storage Cell Patching
Current Image version
• Execute the “imageinfo” command on one of the compute nodes to identify the current Exadata image version.
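For example, assuming host-group files ~/dbs_group and ~/cell_group list the database servers and cells, the versions can be collected in one pass:

```shell
# Sketch: report the active image version on every node and cell at once.
dcli -g ~/dbs_group  -l root "imageinfo | grep 'Active image version'"
dcli -g ~/cell_group -l root "imageinfo | grep 'Active image version'"
```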
Prerequisites
• Install and configure VNC Server on Exadata compute node 1. It is recommended to use VNC or the screen utility for patching to avoid disconnections due to network issues.
• Run exachk before starting the actual patching. Correct any critical issues and failures that conflict with patching.
• Check for hardware failures. Make sure there are no hardware failures before patching.
• Download the patches and copy them to compute node 1 under a staging directory:
[root@dm01db01 ~]# mkdir -p /u01/stage/CELL
• Copy the patches to compute node 1 under the staging area and unzip them:
inflating: patch_19.3.6.0.0.200317/imageLogger
inflating: patch_19.3.6.0.0.200317/exadata.img.env
inflating: patch_19.3.6.0.0.200317/ExadataSendNotification.pm
inflating: patch_19.3.6.0.0.200317/README.txt
inflating: patch_19.3.6.0.0.200317/patchReport.py
creating: patch_19.3.6.0.0.200317/etc/
creating: patch_19.3.6.0.0.200317/etc/config/
inflating: patch_19.3.6.0.0.200317/etc/config/inventory.xml
inflating: patch_19.3.6.0.0.200317/patchmgr
inflating: patch_19.3.6.0.0.200317/patchmgr_functions
inflating: patch_19.3.6.0.0.200317/cellboot_usb_pci_path
inflating: patch_19.3.6.0.0.200317/dostep.sh.tmpl
inflating: patch_19.3.6.0.0.200317/ExadataImageNotification.pl
inflating: patch_19.3.6.0.0.200317/19.3.6.0.0.200317.iso
inflating: patch_19.3.6.0.0.200317/ExaXMLNode.pm
inflating: patch_19.3.6.0.0.200317/md5sum_files.lst
creating: patch_19.3.6.0.0.200317/plugins/
inflating: patch_19.3.6.0.0.200317/plugins/010-check_17854520.sh
inflating: patch_19.3.6.0.0.200317/plugins/030-check_24625612.sh
inflating: patch_19.3.6.0.0.200317/plugins/050-check_22651315.sh
inflating: patch_19.3.6.0.0.200317/plugins/040-check_22896791.sh
inflating: patch_19.3.6.0.0.200317/plugins/005-check_22909764.sh
inflating: patch_19.3.6.0.0.200317/plugins/000-check_dummy_perl
inflating: patch_19.3.6.0.0.200317/plugins/020-check_22468216.sh
inflating: patch_19.3.6.0.0.200317/plugins/000-check_dummy_bash
creating: patch_19.3.6.0.0.200317/linux.db.rpms/
inflating: patch_19.3.6.0.0.200317/README.html
• Read the README file and the Exadata documentation for the storage cell patching steps.
[root@dm01db01 CELL]# id
uid=0(root) gid=0(root) groups=0(root)
10.10.1.12: 00:18:36 up 2 days, 10:19, 0 users, load average: 2.09, 1.39,
1.29
10.10.1.13: 00:18:36 up 2 days, 10:19, 0 users, load average: 2.41, 2.10,
2.10
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
NAME VALUE
------------------------------ -------------------------------------
DG_DATA 12.0h
RECOC1 12.0h
• Shut down and stop the Oracle components on each database server using the following commands:
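A minimal sketch of such a shutdown for a non-rolling cell patch, assuming a 19c Grid Infrastructure home at /u01/app/19.0.0.0/grid and a ~/dbs_group file listing the database servers (both are assumptions; substitute your own GI home and group file):

```shell
# Sketch: stop Clusterware on all database servers before a non-rolling cell patch.
# The GI home path and ~/dbs_group file are assumptions for illustration.
dcli -g ~/dbs_group -l root "/u01/app/19.0.0.0/grid/bin/crsctl stop crs"
dcli -g ~/dbs_group -l root "/u01/app/19.0.0.0/grid/bin/crsctl check crs"   # confirm CRS is down
```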
10.10.1.9: CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on server 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.chad' on 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.orcldb.db' on 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.nsmdb.db' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.orcldb.db' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.nsmdb.db' on 'dm01db01' succeeded
10.10.1.9: CRS-33673: Attempting to stop resource group 'ora.asmgroup' on
server 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.DG_DATA.dg' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.RECOC1.dg' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on
'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on
'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.cvu' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.DG_DATA.dg' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.RECOC1.dg' on 'dm01db01' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.asm' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.qosmserver' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'dm01db01'
succeeded
10.10.1.9: CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'dm01db01'
succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.dm01db01.vip' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.orcldb.db' on 'dm01db02' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.scan2.vip' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.scan3.vip' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.dm01db01.vip' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.scan3.vip' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.nsmdb.db' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.scan2.vip' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.asm' on 'dm01db01' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on
'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.cvu' on 'dm01db01' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on
'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'dm01db02' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.dm01db02.vip' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'dm01db02'
succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.scan1.vip' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.dm01db02.vip' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.scan1.vip' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.chad' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'dm01db01'
succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.asmnet1.asmnetwork' on
'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.asmnet1.asmnetwork' on 'dm01db01' succeeded
10.10.1.9: CRS-33677: Stop of resource group 'ora.asmgroup' on server
'dm01db01' succeeded.
10.10.1.9: CRS-33673: Attempting to stop resource group 'ora.asmgroup' on
server 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.DG_DATA.dg' on 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.RECOC1.dg' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.DG_DATA.dg' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.RECOC1.dg' on 'dm01db02' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.asm' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.asm' on 'dm01db02' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on
'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.chad' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'dm01db02'
succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.asmnet1.asmnetwork' on
'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.asmnet1.asmnetwork' on 'dm01db02' succeeded
10.10.1.9: CRS-33677: Stop of resource group 'ora.asmgroup' on server
'dm01db02' succeeded.
10.10.1.9: CRS-2673: Attempting to stop 'ora.ons' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.ons' on 'dm01db02' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.net1.network' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.net1.network' on 'dm01db02' succeeded
10.10.1.9: CRS-2792: Shutdown of Cluster Ready Services-managed resources on
'dm01db02' has completed
10.10.1.9: CRS-2673: Attempting to stop 'ora.ons' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.ons' on 'dm01db01' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.net1.network' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.net1.network' on 'dm01db01' succeeded
10.10.1.9: CRS-2792: Shutdown of Cluster Ready Services-managed resources on
'dm01db01' has completed
10.10.1.9: CRS-2677: Stop of 'ora.crsd' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.crsd' on 'dm01db01' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.storage' on 'dm01db02'
10.10.1.10: CRS-2673: Attempting to stop 'ora.evmd' on 'dm01db02'
10.10.1.10: CRS-2673: Attempting to stop 'ora.storage' on 'dm01db01'
10.10.1.10: CRS-2673: Attempting to stop 'ora.evmd' on 'dm01db01'
10.10.1.10: CRS-2677: Stop of 'ora.storage' on 'dm01db01' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.storage' on 'dm01db02' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.evmd' on 'dm01db02' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.ctssd' on 'dm01db02'
10.10.1.10: CRS-2673: Attempting to stop 'ora.asm' on 'dm01db02'
10.10.1.10: CRS-2677: Stop of 'ora.evmd' on 'dm01db01' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.ctssd' on 'dm01db01'
10.10.1.10: CRS-2673: Attempting to stop 'ora.asm' on 'dm01db01'
10.10.1.10: CRS-2677: Stop of 'ora.ctssd' on 'dm01db02' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.ctssd' on 'dm01db01' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.asm' on 'dm01db02' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.cssd' on 'dm01db02'
10.10.1.10: CRS-2677: Stop of 'ora.cssd' on 'dm01db02' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.diskmon' on 'dm01db02'
10.10.1.10: CRS-2677: Stop of 'ora.asm' on 'dm01db01' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.cssd' on 'dm01db01'
10.10.1.10: CRS-2677: Stop of 'ora.cssd' on 'dm01db01' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.diskmon' on 'dm01db01'
10.10.1.10: CRS-2677: Stop of 'ora.diskmon' on 'dm01db02' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.diskmon' on 'dm01db01' succeeded
10.10.1.10: CRS-2679: Attempting to clean 'ora.diskmon' on 'dm01db01'
10.10.1.10: CRS-2681: Clean of 'ora.diskmon' on 'dm01db01' succeeded
Kernel version: 4.14.35-1902.5.1.4.el7uek.x86_64 #2 SMP Wed Oct 9 19:29:16
PDT 2019 x86_64
Cell version: OSS_19.3.2.0.0_LINUX.X64_191119
Cell rpm version: cell-19.3.2.0.0_LINUX.X64_191119-1.x86_64
• Shut down all cell services on all cells to be updated. This may be done by the root user on each cell by running cellcli -e 'alter cell shutdown services all', or by the following dcli command to do all cells at the same time:
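The dcli form of that shutdown, assuming ~/cell_group lists all cells:

```shell
# Stop all cell services (cellsrv, MS, RS) on every cell in one pass.
dcli -g ~/cell_group -l root "cellcli -e alter cell shutdown services all"
```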
• Reset the patchmgr state to a known state using the following command:
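As echoed in the patchmgr output below, the reset is run from the staged patch directory:

```shell
cd /u01/stage/CELL/patch_19.3.6.0.0.200317
./patchmgr -cells ~/cell_group -reset_force
```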
2020-04-10 00:22:46 +0300 :Working: Force Cleanup
2020-04-10 00:22:48 +0300 :SUCCESS: Force Cleanup
2020-04-10 00:22:48 +0300 :SUCCESS: Completed run of command:
./patchmgr -cells /root/cell_group -reset_force
2020-04-10 00:22:48 +0300 :INFO : Reset_Force attempted on nodes in
file /root/cell_group: [10.10.1.11 10.10.1.12 10.10.1.13]
2020-04-10 00:22:48 +0300 :INFO : Current image version on cell(s)
is:
2020-04-10 00:22:48 +0300 :INFO : 10.10.1.11: 19.3.2.0.0.191119
2020-04-10 00:22:48 +0300 :INFO : 10.10.1.12: 19.3.2.0.0.191119
2020-04-10 00:22:48 +0300 :INFO : 10.10.1.13: 19.3.2.0.0.191119
2020-04-10 00:22:48 +0300 :INFO : For details, check the following
files in /u01/stage/CELL/patch_19.3.6.0.0.200317:
2020-04-10 00:22:48 +0300 :INFO : - <cell_name>.log
2020-04-10 00:22:48 +0300 :INFO : - patchmgr.stdout
2020-04-10 00:22:48 +0300 :INFO : - patchmgr.stderr
2020-04-10 00:22:48 +0300 :INFO : - patchmgr.log
2020-04-10 00:22:48 +0300 :INFO : - patchmgr.trc
2020-04-10 00:22:48 +0300 :INFO : Exit status:0
2020-04-10 00:22:48 +0300 :INFO : Exiting.
• Clean up any previous patchmgr utility runs using the following command:
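A sketch of the cleanup invocation, using the -cleanup option that is also used after patching later in this chapter:

```shell
cd /u01/stage/CELL/patch_19.3.6.0.0.200317
./patchmgr -cells ~/cell_group -cleanup
```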
• Verify that the cells meet prerequisite checks using the following command.
[root@dm01db01 patch_19.3.6.0.0.200317]# ./patchmgr -cells ~/cell_group -patch_check_prereq
2020-04-10 00:24:46 +0300 :SUCCESS: No exposure to bug 22896791 with
non-rolling patching
2020-04-10 00:24:46 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 22651315 v1.0.
2020-04-10 00:24:46 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:24:47 +0300 :SUCCESS: Patchmgr plugin complete: Prereq
check passed for the bug 22651315
2020-04-10 00:24:48 +0300 :SUCCESS: Execute plugin check for Patch
Check Prereq.
2020-04-10 00:24:48 +0300 :Working: Check ASM deactivation outcome. Up
to 1 minute ...
2020-04-10 00:24:59 +0300 :SUCCESS: Check ASM deactivation outcome.
2020-04-10 00:24:59 +0300 :Working: check if MS SOFTWAREUPDATE is
scheduled. up to 5 minutes...
2020-04-10 00:24:59 +0300 :NO ACTION NEEDED: No cells found with
SOFTWAREUPDATE scheduled by MS
2020-04-10 00:25:00 +0300 :SUCCESS: check if MS SOFTWAREUPDATE is
scheduled
2020-04-10 00:25:01 +0300 :SUCCESS: Completed run of command:
./patchmgr -cells /root/cell_group -patch_check_prereq
2020-04-10 00:25:01 +0300 :INFO : patch_prereq attempted on nodes in
file /root/cell_group: [10.10.1.11 10.10.1.12 10.10.1.13]
2020-04-10 00:25:01 +0300 :INFO : Current image version on cell(s)
is:
2020-04-10 00:25:01 +0300 :INFO : 10.10.1.11: 19.3.2.0.0.191119
2020-04-10 00:25:01 +0300 :INFO : 10.10.1.12: 19.3.2.0.0.191119
2020-04-10 00:25:01 +0300 :INFO : 10.10.1.13: 19.3.2.0.0.191119
2020-04-10 00:25:01 +0300 :INFO : For details, check the following
files in /u01/stage/CELL/patch_19.3.6.0.0.200317:
2020-04-10 00:25:01 +0300 :INFO : - <cell_name>.log
2020-04-10 00:25:01 +0300 :INFO : - patchmgr.stdout
2020-04-10 00:25:01 +0300 :INFO : - patchmgr.stderr
2020-04-10 00:25:01 +0300 :INFO : - patchmgr.log
2020-04-10 00:25:01 +0300 :INFO : - patchmgr.trc
2020-04-10 00:25:01 +0300 :INFO : Exit status:0
2020-04-10 00:25:01 +0300 :INFO : Exiting.
WARNING Do not interrupt the patchmgr session.
WARNING Do not alter state of ASM instances during patch or rollback.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot cells or alter cell services during patch or rollback.
WARNING Do not open log files in editor in write mode or try to alter them.
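The patch itself is then initiated from the patch directory. The first command below is the one echoed in the patchmgr output that follows; the -rolling alternative is per the patchmgr help and is an assumption for this environment:

```shell
cd /u01/stage/CELL/patch_19.3.6.0.0.200317
./patchmgr -cells ~/cell_group -patch               # non-rolling: all cells at once
# ./patchmgr -cells ~/cell_group -patch -rolling    # rolling alternative (cells one by one)
```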
2020-04-10 00:27:47 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 22468216 v1.0.
2020-04-10 00:27:47 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:27:48 +0300 :SUCCESS: Patchmgr plugin complete: Prereq
check passed for the bug 22468216
2020-04-10 00:27:48 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 24625612 v1.0.
2020-04-10 00:27:48 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:27:48 +0300 :SUCCESS: Patchmgr plugin complete: Prereq
check passed for the bug 24625612
2020-04-10 00:27:48 +0300 :SUCCESS: No exposure to bug 22896791 with
non-rolling patching
2020-04-10 00:27:48 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 22651315 v1.0.
2020-04-10 00:27:48 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:27:50 +0300 :SUCCESS: Patchmgr plugin complete: Prereq
check passed for the bug 22651315
2020-04-10 00:27:50 +0300 :SUCCESS: Execute plugin check for Patch
Check Prereq.
2020-04-10 00:27:50 +0300 :Working: check if MS SOFTWAREUPDATE is
scheduled. up to 5 minutes...
2020-04-10 00:27:51 +0300 :NO ACTION NEEDED: No cells found with
SOFTWAREUPDATE scheduled by MS
2020-04-10 00:27:52 +0300 :SUCCESS: check if MS SOFTWAREUPDATE is
scheduled
2020-04-10 00:27:54 +0300 1 of 5 :Working: Initiate patch on cells. Cells
will remain up. Up to 5 minutes ...
2020-04-10 00:27:58 +0300 1 of 5 :SUCCESS: Initiate patch on cells.
2020-04-10 00:27:58 +0300 2 of 5 :Working: Waiting to finish pre-reboot patch
actions. Cells will remain up. Up to 45 minutes ...
2020-04-10 00:28:58 +0300 :INFO : Wait for patch pre-reboot
procedures
2020-04-10 00:29:46 +0300 2 of 5 :SUCCESS: Waiting to finish pre-reboot patch
actions.
2020-04-10 00:29:46 +0300 :Working: Execute plugin check for Patching
...
2020-04-10 00:29:46 +0300 :SUCCESS: Execute plugin check for Patching.
2020-04-10 00:29:46 +0300 3 of 5 :Working: Finalize patch on cells. Cells
will reboot. Up to 5 minutes ...
2020-04-10 00:29:55 +0300 3 of 5 :SUCCESS: Finalize patch on cells.
2020-04-10 00:30:11 +0300 4 of 5 :Working: Wait for cells to reboot and come
online. Up to 120 minutes ...
2020-04-10 00:31:11 +0300 :INFO : Wait for patch finalization and
reboot
2020-04-10 00:56:03 +0300 4 of 5 :SUCCESS: Wait for cells to reboot and come
online.
2020-04-10 00:56:03 +0300 5 of 5 :Working: Check the state of patch on cells.
Up to 5 minutes ...
2020-04-10 00:56:12 +0300 5 of 5 :SUCCESS: Check the state of patch on cells.
2020-04-10 00:56:12 +0300 :Working: Execute plugin check for Pre Disk
Activation ...
2020-04-10 00:56:12 +0300 :SUCCESS: Execute plugin check for Pre Disk
Activation.
2020-04-10 00:56:12 +0300 :Working: Activate grid disks...
2020-04-10 00:56:13 +0300 :INFO : Wait for checking and activating
grid disks
2020-04-10 00:56:20 +0300 :SUCCESS: Activate grid disks.
2020-04-10 00:56:22 +0300 :Working: Execute plugin check for Post
Patch ...
2020-04-10 00:56:23 +0300 :SUCCESS: Execute plugin check for Post
Patch.
2020-04-10 00:56:24 +0300 :Working: Cleanup
2020-04-10 00:56:37 +0300 :SUCCESS: Cleanup
2020-04-10 00:56:38 +0300 :SUCCESS: Completed run of command:
./patchmgr -cells /root/cell_group -patch
2020-04-10 00:56:38 +0300 :INFO : patch attempted on nodes in file
/root/cell_group: [10.10.1.11 10.10.1.12 10.10.1.13]
2020-04-10 00:56:38 +0300 :INFO : Current image version on cell(s)
is:
2020-04-10 00:56:38 +0300 :INFO : 10.10.1.11: 19.3.6.0.0.200317
2020-04-10 00:56:38 +0300 :INFO : 10.10.1.12: 19.3.6.0.0.200317
2020-04-10 00:56:38 +0300 :INFO : 10.10.1.13: 19.3.6.0.0.200317
2020-04-10 00:56:38 +0300 :INFO : For details, check the following
files in /u01/stage/CELL/patch_19.3.6.0.0.200317:
2020-04-10 00:56:38 +0300 :INFO : - <cell_name>.log
2020-04-10 00:56:38 +0300 :INFO : - patchmgr.stdout
2020-04-10 00:56:38 +0300 :INFO : - patchmgr.stderr
2020-04-10 00:56:38 +0300 :INFO : - patchmgr.log
2020-04-10 00:56:38 +0300 :INFO : - patchmgr.trc
2020-04-10 00:56:38 +0300 :INFO : Exit status:0
2020-04-10 00:56:38 +0300 :INFO : Exiting.
• When e-mail alerts are not set up, monitor the log files and the cells being updated: open a new session and tail the patchmgr log file.
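For example:

```shell
# Follow the patchmgr output from a second session.
tail -f /u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout
```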
• Verify the update status after the patchmgr utility completes as follows:
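For example, imageinfo can be run on all cells through dcli (assuming ~/cell_group):

```shell
# Sketch: confirm every cell now reports the new image version and a successful status.
dcli -g ~/cell_group -l root "imageinfo | grep -E 'Active image version|Active image status'"
```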
Active image version: 19.3.6.0.0.200317
Active image kernel version: 4.14.35-1902.9.2.el7uek
Active image activated: 2020-04-10 00:55:04 +0300
Active image status: success
Active node type: STORAGE
Active system partition on device: /dev/md24p6
Active software partition on device: /dev/md24p8
Version : 19.3.6.0.0.200317
Image activation date : 2020-04-10 00:55:04 +0300
Imaging mode : out of partition upgrade
Imaging status : success
• Clean up the cells using the -cleanup option, which removes all temporary update or rollback files from the cells.
-----------------------------------------------------------------------------
---
Name Target State Server State details
-----------------------------------------------------------------------------
---
Local Resources
-----------------------------------------------------------------------------
---
ora.LISTENER.lsnr
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.chad
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.net1.network
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.ons
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.proxy_advm
OFFLINE OFFLINE dm01db01 STABLE
OFFLINE OFFLINE dm01db02 STABLE
-----------------------------------------------------------------------------
---
Cluster Resources
-----------------------------------------------------------------------------
---
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.DG_DATA.dg(ora.asmgroup)
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE dm01db01 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE dm01db02 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE dm01db02 STABLE
ora.RECOC1.dg(ora.asmgroup)
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE dm01db01 Started,STABLE
2 ONLINE ONLINE dm01db02 Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.cvu
1 ONLINE ONLINE dm01db02 STABLE
ora.orcldb.db
1 ONLINE ONLINE dm01db01 Open,HOME=/u01/app/o
racle/product/11.2.0
.4/dbhome_1,STABLE
2 ONLINE ONLINE dm01db02 Open,HOME=/u01/app/o
racle/product/11.2.0
.4/dbhome_1,STABLE
ora.nsmdb.db
1 ONLINE ONLINE dm01db01 Open,HOME=/u01/app/o
racle/product/12.2.0
.1/dbhome_1,STABLE
2 ONLINE ONLINE dm01db02 Open,HOME=/u01/app/o
racle/product/12.2.0
.1/dbhome_1,STABLE
ora.dm01db01.vip
1 ONLINE ONLINE dm01db01 STABLE
ora.dm01db02.vip
1 ONLINE ONLINE dm01db02 STABLE
ora.qosmserver
1 ONLINE ONLINE dm01db02 STABLE
ora.scan1.vip
1 ONLINE ONLINE dm01db01 STABLE
ora.scan2.vip
1 ONLINE ONLINE dm01db02 STABLE
ora.scan3.vip
1 ONLINE ONLINE dm01db02 STABLE
Exadata RoCE Switch Patching
About RDMA over Converged Ethernet (RoCE)
The Exadata X8M release implements a 100 Gb/sec RoCE network fabric, making the world’s fastest database machine even faster.
Oracle Exadata Database Machine X8M introduces a brand new high-bandwidth low-latency 100 Gb/sec RDMA
over Converged Ethernet (RoCE) Network Fabric that connects all the components inside an Exadata Database
Machine. Specialized database networking protocols deliver much lower latency and higher bandwidth than is
possible with generic communication protocols for faster response time for OLTP operations and higher
throughput for analytic workloads.
The Exadata X8M release provides the next generation in ultra-fast cloud scale networking fabric, RDMA over
Converged Ethernet (RoCE). RDMA (Remote Direct Memory Access) allows one computer to directly access data
from another without Operating System or CPU involvement, for high bandwidth and low latency. The network
card directly reads/writes memory with no extra copying or buffering and very low latency. RDMA is an integral
part of the Exadata high-performance architecture, and has been tuned and enhanced over the past decade,
underpinning several Exadata-only technologies such as Exafusion Direct-to-Wire Protocol and Smart Fusion Block
Transfer. As the RoCE API infrastructure is identical to InfiniBand’s, all existing Exadata performance features are
available on RoCE.
The patchmgr utility is used to upgrade and downgrade the RoCE switches.
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (C) 2002-2019, Cisco and/or its affiliates.
All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under their own
licenses, such as open source. This software is provided "as is," and unless
otherwise stated, there is no warranty, express or implied, including but not
limited to warranties of merchantability and fitness for a particular
purpose.
Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or
GNU General Public License (GPL) version 3.0 or the GNU
Lesser General Public License (LGPL) Version 2.1 or
Lesser General Public License (LGPL) Version 2.0.
A copy of each such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://opensource.org/licenses/gpl-3.0.html and
http://www.opensource.org/licenses/lgpl-2.1.php and
http://www.gnu.org/licenses/old-licenses/library.txt.
Software
BIOS: version 05.39
NXOS: version 7.0(3)I7(6)
BIOS compile time: 08/30/2019
NXOS image file is: bootflash:///nxos.7.0.3.I7.6.bin
NXOS compile time: 3/5/2019 13:00:00 [03/05/2019 22:04:55]
Hardware
cisco Nexus9000 C9336C-FX2 Chassis
Intel(R) Xeon(R) CPU D-1526 @ 1.80GHz with 24571632 kB of memory.
Processor Board ID FDO23380VQS
plugin
Core Plugin, Ethernet Plugin
Active Package(s):
User Access Verification
System version: 7.0(3)I7(6)
• Download the RoCE switch software from MOS note 888828.1 and copy it to Exadata compute node 1.
patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_leaf_switch_multi.cfg
inflating:
patch_switch_19.3.6.0.0.200317/sundcs_36p_repository_2.2.14_1.pkg
inflating: patch_switch_19.3.6.0.0.200317/README.txt
-r-xr-xr-x 1 root root 62499 Mar 18 05:48 ExadataImageNotification.pl
-r-xr-xr-x 1 root root 51616 Mar 18 05:48 dcli
-rw-r--r-- 1 root root 1011037696 Mar 18 05:48 nxos.7.0.3.I7.6.bin
-r-xr-xr-x 1 root root 16544 Mar 18 05:48 patchmgr_functions
-rwxr-xr-x 1 root root 11600 Mar 18 05:48 patch_bug_26678971
-rw-r--r-- 1 root root 975383040 Mar 18 05:48 nxos.7.0.3.I7.7.bin
-r-xr-xr-x 1 root root 171545108 Mar 18 05:48
sundcs_36p_repository_2.2.13_2.pkg
-r-xr-xr-x 1 root root 172863012 Mar 18 05:48
sundcs_36p_repository_2.2.14_1.pkg
-rwxr-xr-x 1 root root 172946493 Mar 18 05:48
sundcs_36p_repository_2.2.7_2.pkg
-rwxr-xr-x 1 root root 172947929 Mar 18 05:48
sundcs_36p_repository_2.2.7_2_signed.pkg
-r-xr-xr-x 1 root root 15001 Mar 18 05:48 xcp
-rwxr-xr-x 1 root root 184111553 Mar 18 05:48
sundcs_36p_repository_upgrade_2.1_to_2.2.7_2.pkg
-r-xr-xr-x 1 root root 168789 Mar 18 06:05 upgradeIBSwitch.sh
drwxr-xr-x 2 root root 103 Mar 18 06:05 roce_switch_templates
drwxr-xr-x 2 root root 98 Mar 18 06:05 roce_switch_api
drwxr-xr-x 6 root root 4096 Mar 18 06:05 ibdiagtools
drwxrwxr-x 3 root root 20 Mar 18 06:05 etc
-r-xr-xr-x 1 root root 457738 Mar 18 06:05 patchmgr
-rw-rw-r-- 1 root root 5156 Mar 18 06:05 md5sum_files.lst
-rwxrwxrwx 1 root root 822 Mar 18 07:15 README.txt
• Navigate to the patch directory and execute the following to get the patch syntax.
Note that the patching should be performed as a non-root user. In this case I am using the oracle user to perform the patching.
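A sketch, assuming the staged directory below; the -h output is authoritative for the exact flags on your patchmgr release:

```shell
# Sketch: display the switch-patching syntax as the oracle user.
cd /u01/stage/ROCE/patch_switch_19.3.6.0.0.200317
./patchmgr -h
# The upgrade itself is driven from a switch list file, e.g.:
# ./patchmgr --roceswitches ~/roce_list --upgrade --log_dir auto   # flag names: confirm with -h
```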
[root@dm01db01 stage]# chown -R oracle:oinstall ROCE/
2020-04-09 16:59:55 +0300: [INFO ] Validating running config
against template [1/3]:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 16:59:55 +0300: [INFO ] Config matches template:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 16:59:55 +0300: [SUCCESS ] Config validation successful!
Note that during this step it will prompt you to set up SSH equivalence between the oracle user and the RoCE switch. Enter the admin user password of the RoCE switch.
checking if 'dm01sw-roceb01' is reachable... [OK]
setting up SSH equivalency for 'oracle' from dm01db01.netsoftmate.com to
'dm01sw-roceb01'... [OK]
2020-04-09 16:47:46 +0300 :Working: Initiate pre-upgrade validation
check on 2 RoCE switch(es).
2020-04-09 16:58:27 +0300 :INFO : upgrade attempted on nodes in file
/home/oracle/roce_list: [dm01sw-rocea01 dm01sw-roceb01]
2020-04-09 16:58:27 +0300 :INFO : For details, check the following
files in /u01/stage/ROCE:
2020-04-09 16:58:27 +0300 :INFO : - updateRoceSwitch.log
2020-04-09 16:58:27 +0300 :INFO : - updateRoceSwitch.trc
2020-04-09 16:58:27 +0300 :INFO : - patchmgr.stdout
2020-04-09 16:58:27 +0300 :INFO : - patchmgr.stderr
2020-04-09 16:58:27 +0300 :INFO : - patchmgr.log
2020-04-09 16:58:27 +0300 :INFO : - patchmgr.trc
2020-04-09 16:58:27 +0300 :INFO : Exit status:0
2020-04-09 16:58:27 +0300 :INFO : Exiting.
2020-04-09 17:02:28 +0300: [SUCCESS ] There is enough disk space to
proceed
2020-04-09 17:02:29 +0300: [INFO ] Found nxos.7.0.3.I7.7.bin on
switch, skipping download
2020-04-09 17:02:29 +0300: [INFO ] Verifying sha256sum of bin file
on switch
2020-04-09 17:02:43 +0300: [SUCCESS ] sha256sum matches:
dce664f1a90927e9dbd86419681d138d3a7a83c5ea7222718c3f6565488ac6d0
2020-04-09 17:02:43 +0300: [INFO ] Performing FW install pre-check
of nxos.7.0.3.I7.7.bin (eta: 2-3 minutes)
2020-04-09 17:04:44 +0300: [SUCCESS ] FW install pre-check completed
successfully
2020-04-09 17:04:44 +0300: [INFO ] Performing FW install of
nxos.7.0.3.I7.7.bin on dm01sw-rocea01 (eta: 3-7 minutes)
2020-04-09 17:09:51 +0300: [SUCCESS ] FW install completed
2020-04-09 17:09:51 +0300: [INFO ] Waiting for switch to come back
online (eta: 6-8 minutes)
2020-04-09 17:17:51 +0300: [INFO ] Verifying if FW install is
successful
2020-04-09 17:17:53 +0300: [SUCCESS ] dm01sw-rocea01 has been
successfully upgraded to nxos.7.0.3.I7.7.bin!
2020-04-09 17:31:20 +0300: [INFO ] Verifying if FW install is
successful
2020-04-09 17:31:22 +0300: [SUCCESS ] dm01sw-roceb01 has been
successfully upgraded to nxos.7.0.3.I7.7.bin!
2020-04-09 17:31:22 +0300 :Working: Initiate config verify on RoCE
switches from . Expect up to 6 minutes for each switch
2020-04-09 17:31:27 +0300 :INFO : - patchmgr.stdout
2020-04-09 17:31:27 +0300 :INFO : - patchmgr.stderr
2020-04-09 17:31:27 +0300 :INFO : - patchmgr.log
2020-04-09 17:31:27 +0300 :INFO : - patchmgr.trc
2020-04-09 17:31:27 +0300 :INFO : Exit status:0
2020-04-09 17:31:27 +0300 :INFO : Exiting.
Software
BIOS: version 05.39
NXOS: version 7.0(3)I7(7)
BIOS compile time: 08/30/2019
NXOS image file is: bootflash:///nxos.7.0.3.I7.7.bin
NXOS compile time: 3/5/2019 13:00:00 [03/05/2019 22:04:55]
Hardware
cisco Nexus9000 C9336C-FX2 Chassis
Intel(R) Xeon(R) CPU D-1526 @ 1.80GHz with 24571632 kB of memory.
Processor Board ID FDO23380VQS
plugin
Core Plugin, Ethernet Plugin
Active Package(s):
Exadata InfiniBand Switch Patching
• The Exadata network grid consists of multiple (two) Sun QDR InfiniBand switches
• IB Switches are used for the storage network as well as the Oracle RAC interconnect
• Exadata compute nodes and storage cells are configured with dual-port InfiniBand ports and connect to
each of the two leaf switches.
• You can access IB Switches using command line and Web ILOM
• IB Switches run Linux operating system
Starting with release 11.2.3.3.0, the patchmgr utility is used to upgrade and downgrade the InfiniBand switches.
Steps to Patch an InfiniBand Switch
• Identify the number of switches in the cluster.
• Log in to Exadata Compute node 1 as root user and navigate the Exadata Storage Software staging area
[root@dm01db01 ]# cd /u01/stage/ROCE/
• Create a file named ibswitches.lst and enter IB switch names one per line as follows:
[root@dm01db01 patch_18.1.12.0.0.190111]# ./patchmgr -ibswitches
~/ibswitch_group -upgrade
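The switch-list preparation and patchmgr invocation can be sketched as follows. The switch host names are assumptions for illustration; replace them with your own IB switch names, and note that the patchmgr calls themselves are shown as comments because they require the real switches and patch bundle:

```shell
# Sketch: build the switch list and drive patchmgr. The switch host names
# below are assumptions -- replace them with your own IB switch names.
cat > ./ibswitch_group <<'EOF'
dm01sw-iba01
dm01sw-ibb01
EOF

echo "Switch list contains $(wc -l < ./ibswitch_group) switches"

# From the staging area, run a precheck first and then the upgrade
# (reference only -- these need the real switches and patch bundle):
#   ./patchmgr -ibswitches ./ibswitch_group -upgrade -ibswitch_precheck
#   ./patchmgr -ibswitches ./ibswitch_group -upgrade
```

Running the precheck first lets patchmgr validate connectivity and firmware prerequisites on every listed switch before any change is made.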
The patchmgr and dbnodeupdate.sh utilities can be used to upgrade, roll back, and back up Exadata compute nodes.
The patchmgr utility can upgrade compute nodes in a rolling or non-rolling fashion. Compute node
patches apply operating system, firmware, and driver updates.
Launch patchmgr from a compute node that has root user SSH equivalence set up to all the other compute nodes.
Patch all the compute nodes except node 1 first, and then patch node 1 alone from another node.
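The compute node flow can be sketched as below. The node address is taken from this example; the dbs_group-1 file lists every compute node except the one patchmgr is launched from, and the patchmgr phases are shown as comments because they require the real nodes and ISO repository:

```shell
# Sketch of the compute node patching sequence with patchmgr.
# dbs_group-1 lists every compute node EXCEPT the launch node.
cat > ./dbs_group-1 <<'EOF'
10.10.1.10
EOF

echo "Nodes to patch in this pass: $(tr '\n' ' ' < ./dbs_group-1)"

# Typical phases, run as root from the dbserver_patch staging area
# (reference only -- they need the real nodes and ISO repo):
#   ./patchmgr -dbnodes ./dbs_group-1 -precheck \
#       -iso_repo /u01/stage/DBS/p30893918_193000_Linux-x86-64.zip \
#       -target_version 19.3.6.0.0.200317
#   ./patchmgr -dbnodes ./dbs_group-1 -backup  -iso_repo ... -target_version ...
#   ./patchmgr -dbnodes ./dbs_group-1 -upgrade -iso_repo ... -target_version ...
```

Running precheck, backup, and upgrade as distinct phases means any precheck failure can be corrected before the node is touched, and the backup gives a rollback point.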
Prerequisites
• Install and configure VNC Server on Exadata compute node 1. It is recommended to run patching from a
VNC or screen session to avoid disconnections due to network issues.
• Run exachk before starting the actual patching. Correct any critical issues and failures that conflict with
patching.
• Verify the hardware status and make sure there are no hardware failures before patching.
10.10.1.10: -rw-r--r-- 1 root root 438818890 Apr 10 14:18
p21634633_193600_Linux-x86-64.zip
10.10.1.10: -rw-r--r-- 1 root root 1493881603 Apr 10 14:18
p30893918_193000_Linux-x86-64.zip
• Read the patch README file and the Exadata documentation for the patching steps.
*****************************************************************************
*******************************
NOTE patchmgr release: 19.200331 (always check MOS 1553103.1 for the
latest release of dbserver.patch.zip)
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
*****************************************************************************
*******************************
2020-04-10 14:21:37 +0300 :Working: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:21:37 +0300 :SUCCESS: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:21:38 +0300 :Working: Initiate precheck on 1 node(s)
2020-04-10 14:25:32 +0300 :Working: Check free space on 10.10.1.10
2020-04-10 14:25:35 +0300 :SUCCESS: Check free space on 10.10.1.10
2020-04-10 14:26:04 +0300 :Working: dbnodeupdate.sh running a precheck
on node(s).
2020-04-10 14:27:26 +0300 :SUCCESS: Initiate precheck on node(s).
2020-04-10 14:27:27 +0300 :SUCCESS: Completed run of command:
./patchmgr -dbnodes /root/dbs_group-1 -precheck -iso_repo
/u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version
19.3.6.0.0.200317
2020-04-10 14:27:27 +0300 :INFO : Precheck attempted on nodes in
file /root/dbs_group-1: [10.10.1.10]
2020-04-10 14:27:27 +0300 :INFO : Current image version on dbnode(s)
is:
2020-04-10 14:27:27 +0300 :INFO : 10.10.1.10: 19.3.2.0.0.191119
2020-04-10 14:27:27 +0300 :INFO : For details, check the following
files in /u01/stage/DBS/dbserver_patch_19.200331:
2020-04-10 14:27:27 +0300 :INFO : - <dbnode_name>_dbnodeupdate.log
2020-04-10 14:27:27 +0300 :INFO : - patchmgr.log
2020-04-10 14:27:27 +0300 :INFO : - patchmgr.trc
2020-04-10 14:27:27 +0300 :INFO : Exit status:0
2020-04-10 14:27:27 +0300 :INFO : Exiting.
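Every patchmgr run above finishes with an "Exit status" line in its log, which makes it easy to gate the next phase in a script. A minimal sketch (the printf creates a sample log line standing in for a real patchmgr.log in the staging directory):

```shell
# Minimal sketch: gate the next patching phase on the Exit status line
# that patchmgr writes at the end of its log. The sample line below
# stands in for a real patchmgr.log.
LOG=./patchmgr.log
printf '2020-04-10 14:27:27 +0300 :INFO   : Exit status:0\n' > "$LOG"

status=$(grep -o 'Exit status:[0-9]*' "$LOG" | tail -1 | cut -d: -f2)
if [ "$status" = "0" ]; then
    echo "patchmgr run completed successfully"
else
    echo "patchmgr reported exit status ${status:-unknown}; check patchmgr.trc" >&2
fi
```

The same check works after the precheck, backup, and upgrade phases, since patchmgr prints the line in each case.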
*****************************************************************************
*******************************
NOTE patchmgr release: 19.200331 (always check MOS 1553103.1 for the
latest release of dbserver.patch.zip)
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
*****************************************************************************
*******************************
2020-04-10 14:30:16 +0300 :Working: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:30:16 +0300 :SUCCESS: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:30:17 +0300 :Working: Initiate backup on 1 node(s).
2020-04-10 14:30:17 +0300 :Working: Check free space on 10.10.1.10
2020-04-10 14:30:19 +0300 :SUCCESS: Check free space on 10.10.1.10
2020-04-10 14:30:28 +0300 :Working: dbnodeupdate.sh running a backup
on node(s).
2020-04-10 14:32:51 +0300 :SUCCESS: Initiate backup on node(s).
2020-04-10 14:32:51 +0300 :SUCCESS: Initiate backup on 1 node(s).
2020-04-10 14:32:51 +0300 :SUCCESS: Completed run of command:
./patchmgr -dbnodes /root/dbs_group-1 -backup -iso_repo
/u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version
19.3.6.0.0.200317
2020-04-10 14:32:51 +0300 :INFO : Backup attempted on nodes in file
/root/dbs_group-1: [10.10.1.10]
2020-04-10 14:32:51 +0300 :INFO : Current image version on dbnode(s)
is:
2020-04-10 14:32:51 +0300 :INFO : 10.10.1.10: 19.3.2.0.0.191119
2020-04-10 14:32:51 +0300 :INFO : For details, check the following
files in /u01/stage/DBS/dbserver_patch_19.200331:
2020-04-10 14:32:51 +0300 :INFO : - <dbnode_name>_dbnodeupdate.log
2020-04-10 14:32:51 +0300 :INFO : - patchmgr.log
2020-04-10 14:32:51 +0300 :INFO : - patchmgr.trc
2020-04-10 14:32:51 +0300 :INFO : Exit status:0
2020-04-10 14:32:51 +0300 :INFO : Exiting.
*****************************************************************************
*******************************
NOTE patchmgr release: 19.200331 (always check MOS 1553103.1 for the
latest release of dbserver.patch.zip)
NOTE
NOTE Database nodes will reboot during the update process.
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
*****************************************************************************
*******************************
2020-04-10 14:35:18 +0300 :Working: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:35:19 +0300 :SUCCESS: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:35:19 +0300 :Working: Initiate prepare steps on node(s).
2020-04-10 14:35:20 +0300 :Working: Check free space on 10.10.1.10
2020-04-10 14:35:23 +0300 :SUCCESS: Check free space on 10.10.1.10
2020-04-10 14:35:55 +0300 :SUCCESS: Initiate prepare steps on node(s).
2020-04-10 14:35:55 +0300 :Working: Initiate update on 1 node(s).
2020-04-10 14:35:55 +0300 :Working: dbnodeupdate.sh running a backup
on 1 node(s).
2020-04-10 14:38:27 +0300 :SUCCESS: dbnodeupdate.sh running a backup
on 1 node(s).
2020-04-10 14:38:27 +0300 :Working: Initiate update on node(s)
2020-04-10 14:38:27 +0300 :Working: Get information about any required
OS upgrades from node(s).
2020-04-10 14:38:37 +0300 :SUCCESS: Get information about any required
OS upgrades from node(s).
2020-04-10 14:38:37 +0300 :Working: dbnodeupdate.sh running an update
step on all nodes.
2020-04-10 14:49:06 +0300 :INFO : 10.10.1.10 is ready to reboot.
2020-04-10 14:49:06 +0300 :SUCCESS: dbnodeupdate.sh running an update
step on all nodes.
2020-04-10 14:49:11 +0300 :Working: Initiate reboot on node(s)
2020-04-10 14:49:16 +0300 :SUCCESS: Initiate reboot on node(s)
2020-04-10 14:49:17 +0300 :Working: Waiting to ensure 10.10.1.10 is
down before reboot.
2020-04-10 14:50:59 +0300 :SUCCESS: Waiting to ensure 10.10.1.10 is
down before reboot.
2020-04-10 14:50:59 +0300 :Working: Waiting to ensure 10.10.1.10 is up
after reboot.
2020-04-10 14:51:41 +0300 :SUCCESS: Waiting to ensure 10.10.1.10 is up
after reboot.
2020-04-10 14:51:41 +0300 :Working: Waiting to connect to 10.10.1.10
with SSH. During Linux upgrades this can take some time.
2020-04-10 15:10:38 +0300 :SUCCESS: Waiting to connect to 10.10.1.10
with SSH. During Linux upgrades this can take some time.
2020-04-10 15:10:38 +0300 :Working: Wait for 10.10.1.10 is ready for
the completion step of update.
2020-04-10 15:10:39 +0300 :SUCCESS: Wait for 10.10.1.10 is ready for
the completion step of update.
2020-04-10 15:10:39 +0300 :Working: Initiate completion step from
dbnodeupdate.sh on node(s)
2020-04-10 15:16:54 +0300 :SUCCESS: Initiate completion step from
dbnodeupdate.sh on 10.10.1.10
2020-04-10 15:17:07 +0300 :SUCCESS: Initiate update on node(s).
2020-04-10 15:17:07 +0300 :SUCCESS: Initiate update on 0 node(s).
[INFO ] Collected dbnodeupdate diag in file:
Diag_patchmgr_dbnode_upgrade_100420143517.tbz
-rw-r--r-- 1 root root 2866298 Apr 10 15:17
Diag_patchmgr_dbnode_upgrade_100420143517.tbz
2020-04-10 15:17:08 +0300 :SUCCESS: Completed run of command:
./patchmgr -dbnodes /root/dbs_group-1 -upgrade -iso_repo
/u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version
19.3.6.0.0.200317
2020-04-10 15:17:08 +0300 :INFO : Upgrade attempted on nodes in file
/root/dbs_group-1: [10.10.1.10]
2020-04-10 15:17:08 +0300 :INFO : Current image version on dbnode(s)
is:
2020-04-10 15:17:08 +0300 :INFO : 10.10.1.10: 19.3.6.0.0.200317
2020-04-10 15:17:08 +0300 :INFO : For details, check the following
files in /u01/stage/DBS/dbserver_patch_19.200331:
2020-04-10 15:17:08 +0300 :INFO : - <dbnode_name>_dbnodeupdate.log
2020-04-10 15:17:08 +0300 :INFO : - patchmgr.log
2020-04-10 15:17:08 +0300 :INFO : - patchmgr.trc
2020-04-10 15:17:08 +0300 :INFO : Exit status:0
2020-04-10 15:17:08 +0300 :INFO : Exiting.
• Now patch node 1 from another node in the cluster. In this example, node 2 is used to patch node 1.
[root@dm01db02 dbserver_patch_19.200331]# ./patchmgr -dbnodes ~/dbs_group-1 -upgrade -iso_repo /u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version 19.3.6.0.0.200317
*****************************************************************************
*******************************
NOTE patchmgr release: 19.200331 (always check MOS 1553103.1 for the
latest release of dbserver.patch.zip)
NOTE
NOTE Database nodes will reboot during the update process.
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
*****************************************************************************
*******************************
2020-04-10 15:33:01 +0300 :Working: Verify SSH equivalence for the
root user to 10.10.1.9
2020-04-10 15:33:01 +0300 :SUCCESS: Verify SSH equivalence for the
root user to 10.10.1.9
2020-04-10 15:33:02 +0300 :Working: Initiate prepare steps on node(s).
2020-04-10 15:33:03 +0300 :Working: Check free space on 10.10.1.9
2020-04-10 15:33:05 +0300 :SUCCESS: Check free space on 10.10.1.9
2020-04-10 15:33:37 +0300 :SUCCESS: Initiate prepare steps on node(s).
2020-04-10 15:33:37 +0300 :Working: Initiate update on 1 node(s).
2020-04-10 15:33:37 +0300 :Working: dbnodeupdate.sh running a backup
on 1 node(s).
2020-04-10 15:36:19 +0300 :SUCCESS: dbnodeupdate.sh running a backup
on 1 node(s).
2020-04-10 15:36:19 +0300 :Working: Initiate update on node(s)
2020-04-10 15:36:19 +0300 :Working: Get information about any required
OS upgrades from node(s).
2020-04-10 15:36:30 +0300 :SUCCESS: Get information about any required
OS upgrades from node(s).
2020-04-10 15:36:30 +0300 :Working: dbnodeupdate.sh running an update
step on all nodes.
2020-04-10 15:47:08 +0300 :INFO : 10.10.1.9 is ready to reboot.
2020-04-10 15:47:08 +0300 :SUCCESS: dbnodeupdate.sh running an update
step on all nodes.
2020-04-10 15:47:15 +0300 :Working: Initiate reboot on node(s)
2020-04-10 15:47:19 +0300 :SUCCESS: Initiate reboot on node(s)
2020-04-10 15:47:19 +0300 :Working: Waiting to ensure 10.10.1.9 is
down before reboot.
2020-04-10 15:49:02 +0300 :SUCCESS: Waiting to ensure 10.10.1.9 is
down before reboot.
2020-04-10 15:49:02 +0300 :Working: Waiting to ensure 10.10.1.9 is up
after reboot.
2020-04-10 15:49:38 +0300 :SUCCESS: Waiting to ensure 10.10.1.9 is up
after reboot.
2020-04-10 15:49:38 +0300 :Working: Waiting to connect to 10.10.1.9
with SSH. During Linux upgrades this can take some time.
2020-04-10 16:08:34 +0300 :SUCCESS: Waiting to connect to 10.10.1.9
with SSH. During Linux upgrades this can take some time.
2020-04-10 16:08:34 +0300 :Working: Wait for 10.10.1.9 is ready for
the completion step of update.
2020-04-10 16:09:23 +0300 :SUCCESS: Wait for 10.10.1.9 is ready for
the completion step of update.
2020-04-10 16:09:24 +0300 :Working: Initiate completion step from
dbnodeupdate.sh on node(s)
2020-04-10 16:20:09 +0300 :SUCCESS: Initiate completion step from
dbnodeupdate.sh on 10.10.1.9
2020-04-10 16:20:22 +0300 :SUCCESS: Initiate update on node(s).
2020-04-10 16:20:22 +0300 :SUCCESS: Initiate update on 0 node(s).
[INFO ] Collected dbnodeupdate diag in file:
Diag_patchmgr_dbnode_upgrade_100420153300.tbz
-rw-r--r-- 1 root root 3006068 Apr 10 16:20
Diag_patchmgr_dbnode_upgrade_100420153300.tbz
2020-04-10 16:20:23 +0300 :SUCCESS: Completed run of command:
./patchmgr -dbnodes /root/dbs_group-1 -upgrade -iso_repo
/u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version
19.3.6.0.0.200317
2020-04-10 16:20:23 +0300 :INFO : Upgrade attempted on nodes in file
/root/dbs_group-1: [10.10.1.9]
2020-04-10 16:20:23 +0300 :INFO : Current image version on dbnode(s)
is:
2020-04-10 16:20:23 +0300 :INFO : 10.10.1.9: 19.3.6.0.0.200317
2020-04-10 16:20:23 +0300 :INFO : For details, check the following
files in /u01/stage/DBS/dbserver_patch_19.200331:
2020-04-10 16:20:23 +0300 :INFO : - <dbnode_name>_dbnodeupdate.log
2020-04-10 16:20:23 +0300 :INFO : - patchmgr.log
2020-04-10 16:20:23 +0300 :INFO : - patchmgr.trc
2020-04-10 16:20:23 +0300 :INFO : Exit status:0
2020-04-10 16:20:23 +0300 :INFO : Exiting.
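After both nodes are patched, confirm the active image version on every compute node. A sketch of that check follows; the node addresses and expected version are taken from this example, and the sample output stands in for a real dcli/imageinfo run:

```shell
# Sketch: confirm the active image version on every compute node after
# the upgrade. On a real system you would gather the versions with:
#   dcli -g ~/dbs_group -l root imageinfo -ver
EXPECTED=19.3.6.0.0.200317

# Sample dcli-style output standing in for the real command:
versions='10.10.1.9: 19.3.6.0.0.200317
10.10.1.10: 19.3.6.0.0.200317'

echo "$versions" | while read -r line; do
    node=${line%%:*}
    ver=${line##*: }
    if [ "$ver" = "$EXPECTED" ]; then
        echo "$node OK ($ver)"
    else
        echo "$node MISMATCH: found $ver, expected $EXPECTED"
    fi
done
```

Any mismatch at this point usually means the completion step did not finish on that node; check the per-node dbnodeupdate log before proceeding.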
• Verify that Oracle Clusterware is up and running:
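The resource listing below is produced by crsctl stat res -t run from the Grid Infrastructure home. A small sketch of flagging any resource that did not come back ONLINE after patching (the crsctl path is an assumption, and the sample lines, one deliberately OFFLINE, are illustrative):

```shell
# Sketch: capture Clusterware resource status and flag anything that is
# not ONLINE. The crsctl path is an assumption -- use your own Grid home:
#   /u01/app/19.0.0.0/grid/bin/crsctl stat res -t > /tmp/crs_status.txt

# Sample lines standing in for real output (one deliberately OFFLINE
# to show the warning path):
cat > /tmp/crs_status.txt <<'EOF'
1        ONLINE  ONLINE       dm01db01                 STABLE
2        ONLINE  OFFLINE      dm01db02                 STABLE
EOF

if grep -q 'OFFLINE' /tmp/crs_status.txt; then
    echo "WARNING: some resources are not ONLINE; investigate before releasing the system"
else
    echo "All listed resources are ONLINE"
fi
```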
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.cvu
1 ONLINE ONLINE dm01db02 STABLE
ora.orcldb.db
1 ONLINE ONLINE dm01db01 Open,HOME=/u01/app/o
racle/product/11.2.0
.4/dbhome_1,STABLE
2 ONLINE ONLINE dm01db02 Open,HOME=/u01/app/o
racle/product/11.2.0
.4/dbhome_1,STABLE
ora.nsmdb.db
1 ONLINE ONLINE dm01db01 Open,HOME=/u01/app/o
racle/product/12.2.0
.1/dbhome_1,STABLE
2 ONLINE ONLINE dm01db02 Open,HOME=/u01/app/o
racle/product/12.2.0
.1/dbhome_1,STABLE
ora.dm01db01.vip
1 ONLINE ONLINE dm01db01 STABLE
ora.dm01db02.vip
1 ONLINE ONLINE dm01db02 STABLE
ora.qosmserver
1 ONLINE ONLINE dm01db02 STABLE
ora.scan1.vip
1 ONLINE ONLINE dm01db01 STABLE
ora.scan2.vip
1 ONLINE ONLINE dm01db02 STABLE
ora.scan3.vip
1 ONLINE ONLINE dm01db02 STABLE
-----------------------------------------------------------------------------
---
Conclusion
The objective of publishing this recipes book on Exadata patching is to ensure that all aspects related to patching the various Exadata components are effectively covered. This e-book will help you prepare for and deploy patching using the Oracle-provided patchmgr utility.
Oracle Exadata X8M is the newest addition to the Exadata family. The Netsoftmate Oracle engineered systems team realized that there was no proper content or guide available online covering detailed patching and management of this newly launched Exadata X8M machine.
Handy References:
1. Article: 10 Easy Steps To Patch Oracle Exadata X8M RoCE Switch
2. Article: 7 Easy Steps To Verify RoCE Cabling On Oracle Exadata X8M
3. Article: Step-By-Step Guide Of Exadata Snapshot Based Backup Of Compute Node To NFS Share
4. Article: All You Need To Know About Oracle Autonomous Health Framework Execution
Website: www.netsoftmate.com
Email: info@netsoftmate.com