
Table of Contents

Exadata Patching Recipes
    Exadata Patching
    The patchmgr Utility
    Exadata Patch Release Frequency
    Important MOS Notes
    Exadata Health Check
    Download Exadata Patches
    Exadata Storage Cell Patching
        Current Image version
        Prerequisites
        Steps to Patch Storage Cell
    Exadata RoCE Switch Patching
        About RDMA over Converged Ethernet (RoCE)
        About RoCE Switch patching
        Steps to Patch RoCE Switch
    Exadata Infiniband Switch Patching
        About Infiniband Switch
        About Infiniband Switch patching
        Steps to Patch Infiniband Switch
    Exadata Compute node Patching
        About Compute node Patching
        Prerequisites
        Steps to Patch Compute nodes
    Conclusion

Exadata Patching Recipes
Exadata is an Engineered System consisting of Compute nodes, Storage cells, and InfiniBand Switches or RoCE
Switches (starting with X8M). Each of these components runs Oracle software that needs to be updated regularly,
and Oracle periodically releases patches for Exadata to keep them current. Exadata patches include fixes for all
Exadata components, such as storage cells and Compute nodes, and optionally InfiniBand and RoCE switches.
Exadata patches can be applied online (rolling) or offline (non-rolling).

Exadata patching is one of the most critical and complex tasks you will perform on an Exadata Database Machine.
Extreme care must be taken before applying Exadata patches.

Exadata Database Machine Components

• Compute nodes (Database Server Grid)


o Exadata (Linux Operating System, Firmware, Exadata software)
o Oracle RDBMS and Grid Infrastructure software
• Exadata Storage Server (Storage Server Grid)
o Exadata (Linux Operating System, Firmware, Exadata software)
• Network (Network Grid)
o Exadata IB switch software
o Exadata RoCE switch software – From Exadata X8M
• Other Components
o Cisco Switch, PDUs

The Exadata System Software should be updated periodically. Oracle releases patches for Exadata every
quarter to keep these components up to date. These patches can be applied online (rolling) or offline (non-rolling).

Exadata Patching
In this Exadata patching recipes book, we demonstrate, step by step, how to patch an Exadata X8M-2 Quarter
Rack to ESS version 19.3.6.

The patchmgr Utility


A single utility, patchmgr, is used to patch the entire Exadata software stack. The patchmgr utility can be used
to upgrade, roll back, and back up Exadata software, and it can update Storage cells, Compute nodes, IB Switches,
and RoCE switches in a rolling or non-rolling fashion. Non-rolling is the default patching option.

Launch patchmgr from a compute node, typically node 1, that has SSH user equivalence set up to all the storage
cells, Compute nodes, and IB/RoCE Switches. To upgrade Compute node 1 itself, run patchmgr from Compute node 2 or
any other node that has user equivalence set up to node 1.
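
As an illustration, here is a minimal sketch of invoking patchmgr against the storage cells in non-rolling versus rolling mode (the cell_group file and patch directory match the staging layout used later in this guide; always confirm the exact options against the patch README):

# Non-rolling (default): all cells are patched together and require a full outage
[root@dm01db01 patch_19.3.6.0.0.200317]# ./patchmgr -cells ~/cell_group -patch

# Rolling: cells are patched one at a time while Grid Infrastructure and ASM remain up
[root@dm01db01 patch_19.3.6.0.0.200317]# ./patchmgr -cells ~/cell_group -patch -rolling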

Exadata Patch Release Frequency
Exadata patches are released at the following frequencies, which are subject to change without notice.

Exadata Storage Server Software (ESS)


• Quarterly

Quarterly Full Stack Download (QFSD)


• Quarterly

Infiniband Switch (IB)


• Semi-annually to Annually

RoCE Switch (RoCE)


• Semi-annually to Annually

Bundle Patch (QDPE)


• 19c – Quarterly
• 18c – Quarterly
• 12.2.0.1 – Quarterly
• 12.1.0.2 – Quarterly
• 12.1.0.1 – No further BP
• 11.2.0.4 – Quarterly
• 11.2.0.3 – No further BP
• 11.2.0.2 – No further BP
• 11.2.0.1 – No further BP

Important MOS Notes


MOS Note     Description
888828.1     Exadata Database Machine and Exadata Storage Server Supported Versions
2550798.1    Autonomous Health Framework (AHF) - Including TFA and ORAchk/EXAChk
1553103.1    Updating Exadata Database Server Software using the DBNodeUpdate Utility and patchmgr
1405320.1    Responses to common Exadata security scan findings
1270094.1    Exadata Critical Issues
2638622.1    Exadata 19.3.6.0.0 release and patch (31027642)

Exadata Health Check
Oracle Autonomous Health Framework contains Oracle ORAchk, Oracle EXAchk, and Oracle Trace File Analyzer.
You have access to Oracle Autonomous Health Framework as a value add-on to your existing support contract.
There is no additional fee or license required to run Oracle Autonomous Health Framework.

Run exachk before and after Exadata patching to perform a complete Exadata stack health check, and correct
any issues it reports.

[root@dm01db01 ~]# cd /opt/oracle.ahf/exachk/


[root@dm01db01 exachk]# ./exachk

For complete details on AHF and how to download, install and execute exachk on Exadata, refer to the following
link: https://netsoftmate.com/blog/all-you-need-to-know-about-oracle-autonomous-health-framework/

Download Exadata Patches


Download the following patches from MOS and copy them to Compute node 1 under the /u01/stage directory

Patch Number   Patch Description
31027642       Storage server software (19.3.6.0.0.200317)
30893922       RDMA network switch (7.0(3)I7(7)) and InfiniBand network switch (2.2.14-1) software
30893918       Database server bare metal / KVM / Xen domU ULN exadata_dbserver_19.3.6.0.0_x86_64_base OL7 channel ISO image (19.3.6.0.0.200317)
21634633       DBSERVER.PATCH.ZIP ORCHESTRATOR PLUS DBNU

Here I have staged the required software under the following directories for ease of understanding:

/u01/stage/CELL   Exadata Storage software patch
/u01/stage/DBS    Compute node software patch and patchmgr utility
/u01/stage/ROCE   RoCE & IB Switch software patch
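
Optionally, verify the integrity of the downloaded zip files before unzipping them; a minimal sketch, comparing the output against the digests published on the MOS patch download pages:

[root@dm01db01 ~]# sha256sum /u01/stage/CELL/p31027642_193000_Linux-x86-64.zip
[root@dm01db01 ~]# sha256sum /u01/stage/ROCE/p30893922_193000_Linux-x86-64.zip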

Exadata Storage Cell Patching
Current Image version

• From one of the Compute nodes, execute the “imageinfo” command against a storage cell to identify the current
Exadata image version

[root@dm01db01 ~]# ssh dm01cel01 imageinfo

Kernel version: 4.14.35-1902.5.1.4.el7uek.x86_64 #2 SMP Wed Oct 9 19:29:16 PDT 2019 x86_64
Cell version: OSS_19.3.2.0.0_LINUX.X64_191119
Cell rpm version: cell-19.3.2.0.0_LINUX.X64_191119-1.x86_64

Active image version: 19.3.2.0.0.191119
Active image kernel version: 4.14.35-1902.5.1.4.el7uek
Active image activated: 2020-01-02 23:25:57 -0800
Active image status: success
Active node type: STORAGE
Active system partition on device: /dev/md24p5
Active software partition on device: /dev/md24p7

Cell boot usb partition: /dev/md25p1
Cell boot usb version: 19.3.2.0.0.191119

Inactive image version: undefined
Rollback to the inactive partitions: Impossible

Prerequisites

• Install and configure VNC Server on Exadata compute node 1. It is recommended to use VNC or the screen
utility for patching to avoid disconnections due to network issues.
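
If VNC is not available, the screen utility serves the same purpose; a minimal sketch (the session name is only an example):

[root@dm01db01 ~]# screen -S exa_patch      # start a named screen session to run patchmgr in
[root@dm01db01 ~]# screen -r exa_patch      # reattach to the session after a disconnection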

• Enable blackout (OEM, crontab and so on)
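
For targets monitored by Oracle Enterprise Manager, a node-level blackout can be started from the EM agent on each server; a minimal sketch (the agent home path and blackout name below are assumptions for illustration):

[root@dm01db01 ~]# /u01/app/oracle/agent/agent_inst/bin/emctl start blackout exa_patching -nodeLevel
[root@dm01db01 ~]# /u01/app/oracle/agent/agent_inst/bin/emctl status blackout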

• Verify disk space on storage cells

[root@dm01db01 ~]# dcli -g ~/cell_group -l root 'df -h /'


10.10.1.11: Filesystem Size Used Avail Use% Mounted on
10.10.1.11: /dev/md24p5 10G 3.9G 6.2G 39% /
10.10.1.12: Filesystem Size Used Avail Use% Mounted on
10.10.1.12: /dev/md24p5 10G 3.9G 6.2G 39% /
10.10.1.13: Filesystem Size Used Avail Use% Mounted on
10.10.1.13: /dev/md24p5 10G 3.9G 6.2G 39% /

• Run exachk before starting the actual patching. Correct any critical issues and failures that conflict with
patching.

• Verify hardware status. Make sure there are no hardware failures before patching

[root@dm01db01 ~]# dcli -g ~/cell_group -l root 'cellcli -e list physicaldisk where status!=normal'

[root@dm01db01 ~]# dcli -l root -g ~/cell_group "cellcli -e list physicaldisk where diskType=FlashDisk and status not = normal"

[root@dm01db01 ~]# dcli -g ~/cell_group -l root 'ipmitool sunoem cli "show -d properties -level all /SYS fault_state==Faulted"'

• Clear or acknowledge alerts on db and cell nodes

[root@dm01db01 ~]# dcli -l root -g ~/cell_group "cellcli -e drop alerthistory all"

[root@dm01db01 ~]# dcli -l root -g ~/dbs_group "dbmcli -e drop alerthistory all"

• Download the patches and copy them to compute node 1 under the staging directory
[root@dm01db01 ~]# mkdir -p /u01/stage/CELL

[root@dm01db01 ~]# mv /u01/stage/p31027642_193000_Linux-x86-64.zip /u01/stage/CELL/

[root@dm01db01 ~]# cd /u01/stage/CELL/

[root@dm01db01 CELL]# ls -ltr


total 1443412
-rw-r--r-- 1 root root 1478050137 Apr 6 10:23 p31027642_193000_Linux-x86-
64.zip

• Unzip the patch in the staging area on compute node 1

[root@dm01db01 CELL]# unzip p31027642_193000_Linux-x86-64.zip


Archive: p31027642_193000_Linux-x86-64.zip
creating: patch_19.3.6.0.0.200317/
inflating: patch_19.3.6.0.0.200317/dostep.sh
inflating: patch_19.3.6.0.0.200317/19.3.6.0.0.200317.patch.tar
inflating: patch_19.3.6.0.0.200317/dcli
inflating: patch_19.3.6.0.0.200317/exadata.img.hw

inflating: patch_19.3.6.0.0.200317/imageLogger
inflating: patch_19.3.6.0.0.200317/exadata.img.env
inflating: patch_19.3.6.0.0.200317/ExadataSendNotification.pm
inflating: patch_19.3.6.0.0.200317/README.txt
inflating: patch_19.3.6.0.0.200317/patchReport.py
creating: patch_19.3.6.0.0.200317/etc/
creating: patch_19.3.6.0.0.200317/etc/config/
inflating: patch_19.3.6.0.0.200317/etc/config/inventory.xml
inflating: patch_19.3.6.0.0.200317/patchmgr
inflating: patch_19.3.6.0.0.200317/patchmgr_functions
inflating: patch_19.3.6.0.0.200317/cellboot_usb_pci_path
inflating: patch_19.3.6.0.0.200317/dostep.sh.tmpl
inflating: patch_19.3.6.0.0.200317/ExadataImageNotification.pl
inflating: patch_19.3.6.0.0.200317/19.3.6.0.0.200317.iso
inflating: patch_19.3.6.0.0.200317/ExaXMLNode.pm
inflating: patch_19.3.6.0.0.200317/md5sum_files.lst
creating: patch_19.3.6.0.0.200317/plugins/
inflating: patch_19.3.6.0.0.200317/plugins/010-check_17854520.sh
inflating: patch_19.3.6.0.0.200317/plugins/030-check_24625612.sh
inflating: patch_19.3.6.0.0.200317/plugins/050-check_22651315.sh
inflating: patch_19.3.6.0.0.200317/plugins/040-check_22896791.sh
inflating: patch_19.3.6.0.0.200317/plugins/005-check_22909764.sh
inflating: patch_19.3.6.0.0.200317/plugins/000-check_dummy_perl
inflating: patch_19.3.6.0.0.200317/plugins/020-check_22468216.sh
inflating: patch_19.3.6.0.0.200317/plugins/000-check_dummy_bash
creating: patch_19.3.6.0.0.200317/linux.db.rpms/
inflating: patch_19.3.6.0.0.200317/README.html

• Read the README file and the Exadata documentation for the storage cell patching steps

Steps to Patch Storage Cell


• Open a VNC session and log in as the root user

[root@dm01db01 CELL]# id
uid=0(root) gid=0(root) groups=0(root)

• Check SSH user equivalence

[root@dm01db01 CELL]# dcli -g ~/cell_group -l root uptime


10.10.1.11: 00:18:36 up 2 days, 10:19, 0 users, load average: 1.59, 1.53,
1.63

10.10.1.12: 00:18:36 up 2 days, 10:19, 0 users, load average: 2.09, 1.39,
1.29
10.10.1.13: 00:18:36 up 2 days, 10:19, 0 users, load average: 2.41, 2.10,
2.10

• Check the disk_repair_time attribute of the Oracle ASM disk groups and adjust it if necessary (see the sketch after the query output below).

[root@dm01db01 CELL]# su - oracle


Last login: Fri Apr 10 00:02:27 +03 2020

[oracle@dm01db01 ~]$ . oraenv


ORACLE_SID = [oracle] ? +ASM1
ORACLE_HOME = [/home/oracle] ? /u01/app/19.0.0.0/grid
The Oracle base has been set to /u01/app/oracle
[oracle@dm01db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Apr 10 00:19:02 2020


Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

SQL> col value for a40


SQL> select dg.name,a.value from v$asm_diskgroup dg, v$asm_attribute a where
dg.group_number=a.group_number and a.name='disk_repair_time';

NAME VALUE
------------------------------ -------------------------------------
DG_DATA 12.0h
RECOC1 12.0h
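
If the current value does not cover the planned patching window, it can be raised before the update and reverted afterwards; a minimal sketch using the disk group names from this environment (the 24h value is only an example, and the attribute change requires the SYSASM privilege on the ASM instance):

[oracle@dm01db01 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP DG_DATA SET ATTRIBUTE 'disk_repair_time' = '24h';
SQL> ALTER DISKGROUP RECOC1 SET ATTRIBUTE 'disk_repair_time' = '24h';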

• Shut down the Oracle components on each database server using the following commands:

[root@dm01db01 CELL]# dcli -g ~/dbs_group -l root '/u01/app/19.0.0.0/grid/bin/crsctl stop cluster -all'
10.10.1.9: CRS-2673: Attempting to stop 'ora.crsd' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.crsd' on 'dm01db02'
10.10.1.9: CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on server 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.chad' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.nsmdb.db' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.orcldb.db' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.qosmserver' on 'dm01db01'

10.10.1.9: CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on server 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.chad' on 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.orcldb.db' on 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.nsmdb.db' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.orcldb.db' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.nsmdb.db' on 'dm01db01' succeeded
10.10.1.9: CRS-33673: Attempting to stop resource group 'ora.asmgroup' on
server 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.DG_DATA.dg' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.RECOC1.dg' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on
'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on
'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.cvu' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.DG_DATA.dg' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.RECOC1.dg' on 'dm01db01' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.asm' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.qosmserver' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'dm01db01'
succeeded
10.10.1.9: CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'dm01db01'
succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.dm01db01.vip' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.orcldb.db' on 'dm01db02' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.scan2.vip' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.scan3.vip' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.dm01db01.vip' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.scan3.vip' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.nsmdb.db' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.scan2.vip' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.asm' on 'dm01db01' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on
'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.cvu' on 'dm01db01' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on
'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'dm01db02' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.dm01db02.vip' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'dm01db02'
succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.scan1.vip' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.dm01db02.vip' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.scan1.vip' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.chad' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'dm01db01'
succeeded

10.10.1.9: CRS-2673: Attempting to stop 'ora.asmnet1.asmnetwork' on
'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.asmnet1.asmnetwork' on 'dm01db01' succeeded
10.10.1.9: CRS-33677: Stop of resource group 'ora.asmgroup' on server
'dm01db01' succeeded.
10.10.1.9: CRS-33673: Attempting to stop resource group 'ora.asmgroup' on
server 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.DG_DATA.dg' on 'dm01db02'
10.10.1.9: CRS-2673: Attempting to stop 'ora.RECOC1.dg' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.DG_DATA.dg' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.RECOC1.dg' on 'dm01db02' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.asm' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.asm' on 'dm01db02' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on
'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.chad' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'dm01db02'
succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.asmnet1.asmnetwork' on
'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.asmnet1.asmnetwork' on 'dm01db02' succeeded
10.10.1.9: CRS-33677: Stop of resource group 'ora.asmgroup' on server
'dm01db02' succeeded.
10.10.1.9: CRS-2673: Attempting to stop 'ora.ons' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.ons' on 'dm01db02' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.net1.network' on 'dm01db02'
10.10.1.9: CRS-2677: Stop of 'ora.net1.network' on 'dm01db02' succeeded
10.10.1.9: CRS-2792: Shutdown of Cluster Ready Services-managed resources on
'dm01db02' has completed
10.10.1.9: CRS-2673: Attempting to stop 'ora.ons' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.ons' on 'dm01db01' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.net1.network' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.net1.network' on 'dm01db01' succeeded
10.10.1.9: CRS-2792: Shutdown of Cluster Ready Services-managed resources on
'dm01db01' has completed
10.10.1.9: CRS-2677: Stop of 'ora.crsd' on 'dm01db02' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.crsd' on 'dm01db01' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.storage' on 'dm01db02'
10.10.1.10: CRS-2673: Attempting to stop 'ora.evmd' on 'dm01db02'
10.10.1.10: CRS-2673: Attempting to stop 'ora.storage' on 'dm01db01'
10.10.1.10: CRS-2673: Attempting to stop 'ora.evmd' on 'dm01db01'
10.10.1.10: CRS-2677: Stop of 'ora.storage' on 'dm01db01' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.storage' on 'dm01db02' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.evmd' on 'dm01db02' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.ctssd' on 'dm01db02'
10.10.1.10: CRS-2673: Attempting to stop 'ora.asm' on 'dm01db02'
10.10.1.10: CRS-2677: Stop of 'ora.evmd' on 'dm01db01' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.ctssd' on 'dm01db01'
10.10.1.10: CRS-2673: Attempting to stop 'ora.asm' on 'dm01db01'
10.10.1.10: CRS-2677: Stop of 'ora.ctssd' on 'dm01db02' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.ctssd' on 'dm01db01' succeeded

10.10.1.10: CRS-2677: Stop of 'ora.asm' on 'dm01db02' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.cssd' on 'dm01db02'
10.10.1.10: CRS-2677: Stop of 'ora.cssd' on 'dm01db02' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.diskmon' on 'dm01db02'
10.10.1.10: CRS-2677: Stop of 'ora.asm' on 'dm01db01' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.cssd' on 'dm01db01'
10.10.1.10: CRS-2677: Stop of 'ora.cssd' on 'dm01db01' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.diskmon' on 'dm01db01'
10.10.1.10: CRS-2677: Stop of 'ora.diskmon' on 'dm01db02' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.diskmon' on 'dm01db01' succeeded
10.10.1.10: CRS-2679: Attempting to clean 'ora.diskmon' on 'dm01db01'
10.10.1.10: CRS-2681: Clean of 'ora.diskmon' on 'dm01db01' succeeded

[root@dm01db01 CELL]# dcli -g ~/dbs_group -l root '/u01/app/19.0.0.0/grid/bin/crsctl stop crs'
10.10.1.9: CRS-2791: Starting shutdown of Oracle High Availability Services-
managed resources on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.gpnpd' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.mdnsd' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.crf' on 'dm01db01'
10.10.1.9: CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.gpnpd' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.drivers.acfs' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.mdnsd' on 'dm01db01' succeeded
10.10.1.9: CRS-2677: Stop of 'ora.crf' on 'dm01db01' succeeded
10.10.1.9: CRS-2673: Attempting to stop 'ora.gipcd' on 'dm01db01'
10.10.1.9: CRS-2677: Stop of 'ora.gipcd' on 'dm01db01' succeeded
10.10.1.9: CRS-2793: Shutdown of Oracle High Availability Services-managed
resources on 'dm01db01' has completed
10.10.1.9: CRS-4133: Oracle High Availability Services has been stopped.
10.10.1.10: CRS-2791: Starting shutdown of Oracle High Availability Services-
managed resources on 'dm01db02'
10.10.1.10: CRS-2673: Attempting to stop 'ora.crf' on 'dm01db02'
10.10.1.10: CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'dm01db02'
10.10.1.10: CRS-2673: Attempting to stop 'ora.gpnpd' on 'dm01db02'
10.10.1.10: CRS-2673: Attempting to stop 'ora.mdnsd' on 'dm01db02'
10.10.1.10: CRS-2677: Stop of 'ora.drivers.acfs' on 'dm01db02' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.crf' on 'dm01db02' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.gpnpd' on 'dm01db02' succeeded
10.10.1.10: CRS-2673: Attempting to stop 'ora.gipcd' on 'dm01db02'
10.10.1.10: CRS-2677: Stop of 'ora.mdnsd' on 'dm01db02' succeeded
10.10.1.10: CRS-2677: Stop of 'ora.gipcd' on 'dm01db02' succeeded
10.10.1.10: CRS-2793: Shutdown of Oracle High Availability Services-managed
resources on 'dm01db02' has completed
10.10.1.10: CRS-4133: Oracle High Availability Services has been stopped.

• Get the current Exadata Storage cell software version

[root@dm01db01 CELL]# ssh dm01cel01 imageinfo

Kernel version: 4.14.35-1902.5.1.4.el7uek.x86_64 #2 SMP Wed Oct 9 19:29:16
PDT 2019 x86_64
Cell version: OSS_19.3.2.0.0_LINUX.X64_191119
Cell rpm version: cell-19.3.2.0.0_LINUX.X64_191119-1.x86_64

Active image version: 19.3.2.0.0.191119


Active image kernel version: 4.14.35-1902.5.1.4.el7uek
Active image activated: 2020-01-02 23:25:57 -0800
Active image status: success
Active node type: STORAGE
Active system partition on device: /dev/md24p5
Active software partition on device: /dev/md24p7

Cell boot usb partition: /dev/md25p1


Cell boot usb version: 19.3.2.0.0.191119

Inactive image version: undefined


Rollback to the inactive partitions: Impossible

• Shut down all cell services on all cells to be updated. This may be done by the root user on each cell by
running cellcli -e 'alter cell shutdown services all' or by the following dcli command to do all cells at the
same time:

[root@dm01db01 CELL]# dcli -g ~/cell_group -l root "cellcli -e alter cell shutdown services all"
10.10.1.11:
10.10.1.11: Stopping the RS, CELLSRV, and MS services...
10.10.1.11: The SHUTDOWN of services was successful.
10.10.1.12:
10.10.1.12: Stopping the RS, CELLSRV, and MS services...
10.10.1.12: The SHUTDOWN of services was successful.
10.10.1.13:
10.10.1.13: Stopping the RS, CELLSRV, and MS services...
10.10.1.13: The SHUTDOWN of services was successful.
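
Optionally, confirm on all cells that the RS, MS, and CELLSRV services are actually stopped before continuing; a minimal sketch:

[root@dm01db01 CELL]# dcli -g ~/cell_group -l root "cellcli -e list cell attributes rsStatus,msStatus,cellsrvStatus detail"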

• Reset the patchmgr state to a known state using the following command:

[root@dm01db01 CELL]# ls -ltr


total 1443416
drwxrwxr-x 5 root root 4096 Mar 18 07:06 patch_19.3.6.0.0.200317
-rw-r--r-- 1 root root 1478050137 Apr 6 10:23 p31027642_193000_Linux-x86-
64.zip

[root@dm01db01 CELL]# cd patch_19.3.6.0.0.200317/

[root@dm01db01 patch_19.3.6.0.0.200317]# ./patchmgr -cells ~/cell_group -reset_force

2020-04-10 00:22:46 +0300 :Working: Force Cleanup
2020-04-10 00:22:48 +0300 :SUCCESS: Force Cleanup
2020-04-10 00:22:48 +0300 :SUCCESS: Completed run of command:
./patchmgr -cells /root/cell_group -reset_force
2020-04-10 00:22:48 +0300 :INFO : Reset_Force attempted on nodes in
file /root/cell_group: [10.10.1.11 10.10.1.12 10.10.1.13]
2020-04-10 00:22:48 +0300 :INFO : Current image version on cell(s)
is:
2020-04-10 00:22:48 +0300 :INFO : 10.10.1.11: 19.3.2.0.0.191119
2020-04-10 00:22:48 +0300 :INFO : 10.10.1.12: 19.3.2.0.0.191119
2020-04-10 00:22:48 +0300 :INFO : 10.10.1.13: 19.3.2.0.0.191119
2020-04-10 00:22:48 +0300 :INFO : For details, check the following
files in /u01/stage/CELL/patch_19.3.6.0.0.200317:
2020-04-10 00:22:48 +0300 :INFO : - <cell_name>.log
2020-04-10 00:22:48 +0300 :INFO : - patchmgr.stdout
2020-04-10 00:22:48 +0300 :INFO : - patchmgr.stderr
2020-04-10 00:22:48 +0300 :INFO : - patchmgr.log
2020-04-10 00:22:48 +0300 :INFO : - patchmgr.trc
2020-04-10 00:22:48 +0300 :INFO : Exit status:0
2020-04-10 00:22:48 +0300 :INFO : Exiting.

• Clean up any previous patchmgr utility runs using the following command:

[root@dm01db01 patch_19.3.6.0.0.200317]# ./patchmgr -cells ~/cell_group -cleanup

2020-04-10 00:22:54 +0300 :Working: Cleanup


2020-04-10 00:22:56 +0300 :SUCCESS: Cleanup
2020-04-10 00:22:57 +0300 :SUCCESS: Completed run of command:
./patchmgr -cells /root/cell_group -cleanup
2020-04-10 00:22:57 +0300 :INFO : Cleanup attempted on nodes in file
/root/cell_group: [10.10.1.11 10.10.1.12 10.10.1.13]
2020-04-10 00:22:57 +0300 :INFO : Current image version on cell(s)
is:
2020-04-10 00:22:57 +0300 :INFO : 10.10.1.11: 19.3.2.0.0.191119
2020-04-10 00:22:57 +0300 :INFO : 10.10.1.12: 19.3.2.0.0.191119
2020-04-10 00:22:57 +0300 :INFO : 10.10.1.13: 19.3.2.0.0.191119
2020-04-10 00:22:57 +0300 :INFO : For details, check the following
files in /u01/stage/CELL/patch_19.3.6.0.0.200317:
2020-04-10 00:22:57 +0300 :INFO : - <cell_name>.log
2020-04-10 00:22:57 +0300 :INFO : - patchmgr.stdout
2020-04-10 00:22:57 +0300 :INFO : - patchmgr.stderr
2020-04-10 00:22:57 +0300 :INFO : - patchmgr.log
2020-04-10 00:22:57 +0300 :INFO : - patchmgr.trc
2020-04-10 00:22:57 +0300 :INFO : Exit status:0
2020-04-10 00:22:57 +0300 :INFO : Exiting.

• Verify that the cells meet prerequisite checks using the following command.

[root@dm01db01 patch_19.3.6.0.0.200317]# ./patchmgr -cells ~/cell_group -patch_check_prereq

2020-04-10 00:23:03 +0300 :Working: Check cells have ssh equivalence


for root user. Up to 10 seconds per cell ...
2020-04-10 00:23:04 +0300 :SUCCESS: Check cells have ssh equivalence
for root user.
2020-04-10 00:23:06 +0300 :Working: Initialize files. Up to 1 minute
...
2020-04-10 00:23:07 +0300 :Working: Setup work directory
2020-04-10 00:23:09 +0300 :SUCCESS: Setup work directory
2020-04-10 00:23:12 +0300 :SUCCESS: Initialize files.
2020-04-10 00:23:12 +0300 :Working: Copy, extract prerequisite check
archive to cells. If required start md11 mismatched partner size correction.
Up to 40 minutes ...
2020-04-10 00:23:25 +0300 :INFO : Wait correction of degraded md11
due to md partner size mismatch. Up to 30 minutes.
2020-04-10 00:23:27 +0300 :SUCCESS: Copy, extract prerequisite check
archive to cells. If required start md11 mismatched partner size correction.
2020-04-10 00:23:27 +0300 :Working: Check space and state of cell
services. Up to 20 minutes ...
2020-04-10 00:24:43 +0300 :SUCCESS: Check space and state of cell
services.
2020-04-10 00:24:43 +0300 :Working: Check prerequisites on all cells.
Up to 2 minutes ...
2020-04-10 00:24:45 +0300 :SUCCESS: Check prerequisites on all cells.
2020-04-10 00:24:45 +0300 :Working: Execute plugin check for Patch
Check Prereq ...
2020-04-10 00:24:45 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 22909764 v1.0.
2020-04-10 00:24:45 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:24:45 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 17854520 v1.3.
2020-04-10 00:24:45 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:24:45 +0300 :SUCCESS: No exposure to bug 17854520 with
non-rolling patching
2020-04-10 00:24:45 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 22468216 v1.0.
2020-04-10 00:24:45 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:24:45 +0300 :SUCCESS: Patchmgr plugin complete: Prereq
check passed for the bug 22468216
2020-04-10 00:24:45 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 24625612 v1.0.
2020-04-10 00:24:45 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:24:46 +0300 :SUCCESS: Patchmgr plugin complete: Prereq
check passed for the bug 24625612

2020-04-10 00:24:46 +0300 :SUCCESS: No exposure to bug 22896791 with
non-rolling patching
2020-04-10 00:24:46 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 22651315 v1.0.
2020-04-10 00:24:46 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:24:47 +0300 :SUCCESS: Patchmgr plugin complete: Prereq
check passed for the bug 22651315
2020-04-10 00:24:48 +0300 :SUCCESS: Execute plugin check for Patch
Check Prereq.
2020-04-10 00:24:48 +0300 :Working: Check ASM deactivation outcome. Up
to 1 minute ...
2020-04-10 00:24:59 +0300 :SUCCESS: Check ASM deactivation outcome.
2020-04-10 00:24:59 +0300 :Working: check if MS SOFTWAREUPDATE is
scheduled. up to 5 minutes...
2020-04-10 00:24:59 +0300 :NO ACTION NEEDED: No cells found with
SOFTWAREUPDATE scheduled by MS
2020-04-10 00:25:00 +0300 :SUCCESS: check if MS SOFTWAREUPDATE is
scheduled
2020-04-10 00:25:01 +0300 :SUCCESS: Completed run of command:
./patchmgr -cells /root/cell_group -patch_check_prereq
2020-04-10 00:25:01 +0300 :INFO : patch_prereq attempted on nodes in
file /root/cell_group: [10.10.1.11 10.10.1.12 10.10.1.13]
2020-04-10 00:25:01 +0300 :INFO : Current image version on cell(s)
is:
2020-04-10 00:25:01 +0300 :INFO : 10.10.1.11: 19.3.2.0.0.191119
2020-04-10 00:25:01 +0300 :INFO : 10.10.1.12: 19.3.2.0.0.191119
2020-04-10 00:25:01 +0300 :INFO : 10.10.1.13: 19.3.2.0.0.191119
2020-04-10 00:25:01 +0300 :INFO : For details, check the following
files in /u01/stage/CELL/patch_19.3.6.0.0.200317:
2020-04-10 00:25:01 +0300 :INFO : - <cell_name>.log
2020-04-10 00:25:01 +0300 :INFO : - patchmgr.stdout
2020-04-10 00:25:01 +0300 :INFO : - patchmgr.stderr
2020-04-10 00:25:01 +0300 :INFO : - patchmgr.log
2020-04-10 00:25:01 +0300 :INFO : - patchmgr.trc
2020-04-10 00:25:01 +0300 :INFO : Exit status:0
2020-04-10 00:25:01 +0300 :INFO : Exiting.

• If the prerequisite checks pass, then start the update process.

[root@dm01db01 patch_19.3.6.0.0.200317]# ./patchmgr -cells ~/cell_group -patch
*****************************************************************************
***
NOTE Cells will reboot during the patch or rollback process.
NOTE For non-rolling patch or rollback, ensure all ASM instances using
NOTE the cells are shut down for the duration of the patch or rollback.
NOTE For rolling patch or rollback, ensure all ASM instances using
NOTE the cells are up for the duration of the patch or rollback.

WARNING Do not interrupt the patchmgr session.
WARNING Do not alter state of ASM instances during patch or rollback.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot cells or alter cell services during patch or rollback.
WARNING Do not open log files in editor in write mode or try to alter them.

NOTE All time estimates are approximate.


*****************************************************************************
***

2020-04-10 00:25:10 +0300 :Working: Check cells have ssh equivalence


for root user. Up to 10 seconds per cell ...
2020-04-10 00:25:11 +0300 :SUCCESS: Check cells have ssh equivalence
for root user.
2020-04-10 00:25:14 +0300 :Working: Initialize files. Up to 1 minute
...
2020-04-10 00:25:15 +0300 :Working: Setup work directory
2020-04-10 00:25:24 +0300 :SUCCESS: Setup work directory
2020-04-10 00:25:27 +0300 :SUCCESS: Initialize files.
2020-04-10 00:25:27 +0300 :Working: Copy, extract prerequisite check
archive to cells. If required start md11 mismatched partner size correction.
Up to 40 minutes ...
2020-04-10 00:25:41 +0300 :INFO : Wait correction of degraded md11
due to md partner size mismatch. Up to 30 minutes.
2020-04-10 00:25:42 +0300 :SUCCESS: Copy, extract prerequisite check
archive to cells. If required start md11 mismatched partner size correction.
2020-04-10 00:25:42 +0300 :Working: Check space and state of cell
services. Up to 20 minutes ...
2020-04-10 00:27:01 +0300 :SUCCESS: Check space and state of cell
services.
2020-04-10 00:27:01 +0300 :Working: Check prerequisites on all cells.
Up to 2 minutes ...
2020-04-10 00:27:03 +0300 :SUCCESS: Check prerequisites on all cells.
2020-04-10 00:27:03 +0300 :Working: Copy the patch to all cells. Up to
3 minutes ...
2020-04-10 00:27:45 +0300 :SUCCESS: Copy the patch to all cells.
2020-04-10 00:27:47 +0300 :Working: Execute plugin check for Patch
Check Prereq ...
2020-04-10 00:27:47 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 22909764 v1.0.
2020-04-10 00:27:47 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:27:47 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 17854520 v1.3.
2020-04-10 00:27:47 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:27:47 +0300 :SUCCESS: No exposure to bug 17854520 with
non-rolling patching

2020-04-10 00:27:47 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 22468216 v1.0.
2020-04-10 00:27:47 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:27:48 +0300 :SUCCESS: Patchmgr plugin complete: Prereq
check passed for the bug 22468216
2020-04-10 00:27:48 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 24625612 v1.0.
2020-04-10 00:27:48 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:27:48 +0300 :SUCCESS: Patchmgr plugin complete: Prereq
check passed for the bug 24625612
2020-04-10 00:27:48 +0300 :SUCCESS: No exposure to bug 22896791 with
non-rolling patching
2020-04-10 00:27:48 +0300 :INFO : Patchmgr plugin start: Prereq
check for exposure to bug 22651315 v1.0.
2020-04-10 00:27:48 +0300 :INFO : Details in logfile
/u01/stage/CELL/patch_19.3.6.0.0.200317/patchmgr.stdout.
2020-04-10 00:27:50 +0300 :SUCCESS: Patchmgr plugin complete: Prereq
check passed for the bug 22651315
2020-04-10 00:27:50 +0300 :SUCCESS: Execute plugin check for Patch
Check Prereq.
2020-04-10 00:27:50 +0300 :Working: check if MS SOFTWAREUPDATE is
scheduled. up to 5 minutes...
2020-04-10 00:27:51 +0300 :NO ACTION NEEDED: No cells found with
SOFTWAREUPDATE scheduled by MS
2020-04-10 00:27:52 +0300 :SUCCESS: check if MS SOFTWAREUPDATE is
scheduled
2020-04-10 00:27:54 +0300 1 of 5 :Working: Initiate patch on cells. Cells
will remain up. Up to 5 minutes ...
2020-04-10 00:27:58 +0300 1 of 5 :SUCCESS: Initiate patch on cells.
2020-04-10 00:27:58 +0300 2 of 5 :Working: Waiting to finish pre-reboot patch
actions. Cells will remain up. Up to 45 minutes ...
2020-04-10 00:28:58 +0300 :INFO : Wait for patch pre-reboot
procedures
2020-04-10 00:29:46 +0300 2 of 5 :SUCCESS: Waiting to finish pre-reboot patch
actions.
2020-04-10 00:29:46 +0300 :Working: Execute plugin check for Patching
...
2020-04-10 00:29:46 +0300 :SUCCESS: Execute plugin check for Patching.
2020-04-10 00:29:46 +0300 3 of 5 :Working: Finalize patch on cells. Cells
will reboot. Up to 5 minutes ...
2020-04-10 00:29:55 +0300 3 of 5 :SUCCESS: Finalize patch on cells.
2020-04-10 00:30:11 +0300 4 of 5 :Working: Wait for cells to reboot and come
online. Up to 120 minutes ...
2020-04-10 00:31:11 +0300 :INFO : Wait for patch finalization and
reboot
2020-04-10 00:56:03 +0300 4 of 5 :SUCCESS: Wait for cells to reboot and come
online.
2020-04-10 00:56:03 +0300 5 of 5 :Working: Check the state of patch on cells.
Up to 5 minutes ...

2020-04-10 00:56:12 +0300 5 of 5 :SUCCESS: Check the state of patch on cells.
2020-04-10 00:56:12 +0300 :Working: Execute plugin check for Pre Disk
Activation ...
2020-04-10 00:56:12 +0300 :SUCCESS: Execute plugin check for Pre Disk
Activation.
2020-04-10 00:56:12 +0300 :Working: Activate grid disks...
2020-04-10 00:56:13 +0300 :INFO : Wait for checking and activating
grid disks
2020-04-10 00:56:20 +0300 :SUCCESS: Activate grid disks.
2020-04-10 00:56:22 +0300 :Working: Execute plugin check for Post
Patch ...
2020-04-10 00:56:23 +0300 :SUCCESS: Execute plugin check for Post
Patch.
2020-04-10 00:56:24 +0300 :Working: Cleanup
2020-04-10 00:56:37 +0300 :SUCCESS: Cleanup
2020-04-10 00:56:38 +0300 :SUCCESS: Completed run of command:
./patchmgr -cells /root/cell_group -patch
2020-04-10 00:56:38 +0300 :INFO : patch attempted on nodes in file
/root/cell_group: [10.10.1.11 10.10.1.12 10.10.1.13]
2020-04-10 00:56:38 +0300 :INFO : Current image version on cell(s)
is:
2020-04-10 00:56:38 +0300 :INFO : 10.10.1.11: 19.3.6.0.0.200317
2020-04-10 00:56:38 +0300 :INFO : 10.10.1.12: 19.3.6.0.0.200317
2020-04-10 00:56:38 +0300 :INFO : 10.10.1.13: 19.3.6.0.0.200317
2020-04-10 00:56:38 +0300 :INFO : For details, check the following
files in /u01/stage/CELL/patch_19.3.6.0.0.200317:
2020-04-10 00:56:38 +0300 :INFO : - <cell_name>.log
2020-04-10 00:56:38 +0300 :INFO : - patchmgr.stdout
2020-04-10 00:56:38 +0300 :INFO : - patchmgr.stderr
2020-04-10 00:56:38 +0300 :INFO : - patchmgr.log
2020-04-10 00:56:38 +0300 :INFO : - patchmgr.trc
2020-04-10 00:56:38 +0300 :INFO : Exit status:0
2020-04-10 00:56:38 +0300 :INFO : Exiting.

• Monitor the log files and the cells being updated, especially when e-mail alerts are not set up. Open a new
session and tail the log file as shown below:

[root@dm01db01 patch_19.3.6.0.0.200317]# tail -f patchmgr.stdout

• Verify the update status after the patchmgr utility completes as follows:

[root@dm01db01 patch_19.3.6.0.0.200317]# ssh dm01cel01 imageinfo

Kernel version: 4.14.35-1902.9.2.el7uek.x86_64 #2 SMP Mon Dec 23 13:39:16 PST 2019 x86_64
Cell version: OSS_19.3.6.0.0_LINUX.X64_200317
Cell rpm version: cell-19.3.6.0.0_LINUX.X64_200317-1.x86_64

Active image version: 19.3.6.0.0.200317
Active image kernel version: 4.14.35-1902.9.2.el7uek
Active image activated: 2020-04-10 00:55:04 +0300
Active image status: success
Active node type: STORAGE
Active system partition on device: /dev/md24p6
Active software partition on device: /dev/md24p8

Cell boot usb partition: /dev/md25p1


Cell boot usb version: 19.3.6.0.0.200317

Inactive image version: 19.3.2.0.0.191119


Inactive image activated: 2020-01-02 23:25:57 -0800
Inactive image status: success
Inactive node type: STORAGE
Inactive system partition on device: /dev/md24p5
Inactive software partition on device: /dev/md24p7

Inactive marker for the rollback: /boot/I_am_hd_boot.inactive


Inactive grub config for the rollback: /boot/efi/EFI/redhat/grub.cfg.inactive
Inactive usb grub config for the rollback:
/boot/efi/EFI/redhat/grub.cfg.usb.inactive
Inactive kernel version for the rollback: 4.14.35-1902.5.1.4.el7uek.x86_64
Rollback to the inactive partitions: Possible

• Check the imagehistory

[root@dm01db01 patch_19.3.6.0.0.200317]# ssh dm01cel01 imagehistory


Version : 19.3.2.0.0.191119
Image activation date : 2020-01-02 23:25:57 -0800
Imaging mode : fresh
Imaging status : success

Version : 19.3.6.0.0.200317
Image activation date : 2020-04-10 00:55:04 +0300
Imaging mode : out of partition upgrade
Imaging status : success

• Verify the image on all cells

[root@dm01db01 patch_19.3.6.0.0.200317]# dcli -g ~/cell_group -l root 'imageinfo | grep "Active image version"'
10.10.1.11: Active image version: 19.3.6.0.0.200317
10.10.1.12: Active image version: 19.3.6.0.0.200317
10.10.1.13: Active image version: 19.3.6.0.0.200317

• Clean up the cells using the -cleanup option, which removes all the temporary update or rollback files on the
cells.

[root@dm01db01 patch_19.3.6.0.0.200317]# ./patchmgr -cells ~/cell_group -cleanup

2020-04-10 01:07:44 +0300 :Working: Cleanup


2020-04-10 01:07:46 +0300 :SUCCESS: Cleanup
2020-04-10 01:07:47 +0300 :SUCCESS: Completed run of command:
./patchmgr -cells /root/cell_group -cleanup
2020-04-10 01:07:47 +0300 :INFO : Cleanup attempted on nodes in file
/root/cell_group: [10.10.1.11 10.10.1.12 10.10.1.13]
2020-04-10 01:07:47 +0300 :INFO : Current image version on cell(s)
is:
2020-04-10 01:07:47 +0300 :INFO : 10.10.1.11: 19.3.6.0.0.200317
2020-04-10 01:07:47 +0300 :INFO : 10.10.1.12: 19.3.6.0.0.200317
2020-04-10 01:07:47 +0300 :INFO : 10.10.1.13: 19.3.6.0.0.200317
2020-04-10 01:07:48 +0300 :INFO : For details, check the following
files in /u01/stage/CELL/patch_19.3.6.0.0.200317:
2020-04-10 01:07:48 +0300 :INFO : - <cell_name>.log
2020-04-10 01:07:48 +0300 :INFO : - patchmgr.stdout
2020-04-10 01:07:48 +0300 :INFO : - patchmgr.stderr
2020-04-10 01:07:48 +0300 :INFO : - patchmgr.log
2020-04-10 01:07:48 +0300 :INFO : - patchmgr.trc
2020-04-10 01:07:48 +0300 :INFO : Exit status:0
2020-04-10 01:07:48 +0300 :INFO : Exiting.

• Start clusterware and databases

[root@dm01db01 patch_19.3.6.0.0.200317]# dcli -g ~/dbs_group -l root '/u01/app/19.0.0.0/grid/bin/crsctl start crs'
10.10.1.9: CRS-4123: Oracle High Availability Services has been started.
10.10.1.10: CRS-4123: Oracle High Availability Services has been started.

[root@dm01db01 patch_19.3.6.0.0.200317]# dcli -g ~/dbs_group -l root '/u01/app/19.0.0.0/grid/bin/crsctl check crs'
10.10.1.9: CRS-4638: Oracle High Availability Services is online
10.10.1.9: CRS-4537: Cluster Ready Services is online
10.10.1.9: CRS-4529: Cluster Synchronization Services is online
10.10.1.9: CRS-4533: Event Manager is online
10.10.1.10: CRS-4638: Oracle High Availability Services is online
10.10.1.10: CRS-4537: Cluster Ready Services is online
10.10.1.10: CRS-4529: Cluster Synchronization Services is online
10.10.1.10: CRS-4533: Event Manager is online

[root@dm01db01 patch_19.3.6.0.0.200317]# /u01/app/19.0.0.0/grid/bin/crsctl stat res -t| more

-----------------------------------------------------------------------------
---
Name Target State Server State details
-----------------------------------------------------------------------------
---
Local Resources
-----------------------------------------------------------------------------
---
ora.LISTENER.lsnr
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.chad
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.net1.network
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.ons
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.proxy_advm
OFFLINE OFFLINE dm01db01 STABLE
OFFLINE OFFLINE dm01db02 STABLE
-----------------------------------------------------------------------------
---
Cluster Resources
-----------------------------------------------------------------------------
---
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.DG_DATA.dg(ora.asmgroup)
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE dm01db01 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE dm01db02 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE dm01db02 STABLE
ora.RECOC1.dg(ora.asmgroup)
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE dm01db01 Started,STABLE
2 ONLINE ONLINE dm01db02 Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.cvu
1 ONLINE ONLINE dm01db02 STABLE

ora.orcldb.db
1 ONLINE ONLINE dm01db01 Open,HOME=/u01/app/o

racle/product/11.2.0

.4/dbhome_1,STABLE
2 ONLINE ONLINE dm01db02 Open,HOME=/u01/app/o

racle/product/11.2.0

.4/dbhome_1,STABLE
ora.nsmdb.db
1 ONLINE ONLINE dm01db01 Open,HOME=/u01/app/o

racle/product/12.2.0

.1/dbhome_1,STABLE
2 ONLINE ONLINE dm01db02 Open,HOME=/u01/app/o

racle/product/12.2.0

.1/dbhome_1,STABLE
ora.dm01db01.vip
1 ONLINE ONLINE dm01db01 STABLE
ora.dm01db02.vip
1 ONLINE ONLINE dm01db02 STABLE
ora.qosmserver
1 ONLINE ONLINE dm01db02 STABLE
ora.scan1.vip
1 ONLINE ONLINE dm01db01 STABLE
ora.scan2.vip
1 ONLINE ONLINE dm01db02 STABLE
ora.scan3.vip
1 ONLINE ONLINE dm01db02 STABLE

• Verify the databases and start them if needed

[root@dm01db01 patch_19.3.6.0.0.200317]# ps -ef|grep pmon


oracle 156983 1 0 01:10 ? 00:00:00 asm_pmon_+ASM1
oracle 163400 1 0 01:10 ? 00:00:00 ora_pmon_nsmdb1
oracle 164918 1 0 01:10 ? 00:00:00 ora_pmon_orcldb1

$ srvctl status database -d orcldb


$ srvctl status database -d nsmdb
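
Once the cluster and databases are back, it is also worth confirming that every grid disk has been brought back online in ASM; a minimal sketch (all disks should eventually report asmmodestatus=ONLINE):

[root@dm01db01 patch_19.3.6.0.0.200317]# dcli -g ~/cell_group -l root "cellcli -e list griddisk attributes name,status,asmmodestatus"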

Exadata RoCE Switch Patching
About RDMA over Converged Ethernet (RoCE)

The Exadata X8M release implements a 100 Gb/sec RoCE network fabric, making the world’s fastest database
machine even faster.

Oracle Exadata Database Machine X8M introduces a brand new high-bandwidth low-latency 100 Gb/sec RDMA
over Converged Ethernet (RoCE) Network Fabric that connects all the components inside an Exadata Database
Machine. Specialized database networking protocols deliver much lower latency and higher bandwidth than is
possible with generic communication protocols for faster response time for OLTP operations and higher
throughput for analytic workloads.

The Exadata X8M release provides the next generation in ultra-fast cloud scale networking fabric, RDMA over
Converged Ethernet (RoCE). RDMA (Remote Direct Memory Access) allows one computer to directly access data
from another without Operating System or CPU involvement, for high bandwidth and low latency. The network
card directly reads/writes memory with no extra copying or buffering and very low latency. RDMA is an integral
part of the Exadata high-performance architecture, and has been tuned and enhanced over the past decade,
underpinning several Exadata-only technologies such as Exafusion Direct-to-Wire Protocol and Smart Fusion Block
Transfer. As the RoCE API infrastructure is identical to InfiniBand’s, all existing Exadata performance features are
available on RoCE.

About RoCE Switch patching

The patchmgr utility is used to upgrade and downgrade the RoCE switches.

• RoCE and IB Switch patches are delivered as part of the same patch
• RoCE Switch patches are released semi-annually to annually
• RoCE Switches can be patched in a rolling fashion only

Steps to Patch RoCE Switch

• Create a file containing the RoCE switch hostnames

[root@dm01db01 ~]# cat roce_list


dm01sw-rocea01
dm01sw-roceb01

• Get the current RoCE Switch software version

[root@dm01db01 ~]# ssh admin@dm01sw-rocea01 show version


User Access Verification

Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (C) 2002-2019, Cisco and/or its affiliates.
All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under their own
licenses, such as open source. This software is provided "as is," and unless
otherwise stated, there is no warranty, express or implied, including but not
limited to warranties of merchantability and fitness for a particular
purpose.
Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or
GNU General Public License (GPL) version 3.0 or the GNU
Lesser General Public License (LGPL) Version 2.1 or
Lesser General Public License (LGPL) Version 2.0.
A copy of each such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://opensource.org/licenses/gpl-3.0.html and
http://www.opensource.org/licenses/lgpl-2.1.php and
http://www.gnu.org/licenses/old-licenses/library.txt.

Software
BIOS: version 05.39
NXOS: version 7.0(3)I7(6)
BIOS compile time: 08/30/2019
NXOS image file is: bootflash:///nxos.7.0.3.I7.6.bin
NXOS compile time: 3/5/2019 13:00:00 [03/05/2019 22:04:55]

Hardware
cisco Nexus9000 C9336C-FX2 Chassis
Intel(R) Xeon(R) CPU D-1526 @ 1.80GHz with 24571632 kB of memory.
Processor Board ID FDO23380VQS

Device name: dm01sw-rocea01


bootflash: 115805708 kB
Kernel uptime is 8 day(s), 3 hour(s), 14 minute(s), 49 second(s)

Last reset at 145297 usecs after Wed Apr 1 09:29:43 2020


Reason: Reset Requested by CLI command reload
System version: 7.0(3)I7(6)
Service:

plugin
Core Plugin, Ethernet Plugin

Active Package(s):

[root@dm01db01 ~]# ssh admin@dm01sw-rocea01 show version | grep "System version:"

User Access Verification
System version: 7.0(3)I7(6)

• Download the RoCE switch software from MOS note 888828.1 and copy it to Exadata compute node 1

[root@dm01db01 ~]# cd /u01/stage/ROCE/

[root@dm01db01 ROCE]# ls -ltr


total 2773832
-rw-r--r-- 1 root root 2840400612 Apr 9 00:42 p30893922_193000_Linux-x86-
64.zip

• Unzip the RoCE patch

[root@dm01db01 ROCE]# unzip p30893922_193000_Linux-x86-64.zip


Archive: p30893922_193000_Linux-x86-64.zip
creating: patch_switch_19.3.6.0.0.200317/
inflating: patch_switch_19.3.6.0.0.200317/dcli
inflating: patch_switch_19.3.6.0.0.200317/exadata.img.hw
inflating: patch_switch_19.3.6.0.0.200317/sundcs_36p_repository_2.2.7_2.pkg
inflating: patch_switch_19.3.6.0.0.200317/imageLogger
inflating:

patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_leaf_switch_multi.c
fg
inflating:
patch_switch_19.3.6.0.0.200317/sundcs_36p_repository_2.2.14_1.pkg
inflating: patch_switch_19.3.6.0.0.200317/README.txt

• Verify the patch directory content after unzip

[root@dm01db01 ROCE]# cd patch_switch_19.3.6.0.0.200317/

[root@dm01db01 patch_switch_19.3.6.0.0.200317]# ls -ltr


total 2794980
-r-xr-x--- 1 root root 50674 Mar 18 05:48 exadata.img.hw
-r--r--r-- 1 root root 8664 Mar 18 05:48 exadata.img.env
-r--r--r-- 1 root root 45349 Mar 18 05:48 imageLogger
-r--r----- 1 root root 6133 Mar 18 05:48 ExaXMLNode.pm
-r--r----- 1 root root 51925 Mar 18 05:48 exadata_img_pylogger.py
-r-xr-xr-x 1 root root 17482 Mar 18 05:48 libxcp.so.1
-r-xr-xr-x 1 root root 4385 Mar 18 05:48 kernelupgrade_oldbios.sh
-r-xr-xr-x 1 root root 176994 Mar 18 05:48 installfw_exadata_ssh
-r-xr-xr-x 1 root root 426 Mar 18 05:48 fwverify
-r-xr-xr-x 1 root root 1570 Mar 18 05:48 ExadataSendNotification.pm

-r-xr-xr-x 1 root root 62499 Mar 18 05:48 ExadataImageNotification.pl
-r-xr-xr-x 1 root root 51616 Mar 18 05:48 dcli
-rw-r--r-- 1 root root 1011037696 Mar 18 05:48 nxos.7.0.3.I7.6.bin
-r-xr-xr-x 1 root root 16544 Mar 18 05:48 patchmgr_functions
-rwxr-xr-x 1 root root 11600 Mar 18 05:48 patch_bug_26678971
-rw-r--r-- 1 root root 975383040 Mar 18 05:48 nxos.7.0.3.I7.7.bin
-r-xr-xr-x 1 root root 171545108 Mar 18 05:48
sundcs_36p_repository_2.2.13_2.pkg
-r-xr-xr-x 1 root root 172863012 Mar 18 05:48
sundcs_36p_repository_2.2.14_1.pkg
-rwxr-xr-x 1 root root 172946493 Mar 18 05:48
sundcs_36p_repository_2.2.7_2.pkg
-rwxr-xr-x 1 root root 172947929 Mar 18 05:48
sundcs_36p_repository_2.2.7_2_signed.pkg
-r-xr-xr-x 1 root root 15001 Mar 18 05:48 xcp
-rwxr-xr-x 1 root root 184111553 Mar 18 05:48
sundcs_36p_repository_upgrade_2.1_to_2.2.7_2.pkg
-r-xr-xr-x 1 root root 168789 Mar 18 06:05 upgradeIBSwitch.sh
drwxr-xr-x 2 root root 103 Mar 18 06:05 roce_switch_templates
drwxr-xr-x 2 root root 98 Mar 18 06:05 roce_switch_api
drwxr-xr-x 6 root root 4096 Mar 18 06:05 ibdiagtools
drwxrwxr-x 3 root root 20 Mar 18 06:05 etc
-r-xr-xr-x 1 root root 457738 Mar 18 06:05 patchmgr
-rw-rw-r-- 1 root root 5156 Mar 18 06:05 md5sum_files.lst
-rwxrwxrwx 1 root root 822 Mar 18 07:15 README.txt

• Navigate to the patch directory and execute the following to get the patch syntax

[root@dm01db01 patch_switch_19.3.6.0.0.200317]# ./patchmgr -h


Usage:
./patchmgr --roceswitches [roceswitch_list_file]
--upgrade [--verify-config [yes|no]] [--roceswitch-precheck] [--
force] |
--downgrade [--verify-config [yes|no]] [--roceswitch-precheck] [-
-force] |
--verify-config [yes|no]
[-log_dir <fullpath> ]

./patchmgr --ibswitches [ibswitch_list_file]


<--upgrade | --downgrade> [--ibswitch_precheck] [--unkey] [--force
[yes|no]]
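
Putting this syntax together, the typical RoCE switch update sequence used in the remainder of this section is: verify the switch configuration, run the upgrade pre-check, and then perform the upgrade itself. A minimal sketch, assuming the same switch list file and log directory used below (the final --upgrade run is performed only after the pre-check completes cleanly):

$ ./patchmgr --roceswitches ~/roce_list --verify-config --log_dir /u01/stage/ROCE
$ ./patchmgr --roceswitches ~/roce_list --upgrade --roceswitch-precheck --log_dir /u01/stage/ROCE
$ ./patchmgr --roceswitches ~/roce_list --upgrade --log_dir /u01/stage/ROCE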

• Execute the following command to perform configuration verification

Note that the patching should be performed by a non-root user. In this case the oracle user is used to perform the
patching.

[root@dm01db01 stage]# chown -R oracle:oinstall ROCE/

[root@dm01db01 stage]# su - oracle


Last login: Thu Apr 9 16:17:25 +03 2020

[oracle@dm01db01 ~]$ cd /u01/stage/ROCE/

[oracle@dm01db01 ROCE]$ ls -ltr


total 2773836
-rw-r--r-- 1 oracle oinstall 2840400612 Apr 9 00:42 p30893922_193000_Linux-
x86-64.zip
drwxrwxr-x 6 oracle oinstall 4096 Apr 9 16:31
patch_switch_19.3.6.0.0.200317
[oracle@dm01db01 ROCE]$ cd patch_switch_19.3.6.0.0.200317/

[oracle@dm01db01 ~]$ vi roce_list


dm01sw-rocea01
dm01sw-roceb01

[oracle@dm01db01 ~]$ cd /u01/stage/ROCE/patch_switch_19.3.6.0.0.200317

[oracle@dm01db01 patch_switch_19.3.6.0.0.200317]$ ./patchmgr --roceswitches ~/roce_list --verify-config --log_dir /u01/stage/ROCE

2020-04-09 16:59:52 +0300 :Working: Initiate config verify on RoCE


switches from . Expect up to 6 minutes for each switch

2020-04-09 16:59:53 +0300 1 of 2 :Verifying config on switch dm01sw-rocea01

2020-04-09 16:59:53 +0300: [INFO ] Dumping current running config


locally as file: /u01/stage/ROCE/run.dm01sw-rocea01.cfg
2020-04-09 16:59:54 +0300: [SUCCESS ] Backed up switch config
successfully
2020-04-09 16:59:54 +0300: [INFO ] Validating running config
against template [1/3]:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 16:59:54 +0300: [INFO ] Config matches template:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 16:59:54 +0300: [SUCCESS ] Config validation successful!

2020-04-09 16:59:54 +0300 2 of 2 :Verifying config on switch dm01sw-roceb01

2020-04-09 16:59:54 +0300: [INFO ] Dumping current running config


locally as file: /u01/stage/ROCE/run.dm01sw-roceb01.cfg
2020-04-09 16:59:55 +0300: [SUCCESS ] Backed up switch config
successfully

2020-04-09 16:59:55 +0300: [INFO ] Validating running config
against template [1/3]:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 16:59:55 +0300: [INFO ] Config matches template:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 16:59:55 +0300: [SUCCESS ] Config validation successful!

2020-04-09 16:59:55 +0300 :SUCCESS: Config check on RoCE switch(es)


2020-04-09 16:59:56 +0300 :SUCCESS: Completed run of command:
./patchmgr --roceswitches /home/oracle/roce_list --verify-config --log_dir
/u01/stage/ROCE
2020-04-09 16:59:56 +0300 :INFO : config attempted on nodes in file
/home/oracle/roce_list: [dm01sw-rocea01 dm01sw-roceb01]
2020-04-09 16:59:56 +0300 :INFO : For details, check the following
files in /u01/stage/ROCE:
2020-04-09 16:59:56 +0300 :INFO : - updateRoceSwitch.log
2020-04-09 16:59:56 +0300 :INFO : - updateRoceSwitch.trc
2020-04-09 16:59:56 +0300 :INFO : - patchmgr.stdout
2020-04-09 16:59:56 +0300 :INFO : - patchmgr.stderr
2020-04-09 16:59:56 +0300 :INFO : - patchmgr.log
2020-04-09 16:59:56 +0300 :INFO : - patchmgr.trc
2020-04-09 16:59:56 +0300 :INFO : Exit status:0
2020-04-09 16:59:56 +0300 :INFO : Exiting.

• Execute the following command to perform the prerequisite checks.

Note that during this step patchmgr will prompt you to set up SSH equivalency between the oracle user and each RoCE switch. Enter the admin user password of the RoCE switch when prompted.
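
Before running the precheck, you can optionally confirm that each switch is reachable and whether passwordless SSH for the oracle user is already in place. This is a minimal sketch (not part of the original run) that uses the switch names from this environment and relies on NX-OS accepting a show command over SSH, as used later in this document:

[oracle@dm01db01 patch_switch_19.3.6.0.0.200317]$ for sw in dm01sw-rocea01 dm01sw-roceb01; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 admin@$sw show clock >/dev/null 2>&1 \
    && echo "$sw: reachable, passwordless SSH already set up" \
    || echo "$sw: no passwordless SSH (or switch unreachable); patchmgr will prompt for the admin password"
done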

[oracle@dm01db01 patch_switch_19.3.6.0.0.200317]$ ./patchmgr --roceswitches ~/roce_list --upgrade --roceswitch-precheck --log_dir /u01/stage/ROCE

[NOTE ] Password equivalency is NOT setup for user 'oracle' to dm01sw-


rocea01 from 'dm01db01.netsoftmate.com'. Set it up? (y/n): y

enter switch 'admin' password:

checking if 'dm01sw-rocea01' is reachable... [OK]


setting up SSH equivalency for 'oracle' from dm01db01.netsoftmate.com to
'dm01sw-rocea01'... [OK]

[NOTE ] Password equivalency is NOT setup for user 'oracle' to dm01sw-


roceb01 from 'dm01db01.netsoftmate.com'. Set it up? (y/n): y

enter switch 'admin' password:

checking if 'dm01sw-roceb01' is reachable... [OK]
setting up SSH equivalency for 'oracle' from dm01db01.netsoftmate.com to
'dm01sw-roceb01'... [OK]
2020-04-09 16:47:46 +0300 :Working: Initiate pre-upgrade validation
check on 2 RoCE switch(es).

2020-04-09 16:47:47 +0300 1 of 2 :Updating switch dm01sw-rocea01

2020-04-09 16:47:49 +0300: [INFO ] Switch dm01sw-rocea01 will be


upgraded from nxos.7.0.3.I7.6.bin to nxos.7.0.3.I7.7.bin
2020-04-09 16:47:49 +0300: [INFO ] Checking for free disk space on
switch
2020-04-09 16:47:50 +0300: [INFO ] disk is 96.00% free,
available: 112371744768 bytes
2020-04-09 16:47:50 +0300: [SUCCESS ] There is enough disk space to
proceed
2020-04-09 16:47:52 +0300: [INFO ] Copying nxos.7.0.3.I7.7.bin
onto dm01sw-rocea01 (eta: 1-5 minutes)
2020-04-09 16:50:40 +0300: [SUCCESS ] Finished copying image to
switch
2020-04-09 16:50:40 +0300: [INFO ] Verifying sha256sum of bin file
on switch
2020-04-09 16:50:54 +0300: [SUCCESS ] sha256sum matches:
dce664f1a90927e9dbd86419681d138d3a7a83c5ea7222718c3f6565488ac6d0
2020-04-09 16:50:54 +0300: [INFO ] Performing FW install pre-check
of nxos.7.0.3.I7.7.bin (eta: 2-3 minutes)
2020-04-09 16:52:55 +0300: [SUCCESS ] FW install pre-check completed
successfully

2020-04-09 16:52:55 +0300 2 of 2 :Updating switch dm01sw-roceb01

2020-04-09 16:58:26 +0300: [INFO ] Dumping current running config


locally as file: /u01/stage/ROCE/run.dm01sw-roceb01.cfg
2020-04-09 16:58:27 +0300: [SUCCESS ] Backed up switch config
successfully
2020-04-09 16:58:27 +0300: [INFO ] Validating running config
against template [1/3]:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 16:58:27 +0300: [INFO ] Config matches template:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 16:58:27 +0300: [SUCCESS ] Config validation successful!

2020-04-09 16:58:27 +0300 :SUCCESS: Config check on RoCE switch(es)

2020-04-09 16:58:27 +0300 :SUCCESS: Initiate pre-upgrade validation


check on RoCE switch(es).
2020-04-09 16:58:27 +0300 :SUCCESS: Completed run of command:
./patchmgr --roceswitches /home/oracle/roce_list --upgrade --roceswitch-
precheck --log_dir /u01/stage/ROCE

2020-04-09 16:58:27 +0300 :INFO : upgrade attempted on nodes in file
/home/oracle/roce_list: [dm01sw-rocea01 dm01sw-roceb01]
2020-04-09 16:58:27 +0300 :INFO : For details, check the following
files in /u01/stage/ROCE:
2020-04-09 16:58:27 +0300 :INFO : - updateRoceSwitch.log
2020-04-09 16:58:27 +0300 :INFO : - updateRoceSwitch.trc
2020-04-09 16:58:27 +0300 :INFO : - patchmgr.stdout
2020-04-09 16:58:27 +0300 :INFO : - patchmgr.stderr
2020-04-09 16:58:27 +0300 :INFO : - patchmgr.log
2020-04-09 16:58:27 +0300 :INFO : - patchmgr.trc
2020-04-09 16:58:27 +0300 :INFO : Exit status:0
2020-04-09 16:58:27 +0300 :INFO : Exiting.

• Execute the following command to perform the RoCE switch upgrade

[oracle@dm01db01 patch_switch_19.3.6.0.0.200317]$ ./patchmgr --roceswitches ~/roce_list --upgrade --log_dir /u01/stage/ROCE

[NOTE ] Password equivalency is NOT setup for user 'oracle' to dm01sw-


rocea01 from 'dm01db01.netsoftmate.com'. Set it up? (y/n): y

enter switch 'admin' password:

checking if 'dm01sw-rocea01' is reachable... [OK]


setting up SSH equivalency for 'oracle' from dm01db01.netsoftmate.com to
'dm01sw-rocea01'... [OK]

[NOTE ] Password equivalency is NOT setup for user 'oracle' to dm01sw-


roceb01 from 'dm01db01.netsoftmate.com'. Set it up? (y/n): y

enter switch 'admin' password:

checking if 'dm01sw-roceb01' is reachable... [OK]


setting up SSH equivalency for 'oracle' from dm01db01.netsoftmate.com to
'dm01sw-roceb01'... [OK]
2020-04-09 17:02:26 +0300 :Working: Initiate upgrade of 2 RoCE
switches to 7.0(3)I7(7) Expect up to 15 minutes for each switch

2020-04-09 17:02:26 +0300 1 of 2 :Updating switch dm01sw-rocea01

2020-04-09 17:02:28 +0300: [INFO ] Switch dm01sw-rocea01 will be


upgraded from nxos.7.0.3.I7.6.bin to nxos.7.0.3.I7.7.bin
2020-04-09 17:02:28 +0300: [INFO ] Checking for free disk space on
switch
2020-04-09 17:02:28 +0300: [INFO ] disk is 95.00% free,
available: 111395401728 bytes

2020-04-09 17:02:28 +0300: [SUCCESS ] There is enough disk space to
proceed
2020-04-09 17:02:29 +0300: [INFO ] Found nxos.7.0.3.I7.7.bin on
switch, skipping download
2020-04-09 17:02:29 +0300: [INFO ] Verifying sha256sum of bin file
on switch
2020-04-09 17:02:43 +0300: [SUCCESS ] sha256sum matches:
dce664f1a90927e9dbd86419681d138d3a7a83c5ea7222718c3f6565488ac6d0
2020-04-09 17:02:43 +0300: [INFO ] Performing FW install pre-check
of nxos.7.0.3.I7.7.bin (eta: 2-3 minutes)
2020-04-09 17:04:44 +0300: [SUCCESS ] FW install pre-check completed
successfully
2020-04-09 17:04:44 +0300: [INFO ] Performing FW install of
nxos.7.0.3.I7.7.bin on dm01sw-rocea01 (eta: 3-7 minutes)
2020-04-09 17:09:51 +0300: [SUCCESS ] FW install completed
2020-04-09 17:09:51 +0300: [INFO ] Waiting for switch to come back
online (eta: 6-8 minutes)
2020-04-09 17:17:51 +0300: [INFO ] Verifying if FW install is
successful
2020-04-09 17:17:53 +0300: [SUCCESS ] dm01sw-rocea01 has been
successfully upgraded to nxos.7.0.3.I7.7.bin!

2020-04-09 17:17:53 +0300 2 of 2 :Updating switch dm01sw-roceb01

2020-04-09 17:17:56 +0300: [INFO ] Switch dm01sw-roceb01 will be


upgraded from nxos.7.0.3.I7.6.bin to nxos.7.0.3.I7.7.bin
2020-04-09 17:17:56 +0300: [INFO ] Checking for free disk space on
switch
2020-04-09 17:17:57 +0300: [INFO ] disk is 95.00% free,
available: 111542112256 bytes
2020-04-09 17:17:57 +0300: [SUCCESS ] There is enough disk space to
proceed
2020-04-09 17:17:58 +0300: [INFO ] Found nxos.7.0.3.I7.7.bin on
switch, skipping download
2020-04-09 17:17:58 +0300: [INFO ] Verifying sha256sum of bin file
on switch
2020-04-09 17:18:12 +0300: [SUCCESS ] sha256sum matches:
dce664f1a90927e9dbd86419681d138d3a7a83c5ea7222718c3f6565488ac6d0
2020-04-09 17:18:12 +0300: [INFO ] Performing FW install pre-check
of nxos.7.0.3.I7.7.bin (eta: 2-3 minutes)
2020-04-09 17:20:12 +0300: [SUCCESS ] FW install pre-check completed
successfully
2020-04-09 17:20:12 +0300: [INFO ] Checking if previous switch
dm01sw-rocea01 is fully up before proceeding (attempt 1 of 3)
2020-04-09 17:20:13 +0300: [SUCCESS ] dm01sw-rocea01 switch is fully
up and running
2020-04-09 17:20:13 +0300: [INFO ] Performing FW install of
nxos.7.0.3.I7.7.bin on dm01sw-roceb01 (eta: 3-7 minutes)
2020-04-09 17:23:20 +0300: [SUCCESS ] FW install completed
2020-04-09 17:23:20 +0300: [INFO ] Waiting for switch to come back
online (eta: 6-8 minutes)

2020-04-09 17:31:20 +0300: [INFO ] Verifying if FW install is
successful
2020-04-09 17:31:22 +0300: [SUCCESS ] dm01sw-roceb01 has been
successfully upgraded to nxos.7.0.3.I7.7.bin!
2020-04-09 17:31:22 +0300 :Working: Initiate config verify on RoCE
switches from . Expect up to 6 minutes for each switch

2020-04-09 17:31:25 +0300 1 of 2 :Verifying config on switch dm01sw-rocea01

2020-04-09 17:31:25 +0300: [INFO ] Dumping current running config


locally as file: /u01/stage/ROCE/run.dm01sw-rocea01.cfg
2020-04-09 17:31:26 +0300: [SUCCESS ] Backed up switch config
successfully
2020-04-09 17:31:26 +0300: [INFO ] Validating running config
against template [1/3]:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 17:31:26 +0300: [INFO ] Config matches template:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 17:31:26 +0300: [SUCCESS ] Config validation successful!

2020-04-09 17:31:26 +0300 2 of 2 :Verifying config on switch dm01sw-roceb01

2020-04-09 17:31:26 +0300: [INFO ] Dumping current running config


locally as file: /u01/stage/ROCE/run.dm01sw-roceb01.cfg
2020-04-09 17:31:27 +0300: [SUCCESS ] Backed up switch config
successfully
2020-04-09 17:31:27 +0300: [INFO ] Validating running config
against template [1/3]:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 17:31:27 +0300: [INFO ] Config matches template:
/u01/stage/ROCE/patch_switch_19.3.6.0.0.200317/roce_switch_templates/roce_lea
f_switch.cfg
2020-04-09 17:31:27 +0300: [SUCCESS ] Config validation successful!

2020-04-09 17:31:27 +0300 :SUCCESS: Config check on RoCE switch(es)

2020-04-09 17:31:27 +0300 :SUCCESS: upgrade 2 RoCE switch(es) to


7.0(3)I7(7)
2020-04-09 17:31:27 +0300 :SUCCESS: Completed run of command:
./patchmgr --roceswitches /home/oracle/roce_list --upgrade --log_dir
/u01/stage/ROCE
2020-04-09 17:31:27 +0300 :INFO : upgrade attempted on nodes in file
/home/oracle/roce_list: [dm01sw-rocea01 dm01sw-roceb01]
2020-04-09 17:31:27 +0300 :INFO : For details, check the following
files in /u01/stage/ROCE:
2020-04-09 17:31:27 +0300 :INFO : - updateRoceSwitch.log
2020-04-09 17:31:27 +0300 :INFO : - updateRoceSwitch.trc

2020-04-09 17:31:27 +0300 :INFO : - patchmgr.stdout
2020-04-09 17:31:27 +0300 :INFO : - patchmgr.stderr
2020-04-09 17:31:27 +0300 :INFO : - patchmgr.log
2020-04-09 17:31:27 +0300 :INFO : - patchmgr.trc
2020-04-09 17:31:27 +0300 :INFO : Exit status:0
2020-04-09 17:31:27 +0300 :INFO : Exiting.

• Verify the new patch version on both RoCE switches

[oracle@dm01db01 patch_switch_19.3.6.0.0.200317]$ ssh admin@dm01sw-rocea01 show version
The authenticity of host 'dm01sw-rocea01 (dm01sw-rocea01)' can't be
established.
RSA key fingerprint is SHA256:N3/OT3xe4A8xi1zd+bkTfDyqE6yibk2zVlhXHvCk/Jk.
RSA key fingerprint is MD5:c4:1f:ef:f5:f5:ab:f1:29:c0:de:42:19:0e:f3:14:8c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'dm01sw-rocea01' (RSA) to the list of known hosts.
User Access Verification
Password:
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (C) 2002-2019, Cisco and/or its affiliates.
All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under their own
licenses, such as open source. This software is provided "as is," and unless
otherwise stated, there is no warranty, express or implied, including but not
limited to warranties of merchantability and fitness for a particular
purpose.
Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or
GNU General Public License (GPL) version 3.0 or the GNU
Lesser General Public License (LGPL) Version 2.1 or
Lesser General Public License (LGPL) Version 2.0.
A copy of each such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://opensource.org/licenses/gpl-3.0.html and
http://www.opensource.org/licenses/lgpl-2.1.php and
http://www.gnu.org/licenses/old-licenses/library.txt.

Software
BIOS: version 05.39
NXOS: version 7.0(3)I7(7)
BIOS compile time: 08/30/2019
NXOS image file is: bootflash:///nxos.7.0.3.I7.7.bin
NXOS compile time: 3/5/2019 13:00:00 [03/05/2019 22:04:55]

Hardware

cisco Nexus9000 C9336C-FX2 Chassis
Intel(R) Xeon(R) CPU D-1526 @ 1.80GHz with 24571632 kB of memory.
Processor Board ID FDO23380VQS

Device name: dm01sw-rocea01


bootflash: 115805708 kB
Kernel uptime is 8 day(s), 5 hour(s), 1 minute(s), 41 second(s)

Last reset at 145297 usecs after Wed Apr 1 09:29:43 2020


Reason: Reset Requested by CLI command reload
System version: 7.0(3)I7(7)
Service:

plugin
Core Plugin, Ethernet Plugin

Active Package(s):

[oracle@dm01db01 patch_switch_19.3.6.0.0.200317]$ ssh admin@dm01sw-rocea01 show version | grep "System version:"
User Access Verification
Password:
System version: 7.0(3)I7(7)
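
Both switches can also be checked in a single pass. A small sketch, assuming the SSH equivalency configured during patching is still in place:

[oracle@dm01db01 ~]$ for sw in dm01sw-rocea01 dm01sw-roceb01; do
  echo -n "$sw: "; ssh admin@$sw show version | grep "System version:"
done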

Exadata Infiniband Switch Patching


About Infiniband Switch

• The Exadata network grid consists of two Sun QDR InfiniBand leaf switches
• IB Switches are used for the storage network as well as the Oracle RAC interconnect
• Exadata compute nodes and storage cells are configured with dual-port InfiniBand ports and connect to each of the two leaf switches
• You can access IB Switches using the command line and the ILOM web interface
• IB Switches run the Linux operating system

About Infiniband Switch patching

Starting with release 11.2.3.3.0, the patchmgr utility is used to upgrade and downgrade the InfiniBand switches.

• IB Switch and RoCE switch patches are delivered as part of the same patch
• IB Switch patches are released semi-annually to annually
• IB Switches can be patched in a rolling fashion only

Steps to Patch Infiniband Switch
• Identify the number of switches in the cluster.

[root@dm01db01 ~]# ibswitches

• Identify the current IB switch software version on all the Switches

[root@dm01db01 ~]# ssh dm01sw-iba01 version

• Log in to Exadata Compute node 1 as the root user and navigate to the Exadata Storage Software staging area

[root@dm01db01 ]# cd /u01/stage/ROCE/

[root@dm01db01 ROCE]# ls -ltr


total 2773832
-rw-r--r-- 1 root root 2840400612 Apr 9 00:42 p30893922_193000_Linux-x86-
64.zip

[root@dm01db01 ROCE]# unzip p30893922_193000_Linux-x86-64.zip

[root@dm01db01 ROCE]# cd patch_switch_19.3.6.0.0.200317/

• Create a file (here ~/ibswitch_group) and enter the IB switch names, one per line, as follows:

[root@dm01db01 patch_switch_19.3.6.0.0.200317]# vi ~/ibswitch_group

[root@dm01db01 patch_switch_19.3.6.0.0.200317]# cat ~/ibswitch_group


dm01sw-ibb01
dm01sw-iba01

• Execute the following to perform the IB Switch precheck

[root@dm01db01 patch_switch_19.3.6.0.0.200317]# ./patchmgr -ibswitches ~/ibswitch_group -upgrade -ibswitch_precheck

• Upgrade the IB Switches using the following command:

[root@dm01db01 patch_switch_19.3.6.0.0.200317]# ./patchmgr -ibswitches ~/ibswitch_group -upgrade

• Verify that all the IB Switches are upgraded to the latest version.

[root@dm01db01 ~]# ssh dm01sw-ibb01 version

[root@dm01db01 ~]# ssh dm01sw-iba01 version
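
As with the RoCE switches, a short loop can collect the version from both IB switches in one pass (a sketch assuming the same root SSH access used by the commands above):

[root@dm01db01 ~]# for sw in dm01sw-iba01 dm01sw-ibb01; do
  echo "== $sw =="; ssh $sw version
done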

Exadata Compute node Patching


About Compute node Patching

The patchmgr or dbnodeupdate.sh utility can be used for upgrading, rolling back, and backing up Exadata Compute nodes. The patchmgr utility can upgrade Compute nodes in a rolling or non-rolling fashion. Compute node patches apply operating system, firmware, and driver updates.

Launch patchmgr from compute node 1, which has root user SSH equivalence set up to all the Compute nodes. Patch all the compute nodes except node 1 first, then patch node 1 separately from another node. The general command form is sketched below.
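
For reference, the general form of the compute node patchmgr calls used in the rest of this section is shown here. The node list file, ISO zip, and target version are placeholders, and the optional -rolling flag (not used in this run) requests a rolling update:

./patchmgr -dbnodes <dbnode_list_file> -precheck -iso_repo <db_node_update.zip> -target_version <version>
./patchmgr -dbnodes <dbnode_list_file> -backup -iso_repo <db_node_update.zip> -target_version <version>
./patchmgr -dbnodes <dbnode_list_file> -upgrade -iso_repo <db_node_update.zip> -target_version <version> [-rolling]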

Prerequisites

• Install and configure VNC Server on Exadata compute node 1. It is recommended to use VNC or the screen utility for patching to avoid disconnections due to network issues; see the sketch below.
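
A minimal screen example, assuming the screen package is installed on the node:

[root@dm01db01 ~]# screen -S exadata_patching    # start a named session and run patchmgr inside it
# Detach with Ctrl-a d; after a disconnection, log back in and reattach with:
[root@dm01db01 ~]# screen -r exadata_patching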

• Enable blackouts (OEM, crontab, and so on); a simple crontab example follows.
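
For cron jobs, one simple approach (assuming all root cron entries should be paused for the patching window) is to save and clear the crontab, then restore it afterwards:

[root@dm01db01 ~]# crontab -l > /root/crontab.bak.$(date +%Y%m%d)    # save the current entries
[root@dm01db01 ~]# crontab -r                                        # clear the crontab for the patching window
# After patching completes, restore it from the saved copy:
[root@dm01db01 ~]# crontab /root/crontab.bak.<date>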

• Verify disk space on Compute nodes

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root 'df -h /'


10.10.1.9: Filesystem Size Used Avail Use% Mounted on
10.10.1.9: /dev/mapper/VGExaDb-LVDbSys1 15G 6.0G 9.1G 40% /
10.10.1.10: Filesystem Size Used Avail Use% Mounted on
10.10.1.10: /dev/mapper/VGExaDb-LVDbSys1 15G 6.1G 9.0G 41% /

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root 'df -h /u01'


10.10.1.9: Filesystem Size Used Avail Use% Mounted on
10.10.1.9: /dev/mapper/VGExaDb-LVDbOra1 100G 100G 840M 100% /u01
10.10.1.10: Filesystem Size Used Avail Use% Mounted on
10.10.1.10: /dev/mapper/VGExaDb-LVDbOra1 100G 56G 45G 56% /u01

• Run Exachk before starting the actual patching. Correct any critical issues and failures that conflict with patching.

• Verify hardware status. Make sure there are no hardware failures before patching

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root 'dbmcli -e list physicaldisk where status!=normal'

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root 'ipmitool sunoem cli "show -d properties -level all /SYS fault_state==Faulted"'

• Clear or acknowledge alerts on db and cell nodes

[root@dm01db01 ~]# dcli -l root -g ~/dbs_group "dbmcli -e drop alerthistory all"

• Download the patches and copy them to all the compute nodes

[root@dm01db01 ~]# cd /u01/stage/

[root@dm01db01 stage]# cd DBS/

[root@dm01db01 DBS]# ls -ltr


total 1887408
-rw-r--r-- 1 root root 438818890 Apr 1 18:16 p21634633_193600_Linux-x86-
64.zip
-rw-r--r-- 1 root root 1493881603 Apr 6 10:23 p30893918_193000_Linux-x86-
64.zip

[root@dm01db01 DBS]# dcli -g ~/dbs_group -l root 'mkdir -p /u01/stage/DBS'

[root@dm01db01 DBS]# dcli -g ~/dbs_group -l root -d /u01/stage/DBS -f p21634633_193600_Linux-x86-64.zip

[root@dm01db01 DBS]# dcli -g ~/dbs_group -l root -d /u01/stage/DBS -f p30893918_193000_Linux-x86-64.zip

[root@dm01db01 DBS]# dcli -g ~/dbs_group -l root ls -ltr /u01/stage/DBS


10.10.1.9: total 1887408
10.10.1.9: -rw-r--r-- 1 root root 438818890 Apr 10 14:18
p21634633_193600_Linux-x86-64.zip
10.10.1.9: -rw-r--r-- 1 root root 1493881603 Apr 10 14:18
p30893918_193000_Linux-x86-64.zip
10.10.1.10: total 1887408

10.10.1.10: -rw-r--r-- 1 root root 438818890 Apr 10 14:18
p21634633_193600_Linux-x86-64.zip
10.10.1.10: -rw-r--r-- 1 root root 1493881603 Apr 10 14:18
p30893918_193000_Linux-x86-64.zip
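
Optionally, verify that the copies are intact on every node before unzipping. This quick check is not part of the original run; the checksum for each zip should match across nodes:

[root@dm01db01 DBS]# dcli -g ~/dbs_group -l root 'md5sum /u01/stage/DBS/*.zip'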

[root@dm01db01 DBS]# unzip p21634633_193600_Linux-x86-64.zip

[root@dm01db01 DBS]# cd dbserver_patch_19.200331/

[root@dm01db01 dbserver_patch_19.200331]# unzip dbnodeupdate.zip

• Read the README file and the Exadata documentation for the detailed patching steps

Steps to Patch Compute nodes


• Unmount all external file systems on all Compute nodes

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root umount /zfssa/dm01/backup1
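
To confirm that no external file systems remain mounted, a quick check such as the following can be used (a sketch; empty output means no NFS mounts are left):

[root@dm01db01 ~]# dcli -g ~/dbs_group -l root 'mount -t nfs,nfs4'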

• Get the current version

[root@dm01db01 dbserver_patch_19.200331]# imageinfo

Kernel version: 4.14.35-1902.5.1.4.el7uek.x86_64 #2 SMP Wed Oct 9 19:29:16


PDT 2019 x86_64
Image kernel version: 4.14.35-1902.5.1.4.el7uek
Image version: 19.3.2.0.0.191119
Image activated: 2020-01-02 21:15:00 -0800
Image status: success
Node type: COMPUTE
System partition on device: /dev/mapper/VGExaDb-LVDbSys1

• Perform the precheck on all nodes except node 1

[root@dm01db01 dbserver_patch_19.200331]# vi ~/dbs_group-1

[root@dm01db01 dbserver_patch_19.200331]# cat ~/dbs_group-1


10.10.1.10

[root@dm01db01 dbserver_patch_19.200331]# ./patchmgr -dbnodes ~/dbs_group-1 -precheck -iso_repo /u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version 19.3.6.0.0.200317

*****************************************************************************
*******************************
NOTE patchmgr release: 19.200331 (always check MOS 1553103.1 for the
latest release of dbserver.patch.zip)
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
*****************************************************************************
*******************************
2020-04-10 14:21:37 +0300 :Working: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:21:37 +0300 :SUCCESS: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:21:38 +0300 :Working: Initiate precheck on 1 node(s)
2020-04-10 14:25:32 +0300 :Working: Check free space on 10.10.1.10
2020-04-10 14:25:35 +0300 :SUCCESS: Check free space on 10.10.1.10
2020-04-10 14:26:04 +0300 :Working: dbnodeupdate.sh running a precheck
on node(s).
2020-04-10 14:27:26 +0300 :SUCCESS: Initiate precheck on node(s).
2020-04-10 14:27:27 +0300 :SUCCESS: Completed run of command:
./patchmgr -dbnodes /root/dbs_group-1 -precheck -iso_repo
/u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version
19.3.6.0.0.200317
2020-04-10 14:27:27 +0300 :INFO : Precheck attempted on nodes in
file /root/dbs_group-1: [10.10.1.10]
2020-04-10 14:27:27 +0300 :INFO : Current image version on dbnode(s)
is:
2020-04-10 14:27:27 +0300 :INFO : 10.10.1.10: 19.3.2.0.0.191119
2020-04-10 14:27:27 +0300 :INFO : For details, check the following
files in /u01/stage/DBS/dbserver_patch_19.200331:
2020-04-10 14:27:27 +0300 :INFO : - <dbnode_name>_dbnodeupdate.log
2020-04-10 14:27:27 +0300 :INFO : - patchmgr.log
2020-04-10 14:27:27 +0300 :INFO : - patchmgr.trc
2020-04-10 14:27:27 +0300 :INFO : Exit status:0
2020-04-10 14:27:27 +0300 :INFO : Exiting.

• Perform compute node backup

[root@dm01db01 dbserver_patch_19.200331]# ./patchmgr -dbnodes ~/dbs_group-1 -backup -iso_repo /u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version 19.3.6.0.0.200317

*****************************************************************************
*******************************
NOTE patchmgr release: 19.200331 (always check MOS 1553103.1 for the
latest release of dbserver.patch.zip)
NOTE

WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
*****************************************************************************
*******************************
2020-04-10 14:30:16 +0300 :Working: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:30:16 +0300 :SUCCESS: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:30:17 +0300 :Working: Initiate backup on 1 node(s).
2020-04-10 14:30:17 +0300 :Working: Check free space on 10.10.1.10
2020-04-10 14:30:19 +0300 :SUCCESS: Check free space on 10.10.1.10
2020-04-10 14:30:28 +0300 :Working: dbnodeupdate.sh running a backup
on node(s).
2020-04-10 14:32:51 +0300 :SUCCESS: Initiate backup on node(s).
2020-04-10 14:32:51 +0300 :SUCCESS: Initiate backup on 1 node(s).
2020-04-10 14:32:51 +0300 :SUCCESS: Completed run of command:
./patchmgr -dbnodes /root/dbs_group-1 -backup -iso_repo
/u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version
19.3.6.0.0.200317
2020-04-10 14:32:51 +0300 :INFO : Backup attempted on nodes in file
/root/dbs_group-1: [10.10.1.10]
2020-04-10 14:32:51 +0300 :INFO : Current image version on dbnode(s)
is:
2020-04-10 14:32:51 +0300 :INFO : 10.10.1.10: 19.3.2.0.0.191119
2020-04-10 14:32:51 +0300 :INFO : For details, check the following
files in /u01/stage/DBS/dbserver_patch_19.200331:
2020-04-10 14:32:51 +0300 :INFO : - <dbnode_name>_dbnodeupdate.log
2020-04-10 14:32:51 +0300 :INFO : - patchmgr.log
2020-04-10 14:32:51 +0300 :INFO : - patchmgr.trc
2020-04-10 14:32:51 +0300 :INFO : Exit status:0
2020-04-10 14:32:51 +0300 :INFO : Exiting.

• Execute compute node upgrade

[root@dm01db01 dbserver_patch_19.200331]# ./patchmgr -dbnodes ~/dbs_group-1 -upgrade -iso_repo /u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version 19.3.6.0.0.200317

*****************************************************************************
*******************************
NOTE patchmgr release: 19.200331 (always check MOS 1553103.1 for the
latest release of dbserver.patch.zip)
NOTE
NOTE Database nodes will reboot during the update process.
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.

WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
*****************************************************************************
*******************************
2020-04-10 14:35:18 +0300 :Working: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:35:19 +0300 :SUCCESS: Verify SSH equivalence for the
root user to 10.10.1.10
2020-04-10 14:35:19 +0300 :Working: Initiate prepare steps on node(s).
2020-04-10 14:35:20 +0300 :Working: Check free space on 10.10.1.10
2020-04-10 14:35:23 +0300 :SUCCESS: Check free space on 10.10.1.10
2020-04-10 14:35:55 +0300 :SUCCESS: Initiate prepare steps on node(s).
2020-04-10 14:35:55 +0300 :Working: Initiate update on 1 node(s).
2020-04-10 14:35:55 +0300 :Working: dbnodeupdate.sh running a backup
on 1 node(s).
2020-04-10 14:38:27 +0300 :SUCCESS: dbnodeupdate.sh running a backup
on 1 node(s).
2020-04-10 14:38:27 +0300 :Working: Initiate update on node(s)
2020-04-10 14:38:27 +0300 :Working: Get information about any required
OS upgrades from node(s).
2020-04-10 14:38:37 +0300 :SUCCESS: Get information about any required
OS upgrades from node(s).
2020-04-10 14:38:37 +0300 :Working: dbnodeupdate.sh running an update
step on all nodes.
2020-04-10 14:49:06 +0300 :INFO : 10.10.1.10 is ready to reboot.
2020-04-10 14:49:06 +0300 :SUCCESS: dbnodeupdate.sh running an update
step on all nodes.
2020-04-10 14:49:11 +0300 :Working: Initiate reboot on node(s)
2020-04-10 14:49:16 +0300 :SUCCESS: Initiate reboot on node(s)
2020-04-10 14:49:17 +0300 :Working: Waiting to ensure 10.10.1.10 is
down before reboot.
2020-04-10 14:50:59 +0300 :SUCCESS: Waiting to ensure 10.10.1.10 is
down before reboot.
2020-04-10 14:50:59 +0300 :Working: Waiting to ensure 10.10.1.10 is up
after reboot.
2020-04-10 14:51:41 +0300 :SUCCESS: Waiting to ensure 10.10.1.10 is up
after reboot.
2020-04-10 14:51:41 +0300 :Working: Waiting to connect to 10.10.1.10
with SSH. During Linux upgrades this can take some time.
2020-04-10 15:10:38 +0300 :SUCCESS: Waiting to connect to 10.10.1.10
with SSH. During Linux upgrades this can take some time.
2020-04-10 15:10:38 +0300 :Working: Wait for 10.10.1.10 is ready for
the completion step of update.
2020-04-10 15:10:39 +0300 :SUCCESS: Wait for 10.10.1.10 is ready for
the completion step of update.
2020-04-10 15:10:39 +0300 :Working: Initiate completion step from
dbnodeupdate.sh on node(s)
2020-04-10 15:16:54 +0300 :SUCCESS: Initiate completion step from
dbnodeupdate.sh on 10.10.1.10
2020-04-10 15:17:07 +0300 :SUCCESS: Initiate update on node(s).
2020-04-10 15:17:07 +0300 :SUCCESS: Initiate update on 0 node(s).

[INFO ] Collected dbnodeupdate diag in file:
Diag_patchmgr_dbnode_upgrade_100420143517.tbz
-rw-r--r-- 1 root root 2866298 Apr 10 15:17
Diag_patchmgr_dbnode_upgrade_100420143517.tbz
2020-04-10 15:17:08 +0300 :SUCCESS: Completed run of command:
./patchmgr -dbnodes /root/dbs_group-1 -upgrade -iso_repo
/u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version
19.3.6.0.0.200317
2020-04-10 15:17:08 +0300 :INFO : Upgrade attempted on nodes in file
/root/dbs_group-1: [10.10.1.10]
2020-04-10 15:17:08 +0300 :INFO : Current image version on dbnode(s)
is:
2020-04-10 15:17:08 +0300 :INFO : 10.10.1.10: 19.3.6.0.0.200317
2020-04-10 15:17:08 +0300 :INFO : For details, check the following
files in /u01/stage/DBS/dbserver_patch_19.200331:
2020-04-10 15:17:08 +0300 :INFO : - <dbnode_name>_dbnodeupdate.log
2020-04-10 15:17:08 +0300 :INFO : - patchmgr.log
2020-04-10 15:17:08 +0300 :INFO : - patchmgr.trc
2020-04-10 15:17:08 +0300 :INFO : Exit status:0
2020-04-10 15:17:08 +0300 :INFO : Exiting.

• Now patch node 1 from another node in the cluster. In this case node 2 is used to patch node 1.

[root@dm01db02 ~]# cat dbs_group-1


10.10.1.9

[root@dm01db02 ~]# dcli -g dbs_group-1 -l root uptime


10.10.1.9: 15:21:09 up 2 days, 2:54, 2 users, load average: 0.70, 0.49,
0.54
[root@dm01db02 ~]# dcli -g dbs_group-1 -l root hostname
10.10.1.9: dm01db01.netsoftmate.com

[root@dm01db02 ~]# cd /u01/stage/DBS/

[root@dm01db02 DBS]# ls -ltr


total 1887408
-rw-r--r-- 1 root root 438818890 Apr 10 14:18 p21634633_193600_Linux-x86-
64.zip
-rw-r--r-- 1 root root 1493881603 Apr 10 14:18 p30893918_193000_Linux-x86-
64.zip

[root@dm01db02 DBS]# unzip p21634633_193600_Linux-x86-64.zip

[root@dm01db02 DBS]# cd dbserver_patch_19.200331/

[root@dm01db02 dbserver_patch_19.200331]# unzip dbnodeupdate.zip

[root@dm01db02 dbserver_patch_19.200331]# ./patchmgr -dbnodes ~/dbs_group-1 -precheck -iso_repo /u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version 19.3.6.0.0.200317

[root@dm01db02 dbserver_patch_19.200331]# ./patchmgr -dbnodes ~/dbs_group-1 -backup -iso_repo /u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version 19.3.6.0.0.200317

[root@dm01db02 dbserver_patch_19.200331]# ./patchmgr -dbnodes ~/dbs_group-1 -upgrade -iso_repo /u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version 19.3.6.0.0.200317

*****************************************************************************
*******************************
NOTE patchmgr release: 19.200331 (always check MOS 1553103.1 for the
latest release of dbserver.patch.zip)
NOTE
NOTE Database nodes will reboot during the update process.
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
*****************************************************************************
*******************************
2020-04-10 15:33:01 +0300 :Working: Verify SSH equivalence for the
root user to 10.10.1.9
2020-04-10 15:33:01 +0300 :SUCCESS: Verify SSH equivalence for the
root user to 10.10.1.9
2020-04-10 15:33:02 +0300 :Working: Initiate prepare steps on node(s).
2020-04-10 15:33:03 +0300 :Working: Check free space on 10.10.1.9
2020-04-10 15:33:05 +0300 :SUCCESS: Check free space on 10.10.1.9
2020-04-10 15:33:37 +0300 :SUCCESS: Initiate prepare steps on node(s).
2020-04-10 15:33:37 +0300 :Working: Initiate update on 1 node(s).
2020-04-10 15:33:37 +0300 :Working: dbnodeupdate.sh running a backup
on 1 node(s).
2020-04-10 15:36:19 +0300 :SUCCESS: dbnodeupdate.sh running a backup
on 1 node(s).
2020-04-10 15:36:19 +0300 :Working: Initiate update on node(s)
2020-04-10 15:36:19 +0300 :Working: Get information about any required
OS upgrades from node(s).
2020-04-10 15:36:30 +0300 :SUCCESS: Get information about any required
OS upgrades from node(s).
2020-04-10 15:36:30 +0300 :Working: dbnodeupdate.sh running an update
step on all nodes.
2020-04-10 15:47:08 +0300 :INFO : 10.10.1.9 is ready to reboot.
2020-04-10 15:47:08 +0300 :SUCCESS: dbnodeupdate.sh running an update
step on all nodes.
2020-04-10 15:47:15 +0300 :Working: Initiate reboot on node(s)
2020-04-10 15:47:19 +0300 :SUCCESS: Initiate reboot on node(s)

2020-04-10 15:47:19 +0300 :Working: Waiting to ensure 10.10.1.9 is
down before reboot.
2020-04-10 15:49:02 +0300 :SUCCESS: Waiting to ensure 10.10.1.9 is
down before reboot.
2020-04-10 15:49:02 +0300 :Working: Waiting to ensure 10.10.1.9 is up
after reboot.
2020-04-10 15:49:38 +0300 :SUCCESS: Waiting to ensure 10.10.1.9 is up
after reboot.
2020-04-10 15:49:38 +0300 :Working: Waiting to connect to 10.10.1.9
with SSH. During Linux upgrades this can take some time.
2020-04-10 16:08:34 +0300 :SUCCESS: Waiting to connect to 10.10.1.9
with SSH. During Linux upgrades this can take some time.
2020-04-10 16:08:34 +0300 :Working: Wait for 10.10.1.9 is ready for
the completion step of update.
2020-04-10 16:09:23 +0300 :SUCCESS: Wait for 10.10.1.9 is ready for
the completion step of update.
2020-04-10 16:09:24 +0300 :Working: Initiate completion step from
dbnodeupdate.sh on node(s)
2020-04-10 16:20:09 +0300 :SUCCESS: Initiate completion step from
dbnodeupdate.sh on 10.10.1.9
2020-04-10 16:20:22 +0300 :SUCCESS: Initiate update on node(s).
2020-04-10 16:20:22 +0300 :SUCCESS: Initiate update on 0 node(s).
[INFO ] Collected dbnodeupdate diag in file:
Diag_patchmgr_dbnode_upgrade_100420153300.tbz
-rw-r--r-- 1 root root 3006068 Apr 10 16:20
Diag_patchmgr_dbnode_upgrade_100420153300.tbz
2020-04-10 16:20:23 +0300 :SUCCESS: Completed run of command:
./patchmgr -dbnodes /root/dbs_group-1 -upgrade -iso_repo
/u01/stage/DBS/p30893918_193000_Linux-x86-64.zip -target_version
19.3.6.0.0.200317
2020-04-10 16:20:23 +0300 :INFO : Upgrade attempted on nodes in file
/root/dbs_group-1: [10.10.1.9]
2020-04-10 16:20:23 +0300 :INFO : Current image version on dbnode(s)
is:
2020-04-10 16:20:23 +0300 :INFO : 10.10.1.9: 19.3.6.0.0.200317
2020-04-10 16:20:23 +0300 :INFO : For details, check the following
files in /u01/stage/DBS/dbserver_patch_19.200331:
2020-04-10 16:20:23 +0300 :INFO : - <dbnode_name>_dbnodeupdate.log
2020-04-10 16:20:23 +0300 :INFO : - patchmgr.log
2020-04-10 16:20:23 +0300 :INFO : - patchmgr.trc
2020-04-10 16:20:23 +0300 :INFO : Exit status:0
2020-04-10 16:20:23 +0300 :INFO : Exiting.

• Verify the new image version on the compute nodes

[root@dm01db01 ~]# dcli -g dbs_group -l root 'imageinfo | grep "Image version"'

10.10.1.9: Image version: 19.3.6.0.0.200317
10.10.1.10: Image version: 19.3.6.0.0.200317

• Verify that Oracle Clusterware is up and running

[root@dm01db01 ~]# /u01/app/19.0.0.0/grid/bin/crsctl stat res -t


-----------------------------------------------------------------------------
---
Name Target State Server State details
-----------------------------------------------------------------------------
---
Local Resources
-----------------------------------------------------------------------------
---
ora.LISTENER.lsnr
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.chad
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.net1.network
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.ons
ONLINE ONLINE dm01db01 STABLE
ONLINE ONLINE dm01db02 STABLE
ora.proxy_advm
OFFLINE OFFLINE dm01db01 STABLE
OFFLINE OFFLINE dm01db02 STABLE
-----------------------------------------------------------------------------
---
Cluster Resources
-----------------------------------------------------------------------------
---
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.DG_DATA.dg(ora.asmgroup)
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE dm01db01 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE dm01db02 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE dm01db02 STABLE
ora.RECOC1.dg(ora.asmgroup)
1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE dm01db01 Started,STABLE
2 ONLINE ONLINE dm01db02 Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)

1 ONLINE ONLINE dm01db01 STABLE
2 ONLINE ONLINE dm01db02 STABLE
ora.cvu
1 ONLINE ONLINE dm01db02 STABLE
ora.orcldb.db
1 ONLINE ONLINE dm01db01 Open,HOME=/u01/app/o

racle/product/11.2.0

.4/dbhome_1,STABLE
2 ONLINE ONLINE dm01db02 Open,HOME=/u01/app/o

racle/product/11.2.0

.4/dbhome_1,STABLE
ora.nsmdb.db
1 ONLINE ONLINE dm01db01 Open,HOME=/u01/app/o

racle/product/12.2.0

.1/dbhome_1,STABLE
2 ONLINE ONLINE dm01db02 Open,HOME=/u01/app/o

racle/product/12.2.0

.1/dbhome_1,STABLE
ora.dm01db01.vip
1 ONLINE ONLINE dm01db01 STABLE
ora.dm01db02.vip
1 ONLINE ONLINE dm01db02 STABLE
ora.qosmserver
1 ONLINE ONLINE dm01db02 STABLE
ora.scan1.vip
1 ONLINE ONLINE dm01db01 STABLE
ora.scan2.vip
1 ONLINE ONLINE dm01db02 STABLE
ora.scan3.vip
1 ONLINE ONLINE dm01db02 STABLE
-----------------------------------------------------------------------------
---
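
In addition to the full resource listing above, a quick cluster-wide health summary can be obtained with:

[root@dm01db01 ~]# /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all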

Conclusion

The objective of publishing this recipes book on Exadata patching is to ensure that all aspects related to patching the various Exadata components are effectively covered. This e-book will help you prepare for and deploy patching using the Oracle-provided utility patchmgr.

Oracle Exadata X8M is the new introduction to the Exadata family by Oracle. The Netsoftmate Oracle engineered systems team realized that there is no proper content or guide available online which highlights detailed patching and management of this newly launched Exadata X8M machine. Hence, our experts took it up as a moral responsibility to collate this recipes book on how to set up and run patching effectively for this new edition of Oracle Exadata.

We hope this e-book will help increase your domain expertise and efficiency while working on Oracle Engineered Systems, specifically Exadata environments. These Exadata Patching Recipes will help you learn the patching process and apply the skills in your own Oracle engineered systems architecture.

Handy References:

1. Article: 10 Easy Steps To Patch Oracle Exadata X8m RoCE Switch
2. Article: 7 Easy Steps To Verify RoCE Cabling On Oracle Exadata X8m
3. Article: Step-By-Step Guide Of Exadata Snapshot Based Backup Of Compute Node To Nfs Share
4. Article: All You Need To Know About Oracle Autonomous Health Framework Execution

About Netsoftmate

Netsoftmate's journey began in the year 2014 with the objective of providing niche IT services in domains such as Oracle Engineered Systems, Oracle GoldenGate, Cloud Infrastructure, Cybersecurity & Development. Almost 6 years from inception, we now serve customers across all industry verticals in the public and private sectors. Our success story is built upon the successful delivery of complex projects.

Website: www.netsoftmate.com
Phone: USA +1 512 808 5399 | KSA +966 54 602 9364 | IND +91 988 534 4596
Email: info@netsoftmate.com
