
Implementation Header Answers

Change Number: CHG0224959
Implementation Team: IY-GLBL-Database Operations Team
Conference Bridge Info:
Contact Person: Pramod Sharma, 818-397-0334
New Monitoring Required: No
Disable Existing Monitoring: No
Traffic Swing Between DC4 to SC9 (Yes or No): No
Traffic Swing Between SC9 to DC4 (Yes or No):
Servers, Switches, Routers, DB Names & Servers: SC9 X7
What notifications have occurred for this change (what owners, groups, teams, users, etc.): The following teams and users have been notified about the change: Merick Miller (Network team), Brian Watlington (CAB Team); notification is sent to the application team.
Peer Review of the Change (a senior member of the team should review the change, implementation and back-out for accuracy and validity): Ashish Jain
Does this change require production readiness review: No
GIO review meeting:

Additional Details
Start Time: 2020-05-13 19:00:00 ET
End Time: 2020-05-14 07:00:00 ET


Exadata atld1paxd8 QFSDPJan2020 Execution Plan for Cell Node and IB Switch
All timings are critical. Please stick to the schedule even if any component is patched ahead of time.

Task                                 Time Taken
Cell Node-1 Patching (Rolling)
Cell Node-2 Patching (Rolling)
Cell Node-3 Patching (Rolling)
IB Switch Patching
Machine Information

DATABASE NODES
Database Node IP Address
atld1paxd8ad001.hiw.com 192.168.245.200
atld1paxd8ad002.hiw.com 192.168.245.201
atld1paxd8ad003.hiw.com 192.168.245.202

CELL NODES
Cell Node IP Address
atld1paxd8ce001 192.168.245.206
atld1paxd8ce002 192.168.245.207
atld1paxd8ce003 192.168.245.208

IB SWITCHES
IB Switch IP Address
atld1paxd8sw001 192.168.245.213
atld1paxd8sw002 192.168.245.214

Filesystem Mounted on DB Node-1


[root@atld1paxd8ad001 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 378G 0 378G 0% /dev
tmpfs 755G 209M 754G 1% /dev/shm
tmpfs 378G 5.2M 378G 1% /run
tmpfs 378G 0 378G 0% /sys/fs/cgroup
/dev/mapper/VGExaDb-LVDbSys1 50G 30G 18G 64% /
/dev/mapper/VGExaDb-LVDbOra1 246G 95G 141G 41% /u01
/dev/sda1 488M 48M 405M 11% /boot
/dev/sda2 254M 7.3M 247M 3% /boot/efi
tmpfs 76G 0 76G 0% /run/user/0
/dev/loop0 4.3G 4.3G 0 100% /mnt/iso/yum/ol76
/dev/loop1 3.8G 3.8G 0 100% /mnt/iso/yum/ol6
/dev/asm/dbavol1-73 4.0T 14G 4.0T 1% /DBA
tmpfs 76G 0 76G 0% /run/user/12146
/dev/mapper/VGExaDb-LVdma 178G 122G 48G 72% /dma
tmpfs 76G 0 76G 0% /run/user/1001
tmpfs 76G 0 76G 0% /run/user/12149

Filesystem Mounted on DB Node-2


[root@atld1paxd8ad002 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 378G 0 378G 0% /dev
tmpfs 755G 209M 754G 1% /dev/shm
tmpfs 378G 5.0M 378G 1% /run
tmpfs 378G 0 378G 0% /sys/fs/cgroup
/dev/mapper/VGExaDb-LVDbSys1 50G 23G 25G 49% /
/dev/sda1 488M 48M 405M 11% /boot
/dev/mapper/VGExaDb-LVDbOra1 246G 61G 175G 26% /u01
/dev/sda2 254M 7.3M 247M 3% /boot/efi
tmpfs 76G 0 76G 0% /run/user/0
/dev/asm/dbavol1-73 4.0T 14G 4.0T 1% /DBA
tmpfs 76G 0 76G 0% /run/user/12146
tmpfs 76G 0 76G 0% /run/user/1001
/dev/mapper/VGExaDb-LVdma 178G 94G 76G 56% /dma
tmpfs 76G 0 76G 0% /run/user/12162

Filesystem Mounted on DB Node-3


[root@atld1paxd8ad003 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 378G 0 378G 0% /dev
tmpfs 755G 209M 754G 1% /dev/shm
tmpfs 378G 5.2M 378G 1% /run
tmpfs 378G 0 378G 0% /sys/fs/cgroup
/dev/mapper/VGExaDb-LVDbSys1 50G 23G 25G 49% /
/dev/sda1 488M 48M 405M 11% /boot
/dev/mapper/VGExaDb-LVDbOra1 246G 62G 174G 27% /u01
/dev/sda2 254M 7.3M 247M 3% /boot/efi
tmpfs 76G 0 76G 0% /run/user/0
/dev/asm/dbavol1-73 4.0T 14G 4.0T 1% /DBA
tmpfs 76G 0 76G 0% /run/user/12146
tmpfs 76G 0 76G 0% /run/user/1001
/dev/mapper/VGExaDb-LVdma 178G 94G 76G 56% /dma
tmpfs 76G 0 76G 0% /run/user/12162

Instances and Listeners Running Node-1


[root@atld1paxd8ad001 ~]# ps -ef|grep pmon
oracle 148566 1 0 2019 ? 00:14:15 asm_pmon_+ASM1
oracle 155788 1 0 2019 ? 00:16:24 apx_pmon_+APX1
oracle 158883 1 0 2019 ? 00:40:19 ora_pmon_AVALONXP1
oracle 250446 1 0 2019 ? 00:31:41 ora_pmon_db1db11

[root@atld1paxd8ad001 ~]# ps -ef|grep tns


oracle 154082 1 0 2019 ? 00:55:29 /u01/app/19.0.0.0/grid/bin/tnslsnr ASMNET1LS
oracle 154666 1 0 2019 ? 00:09:33 /u01/app/19.0.0.0/grid/bin/tnslsnr LISTENER
oracle 155149 1 0 2019 ? 01:49:33 /u01/app/19.0.0.0/grid/bin/tnslsnr LISTENER_
oracle 164127 1 0 2019 ? 00:06:56 /u01/app/19.0.0.0/grid/bin/tnslsnr LISTENER_

Instances and Listeners Running Node-2


[root@atld1paxd8ad002 ~]# ps -ef|grep pmon
oracle 10626 1 0 2019 ? 02:43:04 ora_pmon_AVALONXP2
oracle 182787 1 0 2019 ? 00:30:16 ora_pmon_db1db12
oracle 284691 1 0 2019 ? 00:13:19 asm_pmon_+ASM2
oracle 289843 1 0 2019 ? 00:14:58 apx_pmon_+APX2

[root@atld1paxd8ad002 ~]# ps -ef|grep tns


oracle 104055 1 0 2019 ? 00:23:36 /u01/app/19.0.0.0/grid/bin/tnslsnr LISTENER_
oracle 281002 1 0 2019 ? 03:49:25 /u01/app/19.0.0.0/grid/bin/tnslsnr LISTENER
oracle 281107 1 0 2019 ? 01:48:29 /u01/app/19.0.0.0/grid/bin/tnslsnr LISTENER_
oracle 281343 1 0 2019 ? 00:56:18 /u01/app/19.0.0.0/grid/bin/tnslsnr ASMNET1LS

Instances and Listeners Running Node-3


[root@atld1paxd8ad003 ~]# ps -ef|grep pmon
oracle 42031 1 0 2019 ? 00:30:26 ora_pmon_db1db13
oracle 303038 1 0 2019 ? 02:35:44 ora_pmon_AVALONXP3
oracle 371283 1 0 2019 ? 00:13:27 asm_pmon_+ASM3
oracle 376704 1 0 2019 ? 00:14:47 apx_pmon_+APX3

[root@atld1paxd8ad003 ~]# ps -ef|grep tns


oracle 353220 1 0 2019 ? 00:23:03 /u01/app/19.0.0.0/grid/bin/tnslsnr LISTENER_
oracle 368375 1 0 2019 ? 03:50:17 /u01/app/19.0.0.0/grid/bin/tnslsnr LISTENER
oracle 368456 1 0 2019 ? 01:49:08 /u01/app/19.0.0.0/grid/bin/tnslsnr LISTENER_
oracle 368497 1 0 2019 ? 00:55:55 /u01/app/19.0.0.0/grid/bin/tnslsnr ASMNET1LS

Cluster Resource Information


[root@atld1paxd8ad001 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATAC1.DBAVOL1.advm
ONLINE ONLINE atld1paxd8ad001 STABLE
ONLINE ONLINE atld1paxd8ad002 STABLE
ONLINE ONLINE atld1paxd8ad003 STABLE
ora.DATAC1.GHCHKPT.advm
OFFLINE OFFLINE atld1paxd8ad001 STABLE
OFFLINE OFFLINE atld1paxd8ad002 STABLE
OFFLINE OFFLINE atld1paxd8ad003 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE atld1paxd8ad001 STABLE
ONLINE ONLINE atld1paxd8ad002 STABLE
ONLINE ONLINE atld1paxd8ad003 STABLE
ora.LISTENER_DG.lsnr
ONLINE ONLINE atld1paxd8ad001 STABLE
ONLINE ONLINE atld1paxd8ad002 STABLE
ONLINE ONLINE atld1paxd8ad003 STABLE
ora.chad
ONLINE ONLINE atld1paxd8ad001 STABLE
ONLINE ONLINE atld1paxd8ad002 STABLE
ONLINE ONLINE atld1paxd8ad003 STABLE
ora.datac1.dbavol1.acfs
ONLINE ONLINE atld1paxd8ad001 mounted on /DBA,STAB
LE
ONLINE ONLINE atld1paxd8ad002 mounted on /DBA,STAB
LE
ONLINE ONLINE atld1paxd8ad003 mounted on /DBA,STAB
LE
ora.datac1.ghchkpt.acfs
OFFLINE OFFLINE atld1paxd8ad001 STABLE
OFFLINE OFFLINE atld1paxd8ad002 STABLE
OFFLINE OFFLINE atld1paxd8ad003 STABLE
ora.helper
OFFLINE OFFLINE atld1paxd8ad001 IDLE,STABLE
OFFLINE OFFLINE atld1paxd8ad002 IDLE,STABLE
OFFLINE OFFLINE atld1paxd8ad003 IDLE,STABLE
ora.net1.network
ONLINE ONLINE atld1paxd8ad001 STABLE
ONLINE ONLINE atld1paxd8ad002 STABLE
ONLINE ONLINE atld1paxd8ad003 STABLE
ora.net2.network
ONLINE ONLINE atld1paxd8ad001 STABLE
ONLINE ONLINE atld1paxd8ad002 STABLE
ONLINE ONLINE atld1paxd8ad003 STABLE
ora.ons
ONLINE ONLINE atld1paxd8ad001 STABLE
ONLINE ONLINE atld1paxd8ad002 STABLE
ONLINE ONLINE atld1paxd8ad003 STABLE
ora.proxy_advm
ONLINE ONLINE atld1paxd8ad001 STABLE
ONLINE ONLINE atld1paxd8ad002 STABLE
ONLINE ONLINE atld1paxd8ad003 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE atld1paxd8ad001 STABLE
2 ONLINE ONLINE atld1paxd8ad002 STABLE
3 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.DATAC1.dg(ora.asmgroup)
1 ONLINE ONLINE atld1paxd8ad001 STABLE
2 ONLINE ONLINE atld1paxd8ad002 STABLE
3 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE atld1paxd8ad002 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE atld1paxd8ad001 STABLE
ora.RECOC1.dg(ora.asmgroup)
1 ONLINE ONLINE atld1paxd8ad001 STABLE
2 ONLINE ONLINE atld1paxd8ad002 STABLE
3 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE atld1paxd8ad001 Started,STABLE
2 ONLINE ONLINE atld1paxd8ad002 Started,STABLE
3 ONLINE ONLINE atld1paxd8ad003 Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE atld1paxd8ad001 STABLE
2 ONLINE ONLINE atld1paxd8ad002 STABLE
3 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.atld1paxd8ad001.vip
1 ONLINE ONLINE atld1paxd8ad001 STABLE
ora.atld1paxd8ad001_2.vip
1 ONLINE ONLINE atld1paxd8ad001 STABLE
ora.atld1paxd8ad002.vip
1 ONLINE ONLINE atld1paxd8ad002 STABLE
ora.atld1paxd8ad002_2.vip
1 ONLINE ONLINE atld1paxd8ad002 STABLE
ora.atld1paxd8ad003.vip
1 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.atld1paxd8ad003_2.vip
1 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.avalonxp.availprd.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
2 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.avalonxp.cdoprod.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
2 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.avalonxp.db
1 ONLINE ONLINE atld1paxd8ad001 Open,HOME=/u01/app/o
racle/product/12.1.0
.2/dbhome_1,STABLE
2 ONLINE ONLINE atld1paxd8ad002 Open,HOME=/u01/app/o
racle/product/12.1.0
.2/dbhome_1,STABLE
3 ONLINE ONLINE atld1paxd8ad003 Open,HOME=/u01/app/o
racle/product/12.1.0
.2/dbhome_1,STABLE
ora.avalonxp.extlprod.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
2 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.avalonxp.odsp.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
2 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.avalonxp.odsp_taf.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
2 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.avalonxp.svc_avalonp_avail_r.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
ora.avalonxp.svc_avalonp_avail_w.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
2 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.avalonxp.svc_avalonp_bas_w.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
ora.avalonxp.svc_avalonp_bdp_w.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
ora.avalonxp.svc_avalonp_cas_w.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
2 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.avalonxp.svc_avalonp_cdo_w.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
2 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.avalonxp.svc_avalonp_extl_w.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
2 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.avalonxp.svc_avalonp_precomprime_w.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
ora.avalonxp.svc_avalonp_precompute_w.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
2 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.avalonxp.svc_avalonpdg_etl_r.svc
1 ONLINE ONLINE atld1paxd8ad002 STABLE
ora.cvu
1 ONLINE ONLINE atld1paxd8ad001 STABLE
ora.db1db1.db
1 ONLINE ONLINE atld1paxd8ad001 Open,HOME=/u01/app/o
racle/product/12.1.0
.2/dbhome_1,STABLE
2 ONLINE ONLINE atld1paxd8ad002 Open,HOME=/u01/app/o
racle/product/12.1.0
.2/dbhome_1,STABLE
3 ONLINE ONLINE atld1paxd8ad003 Open,HOME=/u01/app/o
racle/product/12.1.0
.2/dbhome_1,STABLE
ora.qosmserver
1 ONLINE ONLINE atld1paxd8ad001 STABLE
ora.rhpserver
1 OFFLINE OFFLINE STABLE
ora.scan1.vip
1 ONLINE ONLINE atld1paxd8ad002 STABLE
ora.scan2.vip
1 ONLINE ONLINE atld1paxd8ad003 STABLE
ora.scan3.vip
1 ONLINE ONLINE atld1paxd8ad001 STABLE
--------------------------------------------------------------------------------
[root@atld1paxd8ad001 ~]#
[root@atld1paxd8ad001 ~]#
[root@atld1paxd8ad001 ~]# crsctl check cluster -all
**************************************************************
atld1paxd8ad001:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
atld1paxd8ad002:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
atld1paxd8ad003:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
*Modify the disk repair time
Modify the disk repair time as follows before starting the activity:

$ . oraenv          (enter +ASM1 when prompted for the ORACLE_SID)
$ sqlplus "/ as sysasm"
SQL> alter diskgroup DATAC1 set attribute 'disk_repair_time'='8h';
SQL> alter diskgroup RECOC1 set attribute 'disk_repair_time'='8h';
SQL> SELECT * FROM v$asm_attribute WHERE name='disk_repair_time';
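Optionally (not part of the original plan), the per-diskgroup value can be confirmed in one pass by joining v$asm_attribute to v$asm_diskgroup; a minimal sketch, assuming the +ASM1 environment set above:

$ sqlplus -s "/ as sysasm" <<'EOF'
-- Show disk_repair_time per diskgroup (DATAC1 and RECOC1 should both report 8h)
SET LINESIZE 120
COLUMN diskgroup FORMAT A20
COLUMN disk_repair_time FORMAT A20
SELECT dg.name AS diskgroup, a.value AS disk_repair_time
  FROM v$asm_diskgroup dg
  JOIN v$asm_attribute a ON a.group_number = dg.group_number
 WHERE a.name = 'disk_repair_time';
EOF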

Start Blackout from OEM


Keep the blackout name handy so that it is easier to remove the blackout after the activity completes.
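If the blackout is created from the EM CLI rather than the OEM console, a rough sketch is shown below. It is illustrative only: the blackout name, target list, duration, and the exact -schedule syntax are assumptions and should be verified with "emcli help create_blackout" for your OEM version.

emcli login -username=sysman
# Illustrative: create a blackout covering the three DB hosts for the maintenance window
emcli create_blackout -name="CHG0224959_QFSDP_Jan2020" \
  -reason="QFSDP Jan2020 patching" \
  -add_targets="atld1paxd8ad001.hiw.com:host;atld1paxd8ad002.hiw.com:host;atld1paxd8ad003.hiw.com:host" \
  -schedule="duration:12:00"
# Note the blackout name; it is needed to stop the blackout after the activity.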
Cell Node-1 Patching (atld1paxd8ce001.hiw.com)

NOTE: We will patch the cell nodes in a rolling fashion, but instead of passing all the cell node names in one cell group file we will use one cell node per group file, so each cell is patched individually (see the example group files below).
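For illustration (the actual file contents were not captured in this document), each cell_group file on DB Node-1 is expected to contain a single cell node, for example:

# Assumed contents of the per-cell group files on DB Node-1 (one cell per file)
[root@atld1paxd8ad001 ~]# cat /root/cell_group1
atld1paxd8ce001.hiw.com
[root@atld1paxd8ad001 ~]# cat /root/cell_group2
atld1paxd8ce002.hiw.com
[root@atld1paxd8ad001 ~]# cat /root/cell_group3
atld1paxd8ce003.hiw.com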

Log in to DB Node-1 (atld1paxd8ad001.hiw.com) as root (log in first with your individual ID and then switch to the root user).
Change to patch directory
cd /dma/Jan2020QFSDP/30463800/Infrastructure/19.3.4.0.0/ExadataStorageServer_InfiniBa

Apply the patch to the first cell node:

nohup ./patchmgr -cells /root/cell_group1 -patch -rolling &

Monitor the log files and the cells being patched. Monitor patch activity using the command below (do not open any log file in vi or any other editor while patching is running):
less -rf patchmgr.stdout

Verify the patch status after the patchmgr utility completes as follows:
dcli -g /root/cell_group1 -l root imageinfo |grep -i image |grep Active

SSH to the cell node (from the first DB node) and run the command below to check the patch status:
# imageinfo

The output should look similar to the following (the active image version shown here is the pre-patch version and will change after patching):

Kernel version: 4.1.12-124.26.12.el7uek.x86_64 #2 SMP Wed May 8 22:25:03 PDT 2019 x86_64
Cell version: OSS_19.2.4.0.0_LINUX.X64_190709
Cell rpm version: cell-19.2.4.0.0_LINUX.X64_190709-1.x86_64

Active image version: 19.2.4.0.0.190709


Active image kernel version: 4.1.12-124.26.12.el7uek
Active image activated: 2019-09-11 08:12:10 -0400
Active image status: success
Active system partition on device: /dev/md6
Active software partition on device: /dev/md8

Cell boot usb partition: /dev/sdm1


Cell boot usb version: 19.2.4.0.0.190709

Validate the firmware of the cell by executing (From Cell Node)


# cellcli -e 'alter cell validate configuration'

Check Cell Services (From Cell Node)


# service celld status
Redirecting to /bin/systemctl status celld.service
● celld.service - celld
Loaded: loaded (/etc/systemd/system/celld.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2019-03-06 07:40:58 EST; 1 months 9 days ago
Main PID: 34026 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/celld.service
├─31252 /opt/oracle/cell/cellofl-11.2.3.3.1_LINUX.X64_170815/cellsrv/bin/celloflsrv -startup 1 0 1 5042 39997 SYS_112
├─31257 /bin/sh /opt/oracle/cell/cellofl-12.1.2.4.0_LINUX.X64_181106/cellsrv/bin/celloflsrv_start.sh -startup /opt/orac
├─31258 /opt/oracle/cell/cellofl-12.1.2.4.0_LINUX.X64_181106/cellsrv/bin/celloflsrv -startup 2 0 1 5042 39997 SYS_121

Clean up the cells using the -cleanup option (run the commands below from the DB node):
cd /dma/Jan2020QFSDP/30463800/Infrastructure/19.3.4.0.0/ExadataStorageServer_InfiniBa

./patchmgr -cells /root/cell_group1 -cleanup

Cell Server Healthcheck (From Cell Node)


cellcli -e list griddisk                                                              (all grid disks should have status ACTIVE)
cellcli -e list griddisk attributes name, asmmodestatus, asmdeactivationoutcome       (asmmodestatus should be ONLINE and asmdeactivationoutcome should be Yes)
service celld status
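The same health checks can also be run remotely from DB Node-1 with dcli instead of logging in to the cell; a convenience sketch (not part of the original plan), assuming root SSH equivalence to the cells is already configured:

# Run the grid disk and cell service checks on the patched cell from DB Node-1
dcli -g /root/cell_group1 -l root "cellcli -e list griddisk attributes name, status, asmmodestatus, asmdeactivationoutcome"
dcli -g /root/cell_group1 -l root "service celld status"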
Cell Node-2 Patching (atld1paxd8ce002.hiw.com)

NOTE: We will patch the cell nodes in a rolling fashion, but instead of passing all the cell node names in one cell group file we will use one cell node per group file, so each cell is patched individually.

Log in to DB Node-1 (atld1paxd8ad001.hiw.com) as root (log in first with your individual ID and then switch to the root user).
Change to patch directory
cd /dma/Jan2020QFSDP/30463800/Infrastructure/19.3.4.0.0/ExadataStorageServer_InfiniBa

Apply the patch to the second cell node:

nohup ./patchmgr -cells /root/cell_group2 -patch -rolling &

Monitor the log files and the cells being patched. Monitor patch activity using the command below (do not open any log file in vi or any other editor while patching is running):
less -rf patchmgr.stdout

Verify the patch status after the patchmgr utility completes as follows:
dcli -g /root/cell_group2 -l root imageinfo |grep -i image |grep Active

SSH to the cell node (from the first DB node) and run the command below to check the patch status:
# imageinfo

The output should look similar to the following (the active image version shown here is the pre-patch version and will change after patching):

Kernel version: 4.1.12-124.26.12.el7uek.x86_64 #2 SMP Wed May 8 22:25:03 PDT 2019 x86_64
Cell version: OSS_19.2.4.0.0_LINUX.X64_190709
Cell rpm version: cell-19.2.4.0.0_LINUX.X64_190709-1.x86_64

Active image version: 19.2.4.0.0.190709


Active image kernel version: 4.1.12-124.26.12.el7uek
Active image activated: 2019-09-11 08:12:10 -0400
Active image status: success
Active system partition on device: /dev/md6
Active software partition on device: /dev/md8

Cell boot usb partition: /dev/sdm1


Cell boot usb version: 19.2.4.0.0.190709

Validate the firmware of the cell by executing (From Cell Node)


# cellcli -e 'alter cell validate configuration'

Check Cell Services (From Cell Node)


# service celld status
Redirecting to /bin/systemctl status celld.service
● celld.service - celld
Loaded: loaded (/etc/systemd/system/celld.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2019-03-06 07:40:58 EST; 1 months 9 days ago
Main PID: 34026 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/celld.service
├─31252 /opt/oracle/cell/cellofl-11.2.3.3.1_LINUX.X64_170815/cellsrv/bin/celloflsrv -startup 1 0 1 5042 39997 SYS_112
├─31257 /bin/sh /opt/oracle/cell/cellofl-12.1.2.4.0_LINUX.X64_181106/cellsrv/bin/celloflsrv_start.sh -startup /opt/orac
├─31258 /opt/oracle/cell/cellofl-12.1.2.4.0_LINUX.X64_181106/cellsrv/bin/celloflsrv -startup 2 0 1 5042 39997 SYS_121

Clean up the cells using the -cleanup option (run the commands below from the DB node):
cd /dma/Jan2020QFSDP/30463800/Infrastructure/19.3.4.0.0/ExadataStorageServer_InfiniBa

./patchmgr -cells /root/cell_group2 -cleanup

Cell Server Healthcheck (From Cell Node)


cellcli -e list griddisk                                                              (all grid disks should have status ACTIVE)
cellcli -e list griddisk attributes name, asmmodestatus, asmdeactivationoutcome       (asmmodestatus should be ONLINE and asmdeactivationoutcome should be Yes)
service celld status
Cell Node-3 Patching (atld1paxd8ce003.hiw.com)

NOTE: We will patch the cell nodes in a rolling fashion, but instead of passing all the cell node names in one cell group file we will use one cell node per group file, so each cell is patched individually.

Log in to DB Node-1 (atld1paxd8ad001.hiw.com) as root (log in first with your individual ID and then switch to the root user).
Change to patch directory
cd /dma/Jan2020QFSDP/30463800/Infrastructure/19.3.4.0.0/ExadataStorageServer_InfiniBa

Apply the patch to the third cell node:

nohup ./patchmgr -cells /root/cell_group3 -patch -rolling &

Monitor the log files and the cells being patched. Monitor patch activity using the command below (do not open any log file in vi or any other editor while patching is running):
less -rf patchmgr.stdout

Verify the patch status after the patchmgr utility completes as follows:
dcli -g /root/cell_group3 -l root imageinfo |grep -i image |grep Active

SSH to the cell node (from the first DB node) and run the command below to check the patch status:
# imageinfo

The output should look similar to the following (the active image version shown here is the pre-patch version and will change after patching):

Kernel version: 4.1.12-124.26.12.el7uek.x86_64 #2 SMP Wed May 8 22:25:03 PDT 2019 x86_64
Cell version: OSS_19.2.4.0.0_LINUX.X64_190709
Cell rpm version: cell-19.2.4.0.0_LINUX.X64_190709-1.x86_64

Active image version: 19.2.4.0.0.190709


Active image kernel version: 4.1.12-124.26.12.el7uek
Active image activated: 2019-09-11 08:12:10 -0400
Active image status: success
Active system partition on device: /dev/md6
Active software partition on device: /dev/md8

Cell boot usb partition: /dev/sdm1


Cell boot usb version: 19.2.4.0.0.190709

Validate the firmware of the cell by executing (From Cell Node)


# cellcli -e 'alter cell validate configuration'

Check Cell Services (From Cell Node)


# service celld status
Redirecting to /bin/systemctl status celld.service
● celld.service - celld
Loaded: loaded (/etc/systemd/system/celld.service; enabled; vendor preset: disabled)
Active: active (exited) since Wed 2019-03-06 07:40:58 EST; 1 months 9 days ago
Main PID: 34026 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/celld.service
├─31252 /opt/oracle/cell/cellofl-11.2.3.3.1_LINUX.X64_170815/cellsrv/bin/celloflsrv -startup 1 0 1 5042 39997 SYS_112
├─31257 /bin/sh /opt/oracle/cell/cellofl-12.1.2.4.0_LINUX.X64_181106/cellsrv/bin/celloflsrv_start.sh -startup /opt/orac
├─31258 /opt/oracle/cell/cellofl-12.1.2.4.0_LINUX.X64_181106/cellsrv/bin/celloflsrv -startup 2 0 1 5042 39997 SYS_121

Clean up the cells using the -cleanup option (run the commands below from the DB node):
cd /dma/Jan2020QFSDP/30463800/Infrastructure/19.3.4.0.0/ExadataStorageServer_InfiniBa

./patchmgr -cells /root/cell_group3 -cleanup

Cell Server Healthcheck (From Cell Node)


cellcli -e list griddisk                                                              (all grid disks should have status ACTIVE)
cellcli -e list griddisk attributes name, asmmodestatus, asmdeactivationoutcome       (asmmodestatus should be ONLINE and asmdeactivationoutcome should be Yes)
service celld status
IB Switch Upgrade (atld1paxd8sw001 and atld1paxd8sw002)

NOTE: All commands should be run as root on DB Node-1 (atld1paxd8ad001.hiw.com) only.

By default, the patchmgr utility upgrades all the switches listed in the file /root/ibswitch_group.

# cat ibswitch_group
atld1paxd8sw001
atld1paxd8sw002

Change to the patchmgr directory


# cd /dma/Jan2020QFSDP/30463800/Infrastructure/19.3.4.0.0/FabricSwitch/patch_switch_19.3.4.0.0.200130

Run the prerequisite check for the upgrade:


# ./patchmgr -ibswitches /root/ibswitch_group -upgrade -ibswitch_precheck

If the output shows OVERALL SUCCESS (like below), proceed with the upgrade:
----- InfiniBand switch update process ended 2019-10-11 04:20:41 -0400 -----
2019-04-13 04:20:41 -0400 1 of 1 :SUCCESS: DONE: Initiate pre-upgrade validation check on Infi

Upgrade the IB switches


# ./patchmgr -ibswitches /root/ibswitch_group -upgrade

Verify the output for the switches; it should report success and show the new version.

Post IB switch verification steps (As root user from DB Node-1)


ibswitches
ibhosts
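A small convenience sketch (assumed, not part of the original plan) for collecting the post-upgrade version from both switches from DB Node-1, instead of logging in to each switch individually as in the captures below; it assumes root SSH access to the switches:

# Report hostname and firmware version for each IB switch listed in /root/ibswitch_group
for sw in $(cat /root/ibswitch_group); do
  echo "=== $sw ==="
  ssh root@$sw hostname
  ssh root@$sw version | head -2
done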
[root@atld1paxd8sw001 ~]# hostname
atld1paxd8sw001.hiw.com
[root@atld1paxd8sw001 ~]# version
SUN DCS 36p version: 2.2.12-2
Build time: Oct 29 2018 08:35:27
SP board info:
Manufacturing Date: N/A
Serial Number: "NEDCJ0131"
Hardware Revision: 0x0100
Firmware Revision: 0x0000
BIOS version: NUP1R918
BIOS date: 01/19/2016
[root@atld1paxd8sw001 ~]# date
Thu May 7 12:22:27 GMT 2020

[root@atld1paxd8sw002 ~]# hostname


atld1paxd8sw002.hiw.com
[root@atld1paxd8sw002 ~]# version
SUN DCS 36p version: 2.2.12-2
Build time: Oct 29 2018 08:35:27
SP board info:
Manufacturing Date: N/A
Serial Number: "NEDCK0080"
Hardware Revision: 0x0100
Firmware Revision: 0x0000
BIOS version: NUP1R918
BIOS date: 01/19/2016
[root@atld1paxd8sw002 ~]# date
Thu May 7 12:22:53 GMT 2020
Post Patching Tasks and Checklist
The steps below should be executed once the entire activity has completed on all nodes.

*Adjust the disk repair time back to its original value as follows:

$ . oraenv          (enter +ASM1 when prompted for the ORACLE_SID)
$ sqlplus "/ as sysasm"
SQL> alter diskgroup DATAC1 set attribute 'disk_repair_time'='3.6h';
SQL> alter diskgroup RECOC1 set attribute 'disk_repair_time'='3.6h';
SQL> SELECT * FROM v$asm_attribute WHERE name='disk_repair_time';

*End the blackout in OEM that was started at the beginning of the activity (see the sketch below).
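If the blackout was created with the EM CLI, it can be ended the same way; a rough sketch, assuming the blackout name used at the start of the activity (verify syntax with "emcli help stop_blackout"):

emcli login -username=sysman
# End the blackout created at the start of the activity (name is illustrative)
emcli stop_blackout -name="CHG0224959_QFSDP_Jan2020"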

Post Patching Checklist                                                                         Status

Verify standby sync status and Data Guard configuration (see the sketch after this checklist)
Change the disk repair time value back to 3.6h for all diskgroups
Remove the blackout from OEM and verify the status of all targets (all should be up and running)
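A minimal sketch of the standby sync check referenced in the checklist, using the Data Guard broker if it is configured in this environment; the broker database name is an assumption and should be replaced with the names shown by "show configuration":

$ . oraenv                          (enter the database SID, e.g. db1db11)
$ dgmgrl /                          (OS-authenticated SYSDBA connection on the primary)
DGMGRL> show configuration          (overall DG status; should report SUCCESS)
DGMGRL> show database 'AVALONXP'    (database name is illustrative; check transport/apply lag)
DGMGRL> exit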
