
Upgrade Grid infrastructure and RDBMS home from

11.2.0.3 to 11.2.0.4 and apply PSU (11.2.0.4.6)

Author: Giridhar Reddy Sangala

Department: Success Factors Support

Document name: Upgrade document

Email: giridharreddy.s@rolta.com

Security Classification: Success Factors Internal

Last modified date: 31/08/2015

Reviewers: Rolta DBA team


Table of Contents
1) Pre-upgrade activities (No downtime is required)
a. Download the 11.2.0.4 software
b. Download the PSU 11.2.0.4.6 patch for grid infrastructure and rdbms home
c. Download opatch utility (Required OPatch utility version is 11.2.0.3.6 or later)
d. Ensure vnc is working
e. Ensure for oracle user, password-less ssh connectivity is working
f. Ensure root access is working
g. Ensure rsync utility is available in each server
h. Run CVU pre-upgrade check tool
i. Ensure storage is available for /app mount point move and DG_VOTE disk move
j. Clean-up /app mount point
k. Pre-execution to make the upgrade faster
2) Pre-upgrade tasks: (No downtime is required)
a. Check OS is supported for upgraded version
b. Collect the information before the upgrade
c) Check current version of Grid Infrastructure
3) Upgrade grid infrastructure home
a) Stop goldengate processes
b) Stop the database(s) which are registered with cluster as resource using below command
c) Set memory_max_target to 4GB and memory_target to 2GB, if not already set
d) Prepare the shell environment
e) Install the clusterware using Oracle Universal Installer for new 11.2.0.4
f) Post checks
g) Apply PSU (Patch 20485808 - Oracle Grid Infrastructure Patch Set Update 11.2.0.4.6 (Apr2015) (Includes Database PSU 11.2.0.4.6)) to grid home
h) Move ORACLE_HOME binaries
i) Move ocr and voting files to DG_VOTE disk group (this step is applicable where the ocr and voting files reside in an ASM disk group; otherwise skip this step)
j) Re-configure resources for the new grid home
k) Bring up the database(s) which are registered with the cluster as a resource
l) Bring-up goldengate processes
4) Upgrade RDBMS home
4.1 Install RDBMS software
a) Bring-down the goldengate
b) Create Target Directories
c) Prepare Environment
d) Install the Software
4.2 Manually Upgrade Database(s)
a) Run utlu112i.sql script to determine whether the database is ready for upgrade
b) Collect the database information
c) Migrate instance files to NEW Oracle Home
d) Create restore point and Stop Database/Service(s) using crsctl command
e) Stop the database we wish to upgrade, including any dependent client-facing services
f) Run the upgrade script
5) Post-upgrade tasks after applying patchset
a) Perform sanity test and verify successful upgrade to 11.2.0.4
b) Check for Invalid objects and re-compile them if required
c) Setting /etc/oratab to Point to the New Oracle Home
d) Update .bash_profile with appropriate new database information
e) Check DBA_REGISTRY information
f) Apply PSU (Patch 20299013 - Database Patch Set Update 11.2.0.4.6 (Includes CPUApr2015))
g) Apply oneoff patch (17501296)
h) Bring-up goldengate processes
i) Gather fixed object stats and dictionary stats
j) Modify RMAN scripts (level 0, level 1 and archive) to point to the new RDBMS home
k) Clone the ORACLE_HOMEs on all the cluster nodes, if the same ORACLE_HOME does not exist
l) Create link to library (libobk.so (/app/oracle/product/11.2.0.4/dbhome_4/lib/libobk.so)) for successful backup
m) If there is a catalog db, upgrade the catalog schema
n) Remove OLD Database Home and remove directories
6) Apply PSU patches to grid and rdbms home
6.1 Apply Patch 20485808 - Oracle Grid Infrastructure Patch Set Update 11.2.0.4.6 (Apr2015) (Includes Database PSU 11.2.0.4.6) to grid home
6.2 Patch 20299013 - Database Patch Set Update 11.2.0.4.6 (Includes CPUApr2015)
APPENDIX A
APPENDIX B
APPENDIX C
1) Pre-upgrade activities (No downtime is required)
a. Download the 11.2.0.4 software
Download 11g Release 2 (11.2.0.4) Patch Set 13390677 for GRID & RDBMS from Oracle
Metalink

For Linux:

p13390677_112040_Linux-x86-64_1of7.zip

p13390677_112040_Linux-x86-64_2of7.zip

p13390677_112040_Linux-x86-64_3of7.zip

For Solaris

p13390677_112040_SOLARIS64_1of7.zip

p13390677_112040_SOLARIS64_2of7.zip

p13390677_112040_SOLARIS64_3of7.zip

b. Download the PSU 11.2.0.4.6 patch for grid infrastructure and rdbms
home
Grid infrastructure PSU 11.2.0.4.6

Patch 20485808 - Oracle Grid Infrastructure Patch Set Update 11.2.0.4.6 (Apr2015) (Includes
Database PSU 11.2.0.4.6)

For Linux: p20485808_112040_Linux-x86-64.zip

For Solaris: p20485808_112040_SOLARIS64.zip

RDBMS PSU 11.2.0.4.6

Patch 20299013 - Database Patch Set Update 11.2.0.4.6 (Includes CPUApr2015)

For Linux: p20299013_112040_Linux-x86-64.zip

For Solaris: p20299013_112040_SOLARIS64.zip

Pre-check tool: utlu112i.zip


c. Download opatch utility (Required OPatch utility version is
11.2.0.3.6 or later)
For Linux: p6880880_112000_Linux-x86-64.zip

For Solaris: p6880880_112000_SOLARIS64.zip


Note: The software is already downloaded to the locations below; please upload it to the respective database servers

\\inblr102\sapall\ByD_MS\SFSF\00_SF_OCC\01_IMC Dashboard\TUSC\11204_Linux_software

\\inblr102\sapall\ByD_MS\SFSF\00_SF_OCC\01_IMC Dashboard\TUSC\11204_solaris_software

Note: Unzip the software in an appropriate location where sufficient space is available
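For example, assuming the software was uploaded to /dumpfiles/11204_software (the staging location used in the examples later in this document), the Linux zips can be extracted as follows; parts 1 and 2 typically unpack the database software and part 3 the grid software:

cd /dumpfiles/11204_software
unzip p13390677_112040_Linux-x86-64_1of7.zip      # unpacks into ./database
unzip p13390677_112040_Linux-x86-64_2of7.zip
unzip p13390677_112040_Linux-x86-64_3of7.zip      # unpacks into ./grid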

d. Ensure vnc is working


Log in to the server, switch to the oracle user, and execute the command below to launch the VNC server

vncserver

Below is a sample run of vncserver

/home/oracle/tusc/tuscms/logfiles>vncserver

New 'dc10bizxprddb01.syd.sf.priv:2 (oracle)' desktop is dc10bizxprddb01.syd.sf.priv:2

Starting applications specified in /home/oracle/.vnc/xstartup

Log file is /home/oracle/.vnc/dc10bizxprddb01.syd.sf.priv:2.log

dc10bizxprddb01 (10.10.40.11)

From the VNC viewer, connect using the IP address of the server and the display number shown above

Set the VNC password using the command below

/home/oracle/tusc/tuscms/logfiles>vncpasswd

Password:

Verify:

Note: If VNC is not working, create a P2 JIRA ticket for the infra team and assign it to queue OSS-L2
e. Ensure for oracle user, password-less ssh connectivity is working
Log in to the server, switch to the oracle user, and execute the commands below to verify this.

As an example, consider the DC10 two-node cluster (dc10bizxprddb01 and dc10bizxprddb02).

Log in to server dc10bizxprddb01.syd.sf.priv, switch to the oracle user, and execute the commands below; the output should be returned without prompting for a password

ssh dc10bizxprddb01 date

ssh dc10bizxprddb02 date

Log in to server dc10bizxprddb02.syd.sf.priv, switch to the oracle user, and execute the commands below; the output should be returned without prompting for a password

ssh dc10bizxprddb01 date

ssh dc10bizxprddb02 date

Note: If password-less ssh connectivity is not working, create a P2 JIRA ticket for the infra team and assign it to queue OSS-L2

f. Ensure root access is working


Log in to the server and execute the command below to verify that you can switch to the root user
sudo su -

Note: If the above command does not work, create a P2 JIRA ticket for the infra team and assign it to queue OSS-L2

g. Ensure rsync utility is available in each server


Log in to the server, switch to the oracle user, and execute the command below to check that rsync is available

which rsync

Sample output:

/app/11204_software>which rsync

/usr/bin/rsync

Note: If the above command does not work, create a P2 JIRA ticket for the infra team and assign it to queue OSS-L2

h. Run CVU pre-upgrade check tool


Execute runcluvfy.sh from the unzipped GI software location as the grid owner (oracle) to confirm that the environment is suitable for upgrading.
For example, to upgrade a 2-node Oracle Clusterware installation in /app/oracle/11.2.0/grid to 11.2.0.4 in /app/oracle/11.2.0.4/grid, execute the following.

Create the new 11.2.0.4.0 grid home (/app/oracle/11.2.0.4/grid) before running the pre-check below

/dumpfiles/11204_software/grid>./runcluvfy.sh stage -pre crsinst -upgrade -n dc15orapc1n1,dc15orapc1n2 -rolling -src_crshome /app/oracle/11.2.0/grid/ -dest_crshome /app/oracle/11.2.0.4/grid/ -dest_version 11.2.0.4.0 -fixup -fixupdir /tmp -verbose

/dumpfiles/11204_software/grid>./runcluvfy.sh stage -pre crsinst -upgrade -n dc10bizxprddb01,dc10bizxprddb02 -rolling -src_crshome /app/11.2.0/grid -dest_crshome /app/oracle/11.2.0.4/grid/ -dest_version 11.2.0.4.0 -verbose

Ensure all checks pass before continuing with the next steps in this document

i. Ensure storage is available for /app mount point move and DG_VOTE
disk move
Create a storage request ticket for the /app mount point and DG_VOTE disk movement.

The idea is to move the /app mount point to a bigger mount point (256GB) and to move the OCR and voting disk files to a normal-redundancy disk group (DG_VOTE).

Sample tickets:

 CO-55502: DC10: Mount point with 256GB SAN storage is required in the servers
(dc10bizxprddb01 and dc10bizxprddb02)
o Note: Once the storage is available, need to mount 256GB SAN storage as /app1
mount point on each cluster node and this should not be shared between
cluster nodes
 CO-55508: DC10: Requesting 30 GB new disk group DG_VOTE_OCR in the servers
(dc10bizxprddb01 and dc10bizxprddb02)
o Note: Once the storage is available, need to create DG_VOTE disk with normal
redundancy

Note: for non-ASM instances, requesting new disk group (DG_VOTE) is not
required. For example: DC12: Siemens and Bosch

j. Clean-up /app mount point


As part of this upgrade, we are moving the Grid and RDBMS binaries to a bigger mount point (256GB) using the rsync utility. Clean up all old and unwanted files in the /app mount point (but not any Oracle binaries); this saves time during the rsync movement and reduces the time needed to back up the binaries.
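A minimal sketch for identifying cleanup candidates (review every file before deleting anything, and never touch the Oracle homes themselves):

du -sk /app/* | sort -n                      # largest top-level directories under /app, in KB
find /app -name "*.trc" -mtime +90 -ls       # old trace files
find /app -name "*.aud" -mtime +90 -ls       # old audit files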
k. Pre-execution to make the upgrade faster
Run the pre-upgrade script, which is available in the software download location

utlu112i.zip

Unzip utlu112i.zip and run the script in the database that is going to be upgraded

SQL> @utlu112i.sql

Gather dictionary statistics the night before the upgrade.

SQL> exec DBMS_STATS.GATHER_DICTIONARY_STATS;

Purge the recycle bin and truncate the audit table

SQL> purge dba_recyclebin;

SQL> truncate table sys.aud$;

Compile invalid objects

SQL> @?/rdbms/admin/utlrp.sql

Duplicate objects:

Fix any DUPLICATE objects in SYS/SYSTEM BEFORE the upgrade, and collect the number of invalid objects so that it can be compared after the database upgrade (see the sketch below).
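A sketch of these checks, using the commonly referenced duplicate-object query from the pre-upgrade checklists (remediation of any duplicates found should follow the relevant MOS note):

SQL> select object_name, object_type from dba_objects
     where object_name||object_type in
           (select object_name||object_type from dba_objects where owner='SYS')
       and owner='SYSTEM';

SQL> select count(*) from dba_objects where status='INVALID';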

Purge stats before starting upgrade

SQL> exec DBMS_STATS.PURGE_STATS(DBMS_STATS.PURGE_ALL);

2) Pre-upgrade tasks: (No downtime is required)


a. Check OS is supported for upgraded version
Below OS versions are supported.
Oracle Database 11.2.0.4.0 is certified on Linux x86-64 SLES 10
Oracle Database 11.2.0.4.0 is certified on Linux x86-64 SLES 11

Note: Linux x86-64 SLES 9 and Linux x86-64 SLES 8 are not supported.

Oracle Database 11.2.0.4.0 is certified on Oracle Solaris on SPARC (64-bit) 10


Oracle Database 11.2.0.4.0 is certified on Oracle Solaris on SPARC (64-bit) 11

Note: Solaris on SPARC (64-bit) 9 and Solaris on SPARC (64-bit) 8 are not supported
Oracle Database 11.2.0.4.0 is certified on Linux x86-64 Red Hat Enterprise Linux 4
Oracle Database 11.2.0.4.0 is certified on Linux x86-64 Red Hat Enterprise Linux 5
Oracle Database 11.2.0.4.0 is certified on Linux x86-64 Red Hat Enterprise Linux 6

For SuSE, check the file below for the installed OS version

cat /etc/SuSE-release

For Solaris, check the file below for the installed OS version

cat /etc/release

b. Collect the information before the upgrade


b.1) Generate AWR reports for the 4 days prior to the upgrade, so that performance can be monitored and compared after the upgrade

SQL>@?/rdbms/admin/awrrpt.sql

b.2) Take a backup of the GI home and RDBMS home using the tar command

Log in to the database server and switch to the root user

sudo su -

For Linux/SuSe
# cd <Tar backup location>
# tar -cvzf gridhome.tar.gz $GRID_HOME
# tar -cvzf rdbmshome.tar.gz $ORACLE_HOME

For Solaris
# cd <Tar backup location>
# tar -cvf gridhome.tar $GRID_HOME
# tar -cvf rdbmshome.tar $ORACLE_HOME

Ensure there is enough space (about 50 GB) in the backup location for the GRID and RDBMS/ORACLE home backups.

Get the GRID_HOME and RDBMS_HOME locations from

For Solaris: /var/opt/oracle/oratab

For Linux: /etc/oratab
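For example, the registered SIDs and homes can be listed as follows (use /var/opt/oracle/oratab on Solaris):

grep -v "^#" /etc/oratab | awk -F: '{print $1, $2}'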

b.3) In the case of ASM, collect the OCR and voting disk group names; these will be used during installation of the cluster

Log in to the database server and switch to the oracle user

sudo su - oracle
Source the environment to the ASM instance and execute the commands below

$ ocrcheck

$ crsctl query css votedisk

Check the current version of Grid Infrastructure (the commands below are applicable for a standalone grid home)

crsctl query has softwareversion


crsctl query has releaseversion

Check the resource status

crsctl stat res -t

c) Check current version of Grid Infrastructure


We do not have any 11.2.0.2.0 grid infrastructure homes; if one exists, patch 12539000 must be applied to the 11.2.0.2.0 Grid infrastructure home and the 11.2.0.2.0 DB home before upgrading.
There is a mandatory patch for 11.2.0.2.0; check the note below before continuing.

Things to Consider Before Upgrading to 11.2.0.3/11.2.0.4 Grid Infrastructure/ASM (Doc ID 1363369.1)

3) Upgrade grid infrastructure home


a) Stop goldengate processes
If a Goldengate setup exists for the DBs, stop it.
Log in to the database server, switch to the oracle user, and source the environment
sudo su - oracle
cd <GG_HOME>
./ggsci
stop all
stop mgr

For example:
cd /goldengate/GG_SF10
./ggsci
stop all
stop mgr

Note: Repeat this step if more than one database in this cluster is configured for goldengate

b) Stop the database(s) which are registered with cluster as resource using
below command
Log in to the database server and switch to the root user
sudo su -
cd <GRID INFRASTRUCTURE HOME>/bin
./crsctl stop resource <resource name>

For example:
cd /app/oracle/11.2.0/grid/bin
./crsctl stop resource DC10PRD1-db

Note: Repeat this step if more than one database is registered with the cluster

c) Set memory_max_target to 4GB and memory_target to 2GB, if not


already set
Log in to the database server and switch to the oracle user
sudo su - oracle
Source the environment to ASM
Log in to the ASM instance
sqlplus "/ as sysasm"

Check for the current setting

SQL> show parameter memory_target

If the value is smaller than 1536m, issue the following:

SQL> alter system set memory_max_target=4G scope=spfile;

SQL> alter system set memory_target=2G scope=spfile;

No manual restart of the cluster is required for these values to take effect; the rootupgrade.sh script will bounce the cluster, which applies them.

Note: Repeat this step on all the cluster nodes.
Reference document for this step:

Things to Consider Before Upgrading to 11.2.0.3/11.2.0.4 Grid Infrastructure/ASM (Doc ID 1363369.1)

d) Prepare the shell environment


Prepare the shell environment, including DISPLAY, before beginning the software installation (as the grid owner, oracle):
Log in to the database server and switch to the oracle user
sudo su - oracle

[grid@rac1]$ export LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib

[grid@rac1]$ unset ORA_CRS_HOME

[grid@rac1]$ unset GRID_HOME

[grid@rac1]$ unset ORACLE_BASE

[grid@rac1]$ unset ORACLE_HOME

[grid@rac1]$ unset ORACLE_SID

Start the vnc using below command and set the display

vncserver

Based on output from above command, replace appropriate port number in below command

[grid@rac1]$ export DISPLAY=<server name>:<port>.0

e) Install the clusterware using Oracle Universal Installer for new 11.2.0.4:
Log in to the database server and switch to the oracle user
sudo su - oracle

Change the directory to grid software unzip location

cd <grid software unzip location>

For example, if the software was downloaded to /dumpfiles/11204_software and unzipped there, execute

cd /dumpfiles/11204_software/grid

[grid@rac1 grid]$ ./runInstaller

The installer has a series of screens; below is a synopsis of the screens and the values to
provide for each prompt:

1. Skip software updates


2. Upgrade Oracle Grid Infrastructure or Oracle ASM

3. Select language: English

4. Select all cluster nodes which will appear (will get different messages during this process)

5. Select asmdba, asmoper and asmadmin from 3 different dropdowns (Specify oinstall for all the three)

6. Need to specify Installation location like /app/oracle/11.2.0.4/grid or whichever applicable (Message:


Retrieving ASM cluster file system info)

7. It will continue with Pre-requisite check (It will check for the sufficient swap space)

8. Next screen will give summary like Global settings etc.

9. Next it will do Install product (showing prepare, copy files, link binaries, setup files, execute root script
etc. on screen and its status)

10. Next screen it will give message: script need to be executed -- rootupgrade.sh (To be executed on all
nodes manually as per the order mentioned in the installer window)

Please note: before running the rootupgrade.sh script on SUSE Linux Enterprise Server 11 (x86_64) (patch level 11.3 or 11.2), execute the steps below, then run rootupgrade.sh.

Note: The steps below are not required for other OSes such as Solaris or Red Hat.

Login into server as root user

Change the directory to new grid home

cd /app/oracle/11.2.0.4/grid/bin

mv acfsroot acfsroot.backup

mv acfsdriverstate acfsdriverstate.backup

Note: This should be done on all cluster nodes, and then rootupgrade.sh should be run on all nodes.

Below is sample command to check SUSE Linux version

oracle@dc17orasc1n2[DC17SBX1]$ cat /etc/SuSE-release

SUSE Linux Enterprise Server 11 (x86_64)

VERSION = 11

PATCHLEVEL = 3

oracle@dc17orasc1n2[DC17SBX1]$ cat /etc/SuSE-release

SUSE Linux Enterprise Server 11 (x86_64)

VERSION = 11
PATCHLEVEL = 2

f) Post checks
Log in to the database server and switch to the oracle user
sudo su - oracle

Source the environment to grid home (/app/oracle/11.2.0.4/grid)

crsctl check crs

crs_stat -t

crsctl query crs softwareversion

crsctl query crs activeversion

The above commands should show the version as 11.2.0.4

Ensure the local and SCAN listeners are running in this cluster

With this, the Grid infrastructure home upgrade is complete.

g) Apply PSU (Apply Patch 20485808 - Oracle Grid Infrastructure Patch Set
Update 11.2.0.4.6 (Apr2015) (Includes Database PSU 11.2.0.4.6) to grid
home)
Refer below section in the document to apply the patch

Section: 6.1 Apply Patch 20485808 - Oracle Grid Infrastructure Patch Set Update
11.2.0.4.6 (Apr2015) (Includes Database PSU 11.2.0.4.6) to grid home

h) Move ORACLE_HOME binaries

Log in to the database server and switch to the root user

sudo su -

And stop the cluster on all the nodes

cd /app/oracle/11.2.0.4/grid/bin

./crsctl stop crs

Note: Ensure cluster is stopped on all the nodes

Create script rsync_app.sh with below content in /home/oracle

#!/usr/bin/ksh
PATH=/usr/bin:/usr/local/bin:$PATH
export PATH
/usr/bin/rsync -vaz --delete --progress --exclude '.snapshot' /app /app1

Run the script in nohup mode

cd /home/oracle
chmod 755 rsync_app.sh
nohup ./rsync_app.sh &

Ensure the binaries are fully synced before executing the steps below
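One way to confirm the sync is complete (a sketch, using the same source and target as the script above) is to re-run rsync in dry-run mode; it should report no remaining file transfers:

/usr/bin/rsync -vazn --delete --exclude '.snapshot' /app /app1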

Ask infra team to unmount /app and mount /app1 file system as /app

And start the cluster on all the nodes

cd /app/oracle/11.2.0.4/grid/bin

./crsctl start crs

Note: Ensure cluster is running on all the nodes

Note: Verify whether any changes are required for Solaris (rsync options), as the steps above were executed on SuSE Linux.

i) Move ocr and voting files to DG_VOTE disk group (this step is applicable where the ocr and voting files reside in an ASM disk group; otherwise skip this step)

Create the disk group DG_VOTE with normal redundancy using asmca utility

Execute below steps (moving voting and ocr disks) as root user from first cluster node

1) Move the voting disk to asm disk group

1.1)- Check the Voting Disk status

cd /app/oracle/11.2.0.4/grid/bin

./crsctl query css votedisk

Example:
oracle@dc10bizxprddb01[+ASM1]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 57a96ad861be4f0cbf4c2805730dd3d9 (ORCL:BIZXDB_PRD_DG3_01) [DG3]
Located 1 voting disk(s).

1.2) Move the voting disk to DG_VOTE disk group with the crsctl command:
cd /app/oracle/11.2.0.4/grid/bin

./crsctl replace votedisk +DG_VOTE

Example steps
The commands below are run as the root user on the first cluster node

/app/oracle/11.2.0.4/grid/bin>./crsctl query css votedisk


## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 57a96ad861be4f0cbf4c2805730dd3d9 (ORCL:BIZXDB_PRD_DG3_01) [DG3]
Located 1 voting disk(s).

/app/oracle/11.2.0.4/grid/bin>./crsctl replace votedisk +DG_VOTE


Successful addition of voting disk 0e94c454bea34f2cbf45d8eb1e05720e.
Successful addition of voting disk 6c6da961f98e4f51bf20c225050f187e.
Successful addition of voting disk 5ffeaab6e2694f4fbfc444caae575b5e.
Successful deletion of voting disk 57a96ad861be4f0cbf4c2805730dd3d9.
Successfully replaced voting disk group with +DG_VOTE.
CRS-4266: Voting file(s) successfully replaced

/app/oracle/11.2.0.4/grid/bin>./crsctl query css votedisk


## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 0e94c454bea34f2cbf45d8eb1e05720e (ORCL:BIZXDB_PRD_DG_VOTE_01)
[DG_VOTE]
2. ONLINE 6c6da961f98e4f51bf20c225050f187e (ORCL:BIZXDB_PRD_DG_VOTE_02)
[DG_VOTE]
3. ONLINE 5ffeaab6e2694f4fbfc444caae575b5e (ORCL:BIZXDB_PRD_DG_VOTE_03)
[DG_VOTE]
Located 3 voting disk(s).
/app/oracle/11.2.0.4/grid/bin>

1.3) Check the voting disk on ASM after the migration/move using the command below
cd /app/oracle/11.2.0.4/grid/bin
./crsctl query css votedisk
2) Move the ocr disk to asm disk group

2.1) Check the current OCR File status


cd /app/oracle/11.2.0.4/grid/bin
./ocrcheck

Example:
./ocrcheck
Status of Oracle Cluster Registry is as follows:
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3212
Available space (kbytes) : 258908
ID : 2096919195
Device/File Name : +DG2
Device/File integrity check succeeded
Device/File Name : +DG3
Device/File integrity check succeeded
Device/File Name : +DG1
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

2.2) Add new OCR by providing ASM diskgroup

cd /app/oracle/11.2.0.4/grid/bin
./ocrconfig -add +DG_VOTE

Example steps
/app/oracle/11.2.0.4/grid/bin>./ocrcheck
Status of Oracle Cluster Registry is as follows:
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3312
Available space (kbytes) : 258808
ID : 2096919195
Device/File Name : +DG2
Device/File integrity check succeeded
Device/File Name : +DG3
Device/File integrity check succeeded
Device/File Name : +DG1
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

/app/oracle/11.2.0.4/grid/bin>./ocrconfig -add +DG_VOTE


/app/oracle/11.2.0.4/grid/bin>echo $?
0
/app/oracle/11.2.0.4/grid/bin>./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3312
Available space (kbytes) : 258808
ID : 2096919195
Device/File Name : +DG2
Device/File integrity check succeeded
Device/File Name : +DG3
Device/File integrity check succeeded
Device/File Name : +DG1
Device/File integrity check succeeded
Device/File Name : +DG_VOTE
Device/File integrity check succeeded

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

2.3) Run ocrcheck again after adding asmdg for OCR


cd /app/oracle/11.2.0.4/grid/bin
./ocrcheck

/app/oracle/11.2.0.4/grid/bin>./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3312
Available space (kbytes) : 258808
ID : 2096919195
Device/File Name : +DG2
Device/File integrity check succeeded
Device/File Name : +DG3
Device/File integrity check succeeded
Device/File Name : +DG1
Device/File integrity check succeeded
Device/File Name : +DG_VOTE
Device/File integrity check succeeded

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

2.4) Delete the old OCR with the ocrconfig command:

cd /app/oracle/11.2.0.4/grid/bin
./ocrconfig -delete +DG3

Example steps
Note: In the ocrcheck output above, three disk groups are displayed (+DG1, +DG2 and +DG3); remove all three using the commands below

/app/oracle/11.2.0.4/grid/bin>./ocrconfig -delete +DG2

/app/oracle/11.2.0.4/grid/bin>./ocrconfig -delete +DG3

/app/oracle/11.2.0.4/grid/bin>./ocrconfig -delete +DG1

/app/oracle/11.2.0.4/grid/bin>echo $?
0

After deleting all, check now, we have only one disk group (DG_VOTE)

/app/oracle/11.2.0.4/grid/bin>./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3324
Available space (kbytes) : 258796
ID : 2096919195
Device/File Name : +DG_VOTE
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

2.5) Re-run ocrcheck again after deleting the old OCR


cd /app/oracle/11.2.0.4/grid/bin
./ocrcheck

/app/oracle/11.2.0.4/grid/bin >./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3324
Available space (kbytes) : 258796
ID : 2096919195
Device/File Name : +DG_VOTE
Device/File integrity check succeeded

Device/File not configured


Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

2.6) Verify that the OCR location points to the new disk group (DG_VOTE) on both nodes; otherwise change it as shown below

# cat /etc/oracle/ocr.loc
#Device/file +DG1 getting replaced by device +DG_VOTE
ocrconfig_loc=+DG_VOTE
local_only=false

3) Bounce the cluster on all the nodes of the cluster as root user

cd /app/oracle/11.2.0.4/grid/bin
./crsctl stop crs
./crsctl start crs

j) Need to re-configure resource for new grid home

Copy the “public” directory files from 11.2.0.3.0 grid infrastructure home to 11.2.0.4.0 grid
infrastructure home as root user

For example:
cp -r /app/11.2.0/grid/public /app/oracle/11.2.0.4/grid

Change the directory

cd /app/oracle/11.2.0.4/grid

Change the ownership

chown -R oracle:oinstall public

Edit the resource file <ORACLE_SID>-db.resourcefile.txt for the below entry


ACTION_SCRIPT=<script location>
For example:
Resource file: DC10PRD1-db.resourcefile.txt and change the below entry
ACTION_SCRIPT=/app/11.2.0/grid/public/active_passive_DC10PRD1-db.sh

To
ACTION_SCRIPT=/app/oracle/11.2.0.4/grid/public/active_passive_DC10PRD1-db.sh

Note: Repeat above steps on all the cluster nodes

Delete the resource


crsctl delete resource <resource name>

For example:
crsctl delete resource DC10PRD1-db

Add the resource

As root user run following command from the active node to add resource to the cluster

/app/oracle/11.2.0.4/grid/bin>./crsctl add resource DC10PRD1-db -type cluster-resource -file /app/oracle/11.2.0.4/grid/public/DC10PRD1-db.resourcefile.txt

Note: Repeat this section if more than one database is registered as a resource with this cluster

k) Bring up the database(s) which are registered with the cluster as a resource


Log in to the database server and switch to the root user
sudo su -
cd <GRID INFRASTRUCTURE NEW HOME>/bin   # 11.2.0.4.0 home location

For example:
cd /app/oracle/11.2.0.4/grid/bin

./crsctl start resource <resource name> -n <node name>

Perform a health check and review the alert log for any errors.
Note: Repeat this step if more than one database is registered with the cluster

Ensure the local and SCAN listeners are running and that all database(s) running in this cluster are registered with them
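A minimal health-check sketch (the alert log path is illustrative; the exact diag destination depends on the database):

cd /app/oracle/11.2.0.4/grid/bin
./crsctl stat res <resource name> -t                  # resource should show ONLINE on the expected node
ps -ef | grep pmon                                    # instance background processes are running
tail -100 <diag trace directory>/alert_<SID>.log      # check for ORA- errors after startup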

l) Bring-up goldengate processes


If a Goldengate setup exists for the DBs, start it.
Log in to the database server, switch to the oracle user, and source the environment
sudo su - oracle
cd <GG_HOME>
./ggsci
start mgr
start *

For example:
cd /goldengate/GG_SF10
./ggsci
start mgr
start *
Note#1: Repeat this step if more than one database in this cluster is configured for goldengate, and ensure goldengate is catching up.
Note#2: Verify on the target side (Audit and WFA) that goldengate is catching up.

4) Upgrade RDBMS home


We will be installing the 11.2.0.4 software into a new software home:
‘/app/oracle/product/11.2.0.4/dbhome_1.’

4.1 Install RDBMS software

a) Bring-down the goldengate

Stop goldengate processes


If a Goldengate setup exists for the DBs, stop it.
Log in to the database server, switch to the oracle user, and source the environment
sudo su - oracle
cd <GG_HOME>
./ggsci
stop all
stop mgr

For example:
cd /goldengate/GG_SF10
./ggsci
stop all
stop mgr

Note: Repeat this step, if you’re upgrading more than one database in this cluster
b) Create Target Directories
Make the target installation directory on all nodes:

[oracle@rac1]$ mkdir -p /app/oracle/product/11.2.0.4/dbhome_1

Repeat above steps in all nodes of the cluster.
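For example, the directory can be created on the remaining nodes from the first node (a sketch; rac2 and rac3 are the example node names used elsewhere in this document):

for node in rac2 rac3; do
  ssh $node "mkdir -p /app/oracle/product/11.2.0.4/dbhome_1"
done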

c) Prepare Environment:
Prepare the shell environment – including DISPLAY – before beginning the software
installation (as the ‘oracle’ owner):

[oracle@rac1]$ unset ORACLE_HOME

[oracle@rac1]$ unset ORACLE_BASE

[oracle@rac1]$ unset ORACLE_SID

[oracle@rac1]$ export LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib

[oracle@rac1]$ export
PATH=.:/usr/local/java/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/
oracle/bin:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin

[oracle@rac1]$ export DISPLAY=<server name>:<port>.0

Here we are unsetting key variables and removing the OLD ORACLE_HOME from the
variables PATH and LD_LIBRARY_PATH.

d) Install the Software:

Install the new oracle database software by invoking the Oracle Universal Installer:

Change the directory to rdbms software unzip location

cd <rdbms software unzip location>

For example, if the software was downloaded to /dumpfiles/11204_software and unzipped there, execute

cd /dumpfiles/11204_software/database

[oracle@rac1 database]$ ./runInstaller


The installer has a series of screens; below is a synopsis of the screens and the values to
provide for each prompt:

Screen: Response

Configure Security Updates: Uncheck 'I wish to receive security updates via My Oracle Support'
Download Software Updates: 'Skip Software Updates'
Select Installation Option: 'Install database software only'
Grid Installation Options: 'Single instance database installation'
Select Product Languages: 'English'
Select Database Edition: 'Enterprise Edition'
Specify Installation Location: '/app/oracle/product/11.2.0.4/dbhome_1'; 'Next'
Privileged Operating System Groups: 'Next'
Summary: 'Next'
Perform Prerequisite Checks: 'Install'
Summary: 'Next'
Install Product: N/A

When prompted run the ‘root.sh’ script on each of the nodes as ‘root’:

[oracle@rac1]$ /app/oracle/product/11.2.0.4/dbhome_1/root.sh

4.2. Manually Upgrade Database(s):

Pre-upgrade Information Tool: Copy utlu112i.sql from the 11.2.0.4.0 RDBMS home to the 11.2.0.3 home and then run it

a) Run utlu112i.sql script to determine whether the database is ready for


upgrade:
Login to database via the existing ORACLE_HOME and run the ‘utlu112i.sql’ script to
determine whether the database is ready to be upgraded:

SQL> @ /app/oracle/product/11.2.0/dbhome_1/rdbms/admin/utlu112i.sql

Take any corrective actions that are necessary per the output. (For example, I needed to
change the Sysaux tablespace size etc.)

b) Collect the database information


Collect the number of invalid objects

SQL> select object_name from dba_objects where status='INVALID';


Check for DBA_REGISTRY and note down the component version and status

col comp_name format a40

col version format a10

col status format a10

select comp_name, version, status from dba_registry;

c) Migrate instances files to NEW Oracle Home:


Move Instance files like Init* and Password files associated with the instance to new Oracle
home.

[oracle@rac1]$ export NEW_ORACLE_HOME=/app/oracle/product/11.2.0.4/dbhome_1

[oracle@rac1]$ cp $ORACLE_HOME/dbs/initracdb* $NEW_ORACLE_HOME/dbs/.

[oracle@rac1]$ cp $ORACLE_HOME/dbs/orapwrac* $NEW_ORACLE_HOME/dbs/.

In case of ASM, we need to get spfile from disk group and modify as per suggestions.

Note: In this case NEW_ORACLE_HOME means: New 11.2.0.4.0 RDBMS home


(/app/oracle/product/11.2.0.4/dbhome_1)

d) Create restore point and Stop Database/Service(s) using crsctl


command:

Create restore point before the upgrade

Following are the current setting

SQL> show parameter db_recovery_file;

NAME TYPE VALUE

------------------------------------ ----------- ------------------------------


db_recovery_file_dest string

db_recovery_file_dest_size big integer 0

Set below parameters

SQL> alter system set db_recovery_file_dest_size=1024G;

SQL> alter system set db_recovery_file_dest='<archive log destination>';

After the setting

SQL> show parameter db_reco;

NAME TYPE VALUE

------------------------------------ ----------- ------------------------------

db_recovery_file_dest string +DG4

db_recovery_file_dest_size big integer 1024G

Log in to the database, source the environment, and execute the command below to create the restore point

SQL> create restore point before_db_upgrade guarantee flashback database;

Note: We need to drop the restore point one business day after the DC

e) Stop the database we wish to upgrade, including any dependent client-facing services

Stop the database to be upgraded, which is registered with the cluster as a resource, using the commands below
Log in to the database server and switch to the root user
sudo su -
cd <GRID INFRASTRUCTURE HOME>/bin
./crsctl stop resource <resource name>
f) Run the upgrade script

Source the environment and ensure new ORACLE_HOME is set

export ORACLE_SID=racdb1; . oraenv

Verify shell environment uses NEW Oracle database software home:

[oracle@rac1]$ export ORACLE_SID=<SID>

[oracle@rac1]$ export ORACLE_HOME=/app/oracle/product/11.2.0.4/dbhome_1

[oracle@rac1]$ export PATH=.:/usr/local/java/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin:/app/oracle/product/11.2.0.4/dbhome_1/bin:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin

[oracle@rac1]$ export LD_LIBRARY_PATH=/app/oracle/product/11.2.0.4/dbhome_1/lib:/lib:/usr/lib:/usr/local/lib

Mount the database exclusively and startup in ‘Upgrade’ mode:

[oracle@rac1] sqlplus "/ as sysdba"

SQL> startup upgrade;

Run catupgrd.sql and catuppst.sql as part of post upgrade:

Run the ‘catupgrd.sql’ script to upgrade the database:

SQL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql

Upon completion, ‘startup’ the database to start a new, fresh instance:

[oracle@rac1]$ sqlplus "/ as sysdba"

SQL> startup
Run ‘catuppst.sql’ script to perform upgrade steps that don’t require being in
‘Upgrade’ mode:

SQL> @$ORACLE_HOME/rdbms/admin/catuppst.sql

Run the post-upgrade status tool:

SQL> @$ORACLE_HOME/rdbms/admin/utlu112s.sql

Recompile the database and check for any invalid objects:

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

SQL> select count(*) from dba_objects where status='INVALID';

5) Post-upgrade tasks after applying patchset:

a) Perform sanity test and verify successful upgrade to 11.2.0.4:

After applying the patchset, ASM and CRS should show 11.2.0.4 as their “active” versions:
[grid@rac1]$ sqlplus "/ as sysasm"
SQL> select version from v$instance;
VERSION
-----------------
11.2.0.4

[grid@rac1]$ $GRID_HOME/bin/crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [11.2.0.4.0]

Verify that the database and services are operational and running on the new version.

SQL> select * from v$version;

SQL> select name,open_mode from v$database;

b) Check for Invalid objects and re-compile them if required:

SQL> select * from dba_objects where status='INVALID';


If there are Invalid objects then we need to compile these Invalid objects
using below SQL:

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

Note: Compare the number of invalid objects before and after the upgrade and ensure no new invalid objects were introduced by the upgrade.

c) Setting /etc/oratab to Point to the New Oracle Home:

After upgrading Oracle Database to the new release, ensure that the /etc/oratab file and any client scripts that set the value of ORACLE_HOME point to the new Oracle home created for the Oracle Database 11.2.0.4 release, as illustrated below.
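For illustration (DC10PRD1 is the example SID used elsewhere in this document; the actual entries depend on the environment), an /etc/oratab entry changes from

DC10PRD1:/app/oracle/product/11.2.0/dbhome_1:N

to

DC10PRD1:/app/oracle/product/11.2.0.4/dbhome_1:N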

d) Update .bash_profile with appropriate new database information:

[oracle@rac1]$ sed -i -e 's/11.2.0/11.2.0.4/g' $HOME/.bash_profile

Perform this task on all nodes.
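If the profile might already contain 11.2.0.4 strings, a more targeted substitution avoids accidental double replacement (a sketch, assuming the standard home paths used in this document):

sed -i -e 's|/app/oracle/product/11.2.0/|/app/oracle/product/11.2.0.4/|g' $HOME/.bash_profile
sed -i -e 's|/app/oracle/11.2.0/grid|/app/oracle/11.2.0.4/grid|g' $HOME/.bash_profile    # only if the profile also references the grid home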

e) Check DBA_REGISTRY information:

Check that DBA_REGISTRY displays information about the components loaded into the database and shows the new Oracle version. Ensure that each component status is VALID.

Component version should be 11.2.0.4

col comp_name format a40

col version format a10

col status format a10

Select comp_name, version, status from dba_registry;


In case of successful upgrade, for all component names it should display version as
11.2.0.4 and status as VALID.

f) Apply PSU (Patch 20299013 - Database Patch Set Update 11.2.0.4.6


(Includes CPUApr2015))
Refer section 6.2 Patch 20299013 - Database Patch Set Update 11.2.0.4.6 (Includes
CPUApr2015) to apply the patch

g) Apply oneoff patch (17501296)

Login into server as oracle user

Bring-down the database using cluster resource

Stop the database(s) which are registered with cluster as resource using below
command
Log in to the database server and switch to the root user
sudo su -
cd <GRID INFRASTRUCTURE HOME>/bin
./crsctl stop resource <resource name>

Change the directory to following software download location and unzip the patch

cd <software download location>

For example:

cd /dumpfiles/11204_software

And unzip the patch

unzip p17501296_112040_Generic.zip

Set following environment variables

Source the environment to RDBMS home

# export PATH=/app/oracle/product/11.2.0.4/dbhome_1/OPatch:$PATH

Execute below command to apply the patch

cd /dumpfiles/11204_software/17501296
# opatch apply -ocmrf /app/oracle/product/11.2.0.4/dbhome_1/OPatch/ocm/bin/ocm.rsp
Note: Need to execute this step on each cluster node of RDBMS home

Post patching health check

For RDBMS home

Check if patch is applied or not for RDBMS home

Execute following command

Source the environment to RDBMS home and execute below command for the patch
verification

$ opatch lsinventory -invPtrLoc $ORACLE_HOME/oraInst.loc | grep 17501296

The command should display patch # 17501296 as part of the output, otherwise
patch is not applied to the environment

Run the below post step

Bring up the database(s) which are registered with the cluster as a resource, using the commands below

Log in to the database server and switch to the root user
sudo su -
cd <GRID INFRASTRUCTURE NEW HOME>/bin   # 11.2.0.4.0 home location
./crsctl start resource <resource name> -n <node name>
Execute below post steps

$ sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @?/sqlpatch/17501296/postinstall.sql

h) Bring-up goldengate processes

If a Goldengate setup exists for the DBs, start it.

Log in to the database server, switch to the oracle user, and source the environment

Set the parameter below in the database

ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION = TRUE SCOPE=BOTH;
Note: No database bounce is required

sudo su - oracle
cd <GG_HOME>
./ggsci
start mgr
start *

For example:
cd /goldengate/GG_SF10
./ggsci
start mgr
start *

Note#1: Repeat this step if more than one database in this cluster is configured for goldengate, and ensure goldengate is catching up.
Note#2: Verify on the target side (Audit and WFA) that goldengate is catching up, as sketched below.
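A quick way to confirm goldengate is catching up (a sketch using the example GG home above; group names vary by environment):

cd /goldengate/GG_SF10
./ggsci
GGSCI> info all

All manager, extract and replicat processes should show status RUNNING, with the "Lag at Chkpt" and "Time Since Chkpt" values trending down.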

i) Gather fixed object stats and dictionary stats using the commands below, and continue with the following steps while they are running
SQL> Exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
SQL> Exec DBMS_STATS.GATHER_DICTIONARY_STATS;

j) Modify RMAN scripts (level 0, level 1 and archive) to point to the new RDBMS home:

Note: This step is applicable only for RMAN backups where an ASM instance is running

Note: If we are taking backups using dboost, a plug-in needs to be installed for the new home; steps will be added here

Log in to the server and switch to the oracle user

Get the location of the rman scripts

crontab -l | grep rman

Change the ORACLE_HOME path in the scripts. This needs to be changed in the level 0, level 1 and archive log backup scripts. An example is given below for the level 0 script, followed by a sed sketch for updating all three.

For example:

/home/oracle>cat /home/oracle/tusc/tuscms/site/DC10AUD1_level0_backup.sh

/usr/bin/perl /home/oracle/tusc/tuscms/bin/pl/oradb_rman.pl --
rcv=/home/oracle/tusc/tuscms/site/rcv/rman_level0_backup.rcv --target=/ --
disk_channels=1 --
channel_operands_1="FORMAT='/dumpfiles/rman/DC10AUD1/cf_%U_%t'" --
tape_channels=4 --channel_operands_2="RATE 200M FORMAT='%U_%t'" --
send_cmd="send 'NB_ORA_POLICY=dc10-rmanbkp-
DC10AUD1,NB_ORA_CLIENT=dc10bizxprddb02-nbu';" --oracle_sid=DC10AUD1 --
oracle_home=/app/oracle/product/11.2.0/dbhome_1 --logscan=NONE

/home/oracle>

To

/home/oracle>cat /home/oracle/tusc/tuscms/site/DC10AUD1_level0_backup.sh

/usr/bin/perl /home/oracle/tusc/tuscms/bin/pl/oradb_rman.pl --
rcv=/home/oracle/tusc/tuscms/site/rcv/rman_level0_backup.rcv --target=/ --
disk_channels=1 --
channel_operands_1="FORMAT='/dumpfiles/rman/DC10AUD1/cf_%U_%t'" --
tape_channels=4 --channel_operands_2="RATE 200M FORMAT='%U_%t'" --
send_cmd="send 'NB_ORA_POLICY=dc10-rmanbkp-
DC10AUD1,NB_ORA_CLIENT=dc10bizxprddb02-nbu';" --oracle_sid=DC10AUD1 --
oracle_home=/app/oracle/product/11.2.0.4/dbhome_1 --logscan=NONE

/home/oracle>
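A sed sketch for updating all three scripts at once, assuming the level 1 and archive scripts follow the same naming pattern as the level 0 script shown above:

cd /home/oracle/tusc/tuscms/site
sed -i 's|/app/oracle/product/11.2.0/dbhome_1|/app/oracle/product/11.2.0.4/dbhome_1|g' DC10AUD1_*_backup.sh
grep oracle_home DC10AUD1_*_backup.sh     # confirm every script now points to the 11.2.0.4 home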

Note: Ensure a level 0 backup is completed for the upgraded database

k) Clone the ORACLE_HOMEs on all the cluster nodes, if the same ORACLE_HOME does not exist

Tar the 11.2.0.4 RDBMS home, copy it to the remaining cluster nodes, and clone it using the steps below

Log in to the server and switch to the oracle user

cd /app/oracle/product/11.2.0.4
tar cvf dbhome_1.tar dbhome_1

Using scp, copy the dbhome_1.tar file to the remaining cluster nodes into the directory /app/oracle/product/11.2.0.4

Log in to the remaining cluster nodes and execute the steps below to clone the ORACLE_HOME

cd /app/oracle/product/11.2.0.4

tar xvf dbhome_1.tar

Execute below command to clone the RDBMS home

perl ${ORACLE_HOME}/clone/bin/clone.pl ORACLE_BASE=$ORACLE_BASE ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=${DB_NAME} OSDBA_GROUP=${ORACLE_DBA_GROUP} OSOPER_GROUP=${ORACLE_DBA_GROUP} OSASM_GROUP=${ORACLE_DBA_GROUP} -O-invPtrLoc -O$ORACLE_HOME/oraInst.loc -OLOCAL_NODE=`hostname`

Get the value of the ORACLE_HOME_NAME variable from the node where the upgrade was done

cat <inventory location>/ContentsXML/inventory.xml

Example:

cat /app/oracle/oraInventory/ContentsXML/inventory.xml

<HOME NAME="OraDb11g_home1" LOC="/app/oracle/product/11.2.0/dbhome_1" TYPE="O"


IDX="2"/>

From above output, get the value of HOME NAME, which is matching with RDBMS
ORACLE_HOME of 11.2.0.4

Example command

perl /app/oracle/product/11.2.0.4/dbhome_1/clone/bin/clone.pl ORACLE_BASE=/app/oracle/base ORACLE_HOME=/app/oracle/product/11.2.0.4/dbhome_1 ORACLE_HOME_NAME=OraDb11g_home1 OSDBA_GROUP=oinstall OSOPER_GROUP=oinstall OSASM_GROUP=oinstall -O-invPtrLoc -O/app/oracle/product/11.2.0.4/dbhome_1/oraInst.loc -OLOCAL_NODE=`hostname`
l) Create link to library (libobk.so
(/app/oracle/product/11.2.0.4/dbhome_4/lib/libobk.so) for the
successful backup

Note: If RMAN backups are taken using dboost, there is no need to execute the step below

Check in the 11.2.0.3.0 home (OLD HOME) whether this library link was created, using the command below

cd /app/oracle/product/11.2.0/dbhome_1/lib

/app/oracle/product/11.2.0/dbhome_1/lib>ls -l libobk.so

ls: libobk.so: No such file or directory

If you get a message like the one above ("ls: libobk.so: No such file or directory"), skip this step; otherwise continue with the steps below

/app/oracle/product/11.2.0/dbhome_1/lib>ls -l libobk.so

lrwxrwxrwx 1 oracle oinstall 36 Aug 16 02:53 libobk.so -> /usr/openv/netbackup/bin/libobk.so64


/app/oracle/product/11.2.0.4/dbhome_1/lib>

If you get the output above, the library link exists in the old home, so create the link in the new 11.2.0.4 home using the steps below

Create link using below command

/app/oracle/product/11.2.0.4/dbhome_1/lib>ln -s /usr/openv/netbackup/bin/libobk.so64
libobk.so

After creation of the link

/app/oracle/product/11.2.0.4/dbhome_1/lib>ls -l libobk.so

lrwxrwxrwx 1 oracle oinstall 36 Aug 16 02:53 libobk.so -> /usr/openv/netbackup/bin/libobk.so64


/app/oracle/product/11.2.0.4/dbhome_1/lib>

m) If there is catalog db, need to upgrade catalog schema


Once the DB upgrade is complete, upgrade the catalog schema (if backups use a catalog database), or else backups will fail.
rman target / catalog rcat_dc10prd1/mrbackup@rcatdc10
RMAN> upgrade catalog;
RMAN> upgrade catalog;        (RMAN asks for the command to be entered a second time to confirm)
RMAN> resync catalog;
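To confirm the catalog schema version after the upgrade (a sketch; RCVER is the version table in the catalog owner's schema):

sqlplus rcat_dc10prd1/mrbackup@rcatdc10
SQL> select * from rcver;

The version listed should now correspond to 11.02.00.04.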

n) Remove OLD Database Home and remove directories:


Once all databases have been upgraded, the OLD Oracle Home can be deleted from all nodes using the following steps.

"Detach" the software home from all nodes:

[oracle@rac1]$ /app/oracle/base/product/11.2.0/dbhome_2/oui/bin/runInstaller -detachHome ORACLE_HOME=/app/oracle/base/product/11.2.0/dbhome_2 -silent -local

Repeat the above step on all nodes of the cluster, or execute it by connecting to the remote nodes using ssh (as below):

[oracle@rac1]$ ssh rac2 "/app/oracle/base/product/11.2.0/dbhome_2/oui/bin/runInstaller -detachHome ORACLE_HOME=/app/oracle/base/product/11.2.0/dbhome_2 -silent -local"

[oracle@rac1]$ ssh rac3 "/app/oracle/base/product/11.2.0/dbhome_2/oui/bin/runInstaller -detachHome ORACLE_HOME=/app/oracle/base/product/11.2.0/dbhome_2 -silent -local"

Remove the software directory from all nodes:

[oracle@rac1]$ rm -Rf $OLD_ORACLE_HOME

[oracle@rac1]$ ssh rac2 "rm -Rf $OLD_ORACLE_HOME"

[oracle@rac1]$ ssh rac3 "rm -Rf $OLD_ORACLE_HOME"

With this, the database is upgraded to version 11.2.0.4.0.


6) Apply PSU patches to grid and rdbms home
6.1 Apply Patch 20485808 - Oracle Grid Infrastructure Patch Set
Update 11.2.0.4.6 (Apr2015) (Includes Database PSU 11.2.0.4.6) to
grid home
i. Stop the database(s) which are registered with cluster as resource using below
command
Log in to the database server and switch to the root user
sudo su -
cd <GRID INFRASTRUCTURE HOME>/bin
./crsctl stop resource <resource name>

ii. Unzip the patch downloaded in steps 1.b and 1.c to the location below
Execute below steps as root user

Create the directory grid_psu


mkdir -p <patch download location>/grid_psu

For example, if the patch download location is /dumpfiles/11204_software:

mkdir -p /dumpfiles/11204_software/grid_psu

Copy the patch file below into the directory /dumpfiles/11204_software/grid_psu, depending upon the platform

For SuSE, Red Hat
p20485808_112040_Linux-x86-64.zip

For Solaris
p20485808_112040_SOLARIS64.zip

unzip the patch file


cd /dumpfiles/11204_software/grid_psu

For Suse,red-hat
unzip p20485808_112040_Linux-x86-64.zip

For Solaris
unzip p20485808_112040_SOLARIS64.zip

Change the ownership and permissions using the commands below

chown -R oracle:oinstall /dumpfiles/11204_software/20485808
chmod -R 755 /dumpfiles/11204_software/20485808

iii. Execute below steps as root user on each cluster node


Copy the OPatch zip file below into the Grid infrastructure home (/app/oracle/11.2.0.4/grid), depending upon the platform

For SuSE, Red Hat
p6880880_112000_Linux-x86-64.zip
cp <opatch download location>/p6880880_112000_Linux-x86-64.zip /app/oracle/11.2.0.4/grid

for example
cp /dumpfiles/11204_software/p6880880_112000_Linux-x86-64.zip /app/oracle/11.2.0.4/grid

For Solaris
p6880880_112000_SOLARIS64.zip
cp <opatch download location>/p6880880_112000_SOLARIS64.zip /app/oracle/11.2.0.4/grid

for example
cp /dumpfiles/11204_software/p6880880_112000_SOLARIS64.zip /app/oracle/11.2.0.4/grid

cd /app/oracle/11.2.0.4/grid
Take a backup of the current OPatch directory
mv OPatch OPatch.old

For SuSE, Red Hat
unzip p6880880_112000_Linux-x86-64.zip

For Solaris
unzip p6880880_112000_SOLARIS64.zip

Change the ownership using the command below

chown -R oracle:oinstall OPatch

iv. One-off Patch Conflict Detection and Resolution


Log in to the database server and switch to the oracle user
sudo su - oracle
Source the environment to the 11.2.0.4.0 grid home (/app/oracle/11.2.0.4/grid)

export PATH=/app/oracle/11.2.0.4/grid/OPatch:$PATH

Change the directory to patch unzipped location


For example: <UNZIPPED_PATCH_LOCATION> is /dumpfiles/11204_software
cd /dumpfiles/11204_software/grid_psu/

Determine whether any currently installed one-off patches conflict with the PSU
patch as follows:

opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./20485808

Ensure there are no conflicts; otherwise stop the patching and inform the SDM

Note: The above needs to be executed on all the cluster nodes.

Example:

/dumpfiles/11204_software/grid_psu>opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./20485808

Oracle Interim Patch Installer version 11.2.0.3.10

Copyright (c) 2015, Oracle Corporation. All rights reserved.

PREREQ session

Oracle Home : /app/oracle/11.2.0.4/grid

Central Inventory : /app/oraInventory

from : /app/oracle/11.2.0.4/grid/oraInst.loc

OPatch version : 11.2.0.3.10

OUI version : 11.2.0.4.0

Log file location : /app/oracle/11.2.0.4/grid/cfgtoollogs/opatch/opatch2015-08-


15_01-11-50AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
v. Generate OCM Configuration

As the Grid home owner, execute:

/app/oracle/11.2.0.4/grid/OPatch/ocm/bin/emocmrsp

Just press "enter" at the first prompt and type Yes at the second prompt; the ocm.rsp file will be generated at the location specified above (/app/oracle/11.2.0.4/grid/OPatch/ocm/bin/ocm.rsp)

Change the permission

chmod 755 /app/oracle/11.2.0.4/grid/OPatch/ocm/bin/ocm.rsp

vi. Patch grid infrastructure (GI) home

Login into server as root user

Change the directory to following (<UNZIPPED_PATCH_LOCATION>)

$ cd <UNZIPPED_PATCH_LOCATION>

Set following environment variables

# export ORACLE_HOME=/app/oracle/11.2.0.4/grid

# export PATH=/app/oracle/11.2.0.4/grid/OPatch:$PATH

# export LD_LIBRARY_PATH=/app/oracle/11.2.0.4/grid/lib

Execute below command to apply the patch

cd /dumpfiles/11204_software/grid_psu

# opatch auto <UNZIPPED_PATCH_LOCATION>/20485808 -ocmrf <ocm response file> -oh <GRID ORACLE_HOME>

For example

As root user

cd /dumpfiles/11204_software/grid_psu

# opatch auto /dumpfiles/11204_software/grid_psu/20485808 -ocmrf /app/oracle/11.2.0.4/grid/OPatch/ocm/bin/ocm.rsp -oh /app/oracle/11.2.0.4/grid/
Note: Need to execute this step on each cluster node

Example output

/dumpfiles/11204_software/grid_psu/20485808>opatch auto
/dumpfiles/11204_software/grid_psu/20485808 -ocmrf
/app/oracle/11.2.0.4/grid/OPatch/ocm/bin/ocm.rsp -oh
/app/oracle/11.2.0.4/grid

Executing /app/oracle/11.2.0.4/grid/perl/bin/perl
/app/oracle/11.2.0.4/grid/OPatch/crs/patch11203.pl -patchdir
/dumpfiles/11204_software/grid_psu -patchn 20485808 -ocmrf
/app/oracle/11.2.0.4/grid/OPatch/ocm/bin/ocm.rsp -oh
/app/oracle/11.2.0.4/grid -paramfile
/app/oracle/11.2.0.4/grid/crs/install/crsconfig_params

This is the main log file: /app/oracle/11.2.0.4/grid/cfgtoollogs/opatchauto2015-08-15_01-33-59.log

This file will show your detected configuration and all the steps that opatchauto
attempted to do on your system:

/app/oracle/11.2.0.4/grid/cfgtoollogs/opatchauto2015-08-15_01-33-59.report.log

2015-08-15 01:33:59: Starting Clusterware Patch Setup

Using configuration parameter file: /app/oracle/11.2.0.4/grid/crs/install/crsconfig_params

Stopping CRS...

Stopped CRS successfully

patch /dumpfiles/11204_software/grid_psu/20485808/20299013 apply successful for home /app/oracle/11.2.0.4/grid

patch /dumpfiles/11204_software/grid_psu/20485808/20420937 apply successful for home /app/oracle/11.2.0.4/grid

patch /dumpfiles/11204_software/grid_psu/20485808/20299019 apply successful for home /app/oracle/11.2.0.4/grid

Starting CRS...
Installing Trace File Analyzer

CRS-4123: Oracle High Availability Services has been started.

opatch auto succeeded.

/dumpfiles/11204_software/grid_psu/20485808>

vii. Post patching health check


For Grid infrastructure (GI) home

Check whether the patch is applied to the Grid infrastructure (GI) home for a cluster

Source the environment to the Grid infrastructure (GI) home and execute the below
command for the patch verification

$ opatch lsinventory

The command should display patches 20299019, 20420937 and 20299013 as part
of the output; otherwise, the patch has not been applied to the environment

Note: For a Grid infrastructure (GI) home for a cluster, this step needs to be
repeated on all the cluster nodes
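
As a shortcut, all three patch numbers can be checked in one pass (a sketch; it assumes the environment is already sourced to the GI home so that opatch resolves to the grid OPatch):

for p in 20299019 20420937 20299013
do
  opatch lsinventory | grep -q $p && echo "Patch $p : applied" || echo "Patch $p : NOT FOUND"
done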

Execute the below commands to ensure the cluster is healthy after the patch application

crsctl check crs

crs_stat -t

crsctl query crs activeversion

crsctl query crs softwareversion

6.2 Patch 20299013 - Database Patch Set Update 11.2.0.4.6 (Includes CPUApr2015)

i. Unzip the patch downloaded in step 1.b (20299013) to location


<UNZIPPED_PATCH_LOCATION>

Create the directory rdbms_psu


mkdir -p <PATCH_download LOCATION>/rdbms_psu
For example: <UNZIPPED_PATCH_LOCATION> is /dumpfiles/11204_software
mkdir -p /dumpfiles/11204_software/rdbms_psu

Copy the below patch file into the directory /dumpfiles/11204_software/rdbms_psu,


depending on the platform
For SUSE or Red Hat
p20299013_112040_Linux-x86-64.zip

For Solaris
p20299013_112040_SOLARIS64.zip

unzip the patch file


cd /dumpfiles/11204_software/rdbms_psu

For SUSE or Red Hat


unzip p20299013_112040_Linux-x86-64.zip

For Solaris
unzip p20299013_112040_SOLARIS64.zip

ii. Execute the below steps as the root user on each cluster node


Copy the below OPatch zip file into the RDBMS home
(/app/oracle/product/11.2.0.4/dbhome_1), depending on the platform

For SUSE or Red Hat
p6880880_112000_Linux-x86-64.zip
cp <opatch download location>/p6880880_112000_Linux-x86-64.zip
/app/oracle/product/11.2.0.4/dbhome_1

for example
cp /dumpfiles/11204_software/p6880880_112000_Linux-x86-64.zip
/app/oracle/product/11.2.0.4/dbhome_1

For Solaris
p6880880_112000_SOLARIS64.zip
cp <opatch download location>/p6880880_112000_SOLARIS64.zip
/app/oracle/product/11.2.0.4/dbhome_1

for example
cp /dumpfiles/11204_software/p6880880_112000_SOLARIS64.zip
/app/oracle/product/11.2.0.4/dbhome_1

cd /app/oracle/product/11.2.0.4/dbhome_1
Take a backup of the current OPatch directory
mv OPatch OPatch.old

For SUSE or Red Hat
unzip p6880880_112000_Linux-x86-64.zip

For Solaris
unzip p6880880_112000_SOLARIS64.zip

Change the ownership using the below command


chown -R oracle:oinstall /app/oracle/product/11.2.0.4/dbhome_1/OPatch
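
To confirm the replaced OPatch in the RDBMS home, its version can be checked (a quick verification sketch; the examples below show version 11.2.0.3.10):

/app/oracle/product/11.2.0.4/dbhome_1/OPatch/opatch version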

iii. One-off Patch Conflict Detection and Resolution


Log in to the database server and switch to the oracle user
sudo su - oracle

Source the environment to the 11.2.0.4.0 RDBMS home


/app/oracle/product/11.2.0.4/dbhome_1

export PATH=/app/oracle/product/11.2.0.4/dbhome_1/OPatch:$PATH

cd <UNZIPPED_PATCH_LOCATION>

cd /dumpfiles/11204_software/rdbms_psu

Determine whether any currently installed one-off patches conflict with the PSU
patch as follows:

opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./20299013

Ensure there are no conflicts; otherwise, stop the patching and inform the SDM

Note: The above needs to be executed on all the cluster nodes.

Example:

/dumpfiles/11204_software/rdbms_psu>opatch prereq
CheckConflictAgainstOHWithDetail -phBaseDir ./20299013

Oracle Interim Patch Installer version 11.2.0.3.10


Copyright (c) 2015, Oracle Corporation. All rights reserved.

PREREQ session

Oracle Home : /app/oracle/product/11.2.0.4/dbhome_1

Central Inventory : /app/oraInventory

from : /app/oracle/product/11.2.0.4/dbhome_1/oraInst.loc

OPatch version : 11.2.0.3.10

OUI version : 11.2.0.4.0

Log file location :
/app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/opatch2015-08-16_03-49-54AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

/dumpfiles/11204_software/rdbms_psu>

iv. Generate OCM Configuration

As the RDBMS home owner execute:

/app/oracle/product/11.2.0.4/dbhome_1/OPatch/ocm/bin/emocmrsp

Press "Enter" at the first prompt and type "Yes" at the second prompt; the ocm.rsp file will
be generated in the above specified directory
(/app/oracle/product/11.2.0.4/dbhome_1/OPatch/ocm/bin/ocm.rsp)

Change the permission

chmod 755 /app/oracle/product/11.2.0.4/dbhome_1/OPatch/ocm/bin/ocm.rsp

v. Patch RDBMS home

Log in to the server as the oracle user

Change the directory to the following (<UNZIPPED_PATCH_LOCATION>)

$ cd <UNZIPPED_PATCH_LOCATION>/20299013
For example:

cd /dumpfiles/11204_software/rdbms_psu/20299013

Set following environment variables

# export PATH=/app/oracle/product/11.2.0.4/dbhome_1/OPatch:$PATH

Source the environment to RDBMS home

Execute the below command to apply the patch

cd /dumpfiles/11204_software/rdbms_psu/20299013
# opatch apply -ocmrf
/app/oracle/product/11.2.0.4/dbhome_1/OPatch/ocm/bin/ocm.rsp
Note: This step needs to be executed on each cluster node of the RDBMS home

Refer to the patch application output in APPENDIX B, at the bottom of the document
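
Before answering the "Is the local system ready for patching?" prompt (shown in APPENDIX B), it can help to verify that no instances are still running out of this home on the node. A minimal check, assuming the standard ora_pmon background process naming:

ps -ef | grep "[o]ra_pmon"
# No output means no database instances are running on this node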

vi. Patch Post-Installation Instructions


Bring up the database(s) which are registered with the cluster as resources
Log in to the database server and switch to the root user
sudo su -
cd <GRID INFRASTRUCTURE NEW HOME>/bin # 11.2.0.4.0 home location
./crsctl start resource <resource name> -n <node name>
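For example (the resource and node names below are hypothetical placeholders; substitute the actual names shown by crsctl status resource -t):
cd /app/oracle/11.2.0.4/grid/bin
./crsctl start resource ora.orcl.db -n dbnode01
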
Execute the below post steps

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @catbundle.sql psu apply
SQL> QUIT
The catbundle.sql execution is reflected in the dba_registry_history view by a row
associated with bundle series PSU.

Run utlrp.sql to validate objects

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @utlrp.sql

vii. Post patching health check


For RDBMS home
Check whether the patch is applied to the RDBMS home

Source the environment to the RDBMS home and execute the below command for the
patch verification

$ opatch lsinventory -invPtrLoc $ORACLE_HOME/oraInst.loc | grep 20299013

The command should display patch 20299013 as part of the output; otherwise, the
patch has not been applied to the environment

Note: This step needs to be repeated on all the cluster nodes of the RDBMS home

Execute the below commands to ensure the DB is healthy after the patch application

Check the instance status; it should be OPEN


col instance_name format a10
col host_name format a40
col status format a5
select INSTANCE_NAME,HOST_NAME,STATUS from gv$instance;

Check number of invalid objects in the instance


Log in to the database instance and run the below query to check invalid objects
# sqlplus "/ as sysdba"
SQL> select object_name from dba_objects where status='INVALID';
There should not be any extra invalid objects after applying the patch
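
A quick way to compare against the pre-patch state is a simple count (a sketch; it assumes the same count was recorded before the patch was applied):

SQL> select count(*) from dba_objects where status='INVALID';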

Check registry of the database


Log in to the database instance and run the below query to check the registry
# sqlplus "/ as sysdba"
col action format a35
col version format a10
col comments format a30
select action,version,comments from dba_registry_history;

The query should display the row associated with the applied PSU series
Check the alert log for any issues.

With this, the patching activity is completed


APPENDIX A
After the upgrade to 11.2.0.4.0, you may find a couple of components in INVALID state.

In our case, the below components were in INVALID state and were made VALID using the steps
mentioned below

Oracle Application Express 3.2.1.00.12 VALID

Oracle Multimedia 11.2.0.4.0 VALID

Oracle XML Database 11.2.0.4.0 VALID

Oracle Text 11.2.0.4.0 VALID

Oracle Expression Filter 11.2.0.4.0 VALID

Oracle Rules Manager 11.2.0.4.0 VALID

Oracle Workspace Manager 11.2.0.4.0 VALID

Oracle Database Catalog Views 11.2.0.4.0 VALID

Oracle Database Packages and Types 11.2.0.4.0 VALID

JServer JAVA Virtual Machine 11.2.0.4.0 VALID

Oracle XDK 11.2.0.4.0 VALID

Oracle Database Java Packages 11.2.0.4.0 VALID

Please use the below steps to fix the issue


1) Execute the following SQL repeatedly while connected as SYS AS SYSDBA. It may report
invalid objects; make the reported invalid objects valid by compiling them (see the
example after the script). Repeat the below steps until you receive the message
"CATPROC can be validated now"

sqlplus "/ as sysdba"

REM ***************
REM CHECKVALID.SQL
REM ***************
set serveroutput on;
declare
start_time date;
end_time date;
object_name varchar(100);
object_id char(10);
begin
SELECT date_loading, date_loaded into start_time, end_time FROM registry$ WHERE cid =
'CATPROC';
SELECT obj#,name into object_id,object_name FROM obj$ WHERE status > 1 AND
(ctime BETWEEN start_time AND end_time OR
mtime BETWEEN start_time AND end_time OR
stime BETWEEN start_time AND end_time) AND
ROWNUM <=1;
dbms_output.put_line('Please compile Invalid object '||object_name||' Object_id '||object_id );
EXCEPTION
WHEN NO_DATA_FOUND THEN
dbms_output.put_line('CATPROC can be validated now' );
end;
/
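
For example, if the script reports an invalid package body, it can be recompiled as follows (the owner and object name here are illustrative only; compile whichever object the script actually reports):

SQL> alter package CTXSYS.CTX_DDL compile body;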

2) Once you receive the message "CATPROC can be validated now", as below:
REM ***************
REM CHECKVALID.SQL
REM ***************
set serveroutput on;
declare
start_time date;
end_time date;
object_name varchar(100);
object_id char(10);
begin
SELECT date_loading, date_loaded into start_time, end_time FROM registry$ WHERE cid =
'CATPROC';
SELECT obj#,name into object_id,object_name FROM obj$ WHERE status > 1 AND
(ctime BETWEEN start_time AND end_time OR
mtime BETWEEN start_time AND end_time OR
stime BETWEEN start_time AND end_time) AND
ROWNUM <=1;
dbms_output.put_line('Please compile Invalid object '||object_name||'
Object_id '||object_id );
EXCEPTION
WHEN NO_DATA_FOUND THEN
dbms_output.put_line('CATPROC can be validated now' );
end;
/
CATPROC can be validated now

PL/SQL procedure successfully completed.

SQL>

3) Then execute the below step by connecting as sysdba


sqlplus "/ as sysdba"
SQL> execute DBMS_REGISTRY_SYS.VALIDATE_CATPROC;

PL/SQL procedure successfully completed.

4) Then compile the invalid objects using utlrp.sql


sqlplus "/ as sysdba"
SQL> @?/rdbms/admin/utlrp.sql

5) Then check the component status using the below query; all components should be in VALID state (see the filtered query after this step)
col comp_name format a40
col version format a10
col status format a10
select comp_name, version, status from dba_registry;
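
To list only the components that still need attention, the same query can be filtered (a minor variation on the query above):

col comp_name format a40
col version format a10
col status format a10
select comp_name, version, status from dba_registry where status <> 'VALID';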

APPENDIX B
Below is the output of step 6.2.v (Patch 20299013 - Database Patch Set Update 11.2.0.4.6
(Includes CPUApr2015))

/dumpfiles/11204_software/rdbms_psu/20299013>opatch apply -ocmrf
/app/oracle/product/11.2.0.4/dbhome_1/OPatch/ocm/bin/ocm.rsp

Oracle Interim Patch Installer version 11.2.0.3.10

Copyright (c) 2015, Oracle Corporation. All rights reserved.

Oracle Home : /app/oracle/product/11.2.0.4/dbhome_1

Central Inventory : /app/oraInventory

from : /app/oracle/product/11.2.0.4/dbhome_1/oraInst.loc

OPatch version : 11.2.0.3.10

OUI version : 11.2.0.4.0

Log file location : /app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/opatch2015-08-16_03-51-54AM_1.log

Verifying environment and performing prerequisite checks...

OPatch continues with these patches: 17478514 18031668 18522509 19121551 19769489
20299013

Do you want to proceed? [y|n]

User Responded with: Y

All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.

(Oracle Home = '/app/oracle/product/11.2.0.4/dbhome_1')

Is the local system ready for patching? [y|n]

User Responded with: Y

Backing up files...

Applying sub-patch '17478514' to OH '/app/oracle/product/11.2.0.4/dbhome_1'

Patching component oracle.rdbms, 11.2.0.4.0...

Patching component oracle.rdbms.rsf, 11.2.0.4.0...

Patching component oracle.sdo, 11.2.0.4.0...

Patching component oracle.sysman.agent, 10.2.0.4.5...


Patching component oracle.xdk, 11.2.0.4.0...

Patching component oracle.rdbms.dbscripts, 11.2.0.4.0...

Patching component oracle.sdo.locator, 11.2.0.4.0...

Patching component oracle.nlsrtl.rsf, 11.2.0.4.0...

Patching component oracle.xdk.rsf, 11.2.0.4.0...

Patching component oracle.rdbms.rman, 11.2.0.4.0...

Verifying the update...

Applying sub-patch '18031668' to OH '/app/oracle/product/11.2.0.4/dbhome_1'

Patching component oracle.rdbms, 11.2.0.4.0...

Patching component oracle.rdbms.rsf, 11.2.0.4.0...

Patching component oracle.ldap.rsf, 11.2.0.4.0...

Patching component oracle.rdbms.crs, 11.2.0.4.0...

Patching component oracle.precomp.common, 11.2.0.4.0...


Patching component oracle.ldap.rsf.ic, 11.2.0.4.0...

Patching component oracle.rdbms.deconfig, 11.2.0.4.0...

Patching component oracle.rdbms.dbscripts, 11.2.0.4.0...

Patching component oracle.rdbms.rman, 11.2.0.4.0...

Verifying the update...

Applying sub-patch '18522509' to OH '/app/oracle/product/11.2.0.4/dbhome_1'

Patching component oracle.rdbms.rsf, 11.2.0.4.0...

Patching component oracle.rdbms, 11.2.0.4.0...

Patching component oracle.precomp.common, 11.2.0.4.0...

Patching component oracle.rdbms.rman, 11.2.0.4.0...

Patching component oracle.rdbms.dbscripts, 11.2.0.4.0...

Patching component oracle.rdbms.deconfig, 11.2.0.4.0...

Verifying the update...


Applying sub-patch '19121551' to OH '/app/oracle/product/11.2.0.4/dbhome_1'

Patching component oracle.precomp.common, 11.2.0.4.0...

Patching component oracle.sysman.console.db, 11.2.0.4.0...

Patching component oracle.rdbms.rsf, 11.2.0.4.0...

Patching component oracle.rdbms.rman, 11.2.0.4.0...

Patching component oracle.rdbms, 11.2.0.4.0...

Patching component oracle.rdbms.dbscripts, 11.2.0.4.0...

Patching component oracle.ordim.client, 11.2.0.4.0...

Patching component oracle.ordim.jai, 11.2.0.4.0...

Verifying the update...

Applying sub-patch '19769489' to OH '/app/oracle/product/11.2.0.4/dbhome_1'

ApplySession: Optional component(s) [ oracle.sysman.agent, 11.2.0.4.0 ] not present in the Oracle
Home or a higher version is found.

Patching component oracle.precomp.common, 11.2.0.4.0...

Patching component oracle.ovm, 11.2.0.4.0...


Patching component oracle.xdk, 11.2.0.4.0...

Patching component oracle.rdbms.util, 11.2.0.4.0...

Patching component oracle.rdbms, 11.2.0.4.0...

Patching component oracle.rdbms.dbscripts, 11.2.0.4.0...

Patching component oracle.xdk.parser.java, 11.2.0.4.0...

Patching component oracle.oraolap, 11.2.0.4.0...

Patching component oracle.rdbms.rsf, 11.2.0.4.0...

Patching component oracle.xdk.rsf, 11.2.0.4.0...

Patching component oracle.rdbms.rman, 11.2.0.4.0...

Patching component oracle.rdbms.deconfig, 11.2.0.4.0...

Verifying the update...

Applying sub-patch '20299013' to OH '/app/oracle/product/11.2.0.4/dbhome_1'

Patching component oracle.rdbms.dv, 11.2.0.4.0...


Patching component oracle.rdbms.oci, 11.2.0.4.0...

Patching component oracle.precomp.common, 11.2.0.4.0...

Patching component oracle.sysman.agent, 10.2.0.4.5...

Patching component oracle.xdk, 11.2.0.4.0...

Patching component oracle.sysman.common, 10.2.0.4.5...

Patching component oracle.rdbms, 11.2.0.4.0...

Patching component oracle.rdbms.dbscripts, 11.2.0.4.0...

Patching component oracle.xdk.parser.java, 11.2.0.4.0...

Patching component oracle.sysman.console.db, 11.2.0.4.0...

Patching component oracle.xdk.rsf, 11.2.0.4.0...

Patching component oracle.rdbms.rsf, 11.2.0.4.0...

Patching component oracle.sysman.common.core, 10.2.0.4.5...


Patching component oracle.rdbms.rman, 11.2.0.4.0...

Patching component oracle.rdbms.deconfig, 11.2.0.4.0...

Verifying the update...

Composite patch 20299013 successfully applied.

Log file location: /app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/opatch2015-08-16_03-51-54AM_1.log

OPatch succeeded.

/dumpfiles/11204_software/rdbms_psu/20299013>

APPENDIX C
The below needs to be executed for LMS DBs after the upgrade to
11.2.0.4.0
Hi Rolta,

For upcoming upgrades for LMS databases from 11.2.0.3 to 11.2.0.4:

Please include the workaround for bug 17501296 as a must-do action item after upgrades.

The workaround is to recreate the procedure as described in the Oracle notes.

Bug 17501296

UNABLE TO DELETE ROWS FROM TABLE WITH TEXT INDEX AFTER UPGRADE TO
11.2.0.4

Hdr: 17501296 11.2.0.4 DRGEN 11.2.0.4 PRODID-211 PORTID-226 PLS-306 13806179




PROBLEM:
--------
After database upgrade to 11.2.0.4 cannot delete any rows from a table with
context index due to PLS-306 error:

SQL> delete from foo where a=1;

1 row deleted.

SQL> commit;
commit
*
ERROR at line 1:
ORA-604: error occurred at recursive SQL level 1
ORA-6550: line 1, column 7:
PLS-306: wrong number or types of arguments in call to 'SYNCRN'
ORA-6550: line 1, column 7:
PL/SQL: Statement ignored

DIAGNOSTIC ANALYSIS:
--------------------
Commit callback procedure syncrn is not in sync with 11.2.0.4 C-code, smallr
argument is missing

SQL> select text from dba_source
where name = 'SYNCRN' and owner = 'CTXSYS'
order by line;

TEXT
-------------------------------------------------------------
procedure syncrn (
ownid IN binary_integer,
oname IN varchar2,
idxid IN binary_integer,
ixpid IN binary_integer,
rtabnm IN varchar2,
srcflg IN binary_integer
)
authid definer
as external
name "comt_cb"
library dr$lib
with context
parameters(
context,
ownid ub4,
oname OCISTRING,
idxid ub4,
ixpid ub4,
rtabnm OCISTRING,
srcflg ub1
);

22 rows selected.

WORKAROUND:
-----------
Recreate procedure syncrn as below:

connect / as sysdba
alter session set current_schema=CTXSYS;
create or replace procedure syncrn (
ownid IN binary_integer,
oname IN varchar2,
idxid IN binary_integer,
ixpid IN binary_integer,
rtabnm IN varchar2,
srcflg IN binary_integer,
smallr IN binary_integer
)
authid definer
as external
name "comt_cb"
library dr$lib
with context
parameters(
context,
ownid ub4,
oname OCISTRING,
idxid ub4,
ixpid ub4,
rtabnm OCISTRING,
srcflg ub1,
smallr ub1
);
/
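
After recreating the procedure, verify that the new smallr argument is now part of the CTXSYS.SYNCRN source (a verification sketch; a non-zero count confirms the recreated signature). The delete/commit test from the problem description above should then complete without the PLS-306 error.

sqlplus "/ as sysdba"
SQL> select count(*) from dba_source where owner='CTXSYS' and name='SYNCRN' and text like '%smallr%';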