
Upgrade to Oracle 11.2.0.2 from Oracle 11.2.0.1, Part II (Solaris 10 x86-64)


In this article we look at the guidelines and steps to upgrade Oracle GI and RDBMS from 11.2.0.1 to 11.2.0.2. The database will be upgraded using dbua. The configuration is a two-node Solaris 10 x86-64 cluster described here. For a Linux-based upgrade to Oracle 11.2.0.2 from Oracle 11.2.0.1, click here. For a smooth upgrade to 11.2.0.2 it is worth checking MOS for patches that need to be applied to the existing GI home prior to the upgrade. At the time of writing, patch 9706490 had to be applied to the existing 11.2.0.1 Oracle GI home. In brief, 11.2.0.2 comes as patch 10098816, downloadable from MOS. Starting with 11.2.0.2, all patch sets are full installations. Oracle supports GI upgrade as an out-of-place upgrade only, and an out-of-place upgrade is recommended for the RDBMS as well. In this article both GI and RDBMS will be upgraded into new GI and RDBMS Oracle homes.

Useful MOS notes:

1. Pre-requisite for 11.2.0.1 to 11.2.0.2 ASM Rolling Upgrade [ID 1274629.1]
2. How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure on Linux/Unix [ID 969254.1]
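Before starting, it does not hurt to confirm what is already installed in the existing 11.2.0.1 GI home. A minimal sketch, run as the grid user against the old GI home used in this cluster (adjust paths to your environment):

/u01/app/11.2.0/grid/bin/crsctl query crs activeversion
/u01/app/11.2.0/grid/OPatch/opatch lsinventory -oh /u01/app/11.2.0/grid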

Download and unzip the following patches in a stage area:

10098816 - Oracle 11.2.0.2
6880880 - OPatch
9706490 - prerequisite for a successful 11.2.0.2 upgrade
12311357 - latest patch (to 11.2.0.2.2) as of the time of writing
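A hedged sketch of staging the patches follows; the zip file names and the /stage/11202 path are illustrative only (the exact names depend on what you download from MOS), and OPatch from patch 6880880 is typically unzipped directly into the Oracle home it will patch:

# file names and stage path are examples only
mkdir -p /stage/11202
cd /stage/11202
unzip p10098816_112020_SOLARIS86-64_1of7.zip
unzip p9706490_112010_SOLARIS86-64.zip
unzip p12311357_112020_SOLARIS86-64.zip
# refresh OPatch in the existing 11.2.0.1 GI home before applying 9706490
cd /u01/app/11.2.0/grid
unzip -o /stage/11202/p6880880_112000_SOLARIS86-64.zip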

Failure to patch the 11.2.0.1 GI home with patch 9706490 will produce errors during the OUI session:

1. INS-40406. Useful info: ASM 11gR2: INS-40406 Upgrading ASM Instance To Release 11.2.0.1.0 [ID 1117063.1]

2. INS-30060

3. While running rootupgrade.sh:

bash-3.00# /u01/app/11.2.0.2/grid/rootupgrade.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:

Creating /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
Creating trace directory
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
The fixes for bug 9413827 are not present in the 11.2.0.1 crs home
Apply the patches for these bugs in the 11.2.0.1 crs home and then run rootupgrade.sh
/u01/app/11.2.0.2/grid/perl/bin/perl -I/u01/app/11.2.0.2/grid/perl/lib -I/u01/app/11.2.0.2/grid/crs/install /u01/app/11.2.0.2/grid/crs/install/rootcrs.pl execution failed
bash-3.00#

1. Patching Oracle GI 11.2.0.1 home with 9706490


The patch readme notes are clear, and the patch can be applied in a rolling fashion. After patching with 9706490, the OUI installation went smoothly.
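Before relaunching OUI it is worth confirming that the fix rootupgrade.sh complained about is now in the inventory. A hedged sketch, assuming your OPatch version supports the -bugs_fixed option; run as the grid user against the old GI home:

/u01/app/11.2.0/grid/OPatch/opatch lsinventory -oh /u01/app/11.2.0/grid -bugs_fixed | grep 9413827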

2. Clean up after the failed install.


If you have attempted to install 11.2.0.2 and rootupgrade.sh has failed, then you should look at MOS note How to Proceed from Failed Upgrade to 11gR2 Grid Infrastructure on Linux/Unix [ID 969254.1]. To clean up after the failed GI install, run deinstall from NEW_GI_HOME/deinstall and then run the following on each node (this detaches and re-attaches the existing 11.2.0.1 GI home in the central inventory):

Node: sol1

/u01/app/11.2.0/grid/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=/u01/app/11.2.0/grid
/u01/app/11.2.0/grid/oui/bin/runInstaller -silent -local -ignoreSysPrereqs -attachHome ORACLE_HOME=/u01/app/11.2.0/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 LOCAL_NODE=sol1 CLUSTER_NODES=sol1,sol2 CRS=true
unset ORACLE_HOME

Node: sol2

/u01/app/11.2.0/grid/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=/u01/app/11.2.0/grid
/u01/app/11.2.0/grid/oui/bin/runInstaller -silent -local -ignoreSysPrereqs -attachHome ORACLE_HOME=/u01/app/11.2.0/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 LOCAL_NODE=sol2 CLUSTER_NODES=sol1,sol2 CRS=true
unset ORACLE_HOME

Note that error INS-40406 was due to the failed GI upgrade not being deinstalled.
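Going back to the deinstall step mentioned above, a minimal sketch of the invocation itself is shown below; the tool is interactive and MOS note 969254.1 is the authoritative procedure, so treat this only as an illustration:

cd /u01/app/11.2.0.2/grid/deinstall
./deinstall    # answer the prompts to remove the partially configured 11.2.0.2 GI home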

3. Patching Oracle GI to 11.2.0.2


Create a new directory /u01/app/11.2.0.2/grid for the 11.2.0.2 GI home on all cluster nodes.

mkdir -p /u01/app/11.2.0.2/grid
chown -R grid:oinstall /u01/app/11.2.0.2/grid


Use cluvfy to verify that the prerequisites are met:

./runcluvfy.sh stage -pre crsinst -n sol1,sol2

Annex 1 shows the output. Run the installer from the stage area as the GI owner. I skipped the updates. Press Next to continue.
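For reference, the launch itself looks something like the following; /stage/11202/grid is an illustrative path for the unzipped GI piece of patch 10098816, and a reachable X display is assumed:

# as the grid user
export DISPLAY=:0.0    # or your workstation's X display
cd /stage/11202/grid
./runInstaller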

Select Upgrade option and press Next to continue.

Select Language and press Next to continue.

You will see the nodes where 11.2.0.2 will be installed. Press Next to continue.

Select the groups and press Next to continue.

Modify the GI home to point to the new location and press Next to continue.

Examine the results of the checks and press Next to continue if you are satisfied.

Press Install. Wait until prompted to execute rootupgrade.sh as root.

On the first node, sol1:

bash-3.00# /u01/app/11.2.0.2/grid/rootupgrade.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: y
Creating y directory...
Copying dbhome to y ...
Copying oraenv to y ...
Copying coraenv to y ...

Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
ASM upgrade has started on first node.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'sol1'
CRS-2673: Attempting to stop 'ora.crsd' on 'sol1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'sol1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'sol1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'sol1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'sol1' succeeded
CRS-2673: Attempting to stop 'ora.sol1.vip' on 'sol1'
CRS-2677: Stop of 'ora.sol1.vip' on 'sol1' succeeded
CRS-2672: Attempting to start 'ora.sol1.vip' on 'sol2'
CRS-2676: Start of 'ora.sol1.vip' on 'sol2' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'sol1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'sol1'
CRS-2677: Stop of 'ora.DATA.dg' on 'sol1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'sol1'
CRS-2677: Stop of 'ora.asm' on 'sol1' succeeded
CRS-2673: Attempting to stop 'ora.eons' on 'sol1'
CRS-2673: Attempting to stop 'ora.ons' on 'sol1'
CRS-2677: Stop of 'ora.ons' on 'sol1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'sol1'
CRS-2677: Stop of 'ora.net1.network' on 'sol1' succeeded
CRS-2677: Stop of 'ora.eons' on 'sol1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'sol1' has completed
CRS-2677: Stop of 'ora.crsd' on 'sol1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'sol1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'sol1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'sol1'
CRS-2673: Attempting to stop 'ora.evmd' on 'sol1'
CRS-2673: Attempting to stop 'ora.asm' on 'sol1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'sol1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'sol1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'sol1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'sol1' succeeded
CRS-2677: Stop of 'ora.asm' on 'sol1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'sol1'
CRS-2677: Stop of 'ora.cssd' on 'sol1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'sol1'
CRS-2673: Attempting to stop 'ora.diskmon' on 'sol1'
CRS-2677: Stop of 'ora.gpnpd' on 'sol1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'sol1'
CRS-2677: Stop of 'ora.diskmon' on 'sol1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'sol1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'sol1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deleted 1 keys from OCR.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
updating /platform/i86pc/boot_archive

ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
bash-3.00#

The output on sol2 is similar. Now the GI is upgraded.

bash-3.00# ./crsctl query crs softwareversion
Oracle Clusterware version on node [sol1] is [11.2.0.2.0]
bash-3.00# ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.2.0]
bash-3.00#

Use cluvfy to verify that the GI is properly installed:

cluvfy stage -post crsinst -n all

Refer to Annex 2 for the verification and script outputs.
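A few additional sanity checks can be run at this point. This is a hedged illustration using standard clusterware commands from the new GI home; it is not part of the original output:

/u01/app/11.2.0.2/grid/bin/crsctl check cluster -all
/u01/app/11.2.0.2/grid/bin/crsctl stat res -t
/u01/app/11.2.0.2/grid/bin/srvctl status asm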

4. Upgrade RDBMS

Run cluvfy to verify the prerequisites for the RDBMS installation:

cluvfy stage -pre dbinst -n all

Annex 3 shows the output. Make a directory on each node of the cluster for the RDBMS 11.2.0.2 binaries and set proper ownership:

mkdir -p /u01/app/oracle/product/11.2.0.2/db_1
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0.2/db_1

Start OUI using ./runInstaller from the stage area. Press Next to continue.

Select Skip update and press Next to continue.

Select Software only and press Next to continue.

Select all nodes and select RAC database. Press Next to continue.

Select language and press Next to continue.

Select Enterprise Edition and press Next to continue.

Select the newly created RDBMS OH location and press Next to continue.

Select the OS groups and press Next to continue.

Review the output and press Next to continue.

Press Next.

Wait until prompted to run root.sh as root. After executing root.sh on all the cluster nodes, press OK.
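For reference, root.sh lives in the new RDBMS home created earlier and is run as root on each node in turn (sol1, then sol2):

/u01/app/oracle/product/11.2.0.2/db_1/root.sh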

Press Close.

5. Upgrade the database using dbua

Update the profile so that ORACLE_HOME and PATH reflect the new Oracle home location, and make sure that a backup of the database exists. Start dbua. Press Next to continue.
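For illustration, the profile change for the oracle user might look like the following sketch; the path matches the RDBMS home created earlier, so adjust it to your environment:

# e.g. in the oracle user's .profile
ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/db_1
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_HOME PATH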

Select the database to upgrade and press Next to continue.

Enter the local instance name when prompted.

Enter the location of the pfile.

Press Next.

Specify the Flash Recovery Area (FRA) details and press Next to continue.

Select the type of EM configuration (Database Control, Grid Control, or none) and press Next to continue.

Specify the credentials as required. Press Next to continue.

Review the summary.

Press Finish. Wait for the upgrade to complete.

Wait until the end and review the outcome.
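As a hedged post-upgrade sanity check (not part of the original run), the registry components and the binary version can be queried from the new home, with ORACLE_SID set to the local instance:

sqlplus -s / as sysdba <<'EOF'
select comp_name, version, status from dba_registry;
select * from v$version;
EOF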

Annex 1:
-bash-3.00$ ./runcluvfy.sh stage -pre crsinst -n sol1,sol2

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "sol1"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...
Verification of the hosts config file successful

Node connectivity passed for subnet "10.0.0.0" with node(s) sol2,sol1
TCP connectivity check passed for subnet "10.0.0.0"

Node connectivity passed for subnet "192.168.2.0" with node(s) sol2,sol1
TCP connectivity check passed for subnet "192.168.2.0"

Node connectivity passed for subnet "10.10.10.0" with node(s) sol2,sol1
TCP connectivity check passed for subnet "10.10.10.0"

Node connectivity passed for subnet "192.168.56.0" with node(s) sol1
TCP connectivity check passed for subnet "192.168.56.0"

Interfaces found on subnet "10.0.0.0" that are likely candidates for a private interconnect are:
sol2 e1000g0:10.0.2.16
sol1 e1000g0:10.0.2.15

Interfaces found on subnet "192.168.2.0" that are likely candidates for a private interconnect are:
sol2 e1000g1:192.168.2.22 e1000g1:192.168.2.32
sol1 e1000g1:192.168.2.21 e1000g1:192.168.2.51 e1000g1:192.168.2.31

Interfaces found on subnet "10.10.10.0" that are likely candidates for a private interconnect are:
sol2 e1000g2:10.10.10.22
sol1 e1000g2:10.10.10.21

WARNING:
Could not find a suitable set of interfaces for VIPs

Node connectivity check passed

Checking OCR integrity...
OCR integrity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "sol2:/tmp"
Free disk space check passed for "sol1:/tmp"
Check for multiple users with UID value 1100 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "project.max-sem-ids"
Kernel parameter check passed for "process.max-sem-nsems"
Kernel parameter check passed for "project.max-shm-memory"
Kernel parameter check passed for "project.max-shm-ids"
Kernel parameter check passed for "tcp_smallest_anon_port"
Kernel parameter check passed for "tcp_largest_anon_port"
Kernel parameter check passed for "udp_smallest_anon_port"
Kernel parameter check passed for "udp_largest_anon_port"
Package existence check passed for "SUNWarc-...( i386)"
Package existence check passed for "SUNWbtool-...( i386)"
Package existence check passed for "SUNWhea-...( i386)"
Package existence check passed for "SUNWlibm-...( i386)"
Package existence check passed for "SUNWlibms-...( i386)"
Package existence check passed for "SUNWsprot-...( i386)"
Package existence check passed for "SUNWtoo-...( i386)"
Package existence check passed for "SUNWi1of-...( i386)"
Package existence check passed for "SUNWi15cs-...( i386)"
Package existence check passed for "SUNWxwfnt-...( i386)"
Package existence check passed for "SUNWlibC-...( i386)"
Package existence check passed for "SUNWcsl-...( i386)"
Operating system patch check passed for "Patch 139575-03"
Operating system patch check passed for "Patch 139556-08"
Operating system patch check passed for "Patch 137104-02"
Operating system patch check passed for "Patch 120754-06"
Operating system patch check passed for "Patch 119961-05"
Operating system patch check passed for "Patch 119964-14"
Operating system patch check passed for "Patch 141415-04"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes

Time zone consistency check passed

Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed

Starting check for The SSH LoginGraceTime setting ...
PRVE-0037 : LoginGraceTime setting passed on node "sol2"
PRVE-0037 : LoginGraceTime setting passed on node "sol1"
Check for The SSH LoginGraceTime setting passed

Pre-check for cluster services setup was successful.
-bash-3.00$

Annex 2:
bash-3.00# ./crsctl query crs softwareversion
Oracle Clusterware version on node [sol1] is [11.2.0.2.0]
bash-3.00# ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.2.0]
bash-3.00#

The errors from cluvfy are acceptable:

-bash-3.00$ cluvfy stage -post crsinst -n all -verbose

Performing post-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "sol2"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  sol1                                  yes
  sol2                                  yes
Result: Node reachability check passed from node "sol2"

Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Comment
  ------------------------------------  ------------------------
  sol2                                  passed
  sol1                                  passed
Result: User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  sol2          passed
  sol1          passed
Verification of the hosts config file successful

Interface information for node "sol2" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- -----e1000g0 10.0.2.16 10.0.0.0 10.0.2.16 UNKNOWN 08:00:27:C8:5A:32 1500 e1000g0 169.254.83.42 169.254.0.0 169.254.83.42 UNKNOWN 08:00:27:C8:5A:32 1500 e1000g1 192.168.2.22 192.168.2.0 192.168.2.22 UNKNOWN 08:00:27:9F:07:27 1500 e1000g1 192.168.2.32 192.168.2.0 192.168.2.22 UNKNOWN 08:00:27:9F:07:27 1500 e1000g2 10.10.10.22 10.10.10.0 10.10.10.22 UNKNOWN 08:00:27:F9:E9:ED 1500

Interface information for node "sol1" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- -----e1000g0 10.0.2.15 10.0.0.0 10.0.2.15 UNKNOWN 08:00:27:55:88:6E 1500 e1000g0 169.254.9.33 169.254.0.0 169.254.9.33 UNKNOWN 08:00:27:55:88:6E 1500 e1000g1 192.168.2.21 192.168.2.0 192.168.2.21 UNKNOWN 08:00:27:1F:86:D8 1500 e1000g1 192.168.2.51 192.168.2.0 192.168.2.21 UNKNOWN 08:00:27:1F:86:D8 1500 e1000g1 192.168.2.31 192.168.2.0 192.168.2.21 UNKNOWN 08:00:27:1F:86:D8 1500 e1000g2 10.10.10.21 10.10.10.0 10.10.10.21 UNKNOWN 08:00:27:FE:CF:A1 1500 e1000g3 192.168.56.51 192.168.56.0 192.168.56.51 UNKNOWN 08:00:27:09:C0:56 1500

Check: Node connectivity for interface "e1000g0" Source Destination Connected? ------------------------------ ------------------------------ ---------------sol2[10.0.2.16] sol1[10.0.2.15] yes Result: Node connectivity passed for interface "e1000g0" Check: Node connectivity for interface "e1000g1" Source Destination Connected? ------------------------------ ------------------------------ ---------------sol2[192.168.2.22] sol2[192.168.2.32] yes sol2[192.168.2.22] sol1[192.168.2.21] yes sol2[192.168.2.22] sol1[192.168.2.51] yes sol2[192.168.2.22] sol1[192.168.2.31] yes sol2[192.168.2.32] sol1[192.168.2.21] yes sol2[192.168.2.32] sol1[192.168.2.51] yes sol2[192.168.2.32] sol1[192.168.2.31] yes sol1[192.168.2.21] sol1[192.168.2.51] yes sol1[192.168.2.21] sol1[192.168.2.31] yes sol1[192.168.2.51] sol1[192.168.2.31] yes Result: Node connectivity passed for interface "e1000g1" Result: Node connectivity check passed Check: Time zone consistency Result: Time zone consistency check passed Checking Cluster manager integrity...

Checking CSS daemon...
  Node Name                             Status
  ------------------------------------  ------------------------
  sol2                                  running
  sol1                                  running
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed

Check default user file creation mask
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  sol2          0022                      0022                      passed
  sol1          0022                      0022                      passed
Result: Default user file creation mask check passed

Checking cluster integrity...
  Node Name
  ------------------------------------
  sol1
  sol2
Cluster integrity check passed

Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations

ASM Running check passed. ASM is running on all specified nodes

Checking OCR config file "/var/opt/oracle/ocr.loc"...
OCR config file "/var/opt/oracle/ocr.loc" check successful

ERROR:
PRVF-4195 : Disk group for ocr location "+DATA" not available on the following nodes:

NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed

Checking CRS integrity...
The Oracle Clusterware is healthy on node "sol2"
The Oracle Clusterware is healthy on node "sol1"
CRS integrity check passed

Checking node application existence...

Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  sol2          yes                       yes                       passed
  sol1          yes                       yes                       passed
VIP node application check passed

Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  sol2          yes                       yes                       passed
  sol1          yes                       yes                       passed
NETWORK node application check passed

Checking existence of GSD node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  sol2          no                        no                        exists
  sol1          no                        no                        exists
GSD node application is offline on nodes "sol2,sol1"

Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  sol2          no                        yes                       passed
  sol1          no                        yes                       passed
ONS node application check passed

Checking Single Client Access Name (SCAN)...
  SCAN Name         Node          Running?      ListenerName      Port          Running?
  ----------------  ------------  ------------  ----------------  ------------  ------------
  scan-sol          sol1          true          LISTENER_SCAN1    1521          true

Checking TCP connectivity to SCAN Listeners...
  Node          ListenerName              TCP connectivity?
  ------------  ------------------------  ------------------------
  localnode     LISTENER_SCAN1            yes
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "scan-sol"...
ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-sol"
  SCAN Name     IP Address                Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  scan-sol      192.168.2.51              failed                    NIS Entry

ERROR:
PRVF-4657 : Name resolution setup check for "scan-sol" (IP address: 192.168.2.51) failed
ERROR:
PRVF-4663 : Found configuration issue with the 'hosts' entry in the /etc/nsswitch.conf file
Verification of SCAN VIP and Listener setup failed

Checking OLR integrity...
Checking OLR config file...
OLR config file check successful

Checking OLR file attributes...
OLR file check successful

WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed

ERROR:
PRCT-1128 : ACFS is not supported on this operating system

Checking Oracle Cluster Voting Disk configuration...
ASM Running check passed. ASM is running on all specified nodes
Oracle Cluster Voting Disk configuration check passed

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  sol2          does not exist            passed
  sol1          does not exist            passed
Result: User "grid" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  sol2                                  passed
  sol1                                  passed
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  sol2                                  Active
  sol1                                  Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  sol2          0.0                       passed
  sol1          0.0                       passed
Time offset is within the specified limits on the following set of nodes: "[sol2, sol1]"
Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Starting check for The SSH LoginGraceTime setting ...
PRVE-0037 : LoginGraceTime setting passed on node "sol2"
PRVE-0037 : LoginGraceTime setting passed on node "sol1"
Check for The SSH LoginGraceTime setting passed

Post-check for cluster services setup was unsuccessful.
Checks did not pass for the following node(s):
        sol2,sol1
-bash-3.00$

Annex 3:

-bash-3.00$ cluvfy stage -pre dbinst -n all

Performing pre-checks for database installation

Checking node reachability...
Node reachability check passed from node "sol1"

Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
Verification of the hosts config file successful

Check: Node connectivity for interface "e1000g0"
Node connectivity passed for interface "e1000g0"

Check: Node connectivity for interface "e1000g1"
Node connectivity passed for interface "e1000g1"

Node connectivity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "sol2:/tmp"
Free disk space check passed for "sol1:/tmp"
Check for multiple users with UID value 1101 passed
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "project.max-sem-ids"
Kernel parameter check passed for "process.max-sem-nsems"
Kernel parameter check passed for "project.max-shm-memory"
Kernel parameter check passed for "project.max-shm-ids"
Kernel parameter check passed for "tcp_smallest_anon_port"
Kernel parameter check passed for "tcp_largest_anon_port"
Kernel parameter check passed for "udp_smallest_anon_port"
Kernel parameter check passed for "udp_largest_anon_port"
Package existence check passed for "SUNWarc-...( i386)"
Package existence check passed for "SUNWbtool-...( i386)"
Package existence check passed for "SUNWhea-...( i386)"
Package existence check passed for "SUNWlibm-...( i386)"
Package existence check passed for "SUNWlibms-...( i386)"
Package existence check passed for "SUNWsprot-...( i386)"
Package existence check passed for "SUNWtoo-...( i386)"
Package existence check passed for "SUNWi1of-...( i386)"
Package existence check passed for "SUNWi15cs-...( i386)"
Package existence check passed for "SUNWxwfnt-...( i386)"
Package existence check passed for "SUNWlibC-...( i386)"
Package existence check passed for "SUNWcsl-...( i386)"
Operating system patch check passed for "Patch 137104-02"
Operating system patch check passed for "Patch 139575-03"
Operating system patch check passed for "Patch 139556-08"
Operating system patch check passed for "Patch 120754-06"
Operating system patch check passed for "Patch 119961-05"
Operating system patch check passed for "Patch 119964-14"
Operating system patch check passed for "Patch 141415-04"
Check for multiple users with UID value 0 passed
Current group ID check passed
Default user file creation mask check passed

Checking CRS integrity...
CRS integrity check passed

Checking Cluster manager integrity...

Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed

Checking node application existence...

Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of GSD node application (optional)
GSD node application is offline on nodes "sol2,sol1"

Checking existence of ONS node application (optional)
ONS node application check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
CTSS resource check passed

Querying CTSS for time offset on all nodes...

Query of CTSS for time offset passed

Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes

Time zone consistency check passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Pre-check for database installation was successful.
