
Oracle 11g: RAC and Grid Infrastructure
Administration Accelerated

Volume II - Student Guide
D60488GC11
Edition 1.1
January 2010
D65232

Authors
James Womack
James Spiller

Technical Contributors
David Brower
Mark Fuller
Barb Lundhild
Mike Leatherman
S. Matt Taylor
Jean-Francois Verrier
Rick Wessman

Technical Reviewers
Christopher Andrews
Christian Bauwens
Jonathan Creighton
Michael Cebulla
Al Flournoy
Andy Fortunak
Joel Goodman
Michael Hazel
Pete Jones
Jerry Lee
Markus Michalewicz
Peter Sharman
Ranbir Singh
Richard Strohm
Janet Stern
Linda Smalley
Branislav Valny
Doug Williams

Graphics
Satish Bettegowda

Editors
Nita Pavitran
Arijit Ghosh
Aju Kumar
Daniel Milne

Publishers
Shaik Mahaboob Basha
Nita Brozowski
Jayanthy Keshavamurthy

Copyright © 2010, Oracle and/or its affiliates. All rights reserved.

Disclaimer
This document contains proprietary information and is protected by copyright and other intellectual property laws. You may copy and print this document solely for your own use in an Oracle training course. The document may not be modified or altered in any way. Except where your use constitutes "fair use" under copyright law, you may not use, share, download, upload, copy, print, display, perform, reproduce, publish, license, post, transmit, or distribute this document in whole or in part without the express authorization of Oracle.

The information contained in this document is subject to change without notice. If you find any problems in the document, please report them in writing to: Oracle University, 500 Oracle Parkway, Redwood Shores, California 94065 USA. This document is not warranted to be error-free.

Restricted Rights Notice
If this documentation is delivered to the United States Government or anyone using the documentation on behalf of the United States Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS
The U.S. Government's rights to use, modify, reproduce, release, perform, display, or disclose these training materials are restricted by the terms of the applicable Oracle license agreement and/or the applicable U.S. Government contract.

Trademark Notice
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Contents

1 Oracle Grid Infrastructure Architecture


Objectives 1-2
Oracle Grid Infrastructure 1-3
Module 1: Oracle Clusterware Concepts 1-4
What Is a Cluster? 1-5
What Is Clusterware? 1-6
Oracle Clusterware 1-7
Oracle Clusterware Architecture and Services 1-8
Goals for Oracle Clusterware 1-9
Oracle Clusterware Networking 1-10
Interconnect Link Aggregation: Single Switch 1-12
Interconnect Link Aggregation: Multiswitch 1-14
Interconnect NIC Guidelines 1-15
Additional Interconnect Guidelines 1-16
Quiz 1-17
Module 2: Oracle Clusterware Architecture 1-18
Oracle Clusterware Startup 1-19
Oracle Clusterware Process Architecture 1-20
Grid Plug and Play 1-22
GPnP Domain 1-23
GPnP Components 1-24
GPnP Profile 1-25
Grid Naming Service 1-26
Single Client Access Name 1-27
GPnP Architecture Overview 1-29
How GPnP Works: Cluster Node Startup 1-31
How GPnP Works: Client Database Connections 1-32
Quiz 1-33
Module 3: ASM Architecture 1-35
What Is Oracle ASM? 1-36
ASM and ASM Cluster File System 1-37
ASM Key Features and Benefits 1-39
ASM Instance Designs: Nonclustered ASM and Oracle Databases 1-40
ASM Instance Designs: Clustered ASM for Clustered Databases 1-41
ASM Instance Designs: Clustered ASM for Mixed Databases 1-42
ASM System Privileges 1-43
ASM OS Groups with Role Separation 1-44
Authentication for Accessing ASM Instances 1-45
Password-Based Authentication for ASM 1-46
Managing the ASM Password File 1-47
Using a Single OS Group 1-48
Using Separate OS Groups 1-49
ASM Components: Software 1-50
ASM Components: ASM Instance 1-51
ASM Components: ASM Instance Primary Processes 1-53
ASM Components: ASM Listener 1-54
ASM Components: Configuration Files 1-55
ASM Components: Group Services 1-56
ASM Components: ASM Disk Group 1-57
ASM Disk Group: Failure Groups 1-58
ASM Components: ASM Disks 1-59
ASM Components: ASM Files 1-60
ASM Files: Extents and Striping 1-61
ASM Files: Mirroring 1-62
ASM Components: ASM Clients 1-63
ASM Components: ASM Utilities 1-64
ASM Scalability 1-65
Summary 1-66

2 Grid Infrastructure Installation
Objectives 2-2
Module 1: Preinstallation Planning 2-3
Shared Storage Planning for Grid Infrastructure 2-4
Sizing Shared Storage 2-5
Storing the OCR in ASM 2-6
Oracle Clusterware Repository (OCR) 2-7
Managing Voting Disks in ASM 2-9
CSS Voting Disk Function 2-10
Oracle Clusterware Main Log Files 2-11
Installing ASMLib 2-12
Preparing ASMLib 2-13
Quiz 2-14
Oracle Grid Infrastructure 11g Installation 2-16
Checking System Requirements 2-17
Checking Network Requirements 2-18
Software Requirements (Kernel) 2-20
Software Requirements: Packages 2-21
Oracle Validated Configuration RPM 2-22
Creating Groups and Users 2-24
Creating Groups and Users: Example 2-25
Shell Settings for the Grid Infrastructure User 2-26
Quiz 2-28
Module 3: Grid Infrastructure Installation 2-29
Installing Grid Infrastructure 2-30
Grid Plug and Play Support 2-31
Cluster Node Information 2-32
Specify Network Interface Usage 2-33
Storage Option Information 2-34
Failure Isolation Support with IPMI 2-35
Privileged Operating System Groups 2-36
Installation and Inventory Locations 2-37
Prerequisite Checks 2-38
Finishing the Installation 2-39
Verifying the Grid Infrastructure Installation 2-40
Module 4: Configuring ASM Disk Groups and ACFS 2-41
Configuring ASM with ASMCA 2-42
Creating an ASM Disk Group with ASMCA 2-43
Creating an ASM Disk Group: Advanced Options 2-44
Creating a Disk Group with Enterprise Manager 2-45
Creating an ASM Disk Group with Command-Line Tools 2-47
Configuring an ASM Volume: ASMCA 2-48
Configuring an ASM Volume: EM 2-49
Configuring an ASM Volume: ASMCMD and SQL 2-50
Configuring the ASM Cluster File System 2-51
Configuring ACFS with EM 2-52
Configuring ACFS with Command Line 2-53
Mounting ACFS with ASMCA 2-54
Mounting an ACFS with EM 2-55
Mounting ACFS with Command-Line Tools 2-56
Summary 2-57
Practice 2 Overview 2-58
Quiz 2-59

3 Administering Oracle Clusterware


Objectives 3-2
Managing Oracle Clusterware 3-3
Managing Clusterware with Enterprise Manager 3-4
Controlling Oracle Clusterware 3-5
Verifying the Status of Oracle Clusterware 3-6
Determining the Location of Oracle Clusterware Configuration Files 3-7
Checking the Integrity of Oracle Clusterware Configuration Files 3-8
Backing Up and Recovering the Voting Disk 3-9
Adding, Deleting, or Migrating Voting Disks 3-10
Locating the OCR Automatic Backups 3-11
Changing the Automatic OCR Backup Location 3-12
Adding, Replacing, and Repairing OCR Locations 3-13
Removing an Oracle Cluster Registry Location 3-14
Migrating OCR Locations to ASM 3-15
Migrating OCR from ASM to Other Shared Storage 3-16
Performing Manual OCR Backups 3-17
Recovering the OCR by Using Physical Backups 3-18
Recovering the OCR by Using Logical Backups 3-19
Oracle Local Registry 3-20
Determining the Current Network Settings 3-22
Changing the Public VIP Addresses 3-23
Changing the Interconnect Adapter 3-25
Managing SCAN VIP and SCAN Listener Resources 3-27
Quiz 3-30
Summary 3-32
Practice 3 Overview 3-33

4 Managing Oracle Clusterware
Module 4-1 Adding and Deleting Oracle Clusterware Homes 4-2
Objectives 4-3
Adding Oracle Clusterware Homes 4-4
Prerequisite Steps for Running addNode.sh 4-5
Adding a Node with addNode.sh 4-7
Completing OUI Silent Node Addition 4-8
Removing a Node from the Cluster 4-9
Deleting a Node from the Cluster 4-10
Deleting a Node from a Cluster (GNS in Use) 4-13
Deleting a Node from the Cluster 4-14
Quiz 4-15
Summary 4-16
Module 4-2 Patching Oracle Clusterware 4-17
Objectives 4-18
Out-of-Place Oracle Clusterware Upgrade 4-19
Oracle Clusterware Upgrade 4-20
Types of Patches 4-21
Patch Properties 4-22
Configuring the Software Library 4-23
Setting Up Patching 4-24
Starting the Provisioning Daemon 4-25
Obtaining Oracle Clusterware Patches 4-26
Uploading Patches 4-28
Deployment Procedure Manager 4-30
Reduced Down-Time Patching for Cluster Environments 4-31
Rolling Patches 4-32
Checking Software Versions 4-33
Installing a Rolling Patchset with OUI 4-34
Patchset OUI 4-35
Installing a Rolling Patchset with OUI 4-36
OPatch: Overview 4-37
OPatch: General Usage 4-39
Before Patching with OPatch 4-40
Installing a Rolling Patch with OPatch 4-41
Installing a Patch with Minimum Down Time with OPatch 4-43
Quiz 4-44
Summary 4-46

5 Making Applications Highly Available with Oracle Clusterware
Objectives 5-2
Oracle Clusterware High Availability (HA) 5-3
Oracle Clusterware HA Components 5-4
Resource Management Options 5-5
Server Pools 5-6
Server Pool Attributes 5-7
GENERIC and FREE Server Pools 5-9
Assignment of Servers to Server Pools 5-11
Server Attributes and States 5-12
Creating Server Pools with srvctl and crsctl 5-14
Managing Server Pools with srvctl and crsctl 5-15
Adding Server Pools with Enterprise Manager 5-16
Managing Server Pools with Enterprise Manager 5-17
Clusterware Resource Modeling 5-18
Resource Types 5-20
Adding a Resource Type 5-21
Resource Type Parameters 5-23
Resource Type Advanced Settings 5-24
Defining Resource Dependencies 5-25
Creating an Application VIP by Using crsctl 5-27
Creating an Application VIP by Using EM 5-29
Managing Clusterware Resources with EM 5-30
Adding Resources with EM 5-31
Adding Resources by Using crsctl 5-36
Managing Resources with EM 5-37
Managing Resources with crsctl 5-40
HA Events: ONS and FAN 5-42
Managing Oracle Notification Server with srvctl 5-43
Quiz 5-44
Summary 5-46
Practice 5 Overview 5-47

6 Troubleshooting Oracle Clusterware


Objectives 6-2
Golden Rule in Debugging Oracle Clusterware 6-3
Oracle Clusterware Main Log Files 6-5
Diagnostics Collection Script 6-6
Cluster Verify: Overview 6-7
Cluster Verify Components 6-8
Cluster Verify Locations 6-9
Cluster Verify Configuration File 6-10
Cluster Verify Output: Example 6-12
Enabling Resource Debugging 6-13
Dynamic Debugging 6-14
Enabling Tracing for Java-Based Tools 6-15
Preserving Log Files Before Wrapping 6-16
Processes That Can Reboot Nodes 6-17
Determining Which Process Caused Reboot 6-18
Using diagwait for Eviction Troubleshooting 6-19
Linux hangcheck-timer Kernel Module 6-20
Configuring the hangcheck-timer Kernel Module 6-21
Avoiding False Reboots 6-22
Using ocrdump to View Logical Contents of the OCR 6-23
Checking the Integrity of the OCR 6-24
OCR-Related Tools for Debugging 6-25
Browsing My Oracle Support Knowledge Articles 6-26
Summary 6-27
Quiz 6-28
Practice 6 Overview 6-30

7 Administering ASM Instances


Objectives 7-2
ASM Initialization Parameters 7-3
ASM_DISKGROUPS 7-4
Disk Groups Mounted at Startup 7-5
ASM_DISKSTRING 7-6
ASM_POWER_LIMIT 7-8
INSTANCE_TYPE 7-9
CLUSTER_DATABASE 7-10
MEMORY_TARGET 7-11
Adjusting ASM Instance Parameters in SPFILEs 7-12
Starting and Stopping ASM Instances by Using srvctl 7-13
Starting and Stopping ASM Instances by Using SQL*Plus 7-14
Starting and Stopping ASM Instances by Using ASMCA and ASMCMD 7-16
Starting and Stopping ASM Instances Containing Cluster Files 7-17
Starting and Stopping the ASM Listener 7-18
ASM Dynamic Performance Views 7-19
ASM Dynamic Performance Views Diagram 7-20
Quiz 7-22
Summary 7-24
Practice 7 Overview: Administering ASM Instances 7-25

8 Administering ASM Disk Groups
Objectives 8-2
Disk Group Overview 8-3
Creating a New Disk Group 8-4
Creating a New Disk Group with ASMCMD 8-6
Disk Group Attributes 8-7
V$ASM_ATTRIBUTE 8-9
Compatibility Attributes 8-10
Features Enabled by Disk Group Compatibility Attributes 8-11
Support for 4KB Sector Disk Drives 8-12
Supporting 4 KB Sector Disks 8-13
ASM Support for 4KB Sector Disks 8-14
Using the SECTOR_SIZE Clause 8-15
Viewing ASM Disk Groups 8-17
Viewing ASM Disk Information 8-19
Extending an Existing Disk Group 8-21
Dropping Disks from an Existing Disk Group 8-22
REBALANCE POWER 0 8-23
V$ASM_OPERATION 8-24
Adding and Dropping in the Same Command 8-25
Adding and Dropping Failure Groups 8-26
Undropping Disks in Disk Groups 8-27
Mounting and Dismounting Disk Groups 8-28
Viewing Connected Clients 8-29
Dropping Disk Groups 8-30
Checking the Consistency of Disk Group Metadata 8-31
ASM Fast Mirror Resync 8-32
Preferred Read Failure Groups 8-33
Preferred Read Failure Groups Best Practice 8-34
Viewing ASM Disk Statistics 8-35
Performance, Scalability, and Manageability Considerations for Disk Groups 8-37
Quiz 8-38
Summary 8-40
Practice 8 Overview: Administering ASM Disk Groups 8-41

9 Administering ASM Files, Directories, and Templates


Objectives 9-2
ASM Clients 9-3
Interaction Between Database Instances and ASM 9-5
Accessing ASM Files by Using RMAN 9-6
Accessing ASM Files by Using XML DB 9-8
Accessing ASM Files by Using DBMS_FILE_TRANSFER 9-9
Accessing ASM Files by Using ASMCMD 9-10
Fully Qualified ASM File Names 9-11
Other ASM File Names 9-13
Valid Contexts for the ASM File Name Forms 9-15
Single File Creation Examples 9-16
Multiple File Creation Example 9-17
View ASM Aliases, Files, and Directories 9-18
Viewing ASM Files 9-20
ASM Directories 9-21
Managing ASM Directories 9-22
Managing Alias File Names 9-23
ASM Intelligent Data Placement 9-24
Guidelines for Intelligent Data Placement 9-25
Assigning Files to Disk Regions 9-26
Assigning Files to Disk Regions with Enterprise Manager 9-27
Monitoring Intelligent Data Placement 9-28
Disk Group Templates 9-29
Viewing Templates 9-31
Managing Disk Group Templates 9-32
Managing Disk Group Templates with ASMCMD 9-33
Using Disk Group Templates 9-34
ASM Access Control Lists 9-35
ASM ACL Prerequisites 9-36
Managing ASM ACL with SQL Commands 9-37
Managing ASM ACL with ASMCMD Commands 9-38
Managing ASM ACL with Enterprise Manager 9-39
ASM ACL Guidelines 9-41
Quiz 9-42
Summary 9-44

10 Administering ASM Cluster File Systems
Objectives 10-2
ASM Files and Volumes 10-3
ACFS and ADVM Architecture Overview 10-4
ADVM Processes 10-6
ADVM Restrictions 10-7
ASM Cluster File System 10-8
ADVM Space Allocation 10-9
Striping Inside the Volume 10-10
Volume Striping: Example 10-11
Creating an ACFS Volume 10-13
Create an ASM Dynamic Volume with Enterprise Manager 10-14
Managing ADVM Dynamic Volumes 10-17
Create an ASM Cluster File System with Enterprise Manager 10-18
Manage Dynamic Volumes with SQL*PLUS 10-19
Registering an ACFS Volume 10-20
Creating an ACFS Volume with ASMCA 10-21
Creating the ACFS File System with ASMCA 10-22
Mounting the ACFS File System with ASMCA 10-23
Managing ACFS with EM 10-24
Extending ASMCMD for Dynamic Volumes 10-25
Linux-UNIX File System APIs 10-26
Linux-UNIX Extensions 10-27
ACFS Platform-Independent Commands 10-28
ACFS Snapshots 10-29
Managing ACFS Snapshots 10-30
Managing ACFS Snapshots with Enterprise Manager 10-32
Creating ACFS Snapshots 10-33
Managing Snapshots 10-34
Viewing Snapshots 10-35
ACFS Backups 10-36
ACFS Performance 10-37
Using ACFS Volumes After Reboot 10-38
ACFS Views 10-39
Quiz 10-40
Summary 10-41
Practice 10 Overview: Managing ACFS 10-42

11 Real Application Clusters Database Installation


Objectives 11-2
Installing the Oracle Database Software 11-3
Creating the Cluster Database 11-7
Select Database Type 11-8
Database Identification 11-9
Cluster Database Management Options 11-10
Passwords for Database Schema Owners 11-11
Database File Locations 11-12
Recovery Configuration 11-13
Database Content 11-14
Initialization Parameters 11-15
Database Storage Options 11-16
Create the Database 11-17
Monitor Progress 11-18
Postinstallation Tasks 11-19
Check Managed Targets 11-20
Background Processes Specific to Oracle RAC 11-21
Single Instance to RAC Conversion 11-23
Issues for Converting Single Instance Databases to Oracle RAC 11-24
Single-Instance Conversion Using the DBCA 11-25
Conversion Steps 11-26
Single-Instance Conversion Using rconfig 11-29
Quiz 11-31
Summary 11-33
Practice 11 Overview 11-34

12 Oracle RAC Administration
Objectives 12-2
Cluster Database Home Page 12-3
Cluster Database Instance Home Page 12-5
Cluster Home Page 12-6
Configuration Section 12-7
Topology Viewer 12-9
Enterprise Manager Alerts and RAC 12-10
Enterprise Manager Metrics and RAC 12-11
Enterprise Manager Alert History and RAC 12-13
Enterprise Manager Blackouts and RAC 12-14
Redo Log Files and RAC 12-15
Automatic Undo Management and RAC 12-16
Starting and Stopping RAC Instances 12-17
Starting and Stopping RAC Instances with SQL*Plus 12-18
Starting and Stopping RAC Instances with srvctl 12-19
Switch Between the Automatic and Manual Policies 12-20
RAC Initialization Parameter Files 12-21
SPFILE Parameter Values and RAC 12-22
EM and SPFILE Parameter Values 12-23
RAC Initialization Parameters 12-25
Parameters That Require Identical Settings 12-27
Parameters That Require Unique Settings 12-28
Quiescing RAC Databases 12-29
Terminating Sessions on a Specific Instance 12-30
How SQL*Plus Commands Affect Instances 12-31
Transparent Data Encryption and Wallets in RAC 12-32
Quiz 12-33
Summary 12-35
Practice 12 Overview 12-36

13 Managing Backup and Recovery for RAC


Objectives 13-2
Protecting Against Media Failure 13-3
Media Recovery in Oracle RAC 13-4
Parallel Recovery in RAC 13-5
Archived Log File Configurations 13-6
RAC and the Fast Recovery Area 13-7
RAC Backup and Recovery Using EM 13-8
Configure RAC Recovery Settings with EM 13-9
Archived Redo File Conventions in RAC 13-10
Configure RAC Backup Settings with EM 13-11
Oracle Recovery Manager 13-12
Configure RMAN Snapshot Control File Location 13-13
Configure Control File and SPFILE Autobackup 13-14
Crosschecking on Multiple RAC Clusters Nodes 13-15
Channel Connections to Cluster Instances 13-16
RMAN Channel Support for the Grid 13-17
RMAN Default Autolocation 13-18
Distribution of Backups 13-19
One Local Drive CFS Backup Scheme 13-20
Multiple Drives CFS Backup Scheme 13-21
Restoring and Recovering 13-22
Quiz 13-23
Summary 13-25
Practice 13 Overview 13-26

14 RAC Database Monitoring and Tuning


Objectives 14-2
CPU and Wait Time Tuning Dimensions 14-3
RAC-Specific Tuning 14-4
RAC and Instance or Crash Recovery 14-5
Instance Recovery and Database Availability 14-7
Instance Recovery and RAC 14-8
Analyzing Cache Fusion Impact in RAC 14-10
Typical Latencies for RAC Operations 14-11
Wait Events for RAC 14-12
Wait Event Views 14-13
Global Cache Wait Events: Overview 14-14
2-way Block Request: Example 14-16
3-way Block Request: Example 14-17
2-way Grant: Example 14-18
Global Enqueue Waits: Overview 14-19
Session and System Statistics 14-20
Most Common RAC Tuning Tips 14-21
Index Block Contention: Considerations 14-23
Oracle Sequences and Index Contention 14-24
Undo Block Considerations 14-25
High-Water Mark Considerations 14-26
Concurrent Cross-Instance Calls: Considerations 14-27
Monitoring RAC Database and Cluster Performance 14-28
Cluster Database Performance Page 14-29
Determining Cluster Host Load Average 14-30
Determining Global Cache Block Access Latency 14-31
Determining Average Active Sessions 14-32
Determining Database Throughput 14-33
Accessing the Cluster Cache Coherency Page 14-35
Viewing the Cluster Interconnects Page 14-37
Viewing the Database Locks Page 14-39
AWR Snapshots in RAC 14-40
AWR Reports and RAC: Overview 14-41
Active Session History Reports for RAC 14-43
Automatic Database Diagnostic Monitor for RAC 14-45
What Does ADDM Diagnose for RAC? 14-47
EM Support for ADDM for RAC 14-48
Quiz 14-49
Summary 14-51
Practice 14 Overview 14-52

15 Services
Objectives 15-2
Oracle Services 15-3
Services for Policy- and Administrator-Managed Databases 15-4
Default Service Connections 15-5
Create Service with Enterprise Manager 15-6
Create Services with SRVCTL 15-7
Manage Services with Enterprise Manager 15-8
Managing Services with EM 15-9
Manage Services with srvctl 15-10
Use Services with Client Applications 15-11
Services and Connection Load Balancing 15-12
Services and Transparent Application Failover 15-13
Use Services with the Resource Manager 15-14
Services and Resource Manager with EM 15-15
Use Services with the Scheduler 15-16
Services and the Scheduler with EM 15-17
Using Distributed Transactions with RAC 15-19
Distributed Transactions and Services 15-20
Service Thresholds and Alerts 15-22
Services and Thresholds Alerts: Example 15-23
Service Aggregation and Tracing 15-24
Top Services Performance Page 15-25
Service Aggregation Configuration 15-26
Service, Module, and Action Monitoring 15-27
Service Performance Views 15-28
Quiz 15-29
Summary 15-31
Practice 15: Overview 15-32

16 Design for High Availability
Objectives 16-2
Causes of Unplanned Down Time 16-3
Causes of Planned Down Time 16-4
Oracle's Solution to Down Time 16-5
RAC and Data Guard Complementarity 16-6
Maximum Availability Architecture 16-7
RAC and Data Guard Topologies 16-8
RAC and Data Guard Architecture 16-9
Data Guard Broker (DGB) and Oracle Clusterware (OC) Integration 16-11
Fast-Start Failover: Overview 16-12
Data Guard Broker Configuration Files 16-14
Real-Time Query Physical Standby Database 16-15
Hardware Assisted Resilient Data 16-16
Database High Availability: Best Practices 16-17
How Many ASM Disk Groups Per Database? 16-18
Which RAID Configuration for High Availability? 16-19
Should You Use ASM Mirroring Protection? 16-20
What Type of Striping Works Best? 16-21
ASM Striping Only 16-22
Hardware RAID–Striped LUNs 16-23
Hardware RAID–Striped LUNs HA 16-24
Disk I/O Design Summary 16-25
Extended RAC: Overview 16-26
Extended RAC Connectivity 16-27
Extended RAC Disk Mirroring 16-28
Achieving Quorum with Extended RAC 16-29
Additional Data Guard Benefits 16-30
Using a Test Environment 16-31
Quiz 16-32
Summary 16-33

Appendix A: Practices and Solutions

Appendix B: Oracle RAC One Node
Oracle RAC One Node B-2
The Omotion Utility B-3
Oracle RAC One Node and OVM B-4

Appendix C: Cloning Oracle Clusterware
Objectives C-2
What Is Cloning? C-3
Benefits of Cloning Oracle Clusterware C-4
Creating a Cluster by Cloning Oracle Clusterware C-5
Preparing the Oracle Clusterware Home for Cloning C-6
Cloning to Create a New Oracle Clusterware Environment C-9
The clone.pl Script C-11
The clone.pl Environment Variables C-12
The clone.pl Command Options C-13
Cloning to Create a New Oracle Clusterware Environment C-14
Log Files Generated During Cloning C-18
Cloning to Extend Oracle Clusterware to More Nodes C-20
Quiz C-25
Summary C-27

Appendix D: High Availability of Connections


Objectives D-2
Types of Workload Distribution D-3
Client-Side Load Balancing D-4
Client-Side, Connect-Time Failover D-5
Server-Side, Connect-Time Load Balancing D-6
Fast Application Notification: Overview D-7
Fast Application Notification: Benefits D-8
FAN-Supported Event Types D-9
FAN Event Status D-10
FAN Event Reasons D-11
FAN Event Format D-12
Load Balancing Advisory: FAN Event D-13
Implementation of Server-Side Callouts D-14
Server-Side Callout Parse: Example D-15
Server-Side Callout Filter: Example D-16
Configuring the Server-Side ONS D-17
Optionally Configure the Client-Side ONS D-18
JDBC Fast Connection Failover: Overview D-19
Using Oracle Streams Advanced Queuing for FAN D-20
JDBC/ODP.NET FCF Benefits D-21
Load Balancing Advisory D-22
JDBC/ODP.NET Runtime Connection Load Balancing: Overview D-23
Monitor LBA FAN Events D-24
Transparent Application Failover: Overview D-25
TAF Basic Configuration Without FAN: Example D-26
TAF Basic Configuration with FAN: Example D-27
TAF Preconnect Configuration: Example D-28
TAF Verification D-29
FAN Connection Pools and TAF Considerations D-30
Summary D-31
Oracle RAC Administration

Objectives

After completing this lesson, you should be able to:


• Use Enterprise Manager Cluster Database pages
• Define redo log files in a RAC environment
• Define undo tablespaces in a RAC environment
• Start and stop RAC databases and instances
• Modify initialization parameters in a RAC environment
Cluster Database Home Page

Cluster Database Home Page
The Cluster Database Home page serves as a crossroads for managing and monitoring all aspects of your RAC database. From this page, you can access the other main cluster database tabs: Performance, Availability, Server, Schema, Data Movement, Software and Support, and Topology.
On this page, you find General, High Availability, Space Summary, and Diagnostic Summary
sections for information that pertains to your cluster database as a whole. The number of
instances is displayed for the RAC database, in addition to the status. A RAC database is
considered to be up if at least one instance has the database open. You can access the Cluster
Home page by clicking the Cluster link in the General section of the page.
Other items of interest include the date of the last RMAN backup, archiving information,
space utilization, and an alert summary. By clicking the link next to the Flashback Database
Logging label, you can open the Recovery Settings page from where you can change various
recovery parameters.
The Alerts table shows all the recent alerts that are open. Click the alert message in the
Message column for more information about the alert. When an alert is triggered, the name of
the metric for which the alert was triggered is displayed in the Name column.

Cluster Database Home Page

Cluster Database Home Page (continued)
The Related Alerts table provides information about alerts for related targets, such as Listeners and Hosts, and contains details about the message, the time the alert was triggered, the value, and the time the alert was last checked.
The Policy Trend Overview page (accessed by clicking the Compliance Score link) provides a
comprehensive view about a group or targets containing other targets with regard to
compliance over a period of time. Using the tables and graphs, you can easily watch for trends
in progress and changes.
The “Security At a Glance” page shows an overview of the security health of the enterprise for
the targets or specific groups. This helps you to focus on security issues by showing statistics
about security policy violations and noting critical security patches that have not been applied.
The Job Activity table displays a report of the job executions that shows the scheduled,
running, suspended, and problem (stopped/failed) executions for all Enterprise Manager jobs
on the cluster database.
The Instances table lists the instances for the cluster database, their availability, alerts, policy
violations, performance findings, and related ASM Instance. Click an instance name to go to
the Home page for that instance. Click the links in the table to get more information about a
particular alert, advice, or metric.
Cluster Database Instance Home Page

Cluster Database Instance Home Page
The Cluster Database Instance Home page enables you to view the current state of the instance by displaying a series of metrics that portray its overall health. This page provides a launch point for the performance, administration, and maintenance of the instance environment.
You can access the Cluster Database Instance Home page by clicking one of the instance
names from the Instances section of the Cluster Database Home page. This page has basically
the same sections as the Cluster Database Home page.
The difference is that tasks and monitored activities from these pages apply primarily to a
specific instance. For example, clicking the Shutdown button on this page shuts down only this
one instance. However, clicking the Shutdown button on the Cluster Database Home page
gives you the option of shutting down all or specific instances.
By scrolling down on this page, you see the Alerts, Related Alerts, Policy Violations, Jobs
Activity, and Related Links sections. These provide information similar to that provided in the
same sections on the Cluster Database Home page.

Cluster Home Page

Cluster Home Page
The slide shows you the Cluster Home page, which can be accessed by clicking the Cluster tab located in the top-right corner of Enterprise Manager. Even if the database is down, the cluster page is available to manage resources. The cluster is represented as a composite target composed of nodes and cluster databases. An overall summary of the cluster is provided here.
The Cluster Home page displays several sections, including General, Configuration,
Diagnostic Summary, Cluster Databases, Alerts, and Hosts. The General section provides a
quick view of the status of the cluster, providing basic information such as current Status,
Availability, Up nodes, Clusterware Version, and Oracle Home.
The Configuration section enables you to view the operating systems (including hosts and OS
patches) and hardware (including hardware configuration and hosts) for the cluster.
The Cluster Databases table displays the cluster databases (optionally associated with
corresponding services) associated with this cluster, their availability, and any alerts on those
databases. The Alerts table provides information about any alerts that have been issued along
with the severity rating of each.
The Hosts table (not shown in the screenshot) displays the hosts for the cluster, availability,
corresponding alerts, CPU and memory utilization percentage, and total input/output (I/O) per
second.
Configuration Section

Configuration Section
The Cluster Home page is invaluable for locating configuration-specific data. Locate the Configuration section on the Cluster Home page. The View drop-down list enables you to inspect hardware and operating system overview information.
Click the Hosts link, and then click the Hardware Details link of the host that you want. On the
Hardware Details page, you find detailed information about your CPU, disk controllers,
network adapters, and so on. This information can be very useful when determining the Linux
patches for your platform.
Click History to access the hardware history information for the host.
Some hardware information is not available, depending on the hardware platform.
Note: The Local Disk Capacity (GB) field shows the disk space that is physically attached
(local) to the host. This value does not include disk space that may be available to the host
through networked file systems.

Configuration Section

Configuration Section (continued)
The Operating System Details General page displays the operating system details for a host, including:
• General information, such as the distributor version and the maximum swap space of the
operating system
• Information about operating system properties
The Source column displays where Enterprise Manager obtained the value for each operating
system property.
To see a list of changes to the operating system properties, click History.
The Operating System Details File Systems page displays information about one or more file
systems for the selected hosts:
• Name of the file system on the host
• Type of mounted file system—for example, ufs or nfs
• Directory where the file system is mounted
• The mount options for the file system—for example, ro, nosuid, or nobrowse
The Operating System Details Packages page displays information about the operating system
packages that have been installed on a host.

Topology Viewer

Topology Viewer
The Oracle Enterprise Manager Topology Viewer enables you to visually see the relationships between target types for each host of your cluster database. You can zoom in or out, pan, and see selection details. These views can also be used to launch various administration functions.
The Topology Viewer populates icons on the basis of your system configuration. If a listener is
serving an instance, a line connects the listener icon and the instance icon. Possible target
types are:
• Interface
• Listener
• ASM Instance
• Database Instance
If the Show Configuration Details option is not selected, the topology shows the monitoring
view of the environment, which includes general information such as alerts and overall status.
If you select the Show Configuration Details option, additional details are shown in the
Selection Details window, which are valid for any topology view. For instance, the Listener
component would also show the machine name and port number.
You can click an icon and then right-click to display a menu of available actions.

Enterprise Manager Alerts and RAC

Enterprise Manager Alerts and RAC
You can use Enterprise Manager to administer alerts for RAC environments. Enterprise Manager distinguishes between database- and instance-level alerts in RAC environments.
Enterprise Manager also responds to metrics from across the entire RAC database and
publishes alerts when thresholds are exceeded. Enterprise Manager interprets both predefined
and customized metrics. You can also copy customized metrics from one cluster database
instance to another, or from one RAC database to another. A recent alert summary can be
found on the Database Control Home page. Notice that alerts are sorted by relative time and
target name.

Enterprise Manager Metrics and RAC

Enterprise Manager Metrics and RAC
Alert thresholds for instance-level alerts, such as archive log alerts, can be set at the instance target level. This enables you to receive alerts for the specific instance if performance exceeds your threshold. You can also configure alerts at the database level, such as setting alerts for tablespaces. This enables you to avoid receiving duplicate alerts at each instance.

Enterprise Manager Metrics and RAC

Enterprise Manager Metrics and RAC (continued)
It is also possible to view the metric across the cluster in a comparative or overlay fashion. To view this information, click the Compare Targets link at the bottom of the corresponding
metric page. When the Compare Targets page appears, choose the instance targets that you
want to compare by selecting them and then clicking the Move button. If you want to compare
the metric data from all targets, click the Move All button. After making your selections, click
the OK button to continue.
The Metric summary page appears next. Depending on your needs, you can accept the default
timeline of 24 hours or select a more suitable value from the View Data drop-down list. If you
want to add a comment regarding the event for future reference, enter a comment in the
“Comment for Most Recent Alert” field, and then click the Add Comment button.

Enterprise Manager Alert History and RAC

Enterprise Manager Alert History and RAC
In a RAC environment, you can see a summary of the alert history for each participating instance directly from the Cluster Database Home page. The drill-down process is shown in the slide. You click the Alert History link in the Related Links section of the Cluster Database Home page. This takes you to the Alert History page on which you can see the summary for both instances in the example. You can then click one of the instance’s links to go to the corresponding Alert History page for that instance. From there, you can access a corresponding alert page by choosing the alert of your choice.

Enterprise Manager Blackouts and RAC

Enterprise Manager Blackouts and RAC
You can use Enterprise Manager to define blackouts for all managed targets of your RAC database to prevent alerts from being recorded. Blackouts are useful when performing scheduled or unscheduled maintenance or other tasks that might trigger extraneous or unwanted events. You can define blackouts for an entire cluster database or for specific cluster database instances.
database instances.
To create a blackout event, click the Setup link at the top of any Enterprise Manager page.
Then click the Blackouts link on the left. The Blackouts page appears.
Click the Create button. The Create Blackout: Properties page appears. You must enter a name
or tag in the Name field. If you want, you can also enter a descriptive comment in the
Comments field. This is optional. Enter a reason for the blackout in the Reason field.
In the Targets area of the Properties page, you must choose a target Type from the drop-down
list. In the example in the slide, the entire cluster database RDBB is chosen. Click the cluster
database in the Available Targets list, and then click the Move button to move your choice to
the Selected Targets list. Click the Next button to continue.
The Member Targets page appears next. Expand the Selected Composite Targets tree and
ensure that all targets that must be included appear in the list. Continue and define your
schedule as you normally would.
Redo Log Files and RAC

[Figure: A two-node cluster. Node1 runs instance RAC01 with redo thread 1 (groups 1, 2, and 3); Node2 runs instance RAC02 with redo thread 2 (groups 4 and 5). The shared storage holds the SPFILE containing RAC01.THREAD=1 and RAC02.THREAD=2.]

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 4;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5;
ALTER DATABASE ENABLE THREAD 2;

Redo Log Files and RAC
If you increase the cardinality of a server pool in a policy-managed database, and a new server is allocated to the server pool, then Oracle Clusterware starts an instance on the new server if
you have Oracle Managed Files (OMF) enabled. If the instance starts and there is no thread or
redo log file available, then Oracle Clusterware automatically enables a thread of redo and
allocates the associated redo log files and undo if the database uses Oracle ASM or any cluster
file system.
You should create redo log groups only if you are using administrator-managed databases. For
administrator-managed databases, each instance has its own online redo log groups. Create
these redo log groups and establish group members. To add a redo log group to a specific
instance, specify the INSTANCE clause in the ALTER DATABASE ADD LOGFILE statement. If
you do not specify the instance when adding the redo log group, the redo log group is added to
the instance to which you are currently connected. Each instance must have at least two groups
of redo log files. You must allocate the redo log groups before enabling a new instance with
the ALTER DATABASE ENABLE INSTANCE instance_name command. When the current
group fills, an instance begins writing to the next log file group. If your database is in
ARCHIVELOG mode, each instance must save full online log groups as archived redo log files
that are tracked in the control file.
Note: You can use Enterprise Manager to administer redo log groups in a RAC environment.
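To illustrate the INSTANCE clause described above, here is a minimal sketch for an administrator-managed database with instances RAC01 and RAC02; the group numbers and size are hypothetical, and Oracle Managed Files is assumed so that no file specification is needed:

ALTER DATABASE ADD LOGFILE INSTANCE 'RAC02' GROUP 6 SIZE 50M;
-- Equivalent form that names the thread instead of the instance:
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 7 SIZE 50M;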
Automatic Undo Management and RAC

[Figure: A two-node cluster. RAC01 on Node1 uses undo tablespace undotbs3, and RAC02 on Node2 uses undotbs2; undotbs1 is pending offline. Any instance can read any undo tablespace for consistent reads and transaction recovery. The shared SPFILE contains RAC01.UNDO_TABLESPACE=undotbs3 and RAC02.UNDO_TABLESPACE=undotbs2.]

ALTER SYSTEM SET UNDO_TABLESPACE=undotbs3 SID='RAC01';

Automatic Undo Management in RAC
The Oracle database automatically manages undo segments within a specific undo tablespace that is assigned to an instance. Under normal circumstances, only the instance assigned to the
undo tablespace can modify the contents of that tablespace. However, all instances can always
read all undo blocks for consistent-read purposes. Also, any instance can update any undo
tablespace during transaction recovery, as long as that undo tablespace is not currently used by
another instance for undo generation or transaction recovery.
You assign undo tablespaces in your RAC database by specifying a different value for the
UNDO_TABLESPACE parameter for each instance in your SPFILE or individual PFILEs. If you do
not set the UNDO_TABLESPACE parameter, each instance uses the first available undo tablespace.
For policy-managed databases, Oracle automatically allocates the undo tablespace when the
instance starts if you have OMF enabled.
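As an illustration (a sketch reusing the instance names from the slide; the tablespace name undotbs4 and its size are hypothetical, and OMF is assumed so that no data file name is needed), you can create a new undo tablespace and assign it to one instance:

CREATE UNDO TABLESPACE undotbs4 DATAFILE SIZE 500M;
ALTER SYSTEM SET UNDO_TABLESPACE=undotbs4 SID='RAC02';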
You can dynamically switch undo tablespace assignments by executing the ALTER SYSTEM
SET UNDO_TABLESPACE statement. You can run this command from any instance. In the
example above, the previously used undo tablespace assigned to the RAC01 instance remains
assigned to it until RAC01’s last active transaction commits. The pending offline tablespace
may be unavailable for other instances until all transactions against that tablespace are
committed. You cannot simultaneously use Automatic Undo Management (AUM) and manual undo management in a RAC database. It is highly recommended that you use the AUM mode.
Starting and Stopping RAC Instances

• Multiple instances can open the same database simultaneously.
• Shutting down one instance does not interfere with other running instances.
• SHUTDOWN TRANSACTIONAL LOCAL does not wait for other instances’ transactions to finish.
• RAC instances can be started and stopped by using:
  – Enterprise Manager
  – The Server Control (srvctl) utility
  – SQL*Plus
• Shutting down a RAC database means shutting down all instances accessing the database.

Starting and Stopping RAC Instances
In a RAC environment, multiple instances can have the same RAC database open at the same time. Also, shutting down one instance does not interfere with the operation of other running instances.
The procedures for starting up and shutting down RAC instances are identical to the
procedures used in single-instance Oracle, with the following exception:
The SHUTDOWN TRANSACTIONAL command with the LOCAL option is useful to shut down an
instance after all active transactions on the instance have either committed or rolled back.
Transactions on other instances do not block this operation. If you omit the LOCAL option, this
operation waits until transactions on all other instances that started before the shutdown are
issued either a COMMIT or a ROLLBACK.
You can start up and shut down instances by using Enterprise Manager, SQL*Plus, or Server
Control (srvctl). Both Enterprise Manager and srvctl provide options to start up and shut
down all the instances of a RAC database with a single step.
Shutting down a RAC database mounted or opened by multiple instances means that you need
to shut down every instance accessing that RAC database. However, having only one instance
opening the RAC database is enough to declare the RAC database open.

Starting and Stopping RAC Instances with SQL*Plus
[host01] $ echo $ORACLE_SID
orcl1
sqlplus / as sysdba
SQL> startup
SQL> shutdown

[host02] $ echo $ORACLE_SID
orcl2
sqlplus / as sysdba
SQL> startup
SQL> shutdown

Starting and Stopping RAC Instances with SQL*Plus
If you want to start up or shut down just one instance, and you are connected to your local node, then you must first ensure that your current environment includes the system identifier (SID) for the local instance.
To start up or shut down your local instance, initiate a SQL*Plus session connected as SYSDBA
or SYSOPER, and then issue the required command (for example, STARTUP).
You can start multiple instances from a single SQL*Plus session on one node by way of
Oracle Net Services. To achieve this, you must connect to each instance by using a Net
Services connection string, typically an instance-specific alias from your tnsnames.ora file.
For example, you can use a SQL*Plus session on a local node to shut down two instances on
remote nodes by connecting to each using the instance’s individual alias name.
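For instance, such a session might look like the following sketch (it assumes that tnsnames.ora aliases orcl1 and orcl2 exist for the two instances):

SQL> CONNECT sys@orcl2 AS SYSDBA
SQL> SHUTDOWN IMMEDIATE
SQL> CONNECT sys@orcl1 AS SYSDBA
SQL> SHUTDOWN TRANSACTIONAL LOCAL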
It is not possible to start up or shut down more than one instance at a time in SQL*Plus, so you
cannot start or stop all the instances for a cluster database with a single SQL*Plus command.
To verify that instances are running, on any node, look at V$ACTIVE_INSTANCES.
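For example (a sketch; the exact output depends on your configuration, and the view reports each instance as host:instance_name):

SQL> SELECT inst_number, inst_name FROM v$active_instances;

INST_NUMBER INST_NAME
----------- --------------------
          1 host01:orcl1
          2 host02:orcl2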
Note: SQL*Plus is integrated with Oracle Clusterware to make sure that corresponding
resources are correctly handled when starting up and shutting down instances via SQL*Plus.

Starting and Stopping RAC Instances with srvctl

• start/stop syntax:
srvctl start|stop instance -d <db_name> -i <inst_name_list>
[-o open|mount|nomount|normal|transactional|immediate|abort]
[-c <connect_str> | -q]

srvctl start|stop database -d <db_name>
[-o open|mount|nomount|normal|transactional|immediate|abort]
[-c <connect_str> | -q]

• Examples:

$ srvctl start instance -d orcl -i orcl1,orcl2
$ srvctl stop instance -d orcl -i orcl1,orcl2
$ srvctl start database -d orcl -o open

Starting and Stopping RAC Instances with srvctl
The srvctl start database command starts a cluster database, its enabled instances, and services. The srvctl stop database command stops a database, its instances, and its services.
The srvctl start instance command starts instances of a cluster database. This command
also starts all enabled and nonrunning services that have the listed instances either as preferred
or as available instances.
The srvctl stop instance command stops instances, and all enabled and running services
that have these instances as either preferred or available instances.
You must disable an object that you intend to keep stopped after you issue a srvctl stop
command, otherwise Oracle Clusterware can restart it as a result of another planned operation.
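For example, a sketch of stopping the orcl1 instance and keeping it down until maintenance is complete, using the syntax shown in the slide (the immediate stop option is just one of the choices):

$ srvctl stop instance -d orcl -i orcl1 -o immediate
$ srvctl disable instance -d orcl -i orcl1
$ srvctl enable instance -d orcl -i orcl1

The final command re-enables the instance when you are ready to allow restarts again.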
For the commands that use a connect string, if you do not provide a connect string, then
srvctl uses / as sysdba to perform the operation. The –q option asks for a connect string
from standard input. srvctl does not support concurrent executions of commands on the same
object. Therefore, run only one srvctl command at a time for each database, service, or other
object. In order to use the START or STOP options of the SRVCTL command, your service must
be an Oracle Clusterware–enabled, nonrunning service.
Note: For more information, refer to the Oracle Clusterware and Oracle Real Application
Clusters Administration and Deployment Guide.
Switch Between the Automatic and Manual Policies

$ srvctl config database -d orcl -a
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups: DATA, FRA
Services:
Database is enabled
Database is administrator managed

$ srvctl modify database -d orcl -y MANUAL

Switch Between the Automatic and Manual Policies
By default, Oracle Clusterware is configured to start the VIP, listener, instance, ASM,
database services, and other resources during system boot. It is possible to modify some
resources to have their profile parameter AUTO_START set to the value 2. This means that after
node reboot, or when Oracle Clusterware is started, resources with AUTO_START=2 need to be
started manually via srvctl. This is designed to assist in troubleshooting and system
maintenance. When changing resource profiles through srvctl, the command tool
automatically modifies the profile attributes of other dependent resources given the current
prebuilt dependencies. The command to accomplish this is:
srvctl modify database -d <dbname> -y AUTOMATIC|MANUAL
To implement Oracle Clusterware and Real Application Clusters, it is best to have Oracle
Clusterware start the defined Oracle Clusterware resources during system boot, which is the
default. The first example in the slide uses the srvctl config database command to display
the current policy for the orcl database. As you can see, it is currently set to its default:
AUTOMATIC. The second statement uses the srvctl modify database command to change
the current policy to MANUAL for the orcl database. When you add a new database by using the
srvctl add database command, by default, that database is placed under the control of
Oracle Clusterware using the AUTOMATIC policy. However, you can use the following
statement to directly set the policy to MANUAL: srvctl add database -d orcl -y MANUAL.
Note: You can also use this procedure to configure your system to prevent Oracle Clusterware
from auto-restarting failed database instances more than once.
RAC Initialization Parameter Files
• An SPFILE is created if you use the DBCA.
• The SPFILE must be created on a shared volume or shared raw device.
• All instances use the same SPFILE.
• If the database is created manually, create an SPFILE from a PFILE.

[Diagram: Node1 and Node2 run instances RAC01 and RAC02. Each node has a local initRAC01.ora or initRAC02.ora that contains only an SPFILE=… pointer to the single SPFILE on shared storage.]

RAC Initialization Parameter Files
When you create the database, DBCA creates an SPFILE in the file location that you specify.
This location can be an ASM disk group, cluster file system file, or a shared raw device. If you
manually create your database, it is recommended to create an SPFILE from a PFILE.
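A minimal sketch of that step (the file locations are illustrative):
SQL> CREATE SPFILE='+DATA/RAC/spfileRAC.ora'
  2  FROM PFILE='/u01/app/oracle/admin/RAC/pfile/init.ora';
Each local initRAC01.ora and initRAC02.ora then contains only the SPFILE=… pointer shown in the diagram.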
All instances in the cluster database use the same SPFILE. Because the SPFILE is a binary file,
do not edit it. Change the SPFILE settings by using EM or ALTER SYSTEM SQL statements.
RAC uses a traditional PFILE only if an SPFILE does not exist or if you specify PFILE in your
STARTUP command. Using an SPFILE simplifies administration, keeps parameter settings
consistent across instances, and guarantees that parameter settings persist across database
shutdown and startup. In addition, you can configure RMAN to back up your SPFILE.
SPFILE Parameter Values and RAC

• You can change parameter settings using the ALTER SYSTEM SET command from any instance:
ALTER SYSTEM SET <dpname> SCOPE=MEMORY SID='<sid|*>';
• SPFILE entries such as:
– *.<pname> apply to all instances
– <sid>.<pname> apply only to <sid>
– <sid>.<pname> takes precedence over *.<pname>
• Use current or future *.<dpname> settings for <sid>:
ALTER SYSTEM RESET <dpname> SCOPE=MEMORY SID='<sid>';
• Remove an entry from your SPFILE:
ALTER SYSTEM RESET <dpname> SCOPE=SPFILE SID='<sid|*>';

SPFILE Parameter Values and RAC
You can modify the value of your initialization parameters by using the ALTER SYSTEM SET
command. This is the same as with a single-instance database except that you have the
possibility to specify the SID clause in addition to the SCOPE clause.
By using the SID clause, you can specify the SID of the instance where the value takes effect.
Specify SID='*' if you want to change the value of the parameter for all instances. Specify
SID='sid' if you want to change the value of the parameter only for the instance sid. This
setting takes precedence over previous and subsequent ALTER SYSTEM SET statements that
specify SID='*'. If the instances are started up with an SPFILE, then SID='*' is the default if
you do not specify the SID clause.
If you specify an instance other than the current instance, then a message is sent to that
instance to change the parameter value in its memory if you are not using the SPFILE scope.
The combination of SCOPE=MEMORY and SID='sid' of the ALTER SYSTEM RESET command
allows you to override the precedence of a currently used <sid>.<dparam> entry. This allows
for the current *.<dparam> entry to be used, or for the next created *.<dparam> entry to be
taken into account on that particular SID.
Using the last example, you can remove a line from your SPFILE.
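For example, using OPEN_CURSORS as an illustrative parameter: give orcl1 its own in-memory value, let it fall back to the *. setting, and finally remove the instance-specific SPFILE entry:
SQL> ALTER SYSTEM SET OPEN_CURSORS=500 SCOPE=MEMORY SID='orcl1';
SQL> ALTER SYSTEM RESET OPEN_CURSORS SCOPE=MEMORY SID='orcl1';
SQL> ALTER SYSTEM RESET OPEN_CURSORS SCOPE=SPFILE SID='orcl1';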
EM and SPFILE Parameter Values

[Screenshot: Initialization Parameters page, Current tab. Changes made here are applied with SCOPE=MEMORY.]

EM and SPFILE Parameter Values
You can access the Initialization Parameters page by clicking the Initialization Parameters link
on the Cluster Database Server page.
The Current tabbed page shows you the values currently used by the initialization parameters
of all the instances accessing the RAC database. You can filter the Initialization Parameters
page to show only those parameters that meet the criteria of the filter that you entered in the
Name field.
The Instance column shows the instances for which the parameter has the value listed in the
table. An asterisk (*) indicates that the parameter has the same value for all remaining
instances of the cluster database.
Choose a parameter from the Select column and perform one of the following steps:
• Click Add to add the selected parameter to a different instance. Enter a new instance
name and value in the newly created row in the table.
• Click Reset to reset the value of the selected parameter. Note that you can reset only
those parameters that do not have an asterisk in the Instance column. The value of the
selected column is reset to the value of the remaining instances.
Note: For both Add and Reset buttons, the ALTER SYSTEM command uses SCOPE=MEMORY.
EM and SPFILE Parameter Values
[Screenshot: Initialization Parameters page, SPFile tab. With "Apply changes in SPFILE mode" selected, changes use SCOPE=BOTH; otherwise, SCOPE=SPFILE is used.]

EM and SPFILE Parameter Values (continued)
The SPFile tabbed page displays the current values stored in your SPFILE.
As on the Current tabbed page, you can add or reset parameters. However, if you select the
“Apply changes in SPFILE mode” check box, the ALTER SYSTEM command uses
SCOPE=BOTH. If this check box is not selected, SCOPE=SPFILE is used.
Click Apply to accept and generate your changes.
RAC Initialization Parameters
[Screenshot: initialization parameter settings for a RAC database.]

RAC Initialization Parameters
CLUSTER_DATABASE: Enables a database to be started in cluster mode. Set this to TRUE.
CLUSTER_DATABASE_INSTANCES: Sets the number of instances in your RAC
environment. A proper setting for this parameter can improve memory use.
CLUSTER_INTERCONNECTS: Specifies the cluster interconnect when there is more than one
interconnect. Refer to your Oracle platform–specific documentation for the use of this
parameter, its syntax, and its behavior. You typically do not need to set the
CLUSTER_INTERCONNECTS parameter. For example, do not set this parameter for the
following common configurations:
• If you have only one cluster interconnect
• If the default cluster interconnect meets the bandwidth requirements of your RAC
database, which is typically the case
• If NIC bonding is being used for the interconnect
• When OIFCFG's global configuration can specify the right cluster interconnects
(CLUSTER_INTERCONNECTS needs to be set only as an override for OIFCFG)
DB_NAME: If you set a value for DB_NAME in instance-specific parameter files, the setting
must be identical for all instances.
DISPATCHERS: Set this parameter to enable a shared-server configuration, that is, a server
that is configured to allow many user processes to share very few server processes.
RAC Initialization Parameters (continued)
With shared-server configurations, many user processes connect to a dispatcher. The
DISPATCHERS parameter may contain many attributes. Oracle recommends that you
configure at least the PROTOCOL and LISTENER attributes.
PROTOCOL specifies the network protocol for which the dispatcher process generates a
listening end point. LISTENER specifies an alias name for the Oracle Net Services listeners.
Set the alias to a name that is resolved through a naming method, such as a tnsnames.ora
file.
THREAD: If specified, this parameter must have unique values on all instances. The THREAD
parameter specifies the number of the redo thread to be used by an instance. You can specify
any available redo thread number as long as that thread number is enabled and is not used.
Other parameters that can affect RAC database configurations include:
• ASM_PREFERRED_READ_FAILURE_GROUPS: Specifies a set of disks to be the preferred
disks from which to read mirror data copies. The values that you set for this parameter
are instance specific and need not be the same on all instances.
• GCS_SERVER_PROCESSES: This static parameter specifies the initial number of server
processes for an Oracle RAC instance's Global Cache Service (GCS). The GCS
processes manage the routing of interinstance traffic among Oracle RAC instances. The
default number of GCS server processes is calculated based on system resources. For
systems with one to three CPUs, there are two GCS server processes (LMSn). For
systems with four to fifteen CPUs, there are two GCS server processes (LMSn). For
systems with sixteen or more CPUs, the number of GCS server processes equals (the
number of CPUs divided by 32) + 2, dropping any fractions. You can set this parameter
to different values on different instances.
• INSTANCE_NAME: The instance's SID. The SID identifies the instance's shared
memory on a host. Any alphanumeric characters can be used. The value for this
parameter is automatically set to the database unique name followed by an incrementing
number during the creation of the database when using DBCA.
• INSTANCE_NUMBER: An Oracle RAC parameter that specifies a unique number that
maps the instance to one free list group for each database object. This parameter must be
set for every instance in the cluster. It is automatically defined during the creation of the
database when using DBCA.
• REMOTE_LISTENER: This dynamic parameter specifies a network name that resolves
to an address or address list of Oracle Net remote listeners (that is, listeners that are not
running on the same machine as this instance). The address or address list is specified in
the TNSNAMES.ORA file or other address repository as configured for your system.
Parameters That Require Identical Settings
• ACTIVE_INSTANCE_COUNT
• ARCHIVE_LAG_TARGET
• COMPATIBLE
• CLUSTER_DATABASE/CLUSTER_DATABASE_INSTANCES
• CONTROL_FILES
• DB_BLOCK_SIZE
• DB_DOMAIN
• DB_FILES
• DB_NAME
• DB_RECOVERY_FILE_DEST/DB_RECOVERY_FILE_DEST_SIZE
• DB_UNIQUE_NAME
• INSTANCE_TYPE (RDBMS or ASM)
• PARALLEL_EXECUTION_MESSAGE_SIZE
• REMOTE_LOGIN_PASSWORD_FILE
• UNDO_MANAGEMENT

Parameters That Require Identical Settings
Certain initialization parameters that are critical at database creation or that affect certain
database operations must have the same value for every instance in RAC. Specify these
parameter values in the SPFILE, or within each init_dbname.ora file on each instance. In the
above list, each parameter must have the same value on all instances.
Note: The setting for DML_LOCKS and RESULT_CACHE_MAX_SIZE must be identical on every
instance only if set to zero. Disabling the result cache on some instances may lead to incorrect
results.
Parameters That Require Unique Settings
Instance settings:
• THREAD
• INSTANCE_NUMBER
• ROLLBACK_SEGMENTS
• UNDO_TABLESPACE
• INSTANCE_NAME
• CLUSTER_INTERCONNECTS
• ASM_PREFERRED_READ_FAILURE_GROUPS

Parameters That Require Unique Settings
If you use the THREAD or ROLLBACK_SEGMENTS parameters, it is recommended that you set
unique values for them by using the SID identifier in the SPFILE. However, you must set a
unique value for INSTANCE_NUMBER for each instance and you cannot use a default value.
The Oracle server uses the INSTANCE_NUMBER parameter to distinguish among instances at
startup. The Oracle server uses the THREAD number to assign redo log groups to specific
instances. To simplify administration, use the same number for both the THREAD and
INSTANCE_NUMBER parameters.
If you specify UNDO_TABLESPACE with Automatic Undo Management enabled, set this
parameter to a unique undo tablespace name for each instance.
Using the ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter, you can specify a
list of preferred read failure group names. The disks in those failure groups become the
preferred read disks. Thus, every node can read from its local disks. The setting for this
parameter is instance specific, and the values need not be the same on all instances.
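A minimal sketch of such instance-specific SPFILE entries as they appear in the file (instance and tablespace names are illustrative; set them through ALTER SYSTEM ... SID='...', not by editing the SPFILE):
*.cluster_database=TRUE
orcl1.instance_number=1
orcl2.instance_number=2
orcl1.thread=1
orcl2.thread=2
orcl1.undo_tablespace='UNDOTBS1'
orcl2.undo_tablespace='UNDOTBS2'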
Quiescing RAC Databases
• Use the ALTER SYSTEM QUIESCE RESTRICTED statement from a single instance:
SQL> ALTER SYSTEM QUIESCE RESTRICTED;
• You must have the Database Resource Manager feature activated to issue the statement above.
• The database cannot be opened by other instances after the ALTER SYSTEM QUIESCE… statement starts.
• The ALTER SYSTEM QUIESCE RESTRICTED and ALTER SYSTEM UNQUIESCE statements affect all instances in a RAC environment.
• Cold backups cannot be taken when the database is in a quiesced state.

Quiescing RAC Databases
To quiesce a RAC database, use the ALTER SYSTEM QUIESCE RESTRICTED statement from one
instance. It is not possible to open the database from any instance while the database is in the
process of being quiesced from another instance. After all the non-DBA sessions become
inactive, the ALTER SYSTEM QUIESCE RESTRICTED statement executes and the database is
considered to be quiesced. In a RAC environment, this statement affects all instances.
To issue the ALTER SYSTEM QUIESCE RESTRICTED statement in a RAC environment, you
must have the Database Resource Manager feature activated, and it must have been activated
since instance startup for all instances in the cluster database. It is through the Database
Resource Manager that non-DBA sessions are prevented from becoming active. The following
conditions apply to RAC:
• If you had issued the ALTER SYSTEM QUIESCE RESTRICTED statement, but the Oracle
server has not finished processing it, then you cannot open the database.
• You cannot open the database if it is already in a quiesced state.
• The ALTER SYSTEM QUIESCE RESTRICTED and ALTER SYSTEM UNQUIESCE statements
affect all instances in a RAC environment, not just the instance that issues the command.
Cold backups cannot be taken when the database is in a quiesced state because the Oracle
background processes may still perform updates for internal purposes even when the database
is in a quiesced state. Also, the file headers of online data files continue to appear as if they are
being accessed. They do not look the same as if a clean shutdown were done.
Terminating Sessions on a Specific Instance
SQL> SELECT SID, SERIAL#, INST_ID
  2  FROM GV$SESSION WHERE USERNAME='JMW';

       SID    SERIAL#    INST_ID
---------- ---------- ----------
       140       3340          2

SQL> ALTER SYSTEM KILL SESSION '140,3340,@2';

System altered.

SQL> ALTER SYSTEM KILL SESSION '140,3340,@2'
*
ERROR at line 1:
ORA-00031: session marked for kill

Terminating Sessions on a Specific Instance
Starting with Oracle Database 11g Release 1, you can use the ALTER SYSTEM KILL SESSION
statement to terminate a session on a specific instance.
The slide above illustrates this by terminating a session started on a different instance than the
one used to terminate the problematic session.
If the session is performing some activity that must be completed, such as waiting for a reply
from a remote database or rolling back a transaction, then Oracle Database waits for this
activity to complete, marks the session as terminated, and then returns control to you. If the
waiting lasts a minute, Oracle Database marks the session to be terminated and returns control
to you with a message that the session is marked to be terminated. The PMON background
process then marks the session as terminated when the activity is complete.
Note: You can also use the IMMEDIATE clause at the end of the ALTER SYSTEM command to
immediately terminate the session without waiting for outstanding activity to complete.
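For example, using the session from the slide:
SQL> ALTER SYSTEM KILL SESSION '140,3340,@2' IMMEDIATE;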
How SQL*Plus Commands Affect Instances
SQL*Plus Command              Associated Instance
ARCHIVE LOG                   Generally affects the current instance
CONNECT                       Affects the default instance if no instance is specified in the CONNECT command
HOST                          Affects the node running the SQL*Plus session
RECOVER                       Does not affect any particular instance, but rather the database
SHOW PARAMETER and SHOW SGA   Show the current instance parameter and SGA information
STARTUP and SHUTDOWN          Affect the current instance
SHOW INSTANCE                 Displays information about the current instance

How SQL*Plus Commands Affect Instances
Most SQL statements affect the current instance. You can use SQL*Plus to start and stop
instances in the RAC database. You do not need to run SQL*Plus commands as root on
UNIX-based systems or as Administrator on Windows-based systems. You need only the
proper database account with the privileges that you normally use for single-instance Oracle
database administration. The following are some examples of how SQL*Plus commands affect
instances:
• The ALTER SYSTEM SET CHECKPOINT LOCAL statement affects only the instance
to which you are currently connected, rather than the default instance or all instances.
• ALTER SYSTEM CHECKPOINT LOCAL affects the current instance.
• ALTER SYSTEM CHECKPOINT or ALTER SYSTEM CHECKPOINT GLOBAL
affects all instances in the cluster database.
• ALTER SYSTEM SWITCH LOGFILE affects only the current instance.
• To force a global log switch, use the ALTER SYSTEM ARCHIVE LOG CURRENT
statement.
• The INSTANCE option of ALTER SYSTEM ARCHIVE LOG enables you to archive
each online redo log file for a specific instance.
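A short sketch of the last two bullets (the instance name orcl2 is illustrative, and the placement of the INSTANCE clause should be verified against the SQL Reference for your release):
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> ALTER SYSTEM ARCHIVE LOG INSTANCE 'orcl2' CURRENT;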
Transparent Data Encryption and Wallets in RAC
• One wallet shared by all instances on shared storage:
– No additional administration is required.
• One copy of the wallet on each local storage:
– Local copies need to be synchronized each time the master key is changed.

[Diagram: (1) ALTER SYSTEM SET ENCRYPTION KEY is issued on Node1; (2) the wallet containing the master key is then manually copied from Node1 to Node2 through Noden.]

Transparent Data Encryption and Wallets in RAC
Wallets used by RAC instances for Transparent Data Encryption may be a local copy of a
common wallet shared by multiple nodes, or a shared copy residing on shared storage that all
of the nodes can access.
A deployment with a single wallet on a shared disk requires no additional configuration to use
Transparent Data Encryption.
If you want to use local copies, you must copy the wallet and make it available to all of the
other nodes after initial configuration. For systems using Transparent Data Encryption with
encrypted wallets, you can use any standard file transport protocol. For systems using
Transparent Data Encryption with obfuscated wallets, file transport through a secured channel
is recommended. The wallet must reside in the directory specified by the setting for the
WALLET_LOCATION or ENCRYPTION_WALLET_LOCATION parameter in sqlnet.ora. The
local copies of the wallet need not be synchronized for the duration of Transparent Data
Encryption usage until the server key is rekeyed through the ALTER SYSTEM SET ENCRYPTION
KEY SQL statement. Each time you run the ALTER SYSTEM SET ENCRYPTION KEY statement at a
database instance, you must again copy the wallet residing on that node and make it available
to all of the other nodes. To avoid unnecessary administrative overhead, reserve rekeying for exceptional cases
where you are certain that the server master key is compromised and that not rekeying it would
cause a serious security problem.
Quiz

If an instance starts in a policy-managed RAC environment and
no thread or redo log file is available, then Oracle Clusterware
automatically enables a thread of redo and allocates the redo
log files and undo if the database uses Oracle ASM or any
cluster file system and OMF is enabled.
1. True
2. False
Answer: 1
The statement is true.
Quiz
Which of the following statements is not true:
1. Multiple instances can open the same database simultaneously.
2. Shutting down one instance does not interfere with other running instances.
3. SHUTDOWN TRANSACTIONAL LOCAL will wait for other instances' transactions to finish.
4. Shutting down a RAC database means shutting down all instances accessing the database.

Answer: 3
Statement 3 is not true.
Summary
In this lesson, you should have learned how to:
• Use Enterprise Manager Cluster Database pages
• Define redo log files in a RAC environment
• Define undo tablespaces in a RAC environment
• Start and stop RAC databases and instances
• Modify initialization parameters in a RAC environment ble
Practice 12 Overview
This practice covers the following topics:
• Using operating system and password file authenticated
connections
• Using Oracle Database authenticated connections
• Stopping a complete ORACLE_HOME component stack
Managing Backup and Recovery for RAC
Objectives
After completing this lesson, you should be able to configure the following:
• The RAC database to use ARCHIVELOG mode and the fast recovery area
• RMAN for the RAC environment
Protecting Against Media Failure
[Diagram: mirrored database disks protect ongoing disk activity, while archived log files and database backups protect against media failure.]

Protecting Against Media Failure
Although RAC provides you with methods to avoid or to reduce down time due to a failure of
one or more (but not all) of your instances, you must still protect the database itself, which is
shared by all the instances. This means that you need to consider disk backup and recovery
strategies for your cluster database just as you would for a nonclustered database.
To minimize the potential loss of data due to disk failures, you may want to use disk mirroring
technology (available from your server or disk vendor). As in nonclustered databases, you can
have more than one mirror if your vendor allows it, to help reduce the potential for data loss
and to provide you with alternative backup strategies. For example, with your database in
ARCHIVELOG mode and with three copies of your disks, you can remove one mirror copy and
perform your backup from it while the two remaining mirror copies continue to protect
ongoing disk activity. To do this correctly, you must first put the tablespaces into backup mode
and then, if required by your cluster or disk vendor, temporarily halt disk operations by issuing
the ALTER SYSTEM SUSPEND command. After the statement completes, you can break the
mirror and then resume normal operations by executing the ALTER SYSTEM RESUME
command and taking the tablespaces out of backup mode.
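A sketch of that sequence, assuming your cluster or disk vendor requires suspending disk activity (BEGIN/END BACKUP is shown at the database level for brevity):
SQL> ALTER DATABASE BEGIN BACKUP;
SQL> ALTER SYSTEM SUSPEND;
-- split the mirror with the storage vendor's tools
SQL> ALTER SYSTEM RESUME;
SQL> ALTER DATABASE END BACKUP;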
Media Recovery in Oracle RAC
• Media recovery must be user initiated through a client application.
• In these situations, use RMAN to restore backups of the data files and then recover the database.
• RMAN media recovery procedures for RAC do not differ substantially from those for single-instance environments.
• The node that performs the recovery must be able to restore all of the required data files.
– That node must also be able to either read all required archived redo logs on disk or restore them from backups.
• When recovering a database with encrypted tablespaces, the Oracle Wallet must be opened after database mount and before you open the database.

Media Recovery in Oracle RAC
Media recovery must be user initiated through a client application, whereas instance recovery
is automatically performed by the database. In these situations, use RMAN to restore backups
of the data files and then recover the database. The procedures for RMAN media recovery in
Oracle RAC environments do not differ substantially from the media recovery procedures for
single-instance environments.
The node that performs the recovery must be able to restore all of the required data files. That
node must also be able to either read all the required archived redo logs on disk or be able to
restore them from backups. Each instance generates its own archive logs that are copies of its
dedicated redo log group threads. It is recommended that Automatic Storage Management
(ASM) or a cluster file system be used to consolidate these files.
When recovering a database with encrypted tablespaces (for example, after a SHUTDOWN
ABORT or a catastrophic error that brings down the database instance), you must open the
Oracle Wallet after database mount and before you open the database, so the recovery process
can decrypt data blocks and redo.
Parallel Recovery in RAC
• Oracle Database automatically selects the optimum degree of parallelism for:
– Instance recovery
– Crash recovery
• Archived redo logs are applied using an optimal number of parallel processes based on the availability of CPUs.
• With RMAN's RESTORE and RECOVER commands, the following three stages of recovery are performed in parallel:
– Restoring data files
– Applying incremental backups
– Applying archived redo logs
• To disable parallel instance and crash recovery, set the RECOVERY_PARALLELISM parameter to 0.

Parallel Recovery in RAC
Oracle Database automatically selects the optimum degree of parallelism for instance and
crash recovery. Oracle Database applies archived redo logs using an optimal number of
parallel processes based on the availability of CPUs. With RMAN’s RESTORE and RECOVER
commands, Oracle Database automatically performs in parallel the following three stages of
recovery:
• Restoring Data Files: When restoring data files, the number of channels you allocate in
the RMAN recover script effectively sets the parallelism that RMAN uses. For example,
if you allocate five channels, you can have up to five parallel streams restoring data files.
• Applying Incremental Backups: Similarly, when you are applying incremental
backups, the number of channels you allocate determines the potential parallelism.
• Applying Archived Redo Logs with RMAN: The application of archived redo logs is
performed in parallel. Oracle Database automatically selects the optimum degree of
parallelism based on available CPU resources.
To disable parallel instance and crash recovery on a system with multiple CPUs, set the
RECOVERY_PARALLELISM parameter to 0.
Use the NOPARALLEL clause of the RMAN RECOVER command or ALTER DATABASE
RECOVER statement to force the RAC database to use nonparallel media recovery.
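For example, to turn off parallel instance and crash recovery on all instances (a minimal sketch; RECOVERY_PARALLELISM is a static parameter, so the change takes effect after a restart):
SQL> ALTER SYSTEM SET RECOVERY_PARALLELISM=0 SCOPE=SPFILE SID='*';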
Archived Log File Configurations
[Diagram: two archive log configurations. Cluster file system scheme: archive logs from each instance are written to the same file location. Local archive with NFS scheme: each instance can read the NFS-mounted archive destinations of all instances.]

Archived Log File Configurations
During backup and recovery operations involving archived log files, the Oracle server
determines the file destinations and names from the control file. If you use RMAN, the
archived log file path names can also be stored in the optional recovery catalog. However, the
archived log file path names do not include the node name, so RMAN expects to find the files
it needs on the nodes where the channels are allocated.
If you use a cluster file system, your instances can all write to the same archive log
destination. This is known as the cluster file system scheme. Backup and recovery of the
archive logs are easy because all logs are located in the same directory.
If a cluster file system is not available, Oracle recommends that local archive log destinations
be created for each instance with NFS-read mount points to all other instances. This is known
as the local archive with network file system (NFS) scheme. During backup, you can either
back up the archive logs from each host or select one host to perform the backup for all
archive logs. During recovery, one instance may access the logs from any host without having
to first copy them to the local destination. The LOG_ARCHIVE_FORMAT parameter supports
the %t variable that embeds the unique thread number into the name of the archive logs so that
each node generates unique names.
Using either scheme, you may want to provide a second archive destination to avoid single
points of failure.
RAC and the Fast Recovery Area
[Diagram: the fast recovery area must reside on shared storage: a cluster file system, a certified NFS directory, or ASM.]

RAC and the Fast Recovery Area
To use a fast recovery area in RAC, you must place it on an ASM disk group, a cluster file
system, or on a shared directory that is configured through certified NFS for each RAC
instance. That is, the fast recovery area must be shared among all the instances of a RAC
database. In addition, set the DB_RECOVERY_FILE_DEST parameter to the same value on
all instances.
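A minimal sketch of setting this up (the +FRA disk group name and the size are illustrative):
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA' SCOPE=BOTH SID='*';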
Oracle Enterprise Manager enables you to set up a fast recovery area. To use this feature:
1. From the Cluster Database Home page, click the Maintenance tab.
2. Under the Backup/Recovery options list, click Configure Recovery Settings.
3. Specify your requirements in the Flash Recovery Area section of the page.
Note: For Oracle Database 11g Release 2 (11.2), the flash recovery area has been renamed fast
recovery area. Oracle Enterprise Manager, however, still uses the older vocabulary on its Web
pages.
RAC Backup and Recovery Using EM
[Screenshot: Cluster Database Availability page in Enterprise Manager.]

RAC Backup and Recovery Using EM
You can access the Cluster Database backup and recovery–related tasks by clicking the
Availability tab on the Cluster Database Home page. On the Availability tabbed page, you can
perform a range of backup and recovery operations using RMAN, such as scheduling backups,
performing recovery when necessary, and configuring backup and recovery settings. Also,
there are links related to Oracle Secure Backup and Service management.
Configure RAC Recovery Settings with EM
[Screenshot: Recovery Settings page in Enterprise Manager.]

Configure RAC Recovery Settings with EM
You can use Enterprise Manager to configure important recovery settings for your cluster
database. On the Database Home page, click the Availability tab, and then click the Recovery
Settings link. From here, you can ensure that your database is in ARCHIVELOG mode and
configure flash recovery settings.
With a RAC database, if the Archive Log Destination setting is not the same for all instances,
the field appears blank, with a message indicating that instances have different settings for this
field. In this case, entering a location in this field sets the archive log location for all instances
of database. You can assign instance-specific values for an archive log destination by using the
Initialization Parameters page.
Note: You can run the ALTER DATABASE SQL statement to change the archiving mode in
RAC as long as the database is mounted by the local instance but not open in any instances.
You do not need to modify parameter settings to run this statement. Set the initialization
parameters DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE to the
same values on all instances to configure a flash recovery area in a RAC environment.
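A sketch of enabling ARCHIVELOG mode this way from the command line, mounting only the local instance as the note describes (the orcl database name is illustrative):
$ srvctl stop database -d orcl
$ srvctl start instance -d orcl -i orcl1 -o mount
SQL> ALTER DATABASE ARCHIVELOG;
$ srvctl stop instance -d orcl -i orcl1
$ srvctl start database -d orcl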
Archived Redo File Conventions in RAC
Variable   Description                              Example
%t         Thread number, not padded                log_1
%T         Thread number, left-zero-padded          log_0001
%s         Log sequence number, not padded          log_251
%S         Log sequence number, left-zero-padded    log_0000000251
%r         Resetlogs identifier                     log_23452345
%R         Padded resetlogs identifier              log_0023452345
%t_%s_%r   Using multiple variables                 log_1_251_23452345

Archived Redo File Conventions in RAC
For any archived redo log configuration, uniquely identify the archived redo logs with the
LOG_ARCHIVE_FORMAT parameter. The format of this parameter is operating system–specific
and it can include text strings, one or more variables, and a file name extension.
All of the thread parameters, in either uppercase or lowercase, are mandatory for RAC. This
enables the Oracle database to create unique names for archive logs across the incarnation.
This requirement is in effect when the COMPATIBLE parameter is set to 10.0 or greater. Use
the %R or %r parameter to include the resetlogs identifier to avoid overwriting the logs from a
previous incarnation. If you do not specify a log format, the default is operating system
specific and includes %t, %s, and %r.
As an example, if the instance associated with redo thread number 1 sets
LOG_ARCHIVE_FORMAT to log_%t_%s_%r.arc, then its archived redo log files are named
as:
log_1_1000_23435343.arc
log_1_1001_23452345.arc
log_1_1002_23452345.arc
...
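A minimal way to set this format for all instances (the .arc extension is illustrative; the parameter is static, so SCOPE=SPFILE is required):
SQL> ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='log_%t_%s_%r.arc' SCOPE=SPFILE SID='*';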
Configure RAC Backup Settings with EM
[Screenshot: Backup Settings page in Enterprise Manager.]

Configure RAC Backup Settings with EM
Persistent backup settings can be configured using Enterprise Manager. On the Database
Control Home page, click the Availability tab, and then click the Backup Settings link. You
can configure disk settings, such as the directory location of your disk backups and level of
parallelism. You can also choose the default backup type:
• Backup set
• Compressed backup set
• Image copy
You can also specify important tape-related settings, such as the number of available tape
drives and vendor-specific media management parameters.
Oracle Recovery Manager
RMAN provides the following benefits for Real Application Clusters:
• Can read cluster files, ASM files, or raw partitions with no configuration changes
• Can access multiple archive log destinations

[Diagram: Recovery Manager uses a recovery catalog, stored scripts, and a snapshot control file, and connects through an Oracle server process to the Oracle database, archived log files, and backup storage.]

Oracle Recovery Manager
Oracle Database provides RMAN for backing up and restoring the database. RMAN enables
you to back up, restore, and recover data files, control files, SPFILEs, and archived redo logs.
You can run RMAN from the command line or you can use it from the Backup Manager in
Enterprise Manager. In addition, RMAN is the recommended backup and recovery tool if you
are using ASM. RMAN can use stored scripts, interactive scripts, or an interactive GUI front
end. When using RMAN with your RAC database, use stored scripts to initiate the backup and
recovery processes from the most appropriate node.
If you use different Oracle Home locations for your RAC instances on each of your nodes,
create a snapshot control file in a location that exists on all your nodes. The snapshot control
file is needed only on the nodes on which RMAN performs backups; it does not need to be
globally available to all instances in a RAC environment.
You can use either a cluster file or a shared raw device as well as a local directory that exists
on each node in your cluster. Here is an example:
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO
'/oracle/db_files/snaps/snap_prod1.cf';
For recovery, you must ensure that each recovery node can access the archive log files from all
instances by using one of the archive schemes discussed earlier, or make the archived logs
available to the recovering instance by copying them from another location.
Configure RMAN Snapshot Control File Location
• The snapshot control file path must be valid on every node from which you might initiate an RMAN backup.
• Configure the snapshot control file location in RMAN.
– Determine the current location:
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snap_prod.f
– You can use ASM, a shared file system location, or a shared block device if you prefer:
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA/SNAP/snap_prod.cf';
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/ocfs2/oradata/dbs/scf/snap_prod.cf';
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/dev/sdj2';

Configure RMAN Snapshot Control File Location
The snapshot control file is a temporary file that RMAN creates to resynchronize from a read-
consistent version of the control file. RMAN needs a snapshot control file only when
resynchronizing with the recovery catalog or when making a backup of the current control file.
In a RAC database, the snapshot control file is created on the node that is making the backup.
You need to configure a default path and file name for these snapshot control files that are
valid on every node from which you might initiate an RMAN backup.
Run the following RMAN command to determine the configured location of the snapshot
control file:
SHOW SNAPSHOT CONTROLFILE NAME
You can change the configured location of the snapshot control file. For example, on UNIX-
based systems, you can specify the snapshot control file location as snap_prod.cf located in
the ASM disk group +FRA by entering the following at the RMAN prompt:
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA/SNAP/snap_prod.cf'
This command globally sets the configuration for the location of the snapshot control file
throughout your cluster database.
Note: The CONFIGURE command creates persistent settings across RMAN sessions.
Configure Control File and SPFILE Autobackup
• RMAN automatically creates a control file and SPFILE backup after the BACKUP or COPY command:
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
• Change default location:
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+DATA';
• Location must be available to all nodes in your RAC database.

Configure Control File and SPFILE Autobackup
If you set CONFIGURE CONTROLFILE AUTOBACKUP to ON, RMAN automatically creates a
control file and an SPFILE backup after you run the BACKUP or COPY command. RMAN can
also automatically restore an SPFILE if this is required to start an instance to perform
recovery. This means that the default location for the SPFILE must be available to all nodes in
your RAC database.
These features are important in disaster recovery because RMAN can restore the control file
even without a recovery catalog. RMAN can restore an autobackup of the control file even
after the loss of both the recovery catalog and the current control file.
You can change the default location that RMAN gives to this file with the CONFIGURE
CONTROLFILE AUTOBACKUP FORMAT command. If you specify an absolute path name in this
command, this path must exist identically on all nodes that participate in backups.
Note: RMAN performs CONTROL FILE AUTOBACKUP on the first allocated channel.
When you allocate multiple channels with different parameters (especially if you allocate a
channel with the CONNECT command), you must determine which channel will perform the
automatic backup. Always allocate the channel for the connected node first.
Crosschecking on Multiple RAC Cluster Nodes

When crosschecking on multiple nodes, make sure that all backups can be accessed by every node in the cluster.
• This allows you to allocate channels at any node in the cluster during restore or crosscheck operations.
• Otherwise, you must allocate channels on multiple nodes by providing the CONNECT option to the CONFIGURE CHANNEL command.
• If backups are not accessible because no channel was configured on the node that can access those backups, then those backups are marked EXPIRED.

Crosschecking on Multiple RAC Cluster Nodes
When crosschecking on multiple RAC nodes, configure the cluster so that all backups can be
C h accessed by every node, regardless of which node created the backup. When the cluster is
configured this way, you can allocate channels at any node in the cluster during restore or
crosscheck operations.
If you cannot configure the cluster so that each node can access all backups, then during
restore and crosscheck operations, you must allocate channels on multiple nodes by providing
the CONNECT option to the CONFIGURE CHANNEL command, so that every backup can be
accessed by at least one node. If some backups are not accessible during crosscheck because
no channel was configured on the node that can access those backups, then those backups are
marked EXPIRED in the RMAN repository after the crosscheck.
For example, you can use CONFIGURE CHANNEL ... CONNECT in an Oracle RAC
configuration in which tape backups are created on various nodes in the cluster and each
backup is only accessible on the node on which it is created.
Channel Connections to Cluster Instances
• When backing up, each allocated channel can connect to a different instance in the cluster.
• Instances to which the channels connect must be either all mounted or all open.
• When choosing a channel to use, RMAN gives preference to the nodes with faster access to the data files that you want to back up.

CONFIGURE DEFAULT DEVICE TYPE TO sbt;
CONFIGURE DEVICE TYPE sbt PARALLELISM 3;
CONFIGURE CHANNEL 1 DEVICE TYPE sbt CONNECT='sys/rac@orcl1';
CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT='sys/rac@orcl2';
CONFIGURE CHANNEL 3 DEVICE TYPE sbt CONNECT='sys/rac@orcl3';

OR

CONFIGURE DEFAULT DEVICE TYPE TO sbt;
CONFIGURE DEVICE TYPE sbt PARALLELISM 3;
CONFIGURE CHANNEL DEVICE TYPE sbt CONNECT='sys/rac@bkp_serv';

Channel Connections to Cluster Instances
When making backups in parallel, RMAN channels can connect to a different instance in the
cluster. The examples in the slide illustrate two possible configurations:
• If you want to dedicate channels to specific instances, you can control at which instance
the channels are allocated by using separate connect strings for each channel
configuration as shown by the first example.
• If you define a special service for your backup and recovery jobs, you can use the second
example shown in the slide. If you configure this service with load balancing turned on,
then the channels are allocated at a node as decided by the load balancing algorithm.
During a backup, the instances to which the channels connect must be either all mounted or all
open. For example, if the orcl1 instance has the database mounted whereas the orcl2 and
orcl3 instances have the database open, then the backup fails.
In some cluster database configurations, some nodes of the cluster have faster access to certain
data files than to other data files. RMAN automatically detects this, which is known as node
affinity awareness. When deciding which channel to use to back up a particular data file,
RMAN gives preference to the nodes with faster access to the data files that you want to back
up. For example, if you have a three-node cluster, and if node 1 has faster read/write access to
data files 7, 8, and 9 than the other nodes, then node 1 has greater node affinity to those files
than nodes 2 and 3, and RMAN will take advantage of this automatically.
RMAN Channel Support for the Grid
• RAC allows the use of nondeterministic connect strings.
• RMAN can use connect strings that are not bound to a specific instance in the Grid environment.
• It simplifies the use of parallelism with RMAN in a RAC environment.
• It uses the load-balancing characteristics of the Grid environment.
– Channels connect to RAC instances that are the least loaded.

CONFIGURE DEFAULT DEVICE TYPE TO sbt;
CONFIGURE DEVICE TYPE sbt PARALLELISM 3;

RMAN Channel Support for the Grid
RAC allows the use of nondeterministic connect strings that can connect to different instances
based on RAC features, such as load balancing. Therefore, to support RAC, the RMAN polling
mechanism no longer depends on deterministic connect strings, and makes it possible to use
RMAN with connect strings that are not bound to a specific instance in the Grid environment.
Previously, if you wanted to use RMAN parallelism and spread a job between many instances,
you had to manually allocate an RMAN channel for each instance. To use dynamic channel
allocation, you do not need separate CONFIGURE CHANNEL CONNECT statements
anymore. You only need to define your degree of parallelism by using a command such as
CONFIGURE DEVICE TYPE disk PARALLELISM, and then run backup or restore
commands. RMAN then automatically connects to different instances and does the job in
parallel. The Grid environment selects the instances that RMAN connects to, based on load
balancing. As a result of this, configuring RMAN parallelism in a RAC environment becomes
as simple as setting it up in a non-RAC environment. By configuring parallelism when backing
up or recovering a RAC database, RMAN channels are dynamically allocated across all RAC
instances.
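For illustration, a minimal sketch of the dynamic approach (the degree of parallelism is an
illustrative value, and the connection is assumed to go through a load-balanced service such
as the bkp_serv service shown earlier):

RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 3;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO sbt;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

Because no per-channel CONNECT strings are configured, the channels are spread across the
instances that the Grid environment selects.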

Note: RMAN has no control over the selection of the instances. If you require a guaranteed
connection to an instance, you should provide a connect string that can connect only to the
required instance.
RMAN Default Autolocation

• Recovery Manager autolocates the following files:
– Backup pieces
– Archived redo logs during backup
– Data file or control file copies
• If local archiving is used, a node can read only those archived logs that were generated
on that node.
• When restoring, a channel connected to a specific node restores only those files that
were backed up to the node.

RMAN Default Autolocation
Recovery Manager automatically discovers which nodes of a RAC configuration can access
the files that you want to back up or restore. Recovery Manager autolocates the following
files:
• Backup pieces during backup or restore
• Archived redo logs during backup
• Data file or control file copies during backup or restore
If you use a noncluster file system local archiving scheme, a node can read only those archived
redo logs that were generated by an instance on that node. RMAN never attempts to back up
archived redo logs on a channel that it cannot read.
During a restore operation, RMAN automatically performs the autolocation of backups. A
channel connected to a specific node attempts to restore only those files that were backed up to
the node. For example, assume that log sequence 1001 is backed up to the drive attached to
node 1, whereas log 1002 is backed up to the drive attached to node 2. If you then allocate
channels that connect to each node, the channel connected to node 1 can restore log 1001 (but
not 1002), and the channel connected to node 2 can restore log 1002 (but not 1001).
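A hedged sketch of such a restore with one channel per node (the connect strings reuse the
usr1/pwd1@n1 and usr2/pwd2@n2 examples from later in this lesson; the sequence range matches
the example above):

RMAN> RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE sbt CONNECT 'usr1/pwd1@n1';
  ALLOCATE CHANNEL c2 DEVICE TYPE sbt CONNECT 'usr2/pwd2@n2';
  RESTORE ARCHIVELOG SEQUENCE BETWEEN 1001 AND 1002 THREAD 1;
}

Autolocation then directs the restore of each log to the channel on the node that holds its
backup.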

Distribution of Backups

Several possible backup configurations for RAC:
• A dedicated backup server performs and manages backups for the cluster and the
cluster database.
• One node has access to a local backup appliance and performs and manages backups
for the cluster database.
• Each node has access to a local backup appliance and can write to its own local
backup media.

Distribution of Backups
When configuring the backup options for RAC, you have several possible configurations:
• Network backup server: A dedicated backup server performs and manages backups for
the cluster and the cluster database. None of the nodes have local backup appliances.
• One local drive: One node has access to a local backup appliance and performs and
manages backups for the cluster database. All nodes of the cluster should be on a cluster
file system to be able to read all data files, archived redo logs, and SPFILEs. It is
recommended that you do not use the noncluster file system archiving scheme if you
have backup media on only one local drive.
• Multiple drives: Each node has access to a local backup appliance and can write to its
own local backup media.
In the cluster file system scheme, any node can access all the data files, archived redo logs,
and SPFILEs. In the noncluster file system scheme, you must write the backup script so that
the backup is distributed to the correct drive and path for each node. For example, node 1 can
back up the archived redo logs whose path names begin with /arc_dest_1, node 2 can back
up the archived redo logs whose path names begin with /arc_dest_2, and node 3 can back
up the archived redo logs whose path names begin with /arc_dest_3.
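A minimal sketch of the node-specific piece of such a script, as run from node 1 (the path
follows the /arc_dest_n convention above):

RMAN> BACKUP ARCHIVELOG LIKE '/arc_dest_1/%';

The scripts on nodes 2 and 3 would reference /arc_dest_2 and /arc_dest_3, respectively.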

One Local Drive CFS Backup Scheme

RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 1;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO sbt;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;

One Local Drive CFS Backup Scheme
In a cluster file system (CFS) backup scheme, each node in the cluster has read access to all
the data files, archived redo logs, and SPFILEs. This includes Automatic Storage
Management (ASM), cluster file systems, and Network Attached Storage (NAS).
When backing up to only one local drive in the cluster file system backup scheme, it is
assumed that only one node in the cluster has a local backup appliance such as a tape drive. In
this case, run the following one-time configuration commands:
CONFIGURE DEVICE TYPE sbt PARALLELISM 1;
CONFIGURE DEFAULT DEVICE TYPE TO sbt;
Because any node performing the backup has read/write access to the archived redo logs
written by the other nodes, the backup script for any node is simple:
BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
In this case, the tape drive receives all data files, archived redo logs, and SPFILEs.

Multiple Drives CFS Backup Scheme

CONFIGURE DEVICE TYPE sbt PARALLELISM 2;
CONFIGURE DEFAULT DEVICE TYPE TO sbt;
CONFIGURE CHANNEL 1 DEVICE TYPE sbt CONNECT 'usr1/pwd1@n1';
CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT 'usr2/pwd2@n2';

BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;


Multiple Drives CFS Backup Scheme
When backing up to multiple drives in the cluster file system backup scheme, it is assumed
that each node in the cluster has its own local tape drive. Perform the following one-time
configuration so that one channel is configured for each node in the cluster. For example, enter
the following at the RMAN prompt:
CONFIGURE DEVICE TYPE sbt PARALLELISM 2;
CONFIGURE DEFAULT DEVICE TYPE TO sbt;
CONFIGURE CHANNEL 1 DEVICE TYPE sbt CONNECT 'user1/passwd1@node1';
CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT 'user2/passwd2@node2';
Similarly, you can perform this configuration for a device type of DISK. The following
backup script, which you can run from any node in the cluster, distributes the data files,
archived redo logs, and SPFILE backups among the backup drives:
BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;

Restoring and Recovering

• Media recovery may require one or more archived log files from each thread.
• The RMAN RECOVER command automatically restores and applies the required
archived logs.
• Archive logs may be restored to any node performing the restore and recover operation.
• Logs must be readable from the node performing the restore and recovery activity.
• Recovery processes request additional threads enabled during the recovery period.
• Recovery processes notify you of threads no longer needed because they were disabled.

Restoring and Recovering
Media recovery of a database that is accessed by RAC may require at least one archived log
file for each thread. However, if a thread’s online redo log contains enough recovery
information, restoring archived log files for any thread is unnecessary.
If you use RMAN for media recovery and you share archive log directories, you can change
the destination of the automatic restoration of archive logs with the SET clause to restore the
files to a local directory of the node where you begin recovery. If you backed up the archive
logs from each node without using a central media management system, you must first restore
all the log files from the remote nodes and move them to the host from which you will start
recovery with RMAN. However, if you backed up each node’s log files using a central media
management system, you can use RMAN’s AUTOLOCATE feature. This enables you to
recover a database using the local tape drive on the remote node.
If recovery reaches a time when an additional thread was enabled, the recovery process
requests the archived log file for that thread. If you are using a backup control file, when all
archive log files are exhausted, you may need to redirect the recovery process to the online
redo log files to complete recovery. If recovery reaches a time when a thread was disabled, the
process informs you that the log file for that thread is no longer needed.
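A hedged sketch of redirecting the automatic restore of archived logs with the SET clause
(the destination path is an illustrative assumption; the setting lasts only for the RUN block):

RMAN> RUN {
  SET ARCHIVELOG DESTINATION TO '/u01/app/oracle/restore_arch';
  RESTORE ARCHIVELOG ALL;
  RECOVER DATABASE;
}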

Quiz

Which of the following statements regarding media recovery in RAC is not true?
1. Media recovery must be user initiated through a client application.
2. RMAN media recovery procedures for RAC are quite different from those for
single-instance environments.
3. The node that performs the recovery must be able to restore all the required data files.
4. The recovering node must be able to either read all required archived redo logs on disk
or restore them from backups.

Answer: 2
Statement 2 is not correct.

Quiz

To use a fast recovery area in RAC, you must place it on an ASM disk group, a cluster file
system, or on a shared directory that is configured through certified NFS for each RAC
instance.
1. True
2. False

Answer: 1
The statement above is true.

Summary

In this lesson, you should have learned how to configure the following:
• The RAC database to use ARCHIVELOG mode and the fast recovery area
• RMAN for the RAC environment
Practice 13 Overview

This practice covers the following topics:
• Configuring the archive log mode
• Configuring specific instance connection strings
• Configuring RMAN and performing parallel backups
RAC Database Monitoring and Tuning

Objectives

After completing this lesson, you should be able to:
• Determine RAC-specific tuning components
• Tune instance recovery in RAC
• Determine RAC-specific wait events, global enqueues, and system statistics
• Implement the most common RAC tuning tips
• Use the Cluster Database Performance pages
• Use the Automatic Workload Repository (AWR) in RAC
• Use the Automatic Database Diagnostic Monitor (ADDM) in RAC
CPU and Wait Time Tuning Dimensions

[Figure: CPU time versus wait time quadrants: "Possibly needs SQL tuning," "Scalable
application," "Needs instance/RAC tuning," and "No gain achieved by adding CPUs/nodes."]

CPU and Wait Time Tuning Dimensions
When tuning your system, it is important that you compare the CPU time with the wait time of
your system. Comparing CPU time with wait time helps to determine how much of the
response time is spent on useful work and how much on waiting for resources potentially held
by other processes.
As a general rule, the systems where CPU time is dominant usually need less tuning than the
ones where wait time is dominant. Alternatively, heavy CPU usage can be caused by badly
written SQL statements.
Although the proportion of CPU time to wait time always tends to decrease as load on the
system increases, steep increases in wait time are a sign of contention and must be addressed
for good scalability.
Adding more CPUs to a node, or nodes to a cluster, would provide very limited benefit under
contention. Conversely, a system where the proportion of CPU time to wait time does not
decrease significantly as load increases can scale better, and would most likely benefit from
adding CPUs or Real Application Clusters (RAC) instances if needed.
Note: Automatic Workload Repository (AWR) reports display CPU time together with wait
time in the Top 5 Timed Events section, if the CPU time portion is among the top five events.
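As a quick cross-check outside AWR, you can compare the two dimensions directly from the
time model and the wait classes (a sketch; V$SYS_TIME_MODEL reports microseconds and
V$SYSTEM_WAIT_CLASS reports hundredths of a second):

SELECT stat_name, value/1e6 AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');

SELECT wait_class, time_waited/100 AS seconds
FROM   v$system_wait_class
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited DESC;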

RAC-Specific Tuning

• Tune for a single instance first.


• Tune for RAC:
– Instance recovery
– Interconnect traffic
– Point of serialization can be exacerbated
• RAC-reactive tuning tools:
– Specific wait events
– System and enqueue statistics
– Enterprise Manager performance pages
– Statspack and AWR reports
• RAC-proactive tuning tools:
– AWR snapshots
– ADDM reports
(Certain combinations of these are characteristic of well-known tuning cases.)

RAC-Specific Tuning
Although there are specific tuning areas for RAC, such as instance recovery and interconnect
traffic, you get most benefits by tuning your system like a single-instance system. At least, this
must be your starting point.
Obviously, if you have serialization issues in a single-instance environment, these may be
exacerbated with RAC.
As shown in the slide, you have basically the same tuning tools with RAC as with a single-
instance system. However, certain combinations of specific wait events and statistics are well-
known RAC tuning cases.
In this lesson, you see some of those specific combinations, as well as the RAC-specific
information that you can get from the Enterprise Manager performance pages, and Statspack
and AWR reports. Finally, you see the RAC-specific information that you can get from the
Automatic Database Diagnostic Monitor (ADDM).

RAC and Instance or Crash Recovery

[Figure: Instance recovery steps along the recovery time line: (1) remaster enqueue
resources; (2) LMS remasters cache resources and recovers the GRD; (3) SMON builds the
recovery set by merging the failed redo threads; (4) resource claim; (5) roll forward the
recovery set, using information from the other caches.]

RAC and Instance or Crash Recovery
When an instance fails and the failure is detected by another instance, the second instance
performs the following recovery steps:
1. During the first phase of recovery, Global Enqueue Services remasters the enqueues.
2. The Global Cache Services (GCS) remasters its resources. The GCS processes remaster
only those resources that lose their masters. During this time, all GCS resource requests
and write requests are temporarily suspended. However, transactions can continue to
modify data blocks as long as these transactions have already acquired the necessary
resources.
3. After enqueues are reconfigured, one of the surviving instances can grab the Instance
Recovery enqueue. Therefore, at the same time as GCS resources are remastered, SMON
determines the set of blocks that need recovery. This set is called the recovery set.
Because, with Cache Fusion, an instance ships the contents of its blocks to the requesting
instance without writing the blocks to the disk, the on-disk version of the blocks may not
contain the changes that are made by either instance. This implies that SMON needs to
merge the content of all the online redo logs of each failed instance to determine the
recovery set. This is because one failed thread might contain a hole in the redo that needs
to be applied to a particular block. So, redo threads of failed instances cannot be applied
serially. Also, redo threads of surviving instances are not needed for recovery because
SMON could use past or current images of their corresponding buffer caches.
RAC and Instance or Crash Recovery (continued)
4. Buffer space for recovery is allocated and the resources that were identified in the
previous reading of the redo logs are claimed as recovery resources. This is done to avoid
other instances to access those resources.
5. All resources required for subsequent processing have been acquired and the Global
Resource Directory (GRD) is now unfrozen. Any data blocks that are not in recovery can
now be accessed. Note that the system is already partially available.
Then, assuming that there are past images or current images of blocks to be recovered in
other caches in the cluster database, the most recent image is the starting point of
recovery for these particular blocks. If neither the past image buffers nor the current
buffer for a data block is in any of the surviving instances’ caches, then SMON performs
a log merge of the failed instances. SMON recovers and writes each block identified in
step 3, releasing the recovery resources immediately after block recovery so that more
blocks become available as recovery proceeds. Refer to the section titled “Global Cache
Coordination: Example” in this lesson for more information about past images.
After all the blocks have been recovered and the recovery resources have been released, the
system is again fully available.
In summary, the recovered database or the recovered portions of the database become
available earlier, and before the completion of the entire recovery sequence. This makes the
system available sooner and it makes recovery more scalable.
Note: The performance overhead of a log merge is proportional to the number of failed
instances and to the amount of redo written in the redo logs for each instance.
Instance Recovery and Database Availability

[Figure: Database availability during instance recovery, plotted against elapsed time.
Availability starts Full (A), drops to None when the node fails (B, C), returns to Partial as
recovery proceeds (D, E, F), and is Full again when recovery completes (G, H).]

Instance Recovery and Database Availability
The graphic illustrates the degree of database availability during each step of Oracle instance
recovery:
A. Real Application Clusters is running on multiple nodes.
B. Node failure is detected.
C. The enqueue part of the GRD is reconfigured; resource management is redistributed to
the surviving nodes. This operation occurs relatively quickly.
D. The cache part of the GRD is reconfigured and SMON reads the redo log of the failed
instance to identify the database blocks that it needs to recover.
E. SMON issues the GRD requests to obtain all the database blocks it needs for recovery.
After the requests are complete, all other blocks are accessible.
F. The Oracle server performs roll forward recovery. Redo logs of the failed threads are
applied to the database, and blocks are available right after their recovery is completed.
G. The Oracle server performs rollback recovery. Undo blocks are applied to the database
for all uncommitted transactions.
H. Instance recovery is complete and all data is accessible.
Note: The dashed line represents the blocks identified in step 2 in the previous slide. Also, the
dotted steps represent the ones identified in the previous slide.
Instance Recovery and RAC

[Figure: Two recovery timelines. Single instance: FAST_START_MTTR_TARGET bounds
instance startup plus crash recovery, from the instance crash until the instance opens. RAC:
FAST_START_MTTR_TARGET bounds instance recovery (first pass and lock claim, then
rolling forward) performed by a surviving instance, and
V$INSTANCE_RECOVERY.ESTD_CLUSTER_AVAILABLE_TIME estimates when the cluster
is available again.]

Instance Recovery and RAC
In a single-instance environment, the instance startup combined with the crash recovery time is
controlled by the setting of the FAST_START_MTTR_TARGET initialization parameter. You can
set its value if you want incremental checkpointing to be more aggressive than the autotuned
checkpointing. However, this is at the expense of a much higher I/O overhead.
In a RAC environment, including the startup time of the instance in this calculation is useless
because one of the surviving instances is doing the recovery.
In a RAC environment, it is possible to monitor the estimated target, in seconds, for the
duration from the start of instance recovery to the time when GCD is open for lock requests for
blocks not needed for recovery. This estimation is published in the V$INSTANCE_RECOVERY
view through the ESTD_CLUSTER_AVAILABLE_TIME column. Basically, you can monitor the
time your cluster is frozen during instance recovery situations.
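A quick way to watch that estimate (a sketch; the column reports seconds):

SELECT estd_cluster_available_time
FROM   v$instance_recovery;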
In a RAC environment, the FAST_START_MTTR_TARGET initialization parameter is used to
bound the entire instance recovery time, assuming it is instance recovery for single instance
death.
Note: If you really want to have small instance recovery time by setting
FAST_START_MTTR_TARGET, you can safely ignore the alert log messages indicating to raise its
value.
Instance Recovery and RAC

• Use parallel instance recovery.


• Set PARALLEL_MIN_SERVERS.
• Use asynchronous input/output (I/O).
• Increase the size of the default buffer cache.

Instance Recovery and RAC (continued)
Here are some guidelines you can use to make sure that instance recovery in your RAC
environment is faster:
• Use parallel instance recovery by setting RECOVERY_PARALLELISM.
• Set PARALLEL_MIN_SERVERS to CPU_COUNT-1. This will prespawn recovery
slaves at startup time.
• Using asynchronous I/O is one of the most crucial factors in recovery time. The first-pass
log read uses asynchronous I/O.
• Instance recovery uses 50 percent of the default buffer cache for recovery buffers. If this
is not enough, some of the steps of instance recovery will be done in several passes. You
should be able to identify such situations by looking at your alert.log file. In that
case, you should increase the size of your default buffer cache.
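A hedged sketch of the first two guidelines (the values are illustrative assumptions for an
8-CPU node; RECOVERY_PARALLELISM is a static parameter, so it takes effect after a restart):

ALTER SYSTEM SET recovery_parallelism=4 SCOPE=SPFILE SID='*';
ALTER SYSTEM SET parallel_min_servers=7 SCOPE=SPFILE SID='*';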

Analyzing Cache Fusion Impact in RAC

• The cost of block access and cache coherency is represented by:
– Global Cache Services statistics
– Global Cache Services wait events
• The response time for cache fusion transfers is determined by:
– Overhead of the physical interconnect components
– IPC protocol
– GCS protocol
• The response time is not generally affected by disk I/O factors.

Analyzing Cache Fusion Impact in RAC
The effect of accessing blocks in the global cache and maintaining cache coherency is
represented by:
• The Global Cache Services statistics for current and cr blocks—for example, gc current
blocks received, gc cr blocks received, and so on
• The Global Cache Services wait events for gc current block 3-way, gc cr grant 2-way,
and so on
The response time for cache fusion transfers is determined by the messaging time and
processing time imposed by the physical interconnect components, the IPC protocol, and the
GCS protocol. It is not affected by disk input/output (I/O) factors other than occasional log
writes. The cache fusion protocol does not require I/O to data files in order to guarantee cache
coherency, and RAC inherently does not cause any more I/O to disk than a nonclustered
instance.

Typical Latencies for RAC Operations
AWR Report Latency Name                           Lower Bound   Typical   Upper Bound
Average time to process cr block request              0.1           1         10
Avg global cache cr block receive time (ms)           0.3           4         12
Average time to process current block request         0.1           3         23
Avg global cache current block receive time (ms)      0.3           8         30

Typical Latencies for RAC Operations
In a RAC AWR report, there is a table in the RAC Statistics section containing average times
(latencies) for some Global Cache Services and Global Enqueue Services operations. This
table is shown in the slide and is called “Global Cache and Enqueue Services: Workload
Characteristics.” Those latencies should be monitored over time, and significant increases in
their values should be investigated. The table presents some typical values, based on empirical
observations. Factors that may cause variations to those latencies include:
• Utilization of the IPC protocol. User-mode IPC protocols are faster, but only Tru64’s
RDG is recommended for use.
• Scheduling delays, when the system is under high CPU utilization
• Log flushes for current blocks served
Other RAC latencies in AWR reports are mostly derived from V$GES_STATISTICS and
may be useful for debugging purposes, but do not require frequent monitoring.
Note: The time to process consistent read (CR) block request in the cache corresponds to
(build time + flush time + send time), and the time to process current block
request in the cache corresponds to (pin time + flush time + send time).
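Outside AWR, these averages can also be derived directly from V$SYSSTAT (a sketch; the
receive-time statistics are recorded in hundredths of a second, hence the factor of 10 to
express milliseconds):

SELECT b1.value/b2.value*10 AS avg_gc_cr_block_receive_ms
FROM   v$sysstat b1, v$sysstat b2
WHERE  b1.name = 'gc cr block receive time'
AND    b2.name = 'gc cr blocks received';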

Wait Events for RAC

• Wait events help to analyze what sessions are waiting for.


• Wait times are attributed to events that reflect the outcome
of a request:
– Placeholders while waiting
– Precise events after waiting
• Global cache waits are summarized in a broader category called Cluster Wait Class.
• These wait events are used in ADDM to enable Cache Fusion diagnostics.

Wait Events for RAC
Analyzing what sessions are waiting for is an important method to determine where time is
spent. In RAC, the wait time is attributed to an event that reflects the exact outcome of a
request. For example, when a session on an instance is looking for a block in the global cache,
it does not know whether it will receive the data cached by another instance or whether it will
receive a message to read from disk. The wait events for the global cache convey precise
information and wait for global cache blocks or messages. They are mainly categorized by the
following:
• Summarized in a broader category called Cluster Wait Class
• Temporarily represented by a placeholder event that is active while waiting for a block
• Attributed to precise events when the outcome of the request is known
The wait events for RAC convey information valuable for performance analysis. They are
used in ADDM to enable precise diagnostics of the impact of cache fusion.

Wait Event Views

Total waits for an event                              V$SYSTEM_EVENT
Waits for a wait event class by a session             V$SESSION_WAIT_CLASS
Waits for an event by a session                       V$SESSION_EVENT
Activity of recent active sessions                    V$ACTIVE_SESSION_HISTORY
Last 10 wait events for each active session           V$SESSION_WAIT_HISTORY
Events for which active sessions are waiting          V$SESSION_WAIT
Identify SQL statements impacted by
interconnect latencies                                V$SQLSTATS

Wait Event Views
When it takes some time to acquire resources because of the total path length and latency for
requests, processes sleep to avoid spinning for indeterminate periods of time. When the
process decides to wait, it wakes up either after a specified timer value expires (timeout) or
when the event it is waiting for occurs and the process is posted. The wait events are recorded
and aggregated in the views shown in the slide. The first three are aggregations of wait times,
timeouts, and the number of times waited for a particular event, whereas the rest enable the
monitoring of waiting sessions in real time, including a history of recent events waited for.
The individual events distinguish themselves by their names and the parameters that they
assume. For most of the global cache wait events, the parameters include file number, block
number, the block class, and access mode dispositions, such as mode held and requested. The
wait times for events presented and aggregated in these views are very useful when debugging
response time performance issues. Note that the time waited is cumulative, and that the event
with the highest score is not necessarily a problem. However, if the available CPU power
cannot be maximized, or response times for an application are too high, the top wait events
provide valuable performance diagnostics.
Note: Use the CLUSTER_WAIT_TIME column in V$SQLSTATS to identify SQL statements
impacted by interconnect latencies, or run an ADDM report on the corresponding AWR
snapshot.
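For example, a hedged sketch of such a query (top 10 statements by cluster wait time):

SELECT sql_id, cluster_wait_time
FROM   (SELECT sql_id, cluster_wait_time
        FROM   v$sqlstats
        ORDER  BY cluster_wait_time DESC)
WHERE  rownum <= 10;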
Global Cache Wait Events: Overview

Just requested (placeholder): gc [current/cr] [multiblock] request

gc [current/cr] [2/3]-way: received after two or three network hops, immediately after
the request
gc [current/cr] block busy: received but not sent immediately
gc [current/cr] grant 2-way: not received and not mastered locally; grant received
immediately
gc current grant busy: not received and not mastered locally; grant received with delay
gc [current/cr] [block/grant] congested: block or grant received with delay because of
CPU or memory lack
gc [current/cr] [failure/retry]: not received because of failure
gc buffer busy: block arrival time less than buffer pin time

Global Cache Wait Events: Overview
The main global cache wait events are described briefly in the slide:
• gc current/cr request: These wait events are relevant only while a gc request
for a cr block or current buffer is in progress. They act as placeholders until the request
completes.
• gc [current/cr] [2/3]-way: A current or cr block is requested and received
after two or three network hops. The request is processed immediately; the block is not
busy or congested.
• gc [current/cr] block busy: A current or cr block is requested and received,
but is not sent immediately by LMS because some special condition that delayed the
sending was found.
• gc [current/cr] grant 2-way: A current or cr block is requested and a grant
message received. The grant is given without any significant delays. If the block is not in
its local cache, a current or cr grant is followed by a disk read on the requesting instance.
• gc current grant busy: A current block is requested and a grant message
received. The busy hint implies that the request is blocked because others are ahead of it
or it cannot be handled immediately.
Note: For dynamic remastering, two events are of most importance: gc remaster and gc
quiesce. They can be symptoms of the impact of remastering on the running processes.
Global Cache Wait Events: Overview (continued)
• gc [current/cr] [block/grant] congested: A current or cr block is
requested and a block or grant message received. The congested hint implies that the
request spent more than 1 ms in internal queues.
• gc [current/cr] [failure/retry]: A block is requested and a failure status
received or some other exceptional event has occurred.
• gc buffer busy: If the time between buffer accesses becomes less than the time the
buffer is pinned in memory, the buffer containing a block is said to become busy and as a
result interested users may have to wait for it to be unpinned.
Note: For more information, refer to Oracle Database Reference.

2-way Block Request: Example

[Figure: 2-way block request. (1) The foreground process (FGP) on Node1 sends a direct
request to SGA2 and waits on gc current block request. (2) On Node2, LGWR may perform a
log sync before LMS can ship the block. (3) The block transfer completes and the wait ends
as gc current block 2-way.]

2-way Block Request: Example
This slide shows you what typically happens when the master instance requests a block that is
not cached locally. Here, suppose that the master instance is called SGA1, and SGA2
contains the requested block. The scenario is as follows:
1. SGA1 sends a direct request to SGA2. So SGA1 waits on the gc current block
request event.
2. When SGA2 receives the request, its local LGWR process may need to flush some
recovery information to its local redo log files. For example, if the cached block is
frequently changed, and the changes have not been logged yet, LMS would have to ask
LGWR to flush the log before it can ship the block. This may add a delay to the serving
of the block and may show up in the requesting node as a busy wait.
3. Then, SGA2 sends the requested block to SGA1. When the block arrives in SGA1, the
wait event is complete, and is reflected as gc current block 2-way.
Note: Using the notation R = time at requestor, W = wire time and transfer delay, and S = time
at server, the total time for a round-trip would be:
R(send) + W(small msg) + S(process msg,process block,send) + W(block) + R(receive block)

3-way Block Request: Example

[Figure: 3-way block request. (1) The foreground process on Node1 sends a direct message to
the resource master in SGA2 and waits on gc current block request. (2) The master forwards
the request to Node2, where the block is cached. (3) LMS on Node2 transfers the block and
the wait ends as gc current block 3-way.]

3-way Block Request: Example
This is a modified scenario for a cluster with more than two nodes. It is very similar to the
previous one. However, the master for this block is on a node that is different from that of the
requestor, and where the block is cached. Thus, the request must be forwarded. There is an
additional delay for one message and the processing at the master node:
R(send) + W(small msg) + S(process msg,send) + W(small msg) + S(process msg,process
block,send) + W(block) + R(receive block)
While a remote read is pending, any process on the requesting instance that is trying to write
or read the data cached in the buffer has to wait for a gc buffer busy. The buffer remains
globally busy until the block arrives.

2-way Grant: Example

[Figure: 2-way grant. (1) The foreground process on Node1 sends a direct message to the
resource master and waits on gc current block request. (2) The master returns a grant
message. (3) The wait ends as gc current grant 2-way, and the requestor reads the block
from disk.]

2-way Grant: Example
In this scenario, a grant message is sent by the master because the requested block is not
cached in any instance.
If the local instance is the resource master, the grant happens immediately. If not, the grant is
always 2-way, regardless of the number of instances in the cluster.
The grant messages are small. For every block read from the disk, a grant has to be received
before the I/O is initiated, which adds the latency of the grant round-trip to the disk latency:
R(send) + W(small msg) + S(process msg,send) + W(small msg) + R(receive block)
The round-trip looks similar to a 2-way block round-trip, with the difference that the wire time
is determined by a small message, and the processing does not involve the buffer cache.

Global Enqueue Waits: Overview

• Enqueues are synchronous.


• Enqueues are global resources in RAC.
• The most frequent waits are for:

TX   TM
US   HW
TA   SQ

• The waits may constitute serious serialization points.
Global Enqueue Waits: Overview
An enqueue wait is not RAC specific, but involves a global lock operation when RAC is
enabled. Most of the global requests for enqueues are synchronous, and foreground processes
wait for them. Therefore, contention on enqueues in RAC is more visible than in single-
instance environments. Most waits for enqueues occur for enqueues of the following types:
• TX: Transaction enqueue; used for transaction demarcation and tracking
• TM: Table or partition enqueue; used to protect table definitions during DML operations
• HW: High-water mark enqueue; acquired to synchronize a new block operation
• SQ: Sequence enqueue; used to serialize incrementing of an Oracle sequence number
• US: Undo segment enqueue; mainly used by the Automatic Undo Management (AUM)
feature
• TA: Enqueue used mainly for transaction recovery as part of instance recovery
In all of the cases above, the waits are synchronous and may constitute serious serialization
points that can be exacerbated in a RAC environment.
Note: The enqueue wait events specify the resource name and a reason for the wait—for
example, “TX Enqueue index block split.” This makes diagnostics of enqueue waits easier.

Session and System Statistics

• Use V$SYSSTAT to characterize the workload.


• Use V$SESSTAT to monitor important sessions.
• V$SEGMENT_STATISTICS includes RAC statistics.
• RAC-relevant statistic groups are:
– Global Cache Service statistics
– Global Enqueue Service statistics
– Statistics for messages sent
• V$ENQUEUE_STATISTICS determines the enqueue with the highest impact.
• V$INSTANCE_CACHE_TRANSFER breaks down GCS statistics into block classes.

Session and System Statistics
Using system statistics based on V$SYSSTAT enables characterization of the database activity
based on averages. It is the basis for many metrics and ratios used in various tools and
methods, such as AWR, Statspack, and Database Control.
In order to drill down to individual sessions or groups of sessions, V$SESSTAT is useful when
the important session identifiers to monitor are known. Its usefulness is enhanced if an
application fills in the MODULE and ACTION columns in V$SESSION.
V$SEGMENT_STATISTICS is useful for RAC because it also tracks the number of CR and
current blocks received by the object.
The RAC-relevant statistics can be grouped into:
• Global Cache Service statistics: gc cr blocks received, gc cr block receive time, and so
on
• Global Enqueue Service statistics: global enqueue gets, and so on
• Statistics for messages sent: gcs messages sent and ges messages sent
V$ENQUEUE_STATISTICS can be queried to determine which enqueue has the highest
impact on database service times and eventually, response times.
V$INSTANCE_CACHE_TRANSFER indicates how many current and CR blocks per block
class are received from each instance, including how many transfers incurred a delay.
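For example, a hedged sketch that pulls some of the RAC-relevant groups out of V$SYSSTAT
(exact statistic names vary slightly between releases):

SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'gc%block%'
OR     name IN ('gcs messages sent', 'ges messages sent')
ORDER  BY name;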
Note: For more information about statistics, refer to Oracle Database Reference.
Most Common RAC Tuning Tips

Application tuning is often the most beneficial!


• Resize and tune the buffer cache.
• Reduce long full-table scans in OLTP systems.
• Use Automatic Segment Space Management (ASSM).
• Increase sequence caches.
• Use partitioning to reduce interinstance traffic.
• Avoid unnecessary parsing.
• Minimize locking usage.
• Remove unselective indexes.
• Configure interconnect properly.

Most Common RAC Tuning Tips
In any database system, RAC or single instance, the most significant performance gains are
usually obtained from traditional application-tuning techniques. The benefits of those
techniques are even more remarkable in a RAC database. In addition to traditional application
tuning, some of the techniques that are particularly important for RAC include the following:
• Try to avoid long full-table scans to minimize GCS requests. The overhead caused by the
global CR requests in this scenario is because when queries result in local cache misses,
an attempt is first made to find the data in another cache, based on the assumption that
the chance is high that another instance has cached the block.
• Automatic Segment Space Management can provide instance affinity to table blocks.
• Increasing sequence caches improves instance affinity to index keys deriving their values
from sequences. That technique may result in significant performance gains for multi-
instance insert-intensive applications.
• Range or list partitioning may be very effective in conjunction with data-dependent
routing, if the workload can be directed to modify a particular range of values from a
particular instance.
• Hash partitioning may help to reduce buffer busy contention by making buffer access
distribution patterns sparser, enabling more buffers to be available for concurrent access.

Most Common RAC Tuning Tips (continued)
• In RAC, library cache and row cache operations are globally coordinated. So, excessive
parsing means additional interconnect traffic. Library cache locks are heavily used, in
particular by applications using PL/SQL or Advanced Queuing. Library cache locks are
acquired in exclusive mode whenever a package or procedure has to be recompiled.
• Because transaction locks are globally coordinated, they also deserve special attention in
RAC. For example, using tables instead of Oracle sequences to generate unique numbers
is not recommended because it may cause severe contention even for a single instance
system.
• Indexes that are not selective do not improve query performance, but can degrade DML
performance. In RAC, unselective index blocks may be subject to inter-instance
contention, increasing the frequency of cache transfers for indexes belonging to insert-
intensive tables.
• Always verify that you use a private network for your interconnect, and that your private
network is configured properly. Ensure that the network link is operating in full duplex
mode. Ensure that your network interface and Ethernet switches support an MTU size of
9 KB. Note that a single gigabit Ethernet interface can scale up to ten thousand 8-KB
blocks per second before saturation.
Index Block Contention: Considerations

Wait events:
enq: TX - index contention
gc buffer busy
gc current block busy
gc current split

System statistics:
Leaf node splits
Branch node splits
Exchange deadlocks
gcs refuse xid
gcs ast xid
Service ITL waits

[Figure: An index block split in progress on an index accessed concurrently from instances
RAC01 and RAC02.]

Index Block Contention: Considerations
In application systems where the loading or batch processing of data is a dominant business
function, there may be performance issues affecting response times because of the high
volume of data inserted into indexes. Depending on the access frequency and the number of
processes concurrently inserting data, indexes can become hot spots and contention can be
exacerbated by:
• Ordered, monotonically increasing key values in the index (right-growing trees)
• Frequent leaf block splits
• Low tree depth: All leaf block access go through the root block.
A leaf or branch block split can become an important serialization point if the particular leaf
block or branch of the tree is concurrently accessed. The tables in the slide sum up the most
common symptoms associated with the splitting of index blocks, listing wait events and
statistics that are commonly elevated when index block splits are prevalent. As a general
recommendation, to alleviate the performance impact of globally hot index blocks and leaf
block splits, a more uniform, less skewed distribution of the concurrency in the index tree
should be the primary objective. This can be achieved by:
• Global index hash partitioning
• Increasing the sequence cache, if the key value is derived from a sequence
• Using natural keys as opposed to surrogate keys
• Using reverse key indexes
Oracle Sequences and Index Contention

[Figure: A sequence created with CACHE 50000 NOORDER. Instance RAC01 inserts key
values 1…50000 while RAC02 inserts 50001…100000, so after some block splits each
instance writes to its own index leaf blocks (each leaf block can contain 500 rows).]

Oracle Sequences and Index Contention
Indexes with key values generated by sequences tend to be subject to leaf block contention
when the insert rate is high. That is because the index leaf block holding the highest key value
is changed for every row inserted, as the values are monotonically ascending. In RAC, this
may lead to a high rate of current and CR blocks transferred between nodes.
One of the simplest techniques that can be used to limit this overhead is to increase the
sequence cache, if you are using Oracle sequences. Because the difference between sequence
values generated by different instances increases, successive index block splits tend to create
instance affinity to index leaf blocks. For example, suppose that an index key value is
generated by a CACHE NOORDER sequence and each index leaf block can hold 500 rows. If
the sequence cache is set to 50000, while instance 1 inserts values 1, 2, 3, and so on, instance 2
concurrently inserts 50001, 50002, and so on. After some block splits, each instance writes to a
different part of the index tree.
So, what is the ideal value for a sequence cache to avoid inter-instance leaf index block
contention, yet minimizing possible gaps? One of the main variables to consider is the insert
rate: the higher it is, the higher must be the sequence cache. However, creating a simulation to
evaluate the gains for a specific configuration is recommended.
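A hedged sketch of such a change (the sequence name and cache value are illustrative
assumptions):

ALTER SEQUENCE sales.order_seq CACHE 50000;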
Note: By default, the cache value is 20. Typically, 20 is too small for the preceding example.
Undo Block Considerations

[Figure: One instance (SGA1) changes index blocks while the other (SGA2) reads them;
building consistent-read versions requires undo from both instances, generating additional
interconnect traffic.]

Undo Block Considerations
Excessive undo block shipment and contention for undo buffers usually happens when index
blocks containing active transactions from multiple instances are read frequently.
When a SELECT statement needs to read a block with active transactions, it has to undo the
changes to create a CR version. If the active transactions in the block belong to more than one
instance, there is a need to combine local and remote undo information for the consistent read.
Depending on the amount of index blocks changed by multiple instances and the duration of
the transactions, undo block shipment may become a bottleneck.
Usually this happens in applications that read recently inserted data very frequently, but
commit infrequently. Techniques that can be used to reduce such situations include the
following:
• Shorter transactions reduce the likelihood that any given index block in the cache
contains uncommitted data, thereby reducing the need to access undo information for
consistent read.
• As explained earlier, increasing sequence cache sizes can reduce inter-instance
concurrent access to index leaf blocks. CR versions of index blocks modified by only one
instance can be fabricated without the need of remote undo information.
Note: In RAC, the problem is exacerbated by the fact that a subset of the undo information has
to be obtained from remote instances.
High-Water Mark Considerations

Wait events:
enq: HW - contention
gc current grant

[Figure: Heavy inserts from instances RAC01 and RAC02 push the high-water mark (HWM)
forward, forcing the allocation of a new extent.]

High-Water Mark Considerations
A certain combination of wait events and statistics presents itself in applications where the
insertion of data is a dominant business function and new blocks have to be allocated
frequently to a segment. If data is inserted at a high rate, new blocks may have to be made
available after unfruitful searches for free space. This has to happen while holding the high-
water mark (HWM) enqueue.
Therefore, the most common symptoms for this scenario include:
• A high percentage of wait time for enq: HW – contention
• A high percentage of wait time for gc current grant events
The former is a consequence of the serialization on the HWM enqueue, and the latter occurs
because current access to the new data blocks that need formatting is required
for the new block operation. In a RAC environment, the length of this space management
operation is proportional to the time it takes to acquire the HWM enqueue and the time it takes
to acquire global locks for all the new blocks that need formatting. This time is small under
normal circumstances because there is never any access conflict for the new blocks. Therefore,
this scenario may be observed in applications with business functions requiring a lot of data
loading, and the main recommendation to alleviate the symptoms is to define uniform and
large extent sizes for the locally managed and automatic space managed segments that are
subject to high-volume inserts.
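As a hedged sketch of that recommendation (the tablespace name, file specification, and sizes
are illustrative assumptions), a locally managed tablespace with large uniform extents and
automatic segment space management could look like this:

CREATE TABLESPACE load_data
  DATAFILE '+DATA' SIZE 10G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M   -- uniform, large extents
  SEGMENT SPACE MANAGEMENT AUTO;              -- automatic segment space management

Segments subject to high-volume inserts would then be created in, or moved to, such a
tablespace.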
Concurrent Cross-Instance Calls: Considerations

[Slide: TRUNCATE commands for Table1 and Table2 issued from different instances; each
truncate makes a cross-instance call so that CKPT flushes the table's dirty blocks from SGA1
and SGA2, and the second truncate waits for the first to complete]

Concurrent Cross-Instance Calls: Considerations
In data warehouse and data mart environments, it is not uncommon to see a lot of TRUNCATE
operations. These essentially happen on tables containing temporary data.
In a RAC environment, truncating tables concurrently from different instances does not scale
well, especially if, in conjunction, you are also using direct read operations such as parallel
queries.
As shown in the slide, a truncate operation requires a cross-instance call to flush dirty blocks
of the table that may be spread across instances. This constitutes a point of serialization. So,
while the first TRUNCATE command is processing, the second has to wait until the first one
completes.
There are different types of cross-instance calls. However, all use the same serialization
mechanism.
For example, the cache flush for a partitioned table with many partitions may add latency to a
corresponding parallel query. This is because each cross-instance call is serialized at the
cluster level, and one cross-instance call is needed for each partition at the start of the parallel
query for direct read purposes.

Monitoring RAC Database and Cluster Performance

Directly from Database Control and Grid Control:
• View the status of each node in the cluster.
• View the aggregated alert messages across all the instances.
• Review the issues that are affecting the entire cluster or each instance.
• Monitor the cluster cache coherency statistics.
• Determine whether any of the services for the cluster database are having availability
problems.
• Review any outstanding Clusterware interconnect alerts.

Monitoring RAC Database and Cluster Performance
Both Oracle Enterprise Manager Database Control and Grid Control are cluster-aware and
provide a central console to manage your cluster database. From the Cluster Database Home
page, you can do all of the following:
• View the overall system status, such as the number of nodes in the cluster and their
current status, so you do not have to access each individual database instance for details
• View the alert messages aggregated across all the instances with lists for the source of
each alert message.
• Review the issues that are affecting the entire cluster as well as those that are affecting
individual instances.
• Monitor cluster cache coherency statistics to help you identify processing trends and
optimize performance for your Oracle RAC environment. Cache coherency statistics
measure how well the data in caches on multiple instances is synchronized.
• Determine whether any of the services for the cluster database are having availability
problems. A service is deemed to be a problem service if it is not running on all preferred
instances, if its response time thresholds are not met, and so on.
• Review any outstanding Clusterware interconnect alerts.

Cluster Database Performance Page

[Slide: screenshot of the Cluster Database Performance page]

Cluster Database Performance Page
The Cluster Database Performance page provides a quick glimpse of the performance statistics
for a database. Enterprise Manager accumulates data from each instance over specified periods
of time, called collection-based data. Enterprise Manager also provides current data from each
instance, known as real-time data.
Statistics are rolled up across all the instances in the cluster database. Using the links next to
the charts, you can get more specific information and perform any of the following tasks:
• Identify the causes of performance issues.
• Decide whether resources need to be added or redistributed.
• Tune your SQL plan and schema for better optimization.
• Resolve performance issues.
The screenshot in the slide shows a partial view of the Cluster Database Performance page.
You access this page by clicking the Performance tab from the Cluster Database Home page.

Determining Cluster Host Load Average

[Slide: screenshot of the Cluster Host Load Average chart]

Determining Cluster Host Load Average
The Cluster Host Load Average chart in the Cluster Database Performance page shows
potential problems that are outside the database. The chart shows maximum, average, and
minimum load values for available nodes in the cluster for the previous hour.
If the load average is higher than the average of the total number of CPUs across all the hosts
in the cluster, then too many processes are waiting for CPU resources. SQL statements that are
not tuned often cause high CPU usage. Compare the load average values with the values
displayed for CPU Used in the Average Active Sessions chart. If the sessions value is low and
the load average value is high, this indicates that something else on the host, other than your
database, is consuming the CPU.
You can click any of the load value labels for the Cluster Host Load Average chart to view
more detailed information about that load value. For example, if you click the Average label,
the Hosts: Average Load page appears, displaying charts that depict the average host load for
up to four nodes in the cluster.
You can select whether the data is displayed in a summary chart, combining the data for each
node in one display, or using tile charts, where the data for each node is displayed in its own
chart. You can click Customize to change the number of tile charts displayed in each row or
the method of ordering the tile charts.
Determining Global Cache Block Access Latency

[Slide: screenshot of the Global Cache Block Access Latency chart]

Determining Global Cache Block Access Latency
The Global Cache Block Access Latency chart shows the latency for each type of data block
request: current and consistent-read (CR) blocks. That is, the elapsed time it takes to locate
and transfer consistent-read and current blocks between the buffer caches.
You can click either metric for the Global Cache Block Access Latency chart to view more
detailed information about that type of cached block.
If the Global Cache Block Access Latency chart shows high latencies (high elapsed times),
this can be caused by any of the following:
• A high number of requests caused by SQL statements that are not tuned
• A large number of processes in the queue waiting for the CPU, or scheduling delays
• Slow, busy, or faulty interconnects. In these cases, check your network connection for
dropped packets, retransmittals, or cyclic redundancy check (CRC) errors.
Concurrent read and write activity on shared data is frequent in a cluster.
Depending on the service requirements, this activity does not usually cause performance
problems. However, when global cache requests cause a performance problem, optimizing
SQL plans and the schema to improve the rate at which data blocks are located in the local
buffer cache, and minimizing I/O is a successful strategy for performance tuning. If the latency
for consistent-read and current block requests reaches 10 milliseconds, then see the Cluster
Cache Coherency page for more detailed information.
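Outside Enterprise Manager, a comparable check of the average consistent-read receive time
can be computed from GV$SYSSTAT (a sketch; the gc time statistics are recorded in
centiseconds, hence the factor of 10 to convert to milliseconds):

SELECT t.inst_id,
       ROUND(t.value / NULLIF(r.value, 0) * 10, 2) AS avg_cr_receive_ms
FROM   gv$sysstat t, gv$sysstat r
WHERE  t.name = 'gc cr block receive time'
  AND  r.name = 'gc cr blocks received'
  AND  t.inst_id = r.inst_id;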
Determining Average Active Sessions

[Slide: screenshot of the Average Active Sessions chart]

Determining Average Active Sessions
The Average Active Sessions chart on the Cluster Database Performance page shows potential
problems inside the database. Categories, called wait classes, show how much of the database
is using a resource, such as CPU or disk I/O. Comparing CPU time with wait time helps to
determine how much of the response time is consumed with useful work rather than waiting
for resources that are potentially held by other processes.
At the cluster database level, this chart shows the aggregate wait class statistics across all the
instances. For a more detailed analysis, you can click the Clipboard icon at the bottom of the
chart to view the ADDM analysis for the database for that time period.
If you click the wait class legends beside the Average Active Sessions chart, you can view
instance-level information stored in “Active Sessions by Instance” pages. You can use the
Wait Class action list on the “Active Sessions by Instance” page to view the different wait
classes. The “Active Sessions by Instance” pages show the service times for up to four
instances. Using the Customize button, you can select the instances that are displayed. You can
view the data for the instances separately by using tile charts, or you can combine the data into
a single summary chart.

Determining Database Throughput

[Slide: screenshot of the Database Throughput charts]

Determining Database Throughput
The last chart on the Performance page monitors the usage of various database resources.
Click the Throughput tab at the top of this chart to view the Database Throughput chart.
Compare the peaks on the Average Active Sessions chart with those on the Database
Throughput charts. If internal contention is high and throughput is low, consider tuning the
database.
The Database Throughput charts summarize any resource contention that appears in the
Average Active Sessions chart, and also show how much work the database is performing on
behalf of the users or applications. The Per Second view shows the number of transactions
compared to the number of logons, and (not shown here) the number of physical reads
compared to the redo size per second. The Per Transaction view shows the number of physical
reads compared to the redo size per transaction. Logons is the number of users that are logged
on to the database.
To obtain information at the instance level, access the “Database Throughput by Instance”
page by clicking one of the legends to the right of the charts. This page shows the breakdown
of the aggregated Database Throughput chart for up to four instances. You can select the
instances that are displayed. You can drill down further on the “Database Throughput by
Instance” page to see the sessions of an instance consuming the greatest resources. Click an
instance name legend under the chart to go to the Top Sessions sub-page of the Top
Consumers page for that instance.
Determining Database Throughput

[Slide: screenshot of the Active Sessions by Instance chart]

Determining Database Throughput (continued)
The last chart on the Performance page monitors the usage of various database resources. By
clicking the Instances tab at the top of this chart, you can view the "Active Sessions by
Instance” chart.
The “Active Sessions by Instance” chart summarizes any resource contention that appears in
the Average Active Sessions chart. Using this chart, you can quickly determine how much of
the database work is being performed on each instance.
You can also obtain information at the instance level by clicking one of the legends to the right
of the chart to access the Top Sessions page. On the Top Session page, you can view real-time
data showing the sessions that consume the greatest system resources. In the graph in the slide
above, the orac2 instance after 8:20 PM is consistently showing more active sessions than the
orac1 instance.

Accessing the Cluster Cache Coherency Page

[Slide: screenshot of the Cluster Cache Coherency page, with drill-downs by block class and
by segment name]

Accessing the Cluster Cache Coherency Page
To access the Cluster Cache Coherency page, click the Performance tab on the Cluster
Database Home page, and click Cluster Cache Coherency in the Additional Monitoring Links
section at the bottom of the page. Alternatively, click either of the legends to the right of the
Global Cache Block Access Latency chart.
The Cluster Cache Coherency page contains summary charts for cache coherency metrics for
the cluster:
• Global Cache Block Access Latency: Shows the total elapsed time, or latency, for a
block request. Click one of the legends to the right of the chart to view the average time
it takes to receive data blocks for each block type (current or CR) by instance. On the
“Average Block Receive Time by Instance” page, you can click an instance legend under
the chart to go to the “Block Transfer for Local Instance” page, where you can identify
which block classes, such as undo blocks, data blocks, and so on, are subject to intense
global cache activity. This page displays the block classes that are being transferred, and
which instances are transferring most of the blocks. Cache transfer indicates how many
current and CR blocks for each block class were received from remote instances,
including how many transfers incurred a delay (busy) or an unexpected longer delay
(congested).
Accessing the Cluster Cache Coherency Page

[Slide: screenshot of the Cluster Cache Coherency summary charts]

Accessing the Cluster Cache Coherency Page (continued)
• Global Cache Block Transfer Rate: Shows the total aggregated number of blocks
received by all instances in the cluster by way of an interconnect. Click one of the
legends to the right of the chart to go to the “Global Cache Blocks Received by Instance”
page for that type of block. From there, you can click an instance legend under the chart
to go to the “Segment Statistics by Instance” page, where you can see which segments
are causing cache contention.
• Global Cache Block Transfers and Physical Reads: Shows the percentage of logical
read operations that retrieved data from the buffer cache of other instances by way of
Direct Memory Access and from disk. It is essentially a profile of how much work is
performed in the local buffer cache, rather than the portion of remote references and
physical reads, which both have higher latencies. Click one of the legends to the right of
the chart to go to the “Global Cache Block Transfers vs. Logical Reads by Instance” and
“Physical Reads vs. Logical Reads by Instance” pages. From there, you can click an
instance legend under the chart to go to the “Segment Statistics by Instance” page, where
you can see which segments are causing cache contention.

Viewing the Cluster Interconnects Page

[Slide: screenshot of the Cluster Interconnects page]

Viewing the Cluster Interconnects Page
The Cluster Interconnects page is useful for monitoring the interconnect interfaces,
determining configuration issues, and identifying transfer rate–related issues including excess
traffic. This page helps to determine the load added by instances and databases on the
interconnect. Sometimes you can quickly identify interconnect delays that are due to
applications outside Oracle.
You can use this page to perform the following tasks:
• View all interfaces that are configured across the cluster
• View statistics for the interfaces, such as absolute transfer rates and errors
• Determine the type of interfaces, such as private or public
• Determine whether the instance is using a public or private network
• Determine which database instance is currently using which interface
• Determine how much the instance is contributing to the transfer rate on the interface
The Private Interconnect Transfer Rate value shows a global view of the private interconnect
traffic, which is the estimated traffic on all the private networks in the cluster. The traffic is
calculated as the summary of the input rate of all private interfaces known to the cluster.
From the Cluster Interconnects page, you can access the Hardware Details page, on which you
can get more information about all the network interfaces defined on each node of your
cluster.
Viewing the Cluster Interconnects Page (continued)
Similarly, you can access the “Transfer Rate metric” page, which collects the internode
communication traffic of a cluster database instance. The critical and warning thresholds of
this metric are not set by default. You can set them according to the speed of your cluster
interconnects.
Note: You can query the GV$CLUSTER_INTERCONNECTS view to see information about the
private interconnect:
SQL> select * from GV$CLUSTER_INTERCONNECTS;

INST_ID NAME  IP_ADDRESS      IS_PUBLIC SOURCE
------- ----- --------------- --------- -------------------------
      1 eth1  192.0.2.110     NO        Oracle Cluster Repository
      2 eth1  192.0.2.111     NO        Oracle Cluster Repository
      3 eth1  192.0.2.112     NO        Oracle Cluster Repository
Viewing the Database Locks Page

[Slide: screenshot of the Database Locks page]

Viewing the Database Locks Page
Use the Database Locks page to determine whether multiple instances are holding locks for the
same object. The page shows user locks, all database locks, or locks that are blocking other
users or applications. You can use this information to stop a session that is unnecessarily
locking an object.
To access the Database Locks page, select Performance on the Cluster Database Home page,
and click Database Locks in the Additional Monitoring Links section at the bottom of the
Performance subpage.

AWR Snapshots in RAC

[Slide: on each instance, MMON gathers in-memory statistics from the SGA; the MMON
coordinator writes the hourly snapshots (6:00 a.m. through 9:00 a.m.) to the AWR tables in the
SYSAUX tablespace]

AWR Snapshots in RAC
AWR automatically generates snapshots of the performance data once every hour and collects
the statistics in the workload repository. In RAC environments, each AWR snapshot captures
data from all active instances within the cluster. The data for each snapshot set that is captured
for all active instances is from roughly the same point in time. In addition, the data for each
instance is stored separately and is identified with an instance identifier. For example, the
buffer_busy_wait statistic shows the number of buffer waits on each instance. The
AWR does not store data that is aggregated from across the entire cluster. That is, the data is
stored for each individual instance.
The statistics snapshots generated by the AWR can be evaluated by producing reports
displaying summary data such as load and cluster profiles based on regular statistics and wait
events gathered on each instance.
The AWR functions in a similar way as Statspack. The difference is that the AWR
automatically collects and maintains performance statistics for problem detection and self-
tuning purposes. Unlike in Statspack, in the AWR, there is only one snapshot_id per
snapshot across instances.
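For reference, a snapshot can also be taken manually and the snapshot schedule adjusted with
DBMS_WORKLOAD_REPOSITORY (a sketch; the interval and retention values, both in minutes, are
illustrative only):

BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,        -- snapshot every 30 minutes
    retention => 20160);    -- keep 14 days of snapshots
END;
/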

AWR Reports and RAC: Overview

[Slide: screenshot of the RAC statistics sections of an AWR report]

AWR Reports and RAC
The RAC-related statistics in an AWR report are organized in different sections. A RAC
statistics section appears after the Top 5 Timed Events. This section contains:
• The number of instances open at the time of the begin snapshot and the end snapshot to
indicate whether instances joined or left between the two snapshots
• The Global Cache Load Profile, which essentially lists the number of blocks and
messages that are sent and received, as well as the number of fusion writes
• The Global Cache Efficiency Percentages, which indicate the percentage of buffer gets
broken up into buffers received from the disk, local cache, and remote caches. Ideally,
the percentage of disk buffer access should be close to zero.
• GCS and GES Workload Characteristics, which gives you an overview of the more
important numbers first. Because the global enqueue convert statistics have been
consolidated with the global enqueue get statistics, the report prints only the average
global enqueue get time. The round-trip times for CR and current block transfers follow,
as well as the individual sender-side statistics for CR and current blocks. The average log
flush times are computed by dividing the total log flush time by the number of actual log
flushes. Also, the report prints the percentage of blocks served that actually incurred a
log flush.

AWR Reports and RAC (continued)
• GCS and GES Messaging Statistics. The most important statistic here is the average
message sent queue time on ksxp, which indicates how well the IPC works. Average
numbers should be less than 1 ms.
Additional RAC statistics are then organized in the following sections:
• The Global Enqueue Statistics section contains data extracted from
V$GES_STATISTICS.
• The Global CR Served Stats section contains data from V$CR_BLOCK_SERVER.
• The Global CURRENT Served Stats section contains data from
V$CURRENT_BLOCK_SERVER.
• The Global Cache Transfer Stats section contains data from
V$INSTANCE_CACHE_TRANSFER.
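Between AWR reports, these views can also be queried directly for ad hoc checks; for
example, a quick per-class look at cache transfers (a sketch against
V$INSTANCE_CACHE_TRANSFER):

SELECT instance, class, cr_block, current_block
FROM   v$instance_cache_transfer
WHERE  cr_block > 0 OR current_block > 0
ORDER  BY instance, class;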
The Segment Statistics section also includes the GC Buffer Busy Waits, CR Blocks Received,
and CUR Blocks Received information for relevant segments.
Note: For more information about wait events and statistics, refer to Oracle Database
Reference.
Active Session History Reports for RAC

• Active Session History (ASH) report statistics provide details about the RAC Database
session activity.
• The database records information about active sessions for all active RAC instances.
• Two ASH report sections specific to Oracle RAC are Top Cluster Events and Top
Remote Instance.

[Slide: screenshot of an ASH report]

Active Session History Reports for RAC
Active Session History (ASH) is an integral part of the Oracle Database self-management
framework and is useful for diagnosing performance problems in Oracle RAC environments.
ASH report statistics provide details about Oracle Database session activity. Oracle Database
records information about active sessions for all active Oracle RAC instances and stores this
data in the System Global Area (SGA). Any session that is connected to the database and using
CPU is considered an active session. The exception to this is sessions that are waiting for an
event that belongs to the idle wait class.
ASH reports present a manageable set of data by capturing only information about active
sessions. The amount of the data is directly related to the work being performed, rather than
the number of sessions allowed on the system. ASH statistics that are gathered over a specified
duration can be put into ASH reports.
Each ASH report is divided into multiple sections to help you identify short-lived performance
problems that do not appear in the ADDM analysis. Two ASH report sections that are specific
to Oracle RAC are Top Cluster Events and Top Remote Instance.
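An ASH report over a chosen time window can be generated from SQL*Plus with the scripts
shipped in $ORACLE_HOME/rdbms/admin (a hedged pointer; ashrpti.sql additionally prompts
for the database and instance, which is convenient in RAC):

@?/rdbms/admin/ashrpt.sql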

Active Session History Reports for RAC (continued)
Top Cluster Events
The ASH report Top Cluster Events section is part of the Top Events report that is specific to
Oracle RAC. The Top Cluster Events report lists events that account for the highest percentage
of session activity in the cluster wait class event along with the instance number of the affected
instances. You can use this information to identify which events and instances caused a high
percentage of cluster wait events.
Top Remote Instance
The ASH report Top Remote Instance section is part of the Top Load Profile report that is
specific to Oracle RAC. The Top Remote Instance report shows cluster wait events along with
the instance numbers of the instances that accounted for the highest percentages of session
activity. You can use this information to identify the instance that caused the extended cluster
wait period.
Automatic Database Diagnostic Monitor for RAC

ADDM can perform analysis:
– For the entire cluster
– For a specific database instance
– For a subset of database instances

[Slide: the self-diagnostic engine runs database ADDM for the whole cluster and instance
ADDM for each instance, Inst1 through Instn, against the shared AWR data]

Automatic Database Diagnostic Monitor for RAC
Using the Automatic Database Diagnostic Monitor (ADDM), you can analyze the information
collected by AWR for possible performance problems with your Oracle database. ADDM
presents performance data from a clusterwide perspective, thus enabling you to analyze
performance on a global basis. In an Oracle RAC environment, ADDM can analyze
performance using data collected from all instances and present it at different levels of
granularity, including:
• Analysis for the entire cluster
• Analysis for a specific database instance
• Analysis for a subset of database instances
To perform these analyses, you can run the ADDM Advisor in Database ADDM for RAC
mode to perform an analysis of the entire cluster, in Local ADDM mode to analyze the
performance of an individual instance, or in Partial ADDM mode to analyze a subset of
instances. Database ADDM for RAC is not just a report of reports but has independent
analysis that is appropriate for RAC. You activate ADDM analysis using the advisor
framework through Advisor Central in Oracle Enterprise Manager, or through the
DBMS_ADVISOR and DBMS_ADDM PL/SQL packages.
Note: The database ADDM report is generated on the AWR snapshot coordinator.
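As a sketch of the PL/SQL route (the task name and snapshot IDs below are illustrative
assumptions), database ADDM can be run over a snapshot range and its report retrieved as
follows:

VARIABLE tname VARCHAR2(60)
BEGIN
  :tname := 'addm_db_cluster';              -- hypothetical task name
  DBMS_ADDM.ANALYZE_DB(:tname, 120, 121);   -- begin and end snapshot IDs
END;
/
SET LONG 1000000
SELECT DBMS_ADDM.GET_REPORT(:tname) FROM dual;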
Automatic Database Diagnostic Monitor for RAC

• Identifies the most critical performance problems for the entire RAC cluster database
• Runs automatically when taking AWR snapshots
• Performs database-wide analysis of:
– Global resources (for example I/O and global locks)
– High-load SQL and hot blocks
– Global cache interconnect traffic
– Network latency issues
– Skew in instance response times
• Is used by DBAs to analyze cluster performance
• No need to investigate n reports to spot common problems

Automatic Database Diagnostic Monitor for RAC (continued)
In Oracle Database 11g, you can create a period analysis mode for ADDM that analyzes the
throughput performance for an entire cluster. When the advisor runs in this mode, it is called
database ADDM. You can run the advisor for a single instance, which is called instance
ADDM.
Database ADDM has access to AWR data generated by all instances, thereby making the
analysis of global resources more accurate. Both database and instance ADDM run on
continuous time periods that can contain instance startup and shutdown. In the case of database
ADDM, there may be several instances that are shut down or started during the analysis
period. However, you must maintain the same database version throughout the entire time
period.
Database ADDM runs automatically after each snapshot is taken. You can also perform
analysis on a subset of instances in the cluster. This is called partial analysis ADDM.
I/O capacity finding (the I/O system is overused) is a global finding because it concerns a
global resource affecting multiple instances. A local finding concerns a local resource or issue
that affects a single instance. For example, a CPU-bound instance results in a local finding
about the CPU. Although ADDM can be used during application development to test changes
to either the application, the database system, or the hosting machines, database ADDM is
targeted at DBAs.
What Does ADDM Diagnose for RAC?

• Latency problems in interconnect
• Congestion (identifying top instances affecting the entire cluster)
• Contention (buffer busy, top objects, etc.)
• Top consumers of multiblock requests
• Lost blocks
• Reports information about interconnect devices; warns about using PUBLIC interfaces
• Reports throughput of devices, and how much of it is used by Oracle and for what
purpose (GC, locks, PQ)

What Does ADDM Diagnose for RAC?
Data sources are:
• Wait events (especially Cluster class and buffer busy)
• Active Session History (ASH) reports
• Instance cache transfer data
• Interconnect statistics (throughput, usage by component, pings)
ADDM analyzes the effects of RAC for both the entire database (DATABASE analysis mode)
and for each instance (INSTANCE analysis mode).

EM Support for ADDM for RAC

[Slide: screenshot of the ADDM analysis on the Cluster Database Home page]

EM Support for ADDM for RAC
Oracle Database 11g Enterprise Manager displays the ADDM analysis on the Cluster Database
Home page.
On the Automatic Database Diagnostic Monitor (ADDM) page, the Database Activity chart
(not shown here) plots the database activity during the ADDM analysis period. Database
activity types are defined in the legend based on its corresponding color in the chart. Each icon
below the chart represents a different ADDM task, which in turn corresponds to a pair of
individual Oracle Database snapshots saved in the Workload Repository.
In the ADDM Performance Analysis section, the ADDM findings are listed in descending
order, from highest impact to least impact. For each finding, the Affected Instances column
displays the number (m of n) of instances affected. Drilling down further on the findings takes
you to the Performance Findings Detail page. The Informational Findings section lists the
areas that do not have a performance impact and are for informational purpose only.
The Affected Instances chart shows how much each instance is impacted by these findings.
The display indicates the percentage impact for each instance.

Quiz

Although there are specific tuning areas for RAC, such as
instance recovery and interconnect traffic, you get most
benefits by tuning your system like a single-instance system.
1. True
2. False

Answer: 1

Quiz

Which of the following RAC tuning tips are correct?
1. Application tuning is often the most beneficial!
2. Reduce long full-table scans in OLTP systems
3. Eliminate sequence caches
4. Use partitioning to reduce inter-instance traffic
5. Configure the interconnects properly

Answer: 1, 2, 4, 5
Statement 3 is incorrect.

Summary

In this lesson, you should have learned how to:
• Determine RAC-specific tuning components
• Tune instance recovery in RAC
• Determine RAC-specific wait events, global enqueues, and
system statistics
• Implement the most common RAC tuning tips
• Use the Cluster Database Performance pages
• Use the Automatic Workload Repository (AWR) in RAC
• Use the Automatic Database Diagnostic Monitor (ADDM) in RAC
Practice 14 Overview

This practice covers the following topics:
• This lab shows how to manually discover performance
issues using the EM performance pages as well as ADDM.
Services

Objectives

After completing this lesson, you should be able to:


• Configure and manage services in a RAC environment
• Use services with client applications
• Use services with the Database Resource Manager
• Use services with the Scheduler
• Configure services aggregation and tracing ble
fe r a
ans
n - t r
o
s an
) ha ฺ
ฺ c om uide
- i n c tG
a cs uden
a n @ St
g ฺ t t h is
e h en© 2010,uOracle
se and/or its affiliates. All rights reserved.
( c he se to
Copyright

T an licen
en g
e e H
C h

Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Oracle 11g: RAC and Grid Infrastructure Administration Accelerated 15 - 2
Oracle Services

• To manage workloads or a group of applications, you can
define services for a particular application or a subset of an
application's operations.
• You can also group work by type under services.
• For example, OLTP users can use one service while batch
processing can use another to connect to the database.
• Users who share a service should have the same
service-level requirements.
• Use srvctl or Enterprise Manager to manage services,
not DBMS_SERVICE.

Oracle Services
To manage workloads or a group of applications, you can define services that you assign to a
particular application or to a subset of an application's operations. You can also group work by
type under services. For example, online users can use one service, while batch processing can
use another, and reporting can use yet another service to connect to the database.
It is recommended that all users who share a service have the same service-level requirements.
You can define specific characteristics for services and each service can be a separate unit of
work. There are many options that you can take advantage of when using services. Although
you do not have to implement these options, using them helps optimize application
performance. You can define services for both policy-managed and administrator-managed
databases.
Do not use DBMS_SERVICE with cluster-managed services. When Oracle Clusterware starts
a service, it updates the database with the attributes stored in the CRS resource. If you use
DBMS_SERVICE to modify the service and do not update the CRS resource, the next time
CRS resource is started, it will override the database attributes set by DBMS_SERVICE.

Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Oracle 11g: RAC and Grid Infrastructure Administration Accelerated 15 - 3
Services for Policy- and Administrator-Managed Databases

• You can define services for both policy-managed and
administrator-managed databases.
• Services for a policy-managed database are defined to a
server pool where the database is running.
• Services for policy-managed databases can be defined as:
– UNIFORM (running on all instances in the server pool)
– SINGLETON (running on only one instance in the pool)
— For singleton services, RAC chooses on which instance in the
server pool the service is active.
• Services for an administrator-managed database define
which instances normally support that service.
– These are known as the PREFERRED instances.
– Instances defined to support a service if the preferred
instance fails are known as AVAILABLE instances.

Services for Policy- and Administrator-Managed Databases
It is recommended that all users who share a service have the same service-level requirements.
You can define specific characteristics for services and each service can be a separate unit of
work. There are many options that you can take advantage of when using services. Although
you do not have to implement these options, they help optimize application performance. You
can define services for both policy-managed and administrator-managed databases.
• Policy-managed database: When you define services for a policy-managed database,
you define the service to a server pool where the database is running. You can define the
service as either uniform (running on all instances in the server pool) or singleton
(running on only one instance in the server pool). For singleton services, RAC chooses
on which instance in the server pool the service is active. If that instance fails, then the
service fails over to another instance in the pool. A service can only run in one server
pool.
• Administrator-managed database: When you define a service for an administrator-
managed database, you define which instances support that service. These are known as
the PREFERRED instances. You can also define other instances to support a service if
the service's preferred instance fails. These are known as AVAILABLE instances.

Default Service Connections

• Application services:
– Limit of 115 services per database
• Internal services:
– SYS$BACKGROUND
– SYS$USERS
– Cannot be deleted or changed
• A special Oracle database service is created by default for
the Oracle RAC database.
• This default service is always available on all instances in
an Oracle RAC environment.

Default Service Connections
There are two broad types of services: application services and internal services. Application
services are mainly functional maps to workloads. Sessions doing work for a common business
function are grouped together. For Oracle E-Business Suite, AP, AR, GL, MFG, WIP, and so on
create a functional division of work within the database and can be categorized as services.
The RDBMS also supports two internal services. SYS$BACKGROUND is used by the
background processes only. SYS$USERS is the default service for user sessions that are not
associated with any application service. Both internal services support all the workload
management features and neither one can be stopped or disabled. A special Oracle database
service is created by default for your Oracle RAC database. This default service is always
available on all instances in an Oracle RAC environment, unless an instance is in restricted
mode. You cannot alter this service or its properties. There is a limitation of 115 application
services per database that you can create. Also, a service name is restricted to 64 characters.
Note: Shadow services are also included in the application service category. For more
information about shadow services, see the lesson titled “High Availability of Connections.”
In addition, a service is also created for each Advanced Queue created. However, these types
of services are not managed by Oracle Clusterware. Using service names to access a queue
provides location transparency for the queue within a RAC database.
Create Service with Enterprise Manager

[Slide: screenshots of the Create Service page for an administrator-managed and a
policy-managed database]

Create Services with Enterprise Manager
From your Cluster Database home page, click the Availability tab, and then click Cluster
Managed Database Services. On the Cluster Managed Database Services page, click Create
Service.
Use the Create Service page to configure a new service in which you do the following:
• Select the desired service policy for each instance configured for the cluster database.
• Select the desired service properties. Refer to the section “Service Attributes” in this
lesson for more information about the properties you can specify on this page.
If your database is administration managed, the High Availability Configuration section allows
you to configure preferred and available servers. If your database employs policy-managed
administration, you can configure the service cardinality to be UNIFORM or SINGLETON and
assign the service to a server pool.
You can also define the management policy for a service. You can choose either an automatic
or a manual management policy.
• Automatic: The service always starts when the database starts.
• Manual: Requires that the service be started manually. Prior to Oracle RAC 11g Release
2, all services worked as though they were defined with a manual management policy.
Note: Enterprise Manager now generates the corresponding entries in your tnsnames.ora
files for your services. Just click the "Update local naming parameter (tnsnames.ora) file"
check box when creating the service.
Create Services with SRVCTL

• To create a service called GL with preferred instance
RAC02 and an available instance RAC01:
$ srvctl add service –d PROD1 –s GL -r RAC02 -a RAC01

• To create a service called AP with preferred instance
RAC01 and an available instance RAC02:
$ srvctl add service –d PROD1 –s AP –r RAC01 -a RAC02

• To create a SINGLETON service called BATCH using server
pool SP1 and a UNIFORM service called ERP using server
pool SP2:
$ srvctl add service -d PROD2 -s BATCH -g SP1 \
  -c singleton -y manual
$ srvctl add service -d PROD2 -s ERP -g SP2 \
  -c UNIFORM -y manual

Create Services with SRVCTL
For the example in the slide, assume a two-node, administration-managed database called
PROD1 with an instance named RAC01 on one node and an instance called RAC02 on the
other. Two services are created, AP and GL, to be managed by Oracle Clusterware. The AP
service is defined with a preferred instance of RAC01 and an available instance of RAC02.
If RAC01 dies, the AP service member on RAC01 is restored automatically on RAC02. A
similar scenario holds true for the GL service.
Note that it is possible to assign more than one instance with both the -r and -a options.
However, -r is mandatory but -a is optional.
Next, assume a policy-managed cluster database called PROD2. Two services are created, a
SINGLETON service called BATCH and a UNIFORM service called ERP. SINGLETON
services run on one of the active servers and UNIFORM services run on all active servers of
the server pool. The characteristics of the server pool determine how resources are allocated
to the service.
Note: When services are created with srvctl, tnsnames.ora is not updated and the service
is not started.
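After a service is started, one hedged way to confirm where it is running is to query
GV$ACTIVE_SERVICES from any instance (the AP and GL names come from the earlier example):

SELECT inst_id, name
FROM   gv$active_services
WHERE  name IN ('AP', 'GL')
ORDER  BY inst_id;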

Manage Services with Enterprise Manager

[Slide: screenshot of the Cluster Managed Database Services page]

Manage Services with Enterprise Manager
You can use Enterprise Manager to manage services within a GUI framework. The screenshot
in the slide shows the main page for administering services within RAC. It shows you some
basic status information about a defined service.
To access this page, click the Cluster Managed Database Services link on the Cluster Database
Availability page.
You can perform simple service management such as enabling, disabling, starting, stopping,
and relocating services. All possible operations are shown in the slide.
If you choose to start a service on the Cluster Managed Database Services page, then EM
attempts to start the service on every preferred instance. Stopping the service stops it on all
instances that it is currently running.
To relocate a service, select the service that you want to administer, select the Manage option
from the Actions drop-down list, and then click Go.
Note: On the Cluster Managed Database Services page, you can test the connection for a
service.

Managing Services with EM

[Slide: screenshot of the Cluster Managed Database Service page for an individual service]

Manage Services with Enterprise Manager (continued)
To access the Cluster Managed Database Service page for an individual service, you must
select a service from the Cluster Managed Database Services page, select the Manage option
from the Actions drop-down list, and then click Go.
This is the Cluster Managed Database Service page for an individual service. It offers you the
same functionality as the previous page, except that actions performed here apply to specific
instances of a service.
This page also offers you the added functionality of relocating a service to an available
instance. Relocating a service from one instance to another stops the service on the first
instance and then starts it on the second.

Manage Services with srvctl

• Start a named service on all configured instances:
$ srvctl start service –d orcl –s AP

• Stop a service:
$ srvctl stop service –d orcl –s AP

• Disable a service at a named instance:
$ srvctl disable service –d orcl –s AP –i orcl4

• Set an available instance as a preferred instance:
$ srvctl modify service –d orcl –s AP -i orcl5 –r

• Relocate a service from one instance to another:
$ srvctl relocate service –d orcl –s AP -i orcl5 –t orcl4

Manage Services with srvctl
The slide demonstrates some management tasks with services by using SRVCTL.
Assume that an AP service has been created with four preferred instances: orcl1, orcl2,
orcl3, and orcl4. An available instance, orcl5, has also been defined for AP.
In the first example, the AP service is started on all instances. If any of the preferred or
available instances that support AP are not running but are enabled, then they are started.
The stop command stops the AP service on instances orcl3 and orcl4. The instances
themselves are not shut down, but remain running possibly supporting other services. The AP
service continues to run on orcl1 and orcl2. The intention might have been to perform
maintenance on orcl4, and so the AP service was disabled on that instance to prevent
automatic restart of the service on that instance. The OCR records the fact that AP is disabled
for orcl4. Thus, Oracle Clusterware will not run AP on orcl4 until the service is enabled.
The next command in the slide changes orcl5 from being an available instance to a preferred
one. This is beneficial if the intent is to always have four instances run the service because
orcl4 was previously disabled. The last example relocates the AP service from instance
orcl5 to orcl4. Do not perform other service operations while the online service
modification is in progress.
Note: For more information, refer to the Oracle Real Application Clusters Administrator's Guide.
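To verify the outcome of such commands, you can also query the current state and configuration of the service with srvctl. A minimal sketch, assuming the same orcl database and AP service (the status line shown is illustrative):
$ srvctl status service -d orcl -s AP
Service AP is running on instance(s) orcl1,orcl2
$ srvctl config service -d orcl -s AP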
Use Services with Client Applications
ERP=(DESCRIPTION= ## Using the SCAN ##
  (LOAD_BALANCE=on)
  (ADDRESS=(PROTOCOL=TCP)(HOST=cluster01-scan)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=ERP)))

ERP=(DESCRIPTION= ## Using VIPs ##
  (LOAD_BALANCE=on)
  (ADDRESS=(PROTOCOL=TCP)(HOST=node-1vip)(PORT=1521))
  (ADDRESS=(PROTOCOL=TCP)(HOST=node-2vip)(PORT=1521))
  (ADDRESS=(PROTOCOL=TCP)(HOST=node-3vip)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=ERP)))

url="jdbc:oracle:oci:@ERP" ## Thick JDBC ##

url="jdbc:oracle:thin:@(DESCRIPTION= ## Thin JDBC ##
  (LOAD_BALANCE=on)
  (ADDRESS=(PROTOCOL=TCP)(HOST=cluster01-scan)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=ERP)))"
Use Services with Client Applications
The first example in the slide shows the TNS connect descriptor that can be used to access the ERP service. It uses the cluster's Single Client Access Name (SCAN). The SCAN provides a
single name to the clients connecting to Oracle RAC that does not change throughout the life
of the cluster, even if you add or remove nodes from the cluster. Clients connecting with
SCAN can use a simple connection string, such as a thin JDBC URL or EZConnect, and still
achieve the load balancing and client connection failover. The second example uses virtual IP
addresses as in previous versions of the Oracle Database.
The third example shows the thick JDBC connection description using the previously defined TNS connect descriptor.
The fourth example shows the thin JDBC connection description using the same TNS connect descriptor as the first example.
Note: The LOAD_BALANCE=ON clause is used by Oracle Net to randomize its progress
through the protocol addresses of the connect descriptor. This feature is called client
connection load balancing.
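Because SCAN removes the need for long address lists, the same service can also be reached with a bare EZConnect string and no tnsnames.ora entry at all. A minimal sketch, reusing the cluster01-scan host from the slide with a hypothetical hr account:
$ sqlplus hr@//cluster01-scan:1521/ERP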
Services and Connection Load Balancing
• The two load balancing methods that you can implement are:
  – Client-side load balancing: Balances the connection requests across the listeners
  – Server-side load balancing: The listener directs a connection request to the best instance currently providing the service by using the load balancing advisory (LBA).
• FAN, Fast Connection Failover, and LBA depend on a connection load balancing configuration that includes setting the connection load balancing goal for the service.
• The load balancing goal for the service can be either:
  – LONG: For applications having long-lived connections. This is typical for connection pools and SQL*Forms sessions.
  – SHORT: For applications that have short-lived connections

srvctl modify service -s service_name -j LONG|SHORT
Services and Connection Load Balancing
Oracle Net Services provides the ability to balance client connections across the instances in an Oracle RAC configuration. You can implement two types of load balancing: client-side and server-side. Client-side load balancing balances the connection requests across the listeners.
With server-side load balancing, the listener directs a connection request to the best instance
currently providing the service by using the load balancing advisory. In a RAC database, client
connections should use both types of connection load balancing.
FAN, Fast Connection Failover, and the load balancing advisory depend on an accurate
connection load balancing configuration that includes setting the connection load balancing
goal for the service. You can use a goal of either LONG or SHORT for connection load
balancing. These goals have the following characteristics:
• LONG: Use the LONG load balancing method for applications that have long-lived
connections. This is typical for connection pools and SQL*Forms sessions. LONG is the
default connection load balancing goal. The following is an example of modifying a
service, POSTMAN, with the srvctl utility to define the connection load balancing goal
for long-lived sessions:
srvctl modify service -s POSTMAN -j LONG
• SHORT: Use the SHORT connection load balancing method for applications that have short-lived connections. The following example modifies the ORDER service, using srvctl to set the goal to SHORT:
srvctl modify service -s ORDER -j SHORT
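To confirm which goal is currently in effect for each service, you can query the data dictionary. A minimal sketch; the CLB_GOAL column of DBA_SERVICES reports LONG or SHORT:
SELECT name, clb_goal FROM dba_services;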
Services and Transparent Application Failover

• Services simplify the deployment of Transparent


Application Failover (TAF).
• You can define a TAF policy for a service and all
connections using this service will automatically have TAF
enabled.
• The TAF setting on a service can be NONE, BASIC, or
PRECONNECT and overrides any TAF setting in the client
a b le
connection definition. s f er
t r n
acan
• To define a TAF policy for a service, the srvctl utility
-
n on
be used as shown below: a
h a s
srvctl modify service -s gl.example.com m )-q TRUE e ฺ -P
o
ฺc Gu i d
BASIC -e SELECT -z 180 -w 5 -j LONG
i n c
- between t retry attempts and -j is the
Where -z is the number of retries, -w is thecdelay
a tuds e n
connection load balancing goal.
a n @ S
ฺ t h i s
h e ng se t
e e © t2010,
Copyright
o uOracle and/or its affiliates. All rights reserved.
h
(c nse
n
Services and Transparent Application Failover
When Oracle Net Services establishes a connection to an instance, the connection remains open until the client closes the connection, the instance is shut down, or a failure occurs. If you
configure TAF for the connection, then Oracle Database moves the session to a surviving
instance when an outage occurs.
TAF can restart a query after failover has completed but for other types of transactions, such as
INSERT, UPDATE, or DELETE, the application must roll back the failed transaction and
resubmit the transaction. You must re-execute any session customizations, in other words,
ALTER SESSION statements, after failover has occurred. However, with TAF, a connection
is not moved during normal processing, even if the workload changes over time.
Services simplify the deployment of TAF. You can define a TAF policy for a service, and all
connections using this service will automatically have TAF enabled. This does not require any
client-side changes. The TAF setting on a service overrides any TAF setting in the client
connection definition. To define a TAF policy for a service, use the srvctl utility as in the
following example:
srvctl modify service -s gl.example.com -q TRUE -P BASIC -e
SELECT -z 180 -w 5 -j LONG
Note: TAF applies only to an admin-managed database and not to policy-managed databases.
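For comparison, a purely client-side TAF policy (which a service-level setting would override) is specified in the connect descriptor itself. A minimal tnsnames.ora sketch, reusing the cluster01-scan host from the earlier examples:
GL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = gl.example.com)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))))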
Use Services with the Resource Manager
• Consumer groups are automatically assigned to sessions based on session services.
• Work is prioritized by service inside one instance.

[Figure: AP and BATCH connections map to consumer groups that share one instance's resources: AP 75%, BATCH 25%.]
Use Services with the Resource Manager
The Database Resource Manager (also called Resource Manager) enables you to identify work by using services. It manages the relative priority of services within an instance by binding
services directly to consumer groups. When a client connects by using a service, the consumer
group is assigned transparently at connect time. This enables the Resource Manager to manage
the work requests by service in the order of their importance.
For example, you define the AP and BATCH services to run on the same instance, and assign
AP to a high-priority consumer group and BATCH to a low-priority consumer group. Sessions
that connect to the database with the AP service specified in their TNS connect descriptor get
priority over those that connect to the BATCH service.
This offers benefits in managing workloads because priority is given to business functions
rather than the sessions that support those business functions.
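The service-to-consumer-group mapping described above can also be created in PL/SQL rather than through EM. A minimal sketch, assuming the AP and BATCH services from the slide and two hypothetical consumer groups, HIGH_PRI and LOW_PRI, that already exist:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  -- Map sessions connecting with the AP service to the high-priority group
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.SERVICE_NAME,
    value          => 'AP',
    consumer_group => 'HIGH_PRI');
  -- Map sessions connecting with the BATCH service to the low-priority group
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.SERVICE_NAME,
    value          => 'BATCH',
    consumer_group => 'LOW_PRI');
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/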
Services and Resource Manager with EM

ble
fe r a
ans
n - t r
a no
h a s
m ) e ฺ
o
ฺc Gu i d
- i n c t
c
a tuds e n
a n @ S
ฺ t h i s
h e ng se t
e e © t2010,
Copyright
o uOracle and/or its affiliates. All rights reserved.
h
(c nse
n
Services and Resource Manager with EM
Enterprise Manager (EM) presents a GUI through the Consumer Group Mapping page to automatically map sessions to consumer groups. You can access this page by clicking the
Consumer Group Mappings link on the Server page.
Using the General tabbed page of the Consumer Group Mapping page, you can set up a
mapping of sessions connecting with a service name to consumer groups as illustrated on the
right part of the slide.
With the ability to map sessions to consumer groups by service, module, and action, you have
greater flexibility when it comes to managing the performance of different application
workloads.
Using the Priorities tabbed page of the Consumer Group Mapping page, you can change
priorities for the mappings that you set up on the General tabbed page. The mapping options
correspond to columns in V$SESSION. When multiple mapping columns have values, the
priorities you set determine the precedence for assigning sessions to consumer groups.
Note: You can also map a service to a consumer group directly from the Create Service page
as shown on the left part of the slide.
Use Services with the Scheduler
• Services are associated with Scheduler classes.
• Scheduler jobs have service affinity:
  – High Availability
  – Load balancing

[Figure: Three instances offer the HOT_BATCH_SERV, HOT_BATCH_SERV, and LOW_BATCH_SERV services. Each instance runs a job coordinator with job slaves. In the database job table, Job1 and Job2 belong to HOT_BATCH_CLASS (HOT_BATCH_SERV) and Job3 belongs to LOW_BATCH_CLASS (LOW_BATCH_SERV).]
Use Services with the Scheduler
Just as in other environments, the Scheduler in a RAC environment uses one job table for each database and one job coordinator (CJQ0 process) for each instance. The job coordinators
communicate with each other to keep information current.
The Scheduler can use the services and the benefits they offer in a RAC environment. The
service that a specific job class uses is defined when the job class is created. During execution,
jobs are assigned to job classes and job classes run within services. Using services with job
classes ensures that the work of the Scheduler is identified for workload management and
performance tuning. For example, jobs inherit server-generated alerts and performance
thresholds for the service they run under.
For High Availability, the Scheduler offers service affinity instead of instance affinity. Jobs
are not scheduled to run on any specific instance. They are scheduled to run under a service.
So, if an instance dies, the job can still run on any other instance in the cluster that offers the
service.
Note: By specifying the service where you want the jobs to run, the job coordinators balance
the load on your system for better performance.
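The service a job class uses is set when the class is created, as described above. A minimal PL/SQL sketch using the names from the slide (the comment text is an assumption):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS(
    job_class_name => 'HOT_BATCH_CLASS',
    service        => 'HOT_BATCH_SERV',   -- jobs in this class run where the service is offered
    comments       => 'Jobs that must run under HOT_BATCH_SERV');
END;
/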
Services and the Scheduler with EM
Services and the Scheduler with EM
To configure a job to run under a specific service, click the Job Classes link in the Database Scheduler section of the Server page. This opens the Scheduler Job Classes page. On the
Scheduler Job Classes page, you can see services assigned to job classes.
When you click the Create button on the Scheduler Job Classes page, the Create Job Class
page is displayed. On this page, you can enter details of a new job class, including which
service it must run under.
Note: Similarly, you can map a service to a job class on the Create Service page as shown at
the bottom of the slide.
Services and the Scheduler with EM
Services and the Scheduler with EM (continued)
After your job class is set up with the service that you want it to run under, you can create the job. To create the job, click the Jobs link on the Server page. The Scheduler Jobs page appears,
on which you can click the Create button to create a new job. When you click the Create
button, the Create Job page is displayed. This page has different tabs: General, Schedule, and
Options. Use the General tabbed page to assign your job to a job class.
Use the Options page (displayed in the slide) to set the Instance Stickiness attribute for your
job. Basically, this attribute causes the job to be load balanced across the instances for which
the service of the job is running. The job can run only on one instance. If the Instance
Stickiness value is set to TRUE, which is the default value, the Scheduler runs the job on the
instance where the service is offered with the lightest load. If Instance Stickiness is set to
FALSE, then the job is run on the first available instance where the service is offered.
Note: It is possible to set job attributes, such as INSTANCE_STICKINESS, by using the
SET_ATTRIBUTE procedure of the DBMS_SCHEDULER PL/SQL package.
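For example, a minimal sketch of changing the attribute for a hypothetical job named MY_BATCH_JOB:
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'MY_BATCH_JOB',
    attribute => 'INSTANCE_STICKINESS',
    value     => FALSE);   -- run on the first available instance offering the service
END;
/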
Using Distributed Transactions with RAC
• An XA transaction can span RAC instances, allowing any application that uses XA to take full advantage of the Oracle RAC environment.
• Tightly coupled XA transactions no longer require the special type of singleton services (DTP).
• XA transactions are transparently supported on Oracle RAC databases with any type of services configuration.
• However, DTP services will improve performance for many distributed transaction scenarios.
• DTP services allow you to direct all branches of a distributed transaction to a single instance in the cluster.
• To load balance, it is better to have several groups of smaller application servers with each group directing its transactions to a single service.
Using Distributed Transactions with RAC
An XA transaction can span Oracle RAC instances by default, allowing any application that uses XA to take full advantage of the Oracle RAC environment to enhance the availability and
scalability of the application. This is controlled through the GLOBAL_TXN_PROCESSES
initialization parameter, which is set to 1 by default. This parameter specifies the initial
number of GTXn background processes for each Oracle RAC instance. Keep this parameter at
its default value clusterwide to allow distributed transactions to span multiple Oracle RAC
instances. This allows the units of work performed across these Oracle RAC instances to share
resources and act as a single transaction (that is, the units of work are tightly coupled). It also
allows 2PC requests to be sent to any node in the cluster. Tightly coupled XA transactions no
longer require the special type of singleton services (that is, Oracle Distributed Transaction
Processing [DTP] services) to be deployed on Oracle RAC database. XA transactions are
transparently supported on Oracle RAC databases with any type of services configuration.
To provide improved application performance with distributed transaction processing in
Oracle RAC, you may want to take advantage of the specialized service referred to as a DTP
Service. Using DTP services, you can direct all branches of a distributed transaction to a single
instance in the cluster. To load balance across the cluster, it is better to have several groups of
smaller application servers with each group directing its transactions to a single service, or set
of services, than to have one or two larger application servers.
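A quick way to confirm that the default value of the parameter is in effect on an instance (output depends on your system):
SQL> SHOW PARAMETER global_txn_processes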
Distributed Transactions and Services
To leverage all instances, create one or more DTP services for
each instance that hosts distributed transactions.
• Use EM or srvctl to create singleton services XA1, XA2
and XA3 for database CRM, enabling DTP for the service:
Distributed Transactions and Services
To enhance the performance of distributed transactions, you can use services to manage DTP environments. By defining the DTP property of a service, the service is guaranteed to run on
one instance at a time in an Oracle RAC database. All global distributed transactions
performed through the DTP service are ensured to have their tightly coupled branches running
on a single Oracle RAC instance. This has the following benefits:
• The changes are available locally within one Oracle RAC instance when tightly coupled
branches need information about changes made by each other.
• Relocation and failover of services are fully supported for DTP.
• By using more DTP services than there are Oracle RAC instances, Oracle Database can
balance the load by services across all the Oracle RAC database instances.
To leverage all the instances in a cluster, create one or more DTP services for each Oracle
RAC instance that hosts distributed transactions. Choose one DTP service for one distributed
transaction. Choose different DTP services for different distributed transactions to balance the
workload among the Oracle RAC database instances.
Because all the branches of a distributed transaction are on one instance, you can leverage all
the instances to balance the load of many DTP transactions through multiple singleton
services, thereby maximizing application throughput.
Distributed Transactions and Services (continued)
An external transaction manager, such as OraMTS, coordinates DTP/XA transactions.
However, an internal Oracle transaction manager coordinates distributed SQL transactions.
Both DTP/XA and distributed SQL transactions must use the DTP service in Oracle RAC.
For services that you are going to use for distributed transaction processing, create the service
using Oracle Enterprise Manager or srvctl, and define only one instance as the preferred
instance. You can have as many AVAILABLE instances as you want. For example, the
following srvctl command creates a singleton service for database CRM,
xa1.service.example.com, whose preferred instance is orcl1:
srvctl add service -d CRM -s xa1.service.example.com -r orcl1 -a orcl2,orcl3
Then mark the service for distributed transaction processing by setting the DTP parameter to TRUE; the default is FALSE. Oracle Enterprise Manager enables you to set this parameter on the Cluster Managed Database Services: Create Service or Modify Service page.
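With srvctl, the same property can be set from the command line; in Oracle Database 11g srvctl, the DTP flag is the -x option. A sketch based on the service created above (verify the option syntax against your srvctl version):
$ srvctl modify service -d CRM -s xa1.service.example.com -x TRUE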
If, for example, orcl1 (that provides service XA1) fails, then the singleton service that it provided fails over to another instance, such as orcl2 or orcl3. If services migrate to other instances after the cold-start of the Oracle RAC database, then you might have to force the relocation of the service to evenly re-balance the load on all the available hardware. Use data from the GV$ACTIVE_SERVICES view to determine whether to do this.
Service Thresholds and Alerts
• Service-level thresholds enable you to compare achieved
service levels against accepted minimum required levels.
• You can explicitly specify two performance thresholds for
each service:
– SERVICE_ELAPSED_TIME: The response time for calls
– SERVICE_CPU_TIME: The CPU time for calls
Service Thresholds and Alerts
Service-level thresholds enable you to compare achieved service levels against accepted minimum required levels. This provides accountability for the delivery or the failure to deliver
an agreed service level. The end goal is a predictable system that achieves service levels.
There is no requirement to perform as fast as possible with minimum resource consumption;
the requirement is to meet the quality of service.
You can explicitly specify two performance thresholds for each service: the response time for
calls, or SERVICE_ELAPSED_TIME, and the CPU time for calls, or SERVICE_CPU_TIME. The
response time goal indicates that the elapsed time should not exceed a certain value, and the
response time represents wall clock time. Response time is a fundamental measure that reflects
all delays and faults that might be blocking the call from running on behalf of the user.
Response time can also indicate differences in node power across the nodes of an Oracle RAC
database.
The service time and CPU time are calculated as the moving average of the elapsed, server-
side call time. The AWR monitors the service time and CPU time and publishes AWR alerts
when the performance exceeds the thresholds. You can then respond to these alerts by
changing the priority of a job, stopping overloaded processes, or by relocating, expanding,
shrinking, starting, or stopping a service. This permits you to maintain service availability
despite changes in demand.
Services and Thresholds Alerts: Example
EXECUTE DBMS_SERVER_ALERT.SET_THRESHOLD(
  METRICS_ID => DBMS_SERVER_ALERT.ELAPSED_TIME_PER_CALL
  , warning_operator => DBMS_SERVER_ALERT.OPERATOR_GE
  , warning_value => '500000'
  , critical_operator => DBMS_SERVER_ALERT.OPERATOR_GE
  , critical_value => '750000'
  , observation_period => 30
  , consecutive_occurrences => 5
  , instance_name => NULL
  , object_type => DBMS_SERVER_ALERT.OBJECT_TYPE_SERVICE
  , object_name => 'servall');
Services and Thresholds Alerts: Example
To check the thresholds for the servall service, use the AWR report. You should record the output from the report over several successive intervals during which time the system is
running optimally. For example, assume that for an email server, the AWR report runs each
Monday during the peak usage times of 10:00 AM to 2:00 PM. The AWR report would
contain the response time, or DB time, and the CPU consumption time, or CPU time, for calls
for each service. The AWR report would also provide a breakdown of the work done and the
wait times that are contributing to the response times.
Using DBMS_SERVER_ALERT, set a warning threshold for the servall service at 0.5 seconds and a critical threshold for the same service at 0.75 seconds. You must set these
thresholds at all instances within an Oracle RAC database. The parameter instance_name
can be set to a NULL value to indicate database-wide alerts. You can schedule actions using
Enterprise Manager jobs for alerts, or you can schedule actions to occur programmatically
when the alert is received. In this example, thresholds are added for the servall service and
set as shown above.
Verify the threshold configuration using the following SELECT statement:
SELECT metrics_name, instance_name, warning_value,
critical_value, observation_period FROM dba_thresholds;
Service Aggregation and Tracing
• Statistics are always aggregated by service to measure workloads for performance tuning.
• Statistics can be aggregated at finer levels:
  – MODULE
  – ACTION
  – Combination of SERVICE_NAME, MODULE, ACTION
• Tracing can be done at various levels:
  – SERVICE_NAMES
  – MODULE
  – ACTION
  – Combination of SERVICE_NAME, MODULE, ACTION
• This is useful for tuning systems that use shared sessions.
e Tracing
g
en important statistics and wait events are collected for the work attributed to every
eBy
e Hdefault,
Ch service. An application can further qualify a service by MODULE and ACTION names to
identify the important transactions within the service. This enables you to locate exactly the
poorly performing transactions for categorized workloads. This is especially important when
monitoring performance in systems by using connection pools or transaction processing
monitors. For these systems, the sessions are shared, which makes accountability difficult.
SERVICE_NAME, MODULE, and ACTION are actual columns in V$SESSION.
SERVICE_NAME is set automatically at login time for the user. MODULE and ACTION names
are set by the application by using the DBMS_APPLICATION_INFO PL/SQL package or
special OCI calls. MODULE should be set to a user-recognizable name for the program that is
currently executing. Likewise, ACTION should be set to a specific action or task that a user is
performing within a module (for example, entering a new customer).
Another aspect of this workload aggregation is tracing by service. The traditional method of
tracing each session produces trace files with SQL commands that can span workloads. This
results in a hit-or-miss approach to diagnose problematic SQL. With the criteria that you
provide (SERVICE_NAME, MODULE, or ACTION), specific trace information is captured in a
set of trace files and combined into a single output trace file. This enables you to produce trace
files that contain SQL that is relevant to a specific workload being done.
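The combining step described above is typically performed with the trcsess utility, which merges the trace files that match the given criteria into one file. A minimal sketch, run in the trace directory with a hypothetical output file name:
$ trcsess output=erp_payroll.trc service=ERP module=PAYROLL *.trc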
Top Services Performance Page
Top Services Performance Page
From the Performance page, you can access the Top Consumers page by clicking the Top Consumers link.
The Top Consumers page has several tabs for displaying your database as a single-system
image. The Overview tabbed page contains four pie charts: Top Clients, Top Services, Top
Modules, and Top Actions. Each chart provides a different perspective regarding the top
resource consumers in your database.
The Top Services tabbed page displays performance-related information for the services that
are defined in your database. Using this page, you can enable or disable tracing at the service
level, as well as view the resulting SQL trace file.
Service Aggregation Configuration
• Automatic service aggregation level of statistics
• DBMS_MONITOR used for finer granularity of service aggregations:
  – SERV_MOD_ACT_STAT_ENABLE
  – SERV_MOD_ACT_STAT_DISABLE
• Possible additional aggregation levels:
  – SERVICE_NAME/MODULE
  – SERVICE_NAME/MODULE/ACTION
• Tracing services, modules, and actions:
  – SERV_MOD_ACT_TRACE_ENABLE
  – SERV_MOD_ACT_TRACE_DISABLE
• Database settings persist across instance restarts.
Service Aggregation Configuration
On each instance, important statistics and wait events are automatically aggregated and collected by service. You do not have to do anything to set this up, except connect with
different connect strings using the services you want to connect to. However, to achieve a finer
level of granularity of statistics collection for services, you must use the
SERV_MOD_ACT_STAT_ENABLE procedure in the DBMS_MONITOR package. This
procedure enables statistics gathering for additional hierarchical combinations of
SERVICE_NAME/MODULE and SERVICE_NAME/MODULE/ACTION. The
SERV_MOD_ACT_STAT_DISABLE procedure stops the statistics gathering that was turned on.
The enabling and disabling of statistics aggregation within the service applies to every
instance accessing the database. These settings are persistent across instance restarts.
The SERV_MOD_ACT_TRACE_ENABLE procedure enables tracing for services with three
hierarchical possibilities: SERVICE_NAME, SERVICE_NAME/MODULE, and
SERVICE_NAME/MODULE/ACTION. The default is to trace for all instances that access the
database. A parameter is provided that restricts tracing to specified instances where poor
performance is known to exist. This procedure also gives you the option of capturing relevant
waits and bind variable values in the generated trace files.
SERV_MOD_ACT_TRACE_DISABLE disables the tracing at all enabled instances for a given
combination of service, module, and action. Like the statistics gathering mentioned previously,
service tracing persists across instance restarts.
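A minimal sketch of enabling tracing for one service/module combination with the procedure described above (the waits and binds settings are choices, not requirements):
EXEC DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE( -
  service_name => 'ERP', module_name => 'PAYROLL', -
  action_name => DBMS_MONITOR.ALL_ACTIONS, -
  waits => TRUE, binds => TRUE);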
Service, Module, and Action Monitoring
• For the ERP service, enable monitoring for the exceptions pay action in the PAYROLL module.

EXEC DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(
  service_name => 'ERP', module_name => 'PAYROLL',
  action_name => 'EXCEPTIONS PAY')

• For the ERP service, enable monitoring for all the actions in the PAYROLL module:

EXEC DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(service_name =>
  'ERP', module_name => 'PAYROLL', action_name => NULL);

• For the HOT_BATCH service, enable monitoring for all actions in the posting module:

EXEC DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(service_name =>
  'HOT_BATCH', module_name => 'POSTING', action_name => NULL);
Service, Module, and Action Monitoring
You can enable performance data tracing for important modules and actions within each service. The performance statistics are available in the V$SERV_MOD_ACT_STATS view.
Consider the following actions, as implemented in the slide above:
• For the ERP service, enable monitoring for the exceptions pay action in the payroll
module.
• Under the ERP service, enable monitoring for all the actions in the payroll module.
• Under the HOT_BATCH service, enable monitoring for all actions in the posting module.
Verify the enabled service, module, action configuration with the SELECT statement below:
COLUMN AGGREGATION_TYPE FORMAT A21 TRUNCATED HEADING 'AGGREGATION'
COLUMN PRIMARY_ID FORMAT A20 TRUNCATED HEADING 'SERVICE'
COLUMN QUALIFIER_ID1 FORMAT A20 TRUNCATED HEADING 'MODULE'
COLUMN QUALIFIER_ID2 FORMAT A20 TRUNCATED HEADING 'ACTION'
SELECT * FROM DBA_ENABLED_AGGREGATIONS ;
The output might appear as follows:
AGGREGATION SERVICE MODULE ACTION
--------------------- -------------------- ---------- -------------
SERVICE_MODULE_ACTION ERP PAYROLL EXCEPTIONS PAY
SERVICE_MODULE_ACTION ERP PAYROLL
SERVICE_MODULE_ACTION HOT_BATCH POSTING
Service Performance Views
• Service, module, and action information in:
  – V$SESSION
  – V$ACTIVE_SESSION_HISTORY
• Service performance in:
  – V$SERVICE_STATS
  – V$SERVICE_EVENT
  – V$SERVICE_WAIT_CLASS
  – V$SERVICEMETRIC
  – V$SERVICEMETRIC_HISTORY
  – V$SERV_MOD_ACT_STATS
  – DBA_ENABLED_AGGREGATIONS
  – DBA_ENABLED_TRACES
Service Performance Views
The service, module, and action information are visible in V$SESSION and V$ACTIVE_SESSION_HISTORY.
The call times and performance statistics are visible in V$SERVICE_STATS,
V$SERVICE_EVENT, V$SERVICE_WAIT_CLASS, V$SERVICEMETRIC, and
V$SERVICEMETRIC_HISTORY.
When statistics collection for specific modules and actions is enabled, performance measures
are visible at each instance in V$SERV_MOD_ACT_STATS.
There are more than 600 performance-related statistics that are tracked and visible in
V$SYSSTAT. Of these, 28 statistics are tracked for services. To see the statistics measured for
services, run the following query: SELECT DISTINCT stat_name FROM
v$service_stats
Of the 28 statistics, DB time and DB CPU are worth mentioning. DB time is a statistic that
measures the average response time per call. It represents the actual wall clock time for a call
to complete. DB CPU is an average of the actual CPU time spent per call. The difference
between response time and CPU time is the wait time for the service. After the wait time is
known, and if it consumes a large percentage of response time, then you can trace at the action
level to identify the waits.
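A simple way to watch the per-call response time and CPU time for each service is to query the service metric view. A minimal sketch (ELAPSEDPERCALL and CPUPERCALL hold the moving averages discussed above):
SELECT service_name, elapsedpercall, cpupercall
FROM   v$servicemetric
ORDER BY service_name;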
Note: DBA_ENABLED_AGGREGATIONS displays information about enabled on-demand statistic aggregation. DBA_ENABLED_TRACES displays information about enabled traces.
Quiz
Which of the following statements regarding Oracle Services is not correct?
1. You can group work by type under services.
2. Users who share a service should have the same service-level requirements.
3. Use DBMS_SERVICE to manage services, not srvctl or Enterprise Manager.
Answer: 3
Statement 3 is not correct.
Quiz
Is the following statement regarding performance thresholds true or false? The two performance thresholds that can be explicitly set for each service are:
(a) SERVICE_ELAPSED_TIME: The response time for calls
(b) SERVICE_CPU_TIME: The CPU time for calls
1. True
2. False
Answer: 1
The statement is true.
Summary
In this lesson, you should have learned how to:
• Configure and manage services in a RAC environment
• Use services with client applications
• Use services with the Database Resource Manager
• Use services with the Scheduler
• Set performance-metric thresholds on services
• Configure services aggregation and tracing
Practice 15: Overview
This practice covers the following topics:
• Creating and managing services using EM
• Using server-generated alerts in combination with services
Design for High Availability
Objectives
After completing this lesson, you should be able to:
• Design a Maximum Availability Architecture in your environment
• Determine the best RAC and Data Guard topologies for your environment
• Configure the Data Guard Broker configuration files in a RAC environment
• Identify successful disk I/O strategies
Causes of Unplanned Down Time
[Figure: A taxonomy of unplanned down time]
• Software failures: operating system, database, middleware, application, network
• Hardware failures: CPU, memory, power supply, bus, disk, tape, controllers, network, power
• Human errors: operator error, user error, DBA, system admin., sabotage
• Disasters: fire, flood, earthquake, power failure, bombing
Causes of Unplanned Down Time
One of the true challenges in designing a highly available solution is examining and addressing all the possible causes of down time. It is important to consider causes of both
unplanned and planned down time. The diagram shown in the slide, which is a taxonomy of
unplanned failures, classifies failures as software failures, hardware failures, human error, and
disasters. Under each category heading is a list of possible causes of failures related to that
category.
Software failures include operating system, database, middleware, application, and network
failures. A failure of any one of these components can cause a system fault.
Hardware failures include system, peripheral, network, and power failures.
Human error, which is a leading cause of failures, includes errors by an operator, user,
database administrator, or system administrator. Another type of human error that can cause
unplanned down time is sabotage.
The final category is disasters. Although infrequent, these can have extreme impacts on
enterprises, because of their prolonged effect on operations. Possible causes of disasters
include fires, floods, earthquakes, power failures, and bombings. A well-designed high-
availability solution accounts for all these factors in preventing unplanned down time.
Causes of Planned Down Time
[Figure: A taxonomy of planned down time]
• Routine operations: backups, performance mgmt, security mgmt, batches
• Periodic maintenance: storage maintenance, initialization parameters, software patches, schema management, operating system, middleware, network
• New deployments: HW upgrade, OS upgrades, DB upgrades, MidW upgrades, App upgrades, Net upgrades
Causes of Planned Down Time
Planned down time can be just as disruptive to operations, especially in global enterprises that support users in multiple time zones, up to 24 hours per day. In these cases, it is important to
design a system to minimize planned interruptions. As shown by the diagram in the slide,
causes of planned down time include routine operations, periodic maintenance, and new
deployments. Routine operations are frequent maintenance tasks that include backups,
performance management, user and security management, and batch operations.
Periodic maintenance, such as installing a patch or reconfiguring the system, is occasionally
necessary to update the database, application, operating system middleware, or network.
New deployments describe major upgrades to the hardware, operating system, database,
application, middleware, or network. It is important to consider not only the time to perform
the upgrade, but also the effect the changes may have on the overall application.
Oracle’s Solution to Down Time
[Figure: Oracle features mapped to the causes of down time]
• Unplanned down time, system failures: RAC, Fast-Start Fault Recovery
• Unplanned down time, data failures: RMAN backup/recovery (Data Recovery Advisor), Data Guard, Streams, ASM, Flashback, HARD
• Planned down time, system changes: Data Guard & Streams, rolling upgrades/online patching, dynamic provisioning
• Planned down time, data changes: online redefinition
Oracle's Solution to Down Time
Unplanned down time is primarily the result of computer failures or data failures. Planned down time is primarily due to data changes or system changes:
• RAC provides optimal performance, scalability, and availability gains.
• Fast-Start Fault Recovery enables you to bound the crash/recovery time. The database
self-tunes checkpoint processing to safeguard the desired recovery time objective.
• ASM provides a higher level of availability using online provisioning of storage.
• Flashback provides a quick resolution to human errors.
• Oracle Hardware Assisted Resilient Data (HARD) is a comprehensive program designed
to prevent data corruptions before they happen.
• Recovery Manager (RMAN) automates database backup and recovery. Data Recovery
Advisor (not supported for RAC) diagnoses data failures and presents repair options.
• Data Guard must be the foundation of any Oracle database disaster-recovery plan.
• The increased flexibility and capability of Streams over Data Guard with SQL Apply
requires more expense and expertise to maintain an integrated high availability solution.
• With online redefinition, the Oracle database supports many maintenance operations
without disrupting database operations, or users updating or accessing data.
• Oracle Database continues to broaden support for dynamic reconfiguration, enabling it to
adapt to changes in demand and hardware with no disruption of service.
• Oracle Database supports the application of patches to the nodes of a RAC system, as
well as database software upgrades, in a rolling fashion.
RAC and Data Guard Complementarity
[Table: Resource / Cause / Protection]
• Nodes — component failure — RAC
• Instances — component failure — RAC
• Data — software failure, human error, environment — Data Guard & Flashback
• Site — Data Guard & Streams
RAC and Data Guard Complementarity
RAC and Data Guard together provide the benefits of system-level, site-level, and data-level protection, resulting in high levels of availability and disaster recovery without loss of data.
• RAC addresses system failures by providing rapid and automatic recovery from failures,
such as node failures and instance crashes.
• Data Guard addresses site failures and data protection through transactionally consistent
primary and standby databases that do not share disks, enabling recovery from site
disasters and data corruption.
Note: Unlike Data Guard using SQL Apply, Oracle Streams enables updates on the replica and
provides support for heterogeneous platforms with different database releases. Therefore,
Oracle Streams may provide the fastest approach for database upgrades and platform
migration.
Maximum Availability Architecture
[Figure: Clients are routed by a WAN traffic manager to Oracle Application Servers at a primary site and a secondary site. Data Guard connects the primary site's RAC database to RAC physical and logical standby databases at the secondary site, which are available for real-time query.]
Maximum Availability Architecture (MAA)
RAC and Data Guard provide the basis of the database MAA solution. MAA provides the most comprehensive architecture for reducing down time for scheduled outages and
preventing, detecting, and recovering from unscheduled outages. The recommended MAA has
two identical sites. The primary site contains the RAC database, and the secondary site
contains both a physical standby database and a logical standby database on RAC. Identical
site configuration is recommended to ensure that performance is not sacrificed after a failover
or switchover. Symmetric sites also enable processes and procedures to be kept the same
between sites, making operational tasks easier to maintain and execute.
The graphic illustrates identically configured sites. Each site consists of redundant components
and redundant routing mechanisms, so that requests are always serviceable even in the event of
a failure. Most outages are resolved locally. Client requests are always routed to the site
playing the production role.
After a failover or switchover operation occurs due to a serious outage, client requests are
routed to another site that assumes the production role. Each site contains a set of application
servers or mid-tier servers. The site playing the production role contains a production database
using RAC to protect from host and instance failures. The site playing the standby role
contains one standby database, and one logical standby database managed by Data Guard.
Data Guard switchover and failover functions allow the roles to be traded between sites.
Note: For more information, see the following Web site:
http://otn.oracle.com/deploy/availability/htdocs/maa.htm
RAC and Data Guard Topologies
• Symmetric configuration with RAC at all sites:
  – Same number of instances
  – Same service preferences
• Asymmetric configuration with RAC at all sites:
  – Different number of instances
  – Different service preferences
• Asymmetric configuration with mixture of RAC and single instance:
  – All sites running under Oracle Clusterware
  – Some single-instance sites not running under Oracle Clusterware
RAC and Data Guard Topologies
You can configure a standby database to protect a primary database in a RAC environment.
e e
Ch Basically, all kinds of combinations are supported. For example, it is possible to have your
primary database running under RAC, and your standby database running as a single-instance
database. It is also possible to have both the primary and standby databases running under
RAC.
The slide explains the distinction between symmetric environments and asymmetric ones.
If you want to create a symmetric environment running RAC, then all databases need to have
the same number of instances and the same service preferences. As the DBA, you need to
make sure that this is the case by manually configuring them in a symmetric way.
However, if you want to benefit from the tight integration of Oracle Clusterware and Data
Guard Broker, make sure that both the primary site and the secondary site are running under
Oracle Clusterware, and that both sites have the same services defined.
Note: Beginning with Oracle Database 11g, the primary and standby systems in a Data Guard
configuration can have different CPU architectures, operating systems (for example, Windows
and Linux for physical standby database only with no EM support for this combination),
operating system binaries (32-bit and 64-bit), and Oracle database binaries (32-bit and 64-bit).
RAC and Data Guard Architecture
[Figure: Primary instances A and B write to their online redo log files and ship redo (LGWR/ARCn) to RFS processes on the standby instances. Standby receiving instance C writes the redo to standby redo log files; ARCn archives to the flash recovery areas; standby apply instance D applies the redo to the standby database.]
RAC and Data Guard Architecture
Although it is perfectly possible to use a “RAC to single-instance Data Guard (DG)” configuration, you also have the possibility to use a RAC-to-RAC DG configuration. In this
mode, although multiple standby instances can receive redo from the primary database, only
one standby instance can apply the redo stream generated by the primary instances.
A RAC-to-RAC DG configuration can be set up in different ways, and the slide shows you one
possibility with a symmetric configuration where each primary instance sends its redo stream
to a corresponding standby instance using standby redo log files. It is also possible for each
primary instance to send its redo stream to only one standby instance that can also apply this
stream to the standby database. However, you can get performance benefits by using the
configuration shown in the slide. For example, assume that the redo generation rate on the
primary is too great for a single receiving instance on the standby side to handle. Suppose
further that the primary database is using the SYNC redo transport mode. If a single receiving
instance on the standby cannot keep up with the primary, then the primary’s progress is going
to be throttled by the standby. If the load is spread across multiple receiving instances on the
standby, then this is less likely to occur.
If the standby can keep up with the primary, another approach is to use only one standby
instance to receive and apply the complete redo stream. For example, you can set up the
primary instances to remotely archive to the same Oracle Net service name.
RAC and Data Guard Architecture (continued)
You can then configure one of the standby nodes to handle that service. This instance then
both receives and applies redo from the primary. If you need to do maintenance on that node,
then you can stop the service on that node and start it on another node. This approach allows
for the primary instances to be more independent of the standby configuration because they are
not configured to send redo to a particular instance.
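As an illustration of this approach, each primary instance could point its remote archive destination at one Oracle Net alias that resolves to the service handled by the chosen standby node. A minimal sketch; the standby_apply alias and the DB_UNIQUE_NAME value are assumptions:
LOG_ARCHIVE_DEST_2='SERVICE=standby_apply ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby'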
Note: For more information, refer to the Oracle Data Guard Concepts and Administration
guide.
Data Guard Broker (DGB) and
Oracle Clusterware (OC) Integration
• OC manages intrasite HA operations.
• OC manages intrasite planned HA operations.
• OC notifies when manual intervention is required.
• DBA receives notification.
• DBA decides to switch over or fail over using DGB.
• DGB manages intersite planned HA operations.
• DGB takes over from OC for intersite failover, switchover, and protection mode changes:
  – DMON notifies OC to stop and disable the site, leaving all or one instance.
  – DMON notifies OC to enable and start the site according to the DG site role.
Data Guard Broker (DGB) and Oracle Clusterware Integration
DGB is tightly integrated with Oracle Clusterware. Oracle Clusterware manages individual instances to provide unattended high availability of a given clustered database. DGB manages
individual databases (clustered or otherwise) in a Data Guard configuration to provide disaster
recovery in the event that Oracle Clusterware is unable to maintain availability of the primary
database.
For example, Oracle Clusterware posts NOT_RESTARTING events for the database group and
service groups that cannot be recovered. These events are available through Enterprise
Manager, ONS, and server-side callouts. As a DBA, when you receive those events, you might
decide to repair and restart the primary site, or to invoke DGB to fail over if not using Fast-
Start Failover.
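To illustrate the server-side callout mechanism mentioned above (the directory and matching
logic are assumptions based on the standard FAN callout convention, not a script from this
course), a minimal executable placed in the Grid home racg/usrco directory could record
these events:

#!/bin/bash
# Hypothetical FAN callout sketch: Oracle Clusterware invokes every
# executable in $GRID_HOME/racg/usrco with the event text as arguments.
LOGFILE=/tmp/fan_not_restarting.log
for ARG in "$@"; do
  # Log the complete event when a NOT_RESTARTING status appears.
  if [[ "$ARG" == *NOT_RESTARTING* ]]; then
    echo "$(date): $*" >> "$LOGFILE"
  fi
done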
DGB and Oracle Clusterware work together to temporarily suspend service availability on the
primary database, accomplish the actual role change for both databases during which Oracle
Clusterware works with the DGB to properly restart the instances as necessary, and then to
resume service availability on the new primary database. The broker manages the underlying
Data Guard configuration and its database roles, whereas Oracle Clusterware manages service
availability that depends upon those roles. Applications that rely upon Oracle Clusterware for
managing service availability will see only a temporary suspension of service as the role
change occurs within the Data Guard configuration.
Fast-Start Failover: Overview

• Fast-Start Failover implements automatic failover to a
standby database:
– Triggered by failure of site, hosts, storage, data file offline
immediate, or network
– Works with and supplements RAC server failover
• Failover occurs in seconds (< 20 seconds).
– Comparable to cluster failover
• Original production site automatically rejoins the
configuration after recovery.
• Automatically monitored by an Observer process:
– Locate it on a distinct server in a distinct data center
– Enterprise Manager can restart it on failure
– Installed through Oracle Client Administrator

Fast-Start Failover: Overview
Fast-Start Failover is a feature that automatically, quickly, and reliably fails over to a
designated, synchronized standby database in the event of loss of the primary database,
without requiring manual intervention to execute the failover. In addition, following a fast-
start failover, the original primary database is automatically reconfigured as a new standby
database upon reconnection to the configuration. This enables Data Guard to restore disaster
protection in the configuration as soon as possible.
Fast-Start Failover is used in a Data Guard configuration under the control of the Data Guard
Broker, and may be managed using either dgmgrl or Oracle Enterprise Manager Grid
Control. There are three essential participants in a Fast-Start Failover configuration:
• The primary database, which can be a RAC database
• A target standby database, which becomes the new primary database following a fast-
start failover
• The Fast-Start Failover Observer, which is a separate process incorporated into the
dgmgrl client that continuously monitors the primary database and the target standby
database for possible failure conditions. The underlying rule is that out of these three
participants, whichever two can communicate with each other will determine the
outcome of the fast-start failover. In addition, a fast-start failover can occur only if there
is a guarantee that no data will be lost.
For disaster recovery requirements, install the Observer in a location separate from the primary
and standby data centers. If the designated Observer fails, Enterprise Manager can detect the
failure and can be configured to automatically restart the Observer on the same host.
You can install the Observer by installing the Oracle Client Administrator (choose the
Administrator option from the Oracle Universal Installer). Installing the Oracle Client
Administrator results in a small footprint because an Oracle instance is not included on the
Observer system. If Enterprise Manager is used, also install the Enterprise Manager Agent on
the Observer system.
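As an illustration (the connect identifier orcl_prim is an assumption), the broker side of such
a setup can be driven from dgmgrl once the configuration and its fast-start failover properties
are in place:

$ dgmgrl sys@orcl_prim
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> START OBSERVER;

Note that START OBSERVER does not return while monitoring is active, so it is normally
issued in a dedicated session on the Observer host.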

Data Guard Broker Configuration Files

*.DG_BROKER_CONFIG_FILE1=+DG1/orcl/dr1config.dat
*.DG_BROKER_CONFIG_FILE2=+DG1/orcl/dr2config.dat

[Slide graphic: RAC instances orcl1 and orcl2 both reference the two broker configuration
files on shared storage]
Data Guard Broker Configuration Files
Two copies of the Data Guard Broker (DGB) configuration files are maintained for each
database so as to always have a record of the last-known valid state of the configuration. When
the broker is started for the first time, the configuration files are automatically created and
named using a default path name and file name that is operating system specific.
When using a RAC environment, the DGB configuration files must be shared by all instances
of the same database. You can override the default path name and file name by setting the
following initialization parameters for that database: DG_BROKER_CONFIG_FILE1,
DG_BROKER_CONFIG_FILE2.
You have three possible options to share those files:
• Cluster file system
• Raw devices
• ASM
The example in the slide illustrates a case where those files are stored in an ASM disk group
called DG1. It is assumed that you have already created a directory called orcl in DG1.
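A minimal sketch of the corresponding settings (assuming the broker has not yet been started,
because these parameters cannot be changed while it is running):

SQL> ALTER SYSTEM SET dg_broker_config_file1 =
  2    '+DG1/orcl/dr1config.dat' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET dg_broker_config_file2 =
  2    '+DG1/orcl/dr2config.dat' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET dg_broker_start = TRUE SID='*';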

Real-Time Query Physical Standby Database

[Slide graphic: clients access the read/write primary cluster and the read-only standby
cluster; redo apply runs on one instance of the standby RAC database]

Real-Time Query Physical Standby Database
Data Guard Redo Apply (physical standby database) has proven to be a popular solution for
disaster recovery due to its relative simplicity, high performance, and superior level of data
protection. Beginning with Oracle Database 11g, a physical standby database can be open
read-only while redo apply is active. This means that you can run queries and reports against
an up-to-date physical standby database without compromising data protection or extending
recovery time in the event a failover is required. This makes every physical standby database
able to support productive uses even while in standby role. To enable real-time query, open the
database in read-only mode and then issue the ALTER DATABASE RECOVER MANAGED
STANDBY DATABASE statement. Real-time query provides an excellent high-availability solution because it:
• Is totally transparent to applications
• Supports Oracle RAC on the primary and standby databases: Although Redo Apply can
be running on only one Oracle RAC instance, you can have all of the instances running
in read-only mode while Redo Apply is running on one instance.
• Returns transactionally consistent results that are very close to being up-to-date with the
primary database.
• Enables you to use fast-start failover to allow for automatic fast failover in the case the
primary database fails
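As a sketch, the sequence on the standby looks like this (assuming managed recovery is
currently running and must first be stopped so the database can be opened):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  2    USING CURRENT LOGFILE DISCONNECT;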

Hardware Assisted Resilient Data

Blocks validated and protection information
added to blocks [DB_BLOCK_CHECKSUM=TRUE]
• Prevents corruption introduced in I/O path
• Is supported by major storage vendors:
– EMC, Fujitsu, Hitachi, HP, NEC
– Network Appliance
– Sun Microsystems
• All file types and block sizes checked
Protection information validated by the storage device when enabled:
symchksum -type Oracle enable
[Slide graphic: the I/O path from the Oracle database through Vol Man/ASM, operating
system, device driver, host bus adapter, SAN, virtualization, and SAN interface down to the
storage device]
Hardware Assisted Resilient Data
One problem that can cause lengthy outages is data corruption. Today, the primary means for
detecting corruptions caused by hardware or software outside of the Oracle database, such as
an I/O subsystem, is the Oracle database checksum. However, after a block is passed to the
operating system, through the volume manager and out to disk, the Oracle database itself
cannot validate whether the block being written is still correct.
With disk technologies expanding in complexity, and with configurations such as Storage Area
Networks (SANs) becoming more popular, the number of layers between the host processor
and the physical spindle continues to increase. With more layers, the chance of any problem
increases. With the HARD initiative, it is possible to enable the verification of database block
checksum information by the storage device. Verifying that the block is still the same at the
end of the write as it was in the beginning gives you an additional level of security.
By default, the Oracle database automatically adds checksum information to its blocks. These
checksums can be verified by the storage device if you enable this possibility. In case a block
is found to be corrupted by the storage device, the device logs an I/O corruption, or it cancels
the I/O and reports the error back to the instance.
Note: The way you enable the checksum validation at the storage device side is vendor
specific. The example given in the slide was used with EMC Symmetrix storage.
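On the database side, the only requirement is the block checksum itself, which is on by
default; shown here only for completeness:

SQL> ALTER SYSTEM SET db_block_checksum = TRUE SCOPE=BOTH;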
Database High Availability: Best Practices

• Use SPFILE.
• Create two or more control files.
• Set CONTROL_FILE_RECORD_KEEP_TIME long enough.
• Multiplex production and standby redo logs.
• Log checkpoints to the alert log.
• Use auto-tune checkpointing.
• Enable ARCHIVELOG mode and use a flash recovery area.
• Enable Flashback Database.
• Enable block checking.
• Use Automatic Undo Management.
• Use locally managed tablespaces.
• Use Automatic Segment Space Management.
• Use resumable space allocation.
• Use Database Resource Manager.
• Register all instances with remote listeners.
• Use temporary tablespaces.
Database High Availability: Best Practices
The table in the slide gives you a short summary of the recommended practices that apply to
single-instance databases, RAC databases, and Data Guard standby databases.
These practices affect the performance, availability, and mean time to recover (MTTR) of your
system. Some of these practices may reduce performance, but they are necessary to reduce or
avoid outages. The minimal performance impact is outweighed by the reduced risk of
corruption or the performance improvement for recovery.
Note: For more information about how to set up the features listed in the slide, refer to the
following documents:
• Administrator’s Guide
• Data Guard Concepts and Administration
• Net Services Administrator’s Guide
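For illustration, several of these practices map directly to initialization parameters; the values
below are examples, not recommendations for any specific system:

SQL> ALTER SYSTEM SET control_file_record_keep_time = 30 SID='*';
SQL> ALTER SYSTEM SET db_block_checking = TRUE SID='*';
SQL> ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SID='*';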

How Many ASM Disk Groups Per Database?

• Two disk groups are recommended.
– Leverage maximum of LUNs.
– Backups can be stored on one FRA disk group.
– Lower performance may be used for FRA (or inner tracks).
• Exceptions:
– Additional disk groups for different capacity or performance
characteristics
– Different ILM storage tiers
[Slide graphic: ERP, CRM, and HR databases all sharing one Data disk group and one FRA
disk group]

How Many ASM Disk Groups Per Database?
Most of the time, only two disk groups are enough to share the storage between multiple
databases.
That way you can maximize the number of Logical Unit Numbers (LUNs) used as ASM disks,
which gives you the best performance, especially if these LUNs are carved on the outer edge
of your disks.
Using a second disk group allows you to have a backup of your data by using it as your
common fast recovery area (FRA). You can put the corresponding LUNs on the inner edge of
your disks because less performance is necessary.
The two noticeable exceptions to this rule are whenever you are using disks with different
capacity or performance characteristics, or when you want to archive your data on lower-end
disks for Information Lifecycle Management (ILM) purposes.
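A minimal sketch of this two-disk-group layout (the ASMLib disk names are illustrative
assumptions), run from the ASM instance:

SQL> CREATE DISKGROUP data EXTERNAL REDUNDANCY
  2    DISK 'ORCL:DATA1', 'ORCL:DATA2';
SQL> CREATE DISKGROUP fra EXTERNAL REDUNDANCY
  2    DISK 'ORCL:FRA1', 'ORCL:FRA2';

External redundancy is used here on the assumption that the underlying LUNs are already
protected by hardware RAID, as discussed on the following pages.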

Which RAID Configuration for High Availability?

A. ASM mirroring
B. Hardware RAID 1 (mirroring)
C. Hardware RAID 5 (parity protection)
D. Both ASM mirroring and hardware RAID
Answer: Depends on business requirement and budget
(cost, availability, performance, and utilization)
Which RAID Configuration for Best Availability?
To favor availability, you have multiple choices as shown in the slide.
You could just use ASM mirroring capabilities, or hardware RAID 1 (Redundant Array of
Inexpensive Disks) which is a hardware mirroring technique, or hardware RAID 5. The last
possible answer, which is definitely not recommended, is to use both ASM mirroring and
hardware mirroring. Oracle recommends the use of external redundancy disk groups when
using hardware mirroring techniques to avoid an unnecessary overhead.
Therefore, between A, B, and C, the choice depends on your business requirements and
budget.
RAID 1 has the best performance but requires twice the storage capacity. RAID 5 is a much
more economical solution, but with a performance penalty, especially for write-intensive
workloads.

Should You Use ASM Mirroring Protection?

• Best choice for low-cost storage


• Enables extended clustering solutions
• No hardware mirroring

Should You Use ASM Mirroring Protection?
Basically, leverage the storage array hardware RAID-1 mirroring protection when possible to
offload the mirroring overhead from the server. Use ASM mirroring in the absence of a
hardware RAID capability.
However, hardware RAID 1 in most Advanced Technology Attachment (ATA) storage
technologies is inefficient and degrades the performance of the array even more. Using ASM
redundancy has proven to deliver much better performance in ATA arrays.
Because the storage cost can grow very rapidly whenever you want to achieve extended
clustering solutions, ASM mirroring should be used as an alternative to hardware mirroring for
low-cost storage solutions.
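A sketch of an ASM-mirrored disk group for such low-cost storage (disk names are
illustrative assumptions):

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
  2    FAILGROUP fg1 DISK 'ORCL:DISK1', 'ORCL:DISK2'
  3    FAILGROUP fg2 DISK 'ORCL:DISK3', 'ORCL:DISK4';

Each extent is then mirrored between the two failure groups, so the loss of all disks in one
failure group does not cause data loss.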

What Type of Striping Works Best?

A. ASM only striping (no RAID 0)


B. RAID 0 and ASM striping
C. Use LVM
D. No striping
Answer: A and B

ASM and RAID striping are complementary.

What Type of Striping Works Best?
As shown in the slide, you can use ASM striping only, or you can use ASM striping in
combination with RAID 0.
With RAID 0, multiple disks are configured together as a set, or a bank, and data from any one
data file is spread, or striped, across all the disks in the bank.
Combining both ASM striping and RAID striping is called stripe-on-stripe. This combination
offers good performance too.
However, there is no longer a need to use a Logical Volume Manager (LVM) for your
database files, and using no striping at all is not recommended.

ASM Striping Only

Pros:
• Drives evenly distributed for Data & FRA
• Higher bandwidth
• Allows small incremental growth (73 GB)
• No drive contention
Cons:
• Not well balanced across ALL disks
• LUN size limited to disk size

Oracle DB size: 1 TB
Storage configuration: 8 arrays with 12 x 73 GB disks per array, RAID 1
Data DG: 1 TB as 16 x 73 GB LUNs
FRA DG: 2 TB as 32 x 73 GB LUNs
ASM Striping Only
In the case shown in this slide, you want to store a one-terabyte database with a corresponding
two-terabyte flash recovery area. You use RAID 1 to mirror each disk. In total, you have eight
arrays of twelve disks, with each disk being 73 GB. ASM mirroring and hardware RAID 0 are
not used.
In addition, each ASM disk is represented by one entire LUN of 73 GB. This means that the
Data disk group (DG) is allocated 16 LUNs of 73 GB each.
On the other side, the Fast Recovery Area disk group is assigned 32 LUNs of 73 GB each.
This configuration enables you to evenly distribute disks for your data and backups, achieving
good performance and allowing you to manage your storage in small incremental chunks.
However, using a restricted number of disks in your pool does not balance your data well
across all your disks. In addition, you have many LUNs to manage at the storage level.

Hardware RAID–Striped LUNs

Pros:
• Fastest region for Data DG
• Balanced data distribution
• Fewer LUNs to manage while maximizing spindles
Cons:
• Large incremental growth
• Data & FRA “contention”

Oracle DB size: 1 TB
Storage configuration: 8 arrays with 12 x 73 GB disks per array, RAID 0+1
Data DG: 1 TB as 4 x 250 GB LUNs
FRA DG: 2 TB as 4 x 500 GB LUNs
Hardware RAID–Striped LUNs
In the case shown in this slide, you want to store a one-terabyte database with a corresponding
two-terabyte flash recovery area. You use RAID 0+1, which is a combination of hardware
striping and mirroring to mirror and stripe each disk. In total, you have eight arrays of twelve
disks, with each disk being 73 GB. ASM mirroring is not used.
Here, you can define bigger LUNs not restricted to the size of one of your disks. This allows
you to put the Data LUNs on the fastest region of your disks, and the backup LUNs on slower
parts. By doing this, you achieve a better data distribution across all your disks, and you end
up managing a significantly smaller number of LUNs.
However, you must manipulate your storage in much larger chunks than in the previous
configuration.
Note: The hardware stripe size you choose is also very important because you want 1 MB
alignment as much as possible to keep in sync with ASM AUs. Therefore, selecting power-of-
two stripe sizes (128 KB or 256 KB) is better than selecting odd numbers. Storage vendors
typically do not offer many flexible choices depending on their storage array RAID technology
and can create unnecessary I/O bottlenecks if not carefully considered.

Hardware RAID–Striped LUNs HA

Pros:
• Fastest region for Data DG
• Balanced data distribution
• Fewer LUNs to manage
• More highly available
Cons:
• Large incremental growth
• Might waste space

Oracle DB size: 1 TB
Storage configuration: 8 arrays with 12 x 73 GB disks per array, RAID 0+1
Data DG: 1 TB as 2 x 500 GB LUNs
FRA DG: 1.6 TB as 2 x 800 GB LUNs
Hardware RAID–Striped LUNs HA
In the case shown in this slide, you want to store a one-terabyte database with a corresponding
1.6-TB fast recovery area. You use RAID 0+1, which is a combination of hardware striping
and mirroring to mirror and stripe each disk. In total, you have eight arrays of twelve disks,
with each disk being 73 GB. ASM mirroring is not used.
Compared to the previous slide, you use bigger LUNs for both the Data disk group and the
Fast Recovery Area disk group. However, the presented solution is more highly available than
the previous architecture because you separate the data from the backups into different arrays
and controllers to reduce the risk of down time in case one array fails.
By doing this, you still have a good distribution of data across your disks, although not as
much as in the previous configuration. You still end up managing a significantly smaller number
of LUNs than in the first case.
However, you might end up losing more space than in the previous configuration. Here, you
are using the same size and number of arrays to be consistent with the previous example.

Disk I/O Design Summary

• Use external RAID protection when possible.


• Create LUNs by using:
– Outside half of disk drives for highest performance
– Small disk, high rpm (that is, 73 GB/15k rpm)
• Use LUNs with the same performance characteristics.
• Use LUNs with the same capacity.
• Maximize the number of spindles in your disk group.
Disk I/O Design Summary
Use ASM for volume and file management to equalize the workload across disks and
eliminate hot spots. The following are simple guidelines and best practices when configuring
ASM disk groups:
• Use external RAID protection when possible.
• Create LUNs using:
- Outside half of disk drives for highest performance
- Small disk with high rpm (for example, 73 GB with 15k rpm). The reason why
spindle (platter) speed is so important is that it directly impacts both positioning
time and data transfer. This means that faster spindle speed drives have improved
performance regardless of whether they are used for many small, random accesses,
or for streaming large contiguous blocks from the disk. The stack of platters in a
disk rotates at a constant speed. The drive head, while positioned close to the center
of the disk, reads from a surface that is passing by more slowly than the surface at
the outer edges.
• Maximize the number of spindles in your disk group.
• LUNs provisioned to ASM disk groups should have the same storage performance and
availability characteristics. Configuring mixed speed drives will default to the lowest
common denominator.
• ASM data distribution policy is capacity based. Therefore, LUNs provided to ASM
should have the same capacity for each disk group to avoid imbalance and hot spots.
Extended RAC: Overview

• Full utilization of resources, no matter where they are
located
• Fast recovery from site failure
[Slide graphic: clients spread their work across one RAC database whose instances span
Site A and Site B]

Extended RAC: Overview
Typically, RAC databases share a single set of storage and are located on servers in the same
data center.
With extended RAC, you can use disk mirroring and Dense Wavelength Division Multiplexing
(DWDM) equipment to extend the reach of the cluster. This configuration allows two data
centers, separated by up to 100 kilometers, to share the same RAC database with multiple
RAC instances spread across the two sites.
As shown in the slide, this RAC topology is attractive because the clients’ work gets
distributed automatically across all nodes independently of their location, and in case one site
goes down, the clients’ work continues to be executed on the remaining site. The types of
failures that extended RAC can cover are mainly failures of an entire data center due to a
limited geographic disaster. Fire, flooding, and site power failure are just a few examples of
limited geographic disasters that can result in the failure of an entire data center.
Note: Extended RAC does not use special software other than the normal RAC installation.

Extended RAC Connectivity

• Distances over ten kilometers require dark fiber.


• Set up buffer credits for large distances.

[Slide graphic: DWDM devices at Site A and Site B linked by dark fiber; each site keeps a
copy of the database, and clients reach the cluster over the public network]

Extended RAC Connectivity
In order to extend a RAC cluster to another site separated from your data center by more than
ten kilometers, it is required to use DWDM over dark fiber to get good performance results.
DWDM is a technology that uses multiple lasers, and transmits several wavelengths of light
simultaneously over a single optical fiber. DWDM enables the existing infrastructure of a
single fiber cable to be dramatically increased. DWDM systems can support more than 150
wavelengths, each carrying up to 10 Gbps. Such systems provide more than a terabit per
second of data transmission on one optical strand that is thinner than a human hair.
As shown in the slide, each site should have its own DWDM device connected together by a
dark fiber optical strand. All traffic between the two sites is sent through the DWDM and
carried on dark fiber. This includes mirrored disk writes, network and heartbeat traffic, and
memory-to-memory data passage. Also shown on the graphic are the sets of disks at each site.
Each site maintains a copy of the RAC database.
It is important to note that depending on the site’s distance, you should tune and determine the
minimum value of buffer credits in order to maintain the maximum link bandwidth. Buffer
credit is a mechanism defined by the Fiber Channel standard that establishes the maximum
amount of data that can be sent at any one time.
Note: Dark fiber is a single fiber optic cable or strand mainly sold by telecom providers.
Extended RAC Disk Mirroring

• Need copy of data at each location


• Two options:
– Host-based mirroring
– Remote array-based mirroring

Recommended solution with ASM

[Slide graphic: host-based mirroring, the recommended solution with ASM, shows both sites
holding peer database copies; remote array-based mirroring shows a primary copy mirrored
to a secondary copy]
Extended RAC Disk Mirroring
Although there is only one RAC database, each data center has its own set of storage that is
synchronously mirrored using either a cluster-aware, host-based Logical Volume Manager
(LVM) solution, such as SLVM with MirrorDiskUX, or an array-based mirroring solution,
such as EMC SRDF.
With host-based mirroring, shown on the left of the slide, the disks appear as one set, and all
I/Os get sent to both sets of disks. This solution requires closely integrated clusterware and
LVM, and ASM is the recommended solution.
With array-based mirroring, shown on the right, all I/Os are sent to one site, and are then
mirrored to the other. In fact, this solution is like a primary/secondary site setup. If the primary
site fails, all access to primary disks is lost. An outage may be incurred before you can switch
to the secondary site.
Note: With extended RAC, designing the cluster in a manner that ensures the cluster can
achieve quorum after a site failure is a critical issue. For more information regarding this topic,
refer to the Oracle Technology Network site.

Achieving Quorum with Extended RAC

[Slide graphic: Site A and Site B each host RAC database instances with database files (ASM
failure groups), OCR, and a voting disk, joined by redundant public and private networks and
a redundant SAN with ASM mirroring; a third site holds an additional voting disk on NFS or
iSCSI]
Achieving Quorum with Extended RAC
As far as voting disks are concerned, a node must be able to access strictly more than half of
the voting disks at any time, or that node will be evicted from the cluster. Extended clusters
are generally implemented with only two storage systems, one at each site. This means that the
site that houses the majority of the voting disks is a potential single point of failure for the
entire cluster. To prevent this potential outage, Oracle Clusterware supports a third voting disk
on an inexpensive, low-end, standard NFS-mounted device somewhere on the network. It is
thus recommended to put this third NFS voting disk on a dedicated server visible from both
sites. This situation is illustrated in the slide. The goal is that each site can run independently
of the other when a site failure occurs.
Note: For more information about NFS configuration of the third voting disk, refer to the
Oracle Technology Network site.
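A sketch of such a disk group (paths are illustrative; this assumes the voting files are managed
by ASM, which requires COMPATIBLE.ASM of 11.2 or higher):

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
  2    FAILGROUP sitea DISK '/dev/sda1'
  3    FAILGROUP siteb DISK '/dev/sdb1'
  4    QUORUM FAILGROUP thirdsite DISK '/voting_nfs/vote3'
  5    ATTRIBUTE 'compatible.asm' = '11.2.0.0.0';

The quorum failure group holds only a voting file, never database data, so a small NFS file on
the third site is sufficient.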

Additional Data Guard Benefits

• Greater disaster protection


– Greater distance
– Additional protection against corruptions
• Better for planned maintenance
– Full rolling upgrades
• More performance neutral at large distances
– Option to do asynchronous
• If you cannot handle the costs of a DWDM network, Data
Guard still works over cheap, standard networks.
Additional Data Guard Benefits
Data Guard provides greater disaster protection:
• Distance over 100 kilometers without performance hit
• Additional protection against corruptions because it uses a separate database
• Optional delay to protect against user errors
Data Guard also provides better planned maintenance capabilities by supporting full rolling
upgrades.
Also, if you cannot handle the costs of a DWDM network, then Data Guard still works over
cheap, standard networks.

Using a Test Environment

• The most common cause of down time is change.


• Test your changes on a separate test cluster before
changing your production environment.

[Slide graphic: separate production and test clusters, each running its own RAC database]

Using a Test Environment
Change is the most likely cause of down time in a production environment. A proper test
environment can catch more than 90 percent of the changes that could lead to a down time of
the production environment, and is invaluable for quick test and resolution of issues in
production.
When your production environment is RAC, your test environment should be a separate RAC
cluster with all the identical software components and versions.
Without a test cluster, your production environment will not be highly available.
Note: Not using a test environment is one of the most common errors seen by Oracle Support
Services.

Quiz

Which of the following statements regarding Disk I/O design are


true?
1. Use external RAID protection when possible.
2. Use LUNs with the same performance characteristics.
3. Use LUNs with the same capacity.
4. Minimize the number of spindles in your disk group.

Answer: 1, 2, 3

Summary

In this lesson, you should have learned how to:


• Design a Maximum Availability Architecture in your
environment
• Determine the best RAC and Data Guard topologies for
your environment
• Configure the Data Guard Broker configuration files in a
RAC environment
• Identify successful disk I/O strategies

Appendix A
Practices and Solutions

Table of Contents
Practices for Lesson 1 ......................................................................................................... 4
Practice 1-1: Discovering Cluster Environment ............................................................. 5
Practices for Lesson 2 ......................................................................................................... 9
Practice 2-1: Performing Preinstallation Tasks for Oracle Grid Infrastructure ............ 10
Practice 2-2: Installing Oracle Grid Infrastructure ....................................................... 19
Practice 2-3: Creating ASM Disk Groups .................................................................... 32
Practice 2-4: Creating ACFS File System .................................................................... 35
Practice 2-5: Installing a Silent Database ..................................................................... 37
Practices for Lesson 3 ....................................................................................................... 39
Practice 3-1: Verifying, Starting, and Stopping Oracle Clusterware ............................ 40
Practice 3-2: Adding and Removing Oracle Clusterware Configuration Files............. 49 ble
Practice 3-3: Performing a Backup of the OCR and OLR ............................................ 53 fe r a
an s
Practices for Lesson 4 ....................................................................................................... 55
Practice 4-1: Adding a Third Node to Your Cluster ..................................................... 56 n - t r
o
s an
Practices for Lesson 5 ....................................................................................................... 77
Practice 5-1: Protecting the Apache Application .......................................................... 78
) ha ฺ
Practices for Lesson 6 ....................................................................................................... 86
c om uide
Practice 6-1: Working with Log Files ........................................................................... 87

i n c tG
Practice 6-2: Working with OCRDUMP ...................................................................... 90
-
cs uden
Practice 6-3: Working with CLUVFY .......................................................................... 93
a
a n @ St
Practices for Lesson 7 ....................................................................................................... 97
g ฺ t t h is
Practice 7-1: Administering ASM Instances................................................................. 98
e h en use
Practices for Lesson 8 ..................................................................................................... 112

( c he se to
Practice 8-1: Administering ASM Disk Groups ......................................................... 113
Practices for Lesson 9 ..................................................................................................... 119
T an licen
Practice 9-1: Administering ASM Files, Directories, and Templates ........................ 120
en g
Practices for Lesson 10 ................................................................................................... 131
e e H Practice 10-1: Managing ACFS .................................................................................. 132
C h Practice 10-2: Uninstalling RAC Database................................................................. 143
Practices for Lesson 11 ................................................................................................... 151
Practice 11-1: Installing the Oracle Database Software ............................................. 152
Practice 11-2: Creating a RAC Database .................................................................... 154
Practices for Lesson 12 ................................................................................................... 156
Practice 12-1: Operating System and Password File Authenticated Connections ...... 157
Practice 12-2: Oracle Database Authenticated Connections ...................................... 160
Practice 12-3: Stopping a Complete ORACLE_HOME Component Stack ............... 164
Practices for Lesson 13 ................................................................................................... 167
Practice 13-1: Configuring Archive Log Mode .......................................................... 168
Practice 13-2: Configuring Specific Instance Connection Strings ............................. 171
Practice 13-3: Configuring RMAN and Performing Parallel Backups ....................... 175
Practices for Lesson 14 ................................................................................................... 181
Practice 14-1: ADDM and RAC Part I ....................................................................... 182
Practice 14-2: ADDM and RAC Part II ...................................................................... 189
Practice 14-3: ADDM and RAC Part III..................................................................... 193

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A-2


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practices for Lesson 15 ................................................................................................... 196
Practice 15-1: New Practice Title ............................................................................... 197
Practice 15-2: Monitoring Services ............................................................................ 201
Practice 15-3: Services and Alert Thresholds ............................................................. 204

Practices for Lesson 1
The practice for Lesson 1 is a discovery of the classroom environment.

Practice 1-1: Discovering Cluster Environment
In this practice, you will explore the classroom and cluster environment in preparation for
cluster installation and management.
1) In an Oracle cluster, there are pieces of information that are important for a successful
installation. Several of these are unique to each cluster. In order to make this practice
document generic to allow every participant to use the same document and specific
enough that each participant can be successful, an environment file has been created
called /home/oracle/labs/st_env.sh. This file holds the information that is
specific to your cluster. This script is not a script supplied with Oracle software. This
script was developed specifically for this class.
2) Open a VNC session as the oracle OS user on your first node. (The instructor will
give this node name.) Click the VNC icon on the desktop. In the VNC
Viewer:Connection Details box, enter the name of your first node with ':1' as a suffix.
In the VNC Viewer:Authentication box, enter the password for the oracle user.
The password is oracle.
3) In the VNC session, open a terminal window. Right-click in the VNC session
window, and select Open Terminal from the shortcut menu.
4) Find and list the /home/oracle/labs/st_env.sh script.
$ cat /home/oracle/labs/st_env.sh
#Common environment Variables a cs- uden
n @ St
NODELIST=`cat /home/oracle/nodeinfo`
a
NODE1=host01 g ฺ t t h is
NODE2=host02 hen se
e u
NODE3=host03e
( c h se to
CLUSTER_NAME=cluster01
T an licen
GNS_NAME=host01-gns

# Hardware environment definitions
export ST_SOFTWARE_STAGE_GRID=/stage/clusterware/Disk1
export ST_SOFTWARE_STAGE_DB=/stage/database/Disk1
C export ST_SOFTWARE_STAGE_DEINST=/stage/deinstall
export ST_NODE1=$NODE1
export ST_NODE2=$NODE2
export ST_NODE3=$NODE3
export ST_CLUSTER_NAME=$CLUSTER_NAME
export ST_NODE3_VIP=${NODE3}-vip
export ST_NODE_DOMAIN=example.com
export ST_NODE_LIST=${NODE1},${NODE2}
export ST_NODE_LIST2=${NODE1},${NODE2},${NODE3}
export ST_GNS_NAME=${GNS_NAME}.${ST_NODE_DOMAIN}
# Software Ownership definitions
export ST_GRID_OWNER=grid
export ST_GRID_OWNER_HOME=/home/grid
export ST_DB1_OWNER=oracle
export ST_DB1_OWNER_HOME=/home/oracle
# Grid Infrastructure definitions
export ST_ASM_INSTANCE1=+ASM1
export ST_ASM_INSTANCE2=+ASM2

export ST_ASM_INSTANCE3=+ASM3
export ST_ASM_HOME=/u01/app/11.2.0/grid
# Database 1 definitions
export ST_DB1_NAME=orcl
export ST_DB1_INSTANCE1=orcl1
export ST_DB1_INSTANCE2=orcl2
export ST_DB1_INSTANCE3=orcl3
export ST_DB1_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1
# Database 2 definitions (standby, rcat for example)
export ST_DB2_NAME=orcl
export ST_DB2_INSTANCE1=orcl1 e
export ST_DB2_INSTANCE2=orcl2
r a bl
export ST_DB2_INSTANCE3=orcl3
s fe
export ST_DB2_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
- t r an
# Database 3 definitions
export ST_DB3_NAME=
nno
export ST_DB3_INSTANCE1= s a
) h a
o m d e ฺ
5) The environment for this class can change. The node names will be different for
every cluster and the cluster names will be different for each setup inside the class.
Discover and record values for your cluster.

Note: The environment variables created for this class begin with ST_ and are
referenced by prefixing the variable with $. For example, the ST_NODE1 variable is
referenced with $ST_NODE1.
a) Set the environment variables found in st_env.sh for your terminal session. To
set the variables in a session from a script, you must “source” the script—that is,
cause it to run in the same shell. This is done by using the “.” command as shown
in the code example.
$ . /home/oracle/labs/st_env.sh

b) What are the short names for the nodes of your cluster?
These nodes are called host01, host02, and host03 in the practices that follow.
You will substitute your node names or use the environment variable to refer to
the node. For example, $ST_NODE2 refers to your second node, and in the code
output, is shown as host02. Your nodes will be named something else. Record the
names of your nodes.
First node ($ST_NODE1) _________________
Second node ($ST_NODE2) __________________
Third node ($ST_NODE3) __________________
$ echo $ST_NODE1
host01 << Your node names will be different
$ echo $ST_NODE2
host02
$ echo $ST_NODE3
host03
6) What is the name of your cluster?
Cluster name ($ST_CLUSTER_NAME) ________________
$ echo $ST_CLUSTER_NAME
cluster01 << Your cluster name will be different

7) What is the node domain?


Node domain ($ST_NODE_DOMAIN) _____________________
$ echo $ST_NODE_DOMAIN
ble
example.com
fe r a
8) What is the name of your GNS ($ST_GNS_NAME) __________________________
$ echo $ST_GNS_NAME
cluster01-gns.example.com << Your gns name will be different
9) Verify that the environment variables for your cluster have been set in your
environment. Use the env | grep ST_ command.
$ env | grep ST_
ST_DB1_NAME=orcl
ST_DB1_OWNER_HOME=/home/oracle
ST_SOFTWARE_STAGE_GRID=/stage/clusterware/Disk1
ST_CLUSTER_NAME=cluster01
ST_NODE1=host01
ST_NODE2=host02
ST_NODE3=host03
ST_GRID_OWNER=grid
e e H ST_ASM_HOME=/u01/app/11.2.0/grid
C h ST_NODE_LIST=host01,host02
ST_GRID_OWNER_HOME=/home/grid
ST_DB2_INSTANCE2=orcl2
ST_GNS_NAME=cluster01-gns.example.com
ST_DB2_INSTANCE3=orcl3
ST_SOFTWARE_STAGE_DB=/stage/database/Disk1
ST_DB2_NAME=orcl
ST_DB2_INSTANCE1=orcl1
ST_DB1_INSTANCE1=orcl1
ST_DB1_INSTANCE3=orcl3
ST_DB1_INSTANCE2=orcl2
ST_DB1_OWNER=oracle
ST_DB2_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
ST_NODE3_VIP=host03-vip
ST_DB3_NAME=
ST_ASM_INSTANCE3=+ASM3
ST_ASM_INSTANCE2=+ASM2
ST_ASM_INSTANCE1=+ASM1
ST_DB3_INSTANCE1=

ST_NODE_LIST2=host01,host02,host03
ST_SOFTWARE_STAGE_DEINST=/stage/deinstall
ST_DB1_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1
ST_NODE_DOMAIN=example.com

Practices for Lesson 2
In the practices for this lesson, you will perform the tasks that are prerequisites to
successfully installing Oracle Grid Infrastructure. You will configure ASMLib to manage
your shared disks and finally, you will install and verify Oracle Grid Infrastructure 11.2.
In addition, you will create ASM disk groups and an ACFS file system by using
ASMCA.

Practice 2-1: Performing Preinstallation Tasks for Oracle Grid
Infrastructure
In this practice, you perform various tasks that are required before installing Oracle Grid
Infrastructure. These tasks include:
• Setting up required groups and users
• Creating base directory
• Configuring hangcheck-timer
• Configuring Network Time Protocol (NTPD)
• Setting shell limits
• Editing profile entries
• Configuring ASMLib and shared disks

1) From a graphical terminal session, make sure that the groups asmadmin, asmdba,
and asmoper exist (cat /etc/group). Make sure that the user grid exists with
the primary group of oinstall and the secondary groups of asmadmin, asmdba, and
asmoper. Make sure that the oracle user’s primary group is oinstall with
secondary groups of dba, oper, and asmdba. Running the script
/home/oracle/labs/less_02/usrgrp.sh as the root user will complete
all these tasks. Perform this step on all three of your nodes.
$ cat /home/oracle/labs/less_02/usrgrp.sh
#!/bin/bash
groupadd -g 503 oper
groupadd -g 505 asmdba
groupadd -g 506 asmoper
groupadd -g 504 asmadmin

grep -q grid /etc/passwd
UserGridExists=$?
if [[ $UserGridExists == 0 ]]; then
  usermod -g oinstall -G asmoper,asmdba,asmadmin grid
else
  useradd -u 502 -g oinstall -G asmoper,asmdba,asmadmin grid
fi
echo oracle | passwd --stdin grid
usermod -g oinstall -G dba,oper,asmdba oracle

$ su -
Password: oracle << password is not displayed

# . /home/oracle/labs/st_env.sh

<<< On node 1 >>>

# /home/oracle/labs/less_02/usrgrp.sh
Creating mailbox file: File exists
Changing password for user grid.
passwd: all authentication tokens updated successfully.


<<< On node 2 >>>

# ssh $ST_NODE2 /home/oracle/labs/less_02/usrgrp.sh


The authenticity of host 'gr7273 (10.196.180.73)' can't be
established.
RSA key fingerprint is
4a:8c:b8:48:51:04:2e:60:e4:f4:e6:39:13:39:48:8f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'gr7273,10.196.180.73' (RSA) to the
list of known hosts.
root's password: oracle << password is not displayed
Creating mailbox file: File exists
Changing password for user grid.
passwd: all authentication tokens updated successfully.

<<< On node 3 >>>

# ssh $ST_NODE3 /home/oracle/labs/less_02/usrgrp.sh
The authenticity of host 'host03 (10.196.180.14)' can't be
established.
RSA key fingerprint is
4a:8c:b8:48:51:04:2e:60:e4:f4:e6:39:13:39:48:8f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'host03,10.196.180.14' (RSA) to the
list of known hosts.
root's password: oracle << password is not displayed
Creating mailbox file: File exists
Changing password for user grid.
passwd: all authentication tokens updated successfully.

2) As the root user, create the oracle and grid user base directories. Perform this
step on all three of your nodes.
# mkdir -p /u01/app/grid
# chown -R grid:oinstall /u01/app
# chmod -R 775 /u01/app/grid
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle

3) View the /etc/sysconfig/ntpd file and confirm that the -x option is specified
to address slewing. If necessary, change the file, and then restart the ntpd service
with the service ntpd restart command. Perform this step on all three of
your nodes.
[root]# cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"


# Set to 'yes' to sync hw clock after successful ntpdate


SYNC_HWCLOCK=no

# Additional options for ntpdate


NTPDATE_OPTIONS=""
[root]#

4) As the root user, start the local naming cache daemon on all three nodes with the
service nscd start command. Perform this step on all three of your nodes.
[root]# service nscd start
Starting nscd: [ OK ]
# ssh $ST_NODE2 service nscd start
root's password: oracle << password is not displayed
Starting nscd: [ OK ]
[root]# ssh $ST_NODE3 service nscd start
root's password: oracle << password is not displayed
Starting nscd: [ OK ]
5) As the root user, run the /home/oracle/labs/less_02/limits.sh script.
This script replaces the profile for the oracle and grid users and replaces
/etc/profile. It replaces the /etc/security/limits.conf file with a new
one with entries for oracle and grid. Cat the
/home/oracle/labs/less_02/bash_profile and
/home/oracle/labs/less_02/profile to view the new files. Perform this
step on all three of your nodes.

# cat /home/oracle/labs/less_02/bash_profile

# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin
export PATH

umask 022

# cat /home/oracle/labs/less_02/profile
# /etc/profile

# System wide environment and startup programs, for login setup


# Functions and aliases go in /etc/bashrc

pathmunge () {
if ! echo $PATH | /bin/egrep -q "(^|:)$1($|:)" ; then
if [ "$2" = "after" ] ; then
PATH=$PATH:$1
else
PATH=$1:$PATH
fi
fi
}
# ksh workaround
if [ -z "$EUID" -a -x /usr/bin/id ]; then
    EUID=`id -u`
    UID=`id -ru`
fi
# Path manipulation
if [ "$EUID" = "0" ]; then
    pathmunge /sbin
    pathmunge /usr/sbin
    pathmunge /usr/local/sbin
fi
# No core files by default
ulimit -S -c 0 > /dev/null 2>&1

if [ -x /usr/bin/id ]; then
    USER="`id -un`"
    LOGNAME=$USER
    MAIL="/var/spool/mail/$USER"
fi

HOSTNAME=`/bin/hostname`
HISTSIZE=1000

if [ -z "$INPUTRC" -a ! -f "$HOME/.inputrc" ]; then


INPUTRC=/etc/inputrc
fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE INPUTRC

for i in /etc/profile.d/*.sh ; do
if [ -r "$i" ]; then
. $i
fi
done


if [ $USER = "oracle" ] || [ $USER = "grid" ]; then


umask 022
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

unset i
unset pathmunge
# cat /home/oracle/labs/less_02/limits.conf
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to
#        - rtprio - max realtime priority
#
#<domain>      <type>  <item>         <value>
#
#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4
# End of file
oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock 3500000
oracle hard memlock 3500000
grid soft nofile 131072
grid hard nofile 131072
grid soft nproc 131072
grid hard nproc 131072
grid soft core unlimited
grid hard core unlimited
grid soft memlock 3500000
grid hard memlock 3500000
# Recommended stack hard limit 32MB for oracle installations


# oracle hard stack 32768

# cat /home/oracle/labs/less_02/limits.sh
cp /home/oracle/labs/less_02/profile /etc/profile
cp /home/oracle/labs/less_02/bash_profile /home/oracle/.bash_profile
cp /home/oracle/labs/less_02/bash_profile /home/grid/.bash_profile
cp /home/oracle/labs/less_02/limits.conf /etc/security/limits.conf

<<< On Node 1 >>>


# /home/oracle/labs/less_02/limits.sh

<<< On Node 2 >>>

# ssh $ST_NODE2 /home/oracle/labs/less_02/limits.sh
root@gr7213's password: oracle << password is not displayed

<<< On Node 3 >>>

# ssh $ST_NODE3 /home/oracle/labs/less_02/limits.sh
root@host03's password: oracle << password is not displayed
#
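Note: To confirm that the new limits are in effect, open a fresh login
shell for each user and display the open file and process limits (a quick
check, not part of the lab script; the expected values come from the
limits.conf file shown above):
# su - grid -c 'ulimit -n'
131072
# su - grid -c 'ulimit -u'
131072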
a n @ S
ฺ t h i s
h e ng se t
6) As root, execute the oracleasm configure -i command to configure the
Oracle ASM library driver. The owner should be grid and the group should be
asmadmin. Make sure that the driver loads and scans disks on boot. Perform this
step on all three of your nodes.


<<< On Node 1 >>>
# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM
library driver. The following questions will determine whether
the driver is loaded on boot and what permissions it will have.
The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value.
Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done


<<< On Node 2 >>>


# ssh $ST_NODE2 oracleasm configure -i

… output not shown

<<< On Node 3 >>>


# ssh $ST_NODE3 oracleasm configure -i

… output not shown
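Note: Although the remote output is not shown, running oracleasm configure
with no arguments prints the saved settings, so you can confirm that each
node took the values (illustrative output; the exact fields vary by ASMLib
version):
# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true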

7) (On the first node only) Create the ASM disks needed for the practices. The
/home/oracle/labs/less_02/createdisk.sh script has been provided to
do this for you. Look at the script, and then execute it as the root user.
Perform this step on the first node only.

# cat /home/oracle/labs/less_02/createdisk.sh
oracleasm init
oracleasm createdisk ASMDISK01 /dev/sda1
oracleasm createdisk ASMDISK02 /dev/sda2
oracleasm createdisk ASMDISK03 /dev/sda3
oracleasm createdisk ASMDISK04 /dev/sda5
oracleasm createdisk ASMDISK05 /dev/sda6
oracleasm createdisk ASMDISK06 /dev/sda7
oracleasm createdisk ASMDISK07 /dev/sda8
oracleasm createdisk ASMDISK08 /dev/sda9
oracleasm createdisk ASMDISK09 /dev/sda10
oracleasm createdisk ASMDISK10 /dev/sda11
oracleasm createdisk ASMDISK11 /dev/sdb1
oracleasm createdisk ASMDISK12 /dev/sdb2
oracleasm createdisk ASMDISK13 /dev/sdb3
oracleasm createdisk ASMDISK14 /dev/sdb5

# /home/oracle/labs/less_02/createdisk.sh
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done


Instantiating disk: done


Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
ble
Instantiating disk: done
fe r a
Writing disk header: done
an s
Instantiating disk: done
n - t r
Writing disk header: done
Instantiating disk: done a no
Writing disk header: done
Instantiating disk: done
#
a n @ S
8) As the root user, scan the disks to make sure that they are available with the
oracleasm scandisks command. Perform an oracleasm listdisks
command to make sure all the disks have been configured. Perform this step on
all three of your nodes.

# oracleasm exit
# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASMDISK01"
Instantiating disk "ASMDISK02"
Instantiating disk "ASMDISK03"
Instantiating disk "ASMDISK04"
Instantiating disk "ASMDISK05"
Instantiating disk "ASMDISK06"
Instantiating disk "ASMDISK07"
Instantiating disk "ASMDISK08"
Instantiating disk "ASMDISK09"
Instantiating disk "ASMDISK10"


Instantiating disk "ASMDISK11"


Instantiating disk "ASMDISK12"
Instantiating disk "ASMDISK13"
Instantiating disk "ASMDISK14"

# oracleasm listdisks
ASMDISK01
ASMDISK02
ASMDISK03
ASMDISK04
ASMDISK05
ASMDISK06
ASMDISK07
ASMDISK08
ASMDISK09
ASMDISK10
ASMDISK11
ASMDISK12
ASMDISK13
ASMDISK14

#
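Note: If a disk ever seems to be missing from the listing, oracleasm
querydisk tests an individual label (a quick check, not part of the lab
steps; output will be similar to the following):
# oracleasm querydisk ASMDISK11
Disk "ASMDISK11" is a valid ASM disk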

Practice 2-2: Installing Oracle Grid Infrastructure
In this practice, you install Oracle Grid Infrastructure.
1) Use the Oracle Universal Installer (runInstaller) to install Oracle Grid Infrastructure.
You need to know your cluster name and GNS name and address. You also need to
know your cluster subdomain. Your instructor will provide this information. YOU
MUST USE THE NAMES ASSIGNED TO YOU BY YOUR INSTRUCTOR.
Failure to use the proper names will result in a failed or inoperative cluster
installation.
For this example, assume the following:
• Your assigned cluster nodes are host01, host02, and host03.
• Your instructor has assigned the cluster name cluster01 and GNS
name cluster01-gns.example.com with an IP address of 192.0.2.155.
• Your assigned cluster subdomain is cluster01.example.com.
These values are for illustration purposes only. PLEASE USE THE VALUES
GIVEN TO YOU BY YOUR INSTRUCTOR.
a) As the grid user, start a VNC session on any available port for the grid user.
Change the Xstartup file for vnc. Before starting the installation, make sure
that the DNS server can resolve your GNS name. The GNS name in the classroom
environment is available with the echo $ST_GNS_NAME command. Record the
IP address you get for $ST_GNS_NAME ___________________
[oracle]$ su - grid
Password: oracle << password not displayed
[grid]$ id
uid=502(grid) gid=501(oinstall)
groups=501(oinstall),504(asmadmin),505(asmdba),506(asmoper)

e e H [grid]$ mkdir .vnc


C h
[grid]$ cp /home/oracle/.vnc/xstartup /home/grid/.vnc/xstartup

[grid]$ vncserver :3 <= Starting VNC on port 5803

You will require a password to access your desktops.

Password: oracle << password not displayed


Verify: oracle << password not displayed
xauth: creating new authority file /home/grid/.Xauthority

New 'host03.example.com:3 (grid)' desktop is


host03.example.com:3

Creating default startup script /home/grid/.vnc/xstartup


Starting applications specified in /home/grid/.vnc/xstartup
Log file is /home/grid/.vnc/host03.example.com:3.log


[grid]$ . /home/oracle/labs/st_env.sh
[grid]$ echo $ST_GNS_NAME
cluster01-gns.example.com
[grid]$ nslookup $ST_GNS_NAME
Server: 192.0.2.14
Address: 192.0.2.14#53

Name: cluster01-gns.example.com
Address: 192.0.2.155

b) To start a VNC viewer session, click the VNC icon on your desktop, and connect
to the grid user’s session. In keeping with the example above, you would enter
host01:3. Open a terminal window and change directory to the staged software
location provided by your instructor and start the OUI by executing the
runInstaller command from the $ST_SOFTWARE_STAGE_GRID directory.
a no
[grid]$ id h a s
uid=502(grid) gid=501(oinstall) groups=501(oinstall) m ) e ฺ ...
[grid]$ . /home/oracle/labs/st_env.shฺc o u i d
i n c t G
cs- uden
[grid]$ cd $ST_SOFTWARE_STAGE_GRID

[grid]$ ./runInstaller n@
a t
c) On the Select Installation Option page, select the “Install and Configure Grid
Infrastructure for a Cluster” option and click Next.
d) On the Select Installation Type page, select Advanced Installation and click
Next.
e) On the Product Languages page, select all languages and click Next.
f) The “Grid Plug and Play Information” page appears next. Make sure that the
Configure GNS check box is selected. Input the proper data carefully. DO NOT
GUESS here. If you are unsure of a value, PLEASE ASK YOUR
INSTRUCTOR. You must input:
• Cluster Name
• SCAN Name
• SCAN Port
• GNS Subdomain
• GNS Address
The values given here are based on the example data presented earlier. To
continue with the example, the following values would be assigned:
• Cluster Name: cluster01 (Yours will be different. Use
the value recorded in practice 1 step 6.)


• SCAN Name: cluster01-scan.cluster01.example.com (Yours will
be different.)
• SCAN Port: 1521 (DEFAULT)
• GNS Sub Domain: cluster01.example.com (Yours will be different.
The subdomain is $ST_CLUSTER_NAME.$ST_NODE_DOMAIN.)
• GNS Address: 192.0.2.155 (Yours will be different. Use the
value you found in practice 2-2 step 1a.)
Hint: If you enter the cluster name (for example, cluster01), and then enter GNS
Sub Domain (for example, cluster01.example.com), the SCAN name will autofill
correctly. Leave SCAN Port to default to 1521 and enter the IP address for your
GNS. Verify all data entered on this page, and then click Next.
g) On the Cluster Node Information page, you add your second node only. DO
NOT for any reason install to all three nodes. Click the Add button and enter
the fully qualified name of your second node into the box and click OK. Your
second node should appear in the window under your first node. The Virtual IP
name values for both nodes will be AUTO. Click the SSH Connectivity button.
Enter the grid password, which is oracle. Click the Setup button. A dialog box
stating that you have successfully established passwordless SSH connectivity
appears. Click OK to close the dialog box. Click Next to continue.
a tud
h) On the Specify Network Usage page, you must configure the correct interface
types for the listed network interfaces. Your instructor will indicate the proper
usage for each of your interfaces. Again, DO NOT GUESS. The systems that
were used to develop the course had four interfaces: eth0 (storage network),
eth1 (storage network), eth2 (private network), and eth3 (public network).
Using that example, eth0 and eth1 would be marked “Do Not Use,” eth2 would
be marked Private, and eth3 would be marked Public. Again, this is only an
example. Check with your instructor for proper network interface usage. When
you have correctly assigned the interface types, click Next to continue.
i) On the Storage Option Information page, select Automatic Storage Management
(ASM) and click Next.
j) On the Create ASM Disk Group page, make sure that Disk Group Name is DATA
and Redundancy is Normal. In the Add Disks region, select ORCL:ASMDISK01,
ORCL:ASMDISK02, ORCL:ASMDISK03, and ORCL:ASMDISK04. Click
Next.
k) On the ASM Passwords page, click the Use Same Password for these accounts
button. In the Specify Password field, enter oracle_4U and confirm it in the
Confirm Password field. Click Next to continue.
l) Select the “Do not use Intelligent Platform Management Interface (IPMI)” option
on the Failure Isolation page and click Next to continue.


m) On the Privileged Operating System Groups page, select “asmdba” for the ASM
Database Administrator (OSDBA) group, “asmoper” for the ASM Instance
Administration Operator (ASMOPER) group, and “asmadmin” for the ASM
Instance Administrator (OSASM) group. Click Next to continue.
n) On the Specify Installation Location page, make sure that Oracle Base is
/u01/app/grid and Software Location is /u01/app/11.2.0/grid. Click
Next.
o) On the Create Inventory page, Inventory Directory should be
/u01/app/oraInventory and the oraInventory Group Name should be
oinstall. Click Next.
p) On the Perform System Prerequisites page, the Installer checks whether all the
systems involved in the installation meet the minimum system requirements for
that platform. If the check is successful, click Next. If any deficiencies are found,
click the “Fix & Check Again” button. The Execute Fixup Script dialog box
appears. You are instructed to execute a script as root on each node involved in
the installation (two nodes in this case). Open a terminal window on the first
node, become the root user, and set up the classroom environment variables with
the st_env.sh script. Execute the fixup script on your first node, and then ssh to
your second node and repeat the process. Exit the root user session on the first
node, and then exit the terminal session. When the script has been run on each
node, click OK to close the dialog box. Click Next to continue.
a
g ฺ t t h is
<<< On first node >>>

[grid]$ su - root
Password: oracle << password not displayed
[root]# . /home/oracle/labs/st_env.sh
[root]# /tmp/CVU_11.2.0.1.0_grid/runfixup.sh
e e H Response file being used is
C h :/tmp/CVU_11.2.0.1.0_grid/fixup.response
Enable file being used is
:/tmp/CVU_11.2.0.1.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log
Setting Kernel Parameters...
fs.file-max = 327679
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576
#
<<< On Second Node >>>
[root]# ssh $ST_NODE2 /tmp/CVU_11.2.0.1.0_grid/runfixup.sh
root@host02's password: oracle <= password not echoed
Response file being used is
:/tmp/CVU_11.2.0.1.0_grid/fixup.response
Enable file being used is
:/tmp/CVU_11.2.0.1.0_grid/fixup.enable


Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log


Setting Kernel Parameters...
fs.file-max = 327679
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576
[root]# exit
logout
[grid]$ exit
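Note: You can spot-check that the fixup script's kernel settings took
effect before continuing; sysctl prints the live value (a quick check, not
part of the practice):
[root]# sysctl -n fs.file-max
6815744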

q) Click Finish on the Summary screen. From this screen, you can monitor the
progress of the installation.
r) When the remote operations have finished, the Execute Configuration Scripts
window appears. You are instructed to run the orainstRoot.sh and root.sh
scripts as the root user on both nodes. Open a terminal window and as the root
user set the classroom environment variables.
# su -
Password: oracle << password not displayed
# . /home/oracle/labs/st_env.sh

(On the first node)
# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:


ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory:


[/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.


Now product-specific root actions will be performed.


2009-08-25 14:46:20: Parsing the host name
2009-08-25 14:46:20: Checking for super user privileges
2009-08-25 14:46:20: User has super user privileges
Using configuration parameter file:
/u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
h e e CRS-4123: Oracle High Availability Services has been started.
C ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'host01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'host01'
CRS-2676: Start of 'ora.gipcd' on 'host01' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host01'
CRS-2676: Start of 'ora.gpnpd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'
CRS-2676: Start of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host01'
CRS-2672: Attempting to start 'ora.diskmon' on 'host01'
CRS-2676: Start of 'ora.diskmon' on 'host01' succeeded
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host01'
CRS-2676: Start of 'ora.ctssd' on 'host01' succeeded

ASM created and started successfully.


DiskGroup DATA created successfully.

clscfg: -install mode specified


Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'host01'
CRS-2676: Start of 'ora.crsd' on 'host01' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk
0c270623caf14f07bf57a7e9a1eb5f5c.
Successful addition of voting disk
04f565f5c6ed4f1cbf7444c2b24ebf1a.
Successful addition of voting disk
34c5c13ae40a4f21bf950e0de2777cdf.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE   File Universal Id                 File Name        Disk group
--  -----   -----------------                 ---------        ----------
 1. ONLINE  0c270623caf14f07bf57a7e9a1eb5f5c (ORCL:ASMDISK01)  [DATA]
 2. ONLINE  04f565f5c6ed4f1cbf7444c2b24ebf1a (ORCL:ASMDISK02)  [DATA]
 3. ONLINE  34c5c13ae40a4f21bf950e0de2777cdf (ORCL:ASMDISK03)  [DATA]
Located 3 voting disk(s).

g T
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'

H en CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded


CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host01'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host01'
CRS-2677: Stop of 'ora.gpnpd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'host01'
CRS-2677: Stop of 'ora.gipcd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host01'
CRS-2677: Stop of 'ora.mdnsd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'host01'
CRS-2676: Start of 'ora.mdnsd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host01'
CRS-2676: Start of 'ora.gipcd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host01'
CRS-2676: Start of 'ora.gpnpd' on 'host01' succeeded


CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'


CRS-2676: Start of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host01'
CRS-2672: Attempting to start 'ora.diskmon' on 'host01'
CRS-2676: Start of 'ora.diskmon' on 'host01' succeeded
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host01'
CRS-2676: Start of 'ora.ctssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host01'
CRS-2676: Start of 'ora.crsd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host01' e
CRS-2676: Start of 'ora.evmd' on 'host01' succeeded
r a bl
CRS-2672: Attempting to start 'ora.asm' on 'host01'
s fe
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
- t r an
CRS-2672:
CRS-2676:
Attempting to start 'ora.DATA.dg' on 'host01'
Start of 'ora.DATA.dg' on 'host01' succeeded
n
no
CRS-2672: Attempting to start 'ora.registry.acfs' on 'host01's a
CRS-2676: ) h a
Start of 'ora.registry.acfs' on 'host01' succeeded
o m d e ฺ
host01     2009/08/25 14:53:37
/u01/app/11.2.0/grid/cdata/host01/backup_20090825_145337.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual
3007 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
#

(On the second node)


# ssh $ST_NODE2 /u01/app/oraInventory/orainstRoot.sh
root's password: oracle << password not displayed
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.


The execution of the script is complete.

# ssh $ST_NODE2 /u01/app/11.2.0/grid/root.sh


Running Oracle 11g root.sh script...


The following environment variables are set as:


ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory:


[/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by e
Database Configuration Assistant when a database is created
r a bl
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-08-25 14:56:42: Parsing the host name
2009-08-25 14:56:42: Checking for super user privileges
2009-08-25 14:56:42: User has super user privileges

Using configuration parameter file:
/u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but
found an active CSS daemon on node host01, number 1, and is
terminating
An active cluster was found during exclusive startup,
restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'host02'
CRS-2676: Start of 'ora.mdnsd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host02'
CRS-2676: Start of 'ora.gipcd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host02'
CRS-2676: Start of 'ora.gpnpd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host02'
CRS-2676: Start of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host02'
CRS-2672: Attempting to start 'ora.diskmon' on 'host02'
CRS-2676: Start of 'ora.diskmon' on 'host02' succeeded
CRS-2676: Start of 'ora.cssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host02'
CRS-2676: Start of 'ora.ctssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'host02'
CRS-2676: Start of 'ora.drivers.acfs' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host02'
CRS-2676: Start of 'ora.asm' on 'host02' succeeded


CRS-2672: Attempting to start 'ora.crsd' on 'host02'


CRS-2676: Start of 'ora.crsd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host02'
CRS-2676: Start of 'ora.evmd' on 'host02' succeeded

host02 2009/08/25 15:01:15


/u01/app/11.2.0/grid/cdata/host02/backup_20090825_150115.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ...
succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer... e
r a bl
Checking swap space: must be greater than 500 MB. Actual
s fe
3007 MB Passed
- t r an
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
n
no
'UpdateNodeList' was successful. s a
# ) h a
o m d e ฺ
c ฺc Gu i
s i n
- click n t
s) After the scripts are executed on bothcnodes,
d e
a willtucontinuethe OK button to close the

a n @
dialog box. The configuration assistants S to execute from the Setup
page. ฺ t
g e th i s
e n
e e h oassistants
t) When the configuration
t us have finished, click the Close button on the
n (ch nse
Finish page to exit the Installer.
2) When
g T l i ce finishes, you should verify the installation. You should check to
athe installation
H en make sure that the software stack is running, as it should. Execute the crsctl stat
res –t command:
h e e
C [grid]$ /u01/app/11.2.0/grid/bin/crsctl stat res –t

--------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------
Local Resources
--------------------------------------------------------------
ora.DATA1.dg
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.LISTENER.lsnr
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.asm
ONLINE ONLINE host01 Started
ONLINE ONLINE host02 Started
ora.eons
ONLINE ONLINE host01

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 28


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 2-2: Installing Oracle Grid Infrastructure (continued)

ONLINE ONLINE host02


ora.gsd
OFFLINE OFFLINE host01
OFFLINE OFFLINE host02
ora.net1.network
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.ons
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.registry.acfs
ONLINE ONLINE host01
ONLINE ONLINE host02 e
--------------------------------------------------------------
r a bl
Cluster Resources
s fe
--------------------------------------------------------------
- t r an
ora.LISTENER_SCAN1.lsnr
o n
1 ONLINE ONLINE
ora.LISTENER_SCAN2.lsnr
host02
s an
1 ONLINE ONLINE host01 ) ha ฺ
ora.LISTENER_SCAN3.lsnr
ฺ c om uide
1 ONLINE ONLINE host01
- i n c tG
ora.gns
a cs uden
1 ONLINE ONLINE
a n @ St host01
ora.gns.vip
1 g ฺ t
ONLINE ONLINE t h is host01
ora.host01.vip
e h en use
1
( c he se to
ONLINE ONLINE host01

an licen
ora.host02.vip

g T 1 ONLINE ONLINE host02

H en ora.oc4j
1 OFFLINE OFFLINE
h e e ora.scan1.vip
C 1 ONLINE ONLINE host02
ora.scan2.vip
1 ONLINE ONLINE host01
ora.scan3.vip
1 ONLINE ONLINE host01
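Note: You can also query one resource at a time instead of the full table;
for example, to check only the GNS resource (a sketch using the same
crsctl utility; output will be similar to the following):
[grid]$ /u01/app/11.2.0/grid/bin/crsctl stat res ora.gns
NAME=ora.gns
TYPE=ora.gns.type
TARGET=ONLINE
STATE=ONLINE on host01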

3) Use the dig command to check your GNS and make sure that it is being updated with
your cluster addresses. In this example, the IP address is the one assigned to your
GNS.
# dig @192.0.2.155 cluster01-scan.cluster01.example.com

# dig @${ST_GNS_NAME} ${ST_CLUSTER_NAME}-scan.${ST_CLUSTER_NAME}.${ST_NODE_DOMAIN}


; <<>> DiG 9.3.4-P1 <<>> @cluster01-gns cluster01-scan.cluster01.example.com
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43003
;; flags: qr aa; QUERY: 1, ANSWER: 3, AUTHORITY: 1,
ADDITIONAL: 1

;; QUESTION SECTION:
;cluster01-scan.cluster01.example.com. IN A

;; ANSWER SECTION: e
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.231
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.229
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.232

;; AUTHORITY SECTION:
cluster01-gns-vip.cluster01.example.com. 10800 IN NS
cluster01-gns-vip.cluster01.example.com.

;; ADDITIONAL SECTION:
cluster01-gns-vip.cluster01.example.com. 10800 IN A
192.0.2.155

;; Query time: 55 msec
;; SERVER: 192.0.2.155#53(192.0.2.155)
;; WHEN: Tue Aug 25 15:42:56 2009
;; MSG SIZE  rcvd: 174
4) Use the dig command to make sure that your name server is properly forwarding
address requests in your cluster subdomain back to your GNS for resolution.
# cat /etc/resolv.conf
domain example.com
nameserver 10.216.104.27
search example.com

# dig @10.216.104.27 ${ST_CLUSTER_NAME}-scan.${ST_CLUSTER_NAME}.${ST_NODE_DOMAIN}

; <<>> DiG 9.3.4-P1 <<>> @10.216.104.27 cluster01-scan.cluster01.example.com
; (1 server found)
;; global options: printcmd
;; Got answer:


;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14358


;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 1,
ADDITIONAL: 1

;; QUESTION SECTION:
; cluster01-scan.cluster01.example.com. IN A

;; ANSWER SECTION:
cluster01-scan.cluster01.example.com. 120 IN A
10.196.180.214
cluster01-scan.cluster01.example.com. 120 IN A
10.196.180.233
cluster01-scan.cluster01.example.com. 120 IN A e
10.196.182.230
r a bl
;; AUTHORITY SECTION:
cluster01.example.com. 86400 IN NS cluster01-gns.example.com.

;; ADDITIONAL SECTION:
cluster01-gns.example.com. 86400 IN A 10.196.183.12

;; Query time: 62 msec
;; SERVER: 10.216.104.27#53(10.216.104.27)
;; WHEN: Tue Sep 29 14:53:49 2009
;; MSG SIZE  rcvd: 137


Practice 2-3: Creating ASM Disk Groups
In this practice, you create additional ASM disk groups to support the activities in the rest
of the course. You create a disk group to hold ACFS file systems and another disk group
to hold the Fast Recovery Area (FRA).
1) In the VNC session for the grid user, open a terminal window as the grid user, and
set the oracle environment with the oraenv tool to the +ASM1 instance.
[grid]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
2) Start the ASM Configuration Assistant (ASMCA).
$ asmca
3) Create a disk group named ACFS with four disks and external redundancy.
Choose disks ASMDISK05 through ASMDISK08. Set the disk group attribute ADVM
Compatibility to 11.2.0.0.0.
Step Screen/Page Description   Choices or Values
a.   Configure ASM: DiskGroups Click Create.
b.   Create DiskGroup          Enter:
                               Disk Group Name: ACFS
                               In the Redundancy section, select “External (None).”
                               In the Select Member Disk section, select:
                               ASMDISK05
                               ASMDISK06
                               ASMDISK07
                               ASMDISK08
                               Click Show Advanced Options.
                               In the Disk Group Attributes section, set
                               ADVM Compatibility to 11.2.0.0.0.
                               Click OK.
c.   Disk Group: Creation      Click OK.
4) Using ASMCMD, create a disk group named FRA over the disks ASMDISK09
through ASMDISK11 with external redundancy. Using the command
asmcmd mkdg FRA_dg_config.xml
Review the FRA_dg_config.xml file, and then execute the command.
[grid]$ cat /home/oracle/labs/less_02/FRA_dg_config.xml
<dg name="FRA" redundancy="external">
<dsk string="/dev/oracleasm/disk/ASMDISK09"/>
<dsk string="/dev/oracleasm/disk/ASMDISK10"/>
<dsk string="/dev/oracleasm/disk/ASMDISK11"/>
<a name="compatible.asm" value="11.2"/>
<a name="compatible.rdbms" value="11.2"/>


<a name="compatible.advm" value="11.2"/>


</dg>
[grid]$ asmcmd mkdg /home/oracle/labs/less_02/FRA_dg_config.xml
[grid]$
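Note: To confirm an attribute that the XML file set, you can list it with
asmcmd (a quick check, not part of the lab steps; the displayed value is
illustrative):
[grid]$ asmcmd lsattr -G FRA -l compatible.advm
Name             Value
compatible.advm  11.2.0.0.0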
5) Use ASMCMD to confirm the creation of the FRA disk group and to see which disks
are included in the FRA disk group.
[grid]$ asmcmd
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB
Free_MB Req_mir_free_MB Usable_file_MB Offline_disks
Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 10644
ble
4396 0 4396 0
fe r a
N ACFS/
ans
MOUNTED NORMAL N 512 4096 1048576 10648
n - t r
6249
N DATA/
1111 2569 0
a no
MOUNTED EXTERN N 512 4096 1048576 7984 h a s
7928 0 7928 0 m ) e ฺ
N FRA/ o
ฺc Gu i d
ASMCMD> lsdsk -G FRA - i n c t
Path c
a tuds e n
ORCL:ASMDISK09
a n @ S
ORCL:ASMDISK10 ฺ t h i s
ORCL:ASMDISK11
h e ng se t
ASMCMD> exit
h e e to u
(c nse
[grid]$
6) Mount the FRA disk group on your second node.
Note: The ASMCMD commands operate only on the local node, so the FRA disk
group is mounted only on your first node.
[grid]$ . /home/oracle/labs/st_env.sh
[grid]$ ssh $ST_NODE2
[grid@host02] $ . oraenv
ORACLE_SID = [grid] ? +ASM2
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
[grid@host02]$ asmcmd mount FRA
[grid@host02]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB
Free_MB Req_mir_free_MB Usable_file_MB Offline_disks
Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 9996
3748 0 3748 0
N ACFS/
MOUNTED NORMAL N 512 4096 1048576 9998
5556 1126 2215 0
N DATA/


MOUNTED EXTERN N 512 4096 1048576 7497


7398 0 7398 0
N FRA/
[grid@host02]$ exit
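Note: Mounting with asmcmd must be repeated on each node. An alternative
that works from any one node is to start the disk group resource on the
target node with srvctl (a sketch under the same 11.2 environment; host02
stands in for your second node's name):
[grid]$ srvctl start diskgroup -g FRA -n host02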


Practice 2-4: Creating ACFS File System
In this practice, you create an ASM volume to use as the shared ORACLE_HOME for a
database that is being used as the Enterprise Manager repository. Create this volume in
the disk group called ACFS, the volume name is DBHOME_1, and the mount point is
/u01/app/oracle/acfsmount/11.2.0/sharedhome.
1) Start the ASM Configuration Assistant (ASMCA).
$ asmca
2) Create an ASM volume named DBHOME_1 with size 6 GB in the ACFS disk group.
Step Screen/Page Description Choices or Values
a. Configure ASM: DiskGroups Click the Volumes tab.
b. Configure ASM: Volumes Click Create.
c. Create Volume Enter: fe r a
Volume Name: DBHOME_1 an s
Size: 6 G Bytes n - t r
Click OK.
a no
d. Volume: Creation Click OK.
3) Open a terminal window and become the root user, password oracle. Create the
mount point directory at /u01/app/oracle/acfsmount/11.2.0/sharedhome.
Do this on all three nodes.
$ su - root
Password: oracle << password is not displayed
[root]# mkdir -p /u01/app/oracle/acfsmount/11.2.0/sharedhome
[root]# . /home/oracle/labs/st_env.sh
[root]# ssh $ST_NODE2 mkdir -p
/u01/app/oracle/acfsmount/11.2.0/sharedhome
root@host02's password: oracle << password is not displayed

[root]# ssh $ST_NODE3 mkdir -p
/u01/app/oracle/acfsmount/11.2.0/sharedhome
root@host03's password: oracle << password is not displayed

4) Create the ACFS file system, and mount and register the ACFS file system.
Step Screen/Page Description         Choices or Values
a.   Configure ASM: Volumes          Click the ASM Cluster File Systems tab.
b.   Configure ASM: ASM Cluster      Click Create.
     File Systems
c.   Create ASM Cluster File System  Verify that Database Home File System is selected.
                                     Enter
                                     Database Home Mountpoint:
                                     /u01/app/oracle/acfsmount/11.2.0/sharedhome
                                     Database Home Owner Name: oracle
                                     Database Home Owner Group: oinstall
                                     Click OK.
d.   Database Home: Run ACFS Script  Copy the script shown to the root terminal window.
e.   Root terminal window            Execute the script:
                                     # /u01/app/grid/cfgtoollogs/asmca/scripts/acfs_script.sh
                                     #
f.   Database Home: Run ACFS Script  Click Close.
g.   Configure ASM: ASM Cluster      Click Exit.
     File Systems
h.   ASM Configuration Assistant     Click Yes.
5) Exit the root terminal window.
# exit
$
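Note: Before moving on, you can confirm that the new file system is
mounted at the shared home path; df reports the ACFS volume device (a
quick check, not part of the practice; the sizes and the numeric suffix on
the device name are illustrative):
$ df -h /u01/app/oracle/acfsmount/11.2.0/sharedhome
Filesystem            Size  Used Avail Use% Mounted on
/dev/asm/dbhome_1-457
                      6.0G  135M  5.9G   3% /u01/app/oracle/acfsmount/11.2.0/sharedhome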

Practice 2-5: Installing a Silent Database
In this practice, you start a silent installation of an Oracle RAC database that will be used
in this course to enable Enterprise Manager Database Control.
1) In a VNC session for the oracle user (host01:1), establish the ssh user equivalency
for the oracle user between your two cluster nodes. Open a terminal window on one
as the oracle user and name the other in place of remote_host in the example.
[oracle]$ cd /home/oracle/labs/silent_inst
[oracle]$ ./ssh_setup.sh
oracle@host02's password: oracle << password not displayed
Warning: Permanently added 'gr7213,10.196.180.13' (RSA) to the
list of known hosts.
oracle@host02's password: oracle << password not displayed
ble
oracle@host02's password: oracle << password not displayed
id_rsa.pub 100% 395 fe r a
0.4KB/s 00:00 ans
oracle@host02's password: oracle << password not displayed n - t r
[oracle]$
a no
2) Confirm that the oracle user in this session is a member of the required groups.
The required groups are: dba, oinstall, oper, and asmdba. If all the groups do not
appear, use the su - oracle command to reset the user groups.
Note: The groups are only reset for this terminal window.

$ id

uid=501(oracle) gid=502(oinstall) groups=501(dba),502(oinstall)
[oracle]$ su - oracle
Password: oracle << password not displayed
[oracle@gr7212 ~]$ id
uid=501(oracle) gid=502(oinstall)
groups=501(dba),502(oinstall),503(oper),505(asmdba)
h e e
C 3) In the same terminal window, use the classroom environment variables run
/home/oracle/labs/st_env.sh.
$ . /home/oracle/labs/st_env.sh

4) In a terminal window, as the oracle user, change directory to the following or to a
directory specified by your instructor:
$ cd $ST_SOFTWARE_STAGE_DB

5) Start the silent installation by using the following command: (Be sure to provide the
parameters. The installer shows a warning that can be ignored [WARNING] [INS-
35421]. This option installs a single instance database only.)
$ ./runInstaller -silent -responseFile \
/home/oracle/labs/silent_inst/db.rsp -waitforcompletion \
ORACLE_HOST=`hostname` CLUSTER_NODES=$ST_NODE_LIST
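Note: The response file supplies the answers the GUI would normally prompt
for. To see which options it sets before launching, you can list its
non-comment entries (a quick inspection, not part of the practice):
$ grep -Ev '^#|^$' /home/oracle/labs/silent_inst/db.rsp | head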
Starting Oracle Universal Installer...


Checking Temp space: must be greater than 80 MB. Actual


29270 MB Passed
Checking swap space: must be greater than 150 MB. Actual
3971 MB Passed
Preparing to launch Oracle Universal Installer from
/tmp/OraInstall2009-10-02_11-51-20AM. Please wait ...[WARNING]
[INS-35421] This option installs a single instance database
only.
CAUSE: You have chosen to perform a Desktop class install
on a cluster. This option will not install Oracle RAC.
ACTION: If you wish to install Oracle RAC, choose to
perform a Server class install.
[WARNING] [INS-35421] This option installs a single instance e
database only.
r a bl
CAUSE: You have chosen to perform a Desktop class install
s fe
on a cluster. This option will not install Oracle RAC.
- t r an
ACTION: If you wish to install Oracle RAC, choose to
o n
perform a Server class install.
You can find the log of this install session at: s an
) ha ฺ
/u01/app/oraInventory/logs/installActions2009-10-02_11-51-
20AM.log
ฺ c om uide
- i n c tG
[WARNING] [INS-35421] This option installs a single instance
database only.
a cs uden
a n @ St
CAUSE: You have chosen to perform a Desktop class install

g ฺ t t h is
on a cluster. This option will not install Oracle RAC.
ACTION: If you wish to install Oracle RAC, choose to
h en use
perform a Server class install.
e
( c he se to
6) Execute the root.sh script on both nodes to complete the installation.
-- On the first node

# /u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1/root.sh
Check
/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1/install/r
oot_host01_2009-09-11_10-35-54.log for the output of root
script
#

-- On the second node

# /u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1/root.sh
Check
/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1/install/r
oot_host02_2009-09-11_10-43-07.log for the output of root
script
#

Practices for Lesson 3
In these practices, you will verify, stop, and start Oracle Clusterware. You will add and
remove Oracle Clusterware configuration files and back up the Oracle Cluster Registry
and the Oracle Local Registry.


Practice 3-1: Verifying, Starting, and Stopping Oracle
Clusterware
In this practice, you check the status of Oracle Clusterware using both the operating
system commands and the crsctl utility. You will also start and stop Oracle
Clusterware.
1) Connect to the first node of your cluster as the grid user. You can use the oraenv
script to define ORACLE_SID, ORACLE_HOME, PATH, ORACLE_BASE, and
LD_LIBRARY_PATH for your environment.
$ id
uid=502(grid) gid=501(oinstall)
groups=501(oinstall),504(asmadmin),505(asmdba),506(asmoper)
$ . oraenv ble
ORACLE_SID = [grid] ? +ASM1
fe r a
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
ans
/u01/app/grid
n - t r
a no
a s
2) Using the operating system commands, verify that the Oracle Clusterware daemon
processes are running on the current node. (Hint: Most of the Oracle Clusterware
daemon processes have names that end with d.bin.)
$ pgrep -l d.bin
12895 ohasd.bin @ ac tude
14838 mdnsd.bin ฺ t a n is S
14850 gipcd.bin ng
e e th
14862 gpnpd.bin
e e h o us
c h se t
14916 ocssd.bin
(
15062 noctssd.bin
T a oclskd.bin
15166 l i c en
H eng15181 crsd.bin
h e e 15198 evmd.bin
C 15222
15844
oclskd.bin
gnsd.bin
24709 oclskd.bin

3) Using the crsctl utility, verify that Oracle Clusterware is running on the current
node.
$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
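Note: If any of the four lines above does not come back online, the same
crsctl utility can check the individual daemons (a sketch; output shown
for healthy services):
$ crsctl check css
CRS-4529: Cluster Synchronization Services is online
$ crsctl check evm
CRS-4533: Event Manager is online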
4) Verify the status of all cluster resources that are being managed by Oracle
Clusterware for all nodes.
$ crsctl stat res -t
-------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS


-------------------------------------------------------------
Local Resources
-------------------------------------------------------------
ora.ACFS.dg
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.DATA.dg
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.FRA.dg
ONLINE ONLINE host01
ONLINE ONLINE host02
ble
ora.LISTENER.lsnr
fe r a
ONLINE ONLINE host01
ans
ONLINE ONLINE host02
n - t r
ora.acfs.dbhome1.acfs
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.asm
ONLINE ONLINE host01 Started
ONLINE ONLINE host02 Started
ONLINE ONLINE
- i n c tG
host02 Started
ora.eons
a cs uden
a n @ St
ONLINE ONLINE host01

g ฺ t
ONLINE ONLINE
t h is host02
ora.gsd
e h en use
OFFLINE OFFLINE host01
( c he se to
OFFLINE OFFLINE host02

T an licen
ora.net1.network
ONLINE ONLINE host01
en g ONLINE ONLINE host02
e e H ora.ons
C h ONLINE ONLINE host01
ONLINE ONLINE host02
ora.registry.acfs
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host01
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host01
ora.gns
1 ONLINE ONLINE host01
ora.gns.vip
1 ONLINE ONLINE host01
ora.host01.vip
1 ONLINE ONLINE host01
ora.host02.vip


1 ONLINE ONLINE host02


ora.oc4j
1 OFFLINE OFFLINE
ora.orcl.db
1 ONLINE ONLINE host01 Open
2 ONLINE ONLINE host02 Open
ora.scan1.vip
1 ONLINE ONLINE host02
ora.scan2.vip
1 ONLINE ONLINE host01
ora.scan3.vip
1 ONLINE ONLINE host01
ble
5) Attempt to stop Oracle Clusterware on the current node while logged in as the grid
user. What happens and why?
$ crsctl stop crs
CRS-4563: Insufficient user privileges.
CRS-4000: Command Stop failed, or completed with errors.
6) Switch to the root account and stop Oracle Clusterware only on the current node.
Exit the switch user command when the stop succeeds.
$ su -
Password: oracle << Password is not displayed

# /u01/app/11.2.0/grid/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability
Services-managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'host01'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'host01'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'host01'
CRS-2673: Attempting to stop 'ora.gns' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on
'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on
'host01'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.host01.vip' on 'host01'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'host01'
succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'host01'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'host01'
succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'host01'
CRS-2677: Stop of 'ora.host01.vip' on 'host01' succeeded


CRS-2672: Attempting to start 'ora.host01.vip' on 'host02'


CRS-2677: Stop of 'ora.scan3.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'host02'
CRS-2677: Stop of 'ora.scan2.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'host02'
CRS-2677: Stop of 'ora.gns' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'host01'
CRS-2677: Stop of 'ora.gns.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gns.vip' on 'host02'
CRS-2677: Stop of 'ora.FRA.dg' on 'host01' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on
'host02'
CRS-2676: Start of 'ora.gns.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'host02'
CRS-2676: Start of 'ora.scan3.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on
'host02'
CRS-2674: Start of 'ora.host01.vip' on 'host02' failed
CRS-2679: Attempting to clean 'ora.host01.vip' on 'host02'
CRS-2674: Start of 'ora.gns' on 'host02' failed
CRS-2679: Attempting to clean 'ora.gns' on 'host02'
CRS-2681: Clean of 'ora.gns' on 'host02' succeeded
CRS-2681: Clean of 'ora.host01.vip' on 'host02' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'host02'
succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'host02'
succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.acfs.dbhome1.acfs' on
'host01'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'host01'
CRS-2677: Stop of 'ora.DATA.dg' on 'host01' succeeded
CRS-5014: Agent "/u01/app/11.2.0/grid/bin/orarootagent.bin"
timed out starting process
"/u01/app/11.2.0/grid/bin/acfssinglefsmount" for action
"stop": details at "(:CLSN00009:)" in
"/u01/app/11.2.0/grid/log/host01/agent/crsd/orarootagent_root/
orarootagent_root.log"
(:CLSN00009:)Utils:execCmd aborted
CRS-2675: Stop of 'ora.acfs.dbhome1.acfs' on 'host01' failed
CRS-2679: Attempting to clean 'ora.acfs.dbhome1.acfs' on
'host01'
CRS-2675: Stop of 'ora.registry.acfs' on 'host01' failed
CRS-2679: Attempting to clean 'ora.registry.acfs' on 'host01'
CRS-2678: 'ora.acfs.dbhome1.acfs' on 'host01' has experienced
an unrecoverable failure
CRS-0267: Human intervention required to resume its
availability.


CRS-2673: Attempting to stop 'ora.asm' on 'host01'


CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2681: Clean of 'ora.registry.acfs' on 'host01' succeeded
CRS-2794: Shutdown of Cluster Ready Services-managed resources
on 'host01' has failed
CRS-2675: Stop of 'ora.crsd' on 'host01' failed
CRS-2795: Shutdown of Oracle High Availability Services-
managed resources on 'host01' has failed
CRS-4687: Shutdown command has completed with error(s).
CRS-4000: Command Stop failed, or completed with errors.

7) If the crsctl utility fails to stop Oracle Clusterware, reissue the command on the
same node.
# /u01/app/11.2.0/grid/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability
Services-managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'host01'
CRS-2679: Attempting to clean 'ora.acfs.dbhome1.acfs' on 'host01'
CRS-2673: Attempting to stop 'ora.ACFS.dg' on 'host01'
CRS-2681: Clean of 'ora.acfs.dbhome1.acfs' on 'host01' succeeded
CRS-2677: Stop of 'ora.ACFS.dg' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host01'
CRS-2673: Attempting to stop 'ora.eons' on 'host01'
CRS-2677: Stop of 'ora.ons' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host01'
CRS-2677: Stop of 'ora.net1.network' on 'host01' succeeded
CRS-2677: Stop of 'ora.eons' on 'host01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources
on 'host01' has completed
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host01'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2673: Attempting to stop 'ora.evmd' on 'host01'
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host01'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host01' succeeded
CRS-5016: Process "/u01/app/11.2.0/grid/bin/acfsload" spawned
by agent "/u01/app/11.2.0/grid/bin/orarootagent.bin" for
action "stop" failed: details at "(:CLSN00010:)" in
"/u01/app/11.2.0/grid/log/host01/agent/ohasd/orarootagent_root
/orarootagent_root.log"
Waiting for ASM to shutdown.


acfsload: ACFS-9118: oracleadvm.ko driver in use - can not unload.
acfsload: ACFS-9118: oracleoks.ko driver in use - can not unload.
CRS-2677: Stop of 'ora.drivers.acfs' on 'host01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host01'
CRS-2673: Attempting to stop 'ora.diskmon' on 'host01'
CRS-2677: Stop of 'ora.gpnpd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'host01'
CRS-2677: Stop of 'ora.gipcd' on 'host01' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'host01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-
managed resources on 'host01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
# exit

8) Attempt to check the status of Oracle Clusterware now that it has been successfully
stopped.
$ crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services

$ crsctl check cluster
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Check failed, or completed with errors.
9) Connect to the second node of your cluster and verify that Oracle Clusterware is still
running on that node. You may need to set your environment for the second node by
using the oraenv utility.
$ . /home/oracle/labs/st_env.sh
$ ssh $ST_NODE2
Last login: Thu Aug 27 17:28:29 2009 from host01.example.com

$ . oraenv
ORACLE_SID = [grid] ? +ASM2
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid

$ crsctl check crs


CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online


10) Verify that all cluster resources are running on the second node, stopped on the first
node, and that the VIP resources from the first node have migrated or failed over to
the second node. The ora.oc4j and the ora.gsd resources are expected to be
offline. Exit the connection to the second node when done.
$ crsctl stat res -t
----------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
----------------------------------------------------------
Local Resources
----------------------------------------------------------
ora.ACFS.dg
      ONLINE  ONLINE  host02
ora.DATA.dg
      ONLINE  ONLINE  host02
ora.FRA.dg
      ONLINE  ONLINE  host02
ora.LISTENER.lsnr
      ONLINE  ONLINE  host02
ora.acfs.dbhome1.acfs
      ONLINE  ONLINE  host02
ora.asm
      ONLINE  ONLINE  host02
ora.eons
      ONLINE  ONLINE  host02
ora.gsd
      OFFLINE OFFLINE host02
ora.net1.network
      ONLINE  ONLINE  host02
ora.ons
      ONLINE  ONLINE  host02
ora.registry.acfs
      ONLINE  ONLINE  host02
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host02
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host02
ora.gns
1 ONLINE ONLINE host02
ora.gns.vip
1 ONLINE ONLINE host02
ora.host01.vip
1 ONLINE INTERMEDIATE host02 FAILED OVER
ora.host02.vip
1 ONLINE ONLINE host02
ora.oc4j
1 OFFLINE OFFLINE


ora.orcl.db
1 ONLINE OFFLINE
2 ONLINE ONLINE host02
ora.scan1.vip
1 ONLINE ONLINE host02
ora.scan2.vip
1 ONLINE ONLINE host02
ora.scan3.vip
1 ONLINE ONLINE host02

$ exit
Connection to host02 closed.
ble
$
fe r a
11) Restart Oracle Clusterware on the first node as the root user. Return to the grid
account and verify the results.
Note: You may need to check the status of all the resources several times until they
all have been restarted. You can tell that they are all complete when the
ora.orcl.db resource has a State Details of Open. It may take several minutes to
completely restart all resources. (A convenience polling loop is sketched after the
listing below.)
$ su -
Password: oracle << Password is not displayed

# /u01/app/11.2.0/grid/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
# exit
$

$ crsctl stat res -t
e e H -----------------------------------------------------------
C h NAME TARGET STATE SERVER STATE_DETAILS
-----------------------------------------------------------
Local Resources
-----------------------------------------------------------
ora.ACFS.dg
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.DATA.dg
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.FRA.dg
OFFLINE OFFLINE host01
ONLINE ONLINE host02
ora.LISTENER.lsnr
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.acfs.dbhome1.acfs
ONLINE ONLINE host01

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 47


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 3-1: Verifying, Starting, and Stopping Oracle
Clusterware (continued)

ONLINE ONLINE host02


ora.asm
ONLINE ONLINE host01 Started
ONLINE ONLINE host02
ora.eons
ONLINE ONLINE host01
ONLINE ONLINE host02
ora.gsd
OFFLINE OFFLINE host01
OFFLINE OFFLINE host02
ora.net1.network
ONLINE ONLINE host01
      ONLINE  ONLINE  host02
ora.ons
      ONLINE  ONLINE  host01
      ONLINE  ONLINE  host02
ora.registry.acfs
      ONLINE  ONLINE  host01
      ONLINE  ONLINE  host02
ora.LISTENER_SCAN1.lsnr
      1 ONLINE ONLINE host01
ora.LISTENER_SCAN2.lsnr
      1 ONLINE ONLINE host02
ora.LISTENER_SCAN3.lsnr
      1 ONLINE ONLINE host02
ora.gns
      1 ONLINE ONLINE host02
ora.gns.vip
      1 ONLINE ONLINE host02
ora.host01.vip
      1 ONLINE ONLINE host01
ora.host02.vip
      1 ONLINE ONLINE host02
ora.oc4j
1 OFFLINE OFFLINE
ora.orcl.db
1 ONLINE ONLINE host01 Open
2 ONLINE ONLINE host02
ora.scan1.vip
1 ONLINE ONLINE host01
ora.scan2.vip
1 ONLINE ONLINE host02
ora.scan3.vip
1 ONLINE ONLINE host02
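
Rather than rerunning crsctl stat res -t by hand while you wait, you can poll for the
Open state from the shell. This is only a convenience sketch, not part of the practice;
it reuses the single-resource crsctl syntax shown in these practices and assumes the
ora.orcl.db resource name used in this cluster:

$ # Poll every 30 seconds until ora.orcl.db reports Open in STATE_DETAILS
$ while ! crsctl stat res ora.orcl.db -t | grep -q Open
> do
>   sleep 30
> done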

Practice 3-2: Adding and Removing Oracle Clusterware
Configuration Files
In this practice, you determine the current location of your voting disks and Oracle
Cluster Registry (OCR) files. You will then add another OCR location and remove it.
1) Use the crsctl utility to determine the location of the voting disks that are currently
used by your Oracle Clusterware installation.
$ crsctl query css votedisk
## STATE    File Universal Id                   File Name        Disk group
-- -----    -----------------                   ---------        ----------
 1. ONLINE  0f7a12c15ece4fcebf4dfe8ab209406e    (ORCL:ASMDISK01) [DATA]
 2. ONLINE  718cf2c254b44f21bfb8d41fbf21fdd5    (ORCL:ASMDISK02) [DATA]
 3. ONLINE  449422aa69534f81bfc54e41087d677c    (ORCL:ASMDISK03) [DATA]
Located 3 voting disk(s). a no
h a s
2) Use the ocrcheck utility to determine the location of the Oracle Cluster
Registry (OCR) files.
$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3228
Available space (kbytes) : 258892
ID : 1581544792
Device/File Name : +DATA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check bypassed due to non-privileged user

3) Verify that the FRA ASM disk group is currently online for all nodes using the
crsctl utility.
$ crsctl stat res ora.FRA.dg -t
-----------------------------------------------------------


NAME TARGET STATE SERVER STATE_DETAILS


-----------------------------------------------------------
Local Resources
----------------------------------------------------------
ora.FRA.dg
OFFLINE OFFLINE host01
ONLINE ONLINE host02

4) If the FRA ASM disk group is not online, use the asmcmd utility to mount the FRA
disk group.
Note: This step may not be necessary if it is already in an online state on each node.
Verify the results. You may have to run the commands on each node.
$ asmcmd mount FRA
$
$ crsctl stat res ora.FRA.dg -t
-----------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
-----------------------------------------------------------
Local Resources
-----------------------------------------------------------
ora.FRA.dg
      ONLINE ONLINE host01
      ONLINE ONLINE host02
h e e to u
5) Switch to the root account and add a second OCR location that is to be stored in the
FRA ASM disk group. Use the ocrcheck command to verify the results.
$ su -
Password: oracle << Password is not displayed
# /u01/app/11.2.0/grid/bin/ocrconfig -add +FRA

# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3228
Available space (kbytes) : 258892
ID : 1581544792
Device/File Name : +DATA
Device/File integrity
check succeeded
Device/File Name : +FRA
Device/File integrity
check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

6) Examine the contents of the ocr.loc configuration file to see the changes made to
the file referencing the new OCR location.
fe r a
# cat /etc/oracle/ocr.loc
#Device/file getting replaced by device +FRA
ocrconfig_loc=+DATA
ocrmirrorconfig_loc=+FRA

7) Open a connection to your second node as the root user, and remove the second
OCR file that was added from the first node. Exit the remote connection and verify
the results when completed.

# . /home/oracle/labs/st_env.sh
# ssh $ST_NODE2
root@host02's password: oracle << Password is not displayed
Last login: Tue Aug 25 13:04:32 2009 from host01.example.com
# /u01/app/11.2.0/grid/bin/ocrconfig -delete +FRA

# exit
Connection to host02 closed.

# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3228
Available space (kbytes) : 258892
ID : 1581544792
Device/File Name : +DATA
Device/File integrity
check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

Practice 3-3: Performing a Backup of the OCR and OLR
In this practice, you determine the location of the Oracle Local Registry (OLR) and
perform backups of the OCR and OLR files.
1) Use the ocrconfig utility to list the automatic backups of the Oracle Cluster
Registry (OCR) and the node or nodes on which they have been performed.
# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup

host02 2009/08/27 20:17:20


/u01/app/11.2.0/grid/cdata/cluster03/backup00.ocr

host01 2009/08/27 14:53:56


/u01/app/11.2.0/grid/cdata/cluster03/backup01.ocr
ble
host01 2009/08/27 10:53:54
fe r a
/u01/app/11.2.0/grid/cdata/cluster03/backup02.ocr
ans
n - t r
host01 2009/08/26 02:53:44
/u01/app/11.2.0/grid/cdata/cluster03/day.ocr a no
h a s
host01 2009/08/24 18:53:34
m ) e ฺ
/u01/app/11.2.0/grid/cdata/cluster03/week.ocr o
PROT-25: Manual backups for the Oracle Cluster Registry are
not available
2) Perform a manual backup of the OCR.
# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
e o
e e t 21:37:49 u
host02 (ch2009/08/27
a n c e ns
/u01/app/11.2.0/grid/cdata/cluster03/backup_20090827_213749.oc
g
r
T li
e n
3) Perform a logical backup of the OCR, storing the file in the /home/oracle
directory.
# /u01/app/11.2.0/grid/bin/ocrconfig -export
/home/oracle/ocr.backup
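
For reference only: a logical backup taken with -export can later be restored with the
matching -import option. This sketch is not part of the practice, and restoring the OCR
this way requires Oracle Clusterware to be stopped on all nodes first:
# /u01/app/11.2.0/grid/bin/ocrconfig -import /home/oracle/ocr.backup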

4) Display only the manual backups that have been performed and identify the node for
which the backup was stored. Do logical backups appear in the display?
# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup manual

host02 2009/08/27 21:37:49


/u01/app/11.2.0/grid/cdata/cluster03/backup_20090827_213749.ocr

5) Determine the location of the Oracle Local Registry (OLR) using the ocrcheck
utility.
# /u01/app/11.2.0/grid/bin/ocrcheck -local
Status of Oracle Local Registry is as follows :

Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2204
Available space (kbytes) : 259916
ID : 1965295944
Device/File Name :
/u01/app/11.2.0/grid/cdata/host01.olr
Device/File integrity
check succeeded

Local registry integrity check succeeded


Logical corruption check succeeded
6) Perform a logical backup of the OLR, storing the file in the /home/oracle
directory.
# /u01/app/11.2.0/grid/bin/ocrconfig -local -export
/home/oracle/olr.backup
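
As with the OCR export in step 3, the OLR export can be restored with the matching
-local -import option. For reference only (the Oracle High Availability Services stack
on this node must be stopped first):
# /u01/app/11.2.0/grid/bin/ocrconfig -local -import /home/oracle/olr.backup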
Practices for Lesson 4
In this practice, you will add a third node to your cluster.

Practice 4-1: Adding a Third Node to Your Cluster
The goal of this lab is to extend your cluster to a third node.

Before you start this lab, make sure that you have completed the following steps on
your third node: lab 2-1: 1, 2, 3, 4, 5, 6, 8.

In this practice, you extend your cluster to the third node given to you.
Note
• All the scripts you need to execute this lab are located on your first node in
the /home/oracle/labs/node_addition directory.
• Unless specified otherwise, you are connected on your first node as the grid
user using a terminal session.
1) Set up the ssh user equivalence for the grid user between your first node and your
third node.
$ /home/oracle/solutions/configure_ssh
Setting up SSH user equivalency.
grid@gr7213's password:
grid@gr7214's password:
Checking SSH user equivalency.
gr7212
gr7213
gr7214
$
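
For reference: if the configure_ssh script is not available, the same user equivalence
can be set up with standard OpenSSH commands. A minimal sketch (the script's actual
implementation may differ):
$ ssh-keygen -t rsa            # on each node; accept defaults, empty passphrase
$ ssh-copy-id grid@gr7214      # repeat for each node name in the cluster
$ ssh grid@gr7214 date         # verify: must not prompt for a password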
2) Make sure that you can connect from your first node to the third one without being
prompted for passwords.
$ . /home/oracle/labs/st_env.sh
$ ssh $ST_NODE3 date
Fri Sep 4 10:38:10 EDT 2009
$

3) Make sure that you set up your environment variables correctly for the grid user to
point to your grid installation.
$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
$
4) Check your pre-grid installation for your third node using the Cluster Verification
Utility. This fails because the fixup scripts have not been run on the third node.
$ . /home/oracle/labs/st_env.sh
$
$ cluvfy stage -pre crsinst -n $ST_NODE3

Performing pre-checks for cluster services setup

Checking node reachability...


Node reachability check passed from node "gr7212"

Checking user equivalence...


User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "10.216.52.0" with node(s)
gr7214
TCP connectivity check passed for subnet "10.216.52.0"

Node connectivity passed for subnet "10.216.96.0" with node(s)
gr7214
TCP connectivity check passed for subnet "10.216.96.0"

Node connectivity passed for subnet "10.196.28.0" with node(s)
gr7214
TCP connectivity check passed for subnet "10.196.28.0"

Node connectivity passed for subnet "10.196.180.0" with
node(s) gr7214
TCP connectivity check passed for subnet "10.196.180.0"

Interfaces found on subnet "10.216.52.0" that are likely
candidates for VIP are:
gr7214 eth0:10.216.54.232

Interfaces found on subnet "10.216.96.0" that are likely


candidates for a private interconnect are:
gr7214 eth1:10.216.98.155

Interfaces found on subnet "10.196.28.0" that are likely


candidates for a private interconnect are:
gr7214 eth2:10.196.31.14

Interfaces found on subnet "10.196.180.0" that are likely


candidates for a private interconnect are:
gr7214 eth3:10.196.180.14

Node connectivity check passed

Total memory check passed


Available memory check passed

Swap space check passed


Free disk space check passed for "gr7214:/tmp"
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as
Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
gr7214
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns" s an
Kernel parameter check passed for "semopm" ) ha ฺ
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check failed for "file-max"
Check failed on nodes:
gr7214
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check failed for "wmem_max"
Check failed on nodes:
gr7214
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libgomp-4.1.2"
Package existence check passed for "libaio-0.3.106"
Package existence check passed for "glibc-2.5-24"
Package existence check passed for "compat-libstdc++-33-3.2.3"
Package existence check passed for "elfutils-libelf-0.125"
Package existence check passed for "elfutils-libelf-devel-
0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "libaio-devel-0.3.106"
Package existence check passed for "libgcc-4.1.2"

Package existence check passed for "libstdc++-4.1.2"


Package existence check passed for "libstdc++-devel-4.1.2"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11"
Package existence check passed for "unixODBC-devel-2.2.11"
Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed
Current group ID check passed
Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed


Default user file creation mask check passed

Starting Clock synchronization checks using Network Time
Protocol(NTP)...

NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option
passed

NTP common Time Server Check started...
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP)
passed

Pre-check for cluster services setup was unsuccessful on all
the nodes.
$
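
With output this long, it can help to filter for just the failing checks before
generating the fixup script. A convenience one-liner, not part of the practice:
$ cluvfy stage -pre crsinst -n $ST_NODE3 | grep -i failed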

5) Generate the fixup script for your third node using cluvfy with the -fixup option.
$ cluvfy stage -pre crsinst -n $ST_NODE3 -fixup
Performing pre-checks for cluster services setup

Checking node reachability...


Node reachability check passed from node "host01"

Checking user equivalence...

User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "10.216.52.0" with node(s)


host03
TCP connectivity check passed for subnet "10.216.52.0"

Node connectivity passed for subnet "10.216.96.0" with node(s) e


host03
TCP connectivity check passed for subnet "10.216.96.0"

Node connectivity passed for subnet "10.196.28.0" with node(s)
host03
TCP connectivity check passed for subnet "10.196.28.0"

Node connectivity passed for subnet "10.196.180.0" with
node(s) host03
TCP connectivity check passed for subnet "10.196.180.0"

Interfaces found on subnet "10.216.52.0" that are likely
candidates for VIP are:
host03 eth0:10.216.54.232

Interfaces found on subnet "10.216.96.0" that are likely
candidates for a private interconnect are:
host03 eth1:10.216.98.155
Interfaces found on subnet "10.196.28.0" that are likely
candidates for a private interconnect are:
host03 eth2:10.196.31.14

Interfaces found on subnet "10.196.180.0" that are likely


candidates for a private interconnect are:
host03 eth3:10.196.180.14

Node connectivity check passed

Total memory check passed


Available memory check passed
Swap space check passed
Free disk space check passed for " host03:/tmp"
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"

Membership check for user "grid" in group "oinstall" [as


Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
host03
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl" e
Kernel parameter check passed for "semmns"
r a bl
Kernel parameter check passed for "semopm"
s fe
Kernel parameter check passed for "semmni"
- t r an
Kernel parameter check passed for "shmmax"
o n
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall" s an
Kernel parameter check failed for "file-max" ) ha ฺ
Check failed on nodes:
host03
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check failed for "wmem_max"
Check failed on nodes:
host03
Kernel parameter check passed for "aio-max-nr"

Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libgomp-4.1.2"
Package existence check passed for "libaio-0.3.106"
Package existence check passed for "glibc-2.5-24"
Package existence check passed for "compat-libstdc++-33-3.2.3"
Package existence check passed for "elfutils-libelf-0.125"
Package existence check passed for "elfutils-libelf-devel-
0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "libaio-devel-0.3.106"
Package existence check passed for "libgcc-4.1.2"
Package existence check passed for "libstdc++-4.1.2"
Package existence check passed for "libstdc++-devel-4.1.2"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11"
Package existence check passed for "unixODBC-devel-2.2.11"

Package existence check passed for "ksh-20060214"


Check for multiple users with UID value 0 passed
Current group ID check passed
Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed


Default user file creation mask check passed

Starting Clock synchronization checks using Network Time


Protocol(NTP)...

NTP Configuration file check started...


NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option
passed

NTP common Time Server Check started...
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP)
passed

Fixup information has been generated for following node(s):
host03
Please run the following script on each node as "root" user to
execute the fixups:
'/tmp/CVU_11.2.0.1.0_grid/runfixup.sh'

Pre-check for cluster services setup was unsuccessful on all


the nodes.
6) Run the fixup script as directed.
$ ssh root@${ST_NODE3} /tmp/CVU_11.2.0.1.0_grid/runfixup.sh
root@host03's password:
Response file being used is
:/tmp/CVU_11.2.0.1.0_grid/fixup.response
Enable file being used is
:/tmp/CVU_11.2.0.1.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log
Setting Kernel Parameters...
fs.file-max = 327679

fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576
uid=502(grid) gid=502(oinstall)
groups=505(asmdba),506(asmoper),504(asmadmin),502(oinstall)
$
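
If the generated runfixup.sh cannot be used, the kernel settings it reports can also be
applied by hand with sysctl, using the values shown in the output above. A sketch (add
the same entries to /etc/sysctl.conf so they persist across reboots):
# sysctl -w fs.file-max=6815744
# sysctl -w net.core.wmem_max=1048576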
7) Using the Cluster Verification Utility, make sure that you can add your third node to
the cluster.
$ . /home/oracle/labs/st_env.sh
$
$ cluvfy stage -pre nodeadd -n $ST_NODE3
Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "host01"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth3"
Node connectivity passed for interface "eth3"

Node connectivity check passed

Checking CRS integrity...

CRS integrity check passed

Checking shared resources...


Shared resources check for node addition failed

Check failed on nodes: <-------- You can safely ignore


host03

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "10.216.52.0" with node(s)


host01,host02,host03
TCP connectivity check passed for subnet "10.216.52.0"

Node connectivity passed for subnet "10.216.96.0" with node(s)


host01,host02,host03
TCP connectivity check passed for subnet "10.216.96.0"

Node connectivity passed for subnet "10.196.28.0" with node(s)


host01,host02,host03
TCP connectivity check passed for subnet "10.196.28.0"

Node connectivity passed for subnet "10.196.180.0" with
node(s) host01,host02,host03
TCP connectivity check failed for subnet "10.196.180.0"

Interfaces found on subnet "10.216.52.0" that are likely
candidates for VIP are:
host01 eth0:10.216.54.233
host02 eth0:10.216.54.234
host03 eth0:10.216.54.235

Interfaces found on subnet "10.216.96.0" that are likely
candidates for a private interconnect are:
host01 eth1:10.216.101.101
host02 eth1:10.216.96.144
host03 eth1:10.216.100.226

Interfaces found on subnet "10.196.28.0" that are likely
candidates for a private interconnect are:
host01 eth2:10.196.31.15
host02 eth2:10.196.31.16
host03 eth2:10.196.31.17

Interfaces found on subnet "10.196.180.0" that are likely


candidates for a private interconnect are:
host01 eth3:10.196.180.15 eth3:10.196.181.239
eth3:10.196.183.15 eth3:10.196.182.229 eth3:10.196.180.232
host02 eth3:10.196.180.16 eth3:10.196.181.231
eth3:10.196.181.244
host03 eth3:10.196.180.17

Node connectivity check passed

Total memory check passed


Available memory check passed
Swap space check passed
Free disk space check passed for "host03:/tmp"
User existence check passed for "grid"

Run level check passed


Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni" e
Kernel parameter check passed for "shmall"
r a bl
Kernel parameter check passed for "file-max"
s fe
Kernel parameter check passed for "ip_local_port_range"
- t r an
Kernel parameter check passed for "rmem_default"
o n
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default" s an
Kernel parameter check passed for "wmem_max" ) ha ฺ
Kernel parameter check passed for "aio-max-nr"
ฺ c om uide
- i n c tG
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libgomp-4.1.2"
Package existence check passed for "libaio-0.3.106"
Package existence check passed for "glibc-2.5-24"
Package existence check passed for "compat-libstdc++-33-3.2.3"
Package existence check passed for "elfutils-libelf-0.125"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "libaio-devel-0.3.106"
Package existence check passed for "libgcc-4.1.2"
Package existence check passed for "libstdc++-4.1.2"
Package existence check passed for "libstdc++-devel-4.1.2"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11"
Package existence check passed for "unixODBC-devel-2.2.11"
Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed

User "grid" is not part of "root" group. Check passed

Starting Clock synchronization checks using Network Time


Protocol(NTP)...

NTP Configuration file check started...

NTP Configuration file check passed

Checking daemon liveness...


Liveness check passed for "ntpd"

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option


passed

NTP common Time Server Check started...


Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP)
passed

Pre-check for node addition was unsuccessful on all the nodes.
$
8) Add your third node to the cluster from your first node:
$ . /home/oracle/labs/st_env.sh
$
$ cd $ORACLE_HOME/oui/bin
$
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={$ST_NODE3}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={$ST_NODE3_VIP}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3755 MB Passed

Oracle Universal Installer, Version 11.2.0.1.0 Production


Copyright (C) 1999, 2009, Oracle. All rights reserved.

Performing tests to see whether nodes host02,host03 are


available
..............................................................
. 100% Done.

.
--------------------------------------------------------------
---------------
Cluster Node Addition Summary
Global Settings
Source: /u01/app/11.2.0/grid
New Nodes

Space Requirements
New Nodes
host03
/: Required 4.76GB : Available 32.38GB
Installed Products
Product Names
Oracle Grid Infrastructure 11.2.0.1.0
Sun JDK 1.5.0.17.0
Installer SDK Component 11.2.0.1.0
Oracle One-Off Patch Installer 11.2.0.0.2
Oracle Universal Installer 11.2.0.1.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.2
Oracle DBCA Deconfiguration 11.2.0.1.0
Oracle RAC Deconfiguration 11.2.0.1.0
Oracle Quality of Service Management (Server) 11.2.0.1.0
Installation Plugin Files 11.2.0.1.0
Universal Storage Manager Files 11.2.0.1.0
Oracle Text Required Support Files 11.2.0.1.0
Automatic Storage Management Assistant 11.2.0.1.0
Oracle Database 11g Multimedia Files 11.2.0.1.0
Oracle Multimedia Java Advanced Imaging 11.2.0.1.0
Oracle Globalization Support 11.2.0.1.0
Oracle Multimedia Locator RDBMS Files 11.2.0.1.0
Bali Share 1.1.18.0.0
Oracle Core Required Support Files 11.2.0.1.0
Oracle Database Deconfiguration 11.2.0.1.0
Oracle Quality of Service Management (Client) 11.2.0.1.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.1.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.1.0
Oracle JDBC/OCI Instant Client 11.2.0.1.0
Oracle Multimedia Client Option 11.2.0.1.0
LDAP Required Support Files 11.2.0.1.0
Character Set Migration Utility 11.2.0.1.0
Perl Interpreter 5.10.0.0.1
PL/SQL Embedded Gateway 11.2.0.1.0
OLAP SQL Scripts 11.2.0.1.0
Database SQL Scripts 11.2.0.1.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.1.0
SQL*Plus Files for Instant Client 11.2.0.1.0
Oracle Net Required Support Files 11.2.0.1.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client
11.2.0.1.0
Enterprise Manager Minimal Integration 11.2.0.1.0
XML Parser for Java 11.2.0.1.0
Oracle Security Developer Tools 11.2.0.1.0
Oracle Wallet Manager 11.2.0.1.0

Enterprise Manager plugin Common Files 11.2.0.1.0


Platform Required Support Files 11.2.0.1.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.1.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.2
Deinstallation Tool 11.2.0.1.0
Oracle Java Client 11.2.0.1.0
Cluster Verification Utility Files 11.2.0.1.0
Oracle Notification Service (eONS) 11.2.0.1.0
Oracle LDAP administration 11.2.0.1.0
Cluster Verification Utility Common Files 11.2.0.1.0
Oracle Clusterware RDBMS Files 11.2.0.1.0
Oracle Locale Builder 11.2.0.1.0
Oracle Globalization Support 11.2.0.1.0
Buildtools Common Files 11.2.0.1.0
Oracle RAC Required Support Files-HAS 11.2.0.1.0
SQL*Plus Required Support Files 11.2.0.1.0
XDK Required Support Files 11.2.0.1.0
Agent Required Support Files 10.2.0.4.2
Parser Generator Required Support Files 11.2.0.1.0
Precompiler Required Support Files 11.2.0.1.0
Installation Common Files 11.2.0.1.0
Required Support Files 11.2.0.1.0
Oracle JDBC/THIN Interfaces 11.2.0.1.0
Oracle Multimedia Locator 11.2.0.1.0
Oracle Multimedia 11.2.0.1.0
HAS Common Files 11.2.0.1.0
Assistant Common Files 11.2.0.1.0
PL/SQL 11.2.0.1.0
HAS Files for DB 11.2.0.1.0
Oracle Recovery Manager 11.2.0.1.0
Oracle Database Utilities 11.2.0.1.0
Oracle Notification Service 11.2.0.0.0
SQL*Plus 11.2.0.1.0
Oracle Netca Client 11.2.0.1.0
Oracle Net 11.2.0.1.0
Oracle JVM 11.2.0.1.0
Oracle Internet Directory Client 11.2.0.1.0
Oracle Net Listener 11.2.0.1.0
Cluster Ready Services Files 11.2.0.1.0
Oracle Database 11g 11.2.0.1.0
--------------------------------------------------------------
---------------

Instantiating scripts for add node (Tuesday, September 1, 2009


2:46:41 PM EDT)
.
1% Done.

Instantiation of add node scripts complete

Copying to remote nodes (Tuesday, September 1, 2009 2:46:45 PM


EDT)
..............................................................
.................................
96% Done.
Home copied to new nodes

Saving inventory on nodes (Tuesday, September 1, 2009 2:51:33


PM EDT)
.
100% Done. e
Save inventory complete
WARNING: A new inventory has been created on one or more nodes
in this session. However, it has not yet been registered as
the central inventory of this system.
To register the new inventory please run the script at
'/u01/app/oraInventory/orainstRoot.sh' with root privileges on
nodes 'host03'.
If you do not register the inventory, you may not be able to
update or patch the products you installed.

The following configuration scripts need to be executed as the
"root" user in each cluster node.
/u01/app/oraInventory/orainstRoot.sh #On nodes host03
/u01/app/11.2.0/grid/root.sh #On nodes host03
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/11.2.0/grid was
successful.
Please check '/tmp/silentInstall.log' for more details.
$

9) Connected as the root user on your third node using a terminal session, execute the
following scripts: /u01/app/oraInventory/orainstRoot.sh and
/u01/app/11.2.0/grid/root.sh.
[grid]$ . /home/oracle/labs/st_env.sh
[grid]$ ssh root@${ST_NODE3}
root@host03's password:
Last login: Tue Sep 29 09:59:03 2009 from host01.example.com
# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.


#
# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:


ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory:


[/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ... e
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

2009-09-01 15:05:27: Parsing the host name
2009-09-01 15:05:27: Checking for super user privileges
2009-09-01 15:05:27: User has super user privileges
Using configuration parameter file:
/u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but
found an active CSS daemon on node host01, number 1, and is
terminating
An active cluster was found during exclusive startup,
restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'host03'
CRS-2676: Start of 'ora.mdnsd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host03'
CRS-2676: Start of 'ora.gipcd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host03'
CRS-2676: Start of 'ora.gpnpd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host03'
CRS-2676: Start of 'ora.cssdmonitor' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host03'
CRS-2672: Attempting to start 'ora.diskmon' on 'host03'
CRS-2676: Start of 'ora.diskmon' on 'host03' succeeded
CRS-2676: Start of 'ora.cssd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host03'

CRS-2676: Start of 'ora.ctssd' on 'host03' succeeded


CRS-2672: Attempting to start 'ora.drivers.acfs' on 'host03'
CRS-2676: Start of 'ora.drivers.acfs' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host03'
CRS-2676: Start of 'ora.asm' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host03'
CRS-2676: Start of 'ora.crsd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host03'
CRS-2676: Start of 'ora.evmd' on 'host03' succeeded
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'.. e
Operation successful.
host03 2009/09/01 15:09:15
/u01/app/11.2.0/grid/cdata/host03/backup_20090901_150915.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 4095 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
#
10) From your first node, check that your cluster is integrated and that the cluster is not
divided into separate parts.
$ . /home/oracle/labs/st_env.sh
$
$ cluvfy stage -post nodeadd -n $ST_NODE3

Performing post-checks for node addition

Checking node reachability...


Node reachability check passed from node "host01"

Checking user equivalence...


User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth3"


Node connectivity passed for interface "eth3"

Node connectivity check passed

Checking cluster integrity...

Cluster integrity check passed

Checking CRS integrity...

CRS integrity check passed

Checking shared resources...

Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "10.216.52.0" with node(s)
host03,host02,host01
TCP connectivity check passed for subnet "10.216.52.0"

Node connectivity passed for subnet "10.216.96.0" with node(s)
host03,host02,host01
TCP connectivity check passed for subnet "10.216.96.0"

Node connectivity passed for subnet "10.196.28.0" with node(s)


host03,host02,host01
TCP connectivity check passed for subnet "10.196.28.0"

Node connectivity passed for subnet "10.196.180.0" with


node(s) host03,host02,host01
TCP connectivity check passed for subnet "10.196.180.0"

Interfaces found on subnet "10.216.52.0" that are likely


candidates for VIP are:
host03 eth0:10.216.54.235
host02 eth0:10.216.54.234
host01 eth0:10.216.54.233

Interfaces found on subnet "10.216.96.0" that are likely


candidates for a private interconnect are:
host03 eth1:10.216.100.226
host02 eth1:10.216.96.144
host01 eth1:10.216.101.101

Interfaces found on subnet "10.196.28.0" that are likely


candidates for a private interconnect are:
host03 eth2:10.196.31.17
host02 eth2:10.196.31.16
host01 eth2:10.196.31.15

Interfaces found on subnet "10.196.180.0" that are likely e


candidates for a private interconnect are:
host03 eth3:10.196.180.17 eth3:10.196.182.229
eth3:10.196.180.224
host02 eth3:10.196.180.16 eth3:10.196.181.231
eth3:10.196.181.244
host01 eth3:10.196.180.15 eth3:10.196.181.239
eth3:10.196.183.15 eth3:10.196.180.232

Node connectivity check passed

Checking node application existence...

Checking existence of VIP node application (required)
Check passed.

Checking existence of ONS node application (optional)
Check passed.

Checking existence of GSD node application (optional)
Check ignored.

Checking existence of EONS node application (optional)


Check passed.

Checking existence of NETWORK node application (optional)


Check passed.

Checking Single Client Access Name (SCAN)...

Checking name resolution setup for "cl7215-


scan.cl7215.example.com"...

Verification of SCAN VIP and Listener setup passed

User "grid" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...


Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...


CTSS resource check passed

Querying CTSS for time offset on all nodes...


Query of CTSS for time offset passed

Check CTSS state started...


CTSS is in Observer state. Switching over to clock
synchronization checks using NTP

Starting Clock synchronization checks using Network Time
Protocol(NTP)...

NTP Configuration file check started...
NTP Configuration file check passed

Checking daemon liveness...
Liveness check passed for "ntpd"

NTP daemon slewing option check passed

NTP daemon's boot time configuration check for slewing option
passed

NTP common Time Server Check started...
Check of common NTP Time Server passed

Clock time offset check from NTP Time Server started...
Clock time offset check passed

Clock synchronization check using Network Time Protocol(NTP)
passed

Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful.


$
11) Make sure that the FRA and ACFS ASM disk groups are mounted on all three nodes.

[grid@host01]$ crsctl stat res ora.FRA.dg -t


--------------------------------------------------------------
------------------


NAME TARGET STATE SERVER
STATE_DETAILS
--------------------------------------------------------------
------------------
Local Resources
--------------------------------------------------------------
------------------
ora.FRA.dg
ONLINE ONLINE host01
ONLINE ONLINE host02
OFFLINE OFFLINE host03

[grid@host01]$ crsctl stat res ora.ACFS.dg -t


--------------------------------------------------------------
------------------
NAME TARGET STATE SERVER
STATE_DETAILS
--------------------------------------------------------------
------------------
Local Resources
--------------------------------------------------------------
------------------
ora.ACFS.dg
ONLINE ONLINE host01
ONLINE ONLINE host02
OFFLINE OFFLINE host03

[grid@host01]$ ssh $ST_NODE3
Last login: Sat Sep 12 13:16:53 2009 from host01.example.com
[grid@host03]$ . oraenv
ORACLE_SID = [grid] ? +ASM3
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
C [grid@host03]$ asmcmd mount ACFS
[grid@host03]$ asmcmd mount FRA
[grid@host03]$ exit
logout
Connection to host03 closed.
[grid@host01]$ crsctl stat res ora.FRA.dg -t
--------------------------------------------------------------
------------------
NAME TARGET STATE SERVER
STATE_DETAILS
--------------------------------------------------------------
------------------
Local Resources
--------------------------------------------------------------
------------------
ora.FRA.dg
ONLINE ONLINE host01
ONLINE ONLINE host02


ONLINE ONLINE host03

[grid@host01]$ crsctl stat res ora.ACFS.dg -t


--------------------------------------------------------------
------------------
NAME TARGET STATE SERVER
STATE_DETAILS
--------------------------------------------------------------
------------------
Local Resources
--------------------------------------------------------------
------------------
ora.ACFS.dg e
ONLINE ONLINE host01
r a bl
ONLINE ONLINE host02
s fe
ONLINE ONLINE host03
- t r an
[grid@host01 ~]$
o n
$
s an
) ha ฺ
ฺ c om uide
- i n c tG
a cs uden
a n @ St
g ฺ t t h is
e h en use
( c he se to
T an licen
en g
e e H
C h

Practices for Lesson 5
In this practice, you will use Oracle Clusterware to protect the Apache application.

Practice 5-1: Protecting the Apache Application
In this practice, you use Oracle Clusterware to protect the Apache application. To do this,
you create an application VIP for Apache (HTTPD), an action script, and a resource.
1) As root, source the environment script /home/oracle/labs/st_env.sh. Then
verify that the Apache RPMs (httpd, httpd-devel, and httpd-manual) are
installed on your first two nodes.
# su -
Password: oracle <<password not displayed

# . /home/oracle/labs/st_env.sh

# rpm -qa|grep httpd


httpd-2.2.3-22.0.1.el5
httpd-devel-2.2.3-22.0.1.el5
httpd-manual-2.2.3-22.0.1.el5

[root]# . /home/oracle/labs/st_env.sh

Repeat on second node

[root]# ssh $ST_NODE2 rpm -qa|grep httpd
root@host02's password: oracle <<password not displayed

httpd-2.2.3-22.0.1.el5
httpd-devel-2.2.3-22.0.1.el5
httpd-manual-2.2.3-22.0.1.el5
[root]#
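
Note: The st_env.sh script itself is not reproduced in this guide. Judging
only from the variables the practices reference, it evidently exports the
classroom node names and node lists; a minimal sketch is shown below, and the
exact variable values are assumptions for illustration only.

# Hypothetical sketch of /home/oracle/labs/st_env.sh (values assumed)
export ST_NODE1=host01
export ST_NODE2=host02
export ST_NODE3=host03
export ST_NODE_LIST=host01,host02,host03
export ST_NODE_LIST2=host01,host02,host03

Because the script only sets environment variables, it is sourced with
". /home/oracle/labs/st_env.sh" rather than executed, so the variables
persist in the current shell.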

2) As the root user, start the Apache application on your first node with the
apachectl start command.
h e e # apachectl start
From a VNC session on one of your three nodes, access the Apache test page on your
first node. For example, if your first node was named host01, the HTTP address
would look something like this:
http://host01.example.com

After you have determined that Apache is working properly, repeat this step on your
second host. After you have determined that Apache is working correctly on both
nodes, you can stop Apache with the apachectl stop command.

3) Create an action script to control the application. This script must be accessible by all
nodes on which the application resource can be located.
a) As the root user, create a script on the first node called apache.scr in
/usr/local/bin that will start, stop, check status, and clean up if the
application does not exit cleanly. Make sure that the host specified in the
WEBPAGECHECK variable is your first node. Use the
/home/oracle/labs/less_05/apache.scr.tpl file as a template for
creating the script. Make the script executable and test the script.
# cp /home/oracle/labs/less_05/apache.tpl
/usr/local/bin/apache.scr
# vi /usr/local/bin/apache.scr

#!/bin/bash

HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://host01.example.com:80/icons/apache_pb.gif
case $1 in
'start')
/usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
RET=$?
;;
'stop')
/usr/sbin/apachectl -k stop


RET=$?
;;
'clean')
/usr/sbin/apachectl -k stop
RET=$?
;;
'check')
/usr/bin/wget -q --delete-after $WEBPAGECHECK
RET=$?
;;
*)
RET=0
;;
esac
# 0: success; 1 : error
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi

Save the file

# chmod 755 /usr/local/bin/apache.scr

# apache.scr start
Verify web page
# apache.scr stop
Web page should no longer display
b) As root, create a script on the second node called apache.scr in
/usr/local/bin that will start, stop, check status, and clean up if the
application does not exit cleanly. Make sure that the host specified in the
WEBPAGECHECK variable is your second node. Use the
/home/oracle/labs/less_05/apache.scr.tpl file as a template for
creating the script. Make the script executable and test the script.
# ssh $ST_NODE2
root@host02's password: oracle << password not displayed
# cp /home/oracle/labs/less_05/apache.tpl
/usr/local/bin/apache.scr
# vi /usr/local/bin/apache.scr

#!/bin/bash

HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://host02.example.com:80/icons/apache_pb.gif
case $1 in
'start')
/usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
RET=$?


;;
'stop')
/usr/sbin/apachectl -k stop
RET=$?
;;
'clean')
/usr/sbin/apachectl -k stop
RET=$?
;;
'check')
/usr/bin/wget -q --delete-after $WEBPAGECHECK
RET=$?
;; e
*)
r a bl
RET=0
s fe
;;
- t r an
esac
o n
# 0: success; 1 : error
if [ $RET -eq 0 ]; then s an
exit 0 ) ha ฺ
else
ฺ c om uide
exit 1
- i n c tG
fi
a cs uden
Save the file a n @ St
g ฺ t t h is
h en use
# chmod 755 /usr/local/bin/apache.scr

# apache.scr start
Verify web page
# apache.scr stop
Web page should no longer display

4) Next, you must validate the return code of a check failure using the new script. The
Apache server should NOT be running on either node. Run apache.scr check
and immediately test the return code by issuing an echo $? command. This must be
run immediately after the apache.scr check command because the shell
variable $? holds the exit code of the previous command run from the shell. An
unsuccessful check should return an exit code of 1. You should do this on both nodes.
# apache.scr check
# echo $?
1

Repeat on second node


5) As the grid user, create a server pool for the resource called myApache_sp. This
pool contains your first two hosts and is a child of the Generic pool.
$ id
uid=502(grid) gid=501(oinstall)
groups=501(oinstall),504(asmadmin),505(asmdba),506(asmoper)

$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
$ /u01/app/11.2.0/grid/bin/crsctl add serverpool myApache_sp
-attr "PARENT_POOLS=Generic, SERVER_NAMES=$ST_NODE1 $ST_NODE2"
ble
fe r a
6) Check the status of the new pool on your cluster. ans
n - t r
$ /u01/app/11.2.0/grid/bin/crsctl status server -f
a no
NAME=host01
STATE=ONLINE h a s
ACTIVE_POOLS=Generic myApache_sp ora.orcl m ) e ฺ
o
ฺc Gu i d
STATE_DETAILS=
- i n c t
NAME=host02
STATE=ONLINE
ACTIVE_POOLS=Generic myApache_sp ora.orcl
STATE_DETAILS=
...

7) Add the Apache Resource, which can be called myApache, to the myApache_sp
subpool that has Generic as a parent. This must be performed as root because the
resource requires root authority to listen on the default privileged port 80.
Set CHECK_INTERVAL to 30, RESTART_ATTEMPTS to 2, and PLACEMENT
to restricted.
# su -
Password: oracle << Password not displayed
# id
uid=0(root) gid=0(root)
groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel
)
# /u01/app/11.2.0/grid/bin/crsctl add resource myApache -type
cluster_resource -attr
"ACTION_SCRIPT=/usr/local/bin/apache.scr,
PLACEMENT='restricted', SERVER_POOLS=myApache_sp,
CHECK_INTERVAL='30', RESTART_ATTEMPTS='2'"


8) View the static attributes of the myApache resource with the crsctl status
resource myApache -p -f command.
# /u01/app/11.2.0/grid/bin/crsctl status resource myApache -f
NAME=myApache
TYPE=cluster_resource
STATE=OFFLINE
TARGET=OFFLINE
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=/usr/local/bin/apache.scr
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
AUTO_START=restore
CARDINALITY=1
CARDINALITY_ID=0
CHECK_INTERVAL=30
CREATION_SEED=123
CURRENT_RCOUNT=0
DEFAULT_TEMPLATE=
DEGREE=1
DESCRIPTION=
ENABLED=1
FAILOVER_DELAY=0
FAILURE_COUNT=0
FAILURE_HISTORY=
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
ID=myApache
INCARNATION=0
LAST_FAULT=0
LAST_RESTART=0
LAST_SERVER=
LOAD=1
LOGGING_LEVEL=1
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=2
SCRIPT_TIMEOUT=60
SERVER_POOLS=myApache_sp
START_DEPENDENCIES=
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STATE_CHANGE_VERS=0
STATE_DETAILS=
STOP_DEPENDENCIES=


STOP_TIMEOUT=0
UPTIME_THRESHOLD=1h

9) Use the crsctl start resource myApache command to start the new
resource. Use the crsctl status resource myApache command to confirm
that the resource is online on the first node. If you like, open a browser and point it to
your first node as shown in step 2.
# /u01/app/11.2.0/grid/bin/crsctl start resource myApache
CRS-2672: Attempting to start 'myApache' on 'host01'
CRS-2676: Start of 'myApache' on 'host01' succeeded

# /u01/app/11.2.0/grid/bin/crsctl status resource myApache


NAME=myApache
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on host01

10) Confirm that Apache is NOT running on your second node. The easiest way to do this
is to check for the running /usr/sbin/httpd -k start -f
/etc/httpd/conf/httpd.conf processes with the ps command.
# . /home/oracle/labs/st_env.sh
# ssh $ST_NODE2 ps -ef|grep -i "httpd -k"
root@host02's password: oracle << password is not displayed
#

11) Next, simulate a node failure on your first node using the init command as root.
Before issuing the reboot on the first node, open a VNC session on the second node
and as the root user, execute the
/home/oracle/labs/less_05/monitor.sh script so that you can monitor
the failover.
ON THE FIRST NODE AS THE root USER

# init 6 To initiate a reboot, simulating a node failure

ON THE second NODE AS THE root USER

# cat /home/oracle/labs/less_05/monitor.sh
while true
do
ps -ef | grep -i "httpd -k"
sleep 1
done


# /home/oracle/labs/less_05/monitor.sh Execute this on the
second node

root 21940 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k
root 21948 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k
root 21951 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k
...
apache 22123 22117 0 11:01 ? 00:00:00
/usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
apache 22124 22117 0 11:01 ? 00:00:00
/usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
apache 22125 22117 0 11:01 ? 00:00:00
/usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
...
Issue a Ctrl-C to stop the monitoring
h a s
# m ) e ฺ
o
ฺc Gu i d
- i n c t
c s e n
12) Verify the failover from the first node to the second with the crsctl stat
resource myApache -t command.

# /u01/app/11.2.0/grid/bin/crsctl stat resource myApache -t

--------------------------------------------------------------
NAME TARGET STATE SERVER
STATE_DETAILS
--------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------
myApache
1 ONLINE ONLINE host02

Practices for Lesson 6
In this lesson you will work with Oracle Clusterware log files and learn to use the
ocrdump and cluvfy utilities.

Practice 6-1: Working with Log Files
In this practice, you will examine the Oracle Clusterware alert log and then package
various log files into an archive format suitable to send to My Oracle Support.
1) While connected as the grid user to your first node, locate and view the contents of
the Oracle Clusterware alert log.
$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid

$ . /home/oracle/labs/st_env.sh

$ cd $ORACLE_HOME/log/$ST_NODE1 ble
$ view alert$ST_NODE1.log fe r a
Oracle Database 11g Clusterware Release 11.2.0.1.0 - an s
Production Copyright 1996, 2009 Oracle. All rights reserved. n - t r
o
2009-08-24 14:32:24.580
[client(12578)]CRS-2106:The OLR location
/u01/app/11.2.0/grid/cdata/host01.olr is inaccessible. Details
in /u01/app/11.2.0/grid/log/host01/client/ocrconfig_12578.log.
2009-08-24 14:32:24.933
[client(12578)]CRS-2101:The OLR was formatted using version 3.
2009-08-24 14:32:58.972
[ohasd(12895)]CRS-2112:The OLR service started on node host01.
2009-08-24 14:33:00.090
[ohasd(12895)]CRS-2772:Server 'host01' has been assigned to
pool 'Free'.
2009-08-24 14:33:58.187
[ohasd(12895)]CRS-2302:Cannot get GPnP profile. Error
CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2009-08-24 14:34:02.853
[cssd(14068)]CRS-1713:CSSD daemon is started in exclusive mode


2009-08-24 14:34:05.230
[cssd(14068)]CRS-1709:Lease acquisition failed for node host01
because no voting file has been configured; Details at
(:CSSNM00031:) in
/u01/app/11.2.0/grid/log/host01/cssd/ocssd.log
2009-08-24 14:34:22.893
[cssd(14068)]CRS-1601:CSSD Reconfiguration complete. Active
nodes are host01

:q!

2) Navigate to the Oracle Cluster Synchronization Services daemon log directory and
determine whether any log archives exist.
$ cd ./cssd

$ pwd


/u01/app/11.2.0/grid/log/host01/cssd

$ ls -alt ocssd*
-rw-r--r-- 1 grid oinstall 7564217 Sep 1 14:06 ocssd.log
-rw-r--r-- 1 grid oinstall 52606470 Sep 1 06:06 ocssd.l01
-rw-r--r-- 1 grid oinstall 52527387 Aug 29 22:26 ocssd.l02
-rw-r--r-- 1 root root 52655425 Aug 27 14:18 ocssd.l03
-rw-r--r-- 1 grid oinstall 114 Aug 24 14:34 ocssd.trc

3) Switch to the root user and set up the environment variables for the Grid
Infrastructure. Change to the /home/oracle/labs directory and run the
diagcollection.pl script to gather all log files that can be sent to My Oracle
Support for problem analysis.
$ su -
Password: oracle << Password is not displayed

# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid

# cd /home/oracle/labs

# diagcollection.pl --collect --crshome /u01/app/11.2.0/grid
Production Copyright 2004, 2008, Oracle. All rights reserved
Cluster Ready Services (CRS) diagnostic collection tool
The following CRS diagnostic archives will be created in the
local directory.
crsData_host01_20090901_1413.tar.gz -> logs,traces and cores
from CRS home. Note: core files will be packaged only with the
--core option.
ocrData_host01_20090901_1413.tar.gz -> ocrdump, ocrcheck etc
coreData_host01_20090901_1413.tar.gz -> contents of CRS core
files in text format
osData_host01_20090901_1413.tar.gz -> logs from Operating
System
Collecting crs data
/bin/tar:
log/host01/agent/crsd/orarootagent_root/orarootagent_root.log:
file changed as we read it
Collecting OCR data
Collecting information from core files
No corefiles found
Collecting OS logs
4) List the resulting log file archives that were generated with the
diagcollection.pl script.
# ls -la *tar.gz


-rw-r--r-- 1 root root 89083873 Sep 1 14:17
crsData_host01_20090901_1413.tar.gz
-rw-r--r-- 1 root root 11821 Sep 1 14:18
ocrData_host01_20090901_1413.tar.gz
-rw-r--r-- 1 root root 98372 Sep 1 14:18
osData_host01_20090901_1413.tar.gz

5) Exit the switch user command to return to the grid account.


# exit
$

Practice 6-2: Working with OCRDUMP
In this practice, you will work with the OCRDUMP utility and dump the binary file into
both text and XML representations.
1) While connected to the grid account, dump the contents of the OCR to the standard
output and count the number of lines of output.
$ ocrdump -stdout | wc -l
469

2) Switch to the root user, dump the contents of the OCR to the standard output and
count the number of lines of output. Compare your results with the previous step.
How do the results differ?
$ su -
Password: oracle << Password is not displayed

# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid

# ocrdump -stdout | wc -l
3243

3) Dump the first 25 lines of the OCR to standard output using XML format.
# ocrdump -stdout -xml | head -25
<OCRDUMP>
<TIMESTAMP>09/01/2009 16:55:11</TIMESTAMP>
<COMMAND>/u01/app/11.2.0/grid/bin/ocrdump.bin -stdout -xml
</COMMAND>

<KEY>
<NAME>SYSTEM</NAME>
<VALUE_TYPE>UNDEF</VALUE_TYPE>
<VALUE><![CDATA[]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>

<KEY>
<NAME>SYSTEM.version</NAME>
<VALUE_TYPE>UB4 (10)</VALUE_TYPE>
<VALUE><![CDATA[5]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>


4) Create an XML file dump of the OCR in the /home/oracle/labs directory.
Name the dump file ocr_current_dump.xml.
# ocrdump -xml /home/oracle/labs/ocr_current_dump.xml

5) Find the node and directory that contains the automatic backup of the OCR from 24
hours ago.
# ocrconfig -showbackup

host02 2009/09/01 16:17:55
/u01/app/11.2.0/grid/cdata/cluster01/backup00.ocr

host02 2009/09/01 12:17:54
/u01/app/11.2.0/grid/cdata/cluster01/backup01.ocr

host02 2009/09/01 08:17:53
/u01/app/11.2.0/grid/cdata/cluster01/backup02.ocr

host02 2009/08/31 04:17:44
/u01/app/11.2.0/grid/cdata/cluster01/day.ocr

host01 2009/08/24 18:53:34
/u01/app/11.2.0/grid/cdata/cluster01/week.ocr

host02 2009/08/27 21:37:49
/u01/app/11.2.0/grid/cdata/cluster01/backup_20090827_213749.ocr
h e e to u
6) Copy the 24 hour old automatic backup of the OCR into the /home/oracle/labs
directory. This is not a dump, but rather an actual backup of the OCR.
Note: It may be necessary to use scp if the file is located on a different node. Be sure
to use your cluster name in place of cluster01 in the path.
# . /home/oracle/labs/st_env.sh
# cp /u01/app/11.2.0/grid/cdata/cluster01/day.ocr
/home/oracle/labs/day.ocr

>>>> Or <<<<<

# scp $ST_NODE2:/u01/app/11.2.0/grid/cdata/cluster01/day.ocr
/home/oracle/labs/day.ocr
root@host02's password: oracle << Password is not displayed
day.ocr 100% 7436KB 7.3MB/s 00:00

7) Dump the contents of the day.ocr backup OCR file in XML format saving the file
in the /home/oracle/labs directory. Name the file day_ocr.xml.
# ocrdump -xml -backupfile /home/oracle/labs/day.ocr
/home/oracle/labs/day_ocr.xml


8) Compare the differences between the day_ocr.xml file and the
ocr_current_dump.xml file to determine all changes made to the OCR in the
last 24 hours. Exit the switch user command when done.
# diff /home/oracle/labs/day_ocr.xml
/home/oracle/labs/ocr_current_dump.xml
3,5c3,4
< <TIMESTAMP>09/01/2009 17:25:28</TIMESTAMP>
< <DEVICE>/home/oracle/labs/day.ocr</DEVICE>
< <COMMAND>/u01/app/11.2.0/grid/bin/ocrdump.bin -xml -
backupfile /home/oracle/labs/day.ocr
/home/oracle/labs/day_ocr.xml </COMMAND>
---
> <TIMESTAMP>09/01/2009 17:17:56</TIMESTAMP>
> <COMMAND>/u01/app/11.2.0/grid/bin/ocrdump.bin -xml
/home/oracle/labs/ocr_current_dump2 </COMMAND>
8438c8437
< <VALUE><![CDATA[2009/08/31 00:17:43]]></VALUE>
---
> <VALUE><![CDATA[2009/09/01 16:17:55]]></VALUE>
8486c8485
< <VALUE><![CDATA[2009/08/30 20:17:42]]></VALUE>
---
> <VALUE><![CDATA[2009/09/01 12:17:54]]></VALUE>
8534c8533
< <VALUE><![CDATA[2009/08/30 16:17:40]]></VALUE>
---
> <VALUE><![CDATA[2009/09/01 08:17:53]]></VALUE>
8582c8581
< <VALUE><![CDATA[2009/08/29 04:17:29]]></VALUE>
---
> <VALUE><![CDATA[2009/08/31 04:17:44]]></VALUE>
8630c8629
< <VALUE><![CDATA[2009/08/30 04:17:36]]></VALUE>
---
> <VALUE><![CDATA[2009/09/01 04:17:51]]></VALUE>

# exit
$

Practice 6-3: Working with CLUVFY
In this practice, you will work with CLUVFY to verify the state of various cluster
components.
1) Determine the location of the cluvfy utility and its configuration file.
$ . /home/oracle/labs/st_env.sh
$ which cluvfy
/u01/app/11.2.0/grid/bin/cluvfy

$ cd $ORACLE_HOME/cv/admin
$ pwd
/u01/app/11.2.0/grid/cv/admin

$ cat cvu_config
# Configuration file for Cluster Verification Utility(CVU)
# Version: 011405
#
# NOTE:
# 1._ Any line without a '=' will be ignored
# 2._ Since the fallback option will look into the environment
variables,
# please have a component prefix(CV_) for each property to
define a
# namespace.
#

#Nodes for the cluster. If CRS home is not installed, this
list will be
#picked up when -n all is mentioned in the commandline
argument.
#CV_NODE_ALL=

#if enabled, cvuqdisk rpm is required on all nodes
CV_RAW_CHECK_ENABLED=TRUE

# Fallback to this distribution id
CV_ASSUME_DISTID=OEL4

# Whether X-Windows check should be performed for user
equivalence with SSH
#CV_XCHK_FOR_SSH_ENABLED=TRUE

# To override SSH location
#ORACLE_SRVM_REMOTESHELL=/usr/bin/ssh

# To override SCP location
#ORACLE_SRVM_REMOTECOPY=/usr/bin/scp


2) Display the stage options and stage names that can be used with the cluvfy utility.
$ cluvfy stage -list

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific
options> [-verbose]

Valid stage options and stage names are:


-post hwos : post-check for hardware and operating
system
-pre cfs : pre-check for CFS setup
-post cfs : post-check for CFS setup
-pre crsinst : pre-check for CRS installation
-post crsinst : post-check for CRS installation
-pre hacfg : pre-check for HA configuration
-post hacfg : post-check for HA configuration
-pre acfscfg : pre-check for ACFS Configuration.
-pre dbinst : pre-check for database installation
-post acfscfg : post-check for ACFS Configuration.
-pre dbcfg : pre-check for database configuration
-pre nodeadd : pre-check for node addition.
-post nodeadd : post-check for node addition.
-post nodedel : post-check for node deletion.
3) Perform a postcheck for the ACFS configuration on all nodes.
$ cluvfy stage -post acfscfg -n $ST_NODE_LIST

Performing post-checks for ACFS Configuration

Checking node reachability...
Node reachability check passed from node "host01"

Checking user equivalence...


User equivalence check passed for user "grid"

Task ACFS Integrity check started...

Starting check to see if ASM is running on all cluster
nodes...

ASM Running check passed. ASM is running on all cluster nodes

Starting Disk Groups check to see if at least one Disk Group
configured...
Disk Group Check passed. At least one Disk Group configured

Task ACFS Integrity check passed


UDev attributes check for ACFS started...
UDev attributes check passed for ACFS

Post-check for ACFS Configuration was successful.

4) Display a list of the component names that can be checked with the cluvfy utility.
$ cluvfy comp -list

USAGE:
cluvfy comp <component-name> <component-specific options> [-
verbose]

Valid components are:
nodereach : checks reachability between nodes
nodecon : checks node connectivity
cfs : checks CFS integrity
ssa : checks shared storage accessibility
space : checks space availability
sys : checks minimum system requirements
clu : checks cluster integrity
clumgr : checks cluster manager integrity
ocr : checks OCR integrity
olr : checks OLR integrity
ha : checks HA integrity
crs : checks CRS integrity
nodeapp : checks node applications existence
admprv : checks administrative privileges
peer : compares properties with peers
software : checks software distribution
asm : checks ASM integrity
acfs : checks ACFS integrity
gpnp : checks GPnP integrity
gns : checks GNS integrity
scan : checks SCAN configuration
ohasd : checks OHASD integrity
clocksync : checks Clock Synchronization
vdisk : check Voting Disk Udev settings

5) Display the syntax usage help for the space component check of the cluvfy utility.
$ cluvfy comp space -help

USAGE:
cluvfy comp space [-n <node_list>] -l <storage_location>
-z <disk_space>{B|K|M|G} [-verbose]
<node_list> is the comma separated list of non-domain
qualified nodenames, on which the test should be conducted. If


"all" is specified, then all the nodes in the cluster will be


used for verification.
<storage_location> is the storage path.
<disk_space> is the required disk space, in units of
bytes(B),kilobytes(K),megabytes(M) or gigabytes(G).

DESCRIPTION:
Checks for free disk space at the location provided by '-l'
option on all the nodes in the nodelist. If no '-n' option is
given, local node is used for this check.

6) Verify that on each node of the cluster the /tmp directory has at least 200 MB of
free space in it using the cluvfy utility. Use verbose output.
$ cluvfy comp space -n $ST_NODE_LIST2 -l /tmp -z 200M -verbose

Verifying space availability

Checking space availability...

Check: Space available on "/tmp"
Node Name Available Required Comment
-------- ---------------------- ----------------- --------
host02 29.14GB (3.055228E7KB) 200MB (204800.0KB) passed
host01 29.74GB (3.118448E7KB) 200MB (204800.0KB) passed
host03 29.69GB (3.1132244E7KB) 200MB (204800.0KB) passed

Result: Space availability check passed for "/tmp"

Verification of space availability was successful.
Practices for Lesson 7
In these practices, you will adjust ASM initialization parameters, stop and start instances,
and monitor the status of instances.

Practice 7-1: Administering ASM Instances
In this practice, you adjust initialization parameters in the SPFILE, and stop and start the
ASM instances on local and remote nodes.
1) Disk groups are reconfigured occasionally to move older data to slower disks. Even
though these operations occur at scheduled maintenance times in off-peak hours, the
rebalance operations do not complete before regular operations resume. There is some
performance impact to the regular operations. The setting for the
ASM_POWER_LIMIT initialization parameter determines the speed of the rebalance
operation. Determine the current setting and increase the speed by 2.
a) Open a terminal window on the first node, become the grid user, and set the
environment to use the +ASM1 instance. Connect to the +ASM1 instance as SYS
with the SYSASM privilege. What is the setting for ASM_POWER_LIMIT? ble
[oracle]$ su - grid
Password: oracle << password is not displayed
[grid]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
[grid]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Thu Aug 27 09:57:16
2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Real Application Clusters and Automatic Storage
Management options

SQL> show parameter ASM_POWER_LIMIT

NAME TYPE VALUE
---------------- ----------- ------------------------------
asm_power_limit integer 1
SQL>

b) This installation uses an SPFILE. Use the ALTER SYSTEM command to change
the ASM_POWER_LIMIT for all nodes.
SQL> show parameter SPFILE

NAME TYPE VALUE
---------------- ----------- ------------------------------
spfile string +DATA/cluster01/asmparameterfi
le/registry.253.695669633


SQL> ALTER SYSTEM set ASM_POWER_LIMIT=3 SCOPE=BOTH SID='*';

System altered.

SQL> show parameter ASM_POWER_LIMIT

NAME TYPE VALUE
---------------- ----------- ------------------------------
asm_power_limit integer 3
SQL>

2) You have decided that due to other maintenance operations you want one instance,
+ASM1, to handle the bulk of the rebalance operation, so you will set
ASM_POWER_LIMIT to 1 on instance +ASM2 and 5 on instance +ASM1.
ans
SQL> ALTER SYSTEM set ASM_POWER_LIMIT=1 SCOPE=BOTH
2> SID='+ASM2';

System altered.

SQL> ALTER SYSTEM set ASM_POWER_LIMIT=5 SCOPE=BOTH
2> SID='+ASM1';

System altered

SQL> show parameter ASM_POWER_LIMIT

NAME TYPE VALUE
---------------- ----------- ------------------------------
asm_power_limit integer 5

SQL> column NAME format A16
SQL> column VALUE format 999999
SQL> select inst_id, name, value from GV$PARAMETER
2> where name like 'asm_power_limit'

INST_ID NAME VALUE
---------- ---------------- ------
1 asm_power_limit 5
2 asm_power_limit 1
3 asm_power_limit 3
SQL>
3) Exit the SQL*Plus application.
SQL> exit


4) The ASM instance and all associated applications, ACFS, database, and listener on
one node must be stopped for a maintenance operation to the physical cabling. Stop
all the applications, ASM, and listener associated with +ASM2 using srvctl.
a) In a new terminal window, as the oracle user, stop Enterprise Manager on your
first node.
[oracle]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome
_1 is /u01/app/oracle
[oracle]$ export ORACLE_UNQNAME=orcl
[oracle]$ . /home/oracle/labs/st_env.sh ble
[oracle]$ emctl stop dbconsole
fe r a
Oracle Enterprise Manager 11g Database Control Release
an s
11.2.0.1.0
n - t r
Copyright (c) 1996, 2009 Oracle Corporation. All rights
reserved. a no
https://host01.example.com:1158/em/console/aboutApplication h a s
m
Stopping Oracle Enterprise Manager 11g Database Control ... ) e ฺ
... Stopped. o
ฺc Gu i d
- i n c t
b) Stop the orcl database.
[oracle]$ srvctl stop instance -d orcl -n $ST_NODE1

c) Verify that the database is stopped on ST_NODE1. The pgrep command shows
that no orcl background processes are running.

[oracle]$ pgrep -lf orcl
d) In a terminal window, become the grid OS user, set the oracle environment, set
the class environment, and then stop the ASM instance +ASM1 using the srvctl
stop asm -n $ST_NODE1 command.
[oracle]$ su - grid
Password: oracle << password is not displayed
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
[grid]$ . /home/oracle/labs/st_env.sh
[grid]$ srvctl stop asm -n $ST_NODE1
PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would
require stopping or relocating 'ora.DATA.dg', but the force
option was not specified

e) Attempt to stop the ASM instance on ST_NODE1 using the force option, -f.
[grid]$ srvctl stop asm -n $ST_NODE1 -f


PRCR-1014 : Failed to stop resource ora.asm


PRCR-1065 : Failed to stop resource ora.asm
CRS-2670: Unable to start/relocate 'ora.asm' because
'ora.ACFS.dg' has a stop-time 'hard' dependency on it
CRS-0222: Resource 'ora.ACFS.dg' has dependency error.
CRS-2670: Unable to start/relocate 'ora.ACFS.dg' because
'ora.acfs.dbhome1.acfs' has a stop-time 'hard' dependency on
it
CRS-0245: User doesn't have enough privilege to perform the
operation

f) The ACFS file system is dependent on the disk group, so root must dismount
the file system. Become root and dismount the sharedhome file system. Then
attempt to stop ASM. Why does it still fail? Check the mounted volumes with the
df -h command.
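
The dismount itself is not shown in the transcript that follows. As a sketch
only (the mount point below is an assumption; use the mount point that df -h
actually reports for the shared home on your system), the unmount would look
like:

# umount /u01/app/oracle/acfsmount/11.2.0/sharedhome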
- t r an
[grid]$ su - root
Password: oracle << password is not displayed
[root]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
[root@host01 ~]# . /home/oracle/labs/st_env.sh
[root@host01 ~]# srvctl stop asm -n $ST_NODE1 -f
PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
ORA-15097: cannot SHUTDOWN ASM instance with connected client
CRS-2675: Stop of 'ora.asm' on 'host01' failed
CRS-2675: Stop of 'ora.asm' on 'host01' failed

[root]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
9.7G 4.1G 5.1G 45% /
/dev/xvda1 99M 20M 74M 22% /boot
tmpfs 1.1G 154M 871M 16% /dev/shm
/dev/mapper/VolGroup01-LogVol00
30G 4.5G 24G 16% /u01
192.0.2.10:/mnt/shareddisk01/software/software
60G 40G 17G 71% /mnt/software
[root]#

g) The df -h command shows that the
/u01/app/oracle/acfsmount/sharedhome file system has been
dismounted, but the ASM instance still has clients dependent on it. In this case,
the Oracle clusterware is dependent on the ASM instance because the OCR is in
the ASM disk group.
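
If you want to see these clients for yourself, one way (a side check and a
sketch only, not part of the practice steps) is to query the V$ASM_CLIENT view
from the running ASM instance:

[grid]$ sqlplus / as sysasm
SQL> SELECT group_number, instance_name, db_name, status
  2  FROM v$asm_client;

Any rows returned represent connected clients, such as the clusterware itself,
that prevent a normal shutdown of the ASM instance.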


h) Stop the ASM instance with the crsctl stop cluster -n $ST_NODE1
command. This command will stop all the Cluster services on the node.
[root]# crsctl stop cluster -n $ST_NODE1
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'host01'
CRS-2673: Attempting to stop 'ora.gns' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on
'host01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on
'host01' ble
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'host01'
succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'host01'
CRS-2677: Stop of 'ora.scan3.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'host02'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.host01.vip' on 'host01'
CRS-2677: Stop of 'ora.host01.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.host01.vip' on 'host02'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'host01'
succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'host01'
CRS-2677: Stop of 'ora.scan2.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'host02'
CRS-2677: Stop of 'ora.gns' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'host01'
CRS-2677: Stop of 'ora.gns.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gns.vip' on 'host02'
CRS-2676: Start of 'ora.scan2.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on
'host02'
CRS-2676: Start of 'ora.gns.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'host02'
CRS-2676: Start of 'ora.scan3.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on
'host02'
CRS-2674: Start of 'ora.host01.vip' on 'host02' failed
CRS-2679: Attempting to clean 'ora.host01.vip' on 'host02'
CRS-2674: Start of 'ora.gns' on 'host02' failed
CRS-2679: Attempting to clean 'ora.gns' on 'host02'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'host02'
succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'host02'
succeeded
CRS-2681: Clean of 'ora.gns' on 'host02' succeeded
CRS-2681: Clean of 'ora.host01.vip' on 'host02' succeeded


CRS-2673: Attempting to stop 'ora.ons' on 'host01'


CRS-2673: Attempting to stop 'ora.eons' on 'host01'
CRS-2677: Stop of 'ora.ons' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host01'
CRS-2677: Stop of 'ora.net1.network' on 'host01' succeeded
CRS-2677: Stop of 'ora.eons' on 'host01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources
on 'host01' has completed
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host01'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2673: Attempting to stop 'ora.evmd' on 'host01'
CRS-2673: Attempting to stop 'ora.asm' on 'host01' e
CRS-2677: Stop of 'ora.cssdmonitor' on 'host01' succeeded
r a bl
CRS-2677: Stop of 'ora.evmd' on 'host01' succeeded
s fe
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
- t r an
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01' non
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded s a
CRS-2673: Attempting to stop 'ora.diskmon' on 'host01') h a
o m
CRS-2677: Stop of 'ora.diskmon' on 'host01' succeeded d e ฺ
[root]# c ฺc Gu i
i n
i) Confirm that the listener has been a cs- udent
stopped.
a n @ St
[root]# lsnrctl status
g ฺ t listener
t h is
e
LSNRCTL for Linux: h en Versionu se 11.2.0.1.0 - Production on 28-AUG-
c
2009 08:46:51
( he se to
T an li(c)
Copyright c en1991, 2009, Oracle. All rights reserved.
H eng
h e e Connecting to
C (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
Linux Error: 2: No such file or directory
5) Restart all the cluster resources on node ST_NODE1.
[root]# crsctl start cluster -n $ST_NODE1
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'
CRS-2676: Start of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host01'
CRS-2672: Attempting to start 'ora.diskmon' on 'host01'
CRS-2676: Start of 'ora.diskmon' on 'host01' succeeded
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host01'
CRS-2676: Start of 'ora.ctssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2672: Attempting to start 'ora.evmd' on 'host01'


CRS-2676: Start of 'ora.evmd' on 'host01' succeeded


CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host01'
CRS-2676: Start of 'ora.crsd' on 'host01' succeeded
6) Verify that the resources, database, and Enterprise Manager are restarted on
ST_NODE1. The crsctl status resource -n $ST_NODE1 command shows
that ASM is online and the ACFS volume dbhome1 is mounted.
[root]# crsctl status resource -n $ST_NODE1
NAME=ora.ACFS.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE ble
fe r a
NAME=ora.DATA.dg
an s
TYPE=ora.diskgroup.type
n - t r
o
an
TARGET=ONLINE
STATE=ONLINE s
) ha ฺ
NAME=ora.FRA.dg
ฺ c om uide
TYPE=ora.diskgroup.type
TARGET=ONLINE - i n c tG
STATE=ONLINE a cs uden
a n @ St
g ฺ
NAME=ora.LISTENER.lsnr t t h is
e h en use
TYPE=ora.listener.type

( c he se to
TARGET=ONLINE
STATE=ONLINE
T an licen
eng
NAME=ora.LISTENER_SCAN1.lsnr

e e H TYPE=ora.scan_listener.type
CARDINALITY_ID=1
C h TARGET=ONLINE
STATE=ONLINE

NAME=ora.acfs.dbhome1.acfs
TYPE=ora.acfs.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.eons
TYPE=ora.eons.type
TARGET=ONLINE
STATE=ONLINE


NAME=ora.gsd
TYPE=ora.gsd.type
TARGET=OFFLINE
STATE=OFFLINE

NAME=ora.host01.vip
TYPE=ora.cluster_vip_net1.type
CARDINALITY_ID=1
TARGET=ONLINE
STATE=ONLINE

NAME=ora.net1.network
TYPE=ora.network.type e
TARGET=ONLINE
r a bl
STATE=ONLINE
sfe
- t r an
NAME=ora.ons
o n
TYPE=ora.ons.type
TARGET=ONLINE s an
STATE=ONLINE ) ha ฺ
c o m ide
NAME=ora.registry.acfs
n c ฺ G u
i t
TYPE=ora.registry.acfs.type
a cs- uden
TARGET=ONLINE
a n @ St
STATE=ONLINE
g ฺ t t h is
NAME=ora.scan1.vip
e h en use
h e e to
TYPE=ora.scan_vip.type
c
(
CARDINALITY_ID=1
n ns
T a li
TARGET=ONLINE c e
e n g
STATE=ONLINE

e e H
C h [root]#

7) In a terminal window, on the first node as the oracle user, start the orcl instance
and Enterprise Manager on ST_NODE1.
[oracle]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome
_1 is /u01/app/oracle
[oracle]$ . /home/oracle/labs/st_env.sh
[oracle]$ srvctl start instance -d orcl -n $ST_NODE1
[oracle]$ export ORACLE_UNQNAME=orcl
[oracle]$ emctl start dbconsole
Oracle Enterprise Manager 11g Database Control Release
11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights
reserved.
https://host01.example.com:1158/em/console/aboutApplication


Starting Oracle Enterprise Manager 11g Database Control
............ started.
--------------------------------------------------------------
----
Logs are generated in directory
/u01/app/oracle/acfsmount/11.2.0/sharedhome/host01_orcl/sysman
/log
8) Determine the Enterprise Manager DB control configuration on the cluster. Notice
that dbconsole is running on your first node, an agent is running on your second node,
and no EM components have been started on the third node.
[oracle]$ emca -displayConfig dbcontrol -cluster
ble
STARTED EMCA at Aug 28, 2009 9:27:22 AM
fe r a
EM Configuration Assistant, Version 11.2.0.0.2 Production
an s
Copyright (c) 2003, 2005, Oracle. All rights reserved.
n - t r
o
Enter the following information:
s an
Database unique name: orcl
) ha ฺ
Service name: orcl
Do you wish to continue? [yes(Y)/no(N)]: Y ฺ c om uide
- i n c tG
Aug 28, 2009 9:27:31 AM oracle.sysman.emcp.EMConfig perform
a cs uden
INFO: This operation is being logged at
n @ St
/u01/app/oracle/cfgtoollogs/emca/orcl/emca_2009_08_28_09_27_21
a
.log.
g ฺ t t h is
e h en use
Aug 28, 2009 9:27:36 AM oracle.sysman.emcp.EMDBPostConfig
showClusterDBCAgentMessage
INFO:
**************** Current Configuration ****************
INSTANCE NODE DBCONTROL_UPLOAD_HOST
---------- ---------- ---------------------
orcl host01 host01.example.com
orcl host02 host01.example.com

Enterprise Manager configuration completed successfully


FINISHED EMCA at Aug 28, 2009 9:27:36 AM

9) In a terminal window, become the root user. Set the grid home environment and set
the practice environment variables. Stop all the cluster resources on ST_NODE2.
[oracle]$ su - root
Password: oracle << password is not displayed
[root]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
[root]# . ~oracle/labs/st_env.sh
[root]# crsctl stop cluster -n $ST_NODE2


CRS-2673: Attempting to stop 'ora.crsd' on 'host02'


CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'host02'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on
'host02'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'host02'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'host02'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host02'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'host02'
succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'host02'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.host02.vip' on 'host02'
CRS-2677: Stop of 'ora.scan1.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'host01'
CRS-2677: Stop of 'ora.host02.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.host02.vip' on 'host01'
CRS-2677: Stop of 'ora.registry.acfs' on 'host02' succeeded
CRS-2676: Start of 'ora.host02.vip' on 'host01' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on
'host01'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'host01'
succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.acfs.dbhome1.acfs' on
'host02'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'host02'
CRS-2677: Stop of 'ora.DATA.dg' on 'host02' succeeded
CRS-2677: Stop of 'ora.acfs.dbhome1.acfs' on 'host02'
succeeded
CRS-2673: Attempting to stop 'ora.ACFS.dg' on 'host02'
CRS-2677: Stop of 'ora.ACFS.dg' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
CRS-2677: Stop of 'ora.asm' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host02'
CRS-2673: Attempting to stop 'ora.eons' on 'host02'
CRS-2677: Stop of 'ora.ons' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host02'
CRS-2677: Stop of 'ora.net1.network' on 'host02' succeeded
CRS-2677: Stop of 'ora.eons' on 'host02' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources
on 'host02' has completed
CRS-2677: Stop of 'ora.crsd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host02'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host02'
CRS-2673: Attempting to stop 'ora.evmd' on 'host02'
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host02' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host02' succeeded


CRS-2677: Stop of 'ora.asm' on 'host02' succeeded


CRS-2673: Attempting to stop 'ora.cssd' on 'host02'
CRS-2677: Stop of 'ora.cssd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'host02'
CRS-2677: Stop of 'ora.diskmon' on 'host02' succeeded
[root]#
10) What is the status of the orcl database on your cluster?
[root]# srvctl status database -d orcl
Instance orcl1 is running on node gr7212
Instance orcl2 is not running on node gr7213

11) As the root user on your first node, start the cluster on your second node
(ST_NODE2).
s fe
[root@host01 ~]# crsctl start cluster -n $ST_NODE2
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host02'
CRS-2672: Attempting to start 'ora.cssd' on 'host02'
CRS-2676: Start of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.diskmon' on 'host02'
CRS-2676: Start of 'ora.diskmon' on 'host02' succeeded
CRS-2676: Start of 'ora.cssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host02'
CRS-2676: Start of 'ora.ctssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host02'
CRS-2672: Attempting to start 'ora.asm' on 'host02'
CRS-2676: Start of 'ora.evmd' on 'host02' succeeded
CRS-2676: Start of 'ora.asm' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host02'
CRS-2676: Start of 'ora.crsd' on 'host02' succeeded
[root]#

12) Did the orcl instance on ST_NODE2 start? Use the srvctl status database -d
orcl command as any of the users (oracle, grid, root) as long as the oracle
environment is set for that user.
Note: The database may take a couple of minutes to restart. If the orcl2 instance is
not running, try the status command again, until instance orcl2 is running.
[root]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
[root]# srvctl status database -d orcl
Instance orcl1 is running on node host01
Instance orcl2 is running on node host02

13) Did Enterprise Manager restart on your second node (ST_NODE2)? Verify. As the
oracle user, connect to your second node and run emctl status dbconsole.
[root]# exit

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 108


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 7-1: Administering ASM Instances (continued)

logout

[oracle]$ ssh $ST_NODE2


Last login: Fri Aug 28 09:41:21 2009 from host01.example.com

[oracle]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome is
/u01/app/oracle

[oracle]$ export ORACLE_UNQNAME=orcl


[oracle]$ emctl status dbconsole
ble
Oracle Enterprise Manager 11g Database Control Release
11.2.0.1.0 fe r a
Copyright (c) 1996, 2009 Oracle Corporation. All rights an s
reserved. n - t r
o
Oracle Enterprise Manager 11g is not running. s an
https://host02.example.com:1158/em/console/aboutApplication

) ha ฺ
--------------------------------------------------------------
----
ฺ c om uide
Logs are generated in directory
- i n c tG
cs uden
/u01/app/oracle/acfsmount/11.2.0/sharedhome/host02_orcl/sysman
a
/log
a n @ St
[oracle]$ emctl start dbconsole
g ฺ t t h is
Oracle Enterprise Manager 11g Database Control Release
11.2.0.1.0
e h en use
( c he se to
Copyright (c) 1996, 2009 Oracle Corporation. All rights

an licen
reserved.
T
https://host02.example.com:1158/em/console/aboutApplication
g
H en Oracle Enterprise Manager 11g is not running.
--------------------------------------------------------------
h e e ----
C Logs are generated in directory
/u01/app/oracle/acfsmount/11.2.0/sharedhome/host02_orcl/sysman
/log

14) Configure database control to monitor the ASM instance on your first node. Open a
terminal window on your first node Prepare the oracle environment then use lsnrctl
status command to determine the listener IP address. Use the second IP address listed.

[oracle]$ . oraenv
ORACLE_SID = [+ASM1] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome
_1 is /u01/app/oracle
[oracle]$ lsnrctl status

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 109


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 7-1: Administering ASM Instances (continued)

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 30-SEP-


2009 15:37:15

Copyright (c) 1991, 2009, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version
11.2.0.1.0 - Production
Start Date 30-SEP-2009 14:42:04
Uptime 0 days 0 hr. 55 min. 11 sec e
Trace Level off
r a bl
Security ON: Local OS Authentication
s fe
SNMP OFF
- t r an
Listener Parameter File
/u01/app/11.2.0/grid/network/admin/listener.ora no n
Listener Log File s a
) h
/u01/app/grid/diag/tnslsnr/gr7212/listener/alert/log.xml
a
Listening Endpoints Summary... o m d e ฺ
c ฺc Gu
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
i
s - i n n t
c
a tud e
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.196.180.12)(PORT=
a n @ S
1521)))
ฺ t h i s
h e ng se t
=1521))) hee t o u
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.196.180.213)(PORT

Services(cSummary...
n se
a n e
g T
Service ic"+ASM1",
"+ASM"
l has 1 instance(s).

H en Instance
service...
status READY, has 1 handler(s) for this

h e e Service "orcl.example.com" has 1 instance(s).


C Instance "orcl1", status READY, has 1 handler(s) for this
service...
Service "orclXDB.example.com" has 1 instance(s).
Instance "orcl1", status READY, has 1 handler(s) for this
service...
The command completed successfully

Step Screen/Page Description Choices or Values


a. Cluster Database Home In the Instances section, click the link for
+ASM1.example.com.
b. Automatic Storage Management: If the instance appears to be down, click
Instance name Home Configure.
c. Monitoring Configuration In the Machine Name field enter the IP
address that you discovered in Step 3
(preceding).

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 110


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 7-1: Administering ASM Instances (continued)

Step Screen/Page Description Choices or Values


Click Test Connection.
d. Monitoring Configuration An information message displays “Test
Successful.”
Click OK.
e. Automatic Storage Management: Click the browser refresh button.
Instance name Home You may have to refresh several times
before EM reports that ASM instance is up.

Verify the configuration by starting Enterprise Manager DBcontrol. Use the URL:
https://first_ node_ name.example.com:1158/em

ble
fe r a
an s
n - t r
o
s an
) ha ฺ
ฺ c om uide
- i n c tG
a cs uden
a n @ St
g ฺ t t h is
e h en use
( c he se to
T an licen
en g
e e H
C h

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 111


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practices for Lesson 8
In these practices, you will add, configure, and remove disk groups, manage rebalance
operations, and monitor disk and disk group IO statistics.

ble
fe r a
an s
n - t r
o
s an
) ha ฺ
ฺ c om uide
- i n c tG
a cs uden
a n @ St
g ฺ t t h is
e h en use
( c he se to
T an licen
en g
e e H
C h

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 112


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 8-1: Administering ASM Disk Groups
In this practice, you will change the configuration of a disk group, and control the
resulting rebalance operations. You will determine the connected clients to the existing
disk groups, and perform disk group checks.
Because the asmadmin group has only one member, grid, open a terminal window and
become the grid OS user for this practice.
You will use several tools, such as EM, ASMCMD, and ASMCA, to perform the same
operations.
1) The FRA disk group has more disks allocated than is needed. So that one disk will be
dropped from the FRA disk group, remove ASMDISK11 from the disk group. Use
ASMCMD.
a) As the grid OS user on your first node, confirm that the FRA disk group is ble
mounted. If it is not mounted, mount it on your first and second nodes.
fe r a
an s
[grid]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1 n - t r
o
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
s an
/u01/app/grid
[grid]$ . /home/oracle/labs/st_env.sh ) ha ฺ
[grid]$ asmcmd
ฺ c om uide
ASMCMD> lsdg
- i n c tG
State Type cs uden
Rebal Sector Block
a AU Total_MB

a n @ St
Free_MB Req_mir_free_MB Usable_file_MB Offline_disks
Voting_files Name
g ฺ t t h is
MOUNTED EXTERN N
3706 e h en use
0
512 4096 1048576
3706 0
9996

N ACFS/
( c he se to
T
5280an licen
MOUNTED NORMAL N
1198
512 4096 1048576
2041 0
9998

en g
N DATA/
e e H ASMCMD> mount FRA
C h ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB
Free_MB Req_mir_free_MB Usable_file_MB Offline_disks
Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 9996
3706 0 3706 0
N ACFS/
MOUNTED NORMAL N 512 4096 1048576 9998
5280 1198 2041 0
N DATA/
MOUNTED EXTERN N 512 4096 1048576 7497
7088 0 7088 0
N FRA/
ASMCMD> exit

[grid]$ crsctl status resource ora.FRA.dg -t


--------------------------------------------------------------
------------------

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 113


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 8-1: Administering ASM Disk Groups (continued)

NAME TARGET STATE SERVER


STATE_DETAILS
--------------------------------------------------------------
------------------
Local Resources
--------------------------------------------------------------
------------------
ora.FRA.dg
ONLINE ONLINE gr7212
OFFLINE OFFLINE gr7213
ONLINE ONLINE gr7214

[grid]$ ssh $ST_NODE2


ble
Last login: Tue Sep 29 16:56:00 2009 from gr7212.example.com
[grid@host02]$ . oraenv fe r a
ORACLE_SID = [grid] ? +ASM2 an s
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is n - t r
o
/u01/app/grid
grid@host02]$ asmcmd mount FRA s an
grid@host02]$ exit ) ha ฺ
grid]$ crsctl status resource ora.FRA.dg -t
ฺ c om uide
- i n c tG
--------------------------------------------------------------
------------------
a cs uden
NAME TARGET STATE
a n @ St SERVER
STATE_DETAILS
g ฺ t t h is
--------------------------------------------------------------
h en use
------------------
e
( c he se to
Local Resources

an licen
--------------------------------------------------------------
T
------------------
g
H en ora.FRA.dg
ONLINE ONLINE gr7212
h e e ONLINE ONLINE gr7213
C ONLINE ONLINE gr7214

b) Use the chdg command with inline XML. Note that the command is typed
without a return, all on one line.
chdg <chdg name="FRA" power="5"> <drop> <dsk
name="ASMDISK11"/> </drop> </chdg>
[grid]$ asmcmd
ASMCMD> chdg <chdg name="FRA" power="5"> <drop> <dsk
name="ASMDISK11"/> </drop> </chdg>
ASMCMD>
2) In preparation for adding another disk to the DATA disk group, perform a disk check
to verify the disk group metadata. Use the check disk group command chkdg.
ASMCMD> chkdg DATA

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 114


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 8-1: Administering ASM Disk Groups (continued)

3) Add another disk (ASMDISK12) to the DATA disk group and remove a disk
(ASMDISK04), but the rebalance operation must wait until a quiet time and then
proceed as quickly as possible. Use Enterprise Manager Database Control.
Step Screen/Page Description Choices or Values
a. Automatic Storage Management: Click Disk Groups tab.
Instance name Home
b. Automatic Storage Management Enter:
Login Username: SYS
Password: oracle_4U
Click Login.
c. Automatic Storage Management: Click the DATA link.
Instance name Disk Groups ble
d. Disk Group: DATA Click ADD. fe r a
e. Add Disks Change Rebalance Power to 0 an s
n - t r
(this prevents any rebalance).
a no
Select Disks: ORCL:ASMDISK12
Click Show SQL. h a s
f. Show SQL m ) ฺ
The SQL statement that will be executed is
e
shown:
o
ฺc Gu i d
- i n c t DATA ADD DISK
c
ALTER
s
a REBALANCE e n
DISKGROUP
d POWER 0SIZE 2500 M
'ORCL:ASMDISK12'
@ t u
S Return.
n isClick
ฺ t a h
g. Add Disks h e ng se t Click OK.
h. e tmessage
Add DiskshineProgress o u
(c nse
displayed
n
i. a
TDisk Group:
l e
icDATA Select ASMDISK04.
n g Click Remove.
ee He j. Confirmation Click Advanced Options.
C h Set Rebalance Power to 0.
Click Yes.

4) Perform the pending rebalance operation on disk group DATA.


Step Screen/Page Description Choices or Values
a. Disk Group: DATA Click the Automatic Storage Management:
+ASM1_host01.example.com locator link at
the top of page.
b. Automatic Storage Management: Select the DATA disk group.
+ASM1_host01.example.com Click Rebalance.
c. Select ASM Instances Select both ASM instances.
Click OK.
d. Confirmation Click Show Advanced Options.
Set Rebalance Power to 11.
Click Yes.

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 115


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 8-1: Administering ASM Disk Groups (continued)

Step Screen/Page Description Choices or Values


e. Automatic Storage Management: Update Message:
+ASM1_host01.example.com Disk Group DATA rebalance request has
been submitted.
Click the DATA disk group.
f. Disk Group: DATA Observe the Used(%) column for the various
disk in the DATA disk group.
Notice the status column (ASMDISK04 is
marked as DROPPING).
Click the browser’s Refresh button.
g. Disk Group: DATA Notice the change in the Used(%) column.
Refresh again. After a few minutes the
ble
ASMDISK04 will not appear in the list on
fe r a
Member disks.
ans
h. Exit EM.
n - t r
a no
5) Determine which clients are using ASM and ACFS. This is helpfulh a s to determine
which clients are preventing the shutdown of ASM. Usem ) ASMCMD
the e ฺ lsct
command. Notice that the expect database clientcisฺc o i d
shown anduthe ACFS client called
- i
asmvol, but the cluster +ASM1 instance issalso n listed t G the OCR, and voting
because
n
@ ac Open
disk files are in the ASM disk group DATA.
t u daeterminal window on your first
node as the grid user. For ASMCMD
t a S the grid infrastructure environment
n toiswork,
must be set.

ng se t h
h e
e to u
[grid]$ . oraenv
h e
ORACLE_SID
n (c = [+ASM1]
n e ? +ASM1
sfor
TheT aOracle
l e
ic [grid]$ORACLE_HOME=/u01/app/11.2.0/grid
base is
en g
/u01/app/grid asmcmd lsct

e e H DB_Name Status Software_Version Compatible_version

C h Instance_Name Disk_Group
+ASM CONNECTED 11.2.0.1.0 11.2.0.1.0
+ASM1 DATA
asmvol CONNECTED 11.2.0.1.0 11.2.0.1.0
+ASM1 ACFS
orcl CONNECTED 11.2.0.1.0 11.2.0.0.0
orcl1 DATA

6) Examine the disk activity involving the DATA disk group.


a) Examine the statistics for all the disk groups and then check the individual disks
in the DATA disk group using EM.
Step Screen/Page Description Choices or Values
a. Cluster Database: In the Instances section, click the
orcl.example.com: Home +ASM1.example.com link.
b. Automatic Storage Management: Click the Performance tab.
+ASM1_host01.example.com:

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 116


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 8-1: Administering ASM Disk Groups (continued)

Step Screen/Page Description Choices or Values


Home
c. Automatic Storage Management: Notice the graphs of the various metrics:
+ASM1_host01.example.com: Response Time, Throughput, Operations per
Performance Second, and Operation Size. Almost all the
data operations involved the DATA disk
group.
In the Additional Monitoring Links section,
click Disk Group I/O Cumulative Statistics.
d. Automatic Storage Management Enter:
Login Username: SYS
Password: oracle_4U
ble
Click Login.
fe r a
e. Disk Group I/O Cumulative
an
Notice that almost all of the I/O calls ares
Statistics against the DATA disk group. n - t r
no
Expand the DATA disk group to show the
a
individual disk statistics.
h a s
f. Disk Group I/O Cumulative )
Examine the number of reads and writes to
m e ฺ
Statistics o i d
each disk in the DATA disk group. The I/O
ฺc Gu
- i n c
to the disks will not be balanced because
t
c s e n
ASMDISK12 was just added.
a tud
g.
a n @ Exit EM.
S
ฺ t h i s
h e ng se t
b) Examine the disk
e o u using lsdsk --statistics command.
e I/O tstatistics
h
c nse
[grid]$ (asmcmd
T a n ce --statistics
ASMCMD>
g i
lsdsk
l
H en Reads Write Read_Errs Write_Errs Read_time Write_Time
h e e Bytes_Read Bytes_Written
6893 10103 0
Voting_File Path
0 324.968 74.8
C 664005632 300485632 Y ORCL:ASMDISK01
10273 11096 0 0 338.432 84.596
743536128 346663936 Y ORCL:ASMDISK02
6345 10971 0 0 283.62 59.944
599138304 246011392 Y ORCL:ASMDISK03
153 0 0 0 1 0
1568768 0 N ORCL:ASMDISK05
124 4 0 0 1.056 .016
1490944 16384 N ORCL:ASMDISK06
97 0 0 0 1.1 0
397312 0 N ORCL:ASMDISK07
99 0 0 0 1.152 0
405504 0 N ORCL:ASMDISK08
897 1027 0 0 3.792 5.224
48025600 118779904 N ORCL:ASMDISK09
447 631 0 0 4.644 3.8
23568384 92385280 N ORCL:ASMDISK10

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 117


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 8-1: Administering ASM Disk Groups (continued)

2050 7008 0 0 6.436 199.012


29081600 1234462720 N ORCL:ASMDISK12
c) Examine the disk statistics bytes and time for the DATA disk group with the
iostat –t –G DATA command.
ASMCMD> iostat -t -G DATA
Group_Name Dsk_Name Reads Writes Read_Time Write_Time
DATA ASMDISK01 664112128 301286912 324.972 74.908
DATA ASMDISK02 744060416 346926080 338.444 84.624
DATA ASMDISK03 600055808 246699520 283.656 60.032
DATA ASMDISK12 30916608 1235182080 6.452 199.116
ASMCMD> exit
ble
7) Run the following ASMCMD commands to return the DATA and FRA disk groups
fe r a
to the configuration at the beginning of the practice.
t r a ns
[grid]$ asmcmd chdg /home/oracle/labs/less_08/reset_DATA.xml
o n -
an
[grid]$ asmcmd chdg /home/oracle/labs/less_08/reset_FRA.xml
s
) ha ฺ
ฺ c om uide
- i n c tG
a cs uden
a n @ St
g ฺ t t h is
e h en use
( c he se to
T an licen
en g
e e H
C h

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 118


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practices for Lesson 9
In this practice, you will administer ASM files, directories, and templates.

ble
fe r a
an s
n - t r
o
s an
) ha ฺ
ฺ c om uide
- i n c tG
a cs uden
a n @ St
g ฺ t t h is
e h en use
( c he se to
T an licen
en g
e e H
C h

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 119


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 9-1: Administering ASM Files, Directories, and
Templates
In this practice, you use several tools to navigate the ASM file hierarchy, manage aliases,
manage templates, and move files to different disk regions.
1) ASM is designed to hold database files in a hierarchical structure. Navigate the orcl
database files with ASMCMD.
[grid]$ asmcmd
ASMCMD> ls
ACFS/
DATA/
FRA/
ASMCMD> ls DATA
ble
ORCL/
cluster02/ fe r a
ASMCMD> ls DATA/ORCL an s
CONTROLFILE/ n - t r
o
DATAFILE/
ONLINELOG/ s an
PARAMETERFILE/ ) ha ฺ
TEMPFILE/
ฺ c om uide
spfileorcl.ora
- i n c tG
ASMCMD> ls -l DATA/ORCL/*
a csTimeuden
Type Redund Striped
n @ S t Sys Name

g ฺ
+DATA/ORCL/CONTROLFILE/:
ta this
CONTROLFILE
e h en uFINE
HIGH se AUG 31 13:00:00 Y
e t o
(ch nse
Current.263.695918587

T n
a lice
+DATA/ORCL/DATAFILE/:
g
en DATAFILE MIRROR COARSE AUG 31 13:00:00 Y

e e H EXAMPLE.264.695918623
C h DATAFILE MIRROR COARSE SEP 01 04:00:00 Y
SYSAUX.260.695918435
DATAFILE MIRROR COARSE AUG 31 13:00:00 Y
SYSTEM.258.695918433
DATAFILE MIRROR COARSE AUG 31 13:00:00 Y
UNDOTBS1.259.695918437
DATAFILE MIRROR COARSE AUG 31 13:00:00 Y
UNDOTBS2.257.695918879
DATAFILE MIRROR COARSE AUG 31 13:00:00 Y
USERS.268.695918437

+DATA/ORCL/ONLINELOG/:
ONLINELOG MIRROR COARSE AUG 31 13:00:00 Y
group_1.262.695918591
ONLINELOG MIRROR COARSE AUG 31 13:00:00 Y
group_2.261.695918595
ONLINELOG MIRROR COARSE AUG 31 13:00:00 Y
group_3.265.695918947

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 120


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 9-1: Administering ASM Files, Directories, and
Templates (continued)

ONLINELOG MIRROR COARSE AUG 31 13:00:00 Y


group_4.256.695918951

+DATA/ORCL/PARAMETERFILE/:
PARAMETERFILE MIRROR COARSE AUG 31 22:00:00 Y
spfile.266.695918955

+DATA/ORCL/TEMPFILE/:
TEMPFILE MIRROR COARSE AUG 31 13:00:00 Y
TEMP.267.695918609
N
spfileorcl.ora =>
ble
+DATA/ORCL/PARAMETERFILE/spfile.266.695918955
fe r a
ASMCMD>
t r a ns
2) The default structure may not be the most useful for some sites. Create a setnof
o - aliases
for directories and files to match a file system. Use EM.
s an
Step Screen/Page Description Choices or Values ) ha ฺ
a. Cluster Database: In the Instances
ฺ c omsection,
u
e the
idclick
orcl.example.com
- i n c
+ASM1_node_name.example.com
t G link.
b. Automatic Storage Management:
a s thedDisk
cClick n
e Groups tab.
+ASM1_node_name.example.com
n@ is S t u
Home ฺ t a
g e th Enter:
c. n
Automatic StorageeManagement
h o us
Login e e
h se t Username: SYS
n ( c n Password: oracle_4U
T a l ic e Click Login.
g
n Automatic Storage Management: Click the DATA disk group link.
d.
ee He +ASM1_host01.example.com
C h DiskGroups
e. Disk Group: DATA General Click the Files tab.
f. Disk Group: DATA Files Select ORCL.
Click Create Directory.
g. Create Directory Enter:
New Directory: oradata
Click Show SQL.
h. Show SQL The SQL that will be executed is shown
ALTER DISKGROUP DATA ADD DIRECTORY
'+DATA/ORCL/oradata'
Click Return.
i. Create Directory Click OK.
j. Disk Group: DATA Files Expand the ORCL folder.
Expand the DATAFILE folder.
Select EXAMPLE.nnn.NNNNNN.
Click Create Alias.

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 121


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 9-1: Administering ASM Files, Directories, and
Templates (continued)

Step Screen/Page Description Choices or Values


k. Create Alias Enter:
User Alias: +DATA/ORCL/oradata/example_01.dbf
Click Show SQL.
l. Show SQL The SQL that will be executed is shown
ALTER DISKGROUP DATA ADD ALIAS
'+DATA/ORCL/oradata/example_01.dbf' FOR
'+DATA/ORCL/DATAFILE/EXAMPLE.264.698859675'
Click Return.
m. Create Alias Click OK.
n. Disk Group: DATA: Files Click the EXAMPLE.nnn.NNNNN link. e
o. EXAMPLE.nnn.NNNNNN: r a
Notice the properties that are displayed in the General bl
Properties section. s fe
Click OK. - t r an
p. Disk Group: DATA Files Click the example_01.dbf link. non
q. example_01.dbf: Properties s a
Note that the properties include System Name.
Click OK. ) h a
o m d e ฺ
r. Exit EM.
c ฺc Gu i
s - i n n t
c e
a tud and display the properties.
3) Using ASMCMD, navigate to view example_01.dbf
a
Using the system name, find tthen @
alias. Use S ls –a command.
the

g et h i s
[grid]$ asmcmd hen
e e t o us
ASMCMD> ls +DATA/ORCL/oradata/*
(ch nse
example_01.dbf
n
ASMCMD>
g c-le +DATA/ORCL/oradata/*
Ta lsliRedund
H en Type Striped Time
N
Sys Name

h e e example_01.dbf => +DATA/ORCL/DATAFILE/EXAMPLE.264.695918623


C ASMCMD> ls -a +DATA/ORCL/DATAFILE/example*
+DATA/ORCL/oradata/example_01.dbf => EXAMPLE.264.695918623
4) Create a new tablespace. Name the file using a full name. Use EM.
Step Screen/Page Description Choices or Values
a. Cluster Database: Click the Server tab.
orcl.example.com Home
b. Cluster Database: In the Storage section, click the Tablespaces
orcl.example.com Server link.
c. Tablespaces Click Create.
d. Create Tablespace: General Enter:
Name: XYZ
In the Datafiles section, click Add.
e. Add Datafile Enter:
Alias directory: +DATA/ORCL/oradata
Alias name: XYZ_01.dbf

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 122


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 9-1: Administering ASM Files, Directories, and
Templates (continued)

Step Screen/Page Description Choices or Values


Click Continue.
f. Create Tablespace: General Click OK.
g. Tablespaces

5) Create another data file for the XYZ tablespace. Allow the file to receive a default
name. Did both the files get system-assigned names?
Step Screen/Page Description Choices or Values
a. Tablespaces Select XYZ tablespace.
Click Edit.
ble
b. Edit Tablespace: XYZ In the Datafiles section, click Add.
fe r a
c. Add Datafile Click Continue. an s
d. Edit Tablespace: XYZ Click Show SQL. n - t r
o
e. Show SQL
an
Note: The SQL provides only the disk group
name. s
)
Click Return.ha ฺ
f. Edit Tablespace: XYZ c om uide
Click Apply.

g. Edit Tablespace: XYZ - i n c tG
In the Datafiles section, note the names of
a cs uden
the two files. One name was specified in the
a n @ St previous practice step, xyz_01.dbf, and the
g ฺ t t h is other is a system-assigned name.
e h en use Click the Database tab.
h.
( c he se to
Cluster Database: In the Instances section, click the

i. g T
an cen Management:
orcl.example.com
AutomaticliStorage
+ASM1_node_name.example.com link.
Click the Disk Groups tab.
n
ee He +ASM1_node_name.example.com
Home
C h j. Automatic Storage Management Enter:
Login Username: SYS
Password: oracle_4U
Click Login.
k. Automatic Storage Management: Click the DATA disk group link.
+ASM1_host01.example.com
DiskGroups
l. Disk Group: DATA General Click the Files tab.
m. Disk Group: DATA Files Expand the ORCL folder.
Expand the DATAFILE folder.
Note that there are two system-named files
associated with the XYZ tablespace.
Expand the oradata folder.
Click the xyz_01.dbf link.
n. XYZ_01.dbf: Properties Observe that the xyz_01.dbf file is an
alias to a file with a system name.

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 123


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 9-1: Administering ASM Files, Directories, and
Templates (continued)

Step Screen/Page Description Choices or Values


Click OK.
o. Disk Group: DATA Files

6) Move the files for the XYZ tablespace to the hot region of the DATA disk group.
Step Screen/Page Description Choices or Values
a. Disk Group: DATA: Files Click the General tab.
b. Disk Group: DATA: General In the Advanced Attributes section, click
Edit.
c. Edit Advanced Attributes for Disk Change Database Compatibility to ble
Group: DATA 11.2.0.0.
fe r a
Show SQL.
an s
d. Show SQL Notice the SET ATTRIBUTE clause. n - t r
Click Return.
a no
e. Edit Advanced Attributes for Disk Click OK.
h a s
Group: DATA
m ) e ฺ
f. Disk Group: DATA General Click theฺFileso
c Gu tab. i d
g. Disk Group: DATA: Files Expand
- i n cthe t folder.
ORCL
c s
Expand
a tud e
the
n
oradata folder.
a @
n isSelectS the xyz_01.dbf file.

g ett h Click Edit File.
h. e n
h o us
Edit File: XYZ_01.dbf In the Regions section, select the Primary
e e
h se t Hot and Mirror Hot options.
( c
i. g T
an licen
Show SQL
Click Show SQL.
Note that the SQL statement uses the alias
n
ee He name and attributes clause.
Click Return.
C h j. Edit File: XYZ_01.dbf Click Apply.
k. Disk Group: DATA: Files Expand the DATAFILES folder.
Select the XYZ file that is not in the HOT
region.
Click Edit File.
l. Edit File:XYZ.nnn.NNNNN In the Regions section, select Primary Hot
and Mirror Hot options.
Click Show SQL.
m. Show SQL Note that the SQL statement uses the system
name and attributes clause.
Click Return.
n. Edit File: XYZ.nnn.NNNNNN Click Apply.
o. Disk Group: DATA: Files

7) Create a template that changes the default placement of files to the hot region.

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 124


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 9-1: Administering ASM Files, Directories, and
Templates (continued)

Step Screen/Page Description Choices or Values


a. Disk Group: DATA: Files Click the Templates tab.
b. Disk Group: DATA: Templates Click Create.
c. Create Template Enter:
Template Name: HOT_FILES
In the Regions section, select the Primary
Hot and Mirror Hot options.
Click Show SQL.
d. Show SQL Note the SQL statement attributes clause.
Click Return.
e. Create Template Click OK. ble
f. Disk Group: DATA: Templates Note the attributes of the HOT_FILES fe r a
template compared with the DATAFILE an s
template. n - t r
a no
8) Add another data file to the XYZ tablespace using the template.h a s
Was the file placed in
the HOT region? m ) e ฺ
o
ฺcValues u i d
Step Screen/Page Description i
Choicesn c or t G
a. Disk Group: DATA: Templates acClick s- thedDatabase
e n tab.
b. Cluster Database: n @ Click S u Server tab.
tthe
orcl.example.com Home g ฺt a t hi s
c. h
Cluster Database:
n
e us e In the Storage section, click the Tablespaces
e e
h se
orcl.example.com t
Servero link.
( c
d.
T an licen
Tablespaces Select the XYZ tablespace.
Click Edit.
n g
ee Hee. Edit Tablespace: XYZ
f. Add Datafile
In the Datafiles section, click Add.
Change Template to HOT_FILES.
C h Click Continue.
g. Edit Tablespace: XYZ Click Show SQL.
h. Show SQL Note the data file specification
"+DATA(HOT_FILES)."
Click Return.
i. Edit Tablespace: XYZ Click Apply.
Click the Database tab.
j. Cluster Database: In the Instances section, click the
orcl.example.com +ASM1_node_name.example.com link.
k. Automatic Storage Management: Click the Disk Groups tab.
+ASM1_node_name.example.com
Home
l. Automatic Storage Management Enter:
Login Username: SYS
Password: oracle_4U
Click Login.

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 125


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 9-1: Administering ASM Files, Directories, and
Templates (continued)

Step Screen/Page Description Choices or Values


m. Automatic Storage Management: Click the DATA disk group link.
+ASM1_host01.example.com
DiskGroups
n. Disk Group: DATA General Click the Files tab.
o. Disk Group: DATA Files Expand the ORCL folder.
Expand the DATAFILE folder.
Notice that there are three system-named
files associated with the XYZ tablespace.
All have the HOT and HOT MIRROR
attributes set.
ble
fe r a
9) Create a table in the XYZ tablespace. ans
n - t r
no
a) In a terminal window, as the oracle OS user, use the following command
a
h a s
connect to the ORCL database (the password for SYS is oracle_4U):
sqlplus sys@orcl AS SYSDBA m ) e ฺ
o
ฺc Gu i d
i n c
[oracle]$ . oraenv
ORACLE_SID = [orcl] ? orcl ac
s- dent
The Oracle base for
a n @ Stu
g ฺ t h is
ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome
t is
/u01/app/oracle en e
e e
[oracle]$ sqlplus
t o us as sysdba
h sys@orcl
n ( ch nse
SQL*Plus:
g
2009
ce 11.2.0.1.0 Production on Tue Sep 1 12:23:43
Ta liRelease
H en
h e e Copyright (c) 1982, 2009, Oracle. All rights reserved.
C
Enter password:oracle_4U << password is not displayed

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>
b) Create a large table in the XYZ tablespace called CUST_COPY by executing the
cr_cust_copy.sql script. This script makes a copy of the SH.CUSTOMERS
table into the XYZ tablespace.
SQL> @/home/oracle/labs/less_09/cr_cust_copy.sql
SQL>

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 126


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 9-1: Administering ASM Files, Directories, and
Templates (continued)

SQL> CREATE TABLE Cust_copy TABLESPACE XYZ AS


2 SELECT * FROM SH.CUSTOMERS;

Table created.

SQL>
10) Query the new table. Select all the rows to force some read activity with the
command: SELECT * FROM CUST_COPY. Us the SET PAGESIZE 300 command to
speed up the display processing
SQL> SET PAGESIZE 300
SQL> SELECT * FROM CUST_COPY; ble
… /* rows removed */ fe r a
100055 Andrew Clark an s
F n - t r
o
74673
1978 Married
Duncan 51402 s an
77 Cumberland Avenue

SC ) ha ฺ
52722 52790
ฺ c om uide
260-755-4130 J: 190,000 - 249,999
- i n c tG
11000
Clark@company.com a cs uden
Customer total 52772
01-JAN-98 A a n @ St
g ฺ t t h is
e h en use
e e to
55500 rowshselected.
c
n ( ns
T
SQL>a li c e
e n g
e e H
C h 11) View the I/O statistics by region. Use EM to view the statistics, and then repeat using
ASMCMD.
a) View the I/O statistics by region with Enterprise Manager.
Step Screen/Page Description Choices or Values
a. Disk Group: DATA Files Click the Performance tab.
b. Disk Group: DATA: Performance In the Disk Group I/O Cumulative Statistics
section, observe the values for Hot Reads
and Hot Writes.

b) In a terminal window, on your first node as the grid user, set the oracle
environment for the +ASM1 instance. View the I/O statistics by region using
ASMCMD.
[grid@host01]$ . oraenv
ORACLE_SID = [grid] ? +ASM1

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 127


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 9-1: Administering ASM Files, Directories, and
Templates (continued)

The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is


/u01/app/grid
[grid]$ asmcmd
ASMCMD> iostat --io --region -G DATA
Group_Name Dsk_Name Reads Writes Cold_Reads Cold_Writes
Hot_Reads Hot_Writes
DATA ASMDISK01 11071 83680 5345 70653
20 263
DATA ASMDISK02 13056 73855 5411 58344
24 226
DATA ASMDISK03 46914 48823 36155 29176
31 367
ble
DATA ASMDISK04 204358 71893 196787 56579
fe r a
32 362
ans
ASMCMD> exit
n - t r
a no
h a s
12) Drop the tablespaces and templates created in this practice.) ฺ
o m i d e
c ฺc G
a) As the oracle OS user, connect to the orcl database,
n anduthen use the
- i n t
drop_XYZ.sh script to drop the XYZ
a cstablespace.
d e
[oracle]$ sqlplus sys@orcl n @as sysdba
S tu
g ฺt a t hi s
SQL*Plus: Release
h n e
e 11.2.0.1.0
s Production on Tue Sep 1 13:29:05
e o u
2009
( c he se t
an li(c)
Copyright
T c en1982, 2009, Oracle. All rights reserved.
e ngEnter password: oracle_4U << password is not displayed
ee H
C h Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @/home/oracle/labs/less_09/drop_XYZ.sql

SQL>
SQL> DROP TABLESPACE XYZ INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

SQL>
SQL> EXIT;
Disconnected from Oracle Database 11g Enterprise Edition
Release 11.2.0.1.0 - Production

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 128


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 9-1: Administering ASM Files, Directories, and
Templates (continued)

With the Partitioning, Real Application Clusters, Automatic


Storage Management, OLAP,
Data Mining and Real Application Testing options
[oracle]$

ble
fe r a
ans
n - t r
o
s an
) ha ฺ
ฺ c om uide
- i n c tG
a cs uden
a n @ St
g ฺ t t h is
e h en use
( c he se to
T an licen
en g
e e H
C h

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 129


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 9-1: Administering ASM Files, Directories, and
Templates (continued)

b) As the grid OS user, use asmcmd to remove the HOT_FILES template.


[grid]$ asmcmd
ASMCMD> rmtmpl -G DATA HOT_FILES
ASMCMD> exit
[grid@host01 ~]$

ble
fe r a
ans
n - t r
o
s an
) ha ฺ
ฺ c om uide
- i n c tG
a cs uden
a n @ St
g ฺ t t h is
e h en use
( c he se to
T an licen
en g
e e H
C h

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 130


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practices for Lesson 10
In this practice you will create, register, and mount an ACFS file system. In addition, you
will manage ACFS snapshots.

ble
fe r a
an s
n - t r
o
s an
) ha ฺ
ฺ c om uide
- i n c tG
a cs uden
a n @ St
g ฺ t t h is
e h en use
( c he se to
T an licen
en g
e e H
C h

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 131


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-1: Managing ACFS
In this practice, you will create, register, and mount an ACFS file system for general use.
You will see the acfs modules that are loaded for ACFS. You create, use, and manage
ACFS snapshots.
1) Open a terminal window on your first node and become the root user. Use the
lsmod command to list the currently loaded modules. Use the grep command to
display only the modes that have the ora string in them. Note the first three modules
in the list below. These modules are required to enable ADVM and ACFS. The
oracleasm module is loaded to enable ASMlib management of the ASM disks.
Check all three nodes. If the ACFS drivers are not loaded, load them with the
Grid_home/bin/acfsload script. The drivers are loaded and registered
automatically when an ACFS volume is created with ASMCA. In this case, the e
configuration must be manually extended to ST_NODE3. r a bl
s fe
[oracle]$ su – root
- t r an
Password: oracle << password is not displayed
o n
/* on ST_NODE1 */ s an
) ha ฺ
[root@host01]# lsmod | grep ora
c o m ide
oracleacfs 787588 2
n c ฺ G u
oracleadvm 177792 6s- i n t
e
226784ac2 oracleacfs,oracleadvm
d
oracleoks
oracleasm a n @ 1 Stu
46356
g ฺ t t h is
e h en use
[root@host01]#
( c to
he s. e/home/oracle/labs/st_env.sh
T n second
/*aon
l i c ennode */
H eng[root@host01]# ssh $ST_NODE2 lsmod| grep ora
root@host02's password: oracle << password is not displayed
h e e oracleacfs 787588 3
C oracleadvm 177792 7
oracleoks 226784 2 oracleacfs,oracleadvm
oracleasm 46356 1

/* on ST_NODE3 */

[root@host01 ~]# ssh $ST_NODE3 lsmod | grep ora


root@host03's password: oracle << password is not displayed
oracleacfs 787588 0
oracleadvm 177792 0
oracleoks 226784 2 oracleacfs,oracleadvm
oracleasm 46356 1
[root@host01 ~]# exit

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 132


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-1: Managing ACFS (continued)

2) Scenario: Your database application creates a number of image files stored as


BFILES and external tables. These must be stored on a shared resource. An ACFS
file system meets that requirement. Create an ASM volume and the ACFS file
system. The ACFS volume should be 3 GB on the ACFS disk group. The mount point
should be /u01/app/oracle/asfcmounts/images. These operations can be
done with ASMCA as in Practice 2, ASMCMD, Enterprise Manager, or SQL*Plus.
The ASMCMD solution is shown here.
a) Open another terminal window as the grid OS user on your first node.
[grid@host01]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
ble
/u01/app/grid
fe r a
b) Start ASMCMD as the grid OS user.
an s
n - t r
[grid@host01]$ asmcmd o
ASMCMD>
s an
c) Create a volume using the volcreate command. ) h
a
o m d e ฺ
ASMCMD> volcreate -G ACFS -s 3G IMAGES c ฺc Gu i
ASMCMD> s - i n n t
c
a tud command. e
d) Find the volume device name.
a n @
Use the volinfo
S
ASMCMD> volinfo -Gng
ฺ t
ACFS -a t h i s
Diskgroup Name: e h e
ACFS u se
( c he se to
a n Volume
c e n Name: DBHOME_1
g T li
Volume Device: /dev/asm/dbhome_1-407
e n State: ENABLED
e e H Size (MB): 6144
C h Resize Unit (MB): 256
Redundancy: UNPROT
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath:
/u01/app/oracle/acfsmount/11.2.0/sharedhome

Volume Name: IMAGES


Volume Device: /dev/asm/images-407
State: ENABLED
Size (MB): 3072
Resize Unit (MB): 256
Redundancy: UNPROT
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 133


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-1: Managing ACFS (continued)

ASMCMD>
e) Create an ACFS file system in the IMAGES volume—using the terminal window
that you used in step 1—as the root user. When using the mkfs –t acfs
command, the volume device must be supplied. Use the volume device name that
you found in the previous step.
[root@host01 ~]# mkfs -t acfs /dev/asm/images-407
mkfs.acfs: version = 11.2.0.1.0.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/images-407
mkfs.acfs: volume size = 3221225472
mkfs.acfs: Format complete.
[root@host01 ~]# ble
fe r a
f) Mount the volume—using the terminal window that you used in step 1—as the
an s
t r
root user, create the mount point directory, and mount the volume. The volume
n -
no
device is used again in the mount command. Enter the mount command all on one
a
line. Repeat these commands on the second and third nodes of the cluster as the
s
root user. ) h a
o m d e ฺ
/* On first node */ c ฺc Gu i
s - i n n t
c
a tud e
[root@host01 ~]# mkdir -p /u01/app/oracle/acfsmount/images
[root@host01 ~]# mount -t
a n @ S
acfs /dev/asm/images-407
ฺ t
/u01/app/oracle/acfsmount/imagesh i s
h e ng se t
node */o u
/* On secondee
( c h e t
s
n ssh $ST_NODE2
an lice~]#
[root@host01
T
eng
root@host02's password: oracle << password not displayed
H [root@host02 ~]# mkdir /u01/app/oracle/acfsmount/images
h e e [root@host02 ~]# mount -t acfs /dev/asm/images-407
C /u01/app/oracle/acfsmount/images
[root@host02 ~]# exit
logout

Connection to host02 closed.


[root@host01 ~]#

/* On third node*/

[root@host01]# ssh $ST_NODE3


root@host03's password: oracle << password not displayed
[root@host03]# mkdir -p /u01/app/oracle/acfsmount/images
[root@host03]# mount -t acfs /dev/asm/images-407
/u01/app/oracle/acfsmount/images
[root@host03]# exit

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 134


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-1: Managing ACFS (continued)

g) Verify that the volume is mounted.


/* On first node */

[root@host01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
9.7G 2.6G 6.6G 28% /
/dev/xvda1 99M 20M 74M 22% /boot
tmpfs 1.1G 482M 543M 48% /dev/shm
/dev/mapper/VolGroup01-LogVol00
30G 4.7G 24G 17% /u01
192.0.2.10:/mnt/shareddisk01/software/software
ble
60G 34G 23G 60% /mnt/software
fe r a
/dev/asm/dbhome_1-407
an s
6.0G 4.8G 1.3G 79%
/u01/app/oracle/acfsmount/11.2.0/sharedhome n - t r
/dev/asm/images-407 3.0G 43M 3.0G 2%
a no
/u01/app/oracle/acfsmount/images
h a s
m ) e ฺ
/* On second node */ o
ฺc Gu i d
i n c
- -h ent
[root@host01 ~]# ssh $ST_NODE2 s df
c
a tud
/u01/app/oracle/acfsmount/images
a n @ S
root@host02's password:
ฺ t oracle
h i s << password not displayed
Filesystem
h e ng 3.0G
Size t Used Avail Use% Mounted on
se 73M 3.0G 3%
/dev/asm/images-407
e u
( c he se to
/u01/app/oracle/acfsmount/images
n
an lice~]#
T
[root@host01

H eng
h e e /* On third node */
C
[root@host01 ~]# ssh $ST_NODE3 df -h
/u01/app/oracle/acfsmount/images
root@host03's password: oracle << password not displayed

Filesystem Size Used Avail Use% Mounted on


/dev/asm/images-407 3.0G 109M 2.9G 4%
/u01/app/oracle/acfsmount/images

[root@host01 ~]#

h) Register the volume and mount point. As the root user, run the command: (Use
your actual device name.)
acfsutil registry –a /dev/asm/images-nnn
/u01/app/oracle/acfsmount/images
[root@host01]# acfsutil registry -a /dev/asm/images-407
/u01/app/oracle/acfsmount/images

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 135


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-1: Managing ACFS (continued)

acfsutil registry: mount point


/u01/app/oracle/acfsmount/images successfully added to Oracle
Registry

i) As the root user, view the registry status of the volume with the acfsutil
registry –l command.
[root@host01]# acfsutil registry -l
Device : /dev/asm/images-407 : Mount Point :
/u01/app/oracle/acfsmount/images : Options : none : Nodes :
all : Disk Group : ACFS : Volume : IMAGES
[root@host01]#
3) An ACFS file system can be resized, and it will automatically resize the volume, if
ble
there is sufficient space in the disk group. The images file system is near capacity.
fe r a
Increase the file system by 512 MB to 3.5 GB. As the root user, use the command
an s
acfsutil size +512M /u01/app/oracle/acfsmount/images.
n - t r
[root@host01]# acfsutil size +512M a no
/u01/app/oracle/acfsmount/images h a s
m
acfsutil size: new file system size: 3758096384 (3584MB) ) e ฺ
[root@host01 ~]# o
ฺc Gu i d
i n c
- afterethetresize operation with
s
4) As the grid user, check the size of the volume
c
a tud n
asmcmd volinfo.
a n @ S
ฺ t h i s
[grid@host03 ~]$ asmcmd
Diskgroup Name:he
ng volinfo
ACFS use
t -G ACFS IMAGES
h e e to
n (cVolume n e IMAGES
sName:
T a l e
ic Device:
Volume /dev/asm/images-407
en g State: ENABLED
e e H Size (MB): 4096
C h Resize Unit (MB): 256
Redundancy: UNPROT
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /u01/app/oracle/acfsmount/images

[grid@host03 ~]$

5) As root, source the environment script /home/oracle/labs/st_env.sh. The


IMAGES file system holds the image files for the orcl database owned by the
oracle user. As the root user, change the permissions on the mount point so that
the oracle user will own the file system on all three nodes.
/* on ST_NODE1 */
[root@host01]# . /home/oracle/labs/st_env.sh

[root@host01]# chown oracle:dba


/u01/app/oracle/acfsmount/images

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 136


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-1: Managing ACFS (continued)

/* On second node */

[root@host01 ~]# ssh $ST_NODE2 chown oracle:dba


/u01/app/oracle/acfsmount/images
root@host02's password: oracle << password is not displayed

/* on third node */

[root@host01]# ssh $ST_NODE3 chown oracle:dba


/u01/app/oracle/acfsmount/images
root@host03's password: oracle << password is not displayed

ble
[root@host01 ~]#
fe r a
6) As the oracle user, transfer a set of images to ans
/u01/app/oracle/acfsmount/images. Unzip the images in n - t r
/home/oracle/labs/less_10/images.zip to the IMAGES file system.
a no
[oracle@host01]$ cd /home/oracle/labs/less_10 h a s
[oracle@host01]$ unzip images.zip -d m ) e ฺ
o
ฺc Gu i d
/u01/app/oracle/acfsmount/images
- i n c t
Archive: images.zip
c s
a tud e n
creating: /u01/app/oracle/acfsmount/images/gridInstall/
inflating:
a n @ S

g et t h i s
/u01/app/oracle/acfsmount/images/gridInstall/asm.gif
inflating: hen
e e t o us
/u01/app/oracle/acfsmount/images/gridInstall/bullet2.gif
n (ch nse
Ta removed
… Lines
g l ice …
H en inflating:
h e e /u01/app/oracle/acfsmount/images/gridInstall/view_image.gif
C extracting:
/u01/app/oracle/acfsmount/images/gridInstall/white_spacer.gif
[oracle@host01 less_10]$
7) Verify that the files have been extracted.
[oracle@host01]$ ls -R /u01/app/oracle/acfsmount/images
/u01/app/oracle/acfsmount/images:
gridInstall lost+found

/u01/app/oracle/acfsmount/images/gridInstall:
asm.gif t20108.gif t30104.gif t30119d.gif
bullet2.gif t20109a.gif t30105.gif t30119.gif
bullet.gif t20109b.gif t30106.gif t30120a.gif
divider.gif t20110.gif t30107.gif t30120b.gif
gradient.gif t20111a.gif t30108a.gif t30121d.gif
MoveAllButton.gif t20111b.gif t30108.gif t30123a.gif
MoveButton.gif t20111c.gif t30109.gif t30123b.gif

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 137


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-1: Managing ACFS (continued)

rpm-oracleasm.gif t20111.gif t30110.gif t30123c.gif


show_me.gif t20112.gif t30111.gif t30201.gif
t10101.gif t20113.gif t30112a.gif t30202.gif
t10102.gif t20113h.gif t30112.gif t30203.gif
t10103.gif t20114c.gif t30113a.gif t30204a.gif
t10201.gif t20114login.gif t30113b.gif t30204.gif
t10202.gif t20114server.gif t30114a.gif t30205.gif
t10203.gif t20117add.gif t30114b.gif t30206.gif
t10204.gif t20117crtbs.gif t30114.gif t30207.gif
t10205.gif t20117emctl.gif t30115a.gif t30208.gif
t20101.gif t20117tbs.gif t30115.gif t40101.gif
t20102.gif t20119asm.gif t30116a.gif t40102.gif
t20103.gif t2017emctl.gif t30116b.gif t40104.gif e
t20104.gif t30101a.gif t30116c.gif t40105a.gif
r a bl
t20105.gif t30101b.gif t30116d.gif t40105b.gif
s fe
t20106.gif t30101c.gif t30118b.gif Thumbs.db
- t r an
t20107a.gif
view_image.gif
t30102.gif t30119b.gif n no
t20107.gif t30103.gif t30119c.gif s a
white_spacer.gif ) h a
o m
ls: /u01/app/oracle/acfsmount/images/lost+found: Permission d e ฺ
denied c ฺc Gu i
[oracle@host01]$ s - i n n t
c
a tud e
8) Create a snapshot of the IMAGES
a n @ S Use the ASMCMD utility as the root
file system.
ฺ t h i s
h e ng se t
user to execute the command:
/sbin/acfsutil
e e snapt o ucreate snap_001 \
h
(c ns/sbin/acfsutil
e
/u01/app/oracle/acfsmount/images
a n
[root@host01]#
T c e snap create snap_001
l i
engacfsutil snap create: Snapshot operation is complete.
/u01/app/oracle/acfsmount/images

e e H
C h
9) Find the .SNAP directory and explore the entries. How much space does the
gridInstall directory tree use? How much space does the
.ACFS/snaps/snap_001/rgidInstall directory tree use?
[root@host01]# cd /u01/app/oracle/acfsmount/images
[root@host01]# ls –la
total 92
drwxrwx--- 5 oracle dba 4096 Sep 7 23:31 .
drwxr-xr-x 4 root root 4096 Sep 7 11:53 ..
drwxr-xr-x 5 root root 4096 Sep 7 15:04 .ACFS
drwxr-xr-x 2 oracle oinstall 12288 May 6 16:30 gridInstall
drwx------ 2 root root 65536 Sep 7 15:04 lost+found
[root@host01]# du -h gridInstall
2.0M gridInstall
[root@host01]# ls .ACFS
repl snaps
[root@host01 images]# ls .ACFS/snaps

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 138


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-1: Managing ACFS (continued)

snap_001
[root@host01]# ls .ACFS/snaps/snap_001
gridInstall lost+found
[root@host01]# du -h .ACFS/snaps/snap_001/gridInstall
2.0M .ACFS/snaps/snap_001/gridInstall

10) Delete the asm.gif file from the IMAGES file system.
[root@host01]# rm gridInstall/asm.gif
rm: remove regular file `gridInstall/asm.gif'? y

11) Create another snapshot of the IMAGES file system.


[root@host01]# /sbin/acfsutil snap create snap_002
/u01/app/oracle/acfsmount/images ble
acfsutil snap create: Snapshot operation is complete. fe r a
an s
n - t r
o
12) How much space is being used by the snapshots and the files that are stored in the
an
IMAGES file system. Use the acfsutil info command to find this information.
s
[root@host01]# /sbin/acfsutil info fs ) ha ฺ
/u01/app/oracle/acfsmount/images ฺ c om uide
/u01/app/oracle/acfsmount/images-inc t G
ACFS Version: 11.2.0.1.0.0cs
a tude n
flags: @
MountPoint,Available
n is S 2009
mount time: Monฺta
Sep 7 12:02:42
volumes: e n1g e th
total size:
e e t o us
h 4294967296
totalch e 4112216064
n ( free: n s
T
primary
a label:
l i c e
volume: /dev/asm/images-407

H eng flags: Primary,Available,ADVM

h e e on-disk version: 39.0


C allocation unit:
major, minor:
4096
252, 208386
size: 4294967296
free: 4112216064
ADVM diskgroup ACFS
ADVM resize increment: 268435456
ADVM redundancy: unprotected
ADVM stripe columns: 4
ADVM stripe width: 131072
number of snapshots: 2
snapshot space usage: 2265088
[root@host01 images]#

13) Restore the asm.gif file to the file system from the snapshot. This can be done with
OS commands or from Enterprise Manager. This solution uses the OS commands.

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 139


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-1: Managing ACFS (continued)

a) The snapshot is a sparse file representation of the file system, so you can browse
the snapshot as if it were a full file system. All the OS file commands are
functional. Find the asm.gif file in the snapshot. The find command ID is
shown in this solution. Perform this operation as the root user.
[root]$ cd /u01/app/oracle/acfsmount/images/gridInstall
[root]$ find . –name asm.gif
find: ./.ACFS/.fileid: Permission denied
find: ./.ACFS/repl: Permission denied
find: ./.ACFS/snaps/snap_001/.ACFS/.fileid: Permission denied
find: ./.ACFS/snaps/snap_001/.ACFS/repl: Permission denied
find: ./.ACFS/snaps/snap_001/.ACFS/snaps: Permission denied
find: ./.ACFS/snaps/snap_001/lost+found: Permission denied e
./.ACFS/snaps/snap_001/gridInstall/asm.gif
r a bl
find: ./.ACFS/snaps/snap_002/.ACFS/.fileid: Permission denied
sfe
find: ./.ACFS/snaps/snap_002/.ACFS/repl: Permission denied
- t r an
find: ./.ACFS/snaps/snap_002/.ACFS/snaps: Permission denied
find: ./.ACFS/snaps/snap_002/lost+found: Permission denied no n
find: ./lost+found: Permission denied s a
h a
) to theeoriginal
b) Restore the asm.gif file by copying from the snapshoto m d ฺ location.
c ฺc Gu i
s - i n
[root]$ cp ./.ACFS/snaps/snap_001/gridInstall/asm.gif
n t
./gridInstall/asm.gif c
a tud e
a n @ S nodes. This command must be
14) Dismount the Images file system
ฺ t from all
h i sthree
executed by the roote
h ngIf thesdirectory
user. e t is busy, use lsof to find the user that is
h e e opentoandustop that session.
holding the directory

n (c nsumount
[root@host01]# e /u01/app/oracle/acfsmount/images
a ic e
T /u01/app/oracle/acfsmount/images:
umount: l device is busy
n g
umount: /u01/app/oracle/acfsmount/images: device is busy
ee He
C h [root@host01]# lsof +d /u01/app/oracle/acfsmount/images
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
lsof 5770 root cwd DIR 252,5634 4096 2
/u01/app/oracle/acfsmount/images
lsof 5771 root cwd DIR 252,5634 4096 2
/u01/app/oracle/acfsmount/images
bash 23971 root cwd DIR 252,5634 4096 2
/u01/app/oracle/acfsmount/images
/* change directories the root session then continue */

[root@host01]# cd

[root@host01]# umount /u01/app/oracle/acfsmount/images

[root@host01 ~]# ssh $ST_NODE2 umount


/u01/app/oracle/acfsmount/images
root@host02's password: oracle << password is not displayed

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 140


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-1: Managing ACFS (continued)

[root@host01 ~]# ssh $ST_NODE3 umount


/u01/app/oracle/acfsmount/images
root@host03's password: oracle << password is not displayed

[root@host01 ~]#

ble
fe r a
ans
n - t r
o
s an
) ha ฺ
ฺ c om uide
- i n c tG
a cs uden
a n @ St
g ฺ t t h is
e h en use
( c he se to
T an licen
en g
e e H
C h

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 141


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-1: Managing ACFS (continued)

15) Remove the IMAGES ACFS file system and volume using Enterprise Manager.
Step Screen/Page Description Choices or Values
a. Enter the URL in the browser https//host01:1158/em
b. Login Name: SYS
Password: oracle_4U
Connect as SYSDBA
Click Login.
c. Cluster Database: In the Instances section, click the link for
orcl.example.com Home +ASM1_host01.example.com.
d. Automatic Storage Management: Click the ASM Cluster File System tab.
+ASM1_host01.example.com
ble
Home
fe r a
e. Automatic Storage Management Username: SYS
an s
Login Password: oracle_4U n - t r
Click Login.
a no
f. Automatic Storage Management: s
Select /u01/app/oracle/acfsmount/images.
h a
+ASM1_host01.example.com Click Deregister.
m ) e ฺ
ASM Cluster File System o
ฺcgridGu i d
g. ASM Cluster File System Host i n c
Username:
s- deoracle
Credentials: host01.example.com acPassword:
nt
n @ Click S u
tContinue.
h. Deregister ASM Cluster g t a
ฺFile t s
hiClick OK.
h n
e us
System: /dev/asm/images-407 e
i. Automatic
e
hStorage to
e eManagement: Click the Disk Groups tab.
( c n s
T anCluster
+ASM1_host01.example.com
ASM l i c eFile System
n g
ee Hej. Automatic Storage Management: Click the link for the ACFS disk group.
+ASM1_host01.example.com Disk
C h Groups
k. Disk Group: ACFS General Click the Volumes tab.
l. Disk Group: ACFS Volumes Select the IMAGES volume.
Click Delete.
m. Confirmation Click Yes.
n. Disk Group: ACFS General
o. Disk Group: ACFS Click Logout.
Close the browser.

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 142


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 10-2: Uninstalling RAC Database
In this practice, you will uninstall the RAC database, leaving the clusterware installed on
three nodes.
1) Open a terminal window as the oracle OS user on your first node and set the
environment with the oraenv and st_env.sh commands. Enter the name of the
database when prompted.
[oracle@host01]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome is
/u01/app/oracle
[oracle] $ . /home/oracle/labs/st_env.sh

2) Stop Enterprise Manager on your first and second nodes. It was not started on your
third node.

[oracle@host01]$ export ORACLE_UNQNAME=orcl
[oracle@host01]$ emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
https://host01.example.com:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
all attemps to stop oc4j failed... now trying to kill 9
... Stopped.

[oracle@host01]$ ssh $ST_NODE2
Last login: Thu Sep 24 20:56:37 2009 from host01.example.com

[oracle@host02]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome is
/u01/app/oracle
[oracle@host02]$ export ORACLE_UNQNAME=orcl
[oracle@host02]$ emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
https://host02.example.com:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
Cannot determine Oracle Enterprise Manager 11g Database
Control process.
/u01/app/oracle/acfsmount/11.2.0/sharedhome/host02_orcl/emctl.
pid does not exist.
[oracle@host02 ~]$ exit
logout
Connection to host02 closed.

[oracle@host01 ~]$
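If you are not sure whether Database Control is running on a node before trying to stop
it, the same emctl utility can report its status first (ORACLE_UNQNAME must be set,
just as for the stop command):

[oracle@host01]$ export ORACLE_UNQNAME=orcl
[oracle@host01]$ emctl status dbconsole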

3) Change directories to the oracle user's home directory: /home/oracle.


[oracle@host01]$ cd /home/oracle
4) Start the deinstall tool. The tool automatically discovers the database parameters.
When the next prompt offers to change the discovered parameters, answer n.
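If you want to preview what the tool discovered before letting it deconfigure anything,
a dry run is possible (a sketch; the -checkonly flag is taken from the 11.2 deinstall
utility and performs the checks without the clean phase). The practice itself runs the
tool in silent mode:

[oracle@host01]$ $ORACLE_HOME/deinstall/deinstall -checkonly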
[oracle@host01]$ $ORACLE_HOME/deinstall/deinstall -silent
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################## CHECK OPERATION START
########################
Install check configuration START

Checking for existence of the Oracle home location
/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1
Oracle Home type selected for de-install is: RACDB
Oracle Base selected for de-install is: /u01/app/oracle
Checking for existence of central inventory location
/u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
/u01/app/11.2.0/grid
The following nodes are part of this cluster: host01,host02

Install check configuration END

Network Configuration check config START

Network de-configuration trace file location:
/u01/app/oraInventory/logs/netdc_check5746757283308326616.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location:
/u01/app/oraInventory/logs/databasedc_check7510048152035885516
.log

###### For Database 'orcl' ######
RAC Database
The nodes on which this database has instances: [host01,
host02]
The instance names: [orcl1, orcl2]
The local instance name on node: orcl1
The diagnostic destination location of the database:
/u01/app/oracle/diag/rdbms/orcl
Storage type used by the Database: ASM
Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location:
/u01/app/oraInventory/logs/emcadc_check.log

Checking configuration for database orcl
Enterprise Manager Configuration Assistant END

Oracle Configuration Manager check START
OCM check log file location :
/u01/app/oraInventory/logs//ocm_check7494.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END
#########################

####################### CHECK OPERATION SUMMARY
#######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home exists are:
(Please input nodes seperated by ",", eg:
node1,node2,...)host01,host02
Oracle Home selected for de-install is:
/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1
Inventory Location where the Oracle home registered is:
/u01/app/oraInventory
The following databases were selected for de-configuration :
orcl
Database unique name : orcl
Storage used : ASM
Will update the Enterprise Manager configuration for the
following database(s): orcl
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
host01 : Oracle Home exists with CCR directory, but CCR is not
configured
host02 : Oracle Home exists with CCR directory, but CCR is not
configured
CCR check is finished

A log of this session will be written to:
'/u01/app/oraInventory/logs/deinstall_deconfig2009-09-28_08-
24-07-AM.out'
Any error messages from this session will be written to:
'/u01/app/oraInventory/logs/deinstall_deconfig2009-09-28_08-
24-07-AM.err'

######################## CLEAN OPERATION START
########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location:
/u01/app/oraInventory/logs/emcadc_clean.log

Updating Enterprise Manager Database Control configuration for
database orcl
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location:
/u01/app/oraInventory/logs/databasedc_clean8536757532603850049
.log

Database Clean Configuration START orcl
This operation may take few minutes.
Database Clean Configuration END orcl

Network Configuration clean config START

Network de-configuration trace file location:
/u01/app/oraInventory/logs/netdc_clean4830516361423529044.log

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location :
/u01/app/oraInventory/logs//ocm_clean7494.log
Oracle Configuration Manager clean END
Oracle Universal Installer clean START

Detach Oracle home
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' from
the central inventory on the local node : Done

Delete directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' on the
local node : Done
<<< following Failed messages are expected >>>
Failed to delete the directory
'/u01/app/oracle/acfsmount/images'. The directory is in use.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/.fileid'.
The directory is in use.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/repl'. The
directory is in use.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/snaps'. The
directory is in use.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS'. The
directory is not empty.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/lost+found'. The
directory is in use.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome'. The directory
is not empty.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0'. The directory is not
empty.
Failed to delete the directory '/u01/app/oracle/acfsmount'.
The directory is not empty.
Failed to delete the directory
'/u01/app/oracle/acfsmount/images'. The directory is in use.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/.fileid'.
The directory is in use.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/repl'. The
directory is in use.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/snaps'. The
directory is in use.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS'. The
directory is not empty.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/lost+found'. The
directory is in use.
Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome'. The directory
is not empty.

Failed to delete the directory
'/u01/app/oracle/acfsmount/11.2.0'. The directory is not
empty.
Failed to delete the directory '/u01/app/oracle/acfsmount'.
The directory is not empty.
Failed to delete the directory '/u01/app/oracle'. The
directory is not empty.
Delete directory '/u01/app/oracle' on the local node : Failed
<<<<

Detach Oracle home
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' from
the central inventory on the remote nodes 'host02' : Done

Delete directory '/u01/app/oracle' on the remote nodes
'host02' : Failed <<<<

The directory '/u01/app/oracle' could not be deleted on the
nodes 'host02'.
Oracle Universal Installer cleanup completed with errors.
Oracle Universal Installer clean END

Oracle install clean START

Clean install operation removing temporary directory
'/tmp/install' on node 'host01'
Clean install operation removing temporary directory
'/tmp/install' on node 'host02'

Oracle install clean END
######################### CLEAN OPERATION END
#########################

####################### CLEAN OPERATION SUMMARY
#######################
Updated Enterprise Manager configuration for database orcl
Successfully de-configured the following database instances :
orcl
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR
configuration
CCR clean is finished
Successfully detached Oracle home
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' from
the central inventory on the local node.

Successfully deleted directory
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' on the
local node.
Failed to delete directory '/u01/app/oracle' on the local
node.
Successfully detached Oracle home
'/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' from
the central inventory on the remote nodes 'host02'.
Failed to delete directory '/u01/app/oracle' on the remote
nodes 'host02'.
Oracle Universal Installer cleanup completed with errors.

Oracle install successfully cleaned up the temporary
directories.

##############################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END ###########

[oracle@host01 ~]$
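The "Failed to delete" messages above are expected because the shared Oracle home
still sits on a mounted ACFS file system, so its directories are busy. A quick way to
confirm which ACFS file systems are still mounted before continuing (standard Linux
commands):

[root@host01 ~]# mount | grep acfs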
o m d e ฺ
5) The deinstall tool does not completely remove the ORACLE_HOME directory
when ORACLE_HOME is placed in the ACFS file system. Using ASMCA, remove
the ACFS file system.
a) Open a window as the grid user to your first node and set the Grid Infrastructure
home with oraenv.
e h en use
$ su - grid
Password: oracle <<< password not displayed
[grid]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is
/u01/app/grid
[grid]$

b) Start ASMCA with the asmca command as the grid user.


[grid]$ asmca

c) Dismount the ACFS file system.

Step  Screen/Page Description             Choices or Values
a.    ASM Configuration Assistant         Click the ASM Cluster File Systems tab.
b.    ASM Cluster File System             Right-click the row with Volume
                                          DBHOME_1.
c.    In the popup-menu                   Select Show Dismount Command.
d.    Dismount ACFS Command               Follow instructions
                                          Start a window as the root user.
                                          Copy the command in the ASMCA window
                                          to the terminal window and execute.
e.    Dismount ACFS Command               Click Close.
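For reference, the generated dismount command is typically a umount of the shared
home mount point, run as root on each node where it is mounted. An illustrative
sketch only; always paste the exact command that ASMCA displays:

[root@host01 ~]# /bin/umount /u01/app/oracle/acfsmount/11.2.0/sharedhome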

d) Remove the ORACLE_HOME directory.

Step  Screen/Page Description             Choices or Values
a.    ASM Cluster File System             Right-click the row with Volume
                                          DBHOME_1.
b.    In the popup-menu                   Select Delete.
c.    Confirm Delete                      Click Yes.
d.    ASM Cluster File System:            Follow instructions
      Deletion                            Start a window as the root user.
                                          Copy the command in the ASMCA window
                                          to the terminal window and execute.
                                          Exit the terminal window.
e.    ASM Cluster File System:            Click Close.
      Deletion
f.    ASM Cluster File System             Click Exit.
g.    ASM Configuration Assistant         Click Yes.
6) Close all terminal windows.
Practices for Lesson 11
In this practice, you will install the Oracle Database 11g Release 2 software and create a
three-node cluster database.

Practice 11-1: Installing the Oracle Database Software
In this practice, you will install the Oracle Database 11g Release 2 software on three
nodes.
1) Use the VNC session on ST_NODE1 on display :1. Click the VNC icon on your
desktop, enter Server ST_NODE1:1, substituting the name of your first node for
ST_NODE1. Enter the password oracle.
2) In the VNC session, confirm that you are connected as the oracle user. Change
directory to the staged software location provided by your instructor and start the OUI
by executing the runInstaller command from the
/staged_software_location/database/Disk1 directory.
$ id
uid=501(oracle) gid=502(oinstall)
groups=501(dba),502(oinstall),503(oper),505(asmdba)
$ cd /stage/database/Disk1
$ ./runInstaller
a) On the Configure Security Updates page, deselect the “I wish to receive security
updates” check box and click Next. Note: If this were a production machine or
part of an important test environment, you might consider this option. A dialog
box appears making sure that you want to remain uninformed about the updates.
Click Yes to close the dialog box and continue.
b) On the Select Installation Option page, select the “Install database software only”
option and click Next.
c) On the Node Selection page, select Real Application Clusters database
installation. Select all three of your assigned hosts and click Next. If the ssh
connectivity test fails, click the SSH Connectivity button. Enter the password
oracle for the oracle user. Select the Reuse public and private keys check box
and click the Setup button. After the setup completes, click Next to continue.
d) On the Select Product Languages page, promote all languages from the Available
Languages window to the Selected Languages window on the right-hand side.
Click Next to continue.
e) On the Select Database Edition page, select Enterprise Edition and click Next to
continue.
f) On the Specify Installation Location page, the Oracle Base should be
/u01/app/oracle and the Software Location should be
/u01/app/oracle/product/11.2.0/dbhome_1. Do not install the
database to a shared location. Click Next to continue.
g) On the Privileged Operating System Groups page, select dba as the Database
Administrator Group and oper as the Database Operator Group. Click Next to
continue.

h) When the prerequisites have successfully been checked on the Perform
Prerequisites Check page, click Next to continue. If any checks fail, click the Fix
and Check Again button. Run the scripts on the cluster nodes as root as directed,
and then click Next.
i) Check the information on the Summary page and click Finish.
j) The Install Progress screen allows you to monitor the progression of the
installation.
k) When the files have been copied to all nodes, the Execute Configuration Scripts
window is presented. Execute the
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh script on ble
all three nodes. fe r a
# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory:
[/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite
it? (y/n)
[n]: n
The file "oraenv" already exists in /usr/local/bin. Overwrite
it? (y/n)
[n]: n
The file "coraenv" already exists in /usr/local/bin.
Overwrite it? (y/n)
[n]: n

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
#

Run this script on the remaining two nodes before continuing.


l) When you have run the root.sh scripts on all three nodes, click the OK button
to close the Execute Configuration Scripts window.
m) Click the Close button on the Finish page to complete the installation and exit the
Installer.

Practice 11-2: Creating a RAC Database
In this practice, you will create a three-node RAC database.
1) From the oracle VNC session that you used to install the database software in the
preceding practice, change directory to
/u01/app/oracle/product/11.2.0/dbhome_1/bin and launch the
Database Configuration Assistant by executing the dbca command.
$ id
uid=500(oracle) gid=501(oinstall)
groups=500(dba),501(oinstall),502(oper),505(asmdba)

$ cd /u01/app/oracle/product/11.2.0/dbhome_1/bin

$ ./dbca
a) On the Welcome page, select Oracle Real Application Clusters database and click
Next.
b) On the Operations page, select “Create a Database” and click Next.
c) On the Database Templates page, select the General Purpose or Transaction
Processing template. Click Next to continue.
d) On the Database Identification page, select Admin-Managed and enter orcl in
the Global Database Name field. Click the Select All button to install the database
to all three of your nodes and click Next.
e) On the Management Options page, make sure that the Configure Enterprise
Manager check box is selected and the “Configure Database Control for local
management” option is selected. Click Next to continue.
f) Select the “Use Same Administrative Password for All Accounts” on the
Database Credentials page. Enter oracle_4U in the Password and Confirm
Password fields and click Next.
g) On the Database File Locations page, you can specify your storage type and location for
your database files. Select Automatic Storage Management (ASM) from the drop-
down list for Storage Type. Select Use Oracle-Managed Files and make sure that
the Database Area value is +DATA. Click Next to continue.
h) When the ASM Credentials box appears, enter the ASM password that you used
during the clusterware installation practice 2.2, step k. The password should be
oracle_4U. Enter the correct password and click OK to close the box.
i) On the Recovery Configuration page, enter +FRA in the Recovery Area field and
accept the default for the Recovery Area Size. Click Next to continue.
j) On the Database Content page, select the Sample Schemas check box and click
Next.

k) On the Memory tabbed page of the Initialization Parameters page, make sure that
you enter 550 in the Memory Size (SGA and PGA) field. Then click the
Character Sets tab, and select Use Unicode on the Database Character Set tabbed
page. Click Next.
l) Accept the default values on the Database Storage page and click Next to
continue.
m) Select Create Database on the Creation Options page and click Finish.
n) On the Database Configuration Assistant: Summary page, click OK.
o) You can monitor the database creation progress from the Database Configuration
Assistant window.
p) At the end of the installation, a dialog box with your database information
ble
including the Database Control URL is displayed. Click Exit. This will close the
fe r a
dialog box and end the Database Configuration Assistant.
t r a ns
q) Open a browser and enter the Database Control URL displayed in the previous
step. Verify that all instances are up. Verify that all cluster resources are up across
all three nodes.
Practices for Lesson 12
In these practices, you will contrast operating system authenticated connections and
password file authenticated connections with Oracle database authenticated connections.
You will also learn to stop a complete ORACLE_HOME component stack.

Practice 12-1: Operating System and Password File
Authenticated Connections
In this practice, you will make both operating system authenticated connections and
password file authenticated connections to the database instance. You will also examine
problems with the oraenv script.
1) Connect to your first node as the oracle user and set up your environment variables
using the oraenv script.
$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 is
/u01/app/oracle
2) Identify all the database instance names that are currently executing on your machine
using the Linux ps command. Note: All database instances have a mandatory
background process named pmon, and the instance name will be part of the complete
process name.
$ ps -ef | grep -i pmon
grid 4111 1 0 10:53 ? 00:00:00 asm_pmon_+ASM1
oracle 4987 1 0 10:56 ? 00:00:00 ora_pmon_orcl1
oracle 5299 5257 0 10:58 pts/0 00:00:00 grep -i pmon
3) Attempt to make a local connection to the orcl1 instance using SQL*Plus with the
sysdba privilege. This is known as operating system authentication because a
password is not needed. What happens when trying to connect to the instance?
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 10:59:04
2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL> exit
Disconnected.
$
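The "Connected to an idle instance" message is the clue here: with operating system
authentication, SQL*Plus attaches to whatever instance the ORACLE_SID variable
names, and oraenv set it to the database name orcl rather than the running instance
name orcl1. You can see the mismatch with the values already displayed in steps 1
and 2:

$ echo $ORACLE_SID
orcl
$ ps -ef | grep -i pmon | grep -i orcl
oracle 4987 1 0 10:56 ? 00:00:00 ora_pmon_orcl1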

4) Attempt to connect to the instance using a network connection string @orcl with the
sysdba privilege. This is known as password file authentication. Is the connection
successful this time?
$ sqlplus sys@orcl as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 11:03:35


2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Enter password: oracle_4U << Password is not displayed

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options
ble
SQL> exit
fe r a
an s
$
n - t r
5) Display the values of the environment variables (ORACLE_BASE, ORACLE_HOME,
ORACLE_SID, PATH, and LD_LIBRARY_PATH) that were defined with the
oraenv script in step 1.

$ env | grep ORA
ORACLE_SID=orcl
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1

$ env | grep PATH
LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_1/lib
PATH=/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/NX/bi
n:/home/oracle/bin:/u01/app/oracle/product/11.2.0/dbhome_1/bin
6) Modify the ORACLE_SID environment variable to match the actual database instance
name for the orcl database.
$ export ORACLE_SID=orcl1

7) Attempt the local connection with operating system authentication to the orcl1
instance using SQL*Plus with the sysdba privilege. This is the same command as in step 3.
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 11:01:48


2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production

With the Partitioning, Real Application Clusters, Automatic


Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>

8) Query the instance_name column of the v$instance dynamic performance
view to validate the instance that you connected with. Exit SQL*Plus when finished.
SQL> select instance_name from v$instance;

INSTANCE_NAME
---------------- ble
orcl1 fe r a
an s
SQL> exit n - t r
o
Practice 12-2: Oracle Database Authenticated Connections
In this practice, you will make multiple Oracle database authenticated connections to a
database instance and notice the effects of load balanced connections.
1) From your first node, connected as the oracle user, validate the instance names on
each host.
$ . /home/oracle/labs/st_env.sh
$ ssh $ST_NODE1 ps -ef | grep pmon
grid 4111 1 0 10:53 ? 00:00:00 asm_pmon_+ASM1
oracle 4987 1 0 10:56 ? 00:00:00 ora_pmon_orcl1
oracle 7932 7779 0 11:56 pts/0 00:00:00 grep pmon

$ ssh $ST_NODE2 ps -ef | grep pmon
grid 4104 1 0 10:53 ? 00:00:00 asm_pmon_+ASM2
oracle 4959 1 0 10:56 ? 00:00:00 ora_pmon_orcl2

$ ssh $ST_NODE3 ps -ef | grep pmon
grid 4096 1 0 10:53 ? 00:00:00 asm_pmon_+ASM3
oracle 4841 1 0 10:55 ? 00:00:00 ora_pmon_orcl3
2) Verify the current host name, and then set the environment variables using the
oraenv script.
$ hostname
host01.example.com

$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 is
/u01/app/oracle

e e H 3) Connect to a database instance by using SQL*Plus with the system account. This is
C h known as Oracle database authentication. After it is connected, query the
instance_name column from the v$instance dynamic performance view.
Note: Your instance names may vary from the ones displayed below.
$ sqlplus system@orcl

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 12:05:06


2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Enter password: oracle_4U << Password is not displayed

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl2

SQL>

4) Use the SQL*Plus host command to temporarily exit SQL*Plus and return to the
operating system prompt. Note: SQL*Plus is still running when this is performed.
Validate that you are still on your first node. Repeat step 3 from the operating system
prompt to establish a second SQL*Plus session and database instance connection.
What instance name did you connect to?
SQL> host
$ hostname
host01.example.com

$ sqlplus system@orcl

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 12:13:02
2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Enter password: oracle_4U << Password is not displayed

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl1

SQL>

5) Use the SQL*Plus host command to temporarily exit SQL*Plus and return to the
operating system prompt. Note: SQL*Plus is still running when this is performed.
Validate that you are still on your first node. Repeat step 3 from the operating system
prompt to establish a third SQL*Plus session and database instance connection. What
instance name did you connect to?

SQL> HOST
$ hostname
host01.example.com
[oracle@host01 ~]$ sqlplus system@orcl

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 12:15:52


2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Enter password: oracle_4U << Password is not displayed

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl3

SQL>
n (c nse
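Each connection above landed on a different instance (orcl2, orcl1, orcl3): the SCAN
listener is spreading the sessions across the cluster. A compact way to watch this from
a single shell, before exiting the sessions in the next step, is a loop such as the
following (a sketch; it assumes the system password used above):

$ for i in 1 2 3
> do
>   echo "select instance_name from v\$instance;" | sqlplus -S system/oracle_4U@orcl | grep orcl
> done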
6) Exit the three SQL*Plus sessions that are currently executing on the first node.
H en SQL> exit
h e e Disconnected from Oracle Database 11g Enterprise Edition
C Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

$ exit
exit

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition
Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

$ exit
exit

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition
Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

$ exit << Optional. This will exit your terminal session.

Practice 12-3: Stopping a Complete ORACLE_HOME Component
Stack
In this practice, you will use the srvctl utility to stop all resource components
executing from a single home location.
1) From your first node, connect to the oracle account and set up the environment
variables for the database instance.
$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 is
/u01/app/oracle
$ . /home/oracle/labs/st_env.sh
ble
2) Validate that the instances are running on each node of the cluster. fe r a
Note: There are several ways this step can be performed. an s
n - t r
$ ssh $ST_NODE1 ps -ef | grep pmon
grid 4111 1 0 10:53 ? 00:00:00 asm_pmon_+ASM1
oracle 4987 1 0 10:56 ? 00:00:00 ora_pmon_orcl1
oracle 5321 5254 0 12:45 pts/0 00:00:00 grep pmon

$ ssh $ST_NODE2 ps -ef | grep pmon
grid 4104 1 0 10:53 ? 00:00:00 asm_pmon_+ASM2
oracle 4959 1 0 10:56 ? 00:00:00 ora_pmon_orcl2

$ ssh $ST_NODE3 ps -ef | grep pmon
grid 4096 1 0 10:53 ? 00:00:00 asm_pmon_+ASM3
oracle 4841 1 0 10:55 ? 00:00:00 ora_pmon_orcl3

>>>>>>>>>>>>> or <<<<<<<<<<<<<<<<<

$ srvctl status database -d orcl -v
Instance orcl1 is running on node host01
Instance orcl2 is running on node host02
Instance orcl3 is running on node host03

$ srvctl status asm -a
ASM is running on host01,host02,host03
ASM is enabled.

>>>>>>>>>>>>> or <<<<<<<<<<<<<<<<<

$ /u01/app/11.2.0/grid/bin/crsctl status resource ora.orcl.db

NAME=ora.orcl.db
TYPE=ora.database.type
TARGET=ONLINE , ONLINE , ONLINE
STATE=ONLINE on host01, ONLINE on host02, ONLINE on host03

$ /u01/app/11.2.0/grid/bin/crsctl status resource ora.asm
NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE , ONLINE , ONLINE
STATE=ONLINE on host01, ONLINE on host02, ONLINE on host03
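If you are not certain which Oracle home a database runs from, srvctl can report it;
this is the value the status home and stop home commands below expect for their -o
argument (a quick check; output abridged):

$ srvctl config database -d orcl | grep -i 'oracle home'
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1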

3) Display the syntax usage help for the srvctl status home command.
$ srvctl status home -help

Displays the current state of all resources for the Oracle
home.

Usage: srvctl status home -o <oracle_home> -s <state_file> -n
<node_name>
-o <oracle_home> ORACLE_HOME path fe r a
-s <state_file> Specify a file path for the 'srvctl stop
an s
home' command to store the state of the resources
n - t r
-n <node_name> Node name
a no
-h Print usage
h a s
4) Use the srvctl status home command to check the state of all resources
running from the /u01/app/oracle/product/11.2.0/dbhome_1 home
location. Create the required state file in the /tmp directory with the file name
host01_dbhome_state.dmp for the first node only.
$ srvctl status home -o
/u01/app/oracle/product/11.2.0/dbhome_1 -s
/tmp/host01_dbhome_state.dmp -n $ST_NODE1
Database orcl is running on node host01
5) Display the syntax usage help for the srvctl stop home command.
$ srvctl stop home -help

Stops all Oracle clusterware resources that run from the
Oracle home.

Usage: srvctl stop home -o <oracle_home> -s <state_file> -n
<node_name> [-t <stop_options>] [-f]
-o <oracle_home> ORACLE_HOME path
-s <state_file> Specify a file path for the 'srvctl stop
home' command to store the state of the resources
-n <node_name> Node name
-t <stop_options> Stop options for the database.
Examples of shutdown options are normal, transactional,
immediate, or abort.
-f Force stop
-h Print usage

6) Stop all resources executing in the
/u01/app/oracle/product/11.2.0/dbhome_1 home using the state file
created in step 4. Do not use the optional parameters identified by square brackets
“[]” displayed in the syntax usage help.
$ srvctl stop home -o /u01/app/oracle/product/11.2.0/dbhome_1
-s /tmp/host01_dbhome_state.dmp -n $ST_NODE1
$
7) Check the status of the database instances on each node.
Note: There are several ways this step can be performed. Do not use the srvctl
status home command with the same state file created above.
$ srvctl status database -d orcl -v
Instance orcl1 is not running on node host01
Instance orcl2 is running on node host02
Instance orcl3 is running on node host03
8) Start all resources for the /u01/app/oracle/product/11.2.0/dbhome_1
home using the state file created by the stop command.

$ srvctl start home -o /u01/app/oracle/product/11.2.0/dbhome_1
-s /tmp/host01_dbhome_state.dmp -n $ST_NODE1
$
9) Check the status of the database instances on each node.
Note: There are several ways this step can be performed.

$ srvctl status database -d orcl -v
Instance orcl1 is running on node host01
Instance orcl2 is running on node host02
Instance orcl3 is running on node host03

Practices for Lesson 13
In this practice, you will configure ARCHIVELOG mode for your RAC database,
configure instance-specific connect strings for RMAN, and configure persistent RMAN
settings.

Practice 13-1: Configuring the Archive Log Mode
In this practice, you will configure the archive log mode of a Real Application Clusters
database.
1) From the first node of your cluster, open a terminal session as the oracle user and
set up the environment variables using the oraenv script for the database instance.
Change the value of the ORACLE_SID variable to allow local system authenticated
connections.
$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 is
/u01/app/oracle

$ export ORACLE_SID=orcl1

2) Make a local connection using operating system authentication to the database
instance and then use the archive log list SQL command to determine if the
database is in archive log mode. Exit SQL*Plus when done.
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 15:48:58
2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> archive log list


Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 11
Current log sequence 12
SQL> exit
$

3) Stop the orcl database on each node of the cluster using the srvctl stop
database command.
$ srvctl stop database -d orcl

4) Verify that the orcl database is not running on any node of the cluster using the
srvctl status database command.

$ srvctl status database -d orcl -v


Instance orcl1 is not running on node host01
Instance orcl2 is not running on node host02
Instance orcl3 is not running on node host03
5) Make a local connection using operating system authentication to the database
instance and then start up the database on the first node only with the mount option.
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 16:27:17
2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area  577511424 bytes
Fixed Size                  1338000 bytes
Variable Size             444597616 bytes
Database Buffers          125829120 bytes
Redo Buffers                5746688 bytes
Database mounted.
SQL>
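An equivalent way to mount only the first instance, without entering SQL*Plus, is to
pass the startup option through srvctl (a sketch using the instance name from the
earlier practices):

$ srvctl start instance -d orcl -i orcl1 -o mount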
6) Issue the alter database archivelog SQL command to change the archive
mode of the database and then verify the results with the archive log list
SQL command.
SQL> alter database archivelog;
Database altered.

SQL> archive log list


Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 11
Next log sequence to archive 12
Current log sequence 12
7) Shut down the database instance with the immediate option and exit SQL*Plus. Use
the srvctl utility to restart the database instances on all nodes of the cluster.
SQL> shutdown immediate
ORA-01109: database not open

Database dismounted.

ORACLE instance shut down.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition
Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

$ srvctl start database -d orcl


$

8) Verify that the orcl database is running on all three nodes of your cluster by
using the srvctl status database command.
fe r a
$ srvctl status database -d orcl -v
an s
Instance orcl1 is running on node host01
n - t r
o
an
Instance orcl2 is running on node host02
Instance orcl3 is running on node host03 s
$
Practice 13-2: Configuring Specific Instance Connection Strings
In this practice, you will modify the tnsnames.ora file to disable connection load
balancing and allow specific named instances to be used for connectivity.
1) Examine the $ORACLE_HOME/network/admin/tnsnames.ora file. There should
be only one entry. This entry allows load balancing of connections as you observed in
Practice 12-2.
$ cat
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames
.ora

# tnsnames.ora Network Configuration File:
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames
.ora
# Generated by Oracle configuration tools.

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster03-
scan.cluster03.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.example.com)
    )
  )

2) Execute the /home/oracle/labs/less_13/fix_tns.sh script. This script
will add three additional entries to the tnsnames.ora file that disable load
balancing of connections by requiring a specific INSTANCE_NAME when used.
Examine the changes made to the tnsnames.ora file.

$ /home/oracle/labs/less_13/fix_tns.sh
$ cat
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames
.ora
# tnsnames.ora Network Configuration File:
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames
.ora
# Generated by Oracle configuration tools.

ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = cluster03-
scan.cluster03.example.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
)
)

# Added by lab 13
orcl1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host01)(PORT=1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (INSTANCE_NAME = orcl1)
    )
  )
orcl2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host02)(PORT=1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (INSTANCE_NAME = orcl2)
    )
  )
orcl3 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host03)(PORT=1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (INSTANCE_NAME = orcl3)
    )
  )
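Before using the new aliases, you can confirm that each one resolves; tnsping exercises
name resolution and listener reachability only, not the INSTANCE_NAME routing, which
steps 3 and 4 verify from SQL*Plus:

$ tnsping orcl2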
3) Using one of the three new entries in the tnsnames.ora file, connect to the system
database account using SQL*Plus and verify the instance name to see that it matches
the specific entry.
$ sqlplus system@orcl2

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 17:21:26


2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Enter password: oracle_4U << Password is not displayed

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,

Data Mining and Real Application Testing options

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl2

SQL>

4) Use the SQL*Plus host command to temporarily exit SQL*Plus and return to the
operating system prompt. Note: SQL*Plus is still running when this is performed.
Repeat step 3 from the operating system prompt to establish a second SQL*Plus
session and database instance connection using the same connection string. Verify
that the INSTANCE_NAME stays the same.

SQL> host

$ sqlplus system@orcl2

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 17:26:30
2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Enter password: oracle_4U << Password is not displayed

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl2
5) Exit both SQL*Plus sessions.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition
Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

$ exit

Oracle 11g: RAC and Grid Infrastructure Administration Accelerated A - 173


Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ
Practice 13-2: Configuring Specific Instance Connection Strings
(continued)

exit

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition
Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options
$

Practice 13-3: Configuring RMAN and Performing Parallel
Backups
In this practice, you will designate your first and second nodes of the cluster as nodes
responsible for performing parallel backups of the database. The database will be backed
up to the +FRA ASM disk group by default.
1) Using the recovery manager utility (RMAN), connect to the orcl database as the
target database.
$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Tue Sep 8


17:31:34 2009
ble
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All
fe r a
rights reserved.
an s
connected to target database: ORCL (DBID=1224399398) n - t r
a no
RMAN>
h a s
2) Display all of the current RMAN settings.

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name
ORCL are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK
TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO
BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; #
default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
# default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE
'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO
'/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_orcl1.f';
# default
3) Configure RMAN to automatically back up the control file and server parameter file
each time any backup operation is performed.

RMAN> configure controlfile autobackup on;

new RMAN configuration parameters:


CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored
4) Configure all backups to disk to be performed with 2 degrees of parallelism.
RMAN> configure device type disk parallelism 2;

new RMAN configuration parameters:


CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO
BACKUPSET; ble
new RMAN configuration parameters are successfully stored fe r a
ans
5) Configure channel 1 and channel 2 to use the connect string
‘sys/oracle_4U@orcl#’ when performing a parallel backup to disk. Replace the
pound sign (#) with 1 for channel 1 and 2 for channel 2, respectively. This will
designate your first and second nodes as dedicated backup nodes for the cluster using
the node specific connection strings created earlier. Without node specific connection
strings, there would be no control over which nodes are being connected to in order to
perform the backups.

RMAN> configure channel 1 device type disk
connect='sys/oracle_4U@orcl1';

new RMAN configuration parameters:
CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT '*';
new RMAN configuration parameters are successfully stored

RMAN> configure channel 2 device type disk
connect='sys/oracle_4U@orcl2';

new RMAN configuration parameters:
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT '*';
new RMAN configuration parameters are successfully stored
RMAN>
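Notice that RMAN echoes each stored connect string back as CONNECT '*': credentials
are masked in the persisted configuration. You can redisplay the channel settings at
any time and should expect the masked form:

RMAN> show channel;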

6) Open a second terminal session as the oracle account and set up the environment
variables for the orcl database. Navigate to the ~/labs/less_13 directory,
invoke SQL*Plus as the system user, and run the monitor_rman.sql script. Do
not exit the first session with the RMAN prompt or this second session with the SQL
prompt.
$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 is
/u01/app/oracle

$ cd /home/oracle/labs/less_13

$ sqlplus system@orcl

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 8 20:37:36


2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Enter password: oracle_4U << Password is not displayed

Connected to:
ble
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
fe r a
Production
an s
With the Partitioning, Real Application Clusters, Automatic
n - t r
Storage Management, OLAP,
Data Mining and Real Application Testing options a no
h a s
SQL> @monitor_rman.sql
m ) e ฺ
o
ฺc Gu i d
no rows selected
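The script returns no rows because no backup is running yet. The exact contents of
monitor_rman.sql are supplied with the lab files, but a monitoring query of this kind
typically polls v$session_longops; a minimal stand-in (an assumption, not the actual
script) would be:

select sid, opname,
       round(sofar/totalwork*100, 1) pct_done
from   v$session_longops
where  opname like 'RMAN%'
and    totalwork > 0
and    sofar <> totalwork;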
- i n c t
c s e n
7) In the first session with the RMAN prompt,
n @ a perform
S t ud a full database backup with
archive logs. The backup should
g ฺ a happen
tnodes. t h ionly
s on the designated nodes (your first and
n se
second nodes) as the backup
h e Do not wait for this step to finish before
proceeding to theenext step! u
c h e e to
(
RMAN> backup database
n ns plus archivelog;
T a li c e
e n g
e e H Starting backup at 08-SEP-09
C h current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=31 instance=orcl1 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=55 instance=orcl2 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=2 sequence=4 RECID=1 STAMP=697048583
input archived log thread=3 sequence=10 RECID=4
STAMP=697062223
channel ORA_DISK_1: starting piece 1 at 08-SEP-09
channel ORA_DISK_2: starting archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=12 RECID=2
STAMP=697062220
input archived log thread=2 sequence=5 RECID=3 STAMP=697062220
channel ORA_DISK_2: starting piece 1 at 08-SEP-09
channel ORA_DISK_2: finished piece 1 at 08-SEP-09

piece
handle=+FRA/orcl/backupset/2009_09_08/annnf0_tag20090908t20234
7_0.268.697062231 tag=TAG20090908T202347 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time:
00:00:04
channel ORA_DISK_1: finished piece 1 at 08-SEP-09
piece
handle=+FRA/orcl/backupset/2009_09_08/annnf0_tag20090908t20234
7_0.267.697062229 tag=TAG20090908T202347 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time:
00:00:05
Finished backup at 08-SEP-09

Starting backup at 08-SEP-09
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001
name=+DATA/orcl/datafile/system.260.696618497
input datafile file number=00005
name=+DATA/orcl/datafile/example.264.696618709
input datafile file number=00004
name=+DATA/orcl/datafile/users.267.696618501
input datafile file number=00006
name=+DATA/orcl/datafile/undotbs2.259.696619015
channel ORA_DISK_1: starting piece 1 at 08-SEP-09
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00002
name=+DATA/orcl/datafile/sysaux.268.696618499
input datafile file number=00003
name=+DATA/orcl/datafile/undotbs1.263.696618499
input datafile file number=00007
name=+DATA/orcl/datafile/undotbs3.258.696619021
channel ORA_DISK_2: starting piece 1 at 08-SEP-09
channel ORA_DISK_2: finished piece 1 at 08-SEP-09
piece
handle=+FRA/orcl/backupset/2009_09_08/nnndf0_tag20090908t20235
4_0.270.697062239 tag=TAG20090908T202354 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time:
00:01:35
channel ORA_DISK_1: finished piece 1 at 08-SEP-09
piece
handle=+FRA/orcl/backupset/2009_09_08/nnndf0_tag20090908t20235
4_0.269.697062235 tag=TAG20090908T202354 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time:
00:01:47
Finished backup at 08-SEP-09

Starting backup at 08-SEP-09
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=13 RECID=5
STAMP=697062345
input archived log thread=2 sequence=6 RECID=7 STAMP=697062345
channel ORA_DISK_1: starting piece 1 at 08-SEP-09
channel ORA_DISK_2: starting archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=3 sequence=11 RECID=6
STAMP=697062345
channel ORA_DISK_2: starting piece 1 at 08-SEP-09
channel ORA_DISK_1: finished piece 1 at 08-SEP-09
piece
handle=+FRA/orcl/backupset/2009_09_08/annnf0_tag20090908t20254
6_0.274.697062347 tag=TAG20090908T202546 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time:
00:00:02
channel ORA_DISK_2: finished piece 1 at 08-SEP-09
piece
handle=+FRA/orcl/backupset/2009_09_08/annnf0_tag20090908t20254
6_0.275.697062347 tag=TAG20090908T202546 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time:
00:00:01
Finished backup at 08-SEP-09

Starting Control File and SPFILE Autobackup at 08-SEP-09
piece
handle=+FRA/orcl/autobackup/2009_09_08/s_697062349.276.6970623
53 comment=NONE
Finished Control File and SPFILE Autobackup at 08-SEP-09

RMAN>
8) While the backup is in progress, rerun the query on the second terminal window to
monitor the RMAN backup session progress within the cluster. The backup should be
done in parallel, with work distributed to both the backup nodes of the cluster. Enter
the slash (/) symbol and press the Enter key to rerun the query. It may be necessary
to do this multiple times until the output appears.
SQL> /

no rows selected

SQL> /

no rows selected
SQL> /

INST_ID SID SERIAL# CONTEXT SOFAR TOTALWORK %_COMPLETE
------- ---- -------- ------- -------- ---------- ----------
1 31 1913 1 13308 104960 12.68
2 55 284 1 23934 101760 23.52

SQL> /

INST_ID SID SERIAL# CONTEXT SOFAR TOTALWORK %_COMPLETE
------- ---- -------- ------- -------- ---------- ----------
1 31 1913 1 44798 104960 42.68
2 55 284 1 49278 101760 48.43

SQL> exit
9) Run the /home/oracle/labs/less_13/cleanup_13.sh script.
$ /home/oracle/labs/less_13/cleanup_13.sh
The Oracle base for
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 is
/u01/app/oracle
10) Exit all windows when finished.

Practices for Lesson 14
This practice is designed to show you how to discover performance problems in your
RAC environment. In this practice, you identify performance issues by using Enterprise
Manager and fix them in three successive steps. At each step, you generate the same
workload to make sure that you are making progress toward a resolution.

Practice 14-1: ADDM and RAC Part I
The goal of this lab is to show you how to manually discover performance issues by
using the Enterprise Manager performance pages as well as ADDM. This first part
generates a workload that uses a bad RAC application design.
Note that all the necessary scripts for this lab are located in the
/home/oracle/labs/seq directory on your first cluster node.
1) Before you start this lab, make sure that you have the necessary TNS entries in the
tnsnames.ora file located in your ORACLE_HOME for the orcl database. You
can execute the following script to create those entries: add_tnsinstances.sh.
$ cd /home/oracle/labs/seq
$ ./add_tnsinstances.sh
# tnsnames.ora… Network Configuration File:
/u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora…
# Generated by Oracle configuration tools.

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ….example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

orcl1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = <host01>)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (INSTANCE_NAME = orcl1)
    )
  )

orcl2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = <host02>)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (INSTANCE_NAME = orcl2)
    )
  )

orcl3 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = <host03>)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (INSTANCE_NAME = orcl3)
    )
  )

$
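
Note: Before continuing, you can quickly verify that the new aliases resolve by using
tnsping, the standard Oracle Net utility (output omitted here):
$ tnsping orcl1
$ tnsping orcl2
$ tnsping orcl3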
2) Execute the setupseq1.sh script to set up the necessary configuration for this lab.
$ ./setupseq1.sh

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

drop user jfv cascade
*
ERROR at line 1:
ORA-01918: user 'JFV' does not exist

drop tablespace seq including contents and datafiles
*
ERROR at line 1:
ORA-00959: tablespace 'SEQ' does not exist

Tablespace created.

User created.

Grant succeeded.

drop sequence s
*
ERROR at line 1:
ORA-02289: sequence does not exist

drop table s purge
*
ERROR at line 1:
ORA-00942: table or view does not exist

drop table t purge
*
ERROR at line 1:
ORA-00942: table or view does not exist

Table created.

Table created.

Index created.

1 row created.

Commit complete.

PL/SQL procedure successfully completed.

$

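Note: The lab scripts themselves are not listed in this guide. The anti-pattern they
exercise is, in outline, a "sequence table": every session serializes on the single
row of table S to compute the next value. The sketch below is illustrative only; the
column name VAL is an assumption, and the shipped startseq1.sh may differ:
declare
  v number;
begin
  for i in 1 .. 10000 loop
    lock table s in exclusive mode;   -- source of the Application (lock) waits
    select val + 1 into v from s;
    update s set val = v;             -- hot block: source of the Cluster waits
    insert into t values (v, 'orcl1');
    commit;                           -- releases the lock for the next iteration
  end loop;
end;
/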
3) Using Database Control, and connected as the SYS user, navigate to the Performance
page of your Cluster Database.
a) Click the Performance tab from the Cluster Database Home page.
b) On the Cluster Database Performance page, make sure that Real Time: 15
Seconds Refresh is selected from the View Data drop-down list.
4) Use PL/SQL to create a new AWR snapshot.
$ ./create_snapshot.sh

PL/SQL procedure successfully completed.

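Note: create_snapshot.sh is not listed in the guide. Assuming it simply calls the
documented AWR API, a minimal equivalent would be:
$ sqlplus -s / as sysdba <<EOF
exec dbms_workload_repository.create_snapshot
EOF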
5) Execute the startseq1.sh script to generate a workload on all instances of your
cluster. Do not wait; proceed with the next step.
$ ./startseq1.sh
$ old 7: insert into t values(v,'&1');
new 7: insert into t values(v,'orcl2');
old 7: insert into t values(v,'&1');
new 7: insert into t values(v,'orcl1');
old 7: insert into t values(v,'&1');
new 7: insert into t values(v,'orcl3');

… Do not wait after this point and go to the next step.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

$
c s e n
6) Using Database Control, determine the list of blocking locks in your database.
a) Still on the Performance page, click the Database Locks link in the Additional
Monitoring Links section of the page.
b) On the Database Locks page, make sure that Blocking Locks is selected from the
View drop-down list.
c) If you do not see any locks, refresh the page by clicking Refresh. Repeat this
until you see locks. When you see a session lock, you should also see that the
other session is waiting for that same lock. By clicking Refresh several times, you
should see that all sessions are alternately waiting for the others to release the
exclusive lock held on table S.
7) While the scripts are still executing, look at the Average Active Sessions graphic.
Then, drill down to the Cluster wait class for the first node. What are your
conclusions?
a) By using the drill-down method of Enterprise Manager, you can quickly identify
the top waiting SQL statements and the top waiting sessions on both instances.
Here it appears that an UPDATE statement on table S is causing most of the waits
for the Cluster wait class.
b) Click Cluster Database in the locator link at the top of the page to return to the
Cluster Database Performance page.

c) From there you can now see the Average Active Sessions graph. Make sure that
the View Data field is set to Real Time:15 Seconds Refresh. After a few seconds,
the graphic should clearly show that the Cluster and Application wait classes are
causing most waits. Using the Throughput tabbed page graph underneath the
Average Active Sessions graph, you should also notice that the transaction rate is
about 250 per second.
d) In the Average Active Sessions graph, click the Cluster link on the right. This
takes you to the Active Sessions By Instance: Cluster page.
e) On the Active Sessions By Instance: Cluster page, you will see that the number of
active sessions is almost the same on all nodes. Click the first instance’s link
(instance number 1). This takes you to the Active Sessions Waiting: Cluster page e
for the corresponding instance. r a bl
s fe
f) On the Active Sessions Waiting: Cluster page, you can see the most important
wait events causing most of the waits in the Cluster wait class on the first
instance. In the Top SQL: Cluster section, click the SQL identifier that uses most
of the resources. This takes you to the SQL Details page for the corresponding
statement. You will see that the script running on the first instance is executing a
SELECT/UPDATE statement on table S that causes most of the Cluster waits.
8) Using Database Control, look at the Cluster Cache Coherency page. What are your
conclusions?
a) On the Cluster Database Home page, click the Performance tab.
b) On the Performance page, click the Cluster Cache Coherency link in the
Additional Monitoring Links section.
c) The Cluster Cache Coherency page clearly shows that there are lots of blocks
transferred per second on the system. This represents more than 17% of the total
logical reads. This is reflected in both the Global Cache Block Transfer Rate and
the Global Cache Block Transfers and Physical Reads (vs. Logical Reads)
graphics.
d) On the Cluster Cache Coherency page, you can also click Interconnects in the
Additional Links section of the page to get more information about your private
interconnect.
9) While the scripts are still executing, look at the Average Active Sessions graph on the
Database Performance page. Then drill down to the Application wait class for the first
instance. What are your conclusions?
a) By using the drill-down method of Enterprise Manager, you can quickly identify
the top waiting SQL statements and the top waiting sessions on both instances.
Here it appears that a LOCK statement on table S is causing most of the waits for
the Application wait class.
b) Go back to the Cluster Database Home page by clicking the Database tab located
in the top right-hand corner. On the Cluster Database Home page, click the
Performance tab.

c) On the Performance page, make sure that the View Data field is set to Real Time:
15 Seconds Refresh. After a few seconds, the graphic should clearly show that the
Cluster and Application wait classes are causing most waits. You will also notice
that the transaction rate is about 100 per second.
d) In the Average Active Sessions graph, click the Application link on the right. This
takes you to the Active Sessions By Instance: Application page.
e) On the Active Sessions By Instance: Application page, you must see that the
number of active sessions is almost the same on all nodes. Click the link for the
first instance (number 1) on the Summary Chart graph. This takes you to the
Active Sessions Waiting: Application page of the first instance.
f) On the Active Sessions Waiting: Application page, you can see the most
important wait events causing most of the waits in the Application wait class on
the first instance. In the Top SQL: Application section, click the SQL identifier
that uses most of the resources. This takes you to the SQL Details page for the
corresponding statement. You must see that the script running on the first instance
is executing a LOCK statement on table S that causes most of the Application
waits.
g) After a while, you can see that all scripts are executed by looking at the Average
Active Sessions graph as well as the Database Throughput graphics again. You
should see the number of transactions per second going down.
10) After the workload finishes, use PL/SQL to create a new AWR snapshot.
$ ./create_snapshot.sh

PL/SQL procedure successfully completed.

$
11) Using Database Control, review the latest ADDM run. What are your conclusions?
a) On the Cluster Database Home page, click the Advisor Central link in the Related
Links section.
b) On the Advisor Central page, make sure that the Advisory Type field is set to All
Types, and that the Advisor Runs field is set to Last Run. Click Go.
c) In the Results table, select the latest ADDM run corresponding to Instance All.
Then click View Result. This takes you to the Automatic Database Diagnostic
Monitor (ADDM) page.
d) On the Automatic Database Diagnostic Monitor (ADDM) page, the ADDM
Performance Analysis table shows you the consolidation of ADDM reports from
all instances running in your cluster. This is your first entry point before drilling
down to specific instances. From there, investigate the Top SQL Statements,
Table Locks, and Global Cache Messaging findings.

e) Click the Top SQL Statements finding, which affects all instances, revealing
LOCK TABLE S and UPDATE S commands as a possible problem to
investigate. Click the Back button to return to the ADDM report.
f) Click the Table Locks finding, which affects all instances, revealing that you
should investigate your application logic regarding the JFV.S object.
g) Click the Global Cache Messaging finding, revealing again the UPDATE S
command as responsible for approximately 30% of Cluster waits during the
analysis period.
h) Back on the Automatic Database Diagnostic Monitor (ADDM) page, you can now
drill down to each instance by using the links located in the Affected Instances
table. Click the link corresponding to the most affected instance (although all
should be equally affected).
i) On the corresponding Automatic Database Diagnostic Monitor (ADDM) instance
page, you should see top findings similar to those you previously saw at the
cluster level.
s an
Practice 14-2: ADDM and RAC Part II
The goal of this lab is to show you how to manually discover performance issues by
using the Enterprise Manager performance pages as well as ADDM. In this second part
of the practice, you are going to correct the previously found issue by using a
sequence to generate numbers instead of a table.
Note that all the necessary scripts for this lab are located in the
/home/oracle/labs/seq directory on your first cluster node.
1) Execute the setupseq2.sh script to create the necessary objects used for the rest
of this practice.
$ ./setupseq2.sh

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

User dropped.

Tablespace dropped.

Tablespace created.

User created.

Grant succeeded.
drop table s purge
*
ERROR at line 1:
ORA-00942: table or view does not exist

drop sequence s
*
ERROR at line 1:
ORA-02289: sequence does not exist

drop table t purge
*
ERROR at line 1:
ORA-00942: table or view does not exist

Table created.

Index created.

Sequence created.

PL/SQL procedure successfully completed.

$

2) Using Database Control, and connected as the SYS user, navigate to the Performance
page of your Cluster Database.
a) Click the Performance tab from the Cluster Database Home page.
b) On the Cluster Database Performance page, make sure Real Time: 15 Seconds
Refresh is selected from the View Data drop-down list.
3) Use PL/SQL to create a new AWR snapshot.
$ ./create_snapshot.sh

PL/SQL procedure successfully completed.

$
4) Execute the startseq2.sh script to generate a workload on all instances of your
cluster. Do not wait; proceed with the next step.
$ ./startseq2.sh
$ old 3: insert into t values(s.nextval,'&1');
new 3: insert into t values(s.nextval,'orcl1');
old 3: insert into t values(s.nextval,'&1');
new 3: insert into t values(s.nextval,'orcl3');
old 3: insert into t values(s.nextval,'&1');
new 3: insert into t values(s.nextval,'orcl2');

… Do not wait after this point and go to the next step.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.


$

5) While the scripts are still executing, look at the Average Active Sessions graphic.
Then drill down to the Cluster wait class for the first node. What are your
conclusions?
a) By using the drill-down method of Enterprise Manager, you can quickly identify
the top waiting SQL statements and the top waiting sessions on both instances.
Here it appears that an INSERT statement on table T is causing most of the waits
for the Cluster wait class.
b) Click Cluster Database in the locator link at the top of the page to return to the
Cluster Database Performance page.
c) From there you can now see the Average Active Sessions graph. Make sure that
the View Data field is set to Real Time:15 Seconds Refresh. After a few seconds,
the graphic should clearly show that the Cluster and Application wait classes are
causing most waits. Using the Throughput tabbed page graph underneath the
Average Active Sessions graph, you should also notice that the transaction rate is
about 320 per second (a better rate than in the previous practice).
d) In the Average Active Sessions graph, click the Cluster link on the right. This
takes you to the Active Sessions By Instance: Cluster page.
e) On the Active Sessions By Instance: Cluster page, you should see that the number
of active sessions is almost the same on all nodes. Click the first instance's link
(instance number 1). This takes you to the Active Sessions Waiting: Cluster page
for the corresponding instance.
f) On the Active Sessions Waiting: Cluster page, you can see the most important
wait events causing most of the waits in the Cluster wait class on the first
instance. In the Top SQL: Cluster section, click the SQL identifier that uses most
of the resources. This takes you to the SQL Details page for the corresponding
statement. You will see that the script running on the first instance is executing an
INSERT statement on table T that causes most of the Cluster waits.
g) After a while, you can see that all the scripts have executed by looking at the
Average Active Sessions graphic again. The Database Throughput graphic tells you
that this time, the number of transactions per second was a bit higher than in the
previous lab for the same workload. Using a sequence was a bit better in this case.
6) After the workload finishes, use PL/SQL to create a new AWR snapshot.
$ ./create_snapshot.sh

PL/SQL procedure successfully completed.

$
7) Using Database Control, review the latest ADDM run. What are your conclusions?
a) On the Cluster Database Home page, click the Advisor Central link.
b) On the Advisor Central page, make sure that the Advisory Type field is set to All
Types, and that the Advisor Runs field is set to Last Run. Click Go.

c) In the Results table, select the latest ADDM run corresponding to Instance All.
Then click View Result. This takes you to the Automatic Database Diagnostic
Monitor (ADDM) page.
d) On the Automatic Database Diagnostic Monitor (ADDM) page, the ADDM
Performance Analysis table shows you the consolidation of ADDM reports from
all instances running in your cluster. This is your first entry point before drilling
down to specific instances. From there, investigate the Top SQL Statements,
Sequence Usage, and Unusual “Concurrency” Wait Event findings.
e) The Top SQL Statements should reveal an INSERT INTO T command using
sequence S as a possible problem to investigate.
f) The Sequence Usage finding reveals that you should use a larger cache size for
your hot sequences (an illustrative statement follows after step 8).
g) The Unusual “Concurrency” Wait Event finding asks you to investigate the cause
of high “row cache lock” waits. Refer to the Oracle Database Reference for the
description of this wait event.
h) Back on the Automatic Database Diagnostic Monitor (ADDM) page, you can now
drill down to each instance by using the links located in the Affected Instances
table. Click the link corresponding to the most affected instance (although all
should be equally affected).
8) On the corresponding Automatic Database Diagnostic Monitor (ADDM) instance page,
you should retrieve top findings similar to those you previously saw at the cluster
level.
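
Note: As mentioned in finding f), the generic fix for a hot sequence in RAC is to
enlarge its cache. The statement below is illustrative only; the cache value of 10000
is an assumption, not taken from the lab scripts (Practice 14-3 applies the actual
change through setupseq3.sh):
SQL> alter sequence jfv.s cache 10000;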
Practice 14-3: ADDM and RAC Part III
The goal of this lab is to show you how to manually discover performance issues by
using the Enterprise Manager performance pages as well as ADDM. This last part
generates the same workload as in the previous lab but uses a larger cache for
sequence S.
Note that all the necessary scripts for this lab are located in the
/home/oracle/labs/seq directory on your first cluster node.
1) Execute the setupseq3.sh script to create the necessary objects used for the rest
of this practice.
$ ./setupseq3.sh

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

User dropped.

Tablespace dropped.

Tablespace created.

User created.

Grant succeeded.

drop table s purge
*
ERROR at line 1:
ORA-00942: table or view does not exist

drop sequence s
*
ERROR at line 1:
ORA-02289: sequence does not exist

drop table t purge
*
ERROR at line 1:
ORA-00942: table or view does not exist

Table created.

Index created.

Sequence created.

PL/SQL procedure successfully completed.

2) Using Database Control, and connected as the SYS user, navigate to the Performance
page of your Cluster Database.
a) Click the Performance tab from the Cluster Database Home page.
b) On the Cluster Database Performance page, make sure that Real Time: 15
Seconds Refresh is selected from the View Data drop-down list.
3) Use PL/SQL to create a new AWR snapshot.
$ ./create_snapshot.sh

PL/SQL procedure successfully completed.

$
4) Execute the startseq2.sh script to generate the same workload on both instances
of your cluster as for the previous lab. Do not wait, and proceed with the next step.
$ ./startseq2.sh
$ old 3: insert into t values(s.nextval,'&1');
new 3: insert into t values(s.nextval,'orcl3');
old 3: insert into t values(s.nextval,'&1');
new 3: insert into t values(s.nextval,'orcl2');
old 3: insert into t values(s.nextval,'&1');
new 3: insert into t values(s.nextval,'orcl1');

… Do not wait after this point and go to the next step.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

5) While the scripts are executing, look at the Sessions: Waiting and Working graphic.
What are your conclusions?
a) This time, looking at the Sessions: Waiting and Working graphic, it is clear that
there are no significant waits. The sequence has a big enough cache value to avoid
the most significant waits.
b) Click Cluster Database in the locator link at the top of the page to return to the
Cluster Database Performance page.
c) On the Performance page, make sure that the View Data field is set to Real
Time:15 Seconds Refresh. After all the scripts have finished their execution, the
Average Active Sessions graph will clearly show that there are no significant
waits on your cluster. You should also notice that the transaction rate is now around
2400 per second.
6) After the workload finishes, use PL/SQL to create a new AWR snapshot.
$ ./create_snapshot.sh

PL/SQL procedure successfully completed.

$
7) Using Database Control, review the latest ADDM run. What are your conclusions?
a) On the Cluster Database Home page, click the Advisor Central link.
b) On the Advisor Central page, make sure that the Advisory Type field is set to All
Types, and that the Advisor Runs field is set to Last Run. Click Go.
c) In the Results table, select the latest ADDM run corresponding to Instance All.
Then click View Result. This takes you to the Automatic Database Diagnostic
Monitor (ADDM) page.
d) On the Automatic Database Diagnostic Monitor (ADDM) page, the ADDM
Performance Analysis table shows you the consolidation of ADDM reports from
all instances running in your cluster. This is your first entry point before drilling
down to specific instances. From there, investigate the Buffer Busy – Hot Block,
Buffer Busy – Hot Objects, and Global Cache Busy findings. You should no
longer see the Sequence Usage finding, nor specific instances impacted.
e) The Buffer Busy – Hot Block finding should not reveal any particular object.
f) The Buffer Busy – Hot Objects finding should not reveal any particular object.
g) The Global Cache Busy finding should not reveal anything special.

Practices for Lesson 15
In these practices, you will create, manage, and monitor services.

Practice 15-1: Working with Services
In this practice, you will use Enterprise Manager to create one service called PROD1.
You then observe what happens to your service when you terminate one of the instances
on which it is running.

1) Use Enterprise Manager to create the PROD1 service. Make sure that you define your
first and second instances (ORCL1 and ORCL2) as preferred, and the third instance
(ORCL3) as available.
a) Enter your EM address in a browser. It will look something like this:
https://your_host_name:1158/em
b) Log in using SYS credentials as SYSDBA.
c) Click the Availability folder tab.
d) Click the Cluster Managed Database Services link under the Services section.
e) On the Cluster Managed Database Services: Cluster and Database Login page,
provide the login credentials for the operating system user (oracle/oracle)
and the SYSDBA credentials for the database (sys/oracle_4U) and click
Continue.
f) Click the Create Service button on the Cluster Managed Database Services page.
g) On the Create Service page, enter PROD1 for the service name. Verify that the
“Start service after creation” check box is selected, and select the “Update local
naming” check box. Under the High Availability Configuration section, set the
service policy for orcl1 and orcl2 to Preferred and orcl3 to Available.
Leave the remaining fields with their default values and click the OK button.
h) After the service has been created, you will be returned to the Cluster Managed
Database Services page. Check the Running Instances column for PROD1; it
should indicate that the service is running on orcl1 and orcl2. Select PROD1
from the Services list and click the Test Connection button. It should test
successfully. Click the Show All TNS Strings button and inspect the new entry in
the tnsnames.ora file. It should look like this:
PROD1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)
(HOST = cluster01-scan.cluster01.example.com)
(PORT = 1521))(LOAD_BALANCE = YES)(CONNECT_DATA =
(SERVER = DEDICATED)(SERVICE_NAME = PROD1)))
i) Click the Return button.
2) Use the srvctl command to check the status of the new service.
$ srvctl status service -d ORCL -s PROD1
Service PROD1 is running on instance(s) orcl1,orcl2

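Note: To also see which instances are preferred and which are available for the
service, you can display the service configuration with the standard srvctl syntax
below; the output is abbreviated to the lines you should expect:
$ srvctl config service -d ORCL -s PROD1
...
Preferred instances: orcl1,orcl2
Available instances: orcl3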
3) Use the crsctl command to view server pool relationships with the new service.

$ /u01/app/11.2.0/grid/bin/crsctl status serverpool -p

NAME=Free
IMPORTANCE=0
MIN_SIZE=0
MAX_SIZE=-1
SERVER_NAMES=
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r-x

NAME=Generic
IMPORTANCE=0
MIN_SIZE=0
MAX_SIZE=-1
SERVER_NAMES=host01 host02 host03
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:grid:r-x,pgrp:oinstall:r-x,other::r-x

NAME=ora.orcl
IMPORTANCE=1
MIN_SIZE=0
MAX_SIZE=-1
SERVER_NAMES=host01 host02 host03
PARENT_POOLS=Generic
EXCLUSIVE_POOLS=
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--

NAME=ora.orcl_PROD1
IMPORTANCE=0
MIN_SIZE=0
MAX_SIZE=-1
SERVER_NAMES=host01 host02 host03
PARENT_POOLS=ora.orcl
EXCLUSIVE_POOLS=
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--

4) Connect to the service and look at the current value of the SERVICE_NAMES
initialization parameter, and verify that it is set correctly. Query V$INSTANCE and
determine what instance you are connected to.
$ /u01/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus
sys/oracle_4U@PROD1 as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Sep 4 11:10:29 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter service

NAME TYPE VALUE
---------------------- ----------- --------------
service_names string PROD1

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl1

SQL> exit
5) From a terminal session as the oracle user, crash the instance on the first node.
Find and kill the ora_pmon_orcl1 process. Use the pkill -9 -f pmon_orcl1
command to crash the database instance. The orcl1 instance will crash and the
clusterware services will restart it very quickly.
$ pkill -9 -f pmon_orcl1
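
Note: If you want to watch Oracle Clusterware restart the instance, you can poll the
database resource state with crsctl. This is a sketch: the resource name ora.orcl.db
follows the usual ora.<db_unique_name>.db convention and is assumed here:
$ /u01/app/11.2.0/grid/bin/crsctl status resource ora.orcl.db -t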
6) Use the srvctl command to check the status of the PROD1 service.
$ srvctl status service -d ORCL -s PROD1
Service PROD1 is running on instance(s) orcl2,orcl3

7) Return to Enterprise Manager. Click the Availability folder tab. In the instance list
under the Instances section, you should be able to verify that the first instance is
indeed down.
8) Click the Cluster Managed Database Services link. On the Cluster Managed Database
Service page, you can see orcl2 and orcl3 in the running instances column for
PROD1. Select Manage from the Actions drop-down list and click Go.
9) Under the instances section, find the host that orcl3 is running on and select the
option in the Select column for that host. Click the Relocate button.

10) On the Relocate Service from Instance: orcl3 page, select the host name that
orcl1 is running on and click OK.
11) You should see a message indicating that the service was relocated successfully.
Under the Instances section of the page, you should see the service running on
orcl1 and orcl2 and stopped on orcl3.
Note: The instance status shown in EM may still show that the instance is down.
Click the browser Refresh button to see the actual status of the instance.

Practice 15-2: Monitoring Services
In this practice, you will use Database Control to determine the amount of resources used
by sessions executing under a particular service.

1) As the oracle user, open a terminal session to your first node. Execute the
/home/oracle/labs/less_15/createuser.sh script. This script creates a new
user called FOO identified by the password foo. The default tablespace of this user
is USERS, and its temporary tablespace is TEMP. This new user has the CONNECT,
RESOURCE, and DBA roles.
$ cat /home/oracle/labs/less_15/createuser.sh

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=orcl1
/u01/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus -s /NOLOG <<EOF
connect / as sysdba
drop user FOO cascade;
create user FOO identified by foo default tablespace users
temporary tablespace temp;
grant connect, resource, dba to FOO;
EOF

$ /home/oracle/labs/less_15/createuser.sh
drop user FOO cascade
*
ERROR at line 1:
ORA-01918: user 'FOO' does not exist

User created.

Grant succeeded.

2) Using SQL*Plus, connect to PROD1 as FOO. When connected, determine the
instance on which your session is currently running. Then execute the following
query:
select count(*) from dba_objects,dba_objects,dba_objects
Do not wait; instead, proceed with the next step.
$ sqlplus foo/foo@PROD1
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl1

SQL> select count(*) from dba_objects,dba_objects,dba_objects;

3) After a few moments, go to the Database Control Top Consumers page from the
Cluster Database page. Connect as user SYS. Then check that PROD1 is using more
and more resources.
a. From the Cluster Database Home page, click the Performance tab.
b. On the Performance page, click the Top Consumers link in the Additional
Monitoring Links section.
c. This takes you to the Top Consumers page with the Overview tab selected.
d. On the Overview page, you can see the Top Services pie chart.
e. Make sure that the View Data drop-down list is set to Real Time: 15 Second
Refresh. Wait for the page to be refreshed a couple of times. Little by little,
PROD1 is consuming almost all the resources (up to 100%).
f. To have more details, click the Top Services tab on the Top Consumers page.
g. Make sure that the View Data drop-down list is set to Real Time: 15 Second
Refresh, and the View drop-down list is set to Active Services. You can click the
“+” icon on the left of the PROD1 link to expand the service. This shows you the
list of instances currently running the service. You can also click the PROD1 link
itself to look at the detailed Statistics of the corresponding service.
4) In another terminal window as the oracle user, check statistics on your service with
gv$service_stats from a SQL*Plus session connected as SYSDBA.
$ /u01/app/oracle/product/11.2.0/dbhome_1/bin/sqlplus
sys/oracle_4U@orcl as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Sep 4 16:16:48 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Real Application Clusters, Automatic
Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select stat_name, sum(value) from gv$service_stats where
service_name = 'PROD1' group by stat_name;

STAT_NAME SUM(VALUE)
--------------------------------------------------- ----------
user calls 43
DB CPU 1368469523
redo size 1564

db block changes 8
DB time 1473281835
user rollbacks 0
gc cr blocks received 2
gc cr block receive time 0
gc current blocks received 2
opened cursors cumulative 99
workarea executions - multipass 0

STAT_NAME SUM(VALUE)
-------------------------------------------------- -----------
session cursor cache hits 45
user I/O wait time 3540
parse count (total) 71
physical reads 4
gc current block receive time 0
workarea executions - optimal 22
concurrency wait time 17361
parse time elapsed 110704
physical writes 0
workarea executions - onepass 0
execute count 96

STAT_NAME SUM(VALUE)
--------------------------------------------------- ----------
session logical reads 3825
cluster wait time 2161
application wait time 20622
logons cumulative 2
sql execute elapsed time 1473090354
user commits 0

28 rows selected.

SQL>

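Note: Aggregating GV$SERVICE_STATS as above gives cumulative figures. For a rate-style
view closer to what Enterprise Manager charts (and to the Service Response Time metric
used in the next practice), the documented GV$SERVICEMETRIC view exposes per-interval
response times; the column choice below is an illustrative assumption:
SQL> select inst_id, elapsedpercall, cpupercall
     from gv$servicemetric
     where service_name = 'PROD1';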
Practice 15-3: Services and Alert Thresholds
In this practice, you will set thresholds for service PROD1, and use Database Control to
monitor the response time metric for this service. You will set the Service Response
Time warning threshold at 40 seconds and the critical threshold at 100 seconds.
Preferred instances should be orcl1 and orcl2, and orcl3 should be available.

1) Set alert thresholds for your service PROD1 using Database Control.
a) Log in as sys with SYSDBA privileges.
b) On the Database Home page, click the Availability folder tab. Then click the
Cluster Managed Service link.
c) Select PROD1 from the Services list, select Edit Properties from the Actions
drop-down list, and click Go.
d) Under the High Availability section, set the Service Policy for orcl1 to Preferred
and Available for orcl2 and orcl3. Then click OK.
e) Return to the Cluster Database Home page, click the link corresponding to your
first instance in the Instances table. This is the instance currently running PROD1.
f) On the Database Instance page, click Metric and Policy settings in the Related
Links section at the bottom of the page.
g) On the Metric and Policy Settings page, select All metrics from the View
drop-down list.
h) Scroll down the Metric and Policy Settings page until you find the Service
Response Time (per user call) (microseconds) metric.
i) On the same line, click the corresponding multi-pens icon in the last column (Edit
column).
j) On the Edit Advanced Settings: Service Response Time (per user call)
(microseconds) page, click Add.
k) The Monitored Objects table should now show two entries.
l) Enter PROD1 in the Service Name field, 40000000 in the Warning Threshold
field, and 100000000 in the Critical Threshold field. Make sure that the
corresponding line is selected, and click Continue.
m) On the Metric and Policy Settings page, you should see an Information warning
explaining that your settings have been modified but not saved. Click OK to save
the new settings.
n) On the Confirmation page, you can see an Update succeeded message. Click OK.
o) This takes you back to the Database Instance page.
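
Note: The same thresholds can also be set without the EM pages, through the documented
DBMS_SERVER_ALERT API. The call below is a sketch using the microsecond values from
step l); the observation period and occurrence count are illustrative assumptions:
SQL> begin
       dbms_server_alert.set_threshold(
         metrics_id              => dbms_server_alert.elapsed_time_per_call,
         warning_operator        => dbms_server_alert.operator_ge,
         warning_value           => '40000000',
         critical_operator       => dbms_server_alert.operator_ge,
         critical_value          => '100000000',
         observation_period      => 1,
         consecutive_occurrences => 1,
         instance_name           => 'orcl1',
         object_type             => dbms_server_alert.object_type_service,
         object_name             => 'PROD1');
     end;
     /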
2) Use Database Control to view the Service Response Time Metric Value graphic for
PROD1.

a) From the Database Instance page, click All Metrics in the Related Links section at
the bottom of the page.
b) On the All Metrics page, expand the Database Services link. On the All Metrics
page, click the Service Response Time (per user call) (microseconds) link.
c) On the Service Response Time (per user call) (microseconds) page, click the
PROD1 link in the Service Name column.
d) On the Service Response Time (per user call) (microseconds): Service Name
PROD1: Last 24 hours page, select Real Time: 30 Second Refresh from the View
Data drop-down list.
e) You should now see the Service Response Time (per user call) (microseconds):
Service Name PROD1 page with your warning and critical thresholds set ble
correctly. fe r a
an s
3) Execute the serv_wkload.sh script to generate workload on your database.
Looking at the Service Response Time graphic for PROD1, what do you observe?
$ cd /home/oracle/labs/less_15
$ ./serv_wkload.sh
a) Still looking at the Service Response Time (per user call) (microseconds): Service
Name PROD1 page on your first session, you should see the graphic crossing the
warning threshold after a few minutes. This will trigger a warning alert soon after
the warning threshold is crossed.
b) You can see this alert propagated to your Database Instance Home page and your
Cluster Database Home page.
c) To go back to your Database Instance Home page, click the Database Instance
locator link on the Service Response Time page.
d) You should see the warning raised in the Alerts section of the Database Instance
page.
e) On the Database Instance page, click the Cluster Database locator link of the
page.
f) You should see the warning alert in the Problem Services line in the High
Availability section of the page. Clicking this link takes you to the Cluster Home
page. From there you can click the PROD1 link to directly go to the Cluster
Managed Database Services: PROD1 page after you clicked Continue on the
Login page. The PROD1 page shows you the alert with its details.
g) Soon after the script finishes its execution, you should not see the corresponding
alert on your Cluster Database Home page anymore. You can go to the Alert
History page on the first instance to look at the alert history for your services. You
can go to the Database Instance Home page using the locator links at the top of
any pages. From the Database Instance Home page, scroll down to the bottom of
the page, and click Alert History in the Related Links section.

4) Use Database Control to remove the thresholds that you specified during this practice.
a) From the Cluster Database Home page, click the link corresponding to the first
instance of your cluster in the Instances section at the bottom of the page.
b) On the Database Instance page, scroll down to the bottom of the page. Click
Metric and Policy Settings in the Related Links section.
c) On the Metric and Policy Settings page, scroll down the page until you see
PROD1 in the Metric Thresholds table.
d) On the line corresponding to the PROD1 entry, remove both the Warning
Threshold and Critical Threshold values.
e) Click OK.
f) On the Confirmation page, you should see an Update succeeded message. Click
OK.
Oracle RAC One Node

Oracle RAC One Node

• Allows you to standardize Oracle database deployments across the enterprise
• Allows you to consolidate many databases into a single cluster with minimal overhead
• Supports live migration of instances across servers
• Simplifies rolling patches for single instance databases
• Employs built-in cluster failover for high availability
• Is supported on all platforms where Oracle Real Application Clusters is certified
• Complements virtual and physical servers
[Slide graphic: application servers connecting to a cluster hosting Front Office, Back
Office, DW, and RAC One Node databases, with free capacity]
Oracle RAC One Node
Oracle RAC One Node allows you to standardize all Oracle Database deployments across the
enterprise. Oracle RAC One Node is supported on all platforms where Oracle Real Application
Clusters (RAC) is certified. Many databases can be consolidated into a single cluster with
minimal overhead while providing the high availability benefits of failover protection, online
rolling patch application, as well as rolling upgrades for the operating system and Oracle
Clusterware.
Oracle Clusterware provides failover protection to Oracle RAC One Node. If the node fails,
Oracle Clusterware will automatically restart the Oracle RAC One Node instance on another
server in the cluster.

The Omotion Utility

• Omotion migrates an Oracle RAC One Node instance to
another cluster node without application down time.
• Once the instance has been migrated, the node can be
patched or upgraded.
• When the maintenance is complete, the Oracle RAC One
Node instance can be migrated back to its original home.
• Omotion provides the same load balancing benefits of virtual
machines (VMs) by allowing a migration of a database from
a busy server to a server with spare capacity.
The Omotion Utility
The Omotion utility is provided to migrate an Oracle RAC One Node instance to another node in
the cluster without down time to the application. Once the instance has been migrated, the node
can be patched or upgraded. Once the maintenance is complete, the Oracle RAC One Node can
be migrated back to its original home.
The Omotion feature provides the same load balancing benefits of VMs by allowing a migration
of a database from a busy server to a server with spare capacity. Omotion leverages the ability of
Oracle Real Application Clusters to simultaneously run multiple instances servicing a single
database. In the figure above, the DB2 RAC One Node database on Server A is migrated to
Server B. Oracle RAC One Node starts up a second DB2 instance on server B, and for a short
period of time runs in an active-active configuration. As connections complete their transactions
on server A, they are migrated to the instance on server B. Once all the connections have
migrated, the instance on server A is shut down and the migration is complete.
Omotion does not require quiescing the environment even when the system is running at peak
capacity. VMs generally require the environment to be quiesced in order for medium to heavy
database workloads to be moved from one server to another. This requirement does not apply for
light workloads or demos.

Oracle RAC One Node and OVM

• Using Oracle RAC One Node with Oracle VM increases
the scalability and high availability benefits of Oracle VM.
• If a VM is sized too small, you can migrate the Oracle RAC
One instance to another Oracle VM node in your cluster
and resize the Oracle VM.
• Usually, migrating VMs requires quiescing the source VM,
mirroring the memory, and then switching to the migrated VM.
• Using Omotion, highly loaded instances can be migrated,
as the work is split between the two servers during the migration.
• Omotion provides the ability to move between servers of
different processor generations.
Oracle RAC One Node and OVM
Oracle VM is a free server virtualization and management solution that makes enterprise
applications easier to deploy, manage, and support. Using Oracle RAC One Node with Oracle
VM increases the benefit of Oracle VM with the high availability and scalability of Oracle RAC.
If your VM is sized too small, then you can migrate the Oracle RAC One instance to another
Oracle VM node in your cluster using Omotion, and then resize the Oracle VM. When you move
the instance back to the newly resized Oracle VM node, you can dynamically increase any limits
programmed with Resource Manager Instance Caging.
When migrating a VM, the VM must mirror its complete memory state across a network to the
target host, recreating the state of that machine. If the database in question is highly loaded and
is actively changing blocks in its database cache, it is very difficult for the memory mirroring
function to keep up with the rate of changes. It becomes likely that the only way to successfully
mirror the memory is to quiesce the source VM, mirror the memory, and then switch to the
migrated VM. With Omotion, highly loaded instances pose no problem, as the work is actually
split between two servers during the migration. Oracle RAC One Node can, therefore, easily
migrate even heavy workloads to another server. VM migration normally requires that the
processors be identical. Both processors must have exactly the same instruction set. Omotion
provides the ability to move between servers of different processor generations. Omotion
supports migration to new processors, or even between different vendors like Intel and AMD.
Cloning Oracle Clusterware

Objectives

After completing this lesson, you should be able to:


• Describe the cloning procedure
• Prepare the software for cloning
• Describe the cloning script variables
• Clone Oracle Clusterware to create a new cluster
• Clone Oracle Clusterware to extend an existing cluster
• Examine cloning log files
What Is Cloning?

Cloning is the process of copying an existing Oracle Clusterware installation to a different location. It:
• Requires a successful installation as a baseline
• Can be used to create new clusters
• Cannot be used to remove nodes from the cluster
• Does not perform the operating system prerequisites to an installation
• Is useful to build many clusters in an organization
What Is Cloning?
Cloning is a process that allows the copying of an existing Oracle Clusterware installation to a different location and then updating the copied installation to work in the new environment. The
cloned copy can be used to create a new cluster from a successfully installed cluster. To add or
delete Oracle Clusterware from nodes in the cluster, use the addNode.sh and rootcrs.pl
scripts. The cloning procedure cannot be used to remove a node from an existing cluster.
The cloning procedure is responsible for the work that would have been done by the Oracle
Universal Installer (OUI) utility. It does not automate the prerequisite work that must be done on
each node before installing the Oracle software.
This technique is very useful if a large number of clusters need to be deployed in an
organization. If only one or two clusters are being deployed, you should probably use the
traditional installation program to perform the installations.
Note: The Oracle Enterprise Manager administrative tool with the Provisioning Pack feature
installed has automated wizards to assist with cloning exercises.

Benefits of Cloning Oracle Clusterware

The following are some of the benefits of cloning Oracle Clusterware:
• Can be completed in silent mode from a Secure Shell (SSH) terminal session
• Contains all patches applied to the original installation
• Can be done very quickly
• Is a guaranteed method of repeating the same installations on multiple clusters
Benefits of Cloning Oracle Clusterware
Using the cloning procedure presented in this lesson has several benefits compared to the traditional Oracle Universal Installer (OUI) installation method. The OUI utility is a graphical
program and must be executed from a graphical session. Cloning can be completed in silent
mode from a command-line Secure Shell (SSH) terminal session without the need to load a
graphical windows system. If the OUI program were to be used to install the software from the
original installation media, all patches that have been applied since the first installation would
have to be reapplied. The clone technique presented in this lesson includes all successfully
applied patches, and can be performed very quickly to a large number of nodes. When the OUI
utility performs the copying of files to remote servers, the job is executed serially to one node at
a time. With cloning, simultaneous transfers to multiple nodes can be achieved. Finally, the
cloning method is a guaranteed way of repeating the same installation on multiple clusters to
help avoid human error.
The cloned installation acts the same as the source installation. It can be patched in the future
and removed from a cluster if needed using ordinary tools and utilities. Cloning is an excellent
way to instantiate test clusters or a development cluster from a successful base installation.

Creating a Cluster by Cloning
Oracle Clusterware

[Figure: Cluster A, Node 1, with Clusterware installed, is the source; the Clusterware home is cloned to Cluster B, Nodes 1 and 2.]
Creating a Cluster by Cloning Oracle Clusterware
This example shows the result when you successfully clone an installed Oracle Clusterware environment to create a cluster. The environment on Node 1 in Cluster A is used as the source,
and Cluster B, Nodes 1 and 2 are the destination. The clusterware home is copied from Cluster
A, Node 1, to Cluster B, Nodes 1 and 2. When completed, there will be two separate clusters.
The OCR and Voting disks are not shared between the two clusters after you successfully create
a cluster from a clone.

Preparing the Oracle Clusterware
Home for Cloning
The following procedure is used to prepare the Oracle
Clusterware home for cloning:
1. Install Oracle Clusterware on the first machine.
A. Use the Oracle Universal Installer (OUI) GUI interactively.
B. Install patches that are required (for example, 11.1.0.n).
C. Apply one-off patches, if necessary.
2. Shut down Oracle Clusterware.

# crsctl stop crs -wait

3. Make a copy of the Oracle Clusterware home.

# mkdir /stagecrs
# cp –prf /u01/app/11.2.0/grid /stagecrs
Preparing the Oracle Clusterware Home for Cloning
The cloning method requires that an existing, successful installation of Oracle Clusterware be already performed in your organization. Verify that all patch sets and one-off patches have been
applied before starting the clone procedure to minimize the amount of work that will have to be
performed for the cloning exercise.
To begin the cloning process, start by performing a shutdown of Oracle Clusterware on one of
the nodes in the existing cluster with the crsctl stop crs –wait command. The other
nodes in the existing cluster can remain active. After the prompt is returned from the shutdown
command, make a copy of the existing Oracle Clusterware installation into a temporary staging
area of your choosing. The disk space requirements in the temporary staging area will be equal
to the current size of the existing Oracle Clusterware installation.
The copy of the Oracle Clusterware files will not include files that are outside the main
installation directory such as /etc/oraInst.loc and the /etc/oracle directory. These
files will be created later by running various root scripts.

Preparing the Oracle Clusterware
Home for Cloning
4. Remove files that pertain only to the source node.
# cd /stagecrs/grid
# rm -rf /stagecrs/grid/log/<hostname>
# rm -rf root.sh*
# rm -rf gpnp/*
# find . -name '*.ouibak' -exec rm {} \;
# find . -name '*.ouibak.1' -exec rm {} \;
# rm –rf \
inventory/ContentsXML/oraclehomeproperties.xml
# cd ./cfgtoollogs
# find . -type f -exec rm -f {} \;

5. Create an archive of the source.

# cd /stagecrs/grid
# tar –zcvf /tmp/crs111060.tgz .
Preparing the Oracle Clusterware Home for Cloning (continued)
Each installation of Oracle Clusterware on local storage devices contains files and directories that are applicable to only that node. These files and directories should be removed before
making an archive of the software for cloning. Perform the commands in the slide to remove
node-specific information from the copy that was made in step 3. Then create an archive of the
Oracle Clusterware copy. An example of using the Linux tar command with built-in gzip compression (the -z flag) to reduce the file size is shown in the slide. On Windows systems, use the
WinZip utility to create a zip file for the archive.
Note: Do not use the Java Archive (JAR) utility to copy and compress the Oracle Clusterware
home.

Preparing the Oracle Clusterware
Home for Cloning
6. Restart Oracle Clusterware.
# crsctl start crs

Cloning to Create a New Oracle
Clusterware Environment
The following procedure uses cloning to create a new cluster:
1. Prepare the new cluster nodes. (See the lesson titled “Grid
Infrastructure Installation” for details.)
A. Check system requirements.
B. Check network requirements.
C. Install the required operating system packages.
D. Set kernel parameters.
E. Create groups and users.
F. Create the required directories.
G. Configure installation owner shell limits.
H. Configure SSH and enable user equivalency.
I. Use the Cluster Verify Utility (cluvfy) to check prerequisites.
Cloning to Create a New Oracle Clusterware Environment
The archive of the source files that was made when preparing the Oracle Clusterware home for cloning will become the source files for an installation on the new cluster system and the
clone.pl script will be used to make it a working environment. These actions do not perform
the varied prerequisite tasks that must be performed on the operating system before an Oracle
Clusterware installation. The prerequisite tasks are the same as the ones presented in the lesson
titled “Grid Infrastructure Installation” in the topic about installing Oracle Clusterware.
One of the main benefits of cloning is rapid instantiation of a new cluster. The prerequisite tasks
are manual in nature, but it is suggested that a shell script be developed to automate these tasks.
One advantage of the Oracle Enterprise Linux operating system is that an oracle-
validated-1.x.x.rpm file can be obtained that will perform almost all the prerequisite
checks including the installation of missing packages with a single command.
This cloning procedure is used to build a new distinct cluster from an existing cluster. In step 1,
prepare the new cluster nodes by performing prerequisite setup and checks.
If cluvfy fails to execute because of user equivalence errors, the passphrase needs to be loaded
with the following commands before executing cluvfy:
exec /usr/bin/ssh-agent $SHELL
ssh-add
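Because the prerequisite tasks are repeated for every new cluster, a minimal sketch of such a wrapper script follows. The node names (host01, host02) and the specific checks shown are illustrative assumptions only; adapt them to your environment:

#!/bin/sh
# Hypothetical prerequisite wrapper; node names are placeholders.
NODES="host01 host02"
for NODE in $NODES; do
  echo "### Checking ${NODE} ###"
  # Spot-check a few kernel parameters (illustrative subset).
  ssh root@${NODE} 'sysctl -n kernel.shmmax fs.file-max'
  # Confirm that the required OS group and software owner exist.
  ssh root@${NODE} 'getent group oinstall && id grid'
done
# Let cluvfy perform the full prerequisite validation across all nodes.
cluvfy stage -pre crsinst -n host01,host02 -verbose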
Cloning to Create a New Oracle
Clusterware Environment
2. Deploy Oracle Clusterware on each of the destination
nodes.
A. Extract the tar file created earlier.
# mkdir –p /u01/app/11.2.0/grid
# cd /u01/app/11.2.0
# tar –zxvf /tmp/crs111060.tgz

B. Change the ownership of files, and create Oracle Inventory.

# chown –R crs:oinstall /u01/app/11.2.0/grid
# mkdir –p /u01/app/oraInventory
# chown grid:oinstall /u01/app/oraInventory

C. Remove any network files from /u01/app/11.2.0/grid/network/admin.

$ rm /u01/app/11.2.0/grid/network/admin/*
Cloning to Create a New Oracle Clusterware Environment (continued)
Step 2 is the deployment and extraction of the source archive to the new cluster nodes. If a shared Oracle Cluster home on a Cluster File System (CFS) is not being utilized, extract the
source archive to each node’s local file system. It is possible to change the operating system
owner to a different one from that of the source with recursive chown commands as illustrated
in the slide. If other Oracle products have been previously installed on the new nodes, the
Central Oracle Inventory directory may already exist. It is possible for this directory to be owned
by a different user than the Oracle Grid Infrastructure user; however, both should belong to the
same primary group oinstall. Execute the preupdate.sh script on each target node,
logged in as root. This script will change the ownership of the CRS home and root-owned
files to the Oracle CRS user.
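As a sketch, the invocation on each target node, as root, looks like the following. The flags mirror the extend-cluster example shown later in this appendix; the grid home path and crs user are assumptions based on this lesson, so adjust both to your installation.

# /u01/app/11.2.0/grid/install/preupdate.sh \
  -crshome /u01/app/11.2.0/grid -crsuser crs -noshutdown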

The clone.pl Script

Cloning to a new cluster and cloning to extend an existing cluster both use a PERL script. The clone.pl script is used, which:
• Can be used on the command line
• Can be contained in a shell script
• Accepts many parameters as input
• Is invoked by the PERL interpreter

# perl <CRS_home>/clone/bin/clone.pl $E01 $E02 $E03 $E04 $C01 $C02
The clone.pl Script
The PERL-based clone.pl script is used in place of the graphical OUI utility to perform the installation on the new nodes so that they may participate in the existing cluster or become valid
nodes in a new cluster. The script can be executed directly on the command line or at a DOS
command prompt for Windows platforms. The clone.pl script accepts several parameters as
input, typed directly on the command line. Because the clone.pl script is sensitive to the
parameters being passed to it, including the use of braces, single quotation marks, and double
quotation marks, it is recommended that a shell script be created to execute the PERL-based
clone.pl script to input the arguments. This will be easier to rework if there is a syntax error
generated. There are a total of seven arguments that can be passed as parameters to the
clone.pl script. Four of them define environment variables, and the remaining three supply
processing options. If your platform does not include a PERL interpreter, you can download one
at:
http://www.perl.org

The clone.pl Environment Variables

The clone.pl script accepts four environment variables as


input. They are as follows:

Symbol  Variable            Description
E01     ORACLE_BASE         The location of the Oracle base directory
E02     ORACLE_HOME         The location of the Oracle Grid Infrastructure home. This directory location must exist and must be owned by the Oracle operating system group: oinstall.
E03     ORACLE_HOME_NAME    The name of the Oracle Grid Infrastructure home
E04     INVENTORY_LOCATION  The location of the Oracle Inventory
The clone.pl Environment Variables
The clone.pl script accepts four environment variables as command-line arguments providing input. These variables would correlate to some of the questions presented by the OUI
utility during an installation. Each variable is associated with a symbol that is used for reference
purposes for developing a shell script. The choice of symbol is arbitrary. Each variable is case-
sensitive. The variables are as follows:
• ORACLE_BASE: The location of the Oracle Base directory. A suggested value is
/u01/app/grid. The ORACLE_BASE value should be unique for each software owner.
• ORACLE_HOME: The location of the Oracle Grid Infrastructure home. This directory
location must exist and be owned by the Oracle Grid Infrastructure software owner and the
Oracle Inventory group, typically grid:oinstall. A suggested value is
/u01/app/<version>/grid.
• ORACLE_HOME_NAME: The name of the Oracle Grid Infrastructure home. This is stored
in the Oracle inventory and defaults to the name Ora11g_gridinfrahome1 when
performing an installation with OUI. Any name can be selected, but it should be unique in
the organization.
• INVENTORY_LOCATION: The location of the Oracle Inventory. This directory location
must exist and must initially be owned by the Oracle operating system group: oinstall. A
typical location is /u01/app/oraInventory.
The clone.pl Command Options

The clone.pl script accepts two required command options


as input. They are as follows:
#    Variable       Data Type  Description
C01  CLUSTER_NODES  String     The list of short node names for the nodes in the cluster
C02  LOCAL_NODE     String     The short name of the local node
The clone.pl Command Options
The clone.pl script accepts two command options as input. The command options are case-sensitive and are as follows:
• CLUSTER_NODES: The list of short node names for the nodes in the cluster
• LOCAL_NODE: The short name of the local node

Cloning to Create a New Oracle
Clusterware Environment
3. Create a shell script to invoke clone.pl supplying input.
#!/bin/sh
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/11.2.0/grid
THIS_NODE=`hostname -s`

E01=ORACLE_BASE=${ORACLE_BASE}
E02=ORACLE_HOME=${GRID_HOME}
E03=ORACLE_HOME_NAME=OraGridHome1
E04=INVENTORY_LOCATION=${ORACLE_BASE}/oraInventory

#C00="-O'-debug'"
C01="-O'\"CLUSTER_NODES={node1,node2}\"'"   # Syntax per bug 8718041
C02="-O'\"LOCAL_NODE=${THIS_NODE}\"'"

perl ${GRID_HOME}/clone/bin/clone.pl -silent $E01 $E02 $E03 $E04 $C01 $C02

Cloning to Create a New Oracle Clusterware Environment (continued)
The clone.pl script requires you to provide many setup values to the script when it is executed. You may enter the values interactively on the command line or create a script to
supply the input values. By creating a script, you will have the ability to modify it and execute
the script a second time if errors exist. The setup values to the clone.pl script are case-
sensitive and sensitive to the use of braces, single quotation marks, and double quotation marks.
For step 3, create a shell script that invokes clone.pl supplying command-line input
variables. All the variables should appear on a single line.

Cloning to Create a New Oracle
Clusterware Environment
4. Run the script created in Step 3 on each node.
$ /tmp/my-clone-script.sh

5. Prepare the crsconfig_params file.

Cloning to Create a New Oracle Clusterware Environment (continued)
Step 4: Run the script you created in Step 3 as the operating system user that installed Oracle Clusterware. If you do not have a shared Oracle grid infrastructure home, run this script on each
node. The clone.pl command instantiates the crsconfig_params file in the next step.
Step 5: Prepare the /u01/app/11.2.0/grid/install/crsconfig_params file on all
of the nodes in the cluster. You can copy the file from one node to all of the other nodes. More
than 50 parameters are named in this file.
Note: In 11.2.0.1, the clone.pl script does not propagate the crsconfig_params file.
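A minimal sketch of that copy, assuming clone.pl was run on the local node and host02 (a placeholder name) is the only other node in the new cluster:

$ for NODE in host02; do
    scp /u01/app/11.2.0/grid/install/crsconfig_params \
        ${NODE}:/u01/app/11.2.0/grid/install/crsconfig_params
  done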

Cloning to Create a New Oracle
Clusterware Environment
6. Run the orainstRoot.sh script on each node.
# /u01/app/oraInventory/orainstRoot.sh

7. Run the <CRS_home>/root.sh and rootcrs.pl scripts


on each node.
# /u01/app/11.2.0/grid/root.sh
# /u01/app/11.2.0/grid/perl/bin/perl \
  -I/u01/app/11.2.0/grid/perl/lib \
  -I/u01/app/11.2.0/grid/crs/install \
  /u01/app/11.2.0/grid/crs/install/rootcrs.pl
Cloning to Create a New Oracle Clusterware Environment (continued)
When the shell script containing clone.pl successfully completes, it is necessary to execute the orainstRoot.sh script for step 6 and the root.sh script for step 7 on each node, both
as the root user.
Note
Step 4 must be run to completion before you start Step 5. Similarly, Step 5 must be run to
completion before you start Step 6.
You can perform Step 4, Step 5, and Step 6 simultaneously on different nodes. Step 6 must be
complete on all nodes before you can run Step 7.
Step 7: Ensure that the root.sh and rootcrs.pl scripts have completed on the first node
before running them on the second node and subsequent nodes.
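A minimal sketch of driving steps 6 and 7 in that order from a single terminal, assuming root SSH access and the placeholder node names host01 and host02:

# Each ssh call returns only when the remote script has completed,
# which enforces the required ordering.
for NODE in host01 host02; do
  ssh root@${NODE} /u01/app/oraInventory/orainstRoot.sh
done
# root.sh must finish on the first node before the next node runs it,
# so this loop is deliberately sequential.
for NODE in host01 host02; do
  ssh root@${NODE} /u01/app/11.2.0/grid/root.sh
done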

Cloning to Create a New Oracle
Clusterware Environment
8. Run the configuration assistants on each new node.
$ /u01/app/11.2.0/grid/bin/netca \
  /orahome /u01/app/11.2.0/grid \
  /orahnam OraGridHome1 /instype typical \
  /inscomp client,oraclenet,javavm,server \
  /insprtcl tcp /cfg local \
  /authadp NO_VALUE \
  /responseFile /u01/app/11.2.0/grid/network/install/netca_typ.rsp \
  /silent

9. Run the cluvfy utility to validate the installation.

$ cluvfy stage –post crsinst –n all -verbose
Cloning to Create a New Oracle Clusterware Environment (continued)
Step 8: You should execute the command shown in the slide on the first node.
If you are using ASM, then run the following command:
$ /u01/app/11.2.0/grid/bin/asmca -silent –postConfigureASM \
-sysAsmPassword oracle -asmsnmpPassword oracle
If you plan to run a pre-11g Release 2 (11.2) database on this cluster, then you should run
oifcfg as described in the Oracle Database 11g Release 2 (11.2) documentation.
To use IPMI, use the crsctl command to configure IPMI on each node:
# crsctl set css ipmiaddr ip_address
At this point, Oracle Clusterware is fully installed and configured to operate in the new cluster.
The cluvfy utility can be invoked in the post-CRS installation stage to verify the installation
with the following syntax:
cluvfy stage –post crsinst -n all –verbose
If cluvfy fails to execute because of user equivalence errors, the passphrase needs to be loaded
with the following commands before executing cluvfy again:
exec /usr/bin/ssh-agent $SHELL
ssh-add
Log Files Generated During Cloning

The following log files are generated during cloning to assist


with troubleshooting failures.
• Detailed log of the actions that occur during the OUI part of the cloning:
/u01/app/oraInventory/logs/cloneActions<timestamp>.log

• Information about errors that occur when OUI is running:
/u01/app/oraInventory/logs/oraInstall<timestamp>.err

• Other miscellaneous messages generated by OUI:
/u01/app/oraInventory/logs/oraInstall<timestamp>.out
Log Files Generated During Cloning
Several log files are generated when the clone.pl script is executed and are useful in diagnosing any errors that may occur. For a detailed log of the actions that occur during the OUI part of the cloning, examine the log file:
<Central_Inventory>/logs/cloneActions<timestamp>.log
If errors occurred during the OUI portion of the cloning process, examine the log file:
<Central_Inventory>/logs/oraInstall<timestamp>.err
Other miscellaneous messages generated by OUI can be found in the output file:
<Central_Inventory>/logs/oraInstall<timestamp>.out
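For example, a quick way to scan the most recent clone log for reported problems (a sketch; the inventory location matches the slide examples):

$ LOG_DIR=/u01/app/oraInventory/logs
$ LATEST=$(ls -t ${LOG_DIR}/cloneActions*.log | head -1)
$ grep -i -E 'error|fail' "${LATEST}"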

Log Files Generated During Cloning

The following log files are generated during cloning to assist with troubleshooting failures:
• Detailed log of the actions that occur before cloning as well as during cloning operations:
/u01/app/grid/clone/logs/clone-<timestamp>.log

• Information about errors that occur before cloning as well as during cloning operations:
/u01/app/grid/clone/logs/error-<timestamp>.log
Log Files Generated During Cloning (continued)
Several log files are generated when the clone.pl script is executed and are useful in diagnosing errors that may occur. For a detailed log of the actions that occur before cloning as
well as during the cloning process, examine the log file:
<CRS home>/clone/logs/clone-<timestamp>.log
For a detailed log of the errors that occur before cloning as well as during the cloning process,
examine the log file:
<CRS home>/clone/logs/error-<timestamp>.log

Cloning to Extend Oracle Clusterware
to More Nodes
The procedure is very similar to cloning to create new clusters.
1. Prepare the new cluster nodes. (See the lesson titled
“Oracle Clusterware Installation” for details. Suggest that
this be made into a shell script for reuse.)
A. Check system requirements.
B. Check network requirements.
C. Install the required operating system packages.
D. Set kernel parameters.
E. Create groups and users.
F. Create the required directories.
G. Configure installation owner shell limits.
H. Configure SSH and enable user equivalency.
I. Use the cluvfy utility to check prerequisites.
Cloning to Extend Oracle Clusterware to More Nodes
The cloning procedure using the clone.pl script can also be used to extend Oracle Clusterware to more nodes within the same cluster with steps similar to the procedure for
creating a new cluster. The first step shown here is identical to the first step performed when
cloning to create a new cluster. These steps differ depending on the operating system used.
When you configure Secure Shell (SSH) and enable user equivalency, remember that the
authorized_keys and known_hosts files exist on each node in the cluster. Therefore, it
will be necessary to update these files on the existing nodes with information about the new
nodes that Oracle Clusterware will be extended to.
Note: Not supported in 11.2.0.1
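A minimal sketch of that update, assuming grid is the software owner, host03 is the new node (a placeholder name), and a key pair already exists on the existing node:

$ cat ~/.ssh/id_rsa.pub | ssh grid@host03 'cat >> ~/.ssh/authorized_keys'
$ ssh-keyscan host03 >> ~/.ssh/known_hosts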

Cloning to Extend Oracle Clusterware
to More Nodes
2. Deploy Oracle Clusterware on the destination nodes.
A. Extract the tar file created earlier.
# mkdir –p /u01/app/crs
# cd /u01/app/crs
# tar –zxvf /tmp/crs111060.tgz

B. Change the ownership of files and create Oracle Inventory.

# chown –R crs:oinstall /u01/app/crs
# mkdir –p /u01/app/oraInventory
# chown crs:oinstall /u01/app/oraInventory

C. Run the preupdate.sh script on each target node.

# /u01/app/crs/install/preupdate.sh \
  –crshome /u01/app/crs –crsuser crs -noshutdown
Cloning to Extend Oracle Clusterware to More Nodes (continued)
Step 2 is the deployment and extraction of the source archive to the new cluster nodes. If a shared Oracle Cluster home on a CFS is not being used, extract the source archive to each
node’s local file system. Because this node will participate with the existing nodes in a cluster, it
is not possible to use different account names for the Oracle Clusterware software owner. If
other Oracle products have been previously installed on the new nodes, the Central Oracle
Inventory directory may already exist. It is possible for this directory to be owned by a different
user than the Oracle Clusterware user; however, both should belong to the same primary group
oinstall. Execute the preupdate.sh script on each target node, logged in as root. This
script will change the ownership of the CRS home and root-owned files to the Oracle CRS
user if needed.

Cloning to Extend Oracle Clusterware
to More Nodes
3. Create a shell script to invoke clone.pl supplying input.
#!/bin/sh
# /tmp/my-clone-script.sh
E01=ORACLE_BASE=/u01/app
E02=ORACLE_HOME=/u01/app/crs
E03=ORACLE_HOME_NAME=OraCrs11g
C01="-O'sl_tableList={node3:node3-priv:node3-vip:N:Y}'"
C02="-O'INVENTORY_LOCATION=/u01/app/oraInventory'"
C03="-O'-noConfig'"
perl /u01/app/crs/clone/bin/clone.pl $E01 $E02 $E03 $C01 $C02 $C03

4. Run the shell script created in step 3 on each new node.
$ /tmp/my-clone-script.sh
Cloning to Extend Oracle Clusterware to More Nodes (continued)
For step 3, the quantity of input variables to the clone.pl procedure is greatly reduced because the existing cluster has already defined many of the settings that are needed. Creating a
shell script to provide these input values is still recommended. For step 4, run the shell script
that you developed to invoke the clone.pl script.

Cloning to Extend Oracle Clusterware
to More Nodes
5. Run the orainstRoot.sh script on each new node.
# /u01/app/oraInventory/orainstRoot.sh

6. Run the addNode script on the source node.


$ /u01/app/crs/oui/bin/addNode.sh –silent \
  "CLUSTER_NEW_NODES={new_nodes}" \
  "CLUSTER_NEW_PRIVATE_NODE_NAMES={new_node-priv}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={new_node-vip}" \
  –noCopy
7. Run the rootaddnode.sh script on the source node.
# /u01/app/crs/install/rootaddnode.sh
Cloning to Extend Oracle Clusterware to More Nodes (continued)
For step 5, run the orainstRoot.sh script on each new node as the root user. For step 6, run the addNode.sh script on the source node, not on the destination node. Because the
clone.pl scripts have already been run on the new nodes, this script only updates the
inventories of the existing nodes. For step 7, again on the source node, run the
rootaddnode.sh script as root to instantiate the node.

Cloning to Extend Oracle Clusterware
to More Nodes
8. Run the <CRS_home>/root.sh script on the new node.
# /u01/app/crs/root.sh

9. Run the configuration assistants on each node that is listed


in the configToolAllCommands file.
$ cat /u01/app/crs/cfgtoollogs/configToolAllCommands
$ onsconfig add_config node3:6501
10. Run cluvfy to validate the installation.

$ cluvfy stage –post crsinst –n all -verbose
Cloning to Extend Oracle Clusterware to More Nodes (continued)
Next, in step 8, run the root.sh script on each new node joining the cluster as the root user.
Ch For step 9, additional commands for the configuration assistants need to be run on the new nodes
joining the cluster as the Oracle Clusterware software owner. The list of commands can be found
in the <CRS home>/cfgtoollogs/configToolAllCommands file. Finally, for step
10, which is the last step, run cluvfy to verify the success of the Oracle Clusterware
installation.

Quiz

An Oracle Clusterware home that was created with cloning


techniques can be used as the source for additional cloning
exercises.
1. True
2. False

Answer: 1

Quiz

Which scripting language is used for the cloning script?


1. Java
2. PERL
3. Shell
4. Python
Answer: 2

Summary

In this lesson, you should have learned how to:


• Describe the cloning process
• Describe the clone.pl script and its variables
• Perform a clone of Oracle Clusterware to a new cluster
• Extend an existing cluster by cloning
High Availability of Connections

Objectives

After completing this lesson, you should be able to:


• Configure client-side, connect-time load balancing
• Configure client-side, connect-time failover
• Configure server-side, connect-time load balancing
• Use the Load Balancing Advisory (LBA)
• Describe the benefits of Fast Application Notification (FAN)
• Configure server-side callouts
• Configure Transparent Application Failover (TAF)
For more information, see:
http://www.oracle.com/technology/products/database/clustering/pdf/awmrac11g.pdf
Note: Much of this appendix is geared toward Oracle Database 11g Release 1 connections.

Types of Workload Distribution

• Connection balancing is rendered possible by configuring multiple listeners on multiple nodes:
– Client-side, connect-time load balancing
– Client-side, connect-time failover
– Server-side, connect-time load balancing
• Run-time connection load balancing is rendered possible by using connection pools:
– Work requests automatically balanced across the pool of connections
– Native feature of the JDBC implicit connection cache and ODP.NET connection pool
Types of Workload Distribution
With RAC, multiple listeners on multiple nodes can be configured to handle client connection requests for the same database service.
A multiple-listener configuration enables you to leverage the following failover and load-
balancing features:
• Client-side, connect-time load balancing
• Client-side, connect-time failover
• Server-side, connect-time load balancing
These features can be implemented either one by one, or in combination with each other.
Moreover, if you are using connection pools, you can benefit from readily available run-time
connection load balancing to distribute the client work requests across the pool of connections
established by the middle tier. This possibility is offered by the Oracle JDBC implicit
connection cache feature as well as Oracle Data Provider for .NET (ODP.NET) connection pool.

Client-Side Load Balancing

[Figure: Clients connect using the SCAN; the SCAN listeners redirect each request to a local listener of the Oracle RAC database.]
Client-Side, Connect-Time Load Balancing
Client-side load balancing is defined in your client connection definition by setting the parameter LOAD_BALANCE=ON. When you set this parameter to ON, Oracle Database randomly selects an address in the address list, and connects to that node's listener. This balances
randomly selects an address in the address list, and connects to that node's listener. This balances
client connections across the available SCAN listeners in the cluster.
The SCAN listener redirects the connection request to the local listener of the instance that is
least loaded and provides the requested service. When the listener receives the connection
request, the listener connects the user to an instance that the listener knows provides the
requested service. When using SCAN, Oracle Net automatically load balances client connection
requests across the three IP addresses you defined for the SCAN, except when using EZConnect.
To see what services a listener supports, run the lsnrctl services command.

Client-Side, Connect-Time Failover

ERP =
 (DESCRIPTION =
  (ADDRESS_LIST =
   (LOAD_BALANCE=ON)
   (FAILOVER=ON)
   (ADDRESS=(PROTOCOL=TCP)(HOST=node1vip)(PORT=1521))
   (ADDRESS=(PROTOCOL=TCP)(HOST=node2vip)(PORT=1521))
  )
  (CONNECT_DATA=(SERVICE_NAME=ERP)))
[Figure: When a node fails (1), its virtual IP address is brought online on a surviving node (2); clients receive an immediate response from the address (3) and try the next address in the list (4).]

Client-Side, Connect-Time Failover
This feature enables clients to connect to another listener if the initial connection to the first listener fails. The number of listener protocol addresses in the connect descriptor determines how many listeners are tried. Without client-side, connect-time failover, Oracle Net attempts a
connection with only one listener. As shown in the example in the slide, client-side, connect-
time failover is enabled by setting the FAILOVER=ON clause in the corresponding client-side
TNS entry.
In the example, you expect the client to randomly attempt connections to either NODE1VIP or
NODE2VIP, because LOAD_BALANCE is set to ON. In the case where one of the nodes is down,
the client cannot know this. If a connection attempt is made to a down node, the client needs to
wait until it receives the notification that the node is not accessible, before an alternate address
in the ADDRESS_LIST is tried.
Therefore, using virtual host names in the ADDRESS_LIST of your connect descriptors is
highly recommended. If a failure of a node occurs (1), the virtual IP address assigned to that
node is failed over and brought online on another node in the cluster (2). Thus, all client
connection attempts are still able to get a response from the IP address, without the need to wait
for the operating system TCP/IP timeout (3). Therefore, clients get an immediate
acknowledgement from the IP address, and are notified that the service on that node is not
available. The next address in the ADDRESS_LIST can then be tried immediately with no delay
(4).
Note: If you use connect-time failover, do not set GLOBAL_DBNAME in your listener.ora file.
Server-Side, Connect-Time Load Balancing

ERP = (DESCRIPTION=
(ADDRESS_LIST=(LOAD_BALANCE=ON)(FAILOVER=ON)
(ADDRESS=(PROTOCOL=TCP)(HOST=node1vip)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=node2vip)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=ERP)))
[Figure: Server-side, connect-time load balancing. The ERP service is started on both instances. PMON on each instance registers load information with every listener (1). A client connection request (2) arrives at the listener on Node2 (3), is redirected to the listener on Node1 (4), which starts a dedicated server process (5), and the connection is made to that process (6).]

*.REMOTE_LISTENER=RACDB_LISTENERS

RACDB_LISTENERS=
 (DESCRIPTION=
  (ADDRESS=(PROTOCOL=tcp)(HOST=node1vip)(PORT=1521))
  (ADDRESS=(PROTOCOL=tcp)(HOST=node2vip)(PORT=1521)))
Server-Side, Connect-Time Load Balancing
The slide shows you how listeners distribute service connection requests across a RAC cluster. Here, the client application connects to the ERP service. On the server side, the database is using
the dynamic service registration feature. This allows the PMON process of each instance in the
cluster to register service performance information with each listener in the cluster (1). Each
listener is then aware of which instance has a particular service started, as well as how that
service is performing on each instance.
You configure this feature by setting the REMOTE_LISTENER initialization parameter of each
instance to a TNS name that describes the list of all available listeners. The slide shows the
shared entry in the SPFILE as well as its corresponding server-side TNS entry.
Depending on the load information, as computed by the Load Balancing Advisory, and sent by
each PMON process, a listener redirects the incoming connection request (2) to the listener of
the node where the corresponding service is performing the best (3).
In the example, the listener on NODE2 is tried first. Based on workload information dynamically
updated by PMON processes, the listener determines that the best instance is the one residing on
NODE1. The listener redirects the connection request to the listener on NODE1 (4). That listener
then starts a dedicated server process (5), and the connection is made to that process (6).
Note: For more information, refer to the Net Services Administrator’s Guide.
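As a sketch, the parameter can be set for all instances from any one node as follows; RACDB_LISTENERS is the alias from the example above, and SYSDBA access is assumed:

$ sqlplus -S / as sysdba <<'EOF'
ALTER SYSTEM SET REMOTE_LISTENER='RACDB_LISTENERS' SCOPE=BOTH SID='*';
EOF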
Fast Application Notification: Overview

[Figure: FAN architecture on a cluster node. HA events raised by CRS are published through ONS and AQ to the integrated clients: .NET applications (ODP.NET), C applications (OCI API and ONS API), Java applications (JDBC and ONS Java API), a proxy application, server-side callout scripts, and EMD/DB Control.]
Fast Application Notification: Overview
Fast Application Notification (FAN) enables end-to-end, lights-out recovery of applications and
Ch load balancing based on real transaction performance in a RAC environment. With FAN, the
continuous service built into Oracle Real Application Clusters is extended to applications and
mid-tier servers. When the state of a database service changes, (for example, up, down, or not
restarting), the new status is posted to interested subscribers through FAN events. Applications
use these events to achieve very fast detection of failures, and rebalancing of connection pools
following failures and recovery. The easiest way to receive all the benefits of FAN, with no
effort, is to use a client that is integrated with FAN:
• Oracle Database JDBC
• Server-side callouts
• Connection Manager (CMAN)
• Listeners
• Oracle Notification Service (ONS) API
• OCI Connection Pool or Session Pool
• Transparent Application Failover (TAF)
• ODP.NET Connection Pool
Note: The integrated Oracle clients must be Oracle Database 10g Release 2 or later to take
advantage of the load balancing advisory FAN events.
Fast Application Notification: Benefits

• No need for connections to rely on connection timeouts
• Used by Load Balancing Advisory to propagate load information
• Designed for enterprise application and management console integration
• Reliable distributed system that:
– Detects high-availability event occurrences in a timely manner
– Pushes notification directly to your applications
• Tightly integrated with:
– Oracle JDBC applications using connection pools
– Enterprise Manager
– Data Guard Broker
Fast Application Notification: Benefits
Traditionally, client or mid-tier applications connected to the database have relied on connection
Ch timeouts, out-of-band polling mechanisms, or other custom solutions to realize that a system
component has failed. This approach has huge implications in application availability, because
down times are extended and more noticeable.
With FAN, important high-availability events are pushed as soon as they are detected, which
results in a more efficient use of existing computing resources, and a better integration with your
enterprise applications, including mid-tier connection managers, or IT management consoles,
including trouble ticket loggers and email/paging servers.
FAN is a distributed system that is enabled on each participating node. This makes it very
reliable and fault tolerant because the failure of one component is detected by another.
Therefore, event notification can be detected and pushed by any of the participating nodes.
FAN events are tightly integrated with Oracle Data Guard Broker, Oracle JDBC implicit
connection cache, ODP.NET, TAF, and Enterprise Manager. For example, Oracle Database
JDBC applications managing connection pools do not need custom code development. They are
automatically integrated with the ONS if implicit connection cache and fast connection failover
are enabled.

FAN-Supported Event Types

Event type       Description
SERVICE          Primary application service
SRV_PRECONNECT   Shadow application service event (mid-tiers and TAF using primary and secondary instances)
SERVICEMEMBER    Application service on a specific instance
DATABASE         Oracle database
INSTANCE         Oracle instance
ASM              Oracle ASM instance
NODE             Oracle cluster node
SERVICE_METRICS  Load Balancing Advisory
FAN-Supported Event Types
FAN delivers events pertaining to the list of managed cluster resources shown in the slide. The table describes each of the resources.
Note: SRV_PRECONNECT and SERVICE_METRICS are discussed later in this lesson.

FAN Event Status

Event status    Description
up              Managed resource comes up.
down            Managed resource goes down.
preconn_up      Shadow application service comes up.
preconn_down    Shadow application service goes down.
nodedown        Managed node goes down.
not_restarting  Managed resource cannot fail over to the remote node.
FAN Event Status
This table describes the event status for each of the managed cluster resources seen previously.

FAN Event Reasons

Event Reason  Description
user          User-initiated commands, such as srvctl and sqlplus
failure       Managed resource polling checks for and detects a failure.
dependency    Dependency of another managed resource that triggered a failure condition
autostart     Initial cluster boot: Managed resource has profile attribute AUTO_START=1, and was offline before the last Oracle Clusterware shutdown.
FAN Event Reasons
The event status for each managed resource is associated with an event reason. The reason further describes what triggered the event. The table in the slide gives you the list of possible
reasons with a corresponding description.

FAN Event Format

<Event_Type>
VERSION=<n.n>
[service=<serviceName.dbDomainName>]
[database=<dbName>] [instance=<sid>]
[host=<hostname>]
status=<Event_Status>
reason=<Event_Reason>
[card=<n>]
timestamp=<eventDate> <eventTime>

SERVICE VERSION=1.0 service=ERP.oracle.com
database=ORCL status=up reason=user card=4
timestamp=21-Jul-2009 15:24:19

NODE VERSION=1.0 host=host01
status=nodedown timestamp=21-Jul-2009 12:41:02

FAN Event Format
In addition to its type, status, and reason, a FAN event has other payload fields to further describe the unique cluster resource whose status is being monitored and published:
• The event payload version, which is currently 1.0
• The name of the primary or shadow application service. This name is excluded from NODE
events.
• The name of the RAC database, which is also excluded from NODE events
• The name of the RAC instance, which is excluded from SERVICE, DATABASE, and
NODE events
• The name of the cluster host machine, which is excluded from SERVICE and DATABASE
events
• The service cardinality, which is excluded from all events except for SERVICE
status=up events
• The server-side date and time when the event is detected
The general FAN event format is described in the slide along with possible FAN event examples.
Note the differences in event payload for each FAN event type.
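For contrast, a corresponding down event might look like the following. This is a hypothetical
illustration that simply applies the format above; note that card is absent because it
accompanies only SERVICE status=up events:

SERVICE VERSION=1.0 service=ERP.oracle.com
database=ORCL status=down reason=failure
timestamp=21-Jul-2009 15:49:03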

Load Balancing Advisory: FAN Event

Parameter             Description
Version               Version of the event record
Event type            SERVICE, SERVICE_MEMBER, DATABASE, INSTANCE,
                      NODE, ASM, SRV_PRECONNECT
Service               Matches the service in DBA_SERVICES
Database unique name  Unique DB name supporting the service
Time stamp            Date and time stamp (local time zone)
Instance              Instance name supporting the service
Percent               Percentage of work to send to this database and instance
Flag                  GOOD, VIOLATING, NO DATA, BLOCKED
Load Balancing Advisory: FAN Event
The Load Balancing Advisory FAN event is described in the slide. Basically, it contains a
calculated percentage of work requests that should be sent to each instance. The flag indicates
the behavior of the service on the corresponding instance relating to the thresholds set on that
instance for the service.
Use the following example to monitor load balancing advisory events:
SET PAGES 60 COLSEP '|' LINES 132 NUM 8 VERIFY OFF FEEDBACK OFF
COLUMN user_data HEADING "AQ Service Metrics" FORMAT A60 WRAP
BREAK ON service_name SKIP 1

SELECT TO_CHAR(enq_time, 'HH:MI:SS') Enq_time, user_data
  FROM sys.sys$service_metrics_tab
 ORDER BY 1;

Implementation of Server-Side Callouts

• The callout directory:
  – <GRID_Home>/racg/usrco
  – Can store more than one callout
  – Grants execution on callouts and the callout directory to the Oracle Clusterware user
• The order in which callouts are executed is nondeterministic.
• Writing callouts involves:
  1. Parsing callout arguments: The event payload
  2. Filtering incoming FAN events
  3. Executing event-handling programs
Implementation of Server-Side Callouts
Each database event detected by the RAC High Availability (HA) framework results in the
execution of each executable script or program deployed in the standard Oracle Clusterware
callout directory. On UNIX, it is GRID_Home/racg/usrco. You must deploy each new callout
on each RAC node.
The order in which these callouts are executed is nondeterministic. However, RAC guarantees
that all callouts are invoked once for each recognized event in an asynchronous fashion. Thus,
merging callouts whose executions need to be in a particular order is recommended.
You can install as many callout scripts or programs as your business requires, provided each
callout does not incur expensive operations that delay the propagation of HA events. If many
callouts are going to be written to perform different operations based on the event received, it
might be more efficient to merge them into a single callout program.
Writing server-side callouts involves the steps shown in the slide. In order for your callout to
identify an event, it must parse the event payload sent by the RAC HA framework to your
callout. After the sent event is identified, your callout can filter it to avoid execution on each
event notification. Then, your callout needs to implement a corresponding event handler that
depends on the event itself and the recovery process required by your business.
Note: As a security measure, make sure that the callout directory and its contained callouts have
write permissions only to the system user who installed Oracle Clusterware.
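As a minimal sketch, a callout that does nothing but append each FAN event to a log file could
look like this. The script name and log location are assumptions for illustration; deploy it with
execute permission in the callout directory on every RAC node:

#!/bin/sh
# Hypothetical callout <GRID_Home>/racg/usrco/log_fan_events.sh:
# append every FAN event type and its full payload to a local log file.
FAN_LOGFILE=/tmp/`hostname`_fan_events.log
echo `date` $* >> $FAN_LOGFILE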
Server-Side Callout Parse: Example
#!/bin/sh
# Parse the FAN event payload passed in by the RAC HA framework.
AWK=/bin/awk    # path to awk; adjust for your platform (assumed location)
NOTIFY_EVENTTYPE=$1
for ARGS in $*; do
  PROPERTY=`echo $ARGS | $AWK -F"=" '{print $1}'`
  VALUE=`echo $ARGS | $AWK -F"=" '{print $2}'`
  case $PROPERTY in
    VERSION|version)     NOTIFY_VERSION=$VALUE ;;
    SERVICE|service)     NOTIFY_SERVICE=$VALUE ;;
    DATABASE|database)   NOTIFY_DATABASE=$VALUE ;;
    INSTANCE|instance)   NOTIFY_INSTANCE=$VALUE ;;
    HOST|host)           NOTIFY_HOST=$VALUE ;;
    STATUS|status)       NOTIFY_STATUS=$VALUE ;;
    REASON|reason)       NOTIFY_REASON=$VALUE ;;
    CARD|card)           NOTIFY_CARDINALITY=$VALUE ;;
    TIMESTAMP|timestamp) NOTIFY_LOGDATE=$VALUE ;;
    ??:??:??)            NOTIFY_LOGTIME=$PROPERTY ;;  # time arrives as its own argument
  esac
done
Server-Side Callout Parse: Example
Unless you want your callouts to be executed on each event notification, you must first identify
the event parameters that are passed automatically to your callout during its execution. The
example in the slide shows you how to parse these arguments by using a sample Bourne shell
script.
The first argument that is passed to your callout is the type of event that is detected. Then,
depending on the event type, a set of PROPERTY=VALUE strings are passed to identify exactly
the event itself.
The script given in the slide identifies the event type and each PROPERTY=VALUE pair. The
data is then dispatched into a set of variables that can be used later in the callout for
filtering purposes.
As mentioned in the previous slide, it might be better to have a single callout that parses the
event payload, and then executes a function or another program on the basis of information in
the event, as opposed to having to filter information in each callout. This becomes necessary
only if many callouts are required.
Note: Make sure that executable permissions are set correctly on the callout script.
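To sanity-check the parsing logic before relying on live events, you can invoke the script by
hand with an argument list shaped like the FAN payload seen earlier (the values here are
illustrative). The time portion of the timestamp arrives as a separate argument, which is exactly
what the ??:??:?? case catches:

$ ./callout.sh SERVICE VERSION=1.0 service=ERP.oracle.com \
> database=ORCL instance=I1 host=node1 status=up reason=user \
> card=4 timestamp=21-Jul-2009 15:24:19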

Server-Side Callout Filter: Example

if ((( [ $NOTIFY_EVENTTYPE = "SERVICE" ] || \
      [ $NOTIFY_EVENTTYPE = "DATABASE" ] || \
      [ $NOTIFY_EVENTTYPE = "NODE" ] \
     ) && \
     ( [ $NOTIFY_STATUS = "not_restarting" ] \
     )) && \
    ( [ $NOTIFY_DATABASE = "PROD" ] || \
      [ $NOTIFY_SERVICE = "ERP" ] \
    ))
then
    # File a trouble ticket for not_restarting events on PROD or ERP
    /usr/local/bin/logTicket $NOTIFY_LOGDATE \
                             $NOTIFY_LOGTIME \
                             $NOTIFY_SERVICE \
                             $NOTIFY_DATABASE \
                             $NOTIFY_HOST
fi
Server-Side Callout Filter: Example
The example in the slide shows you a way to filter FAN events from a callout script. This
example is based on the example in the previous slide.
Now that the event characteristics are identified, this script triggers the execution of the trouble-
logging program /usr/local/bin/logTicket only when the RAC HA framework posts
a SERVICE, DATABASE, or NODE event type, with a status set to not_restarting, and
only for the production PROD RAC database or the ERP service.
It is assumed that the logTicket program is already created and that it takes the arguments
shown in the slide.
It is also assumed that a ticket is logged only for not_restarting events, because they are
the ones that exceeded internally monitored timeouts and seriously need human intervention for
full resolution.
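For illustration only, a purely hypothetical stub of logTicket could simply record the five
arguments the filter passes (event date, event time, service, database, and host); a real
implementation would open a ticket in your incident-tracking system:

#!/bin/sh
# Hypothetical /usr/local/bin/logTicket stub.
# $1=event date, $2=event time, $3=service, $4=database, $5=host
echo "`date`: not_restarting at $1 $2 - service $3, database $4, host $5" \
  >> /var/log/fan_tickets.log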

Configuring the Server-Side ONS

[Slide diagram: Node1 and Node2 each run an ONS daemon that reads its configuration from
the local ons.config file (1) and from OCR; Mid-tier1 runs its own ONS with localport=6100
and remoteport=6200. The OCR configuration is added with srvctl (2), and each cluster node's
daemon then rereads it (3):]

$ srvctl add ons -t 6200 node1:6200 node2:6200
$ srvctl add ons -t midtier1:6200

$ onsctl reload
Configuring the Server-Side ONS
The ONS configuration is controlled by the <GRID Home>/opmn/conf/ons.config
configuration file. This file is automatically created during installation. There are three
important parameters that should always be configured for each ONS:
• The first is localport, the port that ONS uses to talk to local clients.
• The second is remoteport, the port that ONS uses to talk to other ONS daemons.
• The third parameter is called nodes. It specifies the list of other ONS daemons to talk to.
This list should include all RAC ONS daemons and all mid-tier ONS daemons. Node
values are given as either host names or IP addresses, each followed by its remoteport.
In the slide, it is assumed that ONS daemons are already started on each cluster node. This
should be the default situation after a correct RAC installation. However, if you want to use
OCR, you should edit the ons.config file on each node, and then add the configuration to OCR
before reloading it on each cluster node. This is illustrated in the slide.
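As an illustration, the ons.config file on a cluster node for the topology in the slide might
contain entries such as the following; the port numbers and host names are the assumed values
from the slide:

localport=6100
remoteport=6200
nodes=node1:6200,node2:6200,midtier1:6200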

Optionally Configure the Client-Side ONS

[Slide diagram: Mid-tier1 runs a client-side ONS whose ons.config file (1) lists the cluster
nodes; the daemon is then started (2). Node1 and Node2 run their own ONS daemons
configured through ons.config and OCR.]

ons.config on Mid-tier1:

localport=6100
remoteport=6200
nodes=node1:6200,node2:6200

$ onsctl start
Optionally Configure the Client-Side ONS
Oracle Database FAN uses Oracle Notification Service (ONS) on the mid-tier to receive FAN
events when you are using the Java Database Connectivity (JDBC) connection cache. To use
ONS on the mid-tier, you need to install ONS on each host where you have client applications
that need to be integrated with FAN. Most of the time, these hosts play the role of a mid-tier
application server. Therefore, on the client side, you must configure all the RAC nodes in the
ONS configuration file. A sample configuration file might look like the one shown in the slide.
After configuring ONS, you start the ONS daemon with the onsctl start command. It is
your responsibility to make sure that an ONS daemon is running at all times. You can check that
the ONS daemon is active by executing the onsctl ping command.
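For example, on the mid-tier host (assuming the ONS binaries are in your PATH):

$ onsctl start      # start the client-side ONS daemon
$ onsctl ping       # verify that the daemon is running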

JDBC Fast Connection Failover: Overview

[Slide diagram: service UP and service-or-node DOWN events travel from the ONS daemons
on Node1 through Noden to the mid-tier ONS, where a JDBC ICC event handler processes
them. On a DOWN event, the affected connections in the connection cache are marked down
and cleaned up; on an UP event, connections are reconnected and load balanced across the
listeners using service names.]
JDBC Fast Connection Failover: Overview
Oracle Application Server integrates Implicit Connection Cache (ICC) with the ONS API by
having application developers enable Fast Connection Failover (FCF). FCF works in
conjunction with the ICC to quickly and automatically recover lost or damaged connections.
This automatic connection management results from FAN events received by the local ONS
daemon, or by a remote ONS if a local one is not used, and handled by a special event handler
thread. Both JDBC thin and JDBC OCI drivers are supported.
Therefore, if ICC and FCF are enabled, your Java program automatically becomes an ONS
subscriber without having to manage FAN events directly.
Whenever a service or node down event is received by the mid-tier ONS, the event handler
automatically marks the corresponding connections as down and cleans them up. This prevents
applications that request connections from the cache from receiving invalid or bad connections.
Whenever a service up event is received by the mid-tier ONS, the event handler recycles some
unused connections, and reconnects them using the event service name. The number of recycled
connections is automatically determined by the connection cache. Because the listeners perform
connection load balancing, this automatically rebalances the connections across the preferred
instances of the service without waiting for application connection requests or retries.
For more information, refer to the Oracle Database JDBC Developer’s Guide and Reference.
Note: Similarly, ODP.NET also allows you to use FCF using AQ for FAN notifications.
Using Oracle Streams Advanced Queuing for FAN

• Use AQ to publish FAN to ODP.NET and OCI.
• Turn on FAN notification to alert queue.

exec DBMS_SERVICE.MODIFY_SERVICE (
  service_name => 'SELF-SERVICE', aq_ha_notification => TRUE);

• View published FAN events:

SQL> select object_name, reason
  2  from dba_outstanding_alerts;

OBJECT_NAME REASON
----------- ----------------------------------------
xwkE        Database xwkE (domain ) up as of time
            2005-12-30 11:57:29.000000000 -05:00;
            reason code: user
JWSERV      Composite service xwkE up as of time
            2006-01-02 05:27:46.000000000 -05:00;
            reason code: user
Using Oracle Streams Advanced Queuing for FAN
RAC publishes FAN events to a system alert queue in the database by using Oracle Streams
Advanced Queuing (AQ). ODP.NET and OCI client integration uses this method to subscribe to
FAN events.
To have FAN events for a service posted to that alert queue, the notification must be turned on
for the service by using either the DBMS_SERVICE PL/SQL package as shown in the slide, or
by using the Enterprise Manager interface.
To view FAN events that are published, you can use the DBA_OUTSTANDING_ALERTS or
DBA_ALERT_HISTORY views. An example using DBA_OUTSTANDING_ALERTS is shown
in the slide.
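If the service is managed by Oracle Clusterware, the same notification flag can also be set with
srvctl; the -q option used here is the same one that appears in the TAF example later in this
appendix (the database name is illustrative):

$ srvctl modify service -d RACDB -s SELF-SERVICE -q TRUE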

JDBC/ODP.NET FCF Benefits

• Database connections are balanced across preferred instances according to LBA.
• Database work requests are balanced across preferred instances according to LBA.
• Database connections are anticipated.
• Database connection failures are immediately detected and stopped.
JDBC/ODP.NET FCF Benefits
By enabling FCF, your existing Java applications connecting through Oracle JDBC Universal
Connection Pool (UCP) and application services, or your .NET applications using ODP.NET
connection pools and application services benefit from the following:
• All database connections are balanced across all RAC instances that support the new
service name, instead of having the first batch of sessions routed to the first RAC instance.
This is done according to the Load Balancing Advisory algorithm you use (see the next
slide). Connection pools are rebalanced upon service, instance, or node up events.
• The connection cache immediately starts placing connections to a particular RAC instance
when a new service is started on that instance.
• The connection cache immediately shuts down stale connections to RAC instances where
the service is stopped, or whose node goes down.
• Your application automatically becomes a FAN subscriber without having to manage FAN
events directly, by just setting up flags in your connection descriptors.
Note: For more information about how to subscribe to FAN events, refer to the Oracle Database
JDBC Developer’s Guide and Oracle Data Provider for .NET Developer’s Guide.

Load Balancing Advisory

• The Load Balancing Advisory (LBA) is an advisory for sending work across RAC instances.
• LBA advice is available to all applications that send work:
  – JDBC and ODP connection pools
  – Connection load balancing
• There are two types of service-level goals:
  – SERVICE_TIME: Directs work requests to instances according to response time.

    $ srvctl modify service -d PROD -s OE -B SERVICE_TIME -j SHORT

  – THROUGHPUT: Directs requests based on the rate that work is completed in the
    service plus available bandwidth.

    $ srvctl modify service -d PROD -s BATCH -B THROUGHPUT -j LONG
Load Balancing Advisory
Load balancing distributes work across all available database instances. The LBA provides
advice about how to direct incoming work to the instances that provide the optimal quality of
service for that work, minimizing the need to relocate the work later. By using the
SERVICE_TIME or THROUGHPUT goals, feedback is built into the system.
• SERVICE_TIME: Attempts to direct work requests to instances according to response
time. Load balancing advisory data is based on elapsed time for work done in the service
plus available bandwidth to the service. An example for the use of SERVICE_TIME is for
workloads such as Internet shopping, where the rate of demand changes.
• THROUGHPUT: Attempts to direct work requests according to throughput. The load
balancing advisory is based on the rate that work is completed in the service plus available
bandwidth to the service. An example for the use of THROUGHPUT is for workloads such
as batch processes, where the next job starts when the last job completes.
Work is routed to provide the best service times globally, and routing responds gracefully to
changing system conditions. In a steady state, the system approaches equilibrium with improved
throughput across all of the Oracle RAC instances. The load balancing advisory is deployed with
key Oracle clients, such as a listener, the JDBC universal connection pool, and the ODP.NET
Connection Pool. The load balancing advisory is also open for third-party subscription by way of
the JDBC and Oracle RAC FAN API or through callbacks with the Oracle Call Interface.
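To review the goal currently set for a service, you can check its profile with srvctl or query the
data dictionary; this is a sketch, and the exact output layout varies by release:

$ srvctl config service -d PROD -s OE

SQL> SELECT name, goal, clb_goal FROM dba_services;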
JDBC/ODP.NET Runtime Connection
Load Balancing: Overview

[Slide diagram: CRM work requests reach the connection cache, which distributes them
according to LBA feedback: 10% to Inst1, where CRM is very busy; 60% to Inst2, where CRM
is not busy; and 30% to Inst3, where CRM is busy.]
JDBC/ODP.NET Runtime Connection Load Balancing: Overview
Without using the Load Balancing Advisory, work requests to RAC instances are assigned on a
random basis, which is suitable when each instance is performing equally well. However, if one
of the instances becomes more burdened than the others because of the amount of work resulting
from each connection assignment, the random model does not perform optimally.
The Runtime Connection Load Balancing feature provides assignment of connections based on
feedback from the instances in the RAC cluster. The Connection Cache assigns connections to
clients on the basis of a relative number indicating what percentage of work requests each
instance should handle.
In the diagram in the slide, the feedback indicates that the CRM service on Inst1 is so busy that
it should service only 10% of the CRM work requests; Inst2 is so lightly loaded that it should
service 60%; and Inst3 is somewhere in the middle, servicing 30% of requests. Note that these
percentages apply to, and the decision is made on, a per-service basis. In this example, CRM is
the service in question.

Monitor LBA FAN Events

SQL> SELECT TO_CHAR(enq_time, 'HH:MI:SS') Enq_time, user_data
  2  FROM sys.sys$service_metrics_tab
  3  ORDER BY 1 ;

ENQ_TIME USER_DATA
-------- -----------------------------------------------------
04:19:46 SYS$RLBTYP('JFSERV', 'VERSION=1.0 database=xwkE
         service=JFSERV { {instance=xwkE2 percent=50
         flag=UNKNOWN}{instance=xwkE1 percent=50 flag=UNKNOWN}
         } timestamp=2009-01-02 06:19:46')
04:20:16 SYS$RLBTYP('JFSERV', 'VERSION=1.0 database=xwkE
         service=JFSERV { {instance=xwkE2 percent=80
         flag=UNKNOWN}{instance=xwkE1 percent=20 flag=UNKNOWN}
         } timestamp=2009-01-02 06:20:16')
SQL>
Monitor LBA FAN Events
You can use the SQL query shown in the slide to monitor the Load Balancing Advisory FAN
events for each of your services.

Transparent Application Failover: Overview

[Slide diagram: the TAF Basic and TAF Preconnect connection flows, numbered (1) through
(8) as walked through in the notes below. In both, the application connects through the OCI
library and Net Services to the AP or ERP service; with Preconnect, a shadow connection to
the ERP_PRECONNECT service is created ahead of any failure.]
Transparent Application Failover (TAF): Overview
TAF is a run-time feature of the OCI driver. It enables your application to automatically
reconnect to the service if the initial connection fails. During the reconnection, although your
active transactions are rolled back, TAF can optionally resume the execution of a SELECT
statement that was in progress. TAF supports two failover methods:
• With the BASIC method, the reconnection is established at failover time. After the service
has been started on the nodes (1), the initial connection (2) is made. The listener establishes
the connection (3), and your application accesses the database (4) until the connection fails
(5) for any reason. Your application then receives an error the next time it tries to access
the database (6). Then, the OCI driver reconnects to the same service (7), and the next time
your application tries to access the database, it transparently uses the newly created
connection (8). TAF can be enabled to receive FAN events for faster down events detection
and failover.
• The PRECONNECT method is similar to the BASIC method except that it is during the
initial connection that a shadow connection is also created to anticipate the failover. TAF
guarantees that the shadow connection is always created on the available instances of your
service by using an automatically created and maintained shadow service.
Note: Optionally, you can register TAF callbacks with the OCI layer. These callback functions
are automatically invoked at failover detection and allow you to have some control of the
failover process. For more information, refer to the Oracle Call Interface Programmer’s Guide.
TAF Basic Configuration Without FAN: Example

$ srvctl add service -d RACDB -s AP -r I1,I2 \
> -P BASIC
$ srvctl start service -d RACDB -s AP

AP =
  (DESCRIPTION =(FAILOVER=ON)(LOAD_BALANCE=ON)
    (ADDRESS=(PROTOCOL=TCP)(HOST=cluster01-scan)(PORT=1521))
    (CONNECT_DATA =
      (SERVICE_NAME = AP)
      (FAILOVER_MODE =
        (TYPE=SELECT)
        (METHOD=BASIC)
        (RETRIES=180)
        (DELAY=5))))
TAF Basic Configuration Without FAN: Example
Before using TAF, it is recommended that you create and start a service that is used during
connections. By doing so, you benefit from the integration of TAF and services. When you want
to use BASIC TAF with a service, you should specify the -P BASIC option when creating the
service. After the service is created, you simply start it on your database.
Then, your application needs to connect to the service by using a connection descriptor similar
to the one shown in the slide. The FAILOVER_MODE parameter must be included in the
CONNECT_DATA section of your connection descriptor:
• TYPE specifies the type of failover. The SELECT value means that not only the user
session is reauthenticated on the server side, but also the open cursors in the OCI can
continue fetching. This implies that the client-side logic maintains the fetch-state of each
open cursor. A SELECT statement is reexecuted by using the same snapshot, discarding
those rows already fetched, and retrieving those rows that were not fetched initially. TAF
verifies that the discarded rows are those that were returned initially, or it returns an error
message.
• METHOD=BASIC is used to reconnect at failover time.
• RETRIES specifies the number of times to attempt to connect after a failover.
• DELAY specifies the amount of time in seconds to wait between connect attempts.

TAF Basic Configuration with FAN: Example

$ srvctl add service -d RACDB -s AP -r I1,I2
$ srvctl start service -d RACDB -s AP

$ srvctl modify service -d RACDB -s AP -q TRUE -P BASIC \
> -e SELECT -z 180 -w 5 -j LONG

AP =
  (DESCRIPTION =
    (ADDRESS=(PROTOCOL=TCP)(HOST=cluster01-scan)(PORT=1521))
    (LOAD_BALANCE = YES)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TEST)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))))
TAF Basic Configuration with FAN: Example
Oracle Database 11g Release 2 supports server-side TAF with FAN. To use server-side TAF,
create and start your service using SRVCTL, and then configure TAF in the RDBMS by using
the srvctl command as shown in the slide. When done, make sure that you define a TNS
entry for it in your tnsnames.ora file. Note that this TNS name does not need to specify TAF
parameters as in the previous slide.
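In that case, a minimal client-side entry could be as simple as the following sketch: there is no
FAILOVER_MODE clause because the failover behavior now comes from the service definition.

AP =
  (DESCRIPTION =
    (ADDRESS=(PROTOCOL=TCP)(HOST=cluster01-scan)(PORT=1521))
    (CONNECT_DATA = (SERVICE_NAME = AP)))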

TAF Preconnect Configuration: Example

ERP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan)(PORT = 1521))
    (LOAD_BALANCE = YES)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ERP)
      (FAILOVER_MODE =
        (BACKUP = ERP_PRECONNECT)
        (TYPE = SELECT)(METHOD = PRECONNECT)(RETRIES = 180)(DELAY = 5))))

ERP_PRECONNECT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan)(PORT = 1521))
    (LOAD_BALANCE = YES)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ERP_PRECONNECT)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))))
TAF Preconnect Configuration: Example
In order for the shadow service to be created and managed automatically by Oracle Clusterware,
you must define the service with the -P PRECONNECT option. The shadow service is always
named using the format <service_name>_PRECONNECT.
The main differences with the previous example are that METHOD is set to PRECONNECT and
an additional parameter is added. This parameter is called BACKUP and must be set to another
entry in your tnsnames.ora file that points to the shadow service.
Note: In all cases where TAF cannot use the PRECONNECT method, TAF falls back to the
BASIC method automatically.
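Following the earlier srvctl examples, such a service could be created and started as in this
sketch, where -r names the preferred instance and -a the available one (the database and
instance names reuse the RACDB examples above):

$ srvctl add service -d RACDB -s ERP -r I1 -a I2 -P PRECONNECT
$ srvctl start service -d RACDB -s ERP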

TAF Verification
SELECT machine, failover_method, failover_type,
       failed_over, service_name, COUNT(*)
FROM v$session
GROUP BY machine, failover_method, failover_type,
         failed_over, service_name;

First node:
MACHINE FAILOVER_M FAILOVER_T FAI SERVICE_N COUNT(*)
------- ---------- ---------- --- --------- --------
node1   BASIC      SESSION    NO  AP               1
node1   PRECONNECT SESSION    NO  ERP              1

Second node:
MACHINE FAILOVER_M FAILOVER_T FAI SERVICE_N COUNT(*)
------- ---------- ---------- --- --------- --------
node2   NONE       NONE       NO  ERP_PRECO        1

Second node after failure:
MACHINE FAILOVER_M FAILOVER_T FAI SERVICE_N COUNT(*)
------- ---------- ---------- --- --------- --------
node2   BASIC      SESSION    YES AP               1
node2   PRECONNECT SESSION    YES ERP_PRECO        1
TAF Verification
To determine whether TAF is correctly configured and that connections are associated with a
failover option, you can examine the V$SESSION view. To obtain information about the
connected clients and their TAF status, examine the FAILOVER_TYPE, FAILOVER_METHOD,
FAILED_OVER, and SERVICE_NAME columns. The example includes one query that you
could execute to verify that you have correctly configured TAF.
This example is based on the previously configured AP and ERP services, and their
corresponding connection descriptors.
The first output in the slide is the result of the execution of the query on the first node after two
SQL*Plus sessions from the first node have connected to the AP and ERP services, respectively.
The output shows that the AP connection ended up on the first instance; because of the load-
balancing algorithm, it could equally have ended up on the second instance. The ERP
connection, however, must end up on the first instance because it is the only preferred one.
The second output is the result of the execution of the query on the second node before any
connection failure. Note that there is currently one unused connection established under the
ERP_PRECONNECT service that is automatically started on the available ERP instance.
The third output is the one corresponding to the execution of the query on the second node after
the failure of the first instance. A second connection has been created automatically for the AP
service connection, and the original ERP connection now uses the preconnected connection.
FAN Connection Pools and TAF Considerations

• Both techniques are integrated with services and provide service connection load balancing.
• Do not use FCF when working with TAF, and vice versa.
• Connection pools that use FAN are always preconnected.
• TAF may rely on operating system (OS) timeouts to detect failures.
• FAN never relies on OS timeouts to detect failures.
FAN Connection Pools and TAF Considerations
Because connection load balancing is a listener functionality, both FCF and TAF
automatically benefit from connection load balancing for services.
When you use FCF, there is no need to use TAF. Moreover, FCF and TAF cannot work together.
For example, you do not need to preconnect if you use FAN in conjunction with connection
pools. The connection pool is always preconnected.
With both techniques, you automatically benefit from VIPs at connection time. This means that
your application does not rely on lengthy operating system connection timeouts at connect time,
or when issuing a SQL statement. However, when in the SQL stack, and the application is
blocked on a read/write call, the application needs to be integrated with FAN in order to receive
an interrupt if a node goes down. In a similar case, TAF may rely on OS timeouts to detect the
failure. This takes much more time to fail over the connection than when using FAN.

Summary

In this lesson, you should have learned how to:


• Configure client-side, connect-time load balancing
• Configure client-side, connect-time failover
• Configure server-side, connect-time load balancing
• Use the Load Balancing Advisory
• Describe the benefits of Fast Application Notification
• Configure server-side callouts
• Configure Transparent Application Failover
