
Oracle Grid Infrastructure 11g: Manage Clusterware and ASM

Student Guide - Volume II

D59999GC30
Edition 3.0
December 2012
D78227
Note: All the passwords in these practices are oracle1.

Practices for Lesson 4


In the practices for this lesson, you will perform the tasks that are prerequisites to
successfully installing Oracle Grid Infrastructure. You will configure ASMLib to manage
your shared disks and, finally, you will install and verify Oracle Grid Infrastructure 11.2.
Practice 4-1: Performing Preinstallation Tasks for Oracle Grid Infrastructure
In this practice, you perform various tasks that are required before installing Oracle Grid
Infrastructure. These tasks include:
• Setting up required groups and users 

• Creating the base directories

• Configuring Network Time Protocol (NTPD) 

• Setting shell limits 

• Editing profile entries 

• Configuring ASMLib and shared disks 

1) From a graphical terminal session, make sure that the oracle user exists and that its
primary group is oinstall and its secondary group is dba. Perform this step on all three of
your nodes.
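A quick way to check is with the id command (the numeric IDs below are only illustrative; yours will differ):
$ id oracle
uid=500(oracle) gid=501(oinstall) groups=501(oinstall),502(dba)
The gid entry must show oinstall, and the groups list must include dba.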

2) As the root user, check that the following directories exist. Perform this step on all
three of your nodes.
/u01/app/11.2.0/grid
/u01/app/oracle/product/11.2.0/db_1
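For example, both directories and their ownership can be checked with one command (a quick sketch; the exact permissions may differ in your environment):
# ls -ld /u01/app/11.2.0/grid /u01/app/oracle/product/11.2.0/db_1
Both directories should exist and be owned by oracle:oinstall.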

3) View the /etc/sysconfig/ntpd file and confirm that the -x option is specified. If
necessary, change the file, and then restart the ntpd service with the
service ntpd restart command. Perform this step on all three of your nodes.
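For example, assuming the stock Oracle Linux 5 configuration, the check and restart look like this:
# grep OPTIONS /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# service ntpd restart
The -x option makes ntpd slew the clock instead of stepping it, which Oracle Clusterware requires.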

4) Check the content of the /etc/security/limits.conf file. The shell limits for the oracle
user are set in this file. Perform this step on all three of your nodes.
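The oracle entries should look something like the following, which are the values recommended by the 11.2 installation guide (verify them against your own file):
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
These raise the limits on the number of processes and open file descriptors available to the oracle user.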

5) Check the content of the /home/oracle/.bash_profile, /home/oracle/db_env, and
/home/oracle/grid_env files. Perform this step on all three of your nodes.
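The db_env and grid_env scripts simply switch your environment between the database home and the grid home. As a rough sketch (an assumption; check your actual files), grid_env contains something like:
export ORACLE_SID=+ASM1
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH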

6) Now you will set up the storage for ASM. Perform this step only on ol5-112-rac1.
Go to /dev and view the drives that will be used.
# cd /dev
# ls sd*
It should list the drives from sdb to sdm.
Use the "fdisk" command to partition the disks sdb to sdm. The following output shows
the expected fdisk output for the sdb disk.
# fdisk /dev/sdb

Here is what you should do:


Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only, until you decide
to write them. After that, of course, the previous content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024, and could in certain
setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n


Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1305    10482381   83  Linux

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
#

In each case, the sequence of answers is "n", "p", "1", "Return", "Return", and "w".
Once all the disks are partitioned, the results can be seen by repeating the previous "ls"
command.
# cd /dev
# ls sd*
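Partitioning twelve disks interactively is tedious. The same answer sequence can be scripted in a loop; this is a sketch that is not part of the original practice, so double-check the device letters before running it:
# for d in b c d e f g h i j k l m; do
>   printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sd$d
> done
Each printf supplies the same answers (n, p, 1, Return, Return, w) that you would otherwise type at the fdisk prompts.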

7) As root, execute the /usr/sbin/oracleasm configure -i command to configure the
Oracle ASM library driver. The owner should be oracle and the group should be dba.
Make sure that the driver loads and scans disks on boot. Perform this step on all three of
your nodes.
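The -i flag makes the configuration interactive. A typical session looks something like this (the prompts may vary slightly with the ASMLib version):
# /usr/sbin/oracleasm configure -i
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y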

8) Perform this step only on ol5-112-rac1.


As root, load the kernel module using the following command:
/usr/sbin/oracleasm init

9) Perform this step only on ol5-112-rac1.


Create the ASM disks needed for the practices.
/usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
/usr/sbin/oracleasm createdisk DISK2 /dev/sdc1
/usr/sbin/oracleasm createdisk DISK3 /dev/sdd1
/usr/sbin/oracleasm createdisk DISK4 /dev/sde1
/usr/sbin/oracleasm createdisk DISK5 /dev/sdf1
/usr/sbin/oracleasm createdisk DISK6 /dev/sdg1
/usr/sbin/oracleasm createdisk DISK7 /dev/sdh1
/usr/sbin/oracleasm createdisk DISK8 /dev/sdi1
/usr/sbin/oracleasm createdisk DISK9 /dev/sdj1
/usr/sbin/oracleasm createdisk DISK10 /dev/sdk1
/usr/sbin/oracleasm createdisk DISK11 /dev/sdl1
/usr/sbin/oracleasm createdisk DISK12 /dev/sdm1

10) Perform this step on all three nodes.


Run this command to refresh the ASM disk configuration:
/usr/sbin/oracleasm scandisks
You can verify that the disks are now visible to ASM using the "listdisks" command.
# /usr/sbin/oracleasm listdisks
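Given the createdisk commands from the previous step, every node should list the same twelve disks (in alphabetical order, so DISK10 through DISK12 appear right after DISK1):
DISK1
DISK10
DISK11
DISK12
DISK2
DISK3
DISK4
DISK5
DISK6
DISK7
DISK8
DISK9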

11) Perform this step on all three nodes as root.

# shutdown -h now

Practice 4-2: Installing Oracle Grid Infrastructure


In this practice, you install Oracle Grid Infrastructure. Start the three nodes.
1) Use the Oracle Universal Installer (runInstaller) to install Oracle Grid Infrastructure.
• Your assigned cluster nodes are ol5-112-rac1, ol5-112-rac2, and ol5-112-rac3.
• Your cluster name is cluster01. 

• Your SCAN is ol5-112-scan. 

• Your Oracle Grid Infrastructure software location is /logiciels

a) Log into ol5-112-rac1, open a terminal window and connect as oracle.


b) Change directory to the staged software location and start the OUI by
executing the runInstaller command.
$ cd /logiciels/grid
$ ./runInstaller

c) On the Select Installation Option page, select the “Install and Configure Oracle
Grid Infrastructure for a Cluster” option and click Next. 

d) On the Select Installation Type page, select Typical Installation and click
Next. 


e) On the "Specify Cluster Configuration" screen, enter the SCAN name and click
the "Add" button.
Enter the details of the second node in the cluster, then click the "OK" button.

f) Click the "SSH Connectivity..." button and enter the password for the
"oracle" user. Click the "Setup" button to configure SSH connectivity, and the "Test"
button to test it once it is complete.
g) Click the "Identify network interfaces..." button and check the public and
private networks are specified correctly. Once you are happy with them, click the "OK"
button and the "Next" button on the previous screen.
h) Enter "/u01/app/11.2.0/grid" as the software location and "Automatic Storage
Manager" as the cluster registry storage type. Enter the ASM password(oracle1), enter
dba as OSASM group and click the "Next" button.
i) On the Create ASM Disk Group page, make sure that Disk Group Name is
DATA and Redundancy is Normal. In the Add Disks region, select DISK1, DISK2,
DISK3, DISK4. Click Next.
j) On the Create Inventory page, Inventory Directory should be
/u01/app/oraInventory and the oraInventory Group Name should be oinstall. Click Next.
k) Wait while the prerequisite checks complete. If you have any issues, fix
them and click the "Next" button.
l) Click the "Finish" button.
m) Wait while the setup takes place.
n) When prompted, run the configuration scripts on the first node. When they
complete on the first node, run them on the second node (do not run them at the
same time).
o) The output from the "orainstRoot.sh" script should look something like the
listing below.

# cd /u01/app/oraInventory
# ./orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.


The execution of the script is complete.
#

The output of the root.sh will vary a little depending on the node it is run on.
p) Once the scripts have completed, return to the "Execute
Configuration Scripts" screen on the first node and click the "OK" button.
Wait for the configuration assistants to complete.
If you see errors related to nodeapps, do not worry; these resources will be started later.
We expect the verification phase to fail with an error relating to the SCAN.

INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking name resolution setup for "rac-scan.localdomain"...
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.2.201) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain"
INFO: Verification of SCAN VIP and Listener setup failed
q) Provided this is the only error, it is safe to ignore this and continue by
clicking the "Next" button.
Click the "Close" button to exit the installer.

The grid infrastructure installation is now complete.

r) When the installation finishes, verify it: check that the software stack is running as it
should. Execute the crsctl stat res -t command on both nodes. The resources whose
target state is ONLINE should be ONLINE.
$ . grid_env
$ crsctl stat res -t
Practice 4-3: Creating Additional ASM Disk Groups
In this practice, you create additional ASM disk groups to support the activities in the rest
of the course. You create a disk group to hold the Fast Recovery Area (FRA).
1) From the same terminal window you used to install Grid Infrastructure, set the
oracle user environment with the grid_env tool to the +ASM1 instance.
$ . grid_env
$ echo $ORACLE_SID (should be +ASM1)

View the DATA disk group that has already been created. Type ls at the ASMCMD
prompt and then exit.
$ asmcmd
ASMCMD> ls


2) Start the ASM Configuration Assistant (ASMCA).
$ asmca
3) Create a disk group named FRA with external redundancy. Choose DISK5 and
DISK6.
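If you prefer the command line to ASMCA, the same disk group can be created from SQL*Plus connected to the +ASM1 instance as SYSASM. This is only a sketch; the ORCL: prefix assumes the disks are managed by ASMLib, as configured in Practice 4-1:
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
  2  DISK 'ORCL:DISK5', 'ORCL:DISK6';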
Practices for Lesson 5
In this practice, you will add a third node to your cluster.
Practice 5-1: Adding a Third Node to Your Cluster
The goal of this practice is to extend your cluster to a third node. This third node is
ol5-112-rac3.
1) Set up the ssh user equivalence for the oracle user between your first node and your
third node.
a) Log into ol5-112-rac3 as oracle. Make sure you are in /home/oracle and run the following:
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa # Accept the default settings.
b) Log in as the oracle user on ol5-112-rac1 and copy the "authorized_keys" file to
ol5-112-rac3 using the following commands.
cd ~/.ssh
scp authorized_keys ol5-112-rac3:/home/oracle/.ssh/

c) Next, log in as the oracle user on ol5-112-rac3 and perform the following commands
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys ol5-112-rac1:/home/oracle/.ssh/
d) The "authorized_keys" file on both servers now contains the public keys
generated on all nodes.
e) To enable SSH user equivalency on the cluster member nodes issue the following
commands on ol5-112-rac1 and ol5-112-rac3.
ssh ol5-112-rac1 date
ssh ol5-112-rac3 date
ssh ol5-112-rac1.localdomain date
ssh ol5-112-rac3.localdomain date
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
You should now be able to SSH and SCP between servers without entering
passwords.
2) Now you will add the node.
a) Log into ol5-112-rac1 as oracle and run the following to set the grid environment variables:
$ . grid_env
$ echo $ORACLE_SID (should be +ASM1)
b) Check your pre-grid installation for your third node using the Cluster Verification
Utility.
$ cluvfy stage -pre crsinst -n ol5-112-rac3 (should be successful)

c) Use the Cluster Verification Utility to make sure that you can add your third
node to the cluster (type this command, do not copy/paste).
$ cluvfy stage -pre nodeadd -n ol5-112-rac3 (should be successful)

d) Add your third node to the cluster (type and check this command, do not
copy/paste):
$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={ol5-112-rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={ol5-112-rac3-vip}"

e) Connected as the root user on your third node in a terminal session, execute
the following scripts: /u01/app/oraInventory/orainstRoot.sh and
/u01/app/11.2.0/grid/root.sh

f) Make sure that local and cluster resources are placed properly and that the FRA
ASM disk group is mounted on all three nodes (connected as oracle):
$ . grid_env
$ echo $ORACLE_SID
$ crsctl stat res -t
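As an optional extra check (not part of the original practice), the Cluster Verification Utility can validate the node addition end to end:
$ cluvfy stage -post nodeadd -n ol5-112-rac3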
Practices for Lesson 6
In these practices, you will verify, stop, and start Oracle Clusterware.
You will add and remove Oracle Clusterware configuration files and
back up the Oracle Cluster Registry and the Oracle Local Registry.
Practice 6-1: Verifying, Starting, and Stopping Oracle Clusterware
In this practice, you check the status of Oracle Clusterware using both the operating
system commands and the crsctl utility. You will also start and stop Oracle Clusterware.
1) Connect to the first node of your cluster as the oracle user. You can use the
grid_env script to define ORACLE_SID, ORACLE_HOME, PATH,
ORACLE_BASE for your environment.
$ . grid_env

2) Using the operating system commands, verify that the Oracle Clusterware
daemon processes are running on the current node. (Hint: Most of the Oracle
Clusterware daemon processes have names that end with d.bin.)
$ pgrep -l d.bin

3) Using the crsctl utility, verify that Oracle Clusterware is running on the current node.
$ crsctl check crs

4) Verify the status of all cluster resources that are being managed by Oracle
Clusterware for all nodes.
$ crsctl stat res -t

5) Attempt to stop Oracle Clusterware on the current node while logged in as the
oracle user. What happens? (It fails: stopping Oracle Clusterware requires root
privileges.)
$ crsctl stop crs

6) Switch to the root account and stop Oracle Clusterware only on the current node.
Exit the switch user command when the stop succeeds.
$ su
$ crsctl stop crs

7) Attempt to check the status of Oracle Clusterware now that it has been
successfully stopped.
$ crsctl check crs
$ crsctl check cluster

8) Connect to the second node of your cluster and verify that Oracle Clusterware is
still running on that node. You may need to set your environment for the second
node by using the grid_env utility.
$ crsctl check crs

9) Restart Oracle Clusterware on the first node as the root user. Return to the oracle
account and verify the results. Note: You may need to check the status of all the
resources several times until they have all been restarted.
$ crsctl start crs
$ crsctl stat res -t
Practice 6-2: Adding and Removing Oracle Clusterware Configuration Files
In this practice, you determine the current location of your voting disks and Oracle
Cluster Registry (OCR) files. You will then add another OCR location and remove it.
1) Use the crsctl utility to determine the location of the voting disks that are
currently used by your Oracle Clusterware installation.
$ crsctl query css votedisk

2) Use the ocrcheck utility to determine the location of the Oracle Cluster Registry
(OCR) files.
$ ocrcheck

3) Verify that the FRA ASM disk group is currently online for all nodes using the crsctl
utility.
$ crsctl stat res ora.FRA.dg -t

4) If the FRA ASM disk group is not online, use the asmcmd utility to mount the FRA disk
group as the oracle user. Note: This step may not be necessary if it is already in an online
state on each node. Verify the results. You may have to run the commands on each node.
$ asmcmd mount FRA
$ crsctl stat res ora.FRA.dg -t

5) Switch to the root account and add a second OCR location that is to be stored in
the FRA ASM disk group. Use the ocrcheck command to verify the results.
$ su
$ ocrconfig -add +FRA
$ ocrcheck
$ cat /etc/oracle/ocr.loc

6) On your second node, as the root user, remove the second OCR file that was
added from the first node. Verify the results when completed.
$ ocrconfig -delete +FRA
$ ocrcheck
Practice 6-3: Performing a Backup of the OCR and OLR
In this practice, you determine the location of the Oracle Local Registry (OLR) and
perform backups of the OCR and OLR files.
1) Use the ocrconfig utility to list the automatic backups of the Oracle Cluster
Registry (OCR) and the node or nodes on which they have been performed.
Note: You will see backups listed only if it has been more than four hours
since Grid Infrastructure was installed.
$ ocrconfig -showbackup

2) Perform a manual backup of the OCR.


$ ocrconfig -manualbackup

3) Display only the manual backups that have been performed and identify the
node on which the backup was stored. Do logical backups appear in the display?
(They should not: logical backups taken with ocrconfig -export are not listed by
-showbackup.)
$ ocrconfig -showbackup manual

4) Determine the location of the Oracle Local Registry (OLR) using the ocrcheck
utility.
$ ocrcheck -local
Practices for Lesson 8
In this practice, you will work with Oracle Clusterware log files and learn to
use the ocrdump and cluvfy utilities.
Practice 8-1: Working with Log Files
In this practice, you will examine the Oracle Clusterware alert log and then package
various log files into an archive format suitable to send to My Oracle Support.
1) While connected as the oracle user to your first node, locate and view the
contents of the Oracle Clusterware alert log.
$ cd /u01/app/11.2.0/grid/log/ol5-112-rac1
$ view alertol5-112-rac1.log

2) Navigate to the Oracle Cluster Synchronization Services daemon log


directory and determine whether any log archives exist.
$ cd ./cssd
$ pwd
$ ls -alt ocssd* (probably not)

3) Switch to the root user. Change to $ORACLE_HOME/bin and run the
diagcollection.pl script to gather all log files that can be sent to My Oracle
Support for problem analysis.
$ diagcollection.pl --collect --crshome /u01/app/11.2.0/grid
4) List the resulting log file archives that were generated with the
diagcollection.pl script.
$ ls -la *tar.gz
5) Exit the switch user command to return to the oracle account.
Practice 8-2: Working with OCRDUMP and CLUVFY
In this practice, you will work with the OCRDUMP utility and dump the binary file into
both text and XML representations.
1) As root, dump the first 100 lines of the OCR to standard output using XML
format and view.
$ ocrdump -stdout -xml | head -100 | more
2) Determine the location of the cluvfy utility and its configuration file.
$ which cluvfy
$ cd $ORACLE_HOME/cv/admin
$ pwd
$ cat cvu_config
3) Display the stage options and stage names that can be used with the cluvfy
utility.
$ cluvfy stage -list
4) Perform a postcheck for the ACFS configuration on all nodes.
$ cluvfy stage -post acfscfg -n all
5) Display a list of the component names that can be checked with the cluvfy
utility.
$ cluvfy comp -list
6) Display the syntax usage help for the space component check of the cluvfy
utility.
$ cluvfy comp space -help
7) Verify that on each node of the cluster the /tmp directory has at least 200 MB
of free space in it using the cluvfy utility. Use verbose output.
$ cluvfy comp space -n ol5-112-rac1,ol5-112-rac2,ol5-112-rac3 -l /tmp -z 200M -verbose
Practices for Lesson 9
In this practice, you will perform the database installation on top of the grid infrastructure.
Practice 9-1: Perform RAC Installation
In this practice, you will use the OUI to install the RAC database software and create
a three-node RAC database to make Enterprise Manager DB Console available for the
ASM labs.
1) Reboot the three nodes and check that the resources with ONLINE targets
are ONLINE.
2) Log into ol5-112-rac1 as the oracle user and open a NEW terminal window
(do not use the previous ones).
3) Navigate to /logiciels, unzip the database files
(linux.x86_11gr2_database_1of2.zip and linux.x86_11gr2_database_2of2.zip),
and cd to database. Run ./runInstaller.
Uncheck the security updates checkbox and click the "Next" button.

Accept the "Create and configure a database" option by clicking the "Next" button.
Accept the "Server Class" option by clicking the "Next" button.
Make sure all three nodes are selected, then click the "Next" button.
Click on SSH Connectivity, enter the oracle user's password, and click on Setup.
Accept the "Typical install" option by clicking the "Next" button.
Enter "/u01/app/oracle/product/11.2.0/db_1" for the software location. The storage type
should be set to "Automatic Storage Manager". Enter the appropriate passwords and
database name, in this case oracle1.
Wait for the prerequisite check to complete. If there are any problems either fix them, or
check the "Ignore All" checkbox and click the "Next" button.
If everything is ok in the summary information, click the "Finish" button.
If you have an error related to nodeapps, then run this from the first node:
$ srvctl stop nodeapps -n ol5-112-racn
$ srvctl start nodeapps -n ol5-112-racn
ol5-112-racn is the node for which the error is reported.
Wait while the installation takes place.
Once the software installation is complete, the Database Configuration Assistant (DBCA)
will start automatically.

Once the Database Configuration Assistant (DBCA) has finished, click the "OK" button.
When prompted, run the configuration scripts on each node. When the scripts have been
run on each node, click the "OK" button.

Click the "Close" button to exit the installer.


The RAC database creation is now complete.
Practices for Lesson 11
In these practices, you will adjust ASM initialization parameters, stop and start instances,
and monitor the status of instances.
Practice 11-1: Administering ASM Instances
In this practice, you adjust initialization parameters in the SPFILE, and stop and start the
ASM instances on local and remote nodes.

1) Disk groups are reconfigured occasionally to move older data to slower disks. Even
though these operations occur at scheduled maintenance times in off-peak hours, the
rebalance operations do not complete before regular operations resume. There is some
performance impact to the regular operations. The setting for the ASM_POWER_LIMIT
initialization parameter determines the speed of the rebalance operation. Determine the
current setting and increase the speed by 2.
a) Open a terminal window on the first node as the oracle user and set the
environment to use the +ASM1 instance. Connect to the +ASM1 instance as SYS with
the SYSASM privilege. What is the setting of ASM_POWER_LIMIT?
b) This installation uses an SPFILE. Use the ALTER SYSTEM command to
change ASM_POWER_LIMIT for all nodes.
SQL> show parameter SPFILE
SQL> ALTER SYSTEM SET ASM_POWER_LIMIT=3 SCOPE=BOTH SID='*';
SQL> show parameter ASM_POWER_LIMIT

2) You have decided that, due to other maintenance operations, you want one instance,
+ASM1, to handle the bulk of the rebalance operation, so you will set
ASM_POWER_LIMIT to 1 on instance +ASM2 and 5 on instance +ASM1.
SQL> ALTER SYSTEM SET ASM_POWER_LIMIT=1 SCOPE=BOTH SID='+ASM2';
SQL> ALTER SYSTEM SET ASM_POWER_LIMIT=5 SCOPE=BOTH SID='+ASM1';
SQL> show parameter ASM_POWER_LIMIT
SQL> column NAME format A16
SQL> column VALUE format 999999
SQL> select inst_id, name, value from GV$PARAMETER
where name like 'asm_power_limit';
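Given the settings above, the output should look something like the following: instances 1 and 2 show their per-instance values, while the third instance keeps the cluster-wide value of 3.
   INST_ID NAME             VALUE
---------- ---------------- -------
         1 asm_power_limit        5
         2 asm_power_limit        1
         3 asm_power_limit        3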

Exit the SQL*Plus application.


SQL> exit
3) The ASM instance and all associated applications, database, and listener on one
node must be stopped for a maintenance operation on the physical cabling. Stop all
the applications, ASM, and the listener associated with +ASM1 using srvctl.
a) In a new terminal window, as the oracle user, stop Enterprise Manager on
your first node.
$ . db_env
$ export ORACLE_UNQNAME=RAC
$ emctl stop dbconsole
b) Stop the RAC database.
$ srvctl stop instance -d RAC -n ol5-112-rac1
c) Verify that the database is stopped on ol5-112-rac1. The pgrep command shows that
no RAC background processes are running.
$ pgrep -lf RAC
d) Run the following to set the grid environment variables:
$ . grid_env
Stop the ASM instance +ASM1 using the srvctl stop asm -n command.
$ srvctl stop asm -n ol5-112-rac1
PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2529 : Unable to act on 'ora.asm' because that would require stopping or
relocating 'ora.DATA.dg', but the force option was not specified
e) Attempt to stop the ASM instance on ol5-112-rac1 using the force option, -f.
$ srvctl stop asm -n ol5-112-rac1 -f
f) As the root user, stop the ASM instance with the crsctl stop cluster -n ol5-112-rac1
command. This command will stop all the Cluster services on the node.
$ crsctl stop cluster -n ol5-112-rac1
g) Confirm that the listener has been stopped, for example with the check shown below.
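A simple OS-level check, consistent with the pgrep usage earlier in this lesson, is to look for the listener process; with the node's cluster services down, the command should return nothing:
$ pgrep -lf tnslsnr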

4) Restart all the cluster resources on node ol5-112-rac1.


$ crsctl start cluster -n ol5-112-rac1
5) Verify that the resources, database, and Enterprise Manager are restarted on
ol5-112-rac1. The crsctl status resource -n ol5-112-rac1 command shows that ASM is
online.
6) Set the database environment variables.
$ . db_env
7) Determine the Enterprise Manager DB Control configuration on the cluster.
$ emca -displayConfig dbcontrol -cluster
8) Become the root user and stop all the cluster resources on node ol5-112-rac2.
$ su
$ . grid_env
$ crsctl stop cluster -n ol5-112-rac2
9) What is the status of the RAC database on your cluster?
$ srvctl status database -d RAC
10) As the root user on your first node, start the cluster on ol5-112-rac2.
$ crsctl start cluster -n ol5-112-rac2
11) Did the RAC instance on ol5-112-rac2 start? Use the srvctl status database -d
RAC command as any of the users (oracle, root), as long as the oracle
environment is set for that user. Note: The database may take a couple of minutes
to restart. If the RAC2 instance is not running, try the status command again until
instance RAC2 is running.
$ . db_env
$ srvctl status database -d RAC
Practices for Lesson 12
In these practices, you will add, configure, and remove disk groups and
manage rebalance operations.
Practice 12-1: Administering ASM Disk Groups
In this practice, you will change the configuration of a disk group, and control the
resulting rebalance operations. You will determine the connected clients to the existing
disk groups, and perform disk group checks. You will use several tools, such as EM,
ASMCMD, and ASMCA, to perform the same operations.
1) The FRA disk group has more disks allocated than are needed, so one disk will be
dropped. Remove DISK6 from the disk group using ASMCMD.
a) As the oracle OS user on your first node, confirm that the FRA disk
group is mounted. If it is not mounted, mount it on your first and second
nodes.
ASMCMD> lsdg
$ crsctl status resource ora.FRA.dg -t
b) Use the chdg command with inline XML. Note that the command is typed all
on one line, without a return:
ASMCMD> chdg '<chdg name="FRA" power="5"> <drop> <dsk name="DISK6"/> </drop> </chdg>'
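For reference, the chdg command above is equivalent to the following SQL run against the ASM instance (run one or the other, not both):
SQL> ALTER DISKGROUP FRA DROP DISK DISK6 REBALANCE POWER 5;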
2) In preparation for adding another disk to the DATA disk group, perform
a disk check to verify the disk group metadata. Use the check disk group
command chkdg.
ASMCMD> chkdg DATA
3) Add another disk (DISK7) to the DATA disk group and remove a disk
(DISK4); the rebalance operation must wait until a quiet time and then
proceed as quickly as possible. Use Enterprise Manager Database Control. On
your classroom PC desktop, open a browser and enter this address:
https://ol5-112-rac2:1158/em. Log in using sys and oracle1 as sysdba.
Go to the Cluster tab, then the Target tab. Click +ASM1.ol5-112-rac1.localdomain.
Click the Disk Groups tab.
a) Add DISK7 to DATA.
b) Remove DISK4 from DATA.

4) Perform the pending rebalance operation on disk group DATA.


For each step, the screen or page is shown in brackets, followed by the action to take.
a) [Automatic Storage Management: +ASM1.ol5-112-rac1.localdomain] Select the DATA disk group check box. Click Rebalance.
b) [Confirmation] Click Show Advanced Options. Set Rebalance Power to 11.
c) [Confirmation] Click Show SQL and read the SQL statement.
d) [Confirmation] Click Yes.
e) [Automatic Storage Management: +ASM1.ol5-112-rac1.localdomain] Update Message: Disk Group DATA rebalance request has been submitted. Click the DATA disk group.
f) [Disk Group: DATA] Observe the Used (%) column for the various disks in the DATA disk group. Notice the Status column (DISK4 is marked as DROPPING). Click the browser's Refresh button.
g) [Disk Group: DATA] Notice the change in the Used (%) column. Refresh again. After a few minutes, DISK4 will no longer appear in the list of member disks.

5) Examine disk I/O statistics with ASMCMD.
a) Examine the disk I/O statistics using the lsdsk --statistics command.
ASMCMD> lsdsk --statistics
b) Examine the disk statistics bytes and time for the DATA disk group with
the iostat -t -G DATA command.
ASMCMD> iostat -t -G DATA
Practices for Lesson 13
In this practice, you will administer ASM files, directories, and templates.
Practice 13-1: Administering ASM Files, Directories, and Templates
In this practice, you use several tools to navigate the ASM file hierarchy and manage
aliases.
1) ASM is designed to hold database files in a hierarchical structure. After setting up
the grid environment, navigate the RAC database files with ASMCMD. Use the cd
and ls commands as you would in an OS shell.
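For example, a short session might look like this (the directory contents will vary with your database layout):
$ . grid_env
$ asmcmd
ASMCMD> cd +DATA/RAC
ASMCMD> ls
ASMCMD> exit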
2) The default structure may not be the most useful for some sites. Create a set of
aliases for directories and files to match a file system. Use EM.

For each step, the screen or page is shown in brackets, followed by the action to take.
a) [Cluster Database: RAC] In the Instances section, click the +ASM1_node_name link.
b) [Automatic Storage Management: +ASM1_node_name Home] Click the Disk Groups tab.
c) [Automatic Storage Management: +ASM1_node_name Disk Groups] Click the DATA disk group link.
d) [Disk Group: DATA General] Click the Files tab.
e) [Disk Group: DATA Files] Select RAC. Click Create Directory.
f) [Create Directory] Enter New Directory: oradata. Click Show SQL.
g) [Show SQL] The SQL that will be executed is shown: ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/RAC/oradata'. Click Return.
h) [Create Directory] Click OK.
i) [Disk Group: DATA Files] Expand the RAC folder. Expand the DATAFILE folder. Select EXAMPLE.nnn.NNNNNN. Click Create Alias.
j) [Create Alias] Enter User Alias: +DATA/RAC/oradata/example_01.dbf. Click Show SQL.
k) [Show SQL] The SQL that will be executed is shown: ALTER DISKGROUP DATA ADD ALIAS '+DATA/RAC/oradata/example_01.dbf' FOR '+DATA/RAC/DATAFILE/EXAMPLE.264.698859675'. Click Return.
l) [Create Alias] Click OK.
m) [Disk Group: DATA Files] Click the EXAMPLE.nnn.NNNNNN link.
n) [EXAMPLE.nnn.NNNNNN: Properties] Notice the properties displayed in the General section. Click OK.
o) [Disk Group: DATA Files] Click the example_01.dbf link.
p) [example_01.dbf: Properties] Note that the properties include System Name. Click OK.
q) Exit EM.

3) Using ASMCMD, navigate to view example_01.dbf and display the properties.
Using the system name, find the alias. Use the ls -a command.
ASMCMD> ls +DATA/RAC/oradata/*
ASMCMD> ls -l +DATA/RAC/oradata/*
ASMCMD> ls --absolutepath +DATA/RAC/DATAFILE/example*
4) Create a new tablespace. Name the file using a full name. Use EM.
For each step, the screen or page is shown in brackets, followed by the action to take.
x) [Cluster Database: RAC Home] Click the Server tab.
y) [Cluster Database: RAC Server] In the Storage section, click the Tablespaces link.
z) [Create Tablespace: General] Enter Name: XYZ. In the Datafiles section, click Add.
aa) [Add Datafile] Enter Alias directory: +DATA/RAC/oradata, Alias name: XYZ_01.dbf. Click Continue.
bb) [Create Tablespace: General] Click OK.
cc) [Tablespaces]

5) Create another data file for the XYZ tablespace. Allow the file to receive a
default name. Did both files get system-assigned names?

For each step, the screen or page is shown in brackets, followed by the action to take.
a) [Tablespaces] Select the XYZ tablespace. Click Edit.
b) [Edit Tablespace: XYZ] In the Datafiles section, click Add.
c) [Add Datafile] Click Continue.
d) [Edit Tablespace: XYZ] Click Show SQL.
e) [Show SQL] Note: The SQL provides only the disk group name. Click Return.
f) [Edit Tablespace: XYZ] Click Apply.
g) [Edit Tablespace: XYZ] In the Datafiles section, note the names of the two files. One name was specified in the previous practice step, xyz_01.dbf, and the other is a system-assigned name. Click the Database tab.
h) [Cluster Database: RAC] In the Instances section, click the +ASM1_ol5-112-rac1.localdomain link.
i) [Automatic Storage Management: +ASM1_ol5-112-rac1.localdomain Home] Click the Disk Groups tab.
j) [Automatic Storage Management Login] Enter Username: SYS, Password: oracle1. Click Login.
k) [Automatic Storage Management: +ASM1_ol5-112-rac1.localdomain Disk Groups] Click the DATA disk group link.
l) [Disk Group: DATA General] Click the Files tab.
m) [Disk Group: DATA Files] Expand the RAC folder. Expand the DATAFILE folder. Note that there are two system-named files associated with the XYZ tablespace. Expand the oradata folder. Click the xyz_01.dbf link.
n) [XYZ_01.dbf: Properties] Observe that the xyz_01.dbf file is an alias to a file with a system name. Click OK.
o) [Disk Group: DATA Files]
