ORACLE RAC INSTALLATION

AUTOMATIC STORAGE MANAGEMENT (ASM)

Types of file storage for database files:
1. Regular files on a file system
2. OMF (Oracle Managed Files)
3. Files on raw devices
4. ASM files

1. Regular file system: the file system arranges the files and gives effective utilization of the disk. The access path when accessing the database is Application -> Database -> File System -> OS -> Hardware.

When the database is accessed through a file system, a small OS buffer cache gives low I/O performance.

2. OMF (Oracle Managed Files). Parameters:
   db_create_file_dest         -> storage location for database files
   db_create_online_log_dest_1 -> storage location for redo log files
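For illustration (not part of the original notes; the directory paths are only examples), OMF can be seen in action like this:
   $sqlplus / as sysdba
   >alter system set db_create_file_dest='/u01/oradata' scope=both;
   >alter system set db_create_online_log_dest_1='/u01/oralog' scope=both;
   >create tablespace omf_demo;      -- Oracle names and places the datafile automatically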

3. RAW DEVICES: I/O bypasses the OS buffer cache. Limitations:
   1. No file system check is done.
   2. Only one file can be stored on a raw device.
   3. The size of a raw device cannot be increased.
   4. Only 16 raw devices can be created.
   5. Normal Linux commands do not work on raw devices, but dd works.
   6. I/O statistics cannot be calculated on raw devices.
Note: for backing up files on raw devices we cannot use normal Unix commands like cp or tar; we have to use dd, which copies the entire raw device irrespective of any file system.
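A minimal sketch of such a backup with dd (the device and file names are only examples):
   #dd if=/dev/raw/raw5 of=/backup/raw5.dmp bs=1M      # back up the entire raw device
   #dd if=/backup/raw5.dmp of=/dev/raw/raw5 bs=1M      # restore it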

LVM: in order to use raw devices conveniently we can configure a Logical Volume Manager, e.g. Solaris Volume Manager (supplied with the O/S) or VERITAS Volume Manager (a third-party tool). An LVM implements RAID (Redundant Array of Inexpensive Disks):
   RAID 0 - striping: gives performance, but we cannot have a good backup (no protection).
   RAID 1 - mirroring: gives availability.
   RAID 5 - striping with parity: gives performance plus availability.

4. ASM: Oracle 10g introduced ASM, which is similar to an LVM. ASM gives the convenience of OMF plus the performance of raw devices: performance through striping and availability through mirroring. The ASM instance needs only about 60 to 120 MB and contains a few background processes.

Disk Group: a combination of raw devices. The number of failure groups gives the redundancy level:
   1. External redundancy - one failure group (mirroring is done outside ASM).
   2. Normal redundancy   - two failure groups.
   3. High redundancy     - three failure groups.

Two types of stripe units:
   1. Coarse (one allocation unit) - database files.
   2. Fine (128 KB)                - control files / redo log files.
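As a quick illustration (a sketch; the disk group names extdg and nordg are the ones created later in these notes), the redundancy type of each disk group can be checked from the ASM instance:
   $export ORACLE_SID=+ASM1
   $sqlplus / as sysasm
   >select name, type, total_mb, free_mb from v$asm_diskgroup;
      -- TYPE shows EXTERN, NORMAL or HIGH depending on the redundancy level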

ASM INSTANCE PARAMETERS:
   instance_name=+ASM
   instance_type=asm
   asm_diskgroups
   asm_diskstring
   large_pool_size
   shared_pool_size
   asm_power_limit (default 1) - controls the speed of rebalancing when raw devices are added or dropped.

We can have 63 disk groups and one million ASM files in a disk group. We can add raw devices to a disk group as well as drop them; when we add one more raw device to an existing disk group, Oracle does automatic rebalancing of the extents. Mirroring is done at the extent level, not at the disk level.

ASM background processes:
   10g (mandatory): ASMB, RBAL
   11g adds: ARBn, GMON, KATE, PSP0, PZ9x
   1. RBAL: the master rebalance process; it coordinates the rebalance operation and works out the plan for load balancing when a new disk is added to an existing disk group. In a database instance, RBAL performs the opening of the disks in the disk group.
   2. ARBn: the slave rebalance processes; they move extents from one disk to another within a disk group.
   3. ASMB: runs in the RDBMS (database) instance and connects to a foreground process of the ASM instance; it handles the communication between the database and ASM.
   4. GMON: group monitor; responsible for disk group monitoring operations that maintain the ASM metadata inside the disk groups.
   5. KATE: the "conductor" of ASM; responsible for bringing the disks of a disk group online.
   6. PSP0: process spawner; responsible for creating all the other background processes.
   7. PZ9x: responsible for selecting from the GV$ views.

Note: the ASM instance is a gateway between the ASM disk groups and the ASM client databases. In 11g, SYSASM is the role used to connect to the ASM instance.
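Tying asm_power_limit to the rebalance operation, a small sketch (the disk /dev/sdb14 is only an assumed example, not part of these notes):
   $export ORACLE_SID=+ASM1
   $sqlplus / as sysasm
   >alter diskgroup nordg add disk '/dev/sdb14' rebalance power 5;     -- overrides asm_power_limit for this operation
   >select operation, state, power, est_minutes from v$asm_operation;  -- RBAL plans the work, ARBn moves the extents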

11g enhancements in disk groups: the maximum ASM file size is 140 PB (peta bytes) in an external redundancy disk group, 42 PB with normal redundancy and 15 PB with high redundancy; an ASM instance supports 63 disk groups and up to ten thousand ASM disks.

Header status (to check whether a raw device is already used for a disk group or not):
   CANDIDATE - ready to be used
   MEMBER    - already part of an ASM disk group
   FORMER    - previously part of an ASM disk group
   FOREIGN   - contains non-ASM data; currently not usable; such disks must be made candidates before they can be used for disk groups.

Making a raw device a candidate: #mke2fs -j /dev/sdb10

Oracle Software Installation:
   #crsctl check crs
   #crs_stat -t
   Repeat it on node2.
   $cd database
   $./runInstaller -> next -> next -> next -> click on "Select All" -> next -> next -> "Install software only" -> next -> install.
   On node 2 (to watch the software being copied):
   $cd $ORACLE_HOME
   $watch ls

Startup Procedure:
   1. Start the ASM instance.
   2. Mount the disk groups.
   3. Start up the database.
   Shut down in the reverse order.
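A sketch of this startup order, using the instance and disk group names created later in these notes:
   $export ORACLE_SID=+ASM1
   $sqlplus / as sysasm
   >startup                        -- 1. start the ASM instance
   >alter diskgroup nordg mount;   -- 2. mount the disk groups (if not already mounted via asm_diskgroups)
   >exit
   $export ORACLE_SID=dbrac1
   $sqlplus / as sysdba
   >startup                        -- 3. start the database instance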

Pfile Management: PFILE = static; SPFILE = dynamic, available from 9i onwards. When we are setting up RAC we have a global pfile, a global spfile and a local pfile on each node.

Since all the instances on all the nodes must read the same parameter file, we need a shared spfile, and since it must be shared it has to be kept on a raw device of the shared storage. But a file cannot be created directly on a raw device, so we use a symbolic link to perform this operation.

Note: if a link to a raw device is created for a source file that already has data, the link is created but the existing data is not copied to the raw device; only the data written to the source file after the link is created goes to the raw device. So we create a link for the (not yet existing) global spfile pointing to a raw device, in a specific directory:
   $cd /home/oracle/ASM
   $ln -s /dev/raw/raw7 spfile+ASM.ora

To create the global spfile we need a global pfile, which can be created in the dbs directory of any one node. The spfile is then created through the link:
   >create spfile='/home/oracle/ASM/spfile+ASM.ora' from pfile;
Now the physical global spfile is created on the raw device, at the location of the link, from the global pfile.

Layout:
   GLOBAL PFILE  - $ORACLE_HOME/dbs (any one node)
   GLOBAL SPFILE - /dev/raw/raw7 (shared storage)
   NODE1: LOCAL PFILE1 - $ORACLE_HOME/dbs
   NODE2: LOCAL PFILE2 - $ORACLE_HOME/dbs

To start the instances on the nodes we need the local pfiles. Each local pfile contains only two parameters:
   1. instance_number
   2. spfile='/home/oracle/ASM/spfile+ASM.ora'
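A quick check (illustrative only) that the ASM instance really picks up the shared spfile through the link:
   $export ORACLE_SID=+ASM1
   $sqlplus / as sysasm
   >show parameter spfile            -- should point to /home/oracle/ASM/spfile+ASM.ora, the link to /dev/raw/raw7
   >show parameter instance_number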

RAC specific parameters:
   cluster_database=true or false
   cluster_database_instances=<n>  (e.g. 20; the maximum is 100 in 10g and higher in 11g)

Redo Log File management: the latest transactional information is written to the redo log files. Each instance must have its own redo log thread, so that if any instance crashes the other instances continue using their own redo logs, with log switches and generation of archive logs. Note: if we had only one redo log thread for multiple instances, all instances would write their transactions to the same redo logs; when an instance crashed, instance crash recovery would be required but would not be possible, because its redo would already have been overwritten by the other instances. GRD (Global Resource Directory) recovery is done by LMON.

Note: whenever an instance is to be added to the database, we need to create a redo log thread and an undo tablespace for that instance.

RDBMS instance layout:
   Global pfile   - $O_H/dbs
   Global spfile  - normal disk group
   Node1: local pfile1 with instance_number=1
   Node2: local pfile2 with instance_number=2

Whenever any parameter value in the spfile is modified, it becomes effective for all the instances accessing that file; to make a value effective only for a specific instance we use the SID option after SCOPE (see the sketch below).
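A minimal sketch of the SID clause (the parameter and values are only an example):
   $sqlplus / as sysdba
   >alter system set open_cursors=300 scope=spfile sid='dbrac1';   -- only for instance dbrac1
   >alter system set open_cursors=300 scope=spfile sid='*';        -- for all instances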

Creating the ASM instance and disk groups:
#/crs/oracle/bin/crsctl check crs
#/crs/oracle/bin/crs_stat -t
$mkdir ASM
$cd $ORACLE_HOME/dbs
$vi init+ASM.ora
cluster_database=true
instance_type=asm
asm_diskstring='/dev/sdb*'
diagnostic_dest=/home/oracle/ASM
large_pool_size=12m
remote_login_passwordfile=shared
+ASM1.instance_number=1
+ASM2.instance_number=2
:wq
$cd /home/oracle/ASM
$ln -s /dev/raw/raw7 spfile+ASM.ora
$export ORACLE_SID=+ASM1
$sqlplus / as sysasm          (sysasm is the new role in 11g)
>create spfile='/home/oracle/ASM/spfile+ASM.ora' from pfile;
>exit

Creating the local ASM instance on node1:
$vi $ORACLE_HOME/dbs/init+ASM1.ora
instance_number=1
spfile='/home/oracle/ASM/spfile+ASM.ora'
:wq
$sqlplus / as sysasm
>startup nomount
>select name, path, header_status from v$asm_disk;
#mke2fs -j /dev/sdbX          #for converting MEMBER/FOREIGN disks to CANDIDATE; X = 8,9,10,11,12,13
>create diskgroup extdg external redundancy disk '/dev/sdb8','/dev/sdb9';
>create diskgroup nordg normal redundancy failgroup f1 disk '/dev/sdb10','/dev/sdb11' failgroup f2 disk '/dev/sdb12','/dev/sdb13';
>select name, path, header_status from v$asm_disk;
>alter diskgroup extdg mount;   -- we will get an error like "already mounted", because a newly created disk group is mounted automatically
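For example (a sketch, not in the original steps), in the same SQL*Plus session on node1 the failure group layout of the new disk groups can be verified:
   >select group_number, name, path, failgroup, header_status from v$asm_disk order by group_number;
      -- the nordg disks should now show MEMBER and be split across failgroups F1 and F2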

On node2 - adding the +ASM2 instance:
$mkdir ASM
$cd ASM
$ln -s /dev/raw/raw7 spfile+ASM.ora
$ls -l
$export ORACLE_SID=+ASM2
$cd $ORACLE_HOME/dbs
$vi init+ASM2.ora
instance_number=2
spfile='/home/oracle/ASM/spfile+ASM.ora'
:wq
$sqlplus / as sysasm
>startup nomount
>alter diskgroup extdg mount;
>alter diskgroup nordg mount;
# Only for the first time do we have to mount the disk groups manually; after that we don't need to do it.
>select instance_number, instance_name, status from v$instance;
>select instance_number, instance_name, status from gv$instance;
Note: GV$ views (g = global) show all the instances of the cluster, whereas V$ views show only the current instance.
>exit
$asmcmd
>ls
>exit
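As an extra check (illustrative, not from the original notes), both ASM instances should now see the mounted disk groups:
   $sqlplus / as sysasm
   >select inst_id, name, state from gv$asm_diskgroup;
      -- one MOUNTED row per disk group for INST_ID 1 and 2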

Creating the RDBMS instances and the database:
On Node1:
$export ORACLE_SID=DBRAC
$cd $ORACLE_HOME/dbs
$vi initDBRAC.ora
cluster_database=false
cluster_database_instances=5
compatible=11.1.0
control_files='+nordg/dbrac/control.ctl'
db_name=DBRAC
db_domain=rp.com
db_files=50
global_names=true
job_queue_processes=3
log_checkpoint_interval=10000
undo_management=auto
shared_pool_size=120m
open_cursors=50
processes=50
dbrac1.instance_name=dbrac1
dbrac2.instance_name=dbrac2
dbrac1.instance_number=1
dbrac2.instance_number=2
dbrac1.thread=1
dbrac2.thread=2
dbrac1.undo_tablespace=undo1
dbrac2.undo_tablespace=undo2
diagnostic_dest=/home/oracle/DBRAC
#log_archive_dest='+disk1/dbrac/ARCH'
remote_login_passwordfile=exclusive
:wq
$cd
$mkdir DBRAC
$sqlplus / as sysdba
>create spfile='+nordg/spfileDBRAC.ora' from pfile;
>exit
$vi cr8db.sql
create database DBRAC maxinstances 5
datafile '+nordg/dbrac/system01.dbf' size 200m autoextend on
sysaux datafile '+nordg/dbrac/sysaux.dbf' size 150m autoextend on
undo tablespace undo1 datafile '+nordg/dbrac/undo01.dbf' size 50m
default temporary tablespace temp tempfile '+nordg/dbrac/temp01.dbf' size 50m
default tablespace userdata datafile '+nordg/dbrac/userdata.dbf' size 100m
logfile group 1 '+extdg/dbrac/thread1_redo1a.log' size 4m,
        group 2 '+extdg/dbrac/thread1_redo2a.log' size 4m
controlfile reuse;
:wq
$vi run.sql
@$ORACLE_HOME/rdbms/admin/catalog.sql
@$ORACLE_HOME/rdbms/admin/catproc.sql
@$ORACLE_HOME/rdbms/admin/catclustdb.sql
conn system/manager
@?/sqlplus/admin/pupbld.sql
:wq
$export ORACLE_SID=dbrac1
$cd $ORACLE_HOME/dbs
$vi initdbrac1.ora
instance_number=1
spfile='+nordg/spfileDBRAC.ora'
:wq
$sqlplus / as sysdba
>startup nomount
>@cr8db.sql
>@run.sql
>select count(*) from tab;
>exit
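A quick illustrative check (not part of the original steps) that the new database files were really created inside ASM:
   $export ORACLE_SID=dbrac1
   $sqlplus / as sysdba
   >select name from v$controlfile;
   >select name from v$datafile;
   >select member from v$logfile;
      -- every name should start with +nordg or +extdg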

$export ORACLE_SID=+ASM1
$asmcmd
>cd extdg
>mkdir archive
>exit

$export ORACLE_SID=dbrac1
$sqlplus / as sysdba
>startup
>archive log list
>show parameter cluster
>alter system set cluster_database=true scope=spfile;
>shut immediate
>startup mount
>alter database archivelog;
>alter system set log_archive_dest_1='Location=+extdg/archive';
>alter system set db_recovery_file_dest_size=2G;
>alter system set db_recovery_file_dest='+extdg';
>alter database flashback on;
>alter database open;
>create undo tablespace undo2 datafile '+nordg/dbrac/undo2.dbf' size 50m;
>alter database add logfile thread 2 group 3 '+extdg/redo3.log' size 4m, group 4 '+extdg/redo4.log' size 4m;
>alter database enable public thread 2;
>select instance_name, instance_number, status from gv$instance;
Note: a redo thread is a combination of (at least) two redo log groups.

On node2 - adding the RDBMS instance:
$export ORACLE_SID=dbrac2
$vi $ORACLE_HOME/dbs/initdbrac2.ora
instance_number=2
spfile='+nordg/spfileDBRAC.ora'
:wq
$cd
$mkdir DBRAC
$sqlplus / as sysdba
>startup
>select instance_name, instance_number, status from gv$instance;

Shutdown process:
1. Shut down both the database instances on node1 and node2.
2. Shut down both the +ASM instances.
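A sketch of this shutdown order on one node (repeat with dbrac2/+ASM2 on node2); it assumes the services are not yet registered with srvctl:
   $export ORACLE_SID=dbrac1
   $sqlplus / as sysdba
   >shut immediate
   >exit
   $export ORACLE_SID=+ASM1
   $sqlplus / as sysasm
   >shut immediate
   >exit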

Adding a node [for load balancing]:
As the root user:
1. Update /etc/hosts.
2. Configure networking.
3. Configure the shared storage.
4. Install the required O/S RPMs.
5. Create the user, groups & permissions.
6. Update the kernel parameters.
7. Map the raw devices.
8. Install the cluster RPM.

As the oracle user:
1. Update the bash profile ($O_H & $C_H).
2. Configure ssh.
3. Cluvfy check.
4. Add the cluster s/w from an existing node.
5. Add the oracle s/w from an existing node.
6. Add the services (listener, ASM instance, RDBMS instance).
7. Register the services into the OCR (see the srvctl sketch after this section).

Cluvfy check:
$cd /crs/home/
$./runcluvfy.sh comp nodecon -n rac1,rac2,rac9
$./runcluvfy.sh comp ssa -n rac1,rac2,rac9
$./runcluvfy.sh stage -pre crsinst -n rac1,rac2,rac9

Adding the cluster software:
#xhost +
$cd /crs/oracle/oui/bin
$./addnode.sh
next -> give the public hostname rac9 -> stop the VIP on the new node (#ifconfig eth1:0 down) -> click on next -> click on install -> run the scripts according to the sequence -> ok -> finish.
#/crs/oracle/bin/crsctl check crs
#/crs/oracle/bin/crs_stat -t

Adding the oracle s/w from any of the existing nodes:
$cd $ORACLE_HOME/oui/bin
$./addnode.sh
next -> next -> install.

Deleting a node:
1. Stop the services.
2. Remove the services from crs.
3. Remove the node apps from crs.
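To illustrate steps 6 and 7 for the oracle user (registering the new services in the OCR), a minimal sketch using srvctl; the instance names +ASM3 and dbrac3 for the new node rac9 are assumptions, and the exact srvctl options can differ between releases:
   $srvctl add database -d DBRAC -o $ORACLE_HOME        # once, if the database is not yet registered
   $srvctl add asm -n rac9 -i +ASM3 -o $ORACLE_HOME     # register the new ASM instance
   $srvctl add instance -d DBRAC -i dbrac3 -n rac9      # register the new database instance
   $srvctl start asm -n rac9
   $srvctl start instance -d DBRAC -i dbrac3
   $crs_stat -t                                         # verify the new resources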