
NEW: Oracle Real Application Clusters (RAC)

and Oracle Clusterware 11g Release 2


Markus Michalewicz
Product Manager Oracle Clusterware

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Agenda
Overview
Easier Installation
SSH Setup, prerequisite checks, and FixUp-scripts
Automatic cluster time synchronization configuration
OCR & Voting Files can be stored in Oracle ASM

Easier Management

Policy-based and Role-separated Cluster Management


Oracle EM-based Resource and Cluster Management
Grid Plug and Play (GPnP) and Grid Naming Service
Single Client Access Name (SCAN)

Summary


The Traditional Data Center


Expensive and Inefficient
Dedicated silos are inefficient
Sized for peak load
Constrained performance
Difficult to scale
Expensive to manage
[Figure: dedicated per-application stacks]

Grid Computing
Virtualize Pools and Resources

Oracle RAC One Node


Better Virtualization for Databases
A virtualized single-instance database that delivers the value of server virtualization to databases on physical servers:
  Server consolidation
  Online upgrade to RAC
  Standardized deployment across all Oracle databases
  Built-in cluster failover for high availability
  Live migration of instances across servers
  Rolling patches for single-instance databases

Oracle Grid Infrastructure


The universal grid foundation
Standardize infrastructure software
Eliminates the need for 3rd-party solutions
Combines Oracle Automatic Storage Management (ASM) & Oracle Clusterware
Typically used by System Administrators
Includes:
  Oracle ASM
  ASM Cluster File System (ACFS)
  ACFS Snapshots
  Oracle Clusterware
  Cluster Health Manager

[Figure: databases and applications (PSFT, Siebel, DB1, DB2, RAC, RAC One, FREE) running on Oracle Grid Infrastructure, which stores file system binaries, OCR & Voting Files, and DB datafiles]
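
The ACFS Snapshots listed above can be exercised directly with the acfsutil command line. A brief, hedged sketch (mount point and snapshot name are illustrative, not from the slide):

# Create and inspect a snapshot of an ACFS file system:
acfsutil snap create backup_snap /u01/app/acfsmounts/myacfs
acfsutil snap info /u01/app/acfsmounts/myacfs
# Snapshot data appears under <mountpoint>/.ACFS/snaps/backup_snap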

Oracle Database 11g Release 2


Lowering CapEx and OpEx using Oracle RAC

[Figure: with Oracle RAC and Oracle ASM on Oracle Grid Infrastructure, CAPEX drops through low-cost servers, consolidation, fewer disks/LUNs, and a full Oracle software stack; OPEX drops through data center consolidation, easy provisioning, and easy management]

Easier
Installation

Oracle Database 11g Release 2


Easier Grid Installation and Provisioning
New intelligent installer

40% fewer steps to install Oracle Real Application Clusters and Oracle Grid Infrastructure
Integrated Validation and Automation
Nodes can be easily repurposed
Nodes can be dynamically added to or removed from the cluster
Network and storage information is read from the profile and configured automatically
No need to manually prepare a node (see the sketch below)
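
As a hedged illustration of dynamic node addition (the node name is a placeholder; silent-mode syntax as in 11g Release 2):

# From the Grid home of an existing node, extend the cluster to node3:
$GRID_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={node3}"
# With GNS in use, no per-node VIP needs to be supplied; without GNS,
# "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}" would be passed as well.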

Easier Grid Installation



Typical and Advanced Installation

Software Only Installation for Oracle Grid Infrastructure

Grid Naming Service (GNS) and Auto assignment of VIPs

SSH Setup, prerequisite checks, and FixUp-scripts

Automatic cluster time synchronization configuration

OCR & Voting Files can be stored in Oracle ASM

Secure Shell (SSH) Setup

CVU-based Prerequisite Checks and FixUp-Scripts
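
A minimal sketch of driving these checks from the command line (node names are illustrative):

# Pre-installation checks for Grid Infrastructure, with FixUp script generation:
./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
# CVU reports failed OS prerequisites and generates a runfixup.sh
# script to be executed as root on the affected nodes.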

Automatic Cluster Time Synchronization

Oracle Grid Infrastructure

Oracle Cluster Time Synchronization Service (CTSS)

Time synchronization between cluster nodes is crucial
Typically, a central time server, accessed via NTP, is used to synchronize the time in the data center
Oracle provides the Oracle CTSS as an alternative for cluster time synchronization
CTSS runs in 2 modes (a quick check follows below):
  Observer mode: whenever NTP is installed on the system, CTSS only observes
  Active mode: time in the cluster is synchronized against the CTSS master (node)
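
To see which mode CTSS is currently running in, a minimal sketch (run from any cluster node):

crsctl check ctss
# reports whether CTSS is in Observer or Active mode
cluvfy comp clocksync -n all -verbose
# verifies clock synchronization across all cluster nodes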

OCR / Voting Files stored in Oracle ASM

Create ASM Disk Group

The OCR Managed in Oracle ASM
The OCR is managed like a datafile in ASM (new type)
It adheres completely to the redundancy settings for the disk group (DG)

Voting Files Managed in Oracle ASM
Unlike the OCR, Voting Files are stored on distinguished ASM disks:

[GRID]> crsctl query css votedisk
 1.  2  1212f9d6e85c4ff7bf80cc9e3f533cc1 (/dev/sdd5) [DATA]
 2.  2  aafab95f9ef84f03bf6e26adc2a3b0e8 (/dev/sde5) [DATA]
 3.  2  28dd4128f4a74f73bf8653dabd88c737 (/dev/sdd6) [DATA]
Located 3 voting disk(s).

ASM auto-creates 1/3/5 Voting Files
Based on External/Normal/High redundancy and on Failure Groups in the Disk Group
Per default, there is one failure group per disk
ASM will enforce the required number of disks
New failure group type: Quorum Failgroup
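
A hedged sketch of moving the OCR and Voting Files into an existing ASM disk group (+DATA as in the example above; the old OCR location is illustrative; ocrconfig must be run as root):

# Add an OCR location inside ASM, then remove the old one:
ocrconfig -add +DATA
ocrconfig -delete /dev/raw/raw1
# Migrate the Voting Files into the same disk group and verify:
crsctl replace votedisk +DATA
crsctl query css votedisk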

Easier
Management

Oracle Database 11g Release 2


Easier Grid Management
Oracle Enterprise Manager (EM) is able to manage the full stack, including Oracle Clusterware:
  Manage and monitor clusterware components
  Manage and monitor application resources

New Grid Concepts:
  Server Pools
  Grid Plug and Play (GPnP)
  Grid Naming Service (GNS)
  Auto-Virtual IP assignment
  Single Client Access Name (SCAN)

Easier Grid Management



OCR & Voting Files can be stored in Oracle ASM

Clusterized Commands

Policy-based and Role-separated Cluster Management

Oracle EM-based Resource and Cluster Management

Grid Plug and Play (GPnP) and Grid Naming Service

Single Client Access Name (SCAN)

New Grid Concept: Server Pools


Foundation for Dynamic Cluster Partitioning

[Figure: server pools hosting Oracle RAC DBs (DB1, DB2, RAC One, FREE) and applications (PSFT, Siebel) on Oracle Grid Infrastructure]

Logical division of a cluster into pools of servers
Each pool hosts applications (which could be databases or other applications)

Why Use Server Pools?
  Easy allocation of resources to workloads
  Easy management of Oracle RAC: just define instance requirements (number of nodes, no fixed assignment)
  Facilitates consolidation of applications and databases on clusters

Policy-based Cluster Management


Ensure Isolation based on Server Pools

[Figure: server pools isolating PSFT, Siebel, and the RAC databases on Oracle Grid Infrastructure; resource management without policies vs. with policies]

Policy-based management uses server pools to:
  Enable dynamic capacity assignment when needed
  Ensure isolation where necessary (dedicated servers in a cluster)

In order to guarantee that:
  Applications get the required minimum resources (whenever possible)
  Applications do not take resources from more important applications


Enable Policy-based Cluster Management


Define Server Pools using the appropriate Definition

A Server Pool is defined by 4 attributes (see the sketch below):
1. Server Pool Name
2. Min: specifies the minimum number of servers that should run in the server pool
3. Max: states the maximum number of servers that can run in the server pool
4. Imp: importance specifies the relative importance between server pools. This parameter is of relevance at the time of server assignment to server pools, or when servers need to be re-shuffled in the cluster due to failures.
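
A hedged sketch of these attributes in practice (pool name, database name, and Oracle home are illustrative):

# Create a pool with Min=2, Max=4, Imp=10 and show its definition:
srvctl add srvpool -g backoffice -l 2 -u 4 -i 10
srvctl config srvpool -g backoffice
# Place a policy-managed database into the pool:
srvctl add database -d pmrac -o /u01/app/oracle/product/11.2/db_1 -g backoffice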

Role-separated Cluster Management


Addresses organizations with strict separation of duty
Role-separated management is implemented in 2 ways:
1. Vertically: use a different user (and groups) for each layer in the stack
2. Horizontally: ACLs on server pools for policy-managed DBs / Apps. (a sketch follows below)

The default installation assumes no separation of duty

[Figure: role separation with distinct OS users: DBA1 manages PSFT, DBA2 manages RAC DB1/DB2, Adm1 manages Siebel, while a dedicated Grid user manages Oracle Grid Infrastructure]
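
A minimal sketch of the horizontal (ACL-based) approach, assuming a server pool ora.backoffice and an OS user dba1:

# Grant dba1 read/execute on the server pool and inspect the resulting ACL:
crsctl setperm serverpool ora.backoffice -u user:dba1:r-x
crsctl getperm serverpool ora.backoffice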

Oracle EM Integrated Server Pool Mgmt

Grid Plug and Play (GPnP)


Foundation for Dynamic Cluster Management
GPnP eliminates the need for a per-node configuration:
  It is an underlying grid concept that enables the automation of operations in the cluster
  Allows nodes to be dynamically added to or removed from the cluster
  Makes it easier to manage and build large clusters
  It is the basis for the Grid Naming Service (GNS)

Technically, GPnP is based on an XML profile:
  Defines the node personality (e.g. cluster name, network classification)
  Created during installation
  Updated with every relevant change (using oifcfg, crsctl)
  Stored in local files per home and in the OCR
  Wallet protected
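
A hedged way to inspect the profile on a cluster node (tools as shipped with the 11g Release 2 Grid home):

# Print the current GPnP profile XML:
gpnptool get
# Show the network classifications (public / cluster_interconnect) it records:
oifcfg getif
# The local profile copy resides under $GRID_HOME/gpnp/profiles/peer/profile.xml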

GPnP is apparent in things that you do not see and that you are not asked for (anymore):
the OUI does not ask for a private node name anymore.

Grid Naming Service (GNS)


Dynamic Virtual IP and Naming
The Grid Naming Service (GNS) allows dynamic name resolution in the cluster

[Figure: GNS serving the databases and applications running on Oracle Grid Infrastructure]

The cluster manages its own virtual IPs:
  Removes hard-coded node information
  No VIPs need to be requested if the cluster changes
Enables nodes to be dynamically added to or removed from the cluster
Defined in the DNS as a delegated domain, e.g. mycluster.myco.com
DHCP provides IPs inside the delegated domain

Grid Naming Service (GNS)


Steps to set up GNS
Benefit: reduced configuration for VIPs in the cluster

GNS is defined in the DNS as a delegated domain:
  The DNS delegates requests for mycluster.myco.com to GNS
GNS needs its own IP address (the GNS VIP):
  This is the only name-to-IP assignment required in the DNS
  All other VIPs and SCAN-VIPs are defined in the GNS for a cluster
  DHCP is used for dynamic IP assignment
GNS is an optional way of resolving addresses:
  Requires novel configuration by the DNS administrator (see the sketch below)
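
To make the delegation concrete, a hedged sketch (zone-file entries and the address are illustrative):

# 1. In the corporate DNS, delegate the cluster sub-domain to the GNS VIP:
#      mycluster.myco.com.      IN NS  gns.mycluster.myco.com.
#      gns.mycluster.myco.com.  IN A   192.0.2.155
# 2. Register and start GNS in the cluster:
srvctl add gns -i 192.0.2.155 -d mycluster.myco.com
srvctl start gns
srvctl status gns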

Grid Naming Service

Client Connect

[Figure (animation, steps 1-5): a client resolves a name in the delegated cluster domain; the corporate DNS forwards the request to GNS, which returns the dynamically assigned address; the client then connects to a SCAN listener, which hands the connection off to a local listener of the Oracle RAC cluster (GRIDA); a DHCP server provides the dynamic VIP assignment]

Single Client Access Name (SCAN)


The New Database Cluster Alias

[Figure: clients use the single name "ClusterSCANname" to reach any database (PSFT, Siebel, DB1, DB2, RAC One, FREE) on Oracle Grid Infrastructure]

Used by clients to connect to any database in the cluster
Removes the requirement to change the client connection if the cluster changes
Load balances across the instances providing a service
Provides failover between moved instances

Single Client Access Name


Network Configuration for SCAN
Requires a DNS entry or GNS to be used
In DNS, SCAN is a single name defined to resolve to 3 IP addresses:

clusterSCANname.example.com IN A 133.22.67.194
                            IN A 133.22.67.193
                            IN A 133.22.67.192
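
A quick resolution check (using the example name and addresses above):

nslookup clusterSCANname.example.com
# Successive lookups return the three addresses in rotating order (DNS
# round-robin), spreading initial connections across the SCAN listeners.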

Each cluster will have 3 SCAN-Listeners, each combined with a SCAN-VIP, defined as cluster resources
The SCAN-VIP/Listener combination will fail over to another node in the cluster if the current node fails:

Cluster Resources
------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr  1  ONLINE  ONLINE  node1
ora.LISTENER_SCAN2.lsnr  1  ONLINE  ONLINE  node2
ora.LISTENER_SCAN3.lsnr  1  ONLINE  ONLINE  node3
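
The same information can be queried with srvctl, as a minimal sketch:

srvctl config scan             # the SCAN name and its three VIPs
srvctl config scan_listener    # the SCAN listeners and their ports
srvctl status scan_listener    # the node each SCAN listener currently runs on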

Single Client Access Name


Easier Client Configuration
Without SCAN (pre-11g Rel. 2), TNSNAMES has 1 entry per node, and with every cluster change all client TNSNAMES need to be changed:

PMRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
    ...
    (ADDRESS = (PROTOCOL = TCP)(HOST = nodeN)(PORT = 1521))
    (CONNECT_DATA =
  ))

With SCAN, only 1 entry per cluster is used, regardless of the # of nodes:

PMRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = clusterSCANname)(PORT = 1521))
    (CONNECT_DATA =
  ))
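
For completeness, a hedged version of the SCAN entry with CONNECT_DATA filled in (the service name is illustrative, not from the original slide):

PMRAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = clusterSCANname)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = pmrac.example.com)
    )
  )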

Connection Load Balancing using SCAN

[Figure: clients and application servers connect through the SCAN listeners, which route each connection to a local listener of the Oracle RAC database cluster]

Summary

The Evolution of the Grid
Lowering the Cost of Database Deployments

Oracle RAC for HA: lower the cost of high availability (dedicated infrastructure)
Oracle RAC for scale-out: lower the cost of scalability (shared infrastructure)
  Shared Cluster, Shared Database, Shared Storage
  Lower the infrastructure costs: improved utilization, storage consolidation, management efficiency (Shared DB)
GRID for DB: a standardized infrastructure for the Datacenter Grid
  Lower the cost of deployments: lower CAPEX, lower OPEX

[Figure: evolution from dedicated per-application stacks (EM, AS, RAC) to shared and finally standardized infrastructure]


Questions
and
Answers
