Solutions Guide

Database Consolidation with
Oracle 12c Multitenancy
James Morle
Scale Abilities, Ltd.

Table of contents
Introduction
Testing scenario and architecture
Introduction to testing
Proof Point – move a PDB between CDBs
Proof Point – relocate a datafile to an alternate diskgroup
Proof Point – create a clone from PDB
Conclusion
For more information

Introduction

This whitepaper is a study of the new Multitenant features of Oracle Database 12c and how a shared disk architecture using Fibre Channel infrastructure can be used to facilitate dramatically simplified database consolidation compared to previous releases. The scenario chosen for this testing was one perceived to become common for Database Administrators (DBAs) involved in consolidating many databases into a single shared compute environment, frequently referred to as a Private Cloud. This is an attractive proposition for corporations with tens, hundreds or even thousands of databases, all important for the operation of the business. The consolidation of these databases into a single managed entity dramatically reduces management costs, improves quality of service and – apart from all that – corrals all those pesky databases into a contained area!

Although shared disk storage may be achieved through other means, this paper focuses on the proven combination of Fibre Channel and Oracle's Automatic Storage Management functionality. All testing performed for this whitepaper was carried out using the latest Gen 5 (16GFC) Fibre Channel components from Emulex, which ensures that maximum bandwidth is available in the Storage Area Network for any operations that require data copying, and provides low latency access for all other database I/O operations. Access to the high bandwidth throughput provided by Gen 5 parts will become increasingly important for data mobility in consolidated cloud environments such as the one demonstrated here.

In this paper we will explore the new multitenant functionality by building a test database cluster, within which we relocate databases between nodes, instances and storage tiers. The new functionality in Oracle 12c makes these operations extremely straightforward and provides a compelling case for consolidation using a shared disk infrastructure.

Disclosure: Scale Abilities was commissioned commercially by Emulex Corporation to conduct this study. To ensure impartial commentary under this arrangement, Scale Abilities was awarded sole editorial control and Emulex granted sole veto rights for publication.

Testing scenario and architecture

The Multitenant Option of the Oracle 12c Database is implemented through the concept of Pluggable Databases (PDBs). A PDB is essentially the same as what we always used to have pre-12c, with all the generic data dictionary information removed. These PDBs cannot execute by themselves; they need to be plugged into a Container Database (CDB), which contains all the data dictionary information that is absent from the PDB. A CDB can host multiple PDBs, and so allows a single set of data dictionary information, background processes and memory allocation to be shared across multiple PDBs.
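For orientation, the container layout of any 12c CDB is easy to inspect from SQL*Plus when connected to the root; a minimal sketch using the same dictionary views that appear throughout the testing below:

SQL> select name, cdb from v$database;           -- the CDB column shows YES for a container database
SQL> select con_id, name, open_mode from v$pdbs; -- one row per container: the seed plus each PDB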

The logical architecture chosen was that of a shared cluster of compute nodes, with a variety of different storage devices and tiers available to those nodes. The test scenario was implemented as a five-node cluster, managed using Oracle Grid Infrastructure 12c, with a Gen 5 (16GFC) Fibre Channel network connecting all the storage to all hosts. Real end-user clusters could have considerably higher node counts depending on the amount of consolidation required and the desired maximum cluster size.

The Oracle Database was then tasked to provide two Container Databases (CDBs), which were constrained to discrete subsets of the cluster using the server pool functionality within the Oracle Clusterware, potentially with different memory footprints on their respective server nodes. Two CDBs were defined: CDB1 was nominated as the container for low-criticality workloads, and CDB2 as the container for mission-critical workloads. This architecture is representative of an end-user cluster, where different CDBs are used to host databases of different service levels.

Figure 1. Architecture Implementation
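Server pools of this kind are defined through srvctl. The following is a hedged sketch of how the two pools used here could be created; the pool and host names match the configuration shown later in this paper, but the min/max values are illustrative only:

$ srvctl add srvpool -serverpool lesscritical -min 0 -max 2 -servers "oracle4,oracle5"
$ srvctl add srvpool -serverpool missioncritical -min 0 -max 3 -servers "oracle1,oracle2,oracle3"

A policy-managed database can then be placed into one of these pools, after which Clusterware confines its instances to the servers in that pool.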

From a storage perspective, three different tiers of storage were defined to reflect the common reality of multiple classes of storage being present in the data center. These were defined as "Small Storage Array," to represent a mid-tier traditional storage array, "Big Storage Array," to represent a top-tier traditional storage array, and "Fast Storage Array," to represent the growing reality of solid-state storage. All the storage is shared across all cluster nodes, making the relocation of databases to different servers a very straightforward process, especially with the Multitenant Option in Oracle 12c. Given that all the data is available to all of the cluster nodes, a migration of PDBs between CDBs only requires a logical 'unplug' from the source CDB and a 'plug' into the recipient CDB – no data copying is required, only shared access.

Figure 2. Migration of PDBs between CDBs

The inherent shared architecture of the Multitenant Option makes the consolidation of databases much simpler than in previous releases. The following diagram shows how a similar implementation of two databases would have worked in the 11g release:

Figure 3. Consolidation with Oracle 11g

In this 11g example, each database would exist as an entity in its own right, with a full copy of the data dictionary and its own set of background processes and memory allocations (ie: the instance) on each node in the cluster. Each of those instances would more or less compete for resource, and had no direct control over the resource consumption by the other instances. Certain measures could be taken, such as Instance Caging and Preferred Nodes, but these were really just workarounds prior to the 12c solution. In the 12c Multitenant world, the resources of all the databases can be effectively resource managed, and there is no overhead for hosting multiple databases other than the resource actually required to host those databases.

The testing focus was primarily on functionality and architecture, rather than raw performance. Accordingly, both PDB1 and PDB2 were kept relatively simple, with just a few large tables created in each. Each database was approximately 1TB in size to demonstrate a relatively large data transport requirement. Any performance observations noted in this whitepaper are included for completeness, with only limited investigation performed regarding areas of potential improvement.
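To make the "single set of controls" point concrete, the sketch below shows how a CDB resource plan can share CPU between PDBs using the documented DBMS_RESOURCE_MANAGER calls. The plan name and share values are hypothetical and were not part of the test configuration:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'consol_plan',
    comment => 'Example consolidation plan');
  -- Give PDB1 three CPU shares; PDBs without a directive receive the default of one share
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'consol_plan',
    pluggable_database    => 'PDB1',
    shares                => 3,
    utilization_limit     => 100,
    parallel_server_limit => 100);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
ALTER SYSTEM SET resource_manager_plan = 'consol_plan';

This is executed while connected to the CDB root, and the plan then governs every PDB in that container.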

Hardware configuration

This paper was written following the execution of a series of tests in Emulex's Technical Marketing Costa Mesa lab environment. Each server node was an HP DL360 Generation 8 server with 96GB of memory and two CPU sockets populated with 8-core Intel E5-2690 processors running at 2.9GHz. Each server was equipped with two-port Emulex LPe16002B Gen 5 Fibre Channel HBAs, with each port connected to independent Fibre Channel zones via Brocade 6510 Gen 5 Fibre Channel switches.

The diagram below shows a simplified physical hardware topology, excluding Ethernet networks.

Figure 4. Physical hardware topology

By using Emulex OneCommand Manager we were able to update online the latest firmware version for the Emulex LightPulse LPe16002B adapter; updating the firmware was a simple process during the initial setup. In addition, by using Emulex OneCommand Manager we verified the Emulex Gen 5 Fibre Channel adapter's connectivity to the FC targets.
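The same inventory and firmware operations can also be scripted through the OneCommand Manager CLI rather than the GUI. The sketch below is illustrative only: the command names come from the OneCommand CLI utility (hbacmd), while the WWPN and firmware file name are placeholders rather than values from this setup:

# hbacmd ListHBAs                                  <- enumerate adapters and their WWPNs
# hbacmd Download 10:00:00:90:fa:xx:xx:xx fw.grp   <- push a firmware image to one adapter port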

The storage tier was provided by two physical hardware devices. The traditional storage was provided by an HP 3PAR StoreServ V10000 storage array, configured to present LUNs from two separate storage pools. The larger of these pools was used to provide the role of the "Big Storage Array" in our scenario, and the smaller one provided the role of the "Small Storage Array". The role of the "Fast Storage Array" was provided by a SanBlaze v7.0 target emulation DRAM-based device.

The five-node cluster comprised the following hostnames:

oracle1.emulex.com
oracle2.emulex.com
oracle3.emulex.com
oracle4.emulex.com
oracle5.emulex.com

Each server was configured with the following software:

 Operating System - Oracle Linux 6.4, using the 2.6.39-400 UEK Kernel
 Emulex HBA device driver - Release 8.3.7.18
 Oracle 12c - version 12.1.0.1 (RDBMS and Grid Infrastructure)

All storage was presented directly through to ASM via dm-multipath and ASMLib. Three ASM diskgroups were created, one for each of the three 'scenarios' of storage arrays in the configuration. These were named +BIGARRAY, +SMALLARRAY and +FASTARRAY to correspond with the underlying hardware.

The cluster was split into two server pools using Clusterware server pools. The low-criticality container database (CDB1) is hosted by a server pool named 'lesscritical,' which includes the oracle4 and oracle5 hosts. The mission-critical container (CDB2) is hosted by a server pool named 'missioncritical,' which includes the oracle1, oracle2 and oracle3 hosts.

[oracle@Oracle1 ~]$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: lesscritical
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names: oracle4,oracle5

Server pool name: missioncritical
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names: oracle1,oracle2,oracle3

We can also view the CRS resources to ensure we have instances of our CDBs running on the nodes we expect:

[oracle@Oracle1 ~]$ crsctl stat res -t
-----------------------------------------------------------------------------
Name           Target  State        Server                   State details
-----------------------------------------------------------------------------
Cluster Resources
-----------------------------------------------------------------------------
ora.cdb1.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       oracle4                  Open,STABLE
      3        ONLINE  OFFLINE                               STABLE
      4        ONLINE  ONLINE       oracle5                  Open,STABLE
      5        ONLINE  OFFLINE                               STABLE
ora.cdb2.db
      1        ONLINE  ONLINE       oracle1                  Open,STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        ONLINE  ONLINE       oracle2                  Open,STABLE
      4        ONLINE  OFFLINE                               STABLE
      5        ONLINE  ONLINE       oracle3                  Open,STABLE

ASM is configured using the new FlexASM feature of Oracle 12c. From Oracle 12c, it is possible to host full instances of ASM on a subset of the cluster nodes, and for other cluster nodes to collect ASM metadata information via a local 'proxy instance,' which in turn connects to the full ASM instances. This is a very useful new feature that moves away from the former dependency to have one ASM instance on every cluster node. For this testing, the full ASM instances were local on nodes 1, 2 and 3, and remote via the proxy instance on nodes 4 and 5:

ora.asm
      1        ONLINE  ONLINE       oracle1                  STABLE
      2        ONLINE  ONLINE       oracle3                  STABLE
      3        ONLINE  ONLINE       oracle2                  STABLE

Although FlexASM was used in this case, it is also possible to perform all of the testing in this whitepaper using the traditional ASM model of having one instance of ASM on each cluster node.
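The Flex ASM mode and the number of ASM instances can also be confirmed directly from the command line; a short sketch, with outputs paraphrased from what these utilities report on a Flex ASM cluster of this shape:

$ asmcmd showclustermode
ASM cluster : Flex mode enabled

$ srvctl config asm
...
ASM instance count: 3
...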

Introduction to testing

In pre-12c releases, it was necessary to host multiple full instances of each database on each node required, resulting in wasted resources and rudimentary resource management capabilities. For example, if one wanted to consolidate databases named DB1 and DB2 onto a shared cluster using pre-12c technology, it would be necessary to have instances for both DB1 and DB2 running on at least two nodes of the cluster, including any passive nodes if running in active/passive mode. Each of these instances would have to be configured to accommodate the full workload of its application on each node. This resulted in gross memory and kernel resource waste on each server. Resource management was even more restricted, as each instance had no visibility or control of the resource requirements of instances on the shared server. The results worsened when instances would fail over to other nodes, as workloads would very often end up hosted on an inappropriate node.

The concept of Container Databases (CDBs) and Pluggable Databases (PDBs) is new in Oracle 12c, and is fundamental to providing the 'missing link' in database consolidation for Oracle databases. A CDB is the container for one or more PDBs, and a PDB is the actual 'database' from the viewpoint of the application. All the PDBs that are plugged into a CDB share a single Oracle instance (per node, in the case of RAC), and can be resource-managed by a single set of controls within the CDB. PDBs can be plugged into and unplugged from CDBs using simple commands, and they can be cloned and moved to other CDBs.

In addition to the Multitenant option, 12c also now supports fully online datafile moves. It is believed this functionality goes hand in hand with the management of PDBs, as it offers the ability to relocate databases onto different tiers of storage. The philosophy behind this testing was to perform key operations on PDBs that would be frequently required in a true consolidated environment, and to discover what the utility-value was from running large shared clusters of multiple CDBs.

Proof Point – move a PDB between CDBs

This test is a straightforward move of a PDB from one CDB to another within the same cluster. The starting point of the test is a single PDB, named PDB1, in the CDB named CDB1. The objective is to move PDB1 to a different CDB named CDB2; currently CDB2 has another PDB hosted within it named PDB2. PDB1 is a little over 1TB in size, and would be a cumbersome database to move using previous releases of Oracle. Since full access is available to all storage on each node of the cluster, it should be a simple operation in this configuration.

First, Scale Abilities checks that there is a connection to the root of the CDB and checks the status of PDB1:

SQL> SELECT SYS_CONTEXT ('USERENV', 'CON_NAME') FROM DUAL;

SYS_CONTEXT('USERENV','CON_NAME')
-----------------------------------------------------------------------------
CDB$ROOT

SQL> select con_id,name,open_mode from v$PDBs;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       READ ONLY
         3 PDB1                           READ WRITE

In order to unplug the PDB, the PDB must be closed on all instances:

SQL> alter pluggable database pdb1 close immediate instances=all;

Pluggable database altered.

  1  select con_id,inst_id,name,open_mode from gv$PDBs
  2* order by 1,2
SQL> /

    CON_ID    INST_ID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         2          2 PDB$SEED                       READ ONLY
         2          4 PDB$SEED                       READ ONLY
         3          2 PDB1                           MOUNTED
         3          4 PDB1                           MOUNTED

Next, PDB1 is unplugged. This changes the state of the PDB in the current CDB (CDB1) to be 'unplugged' and prevents all access to PDB1:

SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1_unplug.xml';

Pluggable database altered.

It produces an XML file in the specified location that contains all the metadata required to plug the database back into a CDB. The resulting XML contains a variety of interesting information, such as a list of files, location, size, database ID, container ID, options installed, version and so on. This XML file is the master description of the PDB and is used by the next CDB into which the PDB is plugged to determine the steps required.

One aspect of this operation that is not immediately obvious is that the XML file is created by the foreground process on the database server to which the user is connected. In a RAC environment (and in a client/server environment), this is not necessarily the same server that is running SQL*Plus; therefore, care must be taken to understand where the database connection is made.

Looking back at CDB1, it is evident that PDB1 is now unplugged and cannot be opened up again:

  1* select pdb_id,pdb_name,guid,status from CDB_PDBS
SQL> /

    PDB_ID PDB_NAME   GUID                             STATUS
---------- ---------- -------------------------------- -------------
         3 PDB1       E5A299062A7FF04AE043975BC00A8790 UNPLUGGED
         2 PDB$SEED   E5A288645805E887E043975BC00AF91C NORMAL

SQL> alter pluggable database pdb1 open instances=all;
alter pluggable database pdb1 open instances=all
*
ERROR at line 1:
ORA-65107: Error encountered when processing the current task on instance:2
ORA-65086: cannot open/close the pluggable database

SQL> !oerr ora 65086
65086, 00000, "cannot open/close the pluggable database"
// *Cause:  The pluggable database has been unplugged.
// *Action: The pluggable database can only be dropped.
//

SQL> drop pluggable database pdb1;

Pluggable database dropped.

Now PDB1 can be plugged into CDB2. Start by connecting to the CDB2 root using the TNS alias created by dbca:

[oracle@Oracle1 ~]$ sqlplus sys@cdb2 as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Sep 6 08:13:08 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select pdb_id,pdb_name,guid,status from CDB_PDBS;

    PDB_ID PDB_NAME   GUID                             STATUS
---------- ---------- -------------------------------- -------------
         3 PDB2       E5A3120148FF19CDE043975BC00AAF39 NORMAL
         2 PDB$SEED   E5A30153B3FB11AFE043975BC00AC732 NORMAL

SQL> create pluggable database pdb1 using '/home/oracle/pdb1_unplug.xml' nocopy;

Pluggable database created.
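As an aside, the XML descriptor can be validated against a target CDB before attempting the plug-in. The following is a minimal sketch using the documented DBMS_PDB package with the same XML path as above; the block is illustrative and was not part of the recorded test:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    compatible BOOLEAN;
  3  BEGIN
  4    -- Returns TRUE if the unplugged PDB described by the XML can be plugged into this CDB
  5    compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
  6      pdb_descr_file => '/home/oracle/pdb1_unplug.xml',
  7      pdb_name       => 'PDB1');
  8    DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'Compatible' ELSE 'Not compatible' END);
  9  END;
 10  /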

The 'nocopy' option was selected to instruct Oracle to leave the datafiles in their original locations, which were all in the +SMALLARRAY diskgroup. The default action is to make copies of the datafiles and leave the originals intact. Another option is to select the 'move' option, which allows the files to be moved to a new location – very useful for migrating to another diskgroup and/or storage array.

During the plugging in of PDB1, the following messages were emitted in the alert file, showing the mounting of the +SMALLARRAY diskgroup and the creation of a CRS dependency for future startup operations:

NOTE: ASMB mounting group 2 (SMALLARRAY)
NOTE: ASM background process initiating disk discovery for grp 2
NOTE: Assigning number (2,0) to disk (ORCL:ELX_3PAR_100G_0)
NOTE: Assigning number (2,1) to disk (ORCL:ELX_3PAR_100G_1)
NOTE: Assigning number (2,2) to disk (ORCL:ELX_3PAR_100G_2)
NOTE: Assigning number (2,3) to disk (ORCL:ELX_3PAR_100G_3)
NOTE: Assigning number (2,4) to disk (ORCL:ELX_3PAR_100G_4)
NOTE: Assigning number (2,5) to disk (ORCL:ELX_3PAR_100G_5)
NOTE: Assigning number (2,6) to disk (ORCL:ELX_3PAR_100G_6)
NOTE: Assigning number (2,7) to disk (ORCL:ELX_3PAR_100G_7)
NOTE: Assigning number (2,8) to disk (ORCL:ELX_3PAR_100G_8)
NOTE: Assigning number (2,9) to disk (ORCL:ELX_3PAR_100G_9)
SUCCESS: mounted group 2 (SMALLARRAY)
NOTE: grp 2 disk 0: ELX_3PAR_100G_0 path:ORCL:ELX_3PAR_100G_0
NOTE: grp 2 disk 1: ELX_3PAR_100G_1 path:ORCL:ELX_3PAR_100G_1
NOTE: grp 2 disk 2: ELX_3PAR_100G_2 path:ORCL:ELX_3PAR_100G_2
NOTE: grp 2 disk 3: ELX_3PAR_100G_3 path:ORCL:ELX_3PAR_100G_3
NOTE: grp 2 disk 4: ELX_3PAR_100G_4 path:ORCL:ELX_3PAR_100G_4
NOTE: grp 2 disk 5: ELX_3PAR_100G_5 path:ORCL:ELX_3PAR_100G_5
NOTE: grp 2 disk 6: ELX_3PAR_100G_6 path:ORCL:ELX_3PAR_100G_6
NOTE: grp 2 disk 7: ELX_3PAR_100G_7 path:ORCL:ELX_3PAR_100G_7
NOTE: grp 2 disk 8: ELX_3PAR_100G_8 path:ORCL:ELX_3PAR_100G_8
NOTE: grp 2 disk 9: ELX_3PAR_100G_9 path:ORCL:ELX_3PAR_100G_9
NOTE: dependency between database CDB2 and diskgroup resource ora.SMALLARRAY.dg is established

Next, let's check the status and open up PDB1 in its new container database:

  1  select con_id,inst_id,name,open_mode from gv$PDBs order by 1,2
  2* /

    CON_ID    INST_ID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         2          1 PDB$SEED                       READ ONLY
         2          3 PDB$SEED                       READ ONLY
         2          5 PDB$SEED                       READ ONLY
         3          1 PDB2                           READ WRITE
         3          3 PDB2                           READ WRITE
         3          5 PDB2                           READ WRITE
         4          1 PDB1                           MOUNTED
         4          3 PDB1                           MOUNTED
         4          5 PDB1                           MOUNTED

9 rows selected.
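Before opening the newly plugged-in PDB, it is worth sketching what the alternative file-handling clauses mentioned above look like. The target diskgroup here is arbitrary, and these commands were not part of the test run:

SQL> create pluggable database pdb1 using '/home/oracle/pdb1_unplug.xml'
  2  copy file_name_convert=('+SMALLARRAY','+FASTARRAY');

SQL> create pluggable database pdb1 using '/home/oracle/pdb1_unplug.xml'
  2  move file_name_convert=('+SMALLARRAY','+FASTARRAY');

'copy' (the default) duplicates the datafiles into the new location and leaves the originals, while 'move' relocates them as part of the plug-in operation.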

SQL> alter pluggable database pdb1 open instances=all;

Pluggable database altered.

Proof Point – relocate a datafile to an alternate diskgroup

Online relocation of datafiles is an important feature of Oracle 12c. Previous releases allowed a certain degree of online relocation but still mandated a brief period of unavailability when the new file location was brought online. Oracle 12c promises to move a datafile with zero downtime.

In this scenario, start with a tablespace named APPDATA which has a single datafile that has been created in the wrong diskgroup:

  1* select name,bytes/1048576 sz from v$datafile
SQL> /

NAME                                                                                  SZ
---------------------------------------------------------------------------- ----------
+BIGARRAY/CDB2/DATAFILE/undotbs1.825310527                                           260
+SMALLARRAY/CDB1/E5A299062A7FF04AE043975BC00A8790/DATAFILE/sysaux.825312223          110
+SMALLARRAY/CDB1/E5A299062A7FF04AE043975BC00A8790/DATAFILE/system.825310527          630
+SMALLARRAY/CDB1/E5A299062A7FF04AE043975BC00A8790/DATAFILE/users.825310547             5
+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.825412353       1090560

Note: The attempt to move a datafile for a PDB while connected to the CDB$ROOT was unsuccessful, and the error message was not entirely helpful:

05:20:37 SQL> alter database move datafile '+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.825412353' to '+SMALLARRAY';
alter database move datafile '+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.825412353' to '+SMALLARRAY'
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file "45"

Connecting to the PDB fixes this problem and allowed the file to be moved successfully:

05:20:48 SQL> alter session set container=PDB1;

Session altered.

Elapsed: 00:00:00.00
05:21:21 SQL> alter database move datafile '+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.825412353' to '+SMALLARRAY';

During the move, Scale Abilities was able to read and update a table within that tablespace:

SQL> select sum(object_id) from bigtable where rownum<10;

SUM(OBJECT_ID)
--------------
           324

SQL> update bigtable set object_id=object_id+10 where rownum<10;

9 rows updated.

SQL> commit;

Commit complete.

SQL> select sum(object_id) from bigtable where rownum<10;

SUM(OBJECT_ID)
--------------
           414

Database altered.

Elapsed: 04:51:57.49

Although the move was successful, it was very slow (more on that shortly). As mentioned earlier, this paper isn't focused on performance; however, that was a very long runtime and worthy of some explanation. Considering the long runtime for the move operation, the Scale Abilities lab will perform a more technical investigation in the future and write a blog post about it. In the meantime, here's what a trace of the session executing the move looked like:

WAIT #140641124141656: nam='control file sequential read' ela= 971 file#=0 block#=59 blocks=1 obj#=-1 tim=393150810594
WAIT #140641124141656: nam='db file sequential read' ela= 1494 file#=45 block#=119070465 blocks=128 obj#=-1 tim=393150812119
WAIT #140641124141656: nam='db file single write' ela= 1759 file#=45 block#=119070465 blocks=128 obj#=-1 tim=393150813980
WAIT #140641124141656: nam='DFS lock handle' ela= 193 type|mode=1128857605 id1=110 id2=1 obj#=-1 tim=393150814264
WAIT #140641124141656: nam='DFS lock handle' ela= 218 type|mode=1128857605 id1=110 id2=3 obj#=-1 tim=393150814537
WAIT #140641124141656: nam='DFS lock handle' ela= 11668 type|mode=1128857605 id1=110 id2=2 obj#=-1 tim=393150826256
WAIT #140641124141656: nam='control file sequential read' ela= 249 file#=0 block#=1 blocks=1 obj#=-1 tim=393150826570
WAIT #140641124141656: nam='control file sequential read' ela= 982 file#=0 block#=48 blocks=1 obj#=-1 tim=393150827577

WAIT #140641124141656: nam='control file sequential read' ela= 952 file#=0 block#=50 blocks=1 obj#=-1 tim=393150828576
WAIT #140641124141656: nam='control file sequential read' ela= 957 file#=0 block#=59 blocks=1 obj#=-1 tim=393150829578
WAIT #140641124141656: nam='db file sequential read' ela= 1712 file#=45 block#=119070593 blocks=128 obj#=-1 tim=393150831343
WAIT #140641124141656: nam='db file single write' ela= 1843 file#=45 block#=119070593 blocks=128 obj#=-1 tim=393150833283
WAIT #140641124141656: nam='DFS lock handle' ela= 205 type|mode=1128857605 id1=110 id2=1 obj#=-1 tim=393150833571
WAIT #140641124141656: nam='DFS lock handle' ela= 164 type|mode=1128857605 id1=110 id2=3 obj#=-1 tim=393150833794

It seems the move operation is taking place 128 blocks at a time (128*8KB=1MB in this case). This can be observed in the respective waits for 'db file sequential read' and 'db file single write.' An interesting observation for aficionados of trace files is that the file number of the original and copy datafile appear to be registered as the same (45), where it is reported as the file# parameter; this is likely a type of anomaly with the wait interface abstraction.

The total elapsed time for one iteration (1MB moved) in this trace file fragment is 18.984ms, which implies a disk throughput of 52MB/s, very little of which is spent actually transferring data because of the time spent waiting for the DFS lock handle. There are a series of coordinating measures taking place in the RAC tier which slow down the progress. In particular, note the 11.6ms wait for 'DFS lock handle' in the trace above, which (when decoded) shows a wait for the CI enqueue. The CI enqueue is a cross-instance invocation, most likely associated with coordinating the DBWR processes on other instances to ensure dirty blocks are written out to both the original datafile and the new copy.

Scale Abilities also tried a datafile move operation with the PDB explicitly closed on all but one instance, but it did not improve the throughput; this requires further investigation. Shutting down all the instances except the one where the move takes place dramatically reduces the overhead, but does not remove it altogether:

WAIT #140653075283640: nam='DFS lock handle' ela= 4542 type|mode=1128857605 id1=110 id2=2 obj#=-1 tim=2388675442209
WAIT #140653075283640: nam='control file sequential read' ela= 806 file#=0 block#=1 blocks=1 obj#=-1 tim=2388675443083
WAIT #140653075283640: nam='control file sequential read' ela= 921 file#=0 block#=48 blocks=1 obj#=-1 tim=2388675444060
WAIT #140653075283640: nam='control file sequential read' ela= 965 file#=0 block#=50 blocks=1 obj#=-1 tim=2388675445060
WAIT #140653075283640: nam='control file sequential read' ela= 967 file#=0 block#=60 blocks=1 obj#=-1 tim=2388675446062
WAIT #140653075283640: nam='db file sequential read' ela= 1748 file#=45 block#=20943489 blocks=128 obj#=-1 tim=2388675447849
WAIT #140653075283640: nam='db file single write' ela= 2891 file#=45 block#=20943489 blocks=128 obj#=-1 tim=2388675450857

This shows that the DFS lock handle calls no longer require servicing by other instances in the cluster. After making this change, the throughput went up to around 90MB/s, which is still not very high, considering the specification of this system. The low throughput seems like it could be dramatically improved by Oracle with some algorithmic changes: rather than perform all the cross-instance work every 1MB, perform it every 50MB, for example. The ability to achieve a much higher number than this with a little careful tuning of the I/O subsystem seems promising, and it should be possible to use a much higher percentage of the 16GFC connectivity that we have available to speed up the relocation of data files between storage tiers.

The progress of the move can, at least, be viewed in v$session_longops:

  1  SELECT sid, serial#, opname, ROUND(sofar/totalwork*100,2) PCT
  2  FROM gv$session_longops
  3  WHERE opname like 'Online%'
  4* AND sofar<totalwork
SQL> /

       SID    SERIAL# OPNAME                                          PCT
---------- ---------- ---------------------------------------- ----------
       392       6745 Online data file move                         77.13

Proof Point – create a clone from PDB

Oracle 12c also allows the creation of clones from existing PDBs. This is an important feature for managing a consolidated platform, and a very useful feature for managing development and test databases that significantly reduces the associated labor intensity. One (fairly major, in the opinion of Scale Abilities) downside to the clone functionality is that the source PDB must be opened "READ ONLY" for the clone operation to run:

SQL> alter pluggable database pdb1 close instances=all;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read only;

Pluggable database altered.

SQL> create pluggable database pdb1_clone from pdb1
  2  file_name_convert=('+SMALLARRAY','+BIGARRAY');

Pluggable database created.

Elapsed: 00:43:16.42

A copy time of 43:16 represents a decent average write bandwidth of 406MB/s. This is much more indicative of the performance we should be seeing, and these results are a good start for a non-performance focused exercise. The observed 406MB/s would saturate a 4GFC Fibre Channel HBA even without further tuning – such operations highlight the need to provide for significantly greater I/O bandwidth in a multitenant environment than in a traditional Oracle cluster.
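As a quick sanity check on that figure: 43 minutes and 16 seconds is 2,596 seconds, and 406MB/s sustained over 2,596 seconds amounts to roughly 1,054,000MB, which is indeed a little over 1TB – consistent with the stated size of PDB1.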

One interesting observation is that the clone operation is carried out by PX slave processes rather than the user's foreground process. This would seem to indicate that considerably more parallel capabilities should be possible when cloning PDBs. In this test case we had only one file that was large enough to incur a high copy time; it is possible that multiple PX slaves could operate on PDBs that contain a number of large files. The following trace file fragment shows the PX slave performing the actual copy:

[root@Oracle2 trace]# more CDB2_3_p000_54766.trc
WAIT #140320369183712: nam='ASM file metadata operation' ela= 36 msgop=33 locn=0 p3=0 obj#=-1 tim=146952678700
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 4 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952678788
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 4 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952678864
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1249 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952680181
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 423 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952680671
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 2322 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952683051
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1259 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952684325
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 4 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952684418
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 2 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952684430
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 2272 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952686758
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 248 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952687020
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 333 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952687421
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1722 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952689153
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 122 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952689395
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1651 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952691056
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1237 count=1 intr=256 timeout=2147483647 obj#=-1 tim=146952692361

It is likely that the block size used for the copy is equal to one ASM allocation unit (default 1MB), considering the trace file indicates that this session is coordinating the copy directly with the ASM instance.
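That hypothesis is straightforward to check against the diskgroup definitions; a minimal sketch (ALLOCATION_UNIT_SIZE is reported in bytes, and 1048576 is the 1MB default):

SQL> select name, allocation_unit_size from v$asm_diskgroup;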

Conclusion

The new Multitenant functionality seems a natural fit for consolidation using server pooled clusters and shared disk solutions using Gen 5 Fibre Channel. The ability to pool multiple pluggable databases into a small number of container databases dramatically reduces the system management burden and improves the ability to effectively resource-manage databases in a shared environment. The easy access to data using a shared storage infrastructure complements the ease of use afforded by the PDB concept and effectively eliminates data copies for many use cases. Multitenancy can also be implemented without shared disk and with locally attached SSD storage, but doing so removes many of the advantages of consolidation, and perhaps indicates the systems in question were not ideal candidates for consolidation. Management of a consolidated environment requires, by its very nature, the ability to move workloads between servers and storage tiers as the demands and growth of the business dictate.

Although some issues were observed with the throughput of the online datafile move functionality, it should be stressed that there is nothing architecturally 'wrong' in the method Oracle has elected to use. Scale Abilities fully expects the throughput of such moves to be significantly higher once the root cause is found, and this will become a frequently used operation for DBAs.

One possible implication of the increased mobility of databases and their storage is that the bottleneck will move away from the labor-intensive manual processes formerly required, and put more pressure on the storage tier. In particular, it is evident that the bandwidth provision in the SAN infrastructure will become increasingly important, in both consolidated environments and more traditional ones. This will be especially true for the server HBAs, because the data mobility is a server-side action – potentially migrating large data sets from many storage arrays at any point in time. The availability of Gen 5 Fibre Channel HBAs will be instrumental in providing this uplift in data bandwidth.

For more information

www.Emulex.com
www.ImplementersLab.com
www.scaleabilities.co.uk

To help us improve our documents, please provide feedback at implementerslab@emulex.com.

© Copyright 2013 Emulex Corporation. The information contained herein is subject to change without notice. The only warranties for Emulex products and services are set forth in the express warranty statements accompanying such products and services. Emulex shall not be liable for technical or editorial errors or omissions contained herein. OneCommand is a registered trademark of Emulex Corporation. Oracle is a registered trademark of Oracle Corporation. HP is a registered trademark in the U.S. and other countries.

World Headquarters 3333 Susan Street, Costa Mesa, California 92626 +1 714 662 5600
Bangalore, India +91 80 40156789 | Beijing, China +86 10 68499547
Dublin, Ireland +353 (0)1 652 1700 | Munich, Germany +49 (0) 89 97007 177
Paris, France +33 (0) 158 580 022 | Tokyo, Japan +81 3 5322 1348
Wokingham, United Kingdom +44 (0) 118 977 2929