
Oracle Database 10g Features for Administrators: Part 1

Oracle Database 10g provides a flexible and cost-effective way to manage
enterprise data.

This part covers Manageability, Grid Computing, Configuration Techniques,
Transportable Tablespaces (TTS), Data Pump, and External Table Population.

Oracle Database 10g New Features

Oracle Database 10g Manageability Objectives

Oracle Database 10g Manageability Strategy

Grid Computing

Oracle Database 10g Enhancements

Enhanced Installation

Database Performance Enhancements

Different Installation Options

Database Configuration Tools

Statistics Collection

Policy-Based Database Framework

Simplified Initialization Parameters

Loading and Unloading Data

Transportable Tablespaces (TTS)

Data Pump, Including Import and Export

Uses of Data Pump

External Table Population and Its Operation


Manageability Goals

Oracle Database 10g includes manageability enhancements that simplify the
tasks of the DBA.

Common Challenges Faced By DBAs

Managing Database Storage

Handling System Resources

Managing Application Tuning and SQL Operations

Managing Backup and Recovery Strategies

Managing Space and Objects

Oracle Database 10g overcomes these challenges by supporting several new
enhancements.

The DBA can efficiently manage database space and system resources by
easily monitoring CPU consumption and resizing database buffers.

Oracle Database 10g - Manageability Enhancements

Performing Routine Operations

Oracle Database 10g eliminates the need to manually configure
administration tools and components.

Oracle Database 10g - Includes

Fast and lightweight installation with pre- and post-installation
validations.

Oracle Database 10g - Supports

The new Transportable Tablespaces (TTS) feature, which enables data
migration across various platforms.

Oracle Database 10g - Supports

Data Pump, which allows data to be loaded or unloaded quickly.

Oracle Database 10g - Uses Only One Upgrade Script.

It simplifies the upgrade process by automatically performing pre-upgrade
and post-upgrade checks.

Oracle Database 10g - Reduces Administration Expenses.

It automates most database administration operations, which reduces
manual effort.
Oracle Database 10g - Aims To Reduce Capital Expenses.

It meets this objective by using adaptive capacity instead of oversized
capacity.

It supports Grid Computing, which allows sharing of resources and, as a
result, reduces overall capital expenses.

Oracle Database 10g - Implements Integrated Processes Instead of
Third-Party Implementations.

It supports a self-management feature that allows process integration at
lower cost.

Oracle Database 10g - Aims to Reduce Failure Expenses Based on a
Preventive Strategy.

It reduces expenditure on recovery procedures and lowers expenses by
using time-tested strategies.

Manageability Strategy Features

Oracle Database 10g is a self-managing database. This section discusses
the features of the Oracle Database 10g manageability strategy.

Oracle Database 10g Automates Space and Application Management.

It automatically monitors backup and recovery operations. (This option
must be checked during installation of the Oracle Database 10g software.)

With Oracle Database 10g, there is no need to manually administer system
resources.

Oracle Database 10g eliminates the need to monitor the performance of
database-related operations. If a problem occurs during resource
allocation, it automatically informs you by sending alerts. It may
automatically rectify the problem or provide suggestions.

Oracle Database 10g simplifies the management of large Oracle databases
in data centers. It makes databases scalable and efficient, so database
management is made easy.

Oracle Database 10g provides an integrated solution, Enterprise Manager
(EM). It monitors applications and systems that are based on the Oracle
technology stack.

The HTML-based EM 10g allows us to administer a large number of databases
from a single access point. It allows us to monitor integrated databases
on computers at different locations.

 EM 10g enhances database management by automating critical operations
to reduce task time and the risk of errors.

 It also minimizes the risk involved in performing system provisioning,
policy-based standardization, and application management.

Oracle Database 10g Centralizes Database Management

EM Grid Control provides a single tool to manage Oracle software
elements, such as Application Server 10g and web applications.

Grid Computing - Overview


Most applications are independently developed and sized for peak load.
However, these applications mostly remain underutilized because peak load
occurs only occasionally. In such cases, where loads differ based on user
demand, there is a need for a grid computing system.

Underutilization of independently constructed applications is expensive,
so it is not economically feasible for every organization to build
systems with adequate scalability for peak load.

If organizations invest in a system that can be fully utilized under
normal conditions, there will be problems in meeting the demands of peak
load. In addition, adding capacity to existing systems to fulfill growing
needs can be very expensive. In such cases, organizations try to avoid
adding capacity, and this leads to problems during peak loads.

A Grid is a cluster of servers linked together that enables the pooling
of computing resources. The concept of Grid computing is based on
treating computing as a utility. Grid computing helps in reducing the
cost of adding symmetric multiprocessing (SMP) systems for each new
application.

With Grid computing, data for information sharing becomes easily
accessible. The functioning of Grid computing does not depend on the
location of data storage or the type of computer processes that are
requested. Grid computing helps in the adequate allocation of resources
by aggregating several database servers.

Grid computing is a sophisticated technology. The attributes of a grid
computing system are virtualization, provisioning, and pooling of
resources.

 Virtualization involves the separation of the hard-coded association
between applications and resources, which ensures improved performance,
manageability, and parallelism.

 Provisioning ensures that resources are dynamically allocated to
applications as and when needed. Resource provisioning is done on the
basis of policies and the dynamic requirements of an organization. This
helps you allocate additional resources to applications during peak load.
You can reallocate the resources to other applications when the peak load
is over.

 To meet business requirements and ensure increased utilization,
resources can be pooled. Pooling involves having a limited number of
large resource pools. A large resource pool simplifies the dynamic
reallocation of resources according to the demands of the business.

The standards body responsible for developing standards for Grid
computing is the Global Grid Forum (GGF). The working groups of this
committee focus on the diverse aspects of Grid computing. The committee
comprises participants mostly from the research community, commercial
institutions, and academia.

Grid computing is expected to resolve the problems related to adequate
provisioning of application capacity and scalability.

Oracle Database 10g - Database for the Grid

Grid computing facilitates optimum utilization of resources by
dynamically allocating the resources to applications as and when they
need them. Oracle Database 10g is the first database that supports Grid
computing.

Oracle Database 10g includes features that enable dynamic allocation of
enterprise resources based on business requirements:

> Real Application Clusters (RAC)
> Portable Clusterware
> Automatic Storage Management (ASM)
> High Speed Network Interconnect Support
> Resource Manager
> Oracle Streams
> Enterprise Manager (EM) Grid Control
> Self Management

Real Application Clusters (RAC)

RAC allows a single database to run on a number of clustered nodes in a
Grid. Oracle Database 10g provides an automatic management feature to
manage the workload for services available in a RAC database. Oracle
Database 10g includes Portable Clusterware to run on all OSs.

Automatic Storage Management (ASM)

ASM simplifies storage management for the database. It automatically
distributes the storage workload in the database to improve the
performance of the storage system.

High Speed Network Interconnect Support

Oracle Database 10g supports high-speed network interconnects such as
InfiniBand. These technologies offer better performance and scalability.

Resource Manager

Resource Manager enables additional mappings for consumer groups. These
mappings are based on the user's host machine, application, OS username,
and service.

Oracle Streams

Oracle Streams transfers data between databases, nodes, or blade farms in
a Grid. It also facilitates information sharing, combining messaging,
queuing, data warehouse loading, events, and replication in a single
framework.

Enterprise Manager (EM) Grid Control

EM Grid Control provides a centralized tool to monitor and manage Oracle
software elements, including Oracle Database 10g and Oracle Application
Server 10g (OracleAS), in the Grid. In addition, it monitors and manages
web applications using Application Performance Management (APM), hosts,
storage devices, and server load balancers.

Self Management

Self management provides features to automatically diagnose and resolve
problems in the database system. This reduces the cost of maintenance and
administration for the database.

Oracle Database 10g - Installation New Features

Designed for Grid computing, Oracle Database 10g is simple to install and
configure.

Installation Features of Oracle Database 10g

 Configuration using ASM
 Installation and configuration of the new EM framework
 Optional backup strategy
 E-mail notifications
 Installation support for Cluster Ready Services
 Cloning of Oracle Homes

Three Storage Mechanisms in Oracle Database 10g

File System

A file system consists of OS-managed, formatted disks that can be local
to the server or on a storage area network (SAN).

Automatic Storage Management (ASM)

ASM simplifies database storage administration and optimizes the database
layout for I/O performance. Configurations that use ASM are an important
feature of Oracle Database 10g. ASM configurations enable easy management
of files and folders.

Raw Devices

Raw partitions or volumes can provide the required shared storage for
Real Application Clusters (RAC) databases. A raw device (raw partition)
is like a partition that is not formatted. The I/O calls of data transfer
bypass the operating system buffer. A raw device is primarily used in the
RAC environment of Oracle databases if you do not use Automatic Storage
Management and a Cluster File System is not available. You need to create
one raw device for each datafile, control file, and log file.

Enterprise Manager Feature

A key installation feature of Oracle Database 10g is the installation and
configuration of the new EM framework. Installed in the same Oracle Home
as the database, the EM Database Control is configured to run on a
standalone Oracle Application Server Container for Java 2 Enterprise
Edition (OC4J) instance.

A separate installation has to be done to access the EM central
management capabilities; otherwise, a fully functional EM Database
Control is automatically configured.

EM Database Control also enables us to send e-mail alerts. The alerts can
be based on issues related to the disk space storage limit, the
unexpected shutting down of a database, node availability, and tablespace
space usage.

EM Database Control enables you to optionally configure the database to
make use of the Oracle-recommended default backup strategy.

RAC enhancement support is another feature, which includes Cluster Ready
Services (CRS). CRS provides additional management services to the
cluster environment, such as group services and node membership. Oracle
Database 10g also improves the Portable Clusterware capability, which is
now available on most platforms.

The Enterprise Configuration Management tool enables you to clone Oracle
Homes. Available through EM Grid Control, this tool lets you create clone
requests. These requests can then be scheduled and processed.
Oracle Database 10g Installation Requirements

Prudent management of memory and disk space during the installation,
regardless of the OS, enhances the performance of the database
installation.

For a standard installation of Oracle Database 10g:

 A minimum of 512 MB of memory is required.

 Without an Oracle database, a minimum of 256 MB is required.

 For Linux, the disk space requirements are 1 GB of swap space (or
twice the RAM) and a /tmp directory of 400 MB.

 Disk space of 0.5 GB to 2.5 GB is required.

 Two optional requirements are 1.2 GB for the preconfigured database
and 2.4 GB for the flash recovery area.

 Oracle Database 10g is available on a 650 MB CD. The CD contains only
one seed database template.

 The setups for all demos and components, such as Apache and EM
Webstage, are kept on a separate CD.

Requirements before Installing Oracle Database 10g

Oracle Database 10g automatically checks for essential requirements:

• Sufficient temporary space. Oracle Database 10g checks for the minimum
temporary disk space required for installation and configuration.

• Resolving 32-bit and 64-bit issues. The 64-bit Oracle Home is not
installed over a preinstalled 32-bit Oracle Home, and vice versa.

• Setting the OS and kernel parameters.

• Checking the correctness of OS patches.

• Verifying X server permissions.

• Sufficient swap space.

• Oracle Home should be empty or able to be overwritten.

For Linux, only RedHat 2.1, RedHat 3, and UnitedLinux 1.0 are
certified.

Solaris 2.8 and 2.9 and higher versions are certified for Oracle
Database 10g.

The database installation then checks for all the OS patches. It verifies
that the system and kernel parameters are correctly set. There are 11
significant kernel variables.
Oracle Database 10g checks for a user specific environment and for
startup programs. It confirms that the DISPLAY environment
variable is set. It also checks if the user has enough permissions
to display the content of the DISPLAY variables. There should be
sufficient memory space for swapping.

Oracle Database 10g also checks the Oracle Home. Oracle Database 10g can
be installed on an empty Oracle Home or on a release that can be
overwritten. It also sets the PATH and ORACLE_SID variables.

In Linux, you can view, add, and modify users and groups using the User
Manager. There are two tabs on the User Manager window: Users and Groups.
You can select the Groups tab to view the list of all local groups on the
system.

Oracle Database 10g has improvements over the previous versions of
Oracle. The improvements are a single password entry, clean
deinstallation, specific Oracle Application Server Container for J2EE
(OC4J) requirements, and, for the Windows platform, a b_disablestartmenu
Boolean variable for independent software vendor (ISV) support.

Oracle Database 10g provides an option to specify only one password for
all accounts; you can specify different passwords if required.

Unlike previous versions of Oracle, this version supports a clean
deinstallation process that removes all Oracle files.

On Microsoft Windows, all registry entries pertaining to Oracle are also
deleted.

There are some specific requirements to install standalone OC4J in the
Oracle Home. It can be installed in a RAC environment and on a Cluster
File System (CFS).

The OC4J installation is patchable. Once OC4J is configured, all
individual applications can start and shut down independently.

ISVs require the creation of Startup Menu items to be disabled while
installing Oracle Database 10g on a Windows OS.

To enforce the disabling of Startup Menu items on Windows, a public
variable, b_disablestartmenu, is created on the server.

This is a Boolean variable. To disable the creation of Startup Menu
items, it is set to true. By default, this variable is assigned the value
false.


The CD pack comprises many CDs:

• Oracle Database 10g

• Companion

• Oracle9iAS Infrastructure

• Oracle Database 10g Client

• Oracle Enterprise Manager

• Oracle Documentation

• Oracle Database 10g Demos

The mid-tier components and the components that do not fit on the
database CD are written to a separate CD called the Companion CD. These
CDs can also be shipped on a single DVD.

The Companion CD contains:

• HTML DB
• Workflow
• Oracle HTTP Server (OHS)
• SQL for Java (SQLJ)
• JPublisher
• Context Knowledge Base
• Java
• interMedia components
• Legato Storage Manager
• Examples of each component

Specifying Installation Options

Oracle Database 10g provides many installation options. This topic covers
the different installation options in Oracle Database 10g.

Installation Options in Oracle Database 10g

 Management Options
 File Storage Options
 Backup and Recovery Options
 Password Options

In Management Options, you have the naming option where you name the
database. You can specify the global database name using the name.domain
format or can simply define the database name.

The Oracle System Identifier (SID) can be different from the global
database name.

The Database Character Set region in Management Options provides the
character set that stores the data. You should determine the base
character set carefully. Changing the base character set needs downtime
for the database and can be quite time consuming.

Management Options include a Sample Schemas option. This option gives you
a set of schemas.

Using Management Options, you can manage a database. You select the Use
Grid Control for Database Management option to manage databases stored on
one or more computers. You can also select the Use Database Control for
Database Management option. This helps us to manage a single database.

The installation options in Oracle Database 10g include File Storage
Options. File System is one of the file storage options, which stores
your files in an OS-configured file system.

File storage options include the Automatic Storage Management (ASM)
option.

The process of creating and managing ASM database files is automatic. ASM
files can be mirrored and striped.

Raw Devices, one of File Storage Options, are disk partitions with
no file system.

Familiarity with the use of raw partitions is a prerequisite to the use
of these file systems.

With raw devices, you can manage the storage devices outside the OS file
system. You should use this method in an Oracle environment with Real
Application Clusters.

The Oracle Database 10g installation also contains Backup and Recovery
Options.
Database Configuration Tools
Configuration tools enable you to quickly and easily configure
databases. They help in automatically configuring and cloning
databases and installing patches.

The configuration tools provided by Oracle Database 10g include:

• Database Configuration Assistant (DBCA)
• Enterprise Manager (EM)
• Clone Database
• MetaLink Integration

DBCA enables us to quickly create an Oracle database. DBCA supports and
implements the architectural enhancements provided by Oracle Database
10g. These enhancements facilitate self-management activities and
optimize performance coordination. The DBCA enhancements are displayed on
the screen.

DBCA automatically creates a centralized, system-owned SYSAUX tablespace.
You can use this to store all auxiliary database metadata that is not
stored in the SYSTEM tablespace. This reduces the number of tablespaces
created by default, both in the seed and user-defined databases.

DBCA automatically configures a default, centralized flash recovery area
that stores the files related to recovery and other activities in an
Oracle database. In addition, DBCA implements the flash backup and
recovery strategy.

DBCA configures the Oracle EM repository, job, and event subsystems. A
self-managing repository further collects the workload information and
performance statistics. This helps in reducing administration costs. In
addition, the LDAP.ORA (Lightweight Directory Access Protocol)
configuration is not required because DBCA configures a new directory.

Oracle Database 10g provides a simple seed database characterized by the
decrease in the number of initialization parameters. This makes
configuring an optimal database easy. Parameters are categorized into
basic and advanced groups. You need to specify only 20 to 25 basic
parameters for an optimal configuration. Sample schemas can also be
installed using DBCA.

You can configure Automatic Storage Management (ASM) for conventional and
Real Application Clusters (RAC) environments. ASM provides a vertical
integration of the file system and volume manager built for Oracle
database files.

You can set a database to be managed by Grid Control Management or
Database Control Management. DBCA confirms the installation of the Oracle
Management Agent on the host computer. If the agent is present, you can
select the Use Grid Control for Database Management option button.

You can select an Oracle Management Service box. On completing the
installation process, Grid Control displays the service as a managed
target.
You can use EM to manage a database even when you are not centrally
managing the Oracle environment. EM Database Control is installed
automatically at the time of installation of the database. Database
Control Management provides web-based features to monitor and administer
a single-instance or clustered database.

EM can also be configured to receive e-mail notifications for the SYSMAN
user profile. To access EM Database Control from a browser on a client
computer, the computer with EM Database Control installed must have the
dbconsole process running on it. This process is automatically started
when the installation completes.

To access EM Database Control through a web browser, you can use the
http://hostname:portnumber/em URL format. The term hostname in the URL
refers to the name or address of your computer, and portnumber refers to
the EM Database Control HTTP port number assigned during installation.

The default port number for the EM Database Control HTTP is 5500. You can
obtain the port number for your system from the
$ORACLE_HOME/install/portlist.ini file.

To access the Database Control home page, you use the Database
Control Login page. This Page is displayed when the instance
starts. You specify the authorized user name and password to
access the Database Control. Initially, the user name is SYS. The
password for this user name is the one that is specified at the
time of installation.

You select the SYSDBA option from the Connect As box and select the Login
button to access the Database Control home page.
Clone Database
Another configuration tool is the Clone Database wizard that
enables you to replicate a configured, tuned, and tested database
to another Oracle home. This wizard can be used to clone Oracle
databases release 8.1.7 and later. The wizard can replicate a
database in its committed form, even when the database is in use.

The Clone Database wizard creates backups of the database files and
copies them to the target Oracle home. The wizard then creates a new
database by restoring and recovering the backup files with archived logs.

The Clone Database wizard creates a new instance of the database at the
target Oracle home. It creates a password file, initialization files,
spfile, and networking files. The wizard starts the new instance in the
open mode.

The EM Clone Database wizard helps you to easily clone a database. To use
this wizard, you navigate to the Maintenance tab for the source database
and select the Clone Database link in the Deployments region.

This displays the Clone Database: Source Type page. On this page you can
choose to clone a database either from the running database instance or
from a previous clone operation. The Clone Database wizard clones a
database using the Recovery Manager (RMAN).

In Oracle Database 10g, the EM provides software patching using its
built-in MetaLink integration. The EM sends alerts for new critical
patches and identifies the systems that require a particular patch. To
view and select the available interim patches, you can use the Patch
wizard. You can view the details of patches and README patch notes from
EM.

To display the Patch wizard, you select the Maintenance tab on the
Database Control home page. In the maintenance tab, you select the
patch link from the Deployments region. You can also download
interim patches from MetaLink into the EM patch cache, which is a
part of the EM repository. This prevents repeated downloads.

You can also store patches on the destination systems, and apply
them manually at a later time. To automate the patching process,
you can provide a customizable patch application script. The
resident EM agents run this script on the destination system at a
specified time. The Oracle Universal Installer (OUI) keeps track
of the system’s correct patch level.
Statistics Collection
The activities performed using Oracle Database 10g can be tracked with
the help of the Automatic Workload Repository (AWR). The AWR supports
usage metrics that specify how Oracle Database 10g has been used for an
activity.

The usage metrics categories are database feature usage statistics and
database High Water Mark (HWM) statistics.

The database feature usage statistics track the usage of database
features such as Oracle Streams, Advanced Replication, Audit Option,
Virtual Private Database, and Advanced Queuing.

The database HWM statistics provide data on activities such as
calculating the maximum number of sessions, the size of the largest
segment, the maximum size of the database, the maximum number of tables,
and the maximum number of data files.

Both the database features usage statistics and the database HWM
statistics can be tracked and recorded weekly by using the
Manageability Monitor Process. This process first tracks the
statistics by using a sampling method of the data dictionary and
then records the statistics in AWR snapshots.

To view the database feature usage statistics, you query the
DBA_FEATURE_USAGE_STATISTICS view. To view the HWM statistics, you query
the DBA_HIGH_WATER_MARK_STATISTICS view.
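For example, the following queries (a minimal sketch; the columns shown
are those documented for these views) list the tracked features and
high-water marks:

-- List how often each database feature has been used
SELECT name, detected_usages, currently_used
FROM   dba_feature_usage_statistics
ORDER  BY name;

-- List the recorded high-water-mark statistics
SELECT name, highwater, description
FROM   dba_high_water_mark_statistics
ORDER  BY name;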

You can view the recorded statistics in Oracle Database 10g by using EM.
To view the database feature usage statistics, you first select the
Administration tab on the Database Control home page. You then select the
Database Usage Statistics link in the Configuration Management section of
the Administration tab.

The Database Usage Statistics page helps you to ascertain the number of
times a feature has been used. It also helps you to determine the use of
certain database features such as advanced security, advanced
replication, and audit options.

To obtain detailed information about a feature, you select the specific
feature from the Feature Name column.

After you select a feature, the detail page is displayed. This page
contains information about the feature, such as the database name and
description and the total samples with the first and last usage time
stamps. To view the database HWM statistics, you select the High Water
Marks tab.

The HWM page displays statistical information such as the maximum number
of CPUs, data files, services, and tablespaces. In addition, it displays
the highest usage reached by a feature in a given time.

You can also use the HWM page to view the last sampled value for a
particular feature and its database version.
Policy Framework
The policy framework is built over the configuration and metric
collection service of the EM. It enables you to evaluate, compare,
and retrieve the configuration information stored in the target.
This target may be a database, host, or listener. Each target has
a regular collection of configuration information that specifies
the previous configuration state.

The policy framework defines a set of certain predefined policies or
recommendations to monitor and ascertain the optimum performance of all
the targets. This enables you to identify the targets whose
configurations do not comply with these predefined policies.

EM provides these policies and their reviews, which are derived from the
configuration information defined in Oracle's best practice
recommendations. An example of a recommended configuration setting is the
existence of two or more copies of the control files on all Oracle
databases. These copies should be stored on separate disks to reduce the
risk of data loss.

The policy framework defined in the EM enables you to examine the
validation results. The Database Control home page allows you to access
the review page. This page contains a diagnostic summary region, which
displays the information regarding the number of policy violations for a
particular database.

Oracle Database 10g provides a Policy Violations page that reviews and
summarizes the policy rule violations pertaining to a particular target.
You use the Policy Violations page for ignoring the existing policies
related to a particular target.

The Policy Rule page has a summary list of the policy rule violation
details such as priority, violations count, last evaluation, and
description. On this page, the Related Links region contains links to the
Manage Policy Library page and the Manage Policy Violations page. To open
the Manage Policy Violations page, you select the Manage Policy
Violations link.

The Manage Policy Violations page allows you to maintain the already
present policy violations for a business unit.

To open the Manage Policy Library page, you select the Manage Policy link
from the Related Links region on the Policy Violations page. The Manage
Policy Library page lists the priority, policy rule, category, target
type, and description of the different policies. It also allows you to
disable the existing policies pertaining to a specific target.

Simplified Initialization Parameters

You can set and tune the initialization parameters in Oracle Database 10g
and obtain good performance from the database. This also makes the work
of database administrators easy.

The initialization parameters in Oracle Database 10g are categorized as
basic and advanced. There are about 25 basic parameters, and it is
usually enough to set and tune them to obtain good database performance.
All other parameters are termed advanced, and it is rarely necessary to
set and tune them to obtain good database performance.

Examples of basic parameters include cluster_database, compatible,
control_files, db_block_size, db_create_file_dest,
db_create_online_log_dest_1 through db_create_online_log_dest_5,
db_domain, and db_name.

Examples of advanced parameters include active_instance_count,
aq_tm_processes, archive_lag_target, asm_diskgroups, and asm_diskstring.
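As a quick check (a minimal sketch; in Oracle Database 10g the
V$PARAMETER view exposes an ISBASIC column), you can list the basic
parameters from SQL*Plus:

-- List the basic initialization parameters and their current values
SELECT name, value
FROM   v$parameter
WHERE  isbasic = 'TRUE'
ORDER  BY name;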

In Oracle Database 10g, you can display a list of the initialization
parameters using the Enterprise Manager (EM). First, you select the
Administration tab from the Database Control home page.

Next, you select the All Initialization parameters link from the
Instance region of the Administration page.

The Initialization Parameters page, under the Current tab, displays a
table with the current value of each initialization parameter as it
appears in the database instance.

You can make changes to the parameters in the Current tab of the
Initialization Parameters page. To save the changes, you select the Save
to File button.

To apply the changes to the current instance, you select the SPFile tab
and select the checkbox that applies changes in SPFile mode to the
current running instance(s).

Compatible Initialization Parameter

You cannot set the value of the COMPATIBLE initialization parameter to a
value that is less than the previous value. Starting with Oracle Database
10g, the COMPATIBLE initialization parameter becomes irreversible.

For instance, suppose you try to change the value of the COMPATIBLE
initialization parameter from 10.1.1 to 9.2.1. After you restart the
database, an error message indicating that the compatible setting cannot
be reversed is displayed. However, you can make this change if you do a
point-in-time recovery to a point before the compatibility was advanced.

Changes to the COMPATIBLE initialization parameter cannot be undone
thereafter. Therefore, you will not be able to use the ALTER DATABASE
RESET COMPATIBILITY command.
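For example (a sketch; the parameter and commands are standard, the value
illustrative), raising the compatibility level requires a restart because
COMPATIBLE is not dynamically modifiable:

-- Raise the compatibility level (irreversible from 10g onward)
ALTER SYSTEM SET COMPATIBLE = '10.1.0' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP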
Transportable Tablespaces

Oracle Database 10g allows users to move data and databases across
platform boundaries by transporting user Tablespaces.

The old releases of Oracle database allowed users to move their
tablespaces across Oracle databases that function on the same operating
system and have the same architecture. Oracle Database 10g supports an
enhanced cross-platform TTS feature that enables users to move data
across platform boundaries.

The cross-platform TTS feature in Oracle Database 10g allows the
migration of a database between different platforms by transporting the
user tablespaces and rebuilding the dictionary. This feature facilitates
the process of distributing data from a data warehouse environment to
data marts that run on smaller platforms.

The TTS feature can be used only if the source system and the target
system are compatible. Both systems should run on any one of the
supported platforms and use the same character sets. The list of
supported platforms is displayed by querying the V$TRANSPORTABLE_PLATFORM
view.

To use the cross-platform TTS feature, both the source and target
databases should have their COMPATIBLE initialization parameter
set to 10.0.0 or higher. This makes the data files platform-aware
when they are opened under Oracle Database 10g. These files are
identified and verified using their identical on-disk formats for
file header blocks.

You can set the compatibility of read-only and offline files at a higher
level only after making them online. This implies that read-only
tablespaces must be made read/write at least once before using the
cross-platform TTS feature of Oracle Database 10g.

To begin the TTS procedure to transport a tablespace from the source
platform to the target system, you should convert the data files of the
tablespace into a format compatible with both platforms. In Oracle
Database 10g, the source and target platforms may use a different endian
format, which is a standard for the byte order in which data is stored,
even though the disk structures adhere to a common format.

To use the TTS feature for different platforms having different endian
formats, you should use the CONVERT command of the Recovery Manager
(RMAN) utility. The CONVERT command changes the endian format of the data
files at either the source or the target platform. Platforms that use the
same endian format do not need any conversion.

To determine the endian format of the target and source platforms, you
can query the V$TRANSPORTABLE_PLATFORM view. The platform name and
platform identifier can be determined from the V$DATABASE view.

A SELECT statement can display the endian format of the platform on which
a database is hosted.
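For example (a minimal sketch using the documented columns of these
views):

-- Determine the endian format of the current database's platform
SELECT d.platform_name, tp.endian_format
FROM   v$database d, v$transportable_platform tp
WHERE  d.platform_id = tp.platform_id;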

You can start a TTS procedure by making tablespaces read-only at the
source platform and extracting metadata using Data Pump. Data Pump is a
utility that is used by Oracle Database 10g for importing and exporting
data.
The next step in a TTS procedure is to determine the endian format of the
target platform. If the target platform uses the same endian format as
the source platform, the data files and dump files are sent to the target
platform. If the endian format is different for the source and target
platforms, the data files are converted using the RMAN utility.

The last step in the TTS procedure is to use Data Pump to import
metadata and give the Tablespaces read and write permissions at
the target platform.
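Put together, the steps look like the following sketch (tablespace name,
credentials, directory object, and paths are illustrative):

-- On the source database: make the tablespace read-only
ALTER TABLESPACE example READ ONLY;

-- Export the tablespace metadata with Data Pump
expdp system/password DIRECTORY=dp_dir DUMPFILE=example_tts.dmp
      TRANSPORT_TABLESPACES=example

-- Copy (and, if needed, RMAN-convert) the data files, then on the
-- target database: import the metadata and reopen read/write
impdp system/password DIRECTORY=dp_dir DUMPFILE=example_tts.dmp
      TRANSPORT_DATAFILES='/u01/oradata/EXAMPLE01.DBF'
ALTER TABLESPACE example READ WRITE;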

In the earlier releases of Oracle database, there were certain
endian-dependent character large objects (CLOBs), which had to be
converted in Oracle Database 10g after transporting them. When dealing
with CLOBs, the RMAN utility does not handle their conversion during the
CONVERT phase.

Oracle database automatically handles the conversion of the CLOB data
while accessing it. In the earlier versions of Oracle database, the CLOBs
were represented in the UCS2 format, whereas in Oracle Database 10g they
are represented in the AL16UTF16 format.

Endian format is of two kinds: big-endian and little-endian. The UCS2
format is endian-dependent, whereas the AL16UTF16 format is
endian-independent. In Oracle Database 10g, the big-endian UCS2 format is
compatible with the AL16UTF16 format. CLOB data created on a big-endian
format does not require data conversion.

For transporting UCS2 little-endian format CLOBs to a big-endian format
using the AL16UTF16 format, data conversion is required. In this case,
the Oracle database converts the CLOB data while accessing it.

Using the CREATE TABLE AS SELECT command to access the CLOB data can
eliminate the run-time CLOB data conversion. New CLOB data using the
endian-independent AL16UTF16 format is created by running the CREATE
TABLE AS SELECT command.
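A minimal sketch (table name hypothetical):

-- Recreate transported CLOB data in the endian-independent
-- AL16UTF16 representation
CREATE TABLE documents_new AS SELECT * FROM documents;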

In the data file conversion displayed, the EXAMPLE tablespace is
transported from a source database on an Intel-based Linux 32-bit
platform to a target database on an Intel-based Windows 32-bit platform.
This tablespace is stored at the location D:\oradata\orcl\EXAMPLE01.DBF,
and the target location is D:\oradata\EXAMPLE01.DBF.

C:\Documents and Settings\bhavin>rman nocatalog target = sys/sys

Recovery Manager: Release 10.1.0.2.0 - Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

connected to target database: ORCL (DBID=1181893653)


using target database controlfile instead of recovery catalog

RMAN> convert tablespace 'EXAMPLE'
2> to platform ='Microsoft Windows IA (64-bit)'
3> db_file_name_convert=
4> 'D:\oradata\orcl\EXAMPLE01.DBF',
5> 'D:\oradata\EXAMPLE01.DBF';
Starting backup at 16-JUN-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00005 name=D:\ORADATA\ORCL\EXAMPLE01.DBF
converted datafile=D:\ORADATA\EXAMPLE01.DBF
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:16
Finished backup at 16-JUN-08

In the second example, the RMAN utility converts the data files
transported from Windows platform to Linux platform. This file is
previously converted at the source database and is now stored at
the location

The DB_FILE_NAME_CONVERT clause takes the file name, performs the
conversion, and places the converted file at a new location. By default,
converted files with identical data file names are stored in the new
flash recovery area.

To perform multiple file conversions in parallel, you can use the
parallelism option in RMAN. The time required to convert a file is
proportional to the time required to do an RMAN backup of that file. The
size of the file remains the same before and after conversion.

To display the platform identification number and platform name of the
database, you use the V$DATABASE view, which is an enhanced view.

V$TRANSPORTABLE_PLATFORM is a new fixed view that contains the list of
all the supported platforms. It also has information about the platform
identification number, platform name, and corresponding endian format for
each platform.
Data Pump Architecture

Data movements across different databases often affect the speed of the
system. Oracle Database 10g supports Data Pump, which allows high-speed
data movement across Oracle databases.

Data Pump is a server-based feature that allows uploading and downloading
of data and transfer of metadata between Oracle databases. This feature
is available in all the editions of Oracle Database 10g.

You can call Data Pump by using the PL/SQL package DBMS_DATAPUMP.
This allows you to create custom data movement utilities.
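A minimal sketch of such a custom utility (job, schema, file, and
directory names hypothetical; the DBMS_DATAPUMP calls shown are the
documented ones):

-- Start a schema-mode export job through the Data Pump API
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'SCOTT_EXP');
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'scott.dmp',
                         directory => 'DP_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(handle => h,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''SCOTT'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/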

Oracle Database 10g provides new tools that support Data Pump, including
the expdp export client and the impdp import client. Another tool that
supports the functional architecture of Data Pump is a web-based export
and import interface. This interface can be accessed using Database
Control.

The general architecture of Data Pump consists of several functional
components and consumers, including:

• Direct Path API (DPAPI)
• External table application programming interface (API)
• DBMS_METADATA package
• DBMS_DATAPUMP package
• SQL*Loader client
• expdp and impdp clients
• Other clients

During uploading and downloading of data and transfer of metadata, the
DPAPI improves performance by eliminating unnecessary data conversion and
parsing. It uses the direct path internal stream format as the format
stored in the Data Pump dump files.

Using the external table API services, you can assign access to external
tables. These services include the ORACLE_LOADER and ORACLE_DATAPUMP
access drivers. For example, using the ORACLE_DATAPUMP access driver, you
can assign external tables read and write access to files with binary
streams.

While loading and unloading metadata, all processes use the DBMS_METADATA
package. This package uses XML to retrieve an object's metadata,
transform the metadata into SQL Data Definition Language (DDL), and
submit the XML to recreate the retrieved object.

For movement of large data and metadata across Oracle databases, you can
use the DBMS_DATAPUMP package. This package supports APIs that contain
high-speed export and import utilities.

You can use the SQL*Loader client to import data from external tables
into tables in an Oracle database. The external table infrastructure
supports the SQL*Loader client, which helps in automatically migrating
loader control files to external table access parameters.
To start and monitor Data Pump operations, you can use the expdp and
impdp clients. These clients, in turn, call the DBMS_DATAPUMP package.
The expdp and impdp clients support the features of the original export
and import clients.

Other clients, such as SQL*Plus, may also call the DBMS_DATAPUMP package
for querying the status of Data Pump operations.

Data Pump helps in improving the performance of applications such as the
Database Control web interface, transportable tablespaces (TTS), and
replication. For example, Data Pump, which is built into the Database
Control web interface, improves the performance of all data transfer
operations within the interface.
Data Pump Export and Import – Features

Data Pump export and import are new utilities of Oracle Database
10g.

The Data Pump export utility enables you to unload data and metadata from
the database to OS files called dump file sets. The Data Pump import
utility is used to load data and metadata from dump file sets to a target
system. The application programming interface (API) of Data Pump uses
data files that are located on the server.

The Data Pump utilities enable you to export data from a remote database
to a dump file set. You can also load the source database directly into
the target database, eliminating the use of intervening files. Exporting
data using either of these methods is called network mode. The network
mode is useful in exporting data from a read-only database.

Each Data Pump operation needs a master table (MT). The MT contains all
the data related to the Data Pump operation. It is located in the schema
of the database user who runs the Data Pump operation. While creating the
MT is the final task in the export job of a Data Pump operation, loading
the MT is the first task in the import job.

During a file-based import, the first step is to load the MT into the
current user's schema. The MT is used during the import to sequence the
creation of the objects that are imported.

Benefits of Data Pump Export and Import Utilities

There are several benefits of using the Data Pump export and import
utilities. The benefits include automatic decisions on data-access
methods, fine-grained object selection, detaching from and reattaching to
long-running jobs, and restarting of Data Pump jobs. The utilities also
allow version specification and parallel execution.

The utilities:
Automatically decide the data-access method.
Allow fine-grained object selection.
Allow detaching from and reattaching to long-running jobs.
Enable restarting of Data Pump jobs.
Allow explicit version specification.
Allow parallel execution.

The Data Pump export and import utilities automatically choose the
method of data access. The data-access method can be direct path
or external tables. The Data Pump export and import utilities also
provide three parameters, EXCLUDE, INCLUDE, and CONTENT to enable
fine-grained object selection.

The Data Pump export and import utilities can detach from and reattach to
jobs without affecting the job. This ability of the utilities enables you
to monitor jobs from different locations. The Data Pump export and import
utilities enable restarting of jobs without loss of data if the
meta-information is intact. The voluntary or involuntary stopping of a
job does not affect its restarting.
The Data Pump export and import utilities allow you to specify the
version of the objects that are to be exported. A dump file set
containing objects with versions is compatible with any release of Oracle
Database 10g that supports Data Pump. The VERSION parameter is used to
specify the version for objects and is intended for use with future
releases.

The Data Pump export and import utilities allow you to specify the
maximum number of threads of active execution servers, which
operate on behalf of an export job, using the parallel parameter.

Other Benefits of Data Pump Export and Import Utilities.


In addition to the benefits discussed. There are some more
benefits include estimating the space consumption for an export
job, providing the network mode to work in a distributed
environment, and renaming of data files, schemas, and Tablespaces
during import.

The utilities:
Allow estimation of space consumption for an export job.
Provide a network mode to work in a distributed environment.
Allow renaming data files, schemas, and tablespaces during import.

The Data Pump export and import utilities also allow you to estimate the
space that may be consumed by an export job using the ESTIMATE_ONLY
parameter.

The network mode of the Data Pump export and import utilities
allows a direct export from a remote database to a dump file set.
This is done using a database link to the source system. The Data
Pump export and import utilities also allow renaming the target
data files, schemas, and Tablespaces during import.

The implementation of the Data Pump export and import utilities involves
a client process that calls the Data Pump API. A client is not needed
after the job is started and may detach from the job. Later, multiple
clients may attach to monitor the job.

When a client logs on to Oracle Database 10g, a shadow process is created
to service the Data Pump API requests. The shadow process creates a job
on receiving a DBMS_DATAPUMP.OPEN request. A shadow process exists in
association with the client. If a client detaches, the shadow process
also ends.

The Data Pump job primarily consists of creating the MT, the AQ
queues that communicate with other processes, and the Master
Control Process (MCP). While a job is running, the shadow process
services GET_STATUS requests from the client.

The MCP is responsible for controlling the execution and sequencing of a
Data Pump job. The MCP stores the information about the job state, job
description, job restart, and job dumpfile in the MT. The process name of
the MCP is DMnn.

The MCP creates worker processes on receiving the START_JOB request. The
number of worker processes depends on the value of the PARALLEL
parameter. The MCP requests the worker processes to perform tasks such as
loading and unloading of metadata and data. The process name of a worker
process is DWnn.
If the data access method for unloading or loading is external tables,
the worker process coordinates parallel execution servers based on the
load or unload task. As a result, intrapartition loading and unloading is
enabled.

Data Pump provides two methods of accessing table row data: direct path
load using the direct path API (DPAPI), and external tables. Data Pump
selects the direct path load and unload method when the structure of the
table allows it. The direct path method is also used when maximum
single-stream performance is required.

Data Pump uses external tables when there is a presence of:

• Tables that have fine-grained access control enabled in select and
insert modes.
• Domain indexes on LOB columns.
• Clustered tables.
• Tables with active triggers.

Data Pump uses external tables for loading and unloading when
certain conditions hold true. These conditions include presence of
fine-grained access control enabled in the select and insert modes
for tables, domain index on Large Objects (LOB) columns, clustered
tables, and tables with active triggers.

Data Pump also uses external tables when there is a presence of:

• Global indexes on partitioned tables with a single-partition load.
• BFILE or opaque type columns.
• Referential integrity constraints.
• VARRAY columns with an embedded opaque type.
• Tables with encrypted columns.
• Tables that are partitioned differently at load and unload times.

The other conditions when Data Pump uses external tables are the presence
of global indexes on partitioned tables with a single-partition load,
BFILE or opaque type columns, referential integrity constraints, VARRAY
columns with an embedded opaque type, tables with encrypted columns, and
tables that are partitioned differently at load and unload times.

Data that is loaded using one method can be unloaded using another
method because the external data representation is same for both
the methods.
Data Pump Export and Import – Parameters

Oracle Database 10g provides tools to export and import data to and from
an Oracle database.

Data Pump jobs manage three different types of files.

• Dump files
• Log files
• SQL files

The paths used to access files used by the Data Pump export and
import utilities are relative to the location of servers because
the pump operations are performed there. Absolute paths are
avoided to maintain network security.

When searching for the path, the Data Pump export and import utilities
first search for individual directory objects that may be associated with
each file. If these directory objects are not found, the directory object
specified by the DIRECTORY parameter is used. You can create an
environment variable, DATA_PUMP_DIR, to avoid using the DIRECTORY
parameter.

Dump files are used by the Data Pump export and import utilities to store
metadata about the objects being transferred. Log files are used by the
utilities to record all console messages that may be generated. SQL files
are used to store the results of all SQLFILE operations.

There are four interfaces using which the Data Pump export and
import utilities can be accessed. These are the command-line
interface, parameter files, the interactive command-line
interface, and Database Control. Database control provides Web
access to the Data Pump export and import utilities.

You can use the command-line interface to specify the export parameters
directly from the command line. All system messages generated during the
execution of the export or import command are logged to the console.

You create parameter files when you want to execute an export or import
command repeatedly and with the same parameters. All parameters, with the
exception of the PARFILE parameter, can be listed in the parameter file.
The PARFILE parameter, which lists the path of the parameter file, needs
to be listed on the command line.
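For example (a sketch; file name, schema, and directory object are
hypothetical):

# exp_hr.par -- parameters for a repeatable schema export
DIRECTORY=dp_dir
DUMPFILE=hr.dmp
SCHEMAS=hr
LOGFILE=hr_exp.log

The job is then run with: expdp hr/hr PARFILE=exp_hr.par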

You can access the interactive command-line interface by pressing the
CTRL key and the C key simultaneously. This key combination stops console
message logging and displays an import or export prompt. The interactive
command-line interface is accessed when an operation is underway, when
you attach to a stopped job, or when you attach to a job that is in the
process of executing.

Database control provides Web access to the data Pump export and
import utilities. To access these utilities, first access the
Database Control home page. Then, from the Utilities section on
the Maintenance tab, you can select the Export to Files, Import
from Files, or Import from Database link to access the required
Data Pump utility.
The Data Pump export and import utilities operate in several
modes, which define different types of import or export operation
specified in the command line. These modes are Full, Schema,
Table, Tablespace, and Transportable Tablespace (TTS).

If you want to export a database, you use the expdp command. The DUMPFILE
parameter is used to specify the dump files to be created. The FILESIZE
parameter enables you to specify the maximum file size. Further, you can
use the PARALLEL parameter to indicate the number of parallel processes
used to create the export dump.

You use the impdp command to import databases. The DIRECTORY parameter is
used to set the location of the source files for the import, and the
DUMPFILE parameter is used to list the files from which the import must
take place. In addition, the PARALLEL parameter is used to define the
number of parallel streams of load to be created.

To execute a limited export or import, you can create a parameter file
and include all your rules in it. To include all the rules in the export
or import command, you need to list the path of the parameter file after
the PARFILE parameter.

You can also use objects filters and content filters on the export
and import operations. Object filters are used to filter objects
such as views and packages from the operation. Content filters are
used to filter content such as metadata and queries from the
operation.

Object filters are imposed using the EXCLUDE and INCLUDE parameters, and
content filters are imposed using the CONTENT and QUERY parameters, among
others.

When you use the EXCLUDE parameter, all objects with the exception
of those that are listed after the parameter are included in the
import or export command.

When the INCLUDE parameter is used, only those objects that are
listed after the parameter are included in the import or export
command.

The three switches of the CONTENT parameter are METADATA_ONLY, DATA_ONLY,
and ALL. You use the METADATA_ONLY switch to export or import only the
metadata and the DATA_ONLY switch to export or import only the data. To
export or import both the metadata and the data, you use the ALL switch.

The QUERY parameter allows you to export or import specific data using an
SQL statement. Unlike in the original export utility, the QUERY parameter
can also be used in import operations. It can also be used to limit the
export and import operations to specific tables.
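A sketch of both kinds of filter (names hypothetical; command-line
quoting of the QUERY clause may need OS-specific escaping):

# Export only HR metadata, excluding views and packages
expdp hr/hr DIRECTORY=dp_dir DUMPFILE=hr_meta.dmp
      CONTENT=METADATA_ONLY EXCLUDE=VIEW,PACKAGE

# Export only rows matching a predicate
expdp hr/hr DIRECTORY=dp_dir DUMPFILE=hr_recent.dmp TABLES=employees
      QUERY=employees:"WHERE hire_date > DATE '2000-01-01'"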

The object filter keywords EXCLUDE and INCLUDE are mutually exclusive.
You cannot use the EXCLUDE or INCLUDE parameters if the CONTENT=DATA_ONLY
parameter is specified.
Object metadata is stored as an XML file. You can transform this
metadata at the time of creation of the Data Definition Language
(DDL), when importing using the Data Pump import utility. This is
very useful if the storage characteristics of the target instance
are different from the characteristics of the source instance.

Data Pump import supports three types of transformations that are
specified using the REMAP_DATAFILE, REMAP_TABLESPACE, and REMAP_SCHEMA
keywords.

The REMAP_DATAFILE keyword is used to move objects across platforms
having different file storage characteristics. The REMAP_TABLESPACE
keyword is used to move objects among tablespaces. The REMAP_SCHEMA
keyword is used to change the ownership of objects.
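For example (a sketch; schema and tablespace names hypothetical):

# Import while changing schema ownership and tablespace placement
impdp system/password DIRECTORY=dp_dir DUMPFILE=exp.dmp
      REMAP_SCHEMA=ctadmin:ctasm
      REMAP_TABLESPACE=users:ct_data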

To perform an import from a remote source database, you can use the
NETWORK_LINK parameter, which takes the name of a database link pointing
to the source database, for example:

NETWORK_LINK=source_db_link

Using Export and Import Modes

Oracle Database 10g provides various export and import modes that allow
you to export and import the database or schema objects. In this topic,
you learn about the features of the various export and import modes.

The Database Control home page contains the Maintenance tab that
comprises the Utilities section. This section contains links that
allow you to access Data Pump utilities. Each link, on being
clicked, launches wizards that guide you step-by-step towards
defining all the parameters of your Data Pump jobs. Database
control schedules these jobs as repeatable jobs.

Oracle provides the expdp and impdp command-line utilities that support
Data Pump activities. expdp, or Data Pump export, operations create dump
files in the directories pointed to by database directory objects. These
dump files contain database object definitions and data. If multiple
directories are specified for dump files, then the dump files are created
in a round-robin fashion.

The example displayed on the screen (reconstructed below) exports the
full database by deploying four parallel worker processes. To export the
full database, you specify FULL=Y. The database directory objects
DATADIR1, DATADIR2, DATADIR3, and DATADIR4 are specified.

Each dump file is a maximum of 2 GB in size. A minimum of four dump files
are created, one in each specified directory. The job and master table
have the default name SYS_EXPORT_FULL_01.
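The on-screen command is not reproduced in this text; a plausible
reconstruction (file name templates hypothetical) is:

expdp system/password FULL=Y PARALLEL=4 FILESIZE=2G
      DUMPFILE=DATADIR1:full1%U.dmp,DATADIR2:full2%U.dmp,
               DATADIR3:full3%U.dmp,DATADIR4:full4%U.dmp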

Once dump files are exported, you can import them using impdp. The
example displayed on the screen (reconstructed below) depicts the full
import of the dump files stored in the directory object IMP_DIR. While
importing, you need not specify FULL=Y, as the default behavior is to
import the entire contents of the dump files. The job and master table
have the default name SYS_IMPORT_FULL_01.
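A plausible reconstruction of the on-screen command (file names
hypothetical):

impdp system/password DIRECTORY=IMP_DIR
      DUMPFILE=full1%U.dmp,full2%U.dmp,full3%U.dmp,full4%U.dmp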
Instead of exporting the whole database, you can also selectively export
database schemas. Schema objects that are exported include functions,
procedures, packages, and user-defined types. Database users who have
been granted the EXP_FULL_DATABASE role can export multiple schemas,
whereas normal database users can export only their own schema.

For example, you export the CTADMIN schema into the directory pointed to
by the CT_DIR directory object using the export parameter file
exconfig.txt. The dump file to be created is ctadmin.dmp.

The export parameter file, exconfig.txt, is displayed on the screen
(reconstructed below). This file contains parameters that include a
function, procedure, package, type, and view.
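A plausible reconstruction of exconfig.txt (only the names given in the
narration are used):

# exconfig.txt
SCHEMAS=CTADMIN
DIRECTORY=CT_DIR
DUMPFILE=ctadmin.dmp
INCLUDE=FUNCTION
INCLUDE=PROCEDURE
INCLUDE=PACKAGE
INCLUDE=TYPE
INCLUDE=VIEW

The export is then run with: expdp system/password PARFILE=exconfig.txt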

The network mode import involves the import of data through database
links using the impdp utility.

The NETWORK_LINK parameter specifies the database link used for importing
data. Data that is imported is consistent as of the time stamp specified
in the FLASHBACK_TIME parameter.

The CTADMIN schema on the source database is remapped to the CTASM schema
on the target database using the REMAP_SCHEMA parameter.
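A sketch of such a network-mode import (link name and timestamp
hypothetical):

impdp system/password NETWORK_LINK=src_db
      REMAP_SCHEMA=CTADMIN:CTASM
      FLASHBACK_TIME="TO_TIMESTAMP('2008-06-16 10:00:00','YYYY-MM-DD HH24:MI:SS')"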

Database users who have been granted the IMP_FULL_DATABASE role can
import multiple schemas, whereas normal database users can import only
their own schema. In addition, for users having the IMP_FULL_DATABASE
role, the schema definition is created on the target database if it does
not already exist.

Data Pump Job Monitoring Views

Data Pump enables high-speed data transfer between Oracle databases. To
view the performance of Data Pump, you can use various dictionary views
provided by Oracle Database 10g.

Data Pump Job Monitoring Views

DBA_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
V$SESSION_LONGOPS

The DBA_DATAPUMP_JOBS view displays the active Data Pump jobs on one
instance or on all instances of a Real Application Clusters (RAC)
database. You can stop a job, change its parallelism, or monitor its
advancement by using the job information displayed.

The DBA_DATAPUMP_JOBS view also displays the Data Pump master tables
(MTs) that are not related to an active job. To restart a job, you can
use the MT associated with the JOB_NAME. You can remove the MTs that are
not currently in use from the DBA_DATAPUMP_JOBS view.

The DBA_DATAPUMP_SESSIONS view displays the user sessions associated with
a job.
The V$SESSION_LONGOPS view displays the Data Pump job progress in terms
of data transferred from tables. The quantity of transferred data is
noted in this view in megabytes. This record is updated at a specific
time interval to display the actual data transfer.
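For example (a minimal sketch using the documented columns of the view):

-- Monitor active Data Pump jobs
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;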

Using the Data Pump job monitoring views, you can perform various
operations on existing jobs, such as attaching to an existing job,
restarting a job, or just unloading data.

Operations that can be performed on an existing job:

Attaching to a job
Restarting a job
Unloading data

For example, you want to attach to a previously initiated job. The
command to perform this is displayed on the screen (reconstructed below).
If there is a single active job running in the specified schema, then you
are not required to specify the name of the job in the ATTACH parameter.
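A plausible reconstruction (the job name matches the master table
mentioned below):

expdp system/password ATTACH=SYS_EXPORT_SCHEMA_01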

The description and the progress of the job are displayed, followed by an
interactive prompt. To stop a job, you can use the STOP_JOB command. This
ends the client session and stops the job without preventing it from
being run in the future. You can restart the job if the dump file and the
SYSTEM.SYS_EXPORT_SCHEMA_01 table are intact.

To restart a job, you specify the name of the job if multiple jobs are
present in the specified schema. In the commands displayed on the screen
(reconstructed below), a job is restarted with a higher degree of
parallelism. Status messages are displayed, with a progress report from
each worker at a specified time interval, in the logging mode.
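A plausible reconstruction of those interactive commands (all are
documented interactive-mode commands):

Export> PARALLEL=8
Export> START_JOB
Export> STATUS=60
Export> CONTINUE_CLIENT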

You can add a new file to the dump file set of the associated job. To
cancel the job, you can use the KILL_JOB command.

To stop the import client session, you can use the EXIT_CLIENT command.
This command is used when you want to keep the current job running.

You can unload data from only the tables of a specified schema. The
example displayed on the screen (reconstructed below) unloads data from
the CT_EMPLOYEE and CT_DEPARTMENT tables in the database schema.

The unload process performs a schema-mode export, which is the default
mode. You can restrict the export to table data only by using the CONTENT
parameter. The directory object refers to a specific directory on the
server. The user CTADMIN (a database user) is authorized to read and
write export dump files to this directory. The directory contains the
CT_EXP.dmp file.
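A plausible reconstruction of that unload job (only the names given in
the narration are used; the INCLUDE clause keeps the job in the default
schema mode):

expdp ctadmin/password CONTENT=DATA_ONLY DIRECTORY=CT_DIR
      DUMPFILE=CT_EXP.dmp
      INCLUDE=TABLE:"IN ('CT_EMPLOYEE','CT_DEPARTMENT')"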
