Oracle Database 10g provides a flexible and cost-effective way to manage
enterprise data.
Grid Computing.
Enhanced Installation
Database
Performance Enhancements
Different Installation Options
Statistics Collections
Fast and Lightweight Installation with Pre- and Post-Installation
Validations
Data Pump
Peak load occurs only occasionally. Where loads vary in this way according to
user demand, a grid computing system is needed so that resources can be shared
rather than sized for each system's peak load.
Resource Manager
Oracle Streams
Self Management
File System
Raw Devices.
The setups for all demos and components, such as Apache and EM
Webstage, are kept on a separate CD.
For Linux, only Red Hat 2.1, Red Hat 3, and UnitedLinux 1.0 are
certified.
Solaris 2.8, 2.9, and higher versions are certified for Oracle
Database 10g.
Oracle 10g also checks the Oracle home. Oracle Database 10g can be
installed into an empty Oracle home or into one containing a release that
can be overwritten. The installer also sets the PATH and ORACLE_SID
variables.
In Linux, you can view, add, and modify users and groups with the User
Manager. There are two tabs on the User Manager window, Users and
Groups.
You can select the Groups tab to view the list of all local groups
on the system.
• Companion
• Oracle9iAS Infrastructure
• Oracle Documentation
The mid-tier components and the components that do not fit on the
database CD are written to a separate CD called the Companion CD.
These CDs can also be shipped on a single DVD.
• HTML DB
• Workflow
• Oracle HTTP Server (OHS)
• SQL for Java (SQLJ)
• JPublisher
• Context Knowledgebase
• Java
• interMedia NCOMPs
• Legato Storage Manager
• Examples of each Component
Management Options
File Storage Options
Backup and Recovery Options
Password Options
In Management Options, you have the naming option where you name
the database. You can specify the global database name using the
name.domain format, or you can simply define the database name.
Raw Devices, one of the File Storage Options, are disk partitions with
no file system.
With Raw Devices you can manage the storage devices outside the OS
file system.
The default port number for EM Database Control HTTP access is 5500.
You can obtain the port number for your system from the
$ORACLE_HOME/install/portlist.ini file.
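For example, if portlist.ini lists 5500 as the Database Control port, the
console URL typically has the following form (the host name here is only a
placeholder):

   http://dbhost.example.com:5500/em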
To access the Database Control home page, you use the Database
Control Login page. This page is displayed when the instance
starts. You specify an authorized user name and password to
access Database Control. Initially, the user name is SYS. The
password for this user name is the one that is specified at the
time of installation.
You select the SYSDBA option from the Connect As box and select
the Login button to access the Database Control home page.
Clone Database
Another configuration tool is the Clone Database wizard that
enables you to replicate a configured, tuned, and tested database
to another Oracle home. This wizard can be used to clone Oracle
databases release 8.1.7 and later. The wizard can replicate a
database in its committed form, even when the database is in use.
The wizard displays the Clone Database: Source Type page. On this page
you can choose to clone a database either from the running
database instance or from a previous clone operation. The Clone
Database wizard clones a database using Recovery Manager (RMAN).
To display the Patch wizard, you select the Maintenance tab on the
Database Control home page. In the Maintenance tab, you select the
Patch link from the Deployments region. You can also download
interim patches from MetaLink into the EM patch cache, which is a
part of the EM repository. This prevents repeated downloads.
You can also store patches on the destination systems, and apply
them manually at a later time. To automate the patching process,
you can provide a customizable patch application script. The
resident EM agents run this script on the destination system at a
specified time. The Oracle Universal Installer (OUI) keeps track
of the system’s correct patch level.
Statistics Collection
The activities performed using Oracle Database 10g can be tracked
with the help of the Automatic Workload Repository (AWR). The AWR
supports usage metrics that specify how Oracle Database 10g
has been used for an activity.
Both the database feature usage statistics and the database high-water mark
(HWM) statistics can be tracked and recorded weekly by using the
Manageability Monitor (MMON) process. This process first tracks the
statistics by sampling the data dictionary and then records the
statistics in AWR snapshots.
You can also use the HWM page to view the last sampled value for a
particular feature and its database version.
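As a minimal sketch, the recorded statistics can also be queried directly
from the data dictionary views that back these pages:

   SELECT name, version, detected_usages, currently_used
   FROM   dba_feature_usage_statistics;

   SELECT name, highwater, last_value, description
   FROM   dba_high_water_mark_statistics;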
Policy Framework
The policy framework is built over the configuration and metric
collection service of the EM. It enables you to evaluate, compare,
and retrieve the configuration information stored in the target.
This target may be a database, host, or listener. Each target has
a regular collection of configuration information that specifies
the previous configuration state.
The Policy Rule page has a summary list of the policy rule
violation details such as priority, violation count, last
evaluation, and description. On this page, the Related Links
region contains links to the Manage Policy Library page and the
Manage Policy Violations page. To open the Manage Policy
Violations page, you select the Manage Policy Violations link.
To open the Manage Policy Library page, you select the Manage
Policy link from the Related Links region on the Policy Violations
page. The Manage Policy Library page lists the priority, policy
rule, category, target type, and description of the different
policies. It also allows you to disable the existing policies
pertaining to a specific target.
Simplified Initialization Parameters
Next, you select the All Initialization Parameters link from the
Instance region of the Administration page.
You can make changes to the parameters in the Current tab of the
Initialization Parameters page. To save the changes, you select the
Save to File button.
Oracle Database 10g allows users to move data and databases across
platform boundaries by transporting user tablespaces.
The TTS feature can be used only if the source system and the
target system are compatible. Both systems should run on one of
the supported platforms and use the same character sets.
The list of supported platforms is displayed by querying the
V$TRANSPORTABLE_PLATFORM view.
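For example, the supported platforms and their endian formats can be listed
with a query such as:

   SELECT platform_id, platform_name, endian_format
   FROM   v$transportable_platform;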
To use the cross-platform TTS feature, both the source and target
databases should have their COMPATIBLE initialization parameter
set to 10.0.0 or higher. This makes the data files platform-aware
when they are opened under Oracle Database 10g. These files are
identified and verified using their identical on-disk formats for
file header blocks.
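A minimal sketch of checking and raising the COMPATIBLE setting (the change
takes effect only after the instance is restarted with the updated SPFILE):

   SELECT value FROM v$parameter WHERE name = 'compatible';
   ALTER SYSTEM SET COMPATIBLE = '10.0.0' SCOPE = SPFILE;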
The last step in the TTS procedure is to use Data Pump to import the
metadata and make the tablespaces read/write on the target platform.
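A hedged sketch of this last step, assuming a metadata dump file
tts_meta.dmp in a directory object DP_DIR, a transported data file
users01.dbf, and a tablespace named USERS:

   impdp system/password DIRECTORY=dp_dir DUMPFILE=tts_meta.dmp TRANSPORT_DATAFILES='/u01/oradata/users01.dbf'

   ALTER TABLESPACE users READ WRITE;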
Using the CREATE TABLE AS SELECT command to copy the CLOB data
can eliminate run-time CLOB data conversion. The new CLOB data
is created in the endian-independent AL16UTF16 format when the
CREATE TABLE AS SELECT command is run.
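A minimal sketch, with hypothetical table and column names, of copying CLOB
data with CREATE TABLE AS SELECT so that the new CLOBs are stored in the
endian-independent AL16UTF16 format:

   CREATE TABLE documents_new AS
     SELECT doc_id, doc_body
     FROM   documents;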
In the second example, the RMAN utility converts the data files
transported from the Windows platform to the Linux platform. The file
was previously converted at the source database and is now stored at
the specified location.
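A hedged sketch of an RMAN conversion of a Windows-format data file for use
on Linux, with hypothetical staging and destination paths (the platform name
string is one of the values listed in V$TRANSPORTABLE_PLATFORM):

   RMAN> CONVERT DATAFILE '/stage/users01.dbf'
           FROM PLATFORM 'Microsoft Windows IA (32-bit)'
           DB_FILE_NAME_CONVERT '/stage', '/u01/oradata/ORCL';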
You can call Data Pump by using the PL/SQL package DBMS_DATAPUMP.
This allows you to create custom data movement utilities.
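For example, a minimal PL/SQL sketch of a schema-mode export through
DBMS_DATAPUMP; the job name, dump file name, directory object DP_DIR, and
the HR schema are assumptions for illustration:

   DECLARE
     h NUMBER;
   BEGIN
     -- Open a schema-mode export job
     h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA',
                             job_name  => 'HR_EXP_JOB');
     -- Write the dump file to an existing directory object
     DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_exp.dmp', directory => 'DP_DIR');
     -- Export only the HR schema
     DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''HR'')');
     -- Start the job and detach; the job keeps running on the server
     DBMS_DATAPUMP.START_JOB(h);
     DBMS_DATAPUMP.DETACH(h);
   END;
   /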
Oracle Database 10g provides new tools that support Data Pump,
including the expdp export client and the impdp import client. Another
tool that supports the functional architecture of Data Pump is a
web-based export and import interface. This interface can be
accessed using Database Control.
Using the external table API services, you can access external tables.
These services include the ORACLE_LOADER and ORACLE_DATAPUMP access
drivers. For example, using the ORACLE_DATAPUMP access driver, external
tables can be given both read and write access to files stored as
binary streams.
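A minimal sketch of unloading query results to a binary dump file through
the ORACLE_DATAPUMP access driver; the directory object DP_DIR and the
source table EMPLOYEES are assumptions:

   CREATE TABLE employees_ext
     ORGANIZATION EXTERNAL (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY dp_dir
       LOCATION ('employees_ext.dmp')
     )
   AS SELECT employee_id, last_name, salary FROM employees;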
You can use the SQL Loader client, also referred to as SQL*Loader,
to import data from external files into tables in an Oracle
database. The external table access parameters are based on the
SQL*Loader syntax, which helps in automatically migrating loader
control files to external table access parameters.
To start and monitor Data Pump operations, you can use the expdp and
impdp clients. These clients, in turn, call the DBMS_DATAPUMP package.
The expdp and impdp clients support the features of the original export
and import clients.
Data Pump export and import are new utilities of Oracle Database
10g.
The Data Pump export utility enables you to unload data and
metadata from the database to OS files called dump file sets. The
Data Pump import utility is used to load data and metadata from
dump file sets into a target system. The application programming
interface (API) of Data Pump works with dump files that are located
on the server.
The Data Pump utilities enable you to export data from a remote
database to a dump file set. You can also load the source database
directly into the target database, eliminating the use of intervening
files. Exporting data using either of these methods is called network
mode. The network mode is useful for exporting data from a read-only
database.
There are several benefits of using the Data Pump export and
import utilities. The benefits include automatic selection of data-
access methods, fine-grained object selection, detaching from and
reattaching to long-running jobs, and restarting of Data Pump
jobs. The utilities also allow version specification and parallel
execution.
The Data Pump export and import utilities automatically choose the
method of data access. The data-access method can be direct path
or external tables. The Data Pump export and import utilities also
provide three parameters, EXCLUDE, INCLUDE, and CONTENT to enable
fine-grained object selection.
The Data Pump export and import utilities can detach from and
reattach to jobs without affecting the job. This ability of the
utilities enables you to monitor jobs from different locations.
The Data Pump export and import utilities enable restarting of
jobs without loss of data if the meta-information is intact. The
voluntary or involuntary stopping of a job does not affect its
restarting.
The Data Pump export and import utilities allow you to specify the
version of the objects that are to be exported. A dump file set
containing objects with versions is compatible with any release of
Oracle Database 10g that supports Data Pump. The VERSION parameter
is used to specify the version for the objects; because Oracle Database
10g is the first release with Data Pump, it is mainly reserved for use
with future releases.
The Data Pump export and import utilities allow you to specify, using
the PARALLEL parameter, the maximum number of threads of active
execution servers that operate on behalf of an export job. The
utilities also allow you to estimate the space that an export job
would consume, using the ESTIMATE_ONLY parameter.
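For example, a space estimate can be requested without writing any dump
files (schema name and credentials are placeholders):

   expdp hr/password SCHEMAS=hr ESTIMATE_ONLY=y ESTIMATE=BLOCKS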
The network mode of the Data Pump export and import utilities
allows a direct export from a remote database to a dump file set.
This is done using a database link to the source system. The Data
Pump export and import utilities also allow renaming of the target
data files, schemas, and tablespaces during import.
A Data Pump job primarily consists of creating the master table (MT),
the Advanced Queuing (AQ) queues used to communicate with other
processes, and the Master Control Process (MCP). While a job is
running, the shadow process services GET_STATUS requests from the
client.
Data Pump provides two methods of accessing table row data: direct
path load using the direct path API (DPAPI) and external tables.
Data Pump selects the direct path load and unload method when the
structure of the table allows it. The direct path method is also
used when maximum single-stream performance is required.
Data Pump uses external tables for loading and unloading when
certain conditions hold true. These conditions include presence of
fine-grained access control enabled in the select and insert modes
for tables, domain index on Large Objects (LOB) columns, clustered
tables, and tables with active triggers.
The other conditions in which Data Pump uses external tables are the
presence of a global index on partitioned tables with a single-
partition load, BFILE or opaque type columns, referential integrity
constraints, VARRAY columns with embedded opaque types, tables with
encrypted columns, and tables that are partitioned differently at
load and unload times.
Data that is loaded using one method can be unloaded using the other
method because the external data representation is the same for both
methods.
Data Pump Import Export – Parameters
• Dump files
• Log files
• SQL files
The paths used to access files by the Data Pump export and
import utilities are relative to the server because the Data Pump
operations are performed there. Absolute paths are avoided to
maintain network security.
When searching for the path, the Data Pump export and import
utilities first search for individual directory objects that may
be associated with each file. If these directory objects are not
found, the directory object specified by the DIRECTORY parameter
is used. You can create an environment variable, DATA_PUMP_DIR, to
avoid using the DIRECTORY parameter.
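A minimal sketch of creating a directory object and granting access to it;
the object name, path, and user are assumptions:

   CREATE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';
   GRANT READ, WRITE ON DIRECTORY dp_dir TO hr;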
Dump files are used by the Data Pump export and import utilities
to store the data and metadata of the objects being transferred. Log
files are used by the utilities to record all console messages that may
be generated. SQL files are used to store the results of all
SQLFILE operations.
There are four interfaces through which the Data Pump export and
import utilities can be accessed. These are the command-line
interface, parameter files, the interactive command-line
interface, and Database Control.
Database Control provides web access to the Data Pump export and
import utilities. To access these utilities, first access the
Database Control home page. Then, from the Utilities section on
the Maintenance tab, you can select the Export to Files, Import
from Files, or Import from Database link to access the required
Data Pump utility.
The Data Pump export and import utilities operate in several
modes, which define different types of import or export operation
specified in the command line. These modes are Full, Schema,
Table, Tablespace, and Transportable Tablespace (TTS).
If you want to export a database, you use the expdp command. The
DUMPFILE parameter is used to specify the names of the dump files to
be created. The FILESIZE parameter enables you to specify the
maximum size of each dump file. Further, you can use the PARALLEL
parameter to indicate the number of parallel processes used to create
the export dump.
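A hedged example of such a full export; the directory object, dump file
names, and sizes are placeholders:

   expdp system/password FULL=y DIRECTORY=dp_dir DUMPFILE=full%U.dmp FILESIZE=2G PARALLEL=4 LOGFILE=full_exp.log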
You can also use object filters and content filters on the export
and import operations. Object filters are used to filter objects
such as views and packages from the operation. Content filters are
used to filter content such as metadata and queries from the
operation.
When you use the EXCLUDE parameter, all objects with the exception
of those that are listed after the parameter are included in the
import or export command.
When the INCLUDE parameter is used, only those objects that are
listed after the parameter are included in the import or export
command.
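For example (schema, directory object, and dump file names are placeholders):

   expdp hr/password SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_novw.dmp EXCLUDE=VIEW
   expdp hr/password SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_meta.dmp INCLUDE=TABLE CONTENT=METADATA_ONLY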
Network_link=//oracle10gserver/compname/dbname
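As noted earlier, the NETWORK_LINK parameter names a database link that
points to the source system; a minimal sketch, with the link name, connect
string, and table name as assumptions:

   CREATE DATABASE LINK src_link CONNECT TO hr IDENTIFIED BY hr USING 'SRCDB';

   impdp hr/password NETWORK_LINK=src_link TABLES=employees DIRECTORY=dp_dir LOGFILE=net_imp.log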
The Database Control home page contains the Maintenance tab, which
comprises the Utilities section. This section contains links that
allow you to access the Data Pump utilities. Each link, when
clicked, launches a wizard that guides you step by step through
defining all the parameters of your Data Pump job. Database
Control schedules these jobs as repeatable jobs.
Oracle provides the EXPDP and IMPDP command-line utilities that
support Data Pump activities. EXPDP, or Data Pump export, operations
create dump files in the directories pointed to by database directory
objects. These dump files contain database object definitions and
data. If multiple directories are specified for dump files, the dump
files are created in a round-robin fashion.
Once dump files are exported, you can import them using IMPDP. The
example displayed on the screen depicts the full import of the
dump files stored in the directory object IMP_DIR. While
importing, you need not specify FULL=Y because the default behavior is
to import the entire contents of the dump files. The job and master
table have the default name SYS_IMPORT_FULL_01.
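A hedged sketch of such a full import; the dump file name is a placeholder:

   impdp system/password DIRECTORY=IMP_DIR DUMPFILE=expfull%U.dmp LOGFILE=full_imp.log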
Instead of exporting the whole database, you can also selectively
export database schemas. Schema objects that are exported include
functions, procedures, packages, and user-defined types. Database
users who have been granted the EXP_FULL_DATABASE role can
export multiple schemas, whereas normal database users can export
only their own schema.
For example, you export the CTADMIN schema into the directory pointed
to by the CT_DIR directory object using the export parameter file
exconfig.txt. The dump file to be created is ctadmin.dmp.
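A sketch of what exconfig.txt and the corresponding command might contain;
the LOGFILE entry and credentials are added assumptions:

   # exconfig.txt
   SCHEMAS=CTADMIN
   DIRECTORY=CT_DIR
   DUMPFILE=ctadmin.dmp
   LOGFILE=ctadmin_exp.log

   expdp system/password PARFILE=exconfig.txt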
DBA_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
V$SESSION_LONGOPS
Using the Data Pump job monitoring views, you can perform various
operations on existing jobs, such as attaching to an existing job,
restarting a job, or just unloading data.
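For example, the currently defined jobs and the sessions attached to them
can be listed as follows:

   SELECT owner_name, job_name, operation, job_mode, state
   FROM   dba_datapump_jobs;

   SELECT owner_name, job_name, saddr
   FROM   dba_datapump_sessions;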
The description and the progress of the job are displayed, followed by
an interactive prompt. To stop a job, you can use the
STOP_JOB command. This ends the client session and stops the
job without preventing it from being run again in the future. You can
restart the job if the dump file and the
SYSTEM.SYS_EXPORT_SCHEMA_01 table are intact.
To restart a job, you specify the name of the job if multiple jobs
are present in the specified schema. In the commands displayed on
the screen, a job is restarted with a higher degree of parallelism.
In logging mode, status messages showing the progress of each worker
process are displayed at a specified time interval.
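A hedged sketch of reattaching to a stopped schema export, raising its
parallelism, restarting it, and requesting status updates every 300 seconds
(the job name and credentials are placeholders):

   expdp system/password ATTACH=SYS_EXPORT_SCHEMA_01

   Export> PARALLEL=4
   Export> START_JOB
   Export> STATUS=300
   Export> CONTINUE_CLIENT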
You can add a new file to the dump file set of the associated job.
To cancel the job, you can use the KILL_JOB command.
To stop the import client session, you can use the EXIT_CLIENT
command. This command is used when you want to keep the current job
running.
You can also unload only the data from tables of a specified schema. The
example displayed on the screen unloads data from the CT_EMPLOYEE
and CT_DEPARTMENT tables in the database schema.
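A hedged sketch of such a data-only unload; credentials and the dump file
name are placeholders, while the table and directory names follow the
example in the text:

   expdp ctadmin/password TABLES=CT_EMPLOYEE,CT_DEPARTMENT CONTENT=DATA_ONLY DIRECTORY=CT_DIR DUMPFILE=ct_data.dmp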