answer: a, c
explanation:
monitoring index usage
oracle provides a means of monitoring indexes to determine if they are being used
or not used. if it is determined that an index is not being used, then it can be
dropped, thus eliminating unnecessary statement overhead.
to start monitoring an index's usage, issue this statement:
alter index <index_name> monitoring usage;
later, issue the following statement to stop the monitoring:
alter index <index_name> nomonitoring usage;
the view v$object_usage can be queried for the index being monitored to see if the
index has been used. the view contains a used column whose value is yes or no,
depending upon if the index has been used within the time period being monitored.
the view also contains the start and stop times of the monitoring period, and a
monitoring column (yes/no) to indicate if usage monitoring is currently active.
each time that you specify monitoring usage, the v$object_usage view is reset for
the specified index. the previous usage information is cleared or reset, and a new
start time is recorded. when you specify nomonitoring usage, no further monitoring
is performed, and the end time is recorded for the monitoring period. until the
next alter index ... monitoring usage statement is issued, the view information is
left unchanged.
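as an illustrative sketch of the full cycle (the index name sales_idx is a placeholder, not from the question):

```sql
-- start recording usage for the index
alter index sales_idx monitoring usage;

-- ... run the workload for a representative period ...

-- stop recording; the end time is written to v$object_usage
alter index sales_idx nomonitoring usage;

-- was the index used during the monitored window?
select index_name, used, monitoring, start_monitoring, end_monitoring
  from v$object_usage
 where index_name = 'SALES_IDX';
```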
2. you need to create an index on the sales table, which is 10 gb in size. you
want your index to be spread across many tablespaces, decreasing contention for
index lookup, and increasing scalability and manageability.
which type of index would be best for this table?
a. bitmap
b. unique
c. partitioned
d. reverse key
e. single column
f. function-based
answer: c
explanation:
i suggest that you read chapters 10 & 11 in oracle9i database concepts release 2
(9.2) march 2002 part no. a96524-01 (a96524.pdf)
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) ch 10 bitmap indexes
the purpose of an index is to provide pointers to the rows in a table that contain
a given key value. in a regular index, this is achieved by storing a list of
rowids for each key corresponding to the rows with that key value. oracle stores
each key value repeatedly with each stored rowid. in a bitmap index, a bitmap for
each key value is used instead of a list of rowids.
each bit in the bitmap corresponds to a possible rowid. if the bit is set, then it
means that the row with the corresponding rowid contains the key value. a mapping
function converts the bit position to an actual rowid, so the bitmap index
provides the same functionality as a regular index even though it uses a different
representation internally. if the number of different key values is small, then
bitmap indexes are very space efficient.
bitmap indexing efficiently merges indexes that correspond to several conditions
in a where clause. rows that satisfy some, but not all, conditions are filtered
out before the table itself is accessed. this improves response time, often
dramatically.
note: bitmap indexes are available only if you have purchased the oracle9i
enterprise edition.
see oracle9i database new features for more information about the features
available in oracle9i and the oracle9i enterprise edition.
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) ch 11
partitioned indexes
just like partitioned tables, partitioned indexes improve manageability,
availability, performance, and scalability. they can either be partitioned
independently (global indexes) or automatically linked to a table's partitioning
method (local indexes).
local partitioned indexes
local partitioned indexes are easier to manage than other types of partitioned
indexes. they also offer greater availability and are common in dss environments.
the reason for this is equipartitioning: each partition of a local index is
associated with exactly one partition of the table. this enables oracle to
automatically keep the index partitions in sync with the table partitions, and
makes each table-index pair independent. any actions that make one partition's
data invalid or unavailable only affect a single partition.
you cannot explicitly add a partition to a local index. instead, new partitions
are added to local indexes only when you add a partition to the underlying table.
likewise, you cannot explicitly drop a partition from a local index. instead,
local index partitions are dropped only when you drop a partition from the
underlying table.
a local index can be unique. however, in order for a local index to be unique, the
partitioning key of the table must be part of the index's key columns. unique
local indexes are useful for oltp environments. by default, oracle gives each local
index partition the same name as the corresponding table partition and stores the
index partition in the same tablespace as the table partition.
see also: oracle9i data warehousing guide for more information about partitioned
indexes.
global partitioned indexes
global partitioned indexes are flexible in that the degree of partitioning and the
partitioning key are independent from the table's partitioning method. they are
commonly used for oltp environments and offer efficient access to any individual
record.
the highest partition of a global index must have a partition bound, all of whose
values are maxvalue. this ensures that all rows in the underlying table can be
represented in the index. global prefixed indexes can be unique or nonunique. you
cannot add a partition to a global index because the highest partition always has
a partition bound of maxvalue. if you wish to add a new highest partition, use the
alter index split partition statement. if a global index partition is empty, you
can explicitly drop it by issuing the alter index drop partition statement. if a
global index partition contains data, dropping the partition causes the next
highest partition to be marked unusable. you cannot drop the highest partition in
a global index.
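to make the local/global distinction concrete, here is a hypothetical sketch (table, column, and partition names are invented for illustration):

```sql
-- local index: partitions are created and dropped automatically
-- in step with the partitions of the sales table
create index sales_date_lix on sales (sale_date) local;

-- global index: its own range partitioning, independent of the
-- table; the highest partition must be bound by maxvalue
create index sales_cust_gix on sales (cust_id)
  global partition by range (cust_id)
  (partition p1   values less than (100000),
   partition pmax values less than (maxvalue));
```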
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) ch 10
unique and nonunique indexes
indexes can be unique or nonunique. unique indexes guarantee that no two rows of a
table have duplicate values in the key column (or columns). nonunique indexes do not
impose this restriction on the column values. oracle recommends that unique indexes
be created explicitly, and not through enabling a unique constraint on a table.
alternatively, you can define unique integrity constraints on the desired columns.
oracle enforces unique integrity constraints by automatically defining a unique
index on the unique key. however, it is advisable that any index that exists for
query performance, including unique indexes, be created explicitly.
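the two approaches described above can be sketched as follows (table and column names are hypothetical):

```sql
-- explicit unique index, as recommended for query performance
create unique index emp_email_uix on employees (email);

-- alternative: a unique integrity constraint, for which oracle
-- automatically defines a unique index on the key
alter table employees add constraint emp_email_un unique (email);
```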
a. bitmap
b. b-tree
c. partitioned
d. reverse key
answer: b
explanation:
oracle provides several indexing schemes that provide complementary performance
functionality. these are:
1. b-tree indexes - the default and the most common.
2. b-tree cluster indexes - defined specifically for clusters.
3. hash cluster indexes - defined specifically for a hash cluster.
4. global and local indexes - relate to partitioned tables and indexes.
5. reverse key indexes - most useful for oracle real application clusters
applications.
6. bitmap indexes - compact; work best for columns with a small set of values.
7. function-based indexes - contain the precomputed value of a function/expression.
8. domain indexes - specific to an application or cartridge.
4. the credit controller for your organization has complained that the report she
runs to show customers with bad credit ratings takes too long to run. you look at
the query that the report runs and determine that the report would run faster if
there were an index on the credit_rating column of the customers table.
the customers table has about 5 million rows and around 100 new rows are added
every month. old records are not deleted from the table.
the credit_rating column is defined as a varchar2(5) field. there are only 10
possible credit ratings and a customer's credit rating changes infrequently.
customers with bad credit ratings have a value in the credit_rating column of
'bad' or 'f'.
which type of index would be best for this column?
a. b-tree
b. bitmap
c. reverse key
d. function-based
answer: b
explanation:
ad a: why b-tree is not good for this problem:
(1) b-trees provide excellent retrieval performance for a wide range of queries,
including exact match and range searches.
(2) inserts, updates, and deletes are efficient, maintaining key order for fast
retrieval.
since this column is rarely updated, no records are deleted, and we do not have a
wide range of queries on this column, a b-tree index is not a good solution here.
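a sketch of the recommended index for this scenario, assuming the table and column names from the question:

```sql
-- 10 distinct ratings across ~5 million mostly static rows:
-- low cardinality on a large table favors a bitmap index
create bitmap index customers_credit_bix
    on customers (credit_rating);
```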
5. your developers asked you to create an index on the prod_id column of the
sales_history table, which has 100 million rows.
the table has approximately 2 million rows of new data loaded on the first day of
every month. for the remainder of the month, the table is only queried. most
reports are generated according to the prod_id, which has 96 distinct values.
a. bitmap
b. reverse key
c. unique b-tree
d. normal b-tree
e. function-based
f. non-unique concatenated
answer: a
explanation:
regular b*-tree indexes work best when each key or key range references only a few
records, such as employee names. bitmap indexes, by contrast, work best when each
key references many records, such as employee gender.
bitmap indexes can substantially improve performance of queries with the following
characteristics:
(a) the where clause contains multiple predicates on low- or medium-cardinality
columns.
(b) the individual predicates on these low- or medium-cardinality columns select a
large number of rows.
(c) bitmap indexes have been created on some or all of these low- or medium-
cardinality columns.
(d) the tables being queried contain many rows.
you can use multiple bitmap indexes to evaluate the conditions on a single table.
bitmap indexes are thus highly advantageous for complex ad hoc queries that
contain lengthy where clauses. bitmap indexes can also provide optimal performance
for aggregate queries. here, 96 distinct values << 100 million rows: low cardinality
==> bitmap index, and a large number of rows ==> bitmap index.
see oracle8 tuning release 8.0 december, 1997 part no. a58246-01 (a58246.pdf) pg.
181. (10-13)
a. bitmap
b. unique
c. partitioned
d. reverse key
e. single column
f. function-based
answer: a
explanation:
bitmap indexes can substantially improve performance of queries with the following
characteristics:
(a) the where clause contains multiple predicates on low- or medium-cardinality
columns.
(b) the individual predicates on these low- or medium-cardinality columns select a
large number of rows.
(c) bitmap indexes have been created on some or all of these low- or medium-
cardinality columns.
(d) the tables being queried contain many rows.
you can use multiple bitmap indexes to evaluate the conditions on a single table.
bitmap indexes are thus highly advantageous for complex ad hoc queries that
contain lengthy where clauses. bitmap indexes can also provide optimal performance
for aggregate queries.
ad a: true. low cardinality ==> bitmap index, and a large number of rows ==> bitmap index.
see oracle8 tuning release 8.0 december, 1997 part no. a58246-01 (a58246.pdf) pg.
181. (10-13)
7. the user smith created the sales history table. smith wants to find out the
following information about the sales history table:
- the size of the initial extent allocated to the sales history data segment
- the total number of extents allocated to the sales history data segment
which data dictionary view(s) should smith query for the required information?
a. user_extents
b. user_segments
c. user_object_size
d. user_object_size and user_extents
e. user_object_size and user_segments
answer: b
explanation:
sql> desc user_segments
segment_name varchar2(81)
partition_name varchar2(30)
segment_type varchar2(18)
tablespace_name varchar2(30)
bytes number
blocks number
extents number
initial_extent number
next_extent number
min_extents number
max_extents number
pct_increase number
freelists number
freelist_groups number
buffer_pool varchar2(7)
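given the columns above, smith could answer both questions with one query against user_segments; the segment name below is assumed to match how the table was created:

```sql
select segment_name, initial_extent, extents
  from user_segments
 where segment_name = 'SALES_HISTORY';
```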
8. which password management feature ensures a user cannot reuse a password for a
specified time interval?
a. account locking
b. password history
c. password verification
d. password expiration and aging
answer: b
explanation:
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) 22-8
account locking
oracle can lock a user's account if the user fails to log in to the system within a
specified number of attempts. depending on how the account is configured, it can
be unlocked automatically after a specified time interval or it must be unlocked
by the database administrator.
password complexity verification
complexity verification checks that each password is complex enough to provide
reasonable protection against intruders who try to break into the system by
guessing passwords.
password history
the password history option checks each newly specified password to ensure that a
password is not reused for the specified amount of time or for the specified
number of password changes. the database administrator can configure the rules for
password reuse with create profile statements.
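as a sketch of such a rule (the profile name and limit values are illustrative only):

```sql
-- a password cannot be reused for 90 days, and at least 5
-- intervening password changes are required
create profile secure_prof limit
    password_reuse_time 90
    password_reuse_max  5;

alter user smith profile secure_prof;
```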
9. which view provides the names of all the data dictionary views?
a. dba_names
b. dba_tables
c. dictionary
d. dba_dictionary
answer: c
explanation:
http://docs.rinet.ru:8080/o8/ch02/ch02.htm
all the data dictionary tables and views are owned by sys. you can query the
dictionary table to obtain the list of all dictionary views.
10. the control file defines the current state of the physical database.
which three dynamic performance views obtain information from the control file?
(choose three.)
a. v$log
b. v$sga
c. v$thread
d. v$version
e. v$datafile
f. v$parameter
answer: a, c, e
explanation:
v$log: this view contains log file information from the control files.
v$sga: this view contains summary information on the system global area (sga).
v$thread: this view contains thread information from the control file.
v$version: version numbers of core library components in the oracle server. there
is one row for each component.
v$datafile: this view contains datafile information from the control file.
v$parameter: displays information about the initialization parameters that are
currently in effect for the session. a new session inherits parameter values from
the instance-wide values displayed by the v$system_parameter view.
11. which data dictionary view shows the available free space in a certain
tablespace?
a. dba_extents
b. v$freespace
c. dba_free_space
d. dba_tablespaces
e. dba_free_extents
answer: c
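for example, a query along these lines (the tablespace name is a placeholder) sums the free extents recorded in dba_free_space:

```sql
select tablespace_name,
       sum(bytes) / 1024 / 1024 as free_mb
  from dba_free_space
 where tablespace_name = 'USERS'
 group by tablespace_name;
```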
12. which data dictionary view would you use to get a list of object privileges
for all database users?
a. dba_tab_privs
b. all_tab_privs
c. user_tab_privs
d. all_tab_privs_made
answer: a
explanation:
ad a: true. dba_tab_privs this view lists all grants on objects in the database.
(a58242.pdf) pg. 261. (2-91).
ad b: false. all_tab_privs this view lists the grants on objects for which the
user or public is the grantee. (a58242.pdf) pg. 203. (2-33).
ad c: false. user_tab_privs this view contains information on grants on objects
for which the user is the owner, grantor, or grantee. (a58242.pdf) pg. 333. (2-
163).
ad d: false. all_tab_privs_made this view lists the user's grants and grants on
the user's objects. (a58242.pdf) pg. 204. (2-34).
13. user smith created indexes on some tables owned by user john. you need to
display the following:
- index names
- index types
which data dictionary view(s) would you need to query?
a. dba_indexes only
b. dba_ind_columns only
c. dba_indexes and dba_users
d. dba_ind columns and dba_users
e. dba_indexes and dba_ind_expressions
f. dba_indexes, dba_tables, and dba_users
answer: a
explanation:
ad a: dba_indexes. this view contains descriptions for all indexes in the
database. to gather statistics for this view, use the sql command analyze. this
view supports parallel partitioned index scans. (a58242.pdf) pg. 230. (2-60).
ad b: dba_ind_columns. this view contains descriptions of the columns comprising
the indexes on all tables and clusters. (a58242.pdf) pg. 232. (2-62).
ad c: dba_users. this view lists information about all users of the database.
(a58242.pdf) pg. 267. (2-97).
ad e: dba_ind_expressions does not exist.
ad f: dba_tables. this view contains descriptions of all relational tables in the
database. to gather statistics for this view, use the sql command analyze.
(a58242.pdf) pg. 262. (2-92).
14. you need to know how many data files were specified as the maximum for the
database when it was created. you did not create the database and do not have the
script used to create the database. how could you find this information?
answer: d
explanation:
ad a: false. dba_data_files contains information about database files. we need
information about max number of datafiles. see (a58242.pdf) pg. 225. (2-55)
ad b: v$datafile contains datafile information from the control file. (a58242.pdf)
pg. 363. (3-23)
ad c: this command just shows the locations of the current control files.
ad d: v$controlfile_record_section displays information about the controlfile
record sections. (a58242.pdf) pg. 360. (3-20)
15. the emp table contains a self-referential integrity constraint requiring all not null values
inserted in the manager_id column to exist in the employee_id column. which view
or combination of views is required to return the name of the foreign key
constraint and the referenced primary key?
a. dba_tables only
b. dba_constraints only
c. dba_tab_columns only
d. dba_cons_columns only
e. dba_tables and dba_constraints
f. dba_tables and dba_cons_columns
answer: b
explanation:
ad a: false. dba_tables contains descriptions of all relational tables in the
database. to gather statistics for this view, use the sql command analyze. no
constraint information see (a58242.pdf) pg. 262 (2-92).
ad b: true. dba_constraints contains constraint definitions on all tables. see
(a58242.pdf) pg. 253 (2-83).
ad c: false. dba_tab_columns contains information which describes columns of all
tables, views, and clusters. no constraint name information. see (a58242.pdf) pg.
259 (2-89).
ad d: false. dba_cons_columns contains information about accessible columns in
constraint definitions. see (a58242.pdf) pg. 224 (2-54).
ad e: false. we don't need the dba_tables.
ad f: false.
16. which data dictionary view(s) do you need to query to find the following
information about a user?
- whether the user's account has expired
- the user's default tablespace name
- the user's profile name
a. dba_users only
b. dba_users and dba_profiles
c. dba_users and dba_tablespaces
d. dba_users, dba_ts_quotas, and dba_profiles
e. dba_users, dba_tablespaces, and dba_profiles
answer: a
explanation:
sql> desc dba_users
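describing dba_users shows that it alone carries all three items; a query such as the following (username assumed) would return them:

```sql
-- account_status/expiry_date cover expiry, plus the default
-- tablespace and profile names
select username, account_status, expiry_date,
       default_tablespace, profile
  from dba_users
 where username = 'SMITH';
```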
17. you need to determine the location of all the tables and indexes owned by one
user. in which dba view would you look?
a. dba_tables
b. dba_indexes
c. dba_segments
d. dba_tablespaces
answer: c
explanation:
ad a: false. dba_tables contains descriptions of all relational tables in the
database. to gather statistics for this view, use the sql command analyze. no
index information see (a58242.pdf) pg. 262 (2-92).
ad b: false. dba_indexes contains descriptions for all indexes in the database. to
gather statistics for this view, use the sql command analyze. this view supports
parallel partitioned index scans. no table information. see (a58242.pdf) pg. 230
(2-60).
ad c: true. dba_segments contains information about storage allocated for all
database segments. username of the segment owner, type of segment: ... table,
index .... see (a58242.pdf) pg. 254 (2-84).
ad d: false. dba_tablespaces contains descriptions of all tablespaces. no table
and index information. see (a58242.pdf) pg. 264 (2-94).
18. which data dictionary view would you use to get a list of all database users
and their default settings?
a. all_users
b. user_users
c. dba_users
d. v$session
answer: c
explanation:
ad a: false. all_users this view contains information about all users of the
database: name of the user, id number of the user, user creation date, but no
default settings. see (a58242.pdf) pg. 209 (2-39).
ad b: false. user_users this view contains information about the current user. not
all user. see (a58242.pdf) pg. 339 (2-169).
ad c: true. dba_users this view lists information about all users of the database.
default tablespace for data, default tablespace for temporary table see
(a58242.pdf) pg. 267 (2-97).
ad d: false. v$session this view lists session information for each current
session. see (a58242.pdf) pg. 417 (3-77).
19. you want to limit the number of transactions that can simultaneously make
changes to data in a block, and increase the frequency with which oracle returns a
block back on the free list.
answer: d
explanation:
http://perun.si.umich.edu/~radev/654/resources/oracledefs.html
pctfree
specifies the percentage of space in each of the table's data blocks reserved for
future updates to the table's rows. the value of pctfree must be a positive
integer from 1 to 99. a value of 0 allows the entire block to be filled by
inserts of new rows. the default value is 10. this value reserves 10% of each
block for updates to existing rows and allows inserts of new rows to fill a
maximum of 90% of each block. pctfree has the same function in the commands that
create and alter clusters, indexes, snapshots, and snapshot logs. the combination
of pctfree and pctused determines whether inserted rows will go into existing data
blocks or into new blocks.
pctused
specifies the minimum percentage of used space that oracle maintains for each data
block of the table. a block becomes a candidate for row insertion when its used
space falls below pctused. pctused is specified as a positive integer from 1 to
99 and defaults to 40. pctused has the same function in the commands that create
and alter clusters, snapshots, and snapshot logs. the sum of pctfree and pctused
must be less than 100. you can use pctfree and pctused together to use space within
a table more efficiently.
initrans
specifies the initial number of transaction entries allocated within each data
block allocated to the table. this value can range from 1 to 255 and defaults to
1. in general, you should not change the initrans value from its default. each
transaction that updates a block requires a transaction entry in the block. the
size of a transaction entry depends on your operating system. this parameter
ensures that a minimum number of concurrent transactions can update the block and
helps avoid the overhead of dynamically allocating a transaction entry. the
initrans parameter serves the same purpose in clusters, indexes, snapshots, and
snapshot logs as in tables. the minimum and default initrans value for a cluster
or index is 2, rather than 1.
maxtrans
specifies the maximum number of concurrent transactions that can update a data
block allocated to the table. this limit does not apply to queries. this value
can range from 1 to 255 and the default is a function of the data block size. you
should not change the maxtrans value from its default. if the number of concurrent
transactions updating a block exceeds the initrans value, oracle dynamically
allocates transaction entries in the block until either the maxtrans value is
exceeded or the block has no more free space. the maxtrans parameter serves the
same purpose in clusters, snapshots, and snapshot logs as in tables.
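a hypothetical table definition combining these parameters might look like this (all names and values are illustrative, not from the question):

```sql
-- pctused 60 returns a block to the free list sooner;
-- initrans/maxtrans bound the concurrent transactions per block
create table orders_hist (
    order_id   number,
    order_data varchar2(100)
)
pctfree  20
pctused  60
initrans 4
maxtrans 16;
```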
20. which steps should you take to gather information about checkpoints?
answer: a
explanation:
testking said b.
http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96536/ch1103.htm#1019186
log_checkpoints_to_alert lets you log your checkpoints to the alert file. doing so
is useful for determining whether checkpoints are occurring at the desired
frequency.
fast_start_mttr_target: lets you specify in seconds the expected mean time to
recover (mttr), which is the expected amount of time oracle takes to perform
recovery and startup the instance.
log_checkpoint_timeout: limits the number of seconds between the most recent redo
record and the checkpoint.
log_checkpoint_interval: limits the number of redo blocks generated between the
most recent redo record and the checkpoint.
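a sketch of the corresponding init.ora settings (values are illustrative only):

```
# record each checkpoint in the alert file
log_checkpoints_to_alert = true

# aim for roughly five minutes of crash recovery time
fast_start_mttr_target = 300
```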
21. you decided to use oracle managed files (omf) for the control files in your
database. which initialization parameter do you need to set to specify the default
location for control files if you want to multiplex the files in different
directories?
a. db_files
b. db_create_file_dest
c. db_file_name_convert
d. db_create_online_log_dest_n
answer: d
explanation:
http://www.orafaq.net/parms/
http://www.orafaq.net/archive/oracle-l/2002/07/08/102823.htm:
db_file_name_convert converts the db file name:
db_file_name_convert=('/vobs/oracle/dbs','/fs2/oracle/stdby')
http://www.oracle-base.com/articles/9i/oraclemanagedfiles.asp:
managing redo log files using omf
when using omf for redo logs the db_create_online_log_dest_n parameters in the
init.ora file decide on the locations and numbers of logfile members. for example:
db_create_online_log_dest_1 = c:\oracle\oradata\tsh1
db_create_online_log_dest_2 = d:\oracle\oradata\tsh1
22. which command can you use to display the date and time
in the form 17:45:01 jul-12-2000 using the default us7ascii character set?
answer: c
explanation:
http://www.idera.com/support/documentation/oracle_date_format.htm
alter session set nls_date_format = <date_format>
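for the target display 17:45:01 jul-12-2000, the format mask would presumably be along these lines:

```sql
alter session set nls_date_format = 'hh24:mi:ss mon-dd-yyyy';

-- sysdate now displays in the requested form
select sysdate from dual;
```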
23. which initialization parameter determines the location of the alert log file?
a. user_dump_dest
b. db_create_file_dest
c. background_dump_dest
d. db_create_online_log_dest_n
answer: c
http://www.experts-exchange.com/databases/oracle/q_20308350.html
location:
all trace files for background processes and the alert log are written to the
destination specified by the initialization parameter background_dump_dest. all
trace files for server processes are written to the destination specified by the
initialization parameter user_dump_dest. the names of trace files are operating
system specific, but usually include the name of the process writing the file
(such as lgwr and reco).
24. which two environment variables should be set before creating a database?
(choose two.)
a. db_name
b. oracle_sid
c. oracle_home
d. service_name
e. instance_name
answer: b, c
explanation:
note: this question deals with environment variables, not initialization parameters.
instance_name
represents the name of the instance and is used to uniquely identify a specific
instance when clusters share common services names. the instance name is
identified by the instance_name parameter in the instance initialization file,
initsid.ora. the instance name is the same as the oracle system identifier (sid).
oracle_home
corresponds to the environment in which oracle products run. this environment
includes location of installed product files, path variable pointing to products'
binary files, registry entries, net service name, and program groups.
if you install an ofa-compliant database, using oracle universal installer
defaults, oracle home (known as \oracle_home in this guide) is located beneath
x:\oracle_base. it contains subdirectories for oracle software executables and
network files.
oracle corporation recommends that you never set the oracle_home environment
variable, because it is not required for oracle products to function properly. if
you set the oracle_home environment variable, then oracle universal installer will
unset it for you.
service_name
a logical representation of a database. this is the way a database is presented to
clients. a database can be presented as multiple services and a service can be
implemented as multiple database instances. the service name is a string that
includes:
(a) the global database name
(b) a name comprised of the database name (db_name) and the domain name (db_domain)
the service name is entered during installation or database creation.
if you are not sure what the global database name is, you can obtain it from the
combined values of the service_names parameter in the common database
initialization file, initdbname.ora.
a. log_checkpoint_target
b. fast_start_mttr_target
c. log_checkpoint_io_target
d. fast_start_checkpoint_target
answer: b
explanation:
ad a: false. there is no log_checkpoint_target parameter in oracle.
ad b: true. fast_start_mttr_target parameter determines the number of buffers
being written by dbwn. parameter fast_start_mttr_target has been introduced in
oracle9i and it replaces fast_start_io_target and log_checkpoint_interval in
oracle8i, although the old parameters can still be set if required in oracle9i.
fast_start_mttr_target enables you to specify the number of seconds the database
takes to perform crash recovery of a single instance.
ad c: false. there is no log_checkpoint_io_target parameter in oracle.
ad d: false. there is no fast_start_checkpoint_target parameter in oracle.
26. the orders table has a constant transaction load 24 hours a day, so down time
is not allowed. the indexes become fragmented. which statement is true?
answer: c
explanation:
http://www.dbatoolbox.com/wp2001/spacemgmt/reorg_defrag_in_o8i_fo.pdf
oracle8i can create an index online; users can continue to update and query the
base table while the index is being created. no table or row locks are held during
the creation operation. changes to the base table and index during the build are
recorded in a journal table and merged into the new index at the completion of the
operation, as illustrated in figure 1. these online operations also support
parallel index creation and can act on some or all of the partitions of a
partitioned index. online index creation improves database availability by
providing users full access to data in the base table during an index build.
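in practice this also applies to rebuilding a fragmented index without down time; a sketch (the index name is a placeholder):

```sql
-- users can keep querying and updating the base table while
-- the index is rebuilt
alter index orders_pk_idx rebuild online;
```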
27. you set the value of the os_authent_prefix initialization parameter to ops$
and created a user account by issuing this sql statement:
answer: a, e
explanation:
with external authentication, your database relies on the underlying operating
system or network authentication service to restrict access to database accounts.
a database password is not used for this type of login. if your operating system
or network service permits, you can have it authenticate users. if you do so, set
the parameter os_authent_prefix, and use this prefix in oracle usernames. this
parameter defines a prefix that oracle adds to the beginning of every user's
operating system account name. oracle compares the prefixed username with the
oracle usernames in the database when a user attempts to connect. if a user with
an operating system account named "tsmith" is to connect to an oracle database and
be authenticated by the operating system, oracle checks that there is a
corresponding database user "ops$tsmith" and, if so, allows the user to connect.
see: (a58397.pdf) pg. 377. (20-9)
ad a: true. profile reassigns the profile named to the user. the profile limits
the amount of database resources the user can use. if you omit this clause, oracle
assigns the default profile to the user. see (a58225.pdf) pg. 541. (4-357).
ad b: when you choose external authentication for a user, the user account is
maintained by oracle, but password administration and user authentication is
performed by an external service. this external service can be the operating
system or a network service, such as oracle net.
ad c: false.
ad d: false.
ad e: when you choose external authentication for a user, the user account is
maintained by oracle, but password administration and user authentication is
performed by an external service. this external service can be the operating
system or a network service, such as oracle net.
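a sketch of the os-authenticated account described above (tablespace names are assumed):

```sql
-- with os_authent_prefix = ops$, the os user tsmith maps to this
-- account and connects without a database password
create user ops$tsmith identified externally
    default tablespace users
    temporary tablespace temp;

grant create session to ops$tsmith;
```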
a. index
b. table
c. temporary
d. boot strap
answer: a
explanation:
http://vsbabu.org/oracle/sect16.html
29. which three are the physical structures that constitute the oracle database?
(choose three)
a. table
b. extent
c. segment
d. data file
e. log file
f. tablespace
g. control file
answer: d, e, g
explanation:
http://www.adp-gmbh.ch/ora/notes.html
control files
an oracle database must have at least one control file, but usually (for backup
and recovery http://www.adp-gmbh.ch/ora/concepts/backup_recovery/index.html
reasons) it has more than one (all of which are exact copies of one control file).
the control file contains important information that the instance
needs to operate the database. the following pieces of information are held in a
control file: the names (os paths) of all datafiles that the database consists of,
the name of the database, the timestamp of when the database was created, the
checkpoint (all database changes prior to that checkpoint are saved in the
datafiles) and information for rman.
when a database is mounted, its control file is used to find the datafiles and
redo log files for that database. because the control file is so important, it is
imperative to back up the control file whenever a structural change is made in
the database.
redo log
whenever something is changed on a datafile, oracle records it in a redo log. the
name redo log indicates its purpose: when the database crashes, oracle can redo
all changes on datafiles which will take the database data back to the state it
was when the last redo record was written. use v$log, v$logfile, v$log_history
and v$thread (see http://www.adp-gmbh.ch/ora/misc/dynamic_performance_views.html)
to find information about the redo log of your database.
each redo log file belongs to exactly one group (of which at least two must exist).
exactly one of these groups is the current group (it can be queried using the column
status of v$log). oracle uses that current group to write the redo log entries.
when the group is full, a log switch occurs, making another group the current one.
each log switch causes a checkpoint; however, the converse is not true: a
checkpoint does not cause a redo log switch.
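the current group can be checked with a short query against the standard v$log view:

```sql
-- show each redo log group; exactly one group has status 'CURRENT'
select group#, sequence#, members, status
from v$log;

-- force a log switch manually (requires the alter system privilege)
alter system switch logfile;
```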
per the website above: files are the structures usually described as physical, and
these three are the basic ones -> d, e, g
30. which three statements about the oracle database storage structure are true?
(choose three)
answer: a, c, e
explanation:
a is ok, see q29.
b is false (oracle7 documentation, server concepts, 4-10): a tablespace in an
oracle database consists of one or more physical datafiles. a datafile can be
associated with only one tablespace, and only one database.
c is ok (oracle7 documentation, server concepts, 3-10): an extent is a logical
unit of database storage space allocation made up of a number of contiguous data
blocks. each segment is composed of one or more extents.
d is false (oracle7 documentation, server concepts, 3-3): oracle allocates space
for segments in extents. therefore, when the existing extents of a segment are
full, oracle allocates another extent for that segment. because extents are
allocated as needed, the extents of a segment may or may not be contiguous on
disk. segments can also span files, but the individual extents cannot.
e is ok (oracle7 documentation, server concepts, 4-3): each tablespace in an
oracle database is comprised of one or more operating system files called
datafiles. a tablespace's datafiles physically store the associated database data
on disk.
f is false - see ans for d
a. extent
b. segment
c. oracle block
d. operating system block
answer: a
explanation:
the extent_management_clause lets you specify how the extents of the tablespace
will be managed.
(a) specify local if you want the tablespace to be locally managed. locally
managed tablespaces have some part of the tablespace set aside for a bitmap. this
is the default.
(b) autoallocate specifies that the tablespace is system managed. users cannot
specify an extent size. this is the default if the compatible initialization
parameter is set to 9.0.0 or higher.
(c) uniform specifies that the tablespace is managed with uniform extents of size
bytes. use k or m to specify the extent size in kilobytes or megabytes. the
default size is 1 megabyte.
note: once you have specified extent management with this clause, you can change
extent management only by migrating the tablespace.
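as a sketch of the clause, a locally managed tablespace could be created either way (tablespace and file names here are made up for illustration):

```sql
-- locally managed, every extent exactly 1 mb (uniform)
create tablespace app_data
  datafile '/u01/oradata/prod/app_data01.dbf' size 100m
  extent management local uniform size 1m;

-- locally managed, extent sizes chosen by the system (autoallocate)
create tablespace app_idx
  datafile '/u01/oradata/prod/app_idx01.dbf' size 100m
  extent management local autoallocate;
```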
32. which is a complete list of the logical components of the oracle database?
answer: b
see q29
33. which option lists the correct hierarchy of storage structures, from largest
to the smallest?
answer: d
explanation:
logical database structures: the logical structures of an oracle database include
schema objects, data blocks, extents, segments, and tablespaces.
oracle data blocks: at the finest level of granularity, oracle database data is
stored in data blocks. one data block corresponds to a specific number of bytes of
physical database space on disk.
extents: the next level of logical database space is an extent. an extent is a
specific number of contiguous data blocks, obtained in a single allocation, used
to store a specific type of information.
segments: above extents, the level of logical database storage is a segment. a
segment is a set of extents allocated for a certain logical structure. the
following table describes the different types of segments.
tablespaces: a database is divided into logical storage units called tablespaces,
which group related logical structures together.
a. segments
b. database blocks
c. tablespaces
d. operating system blocks
answer: b
explanation:
an extent is a specific number of contiguous data blocks, obtained in a single
allocation, and used to store a specific type of information.
35. which two statements about segments are true? (choose two.)
answer: b, c
explanation:
a single data segment in an oracle database holds all of the data for one of the
following:
(a) a table that is not partitioned or clustered.
(b) a partition of a partitioned table.
(c) a cluster of tables.
a table or materialized view can contain lob, varray, or nested table column
types. these entities can be stored in their own segments.
ad a: false. each table in a cluster does not have its own segment. in a cluster,
two or more tables share common blocks inside a single segment. clusters
enable you to store data from several tables inside a single segment so users can
retrieve data from those tables together very quickly.
ad d: false. for each index, oracle allocates one or more extents to form its
index segment.
ad e: false. oracle creates this data segment when you create the nonclustered
table or cluster with the create command.
ad f: false. a nested table of a column within a table does not use the parent
table segment: it has its own.
oracle databases use four types of segments:
(a) data segments
(b) index segments
(c) temporary segments
(d) rollback segments
see: (a58227.pdf) pg. 107. (2-15)
36. which type of table is usually created to enable the building of scalable
applications, and is useful for large tables that can be queried or manipulated
using several processes concurrently?
a. regular table
b. clustered table
c. partitioned table
d. index-organized table
answer: c
what is scalability?
in the case of web applications, scalability is the capacity to serve additional
users or transactions without fundamentally altering the application's
architecture or program design. if an application is scalable, you can maintain
steady performance as the load increases simply by adding additional resources
such as servers, processors or memory.
cluster
a cluster is an oracle object that allows one to store related rows from different
tables in the same data block. table clustering is very seldom used by oracle dbas
and developers. (glossary: http://infoboerse.doag.de/mirror/frank/glossary/faqgloso.htm)
answer: a
explanation:
sql 18-47
set role
purpose: use the set role statement to enable and disable roles for your current
session.
in the identified by password clause, specify the password for a role. if the role
has a password, then you must specify the password to enable the role.
d is out of the question due to bad syntax. i would go for a: if the role does not
have a password, this command is ok.
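a short sketch of set role usage (the role name and password are taken from question 51's scenario later in this document being typical; here they are purely illustrative):

```sql
-- enable a password-protected role for the current session
set role payclerk identified by salary;

-- enable all granted roles except the listed ones
set role all except payclerk;

-- disable all roles for the session
set role none;
```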
38. your database is currently configured with the database character set to
we8iso8859p1 and national character set to af16utf16.
business requirements dictate the need to expand language requirements beyond the
current character set, for asian and additional western european languages, in the
form of customer names and addresses.
which solution saves space storing asian characters and maintains consistent
character manipulation performance?
a. use sql char data types and change the database character set to utf8.
b. use sql nchar data types and change the national character set to utf8.
c. use sql char data types and change the database character set to af32utf8.
d. use sql nchar data types and keep the national character set to af16utf16.
answer: d
explanation:
sql nchar
supporting multilingual data often means using unicode. unicode is a universal
character encoding scheme that allows you to store information from any major
language using a single character set. unicode provides a unique code value for
every character, regardless of the platform, program, or language. for many
companies with legacy systems making the commitment to migrating their entire
database to support unicode is not practical. an alternative to storing all data
in the database as unicode is to use the sql nchar datatypes. unicode characters
can be stored in columns of these datatypes regardless of the setting of the
database character set. the nchar datatype has been redefined in oracle9i to be a
unicode datatype exclusively. in other words, it stores data in the unicode
encoding only. the national character set supports utf-16 and utf-8 in the
following encodings:
(a) al16utf16 (default)
(b) utf8
sql nchar datatypes (nchar, nvarchar2, and nclob) can be used in the same way as
the sql char datatypes. this allows the inclusion of unicode data in a
non-unicode database. some of the key benefits for using the nchar datatype versus
having the entire database as unicode include:
you only need to support multilingual data in a limited number of columns - you
can add columns of the sql nchar datatypes to existing tables or new tables to
support multiple languages incrementally. or you can migrate specific columns from
sql char datatypes to sql nchar datatypes easily using the alter table modify
column command.
example: alter table emp modify (ename nvarchar2(10));
if you are building a packaged application that will be sold to customers, then you
may want to build the application using sql nchar datatypes - this is because
with the sql nchar datatype the data is always stored in unicode, and the length
of the data is always specified in utf-16 code units. as a result, you need only
test the application once, and your application will run on your customer
databases regardless of the database character set.
you want the best possible performance - if your existing database character set
is single-byte, then extending it with sql nchar datatypes may offer better
performance than migrating the entire database to unicode.
your application's native environment is ucs-2 or utf-16 - a unicode database must
run as utf-8. this means there will be conversion between the client and database.
by using the nchar encoding al16utf16, you can eliminate this conversion.
39. you have just accepted the position of dba with a new company. one of the
first things you want to do is examine the performance of the database. which tool
will help you to do this?
a. recovery manager
b. oracle enterprise manager
c. oracle universal installer
d. oracle database configuration assistant
answer: b
explanation:
http://www.orafaq.com/faqoem.htm
what is oem (oracle enterprise manager)?
oem is a set of system management tools provided by oracle for managing the oracle
environment. it provides tools to automate tasks (both one-time and repetitive in
nature) to take database administration a step closer to "lights out" management.
what are the components of oem?
oracle enterprise manager (oem) has the following components:
management server (oms): middle tier server that handles communication with the
intelligent agents. the oem console connects to the management server to monitor
and configure the oracle enterprise.
console: this is a graphical interface from where one can schedule jobs, events,
and monitor the database. the console can be opened from a windows workstation,
unix xterm (oemapp command) or web browser session (oem_webstage).
intelligent agent (oia): the oia runs on the target database and takes care of the
execution of jobs and events scheduled through the console.
data gatherer (dg): the dg runs on the target database and takes care of
gathering database statistics over time.
40. you have a database with the db_name set to prod and oracle_sid set to prod.
these files are in the default location for the initialization files:
- init.ora
- initprod.ora
- spfile.ora
- spfileprod.ora<br />
which initialization files does the oracle server attempt to read, and in which
order?
answer: c
explanation:
http://www.trivadis.ch/publikationen/e/spfile_and_initora.en.pdf http://www.adp-
gmbh.ch/ora/notes.html
up to version 8i, oracle traditionally stored initialization parameters in a text
file init.ora (pfile). with oracle9i, server parameter files (spfile) can also be
used. an spfile can be regarded as a repository for initialization parameters
which is located on the database server. spfiles are small binary files that
cannot be edited manually: editing an spfile corrupts the file, and either the
instance fails to start or an active instance may crash.
at startup, the server looks for the initialization files in this order:
spfile<sid>.ora first (here spfileprod.ora), then spfile.ora, then init<sid>.ora
(here initprod.ora).
41. you are in the planning stages of creating a database. how should you plan to
influence the size of the control file?
answer: c
explanation:
control_files
is a string -> name of the files -> does not influence the size
sql 13-15
create controlfile
use the create controlfile statement to re-create a control file in one of the
following cases:
(a) all copies of your existing control files have been lost through media
failure.
(b) you want to change the name of the database.
(c) you want to change the maximum number of redo log file groups, redo log file
members, archived redo log files, datafiles, or instances that can concurrently
have the database mounted and open.
http://coffee.kennesaw.edu/tests/oracle/ch3.doc:
create database
question 19. which clauses in the create database command specify limits for the
database?
the control file size depends on the following limits (maxlogfiles, maxlogmembers,
maxloghistory, maxdatafiles, maxinstances), because oracle pre-allocates space in
the control file.
maxlogfiles: specifies the maximum number of redo log groups that can ever be
created in the database.
maxlogmembers: specifies the maximum number of redo log members (copies of the
redo logs) for each redo log group.
maxloghistory: is used only with parallel server configuration. it specifies the
maximum number of archived redo log files for automatic media recovery.
maxdatafiles: specifies the maximum number of data files that can be created in
this database. data files are created when you create a tablespace, or add more
space to a tablespace by adding a data file.
maxinstances: specifies the maximum number of instances that can simultaneously
mount and open this database.
if you want to change any of these limits after the database is created, you must
re-create the control file.
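a sketch of how these limits are fixed at creation time (names, sizes and values are illustrative only; the limits cannot be raised later without re-creating the control file):

```sql
create database prod
  maxlogfiles   16
  maxlogmembers 4
  maxloghistory 100
  maxdatafiles  254
  maxinstances  1
  datafile '/u01/oradata/prod/system01.dbf' size 325m
  logfile group 1 ('/u01/oradata/prod/redo01.log') size 100m,
          group 2 ('/u01/oradata/prod/redo02.log') size 100m;
```

oracle pre-allocates control-file space for each of these maximums, which is why they determine the control file's size.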
answer: b
explanation:
http://www.dbaoncall.net/references/ht_startup_shutdown_db.html
1. no two rows of a table can have duplicate values in the specified column.
2. a column cannot contain null values.<br />
which type of constraint ensures that both of the above rules are true?
a. check
b. unique
c. not null
d. primary key
e. foreign key
answer: d
no comment
44. your company hired joe, a dba who will be working from home. joe needs to have
the ability to start the database remotely.
you created a password file for your database and set remote_login_passwordfile =
exclusive in the parameter file. which command adds joe to the password file,
allowing him remote dba access?
answer: b
explanation:
oracle9i database administrator's guide release 2 (9.2) march 2002 part no.
a96521-01 (a96521.pdf) 1-20.
using orapwd
when you invoke the password file creation utility without supplying any
parameters, you receive a message indicating the proper use of the command as
shown in the following sample output:
orapwd
usage: orapwd file=<fname> password=<password> entries=<users>
where
file - name of password file (mand).
password - password for sys (mand).
entries - maximum number of distinct dbas and opers (opt).
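assuming the password file already exists and remote_login_passwordfile = exclusive is set, joe is added to the password file simply by granting him the sysdba privilege; the grant is recorded in the password file:

```sql
grant sysdba to joe;

-- verify which users are in the password file
select * from v$pwfile_users;
```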
45. you need to drop two columns from a table. which sequence of sql statements
should be used to drop the columns and limit the number of times the rows are
updated?<br />
a. alter table employees drop column comments drop column email;<br />
b. alter table employees drop column comments; <br />
alter table employees drop column email; <br />
c. alter table employees set unused column comments; <br />
alter table employees drop unused columns;<br />
alter table employees set unused column email; <br />
alter table employees drop unused columns;<br />
d. alter table employees set unused column comments; <br />
alter table employees set unused column email;<br />
alter table employees drop unused columns;<br />
answer: d
explanation:
http://certcities.com/certs/oracle/columns/story.asp?editorialsid=36:
reorganizing columns
while it has been possible to add new columns to an existing table in oracle for
quite a while now, until oracle 8i it was not possible to drop or remove a column
from a table without dropping the table first and then re-creating it without the
column you wanted to drop. with this method, you needed to perform an export
before dropping the table and then an import after creating it without the column,
or issue a create table ... as select statement with all of its associated
headaches (see above).
in oracle 8i, we now have a way of marking columns unused and then dropping them
at a later date. oracle is a little behind the times here compared to sql server,
which does not require a complete rebuild of the table after dropping the column,
but i'm just happy that i have the feature and hope that they'll improve it in
oracle 9i.
to get rid of columns with this new method, the first step is to issue the alter
table <tablename> set unused column <columnname>, which sets the column to
no longer be used within the table but does not change the physical structure of
the table. all rows physically have the column's data stored, and a physical place
is kept for the column on disk, but the column cannot be queried and, for all
intents and purposes, does not exist. in essence, the column is flagged to be
dropped, though you cannot reverse setting the column to unused.
it is possible to set a number of columns unused in a table before actually
dropping them. the overhead of setting columns unused is fairly minimal and allows
you to continue to operate normally, except that any actions on the unused columns
will result in an error. the next step, when you have configured all the columns
you want to get rid of as unused, is to actually physically reorganize the table
so that the data for the unused columns is no longer on disk and the columns are
really gone. this is done by issuing the command alter table ... drop column.
physically dropping a column in an oracle table is a process that will prevent
anyone from accessing the table while the removal of the column(s) is processed.
the commands that will affect an actual removal of a column are:
alter table <tablename> drop column <columnname>
alter table <tablename> drop unused columns
both commands always do the same thing. this means that if you mark two or
three columns as unused in a table and then decide you want to drop one of them
using the alter table ... drop column command, you will drop all columns marked as
unused whether you want to or not. the alter table ... drop column command can also
be used when a column has not previously been marked as unused and you simply want
to drop it right away, but it too will drop any unused columns, because that is
the way it works.
if constraints depend on the column being dropped, you can use the cascade
constraints option to deal with them; if you also want to explicitly mark views,
triggers, stored procedures or other stored program units referencing the parent
table and force them to be recompiled the next time they are used, you can also
specify the invalidate option.
a problem could arise if you issue the drop column command and the instance
crashes during the rebuild of the table. in this case, the table will be marked as
invalid and will not be available to anyone. oracle forces you to complete the
drop column operation before the table can be used again. to get out of this
situation, issue the command alter table ... drop columns continue. this will
complete the process and mark the table as valid upon completion.
http://whizlabs.com/ocp/ocp-1z0-007-tips.html
tip 34: oracle allows columns to be dropped with the 'alter table drop columns'
command. dropping of columns generally takes a lot of time, so an alternative
(faster) option would be to mark the column as unused with the 'set unused column'
clause and later drop the unused column.
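the answer d sequence, sketched against the employees table from the question:

```sql
-- mark both columns unused (fast, metadata-only operations)
alter table employees set unused column comments;
alter table employees set unused column email;

-- one physical pass over the table then removes both at once
alter table employees drop unused columns;
```

because both columns are removed in a single drop unused columns pass, each row is rewritten only once, which is exactly what the question asks for.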
a. perform i/o
b. lock rows that are not data dictionary rows
c. monitor other oracle processes
d. connect users to the oracle instance
e. execute sql statements issued through an application
answer: a, c
explanation:
oracle9i database administrator's guide release 2 (9.2) march 2002 part no.
a96521-01 (a96521.pdf) 5-11.
to maximize performance and accommodate many users, a multiprocess oracle system
uses some additional processes called background processes. background processes
consolidate functions that would otherwise be handled by multiple oracle programs
running for each user process. background processes asynchronously perform i/o and
monitor other oracle processes to provide increased parallelism for better
performance and reliability.
47. you omit the undo tablespace clause in your create database statement. the
undo_management parameter is set to auto.
answer: c
explanation:
http://www.oracle-
base.com/articles/9i/automaticundomanagement.asp#enablingautomaticundomanagement
which two columns are required from dba_tables to determine the size of the extent
when it extends? (choose two)
a. blocks
b. pct_free
c. next_extent
d. pct_increase
e. initial_extent
answer: c, d
explanation:
the size parameter of the allocate extent clause is the extent size in bytes,
rounded up to a multiple of the block size. if you do not specify size, then
oracle calculates the extent size according to the values of the next and
pctincrease storage parameters.
oracle does not use the value of size as a basis for calculating subsequent extent
allocations, which are determined by the values set for the next and pctincrease
parameters.
49. bob is an administrator who has full dba privileges. when he attempts to drop
the default profile as shown below, he receives the error message shown. which
option best explains this error?<br /><br />
sql> drop profile sys.default;<br />
drop profile sys.default<br />
*<br />
error at line 1:<br />
ora-00950: invalid drop option<br />
answer: a
explanation:
sql 16-94
restriction on dropping profiles: you cannot drop the default profile.
50. you are in the process of dropping the building_location column from the
hr.employees table. the table has been marked invalid until the operation
completes. suddenly the instance fails. upon startup, the table remains invalid.
which step(s) should you follow to complete the operation?
answer: a
testking said d.
explanation:
drop unused columns clause: specify drop unused columns to remove from the table
all columns currently marked as unused. use this statement when you want to
reclaim the extra disk space from unused columns in the table. if the table
contains no unused columns, then the statement returns with no errors.
column specify one or more columns to be set as unused or dropped. use the column
keyword only if you are specifying only one column. if you specify a column list,
then it cannot contain duplicates.
cascade constraints: specify cascade constraints if you want to drop all foreign
key constraints that refer to the primary and unique keys defined on the dropped
columns, and drop all multicolumn constraints defined on the dropped columns. if
any constraint is referenced by columns from other tables or remaining columns in
the target table, then you must specify cascade constraints. otherwise, the
statement aborts and an error is returned. (see alter table.)
invalidate: the invalidate keyword is optional. oracle automatically invalidates
all dependent objects, such as views, triggers, and stored program units. object
invalidation is a recursive process. therefore, all directly dependent and
indirectly dependent objects are invalidated. however, only local dependencies are
invalidated, because oracle manages remote dependencies differently from local
dependencies. an object invalidated by this statement is automatically revalidated
when next referenced. you must then correct any errors that exist in that object
before referencing it.
checkpoint: specify checkpoint if you want oracle to apply a checkpoint for the
drop column operation after processing integer rows; integer is optional and must
be greater than zero. if integer is greater than the number of rows in the table,
then oracle applies a checkpoint after all the rows have been processed. if you do
not specify integer, then oracle sets the default of 512. checkpointing cuts down
the amount of undo logs accumulated during the drop column operation to avoid
running out of rollback segment space. however, if this statement is interrupted
after a checkpoint has been applied, then the table remains in an unusable state.
while the table is unusable, the only operations allowed on it are drop table,
truncate table, and alter table drop columns continue (described in sections that
follow). you cannot use this clause with set unused, because that clause does not
remove column data.
drop columns continue clause: specify drop columns continue to continue the drop
column operation from the point at which it was interrupted. submitting this
statement while the table is in a valid state results in an error. see
http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96540/statements_32a.htm#2103766.
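for the crash scenario in this question, the recovery step is a single statement (table name taken from the question):

```sql
-- finish the interrupted drop column operation;
-- the table is marked valid again on completion
alter table hr.employees drop columns continue;
```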
51. as sysdba you created the payclerk role and granted the role to bob. bob in
turn attempts to modify the authentication method of the payclerk role from salary
to not identified, but when doing so he receives the insufficient privilege error
shown below.<br />
sql> connect bob/crusader<br />
connected.<br />
<br />
sql> alter role payclerk not identified;<br />
alter role payclerk not identified<br />
*<br />
error at line 1:<br />
ora-01031: insufficient privileges<br />
which privilege does bob require to modify the authentication method of the
payclerk role?
answer: a
wrong: b, c, d - manage any role, update any role, modify any role don't exist.
52. you are going to re-create your database and want to reuse all of your
existing database files.
you issue the following sql statement:
answer: b
explanation:
ad b: the initial control files of an oracle database are created when you issue
the create database statement. the names of the control files are specified by the
control_files parameter in the initialization parameter file used during database
creation. the filenames specified in control_files should be fully specified and
are operating system specific. if control files with the specified names
currently exist at the time of database creation, you must specify the controlfile
reuse clause in the create database statement, or else an error occurs.
ad a: maxlogfiles min and max values are operating system dependent. but i think
the min value is 1.
ad c: you can reuse an online redo log file.
ad d: datafile clause specify one or more files to be used as datafiles. all these
files become part of the system tablespace.
which three statements correctly describe what user oe can or cannot do? (choose
three.)
answer: b, c, e
explanation:
granting multiple object privileges on individual columns: example to grant to
user oe the references privilege on the employee_id column and the update
privilege on the employee_id, salary, and commission_pct columns of the employees
table in the schema hr, issue the following statement:
grant references (employee_id),
update (employee_id, salary, commission_pct)
on hr.employees
to oe;
the constraint in_emp ensures that all dependents in the dependent table
correspond to an employee in the employees table in the schema hr.
a. checkpoint occurs.
b. a fast commit occurs.
c. reco performs the session recovery.
d. pmon rolls back the user's current transaction.
e. smon rolls back the user's current transaction.
f. smon frees the system resources reserved for the user session.
g. pmon releases the table and row locks held by the user session.
answer: d, g
explanation:
smon: the system monitor performs crash recovery when a failed instance starts up
again. in a cluster database (oracle9i real application clusters), the smon
process of one instance can perform instance recovery for other instances that
have failed. smon also cleans up temporary segments that are no longer in use and
recovers dead transactions skipped during crash and instance recovery because of
file-read or offline errors. these transactions are eventually recovered by smon
when the tablespace or file is brought back online.
pmon: the process monitor performs process recovery when a user process fails.
pmon is responsible for cleaning up the cache and freeing resources that the
process was using. pmon also checks on the dispatcher processes (see below) and
server processes and restarts them if they have failed.
the process monitor process (pmon) cleans up failed user processes and frees up
all the resources used by the failed process. it resets the status of the active
transaction table and removes the process id from the list of active processes.
it reclaims all resources held by the user and releases all locks on tables and
rows held by the user. pmon wakes up periodically to check whether it is needed.
reco: the recoverer process is used to resolve distributed transactions that are
pending due to a network or system failure in a distributed database. at timed
intervals, the local reco attempts to connect to remote databases and
automatically complete the commit or rollback of the local portion of any pending
distributed transactions.
checkpoint (ckpt): at specific times, all modified database buffers in the sga are
written to the datafiles by dbwn. this event is called a checkpoint. the
checkpoint process is responsible for signaling dbwn at checkpoints and updating
all the datafiles and control files of the database to indicate the most recent
checkpoint.
answer: d
explanation:
pctincrease
specify the percent by which the third and subsequent extents grow over the
preceding extent. the default value is 50, meaning that each subsequent extent is
50% larger than the preceding extent. the minimum value is 0, meaning all extents
after the first are the same size. the maximum value depends on your operating
system.
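the growth pattern described above can be sketched in a storage clause (the table
and tablespace names here are made up for illustration):

```sql
-- hypothetical example: with pctincrease 50, extents are allocated as
-- 100k, 100k, 150k, 225k, ... (the third and subsequent extents each grow
-- 50% over the preceding one; actual sizes are rounded to block multiples)
create table demo_growth (id number)
  tablespace users
  storage (initial 100k next 100k pctincrease 50);
```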
size
specify the size of the file in bytes. use k or m to specify the size in kilobytes
or megabytes. there is no minimum size for a datafile; the maximum size is
operating system dependent. if you omit this clause when creating an
oracle-managed file, then oracle creates a 100m file.
default storage_clause
specify the default storage parameters for all objects created in the tablespace.
for a dictionary-managed temporary tablespace, oracle considers only the next
parameter of the storage_clause. restriction on default storage: you cannot
specify this clause for a locally managed tablespace.
segment_management_clause
the segment_management_clause is relevant only for permanent, locally managed
tablespaces. it lets you specify whether oracle should track the used and free
space in the segments in the tablespace using free lists or bitmaps.
initial
specify in bytes the size of the object's first extent. oracle allocates space for
this extent when you create the schema object. use k or m to specify this size in
kilobytes or megabytes.
the default value is the size of 5 data blocks. in tablespaces with manual
segment space management, the minimum value is the size of 2 data blocks plus one
data block for each free list group you specify. in tablespaces with automatic
segment space management, the minimum value is 5 data blocks. the maximum value
depends on your operating system.
in dictionary-managed tablespaces, if minimum extent was specified for the
tablespace when it was created, then oracle rounds the value of initial up to the
specified minimum extent size if necessary. if minimum extent was not specified,
then oracle rounds the initial extent size for segments created in that tablespace
up to the minimum value (see preceding paragraph), or to multiples of 5 blocks if
the requested size is greater than 5 blocks.
in locally managed tablespaces, oracle uses the value of initial in conjunction
with the size of extents specified for the tablespace to determine the object's
first extent. for example, in a uniform locally managed tablespace with 1m
extents, if you specify an initial value of 5m, then oracle creates five 1m
extents.
restriction on initial: you cannot specify initial in an alter statement.
next
specify in bytes the size of the next extent to be allocated to the object. use k
or m to specify the size in kilobytes or megabytes. the default value is the size
of 5 data blocks. the minimum value is the size of 1 data block. the maximum value
depends on your operating system. oracle rounds values up to the next multiple of
the data block size for values less than 5 data blocks. for values greater than 5
data blocks, oracle rounds up to a value that minimizes fragmentation, as
described in oracle9i database administrator's guide.
if you change the value of the next parameter (that is, if you specify it in an
alter statement), then the next allocated extent will have the specified size,
regardless of the size of the most recently allocated extent and the value of the
pctincrease parameter.
temporary
specify temporary if the tablespace will be used only to hold temporary objects,
for example, segments used by implicit sorts to handle order by clauses.
temporary tablespaces created with this clause are always dictionary managed, so
you cannot specify the extent management local clause. to create a locally managed
temporary tablespace, use the create temporary tablespace statement.
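the two ways of creating a temporary tablespace mentioned above can be sketched as
follows (tablespace names and file paths are illustrative only):

```sql
-- dictionary-managed temporary tablespace (create tablespace ... temporary)
create tablespace temp_dict
  datafile '/u01/oradata/db1/temp_dict01.dbf' size 100m
  temporary;

-- locally managed temporary tablespace (create temporary tablespace)
create temporary tablespace temp_local
  tempfile '/u01/oradata/db1/temp_local01.dbf' size 100m
  extent management local uniform size 1m;
```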
answer: b, d, e
explanation:
ad d, e, f: see http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96524/c04space.htm#10136
when a tablespace goes offline
when a tablespace goes offline or comes back online, this is recorded in the data
dictionary in the system tablespace. if a tablespace is offline when you shut down
a database, the tablespace remains offline when the database is subsequently
mounted and reopened.
you can drop a tablespace regardless of whether it is online or offline (this is
why answer a is wrong). oracle recommends that you take the tablespace offline
before dropping it to ensure that no sql statements in currently running
transactions access any of the objects in the tablespace.
restriction on the offline clause: you cannot take a temporary tablespace offline.
normal
specify normal to flush all blocks in all datafiles in the tablespace out of the
sga. you need not perform media recovery on this tablespace before bringing it
back online. this is the default. (this confirms answer b.)
temporary
if you specify temporary, then oracle performs a checkpoint for all online
datafiles in the tablespace but does not ensure that all files can be written.
any offline files may require media recovery before you bring the tablespace back
online.
specify offline to take the tablespace offline and prevent further access to its
segments. when you take a tablespace offline, all of its datafiles are also
offline.
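a sketch of the offline options discussed above (the tablespace name is
illustrative):

```sql
-- normal (default): flushes all blocks out of the sga;
-- no media recovery is needed before bringing it back online
alter tablespace app_data offline normal;

-- temporary: checkpoints only the online datafiles; any offline
-- files may require media recovery before the tablespace comes back
alter tablespace app_data offline temporary;

-- bring the tablespace back online
alter tablespace app_data online;
```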
which three statements are true about dropping a table? (choose three.)
answer: b, c, e
explanation:
ad a: in general, the extents of a segment do not return to the tablespace until
you drop the schema object whose data is stored in the segment (using a drop table
or drop cluster statement).
ad b: dropping a table removes the table definition from the data dictionary. all
rows of the table are no longer accessible.
ad c: all indexes and triggers associated with a table are dropped.
ad d: false. all synonyms for a dropped table remain, but return an error when
used.
ad e: if the table to be dropped contains any primary or unique keys referenced by
foreign keys of other tables and you intend to drop the foreign key constraints of
the child tables, include the cascade clause in the drop table statement.
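the cascade clause mentioned in ad e can be sketched as follows (the table name is
made up for illustration):

```sql
-- drops the parent table together with the foreign key constraints
-- in child tables that reference its primary or unique keys
drop table parent_tab cascade constraints;
```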
answer: b, c, d, e
explanation:
to remove all rows from a table or cluster and reset the storage parameters to the
values when the table or cluster was created.
you can use the truncate command to quickly remove all rows from a table or
cluster. removing rows with the truncate command is faster than removing them with
the delete command for the following reasons:
the truncate command is a data definition language (ddl) command and generates no
rollback information.
truncating a table does not fire the table's delete triggers.
the truncate command allows you to optionally deallocate the space freed by the
deleted rows. the drop storage option deallocates all but the space specified by
the table's minextents parameter. deleting rows with the truncate command is also
more convenient than dropping and re-creating a table, because dropping and
re-creating:
(a) invalidates the table's dependent objects, while truncating does not.
(b) requires you to regrant object privileges on the table, while truncating does
not.
(c) requires you to re-create the table's indexes, integrity constraints, and
triggers and respecify its storage parameters.
see: oracle8 sql reference release 8.0 december 1997 part no. a58225-01
(a58225.pdf) pg.722. (4-538)
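the truncate storage options discussed above can be sketched as (the table name is
illustrative):

```sql
-- deallocates all space except that reserved by minextents (the default)
truncate table sales_hist drop storage;

-- keeps the space allocated to the table for reuse by future inserts
truncate table sales_hist reuse storage;
```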
59. tom was allocated 10 mb of quota in the users tablespace. he created database
objects in the users tablespace. the total space allocated for the objects owned
by tom is 5 mb. you need to revoke tom's quota from the users tablespace. you
issue this command:
alter user tom quota 0 on users;
what is the result?
answer: d
explanation:
use the alter user statement to change the authentication or database resource
characteristics of a database user.
tom's quota on the users tablespace is revoked with this statement. the objects
are not deleted from the tablespace. after the quota is set to zero, tom's
existing objects remain and he can still insert and delete records within the
space already allocated to them, but oracle will not allocate any new extents for
his objects in the users tablespace.
a. lgwr
b. smon
c. dbwn
d. ckpt
e. pmon
answer: c
explanation:
refer to question 54 for the explanation of some of the words used in the answer.
database writer (dbw n)
the database writer writes modified blocks from the database buffer cache to the
datafiles. although one database writer process (dbw0) is sufficient for most
systems, you can configure additional processes (dbw1 through dbw9 and dbwa
through dbwj) to improve write performance for a system that modifies data
heavily. the initialization parameter db_writer_processes specifies the number of
dbwn processes.
checkpoint (ckpt)
at specific times, all modified database buffers in the sga are written to the
datafiles by dbwn. this event is called a checkpoint. the checkpoint process is
responsible for signaling dbwn at checkpoints and updating all the datafiles and
control files of the database to indicate the most recent checkpoint.
61. which command would revoke the role_emp role from all users?
answer: b
explanation:
privileges and roles can also be granted to and revoked from the user group
public. because public is accessible to every database user, all privileges and
roles granted to public are accessible to every database user.
errors given by the answers:
answer a: ora-00987: missing or invalid username(s).
answer b: (it's ok as long as the role was granted to public).
answer c: ora-00987: missing or invalid username(s).
answer d: ora-01917: user or role 'all_users' does not exist.
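assuming answer b is the statement issued against public, it would look like this:

```sql
-- revokes the role from every database user in one statement;
-- succeeds only if role_emp was previously granted to public
revoke role_emp from public;
```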
62. you are experiencing intermittent hardware problems with the disk drive on
which your control file is located. you decide to multiplex your control file.
while your database is open, you perform these steps:
1. make a copy of your control file using an operating system command.
2. add the new file name to the list of files for the control_files parameter in
your text initialization parameter file using an editor.
3. shut down the instance.
4. issue the startup command to restart the instance, mount, and open the
database.
a. you copied the control file before shutting down the instance.
b. you used an operating system command to copy the control file.
c. the oracle server does not know the name of the new control file.
d. you added the new control file name to the control_files parameter before
shutting down the instance.
answer: a
explanation:
to multiplex or move additional copies of the current control file:
1. shutdown the database.
2. exit server manager.
3. copy an existing control file to a different location, using operating system
commands.
4. edit the control_files parameter in the database's parameter file to add the
new control file's name, or to change the existing control filename.
5. restart server manager.
6. restart the database.
for more information refer to steps for creating new control files in
administrator's guide on page 6-7
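the parameter edit in step 4 might look like this in a text initialization
parameter file (the file paths are illustrative only):

```
# init.ora fragment: list every control file copy
control_files = ('/u01/oradata/db1/control01.ctl',
                 '/u02/oradata/db1/control02.ctl')
```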
answer: e
explanation:
minimum extent clause
specify the minimum size of an extent in the tablespace. this clause lets you
control free space fragmentation in the tablespace by ensuring that every used or
free extent size in a tablespace is at least as large as, and is a multiple of,
integer.
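a sketch of the minimum extent clause (name, path, and sizes are illustrative):

```sql
-- every used or free extent in this tablespace will be at least
-- 64k in size and a multiple of 64k
create tablespace app_data
  datafile '/u01/oradata/db1/app_data01.dbf' size 50m
  minimum extent 64k;
```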
the storage_clause is interpreted differently for locally managed tablespaces. at
creation, oracle ignores maxextents and uses the remaining parameter values to
calculate the initial size of the segment.
64. you are going to create a new database. you will not use operating system
authentication.
which two files do you need to create before creating the database? (choose two.)
a. control file
b. password file
c. redo log file
d. alert log file
e. initialization parameter file
answer: b, e
explanation:
answer a can be ruled out: creating the control files is part of the create
database procedure itself, so they do not need to exist beforehand. further proof:
oracle9i sql reference release 2 (9.2) march 2002 part no. a96540-01 (a96540.pdf)
13-26
controlfile reuse clause
specify controlfile reuse to reuse existing control files identified by the
initialization parameter control_files, thus ignoring and overwriting any
information they currently contain. normally you use this clause only when you are
re-creating a database, rather than creating one for the first time. you cannot
use this clause if you also specify a parameter value that requires that the
control file be larger than the existing files. these parameters are maxlogfiles,
maxlogmembers, maxloghistory, maxdatafiles, and maxinstances. if you omit this
clause and any of the files specified by control_files already exist, oracle
returns an error.
password file
but since no os authentication is used, the other choice can only be password-file
authentication. for this purpose a password file is needed.
redo log file
redo log file is used for the transactions within a database, not for database
creation.
alert log file
see question 11 ("trace files, on the other hand, are generated by the oracle
background processes or other connected net8 processes when oracle internal errors
occur and they dump all information about the error into the trace files.").
initialization parameter file
we need one; this file contains the description of the database to be created.
so the answer is b, e.
remark: it would be logical, that if oracle wants to read from a file, the file
needs to be there. if oracle wants to write to a file, it will create one.
65. based on the following profile limits, if a user attempts to log in and fails
after five tries, how long must the user wait before attempting to log in again?
a. 1 minute
b. 5 minutes
c. 10 minutes
d. 14 minutes
e. 18 minutes
f. 60 minutes
answer: a
explanation:
password_lock_time is the interesting parameter: password_lock_time specifies the
number of days an account will be locked after the specified number of consecutive
failed login attempts.
now we have 1/1440 days = 24/1440 hours = 24*60/1440 minutes = 1 minute.
password_parameters
failed_login_attempts: specify the number of failed attempts to log in to the user
account before the account is locked.
password_life_time: specify the number of days the same password can be used for
authentication. the password expires if it is not changed within this period, and
further connections are rejected.
password_reuse_time: specify the number of days before which a password cannot be
reused. if you set password_reuse_time to an integer value, then you must set
password_reuse_max to unlimited.
password_reuse_max: specify the number of password changes required before the
current password can be reused. if you set password_reuse_max to an integer value,
then you must set password_reuse_time to unlimited.
password_lock_time: specify the number of days an account will be locked after the
specified number of consecutive failed login attempts.
password_grace_time: specify the number of days after the grace period begins
during which a warning is issued and login is allowed. if the password is not
changed during the grace period, the password expires.
password_verify_function: the password_verify_function clause lets a pl/sql
password complexity verification script be passed as an argument to the create
profile statement. oracle provides a default script, but you can create your own
routine or use third-party software instead. for function, specify the name of the
password complexity verification routine, specify null to indicate that no
password verification is performed.
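the behavior in question 65 can be sketched in a create profile statement (the
profile name and the non-lock limits below are hypothetical values; 1/1440 of a
day is one minute):

```sql
create profile lock_demo limit
  failed_login_attempts 5        -- lock the account after 5 failed logins
  password_lock_time    1/1440   -- keep it locked for 1 minute
  password_life_time    60       -- hypothetical: password valid 60 days
  password_grace_time   10;      -- hypothetical: 10-day warning period

-- assign the profile to a (hypothetical) user
alter user scott profile lock_demo;
```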
answer: b, d, e
explanation:
create any materialized view: create materialized views in any schema.
create any dimension: create dimensions in any schema.
drop any dimension: drop dimensions in any schema.
query rewrite: enable rewrite using a materialized view, or create a
function-based index, when that materialized view or index references tables and
views that are in the grantee's own schema.
global query rewrite: enable rewrite using a materialized view, or create a
function-based index, when that materialized view or index references tables or
views in any schema.
with admin option: specify with admin option to enable the grantee to:
(a) grant the role to another user or role, unless the role is a global role.
(b) revoke the role from another user or role.
(c) alter the role to change the authorization needed to access it.
(d) drop the role.
67. which constraint state prevents new data that violates the constraint from
being entered, but allows invalid data to exist in the table?
a. enable validate
b. disable validate
c. enable novalidate
d. disable novalidate
answer: c
explanation:
enable validate specifies that all old and new data also complies with the
constraint. an enabled validated constraint guarantees that all data is and will
continue to be valid.
enable novalidate ensures that all new dml operations on the constrained data
comply with the constraint. this clause does not ensure that existing data in the
table complies with the constraint and therefore does not require a table lock.
disable validate disables the constraint and drops the index on the constraint,
but keeps the constraint valid. this feature is most useful in data warehousing
situations, because it lets you load large amounts of data while also saving space
by not having an index. this setting lets you load data from a nonpartitioned
table into a partitioned table using the exchange_partition_clause of the alter
table statement or using sql*loader. all other modifications to the table
(inserts, updates, and deletes) by other sql statements are disallowed.
disable novalidate signifies that oracle makes no effort to maintain the
constraint (because it is disabled) and cannot guarantee that the constraint is
true (because it is not being validated).
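the four constraint states can be sketched as follows (the table and constraint
names are illustrative):

```sql
alter table emp enable validate constraint emp_sal_ck;    -- old + new data checked
alter table emp enable novalidate constraint emp_sal_ck;  -- only new dml checked (answer c)
alter table emp disable validate constraint emp_sal_ck;   -- constraint index dropped; no dml allowed
alter table emp disable novalidate constraint emp_sal_ck; -- constraint not enforced at all
```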
for more info look at page 7-20 of the oracle9i sql reference document.
68. which storage structure provides a way to physically store rows from more than
one table in the same data block?
a. cluster table
b. partitioned table
c. unclustered table
d. index-organized table
answer: a
explanation:
clusters:
(a) group of one or more tables physically stored together because they share
common columns and are often used together.
(b) since related rows are stored together, disk access time improves.
(c) clusters do not affect application design.
(d) data stored in a clustered table is accessed by sql in the same way as data
stored in a non-clustered table.
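a minimal sketch of a cluster that stores rows from two tables in the same data
blocks (all names here are made up for illustration):

```sql
-- the cluster key is the column the tables share
create cluster emp_dept_cluster (deptno number(2))
  size 600;

-- an index cluster needs a cluster index before dml is allowed
create index idx_emp_dept_cluster on cluster emp_dept_cluster;

-- both tables are stored in the cluster, keyed on deptno
create table dept_c (deptno number(2) primary key, dname varchar2(14))
  cluster emp_dept_cluster (deptno);

create table emp_c (empno number(4) primary key, deptno number(2))
  cluster emp_dept_cluster (deptno);
```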
partitioning addresses key issues in supporting very large tables and indexes by
letting you decompose them into smaller and more manageable pieces called
partitions. sql queries and dml statements do not need to be modified in order to
access partitioned tables. however, after partitions are defined, ddl statements
can access and manipulate individual partitions rather than entire tables or
indexes.
this is how partitioning can simplify the manageability of large database objects.
also, partitioning is entirely transparent to applications.
for more info look at page 10-64 of oracle9i database concepts (nice diagram of
clustered and non-clustered storage).
a. only lobs
b. only nested tables
c. only index-organized tables
d. only lobs and index-organized tables
e. only nested tables and index-organized tables
f. only lobs, nested tables, and index-organized tables
g. nested tables, lobs, index-organized tables, and boot straps
answer: g
explanation:
2-12 oracle9i database concepts:
a single data segment in an oracle database holds all of the data for one of the
following:
(a) a table that is not partitioned or clustered.
(b) a partition of a partitioned table.
(c) a cluster of tables.
70. select the memory structure(s) that would be used to store the parse
information and actual value of the bind variable id for the following set of
commands:
variable id number;
begin
:id:=1;
end;
/
a. pga only
b. row cache and pga
c. pga and library cache
d. shared pool only
e. library cache and buffer cache
answer: c
explanation:
reason for c instead of b:
http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96524/c16sqlpl.htm#cncpt416:
parsing is one stage in the processing of a sql statement. when an application
issues a sql statement, the application makes a parse call to oracle. during the
parse call, oracle:
(a) checks the statement for syntactic and semantic validity.
(b) determines whether the process issuing the statement has privileges to run it.
(c) allocates a private sql area for the statement.
oracle also determines whether there is an existing shared sql area containing the
parsed representation of the statement in the library cache. if so, the user
process uses this parsed representation and runs the statement immediately. if
not, oracle generates the parsed representation of the statement, and the user
process allocates a shared sql area for the statement in the library cache and
stores its parsed representation there.
the program global area (pga) is private to each server and background process;
there is one pga for each process. the pga holds the following:
(a) stack areas.
(b) data areas.
71. the new human resources application will be used to manage employee data in
the employees table. you are developing a strategy to manage user privileges. your
strategy should allow for privileges to be granted or revoked from individual
users or groups of users with minimal administrative effort.
the users of the human resources application have these requirements:
a manager should be able to view the personal information of the employees in
his/her group and make changes to their title and salary.
what should you grant to the manager user?
answer: d
72. an insert statement failed and is rolled back. what does this demonstrate?
a. insert recovery
b. read consistency
c. transaction recovery
d. transaction rollback
answer: d
explanation:
if at any time during execution a sql statement causes an error, all effects of
the statement are rolled back. the effect of the rollback is as if that statement
had never been run. this operation is a statement-level rollback.
errors discovered during sql statement execution cause statement-level rollbacks.
an example of such an error is attempting to insert a duplicate value in a primary
key.
73. the database currently has one control file. you decide that three control
files will provide better protection against a single point of failure. to
accomplish this, you modify the spfile to point to the locations of the three
control files. the message "system altered" was received after execution of the
statement.
you shut down the database and copy the control file to the new names and
locations. on startup you receive the error ora-00205: error in identifying
control file. you look in the alert log and determine that you specified the
incorrect path for the for control file.
which steps are required to resolve the problem and start the database?
a.
1. connect as sysdba.
2. shut down the database.
3. start the database in nomount mode.
4. use the alter system set control_files command to correct the error.
5. shut down the database.
6. start the database.
b.
1. connect as sysdba.
2. shut down the database.
3. start the database in mount mode.
4. remove the spfile by using a unix command.
5. recreate the spfile from the pfile.
6. use the alter system set control_files command to correct the error.
7. start the database.
c.
1. connect as sysdba.
2. shut down the database.
3. remove the control files using the os command.
4. start the database in nomount mode.
5. remove the spfile by using an os command.
6. re-create the spfile from the pfile.
7. use the alter system set control_files command to define the control files.
8. shut down the database.
9. start the database.
answer: a
explanation:
some parameters can be changed dynamically by using the alter session or alter
system statement while the instance is running. unless you are using a server
parameter file, changes made using the alter system statement are only in effect
for the current instance. you must manually update the text initialization
parameter file for the changes to be known the next time you start up an
instance.
when you use a server parameter file, you can update the parameters on disk, so
that changes persist across database shutdown and startup.
see question number 62: you do not need to create the spfile again. use alter
system to update the control_files parameter value.
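the sequence in answer a might look like this in sqlplus (the file paths are
illustrative; scope = spfile is needed because control_files is a static
parameter):

```sql
connect / as sysdba
startup nomount
alter system set control_files =
  '/u01/oradata/db1/control01.ctl',
  '/u02/oradata/db1/control02.ctl',
  '/u03/oradata/db1/control03.ctl'
  scope = spfile;
shutdown immediate
startup
```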
74. which process is started when a user connects to the oracle server in a
dedicated server mode?
a. dbwn
b. pmon
c. smon
d. server
answer: d
explanation:
smon: the system monitor performs crash recovery when a failed instance starts up
again. in a cluster database (oracle9i real application clusters), the smon
process of one instance can perform instance recovery for other instances that
have failed. smon also cleans up temporary segments that are no longer in use and
recovers dead transactions skipped during crash and instance recovery because of
file-read or offline errors. these transactions are eventually recovered by smon
when the tablespace or file is brought back online.
pmon: the process monitor performs process recovery when a user process fails.
pmon is responsible for cleaning up the cache and freeing resources that the
process was using. pmon also checks on the dispatcher processes (see below) and
server processes and restarts them if they have failed.
checkpoint (ckpt): at specific times, all modified database buffers in the sga are
written to the datafiles by dbwn. this event is called a checkpoint. the
checkpoint process is responsible for signaling dbwn at checkpoints and updating
all the datafiles and control files of the database to indicate the most recent
checkpoint.
75. you are creating a new database. you do not want users to use the system
tablespace for sorting operations.
what should you do when you issue the create database statement to prevent this?
answer: b
explanation:
you can manage space for sort operations more efficiently by designating temporary
tablespaces exclusively for sorts. doing so effectively eliminates serialization
of space management operations involved in the allocation and deallocation of sort
space.
all operations that use sorts, including joins, index builds, ordering, computing
aggregates (group by), and collecting optimizer statistics, benefit from temporary
tablespaces. the performance gains are significant with real application clusters.
specify a default temporary tablespace when you create a database, using the
default temporary tablespace extension to the create database statement.
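a sketch of answer b as part of a create database statement (database name, paths,
and sizes are illustrative; other clauses are left to their defaults):

```sql
create database db1
  datafile '/u01/oradata/db1/system01.dbf' size 300m
  default temporary tablespace temp
    tempfile '/u01/oradata/db1/temp01.dbf' size 100m;
```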
76. which four statements are true about profiles? (choose four.)
explanation:
it's true that profiles can control the use of passwords. this feature protects
the integrity of assigned usernames as well as the overall data integrity of the
oracle database. all limits of the default profile are initially unlimited. the
default profile isn't very restrictive of host system resources; in fact, default
profile gives users unlimited use of all resources definable in the database. any
option in any profile can be changed at any time; however, the change will not
take effect for users assigned to that profile until the user logs out and logs
back in. also profiles can ensure that users log off the database when they have
left their session idle for a period of time.
different profiles can be created and assigned individually to each user of the
database. a default profile is present for all users not explicitly assigned a
profile.
the resource limit feature prevents excessive consumption of global database
system resources.
to allow for greater control over database security, oracle's password management
policy is controlled by dbas and security officers through user profiles.
to alter the enforcement of resource limitation while the database remains open,
you must have the alter system system privilege.
all unspecified resource limits for a new profile take the limit set by a default
profile. initially, all limits of the default profile are set to unlimited.
77. the database writer (dbwn) background process writes the dirty buffers from
the database buffer cache into the _______.
answer: a
explanation:
database writer (dbwn)
the database writer writes modified blocks from the database buffer cache to the
datafiles. oracle allows a maximum of 20 database writer processes (dbw0-dbw9 and
dbwa-dbwj). the initialization parameter db_writer_processes specifies the number
of dbwn processes. oracle selects an appropriate default setting for this
initialization parameter (or might adjust a user specified setting) based upon the
number of cpus and the number of processor groups.
78. you used the password file utility to create a password file as follows:
you created a user and granted only the sysdba privilege to that user as follows:
create user dba_user identified by dba_pass;
grant sysdba to dba_user;
answer: c
explanation:
when prompted, connect as sys (or other administrative user) with the sysdba
system privilege:
connect sys/password as sysdba
where password is the password of the user created. in this example, it is
dba_user.
79. you intend to use only password authentication and have used the password file
utility to create a password file as follows:
you created a user and granted only the sysdba privilege to that user as follows:
create user dba_user identified by dba_pass;
grant sysdba to dba_user;
80. for which two constraints are indexes created when the constraint is added?
(choose two.)
a. check
b. unique
c. not null
d. primary key
e. foreign key
answer: b, d
explanation:
oracle enforces all primary key constraints using indexes.
oracle enforces unique integrity constraints with indexes.
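a quick way to confirm this (the table and constraint names are made up; the
backing indexes are typically named after the constraints):

```sql
create table t1 (
  id   number       constraint t1_pk primary key,      -- index created
  code varchar2(10) constraint t1_uq unique,           -- index created
  val  number       constraint t1_ck check (val > 0)   -- no index
);

-- only the indexes backing t1_pk and t1_uq should appear here
select index_name from user_indexes where table_name = 'T1';
```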
81. you check the alert log for your database and discover that there are many
lines that say "checkpoint not complete". what are two ways to solve this problem?
(choose two.)
explanation:
"checkpoint not complete" means that a checkpoint started, but before it could
finish, another higher priority checkpoint was issued (usually from a log switch),
so the first checkpoint was essentially rolled back.
i found these answers from newsgroups and they sound quite good to me:
increasing the number of redo logs seems to be most effective. normally,
checkpoints occur for 1 of 3 reasons:
1) the log_checkpoint_interval was reached.
2) a log switch occurred.
3) the log_checkpoint_timeout was reached.
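increasing the number (or size) of redo log groups, as suggested above, can be
sketched like this (group number, sizes, and paths are illustrative):

```sql
-- a new multiplexed redo log group gives lgwr more room before it
-- must wait for an incomplete checkpoint to finish
alter database add logfile group 4
  ('/u01/oradata/db1/redo04a.log',
   '/u02/oradata/db1/redo04b.log') size 100m;
```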
the archiver copies the online redo log files to archival storage after a log
switch has occurred. although a single arcn process (arc0) is sufficient for most
systems, you can specify up to 10 arcn processes by using the dynamic
initialization parameter log_archive_max_processes. if the workload becomes too
great for the current number of arcn processes, then lgwr automatically starts
another arcn process up to the maximum of 10 processes. arcn is active only when a
database is in archivelog mode and automatic archiving is enabled.
note: the files being switched here are the online redo log files; the copies the
archiver writes are called archived redo log files.
82. the database needs to be shut down for hardware maintenance. all users
sessions except one have either voluntarily logged off or have been forcibly
killed. the one remaining user session is running a business critical data
manipulation language (dml) statement and it must complete prior to shutting down
the database.
which shutdown statement prevents new user connections, logs off the remaining
user, and shuts down the database after the dml statement completes?
a. shutdown
b. shutdown abort
c. shutdown normal
d. shutdown immediate
e. shutdown transactional
answer: e
explanation:
from a newsgroup:
there are four ways to shut down a database:
(a) shutdown waits for everyone to finish & log out before it shuts down. the
database is cleanly shutdown.
(b) shutdown immediate rolls back all uncommitted transactions before it shuts
down. the database is cleanly shutdown.
(c) shutdown transactional waits for all current transactions to commit or
rollback before it shuts down. the database is cleanly shutdown.
(d) shutdown abort quickly shuts down - the next restart will require instance
recovery. the database is technically crashed.
the key reason for an immediate shutdown not being immediate is because of the
need to rollback all current transactions. if a user has just started a
transaction to update emp set sal = sal * 2 where emp_id = 1000; then this will be
rolled back almost instantaneously.
however, if another user has been running a huge update for the last four hours,
and has not yet committed, then four hours of updates have to be rolled back and
this takes time.
so, if you really want to shutdown right now, then the advised route is: shutdown
abort - startup restrict - shutdown
when you shutdown abort, oracle kills everything immediately. startup restrict
will allow only dba users to get in but, more importantly, will carry out instance
recovery and recover back to a consistent state using the current on-line redo
logs. the final shutdown will perform a clean shutdown. any cold backups taken now
will be of a consistent database.
there has been much discussion on this very subject on the oracle server
newsgroups. some people are happy to backup the database after a shutdown abort,
others are not. i prefer to use the above method prior to taking a cold backup -
if i have been unable to shutdown or shutdown immediate that is.
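the advised route above, written out as a sql*plus session (a minimal sketch; the sysdba connection is an assumption):

```sql
-- connect with administrative privileges (sysdba assumed here)
connect / as sysdba

-- kill everything immediately; the database is technically crashed
shutdown abort

-- restart in restricted mode: instance recovery runs automatically,
-- and only dba users can connect
startup restrict

-- now perform a clean shutdown; a cold backup taken after this
-- will be of a consistent database
shutdown
```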
83. when preparing to create a database, you should be sure that you have
sufficient disk space for your database files. when calculating the space
requirements you need to consider that some of the files may be multiplexed.
which two types of files should you plan to multiplex? (choose two.)
a. data files
b. control file
c. password file
d. online redo log files
e. initialization parameter file
answer: b, d
explanation:
multiplex: files are stored at more than one location.
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) 3-22
multiplexed control files: as with online redo log files, oracle enables multiple,
identical control files to be open concurrently and written for the same database.
by storing multiple control files for a single database on different disks, you
can safeguard against a single point of failure with respect to control files. if
a single disk that contained a control file crashes, then the current instance
fails when oracle attempts to access the damaged control file. however, when other
copies of the current control file are available on different disks, an instance
can be restarted
easily without the need for database recovery.
if all control files of a database are permanently lost during operation, then the
instance is aborted and media recovery is required. media recovery is not
straightforward if an older backup of a control file must be used because a
current copy is not available. therefore, it is strongly recommended that you
adhere to the following practices:
(a) use multiplexed control files with each database
(b) store each copy on a different physical disk
(c) use operating system mirroring
(d) monitor backups
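practices (a) and (b) come down to listing several copies on different disks in the control_files initialization parameter; a sketch (the paths are assumptions):

```ini
# init.ora - multiplexed control files stored on different physical disks
control_files = (/disk1/oradata/sample/control01.ctl,
                 /disk2/oradata/sample/control02.ctl,
                 /disk3/oradata/sample/control03.ctl)
```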
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) 1-7
redo log files: to protect against a failure involving the redo log itself, oracle
allows a multiplexed redo log so that two or more copies of the redo log can be
maintained on different disks.
the information in a redo log file is used only to recover the database from a
system or media failure that prevents database data from being written to the
datafiles. for example, if an
unexpected power outage terminates database operation, then data in memory cannot
be written to the datafiles, and the data is lost. however, lost data can be
recovered when the database is opened, after power is restored. by applying the
information in the most recent redo log files to the database datafiles, oracle
restores the database to the time at which the power failure occurred.
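a multiplexed redo log group is created by naming members on different disks; a sketch (group number, size, and paths are assumptions):

```sql
-- add a redo log group with two members on separate disks, so the loss
-- of one disk does not lose the group
alter database add logfile group 3
  ('/disk1/oradata/sample/redo03a.log',
   '/disk2/oradata/sample/redo03b.log') size 10m;
```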
84. which two are requirements with respect to the directories you specify in the
db_create_file_dest and db_create_online_log_dest_n initialization parameters?
(choose two).
answer: a, d
explanation:
setting the db_create_online_log_dest_n initialization parameter: you specify the
name of a file system directory that becomes the default location for the creation
of the operating system files for these entities. you can specify up to five
multiplexed locations.
in conclusion, the directories must already exist (their contents do not matter),
they can be anywhere on the file system since you specify their locations, and
oracle must have operating system permission to read and write them.
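for example (the directory paths are assumptions; the directories must already exist and be writable by oracle):

```sql
-- default location for oracle managed datafiles
alter system set db_create_file_dest = '/u01/oradata/sample';

-- two multiplexed locations for online redo logs and control files
alter system set db_create_online_log_dest_1 = '/u02/oradata/sample';
alter system set db_create_online_log_dest_2 = '/u03/oradata/sample';
```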
85. in which two situations does the log writer (lgwr) process write the redo
entries from the redo log buffer to the current online redo log group? (choose
two.)
answer: a, d
explanation:
see ocp oracle 9i database: fundamentals i, p. 19.:
the redo log buffer writes to the redo logfile under the following situations:
(a) when a transaction commits.
(b) when the redo log buffer is one-third full.
(c) when there is more than one megabyte of changes recorded in the redo log
buffer.
(d) before the dbwn writes modified blocks in the database buffer cache to the
datafiles.
86. examine the syntax below, which creates a departments table:
a. 200 k
b. 300 k
c. 450 k
d. 675 k
e. not defined
answer: d
explanation:
the size of the first and the second extent is 200k and pctincrease is set to 50%,
so each subsequent extent is 1.5 times the previous one:
200k * 1.5 * 1.5 * 1.5 = 675k.
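a storage clause producing these extent sizes might look like this (the column list is illustrative; only the storage parameters matter for the calculation):

```sql
-- extent sizes: 200k, 200k (next), then 300k, 450k, 675k,
-- because pctincrease 50 grows each later extent by a factor of 1.5
create table departments (
  department_id   number(4),
  department_name varchar2(30)
)
storage (initial 200k next 200k pctincrease 50);
```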
87. after running the analyze index orders cust_idx validate structure command,
you query the index_stats view and discover that there is a high ratio of
del_lf_rows to lf_rows values for this index.
you decide to reorganize the index to free up the extra space, but the space
should remain allocated to the orders_cust_idx index so that it can be reused by
new entries inserted into the index.
which command(s) allows you to perform this task with the minimum impact to any
users who run queries that need to access this index while the index is
reorganized?
answer: b
explanation:
when you rebuild an index, you use an existing index as the data source. creating
an index in this manner enables you to change storage characteristics or move to a
new tablespace. rebuilding an index based on an existing data source removes
intra-block fragmentation. compared to dropping the index and using the create
index statement, re-creating an existing index offers better performance.
coalescing an index online vs. rebuilding an index online. online index coalesce
is an in-place data reorganization operation, hence does not require additional
disk space like index rebuild does. index rebuild requires temporary disk space
equal to the size of the index plus sort space during the operation. index
coalesce does not reduce the height of the b-tree. it only tries to reduce the
number of leaf blocks. the coalesce operation does not free up space for users but
does improve index scan performance.
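the two operations compared above, using the index name from the question:

```sql
-- rebuild online: queries can keep using the index during the rebuild,
-- but temporary space roughly equal to the index size (plus sort space)
-- is needed while it runs
alter index orders_cust_idx rebuild online;

-- coalesce: an in-place operation needing no extra disk space; it merges
-- adjacent leaf blocks and keeps the freed space allocated to the index
-- for reuse by new entries
alter index orders_cust_idx coalesce;
```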
88. while your database is open, you issue this command to start the archiver
process:
alter system archive log start;
you shut down your database to take a backup and restart it using the
initsampledb.ora parameter file again. when you check the status of the archiver,
you find that it is disabled.
answer: c
explanation:
if an instance is shut down and restarted after automatic archiving is enabled
using the alter system statement, the instance is reinitialized using the settings
of the initialization parameter file. those settings may or may not enable
automatic archiving. if your intent is to always archive redo log files
automatically, then you should include log_archive_start = true in your
initialization parameters.
answer d is correct, since every time the database is started, the parameter file
sets log_archive_start=false.
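to make automatic archiving survive a restart, the setting goes in the parameter file; a sketch (the archive destination path is an assumption):

```ini
# initsampledb.ora - start the archiver automatically at instance startup
log_archive_start  = true
# where the archived redo log files are written (path is an example)
log_archive_dest_1 = 'location=/u01/arch/sampledb'
```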
a. redo log
b. undo segment
c. rollback segment
d. system tablespace
answer: a
explanation:
oracle9i database administrator's guide release 2 (9.2) march 2002 part no.
a96521-01 (a96521.pdf) 13-2
undo and rollback segments
every oracle database must have a method of maintaining information that is used
to roll back, or undo, changes to the database. such information consists of
records of the actions of transactions, primarily before they are committed.
oracle refers to these records collectively as undo.
undo records are used to:
(a) roll back transactions when a rollback statement is issued.
(b) recover the database.
(c) provide read consistency.
when a rollback statement is issued, undo records are used to undo changes that
were made to the database by the uncommitted transaction. during database
recovery, undo records are used to undo any uncommitted changes applied from the
redo log to the datafiles. undo records provide read consistency by maintaining
the before image of the data for users who are accessing the data at the same time
that another user is changing it.
historically, oracle has used rollback segments to store undo. space management
for these rollback segments has proven to be quite complex. oracle now offers
another method of storing undo that eliminates the complexities of managing
rollback segment space, and enables dbas to exert control over how long undo is
retained before being overwritten. this method uses an undo tablespace.
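automatic undo management is enabled with a dedicated undo tablespace and two initialization parameters; a sketch (the name, path, and size are assumptions):

```sql
-- create a dedicated undo tablespace
create undo tablespace undotbs_1
  datafile '/u01/oradata/sample/undotbs01.dbf' size 200m;

-- in the parameter file:
--   undo_management = auto
--   undo_tablespace = undotbs_1
```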
90. there are three ways to specify national language support parameters:
1. initialization parameters
2. environment variables
3. alter session parameters
a.
1) parameters on the client side to specify locale-dependent behavior overriding
the defaults set for the server
2) parameters on the server side to specify the default server environment
3) parameters override the default set for the session or the server
b.
1) parameters on the server side to specify the default server environment
2) parameters on the client side to specify locale-dependent behavior overriding
the defaults set for the server
3) parameters override the default set for the session or the server
c.
1) parameters on the server side to specify the default server environment
2) parameters override the default set for the session or the server
3) parameters on the client side to specify locale-dependent behavior overriding
the defaults set for the server
d.
1) parameters on the client side to specify locale-dependent behavior overriding
the defaults set for the server
2) parameters override the default set for the session or the server
3) parameters on the server side to specify the default server environment
answer: b
explanation:
oracle has attempted to provide appropriate values in the starter initialization
parameter file provided with your database software, or as created for you by the
database configuration assistant. you can edit these oracle-supplied
initialization parameters and add others, depending upon your configuration and
options and how you plan to tune the database.
91. which graphical dba administration tool would you use to tune an oracle
database?
a. sql*plus
b. oracle enterprise manager
c. oracle universal installer
d. oracle database configuration assistant
answer: b
explanation:
if you think sql*plus is a graphical tool, then i call microsoft windows an
artistic tool ;-)
you can more easily administer the database resource manager through the oracle
enterprise manager (oem). it provides an easy to use graphical interface for
administering the database resource manager. you can choose to use the oracle
enterprise manager for administering your database, including starting it up and
shutting it down. the oracle enterprise manager is a separate oracle product, that
combines a graphical console, agents, common services, and tools to provide an
integrated and comprehensive systems management platform for managing oracle
products. it enables you to perform the functions discussed in this book using a
gui interface, rather than command lines.
the database configuration assistant (dbca) is an oracle-supplied tool that
enables you to create an oracle database, configure database options for an
existing oracle database, delete an oracle database, or manage database templates.
dbca is launched automatically by the oracle universal installer, but it can be
invoked standalone from the windows operating system start menu (under
configuration assistants).
a. startup
b. startup open
c. startup mount
d. startup nomount
answer: d
explanation:
start an instance without mounting a database. typically, you do this only during
database creation or while performing maintenance on the database. use the
startup command with the nomount option.
93. you just created five roles using the statements shown:
which statement indicates that a user must be authorized to use the role by the
enterprise directory service before the role is enabled?
answer: b
explanation:
creating a global user - the following statement illustrates the creation of a
global user, who is authenticated by ssl and authorized by the enterprise
directory service:
create user scott
identified globally as 'cn=scott,ou=division1,o=oracle,c=us';
the string provided in the as clause provides an identifier (distinguished name,
or dn) meaningful to the enterprise directory.
in this case, scott is truly a global user. but, the disadvantage here is that
user scott must then be created in every database that he must access, plus the
directory.
94. examine the list of steps to rename the data file of a non-system tablespace
hr_tbs. the steps are arranged in random order.
answer: d
explanation:
renaming datafiles in a single tablespace: to rename datafiles from a single
tablespace, complete the following steps:
(1) take the non-system tablespace that contains the datafiles offline.
for example: alter tablespace users offline normal;
(2) rename the datafiles using the operating system.
(3) use the alter tablespace statement with the rename datafile clause to change
the filenames within the database. the new files must already exist; this
statement does not create the files. also, always provide complete filenames
(including their paths) to properly identify the old and new datafiles. in
particular, specify the old datafile name exactly as it appears in the
dba_data_files view of the data dictionary.
(4) back up the database. after making any structural changes to a database,
always perform an immediate and complete backup.
(5) bring the datafile online (this was added by me, i couldn't find it in the
documents). to use this clause for datafiles and tempfiles, the database must be
mounted. the database can also be open, but the datafile or tempfile being renamed
must be offline.
so first take the tablespace offline (step 5) => answers a and b are out.
the alter tablespace ... rename datafile statement renames the file only within
oracle; it does not actually change the name of the file on disk. you must perform
that operation through your operating system. => use step 4 to rename the file at
the specified location.
then execute the rename statement.
you don't need to shut down and restart the database.
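the steps above written out as sql (the file paths are assumptions; the rename itself happens at the operating system level between the two alter statements):

```sql
-- 1) take the non-system tablespace offline
alter tablespace hr_tbs offline normal;

-- 2) rename the file with the operating system, e.g. from sql*plus:
--    host mv /u01/oradata/hr01.dbf /u02/oradata/hr01.dbf

-- 3) tell oracle about the new name; the file must already exist, and the
--    old name must match dba_data_files exactly
alter tablespace hr_tbs
  rename datafile '/u01/oradata/hr01.dbf' to '/u02/oradata/hr01.dbf';

-- 4) bring the tablespace back online, then back up the database
alter tablespace hr_tbs online;
```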
95. for a tablespace created with automatic segment-space management, where is
free space managed?
a. in the extent
b. in the control file
c. in the data dictionary
d. in the undo tablespace
answer: a
explanation:
when you create a table in a locally managed tablespace for which automatic
segment-space management is enabled, the need to specify the pctfree (or
freelists) parameter is eliminated. automatic segment-space management is
specified at the tablespace level. the oracle database server automatically and
efficiently manages free and used space within objects created in such
tablespaces.
in my opinion, the free space is managed in the tablespace itself. a tablespace
consists of extents, so the extents are where the space actually lives. therefore
i recommend answer a. as for answer d, the undo tablespace is used for undo
purposes only, not for free-space management.
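such a tablespace is created like this (the name, path, and sizes are assumptions):

```sql
-- locally managed tablespace with automatic segment-space management;
-- free space inside segments is tracked with bitmaps in the extents,
-- so pctfree/freelists tuning is no longer needed
create tablespace app_data
  datafile '/u01/oradata/sample/app_data01.dbf' size 100m
  extent management local uniform size 1m
  segment space management auto;
```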
96. which is true when considering the number of indexes to create on a table?
answer: c
no comment
97. more stringent user access requirements have been issued. you need to do these
tasks for the user pward:
answer: c
explanation:
creating a user who is authenticated externally:
create user scott identified externally; or use alter instead of create. the
important keyword is the identified externally.
check the picture to see how the default and temporary table spaces are set. also
the quota keyword is shown on the picture.
since the alter user also has a profile keyword, then profile can also be used.
therefore answer c is correct.
98. you create a new table named departments by issuing this statement:
you realize that you failed to specify a tablespace for the table. you issue these
queries:
a. temp
b. system
c. sample
d. user_data
answer: c
incorrect answers:
a: temp tablespace is set as temporary tablespace for the user, so it will not be
used to store the departments table. the default tablespace sample will be used
for this purpose.
b: the user has sample as the default tablespace, so sample, not the system
tablespace, will be used to store the departments table.
d: user_data is not defined as the default tablespace for the user, so it will not
be used to store the departments table.
99. you should back up the control file when which two commands are executed?
(choose two.)
a. create user
b. create table
c. create index
d. create tablespace
e. alter tablespace <tablespace name> add datafile
answer: d, e
explanation:
back up control files
it is very important that you back up your control files. this is true initially,
and at any time after you change the physical structure of your database. such
structural changes include:
(a) adding, dropping, or renaming datafiles.
(b) adding or dropping a tablespace, or altering the read-write state of the
tablespace.
(c) adding or dropping redo log files or groups.
100. you have two undo tablespaces defined for your database. the instance is
currently using the undo tablespace named undotbs_1. you issue this command to
switch to undotbs_2 while there are still transactions using undotbs_1:
answer: a, d
explanation:
see http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96521/undo.htm#9117:
switching undo tablespaces
you can switch from using one undo tablespace to another. because the
undo_tablespace initialization parameter is a dynamic parameter, the alter system
set statement can be used to assign a new undo tablespace.
the database is online while the switch operation is performed, and user
transactions can be executed while this command is being executed. when the switch
operation completes successfully, all transactions started after the switch
operation began are assigned to transaction tables in the new undo tablespace.
the switch operation does not wait for transactions in the old undo tablespace to
commit. if there are any pending transactions in the old undo tablespace, the old
undo tablespace enters into a pending offline mode (status). in this mode,
existing transactions can continue to execute, but undo records for new user
transactions cannot be stored in this undo tablespace.
an undo tablespace can exist in this pending offline mode, even after the switch
operation completes successfully. a pending offline undo tablespace cannot be used
by another instance, nor can it be dropped. eventually, after all active transactions
have committed, the undo tablespace automatically goes from the pending offline
mode to the offline mode. from then on, the undo tablespace is available for other
instances (in an oracle real application cluster environment).
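the switch described above is a single dynamic alter system statement:

```sql
-- switch the instance to the new undo tablespace; transactions already
-- running keep using undotbs_1, which enters pending offline status
-- until they commit
alter system set undo_tablespace = undotbs_2;
```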
101. which two statements grant an object privilege to the user smith? (choose
two.)
answer: e, g
ad d: alter permits the grantee of this object privilege to alter the definition
of a table or sequence only. the alter privilege on all other database objects is
considered a system privilege.
102. which memory structure contains the information used by the server process to
validate the user privileges?
a. buffer cache
b. library cache
c. data dictionary cache
d. redo log buffer cache
answer: c
explanation:
ad a: false. the database buffer cache is the portion of the sga that holds copies
of data blocks read from datafiles. all user processes concurrently connected to
the instance share access to the database buffer cache. see (a58227.pdf) pg 155.
(6-3).
ad b: false. the library cache includes the shared sql areas, private sql areas,
pl/sql procedures and packages, and control structures such as locks and library
cache handles.
ad c: true. one of the most important parts of an oracle database is its data
dictionary, which is a read-only set of tables that provides information about its
associated database. a data dictionary contains:
(a) the definitions of all schema objects in the database (tables, views, indexes,
clusters, synonyms, sequences, procedures, functions, packages, triggers, and so
on).
(b) how much space has been allocated for, and is currently used by, the schema
objects.
(c) default values for columns.
(d) integrity constraint information.
(e) the names of oracle users.
(f) privileges and roles each user has been granted.
(g) auditing information, such as who has accessed or updated various schema
objects.
(h) in trusted oracle, the labels of all schema objects and users (see your
trusted oracle documentation).
(i) other general database information.
see (a58227.pdf) pg. 134. (4-2).
ad d: false. the information in a redo log file is used only to recover the
database from a system or media failure that prevents database data from being
written to a database's datafiles. see (a58227.pdf) pg. 46. (1-12)
103. which three tablespaces can be created in the create database statement?
(choose three.)
a. temp
b. users
c. system
d. app_ndx
e. undotbs
f. app_data
answer: a, c, e
104. the following statements describe database startup commands:
1) mount mounts the database for certain dba activities but does not provide user
access to the database.
2) the nomount command creates only the data buffer but does not provide access to
the database.
3) the open command enables users to access the database.
4) the startup command starts an instance.
which option correctly describes whether some or all of the statements are true or
false?
answer: b
explanation:
(1) is true:
mounted database: a database associated with an oracle instance. the database can
be opened or closed. a database must be both mounted and opened to be accessed by
users. a database that has been mounted but not opened can be accessed by dbas for
some maintenance purposes. see oracle8(tm) enterprise edition getting started
release 8.0.5 for windows nt june 19, 1998 part no. a64416-01 pg. 446.
(2) is false:
after selecting the startup nomount, the instance starts. at this point, there is
no database. only an sga (system global area is a shared memory region that
contains data and control information for one oracle instance) and background
processes are started in preparation for the creation of a new database. see
oracle8 administrator's guide release 8.0 december, 1997 part no. a58397-01 pg.
60. (a58397.pdf).
(3) is true:
opening a mounted database makes it available for normal database operations. any
valid user can connect to an open database and access its information. when you
open the database, oracle opens the online datafiles and online redo log files. if
a tablespace was offline when the database was previously shut down, the
tablespace and its corresponding datafiles will still be offline when you reopen
the database. if any of the datafiles or redo log files are not present when you
attempt to open the database, oracle returns an error. see oracle8 concepts
release 8.0 december, 1997 part no. a58227-01 pg. 149. (a58227.pdf).
(4) is true:
startup: purpose start an oracle instance with several options, including mounting
and opening a database. prerequisites you must be connected to a database as
internal, sysoper, or sysdba. you cannot be connected via a multi-threaded server.
see oracle (r) enterprise manager administrator's guide release 1.6.0 june, 1998
part no. a63731-01 (oemug.pdf) pg. 503. (b-31).
105. user jenny has unlimited quota on the user_tbs tablespace. which value will
the query return?
a. 0
b. 1
c. -1
d. null
e. 'unlimited'
answer: c
explanation:
ad a: false. value -1, not 0, shows that user jenny has unlimited quota on the
user_tbs tablespace.
ad b: false. value -1, not 1, shows that user jenny has unlimited quota on the
user_tbs tablespace.
ad c: true. a value of -1 in max_bytes or max_blocks means that the user has an
unlimited space quota for the tablespace.
ad d: false. value null can be used to set the quota on the tablespace.
ad e: false. quota value must be numeric. it cannot be defined as string.
oca oracle 9i associate dba certification exam guide, jason couchman, p. 815-817,
chapter 15: managing database users
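the query in question presumably reads dba_ts_quotas (or user_ts_quotas); a sketch:

```sql
-- a value of -1 in max_bytes (or max_blocks) means the user has an
-- unlimited space quota on that tablespace
select username, tablespace_name, max_bytes
from   dba_ts_quotas
where  username = 'JENNY'
and    tablespace_name = 'USER_TBS';
```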
106. which two statements are true about rebuilding an index? (choose two.)
answer: b, d
explanation:
(a) false. the resulting index will not contain deleted entries. it's the main
reason to rebuild the index.
(b) true. you can create an index using an existing index as the data source.
creating an index in this manner allows you to change storage characteristics or
move to a new tablespace. re-creating an index based on an existing data source
also removes intra-block fragmentation. in fact, compared to dropping the index
and using the create index command, re-creating an existing index offers better
performance. (58246.pdf) pg. 178. (10-10).
(c) false. a further advantage of this approach is that the old index is still
available for queries (58246.pdf) pg. 178. (10-10).
(d) true.
answer: e
testking said b.
explanation:
see ocp oracle 9i database: fundamentals i, p. 19.:
what exactly does processing a commit statement consist of?
(1) release table/row locks acquired by transaction.
(2) release undo segment locks acquired by transaction.
(3) generate redo for the committed transaction.
108. a new user, psmith, has just joined the organization. you need to create
psmith as a valid user in the database. you have the following requirements:
a.
create user psmith
identified externally
default tablespace data_ts
quota 100m on data_ts
quota 500k on temp_ts
temporary tablespace temp_ts;
revoke drop_table, create_user from psmith;
b.
create user psmith
identified externally
default tablespace data_ts
quota 500k on temp_ts
quota 100m on data_ts
temporary tablespace temp_ts;
grant connect, resource to psmith;
c.
create user psmith
identified externally
default tablespace data_ts
quota 100m on data_ts
quota 500k on temp_ts
temporary tablespace temp_ts;
grant connect to psmith;
d.
create user psmith
identified globally as ''
default tablespace data_ts
quota 500k on temp_ts
quota 100m on data_ts
temporary tablespace temp_ts;
grant connect, resource to psmith;
revoke drop_table, create_user from psmith;
answer: b
explanation:
(d) is false, because the user must be identified by the operating system, while
identified globally as 'external_name' indicates that a user must be authenticated
by the oracle security service.
(a) grants no privileges at all, and (c) grants connect but not resource.
create user:
purpose to create a database user, or an account through which you can log in to
the database and establish the means by which oracle permits access by the user.
you can assign the following optional properties to the user:
(a) default tablespace.
(b) temporary tablespace.
(c) quotas for allocating space in tablespaces.
(d) profile containing resource limits.
109. you are logged on to a client. you do not have a secure connection from your
client to the host where your oracle database is running. which authentication
mechanism allows you to connect to the database using the sysdba privilege?
answer: b
explanation:
local database administration:
do you want to use os authentication?
yes: use os authentication.
no: use a password file.
see: oracle8 administrator's guide release 8.0 december, 1997 part no. a58397-01
pg. 37. (a58397.pdf)
a. control file
b. password file
c. parameter files
d. archived log files
answer: a
explanation:
control file is an administrative file required to start and run the database. the
control file records the physical structure of the database. for example, a
control file contains the database name, and the names and locations of the
database's data files and redo log files. see: oracle8(tm) enterprise edition
getting started release 8.0.5 for windows nt june 19, 1998 part no. a64416-01
(a55928.pdf) pg. 109. (5-9).
111. you issue these queries to obtain information about the regions table:

select segment_name, tablespace_name
from user_segments
where segment_name = 'regions';

segment_name     tablespace_name
regions          sample

select constraint_name, constraint_type
from user_constraints
where table_name = 'regions';

constraint_name   c
region_id_nn      c
reg_id            p

select index_name
from user_indexes
where table_name = 'regions';

index_name
reg_id_pk

you then issue this command to move the regions table:
what else must you do to complete the move of the regions table?
answer: a
explanation:
each table's data is stored in its own data segment, while each index's data is
stored in its own index segment. so after move indexes must be rebuilt.
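because moving the table changes every row's rowid, the existing index entries become invalid and the index must be rebuilt (the table and index names come from the question; the target tablespace is an assumption):

```sql
-- move the table to another tablespace; all rowids change
alter table regions move tablespace sample_new;

-- the move leaves reg_id_pk in an unusable state, so rebuild it
alter index reg_id_pk rebuild;
```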
a. check
b. unique
c. not null
d. primary key
e. foreign key
answer: c
explanation:
see ocp oracle 9i database: fundamentals i, p. 313.:
constraint_type: displays p for primary key, r for foreign key (referential
integrity constraint), c for check constraints (including checks to see if data is
not null), and u for unique constraints.
because of this, options a and c remain. because of the name of the constraint,
emp_job_nn, i would go for c, because nn usually stands for not null.
113. temporary tablespaces should be locally managed and the uniform size should
be a multiple of the ________.
a. db_block_size
b. db_cache_size
c. sort_area_size
d. operating system block size
answer: c
explanation:
http://www.interealm.com/technotes/roby/temp_ts.html
today, depending on your rdbms version, oracle offers three varieties of temporary
tablespaces to choose from. these spaces are used for disk based sorts, large
index rebuilds, global temporary tables, etc. to ensure that your disk-based
sorting is optimal, it is critical to understand the different types, caveats, and
benefits of these temporary tablespace options:
(a) permanent tablespaces with temporary segments.
(b) tablespaces of type "temporary".
(c) temporary tablespaces.
permanent tablespaces with temporary segments
this option has been available since oracle 7.3 and is the least efficient for
disk-based sorting. in this type of configuration, temporary (sort) extents are
allocated within a permanent tablespace. compared to other temp tablespace
choices, the performance and operation of this disk-sort option suffers in the
areas of:
extent management: the st-enqueue (and subsequent recursive dictionary sql) is
used for the allocation and de-allocation of extents allotted to each sort
segment.
sort segment reuse: each process performing a disk sort creates then drops a
private sort segment. this adds additional overhead to the sorting process.
extent reuse: because of the "private sort segment" policy used in this tablespace
option, there is no ability for disk-based sorts to re-use extents that are no
longer active.
temporary tablespaces<br />
this new class of temporary tablespace was introduced in oracle 8i and provides
the most robust and efficient means of disk-based sorting in oracle today.
temporary tablespaces are created using the sql syntax create temporary
tablespace xyz tempfile .... there are a number of performance benefits of this
tablespace option over permanent tablespaces and tablespaces of type temporary in
the areas of:
extent management: extents in this tablespace are allocated via a locally-managed
bitmap. therefore use of the st-enqueue and recursive sql for this activity is
eliminated.
segment reuse: sorts assigned to a tablespace of this type use a single sort
segment (multiple segments in an ops environment) that is created during the first
disk-based sort and dropped only at instance startup.
extent reuse: sorts using this type of tablespace have the ability to reuse
extents that are no longer active. this added level of reuse reduces the amount of
resources necessary to manage individual segments and allocate / deallocate
extents.
note: if the extent management clause is not specified for temporary tablespaces,
the database will automatically set the tablespace with a uniform extent size of 1
mb.
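a hedged sketch of the create temporary tablespace syntax discussed above; the tablespace name, tempfile path, and sizes are illustrative assumptions:

```sql
-- locally managed temporary tablespace; extents are tracked in a
-- bitmap, so the st-enqueue and recursive dictionary sql are avoided
create temporary tablespace temp_tbs
  tempfile '/u01/oradata/db01/temp_tbs01.dbf' size 500m
  extent management local
  uniform size 1m;  -- 1 mb is also the default when the clause is omitted
```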
which do i choose?
whether you are on an existing database application migrating to a newer oracle
version or a new application in the initial development phase, for optimal
performance you should use the most recent temporary tablespace option available
to your database version:
- for oracle versions 7.3.4 and below, use permanent tablespaces with temporary
segments
- for oracle versions 8.0.3 - 8.0.6 use tablespaces of type "temporary"
- for oracle versions 8.1.5 - 9.x use temporary tablespaces
which two must be true before the log writer (lgwr) can reuse a filled online redo
log file? (choose two).
answer: a, e
explanation:
archivelog: the filled online redo log files are archived before they are reused
in the cycle.
noarchivelog: the filled online redo log files are not archived.
(a58227.pdf) pg. 72. (1-38).
when you run a database in archivelog mode, the archiving of the online redo log
is enabled. information in a database control file indicates that a group of
filled online redo log files cannot be used by lgwr until the group is archived (a
true). a filled group is immediately available to the process performing the
archiving after a log switch occurs (when a group becomes inactive). the process
performing the archiving does not have to wait for the checkpoint of a log switch
to complete before it can access the inactive group for archiving (c false).
see: oracle8 administrator's guide release 8.0 december, 1997 part no. a58397-01
(a58397.pdf) pg. 454. (23-2)
115. which two statements are true about the control file? (choose two.)
answer: a, d
explanation:
ad a: true. control_files indicates one or more names of control files separated
by commas. the instance startup procedure recognizes and opens all the listed
files. the instance maintains all listed control files during database operation.
see: oracle8 administrator's guide release 8.0 december, 1997 part no. a58397-01
(a58397.pdf) pg. 126. (6-2).
ad b: false. after mounting the database, the instance finds the database control
files and opens them. (control files are specified in the control_files
initialization parameter in the parameter file used to start the instance.) oracle
then reads the control files to get the names of the database's datafiles and redo
log files. (a58227.pdf) pg. 148. (5-6).
ad c: false. the control file of a database is a small binary file necessary for
the database to start and operate successfully. a control file is updated
continuously by oracle during database use, so it must be available for writing
whenever the database is open. if for some reason the control file is not
accessible, the database will not function properly. (a58227.pdf) pg. 693. (28-19).
ad d: true. see previous.
answer: a, b
explanation:
resource limitation can be enabled or disabled by the resource_limit
initialization parameter in the database's initialization parameter file. valid
values for the parameter are true (enables enforcement) and false. by default,
this parameter's value is set to false. once the initialization parameter file has
been edited, the database instance must be restarted for the change to take
effect. every time an instance is started, the new parameter value enables or
disables the enforcement of resource limitation.
if the resource limitation feature must be altered temporarily, you can enable or
disable the enforcement of resource limitation using the sql statement alter
system. after an instance is started, an alter system statement overrides the
value set by the resource_limit initialization parameter.
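the two mechanisms described above might look like this; the parameter-file line and the alter system override are standard syntax, though the exact file location varies by installation:

```sql
-- in the initialization parameter file (takes effect at next startup):
--   resource_limit = true

-- temporary override for the running instance; this takes precedence
-- over the value set in the parameter file until the next restart
alter system set resource_limit = true;
```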
117. the server parameter file (spfile) provides which three advantages when
managing initialization parameters? (choose three.)
answer: a, c, d
testking said b, c, d.
explanation:
sources:
http://download-
west.oracle.com/docs/cd/b10501_01/rac.920/a96596/glossary.htm#436831.
see ocp oracle 9i database: fundamentals i, p. 70/71.
ad c: true. use the set clause of the alter system statement to set or change
initialization parameter values. additionally, the scope clause specifies the
scope of a change as described in the following table:
(1) scope = spfile: the change is applied in the server parameter file only. the
effect is as follows:
(a) for dynamic parameters, the change is effective at the next startup and is
persistent.
(b) for static parameters, the behavior is the same as for dynamic parameters.
this is the only scope specification allowed for static parameters.
(2) scope = memory: the change is applied in memory only. the effect is as
follows:
(a) for dynamic parameters, the effect is immediate, but it is not persistent
because the server parameter file is not updated.
(b) for static parameters, this specification is not allowed.
(3) scope = both: the change is applied in both the server parameter file and
memory. the effect is as follows:
(a) for dynamic parameters, the effect is immediate and persistent.
(b) for static parameters, this specification is not allowed.
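the three scope settings above can be illustrated with a dynamic parameter such as open_cursors (the value 400 is an arbitrary example):

```sql
-- scope = spfile: recorded in the server parameter file only,
-- effective at the next startup and persistent
alter system set open_cursors = 400 scope = spfile;

-- scope = memory: immediate, but lost at the next restart
alter system set open_cursors = 400 scope = memory;

-- scope = both: immediate and persistent; note that for static
-- parameters only scope = spfile is allowed
alter system set open_cursors = 400 scope = both;
```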
118. you examine the alert log file and notice that errors are being generated
from a sql*plus session. which files are best for providing you with more
information about the nature of the problem?
a. control file
b. user trace files
c. background trace files
d. initialization parameter files
answer: b
explanation:
ad a: false the control file of a database is a small binary file necessary for
the database to start and operate successfully. a control file is updated
continuously by oracle during database use, so it must be available for writing
whenever the database is open. if for some reason the control file is not
accessible, the database will not function properly. (a58227.pdf) pg. 693. (28-19).
a trace file is created each time an oracle instance starts or an unexpected event
occurs in a user process or background process. the name of the trace file
includes the instance name, the process name, and the oracle process number. the
file extension or file type is usually trc, and, if different, is noted in your
operating system-specific oracle documentation. the contents of the trace file may
include dumps of the system global area, process global area, supervisor stack,
and registers. two initialization parameters specify where the trace files are
stored:
ad c: false. background_dump_dest specifies the location for trace files created
by the oracle background processes pmon, dbwr, lgwr, and smon.
ad b: true. user_dump_dest specifies the location for trace files created by user
processes such as sql*dba, sql*plus, or pro*c.
see: oracle8(tm) error messages release 8.0.4 december 1997 part no. a58312-01
(a58312.pdf) pg. 27. (1-5).
ad d: false parameter file contains initialization parameters. these parameters
specify the name of the database, the amount of memory to allocate, the names of
control files, and various limits and other system parameters. (a58227.pdf) pg.
61. (1-27)
119. you can use the database configuration assistant to create a template using
an existing database structure.
a. data files
b. tablespaces
c. user defined schemas
d. user defined schema data
e. initialization parameters
answer: a, b, e
explanation:
http://download-
west.oracle.com/docs/cd/b10501_01/server.920/a96521/create.htm#1026131.
creating templates using dbca
from an existing template: using an existing template, you can create a new
template based on the pre-defined template settings. you can add or change any
template settings such as initialization parameters, storage parameters, or use
custom scripts.
from an existing database (structure only): you can create a new template that
contains structural information about an existing database, including database
options, tablespaces, datafiles, and initialization parameters specified in the
source database. user defined schema and their data will not be part of the
created template. the source database can be either local or remote.
from an existing database (structure as well as data--a seed database): you can
create a new template that has both the structural information and physical
datafiles of an existing database. databases created using such a template are
identical to the source database. user defined schema and their data will be part
of the created template. the source database must be local.
120. the users pward and psmith have left the company. you no longer want them to
have access to the database. you need to make sure that the objects they created
in the database remain. what do you need to do?
answer: a
explanation:
ad a: true create session right: connect to the database.
ad b: if the user's schema contains any schema objects, use the cascade option to
drop the user and all associated objects and foreign keys that depend on the
tables of the user successfully. if you do not specify cascade and the user's
schema contains objects, an error message is returned and the user is not dropped.
before dropping a user whose schema contains objects, thoroughly investigate which
objects the user's schema contains and the implications of dropping them. pay
attention to any unknown cascading effects. for example, if you intend to drop a
user who owns a table, check whether any views or procedures depend on that
particular table. see: oracle8 administrator's guide release 8.0 december, 1997
part no. a58397-01 (a58397.pdf) pg. 385. (20-17).
ad c: false. once a user has been dropped, you can no longer revoke privileges from that user.
ad d: when a user is dropped, the user and associated schema is removed from the
data dictionary and all schema objects contained in the user's schema, if any, are
immediately dropped. see: oracle8 administrator's guide release 8.0 december, 1997
part no. a58397-01 (a58397.pdf) pg. 385. (20-17).
121. you need to create an index on the customer_id column of the customers table.
the index has these requirements:
which command creates the index and meets all the requirements?
a.
create unique index cust_pk on customers(customer_id)<br />
tablespace index0l<br />
pctfree 20<br />
storage (initial 1m next 1m pctincrease 0);<br />
b.
create unique index cust_pk on customers(customer_id)<br />
tablespace index0l<br />
pctfree 20<br />
storage (initial 1m next 1m pctincrease 0)<br />
nologging;<br />
c.
create unique index cust_pk on customers(customer_id)<br />
tablespace index0l<br />
pctused 80<br />
storage (initial 1m next 1m pctincrease 0)<br />
nologging;<br />
d.
create unique index cust_pk on customers(customer_id)<br />
tablespace index0l<br />
pctused 80<br />
storage (initial 1m next 1m pctincrease 0);<br />
answer: b
explanation:
pctfree is the percentage of space to leave free for updates and insertions within
each of the index's data blocks.
tablespace is the name of the tablespace to hold the index or index partition. if
you omit this option, oracle creates the index in the default tablespace of the
owner of the schema containing the index.
logging / nologging specifies that the creation of the index will be logged
(logging) or not logged (nologging) in the redo log file.
storage pctincrease specifies the percent by which the third and subsequent
extents grow over the preceding extent. the default value is 50, meaning that each
subsequent extent is 50% larger than the preceding extent.
next specifies the size in bytes of the next extent to be allocated to the object.
you can use k or m to specify the size in kilobytes or megabytes.
initial specifies the size in bytes of the object's first extent. oracle allocates
space for this extent when you create the schema object. you can use k or m to
specify this size in kilobytes or megabytes.
asc / desc are allowed for db2 syntax compatibility, although indexes are always
created in ascending order.
(a58225.pdf) pg. 421. (4-237).
122. john has issued the following sql statement to create a new user account:
answer: a
explanation:
it is not possible to assign a role to a user within a create user statement: you
can use grant role_name to user_name command to do that.
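a short sketch of the two-step sequence described above, using a hypothetical user jdoe and the predefined connect role:

```sql
-- roles cannot be granted inside create user; create the account first
create user jdoe identified by secret
  default tablespace users
  temporary tablespace temp;

-- then grant the role in a separate statement
grant connect to jdoe;
```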
a. a transaction completes.
b. the instance is started.
c. the instance is shut down.
d. the current online redo log group is filled
e. the alter system switch logfile command is issued.
answer: d, e
explanation:
a log switch, by default, takes place automatically when the current online redo
log file group fills. see: oracle8 administrator's guide release 8.0 december,
1997 part no. a58397-01 (a58397.pdf) pg. 118. (5-10).
to force a log switch, you must have the alter system privilege. to force a log
switch, use either the switch logfile menu item of enterprise manager or the sql
command alter system with the switch logfile option. the following statement
forces a log switch: alter system switch logfile; see: oracle8 administrator's
guide release 8.0 december, 1997 part no. a58397-01 (a58397.pdf) pg. 121. (5-13)
which two statements are true about the temp_tbs tablespace? (choose two.)
answer: a, d
testking said b, d.
explanation:<br />
ad a: true. use the create temporary tablespace statement to create a locally
managed temporary tablespace, which is an allocation of space in the database that
can contain schema objects for the duration of a session.<br /><br /> if you
subsequently assign this temporary tablespace to a particular user, then oracle
will also use this tablespace for sorting operations in transactions initiated by
that user. (a96540.pdf) pg. 1258. (15-92)<br /><br />
ad b: false. because of the previous. starting with oracle 9i, oracle creates non-
system tablespaces to be locally managed by default. see ocp oracle 9i database:
fundamentals i, p. 153.<br /><br />
ad c: renaming is not possible.<br /><br />
ad d: ?<br /><br />
ad e: ?<br />
answer: d
explanation:
constraint states
table constraints can be enabled and disabled using the create table or alter
table statement. in addition the validate or novalidate keywords can be used to
alter the action of the state:
(1) enable validate is the same as enable. the constraint is checked and is
guaranteed to hold for all rows.
(2) enable novalidate means the constraint is checked for new or modified rows,
but existing data may violate the constraint.
(3) disable novalidate is the same as disable. the constraint is not checked so
data may violate the constraint.
(4) disable validate means the constraint is not checked but disallows any
modification of the constrained columns.
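the four states above, sketched against a hypothetical orders table with a check constraint named ord_amt_ck:

```sql
-- checked and guaranteed for all rows (same as enable)
alter table orders enable validate constraint ord_amt_ck;

-- checked for new or modified rows only; existing rows may violate it
alter table orders enable novalidate constraint ord_amt_ck;

-- not checked at all (same as disable)
alter table orders disable novalidate constraint ord_amt_ck;

-- not checked, but any modification of the constrained columns is disallowed
alter table orders disable validate constraint ord_amt_ck;
```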
answer: d
explanation:
ad c: false. the shared pool portion of the sga contains three major areas:
library cache, dictionary cache, and control structures.
ad d: true. in general, any item (shared sql area or dictionary row) in the shared
pool remains until it is flushed according to a modified lru algorithm. the memory
for items that are not being used regularly is freed if space is required for new
items that must be allocated some space in the shared pool.
(a58227.pdf) pg. 158. (6-6)
127. as a dba, one of your tasks is to periodically monitor the alert log file and
the background trace files. in doing so, you notice repeated messages indicating
that log writer (lgwr) frequently has to wait for a redo log group because a
checkpoint has not completed or a redo log group has not been archived.
what should you do to eliminate the wait lgwr frequently encounters?
a. increase the number of redo log groups to guarantee that the groups are always
available to lgwr.
b. increase the size of the log buffer to guarantee that lgwr always has
information to write.
c. decrease the size of the redo buffer cache to guarantee that lgwr always has
information to write.
d. decrease the number of redo log groups to guarantee that checkpoints are
completed prior to lgwr writing.
answer: a
explanation:
you need to increase the number of redo log groups to guarantee that the groups
are always available to lgwr. log writer (lgwr) frequently has to wait for a redo
log group because a checkpoint has not completed or a redo log group has not been
archived if there are not enough redo log groups or they are too small.
ad b: increasing the size of the log buffer will not affect the checkpoint
frequency. you can increase the redo log file size to eliminate the wait lgwr
frequently encounters.
ad c: decreasing the size of the redo buffer cache will not affect the checkpoint
frequency.
ad d: decreasing the number of redo log groups you will just make lgwr wait for a
redo log group more frequently because a checkpoint has not completed or a redo
log group has not been archived.
a. dba
b. sysdba
c. sysoper
d. resource
answer: b
explanation:
you must have the osdba role enabled.
the roles connect, resource, dba, exp_full_database, and imp_full_database are
defined automatically for oracle databases. these roles are provided for backward
compatibility to earlier versions of oracle and can be modified in the same manner
as any other role in an oracle database. see (a58227.pdf) pg. 622. (26-16).
ad c: false. sysoper permits you to perform startup, shutdown, alter database
open/mount, alter database backup, archive log, and recover, and includes the
restricted session privilege.
ad b: true. sysdba contains all system privileges with admin option, and the
sysoper system privilege; permits create database and time-based recovery. see
(a58227.pdf) pg. 637. (25-7).
a. undo segments
b. redo log files
c. data dictionary tables
d. archived redo log files
answer: a
explanation:
oracle7 server concepts 10-6
statement level read consistency
oracle always enforces statement-level read consistency. this guarantees that the
data returned by a single query is consistent with respect to the time that the
query began. therefore, a query never sees dirty data nor any of the changes made
by transactions that commit during query execution. as query execution proceeds,
only data committed before the query began is visible to the query. the query does
not see changes committed after statement execution begins. a consistent result
set is provided for every query, guaranteeing data consistency, with no action on
the user's part.
the sql statements select, insert with a query, update, and delete all query data,
either explicitly or implicitly, and all return consistent data. each of these
statements uses a query to determine which data it will affect (select, insert,
update, or delete, respectively). a select statement is an explicit query and may
have nested queries or a join operation. an insert statement can use nested
queries. update and delete statements can use where clauses or subqueries to
affect only some rows in a table rather than all rows.
while queries used in insert, update, and delete statements are guaranteed a
consistent set of results, they do not see the changes made by the dml statement
itself. in other words, the data the query in these operations sees reflects the
state of the data before the operation began to make changes.
for this purpose only the undo segments are necessary from the possible answers.
130. you just issued the startup command. which file is checked to determine the
state of the database?
answer: a
explanation:
oracle9i database administrator's guide release 2 (9.2) march 2002 part no.
a96521-01 (a96521.pdf) 4-16
quiescing a database
there are times when there is a need to put a database into a state where only dba
transactions, queries, fetches, or pl/sql statements are allowed. this is called a
quiesced state, in the sense that there are no ongoing non-dba transactions,
queries, fetches, or pl/sql statements in the system. this quiesced state allows
you or other administrators to perform actions that cannot safely be done
otherwise.
<table border="1">
<tr><td><b>active_state</b></td><td><b>description</b></td></tr>
<tr><td>normal</td><td>normal unquiesced state</td></tr>
<tr><td>quiescing</td><td>being quiesced, but there are still active non-dba sessions running</td></tr>
<tr><td>quiesced</td><td>quiesced, no active non-dba sessions are active or allowed</td></tr>
</table>
since the state can be queried, i believe it's really in a control file.
131. which two are true about the data dictionary views with prefix user_? (choose
two.)
answer: a, f
explanation:
ad a: true. views with the prefix user_ usually exclude the column owner; this
column is implied in the user_ views to be the user issuing the query. see
(a58227.pdf) pg. 137. (4-5). these views have columns identical to the other
views, except that the column owner is implied to be the current user. see
(a58227.pdf) pg. 138. (4-6).
ad b: false. the data dictionary views accessible to all users of an oracle
server. most views can be accessed by any user with the create session privilege.
the data dictionary views that begin with dba_ are restricted. these views can be
accessed only by users with the select any table privilege. this privilege is
assigned to the dba role when the system is initially installed. see (a58242.pdf)
pg. 171 (2-1).
ad c: false. the data dictionary is always available when the database is open. it
resides in the system tablespace, which is always online. see (a58227.pdf) pg.
137. (4-5).
ad d: false. these views do not return information about all objects to which the
user has access. the data
dictionary views with prefix all_ provide this access.
ad e: false. any oracle user can use the data dictionary as a read-only reference
for information about the database. see (a58227.pdf) pg. 135. (4-3).
ad f: true.
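for example, assuming a table exists in the current schema, the difference is visible in the column lists: user_tables has no owner column, while all_tables names the owner explicitly:

```sql
-- objects owned by the current user; no owner column
select table_name from user_tables;

-- objects the user has access to, including other schemas; owner is explicit
select owner, table_name from all_tables;
```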
a. pmon
b. smon
c. reco
d. arcn
e. ckpt
answer: b
explanation:
smon (oracle system monitor)
smon is an oracle background process created when you start a database instance.
the smon process performs instance recovery, cleans up after dirty shutdowns and
coalesces adjacent free extents into larger free extents.
pmon (oracle process monitor)
pmon is an oracle background process created when you start a database instance.
the pmon process will free up resources if a user process fails (e.g. release
database locks).
reco (oracle recoverer process)
reco is an oracle background process created when you start an instance with
distributed_transactions= in the init.ora file. the reco process will try to
resolve in-doubt transactions across oracle distributed databases.
arch (oracle archiver process)
arch is an oracle background process created when you start an instance in
archive log mode. the arch process will archive on-line redo log files to some
backup media.
ckpt (oracle checkpoint process)
ckpt is the oracle background process that timestamps all datafiles and control
files to indicate that a checkpoint has occurred.
(definitions adapted from the orafaq glossary, http://www.orafaq.com/glossary/)
133. your database contains a locally managed uniform sized tablespace with
automatic segment-space management, which contains only tables. currently, the
uniform size for the tablespace is 512 k.
because the tables have become so large, your configuration must change to improve
performance. now the tables must reside in a tablespace that is locally managed,
with uniform size of 5 mb and automatic segment-space management.
answer: d
explanation:
ad a: false. the new requirements can be met by creating a new tablespace with
correct settings and by moving the tables into the new tablespace.
ad b: false. recreating the control files is the wrong approach. the control files
will be updated when you create a new tablespace with the new uniform size, but
changing the control files themselves will not fix the issue.
ad c: false. you cannot dynamically change the uniform size.
134. you created a tablespace sh_tbs. the tablespace consists of two data files:
sh_tbs_datal .dbf and sh_tbs_data2.dbf. you created a nonpartitioned table
sales_det in the sh_tbs tablespace.
which two statements are true? (choose two.)
a. the data segment is created as soon as the table is created.
b. the data segment is created when the first row in the table is inserted.
c. you can specify the name of the data file where the data segment should be
stored.
d. the header block of the data segment contains a directory of the extents in the
segment.
answer: a, d
explanation:
ad a: true. every nonclustered table or partition and every cluster in an oracle
database has a single data segment to hold all of its data. oracle creates this
data segment when you create the nonclustered table or cluster with the create
command. if the table or index is partitioned, each partition is stored in its own
segment. see: oracle8 concepts release 8.0 december, 1997 part no. a58227-01
(a58227.pdf) pg. 107. (2-15).
ad b: false. because of the previous.
ad c: false.
ad d: true. for maintenance purposes, the header block of each segment contains a
directory of the extents in that segment. see: oracle8 concepts release 8.0
december, 1997 part no. a58227-01 (a58227.pdf) pg. 103. (2-11).
135. the dba can structure an oracle database to maintain copies of online redo
log files to avoid losing database information.
which three are true regarding the structure of online redo log files? (choose
three.)
answer: a, c, e
explanation:
http://www.siue.edu/~dbock/cmis565/ch7-redo_log.htm
each redo log group has identical redo log files. the lgwr concurrently writes
identical information to each redo log file in a group. the oracle server needs a
minimum of two online redo log groups for normal database operation. thus, if disk
1 crashes as shown in the figure above, none of the redo log files are truly lost
because there are duplicates. if the group has more members, you need more disk
drives!
if possible, you should separate the online redo log files from the archive log
files, as this reduces contention for the i/o bus path between the arcn and lgwr
background processes. you should also separate datafiles from the online redo log
files, as this reduces lgwr and dbwn contention. it also reduces the risk of losing
both datafiles and redo log files if a disk crash occurs.
redo log files in a group are called members. each group member has identical log
sequence numbers and is the same size - they cannot be different sizes. the log
sequence number is assigned by the oracle server as it writes to a log group and
the current log sequence number is stored in the control files and in the header
information of all datafiles - this enables synchronization between datafiles and
redo log files.
136. which three statements are true about the use of online redo log files?
(choose three.)
answer: a, b, f
explanation:
ad a: true. the information in a redo log file is used only to recover the
database from a system or media failure that prevents database data from being
written to a database's datafiles. see (a58227.pdf) pg. 46. (1-12)
ad c: false. every oracle database has a set of two or more redo log files. two
files cannot be organized into three groups. see (a58227.pdf) pg. 46. (1-12)
ad d: false. the requirement is to have at least two, not three, redo log groups
in oracle.
ad e: false. every database contains one or more rollback segments, which are
portions of the database that record the actions of transactions in the event that
a transaction is rolled back. you use rollback segments to provide read
consistency, rollback transactions, and recover the database. (a58227.pdf) pg.
109. (2-17)
137. which steps should you follow to increase the size of the online redo log
groups?
a. use the alter database resize logfile group command for each group to be
resized.
b. use the alter database resize logfile member command for each member within the
group being resized.
c. add new redo log groups using the alter database add logfile group command with
the new size.
drop the old redo log files using the alter database drop logfile group command.
d. use the alter database resize logfile group command for each group to be
resized.
use the alter database resize logfile member command for each member within the
group.
answer: c
explanation:
ad a: there is no alter database resize logfile group command in oracle.
ad b: there is no alter database resize logfile member command in oracle.
ad c: to increase the size of the online redo log groups, you first add new redo
log groups using alter database add logfile group with the increased member size.
after that, you can change the status of the old, smaller redo log groups by
issuing alter system switch logfile, and then drop the old redo log groups using
the alter database drop logfile group command.
ad d: there are no alter database resize logfile group and alter database resize
logfile member commands in oracle.
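the add/switch/drop sequence described in ad c, with hypothetical group numbers, file names, and sizes:

```sql
-- 1. add a new group with the larger member size
alter database add logfile group 4
  ('/u01/oradata/db01/redo04a.log') size 100m;

-- 2. switch until the old group becomes inactive (and is archived,
--    if the database runs in archivelog mode)
alter system switch logfile;

-- 3. drop the old, smaller group
alter database drop logfile group 1;
```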
138. oracle guarantees read-consistency for queries against tables. what provides
read-consistency?
a. redo logs
b. control file
c. undo segments
d. data dictionary
answer: c
explanation:
ad a: false. the information in a redo log file is used only to recover the
database from a system or media failure that prevents database data from being
written to a database's datafiles. see (a58227.pdf) pg. 46. (1-12).
ad b: false. the control file of a database is a small binary file necessary for
the database to start and operate successfully. (a58227.pdf) pg. 693. (28-19).
ad c: true. every database contains one or more rollback segments, which are
portions of the database that record the actions of transactions in the event that
a transaction is rolled back. you use rollback segments to provide read
consistency, rollback transactions, and recover the database. (a58227.pdf) pg.
109. (2-17).
ad d: false. each oracle database has a data dictionary. an oracle data dictionary
is a set of tables and views that are used as a read-only reference about the
database. for example, a data dictionary stores information about both the logical
and physical structure of the database. (a58227.pdf) pg. 81, 134 (1-47, 4-1).
139. you need to shut down your database. you want all of the users who are
connected to be able to complete any current transactions. which shutdown mode
should you specify in the shutdown command?
a. abort
b. normal
c. immediate
d. transactional
answer: d
explanation:
ad a: false. this option of the shutdown command is used for emergency database
shutdown.
ad b: false. normal database shutdown proceeds with the following conditions:
(a) no new connections are allowed after the statement is issued.
(b) before the database is shut down, oracle waits for all currently connected
users to disconnect from the database.
(c) the next startup of the database will not require any instance recovery
procedures.
ad c: false. immediate database shutdown proceeds with the following conditions:
(a) current client sql statements being processed by oracle are terminated
immediately.
(b) any uncommitted transactions are rolled back. if long uncommitted transactions
exist, this method of shutdown might not complete quickly, despite its name.
(c) oracle does not wait for users currently connected to the database to
disconnect.
(d) oracle implicitly rolls back active transactions and disconnects all connected
users.
ad d: true. after submitting this statement, no client can start a new transaction
on this particular instance. if a client attempts to start a new transaction, they
are disconnected. after all transactions have either committed or aborted, any
client still connected to the instance is disconnected. at this point, the
instance shuts down just as it would when a shutdown immediate statement is
submitted. a transactional shutdown prevents clients from losing work, and at the
same time, does not require all users to log off.
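for reference, the transactional shutdown described above is issued from sql*plus
as:

```sql
-- existing transactions may complete; new transactions are refused,
-- then the instance shuts down as with shutdown immediate
shutdown transactional
```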
140. you decided to use multiple buffer pools in the database buffer cache of your
database. you set the sizes of the buffer pools with the db_keep_cache_size and
db_recycle_cache_size parameters and restarted your instance.
what else must you do to enable the use of the buffer pools?
a. re-create the schema objects and assign them to the appropriate buffer pool.
b. list each object with the appropriate buffer pool initialization parameter.
c. shut down the database to change the buffer pool assignments for each schema
object.
d. issue the alter statement and specify the buffer pool in the buffer_pool clause
for the schema objects you want to assign to each buffer pool.
answer: d
explanation:
ad a: false. it is not necessary to re-create the schema objects to assign them to
the appropriate buffer pool; you can do that with the alter table command.
ad b: false. you don't need to list each object with a buffer pool initialization
parameter. by default, objects are stored in the default buffer pool.
ad c: false. to change the buffer pool assignment of a schema object from default
to keep or recycle, you just need to use the alter table command. you don't need
to restart the database to enforce these changes.
ad d: true. unlike db_block_buffers, which specifies the number of data block-
sized buffers that can be stored in sga, oracle9i introduces a new parameter,
db_cache_size, which can be used to specify the size of the buffer cache in the
oracle sga. there are two other parameters used to set keep and recycle parts of
the buffer pools: db_keep_cache_size and db_recycle_cache_size. to enable the use
of the buffer pools you need to issue the alter statement and specify the buffer
pool (or exact part of buffer pool, default, keep or recycle) in the buffer_pool
clause for the schema objects you want to assign to each buffer pool. syntax of
these statements: alter table table_name storage (buffer_pool default), alter
table table_name storage (buffer_pool keep) or alter table table_name storage
(buffer_pool recycle).
oca oracle 9i associate dba certification exam guide, jason couchman, p. 544-547,
chapter 10: basics of the oracle database architecture
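the alter statements whose syntax is given above can be written out as follows;
the table names are hypothetical examples:

```sql
-- pin a small, frequently read table in the keep pool
alter table hr.lookup_codes storage (buffer_pool keep);

-- send a large, rarely re-read table to the recycle pool
alter table hr.audit_log storage (buffer_pool recycle);

-- return a table to the default pool
alter table hr.employees storage (buffer_pool default);
```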
141. a user calls and informs you that a 'failure to extend tablespace' error was
received while inserting into a table. the tablespace is locally managed.
which three solutions can resolve this problem? (choose three.)
a. add a data file to the tablespace
b. change the default storage clause for the tablespace
c. alter a data file belonging to the tablespace to autoextend
d. resize a data file belonging to the tablespace to be larger
e. alter the next extent size to be smaller, to fit into the available space
answer: a, c, d
explanation:
ad a, c, d: you can add a data file to the tablespace, alter a data file belonging
to the tablespace to extend automatically, resize a data file belonging to the
tablespace to be larger.
ad b: false. changing the default storage of the tablespace will not solve the
problem.
ad e: false. if you alter the next extent size to be smaller, the insert may
succeed, but this is only a temporary fix: the error will be raised again once the
segment grows and the remaining free space is exhausted.
oca oracle 9i associate dba certification exam guide, jason couchman, p. 637-640,
chapter 12: managing tablespaces and datafiles
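the three correct solutions can be sketched as follows; the tablespace name,
file paths, and sizes are hypothetical examples:

```sql
-- a: add a data file to the tablespace
alter tablespace users add datafile '/u01/oradata/prod/users02.dbf' size 100m;

-- c: let an existing data file grow automatically
alter database datafile '/u01/oradata/prod/users01.dbf'
  autoextend on next 10m maxsize 2g;

-- d: resize an existing data file to be larger
alter database datafile '/u01/oradata/prod/users01.dbf' resize 500m;
```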
142. which table type should you use to provide fast key-based access to table
data for queries involving exact matches and range searches?
a. regular table
b. clustered table
c. partitioned table
d. index-organized table
answer: d
explanation:
ad a: regular table will require indexes to provide fast key-based access to table
data for queries involving exact matches and range searches.
ad b: false. clusters are an optional method of storing table data. clusters are
groups of one or more tables physically stored together because they share common
columns and are often used together. because related rows are physically stored
together, disk access time improves. (a58227.pdf) pg. 79. (1-45).
ad c: false. partitioning addresses the key problem of supporting very large
tables and indexes by allowing you to decompose them into smaller and more
manageable pieces called partitions. once partitions are defined, sql statements
can access and manipulate the partitions rather than entire tables or indexes.
partitions are especially useful in data warehouse applications, which commonly
store and analyze large amounts of historical data. all partitions of a table or
index have the same logical attributes, although their physical attributes can be
different.
for example, all partitions in a table share the same column and constraint
definitions; and all partitions in an index share the same index columns. however,
storage specifications and other physical attributes such as pctfree, pctused,
initrans, and maxtrans can vary for different partitions of the same table or
index. each partition is stored in a separate segment. optionally, you can store
each partition in a separate tablespace. see (a58227.pdf) pg. 244. (9-2).
ad d: true. an index-organized table differs from a regular table in that the data
for the table is held in its associated index. changes to the table data, such as
adding new rows, updating rows, or deleting rows, result only in updating the
index. the index-organized table is like a regular table with an index on one or
more of its columns, but instead of maintaining two separate storages for the
table and the b*-tree index, the database system only maintains a single b*-tree
index which contains both the encoded key value and the associated column values
for the corresponding row. benefits of index-organized tables because rows are
stored in the index, index-organized tables provide a faster key-based access to
table data for queries involving exact match and/or range search. (a58227.pdf) pg.
229. (8-29).
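a minimal index-organized table looks like this; the table and columns are
hypothetical, and the primary key is mandatory because the rows live in its
b*-tree:

```sql
-- rows are stored directly in the primary key index, not in a heap
create table countries (
  country_id   char(2) primary key,
  country_name varchar2(40)
) organization index;
```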
143. you issue the following queries to obtain information about the redo log
files:
a. each online redo log file group must have two members.
b. you cannot delete any members of online redo log file groups.
c. you cannot delete any members of the current online redo log file group
d. you must delete the online redo log file in the operating system before issuing
the alter database command.
answer: c
explanation:
oracle9i database concepts release 2 (9.2) march 2002 part no. a96524-01
(a96524.pdf) 9-41
drop logfile clause
use the drop logfile clause to drop all members of a redo log file group. specify
a redo log file group as indicated for the add logfile member clause.
(a) to drop the current log file group, you must first issue an alter system
switch logfile statement.
(b) you cannot drop a redo log file group if it needs archiving.
(c) you cannot drop a redo log file group if doing so would cause the redo thread
to contain less than two redo log file groups.
see also: alter system on page 10-22 and "dropping log file members: example" on
page 9-54
if you execute switch logfile, then the current logfile will be different, so
answer c is ok.
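the check and switch described above can be performed like this:

```sql
-- identify the current group before dropping anything
select group#, status from v$log;

-- if the group to be dropped is current, switch first
alter system switch logfile;
```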
a. the redo log buffer is not part of the shared memory area of an oracle
instance.
b. multiple instances can execute on the same computer, each accessing its own
physical database.
c. an oracle instance is a combination of memory structures, background processes,
and user processes.
d. in a shared server environment, the memory structure component of an instance
consists of a single sga and a single pga.
answer: b
explanation:
http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96524/c06start.htm#8106.
multiple instances can run concurrently on the same computer, each accessing its
own physical database. in clustered and massively parallel systems (mps), real
application clusters enables multiple instances to mount a single database.
ad a: false. the redo log buffer is a circular buffer in the sga that holds
information about changes made to the database. see (a58227.pdf) pg. 158, 144. (6-
6, 5-2).
ad c: false. oracle allocates a memory area called the system global area (sga)
and starts one or more oracle processes. this combination of the sga and the
oracle processes is called an oracle instance. see (a58227.pdf) pg. 144. (5-2).
ad d: false. a pga is a nonshared memory area to which a process can write. one
pga is allocated for each server process; the pga is exclusive to that server
process and is read and written only by oracle code acting on behalf of that
process. a pga is allocated by oracle when a user connects to an oracle database
and a session is created, though this varies by operating system and
configuration. because a separate pga exists for every server process, an instance
never consists of a single sga and a single pga, even in a shared server
environment. the basic memory structures associated with oracle include:
(a) software code areas
(b) system global area (sga): the database buffer cache, the redo log buffer, the
shared pool
(c) program global areas (pga): the stack areas, the data areas, sort areas
a. manually edit the password file and add the new entries.
b. alter the current password file and resize if to be larger.
c. add the new entries; the password file will automatically grow.
d. drop the current password file, recreate it with the appropriate number of
entries and add everyone again.
answer: d
explanation:
you can create a password file using the password file creation utility, orapwd
or, for selected operating systems, you can create this file as part of your
standard installation.
entries: this parameter sets the maximum number of entries allowed in the password
file. this corresponds to the maximum number of distinct users allowed to connect
to the database as sysdba or sysoper. if you ever need to exceed this limit, you
must create a new password file. it is safest to select a number larger than you
think you will ever need. see (a58397.pdf) pg. 39, 41. (1-9, 1-11).
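since the entries limit is fixed when the password file is created with orapwd,
exceeding it means recreating the file. the users currently registered in it can
be checked with:

```sql
-- list users granted sysdba/sysoper through the password file
select * from v$pwfile_users;
```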
146. abc company consolidated into one office building, so the very large
employees table no longer requires the office_location column. the dba decided to
drop the column using the syntax below:
dropping this column has turned out to be very time consuming and is requiring a
large amount of undo space.
what could the dba have done to minimize the problem regarding time and undo space
consumption?
answer: e
testking said b.
explanation:
http://download-west.oracle.com/docs/cd/b10501_01/server.920/a96521/tables.htm#5508.
removing unused columns
the alter table ... drop unused columns statement is the only action allowed on
unused columns. it physically removes unused columns from the table and reclaims
disk space.
in the example that follows the optional keyword checkpoint is specified. this
option causes a checkpoint to be applied after processing the specified number of
rows, in this case 250. checkpointing cuts down on the amount of undo logs
accumulated during the drop column operation to avoid a potential exhaustion of
undo space.
alter table hr.admin_emp drop unused columns checkpoint 250;
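a common way to minimize the online impact is to mark the column unused first
(a fast, metadata-only operation) and physically drop it later with
checkpointing; the table and column names below follow the question's scenario:

```sql
-- metadata-only: the column disappears logically, no data is touched yet
alter table employees set unused column office_location;

-- later, in a maintenance window, reclaim the space with periodic checkpoints
alter table employees drop unused columns checkpoint 250;
```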
147. user b informs you that the update statement seems to be hung. how can you
resolve the problem so user b can continue working?
a. no action is required
b. ask user b to abort the statement
c. ask user a to commit the transaction
d. ask user b to commit the transaction
answer: c
explanation:
because oracle uses row-level locking, user a's uncommitted transaction holds
locks on the updated rows, and no one else can modify those rows until the
transaction is committed or rolled back.
ad a: false. this situation requires dba intervention if session of user a keeps
emp table locked for other users updates during a long time.
ad b: false. user a needs to commit update command to resolve this issue. user b
does not need to abort the transaction.
ad d: false. user b cannot commit his/her transaction before user a commits
his/her transaction.
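the blocking scenario can be sketched as follows; the emp table and column names
are hypothetical examples:

```sql
-- session of user a: the uncommitted update locks the row
update emp set sal = sal * 1.1 where empno = 7839;

-- session of user b: this statement now waits on user a's row lock
update emp set sal = sal + 100 where empno = 7839;

-- session of user a: committing releases the lock, so user b's update proceeds
commit;
```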
148. anne issued this sql statement to grant bill access to the customers table in
anne's schema:
bill issued this sql statement to grant claire access to the customers table in
anne's schema:
later, anne decides to revoke the select privilege on the customers table from
bill.
which statement correctly describes both what anne can do to revoke the privilege,
and the effect of the revoke command?
a. anne can run the revoke select on customers from bill statement. both bill and
claire lose their access to the customers table.
b. anne can run the revoke select on customers from bill statement. bill loses
access to the customers table, but claire will keep her access.
c. anne cannot run the revoke select on customers from bill statement unless bill
first revokes claire's access to the customers table.
d. anne must run the revoke select on customers from bill cascade statement. both
bill and claire lose their access to the customers table.
answer: a
explanation:
anne can run the revoke select on customers from bill statement. both bill and
claire lose their access to the customers table because of cascade revoking of
privilege.
ad a: true. anne can run the revoke select on customers from bill statement. both
bill and claire lose their access to the customers table because of cascade
revoking of privilege.
ad b: false. both bill and claire lose their access to the customers table, not
only bill.
ad c: false. anne can run the revoke select on customers from bill statement.
there is no limitation in oracle that bill needs first to revoke claire's access
to the customers table if anne granted this privilege to bill with grant option.
ad d: false. anne can revoke the privilege from bill (and, through cascading, from
claire) with a plain revoke command; there is no cascade clause in the revoke
command for this case. the optional cascade constraints clause is required only
when revoking the references privilege, which does not apply here.
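the grant statements are not reproduced in the question above, but a plausible
sequence consistent with the explanation is:

```sql
-- anne grants bill access, allowing him to pass the privilege on
grant select on customers to bill with grant option;

-- bill (connected as bill) passes the privilege to claire
grant select on anne.customers to claire;

-- anne revokes from bill; revocation of an object privilege cascades,
-- so claire loses access as well
revoke select on customers from bill;
```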
149. john has created a procedure named salary_calc. which sql query allows him to
view the text of the procedure?
a. select text from user_source where name = 'salary_calc';
b. select * from user_source where source_name = 'salary_calc';
c. select * from user_objects where object_name = 'salary_calc';
d. select * from user_procedures where object_name = 'salary_calc';
e. select text from user_source where name = 'salary_calc' and owner = 'john';
answer: a
explanation:
sql> desc user_source
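user_source has the columns name, type, line, and text, so the procedure text can
be retrieved ordered by line. note that unless the procedure was created with a
quoted identifier, the dictionary stores its name in uppercase, so in practice the
literal usually needs to be uppercase:

```sql
select text
  from user_source
 where name = 'SALARY_CALC'
   and type = 'PROCEDURE'
 order by line;
```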
150. which statement should you use to obtain information about the number, names,
status, and location of the control files?
answer: b
explanation:
ad a: false. v$parameter this view lists information about initialization
parameters. see (a58242.pdf) pg. 402.
ad b: true. v$controlfile this view lists the names of the control files. see
(a58242.pdf) pg. 360.
ad c: false. v$control_files does not exist. see (a58242.pdf).
ad d: false. v$parameter this view lists information about initialization
parameters, it has no parameter column. see (a58242.pdf) pg. 402.
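the correct query against v$controlfile looks like this:

```sql
-- one row per control file, with its name and status
select status, name from v$controlfile;
```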
151. you need to make one of the data files of the prod_tbs tablespace
autoextensible.
answer: d
explanation:
try it!
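the statement in question takes this form; the file path and sizes are
hypothetical examples:

```sql
-- make an existing data file of prod_tbs grow automatically
alter database datafile '/u01/oradata/prod/prod_tbs01.dbf'
  autoextend on next 10m maxsize unlimited;
```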
152. which three events occur when the instance is started and the database is
mounted? (choose three)
answer: a, b, c
explanation:
see ocp oracle 9i database: fundamentals i, p. 56.:
a and c already occur with nomount option.
b also occurs with the mount option. with this option, the control file is read
(!) to obtain the names and status of the datafiles and the redo log files.
ad d, e: datafiles and redo log files are opened (and therefore checked) with the
open option.
153. you are creating a database manually and you need to limit the number of
initial online redo log groups and members. which two keywords should you use
within the create database command to define the maximum number of online redo log
files? (choose two).
a. maxlogmembers, which determines the maximum number of members per group.
b. maxredologs, which specifies the maximum number of online redo log files.
c. maxlogfiles, which determines the absolute maximum of online redo log groups.
d. maxloggroups, which specifies the maximum number of online redo log files,
groups and members.
answer: a, c
explanation:
see ocp oracle 9i database: fundamentals i, p. 77f.:
the maxlogfiles option defines the maximum number of redo log file groups and the
maxlogmembers option defines the maximum number of members for a redo log file
group that can be created in the database.
the other options (maxredologs, maxloggroups) do not exist.
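a fragment of a manual create database statement using both keywords might look
like this; the database name, paths, and values are hypothetical examples:

```sql
create database prod
  maxlogfiles 16      -- upper bound on online redo log groups
  maxlogmembers 4     -- upper bound on members per group
  datafile '/u01/oradata/prod/system01.dbf' size 300m
  logfile group 1 ('/u01/oradata/prod/redo01.log') size 100m,
          group 2 ('/u01/oradata/prod/redo02.log') size 100m;
```

because these limits are recorded in the control file, raising them later
requires re-creating the control file.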
154. which four do you find in the alert log file? (choose four)
answer: c, d, e, f
explanation:
create user and create table do not produce an entry in the alert log file.
155. you need to determine the amount of space currently used in each tablespace.
you can retrieve this information in a single sql statement using only one dba
view in the from clause, provided you use either the _______ or _______ dba view.
a. dba_extents
b. dba_segments
c. dba_data_files
d. dba_tablespaces
answer: a, c
explanation:
see ocp oracle 9i database: fundamentals i, p. 211
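queries against the two views can be sketched as follows: dba_extents sums the
space actually allocated to segments, while dba_data_files sums the space
allocated to the tablespace's files:

```sql
-- space used by segments, per tablespace
select tablespace_name, sum(bytes)/1024/1024 as used_mb
  from dba_extents
 group by tablespace_name;

-- space allocated to data files, per tablespace
select tablespace_name, sum(bytes)/1024/1024 as alloc_mb
  from dba_data_files
 group by tablespace_name;
```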