DBA NOTES
2nd Method of Installation: Using VNC (Virtual Network Computing)
The VNC server runs on the server machine and the VNC viewer runs on the client. We need to start the VNC server on the server through our session, then open the VNC viewer on the client.
To start the VNC server
Syn:- vncserver
When we start the VNC server for the first time, it asks for a password. Enter any password you like. After entering the password, it creates a hidden directory called .vnc under the home directory of the user. This directory contains the password file, the startup file, and the log and pid (process id) files:
passwd
xstartup
linux6:1.pid
linux6:1.log
The pid file is named <hostname>:<display>.pid, e.g. linux6:1.pid.
On the server, every VNC connection is created with a display number; it starts from 1, and the next connection is 2. We can identify this number from the line printed when the VNC server starts:
New 'linux6:1 (kittu)' desktop is linux6:1
To check whether the VNC server has started or not:
ps -ef | grep vnc
To kill the process, we need the process id of the VNC server. This pid is stored in the linux6:1.pid file:
more linux6:1.pid
ps -ef | grep vnc
kill -9 6470
Then the VNC process is stopped (killed).
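The pid-file lookup above can be walked through end to end. This sketch simulates the .vnc directory under /tmp; the hostname linux6, display :1 and pid 6470 are just the example values from these notes, not a real server:

```shell
# Simulate the hidden .vnc directory that vncserver creates under $HOME
# (placed under /tmp so the steps can be tried without a real VNC server).
mkdir -p /tmp/demo_home/.vnc
echo "6470" > /tmp/demo_home/.vnc/linux6:1.pid

# The <hostname>:<display>.pid file holds the server's process id;
# this is the pid we would pass to kill -9.
pid=$(cat /tmp/demo_home/.vnc/linux6:1.pid)
echo "kill -9 $pid"
```

With a real server, ps -ef | grep vnc confirms the process is alive before killing it.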
How to start the VNC server on the server and access it from the client?
Log in as the o/s user
Type vncserver and press enter
Enter a password if we are starting the VNC server for the first time
Identify the display number of the VNC server, e.g. linux6:1
Open the VNC viewer on the client and type the IP address of the server along with the display number:
192.168.0.102:1
Enter the password and press enter
Then we are connected to the X server through the VNC viewer.
ORACLE_SID will be the database name. So the SID must be the same as the name of the database we want to create.
2. Create the oracle initialization file init<SID>.ora in the $ORACLE_HOME/dbs directory. In this file we define the parameters required to create and manage a database. The file name should be in the format init$ORACLE_SID.ora. Ex: initdkittu.ora
The file should be created with the following parameters:
db_name = dkittu
db_cache_size = 500m (or 50000000)
shared_pool_size = 50m
log_buffer = 10000
undo_tablespace = undots01
control_files = /oraDB/kittu/kittudb/c1.ctl
undo_management = auto
compatible = 10.2.0.1.0
It is better to create and keep the database files in one particular location.
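The step above can be sketched as a small shell session. The SID dkittu and the parameter values are the example values from these notes, and the file is written under /tmp/dbs rather than a real $ORACLE_HOME/dbs:

```shell
# Write a minimal init<SID>.ora the way step 2 describes
# (using /tmp/dbs to stand in for $ORACLE_HOME/dbs).
ORACLE_SID=dkittu
mkdir -p /tmp/dbs
cat > /tmp/dbs/init${ORACLE_SID}.ora <<EOF
db_name = dkittu
db_cache_size = 500m
shared_pool_size = 50m
log_buffer = 10000
undo_tablespace = undots01
control_files = /oraDB/kittu/kittudb/c1.ctl
undo_management = auto
compatible = 10.2.0.1.0
EOF

# The file name must match init$ORACLE_SID.ora
ls /tmp/dbs
grep "^db_name" /tmp/dbs/init${ORACLE_SID}.ora
```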
3. After completing the above steps, connect to sqlplus as sysdba and issue the command startup nomount.
$ sqlplus
Enter user-name: sys as sysdba
Enter password:
SQL> startup nomount
4. Issue the create database statement. After it executes, 'Database created' is displayed. Then run the post scripts below.
5. Post steps (scripts):
SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql
SQL> @$ORACLE_HOME/sqlplus/admin/pupbld.sql
The same process is required to create another database; only some modifications are needed in the initialization file and the create database statement.
1) $ export ORACLE_SID=dbcherry
$ export ORACLE_HOME=/u001/kitty/myhome (location where the software is installed)
$ export PATH=$PATH:$ORACLE_HOME/bin
Then create the initialization file
2) initdbcherry.ora
db_name = dbcherry
db_cache_size = 500m (or 50000000)
shared_pool_size = 50m
log_buffer = 10000
undo_tablespace = undots01
undo_retention = 99
control_files = /oraDB/dittu/database/cs.ctl
undo_management = auto
compatible = 10.2.0.1.0
3) Then execute the statements below (after performing step 3 above, startup nomount):
create database dbcherry
datafile '/oraDB/dittu/database/system.dbf' size 350m
sysaux datafile '/oraDB/kittu/databases/sysaux.dbf' size 50m
logfile group 1 ('/oraDB/kittu/database/redo01.rdo') size 5m,
group 2 ('/oraDB/kittu/database/redo02.rdo') size 5m
undo tablespace undots01
datafile '/oraDB/kittu/database/undo01.dbf' size 50m;
Then execute the post scripts.
Note:- The database name and ORACLE_SID must be the same.
Creating a Database Using DBCA (Database Configuration Assistant)
1) For this we have to run X-manager passive and export the display
2) Set environment variables like ORACLE_HOME and PATH. Don't set ORACLE_SID, because we give the SID during creation itself
3) Don't create an initialization file. It is also created during database creation
4) After exporting the environment variables, type dbca and press enter. Then the steps below take place
Welcome screen
1) Operation
Select the operation you want to perform:
Create a database
Configure database options in a database
Delete a database
Manage templates
2) Database templates
Select a template from the following list to create a database:
General purpose
Transaction processing
New database
3) Global database name: ramu
It is considered as ramu.appworld.com
SID: ramu
4) Database connection options
Select the mode in which you want your database to operate by default:
Dedicated server mode (one server process per user)
Shared server mode (many users share server processes)
5) Initialization parameters
By default it takes some values; if needed we can modify those values
Memory
o Typical
o Custom
shared pool - 5000000
buffer cache - 3k
java pool - 25000
large pool
pga - 1500000
Character sets
o Use default
o Use unicode
o Choose from list
DB sizing
sort area size - 524288
6) File locations
Create server parameter file
Trace file destinations:
User processes - admin/udump
Background processes - admin/bdump
Core dumps - admin/cdump
Database storage
Log files
System files
Control files
7) Creation options
Create database
Save as a template
OK
Then a summary page is displayed which shows the complete information of the database.
Note: for 10g there are some modifications in the creation steps.
To identify whether the database is down or up: "Connected to an idle instance" means the database is down.
These environment variable values last only for that session. To make the values permanent, save them in .bash_profile.
How do we maintain multiple databases in one o/s user on a server?
This is possible by using functions in .bash_profile. Each function contains the environment variable values for one database.
Syn:- functionname()
{
export ORACLE_SID=dbname
export ORACLE_HOME=myhome
export PATH=$PATH:$ORACLE_HOME/bin
}
Define the functions in the .bash_profile file. When the user logs in to the account, these functions are loaded into memory. To execute a particular function, type the command below at the shell prompt:
Syn:- $ functionname
Ex:- $ database
Then the environment variables defined in the 'database' function are set. To switch to another database, just type that function's name and press enter.
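A minimal sketch of such functions (the SIDs dkittu and dbcherry and the home paths are just the example values used in these notes):

```shell
# Two functions, one per database; defining them in .bash_profile
# loads them into every login shell.
dkittu() {
    export ORACLE_SID=dkittu
    export ORACLE_HOME=/oraDB/kittu/myhome
    export PATH=$PATH:$ORACLE_HOME/bin
}

dbcherry() {
    export ORACLE_SID=dbcherry
    export ORACLE_HOME=/u001/kitty/myhome
    export PATH=$PATH:$ORACLE_HOME/bin
}

# Typing the function name switches the session's environment:
dkittu
echo "$ORACLE_SID"      # the dkittu environment is now set
dbcherry
echo "$ORACLE_SID"      # switched to dbcherry
```

Note that each call appends to PATH, so a long-running session that switches often will accumulate entries; the sketch keeps the notes' own form.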
To check whether the database processes have started or not, use the command below:
$ ps -ef | grep smon
When the database is started, the smon process is started.
Home directory:- the location where the files related to that particular user are kept.
Database directory:- the location where the database files are kept.
The routine activities on a database are:
Starting the database
Shutting down the database
Removing a database
Monitoring the database
Taking backups
Generally, when we create a database we perform 2 phases:
1) Configuring the instance
2) Configuring the database
1) Configuring the instance:- the instance is nothing but memory. This activity is done by creating init<ORACLE_SID>.ora. When the instance is started, two things are started in the background:
a) SGA
b) Background processes
When we issue startup nomount, oracle reads init<db>.ora, allocates memory for the SGA, and the instance is started. The SGA is the space reserved in memory (RAM) for the database.
2) Configuring the database
When we execute the create database statement, the database is created. After this, 3 kinds of files are created. They are:
Datafiles (.dbf)
Control files (.ctl)
Redolog files (.redo)
After this we have to perform the post steps.
Sizes of the software:
Oracle 9i ----- 1.6 GB
Oracle 10g ----- 1.26 GB
Q) How to change an account's shell from ksh to bash?
This can be done in the /etc/passwd file. Change ksh to bash for the user we want to change, then save the file and exit. Before doing any modification to the passwd file, it is better to keep a copy of that file.
In bash the startup file is .bash_profile; in ksh the startup file is .profile.
When we change the shell, the data and files of the former shell are available to the later shell.
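The edit can be sketched with sed on a copy of the file. The user name kittu and the passwd lines below are made up; on a real system you would back up and edit /etc/passwd itself (or use chsh):

```shell
# Work on a copy, as the notes advise keeping a backup of /etc/passwd.
cat > /tmp/passwd.copy <<EOF
root:x:0:0:root:/root:/bin/bash
kittu:x:500:500::/home/kittu:/bin/ksh
EOF

# Change only kittu's login shell from ksh to bash.
sed -i 's|^\(kittu:.*\):/bin/ksh$|\1:/bin/bash|' /tmp/passwd.copy
grep "^kittu" /tmp/passwd.copy
```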
Oracle memory
Oracle memory is of 2 types:
SGA (System Global Area, or Shared Global Area)
PGA (Process Global Area)
The SGA is the space reserved for the oracle database. It is shared memory, where users share the resources of the SGA.
In the fundamental model, the SGA basically contains three parts:
db_buffer_cache
shared pool
log buffer
Later versions add the java pool and large pool, so the SGA contains five components:
db_buffer_cache
shared pool
log buffer
large pool
java pool
db buffer cache:- recently used data is stored in the buffer cache. If the data a user requests is present in the buffer cache, it is sent to the user directly.
shared pool:- it contains parsed sql information. It translates the sql statement into a form the database can execute.
log buffer:- it contains transactional (redo) data.
java pool:- it is used for JVM operations.
The SGA should be a minimum of 100-200 MB, and its size should not be more than 1/3rd the size of RAM.
How to change the SGA size:
The SGA size can be changed by changing the parameter values in init.ora. The total size of the SGA is determined by the parameter sga_max_size:
SQL> show parameter sga_max_size
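The 1/3-of-RAM rule of thumb above is simple shell arithmetic; the 8192 MB figure here is just a hypothetical machine:

```shell
# Upper bound for sga_max_size on a machine with 8 GB of RAM,
# following the notes' rule that the SGA should not exceed 1/3 of RAM.
ram_mb=8192
sga_cap_mb=$((ram_mb / 3))
echo "sga_max_size should stay under ${sga_cap_mb}m"
```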
We can also see the SGA size at startup, when a breakdown of sizes is displayed:
Total System Global Area   165007897 bytes
Fixed Size                 --------- bytes
Variable Size              --------- bytes
Database Buffers           --------- bytes
Redo Buffers               --------- bytes
Bouncing the database:- shutting down the database and starting it again is said to be bouncing the database.
INSTANCE:
When we do startup, oracle allocates the SGA and starts the background processes which are mandatory to run oracle. This is called the instance.
The instance opens the database files. Each and every thing is performed by the instance. All the logical operations like creating, reading, writing, etc. are done by the instance. All the files are managed by the instance. A user cannot access the files without the instance: users connect to the instance, not to the files. A user is able to view and manipulate data through the instance only. Making things available to the user is done by the instance. The instance is identified by ORACLE_SID, which must be unique.
Three phases when we are starting a database:
1. Instance (nomount): at this phase the instance is started and the SGA is allocated
2. Mount: at this phase the instance opens the way to the database files
3. Open: finally the database is opened
When we shutdown the database, the instance is closed, all the memory (SGA) is deallocated, and the database files (datafiles, logfiles, control files) are closed.
There are several ways to start the database:
1. Step by step. We use this when we are creating or altering the database. These stages are used for maintenance of the database, i.e. if we want to increase the size of datafiles, change the locations of files, or if any issues occurred in the database:
SQL> startup nomount;
SQL> alter database mount;
SQL> alter database open;
2. Startup mount;
Alter database open;
3. Startup: it opens the database in one step.
PGA (Process Global Area)
It is memory reserved for each user process connecting to the oracle db. The PGA holds the session's private information. It is external to the SGA. Every connection has one PGA, which is private to that connection. The memory is allocated when the process is created and deallocated when the process is terminated.
ORACLE PROCESSES
There are 3 types of processes. They are:
Server processes
Client processes
Background processes
Server process: when a session is established, a server process is created. It connects to the oracle instance and is started when the user establishes a session. To handle the requests of the client process connected to the instance, a server process is created on behalf of each user's application. It can perform one or more of the following:
Parse and execute the sql statements issued through the application (client process)
Return the results in such a way that the application can process the information
Client process: started at the time a database user requests a connection to the oracle server. A client process is a process which is created when the client software is started. When we execute sqlplus from the $ prompt, sqlplus becomes the client process.
The client process is a process that sends messages to a server, requesting the server to perform a task (service). Client programs usually manage the user-interface portion of the application, validate the data entered by the user, dispatch requests to server programs and sometimes execute business logic; the client-based process is the front-end application that the user sees and interacts with.
Handshake: when we start a client process (i.e. when we fire sqlplus at the shell prompt), before this process interacts with the instance, the server process interacts with the client process. This is called a handshake.
Parent process: for every process there is a parent process. When we execute sqlplus from the shell, the shell's process ID becomes the parent process ID of sqlplus, i.e. the shell spawned a process. lsnrctl is an executable in $ORACLE_HOME/bin.
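The parent/child relationship can be seen directly in the shell; here the current shell plays the role of the $ prompt and a child bash plays the role of sqlplus (no oracle needed):

```shell
# The child process's PPID is the PID of the shell that spawned it,
# just as the login shell's PID becomes sqlplus's parent PID.
echo "shell pid: $$"
bash -c 'echo "child sees parent pid: $PPID"' > /tmp/demo_ppid.out
cat /tmp/demo_ppid.out
```

The two numbers printed are the same, which is exactly the spawning relationship described above.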
SMON [System Monitor]
It is used in crash recovery and temp segment cleaning. It recovers after instance failure and monitors temporary segments and extents. It wakes up about every 5 minutes to perform housekeeping activities. SMON must always be running for an instance.
CKPT [Checkpoint]
It updates the headers of datafiles and control files with the latest SCN (system change number), i.e. it writes the checkpoint information to the control files and datafile headers.
LGWR [Log Writer]
Flushes the data from the log buffer to the redolog files, i.e. it writes the log buffer out to the redolog files.
DBWR [DB Writer]
Flushes the data from the db buffer cache to the data files. It writes dirty buffers from the database block cache to the database data files. It writes blocks back to the datafiles at checkpoints, or when the cache is full and space has to be made for more blocks. We can create multiple db writers by defining a parameter in init.ora:
Parameter: db_writer_processes = 2
Reading data from the datafiles is done by the server process; writing data is done by the background process (DBWR).
ARCHITECTURE OF BACKGROUND PROCESSES
When we issue a select statement, the shared pool converts (parses) the statement into an executable form, then the server process checks the db_cache. If the information related to the sql statement is available in the db_cache, it is sent to the user. If not, the server process reads it from the datafiles into the cache and then sends it to the user.
When we issue transactional statements like insert, delete, update etc., a copy of the change data is maintained in 2 locations:
1. The db buffer cache (flushed by DBWR)
2. The log buffer (flushed by LGWR)
SERVER PROCESS
In oracle there are 2 technologies:
Shared server process, or MTA (multi-threaded architecture)
Dedicated server process
Dedicated server process:
One client process connected to one unique server process is said to be a dedicated server process. The server process is dedicated to one client, and exists as long as the client process exists; it may be idle or working. For example, if 100 users are connected to the database and each server process takes about 3 MB, considerable memory is needed. The architecture we are using is the dedicated server process.
Shared server process:
Multiple clients connected to a pool of shared server processes is said to be the shared server process model. It is the older one. The client process connects to a server process through a bridge process called the dispatcher process. The dispatcher is a mediator between the client processes and the shared server processes. With it we can reduce the burden and save some resources on the server. This configuration is defined in init.ora. The advantage of the shared server process is saving the resources of the system.
RAC (REAL APPLICATION CLUSTERS)
A single database can be accessed by multiple instances. We have multiple servers for the instances, but only one database for all instances. This is called RAC.
STANDBY DATABASE
Maintaining a copy of the instance and database is a standby database. The difference is in the time lag only.
DATABASE ARCHITECTURE
There are 2 types of database architecture:
Physical
Logical
The physical architecture is nothing but the o/s-level architecture. The files that exist at o/s level are said to be the physical architecture.
Physical:
1. datafiles, min (1)
2. redolog files, min (2)
3. control files, min (1)
To view the set of datafiles, logfiles and controlfiles:
Select name from v$datafile;
Select member from v$logfile;
Select name from v$controlfile;
Datafile: it stores the actual data.
Logfile: it stores transactions. The purpose of the redolog files is recovery in case of failure.
Controlfile: it stores the information and status of the datafiles and redolog files. The size of the control file is automatically taken care of by the system.
When we are inserting data, if one redolog file is filled then oracle starts writing to the next redolog file; after filling that, it goes back to the first redolog file again. This cycle repeats.
If we enable archive log mode, then before overwriting the data in a previous redolog file, oracle takes a backup of it to a different location.
How to identify which databases are available on a server?
This is through the oratab file in the /etc folder. Ex: vi /etc/oratab
It stores database names and oracle homes. Only databases created through dbca are added automatically; manually created databases have to be added to oratab by hand. Its location changes from one o/s to another.
In solaris: /var/opt/oracle
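A sketch of reading such a file (the entries and the /tmp path are made up; a real oratab lives in /etc, or /var/opt/oracle on solaris):

```shell
# oratab format: <SID>:<ORACLE_HOME>:<Y|N>   (Y = start at boot)
cat > /tmp/oratab <<EOF
dkittu:/oraDB/kittu/myhome:Y
dbcherry:/u001/kitty/myhome:N
EOF

# List the database names and their oracle homes, skipping comment lines.
awk -F: '/^[^#]/ { print $1, "->", $2 }' /tmp/oratab
```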
LOGICAL ARCHITECTURE:
Schema objects
Non-schema objects
A schema is nothing but a user.
Seeded (default) schemas. Ex: sys, system
The objects which reside in a schema are said to be schema objects. Ex: table, view, index, synonym, procedure, package, function, database link, sequence, etc. The objects which are not associated with a schema are said to be non-schema objects. Ex: tablespace, roles
How to refer to an object of a schema?
1. Either we log in to the schema and access the object, or
2. From a different login user. First the user must have the permission to access that object; then follow the below syntax:
Syn:- schema.object
Ex:- scott.emp
To start the database:
sqlplus / as sysdba
DATA DICTIONARY
1. Oracle maintains the entire system data in the data dictionary, or catalog.
2. System data is the data which is required for the functionality of the database.
3. The data dictionary or catalog is a set of tables, views and synonyms.
4. When we create the database, some files and objects are created both physically and logically.
Physically: control files, redolog files, data files.
Logically: base tables such as
tab$, uet$, file$, fet$, obj$, ts$, cluster$, idx$
These tables are created when we run the create database statement. We cannot access these tables directly, and it is very difficult to understand the data in them. There are views to access these tables; they are created when we run catalog.sql.
VIEWS
Object           dba_ view        all_ view     user_ view
Tables           dba_tables       all_tables    user_tables
Indexes          dba_indexes      all_indexes   user_indexes
Synonyms         dba_synonyms     -------       -------
Views            dba_views
Sequences        dba_sequences
Clusters         dba_clusters
Database links   dba_db_links
Datafiles        dba_data_files
Oracle updates whatever ddl activities we perform into the base tables; the database engine takes this responsibility. The whole of oracle works based on these base tables.
We should use only select statements on them.
Only sys and system have dba privileges by default.
dba_ :- displays everything in the database (all users' info)
user_ :- displays only the logged-in user's own information
all_ :- displays the logged-in user's info plus the objects that user has access to (own + access)
The oracle engine gives users access to 9 tables by default, to know about their own objects and the dba objects.
DICTIONARY VIEWS:
dba_objects
dba_tables
dba_indexes
dba_ind_columns
dba_synonyms
dba_views
dba_sequences
dba_clusters
dba_constraints
dba_cons_columns
dba_tab_columns
session_privs
V$ VIEWS:
The views starting with v$ are said to be dynamic performance views.
V$database      database info
V$datafile      datafile info
V$controlfile   controlfile info
V$logfile       log file info
V$version       version info
V$session       session info
Syn:- c/<search string>/<replace string>  (change text in the sql buffer)
cl buff:- clears the sql buffer (clear buffer)
save:- by using this, we can save the sql statement in the buffer as a script.
To run sql scripts from any location, we have to mention the location of the sql scripts, i.e. ORACLE_PATH, in .bash_profile:
export ORACLE_PATH=/tmp:/oraAPP:/oraAPP/kittu
Usually these sql scripts are saved in the location from where we fire sqlplus.
How can we capture some output in sql?
In unix, it is possible with the script command.
In sql, it is achieved by using the spool command:
Syn:- sql> spool <filename>
sql> <statements>
sql> spool off
Ex:- sql> spool a.out
sql> select * from tab;
sql> desc abc
sql> spool off
The output along with the statements is stored in a.out.
How to format column data in sql:
To see the line size: show linesize
To set the line size: set linesize 200
To set a column size in numeric format: column empno format 9999
i.e. it displays the column empno with 4 digits
To set a column size in alphabetic format: column ename format a15
Development:- the designing side is said to be development. When we work on a new project we are said to be working on the development side.
Production:- after everything is designed and tested, it is deployed into production.
Query to create a user:
Syn:- create user <username> identified by <password>;
Ex:- create user xyz identified by xyz;
Granting privileges to a user:
Syn:- grant connect, resource to xyz;
Dropping a user:
Syn:- drop user xyz;
To view the source code of a view:
Syn:- select text from dba_views where view_name='XYZ';
Granting dba to a user:
Syn:- grant dba to <username>;
Revoking a privilege from a user:
Syn:- revoke <privilege> from <username>;
QUERIES
Views:- to see view information
Select view_name, owner, text from dba_views;
Sequences:- to see sequence information
Select sequence_name, last_number from dba_sequences;
Synonyms:- to see synonym information
Select owner, synonym_name, table_owner, table_name from dba_synonyms;
Indexes:- to see index information
Select owner, index_name, table_name from dba_indexes;
To see index column info:
Select index_name, table_name, column_name from dba_ind_columns;
To see a row id:
Select rowid from <tablename>;
Constraints:
Select owner, constraint_name, constraint_type, table_name from dba_constraints;
To see constraint columns:
Select owner, constraint_name, table_name, column_name from dba_cons_columns;
Tables:
Select owner, table_name, tablespace_name, status from dba_tables;
Users:
Select username, default_tablespace from dba_users;
Tablespaces:
Select tablespace_name, status from dba_tablespaces;
Database:
Select name, dbid, created, open_mode from v$database;
Version:- v$version has only one column, banner
Select * from v$version;
TABLESPACE MANAGEMENT
A tablespace is a logical structure which binds the objects. A tablespace is a container for data files, i.e. a tablespace is a collection of one or more data files. One database has a minimum of one tablespace. A tablespace is always associated with one or more data files. A database is a collection of tablespaces; it is a one-to-many relationship.
- The size of the database is the total size of its tablespaces.
- The size of a tablespace is the total size of its data files.
- As we add tablespaces, the database size increases.
- We can't create a tablespace without a data file.
- Each tablespace has its own data files.
- Redo logs and control files will never grow in size, so we never count these files in the size of the tablespaces. These are key structures.
- A data file cannot be shared across tablespaces.
Syntax to create a tablespace:
Syn:- create tablespace <tablespacename> datafile '<location>' size <size>;
Ex:- create tablespace ts01 datafile '/oraAPP/kittu/ts01.dbf' size 10m;
When a tablespace is offline, we can't access the data in the tablespace; we can't even perform a select statement.
How do we make a tablespace online?
Syn:- alter tablespace ts01 online;
Dropping a tablespace:
Before dropping a tablespace, it is better to make the tablespace offline.
Syn:- drop tablespace ts01;
In this case only the tablespace is deleted, but the data files are maintained at o/s level. To also delete the files at o/s level, i.e. the contents and data files (contents means objects, i.e. tables, views, etc.) in the TS:
Syn:- drop tablespace ts01 including contents and datafiles;
Increasing the size of a tablespace:
We can increase the size of a tablespace in two ways.
1) We can increase the size of a data file:
Syn:- alter database datafile '/oraAPP/kittu/ts01.dbf' resize 50m;
Notable points:
- If a datafile is very big, oracle encounters some issues, so we use datafiles of max. 5 GB size.
- If we don't mention the datafile location, the file is saved in the $ORACLE_HOME/dbs directory.
- When we enter more data into a datafile than its size allows, it shows the below error:
Error:- unable to extend table msb.emp by 128 in tablespace chinni
To see the free space of a tablespace:
Syn:- select sum(bytes/1024/1024) from dba_free_space where tablespace_name='CHINNI';
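The bytes/1024/1024 conversion used in these queries is plain arithmetic; for example a 52428800-byte extent (a hypothetical figure) works out to:

```shell
# Convert a size in bytes (as stored in dba_free_space.bytes) to MB.
bytes=52428800
echo "$((bytes / 1024 / 1024)) MB"
```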
To see a datafile and its size:
Syn:- select file_name, bytes/1024/1024 from dba_data_files where tablespace_name='TS';
How can we make a datafile autoextend up to some size:
Syn:- alter database datafile '---' autoextend on maxsize 200m;
Increasing the size of the TS by a certain increment:
Syn:- alter database datafile '---' autoextend on next 10m maxsize 100m;
In 9i, it is not possible to rename a tablespace. To achieve the same effect we have to follow the below steps:
- create a new tablespace
- move all tables from the old TS to the new TS
Syn:- alter table <tname> move tablespace <new TS>;
Ex:- alter table emp move tablespace venki;
--> When we move a table from one TS to another TS, the table remains in the same user (schema).
BIGFILE TABLESPACE:
It is new in 10g. With this we can create a very big tablespace, of terabyte size, consisting of a single datafile (the maximum size depends on the database block size).
Syn:- create bigfile tablespace ts01 datafile '---' size 10g;
- make the tablespace online
We rename the file at sql level so that it is updated in the data dictionary.
To see users and their default tablespaces:
select username, default_tablespace from dba_users;
Then the database is opened.
Generally we open the database in the mount stage to perform maintenance activities like renaming datafiles, default tablespaces, redologs etc. In this stage we can't access the dba_ views; we can access only the v$ views.
We can't take the system tablespace or the default undo tablespace offline.
How can we rename a datafile of the system TS?
We can't make the system TS offline in DB open mode, because if we make it offline we can't access the dictionary views, i.e. we can't access users, tables etc. So we rename the datafile of the system TS in the mount stage only.
Steps: open the database in the mount stage
startup mount
We can also rename the undo TS datafile in the mount stage only, because we can't make it offline.
To know a tablespace's datafile names in the mount stage:
Select ts#, name from v$tablespace where name='SYSTEM';
Select name from v$datafile where ts#=0;
Undo tablespace:- it is for undo operations. It maintains the old data till we issue a commit.
SHUTDOWN:
There are 4 methods:
1) shutdown (or shutdown normal)
2) shutdown immediate
3) shutdown abort
4) shutdown transactional
Shutdown immediate: it is the reverse of startup.
Phase 1 - Database closed
All the connections (sessions) connected to the instance are killed. All the pending transactions are rolled back. A checkpoint happens and the dirty buffers are flushed to the datafiles. The datafiles and redolog files are closed.
Phase 2 - Database dismounted
The control file is closed.
Phase 3 - Instance closed
The background processes are killed and the memory for the SGA is deallocated.
It does not wait for the users to disconnect from the DB.
Difference b/w shutdown and shutdown immediate:
- Shutdown immediate will kill all the existing sessions.
- Shutdown will wait for the users to disconnect themselves.
Alert log file:
All the startup and shutdown activities are captured into a file called the alert log file. Its name is alert_<dbname>.log. By default it is stored under $ORACLE_HOME/rdbms/log.
Use tail -f <filename> to display the alert file content as it is updated, in incremental order.
Shutdown transactional
This option is used to allow active transactions to complete first, i.e. it lets the current transactions finish.
It doesn't allow clients to start new transactions.
Attempting to start a new transaction results in disconnection.
After completion of all transactions, any client still connected to the instance is disconnected.
Now the instance shuts down.
The next startup of the database will not require any instance recovery.
It will disconnect users who are idle.
In real time, we use shutdown immediate and shutdown abort.
Startup restrict:
We use this option to allow only oracle users with the restricted session system privilege to connect to the database, i.e. only the DBA can have access to the DB. We can use the alter command to disable the restricted session feature:
Syn:- alter system disable restricted session;
Actually we use this when we are doing maintenance, so we can't give access to the database to other users. We can enable the restricted session feature after logging in to the database as the sys user:
Syn:- alter system enable restricted session;
SYSAUX TABLESPACE
It is new in oracle 10g. It is used to store database components that were stored in the system tablespace in prior releases of the database. It is installed as an auxiliary TS to the SYSTEM TS. When we create the database, some database components that formerly created and used separate tablespaces now occupy the SYSAUX TS.
If the SYSAUX TS becomes unavailable, core database functionality will remain operational, but the database features that use the SYSAUX TS could fail or function with limited capacity.
((block size * file size in blocks)/1024)/1024 --> gives the control file size in MB
OR
by firing the ls -l command in unix, we get the size in bytes.
It is recommended to have a minimum of 2 to 4 control files.
V$parameter:- lists the status and value of all parameters.
V$controlfile_record_section:- provides information about the control file record sections.
show parameter control_files:- lists the names, status and location of the control files.
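The ((blocksize * filesizeblocks)/1024)/1024 formula above can be checked in the shell, and ls -l gives the same answer in bytes. The block size and block count here are hypothetical figures, not from a real control file:

```shell
# Control file size from a block size and a block count
# (in a real database these come from v$controlfile).
block_size=16384
file_size_blks=640
echo "$(( (block_size * file_size_blks) / 1024 / 1024 )) MB"

# Cross-check: a file of the same byte size, measured with ls -l.
dd if=/dev/zero of=/tmp/demo_ctl bs=16384 count=640 2>/dev/null
ls -l /tmp/demo_ctl | awk '{ print $5, "bytes" }'
```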
How can we know whether the database is in archivelog or noarchivelog mode?
- select log_mode from v$database;
(OR)
- archive log list
ARCH background process:
ARCH is the archiver. Its task is to automatically archive the online redologs so as to prevent them from being overwritten. The archiver background process starts if the database is in archivelog mode and automatic archiving is enabled. Copying the data in a redolog to some other location is said to be archiving.
How can we convert the database to archivelog mode?
To start archivelog mode:
Shutdown the database
Define the parameters in the init.ora initialization file. The parameters are:
log_archive_start=true --> necessary in 9i only; optional in 10g
log_archive_dest= --> location where we need to archive the log files
log_archive_format=%s.arc
The archived files are saved in the format we define in this parameter.
Syn:- alter system set log_archive_format = ----
Start the database in the mount stage.
Now fire the command to switch to archivelog mode:
Ex:- alter database archivelog;
Archivelog to noarchivelog mode:
shutdown immediate
startup mount
alter database noarchivelog;
alter database open;
Note: we can switch between archivelog and noarchivelog only when the database is in the mount stage, because this change must be recorded in the control file and it is done at the database level.
We usually refer to a redolog file group rather than an individual redolog file.
Why do we need more than one redolog file in a group?
This is to safeguard against damage to any single file. When we create multiple redolog files in a group, LGWR concurrently writes the same redolog information to all of them, thereby eliminating a single point of redolog failure. The files in a group are said to be ___________. All are the same in size and contain the same data.
Useful views: v$log_history, v$log, v$logfile
To add a redolog group:
Alter database add logfile group 1 ('/oraAPP/redo1.rdo', '/oraAPP/redo2.rdo') size 10m;
How can we dump binary data in text format:
$ strings c1.ctl > a
***When we are in mount stage and archivelog is enabled, 'archive log list' may still
show automatic archival as disabled.
Clearing a redo log file:- alter database clear logfile group 3;
It will reinitialize the damaged group.
Clearing an unarchived log file:- alter database clear unarchived logfile group 3;
If the value column of this parameter shows some value, it indicates that the database
started using an spfile.
How to create pfile from spfile?
Syn:- create pfile from spfile;
Then all the database-related files are deleted automatically, except init.ora.
ALTER SYSTEM:-
scope=memory -- the new parameter value is updated only in the running instance
scope=spfile -- the new parameter value is updated only in the spfile
scope=both -- the new parameter value is updated in both the spfile and the running instance
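As a sketch, the scope clause looks like this (resource_limit is just an example of a dynamic parameter):

```sql
ALTER SYSTEM SET resource_limit = TRUE SCOPE = MEMORY; -- running instance only
ALTER SYSTEM SET resource_limit = TRUE SCOPE = SPFILE; -- spfile only, effective at next startup
ALTER SYSTEM SET resource_limit = TRUE SCOPE = BOTH;   -- both (default when started with an spfile)
```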
Audit files:- These files contain information written when we start sqlplus as sys as
sysdba. They are also updated when we connect from another user to sys as sysdba.
Who logged in and started the database is stored: the o/s user name, database name,
system name, oracle_home, database user, privilege, time etc.
Whenever we connect to the sys user, oracle creates audit files.
Ex:- ora_3702.aud
TABLESPACES
1) Permanent tablespaces
2) Undo tablespaces
3) Temporary tablespaces
In 10g, we cannot have more than 65536 tablespaces.
Permanent tablespaces:
The tablespaces which are used to store the data permanently are said to be
permanent tablespaces.
Ex:- system, sysaux, etc.
Undo tablespaces:
Every oracle database must have a method of maintaining information that is
used to roll back (or) undo changes to the database. Such information consists of records
of the actions of transactions, primarily before they are committed. Such records are
collectively referred to as undo.
The undo tablespace is used to store the undo records of the database, i.e.,
uncommitted transactions (pending data). We create the undo tablespace at the time of
database creation.
If no undo tablespace is available, the instance starts but uses the SYSTEM
tablespace as the default undo tablespace. This is not a recommended option, so create the
undo tablespace at the time of database creation (or) afterwards by setting the parameter
value [undo_tablespace].
Creating an undo tablespace:
create undo tablespace undots01
datafile '/oraAPP/kittu/db1/undo1.dbf' size 50m;
We can create multiple undo tablespaces, but there is no use, because only one undo
tablespace is active at a time.
How is the data stored in undo tablespaces?
[Diagram: a row with value 1000 in the user tablespace is updated to 5000; the old value
1000 is written to the undo tablespace, from where it can be rolled back.]
If a table contains the salary 1000 for some employees and we update the salary
from 1000 to 5000, then the records containing salary 1000 are stored in the undo
tablespace and salary 5000 is updated in the table. If we commit, the new values remain;
otherwise 1000 comes back to the table.
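The behaviour above can be sketched in SQL (emp and sal are example names, not from a specific schema):

```sql
-- the pre-update value 1000 is written to the undo tablespace
UPDATE emp SET sal = 5000 WHERE sal = 1000;
-- applying undo: sal returns to 1000
ROLLBACK;
-- after a COMMIT instead, the undo would be kept only until it expires
```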
We can view the tablespace type from dba_tablespaces:
select tablespace_name, contents from dba_tablespaces;
(or)
from the dictionary view database_properties.
How to set the undo tablespace from the sql prompt?
alter system set undo_tablespace=UNDOTS01;
If we set this without an spfile, it lasts only until the instance restarts. If there is an
spfile, the change is made permanent to the database, because the spfile allows dynamic
alteration. If there is no spfile, we need to specify it in the init.ora file.
Dropping an undo tablespace:
drop tablespace undots01 including contents and datafiles;
Ex:- If we join 2 large tables and oracle cannot do the sort in memory (see the
SORT_AREA_SIZE initialization parameter), space will be allocated in a temporary
tablespace for doing the sort operation. Other sql operations that might require disk
sorting are:
create index,
Analyze,
Select distinct,
Order by,
Group by
The DBA should assign a temporary tablespace to each user in the database to
prevent them from allocating sort space in the SYSTEM tablespace.
TEMP FILES:
Unlike normal datafiles, tempfiles are not fully initialized. When you create a tempfile,
oracle only writes to the header and last block of the file.
This is why it is much quicker to create a tempfile than a normal database file.
Tempfiles are not recorded in the database's control file. This implies that one can just
recreate them whenever we restore the database (or) after deleting them by accident.
One cannot remove datafiles from a tablespace until we drop the entire tablespace.
However, one can remove a tempfile.
View:- dba_temp_files
Syn:- alter database tempfile '/oraAPP/temp1.dbf' drop including datafiles;
If we remove all tempfiles from a temporary tablespace, you may encounter:
Error: ORA-25153 temporary tablespace is empty
Use the syntax below to add a tempfile to the temporary tablespace:
Syn:- alter tablespace temp add tempfile '/oraAPP/temp02.dbf' size 100m;
(or)
alter user x
USER MANAGEMENT
We connect to the database as a user only.
Creating user:
Syn: Create user username identified by password;
Ex:-create user kittu identified by kittu;
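A quick way to verify the new account is the dba_users dictionary view; a minimal sketch:

```sql
-- check that the account exists and is open
SELECT username, account_status, created
FROM dba_users
WHERE username = 'KITTU';
```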
Changing a password:
Ex:- alter user kittu identified by ramu;
Password expire:
Ex:- alter user kittu password expire;
Dropping a user:
Ex:- drop user kittu cascade;
System privileges:
A system privilege is the right to perform a particular action (or) to perform an action
on any schema object of a particular type. For example, the privileges to create a
tablespace and to delete the rows of any table in the database are system privileges.
They are used to perform DDL activities.
Who can grant and revoke system privileges?
Users who have been granted a specific system privilege with the admin option.
Users with the GRANT ANY PRIVILEGE system privilege can grant any privilege.
i.e., a DBA can grant system privileges.
Granting and revoking system privileges:
Syn:- grant create session to kittu;
grant create table to kittu;
Syn:- revoke create session from kittu;
Object privileges:
An object privilege is the permission to perform a particular action on a specific
schema object, i.e., to perform DML activities on another user's objects. Some schema
objects, such as clusters, indexes, triggers, and database links, do not have associated
object privileges; their use is controlled with system privileges.
For example, to alter a cluster, a user must own the cluster (or) have the ALTER ANY
CLUSTER system privilege.
Who can grant object privileges?
The owner of the object.
A user with the GRANT ANY OBJECT PRIVILEGE privilege can grant (or) revoke any
specified object privilege to another user, with (or) without the grant option of the grant
statement.
Granting and revoking object privileges:
Grant:- grant select on emp to kittu;
Create database, spfile
Archive log and recovery
Includes the restricted session privilege
Sysoper can perform following operations:
Perform startup, shutdown
Create spfile
Alter database mount/open/backup
Archive log and recovery
Includes the restricted session privilege
DBA_COL_PRIVS is used to view column privileges.
Other dictionary views:
column_privileges, table_privileges, all_tab_privs_made, user_tab_privs_made
dba_tab_privs_made doesn't exist.
ROLE:
A role is a set of privileges.
Managing and controlling privileges is made easier by roles, which are named
groups of related privileges that you grant, as a group, to users (or) other roles. Within a
database, a role name must be unique, different from usernames and all other role names.
Unlike schema objects, roles are not contained in any schema.
Who can grant (or) revoke roles?
Any user with the GRANT ANY ROLE system privilege can grant or revoke any role.
Any user granted a role with the admin option can grant (or) revoke that role to (or) from
other users (or) roles of the database.
There are 18 predefined roles.
Ex:- connect, resource, dba, select_catalog_role etc.
Creating a role:
Syn:- create role abc;
Granting: grant create session to abc;
Revoking:
Syn:- revoke create session from abc;
Dictionary views:
dba_roles is used to view total roles information in a database.
dba_role_privs is used to know which roles are assigned to users.
session_roles is used to view the roles for a particular session.
role_role_privs is used to view which roles are assigned to roles.
role_tab_privs is used to view which roles are assigned on tables (or) columns.
role_sys_privs is used to know the privileges of roles.
user_application_roles
v$pwfile_users
A privilege takes effect immediately after granting, but a role takes effect only when the
user reconnects (a new session).
There is another way to activate a role in the current session:
Sql> set role connect;
QUOTA:- A quota is reserved space on a tablespace; it limits how much space a
user can use on that tablespace.
A quota can be assigned to a user at the time of creation (or) after creation:
1) create user abc identified by abc quota 10m on system;
2) alter user kittu quota 10m on system;
3) removing a quota: alter user kittu quota 0 on system;
dictionary views:
dba_ts_quotas is used to know how much quota is reserved for a user on a particular
tablespace.
user_ts_quotas
PROFILES
A profile is a set of limits on database resources. Profiles are used to manage the
resources of the database.
By default, a profile named default is available in the database.
If we assign a profile to a user, that user cannot exceed those limits.
To enable resource limits dynamically we need to set resource_limit parameter to true.
Alter system set resource_limit=true
To see this parameter
Show parameter resource_limit
To view profile information
Select * from DBA_PROFILES;
Columns: profile, resource_name, resource_type, limit
Actually, profiles have 2 types of parameters:
1) resource parameters -- can be viewed in the user_resource_limits view
2) password parameters -- can be viewed in the user_password_limits view
To create a profile, we must have the CREATE PROFILE system privilege.
Syn:- create profile abc limit
sessions_per_user 2
idle_time 30
connect_time 10
failed_login_attempts 2;
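Once created, the profile is assigned to a user and can be verified from dba_users; a minimal sketch:

```sql
-- attach the profile to an existing user
ALTER USER kittu PROFILE abc;
-- confirm which profile the user now has
SELECT username, profile FROM dba_users WHERE username = 'KITTU';
```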
Syn:- drop profile abc cascade;
Resource parameters:
composite_limit
sessions_per_user
idle_time
connect_time
cpu_per_session
cpu_per_call
logical_reads_per_session
logical_reads_per_call
private_sga
Password parameters:
failed_login_attempts
password_life_time
password_reuse_max
password_verify_function
password_lock_time
password_grace_time
Unlimited:
When a resource parameter is specified with this value, it indicates that a user assigned
this profile can use an unlimited amount of that resource. When specified with a password
parameter, unlimited indicates that no limit is set for the parameter.
Sessions_per_user:
It specifies the number of concurrent sessions allowed per user.
Connect_time:
It specifies the allowable connect time per session, in minutes.
Idle_time:
It specifies the allowed continuous idle time before the user is disconnected, in minutes.
Failed_login_attempts:
The number of failed attempts to log in to the user account before the account is locked.
Undo_management=auto
Db_create_file_dest='/oraAPP/kittu/database1'
Db_create_online_log_dest_1='/oraAPP/kittu/database'
Connect as sys as sysdba, startup nomount, and create the database.
Then the database is created, along with a directory named after the database. In it,
3 directories are created:
datafile -- all the data files are stored here
onlinelog -- the redo log files are created here
controlfile -- the control files are created here
A SYSTEM tablespace with a datafile of 200mb is created, auto extensible.
A SYSAUX tablespace with a datafile of 100mb is created, auto extensible.
An undo tablespace named SYS_UNDOTS is created with 120mb size, auto extensible.
2 redo log groups are created, each of size 100mb; each contains only one member.
It creates one control file.
*In 10g, we can mention more than one destination parameter for redo logs
DB_CREATE_ONLINE_LOG_DEST_1
DB_CREATE_ONLINE_LOG_DEST_2
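With these OMF parameters set, the file clauses can be omitted entirely; a minimal sketch (the path is a placeholder):

```sql
-- point OMF at a directory; oracle names and places files itself
ALTER SYSTEM SET db_create_file_dest = '/oraAPP/kittu/database1';
-- creates a 100mb auto-extensible datafile with a system-generated name
CREATE TABLESPACE ts1;
```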
When we create any tablespace, datafile, or logfile, a directory with the database name
is created in db1/kittu/kittu. In it, again three directories are created:
1) datafile
2) onlinelog
3) controlfile
All the files we create after the creation of the database will be stored in these locations.
The users' datafiles and redo logs created after database creation will be stored in these
directories.
Note that the OMF default size is 100mb, and the file size can be overridden at any
time. You can specify the file size only by bypassing OMF and specifying the filename
and location in the datafile clause.
Oracle enhanced the oracle 9i alert log to display messages about tablespace creation
and datafile creation. To see the alert log, you must go to the background_dump_dest
directory.
log member:-
Uses:1)
2)
3)
4)
PARAMETER MANAGEMENT
When we want to change the database architecture, we use the ALTER DATABASE
command.
When we want to change a parameter for a specific session (user), we use the ALTER
SESSION command.
When we want to change parameters for the entire database, we use the ALTER SYSTEM
command.
Parameters are of 2 types:
Dynamic parameters
The parameters whose values can be modified dynamically at run time.
Static parameters
The parameters whose values cannot be modified at run time.
We change them in init.ora.
Static parameters are of 2 types:
1) tunable parameters:-
Alter command:
MANAGING INVENTORY
Inventory means storage.
The oracle inventory is the repository (directory) which stores/records oracle software
products and their oracle_home locations on a machine. Nowadays this inventory is in
XML format and is called the XML inventory, whereas in the past it used to be in binary
format, called the binary inventory.
There are basically 2 types of inventories:
1) Local inventory:- also called the oracle home inventory. The inventory inside each
oracle home is called the oracle_home inventory (or) local inventory. This inventory
holds information for that oracle home only.
Location: $ORACLE_HOME/inventory
2) Global inventory:- also called the central inventory. It holds information about all the
oracle homes installed on that server: home names and locations, like
HOME NAME="ramu" LOC="/oraAPP/kittu" TYPE="O" IDX="1"
The global inventory location is determined by the file oraInst.loc.
It is in /etc [linux] and /var/opt/oracle [solaris]. It contains:
inventory_loc=/etc/inventory
If we want to see the list of oracle products on the machine, check the file inventory.xml.
Location:- /etc/oraInventory/ContentsXML/
Can we have multiple global inventories on a machine?
Yes, you can have multiple global inventories, but if you are upgrading (or) applying a
patch, then change the inventory pointer oraInst.loc to the respective location.
What to do if my global inventory is corrupted?
No need to worry: we can recreate the global inventory on the machine using OUI and
attach the already installed oracle home with the attach home option:
./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc
ORACLE_HOME="oracle_home_location"
ORACLE_HOME_NAME="oracle_home_name"
CLUSTER_NODES={}
ORACLE NETWORKING
In a real-time environment, we provide the database to clients as a non-local
connection.
A non-local connection means connecting to the database server through software
like sqlplus, OEM, VB, .NET, or JAVA from a client system.
There are software components called net80 and oracle net for configuring the tns
entry on the client system. They come with the oracle installation.
For java, jdbc is used to connect to oracle.
For .net, odbc is used to connect to oracle.
Oracle connectivity components are toad, sqlplus, oem etc.
sqlplus is software which comes with oracle to connect to the database.
To access the database from a client system, we follow these steps:
Step 1:- server side
We need to configure a listener on the server. The listener is a utility which listens
for database connections.
It is an executable file.
Its configuration file is $ORACLE_HOME/network/admin/listener.ora.
On one server we may have more than one listener, depending on the load (number of
clients communicating with the server).
Next open the listener.ora file. It is a readable text file:
$ vi listener.ora
LISTENER_NAME =
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC))
(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.207)(PORT=1521))
)
SID_LIST_LISTENER_NAME=
(SID_LIST=
(SID_DESC=
(ORACLE_HOME=/oradb/nani9i)
(SID_NAME=databasename)
)
(SID_DESC=
(SID_NAME=PLSEXTPROC)
(ORACLE_HOME=/oradb/nani9i)
(PROGRAM=extproc)
)
)
Listener name
List of sids (database)
Protocol (tcp/ip)
Port number (default 1521). We must have different port numbers for different
listeners.
Host name (ip address)
After configuring the listener in listener.ora, save and exit, and run the commands
below:
$lsnrctl start -- it will start all the listener in listener.ora
$lsnrctl start kittu -- it will start listener kittu
Lsnrctl is an executable which is in ORACLE_HOME/bin
Step 2: client side
Install oracle net and the protocol adapter for tcp/ip (client-side components).
We need to configure a tns entry to connect to the database through a listener.
We use sqlplus, toad, oem, or isqlplus as the front end to connect to the database.
For configuring the tns entry, we have to know the ORACLE software home on the
client system.
We need to configure the tns entry in the tnsnames.ora file present in the home:
/oracle/orad2k/network/admin/tnsnames.ora
For sqlplus, we find this by right-clicking the shortcut, selecting properties, and finding
the home.
In Toad there may be multiple homes; we find the home by opening toad. We can set the
toad home by clicking SQL*Net configuration help, selecting the corresponding home,
and clicking the 'set as toad home' button. Then find the location of tnsnames.ora and
configure it.
We need to define an alias in home
While defining alias provide
1) Alias name(tns name (or) entry)
2) Target server name (or ip address)
3) Target sid (database_name)
4) Protocol (tcp)
5) Target port number(defined in listener.ora)
After defining the alias, we check it using:
C:\> tnsping <aliasname>
(tnsping is a utility for oracle, whereas ping is a utility for tcp/ip)
Tnsnames.ora:
tnsentryname =
(DESCRIPTION =
(ADDRESS=(PROTOCOL=TCP)(HOST=target_server)(PORT=1521))
(CONNECT_DATA=
(SID=database_name)
)
)
Step 3:
Now we can connect to the database through sqlplus (or) oem (or) toad.
Open sqlplus or toad.
Sqlplus login: username kittu [username in database], password xxxx, host string xhni
[tnsname]
OR
sqlplus kittu/xxxx@xhni
Toad login: database chinni (tnsname), schema/username kittu, password ram
In tnsnames.ora we can define multiple tns entries with different port numbers and the
same sid.
Everything will be the same; we only have to create listeners with different port numbers.
Tip
Using the lsnrctl command we can:
1) Start
2) Stop
3) Services -- to know the services of the server: dedicated or mts, local or non-local
4) Debug
5) Status
6) Help
7) Reload -- it will restart the listener
The listener can be started regardless of the status of the instance.
If we want to keep the network files in a non-default location, we need to define the
TNS_ADMIN environment variable in .bash_profile:
export TNS_ADMIN=/home
Then oracle will look for listener.ora in the /home directory.
A tnsnames.ora file can have n number of tns entries.
There is no significance to the tns entry name; we can give any name.
trc_level admin
SESSION MANAGEMENT
We can know each session's information from the v$session view.
The commonly used columns in v$session are: sid, serial#, username, logon_time, status.
In v$session there is no username for background processes.
By using sid and serial# we can kill a session.
To know our own session id:
select sid from v$mystat where rownum=1;
To kill a session:
Syntax:- alter system kill session 'sid,serial#';
Ex:- alter system kill session '1,20';
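Putting it together, a minimal sketch (the sid and serial# come from the query; '35,112' is only a placeholder):

```sql
-- find the target session
SELECT sid, serial#, username, status
FROM v$session
WHERE username = 'KITTU';
-- substitute the values returned above
ALTER SYSTEM KILL SESSION '35,112';
```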
STORAGE MANAGEMENT
We know that the data is stored in datafiles.
At the finest level of granularity, oracle database stores data in data blocks (also called
logical blocks, oracle blocks, or pages).
Segment:
A segment is a set of extents that contains all the data for a specific logical
storage structure within a tablespace.
For example, for each table, oracle database allocates one or more extents to form the
table's data segment.
Extent:
An extent is a specific number of contiguous data blocks that are allocated
for storing a specific type of information.
Oracle Block:
A block is the smallest unit of storage in oracle. The size of a data block is fixed when
the database is created and cannot be changed except by rebuilding the database.
The primary data block sizes are 2k, 4k, 8k, 16k, 32k.
The size of a block is determined by the parameter db_block_size in the init.ora file.
In the o/s also, data is stored in blocks; the o/s file block size is 512 bytes or 1k.
When we try to read some data, oracle uses db blocks; oracle translates o/s blocks
while reading the data.
In 10g the default block size is 8k.
In 9i the default block size is 2k.
When we define a datafile of 1m with a block size of 8k, it contains about 128 blocks.
For example:- pctused 40
In this case, a data block used for this table's data segment is considered unavailable
for the insertion of any new rows until the amount of used space in the block falls to
39% or less.
Init trans: this parameter specifies how many concurrent transactions can access the db
block at any particular point in time.
Freelists: this parameter is used in RAC.
Tip: once the primary block size is set, you can create a new tablespace with an alternate
block size for creating tables with parameters.
Syntax:-
for that segment. The size of the incremental extent is the same as or greater than the
previously allocated extent; i.e., we specify the extents after the initial extent through
this parameter.
MINEXTENTS: This parameter specifies the total number of extents to be allocated
when we create a table (segment) or index.
MAXEXTENTS: This parameter specifies up to how many extents a segment can hold.
PCTINCREASE: This parameter specifies the incremental percentage by which each
extent after the NEXT extent grows. By default its value is 50%.
For example, consider:
a) initial 1m, next 1m, minextents 1, maxextents 4, pctincrease 50%
Extent sizes allocated: 1m, 1m, 1.5m, 2.25m (the next extent would be 3.375m, but
maxextents 4 stops further allocation)
b) initial 1m, next 1m, minextents 2, maxextents 5, pctincrease 20%
Extent sizes allocated: 1m, 1m, 1.2m, 1.44m, 1.73m
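The growth in example (a) can be checked with a quick query against the dual table (a sketch; sizes in mb):

```sql
-- extents 3 and later each grow by pctincrease 50% over the previous one
SELECT level AS extent#,
       ROUND(POWER(1.5, GREATEST(level - 2, 0)), 3) AS size_mb
FROM dual
CONNECT BY level <= 4;
-- extent sizes: 1, 1, 1.5, 2.25
```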
If we don't specify storage parameters for an extent, oracle itself allocates the default
storage parameters.
A segment is a collection of extents.
The segment name is nothing but the object name; when we create a table or index, it
creates a segment.
By default each extent contains max of 5 blocks and min of 2 blocks.
Create a tablespace and a segment, and find the storage parameters without specifying
them:
Sql> create tablespace ts01 datafile 'ts01.dbf' size 10m;
All the parameters, blocks, and their sizes for extents are allocated as per the operating
system defaults.
Create a table in the tablespace with some parameters and check the parameters:
Sql> create table chinni(a number)
storage (initial 1m next 1m minextents 1 maxextents 5 pctincrease 100);
Sql> @sess.sql
OR
select segment_name, bytes from dba_segments where segment_name='RAM';
Extent Management
A tablespace is a logical storage unit.
Why logical? Because a tablespace is not visible in the file system; oracle stores data
physically in datafiles.
How to create a tablespace?
Create tablespace ts_name
Datafile '...' size 2m
Minimum extent ... (this ensures that every used extent size in the tbs is a multiple of
the integer)
Blocksize ...
Logging: by default, a tbs has all changes written to redo.
Nologging: the tbs does not have changes written to redo.
Online: the tablespace is online, i.e. available.
Offline: the tablespace is unavailable immediately after creation.
Permanent: the tablespace can be used to hold permanent objects.
Temporary: the tablespace can hold temp data.
Extent management is of two types:
a) Dictionary extent management b) Local extent management
A tablespace maintained with dictionary extent management is a dictionary managed
tablespace (DMTS).
A tablespace maintained with local extent management is a locally managed tablespace
(LMTS).
Locally managed tablespace: the extents are managed within the tablespace. In locally
managed tablespaces, all the tablespace and extent information is stored in the datafile
header of that tablespace; the data dictionary tables are not used for storing this
information.
The advantage of LMTS is that no DML is generated, which reduces contention on the
data dictionary tables, and no undo is generated when space allocation or deallocation
occurs.
The storage parameters NEXT, PCTINCREASE, MINEXTENTS, MAXEXTENTS, and
default STORAGE are not valid for segments stored in LMTS.
To create a locally managed tablespace, specify LOCAL in the extent management
clause of the create tablespace statement.
We have 2 options for LMTS: 1) system (or) autoallocate
2) uniform
Q) How to create an LMTS:
create tablespace tbs datafile 'star.dbf' size 10m extent management local;
SYSTEM (or) AUTOALLOCATE: autoallocate specifies that extent sizes are system
managed. Oracle will choose optimal next extent sizes, starting with 64kb; as the segment
grows, larger extent sizes will be used: 1mb, 8mb, and eventually 64mb. This is
recommended only for a low-use or unmanaged environment.
The default is autoallocate, i.e. it takes the database default storage parameters.
Syntax:- create tablespace tbs
datafile 'star.dbf' size 10m
extent management local autoallocate;
UNIFORM:
It specifies that the tbs is managed with uniform extents of SIZE bytes. The default size
is 1m. The uniform extent size of an LMTS cannot be overridden when a schema object
such as a table or index is created.
Syntax:- create tablespace tbs datafile 'star.dbf' size 10m
extent management local uniform size 128k;
We can alter all parameters except initial and minextents in a DMTS. If we create a
DMTS, the extent info is stored in the dictionary and the real data is stored in the datafile
of that tablespace. In that case we need more I/O, i.e. oracle has to search for extents in
the dictionary, which degrades performance.
In oracle 8i --> only DMT available
From 9i --> both DMT and LMT (LMT is the default)
From 10g --> both DMT and LMT (LMT is the default)
SEGMENT
Segments are the storage objects within the oracle database. A segment might be a
table, an index, a cluster etc.
The level of logical database storage above an extent is called a segment.
A segment is a set of extents that contains all the data for a specific logical storage
structure within a tablespace.
For example, for each table, oracle database allocates one or more extents to form that
table's data segment, and for each index, oracle database allocates one or more extents
to form its index segment.
There are 11 types of segments in oracle:
table
table partition
index
index partition
rollback
deferred rollback
lobindex
temporary
cache
permanent
Auto: this option uses bitmaps (rather than free lists) for managing free space within
segments. This is typically called automatic segment space management; it is the default.
Freelists: freelists are lists of data blocks that have space available for inserting new rows.
Every datafile must consist of one or more o/s blocks. Each o/s block may belong
to one and only one datafile.
Every tablespace may contain one or more segments. Each segment must exist in
one and only one tablespace.
Every segment must consist of one or more extents. Each extent must belong to one
and only one segment.
Every extent must consist of one or more oracle blocks. Each oracle block may
belong to one and only one extent.
Every extent must be located in one and only one datafile. The space in a datafile
may be allocated as one or more extents.
Every oracle block must consist of one or more o/s blocks. Every o/s block may be
part of one and only one oracle block.
Create a tablespace without any extent management option, create a table, and check:
Create tablespace tbs datafile 'aa.dbf' size 4m;
Obs:
extent_management = LOCAL
allocation_type = SYSTEM
segment_space_management = AUTO
initial = 65536
min_extents = 1
max_extents = 2147483645
bytes = 65536
The extents are allocated o/s-specific; by default it takes extent management as LOCAL.
grant dba to hari identified by hari;
alter user hari default tablespace tbs;
conn hari/hari
create table a (a number);
select segment_name, initial_extent, next_extent, min_extent, max_extent,
pct_increase, allocation_type, extents, bytes
from dba_segments where segment_name='A';
allocation_type is SYSTEM
Create an LMTS with uniform and check:
Create tablespace ts04 datafile 'ts04.dbf' size 3m extent management local uniform
size 128k;
- initial takes 2m, but while storing it takes extent sizes as uniform
- allocation type: UNIFORM
Create a DMTS with no parameters and create a table:
Create tablespace ts05 datafile 'ts05.dbf' size 5m extent management dictionary;
Obs:- initial 40960, next 40960, min 1, max 505, E.M. dictionary, pct 50,
S.S.M. manual, allocation type user
- When we create a table without parameters, the same values take effect.
- create table a2 (a number)
storage (initial 1m next 1m minextents 1 maxextents 5 pctincrease 50);
- All the values mentioned in the parameters take effect.
- We can alter storage parameter values except initial and minextents.
Create a DMTS with storage parameters:
Create tablespace ts07 datafile 'ts07.dbf' size 10m
extent management dictionary
default storage (initial 1m next 2m minextents 2 maxextents 5);
How to identify row migration & row chaining?
How to avoid row migration & row chaining?
Row Migration:
Oracle will migrate a row when an update to that row would cause it to no longer fit
in the block (with all the data that currently exists in that row).
A migration means the entire row moves and we just leave behind a forwarding
address: the original (old) block holds the rowid of the new block, and the entire row is
moved. This requires more I/O.
Row Chaining:
A row is too large to fit into a single database block. For example, if you use a 4kb
block size for your database and you need to insert a row of 8kb into it, oracle will use 3
blocks and store the row in pieces. Some conditions that cause row chaining are:
Tables whose row size exceeds the block size.
Tables with LONG and LONG RAW columns are prone to having chained rows.
Tables with more than 255 columns will have chained rows, as oracle breaks wide
tables up into pieces. So, instead of just having a forwarding address on one block and
the data on another, we have data on two or more blocks.
Insert and update statements that cause migration and chaining perform poorly,
because they perform additional processing.
Selects that use an index to select migrated or chained rows must perform
additional I/O.
Detection:
Migrated and chained rows in a table or cluster can be identified by using the ANALYZE
command with the LIST CHAINED ROWS option. This command collects information
about each migrated or chained row and places it into a specified output table. To create
the table that holds the chained rows, execute the script utlchain.sql.
SQL> ANALYZE TABLE scott.emp LIST CHAINED ROWS;
SQL> select * from chained_rows;
Resolving:
In most cases, chaining is unavoidable, especially when it involves tables with
large columns such as LONGs, LOBs etc. When you have a lot of chained rows in different
tables and the average row length of the tables is not that large, then you might consider
rebuilding the database with a larger block size.
Ex:- you have a database with a 2k block size, and different tables have multiple large
varchar columns with an average row length of more than 2k. This means you will have a
lot of chained rows because your block size is too small; rebuilding the db with a larger
block size can give a significant performance gain.
Migration is caused by PCTFREE being set too low: there is not enough room in the
block for updates. To avoid migration, all tables that are updated should have their
PCTFREE set so that there is enough space within the block for updates. You need to
increase PCTFREE to avoid migrated rows. If you leave more space available in the block
for updates, then the rows will have more room to grow.
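A sketch of raising PCTFREE to leave update headroom (emp and emp_pk are example names; the new value applies only to blocks used from now on, so the table may also need a rebuild):

```sql
-- reserve 25% of each block for future row growth
ALTER TABLE emp PCTFREE 25;
-- reorganize existing rows under the new setting
ALTER TABLE emp MOVE;
-- indexes become unusable after MOVE and must be rebuilt
ALTER INDEX emp_pk REBUILD;
```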
1) update <tablename> set column=value where ...
2) alter table <tablename> add column datatype
3) alter table <tablename> modify column datatype
4) create view <viewname> as select col1, col2 from <tablename>
5) create index <indexname> on <tablename>(column)
6) create sequence <seqname> increment by 1
7) drop index <indexname>
8) drop view <viewname>
9) drop table <tablename>
10) drop sequence <seqname>
11) create synonym <synname> for object
12) drop synonym <synname>
vmstat, iostat
v$version
getconf LONG_BIT
platform_wave from v$datafile
address from v$sql
When we run these scripts for the first time, two logfiles are created in ORACLE_HOME:
->startup.log
->shutdown.log
When we start and shut down the database, the startup and shutdown information is
appended to these files.
BACKUPS
Backup and recovery is one of the most important aspects of a DBA's life. If you love your
company's data, you will love your job. Hardware and software can always be
replaced, but your data may be irreplaceable.
Backup: taking a copy of the data to some other location.
Restoration: copying backup files from the backup storage area (hard disk, tape,
CDs, pen drive, etc.) back to the original location.
Recovery is the process of applying redologs to the database to roll it forward.
(or)
Applying archived log files to the database to recover the data changed after the backup was taken.
Oracle has its own Backup Methods
-Physical
-Logical
Physical Backup:- making copies of the files related to the physical architecture.
Eg: Datafiles, Control files, Redolog files
Logical Backup:- taking copies of the logical structures of the database.
Eg: Tables, Schemas, Tablespaces, Database
In real time, most backups are run as root.
There are also third-party backup technologies available; one of them, and the
fastest, is VERITAS.
We integrate the Veritas software and hardware with the database. There must be a
separate admin (Veritas admin) to maintain this technology.
It can back up terabytes of data in just one hour!!
In a real-time environment we use the tar command to take the backup to tape.
$ tar cvf <filename>.tar *
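As a sketch of the idea, the snippet below tars up a directory of stand-in files. The paths and the archive name cold_demo.tar are hypothetical; a real cold backup would point at the actual datafile/controlfile/redolog locations and usually date-stamp the archive name with `date '+%d%m%y'`, as the export script later in these notes does.

```shell
#!/bin/sh
# Hypothetical sketch: back up a directory with tar.
# /tmp/crd_demo, /tmp/coldbkp and cold_demo.tar are example names only.
SRC=/tmp/crd_demo
DEST=/tmp/coldbkp
mkdir -p "$SRC" "$DEST"
# stand-ins for datafiles, control files and redolog files
touch "$SRC/system01.dbf" "$SRC/control01.ctl" "$SRC/redo01.log"
# c = create, v = verbose, f = archive file name; -C enters SRC first
tar cvf "$DEST/cold_demo.tar" -C "$SRC" .
tar tf "$DEST/cold_demo.tar"   # list the contents to verify the archive
```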
WHOLE BACKUPS:
A whole backup is a backup of all the datafiles, the control file and (if you are using
it) the spfile. Remember that as all multiplexed copies of the control file are identical, it is
necessary to back up only one of them. You do not back up the online redologs; protection
for online redolog files is provided by multiplexing and optionally by archiving. Also note that only
datafiles for permanent tablespaces can be backed up. The temp files used for your
temporary tablespaces can't be backed up by RMAN, nor can they be put into backup mode
for an OS backup.
PARTIAL BACKUP:
It includes one or more datafiles and the control file. It is a copy of just a part of
the database.
INCREMENTAL BACKUP:
An incremental backup is a backup of just some of the blocks of a datafile. Only the
blocks that have been changed or added since the last full backup are included. It is
done by RMAN.
ONLINE BACKUP:
A backup which is taken while the database is up and running.
OFFLINE BACKUP:
A backup which is taken while the database is shut down.
PHYSICAL BACKUP
Traditional: Cold, Hot
RMAN: Cold, Hot
COLD BACKUP
A backup which is taken when the database is down is said to be a cold backup.
Steps to perform the cold backup:
1) List out the datafiles, control files and redolog files by using v$datafile,
v$controlfile and v$logfile.
Sql> select name from v$datafile;
Sql> select member from v$logfile;
Sql> select name from v$controlfile;
$echo "select * from tab;" | sqlplus system/manager
It will hit the tables by connecting as system and return to the
$ prompt after displaying the output.
$echo "select * from tab;" | sqlplus -s system/manager -- with the silent option it will just display the output.
How can we make a cold backup fast?
By copying the CRD (control, redo, data) files in parallel sessions, i.e., copy some files in one session
and the remaining files in another session.
In a cold backup we take a backup of all the CRD files.
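The parallel-session idea can be sketched with shell background jobs. Everything here (paths, file names) is made up for illustration; a real script would copy the actual CRD file locations.

```shell
#!/bin/sh
# Hypothetical sketch: copy the CRD files in two parallel "sessions"
# using background jobs. /tmp/src_db and /tmp/bkp are example paths.
SRC=/tmp/src_db
BKP=/tmp/bkp
mkdir -p "$SRC" "$BKP"
touch "$SRC/system01.dbf" "$SRC/users01.dbf" \
      "$SRC/control01.ctl" "$SRC/redo01.log"    # stand-in CRD files

cp "$SRC"/*.dbf "$BKP"/ &                 # session 1: datafiles
cp "$SRC"/*.ctl "$SRC"/*.log "$BKP"/ &    # session 2: control + redo
wait                                      # block until both copies finish
ls "$BKP"
```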
Online redo log files:
Redologs are absolutely necessary for recovery. For example, imagine that a power
outage occurs and prevents Oracle from writing modified data to the datafiles. In this situation the old
data in the datafiles can be combined with the recent changes recorded in the online redo log to
reconstruct what was lost. Every Oracle database contains a set of two or more online redo
log files.
Oracle assigns every redo logfile a log sequence number to uniquely identify it.
The set of redo files for a database is collectively known as the database's redo log.
Oracle uses the redo log to record all changes made to the database; it records every change in a
redo record. An entry in the redo buffer describes what has changed. Assume a user updates a
payroll table value from 5 to 7: Oracle records the old value in undo and the new value in the redo
record.
Since the redo log stores every change to the db, the redo record for this transaction contains
three parts:
Changes to the transaction table of the undo segment
Changes to the undo data block
Changes to the payroll table data block
If the user commits, then to make the update to the table permanent Oracle
generates another redo record.
Archived redo log files:
If archiving is disabled, a filled online redo log is available for reuse once the changes
recorded in the log have been saved to the datafiles.
If archiving is enabled, a filled online redo log is available for reuse once the changes have
been saved to the datafiles and the file has been archived.
Archived log files are redologs that Oracle has filled with redo entries (rendered inactive)
and copied to one or more log archive destinations. Oracle can be run in either of 2 modes:
*Archivelog:
Oracle archives the filled online redo files before reusing them in the cycle.
*Noarchivelog:
Oracle does not archive the filled online redo log files before reusing them in the
cycle.
Running the database in archivelog mode has the following benefits:
The database can be completely recovered from both instance and media failure.
The user can perform online backups, i.e., back up a tablespace while the database is open and
available for use.
Archived redologs can be transmitted to and applied to a standby database.
Oracle supports multiplexed archive logs to avoid any possible single point of failure
of the archive log.
The user has more options, such as the ability to perform tablespace point-in-time
recovery.
Running the database in noarchivelog mode has the following consequences:
The user can only back up the database while it is completely closed after a clean
shutdown.
Typically the only media recovery option is to restore the whole database, which
causes the loss of all transactions issued since the last backup.
These archived logs should be hosted on a physically separate disk.
HOT BACKUP
A backup that is taken while the database is up and running.
Prerequisites for hot backup:
The database must be up and running.
The database must be in archivelog mode.
Steps for hot backup:
Check whether the database is running:
Sql>select open_mode from v$database;
For each tablespace:
Put the tablespace in hot backup mode (begin backup mode):
Sql>alter tablespace system begin backup;
Copy the datafiles of the tablespace to the backup location /stage/backup/.
Or
i.e., Oracle maintains full copies of the changed db blocks in the redologs. If a log switch occurs, they
are archived at that point in time. If any user wants to retrieve the updated data, he gets that
data from the redologs; if the data in the redologs has been archived, then he retrieves the data from the
dictionary cache, where they are stored as temporary statements.
During hot backup the performance of the system slows down.
When we put the tablespace in end backup mode, the headers of the datafiles get released and
the SCN numbers are updated using the redolog files.
Tablespace checkpoint:
A checkpoint that occurs on only one ts; only that ts has a different SCN
compared to all the other tss. This is possible when we perform:
Alter tablespace <ts> offline;
Alter tablespace <ts> begin backup;
Database checkpoint:
A checkpoint that occurs for the whole database; all the SCNs must be synchronized (i.e.,
equal) at this time.
How can we automate hot backup?
We can automate hot backup by writing a shell script.
Shell script for hot backup:
$vi hotbackup.sh
#set the environment
export ORACLE_SID=sree
export ORACLE_HOME=/stage/10.2.0
export PATH=$PATH:$ORACLE_HOME/bin
#make the dynamic script: spool the tablespace "begin backup" and "end backup"
#syntax into backup.sql (the datafiles are copied in between)
sqlplus <<E
sys as sysdba
set pages 0
spool /tmp/backup.sql
select 'alter tablespace '|| tablespace_name ||' begin backup;'
from dba_tablespaces where contents not in ('TEMPORARY')
union all
select 'alter tablespace '|| tablespace_name ||' end backup;'
from dba_tablespaces where contents not in ('TEMPORARY');
spool off
E
#running the spooled sql and taking the backup of the control file
sqlplus <<E
sys as sysdba
@/tmp/backup.sql
alter database backup controlfile to '/stage/hot_back';
E
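Without a database at hand, the contents of the spooled backup.sql can be mimicked by looping over a hard-coded list of non-temporary tablespace names. The names below are examples, not from the notes.

```shell
#!/bin/sh
# Sketch of what the spooled backup.sql ends up containing. The SELECT
# against dba_tablespaces is mimicked with a fixed list of example names.
TABLESPACES="SYSTEM SYSAUX USERS"

for ts in $TABLESPACES; do
  echo "alter tablespace $ts begin backup;"
done >  /tmp/backup_demo.sql
for ts in $TABLESPACES; do
  echo "alter tablespace $ts end backup;"
done >> /tmp/backup_demo.sql

cat /tmp/backup_demo.sql
```

In the real script the begin/end pairs come from dba_tablespaces, and the datafile copies happen between running the begin statements and the end statements.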
In hot backup we can put all the tablespaces of the database in begin backup mode in one shot by:
Sql>alter database begin backup;
Sql>select dbms_flashback.get_system_change_number from dual;
1316516
The SCN of the last checkpoint can be found in v$database.
Sql>select current_scn from v$database;
The smon_scn_time table allows us to roughly find out which SCN was current at a
specific time within the last five days.
Checkpoint
Checkpoint (CKPT) is an Oracle background process; it is a mandatory background process.
A checkpoint performs the following operations:
Every dirty block in the buffer cache is written to the datafiles.
That is, it synchronizes the data blocks in the buffer cache with the datafiles on disk.
The latest SCN is written to the control file and the datafile headers.
Checkpoints lead to updating the datafile headers. If the Oracle background process CKPT
is not available for our system (or is not started), LGWR will perform the task.
From Oracle 8.0 it is enabled by default; the parameter is log_checkpoint_process and it must
be set to true.
When does a checkpoint occur?
Alter system checkpoint; {for the entire db}
Alter system switch logfile;
Alter tablespace <tn> offline; {for only this ts}
Shutdown immediate
When 1/3 of the log buffer is full.
By mentioning 2 parameters in the init file:
log_checkpoint_timeout --- the timeout has expired
log_checkpoint_interval --- the interval (in redo blocks) has been reached
Begin backup (alter tablespace <tn> begin backup)
While redo log switches cause a checkpoint, checkpoints don't cause a log switch.
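A minimal init file fragment setting these two parameters might look like the following; the values are illustrative only, not recommendations from the notes.

```
# force a checkpoint at least every 1800 seconds
log_checkpoint_timeout = 1800
# force a checkpoint every 10000 redo blocks written
log_checkpoint_interval = 10000
```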
Size of the redo log:
If the size of the redo log is small, the performance of checkpoints will not be optimal. This is
the case if the alert.log contains messages like:
Thread 1 cannot allocate new log
Time and SCN of the last checkpoint:
The date and time of the last checkpoint can be retrieved through checkpoint_time in
v$datafile_header:
Sql>select checkpoint_time from v$datafile_header;
Difference between SCN and checkpoint:
An SCN is represented with an scn_wrap and an scn_base. Whenever scn_base reaches
4294967296 (2^32), scn_wrap goes up by one and scn_base is reset to 0. This way you
can have a maximum SCN of about 1.8e+19.
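The wrap/base arithmetic can be checked directly in the shell; wrap=1 and base=100 below are made-up values.

```shell
#!/bin/sh
# Full SCN = scn_wrap * 2^32 + scn_base (the values below are made up).
wrap=1
base=100
scn=$(( wrap * 4294967296 + base ))
echo "$scn"    # 4294967396
```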
The checkpoint number is the SCN at which all the dirty buffers are written to
disk. A checkpoint can be at the object/tablespace/datafile/database level.
scn_wrap and scn_base are retrieved from the table smon_scn_time:
Select scn_wrp, scn_bas from smon_scn_time;
The checkpoint number is never updated for the datafiles of read-only tablespaces.
We can also query v$transaction to arrive at the SCN for a transaction.
The control file records information about checkpoints and archived sequences along with
other information.
q) Does Oracle do crash recovery (or) transaction recovery after shutdown abort if the
checkpoint was taken right before the instance crash?
Yes, Oracle performs roll forward first if there are any changes beyond that
checkpoint, and then rolls back any uncommitted transactions.
SCN numbers are reported at frequent intervals by SMON in the smon_scn_time table.
q) When the highest SCN number is exhausted, what happens? Will Oracle restart from the first
number?
If the SCN really reached its maximum allowed value (after exhausting all the
wraps), the database has to be opened in resetlogs mode and the SCN will start from the beginning all
over again.
Q) Do all the redo entries have an SCN attached to them (or) do only the commit entries have one?
All changes recorded in the redo (including commits and rollbacks) will have an SCN
associated with them.
Hot backup
Conditional execution at the database level:
It will show whether the database is up (or) down, avoiding the grep statement.
Hot backup script-2:
set the env
check whether the db is up or down
check who is executing the script
check whether the db is in archive mode or not
generate the backup syntax dynamically
Start the backup process:
1. put the ts in begin backup
2. copy the datafiles
3. put the ts in end backup
take a backup of the control file to the backup location.
unset the duplex dest.
evaluate the size of the backup (the source and target sizes must match).
send mail to the DBA that the backup is completed.
Hot backup through dynamic script:
name    : hot.sh
author  : kittu
date    : 23-6-2008
purpose : the script will evaluate the state of the db and perform a hot backup
          to a local mount point
#set the environment
. $HOME/.bash_profile (or) export and set the env variables
dbkittu
#who is executing the script
export usr=`/usr/bin/who am i | awk '{print $1}'`
if [[ ${usr} = "kittu" ]];
then
echo "continue operation" >> /tmp/success.ht
else
echo "exit from execution" >> /tmp/exit.lst
exit
fi
sqlplus -s <<E
sys as sysdba
set pages 0
spool hot.sql
select 'alter tablespace '|| tablespace_name ||' begin backup;'
from dba_tablespaces where contents not in ('TEMPORARY')
union all
select 'alter tablespace '|| tablespace_name ||' end backup;'
from dba_tablespaces where contents not in ('TEMPORARY');
spool off
E
SQLPLUS <<E
SYS AS SYSDBA
@hot.sql
alter database backup controlfile to '/stage/hot/backup.ctl';
E
$chmod 700 hot.sh
$./hot.sh
Dynamically passing ORACLE_SID:
#!/bin/bash
#set the environment
export ORACLE_SID=${1}
export ORACLE_HOME=`grep -w ${1} /etc/oratab | awk -F: '{print $2}'`
export PATH=$PATH:$ORACLE_HOME/bin
#who is executing the script
previous script
#check db is up/down
previous script
#check archive mode or not
previous script.
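The grep/awk line can be tried out safely against a throwaway oratab-style file; the SIDs and paths below are examples, and /tmp/oratab.demo stands in for the real /etc/oratab.

```shell
#!/bin/sh
# Demo of deriving ORACLE_HOME from an oratab entry, using a fake
# /tmp/oratab.demo instead of the real /etc/oratab.
cat > /tmp/oratab.demo <<'EOF'
# sid:oracle_home:autostart
sree:/stage/10.2.0:N
app:/oraAPP/app/product/10.2.0:Y
EOF
SID=sree
# -w matches the SID as a whole word; awk splits on ':' and prints field 2
OH=`grep -w "${SID}" /tmp/oratab.demo | awk -F: '{print $2}'`
echo "$OH"    # /stage/10.2.0
```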
Sql>alter database backup controlfile to ... ;
Sql>alter system set log_archive_duplex_dest='/stage/hot/';
Q) How to kill a session?
$ps -ef | grep oracleapp | grep LOCAL=YES    -- gives 21364
Select s.sid, s.serial#, p.spid, s.username from
v$session s, v$process p
where s.paddr = p.addr and
p.spid = 21364;
ROLL BACK SEGMENTS
Each database contains one (or) more rollback segments.
A rollback segment records the old values of the data that were changed by each
transaction. Rollback segments provide read consistency, rollback of transactions and recovery of
the database.
Neither database users nor administrators can access or read rollback segments;
only Oracle can write to (or) read them.
Rollback entries change data blocks in the rollback segment, and Oracle records all
changes to data blocks, including rollback entries, in the redolog. This information is very
important for active transactions (not yet committed or rolled back) at the time of a system
crash. If a system crash occurs, Oracle automatically restores the segment information,
including the rollback entries for active transactions, as part of instance or media recovery.
When recovery is completed, Oracle performs the actual rollback of transactions that had been neither committed
(nor) rolled back at the time of the system crash.
Usually when we commit a transaction, Oracle releases the rollback data but does not
immediately destroy it. The data is lost only when the last extent of the rollback segment
is filled; at that time Oracle continues writing rollback data by wrapping around to the first
extent in the segment.
Each rollback segment can handle only a fixed number of transactions from one instance.
Oracle creates an initial rollback segment called SYSTEM whenever a db is created. This
segment is in the SYSTEM ts; we can't drop the SYSTEM rollback segment.
Place rollback segments in separate tablespaces.
To create rollback segments, the user must have the create rollback segment privilege.
Creation of rollback segments:
*create a tablespace to hold the rollback segments
Sql> create tablespace <tn>
datafile '<path>'
extent management dictionary;
*create a rollback segment
Sql> create rollback segment r1
tablespace <tn>;
*shut down the database
*open the init file and comment out the below parameter
#undo_management=auto
*start the database
Observation:
*when we comment out undo_management, the undo tablespace becomes offline.
*the rollback segments also become offline.
This can be viewed from dba_rollback_segs:
Sql>select segment_name, status, tablespace_name from dba_rollback_segs;
Segment_name    Status    Tablespace_name
------------------------------------------
SYSTEM          online    system
_SYSSMU1$       offline   undotbs
R1              offline   rbs
R2              offline   rbs
In this situation try to insert data into some table as some non-system user which is
assigned to some permanent tablespace.
Sql>conn kittu/kittu
Sql>insert into emp values(1);
Error:
ORA-01552: cannot use system rollback segment for non-system tablespace 'KITTU'
To resolve this situation we have to make the rollback segments online. This can be done
in 2 ways:
1) manually
sql> alter rollback segment r1 online;
Generate some undo:
Sql>conn sys as sysdba
Fire the below query to watch the extents for the rollback segments being allocated and
deallocated, using the views v$rollname and v$rollstat:
Sql>select a.name, b.writes, b.extents, b.curext, b.xacts
from v$rollstat b, v$rollname a where a.usn = b.usn;
xacts -- the number of active transactions.
Alter extents :-
UNDO MANAGEMENT
Every Oracle database must have a method to maintain information that is used to roll back,
or undo, changes to the database. Such information consists of records of the actions of
transactions, primarily before they are committed.
Undo records are used to:
Till 8i the undo that was generated used to be handled by rollback tablespaces, which were
manually managed. In that case we had to first create a rollback tablespace, then create
rollback segments and assign them to the rollback tablespace.
From Oracle 9i the new concept of the undo tablespace was introduced, which helps in the below
ways:
it is logically managed.
The undo segments are created by Oracle itself.
The number of undo segments is determined by Oracle itself.
The purpose of undo segments and rollback segments is the same, except for the
creation and maintenance part.
It is not possible to use both methods in a single instance. However, we can migrate:
for example, create an undo tablespace in a database that is using rollback segments and
assign the undo tablespace to the db, or create rollback segments in a database that is using an
undo ts.
However, in both cases we must shut down and restart the database in order to effect the
switch from one method to the other.
Modes of undo space management:
Manual:
If we use the rollback segments method of managing undo space, we are said to be
operating in manual undo management mode.
Auto:
If we use the undo tablespace method, we are operating in automatic undo management
mode.
We determine this mode at instance startup using the undo_management
parameter in the init file.
An undo tablespace must be available into which Oracle will store undo records. The default
undo tablespace is created at database creation, (or) an undo tablespace can be created
explicitly.
The parameter used to assign an undo tablespace is undo_tablespace.
At instance startup, Oracle automatically selects for use the first available undo
tablespace. If there is no undo tablespace available, the instance starts but uses the SYSTEM
rollback segment. This is not recommended, and an alert message is written to the alert file.
undo_retention:
Retention is a period of time, specified in units of seconds. It can survive system crashes,
i.e., undo generated before an instance crash is retained until its retention time has expired,
even across a restart of the machine.
When the instance is recovered, undo info is retained based on the current setting of the
undo_retention parameter.
The default is undo_retention=900.
It takes effect immediately.
Oracle 10g guaranteed undo retention:
When we enable this option the database never overwrites unexpired undo data, i.e., undo data
whose age is less than the undo retention period. This option is disabled by default.
Create an undo tablespace:
Sql>create undo tablespace undotbs
datafile '/oraAPP/undo.dbf' size 50m;
Dropping the undo ts:
Oracle usually takes care of creating undo segments; this was introduced in Oracle 9i.
To build the demo tables (along with the scott user) using sql:
@?/rdbms/admin/utlsampl.sql
select undotsn, undoblks from v$undostat;
Temporary tablespaces
V$sort_usage
V$tempstat
V$tempfile
V_$sort_usage
Dba_temp_files
Database_properties
Package:
Utl_recomp
Sql>select file_name, tablespace_name, bytes, status from dba_temp_files;
Database creation
We can create a db without mentioning the below parameters in the init file:
db_cache_size, shared_pool_size, log_buffer and control_files.
The default sizes of the above parameters are:
db_cache_size = 48m
shared_pool_size = 32m
log_buffer = 7057408
controlfile = control<sid>.ora, located in $ORACLE_HOME/dbs/
Total SGA size = 112m.
How can we trace a session (user)?
We want to get information about what the user is doing. For this we have to follow the below
process.
Open 2 sessions:
1) as sysdba
2) scott/tiger
Solution:
1) Get the sid, serial# for that session.
Sql> select sid, serial#, username from v$session where username is not null;
Sid   serial#   username
------------------------
27    1632      scott
2) Now execute the below package to enable tracing for that session.
Sql> exec dbms_system.set_sql_trace_in_session(27, 1632, true);
3) Perform some activities in that session.
4) Now get the server process id for that session using the below query.
Sql> select p.spid from v$session s, v$process p where s.paddr = p.addr and s.sid = 27;
Spid
3683
With this spid a trace file is generated for this session in udump.
5) Go to the udump location. Now convert this trace file to a user-understandable format and
also eliminate sys-related data:
[kittu@linux1 ~]$ tkprof ram_ora_3683.trc ram_ora_3683.txt sys=no
Open this file and view what activities were done in that session and also the
performance.
How can we disable tracing on a session?
Sql> exec dbms_system.set_sql_trace_in_session(sid, serial#, false);
How to kill a session
Identify the sid, serial# of that session from v$session.
Sql> select sid, serial#, username from v$session;
27  1749  rama
Now find the server process id for this session.
Sql> select p.spid from v$session s, v$process p where
s.paddr = p.addr and s.sid = 27;
Spid
----
9082
First kill the session using the below statement at the sql level:
Sql> alter system kill session '27,1749';
Now kill the session at the o/s level: we already found the server process id for this session,
so kill that process.
[kittu@linux2 ~]$ ps -ef | grep oracle | grep ram
[kittu@linux2 ~]$ kill -9 9082
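The o/s-level kill can be rehearsed on a harmless background process instead of an Oracle server process; the `sleep` below stands in for the dedicated server found in v$process.

```shell
#!/bin/sh
# Sketch: start a throwaway process, then kill -9 it, as one would do
# with the server process id found via v$process.
sleep 300 &
pid=$!
kill -9 "$pid"
wait "$pid" 2>/dev/null || true     # reap it; exit status reflects the kill
kill -0 "$pid" 2>/dev/null && echo "still running" || echo "killed"
```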
ARCHIVED LOGS
Q) How can we control the number of archiver processes?
This is possible by defining a parameter named log_archive_max_processes.
sql> alter system set log_archive_max_processes=3;
Views:
V$database:
Sql> select log_mode from v$database;
Log_mode
------------
ARCHIVELOG
V$archived_log:
It will show all archived logs information.
Sql>select name, dest_id, thread#, sequence#, archived, completion_time from v$archived_log;
V$archive_dest:
Dest_name          name_space   archiver   log_sequence
--------------------------------------------------------
Log_archive_dest   system       arch       0
V$backup_redolog
V$log
V$log_history
Q) What is the role to grant users to allow select privileges on all data dictionary views?
select_catalog_role
Q) What is the role to grant users to allow execute privileges on the packages
and procedures in the data dictionary?
execute_catalog_role
Q) Role to delete records from the system audit table (aud$)?
delete_catalog_role
Q) Role to allow query access to any object in the sys schema?
select any dictionary (system privilege)
AUTHENTICATION
The authentication in which we define users such that the database performs both
identification and authentication of users is said to be database authentication.
The authentication through which we define users such that authentication is performed
by the o/s or a network service is called external authentication.
o/s level authentication: to connect to the database, create the user exactly as the o/s account.
Username
------------------
SYS
SYSTEM
AM
s_max   s_warning   s_current   s_high   users_max
0                                        3
Name               Value
---------------------------------
Fixed Size         1219352
Variable Size      184550632
Database Buffers   159383552
Redo Buffers       2973696
V$INSTANCE
SQL> select instance_number, instance_name, host_name, version,
startup_time, status from v$instance;
I_N   I_N   H_N      VER          startup_time   status
--------------------------------------------------------
1     app   linux2   10.2.0.1.0   24-sep-08      started
SQL>select archiver, logins, shutdown_pending,
database_status, blocked, active_state from v$instance;
Archiver   Logins    Shutdown   Database   Blocked   Active
------------------------------------------------------------
Stopped    allowed   No         active     no        normal
V$log_history.
V$fixed_table
Sql> select view_definition from v$fixed_view_definition
where view_name = 'V$INSTANCE';
Q) What is the use of ignore=y in import?
While importing tables, imp assumes that the table does not exist; if the table
exists it skips it with an error. To ignore this type of error we use ignore=y.
Q) What is the prerequisite to import a user?
The user must exist in the target.
Q) What is the prerequisite to import a database?
The database must exist on the target.
If we are using the same file system on source and target, follow the below steps:
create an empty DB
Q) How can we import a table to the target if the table already exists on the target?
[app@linux6 ~]$exp system/manager file=a.dmp tables=kittu.a log=a.log
[kittu@linux6 ~]$imp system/manager file=a.dmp fromuser=kittu
touser=app ignore=y
Copy the dump file from source to target and also create the user in the target.
Import the schema's data by using the below syntax:
[kittu@linux6 ~]$imp system/manager file=a.dmp
fromuser=kittu touser=kittu
Q) How can we export and import a large database whose size is 500GB?
This is possible by using the filesize and file options in export.
[app@linux6 ~]$exp system/manager filesize=100GB
2. Export the tablespace metadata only. This can be done as the SYSDBA
user only.
[app@linux6 ~]$ exp file=a.dmp transport_tablespace=y
tablespaces=app log=a.log
3. Copy the dump file from source to target; also copy the datafiles related to that
tablespace from source to target.
4. If we are using the same block size on both sides there is no problem; otherwise we need to
mention the block-size-related parameter (db_2k_cache_size) in the init file of the target db
and import the dump file.
[kittu@linux6 ~]$ imp file=a.dmp transport_tablespace=y
tablespaces=app datafiles=/oraAPP/kittu/kittudb/app.dbf
ignore=y log=a.log
SQL> alter tablespace ts01 offline immediate;
Usually when we put a tablespace in offline mode, a checkpoint occurs for that tablespace. But
by using the immediate option, the checkpoint does not occur for the tablespace, so it will
need recovery before being brought back online.
Q) Script for exporting tables and ftp-ing them to another server?
$vi expscript.sh
export ORACLE_SID=${1}
export ORACLE_HOME=`grep -w ${1} /etc/oratab | awk -F ":"
'{print $2}'`
export PATH=$PATH:$ORACLE_HOME/bin
export d=`date '+%d%m%y'`
exp system/manager file=s_db_${d}.dmp log=s_db_${d}.log
tables=${2},${3}
ftp -n <<EOF
open 192.168.0.202
user ramu ramu
cd /oraDB/ramu
prompt off
mput s_db_${d}.dmp s_db_${d}.log
quit
EOF
In case of users:
owner=${2},${3}
$expscript.sh app scott ram
In case of the whole DB:
full=y
$expscript.sh app
Block Size:
The block size for data blocks is set at the time of DB creation. We can also maintain a
database with data blocks having multiple block sizes.
If my DB is made with 8k, the db cache holds 8k blocks only. In order to use
2k (or 4k) blocks we need to add the below parameter in the init.ora file:
db_2k_cache_size=50m (2k blocks)
This statement allots 50m for 2k blocks in the db cache;
it adds additional space for 2k blocks in the db cache.
4k - db_4k_cache_size
8k - db_8k_cache_size
After adding the above parameters we can create a TS with a different block size as below:
Create tablespace ts001 datafile '/oraAPP/app/appdata/ts001.dbf'
size 10m blocksize 2k;
The data in the db cache is flushed using the LRU (least recently used) algorithm.
The advantage of a bigger block size is that more data is retrieved at a time.
The disadvantage of a bigger block size is that more data is flushed out of the db cache.
Q) What is the package which validates the username/passwd when we use export/import?
dbms_plugts.checkuser
Q) How can we bounce a listener?
$lsnrctl reload <name>
It is not meant for local DBAs, only for distributed DBAs.
Sysoper is a public user.
Q) Row chaining?
A row spanned across multiple blocks is called row chaining. It will cause
decreased I/O performance.
Q) Row migration?
A row completely migrates to another block, but its address is maintained in the initial
block.
Both degrade I/O performance.
Q) How can you reload stylesheets?
dbms_metadata_util.load_stylesheets
Q) What do we have to do when import returns an error while importing statistics?
Use exclude=statistics in impdp
Q) How can we export only the structure of a table without data?
By using the option rows=N
Q) What is the file which is used to read the values which are required for the instance, in a
pfile?
The init file (init<sid>.ora)
Options used for import:
All the options in exp are also there in imp. Some of the other options are:
show: just lists the file contents (N)
ignore: ignores the creation errors (N)
fromuser: indicates the list of owner usernames
touser: indicates the list of target usernames
compile: compiles procedures, packages and functions (Y)
datafiles: datafiles to be transported into the database
Q) How can we increase the speed of exp/imp?
By increasing the buffer size.
Q) Why do we need to escape the $ with a backslash?
In a shell script:
sqlplus <<EOF
sys as sysdba
select name from v\$database;
EOF
The $ must be escaped so that the shell does not try to expand v$database as a shell variable.
Now it will prompt us for the username, password (of the schema we wish to back up), dump
file [default name=expdat.dmp], buffer size (4096 [default]) etc.
It backs up the structure, indexes and constraints of the table also.
It will export grants, table data and extents by default.
Non-interactive mode:- We can supply the parameters on the command line when we exp/imp.
Syn: Exp <userid/password> file=file.dmp log=file.log
Instead of typing the parameters on the command line, you may use a parameter file where
the parameters are stored.
Make all the inputs in the file.
Naming convention:- <file_name>.par
Syntax:- parfile=<name>.par
Syntax for exp/imp: $exp parfile=<name>.par
If the data is exported on one system and imported on another, imp must be the newer
version. If something needs to be exported from 10g into 9i, it must be exported with the
9i exp.
In order to use exp/imp the catexp.sql script must have been run;
it is called by catalog.sql.
The utilities used for export and import are exp and imp.
Exp: scans and reads the information of objects from the database and copies it into a
dump file at the o/s level.
Imp: scans and reads the information of the dump file and copies it into the database.
database level
user level
$ tkprof <tracefile>
To remove old files:
$ find . -name '<x>' -mtime +<n> -exec rm -rf {} +
Go to the udump location and convert the trace file from raw format to readable format by
using tkprof.