
DBA Daily Activities

Oracle Database Health Check Scripts

Hi All,
DBAs are often asked to check the health of the database. The health of the database can be checked in various ways. It includes:
SL No  Monitoring Scope                                   Current Status
OS level:
1      Physical memory / load                             Load normal; load averages: 0.35, 0.37, 0.36
2      OS space threshold (archive, ora_dump etc.)        Sufficient space available
3      Top 10 processes consuming memory                  No process using exceptionally high memory
4      Free volumes available                             Sufficient disk space is available on the mount points
5      Filesystem space                                   Under normal threshold
Database level:
6      Check extents / proactive space addition           Space is being regularly added
7      Check alert log for ORA- and warn messages
8      Major wait events (latch/enqueue/lib cache pin)    No major wait events
9      Max sessions
10     Long-running jobs                                  6 inactive sessions running for more than 8 hrs
11     Invalid objects                                    185
12     Analyze jobs (once a week)                         Done on 20-JAN-2008, time 06:00:06
13     Temp usage / rollback segment usage                Normal
14     Nologging indexes
15     Hot backup / cold backup                           Went fine
16     Redo generation                                    Normal
17     PQ processes                                       Normal
18     I/O generation                                     Under normal threshold
19     2PC pending transactions                           0
DR / backup:
1      Sync arch                                          Normal
2      Purge arch                                         Normal
3      Recovery status                                    Normal
20     Database health check script                       Shows locks and archive generation details
In-detail database health check:
OPERATING SYSTEM:
1) Physical memory / load:

1) free: The free command displays the amount of total, free, and used physical memory (RAM) in the system, as well as information on shared memory, buffers, cached memory, and swap space used by the Linux kernel.
Usage:
$ free -m
2) vmstat: vmstat reports virtual memory statistics, with information about processes, swap, free, buffer and cache memory, paging space, disk I/O activity, traps, interrupts, context switches, and CPU activity.
Usage:
$ vmstat 5
3) top: The top command displays a dynamic, real-time view of the running tasks managed by the kernel in a Linux system. Its memory usage stats include live total, used, and free physical memory and swap memory, along with their buffer and cache sizes.
Usage:
$ top
4) ps: The ps command reports a snapshot of the current active processes, including the percentage of memory used by each process or task running in the system.
Usage:
$ ps aux

2) OS space threshold (archive, ora_dump etc.): Check that OS space is available in all filesystems, especially the locations holding archive logs and Oracle database files. Check on the OS side whether sufficient space is available at all mount points. We can use the below OS commands:
$ df -h
$ du -csh *

3) Top 10 processes consuming memory: We can display the top 10 memory-consuming processes as follows:
ps aux | head -1; ps aux | sort -m
We can also use the top command and press M, which orders the process list by memory usage; with this, the top memory-hogging processes can be identified.

4) Free volumes available: We have to make sure sufficient disk space is available on the mount points of each OS server where the database is up and running.
$ df -h

5) Filesystem space: Under normal threshold. Check the filesystem on the OS side to confirm that sufficient space is available at all mount points.

DATABASE:

6) Check extents / proactive space addition: Check each of the data, index, and temporary tablespaces for extent and block allocation details.

SET LINES 1000
SELECT SEGMENT_NAME, TABLESPACE_NAME, EXTENTS, BLOCKS
FROM DBA_SEGMENTS;

SELECT SEGMENT_NAME, TABLESPACE_NAME, EXTENTS, BLOCKS
FROM DBA_SEGMENTS
WHERE TABLESPACE_NAME = 'STAR01D';

7) Check alert log for ORA- and warn messages: Checking the alert log file regularly is a vital task. In the alert log we have to look for the following:
1) Look for any Oracle-related errors. Open the alert log file with the less or more command and search for ORA-. This will give you the error details and the time of occurrence. In an 11g database we can also look for TNS errors in the alert log file.
2) Look for database-level or tablespace-level changes. Monitor the alert log file for each day's activities happening in the database: bouncing of the database, increases in the size of tablespaces, increases in the size of database parameters.

8) Major wait events (latch/enqueue/lib cache pin): We can check the wait event details with the help of the below queries.

The following query provides clues about whether Oracle has been waiting for library cache activities:

Select sid, event, p1raw, seconds_in_wait, wait_time
From v$session_wait
Where event = 'library cache pin'
And state = 'WAITING';

The below query gives details of user sessions' wait time and state:

SELECT NVL(s.username, '(oracle)') AS username, s.SID, s.serial#,
       sw.event, sw.wait_time, sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.SID = sw.SID
ORDER BY sw.seconds_in_wait DESC;

9) Max sessions: There should not be more than 6 inactive sessions running for more than 8 hours in a database, in order to minimize the consumption of CPU and I/O resources. Active non-system sessions can be listed as follows:

SELECT s.sid, s.serial#, s.username, s.osuser, s.machine,
       UPPER(s.program) program, s.status, s.logon_time
FROM v$session s
WHERE s.username IS NOT NULL
  AND NVL(s.osuser, 'x') <> 'SYSTEM'
  AND s.type <> 'BACKGROUND'
  AND s.status = 'ACTIVE'
ORDER BY program;

a) Users' and sessions' CPU consumption can be obtained by the below query:

set lines 1000
select ss.username, se.SID, VALUE/100 cpu_usage_seconds
from v$session ss, v$sesstat se, v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and NAME like '%CPU used by this session%'
and se.SID = ss.SID
and ss.status = 'ACTIVE'
and ss.username is not null
order by VALUE desc;

b) Users' and sessions' CPU and I/O consumption can be obtained by the below query:

-- shows day-wise, user-wise, server-process-wise CPU and I/O consumption
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,
       to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,
       s.username,
       decode(nvl(p.background,0), 1, bg.description, s.program) program,
       ss.value/100 CPU,
       si.physical_reads disk_io,
       (trunc(sysdate,'J') - trunc(logon_time,'J')) days,
       round((ss.value/100)/(decode((trunc(sysdate,'J') - trunc(logon_time,'J')), 0, 1,
             (trunc(sysdate,'J') - trunc(logon_time,'J')))), 2) cpu_per_day
from V$PROCESS p, V$SESSION s, V$SESSTAT ss, V$SESS_IO si, V$BGPROCESS bg
where s.paddr = p.addr
and ss.sid = s.sid
and ss.statistic# = 12
and si.sid = s.sid
and bg.paddr(+) = p.addr
and round((ss.value/100),0) > 10
order by 8;

10) Long-running jobs: We can find long-running jobs with the help of the below query:

col username for a20
col message for a50
col remaining for 9999
select username,
       to_char(start_time, 'hh24:mi:ss dd/mm/yy') started,
       time_remaining remaining,
       message
from v$session_longops
where time_remaining = 0
order by time_remaining desc;

11) Invalid objects: We can check the invalid objects with the help of the below query:

select owner||' '||object_name||' '||created||' '||status
from dba_objects
where status = 'INVALID';

12) Analyze jobs (once a week): As a golden rule, we need to analyze the jobs that are running once a week. The below steps can be considered for analyzing jobs.

Analyzing a running job: The status of a job or a task changes several times during its life cycle. A job can have the following as its status:
Scheduled: The job is created and will run at the specified time.
Running: The job is being executed and is in progress.
Initialization Error: The job or step could not be run successfully. If a step in a job fails initialization, the job status is Initialization Error.
Failed: The job was executed but failed.
Succeeded: The job was executed completely.
Stopped: The user canceled the job.
Stop Pending: The user has stopped the job. The already running steps are completing execution.
Suspended: This indicates that the execution of the job is deferred.
Inactive: This status indicates that the target has been deleted.
Reassigned: The owner of the job has changed.
Skipped: The job was not executed at the specified time and has been omitted.

The running jobs can be found with the help of the below query:

select sid, job, instance from dba_jobs_running;
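The 8-hour rule from point 9 earlier reduces to a simple arithmetic filter on LAST_CALL_ET (seconds since the session's last call). A sketch of that filter outside the database, on made-up SID / STATUS / LAST_CALL_ET rows:

```shell
# Flag sessions that have been INACTIVE for more than 8 hours (28800 s).
# The three sample rows below are invented purely for illustration.
printf '%s\n' \
    '101 INACTIVE 90000' \
    '102 ACTIVE   90000' \
    '103 INACTIVE 3600' |
awk '$2 == "INACTIVE" && $3 > 8*3600 { print "review SID", $1 }'
```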

13) Temp usage / rollback segment / PGA usage:

We can get information on temporary tablespace usage with the help of the below query:

Set lines 1000
SELECT b.tablespace,
       ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
       a.sid||','||a.serial# SID_SERIAL,
       a.username,
       a.program
FROM sys.v_$session a, sys.v_$sort_usage b, sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

We can get information on undo tablespace usage with the help of the below query:

set lines 1000
SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
       NVL(s.username, 'None') orauser,
       s.program,
       r.name undoseg,
       t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
FROM sys.v_$rollname r, sys.v_$session s, sys.v_$transaction t, sys.v_$parameter x
WHERE s.taddr = t.addr
AND r.usn = t.xidusn(+)
AND x.name = 'db_block_size';

We can get the PGA usage details with the help of the below query:

select st.sid "SID", sn.name "TYPE",
       ceil(st.value / 1024 / 1024 / 1024) "GB"
from v$sesstat st, v$statname sn
where st.statistic# = sn.statistic#
and st.sid in (select sid from v$session where username like UPPER('&user'))
and upper(sn.name) like '%PGA%'
order by st.value desc;

Enter value for user: STARTXNAPP

14) Validating the backup (hot backup / cold backup): We have to verify that the hot backup / cold backup (or any physical or logical backup) of all the production and non-production databases went fine. Make sure you have valid backups of all the databases. Check the backup locations to make sure the backup completed on time, with the data required for restore and recovery if needed.

We can find the failed jobs and broken jobs details with the help of the below query:

select job||' '||schema_user||' '||broken||' '||failures||' '||what||' '||last_date||' '||last_sec
from dba_jobs;

15) Redo generation / archive logs generation details:

We should make sure there are not frequent log switches happening in a database. If there are frequent log switches, more archive logs are generated, which may decrease the performance of the database; in a production database, depending on the server configuration, log switches could vary between 5 and 20 per hour.

We can get the log switch details with the help of the below queries.

a) Redo log switches, date-wise and hour-wise:

set lines 120
set pages 999
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/

Count of archived logs generated on a particular day, on an hourly basis:

select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
where to_char(first_time,'DD-MON-RR') = '16-AUG-10'
group by to_char(first_time,'DD-MON-RR')
order by 1
/

b) Archive log generation details, day-wise:

select to_char(COMPLETION_TIME,'DD-MON-YYYY'), count(*)
from v$archived_log
group by to_char(COMPLETION_TIME,'DD-MON-YYYY')
order by to_char(COMPLETION_TIME,'DD-MON-YYYY');

c) Archive log count of the day:

select count(*)
from v$archived_log
where trunc(completion_time) = trunc(sysdate);
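The hour-by-hour matrix produced above is just a group-by on the hour of FIRST_TIME. The same bucketing can be sketched outside the database on a plain timestamp listing (the three timestamps below are hypothetical):

```shell
# Count archive-log timestamps ("YYYY-MM-DD HH24:MI:SS", one per line) per hour.
printf '%s\n' \
    '2010-08-16 00:05:12' \
    '2010-08-16 00:40:01' \
    '2010-08-16 13:22:45' |
awk '{ split($2, t, ":"); count[$1 " " t[1]]++ }     # key = date + hour
     END { for (k in count) print k, count[k] }' | sort
```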

16) I/O generation: We can find out CPU and I/O generation details for all the users in the database with the help of the below query:

-- shows I/O per session / CPU in seconds (sessionIOS.sql)
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,
       to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,
       s.username,
       decode(nvl(p.background,0), 1, bg.description, s.program) program,
       ss.value/100 CPU,
       si.physical_reads disk_io,
       (trunc(sysdate,'J') - trunc(logon_time,'J')) days,
       round((ss.value/100)/(decode((trunc(sysdate,'J') - trunc(logon_time,'J')), 0, 1,
             (trunc(sysdate,'J') - trunc(logon_time,'J')))), 2) cpu_per_day
from V$PROCESS p, V$SESSION s, V$SESSTAT ss, V$SESS_IO si, V$BGPROCESS bg
where s.paddr = p.addr
and ss.sid = s.sid
and ss.statistic# = 12
and si.sid = s.sid
and bg.paddr(+) = p.addr
and round((ss.value/100),0) > 10
order by 8;

To know what a session is doing and what kind of SQL it is using:

-- what kind of sql a session is using
set lines 9999
set pages 9999
select s.sid, q.sql_text
from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid
order by piece;

eg: sid=1853

17) Sync arch: In a Data Guard environment we have to check that the primary database is in sync with the standby database. This we can check as follows.

The V$MANAGED_STANDBY view on the standby database site shows the activities performed by both the redo transport and Redo Apply processes in a Data Guard environment:

SELECT PROCESS, STATUS, CLIENT_PROCESS, SEQUENCE# FROM V$MANAGED_STANDBY;

In some situations, automatic gap recovery may not take place and you will need to perform gap recovery manually. For example, you will need to perform gap recovery manually if you are using logical standby databases and the primary database is not available. The following sections describe how to query the appropriate views to determine which log files are missing and perform manual recovery.

On a physical standby database:

To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:

SQL> SELECT * FROM V$ARCHIVE_GAP;

If it displays no rows, the primary database is in sync with the standby database. If it displays any rows, we have to apply the archive logs manually.

After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):

Eg:
SELECT NAME FROM V$ARCHIVED_LOG
WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN 7 AND 10;

Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For example:

SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_8.arc';
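Conceptually, the V$ARCHIVE_GAP check is "find the holes in a sorted list of sequence numbers". A small sketch of that idea outside the database, using the hypothetical registered sequences 6 and 10 (so 7, 8, and 9 are the gap):

```shell
# Print the sequence numbers missing between consecutive registered ones.
# Input: one sorted SEQUENCE# per line (values are illustrative only).
printf '%s\n' 6 10 |
awk 'NR == 1 { prev = $1; next }
     { for (s = prev + 1; s < $1; s++) print "missing SEQUENCE#", s; prev = $1 }'
```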

After you register these log files on the physical standby database, you can restart Redo Apply. The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking Redo Apply from continuing. After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

On a logical standby database:

To determine if there is an archive gap, query the DBA_LOGSTDBY_LOG view on the logical standby database. For example, the following query indicates there is a gap in the sequence of archived redo log files, because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap at the file shown as sequence number 6:

SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
  2> WHERE NEXT_CHANGE# NOT IN
  3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
  4> ORDER BY THREAD#, SEQUENCE#;

THREAD#  SEQUENCE#  FILE_NAME
-------  ---------  -----------------------------------------------
      1          6  /disk1/oracle/dbs/log-1292880008_6.arc
      1         10  /disk1/oracle/dbs/log-1292880008_10.arc

Copy the missing log files, with sequence numbers 7, 8, and 9, to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_10.arc';

After you register these log files on the logical standby database, you can restart SQL Apply. The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL Apply from continuing. After resolving the identified gap and starting SQL Apply, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

Monitoring log file archival information:

Step 1: Determine the current archived redo log file sequence numbers.
Enter the following query on the primary database to determine the current archived redo log file sequence numbers:

SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG WHERE STATUS='CURRENT';

Step 2: Determine the most recent archived redo log file.
Enter the following query at the primary database to determine which archived redo log file contains the most recently transmitted redo data:

SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

Step 3: Determine the most recent archived redo log file at each destination.
Enter the following query at the primary database to determine which archived redo log file was most recently transmitted to each of the archiving destinations:

SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
  2> FROM V$ARCHIVE_DEST_STATUS
  3> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION         STATUS  ARCHIVED_THREAD#  ARCHIVED_SEQ#
------------------  ------  ----------------  -------------
/private1/prmy/lad  VALID                  1            947
standby1            VALID                  1            947

The most recently written archived redo log file should be the same for each archive destination listed. If it is not, a status other than VALID might identify an error encountered during the archival operation to that destination.

Step 4: Find out if archived redo log files have been received.
You can issue a query at the primary database to find out if an archived redo log file was not received at a particular site. Each destination has an ID number associated with it. You can query the DEST_ID column of the V$ARCHIVE_DEST fixed view on the primary database to identify each destination's ID number. Assume the current local destination is 1, and one of the remote standby destination IDs is 2. To identify which log files are missing at the standby destination, issue the following query:

SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
  2> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)
  3> LOCAL WHERE
  4> LOCAL.SEQUENCE# NOT IN
  5> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
  6> THREAD# = LOCAL.THREAD#);

THREAD#  SEQUENCE#
-------  ---------
      1         12
      1         13
      1         14

18) Purge arch: We have to make sure the archive log files are purged safely, or moved to a tape drive or another location, in order to make space for new archive log files in the archive log destination.

19) Recovery status: In order to do a recovery, make sure you have the latest archive logs, so that you can restore and recover if required.

20) MY DATABASE HEALTH CHECK SCRIPT:

/* SCRIPT FOR MONITORING AND CHECKING HEALTH OF DATABASE - USEFUL FOR PRODUCTION DATABASES */

-- SHOWS RUNNING JOBS
set lines 1000
select 'RUNNING JOBS', sid, job, instance
from dba_jobs_running;

-- SHOWS ARCHIVE LOG GENERATION DETAILS, HOURLY AND DATE-WISE
select 'ARCHIVE LOG REPORT', to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/

-- WHAT ALL THE SESSIONS ARE GETTING BLOCKED
select 'SESSIONS BLOCKED', process, sid, blocking_session
from v$session
where blocking_session is not null;

-- WHICH SESSION IS BLOCKING WHICH SESSION
set lines 9999
set pages 9999
select s1.username || '@' || s1.machine || ' ( SID=' || s1.sid || ' ) is blocking ' ||
       s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid = l1.sid
and s2.sid = l2.sid
and l1.BLOCK = 1
and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2;

-- SHOWS BLOCK CHANGES DETAILS AND PHYSICAL READS DETAIL
select a.sid, b.username, a.block_gets, a.consistent_gets,
       a.physical_reads, a.block_changes
from V$SESS_IO a, V$SESSION b
where a.sid = b.sid
and block_changes > 10000
order by block_changes desc;

-- SHOWS I/O PER SESSION / CPU IN SECONDS (sessionIOS.sql)
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,
       to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,
       s.username,
       decode(nvl(p.background,0), 1, bg.description, s.program) program,
       ss.value/100 CPU,
       si.physical_reads disk_io,
       (trunc(sysdate,'J') - trunc(logon_time,'J')) days,
       round((ss.value/100)/(decode((trunc(sysdate,'J') - trunc(logon_time,'J')), 0, 1,
             (trunc(sysdate,'J') - trunc(logon_time,'J')))), 2) cpu_per_day
from V$PROCESS p, V$SESSION s, V$SESSTAT ss, V$SESS_IO si, V$BGPROCESS bg
where s.paddr = p.addr
and ss.sid = s.sid
and ss.statistic# = 12
and si.sid = s.sid
and bg.paddr(+) = p.addr
and round((ss.value/100),0) > 10
order by 8;

-- ACTIVE SESSIONS IN DATABASE
select 'ACTIVE SESSION', sid, serial#, machine, osuser, username, status
from v$session
where username != 'NULL' and status = 'ACTIVE';

-- WHAT SQL A SESSION IS USING
set lines 9999
set pages 9999
select s.sid, q.sql_text
from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid
order by piece;

eg: SID=1844

-- SCRIPT TO IDENTIFY LONG RUNNING STATEMENTS
rem LONGOPS.SQL
rem Long Running Statements
rem Helmut Pfau, Oracle Deutschland GmbH
set linesize 120
col opname format a20
col target format a15
col units format a10
col time_remaining format 99990 heading Remaining[s]
col bps format 9990.99 heading [Units/s]
col fertig format 90.99 heading "complete[%]"
select sid,
       opname,
       target,
       sofar,
       totalwork,
       units,
       (totalwork-sofar)/time_remaining bps,
       time_remaining,
       sofar/totalwork*100 fertig
from v$session_longops
where time_remaining > 0
/

I would like to add one more script, which tells me details regarding the size of the database (used, occupied, and available space) and tablespace usage details, along with the hit ratios of various SGA components, which can be very helpful for monitoring the performance of the databases.

Database_monitor.sql:

ttitle "1. :============== Hit Ratio Information ==================:" skip 2
set linesize 80
clear columns
clear breaks
set pagesize 60 heading off termout off echo off verify off
col val1 new_val lib noprint
select 100*(1-(SUM(Reloads)/SUM(Pins))) val1 from V$LIBRARYCACHE;
col val2 new_val dict noprint
select 100*(1-(SUM(Getmisses)/SUM(Gets))) val2 from V$ROWCACHE;
col val3 new_val phys_reads noprint
select Value val3 from V$SYSSTAT where Name = 'physical reads';
col val4 new_val log1_reads noprint
select Value val4 from V$SYSSTAT where Name = 'db block gets';
col val5 new_val log2_reads noprint
select Value val5 from V$SYSSTAT where Name = 'consistent gets';
col val6 new_val chr noprint
select 100*(1-(&phys_reads / (&log1_reads + &log2_reads))) val6 from DUAL;
col val7 new_val avg_users_cursor noprint
col val8 new_val avg_stmts_exe noprint
select SUM(Users_Opening)/COUNT(*) val7,
       SUM(Executions)/COUNT(*) val8
from V$SQLAREA;
set termout on
set heading off
ttitle center 'SGA Cache Hit Ratios' skip 2
select 'Data Block Buffer Hit Ratio : '||&chr db_hit_ratio,
       ' Shared SQL Pool ',
       ' Cache Hit Ratio : '||&lib lib_hit,
       ' Dictionary Hit Ratio : '||&dict dict_hit,
       ' Shared SQL Buffers (Library Cache) ',
       ' Avg. Users/Stmt : '||&avg_users_cursor||' ',
       ' Avg. Executes/Stmt : '||&avg_stmts_exe||' '
from DUAL;
ttitle off

ttitle "2. :============== Tablespace Usage Information ==================:" skip 2
set linesize 140
col Total format 99999.99 heading "Total space(MB)"
col Used format 99999.99 heading "Used space(MB)"
col Free format 99999.99 heading "Free space(MB)"
break on report
compute sum of Total on report
compute sum of Used on report
compute sum of Free on report
select a.tablespace_name,
       round(a.bytes/1024/1024,2) Total,
       round(nvl(b.bytes,0)/1024/1024,2) Used,
       round(nvl(c.bytes,0)/1024/1024,2) Free,
       round(nvl(b.bytes,0)*100/nvl(a.bytes,1),2) "% Used"
from sys.sm$ts_avail a, sys.sm$ts_used b, sys.sm$ts_free c
where a.tablespace_name = b.tablespace_name(+)
and a.tablespace_name = c.tablespace_name(+);

ttitle "3. :============== Sort Information ==================:" skip 2
select A.Value Disk_Sorts,
       B.Value Memory_Sorts,
       ROUND(100*A.Value/DECODE((A.Value+B.Value),0,1,(A.Value+B.Value)),2) Pct_Disk_Sorts
from V$SYSSTAT A, V$SYSSTAT B
where A.Name = 'sorts (disk)'
and B.Name = 'sorts (memory)';

ttitle "4. :============== Database Size Information ==================:" skip 2
select sum(bytes/1024/1024/1024) Avail from sm$ts_avail
union all
select sum(bytes/1024/1024/1024) Used from sm$ts_used
union all
select sum(bytes/1024/1024/1024) Free from sm$ts_free;
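The buffer cache hit ratio the monitor script derives from V$SYSSTAT is 100 * (1 - physical reads / (db block gets + consistent gets)). The same arithmetic, with sample counter values invented purely for illustration:

```shell
# Recompute the script's &chr value from sample V$SYSSTAT counters.
phys_reads=2000
db_block_gets=40000
consistent_gets=60000
awk -v p="$phys_reads" -v d="$db_block_gets" -v c="$consistent_gets" \
    'BEGIN { printf "Data Block Buffer Hit Ratio : %.1f\n", 100*(1 - p/(d + c)) }'
```

With these numbers the ratio is 98.0; persistently low values suggest the buffer cache is undersized relative to the workload.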