ADRCI – Automatic Diagnostic Repository utility

Automatic Diagnostic Repository Command Interpreter

In addition to the excellent TFA utility (for RAC), ADRCI has been around for some time…

You can use ADRCI to assist in diagnosing events and collecting the necessary data for analysis and debugging with Oracle Support. It can be configured easily to work with multiple installed Oracle homes or against the currently set Oracle home path. This is important, as ADRCI can gather diagnostic data from multiple instance homes or focus on a single set home or component.

To see the available homepaths, you can use the show command below and set the appropriate single homepath if necessary.

adrci> show homes
ADR Homes: 
diag/rdbms/sandprd/SANDPRD
diag/clients/user_oracle/host_822129311_82
diag/tnslsnr/oralinux7/listener_1521
diag/tnslsnr/oralinux7/listener
diag/diagtool/user_oracle/adrci_822129311_82
adrci> set homepath diag/rdbms/sandprd/SANDPRD
adrci> 
adrci> show homes
ADR Homes: 
diag/rdbms/sandprd/SANDPRD

You can perform a range of functions from within ADRCI, such as creating reports, packaging up the details of incidents/problems encountered for Oracle Support diagnosis, and viewing alert logs and trace files, along with some not-so-useful features
such as echoing text to stdout (for the purposes of ADRCI scripts). I’m not entirely sure why this is an ADRCI feature, but it exists nonetheless.
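To be fair, the echo command does make sense once you drive ADRCI non-interactively. A minimal sketch, assuming the homepath used throughout this article (daily_check.adi is a hypothetical script file name):

```sh
# Run a batch of ADRCI commands non-interactively via the 'exec' parameter;
# 'echo' labels the sections of output in the combined run.
adrci exec="set homepath diag/rdbms/sandprd/SANDPRD; echo ALERT TAIL; show alert -tail 20; echo PROBLEMS; show problem"

# Or keep the commands in a file and call it with the 'script' parameter:
adrci script=daily_check.adi
```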

You invoke the ADRCI CLI utility as follows, once your environment variables have been set.

[oracle@oralinux7 ~]$ adrci
ADRCI: Release 12.1.0.2.0 - Production on Fri Dec 30 19:52:07 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
ADR base = "/u01/app/oracle"
adrci>

Health Monitor Reports:

You can view all available Health Monitor run records via ADRCI by entering the ‘show hm_run’ command.

**********************************************************
HM RUN RECORD 163
**********************************************************
 RUN_ID 8779
 RUN_NAME HM_RUN_8779
 CHECK_NAME DB Structure Integrity Check
 NAME_ID 2
 MODE 2
 START_TIME 2016-09-26 02:48:49.069581 +01:00
 RESUME_TIME <NULL>
 END_TIME 2016-09-26 02:48:49.582055 +01:00
 MODIFIED_TIME 2016-09-26 02:48:49.582055 +01:00
 TIMEOUT 0
 FLAGS 0
 STATUS 5
 SRC_INCIDENT_ID 0
 NUM_INCIDENTS 0
 ERR_NUMBER 0
 REPORT_FILE <NULL>

From here, you can locate the run record of the Health Monitor execution you would like to investigate and generate a report.

adrci> create report hm_run HM_RUN_8779
adrci> 
adrci> show report hm_run HM_RUN_8779
<?xml version="1.0" encoding="US-ASCII"?>
<HM-REPORT REPORT_ID="HM_RUN_8779">
 <TITLE>HM Report: HM_RUN_8779</TITLE>
 <RUN_INFO>
 <CHECK_NAME>DB Structure Integrity Check</CHECK_NAME>
 <RUN_ID>8779</RUN_ID>
 <RUN_NAME>HM_RUN_8779</RUN_NAME>
 <RUN_MODE>REACTIVE</RUN_MODE>
 <RUN_STATUS>COMPLETED</RUN_STATUS>
 <RUN_ERROR_NUM>0</RUN_ERROR_NUM>
 <SOURCE_INCIDENT_ID>0</SOURCE_INCIDENT_ID>
 <NUM_INCIDENTS_CREATED>0</NUM_INCIDENTS_CREATED>
 <RUN_START_TIME>2016-09-26 02:48:49.069581 +01:00</RUN_START_TIME>
 <RUN_END_TIME>2016-09-26 02:48:49.582055 +01:00</RUN_END_TIME>
 </RUN_INFO>
 <RUN_PARAMETERS/>
 <RUN-FINDINGS/>
</HM-REPORT>
adrci>

Viewing the Alert log:

Viewing the alert log from ADRCI is a useful touch and I use it often. It can be invoked using the syntax below and opens the alert log in your default editor.
*’set editor’ will allow you to change the default editor within adrci.

adrci> show alert
adrci> show alert -tail 
2016-12-30 19:40:04.265000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/sandprd/SANDPRD/trace/SANDPRD_j001_72239.trc:
ORA-12012: error on auto execute of job "APEX_050000"."ORACLE_APEX_MAIL_QUEUE"
ORA-04063: package body "APEX_050000.WWV_FLOW_MAIL" has errors
2016-12-30 19:45:04.278000 +00:00
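For example, to switch the default editor, or to tail the log without opening an editor at all (the -tail output prints straight to the terminal, and -f follows new entries, tail -f style):

```text
adrci> set editor vim
adrci> show alert -tail 20
adrci> show alert -tail -f
```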

Viewing Trace Files:

For deeper diagnosis, you can refer to the trace files mentioned in an incident, a problem, or the alert log above and view them via ADRCI; for example…

adrci> show trace /u01/app/oracle/diag/rdbms/sandprd/SANDPRD/trace/SANDPRD_j001_101550.trc
/u01/app/oracle/diag/rdbms/sandprd/SANDPRD/trace/SANDPRD_j001_101550.trc
----------------------------------------------------------
LEVEL PAYLOAD 
----- ------------------------------------------------------------------------------------------------------------------------------------------------ 
 Trace file /u01/app/oracle/diag/rdbms/sandprd/SANDPRD/trace/SANDPRD_j001_101550.trc
 Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
 and Unified Auditing options
 ORACLE_HOME = /u01/app/oracle/product/12102
 System name: Linux
 Node name: oralinux7
 Release: 3.8.13-118.4.1.el7uek.x86_64
 Version: #2 SMP Tue Mar 8 10:22:24 PST 2016
 Machine: x86_64
 Instance name: SANDPRD
 Redo thread mounted by this instance: 1
 Oracle process number: 46
 Unix process pid: 101550, image: oracle@oralinux7 (J001)
<<<<< Output truncated >>>>

 

Packaging up problems/incidents for an SR:

IPS is the Incident Packaging Service. It allows you to gather and collate the incident diagnostics related to a particular problem/incident and zip these up for ease of collection and submission to Oracle Support.

You can display the listing of historical incidents or problems using the appropriate show command syntax.

adrci> show problem
ADR Home = /u01/app/oracle/diag/rdbms/sandprd/SANDPRD:
***********************************************************************
PROBLEM_ID PROBLEM_KEY LAST_INCIDENT LASTINC_TIME 
-------------------- ----------------------------------------------------------- -------------------- ---------------------------------------- 
3 ORA 1578 110649 2016-07-09 03:20:21.758000 +01:00 
2 ORA 4031 148883 2016-09-22 05:46:43.098000 +01:00 
1 ORA 700 [kskvmstatact: excessive swapping observed] 165764 2016-12-29 22:04:40.419000 +00:00 
3 rows fetched

A ‘show incident’ will display all present incident records, which may be a long listing of events.

There are various methods for then grouping incidents by number or by problem. A problem in this scenario may consist of multiple incidents occurring on a repeating basis but all relating to the same underlying fault.

You can, however, drill down into individual incident IDs by using the show syntax below, for example.

adrci> show incident -mode detail -p "incident_id=165764"
ADR Home = /u01/app/oracle/diag/rdbms/sandprd/SANDPRD:
*************************************************************************
**********************************************************
INCIDENT INFO RECORD 1
**********************************************************
 INCIDENT_ID 165764
 STATUS ready
 CREATE_TIME 2016-12-29 22:04:40.419000 +00:00
 PROBLEM_ID 1
 CLOSE_TIME <NULL>
 FLOOD_CONTROLLED none
 ERROR_FACILITY ORA
 ERROR_NUMBER 700
 ERROR_ARG1 kskvmstatact: excessive swapping observed
 ERROR_ARG2 <NULL>
 ERROR_ARG3 <NULL>
 ERROR_ARG4 <NULL>
 ERROR_ARG5 <NULL>
 ERROR_ARG6 <NULL>
 ERROR_ARG7 <NULL>
 ERROR_ARG8 <NULL>
 ERROR_ARG9 <NULL>
 ERROR_ARG10 <NULL>
 ERROR_ARG11 <NULL>
 ERROR_ARG12 <NULL>
 SIGNALLING_COMPONENT VOS
 SIGNALLING_SUBCOMPONENT <NULL>
 SUSPECT_COMPONENT <NULL>
 SUSPECT_SUBCOMPONENT <NULL>
 ECID <NULL>
 IMPACTS 0
 PROBLEM_KEY ORA 700 [kskvmstatact: excessive swapping observed]
 FIRST_INCIDENT 14473
 FIRSTINC_TIME 2016-03-19 23:48:46.222000 +00:00
 LAST_INCIDENT 165764
 LASTINC_TIME 2016-12-29 22:04:40.419000 +00:00
 IMPACT1 0
 IMPACT2 0
 IMPACT3 0
 IMPACT4 0
 KEY_NAME Client ProcId
 KEY_VALUE oracle@oralinux7.76118_140660454380160
 KEY_NAME SID
 KEY_VALUE 9.23553
 KEY_NAME Service
 KEY_VALUE SYS$BACKGROUND
 KEY_NAME ProcId
 KEY_VALUE 9.1
 OWNER_ID 1
 INCIDENT_FILE /u01/app/oracle/diag/rdbms/sandprd/SANDPRD/trace/SANDPRD_dbrm_76118.trc
 OWNER_ID 1
 INCIDENT_FILE /u01/app/oracle/diag/rdbms/sandprd/SANDPRD/incident/incdir_165764/SANDPRD_dbrm_76118_i165764.trc
1 rows fetched
adrci>

You can generate a package or zip output for Oracle Support by means of the IPS commands below, pertaining to particular incidents or problems. Here is an example specific to problem ID 1 as displayed in the output of the ‘show problem’ command.

adrci> ips create package problem 1
Created package 1 based on problem id 1, correlation level typical
adrci>

This has not yet created your output zip file for transfer but has prepped the information for generation. When creating your zip output file, you specify the package identifier as noted in the output above and also provide the destination output directory.

adrci> ips generate package 1 in "/tmp"
Generated package 1 in file /tmp/ORA700ksk_20161230203045_COM_1.zip, mode complete
adrci> 
adrci> host "ls -lrt /tmp/ORA700ksk_20161230203045_COM_1.zip" 
-rw-r--r--. 1 oracle oinstall 24020161 Dec 30 20:32 /tmp/ORA700ksk_20161230203045_COM_1.zip

And the above displays your packaged-up “problem” zipfile for analysis by support.

Alternatively, you can create a packaged output by specifying an individual incident if that is preferred.

adrci> ips create package incident 165764
Created package 2 based on incident id 165764, correlation level typical
adrci> ips generate package 2 in "/tmp"
Generated package 2 in file /tmp/ORA700ksk_20161230203545_COM_1.zip, mode complete
adrci>

You can also view the information and contained files within a package directly from ADRCI if you wish:

adrci> ips show package 2
DETAILS FOR PACKAGE 2:
 PACKAGE_ID 2
 PACKAGE_NAME ORA700ksk_20161230203545
 PACKAGE_DESCRIPTION 
 DRIVING_PROBLEM 1
 DRIVING_PROBLEM_KEY ORA 700 [kskvmstatact: excessive swapping observed]
 DRIVING_INCIDENT 165764
 DRIVING_INCIDENT_TIME 2016-12-29 22:04:40.419000 +00:00
 STATUS Generated (4)
 CORRELATION_LEVEL Typical (2)
 PROBLEMS 1 main problems, 1 correlated problems
 INCIDENTS 1 main incidents, 16 correlated incidents
 INCLUDED_FILES 350
 SEQUENCES Last 1, last full 1, last base 0
 UNPACKED FALSE
 CREATE_TIME 2016-12-30 20:35:45.772045 +00:00
 UPDATE_TIME 2016-12-30 20:35:47.037214 +00:00
 BEGIN_TIME N/A
 END_TIME N/A
 FLAGS 0
adrci>

And to list all the files included in the package….

adrci> ips show files package 2
 FILE_ID 1
 FILE_LOCATION <ADR_HOME>/incident/incdir_14473
 FILE_NAME SANDPRD_dbrm_5067_i14473.trc
 LAST_SEQUENCE 1
 EXCLUDE Included

FILE_ID 2
 FILE_LOCATION <ADR_HOME>/incident/incdir_14473
 FILE_NAME SANDPRD_dbrm_5067_i14473.trm
 LAST_SEQUENCE 1
 EXCLUDE Included
<<<< Output truncated >>>>

Note: Before taking this newly created package zip file and sending it on its merry way to Oracle for further troubleshooting, you should finalize the package. Per the documentation, finalizing correlates additional trace files, alert logs and so on, and adds these into the package.

adrci> ips finalize package 2
Finalized package 2
adrci>

TDE – Troublesome disable of encryption

Having previously spent some time exploring TDE in version 12c, the time came for me to disable and remove it, and I initially did not give any great thought as to how easy or difficult this would be.

TDE encrypts the data stored in data files, with the encryption keys kept secure in a wallet/keystore.

So, what I was hoping to do was disable TDE: remove any encrypted objects and remove the wallet/keystore location, in essence disabling TDE entirely. Sounds okay, right? Nope.

This appeared to work quite successfully initially, but it’s admittedly not as easy as that.

When I later did some testing and tried to restart and recreate an encryption wallet, I noticed an error.

SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "Easy2rem";
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "Easy2rem"
*
ERROR at line 1:
ORA-28374: typed master key not found in wallet;

From looking through some of the MOS documentation [Doc ID 1301365.1, Doc ID 1228046.1, Doc ID 1241925.1], it transpires that even though the objects were created encrypted and then subsequently removed along with the wallet, the redo logs still retain references to the encryption data.

You need to perform a full log switch through all groups to ensure this information is purged from the redo logs after you disable TDE (and a restart of the database is probably recommended at this point).
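As a rough sketch of that log-switch cycle (assuming three redo log groups; check v$log for the actual count on your system):

```sql
-- How many redo log groups do we have?
SELECT COUNT(*) FROM v$log;

-- Switch once per group so no group still references the old key data
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;
```

ALTER SYSTEM ARCHIVE LOG CURRENT is an alternative that also waits for archiving of the current log to complete.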

In fact, per the Oracle documentation, it is possible to recreate the TDE wallet, but it should only be attempted in extreme circumstances, and for goodness’ sake, don’t lose the wallet or the wallet password.

So I stumbled across this gem/nugget of information below…
“As of 12.1.0.2 If the key associated with the SYSTEM, SYSAUX or UNDO tablespaces is not present in the wallet you cannot associate a new master key with the database (i.e. you cannot activate that master key for the database) unless you set a hidden parameter”

SQL> administer key management use key 'AUQukK/ZR0/iv26nuN9vIqcAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' identified by "welcome1" with backup;
administer key management use key 'AUQukK/ZR0/iv26nuN9vIqcAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' identified by "welcome1" with backup
*
ERROR at line 1:
ORA-28374: typed master key not found in wallet
SQL> alter system set "_db_discard_lost_masterkey"=true;
System altered.
SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "Easy2rem";
System altered.
SQL>

And now we have successfully recreated a wallet keystore, but as suggested, it’s probably not wise to attempt this except in extreme circumstances.

TDE – Test Drive Encryption

So I had been testing using TDE (Transparent Database Encryption) in 12c with a NON-CDB database with some success…

I followed the summary steps below in order to configure it. It’s a quick and dirty guide to TDE…

First, we create a wallet (okay, okay, keystore!) destination directory.

[oracle@oralinux7 admin]$ mkdir /u01/app/oracle/product/wallet
[oracle@oralinux7 ~]$ cd /u01/app/oracle/product/wallet
[oracle@oralinux7 wallet]$ ls -lrt 
total 0

Then we make some minor modifications to the sqlnet.ora file to designate the corresponding wallet location.

[oracle@oralinux7 admin]$ cat $ORACLE_HOME/network/admin/sqlnet.ora 
NAMES.DIRECTORY_PATH= (TNSNAMES)
ENCRYPTION_WALLET_LOCATION=
 (SOURCE=(METHOD=FILE)
 (METHOD_DATA=
 (DIRECTORY=/u01/app/oracle/product/wallet)))

Now we will generate the actual wallet and master encryption key…

SQL> host ls -lrt /u01/app/oracle/product/wallet
total 0
SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "Easy2rem";
System altered.
SQL> host ls -lrt /u01/app/oracle/product/wallet
total 4
-rw-r--r--. 1 oracle oinstall 2840 Sep 26 22:05 ewallet.p12

There is the newly created wallet file displayed above. Grrrreat!

Now we can open and close the TDE wallet as required (note: manually!) by using the “old” ‘alter system…’ commands below, rather than the new 12c-specific ‘administer key management…’ commands…

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "Easy2rem";
SQL> ALTER SYSTEM SET ENCRYPTION WALLET CLOSE IDENTIFIED BY "Easy2rem";
 
Please note that the password used with the preceding commands is the same one you used when creating the master encryption key. This is the password that opens the wallet and makes the master encryption key accessible for use.

We can check the current status of the wallet by querying the view below.

SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_OR FULLY_BAC CON_ID
-------------------- -------------------------------------------------- ------------------------------ -------------------- --------- --------- ----------
FILE /u01/app/oracle/product/wallet/ OPEN PASSWORD SINGLE NO 0

 

From this point, we can create a corresponding encrypted tablespace (AES192 is the default encryption algorithm, with a 192-bit key length)…

CREATE TABLESPACE securespace
 DATAFILE 
 SIZE 50M
 ENCRYPTION
 DEFAULT STORAGE(ENCRYPT);

And a table in that encrypted tablespace…

CREATE TABLE APP.customer_payment_info
 (first_name VARCHAR2(11),
 last_name VARCHAR2(10),
 order_number NUMBER(5),
 credit_card_number VARCHAR2(16),
 active_card VARCHAR2(3))TABLESPACE securespace;
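To sanity-check the result, the data dictionary confirms that the tablespace is flagged as encrypted and which algorithm is in use (SECURESPACE is the tablespace created above):

```sql
SELECT tablespace_name, encrypted
  FROM dba_tablespaces
 WHERE tablespace_name = 'SECURESPACE';

SELECT ts#, encryptionalg, encryptedts
  FROM v$encrypted_tablespaces;
```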

And lastly, you don’t want to be manually opening and closing that encryption wallet, so what you now need is auto-login for the wallet/keystore.

[oracle@oralinux7 admin]$ $ORACLE_HOME/bin/orapki wallet create -wallet /u01/app/oracle/product/wallet -auto_login -pwd "Easy2rem";
Oracle PKI Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.
[oracle@oralinux7 admin]$ ls -lrt /u01/app/oracle/product/wallet
total 8
-rw-r--r--. 1 oracle oinstall 2840 Sep 26 22:05 ewallet.p12
-rw-------. 1 oracle oinstall 2885 Sep 26 22:26 cwallet.sso

The auto-login file (cwallet.sso) is created as shown above.

Disappearing AWR Snapshots

AWR snapshot, where art thou?

I came across an unusual AWR snapshot ‘strangeness’ recently. There is a bug/anomaly/nuance whereby if you take a manual snapshot within the second half of the automatic interval period, the automatically scheduled snap is not initiated. This is something I did not know.

There is soon to be a bug/MOS document in reference to this behaviour. But it’s a nice bit of trivia to know about if a colleague ever starts wondering where his/her expected snapshot disappeared to.

I am going to be testing manual snapshots with an automatic 10-minute interval:

SYS @ SANDPRD > execute dbms_workload_repository.modify_snapshot_settings
(interval=>10);
PL/SQL procedure successfully completed.

SYS @ SANDPRD > select snap_interval, retention from dba_hist_wr_control;
SNAP_INTERVAL   RETENTION
--------------------------------- ---------------------------------------------------------------------------
+00000 00:10:00.0   +00030 00:00:00.0
1 row selected.
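(When finished testing, the same procedure puts the interval back; values are in minutes, and the default is 60. Retention can be adjusted in the same call if needed.)

```sql
-- Restore the default hourly snapshot interval after testing
execute dbms_workload_repository.modify_snapshot_settings(interval=>60);
```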

Let’s check the existing historical snapshots already created…

SYS @ SANDPRD > select * from ( select snap_id,begin_interval_time, 
end_interval_time from dba_hist_snapshot order by snap_id DESC ) 
where rownum < 10;
 SNAP_ID BEGIN_INTERVAL_TIME END_INTERVAL_TIME
---------- --------------------------------- ---------------------------------------------------------------------------
 285 06-AUG-16 02.20.13.601 AM 06-AUG-16 02.30.17.148 AM
 284 06-AUG-16 02.08.23.640 AM 06-AUG-16 02.20.13.601 AM

Now, I am going to take a manual snapshot (Note the timing)…

SYS @ SANDPRD > select sysdate from dual;
SYSDATE
-------------------
06/08/2016 02:34:33
1 row selected.
SYS @ SANDPRD > execute dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
SYS @ SANDPRD >

Let’s check the historical snapshots again; we can see our manually created snapshot with an end interval time of 02:34…

SYS @ SANDPRD > select * from ( select snap_id,begin_interval_time, 
end_interval_time from dba_hist_snapshot order by snap_id DESC ) 
where rownum < 10;
 SNAP_ID BEGIN_INTERVAL_TIME END_INTERVAL_TIME
---------- --------------------------------- ---------------------------------------------------------------------------
 286 06-AUG-16 02.30.17.148 AM 06-AUG-16 02.34.36.895 AM <--- Manual
 285 06-AUG-16 02.20.13.601 AM 06-AUG-16 02.30.17.148 AM
 284 06-AUG-16 02.08.23.640 AM 06-AUG-16 02.20.13.601 AM

This means that the manually created snapshot occurred in the first half of our scheduled 10-minute interval.

Now let’s wait a few more minutes and see what happens…
Re-checking the list of historical snapshots, we can see the expected snap at the ten-minute interval of 02:40 was successfully created. All is well.

SYS @ SANDPRD > select * from ( select snap_id,begin_interval_time, 
end_interval_time from dba_hist_snapshot order by snap_id DESC ) 
where rownum < 10;
 SNAP_ID BEGIN_INTERVAL_TIME END_INTERVAL_TIME
---------- --------------------------------- ---------------------------------------------------------------------------
 287 06-AUG-16 02.34.36.895 AM 06-AUG-16 02.40.20.741 AM     <-- Expected
 286 06-AUG-16 02.30.17.148 AM 06-AUG-16 02.34.36.895 AM
 285 06-AUG-16 02.20.13.601 AM 06-AUG-16 02.30.17.148 AM
 284 06-AUG-16 02.08.23.640 AM 06-AUG-16 02.20.13.601 AM

But now let’s try the same thing again with an ever-so-slight change in timing. This time we execute a manual snapshot (at 02:45:57), notably after the halfway mark of the automatic interval has passed…

SYS @ SANDPRD > select sysdate from dual;
SYSDATE
-------------------
06/08/2016 02:45:57
1 row selected.
SYS @ SANDPRD > execute dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
SYS @ SANDPRD

Now, re-checking our list of snapshots shows the most recent manual invocation.

SYS @ SANDPRD > select * from ( select snap_id,begin_interval_time, end_interval_time from dba_hist_snapshot order by snap_id DESC ) where rownum < 10;
 SNAP_ID BEGIN_INTERVAL_TIME END_INTERVAL_TIME
---------- --------------------------------- ---------------------------------------------------------------------------
 288 06-AUG-16 02.40.20.741 AM 06-AUG-16 02.46.05.968 AM <--- Manual 
 287 06-AUG-16 02.34.36.895 AM 06-AUG-16 02.40.20.741 AM
 286 06-AUG-16 02.30.17.148 AM 06-AUG-16 02.34.36.895 AM
 285 06-AUG-16 02.20.13.601 AM 06-AUG-16 02.30.17.148 AM
 284 06-AUG-16 02.08.23.640 AM 06-AUG-16 02.20.13.601 AM

So we should (logically) expect an automatically generated snapshot to take place at 02:50? Hmm, let’s wait and check…

SYS @ SANDPRD > select sysdate from dual; 
SYSDATE
-------------------
06/08/2016 02:53:03
1 row selected.
SYS @ SANDPRD > select * from ( select snap_id,begin_interval_time, end_interval_time from dba_hist_snapshot order by snap_id DESC ) where rownum < 10;
 SNAP_ID BEGIN_INTERVAL_TIME END_INTERVAL_TIME
---------- --------------------------------- ---------------------------------------------------------------------------
 288 06-AUG-16 02.40.20.741 AM 06-AUG-16 02.46.05.968 AM
 287 06-AUG-16 02.34.36.895 AM 06-AUG-16 02.40.20.741 AM
 286 06-AUG-16 02.30.17.148 AM 06-AUG-16 02.34.36.895 AM
 285 06-AUG-16 02.20.13.601 AM 06-AUG-16 02.30.17.148 AM
 284 06-AUG-16 02.08.23.640 AM 06-AUG-16 02.20.13.601 AM

Wait, where is our expected 02:50 snapshot?

Re-checking again a few minutes later to be sure… still not there.

SYS @ SANDPRD > select * from ( select snap_id,begin_interval_time, end_interval_time from dba_hist_snapshot order by snap_id DESC ) where rownum < 10;
 SNAP_ID BEGIN_INTERVAL_TIME END_INTERVAL_TIME
---------- --------------------------------- ---------------------------------------------------------------------------
 289 06-AUG-16 02.46.05.968 AM 06-AUG-16 03.00.27.015 AM
 288 06-AUG-16 02.40.20.741 AM 06-AUG-16 02.46.05.968 AM
 287 06-AUG-16 02.34.36.895 AM 06-AUG-16 02.40.20.741 AM

So, the 02:50 scheduled snapshot did not take place because we executed a manual snapshot within the second half of the interval schedule. A little odd, but notable behaviour from AWR.

DataPump Specificity

DataPump Filters

I recently had the opportunity to work on a problem and a neat solution cropped up with respect to DataPump export options.

The issue at the core of the problem, on this occasion, was scale. There were tens of schemas impacted, so running multiple jobs on a per-schema basis was not preferred, and there were tens of thousands of tables impacted per schema. The table names differed slightly per schema but followed a similar naming convention.

What I was searching for was a method to capture all the impacted tables from every schema in one job.

DataPump was the tool selected, as it allows use of the INCLUDE clause, and we can apply a filter to the criteria for table inclusion.

Note: I intentionally formatted the command syntax below to isolate the INCLUDE option onto one line. This is just so I can easily escape any/all of the special characters. You could also use a parameter file to call from expdp if you don’t want to be bothered with escaping the special characters.

If we had to explicitly name all the tables per schema, it would have been some long listing…

expdp \'/ as sysdba\' schemas=HR directory=DATA_PUMP_DIR dumpfile=MYDUMP logfile=LOGGER.TXT \
INCLUDE\=TABLE\:\"IN \(\'EMPLOYEES\',\'DEPARTMENTS\'\)\"

We could have used a static listing file to feed into DataPump, but as the tables were not completely consistent across schemas, this would have been more work than I’d like.

What I really wanted to demonstrate was something along the lines of the below, which works with a static listing of named schemas.
*In my real-world scenario, I used FULL=y instead of the SCHEMAS parameter.

## Export a listing of defined schemas with SQL selective INCLUDE clause 
expdp \'/ as sysdba\' schemas=HR,OE directory=DATA_PUMP_DIR dumpfile=MYDUMP4 logfile=LOGGER4.TXT \
INCLUDE\=TABLE\:\"IN \(SELECT TABLE_NAME FROM DBA_TABLES WHERE TABLE_NAME LIKE \'CRIT\%\'\)\"

Here it is in action, selectively plucking out the relevant tables from two schemas…

[oracle@oralinux7 dpdump]$ expdp \'/ as sysdba\' schemas=HR,OE directory=DATA_PUMP_DIR dumpfile=MYDUMP4 logfile=LOGGER4.TXT \
> INCLUDE\=TABLE\:\"IN \(SELECT TABLE_NAME FROM DBA_TABLES WHERE TABLE_NAME LIKE \'CRIT\%\'\)\"
Export: Release 12.1.0.2.0 - Production on Sat Jul 16 04:38:11 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_SCHEMA_02": "/******** AS SYSDBA" schemas=HR,OE directory=DATA_PUMP_DIR dumpfile=MYDUMP4 logfile=LOGGER4.TXT INCLUDE=TABLE:"IN (SELECT TABLE_NAME FROM DBA_TABLES WHERE TABLE_NAME LIKE 'CRIT%')" 
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."CRITERIA" 5.062 KB 1 rows
. . exported "OE"."CRITICAL" 5.062 KB 1 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_02" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_SCHEMA_02 is:
 /u01/app/oracle/admin/SANDPRD/dpdump/MYDUMP4.dmp
Job "SYS"."SYS_EXPORT_SCHEMA_02" successfully completed at Sat Jul 16 04:38:42 2016 elapsed 0 00:00:23
[oracle@oralinux7 dpdump]$

So, this method of using the INCLUDE clause opens up a number of options for selective filtering on DataPump operations. Here is the command again without the escaping :). It makes it easier to read. Quite useful…

expdp \'/ as sysdba\' schemas=HR,OE directory=DATA_PUMP_DIR dumpfile=MYDUMP4 logfile=LOGGER4.TXT \
INCLUDE=TABLE:"IN (SELECT TABLE_NAME FROM DBA_TABLES WHERE TABLE_NAME LIKE 'CRIT%')"
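For completeness, the same unescaped syntax can live in a parameter file (include.par is a hypothetical name); parfile values are read as-is, so no shell escaping is required:

```sh
$ cat include.par
schemas=HR,OE
directory=DATA_PUMP_DIR
dumpfile=MYDUMP4
logfile=LOGGER4.TXT
INCLUDE=TABLE:"IN (SELECT TABLE_NAME FROM DBA_TABLES WHERE TABLE_NAME LIKE 'CRIT%')"

$ expdp \'/ as sysdba\' parfile=include.par
```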

RMAN Incomplete Recovery: Point-In-Time

So I’d like to run through a simple example involving RMAN incomplete recovery. Restore and recovery is one of those great things that should be practiced often but… just does not get done in reality. And really, consider how much time is spent tweaking and fixing backups and scripts in contrast to actually working on recovery scenarios (which will likely be far more pressured situations).

So, I will be walking through a fairly straightforward point-in-time, incomplete recovery scenario via RMAN, making use of the fast recovery area.

Backup

First I will take an incremental level 0 backup of my database and then of the archive logs. Tags are just for ease of identification…

RMAN> run {
2> backup incremental level 0 database tag='DB_LEV0_1';
3> backup archivelog all tag='ARC_LEV0_1';
4>  sql 'alter system switch logfile';
5> }

Starting backup at 14-JUL-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=87 device type=DISK
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_example_cgvp7rsq_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_sysaux_cr0r2ckr_.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_system_crc516k8_.dbf
input datafile file number=00007 name=/u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_catalog__cqnqm0nx_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_undotbs1_cr0r2cl8_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_data1_crc3lf20_.dbf
input datafile file number=00006 name=/u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_users_cqmj64lc_.dbf
channel ORA_DISK_1: starting piece 1 at 14-JUL-16
channel ORA_DISK_1: finished piece 1 at 14-JUL-16
piece handle=/u01/app/oracle/fast_recovery_area/SANDPRD/backupset/2016_07_14/o1_mf_nnnd0_DB_LEV0_1_crg33yqq_.bkp tag=DB_LEV0_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
Finished backup at 14-JUL-16
Starting backup at 14-JUL-16
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=1 RECID=439 STAMP=917062979
input archived log thread=1 sequence=2 RECID=435 STAMP=917062977
input archived log thread=1 sequence=3 RECID=436 STAMP=917062979
input archived log thread=1 sequence=4 RECID=437 STAMP=917062979
input archived log thread=1 sequence=5 RECID=438 STAMP=917062979
input archived log thread=1 sequence=10 RECID=434 STAMP=917062944
input archived log thread=1 sequence=11 RECID=440 STAMP=917144367
input archived log thread=1 sequence=12 RECID=441 STAMP=917146249
input archived log thread=1 sequence=13 RECID=442 STAMP=917153587
channel ORA_DISK_1: starting piece 1 at 14-JUL-16
channel ORA_DISK_1: finished piece 1 at 14-JUL-16
piece handle=/u01/app/oracle/fast_recovery_area/SANDPRD/backupset/2016_07_14/o1_mf_annnn_ARC_LEV0_1_crg353tc_.bkp tag=ARC_LEV0_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 14-JUL-16

Starting Control File and SPFILE Autobackup at 14-JUL-16
piece handle=/u01/app/oracle/fast_recovery_area/SANDPRD/autobackup/2016_07_14/o1_mf_s_917153590_crg3574y_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 14-JUL-16
sql statement: alter system switch logfile
RMAN>

Now we can see the backup has completed successfully and we can proceed to query the backup sets. Below we can see the datafiles backed up in the first backup set 212. We can see the archive logs from sequence 1 up to sequence 13 in backup set 213. We can also see the spfile and controlfile autobackups contained in backup set 214 as we have controlfile autobackup enabled in our RMAN configuration.

RMAN> list backup;
List of Backup Sets
===================
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
212     Incr 0  1.53G      DISK        00:00:28     14-JUL-16      
        BP Key: 212   Status: AVAILABLE  Compressed: NO  Tag: DB_LEV0_1
        Piece Name: /u01/app/oracle/fast_recovery_area/SANDPRD/backupset/2016_07_14/o1_mf_nnnd0_DB_LEV0_1_crg33yqq_.bkp
  List of Datafiles in backup set 212
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_system_crc516k8_.dbf
  2    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_data1_crc3lf20_.dbf
  3    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_sysaux_cr0r2ckr_.dbf
  4    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_undotbs1_cr0r2cl8_.dbf
  5    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_example_cgvp7rsq_.dbf
  6    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_users_cqmj64lc_.dbf
  7    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_catalog__cqnqm0nx_.dbf

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
213     109.38M    DISK        00:00:02     14-JUL-16      
        BP Key: 213   Status: AVAILABLE  Compressed: NO  Tag: ARC_LEV0_1
        Piece Name: /u01/app/oracle/fast_recovery_area/SANDPRD/backupset/2016_07_14/o1_mf_annnn_ARC_LEV0_1_crg353tc_.bkp
  List of Archived Logs in backup set 213
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    1       3980182    13-JUL-16 3981489    13-JUL-16
  1    2       3981489    13-JUL-16 3981516    13-JUL-16
  1    3       3981516    13-JUL-16 3981530    13-JUL-16
  1    4       3981530    13-JUL-16 3981557    13-JUL-16
  1    5       3981557    13-JUL-16 3981968    13-JUL-16
  1    10      3983845    13-JUL-16 3983891    13-JUL-16
  1    11      3983891    13-JUL-16 3999501    14-JUL-16
  1    12      3999501    14-JUL-16 4005689    14-JUL-16
  1    13      4005689    14-JUL-16 4015752    14-JUL-16

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
214     Full    10.08M     DISK        00:00:01     14-JUL-16      
        BP Key: 214   Status: AVAILABLE  Compressed: NO  Tag: TAG20160714T045310
        Piece Name: /u01/app/oracle/fast_recovery_area/SANDPRD/autobackup/2016_07_14/o1_mf_s_917153590_crg3574y_.bkp
  SPFILE Included: Modification time: 14-JUL-16
  SPFILE db_unique_name: SANDPRD
  Control File Included: Ckp SCN: 4015762      Ckp time: 14-JUL-16

Now I am going to create a point in time that I want to return to later. This example is purely for demonstration purposes – a flashback query would obviously be the more appropriate tool here…

APP @ SANDPRD > create table stuff (notes varchar2(50), when date);
Table created.
APP @ SANDPRD > insert into stuff values ('Good data',sysdate);
1 row created.
APP @ SANDPRD > commit;
Commit complete.
APP @ SANDPRD > insert into stuff values ('Bad Data!', sysdate);
1 row created.
APP @ SANDPRD > commit;
Commit complete.

Let's have a look at what we've got:

APP @ SANDPRD > select * from stuff;
NOTES                                              WHEN
-------------------------------------------------- -------------------
Good data                                          14/07/2016 05:19:46
Bad Data!                                          14/07/2016 05:20:53

Now I want to return to the point when our DB contained only the “good” data [14.07.2016 05:20:00]. Let's begin by making sure we can in fact succeed in skipping backwards in time, and determine which backup sets will be needed. By the way – I dislike the RMAN “preview” command syntax; it's much too close to the actual restore syntax we will use… makes me nervous.

Using the “preview” command syntax, below we can see the datafile backups required, the archivelog sequence backups required, and the results of the validate command.

RMAN> run
{
set until time "to_date('Jul 14 2016 05:20:00','Mon DD YYYY HH24:MI:SS')";
restore database preview validate;
}
2> 3> 4> 5>
executing command: SET until clause
Starting restore at 14-JUL-16
using channel ORA_DISK_1
List of Backup Sets
===================
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
212     Incr 0  1.53G      DISK        00:00:28     14-JUL-16      
        BP Key: 212   Status: AVAILABLE  Compressed: NO  Tag: DB_LEV0_1
        Piece Name: /u01/app/oracle/fast_recovery_area/SANDPRD/backupset/2016_07_14/o1_mf_nnnd0_DB_LEV0_1_crg33yqq_.bkp
  List of Datafiles in backup set 212
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_system_crc516k8_.dbf
  2    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_data1_crc3lf20_.dbf
  3    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_sysaux_cr0r2ckr_.dbf
  4    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_undotbs1_cr0r2cl8_.dbf
  5    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_example_cgvp7rsq_.dbf
  6    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_users_cqmj64lc_.dbf
  7    0  Incr 4015710    14-JUL-16 /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_catalog__cqnqm0nx_.dbf
List of Archived Log Copies for database with db_unique_name SANDPRD
=====================================================================
Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
442     1    13      A 14-JUL-16
        Name: /u01/app/oracle/fast_recovery_area/SANDPRD/archivelog/2016_07_14/o1_mf_1_13_crg35337_.arc
443     1    14      A 14-JUL-16
        Name: /u01/app/oracle/fast_recovery_area/SANDPRD/archivelog/2016_07_14/o1_mf_1_14_crg358k6_.arc
recovery will be done up to SCN 4015762
Media recovery start SCN is 4015710
Recovery must be done beyond SCN 4015710 to clear datafile fuzziness
channel ORA_DISK_1: starting validation of datafile backup set
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/SANDPRD/backupset/2016_07_14/o1_mf_nnnd0_DB_LEV0_1_crg33yqq_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/SANDPRD/backupset/2016_07_14/o1_mf_nnnd0_DB_LEV0_1_crg33yqq_.bkp tag=DB_LEV0_1
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:08
using channel ORA_DISK_1
channel ORA_DISK_1: scanning archived log /u01/app/oracle/fast_recovery_area/SANDPRD/archivelog/2016_07_14/o1_mf_1_13_crg35337_.arc
channel ORA_DISK_1: scanning archived log /u01/app/oracle/fast_recovery_area/SANDPRD/archivelog/2016_07_14/o1_mf_1_14_crg358k6_.arc
Finished restore at 14-JUL-16
RMAN>

It's worth noting the recovery points reported above. If you have any difficulty working with the date format mask for the RMAN commands, you can always do a quick conversion to find the corresponding SCN.

For example:

select timestamp_to_scn(to_timestamp
('14/07/2016 05:20:00','dd/mm/yyyy hh24:mi:ss')) from dual;
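If you already have an SCN and want to sanity-check it against wall-clock time, the reverse conversion is available too. A quick sketch – note that SCN_TO_TIMESTAMP only works for SCNs still inside the retained SCN-to-time mapping window (typically the last few days):

```sql
-- Convert a known SCN back to an approximate timestamp
select scn_to_timestamp(4015710) from dual;
```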

Restore and recovery

So, let’s try to restore and recover to that point in the past. First we have to start the instance in MOUNT mode to prepare for the recovery.

RMAN> shutdown immediate;
using target database control file instead of recovery catalog
database closed
database dismounted
Oracle instance shut down
RMAN> startup mount;
connected to target database (not started)
Oracle instance started
database mounted
Total System Global Area     419430400 bytes
Fixed Size                     2925120 bytes
Variable Size                369102272 bytes
Database Buffers              41943040 bytes
Redo Buffers                   5459968 bytes
RMAN>

We can now initiate the RMAN restore and recovery block…

RMAN> run
{
set until time "to_date('Jul 14 2016 05:20:00','Mon DD YYYY HH24:MI:SS')";
restore database ;
recover database ;
sql 'alter database open resetlogs';
}2> 3> 4> 5> 6> 7>
executing command: SET until clause
Starting restore at 14-JUL-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK
flashing back control file to SCN 4015762
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_system_crc516k8_.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_data1_crc3lf20_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_sysaux_cr0r2ckr_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_undotbs1_cr0r2cl8_.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_example_cgvp7rsq_.dbf
channel ORA_DISK_1: restoring datafile 00006 to /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_users_cqmj64lc_.dbf
channel ORA_DISK_1: restoring datafile 00007 to /u01/app/oracle/oradata/SANDPRD/datafile/o1_mf_catalog__cqnqm0nx_.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/SANDPRD/backupset/2016_07_14/o1_mf_nnnd0_DB_LEV0_1_crg33yqq_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/SANDPRD/backupset/2016_07_14/o1_mf_nnnd0_DB_LEV0_1_crg33yqq_.bkp tag=DB_LEV0_1
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 14-JUL-16
Starting recover at 14-JUL-16
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:02
Finished recover at 14-JUL-16
sql statement: alter database open resetlogs
RMAN>

So, let's check our database now that the restore and recovery have succeeded. Note that the database had to be opened with RESETLOGS because this was an incomplete (point-in-time) recovery. Also be sure to note that this creates a new incarnation of the database, affecting the usability of your previous backups – you should take a fresh backup immediately.

[oracle@oralinux7 ~]$ sqlplus -S app/app
select * from stuff;
NOTES                                              WHEN
-------------------------------------------------- ---------
Good data                                          14-JUL-16
1 row selected.

Looks good!
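To confirm the new incarnation created by the RESETLOGS, you can list the incarnations known to the controlfile from within RMAN (a quick check – the incarnation keys and SCNs will of course differ in your environment):

```sql
-- Each OPEN RESETLOGS creates a new incarnation;
-- the row marked CURRENT is the active one
LIST INCARNATION;
```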

ORATop

So, I previously lauded the benefits of using the companion tool TFA for diagnostics collection here: Trace File Analyzer

I recently discovered another benefit of using TFA. There is an MOS tool called oratop which mimics the universal UNIX/Linux ‘top’ utility, displaying top activity in the database. ‘oratop’ is a very simple standalone utility; I can’t say it’s one I would personally reach for when troubleshooting a problem, but it does add a little bit of novelty. 🙂

The tool can be downloaded from MOS: “oratop – Utility for Near Real-time Monitoring of Databases, RAC and Single Instance (Doc ID 1500864.1)”

It can be invoked in standalone mode with the following syntax…

./oratop.exe -i 10 / as sysdba

Just verify that the permissions are correctly set on the executable and that your environment variables are set. You can also pass in a specific database instance identifier along with a refresh interval, and specify either a local bequeath connection or TNS connection info. This is useful as it allows you to invoke the tool locally and monitor remote instances.

What I have learned, though, is that the oratop utility now comes bundled with the latest versions of TFA as part of the support tools bundle. So now we can just invoke oratop from TFA as per below…

oratop

That's quite cool. But like I said, possibly more novelty than anything else…

TFA Trace File Analyzer

TFA = Totally freaking awesome!

Trace File Analyzer is a lightweight collection daemon that greatly simplifies the process of gathering diagnostic logs for troubleshooting. In a RAC environment with multiple nodes (or even on a standalone host), identifying, searching for and capturing all relevant log info can be a time consuming, complex and error prone process – and who is to say what is relevant or not in the initial stages of an investigation?

The TFA client daemon endeavours to collect all relevant logging information from a problem timeframe across ASM, Grid Infrastructure and the Oracle database, including operating system metrics and all assumed-necessary diagnostic logs, in a single execution. It aims to speed up the SR process by fetching all the combined diagnostic info in “one pass”.

A collection of files for analysis can be initiated ad hoc (or scheduled to run automatically) from any cluster node; remote diagnostic info is gathered as well and pushed back to the node that initiated the collection.

So, on to installation – you must run the installer as root. Download the latest version of TFA: Oracle Support Document 1513912.2 (TFA Collector – Tool for Enhanced Diagnostic Gathering)

See below for a sample install, entering the installation directory and specifying a local-node-only install with deferred discovery. Also ensure Perl and Java are set up and working properly, as both tend to be dependencies for a successful installation/configuration.

Help options firstly…

[root@oralinux7 tmp]# ./installTFALite -h
TFA Installation Log will be written to File : /tmp/tfa_install_76796_2016_06_24-03_37_43.log
 Usage for ./installTFALite
 ./installTFALite [-local][-deferdiscovery][-tfabase <install dir>][-javahome <path to JRE>][-silent]
 -local - Only install on the local node
 -deferdiscovery - Discover Oracle trace directories after installation completes
 -tfabase - Install into the directory supplied 
 -javahome - Use this directory for the JRE
 -silent - Do not ask any install questions
 -debug - Print debug tracing and do not remove TFA_HOME on install failure

Installation

[root@oralinux7 tmp]# ./installTFALite -local -deferdiscovery -tfabase /u01/app/oracle/TFA 
TFA Installation Log will be written to File : /tmp/tfa_install_77105_2016_06_24-03_39_38.log
Starting TFA installation
Enter a Java Home that contains Java 1.5 or later : /u01/app/oracle/product/12102/jdk
Running Auto Setup for TFA as user root...
Installing TFA now...
TFA Will be Installed on oralinux7...
TFA will scan the following Directories
++++++++++++++++++++++++++++++++++++++++++++
.----------------------------.
| oralinux7                  |
+-----------------+----------+
| Trace Directory | Resource |
+-----------------+----------+
'-----------------+----------'
Installing TFA on oralinux7:
HOST: oralinux7 TFA_HOME: /u01/app/oracle/TFA/tfa/oralinux7/tfa_home
.------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID |
+-----------+---------------+-------+------+------------+----------------------+
| oralinux7 | RUNNING | 78180 | 5000 | 12.1.2.7.0 | 12127020160303214632 |
'-----------+---------------+-------+------+------------+----------------------'
Running Inventory in All Nodes...
Enabling Access for Non-root Users on oralinux7...
Adding Local Oracle Home Owners to TFA
Sucessfully added 'oracle' to TFA Access list.
.---------------------------------.
| TFA Users in oralinux7 |
+-----------+-----------+---------+
| User Name | User Type | Status |
+-----------+-----------+---------+
| oracle | USER | Allowed |
'-----------+-----------+---------'
Summary of TFA Installation:
.------------------------------------------------------------------.
| oralinux7 |
+---------------------+--------------------------------------------+
| Parameter | Value |
+---------------------+--------------------------------------------+
| Install location | /u01/app/oracle/TFA/tfa/oralinux7/tfa_home |
| Repository location | /u01/app/oracle/TFA/tfa/repository |
| Repository usage | 0 MB out of 2917 MB |
'---------------------+--------------------------------------------'
TFA is successfully installed...

Okay – so how do you interact with TFA?

You can check TFA's running status, stop it, start it, configure it to auto-start and check its current configuration using the “tfactl” command.

[root@oralinux7 bin]# ./tfactl print status 
.-------------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+-----------+---------------+-------+------+------------+----------------------+------------------+
| oralinux7 | RUNNING | 78180 | 5000 | 12.1.2.7.0 | 12127020160303214632 | COMPLETE |
'-----------+---------------+-------+------+------------+----------------------+------------------'
[root@oralinux7 bin]# ./tfactl stop
Stopping TFA from the Command Line
Stopped OSWatcher
TFA-00002 : Oracle Trace File Analyzer (TFA) is not running
TFAmain Force Stopped Successfully
. . . 
Successfully stopped TFA..
[root@oralinux7 bin]# ./tfactl start
Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . . 
Successfully started TFA Process..
. . . . . 
TFA Started and listening for commands
[root@oralinux7 bin]# ./tfactl print status 
.-------------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+-----------+---------------+-------+------+------------+----------------------+------------------+
| oralinux7 | RUNNING | 84495 | 5000 | 12.1.2.7.0 | 12127020160303214632 | COMPLETE |
'-----------+---------------+-------+------+------------+----------------------+------------------'
[root@oralinux7 bin]
[root@oralinux7 bin]# ./tfactl print config 
.---------------------------------------------------------------.
| oralinux7 |
+--------------------------------------------------+------------+
| Configuration Parameter | Value |
+--------------------------------------------------+------------+
| TFA version | 12.1.2.7.0 |
| Automatic diagnostic collection | OFF |
| Alert Log Scan | ON |
| Trimming of files during diagcollection | ON |
| Repository current size (MB) | 0 |
| Repository maximum size (MB) | 2917 |
| Inventory Trace level | 1 |
| Collection Trace level | 1 |
| Scan Trace level | 1 |
| Other Trace level | 1 |
| Max Size of TFA Log (MB) | 50 |
| Max Number of TFA Logs | 10 |
| Max Size of Core File (MB) | 20 |
| Max Collection Size of Core Files (MB) | 200 |
| Automatic Purging | ON |
| Minimum Age of Collections to Purge (Hours) | 12 |
| Minimum Space Free to enable Alert Log Scan (MB) | 500 |
'--------------------------------------------------+------------'

Diag Collection

You can initiate a default collection, covering the last four hours of activity, by entering the command below:

> ./tfactl diagcollect

You can also specify windows (using the ‘from’ and ‘to’ switches) for more precise collections.

See below for a few example syntax methods for initiating a diagnostic collection…
They are pretty self-explanatory 🙂

> ./tfactl diagcollect -all -since 1h
> ./tfactl diagcollect -from "Jun/24/2016 01:00:00" -to "Jun/24/2016 02:30:00"
> ./tfactl diagcollect -for "Jun/24/2016"

So let's run one off to see how it looks…

[root@oralinux7 bin]# ./tfactl diagcollect -for "Jun/24/2016"
Collecting data for all nodes
Scanning files for Jun/24/2016 00:00:00
Collection Id : 20160624035818oralinux7
Repository Location in oralinux7 : /u01/app/oracle/TFA/tfa/repository
Collection monitor will wait up to 30 seconds for collections to start 
2016/06/24 03:58:23 IST : Collection Name : tfa_Fri_Jun_24_03_58_18_IST_2016.zip
2016/06/24 03:58:24 IST : Scanning of files for Collection in progress...
2016/06/24 03:58:24 IST : Collecting extra files...
2016/06/24 03:58:24 IST : Completed collection of extra files...
2016/06/24 03:58:29 IST : Getting list of files satisfying time range [06/23/2016 12:00:00 IST, 06/24/2016 03:58:24 IST]
2016/06/24 03:58:29 IST : Starting Thread to identify stored files to collect
2016/06/24 03:58:29 IST : Getting List of Files to Collect 
2016/06/24 03:58:29 IST : Finished Getting List of Files to Collect
2016/06/24 03:58:29 IST : Collecting ADR incident files...
2016/06/24 03:58:29 IST : Completed Zipping of all files
2016/06/24 03:58:29 IST : Cleaning up temporary files
2016/06/24 03:58:29 IST : Finished Cleaning up temporary files
2016/06/24 03:58:29 IST : Finalizing the Collection Zip File
2016/06/24 03:58:29 IST : Finished Finalizing the Collection Zip File
2016/06/24 03:58:29 IST : Total Number of Files checked : 19
2016/06/24 03:58:29 IST : Total Size of all Files Checked : 1.5MB
2016/06/24 03:58:29 IST : Number of files containing required range : 9
2016/06/24 03:58:29 IST : Total Size of Files containing required range : 1.1MB
2016/06/24 03:58:29 IST : Number of files trimmed : 0
2016/06/24 03:58:29 IST : Total Size of data prior to zip : 1.1MB
2016/06/24 03:58:29 IST : Saved 0kB by trimming files
2016/06/24 03:58:29 IST : Zip file size : 55kB
2016/06/24 03:58:29 IST : Total time taken : 6s
2016/06/24 03:58:29 IST : Completed collection of zip files.

Logs are being collected to: /u01/app/oracle/TFA/tfa/repository/collection_Fri_Jun_24_03_58_18_IST_2016_node_all
/u01/app/oracle/TFA/tfa/repository/collection_Fri_Jun_24_03_58_18_IST_2016_node_all/oralinux7.tfa_Fri_Jun_24_03_58_18_IST_2016.zip

And there we go – we now have an efficiently gathered zip file of diagnostic output in the repository, ready for analysis and to attach to our SR – neat!

Explain it to me one more time…

The explain plan table gives us mere mortals a visual representation of the cost-based optimiser's execution plan of choice. This can easily be plucked out via GUI tools like SQL Developer or TOAD, but the manual methods are always useful to have in the bag.

You can check whether the plan table already exists by checking dba_objects.

col owner format a33
col object_name format a22
SYS @ SANDPRD > select owner, object_name, object_type from dba_objects where object_name = 'PLAN_TABLE';
OWNER                             OBJECT_NAME            OBJECT_TYPE
--------------------------------- ---------------------- ------------------
PUBLIC                            PLAN_TABLE             SYNONYM

You can see that the plan_table entry above is actually a synonym for a SYS object.

col synonym_name format a33
col table_owner format a33
col table_name format a33
SYS @ SANDPRD > select synonym_name, table_owner, table_name from dba_synonyms where synonym_name = 'PLAN_TABLE'; 
SYNONYM_NAME                      TABLE_OWNER                       TABLE_NAME
--------------------------------- --------------------------------- ---------------------------------
PLAN_TABLE                        SYS                               PLAN_TABLE$

This ‘plan_table’ has been auto-created since the 10g days (I believe), so you shouldn't have to create it yourself. For joins, the explain output indicates which table will be accessed first, the join order, how Oracle decides to access the data (full table scan or index scan?), how many rows are estimated to be returned, which join mechanisms are used, and so on.

Using a simple join example – this will be my test query…

SELECT /*THISISMYTEST*/ E.EMPNO, E.ENAME, E.JOB, E.DEPTNO, D.DNAME, D.LOC
FROM EMP E, DEPT D 
WHERE E.DEPTNO=D.DEPTNO
AND
E.SAL BETWEEN 1000 AND 3000
ORDER BY DEPTNO, EMPNO;

There are a number of methods for generating an execution plan – you can either ask for a predicted plan, or display the actual plan chosen for an executed cursor.

EXPLAIN PLAN FOR <statement>
This is the simplest, bare-bones method of getting a “prediction” of a plan.

SCOTT @ SANDPRD > EXPLAIN PLAN FOR 
SELECT /*THISISMYTEST*/ E.EMPNO, E.ENAME, E.JOB, E.DEPTNO, D.DNAME, D.LOC
FROM EMP E, DEPT D 
WHERE E.DEPTNO=D.DEPTNO
AND
E.SAL BETWEEN 1000 AND 3000
ORDER BY DEPTNO, EMPNO;

Explained.

Now you can use DBMS_XPLAN to visualise the explain plan…

col execution_plan format a50
SCOTT @ SANDPRD > select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
Plan hash value: 1736908262
---------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |     9 |   405 |     7  (29)| 00:00:01 |
|   1 |  SORT ORDER BY                |         |     9 |   405 |     7  (29)| 00:00:01 |
|   2 |   MERGE JOIN                  |         |     9 |   405 |     6  (17)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| DEPT    |     4 |    80 |     2   (0)| 00:00:01 |
|   4 |     INDEX FULL SCAN           | PK_DEPT |     4 |       |     1   (0)| 00:00:01 |
|*  5 |    SORT JOIN                  |         |     9 |   225 |     4  (25)| 00:00:01 |
|*  6 |     TABLE ACCESS FULL         | EMP     |     9 |   225 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("E"."DEPTNO"="D"."DEPTNO")
       filter("E"."DEPTNO"="D"."DEPTNO")
   6 - filter("E"."SAL"<=3000 AND "E"."SAL">=1000)
20 rows selected.

The DBMS_XPLAN package includes four display functions (taken directly from the Oracle documentation):

DISPLAY - to format and display the contents of a plan table.
DISPLAY_CURSOR - to format and display the contents of the execution plan of any loaded cursor.
DISPLAY_AWR - to format and display the contents of the execution plan of a stored SQL statement in the AWR.
DISPLAY_SQLSET - to format and display the contents of the execution plan of statements stored in a SQL tuning set.
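Each of these accepts an optional FORMAT argument that controls the verbosity of the output. A quick sketch using standard format modifiers with the DISPLAY function:

```sql
-- Minimal plan plus the predicate section only
select * from table(dbms_xplan.display(format => 'BASIC +PREDICATE'));

-- Full detail for the last statement explained
select * from table(dbms_xplan.display(format => 'ALL'));
```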

So what if you want to examine the actual plan chosen for a previously executed statement?

If you are in a position to re-execute the statement, you can simply do the following..

SCOTT @ SANDPRD > SET SERVEROUTPUT OFF
SCOTT @ SANDPRD > -- EXECUTE YOUR SQL STATEMENT ONCE MORE....
SCOTT @ SANDPRD > SELECT /*THISISMYTEST*/ E.EMPNO, E.ENAME, E.JOB, E.DEPTNO, D.DNAME, D.LOC
FROM EMP E, DEPT D 
WHERE E.DEPTNO=D.DEPTNO
AND
E.SAL BETWEEN 1000 AND 3000
ORDER BY DEPTNO, EMPNO;

     EMPNO ENAME      JOB           DEPTNO DNAME          LOC
---------- ---------- --------- ---------- -------------- -------------
      7782 CLARK      MANAGER           10 ACCOUNTING     NEW YORK
      7934 MILLER     CLERK             10 ACCOUNTING     NEW YORK
      7566 JONES      MANAGER           20 RESEARCH       DALLAS
      7788 SCOTT      ANALYST           20 RESEARCH       DALLAS
      7876 ADAMS      CLERK             20 RESEARCH       DALLAS
      7902 FORD       ANALYST           20 RESEARCH       DALLAS
      7499 ALLEN      SALESMAN          30 SALES          CHICAGO
      7521 WARD       SALESMAN          30 SALES          CHICAGO
      7654 MARTIN     SALESMAN          30 SALES          CHICAGO
      7698 BLAKE      MANAGER           30 SALES          CHICAGO
      7844 TURNER     SALESMAN          30 SALES          CHICAGO
11 rows selected.
SCOTT @ SANDPRD > SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
SQL_ID axfsa0wgkk80p, child number 0
-------------------------------------
SELECT /*THISISMYTEST*/ E.EMPNO, E.ENAME, E.JOB, E.DEPTNO, D.DNAME,
D.LOC FROM EMP E, DEPT D WHERE E.DEPTNO=D.DEPTNO AND E.SAL BETWEEN 1000
AND 3000 ORDER BY DEPTNO, EMPNO
Plan hash value: 1736908262
---------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |       |       |     7 (100)|          |
|   1 |  SORT ORDER BY                |         |     9 |   405 |     7  (29)| 00:00:01 |
|   2 |   MERGE JOIN                  |         |     9 |   405 |     6  (17)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| DEPT    |     4 |    80 |     2   (0)| 00:00:01 |
|   4 |     INDEX FULL SCAN           | PK_DEPT |     4 |       |     1   (0)| 00:00:01 |
|*  5 |    SORT JOIN                  |         |     9 |   225 |     4  (25)| 00:00:01 |
|*  6 |     TABLE ACCESS FULL         | EMP     |     9 |   225 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("E"."DEPTNO"="D"."DEPTNO")
       filter("E"."DEPTNO"="D"."DEPTNO")
   6 - filter(("E"."SAL"<=3000 AND "E"."SAL">=1000))
27 rows selected.

Here you will note that you get the sql_id information displayed at the top of the plan.

You can also use the SQL*Plus autotrace feature:
SET AUTOTRACE ON
And then execute your query once again…
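As a quick sketch of the autotrace route, reusing the test query from earlier (note that autotrace needs the PLUSTRACE role, or SELECT privileges on the underlying V$ views, for the statistics section):

```sql
SET AUTOTRACE ON
SELECT /*THISISMYTEST*/ E.EMPNO, E.ENAME, E.JOB, E.DEPTNO, D.DNAME, D.LOC
FROM EMP E, DEPT D
WHERE E.DEPTNO = D.DEPTNO
AND E.SAL BETWEEN 1000 AND 3000
ORDER BY DEPTNO, EMPNO;
SET AUTOTRACE OFF
```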

If, however, you want to see the execution plan for a statement which has already been executed in the past (and cannot be rerun), you need to locate its sql_id first.

-- First, we have to locate our query's sql_id (hence the use of the comment in the statement)
SET LINES 200
COL SQL_TEXT FORMAT A140
SCOTT @ SANDPRD > SELECT SQL_ID, SUBSTR(SQL_TEXT,1,200) SQL_TEXT
FROM V$SQL
WHERE UPPER(SQL_TEXT) LIKE '%THISISMYTEST%'; 
SQL_ID        SQL_TEXT
------------- --------------------------------------------------------------------------------------------------------------------------------------------
axfsa0wgkk80p SELECT /*THISISMYTEST*/ E.EMPNO, E.ENAME, E.JOB, E.DEPTNO, D.DNAME, D.LOC FROM EMP E, DEPT D WHERE E.DEPTNO=D.DEPTNO AND E.SAL BETWEEN 1000
              AND 3000 ORDER BY DEPTNO, EMPNO

Now to display the execution plan for the cursor query we searched for: axfsa0wgkk80p

SCOTT @ SANDPRD > SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('axfsa0wgkk80p',0));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
SQL_ID axfsa0wgkk80p, child number 0
-------------------------------------
SELECT /*THISISMYTEST*/ E.EMPNO, E.ENAME, E.JOB, E.DEPTNO, D.DNAME,
D.LOC FROM EMP E, DEPT D WHERE E.DEPTNO=D.DEPTNO AND E.SAL BETWEEN 1000
AND 3000 ORDER BY DEPTNO, EMPNO
Plan hash value: 1736908262
---------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |       |       |     7 (100)|          |
|   1 |  SORT ORDER BY                |         |     9 |   405 |     7  (29)| 00:00:01 |
|   2 |   MERGE JOIN                  |         |     9 |   405 |     6  (17)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| DEPT    |     4 |    80 |     2   (0)| 00:00:01 |
|   4 |     INDEX FULL SCAN           | PK_DEPT |     4 |       |     1   (0)| 00:00:01 |
|*  5 |    SORT JOIN                  |         |     9 |   225 |     4  (25)| 00:00:01 |
|*  6 |     TABLE ACCESS FULL         | EMP     |     9 |   225 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("E"."DEPTNO"="D"."DEPTNO")
       filter("E"."DEPTNO"="D"."DEPTNO")
   6 - filter(("E"."SAL"<=3000 AND "E"."SAL">=1000))
27 rows selected.

If you know the SQL ID, you can also leverage the AWR snapshot history to examine its execution plan(s).

Example – To display the different execution plans associated with a known SQL ID…

SELECT * FROM table(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));

RMAN Duplicate (cloning)

There is a plethora of examples, both excellent and not so excellent, on the web covering RMAN duplicates. As with anything Oracle, there is a wide spectrum of options to incorporate into the cloning process depending on your source and destination environments: whether they use ASM or filesystem storage, OMF naming conventions or filename conversions, or a standby configuration if that is the goal.

I would like to minimise the complexity of options bespoke to my own environment and produce the most simplified, concise example of an active RMAN duplicate (duplicate from active database, using OMF, with mirrored filesystem structures).

Firstly, I will run through the configuration of the primary/target (or production) side database. This is our source.

Then I will demonstrate the configuration and setup of the auxiliary or clone side database. This is the destination.
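For orientation, once both sides are configured, the eventual duplicate command will look something along these lines (a sketch only – the connect strings and any channel or SPFILE clauses will depend on your own setup):

```sql
-- After connecting to both the target (source) and
-- auxiliary (clone) instances from RMAN:
run {
  duplicate target database to SANDDUPE
  from active database;
}
```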

Primary source DB/instance configuration

###--> Primary is in archivelog mode 
SYS @ SANDPRD > archive log list 
Database log mode Archive Mode 
Automatic archival Enabled 
Archive destination USE_DB_RECOVERY_FILE_DEST 
Oldest online log sequence 12 
Next log sequence to archive 21 
Current log sequence 21
###--> Primary is configured with a local listener 
SYS @ SANDPRD > sho parameter local_list 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oralinux7)(PORT=1521)))
###--> Primary is configured to use OMF for simplicity of admin of files 
SYS @ SANDPRD > sho parameter db_create 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /u01/app/oracle/oradata
db_create_online_log_dest_1          string      /u01/app/oracle/oradata
###--> Primary service name and instance name etc. 
SYS @ SANDPRD > show parameter name 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      SANDPRD
db_unique_name                       string      SANDPRD
global_names                         boolean     FALSE
instance_name                        string      SANDPRD
service_names                        string      SANDPRD
###--> This is the listener config file on the primary side
 [oracle@oralinux7 ~]$ cd $ORACLE_HOME/network/admin 
[oracle@oralinux7 admin]$ cat listener.ora 
# ------------------- PRIMARY SIDE -------------------# 
LISTENER_1521 = 
 (DESCRIPTION_LIST = 
 (DESCRIPTION = 
 (ADDRESS = (PROTOCOL = TCP)(HOST = oralinux7)(PORT = 1521)) 
 ) 
 )
###--> This is the tnsnames file (it's the same on both the primary and auxiliary sides)
[oracle@oralinux7 admin]$ cat tnsnames.ora 
# ------------------- Primary site TNS names ------------------# 
# Target 
SANDPRD = 
 (DESCRIPTION = 
 (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.13)(PORT = 1521)) 
 (CONNECT_DATA = 
 (SERVER = DEDICATED) 
 (SERVICE_NAME = SANDPRD) 
 ) 
 ) 
# Auxiliary 
SANDDUPE = 
 (DESCRIPTION = 
 (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.16)(PORT = 1521)) 
 (CONNECT_DATA = 
 (SERVER = DEDICATED) 
 (SID = SANDDUPE) 
 ) 
 )
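If you would rather not maintain a tnsnames.ora on both hosts, the same connections can be sketched with EZConnect instead (host, port and service values taken from the entries above). Note that EZConnect addresses a service name, so the auxiliary side would need a registered service rather than the SID-based entry shown:

```
sqlplus sys/oracle@//192.168.0.13:1521/SANDPRD as sysdba
sqlplus sys/oracle@//192.168.0.16:1521/SANDDUPE as sysdba
```

This assumes EZCONNECT is in the NAMES.DIRECTORY_PATH of sqlnet.ora, which it is by default.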
###--> This is the listener services summary on primary 
[oracle@oralinux7 admin]$ lsnrctl status LISTENER_1521 
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 11-JUN-2016 21:26:08 
Copyright (c) 1991, 2014, Oracle. All rights reserved. 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oralinux7)(PORT=1521))) 
STATUS of the LISTENER 
------------------------ 
Alias                     LISTENER_1521 
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production 
Start Date                11-JUN-2016 21:25:13 
Uptime                    0 days 0 hr. 0 min. 55 sec 
Trace Level               off 
Security                  ON: Local OS Authentication 
SNMP                      OFF 
Listener Parameter File   /u01/app/oracle/product/12102/network/admin/listener.ora 
Listener Log File         /u01/app/oracle/diag/tnslsnr/oralinux7/listener_1521/alert/log.xml 
Listening Endpoints Summary... 
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oralinux7)(PORT=1521))) 
Services Summary... 
Service "SANDPRD" has 1 instance(s). 
 Instance "SANDPRD", status READY, has 1 handler(s) for this service... 
The command completed successfully
###--> Checking TNS ping resolution 
[oracle@oralinux7 admin]$ tnsping sandprd | tail -3 
Used TNSNAMES adapter to resolve the alias 
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.13)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = SANDPRD))) 
OK (0 msec) 
[oracle@oralinux7 admin]$ tnsping sanddupe | tail -3 
Used TNSNAMES adapter to resolve the alias 
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.16)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SID = SANDDUPE))) 
OK (10 msec)

Auxiliary destination instance configuration

 

###-->  Set the ORACLE_SID for the auxiliary side instance
[oracle@oralinux7b ~]$ export ORACLE_SID=SANDDUPE
###-->  This is the auxiliary side listener config file   
[oracle@oralinux7b admin]$ cat listener.ora  
# ------------------- STANDBY SIDE -------------------#  
# LISTENER  
LISTENER_1521 =  
  (DESCRIPTION_LIST =  
    (DESCRIPTION =  
      (ADDRESS = (PROTOCOL = TCP)(HOST = oralinux7b)(PORT = 1521))  
    )  
  )  
# Static entry for registration  
SID_LIST_LISTENER_1521 =  
  (SID_LIST =  
    (SID_DESC =  
      (GLOBAL_DBNAME = SANDDUPE)  
      (ORACLE_HOME = /u01/app/oracle/product/12102)  
      (SID_NAME = SANDDUPE)  
    )  
  )
###-->  This is the listener services summary   
[oracle@oralinux7b admin]$ lsnrctl status  LISTENER_1521  
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 11-JUN-2016 16:26:16  
Copyright (c) 1991, 2014, Oracle.  All rights reserved.  
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oralinux7b)(PORT=1521)))  
STATUS of the LISTENER  
------------------------  
Alias                    LISTENER_1521  
Version                  TNSLSNR for Linux: Version 12.1.0.2.0 - Production  
Start Date                11-JUN-2016 16:24:50  
Uptime                    0 days 0 hr. 1 min. 25 sec  
Trace Level              off  
Security                  ON: Local OS Authentication  
SNMP                      OFF  
Listener Parameter File  /u01/app/oracle/product/12102/network/admin/listener.ora  
Listener Log File        /u01/app/oracle/diag/tnslsnr/oralinux7b/listener_1521/alert/log.xml  
Listening Endpoints Summary...  
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oralinux7b)(PORT=1521)))  
Services Summary...  
Service "SANDDUPE" has 1 instance(s).  
  Instance "SANDDUPE", status UNKNOWN, has 1 handler(s) for this service...  
The command completed successfully
###-->  This is a copy of the init.ora taken from the primary and copied to the auxiliary  
[oracle@oralinux7b dbs]$ cat initSANDDUPE.ora   
*.audit_file_dest='/u01/app/oracle/admin/SANDDUPE/adump'  
*.audit_trail='XML','EXTENDED'  
*.compatible='12.1.0.2.0'  
*.db_block_size=8192  
*.db_create_file_dest='/u01/app/oracle/oradata'  
*.db_create_online_log_dest_1='/u01/app/oracle/oradata'  
*.db_domain=''  
*.db_name='SANDDUPE'  
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'  
*.db_recovery_file_dest_size=4560m  
*.diagnostic_dest='/u01/app/oracle'  
*.dispatchers=''  
*.local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oralinux7b)(PORT=1521)))'  
*.log_archive_format='%t_%s_%r.dbf'  
*.memory_target=400m  
*.open_cursors=300  
*.processes=300  
*.remote_login_passwordfile='EXCLUSIVE'  
*.undo_tablespace='UNDOTBS1'
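Strictly speaking, RMAN needs very little in the auxiliary pfile just to get the instance to NOMOUNT; most of the parameters above are simply carried over from the primary for convenience. A bare-minimum alternative might look something like the sketch below (an untested assumption on my part; in an OMF setup you would at least want the db_create destinations so restored files land in the right place, and RMAN's "create spfile from memory" step fills in the rest during the duplicate):

```
# initSANDDUPE.ora -- bare-minimum sketch, not the pfile used in this demo
*.db_name='SANDDUPE'
*.db_create_file_dest='/u01/app/oracle/oradata'
*.db_create_online_log_dest_1='/u01/app/oracle/oradata'
*.memory_target=400m
```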
###-->  Now, we start the auxiliary instance using the above pfile   
[oracle@oralinux7b dbs]$ echo " startup nomount pfile = 'initSANDDUPE.ora' ; " | sqlplus / as sysdba  
SQL*Plus: Release 12.1.0.2.0 Production on Sat Jun 11 16:27:45 2016  
Copyright (c) 1982, 2014, Oracle.  All rights reserved.  
Connected to an idle instance.  
SQL> ORACLE instance started.  
Total System Global Area  419430400 bytes  
Fixed Size    2925120 bytes  
Variable Size  310382016 bytes  
Database Buffers  100663296 bytes  
Redo Buffers    5459968 bytes  
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production  
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
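One prerequisite glossed over here: RMAN connects to both sides remotely as SYSDBA, which requires a password file on the auxiliary host whose SYS password matches the primary's. Either copy $ORACLE_HOME/dbs/orapwSANDPRD across from the primary and rename it, or recreate it with orapwd; a sketch of the latter (the password is shown in clear only because this is a sandbox):

```
orapwd file=$ORACLE_HOME/dbs/orapwSANDDUPE password=oracle
```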
###-->  Now we test TNSPING to both sites   
[oracle@oralinux7b dbs]$ tnsping sandprd | tail -3  
Used TNSNAMES adapter to resolve the alias  
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.13)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = SANDPRD)))  
OK (10 msec)  
[oracle@oralinux7b dbs]$ tnsping sanddupe | tail -3  
Used TNSNAMES adapter to resolve the alias  
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.16)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SID = SANDDUPE)))  
OK (0 msec)
###-->  Now we test remote connectivity via SQL*Plus
[oracle@oralinux7b dbs]$ sqlplus sys/oracle@sandprd as sysdba  
SQL*Plus: Release 12.1.0.2.0 Production on Sat Jun 11 16:29:57 2016  
Copyright (c) 1982, 2014, Oracle.  All rights reserved.  
Connected to:  
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production  
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options  
SQL> set lines 200  
SQL> select instance_name, host_name from v$instance;  
INSTANCE_NAME     HOST_NAME
----------------- ----------------------------------------------------------------
SANDPRD           oralinux7
SQL> quit  

[oracle@oralinux7b dbs]$ sqlplus sys/oracle@sanddupe as sysdba  
SQL*Plus: Release 12.1.0.2.0 Production on Sat Jun 11 16:31:02 2016  
Copyright (c) 1982, 2014, Oracle.  All rights reserved.  
Connected to:  
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production  
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options  
SQL> set lines 200  
SQL> select instance_name, host_name from v$instance;  
INSTANCE_NAME     HOST_NAME
----------------- ----------------------------------------------------------------
SANDDUPE          oralinux7b
SQL> quit
###-->  Now prove and test RMAN connectivity to both   
[oracle@oralinux7b dbs]$ rman target sys/oracle@sandprd nocatalog auxiliary sys/oracle@sanddupe   
Recovery Manager: Release 12.1.0.2.0 - Production on Sat Jun 11 16:32:07 2016  
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.  
connected to target database: SANDPRD (DBID=504848963)  
using target database control file instead of recovery catalog  
connected to auxiliary database: SANDDUPE (not mounted)  
RMAN>

RMAN active duplicate (FULL) output

[oracle@oralinux7b dbs]$ rman target sys/oracle@sandprd nocatalog auxiliary sys/oracle@sanddupe 
Recovery Manager: Release 12.1.0.2.0 - Production on Sat Jun 11 16:32:07 2016 
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved. 
connected to target database: SANDPRD (DBID=504848963) 
using target database control file instead of recovery catalog 
connected to auxiliary database: SANDDUPE (not mounted) 
RMAN> RUN 
2> { 
3> DUPLICATE TARGET DATABASE TO SANDDUPE 
4>   FROM ACTIVE DATABASE ; 
5> } 
Starting Duplicate Db at 11-JUN-16 
allocated channel: ORA_AUX_DISK_1 
channel ORA_AUX_DISK_1: SID=12 device type=DISK 
current log archived 
contents of Memory Script: 
{ 
 sql clone "create spfile from memory"; 
} 
executing Memory Script 
sql statement: create spfile from memory 
contents of Memory Script: 
{ 
 shutdown clone immediate; 
 startup clone nomount; 
} 
executing Memory Script 
Oracle instance shut down 
connected to auxiliary database (not started) 
Oracle instance started 
Total System Global Area 419430400 bytes 
Fixed Size 2925120 bytes 
Variable Size 314576320 bytes 
Database Buffers 96468992 bytes 
Redo Buffers 5459968 bytes 
contents of Memory Script: 
{ 
 sql clone "alter system set control_files = 
 ''/u01/app/oracle/oradata/SANDDUPE/controlfile/o1_mf_corxcj4s_.ctl'' comment= 
 ''Set by RMAN'' scope=spfile"; 
 sql clone "alter system set db_name = 
 ''SANDPRD'' comment= 
 ''Modified by RMAN duplicate'' scope=spfile"; 
 sql clone "alter system set db_unique_name = 
 ''SANDDUPE'' comment= 
 ''Modified by RMAN duplicate'' scope=spfile"; 
 shutdown clone immediate; 
 startup clone force nomount 
 restore clone from service 'sandprd' primary controlfile; 
 alter clone database mount; 
} 
executing Memory Script 
sql statement: alter system set control_files = ''/u01/app/oracle/oradata/SANDDUPE/controlfile/o1_mf_corxcj4s_.ctl'' comment= ''Set by RMAN'' scope=spfile 
sql statement: alter system set db_name = ''SANDPRD'' comment= ''Modified by RMAN duplicate'' scope=spfile 
sql statement: alter system set db_unique_name = ''SANDDUPE'' comment= ''Modified by RMAN duplicate'' scope=spfile 
Oracle instance shut down 
Oracle instance started 
Total System Global Area 419430400 bytes 
Fixed Size 2925120 bytes 
Variable Size 314576320 bytes 
Database Buffers 96468992 bytes 
Redo Buffers 5459968 bytes 
Starting restore at 11-JUN-16 
allocated channel: ORA_AUX_DISK_1 
channel ORA_AUX_DISK_1: SID=12 device type=DISK 
channel ORA_AUX_DISK_1: starting datafile backup set restore 
channel ORA_AUX_DISK_1: using network backup set from service sandprd 
channel ORA_AUX_DISK_1: restoring control file 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:02 
output file name=/u01/app/oracle/oradata/SANDDUPE/controlfile/o1_mf_corxcj4s_.ctl 
Finished restore at 11-JUN-16 
database mounted 
contents of Memory Script: 
{ 
 sql clone 'alter database flashback off'; 
 set newname for clone datafile 1 to new; 
 set newname for clone datafile 2 to new; 
 set newname for clone datafile 3 to new; 
 set newname for clone datafile 4 to new; 
 set newname for clone datafile 5 to new; 
 set newname for clone datafile 6 to new; 
 set newname for clone datafile 7 to new; 
 restore 
 from service 'sandprd' clone database 
 ; 
 sql 'alter system archive log current'; 
} 
executing Memory Script 
sql statement: alter database flashback off 
executing command: SET NEWNAME 
executing command: SET NEWNAME 
executing command: SET NEWNAME 
executing command: SET NEWNAME 
executing command: SET NEWNAME 
executing command: SET NEWNAME 
executing command: SET NEWNAME 
Starting restore at 11-JUN-16 
using channel ORA_AUX_DISK_1 
channel ORA_AUX_DISK_1: starting datafile backup set restore 
channel ORA_AUX_DISK_1: using network backup set from service sandprd 
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set 
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_system_%u_.dbf 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:16 
channel ORA_AUX_DISK_1: starting datafile backup set restore 
channel ORA_AUX_DISK_1: using network backup set from service sandprd 
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set 
channel ORA_AUX_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_data1_%u_.dbf 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01 
channel ORA_AUX_DISK_1: starting datafile backup set restore 
channel ORA_AUX_DISK_1: using network backup set from service sandprd 
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set 
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_sysaux_%u_.dbf 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15 
channel ORA_AUX_DISK_1: starting datafile backup set restore 
channel ORA_AUX_DISK_1: using network backup set from service sandprd 
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set 
channel ORA_AUX_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_undotbs1_%u_.dbf 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03 
channel ORA_AUX_DISK_1: starting datafile backup set restore 
channel ORA_AUX_DISK_1: using network backup set from service sandprd 
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set 
channel ORA_AUX_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_example_%u_.dbf 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15 
channel ORA_AUX_DISK_1: starting datafile backup set restore 
channel ORA_AUX_DISK_1: using network backup set from service sandprd 
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set 
channel ORA_AUX_DISK_1: restoring datafile 00006 to /u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_users_%u_.dbf 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:02 
channel ORA_AUX_DISK_1: starting datafile backup set restore 
channel ORA_AUX_DISK_1: using network backup set from service sandprd 
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set 
channel ORA_AUX_DISK_1: restoring datafile 00007 to /u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_catalog__%u_.dbf 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03 
Finished restore at 11-JUN-16 
sql statement: alter system archive log current 
current log archived 
contents of Memory Script: 
{ 
 restore clone force from service 'sandprd' 
 archivelog from scn 2253301; 
 switch clone datafile all; 
} 
executing Memory Script 
Starting restore at 11-JUN-16 
using channel ORA_AUX_DISK_1 
channel ORA_AUX_DISK_1: starting archived log restore to default destination 
channel ORA_AUX_DISK_1: using network backup set from service sandprd 
channel ORA_AUX_DISK_1: restoring archived log 
archived log thread=1 sequence=22 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:02 
channel ORA_AUX_DISK_1: starting archived log restore to default destination 
channel ORA_AUX_DISK_1: using network backup set from service sandprd 
channel ORA_AUX_DISK_1: restoring archived log 
archived log thread=1 sequence=23 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01 
Finished restore at 11-JUN-16 
datafile 2 switched to datafile copy 
input datafile copy RECID=8 STAMP=914258057 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_data1_corxdzh6_.dbf 
datafile 1 switched to datafile copy 
input datafile copy RECID=9 STAMP=914258057 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_system_corxdj9j_.dbf 
datafile 3 switched to datafile copy 
input datafile copy RECID=10 STAMP=914258057 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_sysaux_corxf0mz_.dbf 
datafile 4 switched to datafile copy 
input datafile copy RECID=11 STAMP=914258057 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_undotbs1_corxfhsl_.dbf 
datafile 5 switched to datafile copy 
input datafile copy RECID=12 STAMP=914258057 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_example_corxfm14_.dbf 
datafile 6 switched to datafile copy 
input datafile copy RECID=13 STAMP=914258057 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_users_corxg2bs_.dbf 
datafile 7 switched to datafile copy 
input datafile copy RECID=14 STAMP=914258057 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_catalog__corxg3fr_.dbf 
contents of Memory Script: 
{ 
 set until scn 2253419; 
 recover 
 clone database 
 delete archivelog 
 ; 
} 
executing Memory Script 
executing command: SET until clause 
Starting recover at 11-JUN-16 
using channel ORA_AUX_DISK_1 
datafile 2 not processed because file is read-only 
starting media recovery 
archived log for thread 1 with sequence 22 is already on disk as file /u01/app/oracle/fast_recovery_area/SANDDUPE/archivelog/2016_06_11/o1_mf_1_22_corxg736_.arc 
archived log for thread 1 with sequence 23 is already on disk as file /u01/app/oracle/fast_recovery_area/SANDDUPE/archivelog/2016_06_11/o1_mf_1_23_corxg873_.arc 
archived log file name=/u01/app/oracle/fast_recovery_area/SANDDUPE/archivelog/2016_06_11/o1_mf_1_22_corxg736_.arc thread=1 sequence=22 
archived log file name=/u01/app/oracle/fast_recovery_area/SANDDUPE/archivelog/2016_06_11/o1_mf_1_23_corxg873_.arc thread=1 sequence=23 
media recovery complete, elapsed time: 00:00:00 
Finished recover at 11-JUN-16 
Oracle instance started 
Total System Global Area 419430400 bytes 
Fixed Size 2925120 bytes 
Variable Size 314576320 bytes 
Database Buffers 96468992 bytes 
Redo Buffers 5459968 bytes 
contents of Memory Script: 
{ 
 sql clone "alter system set db_name = 
 ''SANDDUPE'' comment= 
 ''Reset to original value by RMAN'' scope=spfile"; 
 sql clone "alter system reset db_unique_name scope=spfile"; 
} 
executing Memory Script 
sql statement: alter system set db_name = ''SANDDUPE'' comment= ''Reset to original value by RMAN'' scope=spfile 
sql statement: alter system reset db_unique_name scope=spfile 
Oracle instance started 
Total System Global Area 419430400 bytes 
Fixed Size 2925120 bytes 
Variable Size 314576320 bytes 
Database Buffers 96468992 bytes 
Redo Buffers 5459968 bytes 
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "SANDDUPE" RESETLOGS ARCHIVELOG 
 MAXLOGFILES 16 
 MAXLOGMEMBERS 3 
 MAXDATAFILES 100 
 MAXINSTANCES 8 
 MAXLOGHISTORY 292 
 LOGFILE 
 GROUP 1 SIZE 50 M , 
 GROUP 2 SIZE 50 M , 
 GROUP 3 SIZE 50 M , 
 GROUP 4 SIZE 50 M , 
 GROUP 5 SIZE 50 M , 
 GROUP 6 SIZE 50 M , 
 GROUP 7 SIZE 50 M , 
 GROUP 8 SIZE 50 M , 
 GROUP 9 SIZE 50 M , 
 GROUP 10 SIZE 50 M 
 DATAFILE 
 '/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_system_corxdj9j_.dbf' 
 CHARACTER SET AL32UTF8 
contents of Memory Script: 
{ 
 set newname for clone tempfile 1 to new; 
 switch clone tempfile all; 
 catalog clone datafilecopy "/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_sysaux_corxf0mz_.dbf", 
 "/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_undotbs1_corxfhsl_.dbf", 
 "/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_example_corxfm14_.dbf", 
 "/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_users_corxg2bs_.dbf", 
 "/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_catalog__corxg3fr_.dbf"; 
 switch clone datafile all; 
} 
executing Memory Script 
executing command: SET NEWNAME 
renamed tempfile 1 to /u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_temp_%u_.tmp in control file 
cataloged datafile copy 
datafile copy file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_sysaux_corxf0mz_.dbf RECID=1 STAMP=914258085 
cataloged datafile copy 
datafile copy file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_undotbs1_corxfhsl_.dbf RECID=2 STAMP=914258085 
cataloged datafile copy 
datafile copy file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_example_corxfm14_.dbf RECID=3 STAMP=914258085 
cataloged datafile copy 
datafile copy file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_users_corxg2bs_.dbf RECID=4 STAMP=914258085 
cataloged datafile copy 
datafile copy file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_catalog__corxg3fr_.dbf RECID=5 STAMP=914258085 
datafile 3 switched to datafile copy 
input datafile copy RECID=1 STAMP=914258085 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_sysaux_corxf0mz_.dbf 
datafile 4 switched to datafile copy 
input datafile copy RECID=2 STAMP=914258085 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_undotbs1_corxfhsl_.dbf 
datafile 5 switched to datafile copy 
input datafile copy RECID=3 STAMP=914258085 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_example_corxfm14_.dbf 
datafile 6 switched to datafile copy 
input datafile copy RECID=4 STAMP=914258085 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_users_corxg2bs_.dbf 
datafile 7 switched to datafile copy 
input datafile copy RECID=5 STAMP=914258085 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_catalog__corxg3fr_.dbf 
Reenabling controlfile options for auxiliary database 
Executing: alter database force logging 
contents of Memory Script: 
{ 
 Alter clone database open resetlogs; 
} 
executing Memory Script 
database opened 
Executing: alter database flashback on 
contents of Memory Script: 
{ 
 catalog clone datafilecopy "/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_data1_corxdzh6_.dbf"; 
 switch clone datafile 2 to datafilecopy 
 "/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_data1_corxdzh6_.dbf"; 
 #online the readonly tablespace 
 sql clone "alter tablespace DATA1 online"; 
} 
executing Memory Script 
cataloged datafile copy 
datafile copy file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_data1_corxdzh6_.dbf RECID=6 STAMP=914258092 
datafile 2 switched to datafile copy 
input datafile copy RECID=6 STAMP=914258092 file name=/u01/app/oracle/oradata/SANDDUPE/datafile/o1_mf_data1_corxdzh6_.dbf 
sql statement: alter tablespace DATA1 online 
Cannot remove created server parameter file 
Finished Duplicate Db at 11-JUN-16 
RMAN> quit 
Recovery Manager complete.
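With the duplicate finished, a quick sanity check I would run on the clone is something along these lines:

```
select name, dbid, open_mode, db_unique_name from v$database;
```

You should see SANDDUPE open in READ WRITE mode, and because DUPLICATE generates a new DBID for the clone, the DBID should no longer match the primary's 504848963.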