
Data Pump Oracle 12c Tips

Oracle Data Pump enables very high-speed movement of data and metadata from one database to another.

Data Pump Export ==> unloads data and metadata into a set of operating system files called a dump file set.

Data Pump Import ==> loads an export dump file set into a target database.

The following tips will help tune the Data Pump export and import processes.

Tip #1

With the METRICS=Y option, Data Pump quantifies exactly how long it spends in each section and prints a line "Completed ... in X seconds" after each one.

For example:

$ expdp dumpfile=DATA_PUMP_DIR:demo.dmp nologfile=y metrics=y userid=scott/tiger

Export: Release 11.2.0.3.0 - Production

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
FLASHBACK automatically enabled to preserve database integrity.
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01":  dumpfile=DATA_PUMP_DIR:demo.dmp nologfile=y metrics=y userid=scott/********
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Completed 1 PROCACT_SCHEMA objects in 3 seconds
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Completed 4 TABLE objects in 4 seconds
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Completed  TABLE objects in  seconds
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Completed 2 INDEX objects in 1 seconds
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Completed 2 CONSTRAINT objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Completed  CONSTRAINT objects in  seconds
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Completed 1 REF_CONSTRAINT objects in 9 seconds
. . exported "SCOTT"."DEPT"                              5.937 KB       4 rows
. . exported "SCOTT"."EMP"                               8.585 KB      14 rows
. . exported "SCOTT"."SALGRADE"                          5.867 KB       5 rows
. . exported "SCOTT"."BONUS"                                 0 KB       0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
/u01/app/oracle/admin/prd1/dpdump/demo.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed

Tip #2

Disable AWR (or Statspack) snapshots and the automatic maintenance tasks while running Data Pump.  There is no need for these automatic jobs to run while Data Pump is still loading the data.

EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(INTERVAL=>0);
EXEC DBMS_AUTO_TASK_ADMIN.DISABLE;

After the import completes, re-enable them (restore the snapshot interval your system normally uses; 15 minutes is used here only as an example):

EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(INTERVAL=>15);
EXEC DBMS_AUTO_TASK_ADMIN.ENABLE;

Tip #3

Perform a SQL trace of the Data Pump process.  When exporting, the database will sometimes spend an excessively long time on a certain object type, which can be due to inefficient catalog queries.  A SQL trace will show the exact details of the queries being executed by expdp.  Problematic internal catalog queries can sometimes be remedied with updated dictionary or fixed object statistics, or with patches or other workarounds (such as EVENT parameters) as directed by internal bugs and other MOS notes.
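
If the trace points at slow dictionary queries, refreshing the dictionary and fixed object statistics is often the first remedy to try.  A minimal sketch (run as a DBA user, preferably during a quiet period):

SQL> CONNECT / AS SYSDBA
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;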

Enable a trace of the Data Pump process by adding the TRACE=480300 parameter to expdp.  My Oracle Support note 286496.1 explains the various Data Pump tracing options in detail; however, simply adding the Data Pump parameter TRACE=480300 (standard tracing of the master and worker processes, as used in the examples below) will usually suffice.  The resulting trace files can then be analyzed using utilities such as TKPROF or ORASRP, just like any other SQL trace file.
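
For example, a Worker process trace file can be summarized with TKPROF, sorted by elapsed execution and fetch time to surface the slowest catalog queries (the file names below are placeholders):

$ tkprof PRD1_dw01_12345.trc dw01_report.txt sort=exeela,fchela sys=no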

Ensure that the user who connects to the database with the Export Data Pump or Import Data Pump utility is a privileged user (i.e., a user who has the DBA role, or the EXP_FULL_DATABASE or IMP_FULL_DATABASE role respectively):

SQL> CONNECT / AS SYSDBA
SQL> GRANT exp_full_database TO scott;

$ expdp scott/tiger DIRECTORY=my_dir DUMPFILE=expdp_s.dmp \
LOGFILE=expdp_s.log TABLES=emp TRACE=480300

Also ensure that the init.ora/spfile initialization parameter MAX_DUMP_FILE_SIZE is large enough to capture all the trace information (or set it to UNLIMITED, which is the default value), e.g.:

-- Ensure enough trace data can be written to the trace files:

SQL> CONNECT / as sysdba
SQL> SHOW PARAMETER max_dump

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      10M

SQL> ALTER SYSTEM SET max_dump_file_size = unlimited SCOPE = both;
SQL> SHOW PARAMETER max_dump

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      UNLIMITED

The majority of errors that occur during a Data Pump job can be diagnosed by creating a trace file for the Master Control Process (MCP) and the Worker process(es) only.
To run a Data Pump job with standard tracing:

-- This results in two trace files in BACKGROUND_DUMP_DEST:
--   Master Process trace file: <SID>_dm<number>_<process_id>.trc
--   Worker Process trace file: <SID>_dw<number>_<process_id>.trc

% expdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_s.dmp \
LOGFILE=expdp_s.log SCHEMAS=scott TRACE=480300

Tip #4

Gathering object statistics can be a time-consuming part of the overall Data Pump import process.  However, this time can be reduced by pre-calculating and capturing the statistics during a dry-run or practice migration.

The database's optimizer statistics are version dependent.  Furthermore, objects are essentially reorganized by the Data Pump migration process, so space and data distribution/density properties can change significantly.  It therefore doesn't make sense to have Data Pump export and import the source database's optimizer statistics: the process can be time consuming, and the statistics should be recalculated on the target database anyway.

To save time, calculate object statistics prior to the actual migration window using migrated data from a dry-run.  Save these statistics to a table in the target database using DBMS_STATS.EXPORT_*_STATS, so they can be restored after the actual migration takes place.
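
As a rough sketch of that approach (the MIG_STATS table name and the SCOTT schema are examples), gather and save statistics on the dry-run target, exclude statistics from the real import, and then restore the saved set:

SQL> -- On the dry-run target: gather statistics, then save them to a statistics table
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MIG_STATS');
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MIG_STATS');

$ impdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_s.dmp SCHEMAS=scott EXCLUDE=STATISTICS

SQL> -- After the real migration: restore the pre-calculated statistics
SQL> EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MIG_STATS');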

Tip #5

Data Pump supports importing directly over the network via the NETWORK_LINK parameter, without a dump file.  When used, only an import operation is required; there is no associated Data Pump export.  The NETWORK_LINK parameter must refer to a valid database link defined on the target database.

Importing through the NETWORK_LINK parameter can save a considerable amount of migration time as the table data no longer has to be written to disk, possibly copied from one system’s disk to another’s, and read from disk.

Some tables are not eligible for Data Pump imports using the NETWORK_LINK parameter, for example tables with LONG or LONG RAW columns.  These tables must instead be migrated using the traditional Data Pump approach of exporting to a dump file, copying that dump file, and importing by reading it.
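
One way to identify the affected tables on the source and leave them out of the network-mode import is sketched below (the EXCLUDE filter shown is one possible form, the table name is an example, and the double quotes usually need escaping when typed on a shell command line):

SQL> SELECT owner, table_name FROM dba_tab_columns WHERE data_type IN ('LONG', 'LONG RAW');

$ impdp scott/tiger directory=exp_dir network_link=SRCDB EXCLUDE=TABLE:\"IN ('EXAMPLE_LONG_TAB')\"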

Also note that to use the NETWORK_LINK parameter, the XDB and XML options must be installed in both the source and target databases.
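
The presence and status of these components can be checked in DBA_REGISTRY on both databases, for example (the component IDs shown are the usual ones for XML DB and the XDK):

SQL> SELECT comp_id, comp_name, status FROM dba_registry WHERE comp_id IN ('XDB', 'XML');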

See the examples below.  Here we have two databases: SRCDB (source) and TGTDB (target).

SQL> select name from v$database;

NAME
---------
SRCDB

SQL> show user
USER is "SCOTT"

SQL> select * from tab;

no rows selected

SQL> create table example_tab1 as select * from all_objects;

Table created.

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
EXAMPLE_TAB1                   TABLE

Configure a TNS entry for SRCDB on the TGTDB database host (file location: $ORACLE_HOME/network/admin/tnsnames.ora).  The entry is as below:

SRCDB =
  (description =
    (address =
      (protocol = tcp)
      (host = devdata.abc.com)
      (port = 1522)
    )
    (connect_data =
      (server = dedicated)
      (sid = SRCDB)
    )
  )
Test the connectivity using the tnsping utility:

$ tnsping SRCDB

TNS Ping Utility for Solaris:

Copyright (c) 1997, 2008, Oracle.  All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (description = (address = (protocol = tcp) (host = devdata.abc.com) (port = 1522)) (connect_data = (server = dedicated) (sid = SRCDB)))
OK (20 msec)

Connect to TGTDB using SQL*Plus and create a database link to SRCDB as the scott user:

$ sqlplus

Enter user-name: scott/tiger

SQL> create database link SRCDB connect to scott identified by scott using 'SRCDB';

Database link created.

SQL> select * from tab@SRCDB;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
EXAMPLE_TAB1                   TABLE

The database link from TGTDB to SRCDB is working and ready.

Import the scott schema from the SRCDB database into the TGTDB database without a dump file:

$ impdp scott/tiger directory=exp_dir logfile=impnetworkscott.log network_link=SRCDB

Connected to: Oracle Database 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_IMPORT_SCHEMA_01":  scott/******** directory=exp_dir logfile=impnetworkscott.log network_link=SRCDB
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "SCOTT"."EXAMPLE_TAB1"                      9507 rows
Job "SCOTT"."SYS_IMPORT_SCHEMA_01" completed.

SQL> select name from v$database;

NAME
---------
TGTDB

SQL> show user
USER is "SCOTT"

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
DEPT                           TABLE
EMP1                           TABLE
EMP2                           TABLE
EXAMPLE                        TABLE
EXAMPLE_P                      TABLE
EXAMPLE_TAB1                   TABLE
T_EMP                          TABLE
TEST                           TABLE

8 rows selected.
Table EXAMPLE_TAB1 has been imported into the TGTDB database without a dump file!


Oracle 12c – Moving Datafile Online

Moving a datafile is one of the tedious jobs that every DBA needs to perform at some point in time, typically due to:

  • Insufficient disk space
  • A move from non-ASM storage to ASM

Prior to Oracle 12c, this activity was a challenge for DBAs because clients always want minimal downtime, and it requires additional steps to be performed (for example, issuing OS commands followed by database commands).
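
In Oracle 12c, by contrast, the whole operation collapses into a single online statement along these lines (the file name and the +DATA disk group are placeholders; substitute your own paths):

SQL> ALTER DATABASE MOVE DATAFILE '/u01/oradata/prd1/users01.dbf' TO '+DATA';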

In order to appreciate the ease of the new process, let's first look at what DBAs need to do to move a datafile prior to 12c.

Continue Reading…


Performance Tuning Approaches (Oracle)

An understanding of systems architecture and design provides the foundation for a proactive monitoring stance, but there is no better prevention than a periodic health check. Whether you accomplish this with OraChk, AWR reporting, or Statspack, regular inspection of a database is the most effective way of preventing issues before they happen.
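
For example, an AWR report covering a pair of snapshots can be generated interactively from SQL*Plus with the standard script (the script prompts for the snapshot range and report format):

SQL> @?/rdbms/admin/awrrpt.sql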

Server issues (memory, CPU, and network bandwidth) all affect system performance. But how can you differentiate those from SQL performance issues, which require different strategies and scope in the diagnostic phase?

Continue Reading…


Back-up to Cloud

Over the last few years, a proliferation of web services has paved the way for saving enterprise backups in the cloud. Oracle recently added the Oracle Secure Backup (OSB) Cloud Module to its inventory of backup solutions, which allows RMAN to write database backups directly to the Amazon storage cloud. It offers all the major features of RMAN and provides a cost-effective alternative to tape and online storage. This document highlights some of the steps involved in configuring RMAN backups to the Amazon cloud.

Requirements:

OS: OSB is currently supported on Linux & Windows platforms.

JAVA: JDK 1.5 or higher is required to run the cloud module to configure the environment.

Note: An Oracle Database 11gR2 installation typically includes a Java 1.5 JDK, which can be used if the server doesn't have an appropriate JDK installed. The JDK location under the Oracle home is, for example:
/u01/app/oracle/product/11.2.0.3/db1/jdk/bin.
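
Once the module is installed, the end result is an RMAN SBT channel that points at the OSB web-services library. The configuration typically ends up looking roughly like the sketch below; the library file name, parameter file path, and Oracle home shown are assumptions based on a typical install, so substitute the values produced by your own installation:

RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt PARMS 'SBT_LIBRARY=/u01/app/oracle/product/11.2.0.3/db1/lib/libosbws11.so ENV=(OSB_WS_PFILE=/u01/app/oracle/product/11.2.0.3/db1/dbs/osbwsPRD1.ora)';
RMAN> BACKUP DEVICE TYPE sbt DATABASE;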

Continue Reading…


A Quick and Simple Way to Measure the I/O

The following post highlights the use of Oracle's built-in I/O performance tool introduced in 11gR1, the CALIBRATE_IO procedure of the DBMS_RESOURCE_MANAGER PL/SQL package. Though this post may not provide the complete picture of I/O in every respect, it is useful for quickly understanding the maximum sustainable IOPS and average latency, particularly for OLTP workloads. Understanding the latency of single block reads is important since it directly impacts the performance of transaction-based queries that make use of index reads. A minimal calling sketch follows the checklist below.

A few points:

  • Make sure timed_statistics is TRUE (no further change is needed if statistics_level is TYPICAL).
  • Set filesystemio_options to either SETALL or ASYNC (a restart of the instance may be required).
  • Run this process during off-hours, since it generates significant I/O and you want to avoid interference with other activity in the database.
  • If RAC is used, make sure all instances are up and running.
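
A minimal calling sketch, assuming roughly 10 physical disks and a 20 ms latency target (adjust both values to your storage):

SET SERVEROUTPUT ON
DECLARE
  l_max_iops PLS_INTEGER;
  l_max_mbps PLS_INTEGER;
  l_latency  PLS_INTEGER;
BEGIN
  -- num_physical_disks and max_latency are inputs; the remaining arguments are outputs
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 10,
    max_latency        => 20,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('Max IOPS       : ' || l_max_iops);
  DBMS_OUTPUT.PUT_LINE('Max MB/s       : ' || l_max_mbps);
  DBMS_OUTPUT.PUT_LINE('Actual latency : ' || l_latency);
END;
/

The run and its results are also recorded in V$IO_CALIBRATION_STATUS and DBA_RSRC_IO_CALIBRATE.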

Continue Reading…