Oracle

Data Pump Oracle 12c Tips

Oracle Data Pump enables very high-speed movement of data and metadata from one database to another.

Data Pump Export ==> Unloads data and metadata into a set of operating system files called a dump file set.

Data Pump Import ==> Loads an export dump file set into a target database.

The following tips will help tune the Data Pump export and import processes.

Tip #1

With the METRICS=Y option, Data Pump quantifies exactly how long it spends in each section and prints a line “Completed … in X seconds” after each one.

For example:

$ expdp dumpfile=DATA_PUMP_DIR:demo.dmp nologfile=y metrics=y userid=scott/tiger

Export: Release 11.2.0.3.0 – Production

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Release 11.2.0.3.0 – 64bit Production
FLASHBACK automatically enabled to preserve database integrity.
Starting “SCOTT”.”SYS_EXPORT_SCHEMA_01″:  dumpfile=DATA_PUMP_DIR:demo.dmp nologfile=y metrics=y userid=scott/********
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Completed 1 PROCACT_SCHEMA objects in 3 seconds
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Completed 4 TABLE objects in 4 seconds
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Completed  TABLE objects in  seconds
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Completed 2 INDEX objects in 1 seconds
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Completed 2 CONSTRAINT objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Completed  CONSTRAINT objects in  seconds
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Completed 1 REF_CONSTRAINT objects in 9 seconds
. . exported “SCOTT”.”DEPT”                              5.937 KB       4 rows
. . exported “SCOTT”.”EMP”                               8.585 KB      14 rows
. . exported “SCOTT”.”SALGRADE”                          5.867 KB       5 rows
. . exported “SCOTT”.”BONUS”                                 0 KB       0 rows
Master table “SCOTT”.”SYS_EXPORT_SCHEMA_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
/u01/app/oracle/admin/prd1/dpdump/demo.dmp
Job “SCOTT”.”SYS_EXPORT_SCHEMA_01″ successfully completed

Tip #2

Disable AWR (or Statspack) snapshots and the Automatic Maintenance Tasks while running Data Pump. There is no need to run these automatic jobs while Data Pump is still importing the data.

EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(INTERVAL=>0);
EXEC DBMS_AUTO_TASK_ADMIN.DISABLE;

After importing, re-enable them:

EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(INTERVAL=>15);
EXEC DBMS_AUTO_TASK_ADMIN.ENABLE;
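
To confirm that the interval change took effect (and, later, that it was restored), the current snapshot settings can be checked with a quick dictionary query; a minimal sketch:

SQL> SELECT snap_interval, retention FROM dba_hist_wr_control;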

Tip #3

Perform a SQL trace of the Data Pump process.  When exporting, the database will sometimes spend an excessively long time exporting a certain object type.  This can be due to inefficient catalog queries.  A SQL trace will show the exact details of the queries being executed by expdp.  Problematic internal catalog queries can sometimes be remedied with updated dictionary or fixed object statistics, or with patches or other workarounds (such as EVENT parameters) as directed by internal bugs and other MOS notes.

Enable a trace of the Data Pump processes by adding the TRACE parameter to expdp.  My Oracle Support note 286496.1 explains the various Data Pump tracing options in detail; however, usually simply adding the Data Pump parameter TRACE=480300 (which traces the Master Control Process and the Worker processes) will suffice.  The resulting trace files can then be analyzed using utilities such as TKPROF or ORASRP, just like any other SQL trace file.

Ensure that the user who connects to the database with the Export Data Pump or Import Data Pump utility is a privileged user (i.e. a user who has the DBA role, or the EXP_FULL_DATABASE or IMP_FULL_DATABASE role respectively):

SQL> CONNECT / AS SYSDBA
SQL> GRANT exp_full_database TO scott;

$ expdp scott/tiger DIRECTORY=my_dir DUMPFILE=expdp_s.dmp \
LOGFILE=expdp_s.log TABLES=emp TRACE=480300

Also ensure that the init.ora/spfile initialization parameter MAX_DUMP_FILE_SIZE is large enough to capture all the trace information (or set it to UNLIMITED, which is the default value), e.g.:
-- Ensure enough trace data can be written to the trace files:

SQL> CONNECT / as sysdba
SQL> SHOW PARAMETER max_dump

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      10M

SQL> ALTER SYSTEM SET max_dump_file_size = unlimited SCOPE = both;
SQL> SHOW PARAMETER max_dump

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size                   string      UNLIMITED

The majority of errors that occur during a Data Pump job can be diagnosed by creating a trace file for the Master Control Process (MCP) and the Worker process(es) only, e.g.:

To run a Data Pump job with standard tracing:

-- This results in two trace files in BACKGROUND_DUMP_DEST:

--   Master Process trace file: <SID>_dm<number>_<process_id>.trc
--   Worker Process trace file: <SID>_dw<number>_<process_id>.trc

% expdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_s.dmp \
LOGFILE=expdp_s.log SCHEMAS=scott TRACE=480300
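
Once the job completes, the worker trace file can be summarized with TKPROF as mentioned above. A minimal sketch, where the trace file name is illustrative and follows the <SID>_dw<number>_<process_id>.trc pattern:

$ tkprof ORCL_dw01_12345.trc dw01_report.txt sort=exeela,fchela sys=no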

Tip #4

Adding object statistics can be a time-consuming part of the overall Data Pump import process.  However, this time can be mitigated by pre-calculating and capturing the statistics during a dry-run or practice migration.

The database’s optimizer statistics are version dependent.  Further, objects are essentially reorganized by the Data Pump migration process, so space and data distribution/density properties can change significantly during the migration.  Therefore it doesn’t make sense to have Data Pump export and import the source database’s optimizer statistics: doing so can be time consuming, and the statistics should be recalculated on the target database anyway.

To save time, calculate object statistics prior to the actual migration window, using migrated data from a dry-run.  Save these migration statistics to a table in the target database using DBMS_STATS.EXPORT_*_STATS, from where they can be restored after the actual migration takes place.
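
A minimal sketch of that approach, assuming the migrated schema is SCOTT and using an illustrative statistics staging table named MIG_STATS:

-- On the target database, after the dry-run import:
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE('SCOTT', 'MIG_STATS');
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MIG_STATS');

-- During the real migration, skip statistics in the import ...
$ impdp scott/tiger DIRECTORY=my_dir DUMPFILE=expdp_s.dmp EXCLUDE=STATISTICS

-- ... then restore the pre-calculated statistics afterwards:
SQL> EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MIG_STATS');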

Tip #5

Data Pump supports importing directly over the network via the NETWORK_LINK parameter, without a dump file.  When used, only an import operation is required – there is no associated Data Pump export.  The NETWORK_LINK parameter must refer to a valid database link defined on the target database.

Importing through the NETWORK_LINK parameter can save a considerable amount of migration time as the table data no longer has to be written to disk, possibly copied from one system’s disk to another’s, and read from disk.

Some tables are not applicable for Data Pump imports using the NETWORK_LINK parameter – for example, tables with LONG or LONG RAW data types.  These problematic tables must therefore be migrated using the traditional Data Pump approach of exporting to a dump file, copying that dump file, and importing by reading it.
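
To identify such tables up front, a quick dictionary check helps; a minimal sketch, with the owner filter adjusted to the schemas being migrated:

SQL> SELECT owner, table_name, column_name, data_type
     FROM   dba_tab_columns
     WHERE  data_type IN ('LONG', 'LONG RAW')
     AND    owner = 'SCOTT';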

Also note that to use the NETWORK_LINK parameter the XDB and XML options must be installed in both the source and target databases.

See the examples below. Here we have two databases – SRCDB (source) and TGTDB (target).

SQL> select name from v$database;

NAME
---------
SRCDB

SQL> show user
USER is “SCOTT”

SQL> select * from tab;

no rows selected

SQL> create table example_tab1 as select * from all_objects;

Table created.

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
EXAMPLE_TAB1                   TABLE

 

Configure a TNS entry for SRCDB on the TGTDB database host (file location: $ORACLE_HOME/network/admin/tnsnames.ora). The entry is as below:

SRCDB =
  (description =
    (address =
      (protocol = tcp)
      (host = devdata.abc.com)
      (port = 1522)
    )
    (connect_data =
      (server = dedicated)
      (sid = SRCDB)
    )
  )
Test the connectivity using the tnsping utility:

$ tnsping SRCDB

TNS Ping Utility for Solaris:

Copyright (c) 1997, 2008, Oracle.  All rights reserved.

Used parameter files:

 

Used TNSNAMES adapter to resolve the alias
Attempting to contact (description = (address = (protocol = tcp) (host = devdata.abc.com) (port = 1522)) (connect_data = (server = dedicated) (sid = SRCDB)))
OK (20 msec)

Connect to TGTDB using SQL*Plus and create a database link to SRCDB as the scott user:

$ sqlplus

Enter user-name: scott/tiger

SQL> create database link SRCDB connect to scott identified by scott using 'SRCDB';

Database link created.

SQL> select * from tab@SRCDB;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
EXAMPLE_TAB1                   TABLE

The database link from TGTDB to SRCDB is working and ready.

Import the scott schema from the SRCDB database into the TGTDB database without a dump file:

$ impdp scott/tiger directory=exp_dir logfile=impnetworkscott.log network_link=SRCDB

 

Connected to: Oracle Database 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “SCOTT”.”SYS_IMPORT_SCHEMA_01″:  scott/******** directory=exp_dir logfile=impnetworkscott.log network_link=SRCDB
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported “SCOTT”.”EXAMPLE_TAB1″                      9507 rows
Job “SCOTT”.”SYS_IMPORT_SCHEMA_01″ completed.

SQL> select name from v$database;

NAME
---------
TGTDB

SQL> show user
USER is “SCOTT”

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
DEPT                           TABLE
EMP1                           TABLE
EMP2                           TABLE
EXAMPLE                        TABLE
EXAMPLE_P                      TABLE
T_EMP                          TABLE
TEST                           TABLE
8 rows selected.
Table EXAMPLE_TAB1 has been imported into the TGTDB database without a dump file!

Contributions

Oracle 12c – Moving Datafile Online

This is one of the tedious jobs which every DBA needs to perform at some point in time, due to:

  • Insufficient Disk Space
  • Move from Non-ASM to ASM disk

Prior to 12c, this was a challenge for DBAs because clients always look for minimal downtime, and it also requires additional activities outside the database, such as issuing OS commands in between the database commands.

So, in order to appreciate the ease of the new process, let’s first see what a DBA needs to do to move a datafile prior to 12c.

High-level steps include:

  1. Change the tablespace to offline – We need to do this so that DBWR does not write new blocks there. Database downtime starts with this command.
  2. Move/copy the datafile to the target location – This is an OS command, and not all DBAs have server-level access.
  3. Update the controlfile – This is a must because, after the file moves to the target location, the controlfile needs to be updated.
  4. Bring the tablespace online – We need to do this so that DBWR starts writing blocks again. Database downtime ends.

Let’s see an example of moving a datafile prior to Oracle 12c.

login as: oracle
oracle@lab1’s password:
Last login: Sun Sep 18 22:02:35 2016 from 10.62.109.49
[oracle@lab1]$ ps -ef|grep pmon
oracle 4239 4208 0 07:52 pts/0 00:00:00 grep pmon
oracle 5721 1 0 Aug22 ? 00:04:50 ora_pmon_LAB11G
grid 8794 1 0 Jul30 ? 00:07:54 asm_pmon_+ASM
oracle 10213 1 0 Jul30 ? 00:08:13 ora_pmon_LAB12C_NC
oracle 10585 1 0 Jul30 ? 00:08:16 ora_pmon_LAB12C_C

[oracle@lab1]$ . oraenv
ORACLE_SID = [oracle] ? LAB11G
The Oracle base for ORACLE_HOME=/opt/app/oracle/database/product/11.2.0.4/db_home11g is /opt/app/oracle/database
[oracle@lab1]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Fri Sep 23 07:57:46 2016
Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

STEP 1: TAKE TABLESPACE OFFLINE – DOWNTIME STARTS HERE

SQL> ALTER TABLESPACE USERS OFFLINE;

Tablespace altered.

STEP 2: MAKE CHANGES AT OS LAYER

SQL>!

[oracle@lab1]$ mv /u01/data/users01.dbf /u02/data/users01.dbf

 

STEP 3: UPDATE CONTROL FILE WITH NEW DATAFILE LOCATION

SQL> Alter database datafile '/u01/data/users01.dbf' rename to '/u02/data/users01.dbf';

STEP 4: BRING TABLESPACE ONLINE – DOWNTIME ENDS HERE

SQL> ALTER TABLESPACE USERS ONLINE;

Tablespace altered.

Starting with Oracle 12c, all the above steps can be completed with a single command. Let’s see an example.

STEP 1 & DONE:

[oracle@lab1]$ . oraenv
ORACLE_SID = [oracle] ? LAB12C_NC
The Oracle base for ORACLE_HOME=/opt/app/oracle/database/product/12.2.0.2/db_home12c is /opt/app/oracle/database1
[oracle@lab1]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Sep 23 09:14:38 2016

Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
SQL> Alter database move datafile '/u01/data/users01.dbf' to '/u02/data/users01.dbf';

Database altered.

See how much easier the DBA’s life becomes:

  1. No downtime requirement, so everyone including the client is happy
  2. Online operations on the datafile can be performed while the datafile move is in progress
  3. No need for the DBA to move the file at the OS layer; the command takes care of that. If the file already exists in the target location, use the REUSE clause to overwrite it:

SQL> Alter database move datafile '/u01/data/users01.dbf' to '/u02/data/users01.dbf' REUSE;

4. If you want to copy the file, just use the KEEP clause so that it retains the original file at the source:

SQL> Alter database move datafile '/u01/data/users01.dbf' to '/u02/data/users01.dbf' KEEP;

5. No need to update the controlfile separately.
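
A quick way to confirm that the controlfile already reflects the new location is to query V$DATAFILE (a minimal check, assuming the users01.dbf move shown above):

SQL> select file#, name from v$datafile where name like '%users01%';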

 

The real advantage of this is the ability to move a datafile from non-ASM to ASM (and vice versa) without downtime and without using RMAN:

SQL> Alter database move datafile '/u01/data/users01.dbf' to '+DATA';

Database altered.

Note: This command will throw ORA-01516 (nonexistent log file, data file, or temporary file) when working with a Pluggable Database, because we cannot move a PDB’s datafile while the current container is set to the CDB root.

[oracle@lab1]$ . oraenv
ORACLE_SID = [oracle] ? LAB12C_C
The Oracle base for ORACLE_HOME=/opt/app/oracle/database/product/12.2.0.2/db_home_1_12c is /opt/app/oracle/database2
[oracle@lab1]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Sep 23 09:14:38 2016

Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> alter database move datafile '/u01/data/users01.dbf' to '/u02/data/users01.dbf' REUSE;
alter database move datafile '/u01/data/users01.dbf' to '/u02/data/users01.dbf' REUSE
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file “17”

If we are working with container databases, in order to move a PDB’s files we must set our container to that PDB and then run the command:

SQL> alter session set container=pdb1;
Session altered.

SQL> alter database move datafile '/u01/data/users01.dbf' to '/u02/data/users01.dbf' REUSE;
Database altered.

SQL> alter session set container=CDB$ROOT;
Session altered.
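
For large datafiles the move can take a while. Progress can usually be tracked through V$SESSION_LONGOPS; a rough sketch, where the OPNAME filter is an assumption and may vary by version:

SQL> select sid, opname, sofar, totalwork,
            round(sofar/totalwork*100, 1) pct_done
     from   v$session_longops
     where  opname like 'Online data file move%'
     and    sofar <> totalwork;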


Contributions, Health Check, Linux, Oracle

Performance Tuning Approaches (Oracle)

An understanding of systems architecture and design provides the foundation for a proactive monitoring stance, but there is no better prevention than a periodic health check. Whether you accomplish this with OraChk, AWR reporting, or Statspack, regular inspection of a database is the most effective way of preventing issues before they happen.

Server issues (memory, CPU, and network bandwidth) all affect system performance. But how can you differentiate those from SQL performance issues, requiring different strategies and scope in the diagnostic phase?

Continue Reading…

Cloud, Contributions, Oracle

Back-up to Cloud

Over the last few years, a proliferation of web services has paved the way for saving enterprise back-ups. Oracle recently added the Oracle Secure Backup (OSB) cloud module to its inventory of back-up solutions, which allows RMAN to write database back-ups directly to the Amazon storage cloud. It offers all the major features of RMAN and provides a cost-effective alternative to tape and online storage. This document highlights some of the steps involved in configuring RMAN backup to the Amazon cloud.

Requirements:

OS: OSB is currently supported on Linux & Windows platforms.

JAVA: JDK 1.5 or higher is required to run the cloud module to configure the environment.

Note: A typical Oracle Database 11gR2 installation may already contain a Java 1.5 JDK, which can be used if the server doesn’t have an appropriate JDK installed. JDK location under the Oracle home, for example:
/u01/app/oracle/product/11.2.0.3/db1/jdk/bin.
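
Once the module is installed, RMAN is typically pointed at the cloud library through an SBT channel configuration. The sketch below is only illustrative – the library file name and the OSB_WS_PFILE location are assumptions that depend on where the install script placed them:

RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt PARMS 'SBT_LIBRARY=/u01/app/oracle/product/11.2.0.3/db1/lib/libosbws11.so, ENV=(OSB_WS_PFILE=/u01/app/oracle/product/11.2.0.3/db1/dbs/osbws.ora)';
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO sbt;
RMAN> BACKUP DATABASE;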

Continue Reading…

Contributions, Microsoft Azure, SQL Server 2016

Overview of New Feature – Stretch Database in SQL Server 2016

SQL Server 2016 has a new feature that stores cold data transparently and securely in the Microsoft Azure cloud. This feature is known as Stretch Database, and it will be beneficial to those needing to keep transactional data for long periods of time and to those looking to save money on storage. Being able to seamlessly migrate data to the Microsoft Azure cloud gives you the ability to archive data without having to change the way your applications query it. Stretch Database migrates entire tables.

If your database is already set up to store archive data in separate tables from current data, you will be able to easily migrate the archive data to Azure. Once you enable Stretch Database, it will silently migrate your data to an Azure SQL Database. Stretch Database leverages the processing power in Azure to run queries against remote data by rewriting the query. You will see this as a “remote query” operator in the query plan.

Continue Reading…

Contributions, SQL Server 2016

Overview of Query Store Feature – SQL Server 2016

Query Store automatically captures and retains a history of queries, query execution plans, and runtime execution statistics for troubleshooting performance problems caused by query plan changes. This feature greatly simplifies performance troubleshooting by making it quick to find performance differences.  Query Store is available in all editions of SQL Server 2016.

We can find the information captured by Query Store even after a server restart, because it persists inside internal tables of the database rather than in memory. When the Query Optimizer compiles a new execution plan for a query that performs much worse than the previous execution plan for that same query, Query Store allows us to choose a specific execution plan to use, rather than the new default plan. It is similar to forcing plans in earlier versions.
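
As a quick illustration, enabling Query Store and forcing a previously captured plan look roughly like the sketch below (the database name, query_id, and plan_id are placeholders):

-- Enable Query Store for a database
ALTER DATABASE SalesDB SET QUERY_STORE = ON;

-- Force a captured plan for a specific query
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;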

Continue Reading…

Contributions, Troubleshooting

Instant File Initialization – Performance Improvement

When your database files grow (either manually or automatically), does it take time and lead to performance issues?

The answer is yes! Do you find that the following operations consume more time than you thought they should?

  • Creating new databases that are bigger in size
  • Adding files
  • Restoring databases from backup

 

Instant file initialization helps make the operations mentioned above faster. Creating bigger databases up front is always recommended, to ensure that there is enough room to support the workload during peak utilization; it also helps keep pace with future growth.

Continue Reading…

Contributions, Oracle

A Quick and Simple Way to Measure the I/O

The following post highlights the use of Oracle’s built-in I/O performance tool introduced in 11gR1. It is part of the DBMS_RESOURCE_MANAGER PL/SQL package (the CALIBRATE_IO procedure). Though this post may not provide the complete picture of I/O in every respect, it is useful for quickly understanding the maximum sustainable IOPS and average latency, particularly for OLTP workloads. Understanding the latency for single-block reads is quite important since it directly impacts the performance of transaction-based queries that make use of index reads. A minimal example call appears after the checklist below.

A few points:

  • Make sure ‘timed_statistics’ is true (there is no need for further changes if statistics_level is ‘typical’).
  • Set ‘filesystemio_options’ to either setall or async (a restart of the instance may be required).
  • Run this process during off-hours, since it will generate I/O, and to avoid mixing it up with other activity in the database.
  • If RAC is used, make sure all instances are up and running.
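
A minimal example call, where num_physical_disks and max_latency are assumptions to adjust for your own storage:

SET SERVEROUTPUT ON

DECLARE
  l_max_iops  PLS_INTEGER;
  l_max_mbps  PLS_INTEGER;
  l_latency   PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,    -- assumption: match your storage
    max_latency        => 10,   -- target single-block latency in milliseconds
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('Max IOPS       : ' || l_max_iops);
  DBMS_OUTPUT.PUT_LINE('Max MBPS       : ' || l_max_mbps);
  DBMS_OUTPUT.PUT_LINE('Actual latency : ' || l_latency);
END;
/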

Continue Reading…

Contributions

Welcome to DBAStack.com – The Official DBA Source

We are so glad you stopped by. We are just getting started and we welcome your feedback. DBAStack.com is a growing group of Database Administrators with a mission to provide DBAs with valuable information that can be shared, expanded upon, and put to practical use. To get DBA Stack started, we’ll be focusing on a few subjects, such as Oracle, SAP, SQL Server 2016, and MongoDB. However, we will be expanding to more and more as we grow, so we welcome all DBA-related topics of interest.

We are also incorporating a FAQ section where you can ask your questions and have others from the community respond. More information coming soon. Have a question or want to help provide answers? Shoot us an email!

The main goal for DBAStack.com is to form a community for DBAs to help each other through brainstorming, networking, sharing, and learning. DBA Stack couldn’t exist without the contributions from our awesome community. Want your voice to be heard? We love contributions!

For DBAs, free time can be a very rare and precious thing. That is why we only want to post content that you care about, want to read, and can help make life a little easier. We believe no idea is too small, so let us know what you want to see.

We are still in our infancy stage, so thank you for your patience as we get started and build. New features and updates are added weekly, so stay tuned.

Thank you for checking out DBAStack.com. We hope you will find it your go-to source for all things DBA!