EXPDP/IMPDP performance issue due to “Streams AQ: enqueue blocked on low memory”

In this post, I discuss a performance issue I faced with an EXPDP schema backup.

We have a production database for which we take a daily EXPDP schema backup before EOD starts. The Control-M job is configured so that EOD does not start until the schema-level EXPDP backup completes. Since last month, the EXPDP backup had started taking considerable time: it used to finish in hardly 15 minutes, but all of a sudden it was taking 90 minutes, in turn delaying the start of EOD. The application team raised a problem ticket with us for this issue.

I did some basic tuning, like increasing the PGA and excluding statistics, but none of it helped minimize the backup time. I then observed the wait events for the EXPDP master and worker processes: they were waiting on “Streams AQ: enqueue blocked on low memory”. This was clearly related to the performance issue I was observing.

I checked the memory configuration of the database and found it was configured with automatic memory management:
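The original output is not reproduced here, but the current component sizes can be inspected with a query along these lines (a sketch, assuming 11g automatic memory management; on ASMM-only systems use v$sga_dynamic_components instead):

```sql
-- Show current sizes of the dynamically managed memory components.
-- Under AMM, components such as the shared pool can grow at the
-- expense of others, including the streams pool.
SELECT component, current_size/1024/1024 AS size_mb
FROM   v$memory_dynamic_components
WHERE  current_size > 0
ORDER  BY current_size DESC;
```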

In my case, the shared pool had grown suddenly, and other components such as the large pool, db cache and streams pool had shrunk as a result.

I dug through the historical backup logfiles and found that backups had been running fine until 6th Jan 2017. So the lack of streams pool memory was what caused EXPDP to run longer.

To resolve this issue, I switched from automatic to manual memory management and allocated 200 MB to the streams pool. After the memory configuration change, EXPDP performance was back to normal.
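A minimal sketch of the kind of change involved (the sga_target and pga_aggregate_target values below are illustrative assumptions, not the actual production settings):

```sql
-- Disable automatic memory management and pin the streams pool.
-- An explicit streams_pool_size acts as a floor even under ASMM.
ALTER SYSTEM SET memory_target = 0 SCOPE = spfile;
ALTER SYSTEM SET sga_target = 4G SCOPE = spfile;            -- example value
ALTER SYSTEM SET pga_aggregate_target = 1G SCOPE = spfile;  -- example value
ALTER SYSTEM SET streams_pool_size = 200M SCOPE = spfile;
-- Restart the instance for the changes to take effect.
```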

After searching MOS for similar issues, I found: EXPDP And IMPDP Slow Performance In 11gR2 and 12cR1 And Waits On Streams AQ: Enqueue Blocked On Low Memory (Doc ID 1596645.1)

Hope you find this post useful. :-)




ORA-27054: NFS file system where the file is created or resides is not mounted with correct options

For the last two days, I had been getting the “ORA-27054: NFS file system not mounted with correct options” error while running an RMAN backup to a Sun ZFS Storage Appliance.

The error was observed particularly while taking controlfile backups; RMAN datafile and archivelog backups without the controlfile were running fine.

RMAN-08132: WARNING: cannot update recovery area reclaimable file list
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ch01 channel at 02/15/2016 16:02:44
ORA-01580: error creating control backup file /mnt/bkp1/TEST1/TEST1_CTRL_snapcf.ctl
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 3
Additional information: 1

After searching MOS, I found the following note, which explains the different mount options for configuring NFS mounts:

Sun ZFS Storage Appliance: Oracle Database 11g R2 NFS Mount Point Recommendations (Doc ID 1567137.1)

After discussing this with the Solaris team, we found that the backup mount point was already mounted with all the recommended parameters.

So I searched MOS for more documents and found this one:

Oracle ZFS Storage: FAQ: Exadata RMAN Backup with The Oracle ZFS Storage Appliance (Doc ID 1354980.1)

As per the note: “DNFS is strongly recommended when protecting an Exadata with the ZFS Storage Appliance. It is required to achieve the published backup and restore rates.”

You can confirm whether DNFS is enabled by running the following query on the database:

select * from v$dnfs_servers;

In my case it was not enabled. DNFS can be enabled on each database node with the following command:

$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on

dcli may be used to enable Oracle Direct NFS on all of the database nodes simultaneously:

$ dcli -l oracle -g /home/oracle/dbs_group make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on

The database instance should be restarted after enabling Oracle Direct NFS.

After enabling DNFS, my backup started running without any issues.


Hope you find this post useful 🙂




Compression of dumpfiles with EXP/IMP/EXPDP:

One of the biggest challenges for DBAs is lack of disk space, especially for backups. Two days ago, I came across a situation where I needed to export/import a schema from UAT to DEV, but none of the filesystem mount points had sufficient free space to hold the export dumpfile. Taking an EXPDP schema backup and then compressing the dumpfile with GZIP/BZIP2 was also not possible, due to the same lack of disk space.

So I started exploring methodologies that would let me compress the dumpfile on the fly.

1. Use of pipes:


cd /home/oracle
# create a named pipe
mknod exp_disc_schema_scott.pipe p
# read the pipe - output to zip file in the background
gzip < exp_disc_schema_scott.pipe > exp_disc_schema_scott.dmp.gz &
# feed the pipe
exp file=exp_disc_schema_scott.pipe log=exp_disc_schema_scott.log owner=scott


cd /home/oracle
# create a named pipe
mknod imp_disc_schema_scott.pipe p
# read the zip file and output to pipe
gunzip < imp_disc_schema_scott.dmp.gz > imp_disc_schema_scott.pipe &
# feed the pipe
imp file=imp_disc_schema_scott.pipe log=imp_disc_full.log fromuser=scott touser=scott1


Advantages:

1. Compression happens in parallel with the export.
2. The compressed dumpfile can be used for import without decompression (as shown above).
3. Works on 10g as well as 11g.

Disadvantages:

1. Cannot be used with EXPDP/IMPDP, since Data Pump writes only to real files, not pipes.
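The pipe mechanism itself can be tried without a database; the sketch below substitutes a plain data generator for exp (all filenames here are illustrative) and shows that the compressed stream round-trips cleanly:

```shell
#!/bin/sh
set -e
cd /tmp
rm -f demo.pipe demo.dmp.gz
# create a named pipe (mkfifo is the portable form of 'mknod ... p')
mkfifo demo.pipe
# reader: compress whatever arrives on the pipe, in the background
gzip < demo.pipe > demo.dmp.gz &
# writer: stands in for 'exp file=demo.pipe ...'; here just 1000 lines
seq 1 1000 > demo.pipe
wait
# round-trip: decompressing the .gz yields the original stream
gunzip -c demo.dmp.gz | tail -1
rm -f demo.pipe
```

Because the writer and gzip run concurrently, the full uncompressed stream never touches disk; only the .gz file is written.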

2. Advanced compression:

From 11g, you can use Data Pump's COMPRESSION parameter to compress dumpfiles on the fly. It can compress data, metadata (METADATA_ONLY is the default), both (ALL) or nothing (NONE).

expdp directory=DATA_PUMP_DIR1 dumpfile=exp_disc_schema_scott.dmp logfile=exp_disc_schema_scott.log schemas=scott compression=all


Advantages:

1. Compression happens in parallel with the export.
2. The compressed dumpfile can be directly used for import without decompression.

Disadvantages:

1. It takes more time than a normal (uncompressed) EXPDP operation.
2. Compressing data requires the Advanced Compression Option license, which is an extra cost.
3. This option is only available from Oracle 11g onward.


Hope you find this post useful 🙂




RMAN restore database fails with ORA-19693: backup piece XXX already included.

Yesterday I was recreating a DR database from a production database backup when the restore database command failed with ORA-19693:

RMAN> restore database;
Starting restore at 05-AUG-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to G:\OPDB\REPOSITORY\DATABASE112_SE_X64\ORADATA\OPDB\SYSTEM01.DBF
channel ORA_DISK_1: restoring datafile 00015 to G:\OPDB\REPOSITORY\DATABASE112_SE_X64\ORADATA\OPDB\IFLOWDB_3.ORA
channel ORA_DISK_1: restoring datafile 00017 to G:\OPDB\REPOSITORY\DATABASE112_SE_X64\ORADATA\OPDB\IFLOWDB_2.ORA
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/05/2015 17:19:34
ORA-19693: backup piece G:\BACKUP\RMAN\OPDB_DBF_7829_1_886892410 already included

Before proceeding with the restore, I had cataloged all the backup pieces that had been copied, using the command: catalog start with 'G:\BACKUP\RMAN\';

After searching Metalink and Google, I found various suggested solutions, none of which worked for me.

Some suggested cataloging the backup pieces using “catalog start with” (which I had already done), while others suggested uncataloging the backup piece.

So I tried the following command to uncatalog one of the backup pieces:

RMAN> change backuppiece 'G:\BACKUP\RMAN\OPDB_DBF_7829_1_886892410' uncatalog;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of uncatalog command at 08/05/2015 17:19:52
RMAN-20261: ambiguous backup piece handle
RMAN-06092: error while looking up backup piece

Concentrating on the error “RMAN-06092: error while looking up backup piece”, my understanding was that although the “catalog start with” command had completed successfully, it had not actually cataloged any of the backup pieces.

So instead of “catalog start with”, I cataloged the backup pieces individually using the following command:

RMAN> catalog backuppiece 'G:\BACKUP\RMAN\OPDB_DBF_7829_1_886892410';
cataloged backup piece
backup piece handle=G:\BACKUP\RMAN\OPDB_DBF_7829_1_886892410 RECID=7851 STAMP=88
And so on.....
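Cataloging dozens of pieces by hand is tedious, so the per-piece commands can be generated with a small script. This is a sketch using Linux-style paths and dummy files for illustration (the original environment used Windows G:\ paths); the generated file could then be run with rman target / cmdfile=/tmp/catalog_pieces.rcv:

```shell
#!/bin/sh
set -e
# Stand-in for G:\BACKUP\RMAN, populated with dummy piece files
BKP_DIR=/tmp/rman_demo
rm -rf "$BKP_DIR"
mkdir -p "$BKP_DIR"
touch "$BKP_DIR/OPDB_DBF_7829_1_886892410" "$BKP_DIR/OPDB_DBF_7830_1_886892411"
# Emit one CATALOG BACKUPPIECE command per file into an RMAN command file
CMDFILE=/tmp/catalog_pieces.rcv
: > "$CMDFILE"
for piece in "$BKP_DIR"/OPDB_DBF_*; do
  printf "catalog backuppiece '%s';\n" "$piece" >> "$CMDFILE"
done
cat "$CMDFILE"
```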

After cataloging all the backup pieces individually, I was able to restore the database without any issues.

Hope you find this post useful 🙂




expdp backup on NFS mount point hangs due to Direct NFS: please check that oradism is setuid

A few days ago, we were trying to schedule an expdp schema backup on one of the UAT environments. The expdp command was very simple:

expdp system/manager directory=data_pump_dir dumpfile=uatdb_test1.dmp logfile=uatdb_test1.log schemas=test1

Though the schema size was hardly 300 MB, the backup was taking ages just to start. Finally I canceled the EXPDP session:

[oracle@UATDB] $ expdp system/manager directory=data_pump_dir dumpfile=uatdb_test1.dmp logfile=uatdb_test1.log schemas=test1
Export: Release - Production on Tue Dec 30 09:14:59 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
^CUDE-00001: user requested cancel of current operation
UDE-01013: operation generated ORACLE error 1013
ORA-01013: user requested cancel of current operation

I checked the alert log for clues and got one; it was showing the following warnings:

DM00 started with pid=52, OS id=42757, job SYSTEM.SYS_EXPORT_SCHEMA_03
Direct NFS: please check that oradism is setuid
Tue Dec 30 09:16:01 2014
Direct NFS: please check that oradism is setuid

I checked the permissions on the oradism binary:

[oracle@UATDB] $ ls -la $ORACLE_HOME/bin/oradism
-rwxr-x--- 1 oracle oinstall 1340408 Sep 25 2011 /u01/app/oracle/UATDB/

I searched Metalink for the correct permissions for oradism, and yes, in my environment they were set incorrectly. oradism should be owned by root with the setuid bit set:

[root@UATDB] $ chown root:root /u01/app/oracle/UATDB/
[root@UATDB] $ chmod 4755 /u01/app/oracle/UATDB/ 
[root@UATDB] $ ls -la $ORACLE_HOME/bin/oradism
-rwsr-xr-x 1 root root 1340408 Sep 25 2011 /u01/app/oracle/UATDB/

I retried the EXPDP operation and, as expected, it was successful!

Hope you find this post useful 🙂




IMPDP with network_link option fails with ORA-39126, ORA-06502, ORA-39127, ORA-44002

A few days ago, while performing an import using IMPDP with the network_link parameter, I came across this issue.

My IMPDP command was as given below:


I got the following errors before the job was terminated by a fatal error:

ORA-39127: unexpected error from call to "SYS"."DBMS_CUBE_EXP"."INSTANCE_EXTENDED_INFO_EXP"
ORA-44002: invalid object name
ORA-06512: at "SYS.DBMS_ASSERT", line 316
ORA-06512: at "SYS.DBMS_METADATA", line 9114
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.FETCH_XML_OBJECTS [PROCACT_SCHEMA:"XXDBA"]
ORA-06502: PL/SQL: numeric or value error: invalid LOB locator specified: ORA-22275
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 9001
----- PL/SQL Call Stack -----
 object line object
 handle number name
5408b1e60 20462 package body SYS.KUPW$WORKER
5408b1e60 9028 package body SYS.KUPW$WORKER
5408b1e60 10935 package body SYS.KUPW$WORKER
5408b1e60 2728 package body SYS.KUPW$WORKER
5408b1e60 9697 package body SYS.KUPW$WORKER
5408b1e60 1775 package body SYS.KUPW$WORKER
54eda2e28 2 anonymous block
Job "SYSTEM"."SYS_IMPORT_FULL_02" stopped due to fatal error at 17:10:20

I tried excluding schema XXDBA, but hit the same issue 😦

After searching MOS, I found a number of notes about this error; the following one explained the exact scenario I hit: DataPump Export (EXPDP) Errors ORA-39127 SYS.DBMS_CUBE_EXP.INSTANCE_EXTENDED_INFO_EXP ORA-44002 After Upgrade (Doc ID 1353491.1)

The note gives the following solution:

The issue points to an incomplete OLAP installation or orphan OLAP objects. If the OLAP option is not used, remove the DBMS_CUBE_EXP OLAP package from the export view as follows:
connect / as sysdba
create table exppkgact$_bck as select * from sys.exppkgact$;
delete from sys.exppkgact$ where package = 'DBMS_CUBE_EXP' and schema = 'SYS';
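For the curious, the orphan rows can be inspected before deleting them; a sketch using the same columns as the DELETE statement:

```sql
-- Check which export callout packages are registered; orphan OLAP
-- entries show up here after an incomplete OLAP removal.
SELECT schema, package
FROM   sys.exppkgact$
WHERE  package = 'DBMS_CUBE_EXP';
```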

This solution resolved my issue 🙂

Though my issue was resolved, I was curious about the table SYS.EXPPKGACT$ and found the following:

EXPPKGACT$ is the Data Pump handler table that lists all the packages that must be executed to complete an export.

So in my case the OLAP deinstallation, although successful, had left behind a couple of references to OLAP in the EXPPKGACT$ table, which caused the issue.

Hope you find this post useful 🙂




RMAN fails with kgefec: fatal error 0 kgepop: no error frame to pop to for error 603

We got an issue in one of the production environments: the RMAN archivelog backup was failing with the following errors:

kgefec: fatal error 0 
kgepop: no error frame to pop to for error 603

We were using the following command in the .rcv file to back up archivelogs:

BACKUP archivelog all delete input;

After searching Metalink, I found a note describing the same scenario: RMAN Archivelog backup with DELETE INPUT fails kgepop: no error frame to pop to for error 603 (Doc ID 1921228.1)

According to this note, RMAN errors out when over 5000 archivelogs are backed up in one command, resulting in an excessively large PL/SQL block being executed.

And yes, in my case RMAN was processing more than 2 TB of archivelogs in one go:

NAME                           TOTAL_MB       USED_MB       PCT_USED
------------------------------ -------------- ------------- ----------
+RECO_FIN                      3145728        2085370       66.29

As a solution, I reduced the volume of archivelogs to be backed up by first deleting the logs that had already been backed up at least once:

DELETE force noprompt archivelog until time '(sysdate - 2)' backed up 1 times to device type disk;
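An alternative approach (a hedged sketch, not what I ran; the time windows are illustrative) is to split the backup itself into time-bounded batches so that no single command processes too many logs:

```
# Sketch: back up archivelogs in batches instead of
# 'archivelog all' in one huge command
BACKUP archivelog until time 'sysdate-2' delete input;
BACKUP archivelog from time 'sysdate-2' delete input;
```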

DONE!!!!  This resolved my issue 🙂

Hope you find this post useful 🙂