Oracle Histograms Simplified: Part 1

Oracle histograms are a very useful feature that helps the CBO make the right decisions while constructing execution plans. But many DBAs do not use them, assuming they are too complex. In the next few posts, I will try to simplify the concept so that we can get maximum benefit from it.

What is a histogram & what is its purpose?

A histogram is a special type of column statistic that provides more detailed information about the data distribution in a table column. In simple words, it is a CBO feature that helps the optimizer determine how data is distributed or skewed within a column. Columns with skewed data that appear in predicates are good candidates for histogram creation.

By default, the CBO assumes a uniform distribution of data within a column across its distinct values. For columns that contain skewed data, a histogram helps the CBO generate more accurate execution plans.

The example below illustrates the significance of histograms:


I have a table ADI.SALES_ORDER with 100000 rows and an index SALES_ORDER_N1 on column GENERATED.

The GENERATED column has 4 distinct values: A, B, Y, N. The CBO will assume uniform data distribution across these values, so any query with GENERATED as a predicate will go for a full table scan (FTS), as the CBO assumes it has to select 25% of the data in the table.

But in reality, the data in column GENERATED is skewed, as given below:
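The original screenshot of the distribution is not reproduced here, but a quick GROUP BY (a sketch, assuming the table and column names used in this post; the sample counts in the trailing comment are illustrative, not the original figures) shows the kind of skew being described:

```sql
-- Sketch: inspect the actual value distribution of GENERATED.
SELECT generated, COUNT(*) AS row_count
  FROM adi.sales_order
 GROUP BY generated
 ORDER BY row_count DESC;

-- Example of a skewed result (illustrative only):
--   Y  ~80000 rows
--   A  ~15000 rows
--   B   ~4500 rows
--   N    <500 rows
```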

I gathered fresh statistics on the table, without a histogram, as shown below:
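The statistics-gathering command itself was in a screenshot; a minimal sketch using DBMS_STATS (METHOD_OPT with SIZE 1 creates one bucket per column, i.e. no histograms) would be:

```sql
-- Gather fresh statistics with NO histograms (SIZE 1 = single bucket per column)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'ADI',
    tabname    => 'SALES_ORDER',
    method_opt => 'FOR ALL COLUMNS SIZE 1',
    cascade    => TRUE);
END;
/
```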

Now I will compare execution plans with different values for the predicate:
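The plans themselves were screenshots; the comparison can be reproduced with something like the following sketch (standard EXPLAIN PLAN / DBMS_XPLAN usage, with the predicate values from this post):

```sql
-- Plan for the popular value (expect a full table scan):
EXPLAIN PLAN FOR
  SELECT * FROM adi.sales_order WHERE generated = 'Y';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Plan for the rare value (without a histogram, this also shows FTS):
EXPLAIN PLAN FOR
  SELECT * FROM adi.sales_order WHERE generated = 'N';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```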

As expected, this query goes for FTS, as it has to select approx. 80% of the data.

But I would expect the query to pick the index in the case of GENERATED='N', as it selects less than 1% of the rows in the table.

But no, it still goes for FTS 😦 & that is due to the CBO's default behavior.

This is where a histogram comes into the picture:
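The histogram-creation step was also shown as a screenshot; a sketch of re-gathering statistics with a histogram on just the GENERATED column (SIZE 254 requests up to 254 buckets; SIZE AUTO would let Oracle decide) would be:

```sql
-- Re-gather statistics, this time requesting a histogram on GENERATED
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'ADI',
    tabname    => 'SALES_ORDER',
    method_opt => 'FOR COLUMNS GENERATED SIZE 254',
    cascade    => TRUE);
END;
/

-- Verify that the histogram was created:
SELECT column_name, histogram
  FROM dba_tab_col_statistics
 WHERE owner = 'ADI' AND table_name = 'SALES_ORDER';
```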

Now I will once again compare the plans:

As expected, the following query goes for FTS, as it has to select approx. 80% of the data.

But the following query picked up the index, thanks to the histogram. 🙂

In this post, we focused on what a histogram is & why it is used.

Hope you will find this post useful 🙂

Stay tuned for Oracle Histograms Simplified: Part 2, in which I will explain the different ways to create & drop a histogram on a particular column.

Cheers

Regards,
Adityanath


Drop empty tablespace fails with ORA-14404 or ORA-14407: partitioned table contains partitions/subpartitions in a different tablespace

Recently I was performing a database reorg on a pre-prod environment, using the traditional method of table/LOB movement followed by index rebuilds. After moving all segments from the current tablespace to the new one, I was ready to drop the original.


SQL> select count(0) from dba_segments where tablespace_name='MY_TABLESPACE';

  COUNT(0)
----------
         0

And it errored out!!!


SQL> DROP TABLESPACE MY_TABLESPACE INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE MY_TABLESPACE INCLUDING CONTENTS AND DATAFILES
*
ERROR at line 1:
ORA-14404: partitioned table contains partitions in a different tablespace

As first step, I tried purging dba_recyclebin, but it didn’t help.

After searching on Metalink, I found the following note describing a similar issue: ORA-14404 or ORA-14407 When Trying to Drop a Tablespace (Doc ID 1674989.1), which lists two causes:

1. An attempt was made to drop a tablespace that contains tables whose partitions or subpartitions are not completely contained in this tablespace.

OR

2. This can happen due to deferred segment creation, introduced in 11.2.

The first cause matched my case:

I identified the tables whose partitions were spread across other tablespaces in addition to MY_TABLESPACE & moved them to the new tablespace as well:


SELECT 'alter table ' || b.table_owner || '.' || a.table_name
       || ' move partition ' || a.partition_name
       || ' tablespace MY_TABLESPACE_NEW;'
  FROM (SELECT DISTINCT table_name, partition_name
          FROM dba_tab_partitions
         WHERE tablespace_name = 'MY_TABLESPACE') a,
       dba_tab_partitions b
 WHERE b.tablespace_name <> 'MY_TABLESPACE'
   AND a.table_name = b.table_name;

After moving all affected table partitions, I once again tried dropping tablespace MY_TABLESPACE. It failed again with the same error: ORA-14404.
Then I found the same issue with some of the index partitions as well, for which I used the following command:


SELECT DISTINCT 'alter index ' || b.index_owner || '.' || a.index_name
       || ' rebuild partition ' || a.partition_name
       || ' tablespace MY_TABLESPACE_NEW;'
  FROM (SELECT DISTINCT index_name, partition_name
          FROM dba_ind_partitions
         WHERE tablespace_name = 'MY_TABLESPACE') a,
       dba_ind_partitions b
 WHERE b.tablespace_name <> 'MY_TABLESPACE'
   AND a.index_name = b.index_name;

Once the indexes were rebuilt, I was able to drop tablespace MY_TABLESPACE without any issues 🙂

Note: You may also get ORA-14407 in some cases, where sub-partitions are the issue; for that you can use the same approach, just with the sub-partition views (dba_tab_subpartitions & dba_ind_subpartitions).
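As a sketch of the sub-partition variant (same idea as the commands above, assuming the same tablespace names), the move statements can be generated from dba_tab_subpartitions:

```sql
-- Generate MOVE statements for table sub-partitions still in the old tablespace
SELECT 'alter table ' || table_owner || '.' || table_name
       || ' move subpartition ' || subpartition_name
       || ' tablespace MY_TABLESPACE_NEW;'
  FROM dba_tab_subpartitions
 WHERE tablespace_name = 'MY_TABLESPACE';
```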

Just to elaborate on the second possible cause of this error: deferred segment creation.


Oracle Database 11.2 includes a new space-allocation method. When you create a non-partitioned heap table, creation of the table segment is deferred until the first row is inserted. This behavior is enabled by default when the initialization parameter DEFERRED_SEGMENT_CREATION is set to TRUE. In simple words, if this parameter is TRUE, a newly created table appears in dba_tables immediately, but will not appear in dba_segments until a row is inserted into it. So in the case of deferred segments, you will not be able to drop the tablespace even though dba_segments reports it as empty.
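For the deferred-segment case, here is a sketch of how to find the affected tables and force segment creation (SEGMENT_CREATED in dba_tables and ALTER TABLE ... ALLOCATE EXTENT are standard 11.2 features; the table name below is hypothetical):

```sql
-- Find tables defined in the tablespace that have no segment yet
SELECT owner, table_name
  FROM dba_tables
 WHERE tablespace_name = 'MY_TABLESPACE'
   AND segment_created = 'NO';

-- Force segment creation for such a table (hypothetical name),
-- so it can then be moved out before the drop
ALTER TABLE scott.some_deferred_table ALLOCATE EXTENT;
```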


Hope you will find this post useful 🙂

Cheers

Regards,
Adityanath

How to migrate the character set of a 12c database using the Database Migration Assistant for Unicode (DMU) tool

This post will help you convert the character set of a database from WE8ISO8859P15 to AL32UTF8 in 12c. From 12c onwards you need to use the DMU tool for character-set conversion instead of the traditional CSSCAN & CSALTER method. DMU is the Database Migration Assistant for Unicode. It converts the NLS_CHARACTERSET of an existing database to AL32UTF8 or UTF8.

DMU divides the conversion into 4 steps:

1. Install DMU repository.
2. Scan the database.
3. Resolve the migration issues.
4. Convert the database.

Once DMU is started, you will need to create a database connection.

To make use of DMU, you first need to create the DBMS_DUMA_INTERNAL package under the SYS schema. To create it, run the following script as SYS:

SQL> @?/rdbms/admin/prvtdumi.plb

If the package is missing, the repository installation will fail with an error.

Then you can complete the DMU repository creation through the wizard.

Now your database is ready for the scan.

Wait till scanning completes, then right-click the connection name and choose "Database Scan Report".

Once you get the results, you need to tackle the "invalid representation" & "over column limit" issues.

For invalid representations, right-click a table and choose "Cleansing Editor". Characters with issues appear as small squares. You can edit the data to fix the invalid representations.

For over-column-limit issues, use the bulk cleansing option.

After re-scanning the database, you should see no remaining migration issues.

Now your database is ready for character-set migration 🙂

Done!!!!!!!!!!

Hope you will find this post useful 🙂

Cheers

Regards,

Adityanath

ORA-39726: unsupported add/drop column operation on compressed tables

A few days back, I was working on one of the UAT environments to drop columns from 2 tables that had already been set to unused, per a request from one of my application teams.


Note: To identify unused columns in a table, query the dba_tab_cols view as follows. If the count is greater than zero, the table has unused columns. These columns will not be visible in the dba_tab_columns view.

select count(0) from dba_tab_cols where table_name=<table_name> and column_name like 'SYS%$';


Both tables were compressed, & as per Oracle's documentation, compressed tables do not support dropping columns.

So to drop these unused columns, I would need to follow the simple approach of uncompressing the tables first, then dropping the unused columns, and finally compressing them back.

This approach worked well for the first table. 🙂


SQL> select count(0) from dba_tab_cols where table_name='LPP_TMP' and column_name like 'SYS%$';

  COUNT(0)
----------
         2

SQL> ALTER TABLE TEST1.LPP_TMP DROP UNUSED COLUMNS;

Table altered.

SQL> select count(0) from dba_tab_cols where table_name='LPP_TMP' and column_name like 'SYS%$';

  COUNT(0)
----------
         2

SQL> ALTER TABLE TEST1.LPP_TMP nocompress;

Table altered.

SQL> ALTER TABLE TEST1.LPP_TMP DROP UNUSED COLUMNS;

Table altered.

SQL> select count(0) from dba_tab_cols where table_name='LPP_TMP' and column_name like 'SYS%$';

  COUNT(0)
----------
         0

SQL> ALTER TABLE TEST1.LPP_TMP compress;

Table altered.

Though it worked well for the first table, the same strategy on the second table gave me "ORA-39726: unsupported add/drop column operation on compressed tables". 😦


SQL> ALTER TABLE TEST1.LPPF_TMP nocompress;

Table altered.

SQL> ALTER TABLE TEST1.LPPF_TMP DROP UNUSED COLUMNS;
ALTER TABLE TEST1.LPPF_TMP DROP UNUSED COLUMNS
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

After searching on Metalink I found the note ORA-39726: Clarification on error when Drop Unused Column does not help (Doc ID 1574318.1), which suggests using "table move" or "table redefinition" to make the table completely uncompressed, while warning that you may still not be able to drop the column.

I still tried the workaround of moving the table before uncompressing it, & it worked for me 🙂


SQL> select count(0) from dba_tab_cols where table_name='LPPF_TMP' and column_name like 'SYS%$';

  COUNT(0)
----------
         2

SQL> ALTER TABLE TEST1.LPPF_TMP move;

Table altered.

SQL> ALTER TABLE TEST1.LPPF_TMP nocompress;

Table altered.

SQL> ALTER TABLE TEST1.LPPF_TMP DROP UNUSED COLUMNS;

Table altered.

SQL> select count(0) from dba_tab_cols where table_name='LPPF_TMP' and column_name like 'SYS%$';

  COUNT(0)
----------
         0

SQL> ALTER TABLE TEST1.LPPF_TMP compress;

Table altered.

Hope you will find this post useful 🙂

Cheers

Regards,

Adityanath

Compression of dumpfiles with EXP/IMP/EXPDP

One of the biggest challenges for DBAs is lack of disk space, especially for backups. Two days ago, I came across a situation where I needed to export/import a schema from UAT to DEV, but none of the filesystem mount points had sufficient space for the export dumpfile. In my case, taking an EXPDP schema backup and then compressing the dumpfile with GZIP/BZIP2 was not possible due to insufficient disk space.

So I started exploring different methodologies that would let me compress the dumpfile on the fly.

1. Use of pipes:

Export:


cd /home/oracle
# create a named pipe
mknod exp_disc_schema_scott.pipe p
# read the pipe - output to zip file in the background
gzip < exp_disc_schema_scott.pipe > exp_disc_schema_scott.dmp.gz &
# feed the pipe
exp file=exp_disc_schema_scott.pipe log=exp_disc_schema_scott.log owner=scott

Import:


cd /home/oracle
# create a named pipe
mknod imp_disc_schema_scott.pipe p
# read the zip file and output to pipe
gunzip < imp_disc_schema_scott.dmp.gz > imp_disc_schema_scott.pipe &
# feed the pipe
imp file=imp_disc_schema_scott.pipe log=imp_disc_full.log fromuser=scott touser=scott1

Advantages:

1. Compression happens in parallel with the export.
2. The compressed dumpfile can be used for import without decompression (as shown above).
3. Works for 10g as well as 11g.

Disadvantages:

1. This methodology can't be used with EXPDP/IMPDP.


2. Advanced compression:

From 11g, you can use the Advanced Compression option to compress dumpfiles on the fly. The COMPRESSION parameter can compress data, metadata (the default), both (ALL), or none.


expdp directory=DATA_PUMP_DIR1 dumpfile=exp_disc_schema_scott.dmp logfile=exp_disc_schema_scott.log schemas=scott compression=all

Advantages:

1. Compression happens in parallel with the export.
2. The compressed dumpfile can be directly used for import without decompression.

Disadvantages:

1. It takes more time than a normal EXPDP operation (without compression).
2. The Advanced Compression license option must be enabled, which is an extra cost.
3. This option is only available from Oracle 11g.



Hope you will find this post useful 🙂

Cheers

Regards,

Adityanath

How to start & stop Enterprise Manager (EM) or the Oracle Management Service (OMS) in 12c

To stop Enterprise Manager 12c, follow the steps below:

1. Stop the Oracle Management Service:

Run the below command from $OMS_HOME:
emctl stop oms -all

2. Stop the Management Agent:

Run the below command from $AGENT_HOME:
emctl stop agent

3. Stop the Oracle Management Repository:

For this you need to stop the database & listener services.

To start Enterprise Manager 12c, follow the steps below:

1. Start the Oracle Management Repository:

Start the database & listener services.

2. Start the Oracle Management Service:

Run the below command from $OMS_HOME:
emctl start oms

3. Start the Management Agent:

Run the below command from $AGENT_HOME:
emctl start agent


Hope you will find this post useful 🙂

Cheers

Regards,

Adityanath

Deploying Oracle E-Business Suite plug-in or Oracle Application Management Pack into ORACLE OEM 12C

This post will provide you the steps to deploy the Oracle E-Business Suite plug-in into Oracle OEM 12c, which is also known as the Oracle Application Management Pack for Oracle E-Business Suite.

The Oracle Application Management Pack for Oracle E-Business Suite extends Oracle Enterprise Manager 12c Cloud Control to help monitor and manage Oracle E-Business Suite systems more effectively. The pack integrates Oracle Applications Manager with Cloud Control to provide a consolidated, end-to-end Oracle E-Business Suite management solution. The pack can be used to manage both Oracle E-Business Suite Release 11i systems and Release 12 systems.

Before proceeding with the deployment, you first need to download "Oracle Application Management Suite for Oracle E-Business Suite 12.1.0.3.0" from https://edelivery.oracle.com. This is a zip file named V46070-01.zip, about 433MB in size.

Now you are ready for the OAM suite deployment:


oracle@rmb-zdr-oem01:$export EMCLI_STATE_DIR=/export/home/oracle/plugins
oracle@rmb-zdr-oem01:$export PATH=$PATH:$EMCLI_STATE_DIR
oracle@rmb-zdr-oem01:$ /u01/oracle/12C/middleware/oms/bin/emcli setup -dir=$EMCLI_STATE_DIR -url=https://oracle@rmb-zdr-oem01:7799 -user=sysman
Oracle Enterprise Manager 12c Release 4.
Copyright (c) 1996, 2014 Oracle Corporation and/or its affiliates. All rights reserved.
The configuration directory "/export/home/oracle/plugins" may not be local. See the "dir" option in the help for the setup command.
Do you want to continue using this directory? [yes/no] yes
Enter password:

This process should complete successfully, giving the status: Emcli setup successful.

Extract V46070-01.zip, which will give you this file: ~/software/EBS-PlugIn/12.1.0.3.0/12.1.0.3.0_oracle.apps.ebs_2000_0.opar. Now import this plug-in update into OEM.


oracle@zbx-oem01:$ /u01/oracle/12C/middleware/oms/bin/emcli import_update -file="/export/home/oracle/plugins/software/EBS-PlugIn/12.1.0.3.0/12.1.0.3.0_oracle.apps.ebs_2000_0.opar" -omslocal
Processing update: Plug-in - Enterprise Manager for Oracle E-Business Suite consists of System Management and Change Management Feature Sets
Successfully uploaded the update to Enterprise Manager. Use the Self Update Console to manage this update.

Now you need to use the OEM/OMS console to deploy the plug-in: go to Setup => Extensibility => Plug-ins and follow the deployment wizard.


After this, you need to monitor the progress using the command: emctl status oms -details

Once everything is back up & running, you can log out and log back in to the OEM console to see the deployed Oracle E-Business Suite plug-in 🙂


Hope you will find this post useful 🙂

Cheers

Regards,

Adityanath