DBMS_REDEFINITION Package


The data inside a table grows over time if you don't set up a proper purging process. Sometimes the business requires all the data, and nothing may be purged or archived. Most relational DBMSs handle big tables pretty well, but big tables can make maintenance a nightmare for Database Administrators. The rule is to keep tables as small as possible. Smaller tables are faster for Select, Insert, Update, and Delete, and they occupy less disk space. Who does not like small tables, and why can't we make them small?

It is the duty of the Developer and the DBA to monitor a table's growth and make appropriate recommendations and changes as data volume increases over time. This is not always done, as both are busy supporting new or existing projects, and they may not notice until the application or its users start to see performance degradation. Once a table grows big, queries slow down because they have to read from a big table. Delete, Insert, and Update operations are slow because the indexes on the table are big as well, and there are many more reasons why big tables are slow.

There are two recommendations I can make to speed up these big, slow tables. First, evaluate the table's data periodically and purge or archive it when possible. On a regular table, the only way to achieve this is by deleting data, which can take forever on a big table. If it were a partitioned table, you could drop the unwanted partitions, which might take two minutes or less. Two minutes or less is much better than a Delete operation that takes forever.
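To illustrate the difference, here is a sketch comparing the two operations (the partition name part_2000 is an assumption for a table partitioned by year; on a regular table you would be stuck with the delete):

--On a partitioned table: drop a year's data in one fast DDL statement
ALTER TABLE baniya.sales DROP PARTITION part_2000;

--On a regular table: the equivalent delete reads and removes rows one by one
DELETE FROM baniya.sales
 WHERE sales_date < TO_DATE('01-JAN-2001', 'DD-MON-YYYY');
COMMIT;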

The second approach is to convert the big table into a partitioned table. Partitioned tables are easy to maintain and are fast for DML operations. Why are they fast for DML? Because you work with the data in a particular partition. You can drop partitions if you don't need the data, which is fast compared to a delete on a regular table. In this article, we are going to convert a big table into a partitioned table online. Yes, it is going to be online; no downtime is required for this operation. Your application stays live while we convert a regular table into a partitioned table.

How do you accomplish this? There are two ways to get it done. The first is the traditional method, where you create a brand-new partitioned table and move the data from the old table to the new one along with its grants, privileges, and dependencies. The process takes time to move the data, and while it works 100% of the time, it requires the table to be offline.
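A rough sketch of that traditional offline approach, assuming a hypothetical sales_new table (index, grant, and constraint recreation are omitted for brevity):

--1. create the new partitioned table with the desired layout
CREATE TABLE baniya.sales_new(
 sales_id  NUMBER PRIMARY KEY,
 sales_date DATE,
 sales_amt  NUMBER(5, 2))
 PARTITION BY RANGE(sales_date)
 INTERVAL(numtoyminterval(1, 'YEAR'))
 (PARTITION part_2000 VALUES LESS THAN(TO_DATE('2001-JAN-01', 'YYYY-MON-DD')));
--2. copy the data while the application is offline
INSERT INTO baniya.sales_new SELECT * FROM baniya.sales;
COMMIT;
--3. swap the tables (then recreate indexes, grants, and dependencies)
ALTER TABLE baniya.sales RENAME TO sales_old;
ALTER TABLE baniya.sales_new RENAME TO sales;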

The second method, which I like the most, uses the Oracle-supplied DBMS_REDEFINITION package. Remember, it comes with a disclaimer: "not all tables qualify for online redefinition". The package lets you check a table before you begin redefining it. This is the best approach for mission-critical applications that serve customers 24/7 and can't afford a single minute of downtime. I have used the redefinition package and I loved it. After you are done reading this article, you will love the approach I am about to show you. I am going to show you how you can use this package to make your busy life simple and easy, just the way every DBA and Developer wants.

We have a sales table in the baniya schema. The table starts out empty, and I am going to insert random sales data for the years 2000-2016. All the DDL and DML scripts for this example are below, with instructions. The sales table ends up with around 24,068 records and holds data since 2000. Our goal is to convert the baniya.sales table into a partitioned table where each partition stores one year's worth of sales data. After a successful table redefinition, the sales table will have 17 partitions. Are you ready?

First, we will prepare the environment by creating a sales table that holds data from the year 2000 onward. Next, we will redefine the regular sales table into a range-interval partitioned table.

Prep Work:
  1. Create Sales table.
  2. Create Sequence for PK (sales_id)
  3. Write a PL/SQL script to randomly insert sales data from 2000.
  4. Verify sales table.
DBMS_REDEFINITION ONLINE:
  1. Check if the table qualifies for Online Redefinition.
  2. Create Interim Partitioned Table.
  3. Start Online Redefinition Process
  4. Create table Constraints and dependencies on Interim table.
  5. Complete Redefinition Process.
  6. Verify the partitioned table (partitions, upper bounds, partition key column, partition type, inserts, each partition has the proper data, etc.).
  7. Clean up the interim table.
We will write the SQL DDL for everything mentioned in the prep work and online redefinition steps below:

Prep Work:

Sales Table:
CREATE TABLE baniya.sales(
 sales_id  NUMBER PRIMARY KEY,
 sales_date DATE,
 sales_amt  NUMBER(5, 2));

Sequence:
CREATE SEQUENCE baniya.sales_id_seq START WITH 500 INCREMENT BY 1 NOMAXVALUE;

For Oracle version 12c or higher, you may not need a sequence; instead you can define the sales_id column as an IDENTITY column. This is similar to the auto-increment columns in MySQL and SQL Server.
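A sketch of the alternative 12c DDL using an identity column (this replaces both the table DDL above and the sequence):

CREATE TABLE baniya.sales(
 sales_id  NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
 sales_date DATE,
 sales_amt  NUMBER(5, 2));

With GENERATED ALWAYS, Oracle supplies sales_id on every insert and rejects explicit values; if you still want to be able to insert your own IDs, use GENERATED BY DEFAULT instead.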

Next, we back-fill random sales data from 2000 through the system date. The goal is to create at least one record for each day since 2000, so we will have more partitions when we are done redefining the table. We will run this insert script four times to load more data into the sales table; you can run it as many times as you like, as the script is designed to be rerunnable.

Insert Random Data:
SET serveroutput ON;
DECLARE
 counter   NUMBER := 0;
 sales_date  DATE := TO_DATE('01-JAN-2000', 'DD-MON-YYYY');
 my_sales_date DATE := TO_DATE('01-JAN-2000', 'DD-MON-YYYY');
 sales_amt  NUMBER(5, 2);
BEGIN
 LOOP
  my_sales_date := sales_date + counter;
  --random number between 200 and 999
  sales_amt := dbms_random.value(200, 999);
  INSERT INTO baniya.sales(
   sales_id,
   sales_date,
   sales_amt)
  VALUES(
   baniya.sales_id_seq.NEXTVAL,
   my_sales_date,
   sales_amt) ;
  dbms_output.put_line('SALES DATE '||my_sales_date) ;
  dbms_output.put_line('SALES AMT '||sales_amt) ;
  counter :=(counter + 1) ;
  IF my_sales_date =(TRUNC(sysdate)) THEN
   EXIT;
  END IF;
 END LOOP;
 COMMIT;
END;
/

Verifying the sales table and the data we just inserted:
SELECT * FROM baniya.sales;
SELECT COUNT(*) FROM baniya.sales; 
--inserted around 24068 records. 
select min(sales_date), max(sales_date) from baniya.sales; 
-- MIN: 01-JAN-2000, MAX: 21-JUN-2016

DBMS_REDEFINITION ONLINE:

Step 1: Check that the sales table qualifies for online redefinition:

BEGIN
DBMS_REDEFINITION.CAN_REDEF_TABLE('BANIYA', 'SALES');
END;
/
Output:
PL/SQL procedure successfully completed.

Step 2: Create the interim table. This table has the structure you want the final partitioned table to have.

CREATE TABLE baniya.sales_interim(
 sales_id  NUMBER PRIMARY KEY,
 sales_date DATE,
 sales_amt  NUMBER(5, 2))
 PARTITION BY RANGE(sales_date)
 INTERVAL(numtoyminterval(1, 'YEAR'))
 (PARTITION PART_2000 VALUES LESS THAN(TO_DATE('2001-JAN-01', 'YYYY-MON-DD')));

Verify the interim table by making sure the part_2000 partition exists and that a new data insert creates a new partition. Delete any test data before moving on to step 3. You may also use the Oracle dictionary views below to check the partition key, partition type, and upper bound.
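A quick sketch of such an interval-partitioning test (the test row values are illustrative; the rollback ensures the interim table is empty again before step 3):

--inserting a 2005 date should auto-create a new yearly partition
INSERT INTO baniya.sales_interim VALUES(1, TO_DATE('15-JUN-2005', 'DD-MON-YYYY'), 100.00);
--a new system-generated partition should now appear alongside PART_2000
SELECT partition_name, high_value
 FROM dba_tab_partitions
 WHERE table_owner ='BANIYA'
 AND table_name ='SALES_INTERIM';
--undo the test insert so the interim table is empty before redefinition
ROLLBACK;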

--partition key column
SELECT * FROM dba_part_key_columns
WHERE name ='SALES_INTERIM'
AND owner ='BANIYA';
--partition type  
SELECT owner,
table_name,
partitioning_type,
ref_ptn_constraint_name
FROM dba_part_tables
WHERE owner = 'BANIYA'
AND table_name = 'SALES_INTERIM'; 
--Checking partition name and high value.
SELECT partition_name, high_value
FROM dba_tab_partitions
WHERE table_owner ='BANIYA'
AND table_name ='SALES_INTERIM';

Step 3 - Start Online Redefinition:
BEGIN
   DBMS_REDEFINITION.START_REDEF_TABLE('BANIYA', 'SALES', 'SALES_INTERIM') ;
END;
/
Output:
PL/SQL procedure successfully completed.

Step 4 - Copy dependent objects:
DECLARE
 num_errors PLS_INTEGER;
BEGIN
   DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('BANIYA', 'SALES', 'SALES_INTERIM', 
   DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, num_errors) ;
END;
/
Output:
PL/SQL procedure successfully completed.

Step 5 - Check for errors:
 SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;

Step 6 - Verify the row counts on the original and interim tables. They should both have the same number of rows.
-- Verify the data on main table
SELECT COUNT(*) FROM baniya.sales; 

--verify the data on interim table
SELECT COUNT(*) FROM baniya.sales_interim;

Step 7 - Resync the table. This applies to the interim table any changes made to the original table since the redefinition started. Note that EXEC is a SQL*Plus command and cannot be used inside a PL/SQL block.
BEGIN
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('BANIYA','SALES','SALES_INTERIM');
END;
/

Step 8 - Finish the online redefinition. The redefinition process completes and the interim table becomes the original table.

BEGIN
 DBMS_REDEFINITION.FINISH_REDEF_TABLE('BANIYA','SALES','SALES_INTERIM');
END;
/
Output:
PL/SQL procedure successfully completed

We are done with the redefinition process, but we need to make sure all the data, constraints, and grants remained the same as on the original table. There are a few things I like to check, such as the number of rows, constraints, indexes, the partition key column, and the number of partitions.

--Verify the final table.
-- displays partition key columns
SELECT * FROM dba_part_key_columns
 WHERE name ='SALES'
 AND owner ='BANIYA';
-- displays partition type  
SELECT owner,
  table_name,
  partitioning_type,
  ref_ptn_constraint_name
 FROM dba_part_tables
 WHERE owner = 'BANIYA'
 AND table_name = 'SALES'; 
--displays partition name and high value.
SELECT partition_name, high_value
 FROM dba_tab_partitions
 WHERE table_owner ='BANIYA'
 AND table_name ='SALES';

Step 9 - Clean up the interim table.
DROP TABLE baniya.sales_interim;

Congratulations! You are done. If you have completed the lab, you are now a certified DBMS_REDEFINITION professional.


Interested in working with me? I can be reached at pbaniya04[at]gmail.com for any questions or consulting opportunities, or you may drop a line to say HELLO. Thank you again for visiting my blog; I look forward to serving you more.

Have a Database-ious Day!
