That is—all of the data for the month of February 2020. If we insert another row in some other month, as follows, we can see that another partition, SYS_P1624, is added that contains all of the data for the month of June 2020:
SQL> insert into audit_trail (ts,data)
     values ( to_date('25-jun-2020', 'dd-mon-yyyy'), 'xx' );

1 row created.
SQL> select a.partition_name, a.tablespace_name, a.high_value,
            decode( a.interval, 'YES', b.interval ) interval
       from user_tab_partitions a, user_part_tables b
      where a.table_name = 'AUDIT_TRAIL'
        and a.table_name = b.table_name
      order by a.partition_position;
You might be looking at this output and asking why everything is in the USERS tablespace. We clearly asked for the data to be spread out over the USERS tablespace and the EXAMPLE tablespace, so why is everything in a single tablespace?
It has to do with the fact that when the database is figuring out what partition the data goes into, it is also computing which tablespace it would go into. Since each of our partitions is an even number of months away from each other and we are using just two tablespaces, we end up using the same tablespace over and over.
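For reference, the kind of table definition we are working against looks roughly like the following sketch (a reconstruction only: the column datatypes and sizes and the transition-point date are assumptions, not the exact DDL used to create AUDIT_TRAIL):

create table audit_trail
( ts    timestamp,
  data  varchar2(30)
)
partition by range(ts)
interval (numtoyminterval(1,'month'))   -- one partition per month, created on demand
store in (users, example)               -- interval partitions draw from this tablespace list
( partition p0 values less than
    (to_date('01-jan-2020','dd-mon-yyyy')) tablespace users
);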
If we only loaded “every other month” into this table, we would end up using only a single tablespace. We can see that the EXAMPLE tablespace can be used by adding some row that is an “odd” number of months away from our existing data:
SQL> insert into audit_trail (ts,data)
     values ( to_date('15-mar-2020','dd-mon-yyyy'), 'xx' );

1 row created.
SQL> select a.partition_name, a.tablespace_name, a.high_value,
            decode( a.interval, 'YES', b.interval ) interval
       from user_tab_partitions a, user_part_tables b
      where a.table_name = 'AUDIT_TRAIL'
        and a.table_name = b.table_name
      order by a.partition_position;
Now we have used the EXAMPLE tablespace. This new partition was slid in between the two existing partitions and will contain all of our March 2020 data.
You might be asking, “What happens if I roll back at this point?” If we were to roll back, it should be obvious that the AUDIT_TRAIL rows we just inserted would go away:
SQL> select * from audit_trail;
SQL> rollback;

Rollback complete.
SQL> select * from audit_trail;

no rows selected
But what isn’t immediately clear is what happens to the partitions we added: do they stay, or do they go away as well? A quick query verifies that they stay:
SQL> select a.partition_name, a.tablespace_name, a.high_value,
            decode( a.interval, 'YES', b.interval ) interval
       from user_tab_partitions a, user_part_tables b
      where a.table_name = 'AUDIT_TRAIL'
        and a.table_name = b.table_name
      order by a.partition_position;

PARTITION_ TABLESPACE HIGH_VALUE INTERVAL
As soon as they are created, they are committed and visible. These partitions are created using a recursive transaction, a transaction executed separate and distinct from any transaction you might already be performing.
When we went to insert the row and the database discovered that the partition we needed did not exist, the database immediately started a new transaction, updated the data dictionary to reflect the new partition’s existence, and committed its work.
It must do this, or there would be severe contention (serialization) on many inserts as other transactions would have to wait for us to commit to be able to see this new partition. Therefore, this DDL is done outside of your existing transaction, and the partitions will persist.
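Oracle’s internal recursive transactions are not something we code ourselves, but an autonomous transaction shows the same commit independence. The following sketch (the table and procedure names are made up) illustrates why work committed in a separate transaction survives our rollback:

create table t_demo ( x int );            -- hypothetical demo table

create or replace procedure log_it( p_x in int )
as
    pragma autonomous_transaction;        -- this procedure runs in its own transaction
begin
    insert into t_demo (x) values ( p_x );
    commit;                               -- commits only the autonomous transaction's work
end;
/

begin
    log_it( 1 );                          -- the row is committed here, independently of us
    rollback;                             -- rolls back only our own (empty) transaction
end;
/

select * from t_demo;                     -- the row inserted by log_it is still there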
You might have noticed that the database names the partition for us; SYS_P1625 is the name of the newest partition. The names are neither sortable nor very meaningful in the sense most people would be used to.
They show the order in which the partitions were added to the table (although you cannot rely on that always being true; it is subject to change) but not much else. Normally, in a range partitioned table, the DBA would have named the partition using some naming scheme and in most cases would have made the partition names sortable.
For example, the February data would be in a partition named PART_2020_02 (using a format of PART_yyyy_mm), March would be in PART_2020_03, and so on. With interval partitioning, you have no control over the partition names as they are created, but you can easily rename them afterward if you like.
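A one-off rename of the June 2020 partition, SYS_P1624, from our earlier output would simply be:

alter table audit_trail rename partition SYS_P1624 to part_2020_06;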
More generally, we could query out the HIGH_VALUE string and, using dynamic SQL, convert it into nicely formatted, meaningful names.
We can do this because we understand how we’d like the names formatted; the database does not. For example:
SQL> declare
         l_str varchar2(4000);
     begin
         for x in ( select a.partition_name, a.tablespace_name, a.high_value
                      from user_tab_partitions a
                     where a.table_name = 'AUDIT_TRAIL'
                       and a.interval = 'YES'
                       and a.partition_name like 'SYS\_P%' escape '\' )
         loop
             execute immediate
               'select to_char( ' || x.high_value ||
               '-numtodsinterval(1,''second''), ''"PART_"yyyy_mm'' ) from dual'
               into l_str;
             execute immediate
               'alter table audit_trail rename partition "' ||
               x.partition_name || '" to "' || l_str || '"';
         end loop;
     end;
/
PL/SQL procedure successfully completed.
So, what we’ve done is take the HIGH_VALUE and subtract one second from it. We know that the HIGH_VALUE represents the strictly-less-than boundary of the partition, so a value one second before it falls inside the partition’s range. Once we had that, we applied the format "PART_"yyyy_mm to the resulting TIMESTAMP and got a string such as PART_2020_03 for March 2020. We then used that string in a rename command, and now our data dictionary looks like this:
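For instance, if the March 2020 partition’s HIGH_VALUE evaluates to midnight on April 1, 2020 (shown here as a simple TIMESTAMP literal; the exact HIGH_VALUE text depends on the partition key’s datatype), the statement the block builds boils down to:

select to_char( TIMESTAMP '2020-04-01 00:00:00'
                  - numtodsinterval(1,'second'),
                '"PART_"yyyy_mm' )
  from dual;
-- one second before the boundary falls inside March, so this returns PART_2020_03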
SQL> select a.partition_name, a.tablespace_name, a.high_value,
            decode( a.interval, 'YES', b.interval ) interval
       from user_tab_partitions a, user_part_tables b
      where a.table_name = 'AUDIT_TRAIL'
        and a.table_name = b.table_name
      order by a.partition_position;
We would just run that script every now and then to rename any newly added partitions and keep the nice naming convention in place. Bear in mind that, to avoid any SQL injection issues (we are using string concatenation, not bind variables; bind variables cannot be used in DDL), we would want to keep this script as an anonymous block or, if we decide to make it a stored procedure, as an invoker’s rights routine. That prevents others from running SQL in our schema as if they were us, which could be a disaster.
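If we did make it a stored procedure, a minimal sketch would be the same block wrapped in a routine declared AUTHID CURRENT_USER (the procedure name here is made up):

create or replace procedure rename_interval_partitions
    authid current_user   -- invoker's rights: runs with the caller's privileges and schema
as
    l_str varchar2(4000);
begin
    for x in ( select a.partition_name, a.high_value
                 from user_tab_partitions a
                where a.table_name = 'AUDIT_TRAIL'
                  and a.interval = 'YES'
                  and a.partition_name like 'SYS\_P%' escape '\' )
    loop
        execute immediate
          'select to_char( ' || x.high_value ||
          '-numtodsinterval(1,''second''), ''"PART_"yyyy_mm'' ) from dual'
          into l_str;
        execute immediate
          'alter table audit_trail rename partition "' ||
          x.partition_name || '" to "' || l_str || '"';
    end loop;
end;
/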