Tuesday, December 03, 2013

Oracle 12cR1, UDF Pragma and HyperLogLog

One interesting enhancement in 12cR1 PL/SQL is the UDF pragma, which has the following description in the documentation:
The UDF pragma tells the compiler that the PL/SQL unit is a user defined function that is used primarily in SQL statements, which might improve its performance.
I thought it would be very cool to try it out with the HyperLogLog function from my recent post and see if it results in any measurable performance improvement.

Test Table

I'll use the same test table as I did in my original post:
SQL> create table z_hll_test as
  2   select dbms_random.string('x', 4)||rpad('x', 500, 'x') n
  3    from dual
  4    connect by level <= 1000000;
 
Table created

SQL> alter table z_hll_test cache;
 
Table altered
Note that I'm explicitly setting the table to cache in order to make in-memory PQ kick in and eliminate disk I/O as a factor in my test case. For each test I made sure that no physical I/O happened (including temp I/O for the native distinct test).
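One way to verify that (a sketch of the kind of check I mean, not necessarily what I ran; the sql_text filter is just an assumption, adjust to taste) is to look at the cursor statistics after each run and confirm that DISK_READS stays at zero:
SQL> select sql_id, child_number, disk_reads, buffer_gets
  2    from v$sql
  3    where sql_text like '%z_hll_test%';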

Regular Function
create or replace function num_zeroes(
 p_n binary_integer
 ) return binary_integer deterministic is
 l_t binary_integer;
 l_b binary_integer;
begin
 --assume 32-bit hash value, 10-bits for bucket
 if (p_n = 0) then return 22; end if;

 l_t := 1;
 l_b := 0;

 while ( bitand(p_n,l_t) = 0 )
 loop
  l_t := l_t*2;
  l_b := l_b+1;
 end loop;

  return l_b;

end num_zeroes;

SQL> select
  2    case
  3      when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
  4      when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
  5      else round(hll)
  6    end num_distinct
  7    from (
  8      select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
  9        from (
 10          select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
 11            from (
 12              select /*+ parallel(z 4) */ mod(ora_hash(n), 1024) bucket,
 13                  max(num_zeroes(trunc(ora_hash(n)/1024)))+1 val
 14                from z_hll_test z
 15                group by mod(ora_hash(n), 1024)
 16            )
 17        )
 18  );
 
NUM_DISTINCT
------------
      748175
 
Executed in 0.889 seconds
Pragma UDF Function
create or replace function num_zeroes(
 p_n binary_integer
 ) return binary_integer deterministic is
 pragma udf;
 l_t binary_integer;
 l_b binary_integer;
begin
 --assume 32-bit hash value, 10-bits for bucket
 if (p_n = 0) then return 22; end if;

 l_t := 1;
 l_b := 0;

 while ( bitand(p_n,l_t) = 0 )
 loop
  l_t := l_t*2;
  l_b := l_b+1;
 end loop;

  return l_b;

end num_zeroes;

SQL> select
  2    case
  3      when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
  4      when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
  5      else round(hll)
  6    end num_distinct
  7    from (
  8      select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
  9        from (
 10          select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
 11            from (
 12              select /*+ parallel(z 4) */ mod(ora_hash(n), 1024) bucket,
 13                  max(num_zeroes(trunc(ora_hash(n)/1024)))+1 val
 14                from z_hll_test z
 15                group by mod(ora_hash(n), 1024)
 16            )
 17        )
 18  );
 
NUM_DISTINCT
------------
      748175
 
Executed in 0.593 seconds
Pragma UDF gives us a ~33% performance boost (0.889s down to 0.593s), which is not bad at all considering we didn't have to change anything else. However, that's not the most interesting part -- let's take a look at the native distinct next.

Native Distinct
SQL> select /*+ parallel(z 4) */ count(distinct n) from z_hll_test z;
 
COUNT(DISTINCTN)
----------------
          753204
 
Executed in 0.983 seconds

Note that this was an optimal execution!

Summary

Let's summarize results in a table:
Regular Function    Pragma UDF Function    Native Distinct
----------------    -------------------    ---------------
0.889s              0.593s                 0.983s

As you can see, pragma UDF actually beats the native implementation by a considerable margin, which is very impressive given that the native distinct had an optimal execution.

Tuesday, November 26, 2013

Result Cache Latch and PDBs

One interesting aspect of Oracle 12cR1 database when it comes to PDBs is how latching is done. For example, if all PDBs have to work under the same latch then contention in one PDB can easily affect users in other PDBs too.

Continuing my series of posts on the Result Cache latch today I'll check what happens when you try to acquire RC latch from two different PDBs.

Session 1

The first session is connected under container name TEST:
SQL> select sys_context('userenv', 'con_name') con_name from dual;

CON_NAME
--------------------
TEST

SQL> select addr from v$latch where name='Result Cache: RC Latch';

ADDR
----------------
0000000060041C78
Session 2

The second session is connected under container name TEST2:
SQL> select sys_context('userenv', 'con_name') con_name from dual;

CON_NAME
--------------------
TEST2

SQL> select addr from v$latch where name='Result Cache: RC Latch';

ADDR
----------------
0000000060041C78
As you can see the address of the latch is exactly the same under both containers so both PDBs appear to share exactly the same latch. Let's confirm it by trying to acquire the latch in exclusive mode in both sessions:

Session 1
SQL> oradebug setmypid
Statement processed.
SQL> oradebug call kslgetsl_w 0x0000000060041C78 1 1 1 16
Function returned 1
Session 2
SQL> oradebug setmypid
Statement processed.
SQL> oradebug call kslgetsl_w 0x0000000060041C78 1 1 1 16
...session hangs...
...and we can confirm that it waits on the RC latch:
SQL> select event, p1, to_char(p1, 'fmxxxxxxxxxxx') addr, state, seconds_in_wait
  2   from v$session_wait
  3   where sid=18;
 
EVENT              P1 ADDR         STATE               SECONDS_IN_WAIT
---------- ---------- ------------ ------------------- ---------------
latch free 1610882168 60041c78     WAITING                          25
The bottom line is that the single RC latch appears to be shared by all PDBs.
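One housekeeping note: the latch acquired above via oradebug stays held, so session 2 will wait until it is explicitly released. To my knowledge (this is an assumption -- treat it as such, and only on a disposable test instance) the matching call to free it is kslfre:
SQL> oradebug call kslfre 0x0000000060041C78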

Monday, November 18, 2013

How to use HyperLogLog to incrementally maintain number of distinct values

In this post I'll show how extremely easy it is to maintain the number of distinct values when using HyperLogLog. Please refer to my previous post for a description of how HyperLogLog works.

Let's assume we have a table with some existing data:
SQL> create table existing_data as
  2   select round(dbms_random.value(0, 77777)) n
  3    from dual
  4    connect by level <= 100000;
 
Table created
Precise number of distinct values:
SQL> select count(distinct n) from existing_data;
 
COUNT(DISTINCTN)
----------------
           56192
Now in order for the incremental refresh to work we first need to create a HyperLogLog synopsis:
SQL> create table existing_hll as
  2   select mod(ora_hash(n), 1024) bucket,
  3     max(num_zeroes(trunc(ora_hash(n)/1024)))+1 val
  4    from existing_data
  5    group by mod(ora_hash(n), 1024);
 
Table created
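Just to underline how small the footprint is (a quick check; with ~56k distinct values hashed into 1024 buckets we expect every bucket to be populated):
SQL> select count(*) buckets from existing_hll;
 
   BUCKETS
----------
      1024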
The table is extremely small -- it contains at most 1024 rows -- yet that is enough to describe a data set with billions of rows. Of course we can now use the synopsis to estimate the number of distinct values using HyperLogLog:
SQL> select
  2    case
  3     when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
  4     when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
  5     else round(hll)
  6   end num_distinct
  7   from (
  8    select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
  9     from (
 10      select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
 11       from existing_hll
 12       )
 13   );
 
NUM_DISTINCT
------------
       57676
Again, the result is within about 3% of the precise distinct count, which is pretty good considering how little information we had to store about the entire data set.

Now let's say there is some new data you want to add into the existing table:
SQL> create table new_data as
  2   select round(dbms_random.value(5555, 111111)) n
  3    from dual
  4    connect by level <= 10000;
 
Table created
So what we're going to do is calculate a HyperLogLog synopsis for this new data and then merge it with the HyperLogLog synopsis for the existing data:
SQL> merge into existing_hll e
  2   using (
  3    select mod(ora_hash(n), 1024) bucket,
  4      max(num_zeroes(trunc(ora_hash(n)/1024)))+1 val
  5     from new_data
  6     group by mod(ora_hash(n), 1024)
  7   ) n on (e.bucket=n.bucket)
  8   when matched then update set e.val=greatest(e.val, n.val)
  9   when not matched then insert values (n.bucket, n.val);

1024 rows merged.

SQL> commit;

Commit complete.
Of course the above is a lot more efficient than what you would have to do otherwise, i.e. calculate the number of distinct values from scratch for the entire data set. Once the synopsis has been refreshed we can estimate the new number of distinct values we have:
SQL> select
  2    case
  3     when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
  4     when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
  5     else round(hll)
  6   end num_distinct
  7   from (
  8    select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
  9     from (
 10      select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
 11       from existing_hll
 12       )
 13   );
 
NUM_DISTINCT
------------
       62288
And it's no different from what we get by computing the HyperLogLog synopsis for both data sets from scratch:
SQL> select
  2    case
  3     when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
  4     when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
  5     else round(hll)
  6   end num_distinct
  7   from (
  8    select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
  9     from (
 10      select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
 11       from (
 12        select mod(ora_hash(n), 1024) bucket,
 13          max(num_zeroes(trunc(ora_hash(n)/1024)))+1 val
 14         from (
 15          select n from existing_data
 16          union all
 17          select n from new_data
 18         ) group by mod(ora_hash(n), 1024)
 19       )
 20       )
 21   );
 
NUM_DISTINCT
------------
       62288
And the precise distinct count:
SQL> select count(distinct n) from
  2   (
  3   select * from existing_data
  4   union all
  5   select * from new_data
  6   );
 
COUNT(DISTINCTN)
----------------
           60983
I think the ability to incrementally refresh the data is the most powerful aspect of HyperLogLog.

Sunday, November 17, 2013

HyperLogLog in Oracle

Calculating the number of unique values with Oracle's distinct on big data sets has two major problems:
  • It may require lots of memory for sort/hash group by.
  • It is very difficult to refresh the distinct counts incrementally, meaning every time you append new data you generally have to perform the distinct calculation from scratch.
Oracle introduced the approximate NDV algorithm in 11g to address these problems but its practical application is pretty much limited to stats gathering at the moment.
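For completeness, that stats-gathering application looks roughly like this (a sketch only; the table name t is a placeholder, and I believe the APPROXIMATE_NDV preference already defaults to TRUE on recent versions):
SQL> exec dbms_stats.set_global_prefs('APPROXIMATE_NDV', 'TRUE');
SQL> exec dbms_stats.gather_table_stats(user, 't', estimate_percent => dbms_stats.auto_sample_size);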

Places like Facebook and Google solve the above problems by using a very interesting algorithm called HyperLogLog. There is a pretty good description of how it works but, in a nutshell, it relies on counting the number of trailing (or leading) zeroes in the binary stream and using that information to estimate the number of unique values with a high degree of accuracy.

HyperLogLog is widely used due to the following properties:
  • Extremely low memory footprint -- 1KB of memory is enough to estimate the number of unique values with a very high degree of precision across billions of rows.
  • It can be used to refresh the numbers incrementally by calculating HyperLogLog for the new data set and then combining it with the HyperLogLog for the existing one.
  • It is very parallel friendly. Each piece of data can be independently computed before being combined for the final result -- essentially utilizing the same mechanism as would be used for incremental refresh.
After reading a couple of HyperLogLog papers I became fascinated by it. The elegance of the math behind it is simply brilliant. So I became naturally curious to try it out in Oracle.

Of course the simplest way was to implement it as a user-defined aggregate, which I did. Unfortunately the performance of that solution left a lot to be desired due to the simple fact that there is quite a bit of overhead in executing a user-defined aggregate function for each row in the data set. Based on that I don't think there are many practical applications for HyperLogLog in this form apart from a technology demo.
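For readers who haven't built one before, here is a minimal sketch of the ODCIAggregate plumbing such a user-defined aggregate sits on (hypothetical names, not the code I used; the bodies below only count rows to keep the sketch short -- a real version would keep the 1024 bucket maxima in the object attributes and apply the estimator in Terminate):
create or replace type hll_agg_impl as object (
  cnt number,
  static function ODCIAggregateInitialize(sctx in out hll_agg_impl) return number,
  member function ODCIAggregateIterate(self in out hll_agg_impl, value in number) return number,
  member function ODCIAggregateTerminate(self in hll_agg_impl, returnValue out number, flags in number) return number,
  member function ODCIAggregateMerge(self in out hll_agg_impl, ctx2 in hll_agg_impl) return number
);
/
create or replace type body hll_agg_impl is
  static function ODCIAggregateInitialize(sctx in out hll_agg_impl) return number is
  begin
    sctx := hll_agg_impl(0);            -- real version: initialize 1024 buckets to zero
    return ODCIConst.Success;
  end;
  member function ODCIAggregateIterate(self in out hll_agg_impl, value in number) return number is
  begin
    self.cnt := self.cnt + 1;           -- real version: hash the value, update the max run length for its bucket
    return ODCIConst.Success;
  end;
  member function ODCIAggregateTerminate(self in hll_agg_impl, returnValue out number, flags in number) return number is
  begin
    returnValue := self.cnt;            -- real version: apply the HyperLogLog estimator to the buckets
    return ODCIConst.Success;
  end;
  member function ODCIAggregateMerge(self in out hll_agg_impl, ctx2 in hll_agg_impl) return number is
  begin
    self.cnt := self.cnt + ctx2.cnt;    -- real version: bucket-wise GREATEST() of the two synopses
    return ODCIConst.Success;
  end;
end;
/
create or replace function hll_ndv(x in number) return number
  parallel_enable aggregate using hll_agg_impl;
/
Either way, the per-row call overhead mentioned above remains, which is why the rest of this post computes the buckets directly in SQL instead.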

However, it's quite simple to compute most of the HyperLogLog directly inside the SQL statement itself and, as expected, that provides a much better performance. So here is a quick example which demonstrates HyperLogLog in action.

First I will create a test table:
SQL> create table z_hll_test as
  2   select dbms_random.string('x', 4)||rpad('x', 500, 'x') n
  3    from dual
  4    connect by level <= 1000000;
 
Table created
Next I will set pga_aggregate_target=64m to simulate a situation where distinct has to do a lot of spilling to disk due to the amount of data being processed, since I don't want to spend a lot of time generating hundreds of billions of rows instead.

First let's run Oracle's distinct:
SQL> set timing on
SQL> select /*+ parallel(z 4) */ count(distinct n) from z_hll_test z;
COUNT(DISTINCTN)
----------------
          753521

Elapsed: 00:00:46.02
As you might guess, most of the 46 seconds were spent spilling to temp:



Let's see what happens if we use HyperLogLog instead. First I will create a function to calculate the number of trailing zeroes we have in a number:
create or replace function num_zeroes(
 p_n binary_integer
 ) return binary_integer deterministic is
 l_t binary_integer;
 l_b binary_integer;
begin
 --assume 32-bit hash value, 10-bits for bucket
 if (p_n = 0) then return 22; end if;

 l_t := 1;
 l_b := 0;

 while ( bitand(p_n,l_t) = 0 )
 loop
  l_t := l_t*2;
  l_b := l_b+1;
 end loop;

  return l_b;

end num_zeroes;
Now we can use this function to compute the HyperLogLog value directly in a SQL statement:
SQL> select
  2    case
  3      when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
  4      when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
  5      else round(hll)
  6    end num_distinct
  7    from (
  8      select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
  9        from (
 10          select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
 11            from (
 12              select /*+ parallel(z 4) */ mod(ora_hash(n), 1024) bucket,
 13                  max(num_zeroes(trunc(ora_hash(n)/1024)))+1 val
 14                from z_hll_test z
 15                group by mod(ora_hash(n), 1024)
 16            )
 17        )
 18  );

NUM_DISTINCT
------------
      738105

Elapsed: 00:00:02.08
The SQL might look a bit complicated but it's pretty straightforward once you realize how HyperLogLog works (see the link I provided above). As you can see, the performance is 22x better because the entire thing could be computed in memory (there are only 1024 groups left after the group by as far as Oracle is concerned), and the result is within about 2% of the precise distinct count -- very impressive!
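For reference, the constants in the query map directly onto the standard HyperLogLog estimator from the Flajolet et al. paper:

    E = alpha_m * m^2 / sum_{j=1..m} 2^(-M_j),    with m = 1024, m^2 = 1048576 and alpha_1024 ~ 0.72054

where M_j is the maximum run length observed in bucket j (the val column). Empty buckets have M_j = 0 and contribute 2^0 = 1 each, which is why the query adds zeroes to current_sum. The two CASE branches are the paper's small-range correction (linear counting, used when the estimate is below 2.5*m = 2560 and there are empty buckets) and the large-range correction for estimates approaching 2^32.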

The algorithm could have even more potential in Exadata environments, if we imagine for a second that Oracle gave us a native off-loaded implementation, because each cell could independently compute its part of the data and then simply send the results out for the final merge. Indeed, this is how places like Facebook and Google scale it across lots of small servers.

Tuesday, August 27, 2013

Flashback query FTS costs

There has been some information written on the subject already (see this post by Randolf Geist).

In a nutshell, the way the optimizer costs a full table scan under a flashback query makes it look much more expensive than it does without one. What further complicates the problem is that index access costs remain the same regardless of whether you're using a flashback query or not. As a result you will be much more likely to see index access paths when using flashback query.

Consider the following example:
SQL> create table test as
  2   select level n, rpad('x', 200, 'x') v
  3    from dual
  4    connect by level <= 10000;
 
Table created
 
SQL> alter table test add constraint pk_test primary key (n);
 
Table altered
 
SQL> exec dbms_stats.gather_table_stats(user, 'test');
 
PL/SQL procedure successfully completed
If I execute the following select, it runs using a HASH JOIN:
SQL> with v as
  2  (select /*+ cardinality(100) */ level n from dual connect by level <= 100)
  3  select *
  4          from v, test t
  5          where v.n=t.n;

Execution Plan
----------------------------------------------------------
Plan hash value: 690578125

---------------------------------------------------------------------------------------
| Id  | Operation                      | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |      |   100 | 21800 |    60   (2)| 00:00:01 |
|*  1 |  HASH JOIN                     |      |   100 | 21800 |    60   (2)| 00:00:01 |
|   2 |   VIEW                         |      |   100 |  1300 |     2   (0)| 00:00:01 |
|*  3 |    CONNECT BY WITHOUT FILTERING|      |       |       |            |          |
|   4 |     FAST DUAL                  |      |     1 |       |     2   (0)| 00:00:01 |
|   5 |   TABLE ACCESS FULL            | TEST | 10000 |  2001K|    57   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("V"."N"="T"."N")
   3 - filter(LEVEL<=100)
In the particular case I'm talking about, the flashback query was hidden behind a view (to make it transparent to the application), so we essentially had the following:
SQL> create or replace view v_test as
  2  select * from test as of scn dbms_flashback.get_system_change_number;
 
View created
(dbms_flashback.get_system_change_number is only used here as a substitute example)

Now let's see what happens if we run the same query as above but join into this view instead:
SQL> with v as
  2  (select /*+ cardinality(100) */ level n from dual connect by level <= 100)
  3  select *
  4          from v, v_test t
  5          where v.n=t.n;

Execution Plan
----------------------------------------------------------
Plan hash value: 3196053776

-------------------------------------------------------------------------------------------
| Id  | Operation                       | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |         |   100 | 21800 |   102   (0)| 00:00:02 |
|   1 |  NESTED LOOPS                   |         |       |       |            |          |
|   2 |   NESTED LOOPS                  |         |   100 | 21800 |   102   (0)| 00:00:02 |
|   3 |    VIEW                         |         |   100 |  1300 |     2   (0)| 00:00:01 |
|*  4 |     CONNECT BY WITHOUT FILTERING|         |       |       |            |          |
|   5 |      FAST DUAL                  |         |     1 |       |     2   (0)| 00:00:01 |
|*  6 |    INDEX UNIQUE SCAN            | PK_TEST |     1 |       |     0   (0)| 00:00:01 |
|   7 |   TABLE ACCESS BY INDEX ROWID   | TEST    |     1 |   205 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter(LEVEL<=100)
   6 - access("V"."N"="N")
The plan suddenly changed to an NL join! What happened is that the flashback query cranked the FTS cost up while leaving the index access cost the same, thus making an FTS a much less appealing choice for the optimizer. This can make a lot of difference, especially if you're running in an Exadata environment. So what do you do?

One way to deal with the problem is to see how much more expensive the table scan became when using flashback query:
SQL> select * from test
  2  union all
  3  select * from v_test;

Execution Plan
----------------------------------------------------------
Plan hash value: 2275963031

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 20000 |  4003K|   367  (85)| 00:00:05 |
|   1 |  UNION-ALL         |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| TEST | 10000 |  2001K|    57   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| TEST | 10000 |  2001K|   310   (0)| 00:00:04 |
---------------------------------------------------------------------------
The cost went up from 57 to 310. That's how much more expensive the view is going to make an FTS look to the optimizer. So now we can counter-balance that increase with a corresponding increase in the cost of index scans using an opt_param hint in our view:
SQL> select round((310/57)*100) x from dual;
 
         X
----------
       544
 
SQL> create or replace view v_test as
  2  select /*+ opt_param('optimizer_index_cost_adj',544) */ *
  3   from test as of scn dbms_flashback.get_system_change_number;
 
View created
The plan now goes back to a HASH JOIN:
SQL> with v as
  2  (select /*+ cardinality(100) */ level n from dual connect by level <= 100)
  3  select *
  4          from v, v_test t
  5          where v.n=t.n;

Execution Plan
----------------------------------------------------------
Plan hash value: 690578125

---------------------------------------------------------------------------------------
| Id  | Operation                      | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |      |   100 | 21800 |   313   (1)| 00:00:04 |
|*  1 |  HASH JOIN                     |      |   100 | 21800 |   313   (1)| 00:00:04 |
|   2 |   VIEW                         |      |   100 |  1300 |     2   (0)| 00:00:01 |
|*  3 |    CONNECT BY WITHOUT FILTERING|      |       |       |            |          |
|   4 |     FAST DUAL                  |      |     1 |       |     2   (0)| 00:00:01 |
|   5 |   TABLE ACCESS FULL            | TEST | 10000 |  2001K|   310   (0)| 00:00:04 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("V"."N"="N")
   3 - filter(LEVEL<=100)
Of course there are a lot of limitations to this approach, mainly that any query which references the view will get the index access cost adjusted accordingly, so if you have a mix of flashback and non-flashback tables in the same query it deserves careful thought.

Thursday, July 25, 2013

Enkitec E4 2013

Just a quick note that I'll be presenting at this year's Enkitec E4 conference. You can find the schedule here. I did some under the hood investigation regarding how the whole DBFS stack works from the performance perspective and, needless to say, some findings simply left me startled. If you want to see a session which will forever change the way you look at DBFS then this will definitely be the one to attend.

Tuesday, July 23, 2013

Oracle GoldenGate Integrated Capture #2

I have already written some words on the subject before. However, since then some interesting things have happened.

To recap (or save you some time if you don't want to read the original article) GoldenGate Integrated Capture is nothing else but Oracle Streams Capture in disguise. When running in the Integrated Capture mode pretty much the entire GoldenGate front-end gets yanked out of the picture and replaced with Oracle Streams Capture technology instead.

Let's take a closer look at some of the differences between then and now. Here are two capture processes:
SQL> select capture_name, rule_set_name, purpose from dba_capture order by 1;
 
CAPTURE_NAME                   RULE_SET_NAME                  PURPOSE
------------------------------ ------------------------------ -------------------
OGG$CAP_GG1_EXT1               OGG$GG1_EXT1_CAPTURE_I         GoldenGate Capture
OGG$CAP_GG1_EXT2                                              GoldenGate Capture
OGG$CAP_GG1_EXT1 was created using GoldenGate 11.2.1.0.3 while OGG$CAP_GG1_EXT2 was created using 11.2.1.0.6BP3. Notice how RULE_SET_NAME is empty for the second one? Without the rule set the Capture process will just capture everything that's happening in the database, so my first concern was whether this was going to negatively affect performance.

Indeed, let's take a look at the stats for both processes.
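The stats below come from v$streams_capture; a query along these lines (shown here just for reference) returns the columns in question:
SQL> select c.sid, c.capture_name, c.startup_time,
  2     c.total_prefilter_discarded, c.total_messages_enqueued
  3    from v$streams_capture c
  4    order by c.capture_name;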
 
       SID CAPTURE_NAME     STARTUP_TIME          TOTAL_PREFILTER_DISCARDED TOTAL_MESSAGES_ENQUEUED
---------- ---------------- --------------------- ------------------------- -----------------------
        33 OGG$CAP_GG1_EXT1 23/07/2013 2:21:56 PM                        47                      71
       147 OGG$CAP_GG1_EXT2 23/07/2013 2:21:56 PM                        28                    2589
Notice the big difference in TOTAL_MESSAGES_ENQUEUED. Both processes were started at exactly the same time and are capturing from exactly the same objects. The difference makes sense -- without a rule set in place the second process enqueues every change it sees. By itself, this change would be pretty bad, but something else has happened as well:
SQL> select capture_name, v.program
  2   from v$streams_capture c, v$session v
  3   where c.SID = v.SID;
 
CAPTURE_NAME                   PROGRAM
------------------------------ ------------------------------------------------
OGG$CAP_GG1_EXT1               oracle@gg1.quadro.com (CP01)
OGG$CAP_GG1_EXT2               extract@gg1.quadro.com (TNS V1-V3)
Notice how OGG$CAP_GG1_EXT1 is an Oracle shadow process (a Streams Capture process) while OGG$CAP_GG1_EXT2 is the Extract process itself! What we had before is the Extract process acting as an XStreams client to the Streams Capture process. With this change it is no longer required and we can see that the XStreams client is now detached from the server:
SQL> select server_name, capture_name, status from dba_xstream_outbound order by 1;
 
SERVER_NAME                    CAPTURE_NAME                   STATUS
------------------------------ ------------------------------ --------
OGG$GG1_EXT1                   OGG$CAP_GG1_EXT1               ATTACHED
OGG$GG1_EXT2                   OGG$CAP_GG1_EXT2               DETACHED
That clears up the concern about the empty rule set -- the Extract can now filter the records internally because nothing needs to be shipped out anyway, whereas before it would have resulted in massive traffic between the Capture process, the buffered queue and the XStreams client.

Wednesday, March 13, 2013

Parallel unfriendly

Take a look at the following Parallel section of a SQL Monitor report:

Any query which produces such a report won't care how much parallelism you're running, because virtually all the time is spent in the query coordinator (which is a serial process).

In this case the query in question is quite simple:
select /*+ parallel(t,8) */ median(basket_amount) from whs.fact_sale t
The reason it behaves the way it does has everything to do with how Oracle executes it:
Execution Plan
----------------------------------------------------------
Plan hash value: 712547042

-----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |            |     1 |     4 |   110K  (3)| 00:00:03 |       |       |        |      |            |
|   1 |  SORT GROUP BY                |            |     1 |     4 |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR              |            |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)        | :TQ10000   |   724M|  2763M|   110K  (3)| 00:00:03 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR         |            |   724M|  2763M|   110K  (3)| 00:00:03 |     1 |1048575|  Q1,00 | PCWC |            |
|   5 |      TABLE ACCESS STORAGE FULL| FACT_SALE  |   724M|  2763M|   110K  (3)| 00:00:03 |     1 |1048575|  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------------------
Each parallel query slave gets its own chunk of the table to read and then simply sends the data back to the coordinator. The coordinator then has to deal with all this data by sorting more than 700M rows which, of course, won't be particularly fast. In this sense median poses an interesting problem since Oracle can't calculate (or, rather, discover) it without access to the entire data set, and the query coordinator is the only process which has it.

So what do you do when you get impacted by a particular choice of algorithm implemented by Oracle? One way to deal with it is to see whether you can trade one set of problems for another, in case the alternative can be executed in a better way. In this particular case the fact table contains the sale transactions for a store chain. While there are many different ways to spend money, the number of distinct spending amounts should be relatively low compared to the number of rows in the table, and in such a case we can calculate the median in a different way.

What we can do instead is count how many occurrences of each spending amount we have; sorted by the spending amount, that gives us a compressed form of the raw data which still retains all the information required to find the median. Let's say you have a table with the following data:
SQL> select n from z_t;
 
         N
----------
         1
         1
         2
         3
         3
         5
         7
 
7 rows selected
The first step is to find how many occurrences of each value we have:
SQL> select n, count(*) cnt
  2   from z_t
  3   group by n;
 
         N        CNT
---------- ----------
         1          2
         2          1
         5          1
         3          2
         7          1
If the number of distinct values is relatively low, the group by will collapse the result set enough to make the subsequent work insignificant, and it will do so in a very parallel-friendly way. The next step is to calculate, for each distinct value, where it begins and ends in the ordered data set, as well as how many values there are in total:
SQL> select n, lag(running_sum, 1, 0) over (order by n) prev_running_sum, running_sum, total_row_count
  2  from (
  3  select n,
  4   sum(cnt) over (order by n) running_sum,
  5   sum(cnt) over () total_row_count
  6  from (
  7  select n, count(*) cnt
  8   from z_t
  9   group by n
 10  ));
 
         N PREV_RUNNING_SUM RUNNING_SUM TOTAL_ROW_COUNT
---------- ---------------- ----------- ---------------
         1                0           2               7
         2                2           3               7
         3                3           5               7
         5                5           6               7
         7                6           7               7
So what the above tells us is that we have two 1s, followed by a single 2, followed by two 3s and so on. Because we have seven elements in our data set, we know that the median will be the fourth item, which we can now easily find:
SQL> select avg(n) from (
  2  select n, lag(value_begin, 1, 0) over (order by n) prev_value_begin, value_begin, total_row_count
  3  from (
  4  select n,
  5   sum(cnt) over (order by n) value_begin,
  6   sum(cnt) over () total_row_count
  7  from (
  8  select n, count(*) cnt
  9   from z_t
 10   group by n
 11  ))) where total_row_count/2 between prev_value_begin and value_begin;
 
    AVG(N)
----------
         3
The avg is there for the case where we have an even number of elements, since the median then equals the mean of the two middle values.
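A quick sanity check of the even case (the extra row below is an ad-hoc value I'm adding just to make the row count even):
SQL> insert into z_t values (9);
 
1 row inserted
 
SQL> commit;
 
Commit complete
With eight values (1, 1, 2, 3, 3, 5, 7, 9) the half-way point total_row_count/2 = 4 falls inside the n=3 group (running sum 3 to 5), so the same query returns 3 -- the average of the 4th and 5th ordered values. Had the two middle values differed, the half-way point would have landed exactly on a group boundary, both groups would have satisfied the BETWEEN predicate, and avg() would have returned their mean.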

Our new real query will look like this:
select avg(n) from (
select n, lag(value_begin, 1, 0) over (order by n) prev_value_begin, value_begin, total_row_count
from (
select n,
 sum(cnt) over (order by n) value_begin,
 sum(cnt) over () total_row_count
from (
select /*+ parallel(t,8) */ basket_amount n, count(*) cnt
 from whs.fact_sale t
 group by basket_amount
))) where total_row_count/2 between prev_value_begin and value_begin;
So what are a group by and a set of analytic functions able to bring to the table? Let's take a look:

The total execution time has dropped from almost 26 minutes down to 28 seconds. Moreover, the workload is now much more skewed towards the parallel query slaves, which is exactly what we want to see. Of course, the trick only works if the group by is able to collapse the data sufficiently.

Tuesday, February 26, 2013

In-memory PQ and physical reads

In my previous post I've demonstrated how in-memory PQ can access the table directly from the buffer cache even when you're using manual DOP.

One interesting question, however, is what happens when a PQ slave needs to read some blocks from disk given that the object has qualified for in-memory (cached) access. Would the slave use direct or buffered I/O?

The answer becomes clear once you realize that in-memory PQ works by simply using buffered reads. Since direct path reads cannot take advantage of any blocks stored in the buffer cache (local or remote), doing direct path reads would defeat the whole point of in-memory PQ.

To demonstrate the point, here is an excerpt from the tkprof output for one of the parallel query slaves:
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us)
         0          0          0   PX COORDINATOR  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0    PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
         1          1          1     SORT AGGREGATE (cr=62838 pr=62500 pw=0 time=7492751 us)
     57696      62500      72120      PX BLOCK ITERATOR (cr=62838 pr=62500 pw=0 time=24528381 us cost=18846 size=0 card=500000)
     57696      62500      72120       TABLE ACCESS FULL Z_TEST (cr=62838 pr=62500 pw=0 time=23944184 us cost=18846 size=0 card=500000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PX Deq: Execution Msg                         128        0.19          1.06
  db file scattered read                       4546        0.16         43.99
  latch free                                      1        0.00          0.00
  latch: cache buffers lru chain                 31        0.01          0.03
  latch: cache buffers chains                     3        0.00          0.00
  latch: object queue header operation            4        0.00          0.00
Note that the slave waited on the db file scattered read event, which is nothing but buffered multiblock reads. If you were to run the same test on the Exadata platform you would see the cell multiblock physical read event instead, given that in-memory PQ got utilized. There are a couple of consequences of this:
  • There is no need to do an object-level checkpoint. This makes in-memory PQ somewhat more friendly to OLTP environments since it doesn't need to flush any dirty buffers to disk.
  • If you are running on the Exadata platform, none of the offloading will happen, even if you have to read a significant portion of the table from disk.

On another note, it looks like things have come full circle: serial sessions are now able to utilize direct path reads while parallel query slaves are able to do buffered I/O.

All tests were done on 11.2.0.3 (both Exadata and non-Exadata).

Monday, February 18, 2013

Does in-memory PQ work with PARALLEL_DEGREE_POLICY=MANUAL?

In-memory parallel execution seems to be gaining popularity, especially among people running x2-8 and x3-8 Exadata systems or any other system that has large amounts of memory capable of caching lots of data.

Oracle documentation suggests that in order to utilize in-memory PQ, parallel_degree_policy needs to be set to auto.

_parallel_cluster_cache_policy

One of the parameters influenced by parallel_degree_policy is _parallel_cluster_cache_policy. When using Auto DOP, _parallel_cluster_cache_policy is set to cached. The question then becomes: what happens if we set _parallel_cluster_cache_policy=cached while still keeping Manual DOP? Will the system use in-memory PQ?
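To check what the hidden parameter is currently set to (the usual x$ query, which has to be run as SYS; shown here just as a reference sketch):
SQL> select p.ksppinm name, v.ksppstvl value, v.ksppstdf is_default
  2    from x$ksppi p, x$ksppcv v
  3    where p.indx = v.indx
  4      and p.ksppinm = '_parallel_cluster_cache_policy';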

Test table

Below is a test table setup:
SQL> create table z_test tablespace data as
        select level n, rpad('*', 4000, '*') v
                from dual
                connect by level <= 500000;

Table created.

SQL> alter table z_test add constraint pk_z_test primary key (n);

Table altered.

SQL> select bytes/power(1024,2) mb
        from user_segments
        where segment_name='Z_TEST';

        MB
----------
      3968

SQL> exec dbms_stats.gather_table_stats(user, 'z_test');

PL/SQL procedure successfully completed.
The instance is running with a 12G buffer cache so it will have no problem fully caching the above table. All tests were done in my in-house test lab with Oracle 11.2.0.3.3 running inside a Linux VM.

Classic PQ #1

Without setting any additional parameters, PQ behaves the way it always did -- by using direct path reads straight into the process's memory:
SQL> set timing on
SQL> set autot trace exp stat
SQL> select /*+ parallel(8) full(z_test) */ count(*) from z_test;

Elapsed: 00:00:02.86

Execution Plan
----------------------------------------------------------
Plan hash value: 2128527892

--------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |     1 | 18846   (1)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|   6 |       TABLE ACCESS FULL| Z_TEST   |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 8 because of hint


Statistics
----------------------------------------------------------
         25  recursive calls
          0  db block gets
     500525  consistent gets
     500000  physical reads
          0  redo size
        526  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
It took about 2.86 seconds to full scan the table, which works out to roughly 1387MB/s of throughput (3968MB / 2.86s; my test lab storage setup is described here). The statistics above clearly show that we had to do physical reads to access the entire table.

Caching the table

Of course, before testing in-memory PQ, we need to make sure the entire table sits in the buffer cache. The easiest way to do that is to read the whole table through an index, which uses buffered reads:
SQL> select /*+ index(z_test,pk_z_test) */ v from z_test;

500000 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 579016438

-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |   500K|  1907M|   501K  (1)| 00:00:03 |
|   1 |  TABLE ACCESS BY INDEX ROWID| Z_TEST    |   500K|  1907M|   501K  (1)| 00:00:03 |
|   2 |   INDEX FULL SCAN           | PK_Z_TEST |   500K|       |  1052   (1)| 00:00:01 |
-----------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     534311  consistent gets
     501105  physical reads
          0  redo size
 2021185355  bytes sent via SQL*Net to client
     367187  bytes received via SQL*Net from client
      33335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     500000  rows processed
Let's check to make sure all table blocks are in the buffer cache:
SQL> set autot off
SQL> select count(*)
        from v$bh
        where objd= (select data_object_id from user_objects where object_name='Z_TEST')
                and status='xcur';

  COUNT(*)
----------
    500001
Now we're good to go!

Classic PQ #2

Even with the table entirely cached, classic PQ still reads it using physical (direct path) reads -- as it should:
SQL> set autot trace exp stat
SQL> select /*+ parallel(8) full(z_test) */ count(*) from z_test;

Elapsed: 00:00:02.83

Execution Plan
----------------------------------------------------------
Plan hash value: 2128527892

--------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |     1 | 18846   (1)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|   6 |       TABLE ACCESS FULL| Z_TEST   |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 8 because of hint


Statistics
----------------------------------------------------------
         24  recursive calls
          0  db block gets
     500525  consistent gets
     500000  physical reads
          0  redo size
        526  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
In-memory PQ

Let's flip the parameter responsible for in-memory PQ (while still keeping parallel_degree_policy=manual) and see what happens:
SQL> alter session set "_parallel_cluster_cache_policy"=cached;

Session altered.

Elapsed: 00:00:00.01
SQL> select /*+ parallel(8) full(z_test) */ count(*) from z_test;

Elapsed: 00:00:00.36

Execution Plan
----------------------------------------------------------
Plan hash value: 2128527892

--------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |     1 | 18846   (1)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|   6 |       TABLE ACCESS FULL| Z_TEST   |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 8 because of hint


Statistics
----------------------------------------------------------
         24  recursive calls
          0  db block gets
     502709  consistent gets
          0  physical reads
          0  redo size
        526  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
As you can see, the entire table got read from the buffer cache this time, and much faster! The fact that we did zero physical I/Os shows in-memory PQ kicking in.

Conclusion

It is possible to use in-memory PQ with Manual DOP by setting _parallel_cluster_cache_policy=cached. Of course, always consult with Oracle support before flipping any of the underscore parameters.

Monday, January 28, 2013

GoldenGate and transient PK updates

The problem of transient PK updates is well known and comes from the fact that pretty much every replication solution on the market applies changes using a row-by-row approach.

To quickly recap the problem, if you have a table like:
SQL> create table tpk (n number primary key);
 
Table created
 
SQL> insert into tpk values (1);
 
1 row inserted
 
SQL> insert into tpk values (2);
 
1 row inserted
 
SQL> commit;
 
Commit complete
Then executing the following statement...
update tpk set n=n+1
...will result in a transient PK problem since the replication solution will have to decompose it into the following two statements:
update tpk set n=2 where n=1;
update tpk set n=3 where n=2;
There are two immediate (and major) problems with the above statements. The first problem is that we can't execute the first statement without violating the primary key constraint. Another problem is that, even if we somehow could execute the first statement, the second statement would update both rows since they would then have the same value!

Oracle Streams historically dealt with that problem using an internal mechanism which you could leverage by executing a specially constructed LCR. Other (third-party) replication solutions were pretty much out of luck and had to rely on elaborate tricks to work around the problem.
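One such trick, sketched here for the two-row example above (the negative scratch values are an assumption -- it only works if that range is guaranteed to be unused), is to move the keys out of the way in a first pass and put them in place in a second pass:
update tpk set n=-2 where n=1;
update tpk set n=-3 where n=2;
update tpk set n=2 where n=-2;
update tpk set n=3 where n=-3;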

This is all about to change.

The push to better integrate GoldenGate and bring its feature set up to par has driven quite a bit of exciting innovation. One of these innovations solves the transient PK problem not only for GoldenGate but for everybody else.

dbms_xstream_gg package

The above package has been available at least since 11.2.0.2 and has two procedures which are directly relevant to the problem described above. I'm talking about enable_tdup_workspace and disable_tdup_workspace. Here is a quick demonstration of how they work:
SQL> --this will result in PK violation
SQL> update tpk set n=2 where n=1;
 
update tpk set n=2 where n=1
 
ORA-00001: unique constraint (ROOT.SYS_C005031) violated
 
SQL> exec dbms_xstream_gg.enable_tdup_workspace;
 
PL/SQL procedure successfully completed
 
SQL> --this now works!
SQL> update tpk set n=2 where n=1;
 
1 row updated
 
SQL> update tpk set n=3 where n=2;
 
1 row updated
 
SQL> exec dbms_xstream_gg.disable_tdup_workspace;
 
PL/SQL procedure successfully completed
 
SQL> commit;
 
Commit complete
As you can see, the procedure allows us to avoid the classical transient PK problem! Indeed, that's what GoldenGate uses internally to avoid getting into trouble as well. The implementation seems to leverage the same delete+insert trick Oracle Streams did:
SQL> exec dbms_xstream_gg.enable_tdup_workspace;
 
PL/SQL procedure successfully completed
 
SQL> select n, rowid from tpk;
 
         N ROWID
---------- ------------------
         1 AAAO25AAFAAClWFAAA
         2 AAAO25AAFAAClWFAAB
 
SQL> update tpk set n=2 where n=1;
 
1 row updated
 
SQL> select n, rowid from tpk;
 
         N ROWID
---------- ------------------
         2 AAAO25AAFAAClWFAAB
 
SQL> update tpk set n=3 where n=2;
 
1 row updated
 
SQL> select n, rowid from tpk;
 
         N ROWID
---------- ------------------
         2 AAAO25AAFAAClWFAAA
         3 AAAO25AAFAAClWFAAB
Note how the row mysteriously disappears after the first update and then suddenly comes back after the second one.

I think anybody who has been involved in any sort of replication and its problems will find this to be one of the most significant new features made available. The only caveat is that the above package is not documented, so anyone thinking about leveraging it needs to think carefully about the way it behaves.