Friday, September 1, 2017

DYNAMIC_SAMPLING SQL Plan Directive in the cloud or not

With 12c, Oracle introduced SQL Plan Directives, which in my view really give the optimizer a
chance to learn from its mistakes.  This is a step in the direction of a learning optimizer.  In a way the optimizer is taking notes on how something ran, and if it wasn't quite right, this note will help it not make the same mistake again.  In 12.1 there is just one type, DYNAMIC_SAMPLING, and in 12.2 another one appears, DYNAMIC_SAMPLING_RESULT.  Right now I'm going to focus on the first one.

So when it says that the directive is to do dynamic sampling, is that like doing a table-level dynamic sample?  That seems like overkill, since the directives have column names associated with them.  I suspected that they were column based, so I set out to prove this to be true or false.  The tool I used was the good old 10053 trace, a trace of a hard parse.

The test bears out that yes, it is doing the sampling based on the columns of the directive.  The rest of this post is a summary of my test.  If anyone would like to run this test on their own, let me know and I can send you the files to set up the test tables and the like.

I used the newer DBMS_SQLDIAG.DUMP_TRACE technique to get the 10053 trace.  This is very convenient, as I can run the query and then ask for a 10053 trace on a given SQL_ID and CHILD_NUMBER.
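For reference, the call looks something like this; 'Compiler' dumps the full compilation (10053-style) trace, and p_file_id is just a tag of my choosing that ends up in the trace file name:

```sql
-- Dump a 10053 trace for an already-parsed cursor, no re-execution needed.
begin
  dbms_sqldiag.dump_trace(
    p_sql_id       => 'gnshwskp49773',   -- the cursor of interest
    p_child_number => 2,
    p_component    => 'Compiler',        -- full optimizer trace
    p_file_id      => 'jcomp_opt_trace'  -- tag added to the trace file name
  );
end;
/
```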

Here is my test case SQL:
SQL> get jcomp_opt
  1  select /*+ qb_name(opt) */b.object_name, b.object_type, a.username
  2    from allusers_tab a, big_tab b
  3   where a.username = b.owner
  4     and b.object_type = 'PROCEDURE'
  5*    and a.username not in ('SYS','SYSTEM')

ALLUSERS_TAB is quite small with 48 rows and BIG_TAB has 2,422,880 rows.  The "not in" predicate on the BIG_TAB table gives the optimizer some math trouble and it overestimates the cardinality for BIG_TAB.  It thinks at first it's getting about 65,000 rows when in reality it gets just under 3,000.  Because of this mismatch, after a couple of runs I see this for the plan:

SQL> select * from table(dbms_xplan.display_cursor('gnshwskp49773',2, 'allstats last'));


SQL_ID  gnshwskp49773, child number 2
select /*+ qb_name(opt) */b.object_name, b.object_type, a.username
from allusers_tab a, big_tab b  where a.username = b.owner    and
b.object_type = 'PROCEDURE'    and a.username not in ('SYS','SYSTEM')

Plan hash value: 3435153054

| Id  | Operation                            | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
|   0 | SELECT STATEMENT                     |                 |      1 |        |   2784 |00:00:00.01 |    1773 |
|   1 |  NESTED LOOPS                        |                 |      1 |   1576 |   2784 |00:00:00.01 |    1773 |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| BIG_TAB         |      1 |   1610 |   2784 |00:00:00.01 |    1584 |
|*  3 |    INDEX RANGE SCAN                  | BIG_OBJTYPE_IDX |      1 |   3403 |   3424 |00:00:00.01 |     199 |
|*  4 |   INDEX UNIQUE SCAN                  | USERNAME_PK     |   2784 |      1 |   2784 |00:00:00.01 |     189 |

Predicate Information (identified by operation id):

   2 - filter(("B"."OWNER"<>'SYS' AND "B"."OWNER"<>'SYSTEM'))
   3 - access("B"."OBJECT_TYPE"='PROCEDURE')
   4 - access("A"."USERNAME"="B"."OWNER")
       filter(("A"."USERNAME"<>'SYS' AND "A"."USERNAME"<>'SYSTEM'))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 2 Sql Plan Directives used for this statement

Key to this test is the note about the use of the SQL Plan Directives.  The directives on the table are:
OBJECT_NAME  DIRECTIVE_ID         COLUMN_NAME  OBJECT_TYPE  TYPE
------------ -------------------- ------------ ------------ -----------------------
BIG_TAB      11891983782874668880              TABLE        DYNAMIC_SAMPLING_RESULT
BIG_TAB      14819284582793040278 OBJECT_TYPE  COLUMN       DYNAMIC_SAMPLING
BIG_TAB      14819284582793040278              TABLE        DYNAMIC_SAMPLING
BIG_TAB      8668036953221628977  OBJECT_TYPE  COLUMN       DYNAMIC_SAMPLING
BIG_TAB      8668036953221628977  OWNER        COLUMN       DYNAMIC_SAMPLING
BIG_TAB      8668036953221628977               TABLE        DYNAMIC_SAMPLING
BIG_TAB      8700850869231480407               TABLE        DYNAMIC_SAMPLING_RESULT
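For anyone who wants to see their own directives, a listing like the one above can be pulled from the directive dictionary views with something like this (my column formatting differs slightly; the column name for COLUMN-type entries is in SUBOBJECT_NAME):

```sql
select o.object_name, o.directive_id, o.subobject_name,
       o.object_type, d.type
  from dba_sql_plan_directives d
  join dba_sql_plan_dir_objects o
    on o.directive_id = d.directive_id
 where o.object_name = 'BIG_TAB'
 order by o.directive_id, o.object_type;
```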

There are really four directives, but only the DYNAMIC_SAMPLING ones are of interest for this test.  So what are these dynamic sampling directives really doing?  To find out I looked in the 10053 trace of this SQL_ID and child number, and I found three queries being run on the table: one for the first directive and two for the second:

Plan directive ID 8668036953221628977 has this one (I’ll refer to this one as 977 from now on, the last three digits of the directive ID):
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel  */ NVL(SUM(C1),0) FROM (SELECT /*+ qb_name("innerQuery") NO_INDEX_FFS( "B")  */ 1 AS C1 FROM "BIG_TAB" "B" WHERE ("B"."OBJECT_TYPE"='PROCEDURE') AND ("B"."OWNER"<>'SYS') AND ("B"."OWNER"<>'SYSTEM')) innerQuery (objid = 9445728533958271359)

Plan directive ID 14819284582793040278 has these two (I’ll refer to this one as 278):
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel  OPT_ESTIMATE(@"innerQuery", TABLE, "B", ROWS=1610) */ NVL(C1,0), NVL(C2,0), NVL(C3,0) FROM (SELECT /*+ qb_name("innerQuery") INDEX( "B" "BIG_OBJTYPE_IDX")  */ COUNT(*) AS C1, 4294967295 AS C2, COUNT(*) AS C3  FROM "BIG_TAB" "B" WHERE ("B"."OBJECT_TYPE"='PROCEDURE')) innerQuery (objid = 12743291823137504172)

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel  OPT_ESTIMATE(@"innerQuery", TABLE, "B", ROWS=1610) OPT_ESTIMATE(@"innerQuery", INDEX_SCAN, "B", "BIG_OBJTYPE_IDX", ROWS=3424) */ NVL(C1,0), NVL(C2,0), NVL(C3,0) FROM (SELECT /*+ qb_name("innerQuery") INDEX( "B" "BIG_OBJTYPE_IDX")  */ COUNT(*) AS C1, 4294967295 AS C2, COUNT(*) AS C3  FROM "BIG_TAB" "B" WHERE ("B"."OBJECT_TYPE"='PROCEDURE')) innerQuery (objid = 12743291823137504172)

When I ran the query in 977 (after taking out the last bit starting with innerQuery) it's really just counting up how many rows in BIG_TAB match the given predicates on the two columns.  In this case it's 2,784.  This is then used to recalculate the cardinality of BIG_TAB from 65,483 to 1,610 for the full table scan estimate.  This line appears in the 10053 trace after the select doing the count:

Single Tab Card adjusted from 65483.243243 to 1610.000000 due to adaptive dynamic sampling

For 278 it's really about using the index on the OBJECT_TYPE column.  Notice that these queries (which run after 977) both use the 1,610 number as a corrected cardinality via the OPT_ESTIMATE hint.  The second one also uses the value retrieved from the first one (3,424) as a corrected cardinality on the index, again with an OPT_ESTIMATE hint.  Clearly the selects associated with the SQL Plan Directives work together and build off each other.

Both the queries for 278 are doing counts on the table for just the OBJECT_TYPE predicate.  I'm not sure why it does the count twice in both queries as the C1 and C3 columns in the inner query, and the 4,294,967,295 literal for C2 is odd as well.  That number only appears in these queries; it's nowhere else in the trace.  It is the maximum value for an unsigned 32-bit integer, which is interesting, but I'm not sure what that has to do with anything.  For this test, the number both queries come back with is 3,424 for both C1 and C3.  This value is then used for the index cardinality.  This line appears in the 10053 trace after the two selects doing the counts are done:

Index Card adjusted from 65893.938102 to 3424.000000 due to adaptive dynamic sampling

Maybe the two counts for C1 and C3 can be different in other versions of these queries, but here they are both the exact same thing, COUNT(*), so they can't be different in this version of the query.  I may investigate this more in the future, but my testing this time is done.

The conclusion of the test is yes, the dynamic sample really is being done on the columns listed in the directive, and it samples using just the query's predicates, hence it should get very good counts on which to base the plan.  The bad news is that this sampling could take some time on really big tables with lots of directives, which is likely why this feature is turned off by default in 12.2.  Hopefully the great folks out in Redwood Shores will figure out a way to rein this in a bit and make it the great feature it appears to be.
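If you want the 12.1 behavior back in 12.2, the directive-driven dynamic sampling is bundled into the new OPTIMIZER_ADAPTIVE_STATISTICS parameter, which defaults to FALSE there:

```sql
-- 12.2: re-enable the adaptive statistics features, SQL Plan Directive
-- driven dynamic sampling among them (default is FALSE in 12.2).
alter system set optimizer_adaptive_statistics = true scope = both;
```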

Monday, August 28, 2017

Index monitoring in the cloud or not

A while back I wrote a post about getting a better idea of index usage using a query on v$sql_plan_statistics_all.  You can see that post here.   New in 12.2 is a much better way to monitor index usage.  While I believe it still falls a little short of the mark, it's hugely better than the YES/NO flag of days before.

One shortcoming is that it still counts collecting stats as a use of the index.  OK, I get it, the index was scanned for the collection of stats, but really that is not a use that the majority of us are interested in.  What we want to know is when the index was used to satisfy a query.  What this means is that realistically no index would ever have a zero for usage, since even unused indexes are going to have stats collected on them.

This is on by default in 12.2, which is good, and by default it uses some sort of sampling technique.  You can set it so that it catches all uses of an index, but that may well have a negative impact on performance in a high-use system.  Thanks to Franck Pachot for his post showing the parameter to do this; it can be set at the system or session level:

ALTER SESSION SET "_iut_stat_collection_type"=ALL;
ALTER SESSION SET "_iut_stat_collection_type"=SAMPLED;

OK, so how about a little test.  One note: the flush of the collected information only happens every 15 minutes.  So if you run this yourself you'll need to wait 15 minutes to see the result of the final query.

set echo on
set feedback on
ALTER SESSION SET "_iut_stat_collection_type"=ALL;
drop table emp99 purge;

create table emp99 as select * from emp;

create index emp99_ename on emp99 (ename);

exec dbms_stats.gather_table_stats(ownname=> 'OP', tabname => 'EMP99');


SELECT owner, name, total_access_count,
 total_exec_count, total_rows_returned, last_used
FROM   dba_index_usage
where name = 'EMP99_ENAME'
ORDER BY owner, name;

ALTER SESSION SET "_iut_stat_collection_type"=SAMPLED;

The output from the query (after waiting 15 minutes) was this:
SQL> SELECT owner, name, total_access_count,
  2   total_exec_count, total_rows_returned, last_used
  3  FROM   dba_index_usage
  4  where name = 'EMP99_ENAME'
  5  ORDER BY owner, name;

OWNER  NAME        TOTAL_ACCE TOTAL_EX TOTAL_ROWS_RE LAST_USED
------ ----------- ---------- -------- ------------- ---------
OP     EMP99_ENAME          1        1            14 28-AUG-17

1 row selected.

So pretty clearly it's counting the collection of stats as a usage.  There is also a set of columns in the view that gives you a histogram-like breakdown of the usage of the index.


The access buckets appear to mean just that: how many times a query was run that returned a number of rows falling in that bucket.  Interestingly, a stats collection counts as an access returning the number of rows used for the statistics.  For example, my EMP99 table has 14 rows in it, and that run showed up in the 11-to-100 bucket.  The rows-returned columns also mean what they say.  Notice there is no rows-returned bucket for the first two access buckets.  This is because those buckets return either no rows or one row, whereas the other buckets each cover a range of rows returned, so the view tracks how many rows were really returned per access in those buckets.  Pretty cool really.
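The bucket columns are right in DBA_INDEX_USAGE; something like this pulls back the first few, with column commands to shorten the headings:

```sql
column bucket_0_access_count       heading A_0      format 9999
column bucket_1_access_count       heading A_1      format 9999
column bucket_2_10_access_count    heading A_2_10   format 99999
column bucket_2_10_rows_returned   heading R_2_10   format 99999
column bucket_11_100_access_count  heading A_11_100 format 9999999
column bucket_11_100_rows_returned heading R_11_100 format 9999999
SELECT bucket_0_access_count, bucket_1_access_count,
       bucket_2_10_access_count, bucket_2_10_rows_returned,
       bucket_11_100_access_count, bucket_11_100_rows_returned
FROM   dba_index_usage
where  name = 'EMP99_ENAME';
```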

For example, here I've run a query that returned one row twice and no rows once, plus the stats collection, and I see this output for the buckets (not all of them, just the first couple; the rest were 0).  I used column commands to format the column headings.

  3  FROM   dba_index_usage
  4  where name = 'EMP99_ENAME';

 A_0  A_1 A_2_10 R_2_10 A_11_100 R_11_100
---- ---- ------ ------ -------- --------
   1    2      0      0        1       14

With this information it is much easier to know which indexes are in use and which ones are not.  And that makes it much easier to determine which indexes you need to keep and which ones deserve a serious look at whether you really need them.

Friday, August 18, 2017

UPDATE! Getting the SQL_ID in the cloud or not

A few years back I posted a block of code that takes as input a sql file with one sql statement in it and returns the SQL_ID and HASH_VALUE using DBMS_SQL_TRANSLATOR.  The post is here.  It turns out there was a pretty big flaw in that code: it assumed the only slash (/) would be the one at the end of the statement.  Ack!  Of course, if you use /*+ for a hint or /* for a comment, it would trim off the file at the first slash it found, which is clearly wrong.
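As a side note, the package functions themselves handle hints and comments just fine; the flaw was purely in how my script trimmed the file.  Something like this checks a statement inline:

```sql
-- DBMS_SQL_TRANSLATOR takes the statement text directly, hint and all.
select dbms_sql_translator.sql_id   ('select /*+ qb_name(opt) */ * from dual') sql_id,
       dbms_sql_translator.sql_hash ('select /*+ qb_name(opt) */ * from dual') hash_value
  from dual;
```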

So here is the new code:
set termout on heading off feedback off verify off
-- File name hgetsqlid.sql
-- Get the SQLID/HASH for a SQL statement in a file
-- The file must contain only ONE SQL statement
-- The Directory "SQL_FILE_LOCATION" must be defined
-- This is the location where this will read the file from
-- The file MUST end with a "/" on last line
-- Example:
-- Select * from emp
-- /
-- May 2015 RVD initial coding
-- Aug 2017 RVD fixed issue where finding the / in a comment or hint would trim the file too short
--              some formatting and other minor changes

set tab off
set serveroutput on
column directory_path format a100 wrap
prompt *********************************************
prompt Get SQL_ID and HASH VALUE for a SQL statement
prompt One statement in the file and must end with /
prompt Current setting of SQL_FILE_LOCATION:
prompt *********************************************
select directory_path from dba_directories where directory_name = 'SQL_FILE_LOCATION';
prompt *********************************************
accept hgetsqlid_file prompt 'Enter the full file name (with extension): '

declare
    v_bfile   BFILE;
    v_clob    CLOB;
    v_sqlid   VARCHAR2(13);
    v_sqlhash NUMBER;
    v_slash   INTEGER := 0;
    e_noslash EXCEPTION;
begin
    v_bfile := BFILENAME ('SQL_FILE_LOCATION', '&hgetsqlid_file');
    IF DBMS_LOB.FILEEXISTS (v_bfile) = 1 THEN
        -- load the file into a temporary clob
        DBMS_LOB.CREATETEMPORARY (v_clob, TRUE);
        DBMS_LOB.OPEN (v_bfile);
        DBMS_LOB.LOADFROMFILE (v_clob, v_bfile, DBMS_LOB.GETLENGTH (v_bfile));
        DBMS_LOB.CLOSE (v_bfile);
        -- remove all carriage returns (ASCII 13) from the clob
        -- each line must end with only a line feed (ASCII 10)
        v_clob := replace(v_clob, CHR(13), '');
        -- trim off trailing spaces at the end of the file
        v_clob := rtrim(v_clob);
        -- trim off anything else at the end back to the /
        while (dbms_lob.substr(v_clob, 1, DBMS_LOB.GETLENGTH (v_clob))) <> '/'
          loop
            DBMS_LOB.TRIM (v_clob, DBMS_LOB.GETLENGTH (v_clob) - 1);
          end loop;
        -- remove any trailing spaces or tabs (ASCII 9) at the ends of lines
        while DBMS_LOB.INSTR (v_clob, ' '||CHR(10)) > 0 or
              DBMS_LOB.INSTR (v_clob, CHR(9)||CHR(10)) > 0
          loop
            v_clob := replace(v_clob, ' '||CHR(10), CHR(10));
            v_clob := replace(v_clob, CHR(9)||CHR(10), CHR(10));
          end loop;
        -- Find the / at the end of the file
        v_slash := DBMS_LOB.INSTR (v_clob, '/', DBMS_LOB.GETLENGTH (v_clob));
        IF v_slash = 0 THEN RAISE e_noslash; END IF;
        -- remove the line with the slash and everything after it
        DBMS_LOB.TRIM (v_clob, v_slash - 2);
        v_sqlid   := DBMS_SQL_TRANSLATOR.SQL_ID (v_clob);
        v_sqlhash := DBMS_SQL_TRANSLATOR.SQL_HASH (v_clob);
        dbms_output.put_line ('*************************');
        dbms_output.put_line ('The SQL ID is '||v_sqlid);
        dbms_output.put_line ('Hash value is '||v_sqlhash);
        dbms_output.put_line ('*************************');
    ELSE
        dbms_output.put_line ('** File not found **');
    END IF;
exception
    when e_noslash then
        dbms_output.put_line ('-+-+-+-+-+-+-+-+-');
        dbms_output.put_line ('Slash not found!');
        dbms_output.put_line ('-+-+-+-+-+-+-+-+-');
end;
/

set serveroutput off
set heading on

And an example of using the code:

SQL> @hgetsqlid
Get SQL_ID and HASH VALUE for a SQL statement
One statement in the file and must end with /
Current setting of SQL_FILE_LOCATION:

Enter the full file name (with extension): with.sql
The SQL ID is 2n63z3ab978kn
Hash value is 2526257748
SQL> @with

SQL> select PREV_SQL_ID,PREV_CHILD_NUMBER  from v$session WHERE audsid = userenv('sessionid');

PREV_SQL_ID   PREV_CHILD_NUMBER
------------- -----------------
2n63z3ab978kn                 0

SQL> get with
  1  select /*+ qb_name(allobjs) */ count(*)
  2    from withlab_allobjects a,
  3  (select /*+ qb_name(owners1) */distinct owner username
  4     from withlab_allobjects ) owners
  5   where a.owner = owners.username
  6  union all
  7  select /*+ qb_name(dbaobjs) */ count(*)
  8    from withlab_dbaobjects d,
  9  (select /*+ qb_name(owners2) */distinct owner username
 10     from withlab_allobjects ) owners
 11*  where d.owner = owners.username